Mutation accumulation under UV radiation in Escherichia coli
Mutations are induced not only by intrinsic factors, such as inherent molecular errors, but also by extrinsic mutagenic factors such as UV radiation. Therefore, identifying the mutational properties of both factors is necessary to achieve a comprehensive understanding of evolutionary processes, both in nature and in artificial settings. Although there have been extensive studies on intrinsic factors, the mutational profiles of extrinsic factors are poorly understood on a genomic scale. Here, we explored the genome-scale mutation profile of UV radiation, a ubiquitous mutagen, in Escherichia coli. We performed an evolution experiment under periodic UV radiation for 28 days. The accumulation speed of mutations increased until it exceeded that of a typical mutator strain with a deficient mismatch repair process. The large contribution of the extrinsic factor to the total mutations consequently increased the risk of losing the inherent error correction systems. The spectrum of the UV-induced mutations was broader than that of the spontaneous mutations in the mutator. The broad spectrum and the high upper limit of the frequency of occurrence suggest ubiquitous roles for UV radiation in accelerating evolutionary processes.
Results
Viability and mutation probability in response to UV exposure in E. coli. UV radiation is a toxic mutagen and was expected both to decrease the viability of cells and to increase the probability of the emergence of mutants. We used three strains of E. coli to confirm how these standard actions of UV radiation affected them with regard to inherent mutability, that is, the spontaneous mutation rate in the absence of UV exposure, and UV sensitivity. One was E. coli MDS42 with a proficient error correction system, which was used as a standard control and was denoted Co. Two mutator strains, ΔS and ΔHSB, were constructed from Co by deleting genes involved in error correction (ΔmutS for ΔS; ΔmutH, ΔmutS, and ΔuvrB for ΔHSB). The mutS and mutH genes are involved in the mismatch repair system, so the deletion of either or both genes increases the mutation rate, as demonstrated previously 8 . The uvrB gene is involved in the nucleotide excision repair function, which repairs genomic DNA lesions mainly caused by UV radiation. Defects in uvrB reduce native UV resistance 16,17 . Therefore, ΔHSB was designed to be a UV-sensitive mutator strain.
First, we confirmed the increase in the spontaneous mutation rates of the mutators relative to Co. A fluctuation test based on mutations conferring resistance to nalidixic acid (Nal R ) verified that the two mutators had about 40-times higher mutation rates than Co (Fig. 1a). These mutator strains exhibited only a slight difference in growth rate (Fig. 1b, ANOVA, F(2,177) = 109, p < 0.05). These comparable rates were consistent with a previous study, which reported that the growth rate decreases notably only once the mutation rate exceeds roughly 100-fold that of the wild type 8 . Next, we confirmed the effects of UV exposure as a mutagen on the viability and mutability of the three strains (Fig. 1c-e). As expected, the survival rate, i.e., the viable fraction of the cell population, decreased as the UV dose increased for all strains. In contrast, the Nal R mutant fraction increased. That is, viability and mutability were negatively related (Fig. 1f). We found that ΔHSB was more susceptible to UV exposure in terms of reduced viability than the other strains, consistent with its lack of uvrB (Fig. 1e, black circles). In contrast, the Nal R mutant fraction of this strain increased markedly in response to far lower UV doses (red circles). Given the equivalent mutation rates of the two mutator strains in the absence of UV exposure, the UV-sensitive viability of ΔHSB must stem from its high, and specifically lethal, mutant production rate per UV dose. Furthermore, these results suggested that the UV-induced mutation rate could be kept at a high level by simply keeping the survival fraction constant, regardless of differences in UV dose, sensitivity to UV exposure, and/or spontaneous mutation rates.
Evolution experiment with UV exposure. To test whether the UV-induced mutation rate could be kept equivalent among the different strains by keeping the survival fractions equivalent, we performed an evolution experiment in which bacterial cultures were exposed to UV radiation for 28 days at the maximum dosage that did not annihilate the population. The daily procedure included a growth assay on the cultures, serial transfer of the selected culture, and UV irradiation (Fig. 2a). First, the growth assay was performed by measuring the optical density at 595 nm, OD 595 , of the overnight cultures in micro-well plates (100 μl/well, 6 lineages/strain, and 5 wells/lineage) with a plate reader. We selected the well exposed to the largest UV dose among the sufficiently growing cultures (OD 595 > 0.1) within each lineage. Next, the selected culture was transferred into fresh medium (5 wells/lineage) after a 100-fold dilution. Subsequently, the 5 wells were exposed to different doses of UV and incubated overnight for the next cycle. The growth selection (OD 595 > 0.1) tended to keep the survival fraction in response to UV exposure constant even as the viability in response to UV changed during the evolution experiment. We also conducted the evolution experiment without UV irradiation as a control (Fig. 2b). In this experiment, the overnight cultures of the selected wells were transferred to fresh medium at different dilution rates, and we selected the wells with the largest dilution rates among the sufficiently growing cultures (OD 595 > 0.1) to maximise the number of generations. These cycles were repeated for 28 days in both evolution experiments. The UV doses delivered were monitored as shown in Fig. 2c. Overall, the UV doses were higher at the end (days 27 and 28) than at the beginning (days 1 and 2) (Wilcoxon's signed rank test, p < 0.05 for all strains). The results revealed similar time series for Co and ΔS, while ΔHSB maintained a lower UV dose relative to the others, consistent with the differences in UV sensitivity and viability among the ancestral strains.
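The daily selection rule reduces to a one-line criterion; the following minimal Python sketch is our own illustration (function names and the example numbers are invented; only the OD 595 > 0.1 threshold is from the text):

```python
# Minimal sketch of the daily selection rule used in the UV-evolution
# experiment: among wells that grew overnight (OD595 > 0.1), pick the
# one exposed to the largest UV dose. Names and numbers are illustrative.

OD_THRESHOLD = 0.1  # growth criterion stated in the text

def select_well(wells):
    """wells: list of (uv_dose_mJ_per_cm2, od595) tuples for one lineage."""
    grown = [w for w in wells if w[1] > OD_THRESHOLD]
    if not grown:
        raise RuntimeError("population annihilated; no well grew")
    return max(grown, key=lambda w: w[0])  # highest tolerated UV dose

# Example round: five wells exposed to increasing UV doses
wells = [(0.0, 0.42), (2.5, 0.38), (5.0, 0.21), (7.5, 0.12), (10.0, 0.04)]
print(select_well(wells))  # -> (7.5, 0.12): largest dose that still grew
```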
Evolutionary changes in viability and mutability in the presence of UV exposure. In agreement with the increase in UV doses during the evolution experiment, the survival rate in response to UV exposure (5 mJ/cm 2 and 10 mJ/cm 2 ) slightly but significantly (t-test, p < 0.05) increased in some lineages of all strains evolved in the presence of UV exposure (Fig. 3a, purple bars with asterisks). In contrast, this increase was rarely observed in the lineages evolved in the absence of UV exposure (Fig. 3a, grey bars). These results indicated that adaptive evolution toward UV tolerance occurred in some lineages in the presence of UV radiation. We also found that the survival rates varied between lineages in the presence of UV exposure, consistent with the large variance in UV dose identified during the evolution experiment. That is, many lineages, in particular of ΔS and ΔHSB, did not significantly increase their UV tolerance in this short-term study. Interestingly, mutability in response to UV exposure also tended to increase during the evolution experiment in the presence of UV exposure, and this tendency was found even in some lineages evolved in the absence of UV exposure (Fig. 3b). That is, the mutability of the mutants selected in the populations did not decrease even though the viability increased (Fig. 3c), implying that possible error-correcting mechanisms did not improve during the evolution experiments. Therefore, the increased UV tolerance in terms of viability was not an evolutionary consequence of improved error-correcting mechanisms protecting against UV-induced mutations.
Growth rate can be a selective trait in these evolution experiments regardless of UV exposure, because growth selection follows each UV exposure even in the experiments with UV (Fig. 1a). To examine this possibility, we compared the maximum growth rates of the evolved lineages, measured in the absence of UV exposure, with those of their ancestors (Fig. 4). As expected, the maximum growth rates of all strains evolved in the presence of UV exposure slightly increased after the evolution experiments (Mann-Whitney U test, p < 0.05). In addition, the lineages of the two mutator strains also exhibited statistically significant increases in maximal growth rate even in the absence of UV exposure (Mann-Whitney U test, p < 0.05). Only the lineages of Co evolved without UV exposure exhibited no significant increase, indicating that the higher mutation rates accelerated growth adaptation. The increased maximum growth rates of the lineages with UV exposure were similar to those of the mutator lineages without UV exposure, suggesting that no or only a small fraction of the UV-induced mutations contributed to the growth adaptation of the mutator lineages.
Mutation accumulation during the evolution experiment with UV exposure. Whole genome sequencing revealed the number of genomic mutations fixed within the populations (Tables 1 and S1). Using the number of synonymous substitutions, we calculated the accumulation rate of base-pair substitutions (BPSs) during the evolution experiments (Fig. 5). The accumulation rates in the absence of UV reflected the spontaneous mutation rates (Fig. 1a). The number of synonymous mutations in Co in the absence of UV exposure was too small to obtain a reliable mutation rate, which was consistent with the wild-type value in a previous study 3 . The accumulation rates of the mutators in the absence of UV were detectable and, as expected, higher than that of Co 6 . We found that the accumulation rate of ΔHSB was slightly higher than that of ΔS, which differed from the values obtained from the fluctuation test (Fig. 1a). This difference may reflect the difference in the number and/or locus of marker genes between the two methods, as discussed previously 6 . That is, the fluctuation test monitored a few mutations at a few genomic loci, while genomic sequencing monitored many mutations over the whole genome.
Figure 2 caption (continued). (a) The overnight cultures were transferred to fresh media in multiple-well microplates after a 100-fold dilution (5 wells for each independent lineage). Subsequently, the cells were exposed to UV irradiation through UV cut films so that each well received a different UV dose. The cell cultures were incubated overnight at 37 °C. The well exposed to the largest UV dose (indicated by an asterisk) was selected among the well-growing cultures (OD 595 > 0.1). This cycle was repeated for 28 days. We established six independent replications for each strain. (b) Schematic of the rounds of the evolution experiment without UV irradiation. The overnight cultures of the selected wells were transferred to fresh media at different dilution rates (10 2 -, 10 3 -, 10 4 -, 10 5 -, and 10 6 -fold). The wells with the largest dilution rates (indicated by an asterisk) were selected among the well-growing cultures (OD 595 > 0.1). This cycle was repeated for 28 days. Six independent replications were established for each strain. (c) The time series of the UV doses. The mean values among the six lineages are plotted for each strain (Co, ΔS, and ΔHSB from left to right). The error bars represent standard deviations.
Figure 3 caption. Viability and mutability of evolved strains in response to UV exposure. Survival rate (a) and Nal R mutant fraction (b) in response to UV exposure are shown for the final populations in the evolution experiment and their ancestral strains. UV doses of 5 and 10 mJ/cm 2 were used for Co and ΔS, while 0.6 and 1.2 mJ/cm 2 were used for ΔHSB. The mean values were calculated for six lineages of each evolved strain. The error bars represent standard deviations. Asterisks indicate that the survival rates or mutant fractions significantly increased (single asterisks) or decreased (double asterisks) after evolution compared to the ancestors (t-test or Mann-Whitney U test, FDR < 0.05, see Methods). (c) Scatter plots of survival rates and Nal R mutant fractions of the ancestral strains (black circles) and the lineages with/without UV exposure during the evolution experiment (purple/grey circles, respectively). The values are replotted from (a) and (b). The vertical/horizontal dashed lines indicate the values of the ancestral strains.
Another possible reason is a subtle difference in the mutational spectrum between the two mutators: the resistant mutants might carry different mutations in these strains, which could result in different mutation rates obtained from the two assessments 18 . Thus, the accumulation rates during the evolution experiment without UV exposure and the spontaneous mutation rates varied with genetic background, as expected. Next, we estimated the accumulation rates in the presence of UV exposure. We found no significant correlation between the number of synonymous substitutions and advantageous traits during the evolution experiment, such as survival rate in response to UV (ρ = 0.04 and 0.27 for 5 mJ/cm 2 and 10 mJ/cm 2 , respectively, p > 0.05) or maximal growth rate (ρ = 0.04, p > 0.05), indicating that few of the accumulated mutations were advantageous. We note that the UV-induced mutations might occur intermittently, in contrast to spontaneous mutations. We found almost the same accumulation rates in the presence of UV exposure for all strains (ANOVA, F(2,14) = 0.15, p = 0.86). These accumulation rates were higher than those in the absence of UV exposure (26-fold and 3.4-fold increases for ΔS and ΔHSB, respectively). These results indicated that UV-induced mutagenesis was dominant relative to the spontaneous mutation rates and that the accumulation rates could be equivalent (Fig. 5) regardless of differences in the UV dose delivered (Fig. 2c), viability in response to UV (Fig. 1c-e), and/or spontaneous mutation rates (Fig. 1a).
Mutational spectrum and local sequence context of the UV-induced mutations. The analysis of the synonymous substitutions that accumulated during the evolution experiments revealed a unique mutational spectrum (Fig. 6a and Table S2). The spectra of the spontaneous substitutions were previously explored for a wild type 3 and a mutS-defective mutator strain 6 (Fig. 6a, top and middle). Contrary to the wide distribution for the wild type, the mutator strain exhibited two peaks, at AT to GC and GC to AT, and very small frequencies for the other substitutions (Fig. 6a, bottom). This transition-biased spectrum was similar to that of other mutator strains with deficient mismatch-repair and/or proofreading processes 3,6 . We also confirmed similar spectral properties for the two mutators used in this study, ΔS and ΔHSB, even though fewer substitutions accumulated than reported in the previous studies (Table 1). Compared with the typical spectrum of mutators, the spectrum of the UV-induced synonymous substitutions for all strains, including Co, was broader. The fraction of GC to AT was still high, while the fractions of AT to TA, AT to GC, and GC to TA were at levels comparable to those of the wild type. That is, the UV-induced substitutions included not only transitions but also transversions, similar to the spontaneous substitutions in the wild type. We further explored the local sequence context of the synonymous BPSs (Fig. 6b-e) to identify which single nucleotides within the neighbouring sequence (−10 bp to 10 bp) are likely to appear around each BPS. To detect specific sequence contexts of BPSs, Monte Carlo simulations were performed as a null hypothesis of an unbiased context, in which BPSs were generated in the genomes at random according to the corresponding mutational spectra. The frequent occurrence of UV-induced BPSs at a G or C reflects the corresponding mutational spectrum (Fig. 6a, and green and purple lines at 0 bp in Fig. 6d). In addition, these BPSs were likely to occur at the 5ʹ side of an A or the 3ʹ side of a T on a given DNA sequence (red line at −1 bp and blue line at +1 bp in Fig. 6d). Thus, the motif sequences "5ʹ-TC-3ʹ" or "5ʹ-TG-3ʹ" were prone to be mutated by UV exposure. The former motif is a dipyrimidine, which is consistent with the evidence that dipyrimidine sites can be mutation hot spots, since they frequently form DNA lesions (pyrimidine dimers) in response to UV exposure that are likely to introduce BPSs, in particular C to T, at the damaged sites, as reviewed previously 19 . In contrast, we did not detect such simple motifs for the spontaneous synonymous substitutions in the mutS-defective strain used in a previous study 6 (Fig. 6e), although the simulations did suggest that a G at +2 bp, a C at −2 bp, a G at the 3ʹ side, or a C at the 5ʹ side might at least be weakly error-prone motifs.
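The null model can be reconstructed in a few lines; the sketch below is our own illustration (random placeholder genome and made-up spectrum weights), not the authors' simulation code:

```python
import random
from collections import Counter

# Illustrative Monte Carlo null model for local sequence context: place
# substitutions at random genome positions whose reference base matches
# the observed mutational spectrum, then tally the neighbouring bases.
# The random genome and the spectrum weights here are placeholders.

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(100_000))
spectrum = {"C": 0.4, "G": 0.4, "A": 0.1, "T": 0.1}  # placeholder weights

def simulate_context(n_muts, window=10):
    counts = {off: Counter() for off in range(-window, window + 1)}
    bases = random.choices(list(spectrum), weights=list(spectrum.values()),
                           k=n_muts)
    for target in bases:
        while True:  # rejection-sample a position with the required base
            i = random.randrange(window, len(genome) - window)
            if genome[i] == target:
                break
        for off in range(-window, window + 1):
            counts[off][genome[i + off]] += 1
    return counts

null = simulate_context(n_muts=200)
print({b: null[-1][b] / 200 for b in "ACGT"})  # base frequencies at -1 bp
```

Repeating such trials (10,000 per dataset in the study) yields the expected neighbouring-base frequencies against which the observed context bias is judged.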
Next, we explored how dipyrimidine sites, i.e., two adjacent pyrimidines such as TT, relate to the BPSs within a short range of sequence (Fig. 6f). The sixteen possible DNA doublets were divided into two types: eight doublets containing a dipyrimidine on either strand (TT, TC, CT, and CC, together with their complements AA, AG, GA, and GG) and eight non-dipyrimidine doublets. UV-induced mutations occurred at the dipyrimidine sites (solid grey line in Fig. 6f, top) about 1.6-fold as often as expected in the absence of any local sequence context (broken grey line). In contrast, spontaneous mutations in the mutS-defective strain occurred at dipyrimidine sites less often than expected (Fig. 6f, bottom). More importantly, mutations also frequently occurred at the non-dipyrimidine sites in both the presence and absence of UV exposure (solid black lines). These results indicated that UV-induced mutations occurred at various sites in the genome even though there was a slight bias according to the local sequence context.
Evolutionary consequence of spontaneous mutation rates. Interestingly, the spontaneous mutation rates increased in some lineages with UV exposure during the evolution experiment (Fig. 7). We measured the spontaneous mutation rates of the evolved strains using the fluctuation test in the absence of UV exposure.
Figure 5 caption. Base-pair substitution rate during the evolution experiments. The base-pair substitution rates were calculated using the number of accumulated mutations. The substitution rate for Co without UV exposure during the evolution experiment was below the detection limit. The error bars represent standard deviations (n = 6 for the lineages with UV, n = 2 for the lineages without UV).
The spontaneous mutation rates of the lineages without UV exposure during the evolution experiment remained almost steady. In contrast, the spontaneous mutation rates of the evolved Co and ΔS strains propagated in the presence of UV exposure increased a few fold compared to their respective ancestral strains. Importantly, the increased spontaneous mutation rates were still far below the mutation rate during the evolution experiments with UV exposure (Figs 1a and 5). This implies that the advantage of a low spontaneous mutation rate conferred by the error-correcting systems was nearly nullified under the much higher exogenous mutation rate imposed by UV exposure. That is, the increase in spontaneous mutability is considered to derive from neutral rather than adaptive evolution 20 . The spontaneous mutation rates of ΔHSB were almost the same under all conditions. This steadiness was reasonable since the genetic mutation rate of this strain, monitored by BPSs, was already higher than that of the other strains (Fig. 5, grey bars) and was only a fraction of the UV-induced mutation rate. Thus, these results offer a possible explanation for the increase in the spontaneous mutation rate and indicate that it can be prevented by negative selection against harmful mutations.
Discussion
In this study, we explored both the frequency and the spectrum of UV-induced mutations at the genomic level. We found that the mutation rate could be increased by UV exposure to a level hundreds of times the typical spontaneous mutation rate with proficient error-correcting mechanisms. This frequency was comparable to or even greater than that of mismatch repair-deficient mutator strains, indicating that the upper limit of the mutability conferred by UV radiation is very high, even though UV exposure may introduce other toxic effects related or unrelated to mutations. The mutational spectrum of the UV-induced mutations was narrower than that of the spontaneous mutations in the wild type but broader than that of the mutators. Thus, UV exposure can supply some classes of mutations that are not increased in the mutators.
In conclusion, based on the genomic mutations accumulated in the presence of UV exposure, we demonstrated the broad spectrum and the high upper limit of the frequency of occurrence of UV-induced mutations. These results support a considerable contribution of UV, as an extrinsic mutagen, to generating diverse mutants rapidly under both natural and artificial conditions with massive UV exposure. This implies that evolution under research conditions without an extrinsic source of mutations might differ from that under natural or clinical conditions. For example, the probability of mutants gaining resistance to an antibiotic under such conditions might be higher or lower than expected from experiments under laboratory conditions.
The similar rates of mutation accumulation among the different genetic backgrounds imply that there is a limit to mutation production per generation. The majority of the mutations accumulated in the lineages in the presence of UV exposure were caused by the UV exposure itself. It is interesting that the rates of mutation accumulation in these lineages were quite similar even though the delivered doses varied among the lineages. This seeming contradiction can be explained by considering the survival rate in response to UV exposure. In our evolution experiments with UV exposure, the UV dosage in each round was determined for each lineage by the growth after each UV exposure and thus by the survival rate in response to UV exposure. Here, the growth rates of the lineages were roughly similar. Therefore, the differences in UV dosage among the lineages were thought to be caused mainly by differences in survivability to UV exposure. In each round, a small UV dose was selected to maintain the daily propagation level whenever survivability to UV was low. Thus, the survival fraction after UV exposure, that is, the ratio of cells surviving the UV exposure in each round to the cells present before the exposure, was kept roughly constant by tuning the UV doses so as to maintain daily propagation. This was consistent with the small UV doses used for the UV-sensitive strain, ΔHSB, relative to those used for the other strains. The number of accumulated mutations could then be the same whenever the survival fraction was the same among lineages because the uvrB gene is part of a mutation repair system: low survivability to UV exposure results from a high production rate of deleterious or lethal mutations per UV dose. Accordingly, the number of these toxic mutations would be the same for low survivability with a small UV dose and for high survivability with a large UV dose, as long as the survival fraction was kept constant. Assuming that the fractions of deleterious and lethal mutations among all mutations were the same for all strains, the number of accumulated mutations in the survivors would be the same for populations with the same survival fraction.
To determine how efficiently UV exposure can promote mutation accumulation, we assumed that UV irradiation causes at least two types of damage simultaneously in a cell population. One type comprises mutations, both lethal and non-lethal. Following conventional assumptions, we considered that the number of mutations per cell, m, follows a Poisson distribution with rate parameter λ. The other type comprises deleterious physiological side effects, such as unrecoverable double-strand breaks in genomic DNA. For simplicity, we assume that the physiological damages are lethal and countable. In practice, the number of physiological damages per cell, p, also follows a Poisson distribution with rate parameter γ. For m mutations, their lethality follows a binomial distribution with the rate of lethal mutations (l). Then, the probability that a cell has p lethal physiological damages and q lethal mutations out of m mutations is given by

P(p, q, m) = \frac{e^{-\gamma}\gamma^{p}}{p!} \cdot \frac{e^{-\lambda}\lambda^{m}}{m!} \cdot \binom{m}{q} l^{q}(1-l)^{m-q}. (1)

We assume that the non-lethal mutations are neutral, i.e., they do not affect cell growth. Then, summing equation (1) over the survivors (p = 0 and q = 0), the survival rate (S) of the population after UV irradiation is written as

S = e^{-(\lambda l + \gamma)}. (2)

Note that the ratio of λl to the power exponent, i.e., λl/(λl + γ), indicates the mutation production efficiency of UV against cell death. We could not estimate λ directly because we could not measure the number of mutations in dead cells. Alternatively, we estimated λ through the relationship between λ and the average number of mutations per surviving cell, ρ. Here, ρ is detectable by genomic sequencing of the surviving cells and is given by the total number of non-lethal mutations carried by the surviving subpopulation divided by the number of surviving cells,

\rho = \frac{1}{N_{tot} S} \sum_{m=0}^{\infty} m \, N_{tot} \, e^{-\gamma} \frac{e^{-\lambda}\lambda^{m}}{m!} (1-l)^{m}, (3)

where N_{tot} is the total number of cells in the population. Thus, we obtained the relevant relationship ρ = (1 − l)λ; that is, ρ is equivalent to the non-lethal fraction, 1 − l, of λ. We estimated S = 10 −4 in this experiment according to the growth rate (Fig. 4). l was set as ~0.1 because the dN/dS values were roughly 0.8 ~ 0.9 (Table 1), suggesting that about 10~20% of mutations were eliminated from the population by selection. We used ρ = 3.13 [bps/genome/day] (Fig. 5, Co). Introducing these rough estimates into equations (2) and (3) gives λl ≈ 0.35 and γ ≈ 8.9, i.e., a mutation production efficiency λl/(λl + γ) of only ~0.04. This suggests that the dominant cause of cell death with UV exposure was not lethal genetic mutations but physiological side effects.
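This estimate is simple arithmetic from equations (2) and (3); a minimal Python sketch reproducing it (variable names are ours; the input values are quoted from the text):

```python
import math

# Back-of-envelope estimate of the UV mutation-production efficiency
# lambda*l / (lambda*l + gamma), using the values quoted in the text.
S   = 1e-4   # survival rate after UV exposure (from growth data, Fig. 4)
l   = 0.1    # lethal fraction of mutations (from dN/dS ~ 0.8-0.9)
rho = 3.13   # non-lethal mutations per surviving genome per day (Fig. 5)

lam = rho / (1 - l)                # eq. (3): rho = (1 - l) * lambda
lethal_mut = lam * l               # expected lethal mutations per cell
gamma = -math.log(S) - lethal_mut  # eq. (2): S = exp(-(lambda*l + gamma))
efficiency = lethal_mut / (lethal_mut + gamma)
print(f"lambda={lam:.2f}, gamma={gamma:.2f}, efficiency={efficiency:.3f}")
# -> efficiency ~ 0.04: most UV-induced death is physiological, not mutational
```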
In order to discuss the general efficiency of UV radiation as a mutagen, we performed a similar estimate for spontaneous mutations as follows. Previously, we reported that the increase in the spontaneous mutation rate caused by deleting genes related to error correction also brings a reduction in growth 6,8 . Therefore, we can also calculate the efficiency of mutation production due to the lack of error-correcting genes from the growth defects of the hyper-mutator strains. Let us consider that the lack of these genes also causes two types of lethal damage: lethal mutations and some lethal side effects. For simplicity, the mutations and the deleterious side effects occur at every replication event in this case. That is, equation (1) is also applicable, and S in equation (2) indicates the rate of success of each self-replication event in this situation. The growth rate of the hyper-mutable strains (μ_{mut}) is then represented as

\mu_{mut} = \mu_{WT} \cdot e^{-(\lambda l + \gamma)}, (4)

where μ_{WT} represents the growth rate of the wild-type strain with a sufficiently low mutation rate. Equation (4) was fitted to the corresponding data showing that the growth rate decreased with the mutation rate (Fig. 2 in Ishizawa et al. 8 ). Assuming that γ is proportional to λ and that l is 0.1, we obtained γ/λ = 4.85 ± 2.72 (average ± standard error). Then, the efficiency of mutation production for the lack of error-correcting genes was calculated as λl/(λl + γ) = 0.020 (standard error interval 0.013-0.045). Interestingly, this value is comparable to that of UV exposure, suggesting that the low efficiency of mutation production under UV exposure is not a specific property of UV light.
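A fit of this form can be sketched as follows; the data points below are synthetic stand-ins (the real mutation-rate versus growth-rate pairs are in Ishizawa et al.), and the factorization γ = c·λ with l = 0.1 follows the assumption stated above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Schematic fit of eq. (4). With gamma = c * lambda and fixed l, it reads
# mu_mut = mu_WT * exp(-(l + c) * lambda). Data below are synthetic
# stand-ins for the measured mutation-rate vs growth-rate pairs.

l = 0.1
def model(lam, mu_wt, c):
    return mu_wt * np.exp(-(l + c) * lam)

lam_obs = np.array([0.001, 0.01, 0.05, 0.1, 0.2])   # mutations/replication
mu_obs  = np.array([0.70, 0.67, 0.54, 0.42, 0.26])  # growth rate [1/h]

(mu_wt, c), _ = curve_fit(model, lam_obs, mu_obs, p0=(0.7, 5.0))
print(f"gamma/lambda = {c:.2f}; efficiency = {l / (l + c):.3f}")
```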
Methods
Bacterial strains. We used E. coli MDS42 21 and two mutator strains, MDS42ΔmutS::Cm and MDS42ΔmutH,ΔmutS,ΔuvrB::Cm. These strains were named Co, ΔS, and ΔHSB, respectively. ΔS and ΔHSB were constructed by combinatorial deletions of MMR genes (mutS, mutH) and a UV-resistance gene (uvrB). All deletions were performed by standard λ-red homologous recombination using the pKD46 plasmid 8,22 . The chloramphenicol resistance gene, Cm R , was employed repeatedly as a selection marker in each deletion. Cm R was amplified by PCR from the pKD32 plasmid 22 with appropriate primers for each deletion (Table S3). To enable multiple deletions in ΔHSB, Cm R was removed by FLP-FRT recombination using the pCP20 plasmid prior to each subsequent deletion 22 .
Evolutionary experiment. Evolutionary experiments consisted of cycles of serial transfer and UV exposure. The bacterial cells were cultured in mM63 liquid medium in a 96-well microplate (100 μl/well). The cells were diluted 100 times with fresh mM63 and transferred to five wells (100 μl each) of a new microplate. The five wells were covered by UV cut films with a different UV dimming rate for each well and exposed to UV radiation under a germicidal lamp (GL-15, Panasonic). The UV cut film was made in-house by patching plastic sheets cut from a standard clear file folder (40% UV cut per sheet). The UV dimming rate was adjusted by varying the number of overlaid plastic sheets (0, 2, 4, 6, and 8 for the five wells). Subsequently, the microplate was sealed and incubated with shaking at 37 °C for 1 day. After incubation, the optical density (OD) at 595 nm of the five wells was measured with a plate reader (Infinite F200 PRO, Tecan). The cells in the well with the highest UV dose among the wells with growing cells (OD 595 > 0.1) were used for the next round. The UV exposure time was extended as the UV resistance of the cells increased. UV intensity was recorded using a UV dosimeter (UVA, UVC light meter, YK-37UVSD, Lutron Electronics Inc., USA) before exposure (typically 0.10-0.21 mW/cm 2 ). Frozen stocks of the cells were prepared every seventh round. The UV doses at the start (average doses of days 1 and 2) and the end (average doses of days 27 and 28) of the experiments were statistically tested with Wilcoxon's signed rank tests for each strain (n = 6) to check whether the doses increased through the experiment.
Measurement of maximal growth rate. Cells from frozen stocks were inoculated into 100 μl of mM63 broth and incubated at 37 °C for 12 hours. The growing cells were then diluted 100 times with fresh mM63 and transferred to multiple wells (10 wells for each evolved strain and 60 wells for each ancestral strain) in a 96-well microplate (100 μl/well). The microplate was shaken at 37 °C in the plate reader, and OD 595 was measured every 15 minutes. The maximum growth rate [h −1 ] was obtained from the slopes of the growth curves during the exponential growth phase (OD 595 = 0.01~0.06) according to the standard Malthusian growth model.
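The slope extraction can be sketched as follows (the OD trace here is synthetic; only the fitting window OD 595 = 0.01-0.06 is from the text):

```python
import numpy as np

# Illustrative extraction of the maximal growth rate from an OD595 time
# series: fit ln(OD) vs time within the exponential window stated in the
# text (OD595 = 0.01-0.06). The data below are synthetic.

t = np.arange(0, 12, 0.25)            # hours, one point per 15 min
od = 0.005 * np.exp(0.8 * t)          # synthetic exponential growth
od = np.minimum(od, 0.5)              # crude saturation

mask = (od >= 0.01) & (od <= 0.06)
slope, _ = np.polyfit(t[mask], np.log(od[mask]), 1)
print(f"maximum growth rate ~ {slope:.2f} 1/h")   # -> ~0.80 1/h
```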
Measurement of survival rate and mutant production rate in response to UV exposure. Glycerol-stock cells were inoculated into 5 ml of mM63 broth and incubated at 37 °C. The overnight cell culture was diluted with fresh mM63 broth (over 20 ml) to a concentration of 10 7 cells/ml. Then, 5 ml of the culture was sampled, and the remaining 15 ml of the diluted culture was transferred to a petri dish and exposed to UV radiation under a germicidal lamp (5 mJ/cm 2 for Co and ΔS, 0.6 mJ/cm 2 for ΔHSB). Subsequently, 5 ml of the culture was sampled, and the remaining 10 ml of the culture was exposed to UV radiation again, after which 5 ml of the culture was sampled. These culture samples (three samples for each glycerol stock) were spread on mM63 agar plates and incubated at 37 °C for 2 days. The number of colonies was counted to determine the colony forming units (CFUs). The survival rate was calculated by dividing the CFUs of the UV-exposed cultures by the CFUs of the culture without UV treatment. We also prepared cell cultures with UV treatment in the same manner.
Fluctuation test. The rates of mutation resulting in nalidixic acid resistance (Nal R ) were measured with fluctuation tests 25 . Glycerol-stock cells were inoculated into 5 ml of mM63 broth and shaken at 37 °C. The overnight cell culture was analysed with a flow cytometer to obtain the cell concentration. The cells were transferred to fresh mM63 broth in 20 test tubes (5 ml each) at a concentration of 100 cells/ml. These tubes were shaken at 37 °C for 18 ~ 30 hours. The grown cultures were centrifuged at 5160 × g for 5 min at room temperature. The pellets were resuspended in the residual supernatant and spread on LB + Nal plates. These plates were incubated at 37 °C for 2 ~ 3 days, and the colonies that appeared were counted. The mutation rate was calculated using the MSS maximum-likelihood method 26 .
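For intuition, the simpler p0 (Luria-Delbrück) estimator shows how a mutation rate falls out of such parallel-culture counts; the sketch below is illustrative (the colony counts and final population size are invented), whereas the study itself used the MSS maximum-likelihood method:

```python
import math

# Illustrative p0 (Luria-Delbruck) estimate of the mutation rate from a
# fluctuation test: m = -ln(p0), where p0 is the fraction of parallel
# cultures with zero Nal-R colonies; rate ~ m / N_final. The paper used
# the more accurate MSS maximum-likelihood method; counts here are made up.

colony_counts = [0, 0, 3, 0, 1, 0, 0, 12, 0, 0, 2, 0, 0, 5, 0, 0, 0, 1, 0, 0]
n_final = 5e8  # cells per culture at plating (hypothetical)

p0 = colony_counts.count(0) / len(colony_counts)
m = -math.log(p0)          # expected mutation events per culture
rate = m / n_final         # mutations per cell per division (approx.)
print(f"p0={p0:.2f}, m={m:.2f}, mutation rate ~ {rate:.1e}")
```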
Whole-genome sequencing. Genomic DNA was extracted using Wizard Genomic DNA Purification kits (Promega). DNA libraries were prepared as previously reported 6 . We performed multiplex analysis (typically 6 plex) with paired-end sequencing (251 bp) using a MiSeq Reagent Kit v2 and 500 cycles (Illumina). Raw sequences were processed by removing adaptor sequences, trimming bases with a quality below Q20 from the 3′ end of each read, and removing reads shorter than 40 bp, using cutadapt-1.4.1 27 as previously described 6 . Using Burrows-Wheeler Aligner software (BWA), the reads were aligned onto the E. coli MDS42 reference chromosome (Accession: AP012306, the origin of NC_020518, GI: 471332236). Base pair substitutions (BPSs) were identified using SAMtools 28 with default parameters, where the maximum read depth (-D option) was set to 500. For all samples, at least 99.8 percent of the genomic region was covered with read(s). The depth of coverage was (2.3 ± 0.5) × 10 2 (average ± sd). Called mutations with a Phred quality score 29,30 below 100 were removed. Subsequently, BPSs with a "mutant" read frequency below 90% were also removed. We regarded the filtered mutations as dominant or fixed in the population.
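The final filtering step amounts to a simple predicate over the called variants; a sketch (record field names are hypothetical, thresholds from the text):

```python
# Sketch of the variant-filtering criteria described above, applied to a
# generic record of called substitutions. Field names are hypothetical;
# thresholds (Phred >= 100, mutant-read frequency >= 90%) are from the text.

def keep_variant(v):
    return v["qual"] >= 100 and v["mut_read_fraction"] >= 0.90

calls = [
    {"pos": 1203451, "qual": 222, "mut_read_fraction": 0.98},  # kept (fixed)
    {"pos": 2299014, "qual": 228, "mut_read_fraction": 0.55},  # polymorphic
    {"pos": 3870112, "qual": 63,  "mut_read_fraction": 0.95},  # low quality
]
fixed = [v for v in calls if keep_variant(v)]
print([v["pos"] for v in fixed])  # -> [1203451]
```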
Monte Carlo simulations for the local sequence context of BPS. Synonymous BPSs were generated in the corresponding genome at random according to the observed mutational spectra. We used the genome sequence of MG1655 for the mutS-defective strain, ΔS', derived from MG1655 6 , and that of MDS42 for all strains evolved under UV. The simulation was run for 10,000 trials for each dataset.
Measurement of base-pair substitution rate during the evolution experiment. The rate of base-pair substitution (genome −1 day −1 ), ρ, was calculated using the following formula:

\rho = \frac{N_{syn}}{F(syn) \cdot (L_{CDS}/L) \cdot D},

where N_{syn}, L_{CDS}, L, and D are the number of synonymous substitutions, the total length of coding DNA sequences per genome, the genome size (3.98 Mbp for the MDS42 derivative strains), and the number of days of the evolution experiment, respectively. F(syn) represents the probability that a substitution is synonymous when a substitution occurs and was calculated based on codon usage and the probability of each substitution, as described previously 6 .
Calculating dN/dS values. The dN/dS value was estimated as the ratio of the number of nonsynonymous substitutions per nonsynonymous site, dN, to the number of synonymous substitutions per synonymous site, dS 6 . The values were calculated by the following equation:

dN/dS = \frac{N_{nsyn}/(1 - F(syn))}{N_{syn}/F(syn)},

where N_{syn} and N_{nsyn} represent the numbers of synonymous and nonsynonymous substitutions shown in Table 1, respectively. F(syn) is, as described above, the probability that an occurring substitution is synonymous.
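Both formulas reduce to short helpers; a sketch with invented example numbers (L, L_CDS, F(syn), and the substitution counts are placeholders, not values from Table 1):

```python
# Helpers implementing the two formulas above. The example inputs are
# illustrative placeholders, not values taken from the study's tables.

def bps_rate(n_syn, f_syn, l_cds, l_genome, days):
    """Base-pair substitutions per genome per day, scaled up from
    synonymous counts by the synonymous probability and CDS fraction."""
    return n_syn / (f_syn * (l_cds / l_genome) * days)

def dn_ds(n_nsyn, n_syn, f_syn):
    """dN/dS: substitution counts normalized by site availability."""
    return (n_nsyn / (1 - f_syn)) / (n_syn / f_syn)

print(bps_rate(n_syn=20, f_syn=0.25, l_cds=3.5e6, l_genome=3.98e6, days=28))
print(dn_ds(n_nsyn=70, n_syn=20, f_syn=0.25))
```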
"Biology"
] |
Steering and cloaking of hyperbolic polaritons at deep-subwavelength scales
Polaritons are well-established carriers of light, electrical signals, and even heat at the nanoscale in the setting of on-chip devices. However, the goal of achieving practical polaritonic manipulation over small distances deeply below the light diffraction limit remains elusive. Here, we implement nanoscale polaritonic in-plane steering and cloaking in a low-loss atomically layered van der Waals (vdW) insulator, α-MoO3, comprising building blocks of customizable stacked and assembled structures. Each block contributes specific characteristics that allow us to steer polaritons along the desired trajectories. Our results introduce a natural materials-based approach for the comprehensive manipulation of nanoscale optical fields, advancing research in the vdW polaritonics domain and on-chip nanophotonic circuits.
The pursuit of light propagation at extreme subwavelength scales has been a prominent subject within nanophotonics. Achieving control over this phenomenon is pivotal for the realization of photonic circuits and on-chip devices 1,2 . In this context, metamaterials offer a constructive paradigm for manipulating light through the arrangement of numerous subwavelength unit cells 3-6 . In addition to artificial structures, polaritons in natural materials, which are hybrid light-matter modes, offer a powerful framework for light control with field confinement far below the diffraction limit 7-14 . Thus, polaritons have emerged as effective carriers of light, electrical signals, and even heat at the nanoscale within on-chip circuits 15 .
In this study, we introduce a strategy that enables the steering and cloaking of hyperbolic polaritons, leveraging vdW crystals of α-MoO 3 as the fundamental building blocks of customizable stacked and assembled structures with great versatility. Each block contributes layer-specific characteristics that effectively mold the flow of polaritons along desired trajectories. Based on high polaritonic transmission across various structures and interfaces, which benefits from the robust hybridization and strong modal-profile alignment of phonon polaritons, we demonstrate in-plane polaritonic cloaking devices deeply below the light diffraction limit. Our study provides a promising platform for realizing practical polaritonic circuits.
Directional control of transmission by twist angles
In our experiments, hyperbolic polaritons in the bottom α-MoO 3 film are launched by a resonant gold antenna 38 , as depicted in Fig. 1a and detailed in Methods and Supplementary Note 1. α-MoO 3 is a highly anisotropic vdW material that supports hyperbolic polaritons along the [100] crystal direction within Reststrahlen band II (816 cm −1 -972 cm −1 ) 19,39 , which we capitalize on in this work. These polaritons refract at the interface of the twisted double films, propagate inside the twisted region, and refract again into the original film. Figure 1a sketches real-space infrared nanoimaging of polariton propagation and transmission in this geometry using scattering-type scanning near-field optical microscopy (s-SNOM) (Supplementary Figs. 1 and 2). The measured images in Fig. 1b-f illustrate polariton refraction at different twist angles. The regions above the gray horizontal line represent the bottom single α-MoO 3 film, where polaritons propagate with a regular hyperbolic wavefront. However, a topological transition occurs due to the robust hybridization of hyperbolic polaritons in the twisted double films 26,27,40,41 (see the theoretical model in Supplementary Fig. 3 and Supplementary Note 2, along with the corresponding isofrequency contours (IFCs) in Supplementary Fig. 4). We have further marked the refraction process for various typical incident directions. By conserving the polariton wave vector along the direction of the refracting interface, we can obtain the corresponding refraction direction for each incident wave vector (Supplementary Fig. 4). Upon transmission of hyperbolic polaritons across the interface, our analysis reveals a transition from normal to negative refraction as the top α-MoO 3 film is rotated.
The in-plane symmetry of the polariton IFC is broken when the interface is not aligned with the natural crystal axes of α-MoO 3 , leading to a change in the direction of polariton refraction, as shown in Fig. 1c-e. As the twist angle θ increases, the direction of polariton propagation gradually deviates from the original direction after crossing the interface. The deflection angle φ depends on the orientation of the polaritonic IFCs in the twisted region (Supplementary Fig. 5). In addition, the dielectric function of twisted α-MoO 3 can be expressed as a linear superposition of the dielectric functions of the layers 42 , which results in the presence of off-diagonal terms. Therefore, the distorted and asymmetrical hyperbolic modes in the twisted region can be interpreted as a shear mode, such as recently observed in natural crystals with low symmetry, for example monoclinic β-Ga 2 O 3 and CdWO 4 24,25 .
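To make the wavevector-matching construction concrete, the following toy Python sketch (our own illustration; the hyperbola parameters A and B, the twist angle, the branch choice, and the brute-force root search are simplifications, not a model fitted to α-MoO 3 ) conserves the tangential wavevector across an interface and reads the energy-flow direction from the IFC normal:

```python
import numpy as np

# Toy illustration of anisotropic refraction by tangential-wavevector
# matching on idealized hyperbolic isofrequency contours (IFCs).
# f(k) = 0 defines the IFC; the Poynting/group-velocity direction lies
# along grad f. Branch and sign selection by causality are glossed over.

def ifc(kx, ky, theta=0.0, A=1.0, B=0.5):
    """Hyperbolic IFC kx'^2/A - ky'^2/B - 1 = 0, rotated by twist theta."""
    c, s = np.cos(theta), np.sin(theta)
    kxp, kyp = c * kx + s * ky, -s * kx + c * ky
    return kxp**2 / A - kyp**2 / B - 1.0

def grad(f, kx, ky, h=1e-6):
    return np.array([(f(kx + h, ky) - f(kx - h, ky)) / (2 * h),
                     (f(kx, ky + h) - f(kx, ky - h)) / (2 * h)])

# Incident mode in the untwisted film; an interface along x conserves kx.
kx_in = 1.5
ky_in = np.sqrt(0.5 * (kx_in**2 / 1.0 - 1.0))    # on the theta = 0 IFC

# Transmitted mode: same kx, solve ifc(kx, ky; theta) = 0 in twisted region.
theta = np.deg2rad(22.5)
f_t = lambda kx, ky: ifc(kx, ky, theta=theta)
kys = np.linspace(-5, 5, 200001)
ky_out = kys[np.argmin(np.abs(f_t(kx_in, kys)))]  # brute-force root

s_in = grad(lambda kx, ky: ifc(kx, ky), kx_in, ky_in)
s_out = grad(f_t, kx_in, ky_out)
print("deflection [deg]:",
      np.degrees(np.arctan2(s_out[1], s_out[0]) - np.arctan2(s_in[1], s_in[0])))
```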
To quantify losses during propagation and refraction, we extract the quality factor Q associated with the overall propagation path, including transmittance at the interfaces, defined in terms of the polariton intensities in the different traversed spatial regions (Supplementary Fig. 6 and Supplementary Note 3). Thanks to the inherent robustness of the topological transition and the appropriate match of modal profiles, the transmittance remains between 85% and 95% at various twist angles, as demonstrated in Fig. 1g and Supplementary Note 4 and Figs. 7 and 8. The trend of the experimentally measured transmittance with twist angle does not match the simulations well. We attribute this disagreement to the fact that negative refraction produces focusing and enhances the transmittance 37 .
Compared to conventional bulk materials, the layered structure formed by vdW interactions minimizes the introduction of structural roughness during fabrication, leading to a reduced level of scattering losses at the interfaces. In addition, gold flakes serve as a flat low-loss substrate in our experiments because a new image mode is formed, which stems from the coupling between collective charge oscillations and the hybridization of polaritons with their mirror image in the metal. Notably, the image phonon polaritons provide both stronger field confinement and a longer lifetime compared to phonon polaritons on a dielectric substrate 43-47 . Therefore, owing to such low interface losses, combined with the near-field amplitude enhancement produced by the mirror image in the gold substrate, the ad hoc overall quality factors remain around 30 for different twist angles (Fig. 1h). The experimentally observed propagation directions and fringe wavelengths (Fig. 1b-f) are in agreement with numerical simulations (Supplementary Fig. 9).
Steering polaritons with differently oriented microribbons
The long propagation lengths and low refraction losses of mixed polaritons in twisted α-MoO 3 render these structures a powerful platform for demonstrating advanced optical functionalities. We exploit this potential by assembling highly anisotropic α-MoO 3 ribbons with various cut orientations, choosing the ribbon width close to the polariton wavelength. As depicted in Fig. 2a and Supplementary Figs. 10 and 11, nanofabrication with typical exposure and etching conditions enables cutting α-MoO 3 microribbons with the required widths and smooth edge quality.
When a hyperbolic polariton is launched in the underlying α-MoO 3 film (Fig. 2b, c, h, i), it undergoes refraction twice as it traverses the region decorated with each microribbon (Fig. 2d, e, j, k). This results in a horizontal deflection of the propagation direction (we accompany the near-field images with red-dashed arrows indicating the Poynting-vector propagation directions), performing a zigzag-shaped waveguided steering similar to that found in photonic crystals 48 . The analysis of IFCs allows a precise determination of the propagation direction of the polariton energy flow. Note that the complexity of the material anisotropies renders Poynting vectors with a broad range of components, but we only mark the primary direction.
Although the polaritons cross the interface twice, optical losses are surprisingly low, as evidenced by the fact that the fringe intensities in Fig. 2h and j remain essentially constant. Therefore, we can more aggressively install a second transverse ribbon in the propagation direction of the polariton. When the two adjacent ribbons have identical cutting-edge angles, the propagation direction shifts in the same direction twice (Supplementary Fig. 12). Instead, when the edge angles are mirror-symmetric, the second refraction returns the propagation direction to the initial incidence direction (Fig. 2f, g, l, m). Supplementary Fig. 13 compares the experimental and simulated refractions and illustrates the polariton coupling process in different structures through the cross-sectional electric field. High-wave-vector modes may be lost at the interface because the asymmetry of the IFCs makes it challenging to satisfy wave-vector matching between the two sides in the fabricated structures.
Refraction-based polariton cloaking
To realize a more sophisticated optical functionality, we fabricated an in-plane cloaking device. Previously, this has been accomplished by employing special designs of materials or structures to refractively obscure an object 49,50 or to diminish its scattering strength toward light signals, enabling cloaking carpets for multiple colors and broad bands 51,52 .
Here, we build a polariton cloaking device made up of four microribbons that are symmetrically arranged with two different orientations (Fig. 3a, b). This is enabled by an AFM-probe-based transfer method that allows us to accurately move the α-MoO 3 microribbons (Supplementary Fig. 10).
The hyperbolic waves are split and deflected by these ribbons, with each of the two split beams undergoing two deflections such that a central region in the structure is hidden from the polaritons, as shown in the experimental near-field image in Fig. 3c. Increasing the cut-edge angle, width, and thickness of the ribbons should extend the cloaking region in the y direction. We have conducted a thorough comparison of polariton transmission and phase accumulation in this device before and after introducing a defect (a graphite disk) to assess the effectiveness of the cloaking effect (Supplementary Fig. 14). The extracted near-field amplitude profiles demonstrate that the defect has little impact on the intensity and wavelength of the polariton wave (Fig. 3d, e). The electromagnetic simulations provide further evidence of the complex field distribution and the involved propagation path in the cloaking device (Fig. 3f), in good agreement with the experimental observations (Fig. 3c). In contrast, the presence of a defect along the propagation path of the polaritons without a cloaking structure induces a substantial reduction in intensity (Supplementary Fig. 15). This effect is primarily attributed to the scattering and reflection of propagating polaritons at the defect edge, as well as to the modified dielectric environment within the defect region.
We note that most optical cloaking works in the literature have so far been realized through transformation optics 53,54 . This approach has revolutionized the field of cloaking with a great deal of freedom, flexibility, and high precision in designing and implementing cloaking devices. We instead employ anisotropic refraction to steer polariton propagation and enable cloaking, which arises from a topological transition in the isofrequency contours due to the hybridization of different hyperbolic modes. Therefore, our study may provide a foundation for future designs, as well as validation of transformative polaritonics involving the use of pseudo-continuous media or metamaterials to confine these excitations.
Discussion
We demonstrate the in-plane steering and cloaking of strongly confined hyperbolic polaritons using carefully designed double-layer vdW heterostructures obtained through in-plane stacking and splicing of α-MoO 3 with various orientations. This approach allows us to steer polaritons along desired trajectories, leading to the demonstration of in-plane cloaking devices at deep subwavelength scales. Customizable stacked and spliced structures of natural vdW materials provide high quality factors and low interface losses. From a scientific perspective, our findings open up a wealth of possibilities for advances in transformation polaritonics and represent a solid step in the quest toward the ultimate goal of optical manipulation through a meticulous organization of atomically thin interfaces 55,56 . Technologically, our work has substantial potential for the development of nanoscale optical circuits and devices.
Methods
Sample fabrication. A 100 kV electron-beam lithography (EBL) setup (Vistec 5000 + ES, Germany) was used to define patterns with different cut angles on α-MoO 3 through an ~1-μm-thick layer of PMMA950K lithography resist (RDMICRO Inc.). The patterns were etched with SF 6 and Ar using reactive ion etching (RIE). The samples were then immersed in hot acetone at 80 °C for 20 min and in IPA for 3 min to remove any residual organic materials, followed by drying with nitrogen gas.
To construct the gradually rotatable α-MoO 3 structures in the main text, we used a deterministic dry transfer process with a PDMS/PC stamp. First, the mechanically exfoliated α-MoO 3 flakes were transferred onto gold (60 nm)/Si (500 μm) substrates. Then, the α-MoO 3 patterns were transferred onto the α-MoO 3 flakes step by step (i.e., one at a time). Specifically, we used a plateau AFM tip to push or rotate the structures to specific positions.
Gold antenna arrays were patterned on the devices using approximately 350 nm of PMMA950K lithography resist. We deposited 50 nm of Au using electron-beam evaporation in a vacuum chamber at a pressure of <5 × 10 −6 Torr, followed by liftoff to remove any residual organic materials and the Au film.
In the cloaking device, we also used a graphite disk of 50 nm thickness and 1 μm diameter as a defect for the following reasons: first, this structure is easy to process and allows precise manipulation; second, it introduces significant interference in the transmission of polaritons; and third, it avoids excessive excitation of polaritons, minimizing interference with the experimental observations.
s-SNOM measurements. We utilized a commercially available scattering-type scanning near-field optical microscope (s-SNOM, Neaspec GmbH) to perform infrared nanoimaging of polaritons in α-MoO 3 . The system employed a platinum-coated atomic force microscope tip (NanoWorld) with an approximate radius of 25 nm as the primary scanning platform for approaching and scanning the sample. A monochromatic mid-infrared light source from a quantum cascade laser (QCL) with a tunable frequency range of 890 to 2000 cm −1 was used to illuminate the tip. The laser beam, with p-polarization and a lateral spot size of around 25 μm, was focused through a parabolic mirror at an incident angle of 55° to 65°. This setup effectively covered a large area of interest in the samples. The near-field nanoimages were captured by a pseudoheterodyne interferometric detection module, with the AFM tip-tapping frequency and amplitude set to approximately 270 kHz and 30-50 nm, respectively. The detected signal was demodulated at the third harmonic (denoted S 3 ) of the tapping frequency to obtain near-field amplitude images free of background interference.
Polaritons launched by gold antennas. In s-SNOM measurements, polaritons can be excited by a variety of structures, including tips, antennas, edges, and even defects. We primarily utilize metal antennas as the excitation source due to their ability to separate the excitation and detection processes. This separation enables us to directly observe a diversity of refractive transmission of polaritons. When tip excitation is employed, only the interference fringes of the mode are observed, and the refractive transmission cannot be directly visualized.
In addition, metal antennas provide high excitation efficiency. When infrared light irradiates the metal antenna, it excites a plasmon resonance in the antenna and forms an in-plane oscillating dipole, thereby exciting the polaritons in the sample with high efficiency. To this end, we designed resonant antennas with a length of approximately 3.0 μm.
Fig. 1 | Refractive transmission of hyperbolic polaritons. a Schematic of the device structure and experimental setup consisting of two twisted α-MoO 3 films, a launching gold antenna on the bottom one, and a probing near-field tip hovering over the structures. E inc and E sca represent the incident and scattered electromagnetic waves, respectively. θ indicates the twist angle between the in-plane crystallographic orientations of the two films; β represents the angle of the cut edge relative to the [001] crystal axis of the top α-MoO 3 film; and φ indicates the deflection angle of the refracted polaritons (at the edge delimiting the boundary between the single film and the double film) relative to the incident ones. The cut edge (i.e., the interface) of the top film is kept perpendicular to the antenna long axis in our experiments, such that polaritons impinge at normal incidence. The thicknesses of the bottom and top α-MoO 3 films are denoted t 0 and t 1 , respectively. d 0 and d 1 represent the propagation distances of polaritons in the bottom and twisted α-MoO 3 regions, respectively. b-f Experimental near-field images (amplitude signal S 3 ) recorded at different twist angles θ = 0°, 22.5°, 45°, 67.5°, and 90°. The thicknesses of the bottom and top α-MoO 3 films are t 0 = 550 nm and t 1 = 150 nm, respectively. The illumination frequency is fixed at 893 cm −1 . The gold antenna is positioned d 0 ≈ 5 μm away from the interface (represented by horizontal gray lines). Red-dashed arrows indicate the propagation direction of polaritons. The scale bar indicates 3 μm. Note that the experimental near-field images are normalized in this work, as are the simulated images. g Experimentally measured (red dots) and numerically simulated (black curve) polariton transmittance across the interface for various twist angles θ. Error bars indicate 95% confidence intervals. h Ad hoc overall quality factor of the mixed (hyperbolic and hybrid) polaritons in the stacked structures as a function of twist angle.
Fig. 2 | In-plane steering of polaritons with misaligned crystallographic orientations. a Illustration of tailored α-MoO 3 microribbons with different cut angles β relative to the crystallographic orientation of a common source film. b-g Optical images (b, d, f) and atomic force microscopy (AFM) images (c, e, g) of different polaritonic devices composed of a bottom α-MoO 3 film and tailored α-MoO 3 microribbons. Two α-MoO 3 microribbons with β = 45° (labeled 1) and β = 135° (labeled 2) are used in (b-g). h, j, l Near-field amplitude images corresponding to the devices in (b, d, f), respectively. The polariton propagation path is controlled by the top tailored α-MoO 3 microribbons: one deflection at the microribbon labeled 1 in (j), leading to a lateral shift of the polaritons; and two deflections with opposite angles in (l), leading to a final undeflected transmitted beam. Red-dashed arrows indicate the polariton propagation direction dictated by the Poynting vector S, as obtained from the IFC analysis presented in (i, k, m). The thicknesses of the bottom film and top ribbons are t 0 = 184 nm and t 1 = 154 nm in (h, j, l). Scale bars in (b, d, f, h, j, l) indicate 3 μm. i, k, m Calculated isofrequency contours of polaritons (blue curves) corresponding to each region in the devices. Horizontal black lines (labeled ①-④) indicate interfaces between different regions. Red arrows represent Poynting vectors S, directed along the energy flow and normal to the IFCs. Scale bars indicate 20 k 0 , where k 0 indicates the incident wavevector.
Fig. 3 | Hyperbolic polariton cloaking. a Illustration of the crystallographic orientation of α-MoO3 microribbons used in the cloaking device, all tailored from the same film. b Optical image of a polaritonic cloaking device composed of four microribbons with β = 45° (ribbons 1 and 3) and β = 135° (ribbons 2 and 4). The thicknesses of the bottom film and the four top ribbons are t_0 = 207 nm and t_1 = 143 nm, respectively. The green dot is a graphite disk (50 nm thickness, 1 μm diameter), which serves as the cloaked defect. c Experimentally measured near-field amplitude image of the device in (b) at an illumination frequency of 900 cm−1. The incident hyperbolic wave undergoes splitting and subsequent recombination, thus realizing in-plane cloaking of the graphite defect. Red-dashed arrows indicate the polariton propagation direction dictated by the Poynting vector, as obtained from the IFC analysis presented on both sides of the experimental image. The calculated IFCs for each region of the device are shown as blue curves in the left and right panels, with scale bars indicating 20 k_0. d Measured near-field profiles of the cloaking device with (red) and without (blue) the defect placed in the hidden region (blue shaded area). The green shaded area depicts the near-field intensity of the defect. The data are extracted along the red and blue dashed vertical lines in Supplementary Fig. 11. e Close-up view of the near-field profiles in (d). Gray vertical dashed lines mark the position of each peak of the near-field profiles. f Simulated near-field (Re{E_Z}) image illustrating the cloaking performance. The red-dashed circle marks the location of the defect in correspondence with the experimental structure. Scale bars in (b, c, f) indicate 3 μm. | 4,722.2 | 2024-05-25T00:00:00.000 | [ "Physics" ] |
Determinants of Effects of Foreign Direct Investment in Terms of Slovak Republic and Wood-Processing Industry of Slovakia
The presence of foreign direct investment in a particular sector or country is determined by several factors, the so-called determinants of foreign direct investment. The article analyzes selected factors of FDI inflows to the Slovak Republic and to the wood-processing industry in SR; it focuses primarily on assessing the contemporary situation of the business environment in Slovakia and the investment incentives provided to foreign investors. The article also presents the development of foreign direct investment in Slovakia in the branch of wood processing, analyzing the effects of FDI in the specific conditions of the Slovak Republic and its wood-processing industry.
1. INTRODUCTION
The inflow of foreign direct investment (FDI) into a country is affected by a number of factors described in the literature or published annually, for example, in UNCTAD surveys. Major factors in terms of savings are low labor costs and the availability of resources (material, energy, financial). On the other hand, factors that influence revenues are usually market size and market growth (World Investment Prospect Survey, 2009). However, it is also important to evaluate factors that affect the business environment and the presence of foreign investors in the country. These factors are the condition and quality of the business environment and the level of corruption, but also the rate of assistance from the state in the form of investment incentives (Ferenčíková et al., 2010).
The results of analyses by rating institutions show that the attractiveness of a country for foreign direct investment crucially depends on a favorable business environment, the quality of the institutional environment, as well as relative price and cost competitiveness (Drábek and Polách, 2008). However, the dynamics of FDI flows is significantly influenced by targeted state policy to promote foreign investment (Drábek and Jelačić, 2007). The consequences of the global economic crisis, as well as the adoption of a comprehensive system of measures to reduce them, may affect not only long-term macroeconomic stability, but also the policy towards FDI, and hence foreign direct investment inflows to Slovakia in both the short and long term.
It can be concluded that maintaining long-term political stability in Slovakia is also reflected positively in the real economy, which benefits from the continuity of macroeconomic stability and the preservation of a suitable business environment. The evaluation of the International Monetary Fund (IMF, 2009) shows that the long-term positive economic development in the SR is manifested in rapid GDP growth, which was based on healthy macroeconomic and structural policies and helped to speed up the convergence process of the Slovak economy. Since the Slovak economy is open and export-oriented, its development is significantly influenced by developments in the external economic environment. This is confirmed by the time-coordinated course of the economic crisis in the external environment and in Slovakia, which also shows that the Slovak economy is tightly integrated into the European and world economy (Okáli et al., 2009). Simultaneously, the imported recession also causes many negative consequences for the domestic economy. Such close connection with the external environment is also reflected in the forecasts of economic development for the years 2012-2015, which can be evaluated as positive in comparison with other EU countries.
The government deficit exceeding 3 % of GDP is not a cause for investor concern in the current situation. However, the expected economic recovery will be reflected in the re-tightening of fiscal policy, and this will be a positive signal to encourage investor confidence, underlining the government's responsible approach to meeting its commitments under the Stability and Growth Pact (SGP).
The main objective of each company is an efficient and successful business. There is a general economic principle: to achieve the maximum result with the minimum of means (Oblak et al., 2008; Stasiak-Betlejewska et al., 2007). The objective of this research was to evaluate the impact of investment and foreign direct investment in the Slovak Republic, with a focus on the wood-processing industry in SR, based on the analysis of time series of selected economic indicators, the business and investment environment, and investment incentives.
To achieve this objective, the following partial objectives were formulated: the analysis of foreign direct investment in the Slovak Republic as well as in individual sectors of the wood-processing industry; the analysis of the business and investment environment and of investment incentives in Slovakia; the evaluation of selected economic indicators in the SR and the wood-processing industry of SR with the application of selected statistical methods (correlation and regression analysis); and the interpretation of the solution and obtained results.
2. RESEARCH METHODOLOGY
Statistical methods were used to analyze and evaluate the effects of investment in the SR and in the wood-processing industry of SR. Correlation analysis describes the strength of the relationship between two quantitative variables; it does not by itself imply a cause-and-effect relationship between them. Linear regression makes it possible to examine the cause and the subsequent relationship between two variables x and y, with the regression line quantifying the dependence. The chart of the correlation and linear regression analysis shows the values of the independent and dependent variables in each year together with the regression line. Values lying closer to the regression line indicate a stronger impact on the examined variable.
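As an illustration of this methodology, here is a minimal Python sketch of a two-variable correlation and regression computation (the article used STATISTICA 9 and Excel; the series below are hypothetical placeholders, not the article's data):

```python
import numpy as np

# Hypothetical placeholder series (not the article's data):
# FDI stock (million SKK) and GDP growth (%) over ten years, 1999-2008.
fdi = np.array([150_000, 180_000, 230_000, 290_000, 340_000,
                400_000, 470_000, 560_000, 650_000, 720_000], dtype=float)
gdp_growth = np.array([1.5, 2.0, 3.4, 4.1, 4.2, 5.1, 6.7, 8.5, 10.4, 6.4])

# Pearson correlation coefficient r between the two variables.
r = np.corrcoef(fdi, gdp_growth)[0, 1]

# Ordinary least squares fit y = a + b*x (degree-1 polynomial).
b, a = np.polyfit(fdi, gdp_growth, 1)

print(f"r = {r:.2f}, regression line: y = {a:.3f} + {b:.9f} * x")
```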
The correlation and regression analysis of this research focuses on foreign direct investment in Slovakia, the GDP growth of Slovakia, investment, and selected variables in the wood-processing branch. These are indicators that characterize the economic situation in the mentioned sector, with a focus on indicators that have a positive impact on the economic development of the wood-processing industry.
The selected economic indicators are evaluated over a period of 10 years, in the 1999-2008 time series. The software products STATISTICA 9 from StatSoft and Excel from Microsoft Office 2003, as well as their spreadsheet and graphics tools, were used for data processing.
3.1 Business environment in Slovakia
Generally, it can be stated that the business environment in Slovakia is not quite good. According to the Slovak Chamber of Commerce and Industry (SCCI), it is gradually getting worse. The SCCI survey shows that 71 % of the 170 surveyed companies consider the business environment adverse; only 2 % of the surveyed companies identified the Slovak business environment as favorable. Based on the survey, it follows that 59 % of the surveyed companies expect no change in the business environment, only 6 % of respondents expect an improvement, and 35 % expect further deterioration.
Slovakia based its economic growth on the quality of the business environment, of course using the comparative advantages which the country still has, but their strength in relation to other countries is gradually weakening (Merková and Drábek, 2010). Justice and legislation are among the worst areas of the business environment in Slovakia. According to the SCCI survey, the worsening legislative environment is particularly evident in the hasty adoption of amendments and laws (Merková, 2010).
The situation of the business environment in Slovakia and other countries was analyzed on the basis of five ratings: indexes and rankings compiled by various world expert organizations and institutions. The ratings are not only supported by statistical data on the economic development of countries; they are also the result of experts' opinions and independent assessors' perceptions of the development of each country in comparison with the development of other economies. Although the rankings are not scientific facts, the institutions that compiled them are considered independent, objective and credible. The ratings reflect perceptions of the situation in the country from the perspective of the business sector.
3.1.1 Index of Economic Freedom
According to the Index of Economic Freedom in 2010, Slovakia improved by 0.3 points and was ranked 35th with an overall assessment of 69.7 points (in 2009 it scored 69.4 points and held 36th place). The overall score is higher than the world average. The Czech Republic took the highest position among the V4 countries after five years, ranked 34th (37th in 2009). Hungary was ranked 51st (44th in 2009) and Poland 71st (82nd in 2009). However, Poland is included among the top ten countries with the best annual improvement in the ranking. Within the European region, Slovakia ranked 18th out of 43 countries (in 2009 it was ranked 20th). The former British colony of Hong Kong has been declared the freest economy in the world for the 16th time.
The Index of Economic Freedom contains ten subcriteria: [1] Business Freedom, [2] Trade Freedom, [3] Fiscal Freedom, [4] Government Spending, [5] Monetary Freedom, [6] Investment Freedom, [7] Financial Freedom, [8] Property Rights, [9] Freedom from Corruption, and [10] Labor Freedom. In their evaluation of Slovakia, experts note a significant deterioration in the category of labor market freedom, offset by improvements in other areas. Slovakia is still limited by two institutional weaknesses: the judicial system is inefficient and slow, and in recent years efforts to eliminate corruption have shown only limited progress; in the long-term perspective, investors consider these weaknesses a serious factor in locating foreign enterprises.
3.1.2 Ease of Doing Business
In the Ease of Doing Business ranking, Slovakia placed worse in 2010 than in previous years. Among all countries, Slovakia fell from 32nd place in 2008 to 35th in 2009 and 42nd in 2010. Although Slovakia maintained its leading position among the V4 countries (the Czech Republic is 74th, Hungary 47th and Poland 72nd), it is lagging behind the faster reformers of Eastern Europe such as Georgia, Estonia, Latvia and Lithuania. Despite this, Slovakia overtook five industrially more developed economies of Europe (Portugal 48th, Spain 62nd, Luxembourg 64th, Italy 78th, Greece 109th). Unexpectedly, Slovakia, the largest manufacturer of automobiles per capita in the world, belongs to the EU countries with the lowest score in "trading across borders". The Slovak government needs to reduce requirements and shorten the time required for exports and imports, and to optimize this process through competitiveness and transparency.
Slovakia can support the entrepreneurial spirit by simplifying the procedures for starting a business and by taking steps to simplify business registration and make it more accessible for enterprises. In addition, the Doing Business report indicates that Slovakia needs to shorten the time needed for the enforcement of contracts and decrease the costs associated with enforcement. Slovakia has one of the longest waiting times among European countries for obtaining a building permit (287 days), followed only by Poland (308 days) and Cyprus (677 days). This is especially troubling when compared with countries such as Finland or Denmark, where the same can be done in 38 or 69 days, respectively, according to the World Bank.
Slovakia has a poor rating in the category of closing a business, particularly in two areas: the duration of bankruptcy settlement, about 4 years (followed only by the Czech Republic with a period of 6.5 years), and bankruptcy costs as a percentage of assets, which amount to 18 % in Slovakia as well as in Austria (followed by Poland with 20 % and Italy with 22 %). Slovakia scores better than most EU countries in the cost of obtaining a building permit (the second lowest in the EU, at 13.8 times the average wage). A building permit in Hungary costs 9.8 times the average wage, while in Bulgaria it is an overwhelming 436.5 times the average wage. Investors may obtain a building permit in Slovakia by completing 13 procedures, which is fewer than most other EU countries require (Report on the State of the Business Environment in SR, Ministry of Economy, 2010).
What makes Slovakia particularly attractive is the process of acquiring ownership. Slovakia is the country with the lowest costs for this process, which is very fast and efficient. Slovakia is among the top six countries in the strength of legal rights in obtaining a loan; this index measures the protection rules in relation to the possession of movable property. However, Slovakia was ranked between 18th and 23rd place in terms of the quality and availability of debt information obtained from public and private debt registries (rokovania.sk). In the EU, Slovakia is ranked between 7th and 10th place according to the employment index, which evaluates the rules for hiring people, working time, the number of leave days and the statutory requirements for dismissal of employees for economic reasons (spectator.sk).
3.1.3 Global Competitiveness Index
According to the Global Competitiveness Index, Slovakia was ranked in the group of developed countries. Slovakia's position gradually decreased from 36th place in 2006 to 37th, 41st, 46th and finally 47th place in 2010. The Czech Republic rose annually by 2 positions, Poland's position improved by 7 places and Hungary also moved upward. Slovakia is thus the only country of the V4 group whose rating decreases.
According to the Executive Director of the Business Alliance of Slovakia, which is a partner institution of the World Economic Forum, the global economic crisis means that most assessed countries achieved a lower competitiveness index score this year. However, due to the strong interdependence of economies, there was no pronounced movement in the ranking. Regarding Slovakia's ranking, he says: "The government's poor ability to improve the business environment, to reform, and to eliminate the major barriers to business was the cause of Slovakia's fall in the ranking for the third time in a row." (alianciapas.sk).
The basic disadvantage of Slovakia is that most foreign companies keep their innovative potential organized in their home country, so the share of R&D capacities is gradually reduced, and Slovakia thus fails to engage these capacities in innovative projects.
3.1.4 Global Competitiveness Breakdown
According to the Global Competitiveness Breakdown, Slovakia was in 33rd position in 2010 and occupies the second position among the V4 countries in the long term. The Global Competitiveness Breakdown is compiled on the basis of four indicators: economic performance, government efficiency, business efficiency and infrastructure, each of which consists of five further subcriteria. In 2009, Slovakia became one of the 9 countries with the worst decline in scores. According to the Corruption Perceptions Index, Slovakia was set back four years: in 2009, its score dropped the most in the history of the measurements since 1998, from 5.0 to 4.5. Slovakia also worsened year-on-year in the country ranking, dropping from 52nd-53rd to 56th-60th place. For the first time since 2001, Slovakia ranked worst among the V4 countries (Transparency International).
Finally, in connection with the presented ratings, it should be noted that the evaluations do not always reflect the real and actual situation of a country's economy. Can the Index of Economic Freedom be considered objective if, in 2010, Ireland was ranked 5th, while in 2009 it nearly declared state bankruptcy due to extreme indebtedness? Can this country be considered economically free under these conditions? The same applies to the USA (8th place). Can the Quality of Business Environment assessment be considered reliable if the best ranked countries are among the most indebted in the world, such as the USA (4th place), Great Britain (5th place) and the abovementioned Ireland (7th place)? Even less credible are the results of the Global Competitiveness Index: according to this indicator, the USA is excellent (2nd place), while objectively the most competitive country, China, is ranked 29th. However, experts' rankings are accepted by investors, of course in terms of their insights into this issue (Drábek and Merková, 2010).
3.2 Investment incentives for the development of investing
The analyzed data show that Slovakia still has significant comparative advantages (a high correlation between wages and labor productivity, low costs of dismissal, the index of rights of creditors and debtors, a healthy banking sector, relatively good availability of financing by loans, low duty barriers, support of FDI, good conditions for technology transfer and FDI), which should be used, while the negative factors that foreign investors analyze when locating their business activities should be removed (Merková and Drábek, 2011). In connection with FDI inflows and the encouragement of foreign companies to invest, it is necessary to present investment incentives: all the measurable economic benefits provided by the host government to foreign investors for the purpose of motivating business activities. The primary role of investment incentives should be to motivate investors to place their new projects in so-called disadvantaged areas, that is, in regions with higher unemployment, lower infrastructure quality, etc. The positive impact of a new investment is proved by job creation, by opportunities for graduates, as well as by the creation of new entrepreneurial opportunities for local companies (Ministry of Economy, 2010).
Investment aid is a form of state aid targeted at promoting the economic development of the most disadvantaged regions and at mitigating regional disparities. The granting of investment aid should stimulate the creation of new jobs.
An investment aid beneficiary can be a legal person or a natural person-entrepreneur with a registered office in the Slovak Republic, incorporated in the Commercial Register or the Trade License Register, ready to implement an investment plan in the Slovak Republic; the beneficiary must be 100 % owned by the applicant, or the applicant must be a controlling person of the beneficiary. The beneficiary's investment activities and projects have to be in compliance with Act 565/2007 Coll., the "Act on Investment Aid".
One of the factors affecting the investor's decision on the placement of its investment is also the amount and structure of the investment incentives that may be obtained. The so-called intensity of the aid means the maximum proportion of the eligible costs that may be approved for the investor in the form of particular investment incentives. The maximum intensity differs by district: the limit in the Bratislava region is 0 %, in Western Slovakia 20-40 %, in Central Slovakia 25-50 % and in Eastern Slovakia 25-50 % (Ministry of Economy, 2010).
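As a quick illustration of how these aid-intensity caps translate into maximum aid amounts, consider the following minimal Python sketch (the project cost figure is hypothetical, not from the article):

```python
# Maximum aid intensity by region (upper bounds quoted in the text).
intensity_cap = {
    "Bratislava region": 0.00,
    "Western Slovakia": 0.40,
    "Central Slovakia": 0.50,
    "Eastern Slovakia": 0.50,
}

eligible_costs_eur = 10_000_000  # hypothetical eligible project costs

for region, cap in intensity_cap.items():
    print(f"{region}: max aid = EUR {eligible_costs_eur * cap:,.0f}")
```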
The Act on Investment Aid 565/2007 Coll. divides the projects that may be supported into four categories, the first of which is industrial production. Investment incentives mean the price or cost that the country must cover to some extent in connection with the inflow of foreign capital (in periods of deficit in domestic financial resources), considering the positive effects that FDI will bring; in the past, this meant the solution of two serious problems in the SR: employment growth and an improved trade balance (Drábek and Merková, 2010).
3.3 Foreign direct investment flows in the SR and WPI of SR
Data of the United Nations Conference on Trade and Development (UNCTAD) from 2008 show that Slovakia ranked highly among the 27 EU countries according to the indicator of FDI inflows per capita: 16th place, with a value of 632 USD per capita. The evaluation of total FDI inflows in millions of USD, as well as the percentage of FDI in the country's GDP (17th place), shows similar results.
At the beginning of its transformation, Slovakia had similar comparative advantages as other countries in Central and Eastern Europe, particularly a qualified and cheap labor force, cheap raw material and energy inputs, a good location and close relations with the EU. Until 2000, FDI inflows had risen, but their volume lagged behind the volume of FDI inflows in the other V4 countries (The Concept of Management of FDI, Ministry of Economy, 2009).

Forms of investment incentives in the Slovak Republic (source: data of the Ministry of Economy of SR). Direct support for: construction; technology; research and development; job creation and retraining of the workforce; allowances for staff training; land acquisition and implementation of infrastructure. Indirect support for: loan policy with lower interest, longer repayment periods and state guarantees; income tax relief; transfer or exchange of real estate at a price lower than its general value; provision of advisory services free of charge or for a partial payment; deferred tax payment.
FDI inflows into the wood-processing industry (WPI) in the presented period of 5 years reached their largest volume in 2005, amounting to 1.557 billion SKK; however, 90 % of these resources were absorbed by the furniture industry. In other years, less than half of this value was achieved. The second largest inflow was in 2006, when 835 million SKK were invested in the WPI. The pulp and paper industry dominated in 2006 and 2007, with FDI inflows of 608 million SKK and 606 million SKK, respectively.
The smallest amount of foreign investment flowed into the wood industry sector (both annually and in total), with the exception of 2004, when the wood industry recorded FDI inflows of 556 million EUR. The opposite trend was recorded in the industrial production of the Slovak Republic, with the lowest FDI inflows, amounting to 10.901 billion EUR, in 2005.
Stagnation of investment in sawmilling, construction and carpentry was reported in the period 1999-2002, and in the period 2003-2006 an increase was recorded, amounting to nearly 1.7 to 2.6 billion Slovak crowns (SKK) per year (NLC, 2009). A significant increase of investment to the level of 6.07 billion SKK occurred in 2007, but this growth was followed by a drop to the level of 2.25 billion SKK. The furniture production sector saw stronger investment in the years 2000, 2001 and 2004-2006, and this fact caused the growth of labor productivity; investment was in the range of 1.5 to 2.8 billion SKK in the years mentioned above. Similarly as in the wood industry (WI), in the furniture industry (FI) an equally sharp change was recorded in 2006-2008: from 2.8 billion SKK to 4.5 billion SKK in 2007, followed by a fall to 1.6 billion SKK in 2008.
Rapid changes of investment in the pulp and paper industry (PPI) were reported, following the realization of significant business actions during the whole period. Major modernizations in this sector were made in 1999 and 2003-2005, but the overall trends suggest that the highest volume of investment of all three sectors of the wood-processing industry was made in the pulp and paper sector, ranging between 1.6 and 6.6 billion SKK per year (Merková et al., 2011). The effects of investment and FDI were analyzed through correlation and regression analysis, which was applied to detect dependencies between investment and other economic indicators. Selected analytical results, which demonstrate the positive impact, are presented in Table 15.
The first significant dependence is between the foreign direct investment stock in the SR and the GDP growth of SR, with a correlation coefficient r = 0.94, which demonstrates that the growth of FDI causes GDP growth. The regression coefficient b = 0.000009 means that a growth of FDI by 100 billion SKK causes GDP growth of 0.9 % on average.
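A quick numerical check of this interpretation, assuming (as the magnitude of the coefficient suggests) that the FDI stock is expressed in millions of SKK and GDP growth in percentage points:

```python
b = 0.000009          # GDP-growth percentage points per million SKK of FDI stock
delta_fdi = 100_000   # 100 billion SKK expressed in millions of SKK

print(b * delta_fdi)  # 0.9 percentage points, matching the text
```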
Subsequent correlation and regression analysis examined the relationships between variables in the wood-processing industry, including the negative trend caused by problems left unsolved for a long time, as shown by the annual decline in sales, value added and profit. The development of selected indicators is shown in Tables 15 to 17. Employment dropped in all sectors of the WPI, and it can be assumed that the smaller number of employees has an impact on labor productivity growth, resulting in wage increases; correlation and regression analysis showed a high dependence (correlation coefficient 0.95) between labor productivity growth and wage growth.
All economies have been affected by the global crisis in terms of a decrease in exports and industrial production, the slowdown of FDI inflows and rising unemployment.
FDI inflows into the region of the V4 countries are affected by various factors in the crisis. In relation to individual V4 countries, however, the main expectations are the highest average GDP growth over the long term (Slovakia), a large domestic market (Poland) and a relatively stable service sector (Czech Republic, Hungary). For uncertain investors in a crisis, the V4 countries have the advantage of a predictable and well-known environment, in the case of Slovakia even strengthened by membership in the monetary union.
There is a review of the perception of prices, as investors will certainly not decide for the lowest current price (meaning low production costs and cheap labor or low tax costs), but primarily for the lowest cost throughout the life cycle of the investment (Jelačić et al., 2010). Apart from the quality of infrastructure, the size of the domestic market or access to regional and international markets, foreign investors will particularly take into account factors such as energy costs, the availability of suppliers and customers, a sufficiently qualified and skilled workforce, the predictability of economic development, the stability of legislative conditions, the security of companies, and others. One of the biggest challenges of the Slovak economy is the ambition to remain an attractive country for foreign direct investment.
This paper is the result of a partial solution of the Ministry of Education grant project VEGA No. 1/0089/11, "Measurement and performance management of wood industry companies in SR".
The conditions affecting the acquisition of investment aid are set by the state (Ministry of Foreign Affairs, 2009). If the investor meets all the requirements of the investment aid in the individual areas, it can apply for the following forms of investment incentives (Slovak Investment and Trade Development Agency, SARIO): a) a subsidy for the acquisition of material and immaterial assets, b) an income tax relief, c) a contribution for new jobs created, d) a transfer of immovable property or exchange of immovable property at a price lower than the general asset value.
3.4 Effects of investment and FDI in the SR and WPI of SR
Figure 8 Correlation in the WPI: Investment ~ Labor productivity (period 1999-2008). The fitted relationship is: Productivity of sales in the WPI (thousand SKK) = 1100.6 + 0.16209 × Investment in the WPI (million SKK).

The global financial crisis also had a negative impact on the development of foreign direct investment flows. Since the end of 2008, global FDI inflows have decreased in all three forms. Equity shares, reinvested earnings and other capital flows (especially intra-corporate loans) fell mainly in developed economies. Investments in equity shares were reduced due to the weakening of foreign mergers and acquisitions. Lower profits of subsidiary units contributed to the decline in reinvested earnings (World Investment Report, 2009). In the period of restructuring of parent companies, foreign subsidiary units were often involved in balancing the outstanding debt.
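For illustration, the fitted line above can be evaluated at a hypothetical investment level (the coefficients are the ones reported with Figure 8; the input value is invented):

```python
def productivity_of_sales(investment_mil_skk: float) -> float:
    """Fitted WPI regression: productivity of sales in thousand SKK."""
    return 1100.6 + 0.16209 * investment_mil_skk

# e.g., an investment of 2,000 million SKK predicts about 1,424.8 thousand SKK
print(productivity_of_sales(2_000))
```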
Footnote: The Corruption Perceptions Index is compiled by Transparency International and covers 180 countries worldwide. A composite index, the CPI is based on 13 different expert and business surveys. Eight surveys are made for Slovakia every year; Transparency International conducts none of them, and they are carried out by different institutions. For Slovakia, the most recent ones were by: the World Economic Forum, Freedom House, The Economist Intelligence Unit, the International Institute for Management Development, IHS Global Insight and the Bertelsmann Foundation. Source of data: http://www.transparency.sk/vystupy/rebricky/
Source: data of the Global Competitiveness Report
Source: data of the World Competitiveness Yearbook
Table 13 Investment in the WPI and industrial production of the Slovak Republic (mill. SKK) | 6,377 | 2012-01-01T00:00:00.000 | [ "Economics", "Business", "Environmental Science" ] |
Strongly nonnegative curvature
We prove that all currently known examples of manifolds with nonnegative sectional curvature satisfy a stronger condition: their curvature operator can be modified with a 4-form to become positive-semidefinite.
See Wilking [22] and Ziller [23] for surveys. On the one hand, the only currently known examples of closed manifolds with sec > 0 different from compact rank one symmetric spaces (CROSS) occur in dimensions 6, 7, 12, 13 and 24. On the other hand, a wealth of examples of closed manifolds with sec ≥ 0 have been produced (beyond homogeneous spaces and biquotients), notably by methods developed by Cheeger [7] and Grove and Ziller [12,13]. It follows from our previous work [4,5] that almost all known examples of closed manifolds with sec > 0 actually satisfy a stronger curvature condition, called strongly positive curvature. The purpose of this paper is to show that all known examples of manifolds with sec ≥ 0 have strongly nonnegative curvature. This further corroborates the importance of strongly nonnegative and positive curvature in the study of sec ≥ 0 and sec > 0.
A Riemannian manifold (M, g) is said to have strongly nonnegative curvature if, for all p ∈ M, there exists a 4-form ω ∈ ∧^4 T_pM such that the modified curvature operator (R + ω) : ∧^2 T_pM → ∧^2 T_pM is positive-semidefinite. This is an intermediate condition between sec ≥ 0 and positive-semidefiniteness of the curvature operator. It is worth recalling that manifolds satisfying the latter have been classified [6,22], see Sect. 2 for details. Among the key properties of strongly nonnegative curvature are that it is preserved by products, Riemannian submersions and Cheeger deformations [4, Thm. A, Thm. B], see also [4, §6.4]. In particular, since any compact Lie group G with bi-invariant metric has positive-semidefinite curvature operator, all compact homogeneous spaces G/H and all compact biquotients G//H have metrics with strongly nonnegative curvature.
Using a gluing method inspired by the construction of Berger spheres, Cheeger [7] produced another class of closed manifolds with sec ≥ 0. Our first main result is that all manifolds in this class also have strongly nonnegative curvature: Theorem A The connected sum of any two compact rank one symmetric spaces (with any orientation) admits a metric with strongly nonnegative curvature.
We remark that some manifolds in Theorem A are diffeomorphic to biquotients, while others are not even homotopy equivalent to biquotients [21], see Remark 4.2. A significant generalization of the gluing construction in [7] was achieved by Grove and Ziller [12], in the context of cohomogeneity one manifolds. These are manifolds with an isometric group action whose orbit space is 1-dimensional, see Sect. 3 for details. Our second main result is that their method to produce metrics with sec ≥ 0 actually yields strongly nonnegative curvature: Theorem B Every cohomogeneity one manifold whose nonprincipal orbits have codimension ≤ 2 admits an invariant metric with strongly nonnegative curvature.
The class of manifolds in Theorem B is surprisingly rich. For instance, it includes all 4 oriented diffeomorphism types homotopy equivalent to RP 5 , see [12,Thm. G]. Even more interestingly, it includes a number of total spaces of principal G-bundles, which can be used to construct metrics with strongly nonnegative curvature on associated vector bundles and sphere bundles (see Corollary 3.4). Remarkably, in combination with other techniques, this implies that all exotic 7-spheres admit metrics with strongly nonnegative curvature (see Sect. 3 for details).
Constructions of metrics with nonnegative sectional curvature on vector bundles can be interpreted as instances of the "converse" to the Soul Theorem of Cheeger and Gromoll [8]. This celebrated result states that any complete open manifold M with sec ≥ 0 has a totally convex compact submanifold S ⊂ M without boundary, called the soul of M, such that M is diffeomorphic to the normal bundle of S in M. Observe that if M has strongly nonnegative curvature, then so does its soul S, as it is a totally geodesic submanifold [4, Prop. 2.6]. The "converse" question of which vector bundles over closed manifolds with sec ≥ 0 admit a complete metric with sec ≥ 0 has been studied by several authors. It follows from our results that all the progress made to date regarding this problem can be transplanted to the context of strongly nonnegative curvature (see Corollary 3.4).
In the context of complete open manifolds with sec ≥ 0, Guijarro [14] proved the existence of an "improved" metric which is isometric to a product outside a neighborhood of the soul. Our third main result is that the same improvement can be obtained with strongly nonnegative curvature: Theorem C Let (M, g) be a complete open manifold with strongly nonnegative curvature and soul S. There exists another metric g′ on M with strongly nonnegative curvature, such that S remains a soul, and (M, g′) is isometric to a product ν^1(S) × [1, +∞) outside a compact neighborhood of S. The constructions in Theorems A, B, and C comprise an exhaustive list of all the currently known methods to produce manifolds with sec ≥ 0. Therefore, as claimed in the first paragraph, all known examples of manifolds with sec ≥ 0 have strongly nonnegative curvature.
Besides the fundamental fact that strongly nonnegative curvature is preserved under Riemannian submersions [4], there are two main technical tools needed to prove the above results. The first (Lemma 3.2) is that bi-invariant metrics on Lie groups retain strongly nonnegative curvature after being dilated by a factor of up to 4/3 in the direction of an abelian subalgebra. This result is a strengthening of a result in Grove and Ziller [12, Prop. 2.4], see also Ziller [23, Lemma 2.9], using the same key fact that such dilations are "backwards" Cheeger deformations, that is, submersions from a certain semi-Riemannian manifold. The second technical result (Lemma 4.1) asserts that certain disk bundles whose boundary is a homogeneous space with strongly positive curvature admit a metric with strongly nonnegative curvature which is a product near the boundary. This is proved through certain estimates that generalize those in Cheeger [7].
This paper is organized as follows. Section 2 provides a recollection of the definitions and basic properties of strongly nonnegative curvature, as well as a discussion of basic examples. Constructions of metrics with strongly nonnegative curvature on cohomogeneity one manifolds are given in Sect. 3, where Theorem B is proved and its consequences for associated bundles are described. In Sect. 4, we explain a method to endow certain disk bundles with strongly nonnegative curvature, leading to the proof of Theorem A. Finally, Sect. 5 contains the proof of Theorem C.
Definitions and basic properties
A detailed account of strongly positive and nonnegative curvature can be found in [1,3-5]. As a service to the reader, a short summary is provided below.
Modified curvature operators
Let (M, g) be a Riemannian manifold. Using the inner products induced by g, identify all exterior powers ∧^k T_pM with their duals ∧^k T_pM*. Denote by Sym^2(∧^2 T_pM) the space of symmetric linear operators S : ∧^2 T_pM → ∧^2 T_pM, and by b : Sym^2(∧^2 T_pM) → ∧^4 T_pM the Bianchi map, given by full antisymmetrization. Furthermore, identify ∧^4 T_pM with a subspace of Sym^2(∧^2 T_pM), by means of ⟨ω(X ∧ Y), Z ∧ W⟩ = ⟨ω, X ∧ Y ∧ Z ∧ W⟩, (2.1) so that Sym^2(∧^2 T_pM) = ker b ⊕ ∧^4 T_pM is an orthogonal direct sum decomposition, and b is the orthogonal projection operator onto ∧^4 T_pM.
With the above setup, we may add to the curvature operator R ∈ ker b of (M, g) any 4-form ω ∈ ∧^4 T_pM, and the resulting modified curvature operator (R + ω) ∈ Sym^2(∧^2 T_pM) has the same sectional curvature function as R. Indeed, by (2.1), the quadratic form associated to ω ∈ ∧^4 T_pM vanishes on the Grassmannian of (oriented) 2-planes Gr_2(T_pM) = {X ∧ Y ∈ ∧^2 T_pM : ‖X ∧ Y‖^2 = 1}, and hence sec(X ∧ Y) = ⟨(R + ω)(X ∧ Y), X ∧ Y⟩ = ⟨R(X ∧ Y), X ∧ Y⟩. (2.2)
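The vanishing behind (2.2) can be checked numerically. The following minimal Python sketch (our own illustration, with basis conventions chosen here) verifies in dimension 4 that the symmetric operator induced by a 4-form annihilates the quadratic form on decomposable 2-vectors:

```python
import itertools
import numpy as np

# Basis of the second exterior power of R^4: pairs (i, j) with i < j; dim = 6.
pairs = list(itertools.combinations(range(4), 2))

def levi_civita(perm):
    """Sign of a permutation of (0, 1, 2, 3), or 0 if entries repeat."""
    if len(set(perm)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

# The 4-form w = c * e1^e2^e3^e4, viewed as a symmetric 6x6 operator.
c = 2.7
W = np.array([[c * levi_civita(p + q) for q in pairs] for p in pairs])

rng = np.random.default_rng(0)
X, Y = rng.standard_normal(4), rng.standard_normal(4)

# Coefficients of the decomposable 2-vector X^Y in the chosen basis.
xy = np.array([X[i] * Y[j] - X[j] * Y[i] for (i, j) in pairs])

# The quadratic form of a 4-form vanishes on decomposable 2-vectors
# (Pluecker relation), so adding it to R cannot change sectional curvatures.
print(np.isclose(xy @ W @ xy, 0.0))  # True
```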
Strongly nonnegative curvature
The manifold (M, g) is said to have strongly nonnegative curvature if, for all p ∈ M, there exists ω ∈ ∧^4 T_pM such that the modified curvature operator R + ω is positive-semidefinite. Strongly nonnegative curvature is clearly an intermediate curvature condition between sec ≥ 0 and positive-semidefiniteness of the curvature operator. All these curvature conditions are equivalent in dimensions ≤ 3, and strongly nonnegative curvature remains equivalent to sec ≥ 0 in dimension 4, see [20] and [1, Prop. 6.83].
Basic properties
Elementary arguments show that products and totally geodesic submanifolds of manifolds with strongly nonnegative curvature also have strongly nonnegative curvature [4, §2]. In addition, strongly nonnegative curvature is preserved under Riemannian submersions. This fundamental result was established in [4], by rewriting the Gray-O'Neill formula [2, Thm. 9.28f] that relates the curvature operators of a Riemannian submersion π : M̄ → M and its A-tensor in a way compatible with modifications by 4-forms; we refer to the resulting identity as (2.3). Similar arguments also show that strongly nonnegative curvature is preserved under Cheeger deformations [4, §2.5].
Basic examples
The simplest examples of manifolds with strongly nonnegative curvature are those whose curvature operator is positive-semidefinite. Closed manifolds with this property have been classified, mainly through the work of Böhm and Wilking [6], see Wilking [22,Thm. 1.13]. Namely, each factor in the de Rham decomposition of the universal covering of such a manifold is isometric to one of: (i) Euclidean space; (ii) Sphere with positive-semidefinite curvature operator; (iii) Compact irreducible symmetric space; (iv) Compact Kähler manifold biholomorphic to CP n whose restriction of the curvature operator to real (1, 1)-forms is positive-semidefinite.
An important subfamily consists of Lie groups G with a bi-invariant metric Q. Recall that the curvature operator R_G : ∧^2 g → ∧^2 g of (G, Q) is determined by ⟨R_G(X ∧ Y), Z ∧ W⟩ = (1/4) Q([X, Y], [Z, W]), which is clearly positive-semidefinite. Since Riemannian submersions preserve strongly nonnegative curvature, all compact homogeneous spaces G/H and all compact biquotients G//H have metrics with strongly nonnegative curvature. For instance, one may take on G/H the so-called normal homogeneous metric, that is, the metric induced by the bi-invariant metric Q on G via the quotient map, and similarly for G//H.
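Positive-semidefiniteness can be seen in one line (a standard observation, phrased in our notation): writing m : ∧^2 g → g for the linear extension of X ∧ Y ↦ [X, Y], the displayed formula says R_G = (1/4) m*m, so

```latex
\langle R_G(\xi), \xi \rangle
  = \tfrac{1}{4}\, Q\bigl( m(\xi),\, m(\xi) \bigr)
  = \tfrac{1}{4}\, \lVert m(\xi) \rVert_Q^2 \;\ge\; 0
  \qquad \text{for all } \xi \in \wedge^2 \mathfrak{g}.
```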
Remark 2.1
It is an interesting question whether the moduli spaces of homogeneous metrics with strongly nonnegative curvature and with sec ≥ 0 coincide on a given compact homogeneous space. This has been studied for Wallach flag manifolds in [5] and for Berger spheres in [3,4]. In the former, these moduli spaces coincide, but that is not the case in the latter. In fact, the spheres S^{4n+3} = Sp(n + 1)/Sp(n) and S^15 = Spin(9)/Spin(7) endowed with the Berger metric g_λ = λ g_V ⊕ g_H have sec ≥ 0 for all 0 < λ ≤ 4/3, but do not have strongly nonnegative curvature if λ is sufficiently close to 4/3.
Cohomogeneity one manifolds
A cohomogeneity one manifold is a Riemannian manifold (M, g) with an isometric action by a compact Lie group G such that the orbit space M/G is 1-dimensional. It is natural to investigate strongly nonnegative curvature among these manifolds after observing that all compact homogeneous (that is, cohomogeneity zero) spaces admit strongly nonnegative curvature, see Sect. 2. After briefly describing the basic structure of cohomogeneity one manifolds (see, e.g., [1,11,12] for details), we strengthen the gluing construction of Grove and Ziller [12] from sec ≥ 0 to strongly nonnegative curvature, proving Theorem B.
Topological structure
The orbit space M/G of a cohomogeneity one manifold M is, up to rescaling, isometric to one of R, S^1, [0, +∞) or [−1, 1]. In the first two cases, all orbits are principal, and hence the quotient map q : M → M/G is a fiber bundle. In the last two cases, there are nonprincipal orbits S corresponding to boundary points of M/G, which are called singular or exceptional, according to their dimension being respectively smaller than or equal to that of principal orbits. If M/G = [0, +∞), then M is equivariantly diffeomorphic to the total space of a disk bundle over the unique nonprincipal orbit S. More precisely, fix p ∈ S, denote by K = G_p its isotropy group, and denote by V = ν_p S the normal space to S. The slice representation ρ : K → O(V) is the orthogonal action of K on V, and M is equivariantly diffeomorphic to G ×_K V. If M/G = [−1, 1], then M is G-equivariantly diffeomorphic to the union of two disk bundles as above, one over each of the two nonprincipal orbits S_± = G/K_±, glued along their common boundary, which is a principal orbit G/H.
Strongly nonnegative curvature
The construction of cohomogeneity one metrics with strongly nonnegative curvature is straightforward in case M/G is one of R, S^1, or [0, +∞). We thus focus on the more involved case M/G = [−1, 1], which requires that the nonprincipal orbits S_± have codimension ≤ 2. We follow the same strategy as in Grove and Ziller [12] to glue two disk bundles. Namely, we construct metrics g_± with strongly nonnegative curvature on each "half" M_± = G ×_{K_±} V_±, which outside of a compact set are isometric to G/H × [0, ε) with a product metric g_0 + dt^2, where g_0 is normal homogeneous. This is achieved with a scale up/scale down procedure involving the bi-invariant metric on G.
Since the construction is the same on each half, we henceforth drop the subscripts ± .
The desired metric g on G ×_K V is induced by a metric on G × V of the form L + dt^2 + f(t)^2 dθ^2, where L is a left-invariant metric on G, f is an odd smooth function such that f′(0) = 1 and f(t) > 0 for all t > 0, and dθ^2 is the round metric on the unit sphere S(V). Let π : G × V → G ×_K V denote the quotient map, and write g = k ⊕ m and k = h ⊕ p for Q-orthogonal splittings, where h and k are the Lie algebras of H and K. We use subscripts to denote the components in these subspaces, e.g., X_k and X_m are the components of X ∈ g in k and m, respectively. Routine computations show that the vertical space of the Riemannian submersion π at (e, t v_0) is h × {0} together with {(−X, X*_{tv_0}) : X ∈ p}, and the horizontal space is its orthogonal complement, where X*_{tv_0} = (d/ds) ρ(exp(sX)) t v_0 |_{s=0} is the value at t v_0 ∈ V of the action field X* induced by X ∈ p, and B is the L-symmetric automorphism B : p → p such that L(·, B·) = dθ^2.
The following description of the metric on the principal orbits can be obtained from the above splitting, see for instance [7,12]. Lemma 3.1 (Scale down) Using the above notation, for each t > 0, we have: (i) The metric ⟨·, ·⟩ on the principal orbit G · [e, t v_0] ⊂ G ×_K V induced by the metric L + dt^2 + f(t)^2 dθ^2 on G × V is given by L(·, C·), where C : m ⊕ p → m ⊕ p is an L-symmetric automorphism determined explicitly by f(t) and B. (ii) Suppose B = b Id for some b > 0, and that f(t) ≡ a is constant. Then, for a suitable rescaling L of L in the direction of p, the metric L + dt^2 + f(t)^2 dθ^2 on G × V induces the metric L|_{m⊕p} on the principal orbit G · [e, t v_0], and L is Ad_K-invariant.
The second key ingredient in the construction is the following strengthening of [12, Prop. 2.4], which states that a bi-invariant metric Q on G retains (strongly) nonnegative curvature when it is dilated by a factor of up to 4/3 in the direction of an abelian subgroup A ⊂ G. This is accomplished (just as in [12, Prop. 2.4]) by viewing this process as a "backwards" Cheeger deformation, that is, the enlarged metric on G is induced by a submersion from G × A with a semi-Riemannian metric. Lemma 3.2 (Scale up) Let (G, Q) be a Lie group with bi-invariant metric, a be an abelian subalgebra of g, and n be its Q-orthogonal complement. The left-invariant metrics Q_t = t Q|_a ⊕ Q|_n on G have strongly nonnegative curvature for all 0 < t ≤ 4/3. Proof The result is obvious for t = 1, since the curvature operator of (G, Q) is positive-semidefinite, hence (G, Q) trivially has strongly nonnegative curvature.
Consider t > 0, t ≠ 1, and let A be the unique connected Lie subgroup of G with Lie algebra a. Endow G × A with the semi-Riemannian product metric Q + (t/(1−t)) Q|_a. A straightforward computation shows that the natural quotient map G × A → G is a semi-Riemannian submersion, and the horizontal lift of X ∈ g and the A-tensor of this submersion can be computed explicitly. By the Gray-O'Neill formula (2.3) and (2.4), the curvature operator of (G, Q_t) decomposes as a modified operator R_t + ω_t up to terms involving the Bianchi map b. Expanding the first term by separating the components in a and in n, using that [a, n] ⊂ n, and substituting (3.3) into the resulting expression, we conclude that, if 0 < t ≤ 4/3, then R_t + ω_t is a sum of positive-semidefinite operators, hence positive-semidefinite. Thus, (G, Q_t) has strongly nonnegative curvature for all 0 < t ≤ 4/3. We now use Lemmas 3.1 and 3.2 to prove Theorem B, in analogy with the sec ≥ 0 construction of Grove and Ziller [12, Thm. 2.6].
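To make the threshold 4/3 concrete, consider the simplest instance of Lemma 3.2 (a worked example of ours, under the normalization that Q has constant curvature 1 and a is spanned by a Hopf direction; cf. Remark 2.1):

```latex
% G = SU(2) \cong S^3, with a one-dimensional (hence abelian) subalgebra a.
% Q_t is then a Berger metric; in a Milnor frame {e_1, e_2, e_3} with e_1
% spanning a, the curvature operator of (SU(2), Q_t) is diagonal on
% \wedge^2 \mathfrak{su}(2), with eigenvalues
\operatorname{eig}(R_t) \;=\; \{\, t,\; t,\; 4 - 3t \,\}.
% Since \wedge^4 of a 3-dimensional space vanishes, strongly nonnegative
% curvature here is plain positive-semidefiniteness, which holds exactly
% for 0 < t \le 4/3, matching the bound in Lemma 3.2.
```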
Proof of Theorem B The other cases being straightforward, let M be a cohomogeneity one G-manifold with M/G = [−1, 1]. Let S ± = G/K ± be the nonprincipal orbits, and consider separately each of the two "halves" G × K ± V ± of M, which are disk bundles over S ± . Fix a bi-invariant metric Q on G. We will construct a metric g on each disk bundle G × K V that has strongly nonnegative curvature, and near the boundary is isometric to G/H × [0, ε) with a product metric, where G/H is endowed with the normal homogeneous metric defined by Q. Gluing these two halves together along their common boundary G/H yields the desired metric on M.
If S = G/K is exceptional, i.e., has codimension 1, then the metric induced on G ×_K V by the product metric Q + dt^2 on G × V clearly has the desired properties. Thus, assume that S = G/K has codimension 2, which means that dim p = 1. This implies that p is an abelian subalgebra of g, and the standard metric dθ^2 on the circle S^1 is given by Q(·, B·), where B = b Id for some b > 0, cf. Lemma 3.1 (ii). Let f(t) be an odd smooth function such that f′(0) = 1, f(t) > 0 and f″(t) ≤ 0 for all t > 0, and f(t) ≡ a is constant for t ≥ t_0, where a satisfies a suitable lower bound. The cigar metric dt^2 + f(t)^2 dθ^2 on V has positive-semidefinite curvature operator, hence trivially has strongly nonnegative curvature. Consider the scaled up metric L(·, ·) = Q(·, E·) on g = m ⊕ k, where E : m ⊕ p ⊕ h → m ⊕ p ⊕ h is given by (3.4). Since this metric L on G has strongly nonnegative curvature by Lemma 3.2, the product metric L + dt^2 + f(t)^2 dθ^2 on G × V also has strongly nonnegative curvature. It is easy to see that L is Ad_K-invariant [12, p. 341], and as K acts orthogonally on V, the metric L + dt^2 + f(t)^2 dθ^2 descends to a scaled down metric g on G ×_K V. The quotient map π : G × V → G ×_K V is hence a Riemannian submersion, so (G ×_K V, g) has strongly nonnegative curvature. Finally, Lemma 3.1 (ii) implies that, for any t ≥ t_0, the metric induced by g on the principal orbit G · [e, t v_0] is the normal homogeneous metric defined by Q. This concludes the construction of the desired metric with strongly nonnegative curvature on each half of M. Remark 3.3 Instead of gluing the two halves (G ×_{K_±} V_±, g_±) of M by identifying their common boundary G/H via the identity map, one may use any other isometry φ of G/H. Despite being the union of the same two cohomogeneity one disk bundles, the resulting manifold M′ = G ×_{K_−} V_− ∪_φ G ×_{K_+} V_+ is in general not diffeomorphic to M, and unless φ ∈ N(H)/H, it does not have a global isometric G-action; however, it still has strongly nonnegative curvature. Obviously, one may also replace one of the disk bundles G ×_{K_+} V_+ by any other disk bundle with the same boundary; e.g., gluing two copies of the same disk bundle G ×_{K_−} V_− produces the double of that bundle. For instance, consider the cohomogeneity one 2-disk bundle determined by the groups H = U(n − 1), K = U(n − 1)U(1), and G = U(n). This is the normal disk bundle of CP^{n−1} ⊂ CP^n, and is diffeomorphic to the complement of a disk in CP^n, see Sect. 4 for details. Gluing with the identity map on G/H = S^{2n−1}, the result is CP^n # CP^n with opposite orientations on the two summands, while with the antipodal map it is CP^n # CP^n with equal orientations. The former manifolds admit a cohomogeneity one G-action; however, the latter do not for n = 2, 3 [16,17].
Principal and associated bundles
The class of manifolds that can be shown to admit metrics with strongly nonnegative curvature due to Theorem B extends far beyond that of cohomogeneity one manifolds, thanks to the associated bundle construction. Recall that given a principal G-bundle P and an isometric G-action on a manifold F, the associated bundle P × G F is the orbit space of a free G-action on P × F, see [1, §3.2]. Strongly nonnegative curvature, just as sec ≥ 0, is preserved under products and Riemannian submersions [4]; so if both P and F have strongly nonnegative curvature, then so does P × G F. In the remainder of this section, we list all currently known applications of this technique.
As an important example, all principal SO(k)-bundles over S 4 have a cohomogeneity one SO(3) × SO(k)-action with singular orbits of codimension 2, see [12, Thm. F] and [23,Thm. 2.10], and hence metrics with strongly nonnegative curvature by Theorem B. Via the associated bundle construction, it follows that all vector bundles and sphere bundles over S 4 have complete metrics with strongly nonnegative curvature. This accounts for 20 of the 28 oriented diffeomorphism types of spheres in dimension 7, which includes all Milnor exotic spheres. It has been announced that the 8 remaining exotic 7-spheres are orbit spaces of free Sp(1)-actions on cohomogeneity one manifolds of dimension 10 with codimension 2 singular orbits [9], hence they also admit metrics with strongly nonnegative curvature by Theorem B.
Using these techniques on other principal G-bundles, one obtains the following comprehensive list of instances where the "converse" to the Soul Theorem of Cheeger and Gromoll [8] explained in the Introduction is currently known to hold. Corollary 3.4 The total spaces of the following vector bundles, and of the corresponding sphere bundles, admit complete metrics with strongly nonnegative curvature: (i) all vector bundles over S^4 and S^5; (ii) all vector bundles over S^7 of rank 3, and 88 of the 144 of rank 4; (iii) all vector bundles over CP^2 with nontrivial second Stiefel-Whitney class; (iv) all complex rank 2 vector bundles over CP^2 whose first Chern class c_1 is odd, or whose c_1 is even and the discriminant Δ := c_1^2 − 4c_2 satisfies Δ ≡ 0 mod 8; (v) all vector bundles of rank ≥ 6 over CP^2, S^2 × S^2 and CP^2 # CP^2; (vi) a representative of any class of stable vector bundles over any compact rank one symmetric space.
Connected sum of two compact rank one symmetric spaces
One of the main inspirations for the cohomogeneity one gluing construction of Grove and Ziller [12] described in the previous section was an earlier result of Cheeger [7] about gluing two compact rank one symmetric spaces (CROSS). In this section, we also strengthen this construction from sec ≥ 0 to strongly nonnegative curvature, proving Theorem A.
We follow the same strategy as in Cheeger [7], showing that the complement of a ball in each CROSS admits a metric with strongly nonnegative curvature, which near the boundary is isometric to the round cylinder S d−1 ×[0, ε). In this way, any two such objects of the same dimension d can be glued together along their boundary S d−1 , with an identification that preserves or reverses the orientation.
Geometric structure
There is a natural cohomogeneity one G-action with a fixed point S − = {p} on each CROSS, see e.g. [1, §6.3] for details. Using the notation from Sect. 3, the groups in these actions are given in Table 1.
All inclusions above are matrix block embeddings, except for Spin(8) ⊂ Spin(9), which comes from the spin representation. We may restrict our attention to the last 3 cases, since S^n and RP^n clearly have metrics with the desired properties. Denote the above projective spaces by KP^n, where K is one of the real normed division algebras R, C, H, or Ca, and set k = dim_R K. The principal orbits G/H are Berger spheres S^{kn−1}, which are boundaries of metric balls centered at p. The other singular orbit S_+ = G/K_+ is a totally geodesic KP^{n−1}, which is the cut locus of p, and the homogeneous bundles K_+/H → G/H → G/K_+ are Hopf bundles S^{k−1} → S^{kn−1} → KP^{n−1}. Thus, the complement M of a metric ball centered at p is diffeomorphic to the normal bundle of Cut(p) = KP^{n−1}. In particular, this "half" M ≅ G ×_{K_+} V_+ of the cohomogeneity one manifold is a disk bundle exactly as those in the previous section. However, note that the codimension of S_+ in KP^n is k, so the gluing method in the proof of Theorem B does not apply unless K = C, cf. Remark 3.3. In addition, we remark that the normal homogeneous metric on G/H = S^{kn−1} is not isometric to the round metric unless kn = 2 or 4, so a different construction is required. Table 1 Cohomogeneity one actions with a fixed point in a CROSS; in the surviving row, for Ca P^2: G = Spin(9), K_− = Spin(9), K_+ = Spin(8), H = Spin(7), V_− = Ca^2 and V_+ = Ca ≅ R^8.
Strongly nonnegative curvature
In order to carry out the above-mentioned strategy, we need to construct a metric on the disk bundle M ≅ G ×_K V (we drop the subscript + to simplify notation) with strongly nonnegative curvature, which is isometric to a round cylinder near the boundary G/H. The cases of CP^n and HP^n can be easily dealt with because K = H × L, where L is respectively U(1) and Sp(1), and the slice representation factors through the projection onto L, so that a metric on M with (strongly) nonnegative curvature may be produced from L-invariant metrics with (strongly) nonnegative curvature on the sphere G/H and the vector space V. This is precisely the argument found in Cheeger [7]. On the other hand, the case of the Cayley plane Ca P^2 does not admit such a simplification, and a more delicate (but ultimately analogous) argument is required. For the sake of completeness, we state it below in greater generality than what is needed for proving Theorem A, using the same notation as in Lemma 3.1. Lemma 4.1 Let L be a left-invariant, Ad_K-invariant metric on G that induces a metric with strongly positive curvature on G/H, and let f be a function as above. Then the metric on G ×_K V induced by L + dt^2 + f(t)^2 dθ^2 has strongly nonnegative curvature. Proof We restrict ourselves to the claims regarding strongly nonnegative curvature, as the case of sec ≥ 0 is similar and less involved. Since G acts on G ×_K V with cohomogeneity one, it is enough to consider points along a radial geodesic γ(t) = π(e, t v_0), t ≥ 0. Moreover, we may assume t > 0, since strongly nonnegative curvature is a closed condition. Applying the Gray-O'Neill formula (2.3) to the Riemannian submersion π : G × V → G ×_K V, we proceed by estimating from below the first 2 terms on the right-hand side of that formula.
To estimate the second term 3α, we write A = A_1 + A_2 according to the splitting (3.1) of the vertical space 𝒱; namely, A_1 is the component of A with image in h × {0}, and A_2 the component with image in {(−X, X*_{tv_0}) : X ∈ p}. Computing the corresponding terms then yields the estimate (4.1).

Since G/H has strongly nonnegative curvature, there is a 4-form ω_{G/H} ∈ ∧^4(m ⊕ p) such that R_{G/H} + ω_{G/H} is positive-semidefinite; defining a suitable 4-form ω ∈ ∧^4 H from ω_{G/H} yields the estimate (4.2). Combining the estimates (4.1), (4.2), and the Gray–O'Neill formula (2.3) applied to the Riemannian submersions G → G/H and G × V → G ×_K V, we obtain the desired lower bound, concluding the proof that G ×_K V has strongly nonnegative curvature.
We are finally ready to give a proof of Theorem A, using Lemmas 3.1 and 4.1.
Proof of Theorem A. As discussed above, it suffices to construct a metric with strongly nonnegative curvature on the complement of a ball in CP^n, HP^n, and Ca P^2 which is isometric to a round cylinder near the boundary. Each of these manifolds is diffeomorphic to a disk bundle G ×_K V, where G, K, and V are given in Table 1. In all cases, p is Ad_H-irreducible, hence the assumption that B = b Id in Lemma 3.1 (ii) is satisfied due to Schur's Lemma. Let f(t) be a function as in Lemma 4.1, with f(t) ≡ a constant for t ≥ t_0 and a^2 > 1/b. Let L be the left-G-invariant and right-K-invariant metric on g which induces the round metric on the sphere G/H, and consider the rescaled metric L'(·, ·) = L(·, E·) on g, where E is given by (3.4). Since L' converges to L as a → ∞, and L induces a metric with strongly positive curvature on G/H (which is an open condition), it follows that the constant a can be chosen sufficiently large so that L' also induces a metric with strongly positive curvature on G/H. Therefore, the metric on G ×_K V induced by L' + dt^2 + f(t)^2 dθ^2 has strongly nonnegative curvature, by Lemma 4.1. Finally, according to Lemma 3.1 (ii), the disk bundle G ×_K V with this metric is isometric to a round cylinder near the boundary, concluding the proof.
Remark 4.2 Some connected sums of two CROSS taken with reversed orientations, such as CP^n # CP^n, HP^n # HP^n, and Ca P^2 # Ca P^2, are diffeomorphic to biquotients, providing an alternative way of endowing them with metrics of strongly nonnegative curvature. Nevertheless, there are also connected sums of CROSS, such as CP^8 # Ca P^2, HP^4 # Ca P^2, and Ca P^2 # Ca P^2, that are not even homotopy equivalent to biquotients [21, Thm. 2.1].
Remark 4.3 By the proof of Theorem A, it suffices that G/H has strongly positive curvature and p is irreducible for the disk bundle G ×_K V to have a metric with strongly nonnegative curvature which is a product near the boundary. Thus, one is led to ask what manifolds can be obtained by gluing two such disk bundles along their common boundary. Combining the classifications of homogeneous spaces G/H with strongly positive curvature [4,5] and of homogeneous structures K/H on spheres, it follows that the only possibilities are the above connected sums of two CROSS, besides doubles, homogeneous spaces, and biquotients. Thus, despite the relatively general framework provided above, this method cannot produce any new examples.
Open manifolds and their souls
The celebrated Soul Theorem of Cheeger and Gromoll [8] states that a complete open manifold M with sec ≥ 0 is diffeomorphic to the normal bundle νS of a totally convex (hence totally geodesic) compact submanifold S without boundary, called a soul of M. Note that if M has strongly nonnegative curvature, then its soul S also has strongly nonnegative curvature [4, Prop. 2.6]. Theorem C in the Introduction is a direct consequence of the following result (Theorem 5.1), which is the analogue for strongly nonnegative curvature of a result of Guijarro [14, Thm. A].

Proof. It follows from Perelman [18] that there exists r_* > 0, smaller than the focal radius of the soul S, such that the tubular neighborhood D_r of radius r around S is convex for all 0 ≤ r ≤ r_*, and the Sharafutdinov retraction sh : D_{r_*} → S is C^∞; see Guijarro [14, Lemma 2.2] and Guijarro and Walschap [15, Prop. 2.4]. Using this fact, Guijarro [14] constructed a smooth convex hypersurface N ⊂ D_{r_*} × R, given by the union of the graph of a function D_{r_1} → R that vanishes identically on D_{r_0}, and the cylinder ∂D_{r_1} × [1, +∞), for some 0 < r_0 < r_1 < r_*. In particular, N is diffeomorphic to the normal bundle νS, and hence to M. Since (M × R, g + dt^2) has strongly nonnegative curvature and N ⊂ M × R is convex, the induced metric on N also has strongly nonnegative curvature by the Gauss equation. The desired metric is obtained by pulling back this induced metric by the diffeomorphism N ≅ M.
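For reference, the Gauss equation used in the last step reads, for a hypersurface N in an ambient manifold M̄ with second fundamental form II,

  ⟨R^N(X, Y) Y, X⟩ = ⟨R^{M̄}(X, Y) Y, X⟩ + ⟨II(X, X), II(Y, Y)⟩ − ‖II(X, Y)‖²;

heuristically, convexity of N makes II positive-semidefinite, so the correction terms contribute a positive-semidefinite curvature operator and do not destroy nonnegativity. The strongly nonnegative version of this observation is the argument invoked above.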
A consequence of Theorem 5.1 is the existence of a metric with strongly nonnegative curvature on the double of the normal disk bundle to the soul S of any open manifold with strongly nonnegative curvature, cf. Guijarro [14, Thm. 1.2].
Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus
An experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from the experimental study: details of the implementation of Microsoft Mathematics in Calculus, students' achievement, and the effects of the use of Microsoft Mathematics on students' attitudes in relation to such experience. Two classes of first-year students at Universitas Serang Raya participated in the study. The control group was taught using a conventional teaching method, whereas the experimental group was taught using the Microsoft Mathematics software. Students' achievement was assessed before and after the experiment by means of a test. At the end of the lecture, both groups completed questionnaires indicating their attitudes toward, and self-confidence in, mathematics and computers. In addition, the experimental group was asked to complete a questionnaire about their attitudes toward using Microsoft Mathematics, and interviews were conducted to complete the data. This study found that students taught using Microsoft Mathematics had higher achievement, and that the approach had a positive effect on students' confidence in mathematics.
Microsoft Mathematics is free software made by Microsoft Corporation that provides a symbolic computing system and works with mathematical expressions. As mathematical computing software, it is well suited to assisting students in solving problems in Linear Algebra, Statistics, Calculus, and Trigonometry.
LITERATURE REVIEW
One of the main goals in mathematics education is to ensure the success of all students in understanding mathematics. Mathematics is regarded as one of the most challenging and problematic subjects in education. At the same time, it is among the most important studies in science, as mathematical knowledge is widely used in daily life and applied in many other fields; mathematics is a basic tool for analyzing the concepts of each field in every aspect of human life [8]. For this reason, teachers should focus on developing students' understanding of mathematical concepts and need to provide a quality educational environment. Many students find it complicated to engage with mathematical concepts. Duval states that "there is no understanding of mathematics without visualization" [9]. Visualization is intended as a concrete tool that facilitates students' exploration of mathematical concepts. Technology is useful for supporting students in understanding concepts, reasoning, building and exploring knowledge, solving problems, and generating new information; furthermore, it helps students better visualize mathematical concepts. Previous research has revealed that activities supported by visualization can improve the learning of mathematics [10]. Ashburn and Floden (2006) emphasized that the importance of using technology in mathematics learning lies in building graphical representations and symbolic expressions of mathematics to assist students in reaching the goal of understanding [11]. The technology approach involves actions, perceptions, and learning products based on doing, teaching, and seeing [12]. Using multimedia instruction, students can communicate information, engage with more than one mode of presentation, and see how material can be presented in different ways. Multimedia instruction is a method for students to represent abstract mathematical objects. Research has shown that using representational tools in teaching and learning can support the development of students' mathematical understanding [13]. Being able to connect different mathematical representations, or to generate new representations of the same object, has proven to be a strong indicator of growth in students' knowledge and understanding.
Many researchers have concluded that interactive technologies, especially visualization tools, are effective media for engaging students in learning and creating meaningful learning [14]. Creating interactive visualization is thus an important aspect of the learning process, and connecting different visualizations provides distinct benefits for cognitive development as well. Technology used for educational purposes should be complemented with dynamic and flexible animation so that students can build understanding in a better way. Hogstad and Brekke (2010) state that "students need to see things moving to understand and to process information" [15].
Calculus is a part of mathematics that plays an important role in the curriculum of almost all disciplines, such as engineering, science, business, economics, computer science, and information systems. Calculus concepts are arranged systematically, logically, and hierarchically from the simplest to the most complex; in other words, understanding and mastery of one concept is a prerequisite for grasping further concepts. Mastery of Calculus is therefore naturally essential in learning. However, many students find Calculus troublesome in the learning process: as a part of mathematics, it deals with abstract objects that most students are unable to visualize.
Microsoft Mathematics is free software made by Microsoft Corporation. This software enables users to perform mathematical computations. Writing, calculating, and manipulating mathematical expressions, as well as 2D and 3D graphical visualization and animation, can be carried out with simple instructions. When solving problems, Microsoft Mathematics features step-by-step solutions like those obtained when working manually. For example, for the function f(x) = x^2 − 3x + 4, the first derivative of f(x) and its graph in 2D and 3D can be investigated.
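A minimal worked version of this example, assuming the intended function is f(x) = x^2 − 3x + 4:

  f(x) = x^2 − 3x + 4,  f'(x) = 2x − 3,
  f'(x) = 0 at x = 3/2, where f(3/2) = 7/4,

so the 2D graph is an upward parabola with vertex (3/2, 7/4), which Microsoft Mathematics can plot alongside the derivative.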
METHODOLOGY
This study was conducted to answer the following research questions: a. What is the role of Microsoft Mathematics in the teaching, learning, and understanding of Calculus? b. What are the effects of the use of Microsoft Mathematics on students' attitudes towards mathematics in the classroom? The first aim is to obtain descriptive information on the use of Microsoft Mathematics by students to build their knowledge and construct their understanding of the mathematical content. These data were obtained from the pre-test and post-test of the mathematical test in both the experimental and the control group, and were analyzed quantitatively by comparing the group that had experimental teaching using Microsoft Mathematics with the one that had conventional teaching. Quantitative and qualitative analyses of the questionnaire data on students' attitudes toward using Microsoft Mathematics were used to answer the second research question.
A mixed-methods research design integrating both quantitative and qualitative methods was used in this study. The quantitative approach served to evaluate students' understanding and learning of Calculus concepts using Microsoft Mathematics according to the scores earned. The subjects were 22 students, divided randomly into two groups: a control and an experimental group. The experimental group was taught using Microsoft Mathematics and the control group was taught conventionally. At the end of the lecture, the experimental group completed a questionnaire about their attitudes toward using Microsoft Mathematics. The questionnaire used in the experiment was taken from an article by Fogarty, Cretchley, Harman, Ellerton and Konki (2001) [16], where it is reported as validated for measuring attitudes toward using technology in learning mathematics.
FINDING AND DISCUSSION
The experimental work on the use of Microsoft Mathematics in learning mathematics starts with a review of activities developed to indicate how Microsoft Mathematics has been used in learning Calculus. The results of the mathematical tests are then presented, comparing each student's performance in the pre- and post-test between the experimental and the control group.
Experimental teaching using Microsoft Mathematics
This study applied the design-experiment methodology, which involves stating the aims, planning to accomplish the aims, and collecting and analyzing data (MacDonald, 2008) [17]. The activities for the use of Microsoft Mathematics during the lecture were organized based on three main purposes of using a tool in the teaching and learning process (Wilson, 2008) [18]; these activities are shown below. The accessibility of Microsoft Mathematics supports students not only in discovering the concepts of Calculus, but also in communicating the concepts mathematically. By using Microsoft Mathematics, students were able to present their solutions and provide feedback to the teacher during the lecture.
Pre and post test result
The following tables show the results of the pre- and post-test and the mean results for both the experimental and control groups. In the experimental group, the average difference between pre-test and post-test is 5.14, significantly higher than in the control group, where it is 0.82. This shows that students in the experimental group improved more in the post-test than students in the control group. Two students in the experimental group and four in the control group performed worse in the post-test (Table 4). According to Table 5, there are no significant differences between the two groups in the pre-test results, meaning that the levels of the students in the two groups are generally equal. This information is important for the analysis of the post-test results, in which the control group, which had conventional teaching, is compared with the experimental group, which participated in experimental teaching using Microsoft Mathematics. The knowledge level of students in both groups on the mathematical content can be classified as good, as their average scores are 72-74 (the maximum score was 100).
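The article does not reproduce the statistic behind these significance claims; for two independent groups such as these, the comparison is typically made with a two-sample t-test of the form

  t = (x̄_1 − x̄_2) / ( s_p · sqrt(1/n_1 + 1/n_2) ),  s_p^2 = ( (n_1 − 1) s_1^2 + (n_2 − 1) s_2^2 ) / (n_1 + n_2 − 2),

where x̄_i, s_i^2, and n_i are the group means, variances, and sizes; whether this exact test was used here is an assumption.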
Figure 4 shows the graphical representation of the frequency of scores obtained by the students in each group.
Figure 4. Graphical representation of the pre-test in both groups.
From Figure 4 it can be noted that the frequency of scores for both groups is approximately the same, particularly in the score interval 70-85. The other intervals compensate each other; for illustration, in the interval 50-59 there are no students from the experimental group, but in the interval > 85 there are many students from the experimental group.
The descriptive statistics of the post-test scores of the two groups are presented below (Table 6 and Table 7).
Table 6. Normality of the post-test data.
Table 7 shows that the higher mean is earned by the experimental group, with a difference of 6.09. This is a significant result for the study: the experimental group performed better than the control group in the post-test. The graphical representation of the post-test results, based on the frequency of scores obtained by students in both the experimental and control group, clearly indicates that the performance of students in the experimental group was better than that in the control group.
Students' attitudes toward using Microsoft Mathematics
The Cronbach's alpha reliability values, pre and post, for both groups, as shown in Table 8, are high, indicating that the mathematics attitude survey can be accepted as a reliable instrument for the purposes of the study.
Table 8.
Both the control and experimental groups were given questionnaires 1 and 2. Questionnaire 1 concerns mathematics confidence attitudes and questionnaire 2 concerns computer confidence attitudes. The detailed results follow.
Figure6. Categories of attitudes both groups on the responses from the mathematics confidence questionnaire
According to Figure 6, most students in the control group had "very good" attitudes related to mathematics confidence, while students in the experimental group generally had "excellent" attitudes.
Figure 7. Categories of attitudes of both groups on the responses from the computer confidence questionnaire.
Regarding computer confidence, Figure 7 shows that students in the control group commonly had "very good" attitudes, while students in the experimental group mainly had "excellent" attitudes.
CONCLUSION
The main aim of the research was to investigate the roles of Microsoft Mathematics in teaching and learning Calculus. The answers to the research questions, based on the results of the research, are presented below. a. What is the role of Microsoft Mathematics in the teaching, learning, and understanding of Calculus?
This study, along with previous research on the use of computers for educational purposes, demonstrates that applying computer programs, in this case Microsoft Mathematics, in the classroom is important for improving students' learning. Interactive visualization, a branch of graphic visualization in computer science, is an essential component offered by Microsoft Mathematics. It enables students to better understand mathematical content and involves studying how students interact with the computer to develop representations that are not obtained with conventional teaching, as shown by the achievement of the experimental group in the post-test compared to that of the control group. The qualitative analysis of this study is based on the model below [19].
Figure 8. Qualitative analysis model.
b. What are the effects of the use of Microsoft Mathematics on students' attitudes towards mathematics in the classroom? Generally, this study indicates positive attitudes toward the use of Microsoft Mathematics. It assists students in gaining a better understanding, enriches students' mathematics learning, and increases students' motivation to get more involved in learning activities, as illustrated by students' responses to the questionnaires.
"Mathematics",
"Education"
] |
Forecasting the number of dengue fever based on weather conditions using ensemble forecasting method
INTRODUCTION
Dengue fever is a dangerous infectious disease whose case numbers have steadily increased over the years. It is caused by a virus carried in the saliva of the Aedes mosquito and injected into the human body, with outcomes varying from mild to severe conditions [1], [2]. As stated by the Center for Epidemiological Data and Surveillance, Ministry of Health, Indonesia, dengue fever is still a crucial problem in Indonesia, because the number of infections and the area of distribution keep increasing along with rising mobility and population density.
Based on data from the Ministry of Health of the Republic of Indonesia, in 2019 the case fatality rate (CFR) of dengue fever was 0.67% on a national scale. The CFR is the proportion of deaths among all reported cases; a province is said to have a high CFR if it exceeds 1%. One province with a high CFR is East Java, at 1.01%. According to the Malang Regency Health Office, Malang Regency had the highest number of dengue fever cases and deaths in East Java in 2019; therefore, efforts are needed to control the death rate from dengue fever in Malang.
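In formula form, consistent with the description above,

  CFR = (number of deaths / number of reported cases) × 100%,

so the national figure of 0.67% and the East Java figure of 1.01% are proportions of deaths among reported dengue cases.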
To control the mortality rate in Malang Regency, one effort is to predict the future number of dengue fever cases by building a forecasting model, so that the parties in charge can take steps and arrange policies to minimize the increase in cases and mortality rates. Several forecasts related to dengue fever have been carried out using weekly or monthly case numbers [3]-[5]. Previous research found a fairly high correlation between the number of dengue fever cases and rainfall, temperature [6], and humidity [7].
Penalized regression is a regression model using a penalty term that aims to reduce overfitting in multiple linear regression [8]. In this study, Ridge, Lasso, Elastic Net, smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP) were explored. To overcome the limitations of a single forecasting model, ensemble methods can increase the performance of base models, achieve higher accuracy, and handle complex objects and uncertainties [9]-[12].
METHODOLOGY
Based on Figure 1, there are four main steps in building the ensemble model with penalized regression. First, raw data are gathered from various sources, consisting of climate data (temperature, humidity, wind speed, rainfall) and the number of dengue cases. Data cleaning is carried out to produce processed data ready for model development. The data are then split into two parts, training data and testing data: the training data are used to train the model, the testing data are used to measure model performance, and these data are also used to determine the penalty parameters.
In building the ensemble forecasting model, five penalized regressions, consisting of Ridge, Lasso, Elastic Net, SCAD, and MCP, are each trained and validated. Ridge regression is widely used for high-dimensional data where independent variables are highly correlated and aims to reduce multicollinearity [13]; Lasso uses regularization and variable selection to increase interpretability and accuracy [14]; Elastic Net is a combination of the Ridge and Lasso regressions, retaining the advantages of both methods [15]; SCAD aims to improve on Lasso's penalty by reducing bias in the model, because the Lasso penalty tends to be linear in the size of the regression coefficient [16]; and MCP is another alternative that gives less biased estimates in sparse models [17]. The penalized regression parameters are then determined.
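For reference, the standard forms of these penalties (which the article does not reproduce) are obtained by minimizing the penalized least-squares objective

  min_β Σ_i (y_i − x_i^T β)^2 + Σ_j P_λ(|β_j|),

with
  Ridge: P_λ(t) = λ t^2;
  Lasso: P_λ(t) = λ t;
  Elastic Net: P_λ(t) = λ [ α t + (1 − α) t^2 / 2 ];
  SCAD (defined via its derivative): P'_λ(t) = λ [ I(t ≤ λ) + (aλ − t)_+ / ((a − 1)λ) I(t > λ) ], with a > 2;
  MCP: P_λ(t) = λ t − t^2 / (2γ) for t ≤ γλ, and γ λ^2 / 2 otherwise, with γ > 1.

Conventions (e.g., the 1/2 factors and the elastic-net mixing parameter α) vary between implementations, so these forms are indicative rather than the exact objectives used in the study.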
After evaluating each model, an aggregated prediction is formed by averaging the prediction results of the models. In general, the steps are implemented sequentially along the time dimension. After the ensemble forecasting equation has been formed, the dependent variable (the weekly number of dengue fever cases) is forecast on the test data using the ensemble model. After the models are formed and predictions have been made, the analysis proceeds by predicting the magnitude of dengue fever incidence and analyzing strategies. Finally, the model performance test is carried out.
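In symbols, with M base models (here M = 5), the ensemble forecast at time t is the equally weighted average

  ŷ_t^(ens) = (1/M) Σ_{m=1}^{M} ŷ_t^(m),

where ŷ_t^(m) is the prediction of the m-th penalized regression; equal weights are assumed here, as suggested by the description above.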
This study tests two forms of data to obtain the most optimal model: normal data and data transformed with the natural logarithm (ln). The natural logarithm transformation is carried out to stabilize the variance in standard regression procedures; in addition to addressing unstable (non-constant) variance, the transformation can also correct non-linearity and residuals that are not normally distributed [18].
RESULTS AND DISCUSSION
The experiments were performed on an Intel® Core™ i5-7200U central processing unit (CPU) @ 2.50 GHz-2.70 GHz with 8 GB of random-access memory (RAM), running Windows 10 Home Single Language x64. The software tools used were RStudio and the R programming language. The following subsections present the results of each research step: splitting the data, determining parameters, building the penalized regression models, ensembling the models, and comparing the models' performance with other related methods.
Splitting data training and data testing
To train the model, the data are divided into two parts: training data and testing data. Training data are the part of the dataset used to fit the model, i.e., to let the algorithm find the correlations in the data on its own, whereas testing data are the part of the dataset used to assess the accuracy, in other words the performance, of the model. The overall data from 2014 to 2018 were split sequentially, with a proportion of 70% training data to 30% testing data.
Determined penalized regression parameter
The parameter used in penalized regression is the lambda value, which controls the amount of regularization applied to the regression model. The larger the lambda value, the more coefficients are shrunk toward zero; when lambda equals 0, no regularization applies and the model reduces to linear regression. For each model and data proportion, the lambda with the smallest cross-validation error is selected [19]. Table 1 shows the selected lambda values along with the lowest mean squared error (MSE) score for each lambda in the Ridge, Lasso, and Elastic Net models. The selected lambdas for Lasso and Elastic Net are considerably larger than that of the Ridge model. This is because, with a larger lambda, some variables may have coefficients equal to zero; that is, several independent variables are not chosen as predictors in the Lasso and Elastic Net regression models, given that Lasso has a built-in variable selection feature [20] and Elastic Net is a combination of the Ridge and Lasso models [21].
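In other words, for each model the selected value is

  λ* = argmin_λ CV(λ),

where CV(λ) is the cross-validated error (MSE or CVE) of the model fitted with penalty λ; the exact cross-validation scheme (the number of folds, and whether it respects the time ordering of the series) is not stated in the article.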
The calculation of the best lambda values for the SCAD and MCP models in Table 2 differs slightly from that for the Ridge, Lasso, and Elastic Net models: the best lambda is selected based on the lowest cross-validation error (CVE). The best lambda for SCAD is 0.908 with a CVE of 73.17, and for MCP it is 0.520 with a CVE of 69.34.
Building penalized regression model
Testing is carried out on each penalized regression model with a 70:30 proportion of training to testing data. The performance of each model is measured by the root mean squared error (RMSE) and symmetric mean absolute percentage error (SMAPE). The models were tested on both forms of data, normal and logarithmically transformed. Since the smallest error on the normal data (RMSE: 6.38) is lower than on the logarithmic transformation (RMSE: 8.95), normal data are chosen for building the penalized regression models. The performance of each penalized regression on normal data can be seen in Table 3. Based on these results, the SCAD model has the best performance among the penalized regression models, followed by the Elastic Net, Ridge, MCP, and finally Lasso models. Based on the order of smallest RMSE, the models are combined in scenarios according to the best RMSE values. Regarding the prediction pattern of each model, Ridge (Figure 2) captures the pattern quite well: its predictions tend to follow the increases and decreases in the actual data.
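Both error measures have standard definitions:

  RMSE = sqrt( (1/n) Σ_t (y_t − ŷ_t)^2 ),
  SMAPE = (100%/n) Σ_t |ŷ_t − y_t| / ( (|y_t| + |ŷ_t|) / 2 ),

where y_t and ŷ_t are the actual and predicted case counts; the SMAPE form given here is the most common one, and other variants of its denominator exist in the literature.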
The SCAD model in Figure 3 also captures the data pattern well. Compared to the Ridge and MCP models, SCAD follows the pattern better in the early period, from October 28, 2017 (10/28/2017) to December 28, 2017 (12/28/2017): its predictions tend to decrease there, so the range of error values is smaller in this section. Even so, it could not follow the increase in the data between October 31, 2018 (10/31/2018) and November 30, 2018 (11/30/2018) as well as the Ridge and MCP models did. The MCP model in Figure 4 also follows the data pattern quite well, as seen from the ability of its predictions to track the actual values. Forecasting with Lasso includes a variable selection feature, whereby the model selects the independent variables relevant to the dependent variable. Even so, the Lasso results capture the actual data pattern less well and tend to be less sensitive: as seen in Figure 5, increases in cases are not captured properly by the Lasso model, which may also be why Lasso's RMSE is the largest among the models. The Elastic Net model in Figure 6 also has a variable selection feature like Lasso's; however, it can still capture the patterns and spikes in the testing data well, because it also incorporates the Ridge model, thereby handling multicollinearity [22] while selecting variables according to the existing data patterns.
Building ensemble model
Based on the best RMSE values of the models tested above, combinations are formed according to the predefined scenarios. The normal-data ensemble scenarios in Table 4 show that the BEST II scenario, combining the SCAD + Elastic Net models, has the lowest RMSE, meaning the lowest error compared to the others; ordered from low to high RMSE, it is followed by the BEST III, ALL, and lastly BEST IV scenarios. The best model from the experimental results is used to predict the next 8 weeks, from January 2019 to February 2019. To predict the number of dengue fever cases in the next 8 weeks, each independent variable, such as temperature, humidity, rainfall, and wind speed, must first be forecast. In forecasting the independent variables, different methods are used depending on the data pattern. Based on observations, air temperature, air humidity, and wind speed have cyclical data patterns, repeating over a long period [23]; therefore, these three variables can be predicted using the multiplicative decomposition method [24]. The temperature forecast for the next 8 weeks is shown in Figure 7. The forecast uses the best model to predict the number of cases one week ahead at a time, with each forecast used as a lag feature for the next step; this is repeated 8 times until the full forecast is formed. The combination of SCAD + Elastic Net on normal data, the best-performing model, is used to predict the number of dengue fever cases in Malang Regency. The forecasting period is the first 8 weeks of the year, from January 2019 to February 2019. The forecast indicates a decrease in the number of dengue fever cases over the next 8 weeks, as visualized in Figure 8. Compared with the actual data on dengue fever cases in 2019 listed in Table 5, however, the number of cases in 2019 tends to increase. This difference may be caused by other factors or variables driving the increase in dengue cases: dengue transmission in East Java tends to be influenced by population density, population mobility, urbanization, residential areas, and public places [25]. In this research, only climate variables were used for prediction, so it is possible that factors other than climate have more influence on the increase or decrease in the number of dengue cases.
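Two standard formulas are implicit in this procedure. Multiplicative decomposition models a series as the product

  y_t = T_t × S_t × ε_t

of trend, seasonal, and irregular components, which are extrapolated to produce the climate forecasts. The recursive use of the lag feature means that, with fitted model f and climate forecasts x̂_{t+h},

  ŷ_{t+1} = f(x̂_{t+1}, y_t) and ŷ_{t+h} = f(x̂_{t+h}, ŷ_{t+h−1}) for h = 2, …, 8,

so each weekly forecast feeds the next step; this reading of the procedure is an interpretation of the description above, not a formula given in the article.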
Comparison with other methods
To find out whether the SCAD + Elastic Net forecasting method is good enough, its results need to be compared with other methods. A comparison was made between the SCAD + Elastic Net model and multiple linear regression [26]. The models were compared based on the RMSE value; the performances are shown in Table 6. In addition, in determining the regression coefficients of the independent variables, the Elastic Net and SCAD models, which are forms of penalized regression, can shrink regression coefficient values to 0, in other words eliminating independent variables that are less significant to the model [27]. In multiple linear regression, all independent variables, such as wind velocity, rainfall, humidity, air temperature, and lag-1, together with the intercept (the mean value of the response variable when all predictors equal zero), are considered in the model development. In the Elastic Net model, by contrast, only one variable was selected, namely lag-1, which consists of the number of dengue cases shifted back one time step from the original data; in the SCAD model, only humidity was eliminated. These results can be seen in Table 7.
CONCLUSION
Based on the results of the research, the following conclusions can be drawn. The best method for forecasting dengue fever cases is the ensemble model using the combination of SCAD + Elastic Net penalized regression, with an RMSE of 6.38. The logarithmic transformation of the case counts does not provide better performance than the normal data: the smallest RMSE for the ln-transformed data is 8.95, versus 6.38 for the normal data. Based on the variable selection of one of the ensemble-forming models (Elastic Net), only the lag-1 variable has a regression coefficient that is not equal to 0, meaning that only lag-1 is used in constructing the Elastic Net model; in SCAD, only one variable has a coefficient equal to 0. To improve forecast performance, the selection of variables needs to be reconsidered. In addition to climate factors such as temperature, humidity, rainfall, and wind speed, other variables can be explored in future research, such as population density, population mobility, economic growth, environmental sanitation, urbanization, and community behavior.
Figure 2. Comparison between Ridge vs actual.
Figure 6. Comparison between Elastic Net and actual.
Figure 7. Forecast of climate data.
Figure 8. Forecast of dengue case numbers 8 weeks ahead.
Table 2. SCAD and MCP's lambda values.
Table 3. Penalized regressions' performances.
Table 4. Performances of ensemble model scenarios.
Table 5. Predicted vs actual dengue cases of 2019.
Table 6. Ensemble model vs multiple linear regression.
Table 7. Regression coefficients of multiple linear regression, SCAD, and Elastic Net.
"Computer Science",
"Medicine"
] |
isa4j: a scalable Java library for creating ISA-Tab metadata
Experimental data is only useful to other researchers if it is findable, accessible, interoperable, and reusable (FAIR). The ISA-Tab framework enables scientists to publish metadata about their experiments in a plain text, machine-readable format that aims to confer that interoperability and reusability. A Python software package (isatools) is currently being developed to programmatically produce these metadata files. For Java-based environments, there is no equivalent solution yet. While the isatools package provides a lot of flexibility and a wealth of different features for the Python ecosystem, a package for JVM-based applications might offer the speed and scalability needed for writing very large ISA-Tab files, making the ISA framework available in an even wider range of situations and environments. Here we present a light-weight and scalable Java library (isa4j) for generating metadata files in the ISA-Tab format, which elegantly integrates into existing JVM applications and especially shines at generating very large files. It is modeled after the ISA core specifications and designed in keeping with isatools conventions, making it consistent and intuitive to use for the community. isa4j is implemented in Java (JDK11+) and freely available under the terms of the MIT license from the Central Maven Repository ( https://mvnrepository.com/artifact/de.ipk-gatersleben/isa4j). The source code, detailed documentation, usage examples and performance evaluations can be found at https://github.com/IPK-BIT/isa4j.
Introduction
In recent years, the question of how to publish research data has increasingly come into the limelight of discussions among scholars, funders, and publishers 1. Wilkinson et al. 2 establish a set of principles to ensure that data are shared in a way that is useful to the community and worthwhile for data producers: data should be findable, accessible, interoperable, and reusable (FAIR), not only by humans but also by computers. In some scientific fields, there are well-curated, consistent, and strongly integrated databases that provide easy access for both humans and machines, such as GenBank and UniProt for nucleotide and protein sequences 3,4. Other areas, like plant phenotyping, do not yet have central databases or established file formats, and things become especially difficult when data from different domains need to be published in conjunction. The Investigation-Study-Assay (ISA) framework and the corresponding ISA-Tab file format 5 provide a clearly defined, machine-readable, and extensible structure for explanatory metadata that bundles common elements while keeping data in separate files using appropriate formats. Several communities have already created specific standards (such as MIAPPE 6 or MIAME 7) and infrastructure 8 based on the ISA framework. Furthermore, tools have been developed for validating, converting, and manually crafting ISA-Tab metadata 7,9,10. However, given the ever-increasing volume of research data generated in high-throughput experiments, the manual creation of metadata is simply not feasible in many situations. A Python package called isatools for programmatically generating ISA-Tab metadata is currently under development (https://isatools.readthedocs.io), featuring methods to parse, validate, build, and convert ISA files. It also offers a feature to create sample collection and assay run templates according to a specified experimental design, which can be useful when planning an experiment. For building ISA-Tab files, isatools provides great flexibility and ease of use: users can create and connect ISA objects in arbitrary order and degree of detail, and isatools automatically determines the appropriate formatting when the ISA-Tab text is rendered.
Naturally, this flexibility requires isatools to keep the whole object structure in memory and resolve the optimal path through the object chain when the content is serialized. This can notably impact performance when describing large and complex studies including a high number of replicates and attributes, as for instance required by the MIAPPE standard for plant phenotyping experiments. This could make it challenging to use isatools in interactive and time-sensitive applications. Additionally, in the majority of cases, the desired file structure is already clear beforehand, based on such community standards or one's own decision of what needs to be documented, so this flexibility is often not needed. We therefore set out to develop a solution that focuses on high performance and scalability, and which would integrate well into JVM-based data publishing ecosystems. The library, called isa4j, addresses these goals by providing interfaces for exporting ISA-formatted metadata not only to files, but also to any data stream provided by the application (e.g., an HTTP response stream in a web application), and by using an iterative approach for creating ISA-Tab files: instead of loading all records into memory and writing them in one go, an output stream is opened, a single record is created, flushed out into the stream, and then immediately dropped again from memory. This guarantees that memory usage remains constant, so that isa4j imposes no limit on the size of the generated metadata and is able to process datasets too big to fit into memory. The output stream can also be picked up by the application and piped into further processing steps, such as calculating checksums or compressing the ISA-Tab content. In exchange, the user needs to structure rows consistently, as headers cannot be modified once they are written. The schema in Figure 1 shows the exemplary integration of isa4j into different application scenarios for supporting the FAIR data sharing paradigm. In this article, we explain how isa4j can be used to generate ISA-Tab metadata and compare it to isatools in performance and scalability regarding both quantity and complexity of ISA-Tab entries.
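The following minimal, self-contained Java sketch illustrates the row-streaming pattern described above in generic form; it is not the isa4j API, and the file name and column layout are arbitrary placeholders:

import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingTabSketch {
    public static void main(String[] args) throws IOException {
        try (BufferedWriter out = Files.newBufferedWriter(Path.of("s_study.txt"))) {
            // The header is written once; subsequent rows must follow its structure.
            out.write("Source Name\tSample Name\n");
            for (int i = 1; i <= 1_000_000; i++) {
                // Build one row, write it, and let it go out of scope immediately:
                // memory consumption stays constant regardless of the row count.
                out.write("source" + i + "\tsample" + i + "\n");
            }
        }
    }
}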
Methods
Implementation
isa4j is implemented in Java (JDK11+) and can therefore also be used with other JVM-based languages like Groovy or Kotlin. It uses the Gradle Build Tool (https://gradle.org) to resolve dependencies and create artifacts. Logging is realized via the framework-agnostic SLF4J library (http://www.slf4j.org/) so that isa4j works with a variety of logging libraries. The object-oriented Java class structure is modelled according to the published ISA specifications (https://isa-specs.readthedocs.io) to make isa4j intuitive to use and keep consistency with other ISA applications. The Ontology and OntologyAnnotation classes allow linking characteristics, units, and other metadata to established vocabularies such as those collected by the OBO Foundry 11.

Figure 1. Exemplary integration of isa4j into different application scenarios for supporting the FAIR data sharing paradigm. Heterogeneous data sources like SQL and NoSQL databases, laboratory information management systems (LIMS) and application programming interfaces (API) that store data and metadata of scientific experiments can be fed into isa4j to integrate and transform this data into output complying with the ISA specifications: the isa4j library can, for example, be embedded in command line interface (CLI) applications to create ISA-Tab files in a batch processing manner. It may also be embedded in web services to create ISA-Tab files on the fly via an API based on specific user requirements. ISA-Tab files created with CLI applications could be uploaded to public research data repositories for long-term storage, and web applications as graphical user interfaces would allow low-barrier interactive access to experimental data. Both examples demonstrate how isa4j can be used for FAIR data sharing.
Operation
isa4j is not an application itself but a software library providing methods for generating ISA-Tab metadata in JVM-based applications or scripts. As a result, operation requires at least a basic level of coding skills in Java or another JVM-based language. When using a build tool like Maven or Gradle, isa4j can simply be added as a dependency to be downloaded from the Central Maven Repository (https://mvnrepository.com/artifact/de.ipk-gatersleben/isa4j). Otherwise, the JAR file can be downloaded from there and manually included in the class path. To use isa4j's logging feature, one of the SLF4J bindings needs to be included the same way (http://www.slf4j.org/manual.html).
You can then import isa4j classes and start building Investigation, Study, and Assay files. For examples and details on the code interface itself, please consult the current project page (https:// github.com/IPK-BIT/isa4j) as things may change in future versions and we do not want to confuse you with potentially outdated information.
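As a purely illustrative sketch, code using isa4j might look roughly as follows; the class names (Investigation, Study, Source, Sample) appear in this article and the ISA specification, but every constructor and method name below is an assumption rather than the documented isa4j interface, and imports are omitted for the same reason:

// Hypothetical sketch -- method names are assumptions, not the documented isa4j API.
Investigation investigation = new Investigation("i1");
Study study = new Study("s1");                 // assumed constructor
investigation.addStudy(study);                 // assumed method

study.openFile();                              // assumed: begin streaming rows
for (int i = 0; i < 1000; i++) {
    Source source = new Source("source" + i);  // assumed constructor
    Sample sample = new Sample("sample" + i);  // assumed constructor
    // ... connect source and sample through a Process object, then:
    study.writeLine(source);                   // assumed: format the row, flush it, discard it
}
study.closeFile();                             // assumed: finish the Study file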
Scalability evaluation
Scalability of isa4j was assessed and compared to the Python isatools API in two dimensions: number of entries and complexity of entries.
At the simplest complexity level (Minimal), Study file rows consisted only of a Source connected to a Sample through a Process, and that Sample connected to a DataFile through another Process in the Assay file, with no Characteristics, Comments, or other additional information (6 columns in total). At the second degree of complexity (Reduced), a Characteristic was added to the Sample in the Study file, and the Assay file was expanded to include an intermediary Material object (11 columns). The third and final level of complexity (Real World) was modelled after the MIAPPE v1.1 compliant real-world metadata published for a plant phenotyping experiment (https://doi.org/10.5447/IPK/2020/3, 119 columns). Exemplary ISA-Tab output for each of the three complexity levels can be found at https://ipk-bit.github.io/isa4j/scalability-evaluation.html#complexity-levels.
For each complexity level, CPU execution time was measured for writing n rows in the Study and Assay file each, starting at 1 and increasing in multiplicative steps up to a million rows. Every combination of complexity level and number of rows was measured for 5 consecutive runs in isatools and 15 runs in isa4j (where results varied more), after a warm-up of writing 100 Real World complexity rows. Additionally, memory usage was measured at the Real World complexity level in 5 separate runs after the CPU execution time measurements.
All evaluations were carried out on a Linux server with two Intel Xeon E5-2697 v2 CPUs running at 2.70 GHz, 256 GB DDR3 RAM running at 1600 MHz, and CentOS 7.8.2003. isatools was evaluated under Python 3.7.3 [Clang 11.0.0 (clang-1100.0.33.16)] using isatools version 0.11 and memory-profiler version 0.57 for measuring RAM usage. isa4j was evaluated under AdoptOpenJDK 11.0.5. For both libraries, a memory consumption baseline was calculated after the warm-up runs and an additional Garbage Collector invocation. This baseline consumption was subtracted from all subsequent memory consumption values, as we wanted to measure purely the memory consumed by the ISA-Tab content, not libraries and other periphery. The actual code generating the files and measuring time and memory usage for Python isatools and isa4j can be found on the isa4j GitHub repository.

Results

Figure 2 shows the performance of both libraries at increasing file size for three different levels of complexity. isa4j consistently takes up less CPU execution time than isatools for all tested scenarios, reducing the time required for writing 1 million rows of Real World complexity from 8.6 hours to 43 seconds.
The emphasis on being useful especially in large-scale datasets is further amplified by isa4j's memory usage stability: While there is no notable increase for either library up to a volume of 25 rows, starting at about 250 rows, isatools memory consumption increases linearly with the number of rows being formatted, resulting in a maximum consumption of 15.8 GB for one million rows. isa4j memory consumption remains stable at about 0.5 MB independently of the number of rows written, demonstrating that the iterative technique of formatting and writing the rows had the desired effect.
Use Case: BRIDGE Web Portal
We have integrated isa4j into the BRIDGE portal, which is a visual analytics and data warehouse web application hosting data of 22621 genotyped and 9527 phenotyped germplasm samples of barley (Hordeum vulgare L.) 12 . The underlying data was derived from the study of Milner et al. 13 . isa4j was integrated to allow the MIAPPE-compliant 16 export of customized subsets of phenotypic data of germplasm samples together with the corresponding passport data 14 in the ISA-Tab format. These subsets can be derived from germplasm selections identified by the user during exploratory data analysis. In the ISA-Tab export dialog, the user can choose whether the associated plant images should be physically contained as files in the resulting ZIP file or whether they should only be linked as URLs to a version of the images available online. Due to the support of streaming in isa4j, the phenotypic data export module of BRIDGE is able to export large ZIP archives of several gigabytes with low main memory consumption of the web server. Another advantage over non-streaming approaches is that the download can start without delay and that no temporary files have to be created on the server. The process flow concept is shown in Figure 3.
Discussion
We have created a library for programmatically generating ISA-Tab metadata files in JVM-based environments and shown that it is considerably more performant and scalable than the existing Python based solution. It has been integrated into a largescale data warehouse web software to validate practical feasibility and provide an example of how the library could help make ISA-Tab metadata available in time-sensitive applications.
CPU execution time appears to have a roughly linear relationship with the number of rows being written at n > 250 but this is only valid as long as isatools memory consumption does not surpass what the system can provide. Exceeding that, additional time for swapping from and to the hard disk will be required. There may also be further non-linear effects due to optimization steps, such as the compilation to native machine code some JVMs perform for frequently used code parts. Lastly, exact CPU time requirements will naturally depend on the specific system in use but the overall relationships and proportions shown here should hold true for all situations.
Conclusions
The presented isa4j library provides a simple interface to create and export ISA-Tab metadata and can be seamlessly integrated into existing JVM-based pipelines, desktop tools or web applications. isa4j is less flexible than the Python-based isatools as it does not allow one to change the file structure after streaming has started, but the desired ISA-Tab configuration is often known beforehand, making this a peripheral limitation. In exchange, isa4j provides significantly better performance, especially for large datasets. We hope that this library will make the ISA framework available to an even wider audience and range of situations and help make published research data more interoperable and reusable for others. As a next step, we are going to begin developing a specialized isa4j extension for plant phenotyping experiments, isa4j-miappe, intended to make it even easier for researchers in the field to ensure their metadata comply with the community standard. If you would like to contribute or develop an isa4j extension for your own community, please feel free to get in touch with us.
Nils Hoffmann
Center for Biotechnology (CeBiTec), Bielefeld University, Bielefeld, 33594, Germany The authors describe a JAVA-based implementation of the Investigation-Study-Assay (ISA) framework for the structured description of biological experiments, their protocols and their results. They position their implementation as a complement to the existing Python-based implementation that uses a complete in-memory model of the ISA data structures before writing them to the actual output files. For large studies, e.g. for whole populations or large cohorts, this can mean that memory and CPU requirements are very demanding.
Thus, the authors implemented their Java library to write out lines as they arrive, requiring that the user fix their data format description before starting to write out to the final files. Therefore, their implementation's memory requirement remains constant in the number of rows to be written, as each row can be created ad hoc and then written out to the target file. In order to underline this advantage over the Python-based library, the authors created different benchmarks, illustrating the memory and CPU time usage of each library for a collection of different study designs with increasing levels of complexity, highlighting the significant speed and memory advantage of their implementation.
Finally, the authors demonstrate the practical feasibility of their library through integration into the BRIDGE web portal where they employ isa4j to generate ISA-tab files on the fly for the studies stored in BRIDGE.
The support for ISA-tab in programming languages other than Python, especially with a focus on performance, is a timely and needed addition. For JAVA, the graphical client ISACreator was previously developed, but has not seen any significant updates throughout the last few years. Specifically, where neither the flexibility of the Python-based ISA tools nor a graphical user interface for a predefined ISA format are required, the isa4j library can be a valuable, performant, yet still validating tool to generate ISA-tab files in many different life-sciences domain, such as metabolomics, proteomics, genomics, etc. Thus, it addresses a current need and does this in a well designed and performant way.
Minor comments:
The manuscript states, that the library is available from mvnrepository.com, while the GitHub page states that it is available from Maven Central, please update the manuscript accordingly.
Massimiliano Izzo
Oxford e-Research Centre, Department of Engineering Science, University of Oxford, Oxford, UK The authors present isa4j, an optimised Java-based library to generate and serialise ISA-TAB metadata. I find particularly interesting that isa4j supports writing the ISA-metadata output on streams as well as files, as this can be very useful when building modern client-server applications. isa4j has an interesting approach of loading into memory only one row at the time, hence limiting memory consumption. The memory consumption comparison with isatools makes a good argument for using isa4j for certain large scale experiment.
I am curious to know whether isa4j-generated ISA-TABs comply with the ISA-TAB validation rules, also with respect to the configuration files for specific assays (the latest version of which can be found online).
There are a few discrepancies with respect to the official ISA-TAB specifications: for instance, Processes cannot have names in isa4j, and as a consequence "Assay Name" or synonymous columns are missing. Characteristic categories are treated as strings in isa4j, while they are OntologyAnnotations in the ISA-API. I might have missed other, minor, discrepancies. I would suggest adding more equivalence tests with existing datasets to align this library more with isatools.
The developers don't seem to have put the tests into continuous integration; I think it would be worth doing so.
In any case, I think isa4j is a useful tool with strong performance, that will be very helpful to produce ISA-TAB metadata from a variety of large-scale experiments with noteworthy performances and low resources consumption.
Is the description of the software tool technically sound? Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Applied Computer Science, Software Engineering

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Design and Analysis of Enhanced Modulation Response in Integrated Coupled Cavities DBR Lasers Using Photon-Photon Resonance
In the last few decades, various solutions have been proposed to increase the modulation bandwidth and, consequently, the transmission bit-rate of semiconductor lasers. In this manuscript, we discuss a design procedure for a recently proposed laser cavity realized with the monolithic integration of two distributed Bragg reflector (DBR) lasers, allowing one to extend the modulation bandwidth. Such an extension is obtained by introducing in the dynamic response a photon-photon resonance (PPR) at a frequency higher than the modulation bandwidth of the corresponding single-section laser. Design guidelines are proposed, and dynamic small and large signal simulation results, calculated using a finite difference traveling wave (FDTW) numerical simulator, are discussed to confirm the design results. The effectiveness of the design procedure is verified in a structure with a PPR frequency at 35 GHz, allowing one to obtain an open eye diagram for a non-return-to-zero (NRZ) digital signal up to 80 Gbit/s. Furthermore, the investigation of the rich dynamics of this structure shows that, with proper bias conditions, it is also possible to obtain a tunable self-pulsating signal in a frequency range related to the PPR design.
Introduction
Semiconductor laser diodes with a wide direct modulation bandwidth represent an important element to fulfill the continuously increasing demand for low-cost optical communications systems with a high bit-rate (see, e.g., [1]). Whilst the maximum bit-rate achieved by directly modulated lasers is typically limited by the well-known resonance between carriers and photons (carrier-photon resonance (CPR)) [2], many solutions have been proposed to overcome this restriction; see, for example, [3] for a recent review.
A first mechanism identified to extend the modulation bandwidth is the detuned loading (DL) due to the dispersion effect introduced by a coupled passive cavity [4,5] or by a distributed mirror (DBR [6][7][8] or Distributed Feedback (DFB) [9]) when the lasing mode is properly positioned at a slightly higher wavelength with respect to the minimum threshold gain condition.
A second approach used to extend the lasers' dynamic properties is to take advantage, in a properly designed cavity, of the interaction between the lasing mode and an adjacent longitudinal cavity mode. This interaction is made possible by the carrier pulsation introduced by the current modulation applied at the gain section electrode [6,7,[10][11][12]. This interaction introduces a resonance in the intensity modulation response at the frequency corresponding to the two cavity modes' separation; such a resonance is frequently called photon-photon resonance (PPR).
Since the PPR usually occurs at a frequency that is much higher than the CPR frequency, the request for an almost flat modulation response implies the need for a proper cavity design to have the PPR at the correct frequency, allowing one to fill the gap between the PPR and CPR peaks of the modulation response. In this condition of modulation bandwidth extension, it is possible to obtain an open eye diagram of an NRZ signal at a greater bit-rate than in the corresponding single-section DBR laser.
An approach that is frequently used to achieve this condition is the introduction of an external feedback to the laser cavity. Based on this concept, various cavity designs have been studied and realized, e.g., the complex cavity injection grating (CCIG) [13][14][15][16], the DFB with integrated feedback (IFB-DFB) [17] or single-mode cavities with feedback effects [18,19]. Furthermore, the modulation bandwidth extension can be obtained with the injection-locking of a laser to the optical signal of an external source (see, e.g., [20][21][22]). The modulation bandwidth extension has also been obtained by exploiting the coupling between two modes in a waveguide supporting two transverse modes [23] or in coupled vertical-cavity surface-emitting lasers (VCSELs) [24,25]. In all of the previously cited cases, the bandwidth extension by PPR can be used either (1) to improve the dynamic characteristics of a laser that shows a limited modulation bandwidth because of its non-optimal material [26] or cavity [1] properties or (2) to extend the dynamic properties of a device that already exhibits a good modulation response.
The structure investigated in this manuscript consists of two coupled cavity DBR lasers integrated in a single chip, as shown in Figure 1a. Depending on the injection level in the two cavities, this structure can be seen either as an example of a DBR laser with an external feedback from an active cavity or as an integrated injection-locked DBR laser, as has already been presented in [27][28][29][30]. In this paper, just for simplicity, we will call the unmodulated laser cavity the master and the other one the slave.
The additional contributions of this paper with respect to the previous ones [27][28][29][30] are: the definition of a design procedure allowing one to determine the cavity parameters necessary to achieve a prescribed PPR frequency (f_PPR); the validation of the design results by showing the possibility of large signal operation conditions with a clearly open eye diagram at a higher bit-rate with respect to that of the single-cavity configuration; and a more complete mapping of the dynamic characteristics, with the demonstration of the existence of good self-pulsation operation under proper bias conditions. The paper is organized as follows: First, in Section 2, we present the cavity design procedure and the obtained PPR frequency maps as functions of the cavity parameters. In Section 3, the results obtained from the simulations of the small and large signal modulation and of self-pulsation operation are discussed. Finally, in Section 4, we draw the conclusions.
Design of the Coupled DBR Laser Cavity
For the composite DBR cavity under consideration (Figure 1a), as well as for the other laser cavities referenced in Section 1, a proper choice of the cavity parameters is essential for exploiting the PPR mechanism between the lasing mode and its nearest neighbor with the requested frequency separation.
The need for a design derives from two competing factors: if the cavity length is reduced to avoid the parasitic effects limiting the laser dynamics, the mode separation usually becomes too high to obtain a flat dynamic modulation response using the PPR effect. Therefore, careful design of the cavity is needed in order to control the separation between the two modes that must interact to obtain the modulation extension effect.
The precise mode separation can only be obtained from the above-threshold analysis [16,17], but an analysis at the threshold [12,13] allows a good estimation. Thus, the results found at the threshold may be used to design the structure to be analyzed in the above-threshold regime.
Master and Slave Lasers' Definition
The first step in the cavity design is the definition of the roles of the two lasers. Since we assumed that the slave (right DBR) laser will be modulated, the output power will therefore be extracted from the right mirror. The second assumption is that the master (left DBR) laser has only a small output power on the left side of the cavity in order to maximize its power coupling to the slave laser. With these preliminary assumptions, the higher reflectivity at λ_B will be the peak reflectivity of the left DBR mirror (R_L), while the lower one will be the peak reflectivity of the right grating (R_R). The peak value of the central mirror reflectivity (R_C) will lie between R_L and R_R; it determines the strength of the coupling between the two cavities and, consequently, the frequency splitting f_PPR between the resonances of the composite cavity.
Photon-Photon Resonance Frequency Calculation
We are interested in the case where both cavities are above threshold with independent current injection in the active regions. First, we assume that the active section of the master (left) cavity is at transparency and that there is gain only in the slave (right) cavity. In this condition, we perform a below-threshold analysis, and we search for the threshold condition of the full coupled cavity structure. To emphasize the role of the two cavities in the lasing mode selection, we assume a wavelength-independent gain function. In this condition, we compute at threshold the gain and the frequency of the lasing mode and of its adjacent ones, and the round trip gain (RTG) and phase (RTP) functions of the full cavity in the frequency range around the grating Bragg condition. The RTG and RTP functions at the lasing condition are expressed as:

RTG(λ) = |←r(λ) · →r(λ)|,  RTP(λ) = arg[←r(λ) · →r(λ)],

where the equivalent reflectivities ←r(λ) and →r(λ) are calculated using the transmission matrix approach [2] considering a reference plane placed at the left input of the slave grating (Figure 1b). Details on the calculation of ←r(λ) and →r(λ) are presented in Appendix A1.
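To make the procedure concrete, the following Python sketch scans a passive round-trip function built from standard coupled-mode grating reflectivities (assumed sign conventions; the master side is folded with a lumped Fabry-Perot composition instead of the paper's full transmission-matrix treatment, and the constant Bragg phase and threshold gain are omitted; lengths follow the Figure 2 caption):

import numpy as np

def r_grating(kappa, L, dbeta):
    # Field reflectivity of a uniform Bragg grating from coupled-mode theory;
    # dbeta is the detuning from the Bragg propagation constant.
    g = np.sqrt(kappa**2 - dbeta**2 + 0j)
    return (-1j * kappa * np.sinh(g * L)) / (
        g * np.cosh(g * L) + 1j * dbeta * np.sinh(g * L))

def round_trip(dbeta, phi_s, kappa=100e2, L_m=250e-6, L_s=250e-6,
               Lg_m=181e-6, Lg_c=65e-6, Lg_s=70e-6):
    r_m = r_grating(kappa, Lg_m, dbeta)
    r_c = r_grating(kappa, Lg_c, dbeta)
    r_s = r_grating(kappa, Lg_s, dbeta)
    p_m = np.exp(-2j * dbeta * L_m)    # master-cavity detuning phase
    t_c2 = 1 - r_c**2                  # lossless-mirror shortcut (assumption)
    # Master grating and cavity folded into one equivalent left reflectivity.
    r_left = r_c + t_c2 * r_m * p_m / (1 - r_c * r_m * p_m)
    r_right = r_s * np.exp(-2j * dbeta * L_s) * np.exp(1j * phi_s)
    rt = r_left * r_right
    return np.abs(rt), np.angle(rt)

# Cavity modes sit at the zero crossings of the RTP; the spacing of the two
# modes closest to the RTG maximum estimates f_PPR at threshold.
dbetas = np.linspace(-4e4, 4e4, 4001)
rtg, rtp = map(np.array, zip(*(round_trip(db, np.pi) for db in dbetas)))
modes = dbetas[:-1][np.diff(np.sign(rtp)) != 0]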
This threshold analysis allows one to obtain the frequency separation between the lasing and the adjacent modes and also gives preliminary information on the mode competition from the gain margin between the lasing mode and the non-lasing ones.
Examples of RTG and RTP functions around the lasing frequency are reported in Figure 2 for three values of the control phase φ_S. In order to qualitatively appreciate the dynamic behavior in the three operation conditions of Figure 2, we present in Figure 3 a preview of the small signal modulation responses, discussed more completely in Section 3, for a suitable choice of the currents injected in the active sections. When φ_S = 0, the lasing mode (red circle) and its closest cavity mode (green square) are separated by 47 GHz (Figure 2a); however, the PPR peak is barely visible in the small signal modulation response (Figure 3a) due to the large gain margin between the lasing mode and the closest one, which is located near the RTG minimum. In this condition, the −3 dB bandwidth only depends on the CPR. When φ_S = π, two modes have RTG = 1, and their frequency separation is f_PPR = 35 GHz (Figure 2b): the corresponding small signal modulation response typically shows a strong peak at the PPR frequency (Figure 3b), indicating that the beating between the two modes generates a self-pulsating output power. While this condition could be useful for radio-frequency (RF) photonics applications, it cannot be employed for direct digital modulation.
In order to obtain a modulation response suitable for digital transmission applications, the correct balance between the PPR frequency and the gain margin of the mode closest to the lasing one must be ensured, as in Figure 2c and in the corresponding Figure 3c, where the −3 dB bandwidth is enhanced up to 43 GHz. With respect to the case with φ_S = π, the f_PPR is now increased by 10%, but the PPR peak is strongly reduced. Good large signal operation can be obtained around this phase condition; on the contrary, an excessively large f_PPR could generate a gap between the CPR and the PPR peaks, again limiting the −3 dB modulation bandwidth. From our experience, a good choice of the f_PPR in order to maximize the bit rate for digital transmission applications is f_PPR ≈ 3 f_CPR.
Coupled Cavities' Design
Different from previous studies [27,28], in our analysis, the distributed characteristics of the DBR mirrors have been considered; therefore, the cavity parameters for the laser design are:
• the three DBR mirrors' maximum reflectivities (R_L, R_C, R_R) or the corresponding lengths (L_gr,M, L_gr,C, L_gr,S); we assumed the coupling coefficient κ to be the same for all of the gratings;
• the right (L_S) and left (L_M) cavity lengths.
In order to obtain an estimate of the cavity f_PPR, once the cavity and material parameters have been fixed, we calculate the frequency separation in the condition indicated in Figure 2b. While in this condition the device could be self-pulsating, in proper operation conditions above threshold, a small tuning of the cavity modes' positions will ensure an optimal intensity of the PPR peak with a small increase of f_PPR, as shown in Figure 2c, producing the extra peak in the small signal modulation response due to PPR, which allows a significant extension of the laser modulation bandwidth [12,16].
In particular, the PPR frequency is computed choosing the value of the lossless left grating reflectivity R_L and assuming the output right facet to be either cleaved, for realization simplicity, or realized with a grating with maximum reflectivity R_R = 32%. Three values of the grating coupling coefficient (κ = 50, 100, 200 cm−1) were considered, and the waveguide and grating losses were assumed to be 10 cm−1. The remaining cavity parameters (central grating reflectivity R_C and the total master and slave cavity lengths L_S and L_M) have been varied; for this first analysis, we assume L_S = L_M. Results and considerations for different values of the ratio L_S/L_M will be reported in the following.
The results of this PPR frequency analysis at threshold are represented in Figure 4, in which the left grating reflectivity is kept constant at R_L = 90% and the coupling coefficient κ is the parameter distinguishing the panels. These figures are essential for selecting the structures to consider, also on the basis of the available technology for the grating realization and the active material gain characteristics.
In Figure 4, the f_PPR map is represented both for the case of the cleaved (continuous line) and grating (dashed line) output facet. In the figure, the dotted horizontal lines indicate the number of cavity modes inside the central reflectivity lobe of the left grating of the master cavity, ←r_gr,M(λ), which is the grating with the largest optical bandwidth (see Figure 1). This parameter is important because the larger its value, the greater the possibility of competition between the lasing mode and the adjacent ones, which can lead to mode jumps when tuning the cavity phase. As can be seen from the maps in Figure 4, f_PPR can be obtained over a broad range of frequencies, from 10 to 80 GHz. As we predicted, small f_PPR values are obtained for higher values of the central grating reflectivity R_C due to the weak coupling between the two cavities, while the opposite is obtained for smaller values. Obviously, the cavity length also affects the value of f_PPR, which decreases for longer cavities because of the reduction of the free spectral range (FSR). The regions in Figure 4's maps where the constant f_PPR lines have not been reported indicate operation conditions in which, with the approximation used to generate the maps, the gain margin with respect to the other modes of the cavity is very small, or lasing can be found at frequencies strongly shifted from the Bragg condition due to the complexity of the RTG reflectivity curve. The latter behavior is usually due to the higher number of cavity modes inside the main reflectivity lobe of ←r. From our experience [16], in these regions, the extended modulation bandwidth conditions could still be obtained, but they are more sensitive to the cavity parameters. To simplify the reading of the maps, the truncation points have been connected with a thick dash-point blue line. As can be seen, the frequency selection introduced using a Bragg grating at the output facet instead of a cleaved surface allows one to extend the f_PPR curves over the whole considered parameter range.
The previous analysis has been repeated for different lengths of the two sections (L_S ≠ L_M) while keeping the total cavity length constant; the results show a significant reduction of the PPR frequency and an increase of the area of mode competition as the master cavity length is reduced. In the following, we therefore present results only for the case of equal cavity lengths.
To quantitatively highlight the fundamental role of the central grating reflectivity on the f_PPR, the map in Figure 5 shows its value when R_R and R_C are varied for a constant value of R_L = 0.90 and two values of the cavity lengths, L_M = L_S = 150 µm and 250 µm. The estimated values of f_PPR are largely independent of R_R and significantly dependent on κ, due to the change of the effective grating length for a given R_C value.
Dynamic Characteristics
Recently, the small signal properties of the two-integrated-DBR-laser structure have been analyzed using composite mode theory [27,28], and devices have also been experimentally realized and characterized [28,29] with respect to their small signal modulation response.
In this section, we report the results of the dynamic simulations using our FDTW numerical code [31]. These simulations have as objectives the verification of operation conditions showing the presence of the PPR in the small signal modulation response, to confirm the results of the static design procedure in Section 2, but they also aim to show the existence of extended modulation bandwidth conditions allowing digital data transmission at a higher bit-rate with respect to that obtainable in single-cavity DBR lasers realized with the same active material.
More details about the implemented FDTW method are presented in Appendix A2. As highlighted in Figures 2 and 3, a fine tuning of the longitudinal cavity modes' positions, obtained by varying the phase φ_S, is generally required in order to obtain the modulation bandwidth enhancement. In real devices, this tuning is generally accomplished using the phase-control section currents I_M,P and I_S,P (Figure 1a). In the following simulations, for consistency with the presented analysis at threshold and in order to simplify the interpretation of the results, we simply represented the phase tuning by adding a phase φ_S to the electric field propagating in the slave laser section (Figure 1b).
We simulated the six laser cavities indicated with markers in Figure 4's maps. We have chosen cavities with a PPR around 35 GHz, and among the various options, we considered the cases with short cavity lengths to limit the parasitic effects. Furthermore, we have decided to operate with cavity parameters granting only a limited number of longitudinal modes in the ←r main reflectivity lobe, allowing the use, for realization simplicity, of an output cleaved facet. The material and waveguide parameters used in the simulation are reported in Table 1; the values have been chosen in agreement with [27].
For all of the considered structures, when varying the tuning phase, the obtained modulation results present a very similar behavior with respect to the injected currents in the active sections. In the map in Figure 6a, we summarize this behavior as a function of the currents I_M and I_S normalized with respect to the threshold currents I_th,M and I_th,S of the master and slave lasers in isolation: the former is composed of the left grating, the master active region and the central grating, while the latter is defined by the central grating, the slave active section and the right cleaved facet.
In the upper part of the map (Region (A)), in the considered structures, we typically obtained self-sustained relaxation oscillations (RO), as described, e.g., in [32] for lasers with feedback. In this region, sinusoidal self-pulsations (SP) can generally be found over wide ranges of the tuning phase when the currents in the two active sections are properly chosen: in the two examples shown in Figure 7, we report two cases of SP found when the two active section currents are the same, for two values of the phase control currents. The phase control allows one to shift the oscillation frequency from a minimum value corresponding to the PPR frequency obtained in the previously reported design procedure to higher values. The results show in both cases the possibility to obtain a good extinction ratio and narrow RF spectra.
In Region (B), single mode operation has typically been obtained when changing the control phase; examples of the corresponding small signal modulation characteristics are reported in Figure 6b. Obviously, these operation conditions are not suitable for the extended digital modulation we are looking for.
The single mode behavior in continuous wave (CW) is also obtained in Region (D) of higher slave injection current, over a wide range of the control phase; in this region, however, the modulation response is different. The two gray horizontal strips in the map represent regions with self-sustained RO similar to those in Region (A) shown in Figure 6, but with a smaller extinction ratio, corresponding to the presence of a mode in the optical spectrum with a side mode suppression ratio greater than 20 dB. In these operation conditions, the large signal modulation eye diagram is practically closed even at a low bit rate. The blue "horn" pointing to the lower part of the map shows the effect of the presence of PPR on the modulation response; for a high PPR frequency, the resonance peak is weak, and it becomes stronger as the mode separation decreases. In the lower part of the map, this effect does not appear, because in this range of phases φ_S, the closest side mode is on the shorter wavelength side with respect to the lasing mode: in this condition, the PPR effect does not take place [12]. The horizontal discontinuities appearing in the lower part of the modulation response when varying the tuning phase, as for example at φ_S = 65° in Figure 8, are associated with longitudinal mode jumps. This behavior highlights the importance of limiting the number of longitudinal modes inside the main lobe of the reflectivity peak.
For the value of the slave cavity phase highlighted by the red dashed horizontal line, where the modulation bandwidth is extended without an excessively high resonance peak, the value of the PPR frequency is around 40 GHz, in good agreement with the f_PPR value used to choose the cavity parameters from the map in Figure 4; this result, valid for all of the simulated devices, confirms the reliability of the proposed design procedure based on the PPR analysis at threshold.
The laser operation condition in Figure 8 was considered for the large signal modulation analysis using an FDTW approach. For this analysis, the master section current was kept constant, an NRZ pseudo-random bit sequence (PRBS) composed of 2^15 − 1 bits was applied to the slave section, and the slave section phase φ_S was set to 240° in order to take advantage of the bandwidth extension provided by the PPR effect.
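For reference, a PRBS of this length can be generated with a linear feedback shift register; a small Python sketch follows (the generator polynomial x^15 + x^14 + 1 is a common choice and an assumption here, as is the mapping to drive currents):

import numpy as np

def prbs15():
    # One period (2**15 - 1 bits) of a PRBS generated by an LFSR with the
    # x^15 + x^14 + 1 polynomial; any nonzero seed works.
    state = np.ones(15, dtype=int)
    bits = np.empty(2**15 - 1, dtype=int)
    for i in range(bits.size):
        bits[i] = state[-1]
        fb = state[-1] ^ state[-2]   # taps at stages 15 and 14
        state = np.roll(state, 1)
        state[0] = fb
    return bits

# NRZ drive: map bits to two illustrative slave-section current levels.
bits = prbs15()
I_slave = np.where(bits == 1, 40e-3, 25e-3)   # placeholder "1"/"0" currents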
Results are presented as eye diagrams in Figure 9. For each eye, on the right axes, we report the output power P, while on the left vertical axes, we indicate P normalized with respect to the bit "0" and "1" levels P_0 and P_1: p = (P − P_0)/(P_1 − P_0). In order to allow an easy estimation of the eye opening, in the figure, we also report the limits for optical transmission systems indicated by the IEEE P802.3ba standard [33].
As expected, the results reported in Figure 9a,b show that open eyes can be obtained using a 40 Gbit/s bit-rate when P_1/P_0 = 6 dB (a) and P_1/P_0 = 3 dB (b). Similar results were obtained in a wide range, from 150° to 270°, of the tuning phase shift φ_S. For a large-scale deployment of these devices, a control electronic circuit with a lookup table could be used for the correct selection of the currents in the phase control regions to ensure this extended modulation bandwidth operation.
We then increased the bit-rate and found that, with respect to the considered mask, the upper limit allowing one to obtain an open eye diagram was 60 Gbit/s when operating with P_1/P_0 = 6 dB (c) and 80 Gbit/s when P_1/P_0 = 3 dB (d), which indicates an extension of the modulation frequency well above the CPR frequency. These results were obtained in a significantly reduced range of the tuning phase.

A1. Equivalent Reflectivity Calculation

The equivalent reflectivity ←r(λ) is obtained from the transmission matrices of the sections to the left of the reference plane (Equation (A1)), with T_gr,C the transmission matrix of the central grating section and β_M and β_S the complex propagation constants of the master and slave sections, respectively. In Equation (A1), φ_S is a phase term included to control in the simulations the effects due to tuning in the slave section, which can be experimentally accomplished with a fine tuning of the passive section current I_S,P; ←r_gr,M(λ) = T^12_gr,M / T^22_gr,M is the reflectivity indicated in Figure 1b, T_gr,M being the transmission matrix of the master cavity output grating.
The equivalent reflectivity →r(λ) is simply −T^21_gr,S / T^22_gr,S, with T_gr,S the transmission matrix of the slave cavity output grating.
A2. Finite Difference Traveling Wave Model
In our FDTW model, we consider just the slowly-varying forward (E^+(z,t)) and backward (E^−(z,t)) components of the electric cavity field E(z,t):

E(z,t) = [E^+(z,t) e^{−jπz/Λ} + E^−(z,t) e^{jπz/Λ}] e^{jω_0 t},

where Λ = λ_B/(2 n_eff,0); Λ, λ_B and n_eff,0 are the pitch, the Bragg wavelength and the effective refractive index of the grating sections, respectively, and ω_0 = 2πc/λ_B.
The electric fields E^+(z,t) and E^−(z,t) are normalized in such a way that the photon density S(z,t) is |E^+(z,t)|² + |E^−(z,t)|²; in the master and slave regions, they are coupled with the carrier density (N(z,t)) rate equation, yielding the system (A2), where v_g is the group velocity, α_i are the material optical losses, S_sp is the spontaneous emission term, J(z,t) is the injected current density, A, B, C are the carrier recombination parameters, Γ_xy is the transversal optical confinement factor and ε is the non-linear gain compression factor.
In Equation (A2), the variation of the effective refractive index δn_eff(z,t) is represented as −(λ_0/4π) α_LEF Γ_xy g(z,t), with α_LEF the linewidth enhancement factor; we consider a linear dependence of the material gain on the carrier density, g(z,t) = a_0 (N(z,t) − N_0), with a_0 the differential gain and N_0 the carrier density at transparency.
For the grating regions, we also include the coupling between E^+ and E^− (Equation (A3)). The system of non-linear differential Equations (A2) and (A3) is discretized in space and time and is numerically integrated using the split-step algorithm [35]. A digital filter is also included to describe the dependence of the material gain on wavelength [36].
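A toy Python sketch of one such split-step update is given below (a schematic illustration only: the carrier rate equation, spontaneous emission, facet reflections and the gain-dispersion filter are omitted, a generic first-order form stands in for the grating coupling of (A3), and all parameter values are placeholders):

import numpy as np

def fdtw_step(Ep, Em, N, kappa, dz, gamma_xy=0.06, a0=5e-20, N0=1e24,
              alpha_i=1e3, eps=2e-23):
    # One split-step update over the time step dt = dz / vg.
    S = np.abs(Ep)**2 + np.abs(Em)**2                  # photon density
    g = a0 * (N - N0) / (1 + eps * S)                  # compressed gain
    amp = np.exp(0.5 * (gamma_xy * g - alpha_i) * dz)  # field gain per cell
    # Step 1: advect each field by one cell along its propagation direction.
    Ep_adv = np.empty_like(Ep); Ep_adv[1:] = Ep[:-1]; Ep_adv[0] = 0.0
    Em_adv = np.empty_like(Em); Em_adv[:-1] = Em[1:]; Em_adv[-1] = 0.0
    # Step 2: apply gain/loss and a first-order grating cross-coupling.
    Ep_new = amp * (Ep_adv - 1j * kappa * dz * Em_adv)
    Em_new = amp * (Em_adv - 1j * kappa * dz * Ep_adv)
    return Ep_new, Em_new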
Figure 1. (a) Schematics of the coupled-cavity DBR laser. L_M and L_S are the sums of the lengths of the active and phase-control sections of the two laser cavities. The currents I_M,A (I_M,P) and I_S,A (I_S,P) are injected in the active (phase-control) sections of the master (left) and slave (right) laser cavities, respectively. (b) Analyzed coupled cavity. The phase tuning sections are replaced by a lumped phase term φ_S. The right dashed line indicates the reference plane at which the equivalent reflectivities →r and ←r are calculated using the transfer matrix method.
Figure 2. Round trip gain (RTG) (continuous blue line) and phase (RTP) (dashed red line) functions plotted at threshold around the Bragg frequency for three different additional phase shifts φ_S in the slave laser cavity. The lasing mode is indicated by the red circle, the closest non-lasing mode by the green square marker and the other non-lasing modes by the blue dots. L_gr,M = 181 µm, L_M = L_S = 250 µm, L_gr,C = 65 µm, L_gr,S = 70 µm, κ = 100 cm−1. (a) φ_S = 0 and f_PPR = 47 GHz, (b) φ_S = π and f_PPR = 35 GHz and (c) φ_S = π/2 and f_PPR = 38 GHz.

Figure 3. Preview of the small signal modulation responses corresponding to the three conditions of Figure 2 (see Section 3).
Figure 4. Coupled cavities DBR laser design maps for equal cavity lengths (L_M = L_S): constant photon-photon resonance (PPR) frequency (in GHz) curves as functions of the central grating reflectivity (R_C) and for the cleaved (dashed lines map) and grating (continuous lines map) right facet with the same maximum reflectivity R_R = 32%. Results are obtained at the laser threshold using the transmission matrix method for a lossless left reflectivity R_L = 90% and with κ = 50 cm−1 (a), κ = 100 cm−1 (b) and κ = 200 cm−1 (c). In the area above the double dotted line, where the dashed constant frequency lines are not reported, the PPR conditions are more critical to find. The red dotted lines, referring to the right vertical axes, indicate the number of longitudinal modes of the slave cavity inside the main reflectivity lobe of the grating with the largest optical bandwidth. The large markers indicate the structures analyzed in their small and large signal modulation responses; the square marker indicates the structure whose dynamic results are reported in this paper.
Figure 5. PPR frequency map as a function of the maximum central grating reflectivity R_gr,C and the maximum right grating reflectivity R_gr,R, for active section lengths L_M = L_S = 150 µm (a) and L_M = L_S = 250 µm (b). Three values of the grating coupling coefficient are considered: κ = 50 cm−1 (continuous lines), κ = 100 cm−1 (dashed lines) and κ = 200 cm−1 (dashed-dotted lines); the labels on the lines report the PPR frequency. The left grating maximum reflectivity is 90%.
Figure 7. Two examples of sinusoidal self-pulsation (SP) operation in Region (A) of Figure 6a for I_S = I_M = 30 mA and two different values of φ_S: (a) the time domain signals and (b) the corresponding RF spectra. The dashed black line (φ_S = 40°) refers to the minimum value of the PPR frequency, f_PPR ≈ 39 GHz, while the continuous red line (φ_S = 120°) refers to the case f_PPR ≈ 57 GHz.

Figure 8. Color map of the small signal modulation results: the modulation frequency is reported on the abscissa, while the ordinate indicates the static phase change φ_S in the slave section used to tune the position of the cavity modes to obtain an operation condition allowing the extension of the modulation bandwidth.
Table 1. Main material parameters used for the FDTW simulations.
"Physics"
] |
A Novel Adaptive Level Set Segmentation Method
The adaptive distance preserving level set (ADPLS) method is fast and not dependent on the initial contour for the segmentation of images with intensity inhomogeneity, but it often leads to segmentation with compromised accuracy. The local binary fitting (LBF) model can achieve segmentation with higher accuracy, but with low speed and sensitivity to initial contour placement. In this paper, a novel adaptive fusing level set method is presented to combine the desirable properties of these two methods. In the proposed method, the weights of the ADPLS and LBF are automatically adjusted according to the spatial information of the image. Experimental results show that comprehensive performance indicators, such as accuracy, speed, and stability, can be significantly improved by using this improved method.
Introduction
Since the introduction by Kass et al. [1], active contour models (ACMs) have been widely used in image segmentation [2][3][4]. The existing ACMs based on the level set method, initially proposed to handle topological changes during curve evolution, can be broadly classified as either edge-based models [5][6][7] or region-based models [8][9][10][11][12][13][14][15] according to the type of image features adopted. The basic idea of the level set method is to implicitly embed the moving contour into a higher-dimensional level set function and view the contour as its zero level set [16]. ACMs have desirable properties compared to conventional image segmentation methods, such as thresholding, edge detection, and region growing. First, ACMs can provide closed and smooth contours in segmentation results, which are necessary for further applications such as shape analysis and recognition. Second, ACMs can locate object boundaries with subpixel accuracy.
Edge-based models utilize the image gradient to stop evolving contours on the object boundaries. Recently, He et al. proposed adaptive distance preserving level set (ADPLS) evolution [7] for image segmentation, in which the initial curve is no longer required to enclose or exclude the objects to be detected. Moreover, starting with only one initial curve positioned anywhere in the image, it can automatically detect interior and exterior contours of an object and the edges of multiple objects. In addition, a large time step can be used to speed up the curve evolution in the numerical solution of the partial differential equation. However, this method does not contain any local intensity information, which is crucial for the segmentation of images with intensity inhomogeneity. As a consequence, the ADPLS method generally fails to segment images with significant intensity inhomogeneity, which is illustrated in the following vessel segmentation in Figure 1. Figure 1(a) is a typical image with intensity inhomogeneity, and Figure 1(b) shows the final segmentation result of the ADPLS method. It is obvious that boundary leakage arises in regions 1 and 3, and under-segmentation arises in regions 2, 4, and 5. This example shows the inability of the ADPLS method to segment images with intensity inhomogeneity.
To guide the motion of the active contour, region-based models identify each region of interest by using a certain region descriptor such as intensity, color, texture, or motion. Region-based models perform better than edge-based models in the presence of weak boundaries. The local binary fitting (LBF) model [14] is one of the classical region-based models. It can not only segment images with weak boundaries well, but also overcome the segmentation error caused by intensity inhomogeneity. The LBF model draws upon spatially varying local region information; thus it is able to deal with intensity inhomogeneity. Figure 1(c) shows the final segmentation result of the LBF model, which is much better than that of the ADPLS method. However, compared to the ADPLS method, the LBF model also has its own disadvantages. First, the LBF model is sensitive to the initial contour, and inappropriate initial contours might lead to failure of the segmentation. Second, its curve evolution is slow due to the limitation on the time step.
In this paper, we propose a novel adaptive level set method that combines the good properties of both the ADPLS and LBF methods. In a simultaneous and automatic way, the proposed method adaptively adjusts the proportions of the ADPLS and LBF methods according to spatial image information. As a consequence, the advantages of the ADPLS method and the LBF method are exploited with minimized disadvantages. The following experiments on both simulated and real images show that the proposed method can achieve segmentation with higher accuracy.
The ADPLS Method.
Let Ω ⊂ ℝ² be the image domain, and let I : Ω → ℝ be a given gray level image. The ADPLS method [7] was recently proposed to overcome the disadvantage of the distance preserving level set method [5], which requires the initial curve to enclose or exclude the objects to be detected. It introduces a variable weighting coefficient whose sign and magnitude are adjusted according to image information, so that the zero level set can choose its evolution direction adaptively.
In image segmentation, active contours are dynamic curves that move toward the object boundaries by minimizing a predefined energy functional. Let g be the edge indicator function defined by

g = 1 / (1 + |∇(G_σ * I)|²),    (1)

where G_σ is the Gaussian kernel with standard deviation σ, and σ > 0 is a constant.
In the ADPLS method, a variational framework on the level set function φ is defined in (2), where the first term is the internal energy of φ that characterizes the deviation of the level set function from a signed distance function, with weight μ > 0; the second term computes the length of the zero level curve of φ, with weight λ > 0, where g is the edge indicator function defined by (1), δ is the univariate Dirac function, and H is the Heaviside function. The definition of V(I) is given by (3), where α > 0 is a constant, sgn(·) is the sign function, and Δ(G_σ * I) denotes the image convolved with a Gaussian smoothing filter and then processed by the Laplace operator. The associated level set evolution equation is given by (4), where Δt_ADPLS is the time step; the definition of the auxiliary term in (4) is given by (5). It is worth mentioning that the variable weighting coefficient V(I) in (4) plays a key role in the evolution of the zero level curves. First, it adaptively guides the zero level curves to the target contour according to the image information. Second, its size is adjusted adaptively according to the spatial image information, which can greatly improve the capability of the zero level set to detect the edges of multiple contours and concavities. Third, by adjusting the size of the coefficient α, the method can control the zero level curves' ability to capture the target boundaries. If multiple contours exist in the image, we give the coefficient α a larger value. On the contrary, if the content of the image is simple, a smaller value is set for the coefficient α.
Although the ADPLS method has the above advantages, it also has a disadvantage. In the segmentation of images with intensity inhomogeneity, the intensity in one place may differ dramatically from that in another place; no matter how the edge indicator function g(·) is adjusted, it may decrease too quickly in one place but too slowly in another. This situation can be seen in Figure 1(b).
The LBF Model.
Recently, Li et al. [14] proposed the LBF model, which can overcome the segmentation error brought by intensity inhomogeneity using two fitting functions f₁(x) and f₂(x) that locally approximate the intensities on the two sides of the contour. They extracted the object by minimizing the following energy function:

E^LBF = Σ_{i=1,2} λᵢ ∫ [∫ K_σ(x − y) |I(y) − fᵢ(x)|² Mᵢ(φ(y)) dy] dx + ν|C| + μP(φ),  with M₁(φ) = H(φ), M₂(φ) = 1 − H(φ),    (6)

where K_σ is a Gaussian kernel with standard deviation σ. The first two terms in (6) are the weighted mean square errors of the approximation of the image intensities I(y) outside and inside the contour by the fitting values f₁(x) and f₂(x), respectively, with K_σ(x − y) as the weight assigned to each intensity I(y) at y. The third term |C| is the length of the contour, and the last term is the regularization term for the level set evolution. λ₁ > 0, λ₂ > 0, μ > 0, and ν > 0 are constants. The associated level set evolution equation is given by (7), where Δt_LBF is the time step, and the definitions of e₁, e₂, f₁, and f₂ are given by (8). Because of the localization property of the kernel function K_σ(x − y), the contribution of the intensity I(y) to the fitting energy E^LBF decreases to zero as the point y moves away from the center point x. This localization property plays a key role in segmenting images with intensity inhomogeneity, and a better result than that of the ADPLS method can be observed in Figure 1(c). However, many local minima of the energy functional might simultaneously be introduced by this localization property, which means that the LBF model might be sensitive to the initial curve, and an inappropriate initial curve might lead to the failure of the segmentation. Figure 2 shows failed segmentations with inappropriate initial contours. Besides, the limitation on the time step limits the speed of the LBF model in the evolution of the zero level set.
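For illustration, the fitting values can be computed with Gaussian convolutions, as in the standard LBF formulation; the short Python sketch below assumes a smoothed Heaviside and the convention that φ > 0 inside the contour:

import numpy as np
from scipy.ndimage import gaussian_filter

def lbf_fitting(I, phi, sigma=3.0, eps=1.0):
    # Smoothed Heaviside selects the region where phi > 0; f1 and f2 are the
    # kernel-weighted local means of I on the two sides of the contour.
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    f1 = gaussian_filter(H * I, sigma) / (gaussian_filter(H, sigma) + 1e-10)
    f2 = gaussian_filter((1 - H) * I, sigma) / (
        gaussian_filter(1 - H, sigma) + 1e-10)
    return f1, f2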
The Proposed Method
The proposed method is a fusion method that combines the advantages of the LBF model and the ADPLS method by taking both local and global intensity information into account. It is built to apply the two methods simultaneously, combining their level set increments through a variable weighting coefficient ω(x) as in (9). To make the proposed method adapt to the image information, the value of ω(x) should take two aspects into account. First, the variable weighting coefficient should reflect the degree of intensity homogeneity. Second, the proposed method should mainly use the ADPLS method, to be fast and robust, when the zero level set evolves in an intensity-homogeneous area, and mainly use the LBF model, to segment the image accurately, when the zero level set evolves near the object boundary. According to these points, the variable weighting coefficient ω(x) is designed as in (10), where k > 0 is a constant and s(x) is the standard deviation of the pixel intensities in a 3 × 3 neighborhood, chosen as a matter of experience, which is inversely proportional to the homogeneity of the image in the neighborhood. The more homogeneous the image's intensity level is, the smaller the value of ω(x) is. With a bigger weight, the ADPLS method will lead the evolution of the level set. There are two advantages of the proposed method. First, with its global search ability, the proposed method can avoid being trapped in local minima. Second, as the ADPLS method, which allows a much larger time step, is much faster than the LBF model, the proposed method will be faster than the LBF model. On the other hand, the value of ω(x) will be bigger when the pixel is near or on an edge. Accordingly, the weight of the LBF model will increase and can become even larger than that of the ADPLS method, so the accuracy of the result will be better than with the ADPLS method alone. As discussed above, the proposed method combines the advantages of the ADPLS method and the LBF model, as can be seen from a simple experiment on a simulated image shown in Figure 3. Figures 3(b) and 3(c) show the dominant force of the zero level set evolution at each pixel. As can be seen, the dominant force near the real edge, which comes from the LBF model, is marked blue (the borders of the image, 3 pixels wide, are set blue). In the other, intensity-homogeneous areas, the dominant force, which comes from the ADPLS method, is marked green.
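A small Python sketch of such a weighting is given below; since the exact functional form of (10) is not recoverable from the text, a bounded increasing function of the 3 × 3 local standard deviation is used as a stand-in with the same qualitative behavior:

import numpy as np
from scipy.ndimage import uniform_filter

def local_weight(I, k=5.0):
    # 3x3 local standard deviation s(x), mapped to a weight in [0, 1).
    I = I.astype(float)
    mean = uniform_filter(I, size=3)
    sq_mean = uniform_filter(I * I, size=3)
    s = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return s / (s + k)   # near 0 in homogeneous areas, toward 1 at edges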
Implementation of Level Set.
The implementation steps of the proposed algorithm in this paper can be described as follows.
Step 1. Given an arbitrary region R₀ in the image domain, initialize the level set function φ₀. If the pixel (x, y) belongs to the region R₀, then set φ₀(x, y) = −2; otherwise set φ₀(x, y) = 2.
Step 2. According to the image intensity and (10), compute the weighting coefficient ω(x) and initialize the parameters. Set the iteration counter iterNum = 0.
Step 3. If the iterNum is less than the maximum number of iterations iterMax, then repeat the following steps.
Step 4. Apply Neumann boundary conditions on the boundary of the image. According to the literature [5,7] and (4) and (7), compute Δφ_ADPLS and Δφ_LBF, and then obtain Δφ by solving (9).
Step 5. Update the level set function using the formula φ = φ + Δφ and set iterNum = iterNum + 1; then return to Step 3.
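The following Python skeleton summarizes Steps 1-5 (the increments Δφ_ADPLS and Δφ_LBF are left as callables, since (4) and (7) are not reproduced here; the fused update assumes a convex combination in (9), and local_weight is the sketch given earlier):

import numpy as np

def fused_evolution(I, region_mask, iter_max, dphi_adpls, dphi_lbf, k=5.0):
    phi = np.where(region_mask, -2.0, 2.0)             # Step 1
    w = local_weight(I, k)                             # Step 2
    for _ in range(iter_max):                          # Step 3
        phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]  # Step 4: Neumann
        phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]  # boundary conditions
        dphi = (1 - w) * dphi_adpls(phi, I) + w * dphi_lbf(phi, I)  # Eq. (9)
        phi = phi + dphi                               # Step 5
    return phi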
Experiments
The proposed method has been tested on both synthetic and medical images from different modalities. Unless otherwise specified, we use the following parameters in this paper. The parameters used in the LBF model are Δt_LBF = 0.1, λ₂ = 1.0, σ = 3.0, and the penalty parameter μ = 0.002 × 255 × 255; the values of λ₁ and λ₂ depend on the actual image to be segmented, as described in [14]. The parameters used in the ADPLS method are Δt_GIF = 1.0, λ = 1.5, σ = 2.0, the penalty parameter μ = 0.2/Δt_GIF, and α = 10; the value of α should be small when dealing with simple images and large with multilayer complex images, as described in [7]. The constant coefficient k in ω(x) of (10) is set by experience; according to many experiments, it is appropriate to set it between 1 and 10. When the image is intensity-inhomogeneous, it should be small, and large otherwise. We ran all the experiments in Matlab 6.5 on a Dell Optiplex 210L PC with a Pentium 4 processor, 3.0 GHz, and 1 GB RAM, under Windows XP. Figure 4 shows the results of the proposed method and the LBF model on the same synthetic image with the same initial contours, in the first row and the second row, respectively. The results in the first row show that the LBF model fails to segment the object correctly after 300 iterations and 26.922 seconds. However, satisfactory segmentations can be obtained using the proposed method with 10 iterations and 0.672 seconds. In these experiments, we find that, no matter what the initial contours are, the proposed method leads to better segmentations with fewer iterations and lower time costs than the LBF model. Figure 5 shows the results for an X-ray vessel image, which is a typical image with intensity inhomogeneity. In our experiments, the ADPLS method fails to segment the object correctly after 1000 iterations and 24.453 seconds. As can be seen from the results in Figure 5(c), the weak part of the vessel boundaries can be segmented successfully using the proposed method. This demonstrates that, owing to the combination with the LBF model, the proposed method is more accurate than the ADPLS method. Figure 6 shows the segmentation results for a brain MR image using the three methods mentioned above. Column 1 shows the initial contours, and Columns 2, 3, and 4 show the results of the LBF model, the ADPLS method, and the proposed method, respectively, all with the same initial contours from Column 1. Figure 7 gives an enlarged view of the segmentation results on a local region of row 1 in Figure 6. For the segmentations in Figures 6 and 7, the stability, iterations, elapsed time, and accuracy are listed in Table 1. Regarding accuracy, the proposed method is close to the LBF model and much more accurate than the ADPLS method. With regard to elapsed time, the proposed method is close to the ADPLS method but much faster than the LBF model. With regard to algorithm stability, the proposed method and the ADPLS method are less sensitive to the initialization than the LBF model. Figure 8 shows the segmentation results for a synthetic image using the three methods mentioned above. Column 1 shows the initial contours, and Columns 2, 3, and 4 show the results of the LBF model, the ADPLS method, and the proposed method, respectively, all with the same initial contours from Column 1. The extensive experimental results show the superior performance of the proposed method over the state-of-the-art methods, in terms of both robustness and efficiency.
Conclusions
The fusion method proposed in this paper uses a variable weighting coefficient to combine the ADPLS method and the LBF model. The proposed method allows the ADPLS method to be the dominant force in intensity-homogeneous areas of the image and the LBF model to be the dominant force in intensity-inhomogeneous areas and at object boundaries, so it can take full advantage of the two algorithms, which complement each other. Compared with the LBF model, the fusion method is less susceptible to the initial contours, consistently obtains good segmentation results, and is much faster. On the other hand, compared with the ADPLS method, the proposed method can avoid boundary leakage and under-segmentation while maintaining high speed when processing images with intensity inhomogeneity. Experiments show that the present method has superior comprehensive performance compared to the ADPLS and LBF algorithms in terms of segmentation accuracy, speed, and stability.
"Computer Science"
] |
Analyzing Some Economic Relations Based on Expansion Input-output Model
This paper is a trial attempt to introduce the concepts of the Leontief inverse matrix and the Leontief extended system for Keynes multipliers, which can analyze the relationships between income groups and consumer groups, respectively. The model is also used to analyze the structure of income, in order to describe quantitatively the relationship between income from production and income not from production. The empirical study uses the Vietnam input-output table for 2005.
Introduction
In previous decades, there were many studies extending the basic I/O model, including the Social Accounting Matrix (SAM) (Richard Stone, 1961), the System of National Accounts (SNA), demographic-economic modeling (Miyazawa, 1966), and inter-regional models (Miyazawa and other authors, 1976). These extended I/O models were built and applied by most countries in the world for analyzing and forecasting the economy (Pyatt and Roe, 1977; Cohen, 1988; Pyatt and Round, 1985). There are many different uses of this model, such as I/O analysis, SAM analysis and CGE models. These analyses are based mainly on the basic relationships in the I/O model and the SAM.
Demographic-Economic Model
The demographic-economic model was created by Miyazawa (1966); it has a form similar to the Social Accounting Matrix and describes the distribution and redistribution of the economy. Essentially, the demographic-economic model and the Social Accounting Matrix are similar, and one could easily be changed into the other depending on the study purpose. In this study, the demographic-economic model is developed over institutional regions (households, types of enterprise, and the State region divided by type of tax). These institutional regions are considered endogenous; saving and relations with foreign countries are considered exogenous. This model is a combination of the notions of the inter-regional I/O model and the demographic-economic model, as presented in matrix form in relation (1), with: A the coefficient matrix of direct costs; x1 the vector of the output value of economic activity; x2 the total income of households; x3 the total income of the state sector; x4 the total income by enterprise type; h the matrix (vector) of coefficients of income from production for the household groups, where income from production is understood as workers' income from production divided between the 2 types of household; g the matrix (vector) of coefficients of revenues from production (value added tax, special consumption tax, other taxes and fees); e the coefficient matrix of income from production for the various types of enterprise (state enterprises, non-state enterprises and enterprises with foreign investment), where income from production here is understood to include the producer surplus and the depreciation of fixed assets; c1 the coefficient matrix of consumption by household groups corresponding to income groups; g1 the vector of consumption coefficients of the State corresponding to the types of state budget revenue; c2 the coefficient matrix representing the redistribution of income between the State sector and the household sector; c3 the coefficient matrix representing the redistribution between the enterprise sector and the household sector; g2, g3 the state expenditures transferred to the household sector and the enterprise sector; e1, e2, e3 the coefficient matrices representing the redistribution from the enterprise sector to the household sector, the State sector and the other types of enterprise. Finally, f1, f2, f3, f4 are the exogenous variables.
Symbols
The vectors v and c and the matrix B can be defined accordingly, so that relation (1) can be rewritten in the form (8). Based on the theory of Miyazawa and the development of the demographic-economic model by Batey and Madden (1983), relation (8) is represented as (9), in which Δ₁ is considered the Leontief extended matrix; each element of Δ₁ includes direct costs, indirect costs, and the dispersion effect of final consumption by households and of spending for the usual activities of the Government. These elements are greater than the corresponding elements of the usual Leontief inverse (I−A)⁻¹, because they include the additional production required to meet the production effects caused by final consumption. Δ₂ is known as the extended Keynesian multiplier matrix, and it can be decomposed as in (10), in which (I−B)⁻¹ is considered the multiplier matrix of the internal spread in the redistribution processes: if the matrix B is the matrix of direct expenditures of the regions to create a unit of income from redistribution, the matrix (I−B)⁻¹ represents the total direct redistribution expenditure to create 1 unit of income from redistribution (influence between regions). The factor (I − (I−B)⁻¹·v·(I−A)⁻¹·c)⁻¹ represents the external spread from the production process to the redistribution process, which means that income from redistribution depends not only on the internal relations in the redistribution process but also on the income from production of each region caused by the influence of final consumption.
Δ₁·c is the matrix showing the influence of final consumption on production.
v·(I−A)⁻¹ is the matrix of income received from production.
Note that Equation (9) can be rewritten as in (11), in which Δ₁ = Δ₁₁·(I−A)⁻¹ and Δ₂ = Δ₂₂·(I−B)⁻¹. Equation (11) introduces the various types of effects: first, the influence of the production region and the redistribution region; second, the effect of final consumption on production and the income spread from production to non-production; and finally, the external spread effect on the production and distribution regions.
In addition, this model also allows quantification of the inverse effect from redistribution to the production areas. From formulas (8), (9) and (11), the relationship between X₁ and X′ is represented by (12) and (13), which describe the inter-regional inverse relationship (between sectors and regions and between production and non-production).
The above is the general model; depending on the purpose of the research, internal and external variables can be changed. For example, to consider the impact of taxes, relation (1) can be rewritten as in (14), in which f′₁, f′₂, f′₄ are exogenous variables including fᵢ (i = 1, 2, 4) and the tax matrix.
Here, L′ is the transpose of the matrix L, and f″ᵢ includes taxes and other exogenous variables.
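The multiplier algebra described above can be sketched in a few lines of Python with numpy (toy dimensions and random placeholder coefficients, not the Vietnam 2005 data; Δ₂ follows the decomposition with the internal multiplier (I−B)⁻¹ and the external-spread factor, and Δ₁ adds the consumption-induced production to the ordinary Leontief inverse):

import numpy as np

# A (n x n): direct costs; v (m x n): income coefficients stacking h, g, e;
# c (n x m): consumption coefficients; B (m x m): redistribution expenditures.
n, m = 4, 3
rng = np.random.default_rng(0)
A = rng.uniform(0, 0.1, (n, n))
v = rng.uniform(0, 0.1, (m, n))
c = rng.uniform(0, 0.1, (n, m))
B = rng.uniform(0, 0.1, (m, m))

L = np.linalg.inv(np.eye(n) - A)                    # ordinary Leontief inverse
K = np.linalg.inv(np.eye(m) - B)                    # internal multiplier
D2 = np.linalg.inv(np.eye(m) - K @ v @ L @ c) @ K   # extended Keynes multiplier
D1 = L + L @ c @ D2 @ v @ L                         # Leontief extended matrix
# Each element of D1 exceeds the matching element of L, reflecting the extra
# production induced by final consumption and government spending.
assert np.all(D1 >= L)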
I/O analysis
In an economy, changes in the structure of the sectors are often closely related to each other: some sectors heavily depend on other sectors, while a few of them do not depend much on others. Thus, changes in some sectors will affect the economy more than changes in others. I/O analysis is usually based on backward linkages and forward linkages. These linkages are tools to measure the relationship of a sector with other sectors, in its role as a user of inputs or as a supplier of inputs.
The backward linkage is used to measure the relative importance of a sector in its role as a user of products and services as inputs for the entire production system. It is defined as the ratio of the sum of the elements (by column) of the Leontief inverse matrix to the average over the entire production system. This ratio is called the index of the power of dispersion and is defined as

BL_j = (Σᵢ r_ij) / ((1/n) Σᵢ Σⱼ r_ij),

where r_ij are the elements of the Leontief inverse matrix. A higher ratio means larger backward linkages of the industry.
When such an industry develops, it will lead to the growth of the entire system. Policy makers can rely on this index as an important reference in decision making.
The forward linkage expresses the importance of a sector as a source of material products and services for the entire production system. It is seen as the sensitivity of the economy and is measured by the ratio of the row sum of the elements of the Leontief inverse to the average over the entire system, as formalized below.
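By analogy, the sensitivity index (forward linkage) is the row-sum counterpart:

```latex
FL_i \;=\; \frac{\tfrac{1}{n}\sum_{j=1}^{n} b_{ij}}
               {\tfrac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n} b_{ij}}.
```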
There are two types of I/O table: the competitive-import type and the non-competitive-import type. In a competitive-import I/O table, the matrix of direct intermediate-cost coefficients mixes domestic products with imported products, so analysis of the power of dispersion and of the sensitivity of the economy is confounded by the import channel: a sector with a high power of dispersion does not necessarily stimulate domestic production well, since it may mainly stimulate imports. In a non-competitive-import I/O table, the matrix of direct intermediate-cost coefficients excludes imported products, so the power of dispersion and the sensitivity of an industry reflect the impact of that industry on domestic production.
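The contrast between the two table types can be made concrete with a short sketch that computes both indices once on a total (competitive) coefficient matrix and once on its domestic part; the matrices are toy values, not the paper's data:

```python
import numpy as np

# Hedged sketch: Rasmussen power-of-dispersion (backward linkage, BL) and
# sensitivity (forward linkage, FL) indices, computed on a competitive
# coefficient matrix (domestic + imported inputs) and on its domestic
# (non-competitive) part. Matrices are illustrative toy values.
A_total    = np.array([[0.25, 0.15],   # domestic + imported inputs
                       [0.20, 0.30]])
A_domestic = np.array([[0.15, 0.05],   # imported inputs removed
                       [0.12, 0.20]])

def dispersion_indices(A):
    n = A.shape[0]
    Binv = np.linalg.inv(np.eye(n) - A)   # Leontief inverse
    mean = Binv.sum() / n**2              # system-wide average element
    BL = Binv.sum(axis=0) / n / mean      # column sums -> backward linkage
    FL = Binv.sum(axis=1) / n / mean      # row sums    -> forward linkage
    return BL, FL

for name, A in [("competitive", A_total), ("non-competitive", A_domestic)]:
    BL, FL = dispersion_indices(A)
    print(name, "BL:", BL.round(3), "FL:", FL.round(3))
# A sector with BL > 1 in the competitive table but BL < 1 in the
# non-competitive table stimulates imports more than domestic production.
```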
Impacts of Non-production Income on Savings by Institutional Regions
The calculation of the impacts of non-production income on the savings of the five institutional regions is based on the multipliers in the Keynesian matrix of the model; the results are presented in Table 1. In addition, the values of ∆₂₂ (from the decomposition of ∆₂) show the impacts from production on income redistribution (Picture 2): the household region is most affected by the production process, with a coefficient of 2.13, followed by the state with 1.36.
Income Impacts of Non-production on the Production Process
This impact is calculated from the formula ∆₁·c·(I−B)⁻¹ in relation (9), which captures how income distributed outside the production process feeds final consumption and spreads back into production. For example, if the Government region obtains 1 unit from non-production (including direct taxes, i.e., personal and corporate income tax, and social- and health-insurance collections), the resulting budget change stimulates production by 1.20 units. The calculated income effects from non-production income on the production process for each institutional region are presented in Table 2. The results show that this impact is largest for the household region, followed by the Government region, although the industry structure of the impact differs between the two: while the household region spends 1 unit of non-production income on "food, tourism, and travel…", the Government region concentrates on "military", "administration" and "food". The three enterprise regions show a rather small, and nearly identical, index for this impact.
The Vietnamese government has long expected the State sector (E1) and FDI (E3) to be the two main regions guiding and supporting the other institutional regions. However, the results in Table 2 show something notable: among the three enterprise regions, FDI (E3) has the smallest impact of non-production income on the production process, with E1 the second smallest and E2 the largest of the three.
Impacts of Production to Income Redistribution of Institutional Regions
This section calculates the capacity for income redistribution from production to the institutional regions (households, Government and enterprises) by sector group (the 30 combined sectors). The calculation is based on the formula ∆₂·v·(I−A)⁻¹, with results shown in Table 3. The household region has the highest capacity for income redistribution in most sectors; however, income redistribution is uneven across institutional regions and sectors. This may suggest that the Government should levy personal income tax evenly at a certain level, while corporate income tax should depend on the type of industry. Table 3 also shows that the redistribution capacity of FDI (E3) is the smallest among the three enterprise regions (E1-E2-E3) in almost all economic activities. The exception is "16. Mines", where the capacity of FDI is high because the calculation includes income that must be repatriated to foreign investors in crude-oil exploitation.
Interactive Impacts of Economic Activities in the Production Process (Analysis of the Expanded Leontief Inverse Matrix)
Using relation (11) and the formula ∆₁ = ∆₁₁·(I−A)⁻¹, we can calculate the internal impacts within the production process and the external impacts from non-production activities on the production process. Non-production activities are understood as the spending of the institutional regions (households, Government, enterprises), including final consumption, transfers and ownership. The calculation results are presented in Appendix 3. They show that the production of consumer products and the processing industries often have higher external impacts than internal impacts.
Industries with especially high inter-impacts in production include food processing, textiles and motorcycle-parts production. This is an interesting point for policy makers, particularly for industrial policy. Calculation results of this impact for the 30 combined sectors are reported in Appendix 3.
I/O Analysis of Sectors' Impacts
In theory, sectors with a large index of the power of dispersion should be preferred, because they have a strong impact across the entire production system. However, because this index changes over time, priorities must change with it: an industry may be a key sector in one period but not in the next. Comparing the two I/O models shows that the indexes of the power of dispersion of the key service industries in the 2005 I/O model increased compared with the previous period. This is a good sign: is it an expression of modernization and industrialization spreading to the service sectors?
Comparing the index of the power of dispersion between the competitive and non-competitive types reveals something interesting. In the competitive type (where import costs include imported products), some sectors have an index larger than 1, yet their index of dispersion to domestic products is smaller than 1; these include medicine, rubber and rubber products, chemicals (all kinds), precision equipment, household tools, machine tools, common and specialized machinery, transport equipment, transformers, electrical equipment, broadcasting equipment, ferrous metals, thread (all kinds), textiles and leather. This means that developing these sectors stimulates imports more strongly than domestic production. Conversely, some sectors have an index smaller than 1 in the competitive type but larger than 1 in the non-competitive type (where inputs include only domestic products). Following Rasmussen-Hirschman, industries with a high index of the power of dispersion in the non-competitive type should be treated as key sectors.
Appendix 2 shows that these sectors belong to the industry groups of meat processing and meat products, processed fruits and vegetables, sugar, coffee, tea, alcohol, tobacco, seafood processing, other foods, and so on.
Conclusion and Policy Suggestion
The analysis of the I/O and demographic-economic models shows that changes in the economy have different impacts across sectors and institutional regions, so calculations of this kind are necessary for planning tax and other policies. For example, when the index of the power of dispersion of a sector is very large, stimulating the development of that sector has a strong impact on the other sectors in the economy. The calculations show that these sectors are mostly processing sectors: meat, vegetables, coffee, seafood, and so on. Coffee, for example, is mainly exported in raw form, and the fruit and vegetable processing industry in Vietnam is still very weak, so little production is stimulated and low added value is created. The potential of the processing industry is therefore great, both in scale and in economy-wide impact. Moreover, the development of these sectors helps to raise the value of agricultural labour and to minimize the negative impact of the integration process on the lives of more than 10 million rural households in Vietnam.
Appendix 1. Economic structure as seen through the index of the power of dispersion. Following the method presented in Section II, Part 2, Appendix 1 reports the calculated backward linkages (BL) of the two I/O models, for 2000 and 2005. The 2000 I/O model represents the economic structure of the period 1998-2002 and the 2005 model represents the period 2003-2007. Appendix 2 shows clearly the structural change of the economy through changes in the index of the power of dispersion across the 112 sectors.
Table 1.
Calculation results of the extended Keynesian multiplier (∆₂₂). Picture 1. Backward linkage of the extended Keynesian matrix. The calculated extended Keynesian matrix shows the impact of non-production income on the saving of each institutional region (household, Government, and enterprise regions). The clearest effect is in the Government region at 2.21, then the household region at 2.207, with the smallest in the FDI enterprise region at 1.72. This suggests that if the Government region receives 2.21 units of income from non-production, it will generate 1 unit of saving, while if the FDI enterprise region receives 1.72 units of property income and transfers, it will generate 1 unit of saving and capital transferred abroad.
Table 2.
Impact of non-production income on the production process by institutional regions
Table 3.
The ability of income redistribution of institutional regions | 3,787 | 2012-09-25T00:00:00.000 | [
"Economics"
] |
Broadband and tunable time-resolved THz system using argon-filled hollow-core photonic crystal fiber
We demonstrate broadband, frequency-tunable, phase-locked terahertz (THz) generation and detection based on difference frequency mixing of temporally and spectrally structured near-infrared (NIR) pulses. The pulses are prepared in a gas-filled hollow-core photonic crystal fiber (HC-PCF), whose linear and nonlinear optical properties can be adjusted by tuning the gas pressure. This permits optimization of both the spectral broadening of the pulses due to self-phase modulation (SPM) and the generated THz spectrum. The properties of the prepared pulses, measured at several different argon gas pressures, agree well with the results of numerical modeling. Using these pulses, we perform difference frequency generation in a standard time-resolved THz scheme. As the argon pressure is gradually increased from 0 to 10 bar, the NIR pulses spectrally broaden from 3.5 to 8.7 THz, while the measured THz bandwidth increases correspondingly from 2.3 to 4.5 THz. At 10 bar, the THz spectrum extends to 6 THz, limited only by the spectral bandwidth of our time-resolved detection scheme. Interestingly, SPM in the HC-PCF produces asymmetric spectral broadening that may be used to enhance the generation of selected THz frequencies. This scheme, based on a HC-PCF pulse shaper, holds great promise for broadband time-domain spectroscopy in the THz, enabling the use of compact and stable ultrafast laser sources with relatively narrow linewidths (<4 THz).
I. INTRODUCTION
Common THz-TDS systems, including most of those commercially available, are now able to resolve with great sensitivity the spectral range covering 0.5-4 THz. One of the next frontiers in THz photonics is therefore the development of efficient schemes for expanding this spectral window beyond 4 THz, so as to allow access to both a wider range of molecular resonances for sensing applications and new microscopic interactions in condensed matter. Some experimental schemes have already been reported for achieving ultra-broadband THz spectroscopy. They are based on nonlinear optical generation and detection in laser-induced gas plasmas (THz wave air photonics), [19][20][21][22] GaP or several-micron-thick ZnTe crystals, [23][24][25] birefringent LiGaS2 (LGS), 26,27 GaSe, [27][28][29][30][31] and organic crystals such as DAST. 32,33 Although these configurations rely on different types of nonlinear media, they all share an essential common component: an ultrafast near-infrared (NIR) laser capable of delivering broadband femtosecond pulses. Such an optical source is crucial for accessing the high THz frequency range, since THz radiation is generated by nonlinear difference frequency mixing of NIR pulses, which means that the highest generated THz frequencies are determined by the spectral bandwidth of the NIR pulses. Furthermore, efficient time-resolved THz detection requires ultrashort NIR pulses with a duration shorter than the oscillation cycle of the highest THz frequency components to achieve broadband detection. These two conditions impose stringent requirements on the NIR ultrafast source. As a result, expensive and bulky optical systems are often necessary for broadband THz-TDS. We propose an alternative setup for generating broadband THz radiation, one that can be used with a compact and stable MHz laser system delivering pulses of sub-microjoule energy and a few hundred femtoseconds in duration. We use a commercial Yb:KGW ultrafast amplifier in combination with a gas-filled kagomé-type hollow-core photonic crystal fiber (kagomé-PCF) to achieve efficient broadband THz generation and detection. 34 [36][37][38][39] The kagomé-PCF provides weak anomalous dispersion that can be counter-balanced by the normal dispersion of the gas filling the fiber, allowing propagation of ultrashort pulses with minimal temporal distortion. In contrast to solid-core fibers or highly nonlinear materials, the linear and nonlinear properties of the system can be adjusted simply by changing the species and the pressure of the gas filling the HC-PCF. Here, we take advantage of this unique feature to broaden the spectrum covered by the THz-TDS system out to ∼6 THz, limited only by the choice of the nonlinear crystal for time-resolved detection. More importantly, the general concept of using a HC-PCF pulse shaper in combination with a laser of relatively narrow spectral linewidth (<4 THz) could be extended to other schemes based on different generation and detection crystals such as LGS or GaSe, which would further extend the spectral coverage of THz-TDS.
II. EXPERIMENT AND NUMERICAL SIMULATIONS
The experimental configuration is sketched in Fig. 1. The optical source is a commercial Yb:KGW amplifier delivering 185 fs pulses at a central wavelength of 1035 nm, an average power of 1 W, and a repetition rate of 1.1 MHz. The emitted pulses are launched into an Ar-filled HC-PCF with 75% coupling efficiency. The fiber, a 55 cm-long kagomé-PCF with a 34 µm-diameter core, is placed entirely inside a gas cell within which the Ar pressure can easily be adjusted. This scheme allows us to change the properties of the optical medium and tune the effects of self-phase modulation (SPM), which broadens and restructures the NIR pulse spectrum. A pair of identical chirped mirrors (CMs), providing a total dispersion of −2500 fs², is placed after the HC-PCF to compensate for the positive chirp resulting from SPM and to ensure that the pulses are nearly Fourier-transform-limited. A standard THz-TDS configuration is then used to generate and detect the THz radiation.35 Briefly, the NIR beam is split into two paths. In the first path, phase-locked THz radiation is generated by difference frequency mixing inside a 220 µm-thick (110)-oriented GaP crystal. The second path is used as an optical gate for time-resolved electro-optical detection. An identical GaP nonlinear crystal is used for detection.
A. Near-infrared pulse propagation in the HC-PCF pulse shaper
The NIR pulse properties are measured after the CMs using a USB spectrometer and a homemade autocorrelator based on second harmonic generation in a 150 µm-thick beta-barium borate (BBO) crystal. As the Ar pressure (P Ar) is increased from 0 to 10 bar, the NIR spectrum gradually broadens from a full-width at half-maximum (FWHM) of 3.5 to 8.7 THz [Fig. 2(a)]. The spectral broadening manifests itself mainly in two sidelobes separated by Δν_SL = 3.1 THz at 7.5 bar and 4.7 THz at 10 bar. The corresponding autocorrelation traces are shown in Fig. 2(b), from which, assuming that the structured NIR pulses have a sech² temporal shape, the original pulse duration can be recovered. The pulse duration is observed to decrease gradually, from 185 fs to 65 fs (FWHM), as P Ar is increased. The fact that the spectral bandwidth increases by a factor of 2.5 while the temporal duration decreases by 2.8 indicates that the pulses are close to Fourier-transform-limited at all the pressures used in the experiment.
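As a numerical aside, recovering the pulse duration from an intensity autocorrelation under the sech² assumption uses the standard deconvolution factor of about 1.543. In the sketch below the autocorrelation widths are back-calculated from the reported durations purely for illustration:

```python
# Minimal sketch: recover sech^2 pulse durations (FWHM) from intensity
# autocorrelation widths. For a sech^2 pulse, tau_AC / tau_pulse ~= 1.543.
# The autocorrelation widths below are illustrative (back-calculated from
# the reported 185-65 fs durations), not values read from Fig. 2(b).
SECH2_FACTOR = 1.543

ac_fwhm_fs = {0: 285, 2.5: 231, 5: 177, 7.5: 131, 10: 100}  # {P_Ar (bar): fs}

for pressure, tau_ac in ac_fwhm_fs.items():
    tau_pulse = tau_ac / SECH2_FACTOR
    print(f"P_Ar = {pressure:>4} bar: tau_AC = {tau_ac} fs "
          f"-> pulse FWHM ~ {tau_pulse:.0f} fs")
```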
Figure 2(c) shows the simulated spectra at the fiber output at different Ar pressures for a 185 fs (FWHM) Gaussian pulse with 0.85 µJ energy. The simulations are based on a unidirectional field equation41 and approximate the fiber dispersion by that of a narrow-bore capillary.42 Over the pressure range used, the NIR pulses lie in the anomalous dispersion range within the fiber. For these spectral bandwidths, however, the fiber dispersion is insufficient to compensate for the positive chirp resulting from SPM, which therefore requires further compensation using negatively chirped mirrors after the fiber. Figure 2(d) shows the simulated temporal profiles at the fiber output after introducing 2000 fs² of negative chirp (as in the experiment). As the argon pressure increases from 0 to 10 bar, the temporal FWHM decreases from 189 fs to 68 fs, in excellent agreement with the experiments. The simulations show no contribution related to pulse-induced gas ionization over the range of parameters used in the experiments.
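For readers who want to reproduce the qualitative SPM broadening, here is a bare-bones split-step sketch (one dispersion step plus one SPM step per slice). The nonlinear parameter, dispersion and peak power are rough placeholders; the paper's simulations solve a full unidirectional field equation with pressure-dependent gas dispersion:

```python
import numpy as np

# Toy split-step propagation: SPM plus constant anomalous GVD.
# gamma, beta2 and P0 are order-of-magnitude placeholders chosen to give
# a few radians of nonlinear phase; they are not the paper's fiber model.
T0    = 185 / (2 * np.sqrt(2 * np.log(2)))  # fs, Gaussian width (185 fs FWHM)
P0    = 4.6e6        # W, ~0.85 uJ / 185 fs peak power (rough estimate)
gamma = 1.4e-6       # 1/(W*m), Ar-filled kagome-PCF scale (assumed)
beta2 = -200.0       # fs^2/m, weak anomalous GVD (assumed)
L_fib = 0.55         # m, fiber length
nz    = 1000

t  = np.linspace(-2000.0, 2000.0, 4096)               # fs grid
w  = 2 * np.pi * np.fft.fftfreq(t.size, t[1] - t[0])  # rad/fs
A  = np.sqrt(P0) * np.exp(-t**2 / (2 * T0**2))        # input envelope
dz = L_fib / nz
lin = np.exp(0.5j * beta2 * w**2 * dz)                # dispersion operator

for _ in range(nz):
    A = np.fft.ifft(lin * np.fft.fft(A))              # dispersion step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)       # SPM step

S = np.abs(np.fft.fftshift(np.fft.fft(A)))**2         # output spectrum
f = np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0])) * 1e3  # THz
mask = S > S.max() / 2
print(f"nonlinear phase ~ {gamma * P0 * L_fib:.1f} rad; "
      f"spectral FWHM ~ {f[mask].max() - f[mask].min():.2f} THz")
```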
B. Phase-locked THz
The NIR pulses measured in Figs. 2(a) and 2(b) are injected into the THz-TDS scheme for phase-locked THz generation and detection by electro-optic sampling. Figure 3(a) shows the resulting time-resolved THz field. Simply by adjusting the gas pressure, the peak THz amplitude can be increased by a factor of ∼4. This increase is caused by temporal compression of the NIR pulses, leading to higher peak powers and, consequently, to more efficient nonlinear frequency down-conversion. The corresponding THz spectral amplitudes are shown in Fig. 3(b). Distinctly different behavior is observed above and below a frequency of 1 THz: the amplitude of the higher spectral components is enhanced as P Ar is increased, while no significant change is observed in the sub-1-THz portion of the spectrum. As a result, the THz bandwidth can be increased from 2.3 THz to 4.5 THz (FWHM). The sudden drop in the spectral amplitude at 6.2 THz is related to the restricted phase-matching conditions in the two 220 µm-thick GaP crystals used for THz generation and detection, which ultimately limit the attainable THz bandwidth. The results agree well with the calculated phase-matching cut-off frequency at 6.6 THz.43 Interestingly, SPM in the kagomé-PCF produces an unevenly distributed spectrum in the NIR pulses, which has a direct impact on the generated THz spectrum. At P Ar > 5 bar the NIR spectrum departs from a Gaussian-like distribution, displaying two side-lobes separated by Δν_SL. Since the THz radiation is produced by difference frequency mixing between NIR pulse components, these side-lobes are expected to enhance THz generation around Δν_SL, resulting in distinct peaks in the spectra. In the experiment, however, the electro-optic detection process prevents us from clearly distinguishing this peak, since the detection efficiency is not homogeneous over the whole spectral bandwidth.44,45 Due to the time-resolved configuration, the amplitude of the lowest and highest THz frequencies is under-estimated: low frequencies have a larger spot size on the detection crystal and do not overlap as well with the focused gating pulse, while high THz frequencies suffer from a phase mismatch with the gating pulse inside the detection crystal. The peak in the measured THz spectra is therefore mostly determined by the time-resolved detection response rather than by the spectral shape of the generated THz radiation.
III. CONCLUSION
A pressure-tunable pulse shaper based on gas-filled kagomé-PCF can be used to prepare NIR pulses for efficient broadband THz-TDS. As the Ar pressure is increased in the PCF, spectral broadening and temporal compression of the NIR pulses allow the bandwidth of the measured THz spectrum to be broadened by a factor of 2, the highest frequency component at ∼6 THz being limited only by phase-matching conditions in the experiment. This scheme could also be used to access higher THz frequencies if the argon pressure is increased beyond 10 bar and different nonlinear generation and detection crystals are used, such as GaSe, LGS, DAST, or AgGaS2. In brief, a single fiber-based module, combined with an ultrafast source of relatively narrow linewidth (<4 THz), can be used for broadband THz-TDS, paving the way to the design of more compact and cost-effective THz-TDS setups capable of reaching high THz frequencies without the need for complex optical sources based on ultrashort Ti:sapphire amplifiers, synchronized fiber lasers, or optical parametric chirped-pulse amplifiers. Since HC-PCFs are robust and able to guide extremely high peak powers, they may also enable the use of high-power, high-repetition-rate lasers for THz-TDS.46,47
FIG. 2. (a) Spectra of the NIR pulses measured after the HC-PCF and the CM pair for different Ar pressures. (b) Corresponding autocorrelation traces. The FWHM durations measured at P Ar = 0, 2.5, 5, 7.5, and 10 bar are 185, 150, 115, 85, and 65 fs, respectively. [(c) and (d)] For the same conditions, the numerical simulations41 of the pulse spectra and durations show good agreement with the experiments.
"Physics"
] |
Related bifunctional restriction endonuclease-methyltransferase triplets: TspDTI, Tth111II/TthHB27I and TsoI with distinct specificities
Background We previously defined a family of restriction endonucleases (REases) from Thermus sp., which share common biochemical and biophysical features, such as the fusion of both the nuclease and methyltransferase (MTase) activities in a single polypeptide, cleavage at a distance from the recognition site, large molecular size, modulation of activity by S-adenosylmethionine (SAM), and incomplete cleavage of the substrate DNA. Members include related thermophilic REases with five distinct specificities: TspGWI, TaqII, Tth111II/TthHB27I, TspDTI and TsoI. Results TspDTI, TsoI and isoschizomers Tth111II/TthHB27I recognize different, but related sequences: 5'-ATGAA-3', 5'-TARCCA-3' and 5'-CAARCA-3' respectively. Their amino acid sequences are similar, which is unusual among REases of different specificity. To gain insight into this group of REases, TspDTI, the prototype member of the Thermus sp. enzyme family, was cloned and characterized using a recently developed method for partially cleaving REases. Conclusions TspDTI, TsoI and isoschizomers Tth111II/TthHB27I are closely related bifunctional enzymes. They comprise a tandem arrangement of Type I-like domains, like other Type IIC enzymes (those with a fusion of a REase and MTase domains), e.g. TspGWI, TaqII and MmeI, but their sequences are only remotely similar to these previously characterized enzymes. The characterization of TspDTI, a prototype member of this group, extends our understanding of sequence-function relationships among multifunctional restriction-modification enzymes.
Background
Subtype IIS enzymes are a growing group of atypical REases that recognize a specific DNA sequence and cleave outside it at a defined distance, up to 21 nt, within any sequence [1,2]. Since their discovery, they have attracted considerable attention as objects of basic research in the field of protein-DNA interactions and as advanced tools for genetic engineering. One of the most intensively studied REases is FokI, specific to 5'-GGATG(N9/13)-3' sites, where the asymmetry of the recognition site apparently imposes an unusual type of interaction with DNA: the large protein, monomeric in solution, transiently forms dimers and binds two recognition sites while a DNA loop is generated [3]. Another subtype IIS REase, MmeI, not only cleaves DNA at 20/18 nt, one of the cleavage sites furthest removed from the recognition site, but also represents a model of a minimal restriction-modification system, in which only one (the top) strand of the recognition site is methylated [4]. Molecular applications of subtype IIS enzymes, especially FokI, have been developed since the 1980s, including a universal REase cleaving DNA at a pre-programmed site [5][6][7][8], Achilles' Heel Cleavage [9,10], gene amplification [11], gene fusion [12], unidirectional DNA trimming [13], locating methylated bases in DNA [14], gene mutagenesis using excision linkers [1], and others [1,[5][6][7][8]. Chandrasegaran et al. have developed a series of genetically engineered fusions of the non-specific C-terminal nuclease domain of FokI and specific DNA-binding proteins, such as zinc fingers [15,16], the Ubx homeodomain [17] or a structure-specific Z-DNA nuclease [18]. Such artificial constructs have been used to rearrange mammalian genomes [16]. A recently discovered family of enzymes from Thermus sp. [19,20] belongs not only to subtype IIS, but also to subtypes IIC and IIG. These enzymes are bifunctional, with REase and MTase activities within a single polypeptide (subtype IIC), and their cleavage is affected by SAM (subtype IIG). The experimentally characterized members of this family include TspGWI [5'-recognition sequence-3': ACGGA (11/9)] [19], TaqII [GACCGA (11/9)] [21], TspDTI [ATGAA (11/9)] [20], and TsoI [TARCCA (11/9)] [2], as well as the Tth111II/TthHB27I isoschizomer pair [CAARCA (11/9)] [2,22]. The family shares several functional aspects, including a large molecular size of approximately 120 kDa (larger than typical REases and average-sized prokaryotic proteins, but similar to other subtype IIC enzymes [2]), similarity of amino acid sequences despite distinct specificities (unusual for REases), an identical cleavage distance of 11/9 nt, an acidic isoelectric point of around 6 (except for TsoI), a domain structure related to simplified Type I REases, REase activity affected by SAM, and an origin from within the same genus Thermus, suggesting that they have evolved from one or a few common ancestors [19,20,23]. We recently reported for the TspGWI enzyme a new type of substrate-specificity change, induced by the replacement of SAM with its analogue sinefungin (SIN) [24]. The chemically induced recognition-site relaxation changes the effective recognition sequence of the REase from 5 bp to 3 bp, and hence its cleavage frequency. Such a molecular tool may be useful for generating quasi-random genomic libraries, as it is the second (after CviJI/CviJI*) most frequently cleaving REase [25].
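To make the (11/9) cleavage geometry concrete, the sketch below locates top- and bottom-strand cut positions downstream of a recognition site. The coordinate convention (offsets counted from the last base of the site, as in the REBASE-style notation) and the helper function are ours, for illustration only:

```python
# Hedged sketch of subtype IIS cleavage geometry: an enzyme such as TspDTI
# recognizes a site (here ATGAA) and cuts downstream at N11 (top strand)
# and N9 (bottom strand), leaving a 2-nt 3' overhang. Positions are
# 0-based offsets from the first base after the recognition site.
def cleavage_positions(seq, site="ATGAA", top=11, bottom=9):
    """Yield (top_cut, bottom_cut) positions for each site occurrence."""
    start = seq.find(site)
    while start != -1:
        end = start + len(site)          # first base after the site
        yield end + top, end + bottom    # cut positions on each strand
        start = seq.find(site, start + 1)

dna = "GGATGAA" + "N" * 15 + "CCC"       # invented demo sequence
for top_cut, bot_cut in cleavage_positions(dna):
    print(f"top-strand cut after position {top_cut}, "
          f"bottom-strand cut after position {bot_cut}")
```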
In this paper we describe the cloning, expression and characterization of TspDTI, followed by a bioinformatics analysis of a subfamily of closely related enzymes (TspDTI, Tth111II/TthHB27I and TsoI), which appears to be distinct from the more remotely related sub-family that includes TspGWI and TaqII REases [23].
Results and discussion
Sequencing, cloning and expression of the tspDTIRM gene
In the course of studying the new Thermus sp. family of enzymes, we cloned the genes coding for TaqII, TthHB27I, TsoI (manuscripts in preparation) and TspGWI [23]. Initial data on the TspDTI sequence were previously deposited in GenBank (EF095489.1). In this work the sequencing data were confirmed and the TspDTI coding gene was cloned de novo into a different expression system to improve protein yield. In our attempts to clone the tspDTIRM gene we experienced serious difficulties. Neither the biochemical selection for the methylation phenotype approach nor the 'white-blue' screen for DNA damage/modification resulted in the isolation of recombinant clones, which was also the case with tspGWIRM gene cloning [23]. Apparently, the low enzymatic turnover of the enzymes of the Thermus sp. family, greatly reduced activity at 37°C and incomplete cleavage of the plasmid DNA precluded positive results with the classic methods listed above, even though complete cleavage is not required for DNA damage detection in the 'white-blue' method. We therefore used a modification of the previously established, successful tspGWIRM cloning protocol ([23]; see Additional file 1). The protocol includes two stages: (i) gene nucleotide sequence prediction starting from N-terminal and internal amino acid sequences of REase proteolytic fragments, followed by PCR using degenerate and non-degenerate primers, and (ii) direct in-frame insertion of the amplified tspDTIRM gene into a strictly temperature-regulated Escherichia coli (E. coli) pACYC184-derived expression vector, containing a P R bacteriophage lambda promoter and overexpressing the bacteriophage lambda thermolabile CI repressor. The system makes use of low-temperature cultivation under permissive conditions (ca 28°C), which prevents REase expression and suppresses the activity of any leaky thermophilic REase.
The tspDTIRM gene nucleotide sequence was determined using an approach similar to that for TspGWI [23]; however, there are substantial differences in the execution of the method. The native TspGWI N-terminus could not be sequenced, so we had to initiate sequencing from internal proteolytic peptides and perform the PCR/sequencing divergently with degenerate primers, followed by non-degenerate ones. In contrast, sequencing of the native TspDTI N-terminus was not problematic. Sequencing of the intact protein yielded a long 35-amino-acid stretch with a relatively good signal: MSPSREEVVAHYADRLHQVLQKTIAQNPNEAEFRR. In addition, short internal 18- and 12-amino-acid sequences, LGAPVFSALAAADGGTLQ (peptide 1) and REPREPEFYGIMDIG (peptide 3), were obtained from proteolytic fragments (Figure 1; Additional file 1). Based on the amino acid sequences, the primers were designed for the most part arbitrarily, founded on a back-translated amino acid sequence using the codons assumed to occur with the highest probability, as concluded from codon usage data for ORFs of Thermus sp. genes available in GenBank. The high GC content of Thermus genes (app. 70% GC) was also considered in codon selection, whenever applicable. Sets of combined primers were used to complete the entire tspDTIRM ORF as well as short stretches of the flanking regions (see the Methods section).
The verified tspDTIRM ORF was cloned into a P R promoter vector and subjected to E. coli expression optimization experiments (data not shown). Recombinant TspDTI protein was purified using a 6-step procedure, with protein expression optimized in E. coli (Figure 2). Interestingly, in spite of the cloning being under the control of the strong P R promoter, the protein becomes detectable in the induced cells only after 3 h of growth under non-permissive conditions, and keeps accumulating until the late stationary phase, even after 12 h of cultivation at 42°C. This is probably due to a combination of the following factors: (i) the GC-rich ORF sequence, distant from the E. coli optimum codon usage, (ii) the slow transcription of the GC-rich tspDTIRM gene, (iii) the presence of numerous hairpin structures within the gene, and (iv) the very large size of the protein to be translated. Nevertheless, optimization of expression culture growth/induction conditions yielded adequate amounts of TspDTI (about 0.4 mg of protein per litre of bacterial culture).
Properties of the tspDTIRM gene
The tspDTIRM gene ORF coding for the bifunctional REase-MTase protein is 3339 bp in length, encoding a 1112-amino-acid polypeptide [GenBank: EF095489, ABO26711]. The calculated molecular weight of TspDTI is 126,885 Da, atypically large for a prokaryotic protein. The sizes of the Thermus sp. family enzymes were compared and shown to match the estimates from SDS/PAGE (Figures 2 and 3) and from molecular sieving of the native protein [20], indicating its monomeric organization, just like other Thermus sp. family members (Table 1). The calculated isoelectric point is 6.68, indicating that TspDTI is a slightly acidic protein.
Typically, REases and other DNA-interacting proteins are rather basic proteins. A low pI is found in 5 out of 6 Thermus sp. family enzymes: TspDTI, Tth111II, TthHB27I, TaqII and TspGWI. Only TsoI is moderately basic, with a calculated pI of 8.11 (Table 1). No sequence similarity of TspDTI to any MTase or DNA-binding protein was found in the flanking regions of the TspDTI ORF. The ORF begins with an ATG START codon and contains 3 putative upstream RBSs: -8 AG, -11 AGAAA and -18 GGA (see Additional file 2). The ORF is GC-rich (57.99%); however, this is markedly lower than for the tspGWIRM gene (69.19%) (GenBank: EF095488, ABO26710), TaqII (GenBank: AY057443, AAL23675) (Table 1) and other Thermus genes [20,26], suggesting that tspDTIRM might have been acquired or evolved differently, at least diverging at a later stage, possibly including horizontal gene transfer from a non-related bacterium.
Bioinformatics analyses of TspDTI: Prediction of domains and functional motifs
Isolation and sequencing of the tspDTIRM gene revealed the predicted amino acid sequence of the encoded protein. Searches of REase sequences deposited in REBASE exhibited an overall similarity to a number of genuine and putative Type IIC enzymes, including the previously characterized nucleases TthHB27I and Tth111II (BLAST e-value 0, alignment covering essentially the whole protein length). Despite the very high sequence similarity, these two enzymes exhibit a different sequence specificity (CAARCA) [2,22] than TspDTI. Interestingly, two other Type IIC enzymes from Thermus, i.e. TspGWI (GenBank: EF095488, ABO26710) and TaqII (GenBank: AY057443, AAL23675), showed very low sequence similarity to TspDTI in pairwise comparisons (BLAST e-value 0.001, limited to a very short region of ~75 residues) and were thus excluded from the alignment (Figure 4).
Further bioinformatics analyses, in particular the comparison of sequence profiles, which is more sensitive than pairwise sequence comparison (see Methods), showed that the central and C-terminal regions of TspDTI (aa ~370-1050) exhibit significant similarity to the DNA:m6A MTase M.TaqI, whose structure is known (HHSEARCH e-value 0). M.TaqI belongs to the γ-class of DNA:m6A MTases, which is characterized by the following primary structure: an N-terminal catalytic Rossmann-fold MTase (RFM) domain with the motif order X-I-II-III-IV-V-VI-VII-VIII, followed by the DNA-binding domain, the so-called 'target recognition domain' (TRD), in the C-terminus [27,28]. The alignment between TspDTI and M.TaqI spanned both the RFM and TRD domains. The N-terminal region of the TspDTI sequence, which extends beyond the region of homology to M.TaqI, exhibited limited sequence similarity (HHSEARCH e-value 0.087) only to the HSDR_N family, which belongs to the PD-(D/E)XK superfamily of nucleases (accession number pfam04313 in the PFAM database). Further, a multiple sequence alignment of TspDTI homologues revealed the presence of a candidate PD-(D/E)XK motif (Figure 4), resembling the active site of many REases and other nucleases [29]. Thus, TspDTI appears to comprise domains homologous to known nuclease and DNA:m6A MTase catalytic domains, and to the 'TRD' domain characteristic of γ-class DNA:m6A MTases.
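As a toy illustration of this kind of motif spotting, the sketch below runs a deliberately loose regular-expression scan for a PD-(D/E)XK-like pattern; real detection relies on far more sensitive profile-profile methods such as HHSEARCH, and the demo sequence is invented:

```python
import re

# Toy scan for a PD-(D/E)XK-like nuclease motif. The loose regex only
# mimics the idea: any residue followed by D, a short spacer, then
# (D/E)-X-(K/E). Allowing E in the last position admits the atypical
# D-EXE variant seen in TspDTI and its relatives.
PDXK_LIKE = re.compile(r"[A-Z]D.{5,30}[DE].[KE]")

def scan(name, fragment):
    for m in PDXK_LIKE.finditer(fragment):
        print(f"{name}: candidate motif '{m.group()}' at {m.start()}-{m.end()}")

# Hypothetical fragment for illustration only (not the TspDTI sequence).
scan("demo", "MLKPDAVLRTGFEQWLNEIEAEGKVTR")
```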
To confirm the sequence-based predictions, we carried out a protein fold-recognition (FR) analysis (see Methods) with the aim of predicting the structures of individual domains in TspDTI. Since the FR method is designed to identify remote homology and predict structure for domain-size sequence fragments (20-500 aa), the TspDTI sequence was split into a series of overlapping segments and submitted to the GeneSilico metaserver [30]. FR analysis of the TspDTI sequence confirmed the existence of the enzymatic and DNA-recognizing domains predicted by sequence analysis, albeit with low scores for the N-terminal and C-terminal domains (Table 2). Structure prediction also revealed the presence of a helical linker between the PD-(D/E)XK and RFM domains. The sequence of this region alone displayed no significant similarity to any known protein domain. When this region was analysed together with the neighbouring RFM domain, some of the fold-recognition methods proposed Type I enzyme structures as templates, with the alignment spanning both the RFM domain (present in all DNA MTases) and the helical domain characteristic only of Type I enzymes, which is involved in mediating protein-protein interactions [31]. However, the alignment of the helical linker regions of TspDTI and Type I MTases was too poor to establish with confidence whether they have similar tertiary structures.
Combined sequence analysis and structure prediction (Figures 4 and 5) enable us to propose the key functional residues of TspDTI. In the PD-(D/E)XK domain, the putative catalytic residues are D66, E75, and E77. Thus, the nuclease domain of TspDTI exhibits an atypical D-EXE pattern, which has been observed previously, e.g. in the R.BamHI enzyme [32]. Interestingly, the same pattern is present in the nuclease active sites of TthHB27I, Tth111II and TsoI (see Additional file 3), while other homologues of TspDTI (putative nucleases) exhibit the typical D-EXK pattern (Figures 4 and 5). In the RFM domain, the SAM-binding site includes the carboxylate residues D422, D464, and D500 (from motifs I, II and III respectively), while the target adenine-binding site includes the NPPW626 tetrapeptide (motif IV) and F732 (motif VIII). At this stage of the analysis, the details of sequence-specific DNA recognition by TspDTI cannot be predicted. However, based on the identification of homologous loops in the sequence of TspDTI and in the protein-DNA complex of M.TaqI, we suggest that the following regions may harbour specificity determinants of TspDTI: WTRLAK968, PQET987 and KSMGS1028. In accordance with this prediction, while the corresponding regions in TthHB27I and Tth111II (enzymes that recognize a DNA sequence other than TspDTI) possess significantly different amino acid residues, they show greater similarity to TspDTI in surrounding regions that are not expected to make direct contact with the DNA. The testing of these predictions, however, is beyond the scope of this article.
Summarizing, the results of the bioinformatics analyses suggest that TspDTI is a fusion protein, comprising an N-terminal PD-(D/E)XK nuclease domain, a helical linker, a γ-class DNA:m6A MTase (RFM) domain and a C-terminal target recognition domain.
In spite of the remote sequence similarities between REases, certain structural analogies are emerging. In particular, based on the crystal structure of the Type IIG bifunctional enzyme BpuSI, an alpha-helical domain that connects the endonuclease and MTase domains has been suggested to regulate and physically couple their relative conformations and activities, and possibly to establish the cleavage distance from the enzyme's target site [34]. The helical domain in TspDTI and its relatives is likely to fulfil a role similar to that of its counterpart in BpuSI.
Enzymatic properties of TspDTI
Native and recombinant TspDTI proteins were purified to homogeneity ([20]; Figure 2) and used to study the biochemical features and reaction conditions of the DNA cleavage and methylation activities of the enzyme. The apparent molecular mass of the native protein under denaturing conditions was found to be 114.5 kDa [20], corresponding to the molecular mass of cloned TspDTI isolated from E. coli DH11S [pRZ-TspDTI] ([20]; Figure 2). A comparative assay of the recognition specificity, cleavage distance and reaction buffer requirements of the two enzymes revealed no difference (not shown). A control purification from E. coli devoid of pRZ-TspDTI (limited to the first chromatographic step) did not show any DNA-cleaving activity (not shown). The molecular mass was evidently very similar between the three members (TspDTI, TsoI and TthHB27I) of the 'TspDTI subfamily', as only prolonged SDS/PAGE showed a slight differentiation between the enzymes (Figure 3). Recombinant TspDTI was also subjected to analytical gel filtration in a buffer of near-physiological composition containing 3 mM MgCl2 (in the absence of DNA), under conditions described previously [20]. The experiment showed that the recombinant REase behaved as a monomer, just like the native TspDTI (Table 1). We showed previously that the temperature activity range extends from 42°C to 85°C (10% or more activity), with the maximum observed at 65-75°C, while a 20 min incubation at 89°C deactivates the enzyme. Incubation at 37°C results in approx. 5% activity. The optimal ionic strength is in Tris-HCl-buffered (pH 8.0-8.5) MgCl2 solution, without any added salt [20]. As expected, TspDTI maintains an absolute requirement for Mg2+ for cleavage activity. Remarkably, the effect of Ca2+ ions differs from that in TspGWI [23]. TspGWI MTase activity is strongly stimulated by Ca2+ and SAM, whereas restriction activity is not supported [23]. Compared with the effect of Mg2+ ions, TspDTI restriction activity is stimulated by Ca2+ ions, but to a lesser extent, and there is no difference in the digestion patterns in the presence and absence of SAM (Figure 6A, lanes 2 and 3). Essentially the same TspDTI digestion patterns were observed regardless of whether the substrate DNA was incubated with enzyme and Ca2+ ions in the MTase buffer only, or subjected to subsequent cleavage with TspDTI in the presence of Mg2+ following previous incubation with the enzyme and Ca2+ ions (Figure 6B, lanes 4 and 6). These results indicate that after incubation of TspDTI and substrate DNA in the MTase buffer with Ca2+ ions, TspDTI cannot further cleave such DNA, even though the Ca2+/TspDTI-treated DNA is carefully purified and subjected to subsequent incubation with an excess of TspDTI in the optimal TspDTI REase buffer supplemented with Mg2+ ions (Figure 6B, lanes 4 and 6). The observed predominance of 'resistant' DNA indicates that Ca2+ ions (i) do not inhibit MTase activity, while stimulating the REase only marginally, or (ii) stimulate both enzyme activities, with a bias towards methylation activity. Hence, it is possible that after the incubation both the restriction and methylation processes are completed, leaving the substrate DNA either cleaved or methylated. The observed effect could also be explained by the existence of methylation-independent (unmodified), REase-resistant sites.
Such a possibility, however, is rather remote: compared with the Ca2+/TspDTI-treated DNA (Figure 6B, lane 6), previously non-incubated substrate DNA subjected to cleavage in the presence of Mg2+ ions is cut to a greater extent (Figure 6A, lanes 4-5; Figure 6B, lane 2).
These results corroborate enzymological investigations into the mode of action of another subtype IIG/IIC bifunctional enzyme, Eco57I (Lubys Arvydas, personal communication). This may reflect differences in the structure of the catalytic sites. In the standard restriction buffer with Mg2+ ions, TspDTI restriction activity is not stimulated by S-adenosylhomocysteine (SAH) or ATP (Figure 7, lanes 4 and 5), but is stimulated equally by both SAM and its analogue SIN, which is not a methyl-group donor (Figure 7, lanes 2 and 3). This leads to the conclusion that both restriction and methylation activities are SAM-stimulated. Nevertheless, it is important to note that the enzymes of the Thermus sp. family exhibit a spectrum of responses to SAM, which makes this group interesting. TspGWI restriction activity is actually slightly inhibited by SAM [23]. The observed negative response suggests that the enzyme is still capable of binding SAM, but that conformational allosteric stimulation of TspGWI is somehow anti-functional. Alternatively, the effect described above may be associated with two competing reactions, DNA restriction and methylation: the addition of SAM may shift the equilibrium of the reactions in favour of DNA methylation. On the other hand, the inhibition of REase activity by SAM may play a role in the regulation of TspGWI REase versus MTase activities in vivo. Two analogues of SAM, SIN (with charge distribution reversed compared to SAM) and SAH, exert very different influences on TspDTI (Figure 7, lanes 2 and 4). SIN stimulates REase activity, which suggests that the Thermus sp. family enzymes may have two physically separate binding sites for SAM: one for allosteric stimulation of the REase activity and another for typical SAM binding/methylation. Alternatively, binding to a single SAM-specific protein region may induce a conformational change in this large protein that also affects the distant REase catalytic domain, so that cleavage activity is enhanced several-fold. Another possibility is that SIN, being an analogue mimicking an 'unreacted' methyl-group donor (SAM), causes the enzyme to maintain a conformation different from that maintained in the presence of the MTase reaction product, SAH. These results also suggest a mixed type of mutual dependence of the REase and MTase activities: while autonomous enough to perform their functions independently, some sort of intertwined communication still occurs between the functional domains. We managed to separate the REase and MTase activities using site-directed mutagenesis of the genes encoding the TspGWI and TaqII enzymes ([23], manuscript in preparation). Since the recombinant tspDTIRM gene alone was cloned into E. coli without an additional MTase (which has thus far not been found) and stably maintained at 28°C, it is possible that the built-in MTase activity is sufficient to protect the recombinant host DNA from autorestriction. Moreover, a much reduced restriction activity (<5%) at low temperature and the presence of cellular SAM appear to favour methylation in vivo.
Further research is needed to evaluate the existence of a separate MTase, contributing to the overall TspDTI modification activity and SAM influence. The gene encoding such an MTase may be located at a greater distance than the flanking regions of the tspDTIRM gene sequenced so far.
Conclusions
(i) The modified protocol for cloning thermophilic REases was applied.
(ii) The tspDTIRM gene coding for 126.9 kDa TspDTI was sequenced and cloned.
(iii) Active bifunctional REase-MTase protein was expressed in E. coli and purified to homogeneity.
(iv) Bioinformatics studies predicted REase and MTase binding/catalytic motifs: the atypical D-EXE pattern as opposed to the TspGWI/TaqII PD-(D/E)XK pattern, DPACGSG and NPPW, and showed a modular structure of TspDTI.
Native TspDTI purification and proteolysis of TspDTI and amino acid sequence determination
The native TspDTI enzyme was isolated from Thermus sp. DT as described previously [20]. Purified native TspDTI was subjected to limited TPCK-trypsin digestion to obtain internal polypeptides. Proteolysis of TspDTI was conducted in buffer T (20 mM Tris-HCl pH 8.3, 25 mM KCl, 3 mM MgCl2, 5% glycerol, 0.05% Tween 20, 0.5 mM DTT) with gel-immobilized TPCK-trypsin, with shaking at 24°C for 3 h. The immobilized TPCK-trypsin was removed by centrifugation. Purified native TspDTI and the supernatant containing TspDTI protein fragments were run on a 10% SDS/PAGE denaturing gel and electroblotted onto a PVDF membrane in 100 mM CAPS-NaOH buffer pH 11.0. The N-terminal amino acid sequence analysis of the polypeptides was performed on a gas-phase sequencer (Model 491, Perkin Elmer-Applied Biosystems). The phenylthiohydantoin derivatives were analysed by on-line gradient high-performance liquid chromatography on a Microgradient Delivery System Model 140 C equipped with a Programmable Absorbance Detector Model 785A and Procise software (Perkin Elmer-Applied Biosystems).
Determination of the nucleotide sequence and cloning of the tspDTIRM gene
The gene nucleotide sequence was obtained using a combination of PCR steps employing degenerate and non-degenerate primers (Additional file 1). In the first step, sets of degenerate/arbitrary primers, forming alternative pairs, were designed. The primers were designed arbitrarily on the basis of a back-translated amino acid sequence using codons concluded from codon usage data for ORFs of Thermus sp. genes and assumed from the high GC content of Thermus genes (app. 70% GC). A 105 bp tspDTIRM gene fragment was amplified with primers designed on the basis of the 35-amino-acid N-terminal sequence of TspDTI (Additional file 1). The forward primer 5'-ATGT(GC)CCCCTCCCGGGAGGAGGT(GC)GT(GC)GC(GC)CACTA-3' and the reverse primer 5'-CCG(GC)CGGAACTC(GC)GCCTCGTTGGGGTTCTG-3' were used. PCR was performed using an Applied Biosystems 2720 thermocycler in 100 μl of a reaction mixture containing 10 mM Tris-HCl pH 9.1, 50 mM KCl, 1.5 mM MgCl2, 0.1% Triton X-100, 6% formamide, 100 ng Thermus sp. DT genomic DNA, 0.25 μM of each primer, 100 μM of each dNTP and 5 U Taq DNA polymerase. The cycling conditions employed a denaturation step of 3 min at 97°C, followed by the addition of the Taq DNA polymerase at 85°C and 30 cycles of 30 s denaturation at 95°C, 30 s annealing at 55°C and 1 min elongation at 72°C. The 105 bp PCR product was agarose-gel isolated, cloned into the pAPS vector at the SmaI site and sequenced. Insert sequencing established an internal 50 bp native sequence, not modified by the primers, which was then used as a new non-degenerate primer (FdtN-ter: 5'-TGACAGGCTTCACCAAGTTCTTCAGAAAACCA-3') anchor for amplification with downstream degenerate/arbitrary reverse primers (Additional file 1).
The downstream portion of the tspDTIRM gene was obtained using the amino acid sequences of internal proteolytic fragments. Homogeneous native TspDTI (isolated from Thermus sp. DT) [20] was subjected to limited proteolysis (Figure 1). Digestion yielded a stable partial digestion pattern of five bands (with limiting amounts of the protease used) (Figure 1). Three of the five bands, of approximate sizes 60, 35, 25, 21 and 14.4 kDa, were subjected to protein sequencing (Figure 1, peptides 1, 2 and 3) and yielded short internal 18- and 12-amino-acid sequences: LGAPVFSALAAADGETLQ (peptide 1) and REPEFYGIMDIG (peptide 3). Two reverse primers designed on the basis of the amino acid sequences obtained, 1RDTpep1 5'-TCGGCGGCGGCGAGGGCGCTGAACAC-3' and 1RDTpep3 5'-CC(GT)AT(GA)TCCAT(GT)AT(ACGT)CCGTAGAACTC(GT)GGCTCCC-3', resulted in PCR products of approx. 800 bp and 2800 bp (Additional file 1). These DNA fragments were cloned into the pAPS vector at the SmaI site and sequenced.
The combination of standard PCR and the 'promiscuous PCR' setup yielded a 3943 bp contig containing the complete tspDTIRM gene. Each strand was re-sequenced from the entire de novo amplified contig using a Thermus sp. DT genomic DNA template. Regions containing discrepancies were sequenced several times under various conditions.
Analysis of nucleotide sequences
DNA sequences were obtained using the ABI Prism 310 automated sequencer with the ABI Prism BigDye Terminator Cycle Sequencing Ready Reaction Kit (Perkin Elmer Applied Biosystems, Foster City, CA, USA). The sequence data were analysed using ABI Chromas 1.45 software (Perkin Elmer Applied Biosystems) and DNA-SIS 2.5 software (Hitachi Software, San Bruno, CA, USA).
Overexpression of the TspDTI enzyme employed the modified vector pRZ4737 [36], a derivative of pACYC184 plasmid [37], carrying a lambda DNA section with the PR promoter under the control of the CI thermolabile repressor. The cI gene was located on the pRZ4737 backbone, allowing for host-independent expression in E. coli.
The tspDTIRM gene was PCR amplified with proofreading Taq-Pfu DNA polymerase using the oligonucleotides 5'-CGCCATGGCGAGCCCTTCCAGGGAA GAAGTTGTTG-3' and 5'-CAAAGATAATTTCGTC-GACCCGCTCCTCTTC-3', which introduced the NcoI recognition site (underlined) at the 5'-end and the SalI recognition site (underlined) after the tspDTIRM gene STOP codon at the 3'-terminus. The introduction of the NcoI site, generating unique sticky ends, resulted in the addition of a GCG codon (encoding alanine) following the START codon. The PCR fragment obtained was digested with both NcoI and SalI REase and cloned into a pRZ4737 vector digested with compatible BspHI and SalI REases [35] to form a pRZ-TspDTI clone.
Expression of the tspDTIRM gene under the P R promoter in E. coli and purification of the recombinant TspDTI RM enzyme
The pRZ-TspDTI clones were subjected to protein expression experiments. E. coli DH11S [pRZ-TspDTI] was subjected to mini-scale expression in 50 ml TB medium supplemented with chloramphenicol and maltose at 28°C with vigorous aeration, followed by P R promoter induction by a temperature shift to 42°C when OD 600 reached 0.7. Culture growth was continued for 12 h at 42°C. Uninduced control and induced cells were subjected to SDS/PAGE, and gels were analysed for the appearance of the expected band of approx. 126 kDa and for endonucleolytic activity in crude lysates.
Expression of tspDTIRM in E. coli DH11S [pRZ-TspDTI] was initiated with a bacterial inoculum washed from a Petri dish into 1 L of rich TB medium supplemented with chloramphenicol, at 28°C. The culture was grown with vigorous aeration until OD 600 reached 0.3; then the culture was transferred to a fermentor vessel containing 9 L of the medium and grown until OD 600 was 0.6. Induction was achieved with a rapid temperature increase to 42°C by the addition of 7 L of the medium warmed to 70°C, and growth was continued for 17 h at 42°C. Once an OD 600 of 2.0 was reached, the culture was cooled to 4°C and the cells were recovered by centrifugation. The purification steps used were as described previously for the native enzyme [20], with the following modifications: 1. Polyethyleneimine (PEI) removal of nucleic acids was performed with the bacterial pellet suspended in buffer A1 (50 mM Tris-HCl pH 7.0; 150 mM NaCl; 5 mM EDTA; 5 mM βME; 0.1% Triton X-100, 1 mM AEBSF and 20 μg/ml benzamidine).
5. Heparin-agarose chromatography was used as the fifth step, using buffer D.
REase and MTase assays
For REase assays various conditions were used, depending on the experiment. The reactions were performed in 50 μl of 'primary TspDTI REase' buffer (10 mM Tris-HCl pH 8.5 at 25°C; 1 mM DTT), supplemented with appropriate additives and DNA substrates. One unit of the TspDTI REase is defined as the amount of enzyme required to hydrolyse 1 μg of pUC19 in 1 h at 70°C in 50 μl of 'primary TspDTI REase' buffer, enriched with 10 mM MgCl 2 and 50 μM SAM, resulting in a stable partial cleavage pattern.
The in vitro modification activity of the TspDTI enzyme was tested by a DNA protection assay, in which 0.5 μg of a 390 bp PCR DNA fragment was used as a substrate in 50 μl of TspDTI MTase buffer (10 mM Tris-HCl pH 8.5; 1 mM DTT; 200 μM SAM) supplemented either with 10 mM CaCl2 or with 10 mM EDTA. After addition of the TspDTI protein, the reaction mixture was incubated for 16 h at 70°C. Proteinase K was added to the solution and the incubation continued for an additional 60 min at 55°C. Samples were purified to remove all traces of proteins and divalent cations from the methylation reaction mixture, and the resulting DNA was challenged with an excess of TspDTI (2:1 molar ratio of enzyme to recognition sites) for 1 h in 50 μl of 'primary TspDTI REase' buffer supplemented with 10 mM MgCl2 at 70°C. The reaction products were then resolved by agarose gel electrophoresis.
"Biology",
"Chemistry"
] |
Dengue research in India: A scientometric analysis of publications, 2003-12
Address for correspondence: Dr. Mueen Ahmed KK, Director, SciBiolMed.Org, No. 24, Bore Bank Cross Road, Harris Main Road, Benson Town, Bangalore, India. E-mail: <EMAIL_ADDRESS>

The present study quantitatively analyzes Indian dengue research output during the 10 years from 2003 to 2012, using the Scopus international multidisciplinary database. The study focused on the global publication output, share, rank, and citation impact of the top 15 most productive nations; India's publication output, growth, global publication share, and research impact; the share of international collaborative papers in the national output and the share of major international collaborative partner countries in India's total international collaborative papers; the contribution of various sub-fields and the distribution by population age groups; the productivity and citation impact of leading Indian institutions and authors; and the Indian contribution to the most productive journals. The Indian contribution to dengue fever research consisted of 910 papers, which increased from 27 papers in 2003 to 193 papers in 2012, witnessing an annual average growth rate of 28.19%. Among the top 15 most productive countries, India holds second position in dengue fever research output, with a global publication share of 10.22% during 2003-12. The average citation per paper scored by India was 3.27, the least among the top 15 most productive countries during 2003-12. India's share of international collaborative papers was 10.55% during 2003-12, which increased from 9.12% during 2003-07 to 11.13% during 2008-12. India's present research effort in dengue is low in view of the 50,222 cases of dengue reported in 2012 alone. The country needs to increase its research output and its research impact substantially, particularly through enhanced national and international collaboration, besides evolving a national policy for the identification, monitoring, and control of dengue cases and a research strategy with sufficient funding commitment to solve this growing national problem.
INTRODUCTION
Viruses are tiny agents that can infect a variety of living organisms, including bacteria, plants, and animals. Like other viruses, the dengue virus is a microscopic structure that can only replicate inside a host organism. The dengue viruses are members of the genus Flavivirus in the family Flaviviridae. Along with the dengue virus, this genus also includes a number of other viruses transmitted by mosquitoes and ticks that are responsible for human diseases. Flavivirus includes the yellow fever, West Nile, Japanese encephalitis, and tick-borne encephalitis viruses. [1] Dengue fever (DF), also known as breakbone fever, is an infectious tropical disease caused by the dengue virus. Symptoms include fever, headache, muscle and joint pains, and a characteristic skin rash (similar to measles). In a smaller proportion of cases, the disease develops into life-threatening dengue hemorrhagic fever (DHF), resulting in bleeding, low levels of blood platelets, and blood plasma leakage, or into dengue shock syndrome (DSS), where dangerously low blood pressure occurs. [2] Dengue is transmitted between people by the mosquitoes Aedes aegypti and Aedes albopictus, which are found throughout the world. Aedes aegypti mosquitoes are the primary vector of dengue. The virus is transmitted to humans through the bites of infected female mosquitoes. Aedes aegypti mosquitoes live in urban habitats and breed mostly in man-made containers. Unlike other mosquitoes, Aedes aegypti is a daytime feeder; its peak biting periods are early in the morning and in the evening before dusk. Aedes albopictus, a secondary dengue vector in Asia, has spread largely due to the international trade in used tyres (a breeding habitat) and other goods (e.g., lucky bamboo). [3] The incidence of dengue has grown dramatically around the world in recent decades. Over 2.5 billion people - over 40% of the world's population - are now at risk from dengue. The incidence of dengue has increased 30-fold over the past 50 years. The World Health Organization (WHO) currently estimates that there may be 50-100 million dengue infections worldwide every year. Before 1970, only nine countries had experienced severe dengue epidemics. The disease is now endemic in more than 100 countries. An estimated 500,000 people with severe dengue are hospitalized each year, a large proportion of them children. About 2.5% of those affected die. [3] Several southeast Asian countries are seeing record numbers of people infected with DF, a mosquito-borne virus for which there is currently no approved vaccine or specific drug treatment. Researchers from the University of Oxford and the Wellcome Trust estimated that 70% of the world's serious dengue cases are in Asia, with India alone accounting for 34% of the total. [4]
The Directorate of the National Vector Borne Disease Control Program (NVBDCP) is the central nodal agency for the prevention and control of vector-borne diseases, that is, malaria, dengue, lymphatic filariasis, kala-azar, Japanese encephalitis, and chikungunya in India. It is one of the Technical Departments of the Directorate General of Health Services, Government of India. There was a surge in dengue cases in the country in 2012, when as many as 50,222 cases were reported against 18,860 in 2011. According to India's Health Minister, 28,292 cases were reported during the year 2010. He said that, in view of the upsurge and geographical spread of dengue to newer areas, a mid-term plan has been developed by the Government of India for the prevention and control of dengue, and many advisories have been issued from time to time to control and manage dengue outbreaks in India. Field visits are carried out to assess preparedness and to provide technical guidance to states. Training is also imparted to clinicians on case management as per Government of India guidelines, and to other health care functionaries on program activities. For augmenting diagnostic facilities, the number of Sentinel Surveillance Hospitals (SSHs) with laboratory support has been increased from 110 to 347 across the country and linked with 14 apex referral laboratories with advanced diagnostic facilities for back-up support for dengue. [5] The Department of Biotechnology (DBT) of India has launched a program to promote and accelerate research activities in containing dengue disease and to enhance the capacity and capability of those committed to dengue research. The establishment of a dengue laboratory network program with defined gene amplifications for understanding genotypes and serotypes, the strengthening of regional laboratories for rapid and confirmatory diagnosis of suspected dengue cases, the development and validation of integrated companion diagnostic tests (Ag/Ab) and new rapid point-of-care diagnostic test systems, and novel strategies for vaccine development have been promoted under the initiative. It will also enable the utilization of primary human cell targets to understand translational data on platelets, monocytes, DCs, and endothelial cells by using stem cell technology; studies on pathogenic antibodies in dengue hemorrhagic fever (their generation, specificity, regulation, seroepidemiology, etc.); the development of an advanced research program on platelet pathology; and studies on molecular markers of neurotropism and vector preference. [6]
Literature review
Few studies have been undertaken in the past on the scientometric analysis of dengue research output. Dutt et al. [7] analysed 2566 papers on global research output in dengue, as covered in the Science Citation Index (SCI)-Expanded from 1987 to 2008. The total output came from 74 countries, of which 17 countries contributed 87% of the total output. The highest number of publications came from the USA, followed by India. More than half of the scientific output was concentrated in four sub-disciplines: microbiology and virology, immunology and vaccines, epidemiology, and entomology. Among the prolific institutions, the publication output of institutions from the US and Taiwan had higher impact. About 80% of the papers appeared in journals originating from the USA, the UK, the Netherlands, France, and Germany. Raja et al. [8] also analyzed world DF publications from 1999 to 2012, as covered in the SCI database. The publication data were analyzed to determine the authorship pattern, degree of collaboration, geographical distribution of papers, year-wise research output, nature of collaboration, characteristics of highly productive institutions, and the channels of communication used by the scientists.
No paper focusing on the analysis of Indian dengue research output has been published to date. However, Gupta et al. analyzed the bibliometric characteristics of Indian publications on several other diseases, such as typhoid, [9] diabetes, [10] tuberculosis, [11] malaria, [12] asthma, [13] HIV/AIDS, [14] and measles. [15]

Objectives of the study
3. To study the share of international collaboration in Indian publication output and the contribution of different collaborating countries;
4. To study the Indian contribution by sub-fields and by type of population group;
5. To study the publication productivity and impact of leading Indian institutions and authors; and
6. To study the media of communication.
Materials and Methods
The study retrieved the publication data on India and the top 15 most productive countries in dengue research from the Scopus database (http://www.scopus.com) for the 10 years from 2003 to 2012. The keyword "dengue" in the "title, abstract and keyword" field, along with India in the "country" field and "2003 to 2012" in the time field, was used for searching the main publication data used in the study; this became the main search string. Similar strings were used to generate the publication output data for the top 15 countries. For generating citation impact data, citation windows of 3, 2, 1, and zero years were used for publications during 2003-09, 2010, 2011, and 2012, respectively. For searching international collaborative papers, a separate search strategy combining India's collaboration with more than 200 countries was prepared, and this string was combined with the main search string to generate India's total international collaborative papers and the contribution of leading countries to India's collaborative papers. For analyzing institutional, author, and journal output, separate search strategies were developed, which were later combined with the main search string to generate the desired output.
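For illustration, a minimal sketch of how such a search string might look in Scopus advanced-search syntax is given below, wrapped in Python; the exact field codes and query text used by the authors are not reproduced in the paper, so this query is an assumption based on standard Scopus syntax.

```python
# Hypothetical reconstruction of the main search string in Scopus
# advanced-search syntax; the paper does not give the exact query,
# so field codes and year bounds here are assumptions.

main_query = (
    "TITLE-ABS-KEY(dengue) "
    "AND AFFILCOUNTRY(india) "
    "AND PUBYEAR > 2002 AND PUBYEAR < 2013"
)

# A collaboration filter would AND the main query with other country
# names, e.g. to count India-USA collaborative papers:
collab_query = main_query + ' AND AFFILCOUNTRY("united states")'

print(main_query)
print(collab_query)
```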
Analysis
The global publication share of the top 15 most productive countries in dengue research varies from 2.38% to 25.26% during 2003-12. The USA tops the list with a global publication share of 25.26%, followed far behind by India (10.22% share and second rank), Brazil (9.75% share and third rank), Thailand, the UK, France, and Singapore (with global publication shares ranging from 4.32% to 7.39% and ranks from fourth to seventh), Australia, Taiwan, and Malaysia (with global publication shares ranging from 3.02% to 3.81% and ranks from eighth to tenth), and Japan, China, Cuba, Germany, and Mexico (with global publication shares ranging from 2.38% to 2.77% and ranks from 11th to 15th) during 2003-12 [Table 1].
Among the top 15 most productive countries, the publication rank increased for France (from 6th to 5th), Singapore (from 9th to 7th), Malaysia (from 13th to 9th), and China (from 15th to 11th), as against decreases for the UK (from 5th to 6th), Australia (from 7th to 8th), Taiwan (from 8th to 10th), Japan (from 11th to 12th), Cuba (from 12th to 13th), Germany (from 10th to 14th), and Mexico (from 14th to 15th) from 2003-07 to 2008-12. During the same period, the publication rank remained the same for the USA, India, Brazil, and Thailand [Table 1].
In terms of research impact, the ranks are altered compared to the ranks in publication productivity. The highest rank in terms of research impact among the 15 most productive countries is occupied by the UK, with an average citation per paper of 11.76 (with 5th rank in publication productivity).

Note: In any international collaborative paper of India, there may be one or more collaborating countries. As a result, the combined output of the 15 foreign collaborating countries listed above in India's international collaborative output will be more than its total international collaborative papers.
Profile of the top 15 productive Indian institutions in dengue research
The top 15 most productive Indian institutions involved in dengue research have published 14 or more papers each during 2003-12.
These 15 institutions involved in dengue research together contributed 41.87% (381 papers) of the cumulative publication output of India in dengue research. The publication profile of these 15 institutions, along with their research output, citations received, and h-index values, is presented in Table 6. The average publication productivity per institution reported by the top 15 institutions was 25.4, and only 7 institutions registered higher output than the group average; these include the All India Institute of Medical Sciences, New Delhi (the remaining institutions are listed in Table 6).

Note to Table 6: TP = Total papers, TC = Total citations, ACPP = Average citation per paper. *There is some overlapping of papers under sub-fields; as a result, the combined output of India under the seven sub-fields will be more than its actual total output.
Profile of the top 15 most productive authors in dengue research
The top 15 most productive Indian authors involved in dengue research have published 12 or more papers each during 2003-12.
The publication profile of these 15 authors, along with their research output, citations received, and h-index values, is presented in Table 7.
Research communication in highly productive journals
The 15 most productive journals publishing Indian research papers in dengue together contributed 306 papers, which accounts for a 33.63% share of the total output of India during 2003-12. The cumulative publication share of these 15 most productive journals in India's output decreased from 42.70% during 2003-07 to 29.87% during 2008-12 [Table 8].

The severity of the dengue epidemic in India is underestimated owing to the lack of accurate information on the incidence and cost of dengue illness. Furthermore, the manifestation of dengue in India appears to be changing from its benign form to its severe forms of DHF and DSS. This change is leading to an increase in the frequency of outbreaks, morbidity, and mortality. Dengue has been a notifiable disease in India since 1996. However, misdiagnosis and underreporting of dengue cases persist due to clinical definition challenges, the scarcity of diagnostic tools, and healthcare providers' lack of familiarity with dengue. The huge hike in dengue cases is raising questions about the way India is going about its dengue prevention and control strategy aimed at source reduction. The government is now trying to rope in rural and urban local bodies to carry out a special campaign for sanitation and cleanliness, fogging, increasing sentinel surveillance sites, and training of health personnel.
Dengue mortality can be reduced by implementing early case detection and appropriate management of severe cases; reorienting health services to identify early cases and manage dengue outbreaks effectively; and training health personnel, along with appropriate referral systems, at primary health-care levels.Dengue morbidity can be reduced by implementing improved outbreak prediction and detection through coordinated epidemiological and entomological surveillance; promoting the principles of integrated vector management and deploying locally adapted vector control measures including effective urban and household water management.Effective communication can achieve behavioral outcomes that augment prevention programs.Research will continue to play an important role in reversing the trend in dengue, a neglected tropical disease, by improving methods and systems for surveillance, prevention, and control.
India urgently needs a permanent dengue surveillance system to monitor and control this mosquito-borne viral disease. Existing technologies such as geographical information systems, polymerase chain reaction, rapid antigen tests, genetic sequencing, and bioinformatics can be harnessed to provide a holistic approach to suppressing dengue resurgence, in collaboration with the WHO's DengueNet. Databases could be continuously updated and the reporting of dengue cases from India's existing network of institutions and laboratories standardized, with a view to predicting epidemics and reducing fatality rates.
India's research output is very low in view of the 50,222 cases of dengue reported in India in 2012 alone. Therefore, the country needs to increase its research output and also to increase its research impact substantially, particularly through enhanced national and international collaboration.
There is also a need to evolve a national policy for the identification, monitoring, and control of dengue cases, and to evolve a research strategy with sufficient funding commitment and the involvement of different types of Indian organizations to solve this growing national problem.
Table 1: Publication output, share, rank, and research impact of the top 15 countries in dengue fever research, 2003-12. (Columns: name of country; number of papers; share of papers; rank of papers; TC.)
38% share), Brazil (8.33% share), etc. It is observed that India's international collaboration increased with Thailand by 14.08%, Malaysia by 12.68%, the UK by 11.94%, Indonesia by 7.04%, Germany by 3.04%, and Vietnam by 1.63%, in contrast to decreases with the USA by 12.56%, Sri Lanka by 11.55%, France by 9.18%, Canada by 7.55%, Brazil by 4.96%, the Philippines by 2.37%, Singapore by 0.96%, Switzerland by 0.73%, and Australia by 0.73%, from 2003-07 to 2008-12 [Table 3]. India's publication output in dengue research during 2003-12 was published in the context of seven sub-fields (as reflected in the database classification based on journal title subject), with the highest publication

India's contribution, citation impact, and international collaboration
The Indian contribution to DF research increased from 27 papers in 2003 to 193 papers in 2012, witnessing an annual average growth rate of 28.19%. The average citation impact per paper registered by India's research in DF during 2003-12 was 3.27, which decreased from 4.37 during 2003-07 to 2.79 during 2008-12. India contributed a 10.55% share of international collaborative papers in DF research during 2003-12, which increased from 9.12% during 2003-07 to 11.13% during 2008-12 [Table 2].
Table 5: Dengue research output by population age group, 2003-12. (Columns: population group; number of papers and share of papers for 2003-07, 2008-12, and 2003-12.)
, and Christian Medical College, Vellore (4.14). The average h-index value of these 15 Indian institutions was 6.53, and 7 Indian institutions achieved a higher h-index value than the group average. These are the Defence Research & Development Establishment, Gwalior, with an h-index value of 11, followed by the National Institute of Virology, Pune (10), the International Centre for Genetic Engineering and Biotechnology, New Delhi (10), the All India Institute of Medical Sciences, New Delhi (8), C S Medical University (8), the Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow (7), and the National Institute of Communicable Diseases, Delhi (7).
These 15 authors involved in dengue research together contributed 32.75% (298 papers) of the cumulative publication output of India in dengue research during 2003-12.
Table 7: Productivity and citation impact of the 15 most productive authors in dengue research, 2003-12. (Columns: name of the author; address of the author; TP; ...)
86% share). Among the different population age groups, the largest focus of dengue research was on adults (28.35% share), followed by children (21.43%), adolescents (19.78%), the middle-aged (10.99%), and the aged (6.59%) during 2003-12. Among the Indian institutions contributing to dengue research, only 15 published 14 or more papers each during 2003-12, together contributing a 41.87% share of the cumulative publication output of India in dengue research. The average citation per paper and h-index registered by the total papers of these 15 institutions were 3.69 and 6.53, respectively, during 2003-12. Among the Indian authors contributing to dengue research, only 15 published 14 or more papers each during 2003-12, together contributing a 32.75% share of the cumulative publication output of India in dengue research. The average citation per paper and h-index registered by the total papers of these 15 authors were 5.33 and 6.87, respectively, during 2003-12. Among the various journals publishing Indian dengue research papers, the top 15 journals contributed a 33.63% share of the total output of India during 2003-12, which decreased from 42.70% to 29.87% from 2003-07 to 2008-12.
Table 8: Media of communication of Indian scientists in the most productive journals in dengue research, 2003-12. (Columns: name of journal; number of papers for 2003-07, 2008-12, and 2003-12.)
Radiative MHD thin film flow of Williamson fluid over an unsteady permeable stretching sheet
In this research work, we examine the flow of a thin Williamson liquid film with heat transfer and the impact of thermal radiation, embedded in a permeable medium over a time-dependent stretching surface. The liquid film flow is assumed to be two-dimensional. Using a suitable similarity transformation, the governing non-linear partial differential equations are transformed into non-linear ordinary differential equations. An optimal approach has been used to obtain the solution of the modelled problem, and the convergence of the technique is shown numerically. The skin friction and Nusselt number and their influence on the thin film flow are presented numerically. Thermal radiation, the unsteadiness effect, and porosity are the main focus of this paper. Furthermore, for physical insight, the effects of the embedded parameters, such as the porosity parameter k, Prandtl number Pr, unsteadiness parameter S, radiation parameter Rd, magnetic parameter M, and the Williamson fluid parameter, on the liquid film flow are discussed graphically in detail.
Introduction
The analysis of thin film flow has received considerable attention in recent years due to its enormous applications in engineering and technology. The field of thin film flow problems is vast and arises in many settings, ranging from flow in human lungs to lubrication problems in industry. Investigating thin liquid film flow involves an interesting interaction between structural mechanics, fluid mechanics, and rheology. Extrusion of polymers and metals, coating of foodstuffs, continuous casting, drawing of elastic sheets, and fluidization of devices, exchangers, and chemical processing apparatus are several well-known applications of liquid films. In view of these applications, the study of liquid films has become necessary for researchers in order to investigate and develop the field further. Different approaches with modified geometries have been adopted by many researchers over time. Owing to the industrial applications of thin film flow, the stretching surface has become an important topic of study. In the early days, the study of liquid film flow was limited to viscous fluids. Crane [1] was the pioneer in analysing the flow of a viscous fluid over a linearly stretching surface. Dandapat [2] analysed viscoelastic fluid flow over a stretching surface with heat transfer. Wang [3] was the first to investigate a finite liquid film on a time-dependent stretching surface. Usha and Sridharan [4] investigated the flow of a finite thin liquid film over a time-dependent stretching surface. The same work was extended by Liu and Andersson [5] using numerical techniques. Aziz et al. [6] examined the effect of internal heat generation on the flow of a thin liquid film on a time-dependent stretching sheet.
Recently, Tawade et al. [7] examined liquid film flow over an unsteady stretching sheet with thermal radiation. Andersson [8] was the forerunner in investigating the flow of thin liquid films of non-Newtonian fluids over an unsteady stretching sheet by considering the power-law model. Waris et al. [9] studied nanoliquid film flow over an unsteady stretching sheet with variable viscosity and thermal conductivity. Andersson et al. [10], Chen [11,12], and Wang et al. [13] analysed thin liquid film flows using different physical configurations. Megahed et al. [14] examined thin film flow of a Casson fluid in the presence of variable heat flux and viscous dissipation. Abolbashari et al. [15] worked out thin film flow with entropy generation. Qasim et al. [16] studied a nanofluid thin film on an unsteady stretching surface using Buongiorno's model.
Non-Newtonian fluids occur in many forms, both in nature and in artificial systems, and the Williamson fluid is one significant subtype among them. A number of researchers have investigated Williamson fluids under different effects. Practical applications have generated interest in the solvability of the differential equations governing the flow of non-Newtonian liquids, which have numerous uses in engineering, applied mathematics, and computer science. Many environmental and industrial systems, such as geothermal energy systems and heat exchanger design, involve convective flow through a permeable medium. The adapted form of the classical Darcian model is the non-Darcian porous medium, which includes inertia and boundary features. The standard Darcy law is valid only over a constrained range of small permeability and low velocity. Forchheimer [17] accounted for the inertia and boundary features by adding a squared-velocity term to the Darcian velocity expression. Muskat [18] named this the "Forchheimer term", which remains effective for large Reynolds numbers. Dawar et al. [19] studied fluid flow in porous media. More recent experimental and theoretical studies by Sheikholeslami [20,21,22] treat nanofluids under various phenomena, with modern applications and properties, using diverse approaches. Tahir et al. [23] studied the flow of a nanoliquid film of Maxwell fluid with thermal radiation and magnetohydrodynamic effects on an unsteady stretching sheet. Further studies and applications of porous media can be seen in [24,25].
In 1992, Liao [26,27] was the first to develop the Homotopy Analysis Method (HAM). Due to its fast convergence, many researchers, including Shah et al. [28,29,30,31], Ishaq et al. [32], Saleem et al. [33], and Hameed and Muhammad et al. [34,35], have used this method to solve highly non-linear coupled equations. Khan et al. [36,37] used this method for the solution of boundary layer flow problems. Prasannakumara et al. [38] investigated a Williamson nanofluid with the impact of chemical reaction and nonlinear radiation over a permeable sheet. Krishnamurthy et al. [39] investigated slip flow and heat transfer of a nanofluid over a porous stretching sheet with the impact of nonlinear thermal radiation. Chaudhary et al. [40] explored the thermal radiation properties of a fluid over a stretching surface.
Das [41] studied the effects of thermophoresis and thermal radiation on convective flow with heat transfer analysis. Muhammad et al. [35] examined the radiative MHD flow of carbon nanotubes. More recent studies on thermal radiation can be found in [42,43].
In all of the work discussed above, researchers considered the heat and mass transfer features of Newtonian or non-Newtonian fluids over time-dependent or time-independent stretching surfaces, taking into account one or more physical characteristics. The main goal of this research is to investigate the liquid film flow of a Williamson fluid over a stretching surface in the presence of a magnetic field and thermal radiation. With these assumptions built into the modelled problem, the similarity transformation method converts the governing PDEs into non-linear ODEs, and the resulting transformed equations are solved analytically using HAM.
Theory/Calculation
Consider the flow of a non-Newtonian (Williamson) liquid film with the impact of thermal radiation over an unsteady porous stretching sheet. The coordinate system is chosen such that the x-axis is parallel to the slit while the y-axis is perpendicular to the surface (Fig. 1). The x-axis is taken along the stretching surface, with stretching velocity $U_0(x,t) = \varepsilon x/(1-\alpha t)$, where $\varepsilon > 0$ is the stretching parameter. Heat is transferred to the fluid, and the surface temperature varies with the distance x from the slit and with time as $(1-\alpha t)^{-1.5}$. The time-dependent term $\varepsilon x^2/\big(\nu(1-\alpha t)\big)$ can be recognized as the local Reynolds number, dependent on the velocity $U_0(x,t)$. Here $T_0$ is the temperature at the slit and $T_{\mathrm{ref}}$ is the reference temperature, such that $0 \le T_{\mathrm{ref}} \le T_0$. The slit is fixed at the origin initially, and then an external force stretches the slit at the rate $\varepsilon x/(1-\alpha t)$, with velocity $U_0(x,t)$ in the positive x-direction, for $0 \le t < 1/\alpha$. Also, $T_s(x,t)$ designates the sheet temperature, reduced from $T_0$ at the slit over $0 \le t < 1/\alpha$.
In view of the above assumptions, the main governing equations take the forms referred to below as Eqs. (1) and (2). Here, $\nu = \mu_0/\rho$ denotes the kinematic viscosity, $\Gamma > 0$ represents the material constant of the Williamson fluid, $\rho$ is the density of the fluid, and $\sigma$ denotes the electrical conductivity.
Here $q_r$ is the radiative heat flux under the Rosseland approximation, modelled as

$$q_r = -\frac{4\sigma^*}{3K^*}\,\frac{\partial T^4}{\partial y}, \qquad (4)$$

where $T$ represents the temperature field, $\sigma^*$ is the Stefan-Boltzmann constant, $K^*$ is the mean absorption coefficient, and $k$ is the thermal conductivity of the liquid film. Expanding $T^4$ in a Taylor series about $T_0$ and neglecting the higher-order terms in Eq. (5) gives

$$T^4 \cong 4T_0^3\,T - 3T_0^4. \qquad (6)$$

Inserting Eq. (6) into Eq. (4), the radiative term and hence the energy equation reduce to Eq. (8). The accompanying boundary conditions for Eqs. (1) and (2) are stated in Eq. (9). Introducing the dimensionless variables $f(\eta)$ and $\theta(\eta)$ and the similarity variable $\eta$ of Eq. (10) reduces Eqs. (2), (8), and (9); the stream function $\psi(x,y,t)$, which satisfies Eq. (1), yields the velocity components in Eq. (11). Using Eqs. (10) and (11) in Eqs. (1), (2), and (8) and simplifying, we obtain the transformed momentum and energy equations, Eqs. (12) and (13), together with the physical parameters collected in Eq. (15): $Pr$ signifies the Prandtl number, $S$ the unsteadiness parameter, $Rd$ the radiation parameter, $We$ the Williamson fluid material constant, $M$ the magnetic parameter, and $k$ the porosity parameter. The skin friction is defined in Eq. (16), where the wall shear stress $S_{xy}$ is given in Eq. (17); the dimensionless form of Eq. (17) is Eq. (18), in which $Re_x$ is the local Reynolds number. The Nusselt number is defined analogously, and the dimensionless form of $Nu$ is obtained in Eq. (19).
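Since the similarity variables of Eq. (10) did not survive extraction, the LaTeX block below restates the standard Wang-type thin-film transformation commonly used for this class of problem; the exact symbols and scalings of the original Eq. (10) may differ, so this is an assumed, representative form (with $\beta$ the dimensionless film thickness and $S = \alpha/\varepsilon$ the unsteadiness parameter).

```latex
% Standard thin-film similarity variables (Wang-type); assumed form,
% not copied from the original Eq. (10).
\begin{gather*}
\eta = \frac{y}{\beta}\sqrt{\frac{\varepsilon}{\nu(1-\alpha t)}}, \qquad
\psi(x,y,t) = \beta x\sqrt{\frac{\nu\varepsilon}{1-\alpha t}}\; f(\eta),\\
T_s(x,t) = T_0 - T_{\mathrm{ref}}\,\frac{\varepsilon x^2}{2\nu}\,
  (1-\alpha t)^{-3/2}\,\theta(\eta), \qquad
u = \frac{\partial\psi}{\partial y}, \quad
v = -\frac{\partial\psi}{\partial x}.
\end{gather*}
```

With this scaling, $u = U_0(x,t)\,f'(\eta)$ recovers the stretching velocity at the sheet when $f'(0) = 1$.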
Methodology
For the solution of the problem, we implement the Homotopy Analysis Method to find the solutions of Eqs. (12) and (13), subject to the boundary conditions (14). The solutions contain the auxiliary parameters $\hbar$, which regulate and control the convergence of the solutions. The initial guesses for Eqs. (12) and (13) are given in Eq. (20), and the linear operators are chosen as in Eq. (21); the corresponding differential operators are defined in Eq. (22), in which $C_i$ ($i = 1, 2, \ldots, 6$) are arbitrary constants. We express $q \in [0,1]$ as an embedding parameter, with associated auxiliary parameters $\hbar_f$ and $\hbar_\theta$, where $\hbar \neq 0$.
Then the zeroth-order deformation problem takes the form of Eq. (23) for $\hat f$ and, for the temperature,

$$(1-q)\,L_\theta\big[\hat\theta(\eta;q)-\theta_0(\eta)\big] = q\,\hbar_\theta\, N_\theta\big[\hat f(\eta;q),\,\hat\theta(\eta;q)\big]. \qquad (24)$$

The boundary conditions for Eqs. (23) and (24) are given in Eq. (25), and the corresponding nonlinear operators are defined in Eq. (26). Using a Taylor series expansion, $\hat f(\eta;q)$ and $\hat\theta(\eta;q)$ in Eq. (26) are expanded in terms of $q$, as given in Eqs. (27) and (28). Differentiating the zeroth-order Eqs. (27) and (28) $i$ times with respect to $q$, dividing by $i!$, and then setting $q = 0$, we obtain the $i$-th order deformation equations.
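For readers unfamiliar with HAM, the block below writes out the generic series expansion and $i$-th order deformation structure implied by the text above; it is the standard textbook form due to Liao, not a verbatim copy of the paper's Eqs. (27) and (28).

```latex
% Standard HAM expansions and i-th order deformation equations
% (generic Liao form; assumed, not copied from the paper).
\begin{gather*}
\hat f(\eta;q) = f_0(\eta) + \sum_{i=1}^{\infty} f_i(\eta)\,q^{\,i},
\qquad
f_i(\eta) = \frac{1}{i!}\left.
  \frac{\partial^{\,i}\hat f(\eta;q)}{\partial q^{\,i}}\right|_{q=0},\\
L_f\!\left[f_i(\eta) - \chi_i\, f_{i-1}(\eta)\right]
  = \hbar_f\, R_i^{f}(\eta),
\qquad
\chi_i = \begin{cases}0, & i \le 1,\\ 1, & i > 1,\end{cases}
\end{gather*}
% where R_i^f is the residual of the momentum equation built from
% f_0, ..., f_{i-1}; the temperature equation is treated analogously.
```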
Analysis
Here our interest is in the analytical solution of the resulting system of ordinary differential equations by the Homotopy Analysis Method. When the series solutions of the velocity and temperature profiles are computed by HAM, the auxiliary parameters $\hbar_f$ and $\hbar_\theta$ appear, which are responsible for adjusting the convergence. Within the admissible region of $\hbar$, the $\hbar$-curves of $f''(0)$ and $\theta'(0)$ are plotted in Fig. 2, displaying the valid region.
Results and discussion
The current research has been carried out to study the flow of a Williamson liquid film over a time-dependent stretching sheet with the impact of MHD and thermal radiation. The purpose of this section is to examine the physical consequences of the embedded parameters, such as Pr. Fig. 10 shows the influence of the radiation parameter Rd on the temperature profile.
When the thermal radiation parameter Rd is increased, it is observed that the temperature in the fluid layer rises. This increase leads to a drop in the rate of cooling of the thin film flow.
The numerical values of the surface temperature $\theta(\beta)$ for different values of M, Rd, and k are given in Table 2. It is observed that increasing values of M, Rd, and k increase the surface temperature $\theta(\beta)$, whereas the opposite effect is found for Pr; that is, a large value of Pr reduces the surface temperature $\theta(\beta)$. The numerical values of the wall temperature gradient $\theta'(0)$ for dissimilar values of the embedded parameters Rd, $\beta$, Pr, and S are shown in Table 3. It is observed that larger values of the thermal radiation parameter Rd, of $\beta$, and of Pr decrease the wall temperature gradient, while S increases it. The numerical values of M, k, $\beta$, and We for the skin friction $C_f$ are given in Table 4. From this table it is evident that high values of M, k, and $\beta$ decrease $C_f$, while increasing We increases the skin friction.
Conclusion
The conclusions of the present work focus mainly on the behaviour of the embedded parameters in the obtained results. The central concluding points are:
The thermal boundary layer thickness reduces with a rise in the radiation parameter Rd; consequently, the Nusselt number Nu rises with Rd. Increasing values of M, Rd, and k increase the surface temperature $\theta(\beta)$, whereas the opposite effect is found for Pr; that is, large values of Pr reduce the surface temperature $\theta(\beta)$.
Increasing k reduces the flow of the thin film.
For the skin friction $C_f$, it is found that it increases when the viscosity parameter R is decreased.
It is noticed that a strong magnetic field reduces the velocity of the liquid film.
It is also concluded that the liquid film flow is affected by the Lorentz force.
Declarations
Author contribution statement Zahir Shah, Ebenezer Bonyah, Saeed Islam, Waris Khan, Mohammad Ishaq: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools or data; Wrote the paper.
Funding statement
Octree Optimized Micrometric Fibrous Microstructure Generation for Domain Reconstruction and Flow Simulation
Over recent decades, tremendous advances in the field of scalable numerical tools and mesh immersion techniques have been achieved to improve numerical efficiency while preserving a good quality of the obtained results. In this context, an octree-optimized microstructure generation and domain reconstruction with adaptive meshing is presented and illustrated through a flow simulation example applied to the permeability computation of micrometric fibrous materials. Thanks to the octree implementation, the number of distance calculations in these processes is decreased, and thus the computational complexity is reduced. Using the parallel environment of the ICI-tech library as both mesher and solver, a large-scale case study is performed. The study is applied to the computation of the full permeability tensor of a three-dimensional microstructure containing 10,000 fibers. The considered flow is a Stokes flow, solved with a stabilized finite element formulation and a monolithic approach.
Introduction
The properties and behavior of a discontinuous fiber-reinforced thermoplastic are induced by the mechanisms involved during the forming process. Modeling and numerical simulation have a major role in understanding and predicting these mechanisms, especially at the microscopic scale, which provides the most accurate results. Nevertheless, at this scale of computation, numerical simulations are generally expensive in terms of computing resources and time. Optimizing and evaluating the algorithms used is a constant challenge. One of the most expensive issues when using finite elements and immersed boundary approaches for the simulation of discontinuous reinforced composites is the computation of distances. Fiber generation, immersion, and reconstruction techniques rely particularly on these evaluations, as the distances between fibers must be evaluated repeatedly during microstructure generation, and the distances from each point of the computational mesh to the frontiers of the immersed elements have to be measured. However, without any optimization, as the number of points and fibers in a simulation rises, the cost of reconstruction increases dramatically. In order to make these techniques applicable in the context of composite materials, an optimization of the distance evaluation is required. A first idea is to implement distance computation algorithms that save computational time: reducing the number of expensive functions or operations used to compute each distance is a key element, as is properly defining the data types used, to limit the memory footprint. This paper proposes a reduction in the number of distances to evaluate, performed using an octree.
The octree data structure [1] is a partition of a three-dimensional space built from recursive subdivisions into eight sub-domains. The sub-cubes obtained are hierarchically organized, which allows search times to be reduced easily. Octree algorithms are widely used in various fields and their range of application is significantly extensive, especially when positions must be accessed and manipulated. These applications include the construction of a three-dimensional object model from a set of images [2] and the simulation of free-surface displacement [3]. Octrees are broadly applied in collision detection algorithms for virtual reality, rigid body contact, character animation, and machining simulation, such as cutter-path generation for numerically controlled machines, which requires efficient collision detection routines [4][5][6]. Another significant example involving the octree algorithm is mesh generation. Octrees can be used to create meshes tied to geometrical objects [7], for adaptive mesh refinement (AMR), e.g., with structured grids in fluid dynamics [8], or combined with other techniques in advanced mesh generation processes [9].
In this paper, an octree-optimized microstructure generation and domain reconstruction with adaptive meshing is presented. An application to flow simulation through the reconstructed domains, dealing with the identification of the full permeability tensor, is conducted.
Microstructure Generation and Optimization Using Octree
The microstructure of a discontinuous fiber composite greatly affects its properties. Virtual numerical sample generation is therefore crucial in order to carry out precise predictive simulations. However, a major difficulty in generating such a microstructure lies in establishing an optimized methodology that allows a very large number of fibers to be generated without interpenetration and with minimal computation time and resources. In this work, a Random Sequential Adsorption (RSA) algorithm [10,11], widely used for rigid particle generation, is chosen.
A collection of N random unit orientations P, N homogeneously distributed mass-center positions X, and N lengths L, following a normal distribution with mean length < L > and standard deviation σ, is first created. The program begins with one initial fiber (i), randomly oriented with P_i. Subsequently, another fiber (j) with a random orientation P_j is selected, and the system is checked for overlap. If fiber (j) intersects a pre-existing fiber, it is repositioned by randomly changing the orientation vector P_j while retaining the same position vector X_j. The selection of a new P_j is repeated, up to a maximum number of trials, until the overlap condition is resolved. In this method, the generated geometry is periodic, so that any fiber cutting a boundary is extended on the opposite one. This means that fibers close to the surfaces can interact with the fibers of the neighboring domains. Therefore, every new fiber to be placed is checked for interaction with all pre-existing fibers and their 26 periodic images in the neighboring domains. Figure 1 presents an example of a generated microstructure with 1000 cylindrical fibers having the same diameter d, a mean aspect ratio r = < L > /d = 20, and a fiber volume fraction V_f = 0.1.
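As a concrete illustration of the RSA loop just described, the following minimal Python sketch places straight fibers one at a time, re-drawing the orientation on collision; the overlap test `segments_too_close` (a crude midpoint-distance proxy for the true segment-segment distance check against the fiber diameter) and the 27 periodic shifts are simplified placeholders, not the authors' implementation.

```python
import numpy as np

def random_unit_vector(rng):
    """Uniform random orientation on the unit sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def segments_too_close(c1, l1, c2, l2, d):
    """Crude overlap proxy: centerline midpoint distance vs. reach.
    A real implementation would use the segment-segment distance."""
    reach = 0.5 * (l1 + l2) + d
    return np.linalg.norm(c1 - c2) < reach  # conservative test

def rsa_generate(n_fibers, box, d, mean_len, sigma,
                 max_trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    # 27 periodic shifts (the fiber itself plus its 26 images)
    shifts = [np.array(s) - 1 for s in np.ndindex(3, 3, 3)]
    fibers = []  # list of (center X, orientation P, length L)
    for L in rng.normal(mean_len, sigma, size=n_fibers):
        X = rng.uniform(0.0, box, size=3)  # position kept on collision
        for _ in range(max_trials):
            P = random_unit_vector(rng)    # orientation re-drawn
            collision = any(
                segments_too_close(X + box * sh, L, Xo, Lo, d)
                for (Xo, Po, Lo) in fibers
                for sh in shifts
            )
            if not collision:
                fibers.append((X, P, L))
                break
        else:
            raise RuntimeError("max trials exceeded; jamming reached")
    return fibers

fibers = rsa_generate(n_fibers=50, box=10.0, d=0.1,
                      mean_len=2.0, sigma=0.2)
print(len(fibers), "fibers placed")
```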
In the previously described algorithm, N * 27 distance evaluations are required to generate the (N+1)-th fiber. Presuming that no intersection is detected, a minimum of 27 * N * (N − 1)/2 distances has to be computed, thus leading to an N² complexity. However, this number can increase: once an intersection occurs, new random positions and orientations must be generated for the fiber. This computational cost is acceptable when N remains small, but becomes unaffordable when N reaches the order of millions of fibers. To limit the number of distances to evaluate, this paper proposes the use of an octree algorithm. This tree structure enables rapid browsing across all the elements and their selection based on position. Consequently, a selection of the closest elements can be performed, which allows the distances to be measured for these elements only. The complexity is decreased and can reach N log(N) for an optimal problem. The next paragraphs describe the octree building procedure, while the use of the octree is explained in Section 3.1. This data storage concept is a tree structure built recursively from a computational domain in which elements, e.g., fibers, are dispersed. To clarify this paragraph, an analogy is made between the computational domain and a box bounding all the elements. In practice, there is a possibility for elements to be concentrated in a particular area of the computational domain; in that situation, the octree building procedure is processed in the region of interest only, which does not cause any problem later on. The tree is built through refinement steps where the computational domain is divided in two along each dimension, thus generating subdomains (children). The name octree comes from the characterization of the tree in 3D, where 8 subdomains are generated by the division procedure (Figure 2). After refinement, the elements are no longer stored in the initial computational domain, but are referenced through pointers to every child they intersect. This choice characterizes the octree class, which is composed of the dimensions of the computational domain and pointers to either the elements contained inside it or the children generated. The corollary of this choice is that fibers can be duplicated if they intersect several children. After a refinement step, all the children are examined with emphasis on the number of elements they contain. If a subdomain remains empty, i.e., no elements intersect it, it is immediately deleted. If too many elements are found in a child, the refinement procedure is repeated in this particular subdomain. The recursion is applied in this way until either an acceptable number of elements is obtained in the deepest subdomains (leaves), or the maximal depth of the octree is reached.
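The recursive build just described can be sketched as follows in Python; the capacity and max-depth parameters, and the `Box`/`Node` helpers, are illustrative stand-ins for the paper's C++ implementation in ICI-tech.

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    lo: tuple  # (x, y, z) lower corner
    hi: tuple  # (x, y, z) upper corner

    def intersects(self, other):
        return all(self.lo[k] <= other.hi[k] and other.lo[k] <= self.hi[k]
                   for k in range(3))

    def children(self):
        """Yield the 8 octant sub-boxes obtained by halving each axis."""
        mid = tuple(0.5 * (l + h) for l, h in zip(self.lo, self.hi))
        for octant in range(8):
            lo = tuple(self.lo[k] if (octant >> k) & 1 == 0 else mid[k]
                       for k in range(3))
            hi = tuple(mid[k] if (octant >> k) & 1 == 0 else self.hi[k]
                       for k in range(3))
            yield Box(lo, hi)

@dataclass
class Node:
    box: Box
    elements: list = field(default_factory=list)  # (elem_id, elem_aabb)
    children: list = field(default_factory=list)

def build(node, capacity=100, max_depth=8, depth=0):
    """Refine a node until each leaf holds <= capacity elements
    or max_depth is reached; empty children are discarded."""
    if len(node.elements) <= capacity or depth == max_depth:
        return node  # leaf
    for child_box in node.box.children():
        kept = [(i, bb) for (i, bb) in node.elements
                if bb.intersects(child_box)]     # duplication allowed
        if kept:  # empty subdomains are deleted immediately
            child = Node(child_box, kept)
            build(child, capacity, max_depth, depth + 1)
            node.children.append(child)
    node.elements = []  # interior nodes keep only pointers to children
    return node
```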
The repartition of elements into the children is handled using bounding boxes. Axis-Aligned Bounding Boxes (AABB) have been used, which offer different advantages. First of all, these boxes are very easy to determine, both computationally speaking and in terms of access to data. They also reduce the computational effort for the determination of intersections, as the boxes are oriented along the same axes as the computational domain. Finally, this choice enabled the octree to be generalized to very different uses, from fibers to, e.g., 3D facets used to define surface meshes. The drawback brought by these bounding boxes lies in the intersections, as an "ill-oriented" fiber may be duplicated in leaves it does not intersect, only because its bounding box does. In that case, Oriented Bounding Boxes (OBB) could be implemented in future work to enclose fibers as tightly as possible. Another limitation occurs when very long elements (proportionally to the size of the computational domain) are present, as again the fibers may be highly duplicated. However, the following developments of this paper will show that octree usage remains appropriate for elements with a small length-to-width ratio.
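To make the bounding-box discussion concrete, the sketch below computes the AABB of a straight cylindrical fiber from its center, orientation, length, and radius; padding the box by the radius along every axis is a slightly conservative convention assumed here, not taken from the paper.

```python
import numpy as np

def fiber_aabb(center, orientation, length, radius):
    """Axis-aligned bounding box of a straight cylindrical fiber.
    Endpoints are center +/- (length/2)*orientation; the box is
    padded by the radius along every axis (slightly conservative)."""
    half = 0.5 * length * np.asarray(orientation, dtype=float)
    c = np.asarray(center, dtype=float)
    lo = np.minimum(c - half, c + half) - radius
    hi = np.maximum(c - half, c + half) + radius
    return lo, hi

lo, hi = fiber_aabb(center=(0, 0, 0), orientation=(1, 0, 0),
                    length=2.0, radius=0.05)
print(lo, hi)  # [-1.05 -0.05 -0.05] [1.05 0.05 0.05]
```

An axis-aligned fiber gets a tight box, while a diagonal one inflates its AABB considerably, which is exactly the "ill-oriented" duplication effect mentioned above.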
This paragraph presents the octree generation on an example featuring 14 fibers, with a maximal octree depth of 2 and 1 fiber allowed per leaf. The procedure is drawn in Figure 3. These octree parameters mean that any subdomain containing more than 1 element needs to be refined, with a limit of only 2 levels. After the first refinement step, the fibers presented in Figure 3a are allocated to every subdomain their bounding box intersects. An interesting emphasis can be placed on the blue fiber (second "row" from the top, middle of the computational domain), which is duplicated into both children at the top of the initial computational domain in Figure 3. Consequently, after a second refinement step, this fiber can be found in two different octree leaves, the asterisked ones in Figure 3b. Figure 3c corresponds to the final octree obtained with the parameters detailed previously. Even though only one fiber per leaf was authorized, subdomains containing more than one fiber can be found because of the maximum refinement allowed. Note that the subdomains containing ∅ were created by octree refinement and immediately deleted, as no fiber was allocated to them.
When adding a new fiber following the RSA algorithm, thanks to the implementation of the octree, the overlap check is carried out among a reduced number of fibers initially judged by the octree as potential candidates for collision. The fibers with which a collision is possible are those in the leaf or leaves to which the new fiber belongs and whose AABBs intersect. Figure 4 shows a schematic diagram of this method: it shows an octree leaf (large black box) to which we would like to add the red fiber and where the blue and green fibers already exist. Thus, intersections can only occur with the blue fibers; the green fibers are not concerned because their AABBs do not intersect the AABB of the red fiber. During this process, fibers are dynamically added to the octree. For that, two major conditions must be verified to update the octree after adding a new fiber (a sketch of the candidate query follows the list): • A new fiber must always be included in the global domain initially built for the octree; if this is not the case, it is necessary to destroy the octree and reconstruct it; • The size of a leaf should not exceed the defined maximal size; if this is not the case, it is necessary to refine the octree.
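The candidate query sketched below (reusing the `Box` and `Node` classes from the build sketch above) walks the tree and returns only the fibers whose own AABBs intersect that of the new fiber; only these candidates then need an exact segment-segment distance test. This is an illustrative reconstruction, not the authors' code.

```python
def candidate_fibers(root, new_bb):
    """Collect fibers whose leaves intersect the AABB of a new fiber
    (Box/Node as in the build sketch above). Only these candidates
    require an exact distance test against the new fiber."""
    found = {}
    stack = [root]
    while stack:
        n = stack.pop()
        if not n.box.intersects(new_bb):
            continue
        if n.children:
            stack.extend(n.children)
        else:  # leaf: keep elements whose own AABB also intersects
            for elem_id, bb in n.elements:
                if bb.intersects(new_bb):
                    found[elem_id] = bb
    return found  # dict keys de-duplicate fibers stored in several leaves
```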
To quantify the gain brought by the octree, we study the evolution of the CPU time t as a function of the number of generated fibers N. For all simulations, we consider r = 20, V_f = 0.1, and a maximum number of trials equal to 5000. The leaf maximal size is fixed to 100. Figure 5 shows a considerable gain in time, which improves as the number of fibers becomes larger.
Mesh Immersion and Optimization Using the Octree
Mesh immersion is a technique enabling the representation of complex bodies using a single computational mesh. The main idea is to compute the distance from each point of the computational mesh to an immersed object, which can be represented by an analytical function, a mesh, or any set of data. The only constraint is the need to define an interior for the object, thus defining a frontier. This definition enables the construction of a signed distance function α, as presented in Equation (1), for the immersion of a shape ω with frontier Γ = ∂ω into a domain Ω:

$$\alpha(\mathbf{x}) = \begin{cases} \phantom{-}d(\mathbf{x}, \Gamma), & \mathbf{x} \in \omega,\\ -d(\mathbf{x}, \Gamma), & \text{otherwise}. \end{cases} \qquad (1)$$

This interior can be concave or even split, as the mathematical evaluation of α has no prerequisite. However, the more complex ω is, the more points in the computational mesh are needed to represent it accurately.
Once the signed distance function is defined, any computational point x has a signed distance, either positive or negative. The union of points with positive α defines the interior, and the complementary set gives the exterior. This formulation mathematically corresponds to using a Heaviside function of α as the level-set function, which gives 1 for α positive and 0 for α negative. However, this approach is not suitable for multiphase flows, as strong discontinuities are sources of instability when using a Galerkin approximation for the resolution of the Navier-Stokes equations. To overcome this issue, a smoothed Heaviside function based on a width parameter ε is defined; a hyperbolic-tangent form consistent with the description in Section 4 is presented in Equation (2):

$$H_{\varepsilon}(\alpha) = \frac{1}{2}\left(1 + \tanh\frac{\alpha}{\varepsilon}\right). \qquad (2)$$

This paradigm introduces a transition phase of width about 2ε, which smooths the shift between the physical parameters of the two phases. The "blurred area" does not behave as a gray zone in terms of mesh immersion, as the norm and sign of the result given by H_ε in this region depend on α. Compared to immersion results giving either 0 or 1 with a classical Heaviside function, a better capture of the interfaces can even be achieved. However, the quality of the reconstruction of ω remains highly dependent on the meshing of Ω. Fine meshes are needed around interfaces, and if the meshing of ω is complex, a high effort must be put into either mesh generation or distance evaluation. This interdependency is addressed by coupling the immersion with a mesh adaptation procedure. An automatically generated anisotropic mesh concentrates its points around Γ, guaranteeing that an important portion of them is located in the transition region most affected by H_ε. Further explanations about this procedure can be found in Section 3.2 and in [12]. Figure 6a presents the results of α for a circle of radius R, and Figure 6b presents the results obtained for H_ε with ε = R/100. A slice of the computational mesh is also drawn, where the major part of the points is gathered in the interest zones (Figure 6c).
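A minimal numerical sketch of Equations (1)-(2), assuming the tanh form above: it evaluates the signed distance to a circle analytically and uses H_ε to blend two hypothetical phase viscosities across the interface.

```python
import numpy as np

def alpha_circle(x, y, R):
    """Signed distance to a circle of radius R centered at the origin:
    positive inside (the shape's interior), negative outside."""
    return R - np.hypot(x, y)

def heaviside_eps(a, eps):
    """Smoothed Heaviside H_eps(alpha) = (1 + tanh(alpha/eps)) / 2."""
    return 0.5 * (1.0 + np.tanh(a / eps))

R, eps = 1.0, 0.01          # eps = R/100 as in Figure 6b
mu_in, mu_out = 10.0, 1.0   # hypothetical phase viscosities

x = np.linspace(-2, 2, 5)
a = alpha_circle(x, 0.0, R)
h = heaviside_eps(a, eps)
mu = h * mu_in + (1.0 - h) * mu_out  # smooth property transition
print(np.round(mu, 3))  # ~mu_out far outside, ~mu_in inside
```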
The level-set function is defined analytically from α, making the evaluation of α the major effort of the immersion procedure. While an analytical definition of α requires only one distance computation per point and does not need to be optimized, more complex representations generate computing complexity, e.g., when meshes or fiber sets are immersed. Those cases use a set of elements to define ω or Γ, so the determination of the closest neighbor is not immediate. The performance of the immersion code then highly depends on the computational effort needed to evaluate a single distance, but also on the number of distances to compute before finding the closest element of ω. Without any optimization of the immersion procedure, the computation of α for a single point x and M fibers defining ω requires M distance evaluations. Consequently, the immersion of M fibers in a domain Ω composed of N points forces the computation of N × M distances. When few fibers are immersed in small meshes, this cost is affordable. However, when 10,000 fibers are immersed, as proposed in the case study of this paper, the number of computations is extremely high (assuming that N is quasi-linearly related to M), which is somewhere between uncompetitive and computationally unrealizable. Coupling the mesh immersion procedure with an octree is a way to reduce the complexity. The construction of the octree was covered in Section 2, and its contribution to the reduction in computational costs is detailed in the next paragraphs. Instead of computing the distance from a point x to each element defining ω, the idea behind the octree is to select elements located near x and to compute the distances to them only. The distance computation algorithm is discussed in the following, using the nomenclature defined in Table 1. Everything starts with the determination of the octree leaf OL_x that is closest to x. From the definition of the octree, OL_x is proven not to be empty. Even if the closest element to x, named E_x, is not necessarily stored inside OL_x, its distance to x is less than or equal to the distance from x to the closest element located inside OL_x. A well-parametrized octree guarantees that the number of elements contained inside a leaf is reasonable. The distances from x to the bounding boxes of every element contained inside OL_x are then computed: the distance to the furthest point of every bounding box is computed, and the minimum obtained is selected. This minimal distance d_x defines a circle/sphere C_x of center x and radius d_x, in which the closest element is necessarily located. The octree is then browsed to determine all the leaves C_x intersects, which are candidates to host E_x. The bounding boxes of all the elements located in the selected leaves are browsed, and if the minimum distance from x to a box is less than d_x, the distance from x to the element is computed. α_x is then obtained by selecting the minimum among the evaluated distances to elements.
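A hedged Python sketch of this query follows, reusing the `Box`/`Node` classes from the build sketch; `leaf_of` (the search for the closest non-empty leaf OL_x) and `exact_dist` (the point-to-fiber distance, which requires a projection) are user-supplied placeholders.

```python
import math

def point_box_min_dist(p, box):
    """Distance from point p to the nearest point of an AABB (0 inside)."""
    return math.sqrt(sum(max(box.lo[k] - p[k], 0.0, p[k] - box.hi[k]) ** 2
                         for k in range(3)))

def point_box_max_dist(p, box):
    """Distance from point p to the farthest corner of an AABB."""
    return math.sqrt(sum(max(abs(p[k] - box.lo[k]),
                             abs(p[k] - box.hi[k])) ** 2
                         for k in range(3)))

def closest_element_distance(root, leaf_of, p, exact_dist):
    """Octree-accelerated distance query following the algorithm above.
    Assumes leaves are non-empty, as guaranteed by the construction."""
    ol = leaf_of(root, p)                     # closest leaf OL_x
    # radius of C_x: min over OL_x of the distance to the box's far corner
    d = min(point_box_max_dist(p, bb) for _, bb in ol.elements)
    best = float("inf")
    stack = [root]
    while stack:
        n = stack.pop()
        if point_box_min_dist(p, n.box) > d:  # subtree outside C_x: skip
            continue
        if n.children:
            stack.extend(n.children)
            continue
        for elem_id, bb in n.elements:
            if point_box_min_dist(p, bb) <= d:  # box intersects C_x
                best = min(best, exact_dist(p, elem_id))
    return best
```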
The octree has been designed to be computationally efficient and stand-alone, and the use of bounding boxes is a key factor to that end. Large computational savings are enabled, as the octree only knows the elements as bounding boxes and, until the very end of the algorithm, the distances computed are between x and the boxes. The number of distances from x to the elements themselves, which can be very expensive computationally, is limited to the elements whose bounding boxes intersect C_x. Browsing all the boxes contained inside OL_x to determine d_x might seem unnecessary, but if this step is skipped, the maximal theoretical distance to E_x is the distance to the furthest point of OL_x. Examining the boxes reduces the span of C_x, which may translate into a smaller selection of octree leaves and a reduced number of distances from x to elements. The computational cost of this stage, implying a few distance computations to bounding boxes, is usually worth the savings brought by the optimization of C_x. The usage of bounding boxes also brings easy generalization of the octree procedure: the selection of the closest elements, to which the distance from x is evaluated, is totally independent of the type of elements used. Heterogeneous sets can even be used, with, e.g., facets and fibers mixed. Figure 7a presents the refined octree defined in Figure 3, where all the leaves of the computational tree are colored in red. To compute the distance from a point P to ω, OL_P is determined and drawn in green in Figure 7b. All the bounding boxes of the fibers immersed in this leaf are browsed to determine d_P and C_P. The octree leaves intersecting this circle are determined and asterisked in Figure 7c. The intersection between the bounding boxes of the fibers contained in those leaves and C_P is examined, and if, and only if, an intersection is found, the distance from x to the fiber is determined. The same procedure is followed for points Q and R. These three examples depict the efficiency of the method in different situations (the most frequent situation being the one described by point R), where the number of distance evaluations to elements is largely reduced. Table 2 shows a large decrease despite the low number of fibers immersed, which reduces the efficiency of the method. The octree construction and closest-leaf determination costs are not included in this situation. However, the recursive construction and the distance-to-bounding-box determination are computationally cheap compared to the distance-to-fiber evaluation, which requires projections. When a deeper octree is used for a much bigger ω, evaluating distances to fibers becomes quite expensive, and the savings brought by the octree rise rapidly.
Parallel Anisotropic Mesh Adaptation
The octree-optimized mesh immersion procedure is an efficient way to represent geometries, provided that an accurate computational mesh is used, as stated in Section 3.1. The results obtained with this technique are highly dependent on the position of the mesh points, particularly at the interfaces. For this reason, a coupling between mesh immersion and the automatic generation of an anisotropic mesh is proposed in order to reduce the size of the problem to be treated. This iterative process starts with a coarse initial mesh, in which geometries are immersed and reconstructed using the methods proposed in Section 3.1. An a-posteriori error estimator [13,14] evaluates the error of the level-set results at each computational point, using the smoothed Heaviside function H_ε described in Equation (2). In order to generate an anisotropic mesh, a tensor is defined at each point, which measures the error along each direction. In other words, at each computational point, the variation of the function H_ε along each direction is observed.
The adaptation relies on a uniform distribution of the error along the edges of the mesh over the whole computational domain. A metric can be built that deforms the mesh so as to attain this uniform error: refinement is performed in the areas where the error is too large, while the mesh is coarsened where the error is low. As H_ε is defined from a hyperbolic tangent, the major gradient variations are found around the interfaces, while the function is almost constant far from them. Consequently, around the interfaces, short edges are required to attain errors equivalent to those obtained with large edges where the gradients are almost null. The new mesh will therefore feature more nodes in the zones of interest, and the reconstruction will gain precision. As the metric is built as a tensor, different stretching factors are used for each direction, which guarantees anisotropic meshing. A minimal sketch of such a metric construction is given below.
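The sketch below builds nodal 2×2 metric tensors from the Hessian of a discretized H_ε on a structured grid: edge lengths scale like the inverse square root of the eigenvalues, so steep variations of H_ε receive short edges along the corresponding eigendirection. The tolerance and size bounds are hypothetical, and the actual estimator of [13,14] is edge-based and more elaborate; this is only an illustration of the principle.

```python
# Illustrative Hessian-based anisotropic metric on a structured grid.
import numpy as np

def anisotropic_metric(H, h, tol=1e-2, h_min=1e-3, h_max=1e-1):
    """2D nodal metric tensors from the Hessian of the field H (grid step h)."""
    Hx, Hy = np.gradient(H, h)
    Hxx, Hxy = np.gradient(Hx, h)
    Hyx, Hyy = np.gradient(Hy, h)
    metric = np.empty(H.shape + (2, 2))
    for idx in np.ndindex(H.shape):
        hess = np.array([[Hxx[idx], 0.5 * (Hxy[idx] + Hyx[idx])],
                         [0.5 * (Hxy[idx] + Hyx[idx]), Hyy[idx]]])
        w, v = np.linalg.eigh(hess)
        # prescribed edge size along each eigendirection, bounded to preserve
        # mesh quality (the limited stretching ratio mentioned in the text)
        sizes = np.clip(np.sqrt(tol / (np.abs(w) + 1e-30)), h_min, h_max)
        metric[idx] = v @ np.diag(1.0 / sizes**2) @ v.T
    return metric
```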
After several iterations, the error is uniformly distributed over the computational domain. Nodes are mostly concentrated around Γ, and the immersed geometry is well described. Highly stretched mesh cells can be found in regions where a very fine description is needed in one direction while the others do not require particular attention. However, the stretching ratio of the mesh cells is limited in order to ensure the convergence of the computations. The automatic and anisotropic mesh adaptation brings versatility and, at the same time, guarantees that the results obtained with the mesh immersion procedure will be accurate. The reduction in the number of points required for the reconstruction lowers both memory usage and computational costs. Coupled with an octree, an efficient optimization of the reconstruction is obtained. Moreover, this reconstruction process is executed in a multi-core context, combining the optimizations related to mesh adaptation and the octree with massively parallel computing. The parallelization is performed by iteratively alternating independent mesh adaptation within each partition and displacement of the interfaces between these partitions [15,16].
Weak Scalability Test of the Proposed Reconstruction Approach
To determine the scaling capability of the whole reconstruction procedure, weak scaling tests were performed on the Liger cluster of the Pays de la Loire region in western France (a BULL/Atos DLC720 cluster with 6384 Intel Xeon cores (Haswell and Cascade Lake) for the compute and visualization parallel procedures, a total of 36,608 GB of system memory, i.e., 5.33 GB per core, and an FDR InfiniBand interconnect (56 Gb/s)). Five microstructures were generated, as described previously, keeping the same geometrical characteristics of the fibers. To realize tests with a similar workload per processor, the size of the computational domain and the number of immersed fibers were increased in proportion to the number of cores used, as detailed in Table 3. The reconstruction process started from an initial coarse mesh and took 30 iterations with constant precision and fixed octree parameters. For the different test cases, an average number of mesh nodes per core of 3 × 10^5 was maintained, with the exception of test 1 (1.8 × 10^5 nodes), where the volume of fibers that extend outside the computational domain, and are therefore sliced, is significant, leading to a decrease in the number of nodes. The total time of the immersion and adaptation process as a function of the number of cores is represented in Figure 8. For an ideal weak scaling test, the run time is expected to stay constant while the workload is increased in direct proportion to the number of processors. In a real case, as shown in Figure 8, a deviation is observed due to communication and partitioning efforts. However, the variation in running time between the tests is relatively small (except for the first one, where the workload differs), which allows us to conclude that, for a scaled problem size, the domain reconstruction approach has good weak scalability. The sketch below illustrates how this efficiency can be quantified.
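A minimal sketch of the corresponding efficiency metric follows; the core counts and run times are placeholders, not the measured values of Table 3 or Figure 8.

```python
# Weak-scaling efficiency: ideal weak scaling keeps the reference run time,
# so efficiency = t_ref / t_n at each core count. All numbers are made up.
cores = [24, 48, 96, 192, 384]                 # hypothetical core counts
t_run = [210.0, 228.0, 235.0, 241.0, 250.0]    # wall-clock minutes (made up)

t_ref = t_run[0]
for n, t in zip(cores, t_run):
    print(f"{n:4d} cores: time {t:6.1f} min, "
          f"weak-scaling efficiency {t_ref / t:.2f}")
```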
Flow Simulation
The resulting mesh from the reconstruction process can be used to simulate various physical phenomena, such as those involved in fluid-structure interaction problems. Generally, for composite flow applications, an incompressible Stokes flow around the fibers is considered. Assuming a stationary regime and neglecting the volume forces, the variational form of the Stokes problem for the velocity field u and the pressure field p reads: find (u, p) such that, for all admissible test functions (v, q),

$$\int_\Omega 2\eta\, \varepsilon(\mathbf{u}) : \varepsilon(\mathbf{v})\, d\Omega - \int_\Omega p\, \nabla \cdot \mathbf{v}\, d\Omega = 0, \qquad \int_\Omega q\, \nabla \cdot \mathbf{u}\, d\Omega = 0, \quad (4)$$

where ε is the strain-rate tensor. A monolithic approach is used, i.e., the flow Equations (4) are solved on a single mesh defined over the whole computational domain Ω, regardless of the type of phase it contains. The different phases are distinguished by their physical properties, which are taken into account through a mixing law. A linear mixture relation is used for the viscosity η,

$$\eta = H_\varepsilon(\alpha)\, \eta_f + \left(1 - H_\varepsilon(\alpha)\right) \eta_s, \quad (5)$$

where H_ε(α) is the smoothed Heaviside function of Equation (2), which localizes the fluid phase (H_ε → 1) and the solid phase (H_ε → 0).
η_f and η_s are, respectively, the viscosities of the liquid and solid phases. η_s acts as a penalty parameter: when it is high enough, the shear rate in the penalized phase becomes close to zero and a rigid-body motion is recovered. This is a simple way to obtain results similar to those provided by an augmented Lagrangian method, where a Lagrange multiplier is used to impose a constraint on the solid phase to prevent its deformation [17]. To solve the system (4) with a finite element method, a stabilized approach of VMS type is employed [12]. The software used in this work is ICI-tech, developed at the High Performance Computing Institute (ICI) of Centrale Nantes and implemented for a massively parallel context. A minimal sketch of the mixing law is given below.
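The sketch assumes the sign convention α > 0 in the fluid, so that H_ε tends to one there; ε and the sample values mirror the orders of magnitude used later in the paper but are otherwise illustrative.

```python
# Minimal sketch of the linear viscosity mixing law of Equation (5), with a
# hyperbolic-tangent smoothed Heaviside built from the signed distance alpha.
import numpy as np

def heaviside_smooth(alpha, eps):
    """Smoothed Heaviside H_eps: ~1 in the fluid, ~0 in the solid (assumed)."""
    return 0.5 * (1.0 + np.tanh(alpha / eps))

def viscosity(alpha, eta_f, eta_s, eps):
    """Linear mixture: eta_f in the fluid, the penalty value eta_s in fibers."""
    H = heaviside_smooth(alpha, eps)
    return H * eta_f + (1.0 - H) * eta_s

# e.g., eta_s = 500 * eta_f as imposed in the validation case below
eta = viscosity(np.array([-2e-6, 0.0, 2e-6]),
                eta_f=1.0, eta_s=500.0, eps=3.125e-6)
```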
Permeability Computation Procedure
Predicting permeability is a very important issue in the field of composite forming processes. However, obtaining reliable results either experimentally or numerically is tricky and complex, because most simulations are carried out on small periodic representative elementary volumes, under many simplifying assumptions that idealize the real media. Here, we chose to take up the challenge of numerically determining the permeability tensor of a large virtual sample of fibrous media that imitates sophisticated real media. In three-dimensional cases, the permeability is characterized by a symmetric second-order tensor K, which relates the average fluid velocity ⟨u⟩ to the average pressure gradient over the fluid domain ⟨∇p⟩_f through the Darcy law:

$$\langle \mathbf{u} \rangle = -\frac{\mathbf{K}}{\eta}\, \langle \nabla p \rangle_f.$$

Using a monolithic approach with a finite element discretization, the homogenized velocity and pressure-gradient fields are written as sums of their integrals over each mesh element Ω_e of the simulation domain Ω:

$$\langle \mathbf{u} \rangle = \frac{1}{V_\Omega} \sum_{\Omega_e \subset \Omega} \int_{\Omega_e} \mathbf{u}\, dV, \qquad \langle \nabla p \rangle_f = \frac{1}{V_{\Omega_f}} \sum_{\Omega_e \subset \Omega_f} \int_{\Omega_e} \nabla p\, dV,$$

where V_Ω is the volume of the total domain and V_{Ω_f} is the volume of the fluid domain.
To predict the permeability, the proposed simulation procedure relies on microstructure generation, phase reconstruction, mesh adaptation, and resolution of the Stokes equations, considering that the fibers are static and impermeable. To determine all components of K, three flows along the three directions x, y, and z are successively simulated, each labeled by an exponent {1, 2, 3}. Each flow is induced by an imposed pressure gradient: depending on the desired flow direction, a constant pressure field is imposed on the input face of the simulation domain against a null field on the output face. On the other faces of the domain, only the normal component of the velocity field is imposed as null. Assuming that the permeability tensor is symmetric and positive definite, its components can be calculated by solving the overdetermined linear system (9) relating the three averaged velocities to the three averaged pressure gradients through the Darcy law. The solution obtained from this matrix system is, of course, approximate. To ensure a perfect symmetry of K when necessary, the off-diagonal terms are symmetrized, e.g., K_ij ← (K_ij + K_ji)/2. A sketch of this identification step is given below.
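In the sketch, the overdetermined system (9) is reduced, for brevity, to a direct solve of K g_j = −η ⟨u⟩_j for the three simulated flows, followed by the off-diagonal symmetrization just mentioned; the input values are placeholders, not simulation output.

```python
# Hedged sketch of the permeability identification step.
import numpy as np

def identify_K(grad_p, u_avg, eta):
    """Rows of grad_p / u_avg: averaged pressure gradient and velocity of one
    of the three flows. Returns the symmetrized permeability tensor K."""
    G = np.array(grad_p).T                     # columns: imposed gradients
    U = np.array(u_avg).T
    K = np.linalg.solve(G.T, (-eta * U).T).T   # solves K @ G = -eta * U
    return 0.5 * (K + K.T)                     # enforce exact symmetry of K

# Example with a near-diagonal response (hypothetical numbers):
K = identify_K(grad_p=np.eye(3) * 1e5,                      # Pa/m
               u_avg=-np.array([[2.0e-4, 1e-6, 0.0],
                                [1e-6, 2.1e-4, 0.0],
                                [0.0, 0.0, 1.9e-4]]),       # m/s
               eta=0.1)                                     # Pa.s
```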
Permeability Computation Validation
To validate the permeability computation, the whole procedure was applied to a parallel square packing of fibers of identical diameter. The rigidity of the fibers was ensured by imposing η_s = 500η_f together with a zero-velocity condition upon them. Figure 9a shows the geometry configuration used for V_f = 25.65%. The calculated permeability tensor, made dimensionless by the square of the fiber radius (Equation (11)), respects a transversely isotropic form, as expected from the symmetry of the packing.
The evolution of the permeability with the fiber volume fraction was studied by varying the fiber diameter while keeping the same domain size for all simulations. The results obtained for the normalized transverse permeability are reported in Figure 9b and compared to the models of [18-20]. The permeability values observed in this graph are of the same order as those obtained from the analytical laws, which supports the relevance of our approach. The first step of the process is the microstructure generation using the octree-optimized algorithm described in Section 2. A sample of approximately 10,000 (exactly 10,062) collision-free fibers is created in a cubic domain with an edge length of 1.35 mm. The fibers have a common diameter of 15 µm and a length that follows a normal distribution of mean 0.2 mm and standard deviation 0.03 mm. The obtained volume fraction is V_f = 14%. The orientation state is nearly isotropic and is characterized by the second-order orientation tensor a_2 [21] (see the sketch after this paragraph). Figure 10 shows the set of generated fibers. Although the generation is sequential, these fibers are created in only 1 min 44 s thanks to the octree contribution.
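The sketch below shows how the orientation tensor a_2 of [21] is computed from unit fiber direction vectors, together with a fiber-length sample matching the stated normal distribution (mean 0.2 mm, standard deviation 0.03 mm); the random directions are illustrative stand-ins for the generated microstructure.

```python
# Second-order orientation tensor a2 from unit fiber directions.
import numpy as np

rng = np.random.default_rng(0)
M = 10_062                                          # number of fibers
p = rng.normal(size=(M, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)       # isotropic unit directions
lengths = rng.normal(loc=0.2, scale=0.03, size=M)   # fiber lengths in mm

a2 = np.einsum('mi,mj->ij', p, p) / M               # a2 = <p (x) p>
print(a2)   # ~ diag(1/3, 1/3, 1/3) for a nearly isotropic orientation state
```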
Microstructure Reconstruction with Adaptive Mesh
The computation was performed on 384 cores. Starting from an initial mesh of ≈4.6 million nodes and ≈27 million elements, an adapted final mesh of ≈67 million nodes and ≈391 million elements is created after 30 iterations by the methods described in Sections 3.1 and 3.2. For H_ε with ε = 3.125 µm, the total immersion and adaptation process required 4 h 52 min for the 30 iterations. Figure 11 shows the evolution of the number of elements across the mesh-adaptation iterations, as well as the computational time. During the first iterations of immersion of the generated fibers in the initial mesh, the mesher adds a considerable number of elements, reaching a peak at the ninth iteration, in order to properly capture the geometries of all the fibers. Then, the mesher focuses on optimizing the mesh at the interfaces while respecting a mesh-quality criterion. Once an efficient mesh is achieved, the number of elements stabilizes. The time-evolution curve naturally follows the evolution of the mesh size. Three pressure gradients are applied to the constructed finite element mesh in order to generate the flows required for the identification of K. Figure 12 shows the pressure field and velocity vectors around the immersed fibers for the flow in the x direction. These results were obtained with a resolution time of the system (4) of approximately 7 min on 384 CPUs. The predicted full permeability tensor for this medium, made dimensionless by the square of the fiber radius, was obtained accordingly. For an isotropic material, only the three diagonal elements are non-null and they are equal. Here, the studied sample is nearly isotropic; accordingly, the obtained diagonal elements are quite similar and the off-diagonal elements are smaller by around two orders of magnitude.
Conclusions
The obtained results show our capability, thanks to an octree implementation, to handle large amounts of data as input to permeability simulations and to perform reliable finite element calculations on complex geometries. Through the proposed method, further studies can be conducted to better quantify the impact of the microstructural parameters on the permeability, thus avoiding problems related to the choice of the size of the simulation domains, which remains rather delicate to define, especially in the case of non-periodic geometries. We can also consider exploring the permeability of multiaxial fabrics of the non-crimp fabric (NCF) or textile type. Thanks to the several numerical optimizations, the permeability can thus be evaluated at the microscopic scale on several layers by representing the fibers inside the tows.
"Computer Science"
] |
Polaron Problems in Ultracold Atoms: Role of a Fermi Sea across Different Spatial Dimensions and Quantum Fluctuations of a Bose Medium
The notion of a polaron, originally introduced in the context of electrons in ionic lattices, helps us to understand how a quantum impurity behaves when immersed in and interacting with a many-body background. We discuss the impact of the impurities on the medium particles by considering feedback effects from polarons that can be realized in ultracold quantum gas experiments. In particular, we exemplify the modifications of the medium in the presence of either Fermi or Bose polarons. Regarding Fermi polarons, we present a corresponding many-body diagrammatic approach operating at finite temperatures and discuss how mediated two- and three-body interactions are implemented within this framework. Utilizing this approach, we analyze the behavior of the spectral function of Fermi polarons at finite temperature by varying impurity-medium interactions as well as spatial dimensions from three to one. Interestingly, we reveal that the spectral function of the medium atoms could be a useful quantity for analyzing the transition/crossover from attractive polarons to molecules in three dimensions. As for the Bose polaron, we showcase the depletion of the background Bose-Einstein condensate in the vicinity of the impurity atom. Such spatial modulations would be important for future investigations regarding the quantification of interpolaron correlations in Bose polaron problems.
Introduction
The quantum many-body problem, which is one of the central issues of modern physics, is encountered in various research fields such as condensed matter and nuclear physics. The major obstacle that prevents their adequate description stems from the presence of many degrees-of-freedom as well as strong correlations. The polaron concept, which was originally proposed by S. I. Pekar and L. Landau [1,2] to characterize electron properties in crystals, provides a useful playground for understanding related nontrivial many-body aspects of quantum matter and interactions. For instance, a key advantage of the polaron picture is that, under specific circumstances, it enables the reduction of a complicated many-body problem to an effective single-particle or a few-body one with renormalized parameters. In the last decade, the polaron concept has been intensively studied for two-component ultracold mixtures, where a minority component is embedded in a majority one (host) and becomes dressed by the low-energy excitations of the latter forming a polaron. Indeed, ultracold atoms, owing to the excellent controllability of the involved system parameters, are utilized to quantitatively determine polaron properties, as has been demonstrated in a variety of relevant experimental efforts. These include, for instance, the measurement of the quasiparticle excitation spectra [3][4][5][6][7][8][9][10][11][12], monitoring the quantum dynamics of impurities [13,14], the observation of a phononic Lamb shift [15], the estimation of relevant thermodynamic quantities [16,17], the identification of medium induced interactions [18,19], and polariton properties [20][21][22].
Polarons basically appear in two different types, namely Fermi and Bose polarons, where the impurity atoms are immersed in a Fermi sea and a Bose-Einstein condensate (BEC), respectively. Both cases are experimentally realizable by employing a mixture of atoms residing in different hyperfine states or using distinct isotopes. The impurity-medium interaction strength can be flexibly adjusted with the aid of Feshbach resonances [23], and as such strong interactions between the impurity and the majority atoms can be achieved. Due to this non-zero interaction, the impurities are subsequently dressed by the elementary excitations of their background atoms, leading to a quasi-particle state called the polaron. In that light, the polaron and, more generally, quasiparticle generation is inherently related to the build-up of strong entanglement among the impurities and their background medium [24-26]. Moreover, since various situations such as mass-imbalanced [5], low-dimensional [6], and multi-orbital [11] ultracold settings can be realized, atomic polarons can also be expected to serve as quantum simulators of quasiparticle states in nuclear physics [27-31]. Recently, a Rydberg Fermi polaron has also been discussed theoretically [32].
In this work, we first provide a discussion on the role of the background atoms in many-polaron problems that are tractable in ultracold atom settings. Particularly, we present diagrammatic approaches to Fermi polaron systems and elaborate on how mediated two- and three-body interpolaron interactions are consistently taken into account within these frameworks [55,56]. Importantly, a comparison of the Fermi polaron excitation spectral function in three dimensions (3D) and at finite temperatures is performed among different variants of the diagrammatic T-matrix approach: the usual T-matrix approach (TMA), which is based on a self-energy including the repeated particle-particle scattering processes built from bare propagators [79,80]; the extended T-matrix approach (ETMA), where the bare propagator in the self-energy is partially replaced [81-83]; and the self-consistent T-matrix approach, where all the propagators in the self-energy are dressed [84,85]. We reveal how medium-induced interactions enter these approaches and examine their effects in mass-balanced Fermi polaron settings realized, e.g., in 6 Li atomic mixtures. Subsequently, we discuss the polaron excitation spectrum in two (2D) and one (1D) spatial dimensions. The behavior of the spectral function of the host and the impurities at strong impurity-medium interactions is exemplified. Finally, the real-space Bogoliubov approach to Bose polarons in 3D is reviewed. The latter allows us to unveil the condensate deformation due to the presence of the impurity and to appreciate the resultant quantum fluctuations [86]. We argue that the degree of the quantum depletion of the condensate decreases (increases) for repulsive (attractive) impurity-medium interactions, a result associated with the deformation of its density distribution. This is in contrast to homogeneous setups, where the depletion increases independently of the sign of the interaction.
This work is organized as follows. In Section 2, we present the model Hamiltonian describing ultracold Fermi polarons in 3D. For the Fermi polaron, we consider uniform systems and develop the concept of the diagrammatic T-matrix approximation. After explaining the ingredients of the diagrammatic approaches in some detail, we clarify how mediated two- and three-body interactions are incorporated in these approaches. The behavior of the resultant polaron spectral function at finite temperatures and impurity concentrations in three, two, and one dimensions is discussed. In Section 3, we utilize the real-space mean-field formulation for Bose polarons and expose the presence of quantum depletion for the three-dimensional trapped Bose polaron at zero temperature. In Section 4, we summarize our results and provide future perspectives. For convenience, in what follows, we use k_B = ħ = 1.
T-Matrix Approach to Fermi Polaron Problems
Here we explain the concept of many-body diagrammatic approaches to Fermi polarons, namely settings where fermionic impurity atoms are immersed in a uniform Fermi gas. Since such a two-component Fermi mixture mimics spin-1/2 electrons, we denote the bath component by σ = B = ↑ and the impurity one by σ = I = ↓. Note that these are standard conventions without loss of generality. The model Hamiltonian describing this system reads

$$H = \sum_{\mathbf{p},\sigma} \xi_{\mathbf{p},\sigma}\, c^{\dagger}_{\mathbf{p},\sigma} c_{\mathbf{p},\sigma} + g \sum_{\mathbf{p},\mathbf{p}',\mathbf{q}} c^{\dagger}_{\mathbf{p}+\mathbf{q},\uparrow}\, c^{\dagger}_{\mathbf{p}'-\mathbf{q},\downarrow}\, c_{\mathbf{p}',\downarrow}\, c_{\mathbf{p},\uparrow},$$

where ξ_{p,σ} = p²/(2m_σ) − µ_σ is the kinetic energy minus the chemical potential µ_σ, and m_σ is the atomic mass of the σ component. The parameters c_{p,σ} and c†_{p,σ} refer to the annihilation and creation operators of a σ-component fermion, respectively, possessing momentum p, and g is the impurity-medium coupling constant.
We parametrize the effective coupling constant g of the contact-type interaction between the two fermion components by the low-energy scattering parameter, namely the scattering length a. In 3D, it is known [87] that the coupling constant g_3D and the scattering length a are related via

$$\frac{m_r}{2\pi a} = \frac{1}{g_{3\mathrm{D}}} + \sum_{|\mathbf{p}| \leq \Lambda} \frac{2 m_r}{p^2},$$

with m_r^{−1} = m_↑^{−1} + m_↓^{−1} being the reduced mass. In this expression, the momentum cutoff Λ is introduced to avoid an ultraviolet divergence in the momentum summation of the Lippmann-Schwinger equation expressed in momentum space. This yields an effective short-range interaction of finite range r_e ∝ 1/Λ. Similar relations connect the coupling constants g_2D and g_1D with the corresponding scattering lengths a_2D and a_1D in 2D and 1D, respectively [88].
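As a numerical illustration of this regularization (under the convention reconstructed above, with the momentum sum replaced by its integral, Σ_p^Λ 2m_r/p² → m_r Λ/π²), the following sketch shows how the bare coupling g_3D tends to zero from below as the cutoff grows while a is held fixed; the parameter values are illustrative, in units with ħ = 1.

```python
# Cutoff regularization of the 3D contact coupling: a is cutoff-independent,
# g_3D is not.
import numpy as np

def g3d(a, m_r, cutoff):
    """Bare coupling reproducing scattering length a at momentum cutoff."""
    return 1.0 / (m_r / (2.0 * np.pi * a) - m_r * cutoff / np.pi**2)

m_r = 0.5          # reduced mass of an equal-mass mixture (m = 1)
for cutoff in [1e2, 1e3, 1e4]:
    print(cutoff, g3d(a=1.0, m_r=m_r, cutoff=cutoff))   # g -> 0^- as cutoff grows
```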
First, we introduce the thermal single-particle Green's function [89]

$$G_\sigma(\mathbf{p}, i\omega_n) = \frac{1}{i\omega_n - \xi_{\mathbf{p},\sigma} - \Sigma_\sigma(\mathbf{p}, i\omega_n)},$$

where ω_n = (2n + 1)πT is the fermion Matsubara frequency introduced within the finite-temperature T formalism and n ∈ ℤ [89]. The effect of the impurity-medium interaction is taken into account in the self-energy Σ_σ(p, iω_n). The excitation spectrum A_↓(p, ω) of a Fermi polaron can be obtained via the retarded Green's function G^R_↓(p, ω) = G_↓(p, iω_n → ω + iδ) (where δ is a positive infinitesimal) through analytic continuation [89]. In particular, it can be shown that

$$A_\downarrow(\mathbf{p}, \omega) = -\frac{1}{\pi}\,\mathrm{Im}\, G^R_\downarrow(\mathbf{p}, \omega).$$

Experimentally, this quantity can be monitored by using a radio-frequency (rf) spectroscopy scheme, where the atoms are transferred from their thermal equilibrium state to a specific spin state which interacts with the medium [90]. Indeed, the reverse rf response I_r(ω) [10] and the ejection one I_e(ω) [16] (Equations (6) and (7) in the original numbering) are obtained as momentum sums of the impurity spectral function weighted by Fermi distribution functions, with a prefactor set by the Rabi frequency Ω_Rabi; here, ξ_{p,i} represents the kinetic energy of the initial state in the reverse rf scheme. Importantly, the self-energy Σ_↑(p, iω_n) of the background plays an important role in describing the mediated interpolaron interactions. This fact will be evinced below, and it is achieved by expanding Σ_↑(p, iω_n) with respect to G_σ and G^0_σ. The chemical potentials µ_σ are kept fixed by imposing the particle number conservation condition

$$N_\sigma = T \sum_{\mathbf{p}, i\omega_n} e^{i\omega_n \delta}\, G_\sigma(\mathbf{p}, i\omega_n).$$

Moreover, in the remainder of this work, we define the impurity concentration as x = N_↓/N_↑. Additionally, within the TMA [34,54], the self-energy Σ_σ(p, iω_n) of the σ component reads

$$\Sigma_\sigma(\mathbf{p}, i\omega_n) = T \sum_{\mathbf{q}, i\nu_\ell} \Gamma(\mathbf{q}, i\nu_\ell)\, G^0_{-\sigma}(\mathbf{q} - \mathbf{p}, i\nu_\ell - i\omega_n),$$

where Γ(q, iν_ℓ) is the many-body T-matrix, as diagrammatically shown in Figure 1a, with the boson Matsubara frequency iν_ℓ = 2πTℓ (ℓ ∈ ℤ). Here, G^0_σ(p, iω_n) = (iω_n − ξ_{p,σ})^{−1} is the bare thermal single-particle Green's function. Furthermore, by adopting the ladder approximation illustrated in Figure 1d, the T-matrix Γ(q, iν_ℓ) is given by

$$\Gamma(\mathbf{q}, i\nu_\ell) = \frac{g}{1 - g\, \Pi(\mathbf{q}, i\nu_\ell)},$$

where

$$\Pi(\mathbf{q}, i\nu_\ell) = T \sum_{\mathbf{p}, i\omega_n} G^0_\uparrow(\mathbf{p} + \mathbf{q}, i\omega_n + i\nu_\ell)\, G^0_\downarrow(-\mathbf{p}, -i\omega_n)$$

is the lowest-order particle-particle bubble. The latter describes a virtual particle-particle scattering process associated with the impurity-medium interaction g, which is replaced by g_3D, g_2D, and g_1D in 3D, 2D, and 1D, respectively. Note that in Equation (10) the impurity-impurity interaction is not taken into account. The extended T-matrix approach (ETMA) [55] constitutes an improved approximation that takes the induced polaron-polaron interactions into account in a self-consistent way. In this method, as depicted in Figure 1b, we include higher-order correlations by replacing the bare Green's function G^0_{−σ} in Equation (10) with the dressed one G_{−σ}, namely

$$\Sigma^E_\sigma(\mathbf{p}, i\omega_n) = T \sum_{\mathbf{q}, i\nu_\ell} \Gamma(\mathbf{q}, i\nu_\ell)\, G_{-\sigma}(\mathbf{q} - \mathbf{p}, i\nu_\ell - i\omega_n).$$

Figure 1. Γ and Γ_S are the many-body T-matrices, whose perturbative expansions are shown schematically in (d,e), consisting of bare and dressed propagators G^0_σ and G_σ, respectively. While in the TMA all the lines in the self-energy (a) consist of G^0_σ, they are replaced with G_σ partially (upper loop of (b)) in the ETMA and fully in the SCTMA (c) (see also (e), where G^0_σ is replaced by G_σ compared to (d)).

Importantly, the TMA and ETMA approaches are equivalent to each other in the single-polaron limit, i.e., x → 0, where the self-energy of the fermionic medium Σ^E_↑ (capturing the difference between G^0_↑ and G_↑ in Equations (10) and (13), respectively) is negligible. Additionally, at zero temperature, these two treatments coincide with the variational ansatz proposed by F. Chevy [33]. A toy illustration of the analytic continuation step is given below.
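The sketch evaluates A(p = 0, ω) = −(1/π) Im G^R for a hypothetical single-pole model self-energy; it illustrates the substitution iω_n → ω + iδ and the sum rule, and is not a TMA/ETMA/SCTMA computation. All parameter values are made up.

```python
# Toy spectral function from analytic continuation with a model self-energy.
import numpy as np

delta = 1e-2                      # positive infinitesimal, kept small but finite
omega = np.linspace(-3.0, 3.0, 2001)
xi_p = 0.0                        # impurity kinetic energy minus mu at p = 0

def sigma(w):
    """Model self-energy with a single pole (illustrative stand-in)."""
    return 0.25 / (w + 1j * delta + 1.5)

G_R = 1.0 / (omega + 1j * delta - xi_p - sigma(omega))
A = -G_R.imag / np.pi             # impurity spectral function A(p=0, omega)
print(np.trapz(A, omega))         # sum rule: integrates to ~1
```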
Recall that, in the single-polaron limit at T = 0, µ_↑ = E_F and µ_↓ = E_P^(a). Proceeding one step further, it is possible to construct the so-called self-consistent T-matrix approach (SCTMA) [56,91,92], which deploys a many-body T-matrix Γ_S composed of dressed propagators, as schematically shown in Figure 1e. In particular, the corresponding T-matrix is given by

$$\Gamma_S(\mathbf{q}, i\nu_\ell) = \frac{g}{1 - g\, \Pi_S(\mathbf{q}, i\nu_\ell)},$$

where

$$\Pi_S(\mathbf{q}, i\nu_\ell) = T \sum_{\mathbf{p}, i\omega_n} G_\uparrow(\mathbf{p} + \mathbf{q}, i\omega_n + i\nu_\ell)\, G_\downarrow(-\mathbf{p}, -i\omega_n)$$

describes the scattering between dressed medium atoms (G_↑) and dressed impurities, i.e., polarons (G_↓). This is in contrast to Equation (12) used in the ETMA, which consists of G^0_σ and thus represents the impurity-medium scattering of bare atoms only. Using this T-matrix, we can express the SCTMA self-energy Σ^S_σ (see also Figure 1c) as

$$\Sigma^S_\sigma(\mathbf{p}, i\omega_n) = T \sum_{\mathbf{q}, i\nu_\ell} \Gamma_S(\mathbf{q}, i\nu_\ell)\, G_{-\sigma}(\mathbf{q} - \mathbf{p}, i\nu_\ell - i\omega_n).$$

We note that within the ETMA, the impurity self-energy Σ^E_↓ can be rewritten as the TMA result plus a higher-order correction, Σ^E_↓ = Σ_↓ + δΣ_↓, with the correction δΣ_↓(p, iω_n) beyond the TMA being

$$\delta\Sigma_\downarrow(\mathbf{p}, i\omega_n) = T \sum_{\mathbf{p}', i\omega_{n'}} V_{\mathrm{eff}}(\mathbf{p}, i\omega_n, \mathbf{p}', i\omega_{n'}; \mathbf{p}, i\omega_n, \mathbf{p}', i\omega_{n'})\, G_\downarrow(\mathbf{p}', i\omega_{n'}). \quad (18)$$

In this expression, V_eff(p_1, iω_{n_1}, p_2, iω_{n_2}; p'_1, iω_{n'_1}, p'_2, iω_{n'_2}) represents the induced impurity-impurity interaction (diagrammatically shown in Figure 2a) with the indicated incoming and outgoing momenta and Matsubara frequencies. Here, δ_{i,j} is the Kronecker delta imposing the energy and momentum conservation in the two-body scattering. The self-energy Σ^S_↓ of the impurities within the SCTMA additionally involves a contribution of induced three-impurity correlations due to the dressed pair propagator Π_S. It can again be decomposed as Σ^S_↓ = Σ^E_↓ + δΣ^S_↓, where δΣ^S_↓ collects the terms beyond the ETMA. Here we defined δΠ(q, iν_ℓ) = Π_S(q, iν_ℓ) − Π(q, iν_ℓ), which represents the difference between the medium-impurity and the medium-polaron pair propagators; in the last step of Equation (22), we assumed that G_↑ ≃ G^0_↑ and Σ^S_↓ ≃ 0. Thus, one finds a three-body correlation effect beyond the ETMA, as shown in Figure 2b and captured by the induced three-polaron interaction term V_eff(p_1, iω_{n_1}, p_2, iω_{n_2}, p_3, iω_{n_3}; p'_1, iω_{n'_1}, p'_2, iω_{n'_2}, p'_3, iω_{n'_3}). Its explicit form contains the Kronecker deltas δ_{p_1+p_2+p_3, p'_1+p'_2+p'_3} and δ_{n_1+n_2+n_3, n'_1+n'_2+n'_3}, imposing total momentum and energy conservation in the three-body scattering. From the above discussion, it becomes evident how the medium-induced two-body and three-body interpolaron interactions are included in the ETMA and SCTMA treatments.
Recall that in the TMA the interpolaron interaction is not taken into account. Even so, observables such as thermodynamic quantities (e.g., particle number density) and spectral functions obtained via rf spectroscopy can in principle provide indications of the effect of interpolaron interactions through Σ σ (p, iω n ).
Spectral Response of Fermi Polarons
In the following, we present and discuss the behavior of the spectral function of Fermi polarons for temperatures ranging from zero to the Fermi temperature of the majority component, as well as for spatial dimensions from three to one. For simplicity, we consider a mass-balanced fermionic mixture, i.e., m_↑ = m_↓ ≡ m. The latter is experimentally relevant, for instance, for two different hyperfine states at thermal equilibrium, e.g., |F = 1/2, m_F = +1/2⟩ and |F = 3/2, m_F = −3/2⟩ of 6 Li. In this notation, F and m_F are the total angular momentum and its projection, respectively, of the specific hyperfine state [10].
Three-Dimensional Case
The resultant spectral function A_σ(p = 0, ω) of the fermionic medium (σ = ↑) and the impurities (σ = ↓) is depicted in Figure 3 as a function of the single-particle energy ω. Here, we consider a temperature T = 0.3T_F, impurity concentration x = 0.1, and an impurity-medium interaction at unitarity, i.e., (p_F a)^{−1} = 0, where T_F = p_F²/(2m_↑) is the Fermi temperature and p_F the Fermi momentum. Evidently, the spectral function of the majority component (Figure 3a) exhibits a peak around ω + µ_↑ = 0 in all three diagrammatic approaches introduced in Section 2. The sharp peak around ω + µ_↑ = 0 corresponds to the spectrum of the bare medium atoms, given by A(p, ω) = δ(ω − ξ_{p,↑}) at p = 0. This indicates that the imprint of the impurity-medium interaction on the fermionic host is negligible for such a small impurity concentration x = 0.1; see also the discussion below. Indeed, the renormalization of µ_↑ (which essentially evinces the backaction on the majority atoms from the impurities) in the ETMA at unitarity is proportional to x, with a numerical prefactor of 0.526 (Equation (25)) [55]. It can be shown that in the weak-coupling limit this shift is given by the Hartree correction [89]. However, at the unitarity limit presented in Figure 3, such a weak-coupling approximation cannot be applied, and therefore the factor 0.526 in Equation (25) originates from the strong correlations between the majority- and minority-component atoms. The corresponding polaronic excitation spectrum is captured by A_↓(p = 0, ω), which exhibits a sharp peak at ω + µ_↓ = E_P^(a) < 0, where E_P^(a) is the attractive polaron energy. Since this peak is located at negative energies, it indicates the formation of an attractive Fermi polaron. This observation can be understood from the fact that, in the absence of impurity-medium interactions, the bare-particle pole, namely the pole of the bare retarded single-particle Green's function G^{0,R}_↓(p = 0, ω) = (ω + iδ + µ_↓)^{−1}, occurs at ω + µ_↓ = 0. Recall that, in general, for finite temperatures T and impurity concentrations x, µ_↓ ≠ E_P^(a), in contrast to the single-polaron limit at T = 0 [55]. Additionally, a weak-amplitude peak appears in A_↓(p = 0, ω) at positive energies ω ≈ E_F. It stems from the metastable upper branch of the impurities, where excited atoms interact repulsively with each other. This peak becomes sharper at positive scattering lengths away from unitarity. Indeed, for positive scattering lengths, the quasi-particle excitation called a repulsive Fermi polaron emerges [25]. Figure 4a presents the polaron spectral function A_↓(p = 0, ω) with respect to the interaction parameter (p_F a)^{−1} obtained within the ETMA at T = 0.03T_F and x = O(10^{−4}). From the positions of the poles of G^R_↓(p = 0, ω), one can extract two kinds of polaron energies, namely E_P^(a) and E_P^(r), corresponding to the attractive and the repulsive polaron energies, respectively. The interaction dependence of these energies is provided in Figure 4b. E_P^(r) approaches the Hartree shift Σ^H_↓ = (4πa/m) N_↑ without the imaginary part of the self-energy (which is responsible for the width of the spectra) and eventually goes to zero [25]. Indeed, the spectrum in Figure 4a shows that the peak of the repulsive polaron at ω + µ_↓ > 0 becomes sharper with increasing (p_F a)^{−1}, indicating a vanishing imaginary part of the self-energy. On the other hand, E_P^(a) decreases with increasing (p_F a)^{−1}, as reflected by the position of the low-energy peak (where ω + µ_↓ < 0) in Figure 4a.
Eventually, the attractive polaron undergoes the transition to a molecule, as we discuss below. Indeed, an important issue here is that, in the strong-coupling regime, the attractive polaron transforms into a molecular state with increasing impurity-bath attraction [93]. Although this transition was originally predicted to be of first order, recent experimental and theoretical studies revealed an underlying crossover behavior and a coexistence between polaronic and molecular states [17]. We note that for finite impurity concentrations, a BEC of molecules can appear at low temperatures; see also Equations (26) and (27) below. It is also a fact that the interplay among a molecular BEC, thermally excited molecules, and polarons may occur at finite temperatures [94]. In the calculation of the attractive polaron energy E_P^(a) for different coupling strengths (Figure 4b), however, we do not encounter the molecular BEC transition, identified by the Thouless criterion, i.e., the divergence of Γ(q = 0, iν_ℓ = 0) (Equation (26)) [95]. In particular, in the strong-coupling limit, Equation (26) combined with the particle number conservation (Equation (8)) yields a molecular BEC temperature satisfying [96] T_BEC ≃ (2π/m_M)[xN_↑/ζ(3/2)]^{2/3}, i.e., the ideal-gas condensation temperature of molecules of mass m_M = 2m and density xN_↑, where ζ(3/2) ≈ 2.612 is the zeta function. Since we consider a small impurity concentration, T_BEC lies well below the temperatures examined here. According to the above description, induced polaron-polaron interactions, which are mediated by the host atoms and taken into account within the ETMA and SCTMA methods as explicated in Section 2, are weak in the present mass-balanced fermionic mixture. These finite-temperature findings are consistent with previous theoretical works [51-53] predicting a spectral shift of the polaron energy ∆E = F E_FG x with F = 0.1-0.2 at T = 0 (where E_FG is the ground-state energy of a non-interacting single-component Fermi gas at T = 0), as well as with the experimental observations of Ref. [4]. On the other hand, the presence of induced polaron-polaron interactions in the repulsive polaron scenario cannot be observed experimentally [10], a result that is further supported by recent studies based on diagrammatic approaches [55].
Furthermore, the spectral deviations between the TMA and ETMA treatments represent the effect of induced two-body interpolaron interactions in the attractive polaron case. However, in our case there is no sizable shift between the spectral lines predicted by these approaches (Figure 3b). Indeed, the induced two-body energy is estimated to be of the order of 10^{−3} E_FG at x = 0.1. The induced three-body interpolaron interaction, which is responsible for the difference between the ETMA and SCTMA results, has a sizable effect on the width of the polaron spectra. We remark that at T = 0.3T_F and x = 0.1 (Figure 3b), although the minority atoms basically obey Boltzmann statistics, since their temperature is higher than the Fermi degeneracy temperature T_{F,↓} = (6π²N_↓)^{2/3}/(2m) [55], namely T = 0.3T_F ≈ 1.39T_{F,↓}, effects of the strong medium-impurity interaction on the polaron spectra are present, manifesting for instance as a corresponding broadening. Although the SCTMA treatment tends to overestimate the polaron energy, the observed full-width-at-half-maximum (FWHM) of the rf spectrum, given by 2.71(T/T_F)² [16], is well reproduced by this approach: it yields 2.95(T/T_F)², whereas the FWHM in the ETMA is 1.61(T/T_F)² [56]. We should also note that the decay rate related to the FWHM for repulsive polarons, as extracted using the TMA (and equivalently the ETMA), agrees quantitatively with the experimental result of Ref. [10]. For the attractive polaron, the quantitative agreement between the experiment and these diagrammatic approaches breaks down at high temperatures. For instance, the recent experiment of Ref. [16] showed that the transition from polarons to the Boltzmann gas occurs at T ≈ 0.75T_F, while the prediction of the diagrammatic approaches lies above T_F [56]. Besides the fact that such polaron decay properties may be related to multi-polaron scattering events leading to many-body dephasing [12], further detailed polaron investigations at various temperatures and interaction strengths are necessary to understand the underlying physics of the observed polaron-to-Boltzmann-gas transition.
The dependence of the polaron spectra A_↓(p, ω) on the energy and the momentum of the impurities is illustrated in Figure 5 for T = 0.2T_F, x = 0, and (p_F a)^{−1} = 0. To infer the impact of multi-polaron correlations on the spectrum, we explicitly compare A_↓(p, ω) between the ETMA and SCTMA methods. As can be seen, A_↓(p, ω) exhibits a sharp peak associated with the attractive polaron state, whose position shows an almost quadratic dependence on the impurity momentum. It is also apparent that the SCTMA spectrum (Figure 5b) at low momenta is broadened compared to the ETMA one (Figure 5a) due to the induced beyond-two-body interpolaron correlations, e.g., three-body ones. At small impurity momenta, the spectral peak of the attractive Fermi polaron within the present model, as described by Equation (1), takes the usual quasiparticle form, with weight Z_a and dispersion E_P^(a) + p²/(2m*_a), where Z_a and m*_a are the quasiparticle residue [25] and the effective mass of the attractive polaron, respectively. At unitarity, it holds that Z_a ≈ 0.8, m*_a ≈ 1.2m, and E_P^(a) ≈ −0.6E_F within the zero-temperature and single-polaron limits [34]. The behavior of these quantities has been intensively studied in recent experiments [3,4,10], and adequate agreement has been reported with various theories. For instance, Chevy's variational ansatz (equivalent to the TMA at T = 0 and x → 0) [33,34] gives Z_a = 0.78, m*_a = 1.17m, and E_P^(a) ≃ −0.61E_F. In this sense, the corresponding values of these quantities can nowadays be regarded as important benchmarks, especially for theoretical approaches. It is also worth mentioning that higher-order diagrammatic approximations such as the SCTMA do not necessarily lead to improved accuracy for the values of relevant observables. In particular, a detailed comparison between the predictions of the TMA and the SCTMA was presented in Ref. [54], demonstrating that the former adequately estimates the experimentally observed polaron energy, whereas the SCTMA overestimates its magnitude in the strong-coupling regime. Moreover, the diagrammatic Monte Carlo method based on bare Green's functions in the self-energies exhibits better convergence than the ones employing dressed Green's functions, due to an approximate cancellation of higher-order diagrams [44]. As such, the partial inclusion of higher-order diagrams by replacing the bare Green's functions with dressed ones may lead to an overestimation of the molecule-molecule and polaron-molecule scattering lengths in the strong-coupling regime [56]. A hedged numerical sketch of how Z_a and m*_a can be extracted from a given self-energy follows.
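The sketch below extracts Z and m* from a toy self-energy by locating the pole of ω = p²/(2m) + Re Σ(p, ω) and fitting the low-momentum dispersion with a parabola; the model Re Σ and all numbers are hypothetical stand-ins for actual TMA output.

```python
# Quasiparticle residue and effective mass from a model self-energy.
import numpy as np
from scipy.optimize import brentq

m = 1.0

def re_sigma(p, w):
    """Toy Re Sigma(p, omega); a smooth stand-in, not a diagrammatic result."""
    return -0.6 - 0.05 * w + 0.1 * p**2

def pole(p):
    """Solve omega = p^2/(2m) + Re Sigma(p, omega) for the pole position."""
    f = lambda w: w - p**2 / (2 * m) - re_sigma(p, w)
    return brentq(f, -5.0, 5.0)

w0, h = pole(0.0), 1e-5
# Z = [1 - d Re Sigma / d omega]^(-1), evaluated at the pole
Z = 1.0 / (1.0 - (re_sigma(0, w0 + h) - re_sigma(0, w0 - h)) / (2 * h))
# m* from a fit omega*(p) ~ w0 + p^2/(2 m*) at low momenta
ps = np.linspace(0.0, 0.2, 9)
coef = np.polyfit(ps**2, [pole(p) for p in ps], 1)
m_star = 1.0 / (2.0 * coef[0])
print(f"Z = {Z:.3f}, m*/m = {m_star / m:.3f}")
```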
As we demonstrated previously (see Figure 3), besides the fact that the spectral response within the SCTMA is broader compared to the one obtained in the ETMA, the two spectra feature qualitatively similar behavior. Indeed, both approaches evince that the spectra beyond p = p_F are strongly broadened. Recall that in this region of momenta the atoms of the majority component, which form the Fermi sphere, cannot follow the impurity atoms. This indicates that the dressed polaron state ceases to exist due to the phenomenon of Cherenkov instability [97,98], where the polaron moves faster than the speed of sound of the medium and consequently becomes unstable against the spontaneous emission of elementary excitations of the medium. Such a spectral broadening can also be observed in mesoscopic spin-transport measurements [99] and may also be related to the underlying polaron-Boltzmann-gas transition [16], since the contribution of high-momentum polarons is captured in rf spectroscopy through the thermal broadening of the Fermi distribution function in Equation (7) at high temperatures. Moreover, momentum-resolved photoemission spectra would reveal these effects across this transition.

Figure 5. While the two approaches predict qualitatively similar spectra, with a sharp peak at low momenta and broadening above p = p_F, the SCTMA result (b) shows a relatively broadened peak at low momenta compared to the ETMA one (a).
We remark that the medium spectral function A_↑(p, ω) is also useful to reveal the properties of strong-coupling polarons at finite temperature and impurity concentration. Figure 6 presents A_↑(p, ω) for various impurity-medium couplings ((p_F a)^{−1} = −0.4, 0, 0.4, 0.7, and 1.0) at T = 0.4T_F and x = 0.1. At (p_F a)^{−1} = −0.4 and (p_F a)^{−1} = 0, A_↑(p = 0, ω) features a single peak at ω + µ_↑ = 0. On the other hand, at intermediate couplings (p_F a)^{−1} = 0.4 and (p_F a)^{−1} = 0.7, a second peak appears around ω + µ_↑ = E_F besides the dominant spectral maximum. The latter evinces the backaction from the repulsive polaron, because the inset of Figure 6 shows that the repulsive polaron is located around ω + µ_↓ ≈ E_F. Moreover, at (p_F a)^{−1} = 1, another peak emerges in the low-energy region (ω + µ_↑ ≈ −3E_F). This low-energy peak elucidates the emergence of two-body molecules with binding energy E_b = 1/(ma²) due to the strong impurity-medium attraction. Concluding, the spectral function of the medium atoms can provide useful information on the recently observed smooth crossover from polarons to molecules [17]. Notice also that spectral and thermodynamic signatures of the polaron-molecule transition have recently been reported within a variational approach [100], while the associated molecule-hole continuum can be captured using the TMA [101].
In the following, we elaborate on the behavior of the spectral function of lower-dimensional Fermi polarons solely within the TMA. The latter provides an adequate description of the polaron formation in our case, since the induced interpolaron interaction [59,60] is weak in the considered mass-balanced system.

Figure 6. The inset shows the corresponding impurity spectral functions A_↓(p = 0, ω). While the sharp peak at ω + µ_↑ ≈ 0 in A_↑(p = 0, ω) is associated with the bare state, the small-amplitude side peaks at positive (ω + µ_↑ ≈ E_F) and negative energies (ω + µ_↑ ≈ −3E_F for the case with (p_F a)^{−1} = 1) originate from the backaction due to the impurities.
Spectral Response of Fermi Polarons in Two Dimensions
In two spatial dimensions, the attractive impurity-medium effective interaction g_2D < 0 is always accompanied by the existence of a two-body bound state whose energy scales as −1/(m a²_2D) [102]. Simultaneously, the repulsive polaron branch appears at positive energies [25], in addition to the attractive one located at negative energies. This phenomenology is similar to the case of a positive impurity-bath scattering length in 3D [101]. To elaborate on the typical spectrum of 2D Fermi polarons, below we employ a homogeneous Fermi mixture characterized by an impurity concentration x = 0.1, temperature T = 0.3T_F, and a typically weak dimensionless coupling parameter ln(p_F a_2D) = 0.4, where a_2D is the 2D scattering length introduced in Equation (3). The spectral response of both the fermionic background (A_↑(p, ω)) and the impurities (A_↓(p, ω)) for varying momenta and energies within the TMA is depicted in Figure 7. We observe that the small impurity concentration, x = 0.1, leads to the non-interacting dispersion of the majority spectrum, A_↑(p, ω) ≈ δ(ω − ξ_{p,↑}); see Figure 7a. In this case, therefore, the medium does not experience any appreciable backaction from the impurities. Importantly, a sizable backaction on the medium can indeed be identified for a larger impurity concentration and a smaller impurity-medium 2D scattering length, as shown in Figure 7(b1,b2), where T = 0.3T_F, x = 0.3, and ln(p_F a_2D) = 0. Moreover, since the repulsive interaction in the excited branch of the impurities (ω + µ_↓ ≈ E_F) is relatively strong, the impurity excitation spectrum at positive energies (ω + µ_↓ > 0) is largely broadened. We note that a stable repulsive polaron branch can be found for small a_2D. It also becomes evident that the impurity spectrum in 2D is broadened beyond p = p_F much more strongly than the 3D spectral response (Figure 5). Simultaneously, the intensity of the metastable impurity excitation in the repulsive branch becomes relatively strong in both the 2D and 3D cases. This result implies that fast-moving impurities do not dress the medium atoms and occupy the non-interacting excited states in such high-momentum regions.

Figure 7. While the majority atoms (a) exhibit the non-interacting dispersion ω + µ_↑ = p²/(2m), the minority atoms (b) form the attractive polaron at negative energies (ω + µ_↓ < 0) and a broadened peak associated with the repulsive impurity branch at positive energies (ω + µ_↓ > 0). For comparison, we provide the spectral functions of the medium (b1) and the impurities (b2) in the case of T = 0.3T_F, x = 0.3, and ln(p_F a_2D) = 0. Evidently, the feedback on the medium from the impurities is enhanced in the low-momentum region (p ≈ 0).
Fermi Polarons in One Dimension
In one spatial dimension, the quasiparticle notion is somewhat more involved than in higher dimensions. Interestingly, various experiments can nowadays realize 1D ensembles and thus probe the properties of the emergent quasiparticles. Below, we provide spectral evidence of 1D Fermi polarons; in particular, we calculate the respective A_σ(p, ω) (Figure 8) for the background fermionic medium and the minority atoms within the T-matrix approach, including the Hartree correction. The system has an impurity concentration x = 0.326, lies at temperature T = 0.157T_F, and the 1D dimensionless coupling parameter for the impurity-medium attraction is (p_F a_1D)^{−1} = 0.28 in Figure 8(a1,a2). For comparison, we also provide A_σ(p, ω) in Figure 8(b1,b2) for the repulsive interaction case (p_F a_1D)^{−1} = −0.55 with system parameters x = 0.264 and T = 0.598T_F. We remark that the impurity-medium attraction considered here is weak, such that the induced interpolaron interactions are negligible. In this sense, we do not expect significant deviations when considering the ETMA or even the SCTMA.

Figure 8. The system is at temperature T = 0.157T_F and dimensionless coupling parameter (p_F a_1D)^{−1} = 0.28. p_T = √(2mT) is the momentum scale associated with the temperature T. The vertical dashed line marks the Fermi momentum p = p_F of the background atoms. The majority component (a1) is largely broadened due to the backaction from the impurities in the low-momentum region (p ≲ p_T). On the other hand, the minority component (a2) exhibits a sharp peak in the low-momentum region below p = p_F and is broadened above p = p_F. For comparison, we show the (b1) medium and (b2) impurity spectral functions in the case of repulsive medium-impurity interaction characterized by (p_F a_1D)^{−1} = −0.55, where the temperature and the impurity concentration are given by T = 0.598T_F and x = 0.264. Although the impurity quasiparticle peak in the low-energy region (ω + µ_↓ ≈ 0) is shifted upward, the tendency of spectral broadening is similar to the attractive case.
It is also important to note that, in sharp contrast to higher spatial dimensions, the coupling constant g_1D does not vanish when Λ → ∞ in the renormalization procedure; see Section 2.1. Thus, we take the Hartree shift Σ^H_σ = g_1D N_{−σ} into account in the building blocks of the self-energy diagrams [103]. This treatment is not necessary in the single-polaron limit, since Σ^H_↑ → 0 and Σ^H_↓ → g_1D T Σ_{p,iω_n} G^0_↑(p, iω_n) (which is included in the TMA self-energy) when x → 0. The non-vanishing coupling constant in 1D plays an important role in the emergence of induced interpolaron interactions, as demonstrated recently, e.g., in Refs. [61,104,105]. The polaronic excitation properties obtained within the TMA show excellent agreement with the results of the thermodynamic Bethe ansatz [106], which provides an exact solution in 1D in the single-polaron limit at T = 0 [102,107]. From these results, it is found that there is no transition but rather a crossover between polarons and molecules. As can be seen by inspecting Figure 8(a1), the spectrum of the majority component is affected by the scattering with the impurities. This is attributed to the relatively large impurity concentration x considered here. In particular, A_↑(p, ω) is broadened at low momenta below p = p_F. On the other hand, the spectral response of the impurities in Figure 8(a2) exhibits a sharp peak associated with the attractive polaron below p = p_F and becomes broadened above p = p_F. Apparently, the curvature of the polaron peak position, which corresponds to the inverse effective mass (curvature of the dispersion), changes around this momentum. Similar broadening of the sharp peaks can be found even in the case of repulsive impurity-medium interactions, shown in Figure 8(b1,b2). However, the low-energy sharp peak (corresponding to the repulsive polaron) in the impurity spectrum (Figure 8(b2)) is shifted to larger energies as a consequence of the impurity-medium repulsion.
Bose Polarons
In this section, we discuss the Bogoliubov theory of trapped Bose polaron systems in real space [86,108,109]. The reason for focusing on a real-space Bogoliubov theory is to elaborate on the deformation of the BEC medium in the presence of an impurity. Indeed, the interaction between the impurity and the medium bosons leads to significant inhomogeneities of the density distribution of the background, which cannot be described within a simple Thomas-Fermi approximation. Such a modification of the boson distribution causes, for instance, enhanced phonon emission [61,78]. Moreover, in cold-atom experiments the background bosons and the impurity are generally trapped. Considering the inhomogeneity that naturally arises in trapped systems, we therefore treat the Bose polaron in real space without a plane-wave expansion, because momentum is not a good quantum number. Below, we review the description of a Bose polaron in trapped 3D systems at zero temperature using Bogoliubov theory and elaborate on the ground-state properties. We remark that our analysis, to be presented below, is applicable independently of the shape of the external potential, while for simplicity we consider here the case of a harmonic trap.
In particular, we consider a 3D setting where a single atomic impurity is trapped in an external harmonic potential V_I(r) and is embedded in a BEC medium that is trapped in another harmonic potential V_B(r) whose center coincides with that of V_I(r). Hereafter, we use units in which ħ = 1. This system is described by the model Hamiltonian

$$\hat H = \int d\mathbf{r}\, \hat\psi^\dagger(\mathbf{r}) \left[ -\frac{\nabla^2}{2m_I} + V_I(\mathbf{r}) \right] \hat\psi(\mathbf{r}) + \int d\mathbf{r}\, \hat\phi^\dagger(\mathbf{r}) \left[ -\frac{\nabla^2}{2m_B} + V_B(\mathbf{r}) - \mu + \frac{g_{BB}}{2}\, \hat\phi^\dagger(\mathbf{r}) \hat\phi(\mathbf{r}) \right] \hat\phi(\mathbf{r}) + g_{IB} \int d\mathbf{r}\, \hat\psi^\dagger(\mathbf{r}) \hat\psi(\mathbf{r})\, \hat\phi^\dagger(\mathbf{r}) \hat\phi(\mathbf{r}).$$

Here, φ̂ and ψ̂ are the field operators of the bosonic medium and the impurity, respectively; m_{I(B)} is the mass of the impurity atom (the medium bosons), and µ is the chemical potential of the medium bosons. The effective couplings g_IB and g_BB refer to the impurity-boson and boson-boson interaction strengths, respectively.
Bogoliubov Theory for Bose Polaron Problems
First, we calculate the expectation value of the Hamiltonian with respect to the single-impurity state |imp⟩ = â†_imp|0⟩_imp in order to integrate out the impurity's degree of freedom, Ĥ_B = ⟨imp|Ĥ|imp⟩, where â_imp denotes the annihilation operator of an impurity in the ground state; ψ(r) is the corresponding wave function, which is determined self-consistently through Equation (35).
In this way, we obtain an effective Hamiltonian for the medium bosons, in which the bosons experience an effective potential constructed from the external trap and the impurity density g_IB|ψ(r)|². Since we have set the temperature to zero in the present study, we assume that the medium bosons possess a condensed part, the so-called order parameter or macroscopic wavefunction, when using perturbation theory. It is known [87,110,111] that when BEC occurs, the vacuum expectation value of the field operator φ̂ becomes a non-zero function that serves as the order parameter, i.e., ⟨φ̂(r)⟩_b = φ(r), where ⟨· · ·⟩_b means _b⟨0| · · · |0⟩_b. The vacuum |0⟩_b is determined from the effective Hamiltonian (30) within Bogoliubov theory to second order in the fluctuations. This is equivalent to splitting the operator as φ̂ = φ + φ̃, where ⟨φ̃⟩_b = 0. Substituting this into the Hamiltonian of Equation (30) and sorting the terms by their order in φ̃, we readily obtain the expansion Ĥ_B ≃ Ĥ^(0) + Ĥ^(1) + Ĥ^(2), because the number of non-condensed bosons is significantly smaller than that of the condensed ones at zero temperature and weak couplings. The individual contributions are of zeroth, first, and second order in the fluctuation operator and involve the quantities L(r) = −∇²/(2m_B) + V_B(r) + g_IB|ψ(r)|² + 2g_BB|φ(r)|² − µ and M(r) = g_BB φ²(r). Note that we assume the weakly interacting limit for the medium to ensure that the BEC dominates, so g_BB is sufficiently small for the perturbation theory to be valid. In the above expansion, we neglect the contributions stemming from the third- and fourth-order terms in the fluctuation operator, assuming that they are negligible for the same reason.
Subsequently, let us derive the corresponding equations of motion describing the Bose-polaron system. From the Heisenberg equation, the bosonic field operator satisfies i∂_t⟨φ̂⟩_b = ⟨[φ̂, Ĥ^(1) + Ĥ^(2)]⟩_b = 0 in the stationary case considered here. Accordingly, one retrieves the celebrated Gross-Pitaevskii equation describing the BEC background,

$$\left[ -\frac{\nabla^2}{2m_B} + V_B(\mathbf{r}) + g_{IB}\, |\psi(\mathbf{r})|^2 + g_{BB}\, |\phi(\mathbf{r})|^2 - \mu \right] \phi(\mathbf{r}) = 0.$$

We remark that, for simplicity, we consider the stationary case where the condensate is time-independent. Next, by following the variational principle for ψ, namely δ⟨Ĥ_B⟩_b/δψ* = 0, we arrive at the Schrödinger equation for the impurity wavefunction,

$$\left[ -\frac{\nabla^2}{2m_I} + V_I(\mathbf{r}) + g_{IB} \left( |\phi(\mathbf{r})|^2 + n_{\mathrm{ex}}(\mathbf{r}) \right) \right] \psi(\mathbf{r}) = \epsilon\, \psi(\mathbf{r}),$$

where n_ex(r) = ⟨φ̃†(r)φ̃(r)⟩_b is the density of the non-condensed bosons in the vacuum, the so-called quantum depletion, and ε is the impurity energy.
To evaluate this expectation value, we need the ground state |0⟩_b of the Hamiltonian, which is obtained by diagonalizing Ĥ^(2): the diagonal form Ĥ^(2) = Σ_n E_n b̂†_n b̂_n (up to a constant) is achieved using the field expansion φ̃(r) = Σ_n [b̂_n u_n(r) + b̂†_n v*_n(r)]. Here, the complete set {u_n, v_n} satisfies the following system of linear equations, the so-called Bogoliubov-de Gennes (BdG) equations [112,113]:

$$\begin{pmatrix} L(\mathbf{r}) & M(\mathbf{r}) \\ -M^*(\mathbf{r}) & -L(\mathbf{r}) \end{pmatrix} \begin{pmatrix} u_n(\mathbf{r}) \\ v_n(\mathbf{r}) \end{pmatrix} = E_n \begin{pmatrix} u_n(\mathbf{r}) \\ v_n(\mathbf{r}) \end{pmatrix}.$$

We remark that the BdG equations are commonly used in the mode analysis of condensates. In this context, the real eigenvalues constitute the spectrum, while complex eigenvalues unveil dynamically unstable modes of the condensate [114,115]. More precisely, if complex eigenvalues exist, the Hamiltonian cannot be expressed in the above-mentioned diagonal form in terms of the annihilation/creation operators; such dynamically unstable situations are beyond the scope of the present description. Using this expansion, we can calculate vacuum expectation values, e.g., n_ex(r) = Σ_n |v_n(r)|². In the numerical calculations presented below, the total number of bosons N_B is conserved, i.e., N_B = N_0 + N_ex, with N_0 = ∫dr |φ(r)|² and N_ex = ∫dr n_ex(r). This condition is enforced by tuning the chemical potential µ of the bosonic medium. Notice that N_ex becomes non-zero due to thermal fluctuations at finite temperature, while in the ultracold regime it can be finite due to the presence of quantum fluctuations, otherwise termed quantum depletion [116]. We also remark that all of Equations (34)-(36) need to be solved simultaneously. The above-described treatment will be referred to in the following as the real-space formulation of the Bose-polaron problem. A hedged finite-difference sketch of the BdG step is given below.
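The sketch assembles the BdG block matrix from the operators L and M defined above on a 1D finite-difference grid, with a flat condensate and a Gaussian impurity density standing in for the trapped radial problem; the grid, discretization, and parameter values are assumptions for demonstration only, not the setting used in the results below.

```python
# Minimal 1D finite-difference BdG eigenproblem with a flat condensate.
import numpy as np

n, dx = 400, 0.05
x = (np.arange(n) - n // 2) * dx
m_B, g_BB, g_IB = 1.0, 0.05, 0.2
phi = np.ones(n)                              # flat condensate, |phi|^2 = 1
psi2 = np.exp(-x**2) / np.sqrt(np.pi)         # normalized impurity density
mu = g_BB * 1.0                               # homogeneous chemical potential

lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
L = -lap / (2 * m_B) + np.diag(g_IB * psi2 + 2 * g_BB * phi**2 - mu)
M = np.diag(g_BB * phi**2)

# BdG block matrix:  [[L, M], [-M, -L]] (u, v)^T = E (u, v)^T
H_bdg = np.block([[L, M], [-M, -L]])
E = np.linalg.eigvals(H_bdg)                  # comes in +/- E pairs
print(np.sort(E[np.abs(E.imag) < 1e-8].real)[-5:])   # a few real eigenvalues
```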
Quantum Depletion around a Bose Polaron
Since N_B is fixed (Equation (37)), the number of condensed particles N_0 changes due to the existence of N_ex. This is a quantum effect that occurs even at zero temperature, called quantum depletion [111]. We clarify that the term quantum depletion refers to the beyond-mean-field corrections to the description of the bosonic ensemble. In the following, we investigate the effect of an impurity on the quantum depletion of the medium bosons at zero temperature. Indeed, the quantum depletion is a measurable quantum effect that enters Equation (35), and its quantification makes it possible to evaluate the backaction of the impurity on the medium condensate. For orientation, the sketch below evaluates the corresponding depletion of a uniform condensate.
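As a cross-check of the uniform-system behavior invoked at the end of this section, the following sketch evaluates the textbook Bogoliubov depletion of a homogeneous condensate in two ways: from the v_p² momentum integral and from the closed form n_ex/n = (8/3√π)(n a³)^{1/2}. Units with ħ = m_B = 1; density and scattering length are illustrative.

```python
# Homogeneous Bogoliubov depletion: numerical integral vs. closed form.
import numpy as np
from scipy.integrate import quad

n, a = 1.0, 0.01
g = 4 * np.pi * a                 # coupling constant for hbar = m_B = 1

def v2(p):
    """Bogoliubov amplitude v_p^2 for a uniform condensate of density n."""
    ek = p**2 / 2.0
    Ep = np.sqrt(ek * (ek + 2 * g * n))
    return ((ek + g * n) / Ep - 1.0) / 2.0

n_ex, _ = quad(lambda p: v2(p) * p**2 / (2 * np.pi**2), 1e-8, 50.0)
print(n_ex / n, (8 / (3 * np.sqrt(np.pi))) * (n * a**3) ** 0.5)  # should agree
```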
A commonly used external confinement in cold atom experiments is the harmonic potential. As such, here, we consider that the traps of the impurity and the bosonic medium are spherically symmetric, namely, $V_I(\mathbf r) = \frac{1}{2} m_I\omega_I^2 r^2$ and $V_B(\mathbf r) = \frac{1}{2} m_B\omega_B^2 r^2$. Accordingly, the order parameter of the BEC and the impurity's wave function have spherically symmetric forms, $\phi(\mathbf r) = \phi(r)$ and $\psi(\mathbf r) = \psi(r)$ with $r = |\mathbf r|$, and therefore the underlying BdG eigenfunctions are separable with the help of spherical harmonics, $u_{n_r\ell m}(\mathbf r) = U_{n_r\ell}(r)\, Y_{\ell m}(\hat{\mathbf r})$ and $v_{n_r\ell m}(\mathbf r) = V_{n_r\ell}(r)\, Y_{\ell m}(\hat{\mathbf r})$. Here, $(n_r, \ell, m)$ denote the radial, azimuthal, and magnetic quantum numbers, respectively. As a further simplification, we consider the situation where $\omega_I$ is sufficiently larger than $\omega_B$, namely, the impurity is more tightly confined than the medium bosons. As such, the order parameter $\phi$ of the condensate changes much more gradually in space than the impurity's wave function $\psi$. Since the impurity's wave function is relatively narrow compared to the condensate and the impurity-medium interaction is weak, the impurity essentially experiences, to a good approximation, an almost flat (homogeneous) environment. This also means that trap effects are not very pronounced in this case. In this sense, $\phi$ can be regarded as constant around the impurity, and the impurity's wave function can be well approximated by a Gaussian, i.e., $\psi(\mathbf r) \simeq \left(\frac{\pi}{m_I\omega_I}\right)^{-3/4}\exp\!\left(-\frac{m_I\omega_I}{2}r^2\right)$. We remark that in the presence of another external potential, e.g., a double well, one naturally needs to employ another appropriate initial wavefunction ansatz for the impurity. To experimentally realize such a setting, it is possible to consider a $^{40}$K Fermi impurity immersed in a $^{87}$Rb BEC, where $m_I/m_B \simeq 0.460$. For the medium we employ a total number of bosons $N_B = 10^5$ and the ratio of the trapping strengths $\omega_I/\omega_B = 10$ with $\omega_B = 20\times 2\pi$ Hz [9]. Moreover, for the boson-boson and impurity-boson interactions, we utilize $1/(a_{BB} n_B^{1/3}) = 100$, while $1/(a_{IB} n_B^{1/3})$ is varied between repulsive and attractive values. To reveal the backaction of the impurity on the bosonic environment, we provide the corresponding ground-state density profiles of the condensed and depleted parts of the bath in Figure 9a,c, respectively. In the case of $g_{IB} > 0$ ($g_{IB} < 0$), the condensate experiences an additional potential hump (dip) at the location of the impurity and is slightly repelled from (pulled towards) the impurity, as shown in Figure 9b, where the deformation of the radial profile of the condensate from the case of zero impurity-medium interaction is provided. Moreover, in order to appreciate the role of the quantum depletion of the BEC environment, we illustrate the depletion density in the absence of the impurity and its change in the presence of the impurity in Figure 9c,d, respectively. Apparently, the degree of quantum depletion decreases (increases) for $g_{IB} > 0$ ($g_{IB} < 0$) (Figure 9d), a phenomenon that is accompanied by the deformation of the condensate density. The effect of the impurity on the quantum depletion of the condensate is summarized in Table 1. Inspecting the latter, we can deduce that the quantum depletion decreases (increases) when the interaction is repulsive (attractive). This is a non-trivial result caused by the presence of the trap, since in a uniform system [117-119] the depletion always increases irrespective of the sign of the interaction.
Figure 9. Radial profiles of (a) the order parameter $\bar\phi(r) = \phi(r; g_{IB}=0)/\sqrt{N_0/4\pi}$ and (c) the density of depletion $n_{\rm ex}(r) = n_{\rm ex}(r; g_{IB}=0)$ in the absence of an impurity. Differences of the radial profiles of (b) the order parameter $\delta\Phi(r) = \big(\phi(r; g_{IB}) - \phi(r; g_{IB}=0)\big)/\sqrt{N_0/4\pi}$ and (d) the density of depletion $\delta n_{\rm ex}(r) = n_{\rm ex}(r; g_{IB}) - n_{\rm ex}(r; g_{IB}=0)$ in the presence of an impurity, relative to the results depicted in (a) and (c), respectively. Table 1. The number of depleted particles $N_{\rm ex}$ and its deviation $\delta N_{\rm ex} = 4\pi\int dr\, r^2\, \delta n_{\rm ex}(r)$ from the case of zero impurity-medium interaction. It is evident that the degree of depletion increases (decreases) for attractive (repulsive) interactions.
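As a minimal numerical sketch of the real-space formulation, the following imaginary-time relaxation of the radial GPE around the Gaussian impurity reproduces the qualitative hump/dip of Figure 9b. It works in bath oscillator units ($\hbar = m_B = \omega_B = 1$); the values of $g_{BB}$, $g_{IB}$, the grid, and the iteration count are illustrative assumptions, not the paper's production parameters.

```python
import numpy as np

# Imaginary-time relaxation of the radial GPE for the condensate around a
# tightly trapped impurity, in bath oscillator units (hbar = m_B = omega_B = 1).
Nr, R = 2000, 12.0
r = np.linspace(1e-4, R, Nr); dr = r[1] - r[0]
g_BB, g_IB, N0 = 1e-3, 0.5, 1e5           # assumed couplings / condensate number
m_ratio, w_ratio = 0.460, 10.0            # m_I/m_B and omega_I/omega_B (see text)

s2 = 1.0 / (m_ratio * w_ratio)            # impurity Gaussian width^2 = 1/(m_I omega_I)
psi2 = (np.pi * s2) ** -1.5 * np.exp(-r**2 / s2)   # |psi(r)|^2 of the impurity

# work with chi(r) = r * phi(r); normalization: 4*pi * int |chi|^2 dr = N0
norm = lambda c: np.sqrt(N0 / (4 * np.pi * np.sum(c**2) * dr))
chi = r * np.exp(-r**2 / 2)
chi *= norm(chi)

dtau = 0.25 * dr**2                       # stable explicit imaginary-time step
for _ in range(20000):                    # increase steps for full convergence
    lap = np.zeros_like(chi)
    lap[1:-1] = (chi[2:] - 2 * chi[1:-1] + chi[:-2]) / dr**2
    Veff = 0.5 * r**2 + g_IB * psi2 + g_BB * (chi / r) ** 2
    chi -= dtau * (-0.5 * lap + Veff * chi)
    chi *= norm(chi)                      # project back onto the N0 shell

phi = chi / r                             # condensate order parameter phi(r)
```

The quantum depletion would then follow by diagonalizing the $\ell$-resolved BdG matrix on the same grid and summing $|v_n(r)|^2$, with $\mu$ retuned until the total $N_B$ matches Equation (37).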
Conclusions
In this work, we have discussed the existence and behavior of Fermi and Bose polarons that can be realized in ultracold quantum gases, focusing on their backaction on the background medium. We have explicated three different diagrammatic approaches applicable to Fermi polarons in the homogeneous case. These include the TMA, the ETMA, and the SCTMA frameworks, where the ETMA considers induced two-body interpolaron interactions and the SCTMA includes two- and three-body ones. Importantly, we have explicitly derived the mediated two- and three-body interpolaron correlation effects as captured within the different diagrammatic approaches. Although these induced interactions are weak in the considered mass-balanced Fermi polaron systems, our framework can be applied to various settings such as mass-imbalanced Fermi polaron systems. Using this strong-coupling approach, we analyzed the spectral response of the Fermi polaron in one, two, and three spatial dimensions at finite temperature. It has been shown that the spectral function of the minority component exhibits a sharp polaron dispersion in the low-momentum region but is broadened for higher momenta. Moreover, we argued that the spectral response reflects the character of the majority atoms forming a Fermi sphere, while a strong interaction between the majority and minority atoms induces a two-body bound state between a medium atom and an impurity particle. The presence of this two-body bound state becomes more important in lower dimensions.
Next, we presented the mean-field treatment of trapped Bose polarons in three dimensions and analyzed the role of quantum depletion, identified through the deformation of the background density within the framework of the Bogoliubov theory of excitations. A systematic investigation of the latter enabled us to deduce that a repulsive (attractive) impurity-medium interaction, giving rise to repulsive (attractive) Bose polarons, induces a decreasing (increasing) condensate depletion captured by the deformation of the density distribution of the host. This effect is a consequence of the presence of the external confinement, since for a homogeneous background the quantum depletion increases independently of the sign of the impurity-medium interaction. Therefore, this result is considered a particular feature of the trapped system.
Our investigation opens up the possibility for further studies on various polaron aspects. In particular, the effect of finite temperature and impurity concentration on the 2D Fermi polaron spectral response is expected to play a significant role close to the Berezinskii-Kosterlitz-Thouless transition of molecules [120]. Moreover, systems characterized by highly mass-imbalanced components, e.g., heavy polarons, provide promising candidates for the realization of more pronounced polaron-polaron induced interactions. However, the treatment of these settings will most probably require a more sophisticated approach including, for instance, three-body correlations between the atoms of the medium. Additionally, the investigation of finite-sized systems at non-zero temperatures in the dimensional crossover from 3D to 2D, as reported, e.g., in Ref. [121], but now in the ultracold and single-polaron limits, offers an interesting perspective for forthcoming endeavors. Furthermore, the comparison of the predictions of our methodology for treating the effect of quantum fluctuations in Bose polaron settings with other approaches also based on the mean-field framework [118,119] is certainly of interest. Finally, the backaction of the impurities on the medium when considering dipolar interactions between the medium atoms may affect the density collapse of the medium at strong impurity-medium attractions [122] and thus provides another intriguing prospect. Data Availability Statement: All data discussed in this study are available within the article.
"Physics"
] |
Tracing Road Network Bottleneck by Data Driven Approach
Urban road congestion changes both temporally and spatially and is essentially caused by network bottlenecks. Therefore, understanding bottleneck dynamics is critical to reasonably allocating transportation resources. In general, a typical bottleneck experiences the stages of formation, propagation, and dispersion. In order to understand the three stages of a bottleneck and how the bottleneck moves on a road network, traffic flow data can be used to reconstruct these dynamics. However, raw traffic flow data is usually flawed in many ways; for instance, some portion of the data may be missing due to the failure of data collection devices, or random factors in the data may make it hard to identify real bottlenecks. In this paper a “user voting method” is proposed to deal with such raw-data-related issues. In this method, road links are ranked according to the weighted sum of certain performance measures, and the links that are ranked relatively high are regarded as recurrent bottlenecks in a network; several connected bottleneck links form a bottleneck area. A series of bottleneck parameters can be defined based on the identified bottleneck areas, such as bottleneck coverage, bottleneck link length, etc. Identifying bottleneck areas and calculating the bottleneck parameters for each time interval can reflect the evolution of the bottlenecks and also help trace how the bottlenecks move.
Introduction
Traffic congestion often occurs at or originates from bottleneck areas. Therefore, the modeling and analysis of traffic bottlenecks are crucial for relieving traffic congestion and improving transportation operations. When bottlenecks propagate in the network, the network's operational efficiency is greatly decreased [1,2].
Literature on traffic bottlenecks can be found in fields such as transportation economics, the physics of traffic flow, and traffic engineering [3]. The majority of the work in transportation economics focuses on congestion induced by bottlenecks during the morning commute. The "bottleneck model" formulated by Vickrey [4] is a fundamental model in congestion analysis and provided significant insights for understanding many features of traffic congestion. Many follow-up research papers extended the "bottleneck model"; e.g., Xiao et al. [5] considered the case where two flows merged into one link and the link capacity was stochastic. In physics, traffic bottleneck studies usually focus on local-level bottlenecks. For example, in Zhang et al. [6], the bottleneck was caused by lane drops; Nakata et al. [7] studied the game dilemma at a two-into-one bottleneck junction; in a two-route system, Hino and Nagatani [8] derived the travel time and mean density according to the bottleneck's strength; Kerner et al. [9] proved the nucleation nature of empirical traffic breakdown at highway bottlenecks with traffic flow data spanning nearly 20 years. From the perspective of traffic engineering, much effort has been devoted to analyzing the characteristics of bottlenecks and their control. Bottlenecks can be either dynamic or static. Schrank et al. [10] described how to extract the 100 most congested roadway sections based on velocity and volume data; Seeherman and Skabardonis [11] analyzed the effect of weather on the recurrent delay at two types of freeway bottlenecks, one incurred by traffic merging and the other by a lane drop; Li et al. [12] used a logic-tree-based method to adjust the speed at a bottleneck in Auckland; Sun et al. [13] studied the mechanism of the formation of expressway bottlenecks, focusing mainly on lane-changing behaviors around the bottleneck; John et al. [14] used speed data to report the bottlenecks of a nationwide road network, where the slope of the fitted line between the speed vectors of any pair of neighboring links was used to describe the trend of traffic congestion; Chen and Ahn [15] investigated variable speed limit control of non-recurrent bottlenecks.
The above studies provide various methods to analyze freeway bottlenecks, which may not be directly applicable to an urban street network, since the urban street system operates differently from the freeway system. Without a mainstream merge-diverge structure, it is hard to decide where the bottlenecks are located. All this makes it difficult to analyze the dynamics of bottlenecks in an urban street network, which is why urban network bottleneck models are lacking.
Due to the development of intelligent transportation systems, various types of traffic flow data have become available, making the analysis of bottlenecks with a data-driven approach possible [16]. However, for various reasons, the raw data usually exhibit many problems. For example, some portion of the data may be missing due to the unavailability of detectors or failures of data collection devices. Random factors, such as incidents and temporary parking, can also introduce noise, which makes it unreliable to directly derive the bottleneck dynamics from the raw data.
Due to numerous random factors in the transportation system, it may not be easy to directly recognize the bottlenecks from large-scale traffic data, since the data themselves are stochastic in nature. To deal with the uncertainties of the link performance measures, this paper introduces a method of voting and ranking. In the typical ranking problem, many goods are voted on by many users. The eventual rank of a good is derived from the weighted sum of all voted scores, taking the random factors of the users' preferences into account. Similarly, the ranking of the performance of a specific link in each time interval can also be regarded as a "score". The ranking of a link is determined by the weighted sum of the velocity data of the link. Links ranked higher than a certain criterion can be considered potential bottlenecks. Analyzing the time-dependent link rankings can help track bottlenecks. Fig 1 describes the workflow of the method.
The rest of the paper is structured as follows: the next section describes the data set used in our analysis; following that the ranking models as well as the bottleneck selection method are given in Section 3; Section 4 presents some numerical results to demonstrate the validation of the proposed model. Conclusions are given in the last section.
Data Source
The data includes two types: the geographical data set and the velocity data set. The geographical data is in the shapefile format, which is developed and regulated by Esri (Environmental Systems Research Institute). This data set helps to visualize the traffic state in the network. The vehicle velocity data is taxi GPS data collected from 2012-11-01 to 2012-11-30 in the city of Hangzhou, China, included in the S1 File. There are around 9000 taxis in the city in total. The GPS equipment sends second-by-second location and travel direction information to the traffic management center. The location can be matched to the geographical data to determine which link the taxi is driving on. In this way, the travel trajectory of each vehicle can be obtained. To calculate the velocity of each link, the entire modeling time horizon is divided into five-minute intervals. For a specific link $i$ at time interval $k$, the vehicles that traversed the link are selected from the raw data. Suppose that, for a vehicle selected within this time interval, the initial spatial-temporal coordinate is $(t_1, x_1)$ and the final coordinate is $(t_2, x_2)$; then the mean velocity within this spatial-temporal space can be calculated as $(x_2 - x_1)/(t_2 - t_1)$. These mean velocities are then averaged over all the vehicles to obtain the link velocity. For each link, there are in total 288 successive velocities within an analysis period of 24 hours. The directional velocity is not considered, but for the description of the whole network, such a simplification is acceptable. A two-way street is modeled as two links here, and the velocity is calculated separately for each link.
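A minimal sketch of this aggregation, assuming a simple record layout with illustrative column names (vehicle, link, t, x) rather than the paper's actual schema:

```python
import pandas as pd

# Link-velocity aggregation: one mean velocity per link per 5-minute interval.
# Columns: vehicle id, link id, timestamp t (datetime), position x along the link (km).
def link_velocities(gps: pd.DataFrame, interval="5min") -> pd.DataFrame:
    gps = gps.sort_values("t")
    gps["slot"] = gps["t"].dt.floor(interval)
    # first/last spatio-temporal coordinates of each vehicle on each link
    grp = gps.groupby(["link", "slot", "vehicle"]).agg(
        t1=("t", "first"), t2=("t", "last"),
        x1=("x", "first"), x2=("x", "last"))
    grp["dt_h"] = (grp["t2"] - grp["t1"]).dt.total_seconds() / 3600.0
    grp = grp[grp["dt_h"] > 0].copy()
    grp["v"] = (grp["x2"] - grp["x1"]).abs() / grp["dt_h"]   # km/h per vehicle
    # average over vehicles -> link velocity; 288 slots per link per day
    return grp.groupby(["link", "slot"])["v"].mean().reset_index()
```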
The studied network covers all the links within the "Ring Highway" of Hangzhou City. Although the number of links with data is relatively low in the early morning, velocities are available for the majority of the roads, and since we mainly focus on the peak hours rather than off-peak hours, such coverage is acceptable. The velocity profiles of different days exhibit the same trend. In the early morning, velocity increases and reaches its peak around 04:00, then decreases sharply until about 08:00. The velocity during the daytime is generally smaller, with a local velocity peak around 12:30. The evening congestion period of Hangzhou City normally lasts from 17:00 till 19:00. As can be observed, during this period the majority of the vehicles travel below 30 km/h, and quite a few of them below 20 km/h.
Links under Different LOS
Level of service (LOS) is an intuitive measure of traffic flow operation. The Highway Capacity Manual defines different LOS grades according to link velocities. The detailed definitions are given in Table 1.
Traffic State for Different Road Types
Roads of different types operate substantially differently. Major road types in the study area include principal arterials, minor arterials, principal branches, minor branches, expressways, and highways. The arterial roads perform slightly better than the branch roads. The temporal profiles of different road types present a similar trend, with the same trough hour. Fig 7 displays the standard deviations of the velocities for different road types. The deviations are computed across all links of the same type within the same time interval. Interestingly, the deviations of most road types remain relatively stable throughout a whole day, except for the expressways. The reason may be attributed to the function of the expressway: expressways serve the traffic flowing in and out of the city, while the arterial and branch roads mainly serve the commuters within the city. When the morning/evening congestion peak approaches, part of the expressway system is also influenced by the daily commuting traffic, while the remaining part stays uninfluenced, which leads to a large variation of the velocity. The capacities of expressways are generally larger than those of surface streets, making them more attractive to travelers during congested periods. Thus, when the peak hour comes, drivers may divert to expressways even at the cost of a longer travel distance. The coincidence of larger standard deviations and smaller mean velocities during peak periods shows that the expressway system plays a buffer role in accommodating daily traffic.
Rank of the Traffic State
We first define the following terms: 1. score: the evaluation result of each link in each time interval; in our analysis, the velocity is regarded as a type of score. 2. rank: the relative congestion level of a link in the road network; we use velocity to denote the traffic state, and a higher rank implies a more congested traffic state. 3. bottleneck links: based on some congestion threshold, a series of links that rank at the top in the network can be selected; these connected bottleneck links form a bottleneck area. Subsequently, bottleneck parameters can be defined, like coverage, center, etc.
The basic idea of ranking is that during some time domain, the relative traffic state of a link should be stable, which can be represented by the rank of a reference parameter, such as velocity, among all the links. The rank of a link in each time interval within the time domain can be seen as a "realization" of the rank and hence behaves in a random manner. It is assumed that the worst performance ranks first; thus the bottleneck links will be ranked at the top, either because the velocities of these links are relatively smaller or because the travel times on these links are larger. It is supposed that all velocity data associated with a link contribute to its rank, with different weighting factors. The rank can be obtained according to the weighted velocity, here called the score.
The above description is similar to the user voting problem, in which many users vote for a series of goods. Each user evaluates one or more goods and gives a score to each of them. Due to user preferences and other uncertain factors, the scores for a good essentially form a distribution. The final evaluation of the good is an aggregation of all these scores. We implement the voting process in a recursive way: during each iteration, the weight of each velocity is updated, until certain indices fall within a predefined threshold.
By referring to a term "time domain", we want to track the evolution of bottlenecks in a network along the time horizon. For example, we first analyze the velocity data of one hour, and the bottleneck links within this hour can be identified. Then we roll ahead the time domain by 10 minutes and analyze the corresponding one-hour velocity data again, and the bottleneck links during this hour can also be recognized. The new set of bottleneck links may not be the same, due to the change of network demand and supply. By performing the same analysis for each one-hour domain, the evolution of bottlenecks can be tracked. For instance, whether the bottleneck area expands or shrinks, or shifts to other locations.
Model description. $v_{ij,k}$ is used to denote the velocity of link $k$ during time interval $j$ of day $i$. Scores from peak-hour intervals are more reliable than those from off-peak intervals. The weighting factors of the scores are determined based on the following rules: 1) the weight is larger for peak-hour intervals; 2) the rank of a link during one interval should be similar to its ranks in all other intervals. The rank of a link during one interval can differ from its ultimate rank, which represents the overall congestion degree.
We relate the weight of a velocity datum of a link during a certain interval to three factors: 1. the average velocity of the whole network during that time interval; 2. the discrepancy between the rank derived according to this datum and the rank derived according to the ultimate score; 3. the discrepancy between the rank given by that interval and the ranks given by all other intervals. Some notation is first given as follows: $s_k \in (0,1]$: the ultimate score of link $k$; $S = \{s_k\}$ is the vector for all links. $\theta_{ij,k}$: the normalized inverse velocity, $\theta_{ij,k} = L(1/v_{ij,k})$, where $L(\cdot)$ is a function that normalizes the data. $\tau_{ij}$: the "importance" of an interval. $\delta_{ij,k}$: the discrepancy between the rank of link $k$ given by interval $j$ of day $i$ and the overall ranks, i.e., $S = \{s_k\}$; $\delta_{ij,k}$ corresponds to factor 2.
$\sigma_{ij,k}$: the discrepancy between the rank of link $k$ given by interval $j$ of day $i$ and those given by other intervals; $\sigma_{ij,k}$ corresponds to factor 3.
Thus the weight of a vote score is expressed as $w_{ij,k} = f(\tau_{ij}, \delta_{ij,k}, \sigma_{ij,k})$, and $w_{ij,k}$ should satisfy the normalization $\sum_{i,j} w_{ij,k} = 1$.
For description purposes, we define a mean operator that calculates the mean of a quantity $\psi_{ij,k}$ across all $i$ and $j$, where $N = \sum_{i,j} c_{ij,k}$ denotes the total number of data points. Determination of $\tau_{ij}$: note that during peak hours, more time is required to traverse a unit distance, and thus $\tau_{ij}$ is determined from the reciprocal of the network-wide mean velocity, where $\bar v_{ij}$ is the mean velocity of all links during interval $j$ of day $i$, averaged over the number of velocity records in that interval.
Determination of $\delta_{ij,k}$: it is assumed that the smaller the discrepancy $\delta_{ij,k}$ is, the greater the weight should be; $\delta_{ij,k}$ is therefore determined from the difference between the rank of $1/v_{ij,k}$ for given $i$ and $j$ and the rank implied by the ultimate score $s_k$. Determination of $\sigma_{ij,k}$: for the score in interval $j$ of day $i$, it is considered better if its rank is closer to the average rank across all time intervals; hence $\sigma_{ij,k}$ is determined from the deviation of the rank of link $k$ in that interval from its mean rank across all $i$ and $j$.
Generally, two forms of functions can be adopted for $w_{ij,k} = f(\tau_{ij}, \delta_{ij,k}, \sigma_{ij,k})$: additive and multiplicative. For the sake of simplicity, we use the multiplicative form $w_{ij,k} \propto \tau_{ij}\,\delta_{ij,k}\,\sigma_{ij,k}$. Once the weighting factors are derived, the score $s_k$ can be obtained as the weighted sum $s_k = R_k\big(\sum_{i,j} w_{ij,k}\cdot \frac{1}{v_{ij,k}}\big)$. The steps for calculating link ranks can be summarized as follows: 1. Initialize the weights of all the votes such that $\sum_{i,j} w_{ij,k} = 1,\ \forall k$; 2. Compute the score of each link, $s_k = R_k\big(\sum_{i,j} w_{ij,k}\cdot\frac{1}{v_{ij,k}}\big)$, $S = \{s_k\}$; 3. Normalize the scores to the interval [0, 1]; 4. Renew the weights, $w_{ij,k} = f(\tau_{ij}, \delta_{ij,k}, \sigma_{ij,k})$; 5. If the stop criterion is not satisfied, return to step 2; otherwise, end.
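The following sketch implements the five steps; the explicit functional forms of $\tau$, $\delta$, and $\sigma$ below are illustrative assumptions consistent with the stated rules, since the paper's exact formulas did not survive extraction.

```python
import numpy as np

# Iterative voting/ranking over v[vote, link], where each "vote" is one
# (day, interval) pair. The tau/delta/sigma forms below are illustrative only.
def rank_links(v, n_iter=20):
    inv_v = 1.0 / v                                  # slower link -> larger raw score
    n_votes, n_links = inv_v.shape
    w = np.full((n_votes, n_links), 1.0 / n_votes)   # step 1: uniform weights
    tau = 1.0 / v.mean(axis=1, keepdims=True)        # peak (slow) intervals weigh more
    s = np.zeros(n_links)
    for _ in range(n_iter):
        s = (w * inv_v).sum(axis=0)                  # step 2: weighted scores
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)    # step 3: normalize to [0,1]
        vote_rank = inv_v.argsort(axis=1).argsort(axis=1)  # per-vote link ranks
        overall_rank = s.argsort().argsort()
        delta = 1.0 / (1.0 + np.abs(vote_rank - overall_rank))            # factor 2
        sigma = 1.0 / (1.0 + np.abs(vote_rank - vote_rank.mean(axis=0)))  # factor 3
        w = tau * delta * sigma                      # step 4: multiplicative form
        w /= w.sum(axis=0, keepdims=True)            # keep sum_{i,j} w_{ij,k} = 1
    return s                                         # higher score = more congested
```

Links whose final score places them above the chosen threshold (e.g., the top 1500 used later) are taken as bottleneck links.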
Rolling Time Domain Method
The lifecycle of a congestion area can be summarized as follows: at first, an over-saturated queue forms at a single link. Then, this link sends as much flow as possible to its downstream links, which results in the over-saturation of the downstream links. At the same time, the potential overflow queue consumes the capacity of the upstream intersection, which results in the over-saturation of the upstream links. Thus a congestion area arises. The dispersion process is just the opposite. A congestion area can therefore be represented by a list of congestion link sets. Each set corresponds to a time interval and contains the congested links during that time interval. Moreover, the links within a set form a connected sub-graph when we represent the whole network as a graph. In order to obtain these sets, the rolling time domain method is used; the idea is illustrated in Fig 9. There are two parameters: the time domain length T and the rolling step rt. Each voting process generates a ranking list of all the links. Then, we step forward to implement another voting, which generates the voting result for the next time domain.
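As a small illustrative helper (interval counts assume the 5-minute resolution of the data), the rolling time domain can be generated as:

```python
# T = 60 min window, rt = 10 min step, expressed in 5-minute intervals.
def rolling_domains(n_intervals, T=12, rt=2):
    for start in range(0, n_intervals - T + 1, rt):
        yield range(start, start + T)   # interval indices of one voting problem
```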
Bottleneck Parameters
When the ranks are derived according to the velocities, a threshold can be set to select the top-ranked links, which are more congested than the others. These links comprise a sub-network. Within the sub-network, some links are connected while others are not, which means the sub-network can be composed of several components. We call these components the basic congestion units. The congested components during adjacent time domains are related, since traffic states evolve from one time domain to the next. Various parameters can be defined based on the congested components generated, such as the size of a component, the center of a component, the average velocity of a component, etc. These parameters are used in the next section to facilitate the analysis. Fig 10 presents the top 1500 congested links of a whole day, with the time domain length T set to 24 hours. Gray lines denote uncongested links, while the thicker ones represent congested links. It can be observed that most congested links are located in or around the city center, with some other bottlenecks scattered outside. Since 24 hours is too long to capture the variation of bottleneck areas, we set the time domain length T to one hour to obtain the bottleneck trend over time: one hour is a typical peak-hour length and thus allows us to analyze the dynamics of entering and exiting the congestion period, whereas a longer time domain length averages the result across time. Besides, we set rt to ten minutes for the rolling voting, since 5 minutes would make the computation relatively inefficient, while a longer rolling step makes it hard to observe the evolution of bottleneck areas.
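A sketch of extracting the basic congestion units with an off-the-shelf graph library; the link representation (link id mapped to its end nodes) is an assumption about the shapefile topology:

```python
import networkx as nx

# Group the top-ranked links into connected "basic congestion units".
# links: dict mapping link_id -> (from_node, to_node); scores: link_id -> s_k.
def congestion_components(links, scores, top_n=1500):
    top = sorted(scores, key=scores.get, reverse=True)[:top_n]
    g = nx.Graph()
    for lid in top:
        g.add_edge(*links[lid], link=lid)    # build the congested sub-network
    # each connected component is one bottleneck area;
    # its size in links is the number of edges it spans
    return [g.subgraph(c).copy() for c in nx.connected_components(g)]
```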
Analysis Results
Here the rolling time domain length T and rolling step rt are set to one hour and ten minutes, respectively. Fig 11 presents the numbers of congested components in different time domains. We examine the 1500 top-ranked links for each time domain. These links comprise many isolated components. As congestion grows, some components expand and different components start to merge into larger ones, which leads to a decrease in the number of components, as shown in the figure. A sharp decrease is observed at about 08:00. Due to the stochastic nature of the transportation system, the number of components is not stable along the time horizon. Fig 12 presents the size of the largest component in different time domains, where the size means the number of links in the component. Generally, the size of the largest component increases as the morning peak approaches, which is reasonable since the congestion area expands and different components start to merge. Even during peak hours, however, the size of the sub-network fluctuates.
Several typical time domains are selected to analyze the spatial evolution of the congestion areas. These moments include 04:35, 05:53, 07:35, and 08:35. The largest congested components are shown in Figs 13-16. The links are all located to the east of the "West Lake", the most famous landscape in the city. This area is the traditional CBD of the city, and the road density is very high. It can be observed in Fig 4 that the morning congestion peak is at about 08:20. At first (04:35, Fig 13), the links in the component are mostly major roads, and only a few are branch roads. The density of the links in the component is lower compared to the other three moments. This implies that most travelers select major roads under uncongested conditions. As the rush hour approaches, the congestion area changes spatially. At 07:35, as in Fig 15, the most congested area almost reaches the "XiXing bridge", a major corridor that crosses the QianTang river, which is indicated in Fig 13. In the morning, commuters enter the city through major roads, making these roads congested.
From Fig 16, another important factor contributing to the spatial change of the congestion area can be identified: the skyway that runs almost across the entire city in the north-south direction. The skyway serves most of the relatively long trips. Because the capacity of the skyway is relatively high and the interaction between the skyway and the surface street network is controlled by a ramp metering system, the velocity on the skyway is high at the beginning, which makes it attractive for long trips. Thus the flow on the skyway increases drastically. Once congestion forms on the skyway, it propagates very fast; we can see that the congestion covers almost the entire skyway. At midday (12:05, Fig 17), such congestion on the skyway does not exist.
Conclusion
Urban traffic bottlenecks have been the focus of many transportation studies; however, the methods and theories to analyze and eliminate bottlenecks are still lacking. Due to the development of modern data collection technologies, large-scale traffic flow data has become available, providing great opportunities to construct urban bottleneck models. This paper takes advantage of new-technology-based speed data and analyzes the dynamics of bottlenecks in the urban area of Hangzhou. The method of "voting" is introduced, where each velocity datum is viewed as a score evaluated for the associated time interval, and the final score of a link is calculated as the weighted sum of all the velocity data of this link. According to the scores of all the links, a list can be created with more congested links ranked higher, so that the bottleneck links can be identified at the top of the list. The congestion areas, consisting of connected bottleneck links, can be recognized based on the ranking result. Subsequently, by applying the rolling time horizon method, the evolution of the bottlenecks can be tracked explicitly. Numerical examples have been provided to demonstrate the proposed data analysis approach.
However, the identification of bottlenecks is only the first step towards operational improvements. In order to make decisions on improvement plans, a deep understanding of the sensitivity of bottleneck dynamics to road capacity and network demand is required. Furthermore, an overall evaluation of the congestion, namely the supply-demand structure, should be carried out before the decision, which can be realized based on the results of this research.
Supporting Information
S1 File. Original tabular data of velocity in the paper. (ZIP)
"Business",
"Computer Science"
] |
DLD: An Optimized Chinese Speech Recognition Model Based on Deep Learning
Speech recognition technology has played an indispensable role in realizing human-computer intelligent interaction. However, most current Chinese speech recognition systems are provided online, while offline models suffer from low accuracy and poor performance. To improve the performance of offline Chinese speech recognition, we propose a hybrid acoustic model of a deep convolutional neural network, long short-term memory, and deep neural network (DCNN-LSTM-DNN, DLD). This model utilizes a DCNN to reduce frequency variation and adds a batch normalization (BN) layer after each convolutional layer to ensure the stability of the data distribution, and then uses LSTM to effectively mitigate the vanishing-gradient problem. Finally, the fully connected structure of the DNN is utilized to efficiently map the input features into a separable space, which is helpful for data classification. Therefore, leveraging the strengths of DCNN, LSTM, and DNN by combining them into a unified architecture can effectively improve speech recognition performance. Our model was tested on the open Chinese speech database THCHS-30 released by the Center for Speech and Language Technology (CSLT) of Tsinghua University, and the DLD model with 3 layers of LSTM and 3 layers of DNN had the best performance, reaching a word error rate (WER) of 13.49%.
Introduction
As we all know, artificial intelligence has been developing rapidly, and intelligent interaction between humans and machines has become a key research area. Speech recognition is one of the important technologies for realizing intelligent human-machine interaction. Currently, almost all commercial speech recognition technologies are provided online, such as WeChat, Baidu, and Xunfei. When the recognized content needs to be kept confidential, such online speech recognition cannot ensure the privacy of the user's information. Therefore, online speech recognition technology is not suitable for all scenarios. To solve this problem, this study focuses on the analysis of a model for offline Chinese speech recognition.
Speech recognition technology is mainly composed of signal preprocessing, feature extraction, an acoustic model, a language model, and a decoder. At present, the most popular feature extraction methods are Mel frequency cepstral coefficients (MFCC), Fbank, and the spectrogram. A spectrogram is a picture that converts the speech signal into information represented by dots of different colours, and it retains the most information among these methods. Therefore, based on these characteristics, this paper uses the spectrogram as the feature extraction method. Convolutional neural networks (CNNs) have shown outstanding performance in the field of image recognition, so in the process of speech recognition, a CNN is widely used as the main part of the acoustic model to learn the acoustic information contained in the spectrogram, and this has been shown to achieve good results.
The N-gram model, which operates on contiguous sequences of n items from a given sample of text or speech, is currently the most widely used language model; its role is to calculate the probability of the arrangement relationships between words. The acoustic model and the language model are the two most important parts of Chinese speech recognition. The relationship between them is that the acoustic model, after training and learning, outputs a Pinyin sequence (a romanization of Chinese characters based on their pronunciation), which is used as the input of the language model, which finally outputs the text sequence.
Among the traditional speech recognition models, the GMM-HMM (Gaussian mixture model-hidden Markov model) has been widely used as a very effective acoustic model. Mohamed et al. applied deep belief networks (DBN) for the first time to construct an acoustic model [1]. Tests of the optimal DBN acoustic model on TIMIT, a standard data set used for the evaluation of automatic speech recognition systems, achieved a phone error rate of 23%. Compared with the GMM-HMM model, the recognition performance of the DNN-HMM (deep neural network-hidden Markov model) model was shown to be significantly better [2,3], so deep neural networks began to replace the Gaussian mixture model in traditional acoustic models.
On the basis of DNNs, recurrent neural networks (RNN), LSTM, CNN, etc., can further improve the modeling ability. Graves et al. proposed a training framework whose character error rate was 14% lower than that of a model with similar real-time functionality [5]. C. Z. Liu and L. Liu proposed using fractional-order theory to process the activation functions of nodes in convolutional neural networks; through calculation, they found that the reciprocal of the fractional order could speed up the convergence of the Sigmoid function and reduce training time [6]. Yang and Wang added the Fisher criterion and L2 regularization to the convolutional neural network, making the trained network weights and biases closer to the optimal values and effectively alleviating the overfitting problem caused by the small amount of data; using a new log activation function, compared with the sigmoid function, could further improve the accuracy of speech recognition [7]. The study by Sainath et al. found that the DNN-LSTM-CNN model is suitable for speech recognition and performs well. However, since Chinese speech recognition involves complex features such as voice intonation, synonyms, etc., we replace the DNN with a DCNN to capture these high-dimensional features. Liu's paper on deep convolutional neural network speech recognition also highlights that the DCNN model has better robustness and accuracy in Chinese speech recognition. Therefore, based on the research of these two papers, we believe that the DCNN-LSTM-DNN model has more prominent performance in Chinese speech recognition.
In the past few years, deep neural networks (DNNs) have achieved great success in large-vocabulary continuous speech recognition (LVCSR) compared to Gaussian mixture model/hidden Markov model (GMM/HMM) systems. Both convolutional neural networks (CNN) and long short-term memory (LSTM) have shown improvements over DNNs in various speech recognition tasks. CNN, LSTM, and DNN are complementary in their modeling capabilities: CNN is good at reducing frequency variation, LSTM is good at temporal modeling, and DNN is suitable for mapping features into a more separable space. Therefore, exploiting their complementarity by combining CNN, LSTM, and DNN into a unified architecture can improve speech recognition performance [8]. In addition, traditional speech recognition results are susceptible to noise interference. This problem can be addressed by a multi-noise speech recognition method based on a deep convolutional neural network (DCNN), which can improve the signal-to-noise ratio of speech signals and the accuracy of speech recognition. Based on these theoretical foundations, we propose a DCNN-LSTM-DNN architecture suitable for Chinese speech recognition.
According to the above work, involving deep learning to improve the accuracy of audio recognition has become an important research direction. The present work focuses on optimizing the Chinese speech recognition model by deep learning methods. Firstly, the DCNN-CTC acoustic model is designed, and the effectiveness of this model is proved by experiments. Then, combining several neural networks, i.e., DCNN, LSTM, and DNN, we propose the DCNN-LSTM-DNN (DLD) acoustic model. We employ the deep convolutional neural network (DCNN) acoustic model as the basic acoustic model and use a statistical language model based on hidden Markov models as the language model. Our main work is to optimize the DCNN acoustic model and improve the performance of the Chinese speech recognition model. After tuning the algorithm many times, we arrived at the DCNN-LSTM-DNN acoustic model, which we call DLD, with the lowest WER. This model achieves the goal of improving the performance of Chinese speech recognition. According to the experimental results on THCHS-30, the word error rate (WER) of the DLD is 13.49%, which is about 7% lower than that of the DCNN-CTC.
In this paper, a novel DCNN-LSTM-DNN model is proposed, which combines the three different network structures of DCNN, LSTM, and DNN for Chinese speech recognition; this new acoustic model significantly improves the accuracy. We address two main issues in the field of Chinese speech recognition. The first is the poor performance of existing Chinese speech recognition models, while the second is that most Chinese speech recognition is online, and there are few local speech recognition systems. We designed the new DCNN-LSTM-DNN acoustic model, which adds BN layers and incorporates the LSTM and DNN network structures, effectively improving the model accuracy and enabling local Chinese speech recognition.
The approach we propose solves three critical problems in the current research. First, existing research using convolutional neural networks requires the network to relearn the data distribution in each training iteration, which causes difficult network training and slow network convergence. We propose to use the BN algorithm to solve this problem: in convolutional neural networks, the BN layer is usually added after the convolutional layer, and it standardizes the input data of each layer to follow a standard normal distribution. This method ensures the stability of the data distribution and thereby accelerates network training. Another challenge in current research is the disappearance of gradients during training. The solution we propose is to add a long short-term memory network; this network structure not only has a memory function but also effectively alleviates the vanishing-gradient problem. Finally, we design a DCNN-LSTM-DNN Chinese speech recognition model with batch normalization to make the model performance better.
Related Work
Our approach builds on recent excellent work that proposed a novel end-to-end age and gender recognition convolutional neural network (CNN) with a specially designed multi-attention module (MAM) operating on speech signals [9].
This model uses MAM to effectively extract spatially and temporally salient features from the input data.
Our work is also inspired by the recent success of a stacked network with dilated CNN feature [10,11], which is a one-dimensional dilated convolutional neural network (1D-DCNN) for speech emotion recognition (SER) that utilizes the hierarchical features learning blocks (HFLBs) with a bidirectional gated recurrent unit (BiGRU).
LSTM is a recurrent neural network architecture, which is mainly used to solve the problems of gradient disappearance and gradient explosion of traditional recurrent neural networks. Haim et al. [12] proposed a recurrent neural network structure based on a long short-term memory network (LSTM), which can use the parameters of the model to train the acoustic model. By comparing the experimental results of LSTM, RNN, and DNN models, it is found that the LSTM model has better recognition performance.
Convolutional neural networks (CNN) are a well-known and widely used deep learning network structure. Convolutional neural networks have gradually been applied to the field of speech recognition due to their translation invariance. Abdel-Hamid et al. [3] proposed a limited-weight-sharing scheme that can better model speech features.
DCNN-LSTM-DNN Hybrid Acoustic Model
In this section, we propose our DCNN-LSTM-DNN hybrid acoustic model. First of all, we provide a brief review of the DCNN-CTC acoustic model that combines connectionist temporal classification (CTC) and a DCNN. Although DCNN-CTC achieves better performance than DCNN, its overall acoustic performance is still not good enough. In order to improve offline Chinese speech recognition performance, we propose the DCNN-LSTM-DNN (DLD) acoustic model. DLD optimizes the previous network, adds batch normalization (BN) layers into the DCNN, and inserts a DNN between the LSTM and the CTC output. As a result, DLD reaches a lower WER, and the speech recognition performance is improved.
DCNN-CTC Acoustic Model.
The training process of the acoustic model belongs to supervised learning, for which it is necessary to know the label corresponding to each frame of the input data. Therefore, in the preparation stage of the training data, the audio data must be aligned with the corresponding Pinyin sequence. An alignment rule for characters and audio is needed to ensure effectiveness even when different people's speaking rates differ. Aligning all the audio manually would clearly be unrealistic for large data sets due to the excessive workload and time consumption. The CTC algorithm, proposed by Graves et al. [13], solves the above problem very well. Its advantage is that when the model incorporates the CTC algorithm, training only requires an input sequence and its corresponding output sequence, without the need for frame-level alignment and labels.
Many scholars have proved that an acoustic model with the CTC algorithm can effectively improve recognition performance; at the same time, it also yields an end-to-end model training structure, reducing the difficulty of speech recognition [14-17]. Based on the DCNN acoustic model, the DCNN-CTC acoustic model is proposed, which consists of 10 convolutional layers and 5 maximum pooling layers, takes ReLU as the activation function of the convolutional layers, and uses the CTC algorithm as the loss function of the model. At the end of the DCNN-CTC, the Pinyin sequence classified by the Softmax layer is the output. The specific structure and parameter settings are shown in Figure 1. The filter_size is set to 3, the pool_size is set to 2, and the numbers of feature maps are set to 32, 64, and 128, respectively.
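A minimal Keras sketch of this DCNN-CTC topology; the spectrogram input shape, the Pinyin vocabulary size, and the exact filter progression are illustrative assumptions, and training would pair the softmax output with a CTC loss such as tf.nn.ctc_loss.

```python
import tensorflow as tf
from tensorflow.keras import layers

# DCNN-CTC sketch: 5 x (conv-conv-maxpool) = 10 conv + 5 pooling layers,
# ReLU activations, softmax over pinyin tokens (+1 blank for CTC).
# input_shape, n_pinyin, and the filter progression are assumptions.
def build_dcnn_ctc(n_pinyin=1400, input_shape=(1600, 200, 1)):
    x = inp = layers.Input(shape=input_shape)          # spectrogram input
    for filters in (32, 64, 128, 128, 128):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    # collapse the frequency axis; keep the downsampled time axis for CTC
    x = layers.Reshape((input_shape[0] // 32, -1))(x)
    out = layers.Dense(n_pinyin + 1, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```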
Assuming that there are $m$ neurons in the $(l-1)$-th layer and $n$ neurons in the $l$-th layer, the linear coefficients $w$ of the $l$-th layer form an $n\times m$ matrix $W^l$, the biases $b$ of the $l$-th layer form an $n\times 1$ vector $b^l$, the output $a$ of the $(l-1)$-th layer forms an $m\times 1$ vector $a^{l-1}$, the pre-activation linear output $z$ of the $l$-th layer forms an $n\times 1$ vector $z^l$, and the output $a$ of the $l$-th layer forms an $n\times 1$ vector $a^l$. In matrix form, the output of the $l$-th layer is $a^l = \sigma(z^l) = \sigma(W^l a^{l-1} + b^l)$, where $\sigma(\cdot)$ denotes the activation function.
DCNN-LSTM-DNN Acoustic Model.
Based on the DCNN-CTC acoustic model, we propose an optimization method that adds a batch normalization layer after each convolution operation and integrates the LSTM and DNN network structures behind the DCNN. The specific network structure is shown in Figure 2.
The batch normalization layer normalizes the input data of each layer, making it easier and faster for the model to learn the regularities in the data; the DCNN part reduces the number of parameters and extracts the key information while learning the input spectrogram features. Then, the LSTM layers predict the current time step information according to the previous time steps, and finally the predicted results are fed into the DNN for classification to obtain the Pinyin sequence.
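Extending the sketch above, a hedged Keras rendering of the DLD topology (BN after every convolution, a linear reduction layer, 3 LSTM and 3 DNN layers); all layer widths are illustrative assumptions that only loosely follow Table 4.

```python
import tensorflow as tf
from tensorflow.keras import layers

# DLD sketch: DCNN (+BN) -> linear reduction -> 3 x LSTM -> 3 x DNN -> softmax.
# Layer widths are assumptions for illustration, not the paper's Table 4 values.
def build_dld(n_pinyin=1400, input_shape=(1600, 200, 1)):
    x = inp = layers.Input(shape=input_shape)
    for filters in (32, 64, 128, 128, 128):            # assumed filter progression
        for _ in range(2):                             # conv + BN + ReLU, twice
            x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
            x = layers.BatchNormalization()(x)         # stabilizes layer inputs
            x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Reshape((input_shape[0] // 32, -1))(x)
    x = layers.Dense(256)(x)                           # linear layer shrinks features
    for units in (256, 256, 256):                      # 3 LSTM layers: temporal model
        x = layers.LSTM(units, return_sequences=True)(x)
    for units in (512, 512, 512):                      # 3 DNN layers: classification
        x = layers.Dense(units, activation="relu")(x)
    out = layers.Dense(n_pinyin + 1, activation="softmax")(x)  # CTC blank included
    return tf.keras.Model(inp, out)
```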
DCNN Component.
Due to its translation invariance, the CNN has gradually been applied to the field of speech recognition [18]. It plays an indispensable role in solving machine learning problems in fields involving high-dimensional data such as speech recognition and computer vision [19]. The main function of the CNN is to collect key features in the data through convolution operations while eliminating features carrying less information through pooling operations. Therefore, convolution and pooling operations enable the model to learn data features more accurately and faster with fewer parameters.
Visual geometry group network (VGGNet) is very effective in image recognition and is based on the CNN [20,21]. In order to realize speech feature extraction, our work first converts the audio data into the corresponding spectrogram and then takes it as the input of the acoustic model. We design a DCNN structure with 10 convolutional layers and 5 pooling layers by referring to the structure of VGGNet.
There are two commonly used activation functions in CNNs: Sigmoid and ReLU. Different activation functions are suitable for different data [21,22]. In order to verify which of Sigmoid and ReLU is more effective for voice information, we established a Sigmoid-based network, DCNN, and a ReLU-based network, DCNN (ReLU), respectively. The DCNN consists of 10 convolutional layers and 5 maximum pooling layers. The specific process of spectrogram feature extraction is convolution-convolution-maxpooling. The cross-entropy loss function is used as the loss function of the model, and at the end of the model the results are classified through the Softmax layer. The structure of the DCNN (ReLU) acoustic model is the same as that of the DCNN; the only difference is that the activation function of the DCNN (ReLU)'s convolutional layers is ReLU, while that of the DCNN is Sigmoid.
Batch Normalization (BN) Layers.
As one of the most widely used optimization methods in deep learning, batch normalization (BN) [23] can improve the performance and stability of neural networks. By adding BN layers, the impact of data distribution shift during training can be effectively mitigated, and the training of the model can be accelerated while avoiding the vanishing gradient.
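For reference, the standard BN transform of [23], applied here after each convolution, normalizes each mini-batch $\mathcal B$ and then rescales with learnable parameters $\gamma$ and $\beta$:

\[ \hat x_i = \frac{x_i - \mu_{\mathcal B}}{\sqrt{\sigma_{\mathcal B}^2 + \epsilon}}, \qquad y_i = \gamma\,\hat x_i + \beta, \]

where $\mu_{\mathcal B}$ and $\sigma_{\mathcal B}^2$ are the mini-batch mean and variance and $\epsilon$ is a small constant for numerical stability.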
Long Short-Term Memory Network Component.
A long short-term memory (LSTM) network is a neural network architecture with a memory function and a gating mechanism, which has the advantage of better handling time-related data. For large-vocabulary speech recognition, the LSTM model had the best experimental results and the fastest convergence compared to the RNN and DNN models [12]. In essence, speech recognition is a process of recognition based on the relevance of the preceding and following context. The LSTM is a recurrent neural network architecture that solves the problems of gradient disappearance and gradient explosion in traditional recurrent neural networks by adding a gate mechanism [24], including a forget gate, an input gate, and an output gate. The role of the three gates is to decide which information to discard, which to keep, and which to update.
As shown in Figure 3 [25], $x_t$ is the input information of the cell, and $f_t$, $i_t$, and $o_t$ represent the outputs of the forget gate, input gate, and output gate, respectively. All of the gates use Sigmoid as the activation function, denoted by $\sigma$, while the cell unit uses Tanh. $h_{t-1}$ and $C_{t-1}$, and $h_t$ and $C_t$, are the hidden states and cell states of the previous and current time steps, respectively. $w_f, w_i, w_o, w_c$ and $b_f, b_i, b_o, b_c$ represent the weight matrices and bias coefficients of the forget gate, input gate, output gate, and cell unit, respectively. The specific calculation process is as follows:

\[ f_t = \sigma(w_f\,[h_{t-1}, x_t] + b_f), \quad i_t = \sigma(w_i\,[h_{t-1}, x_t] + b_i), \quad \tilde C_t = \tanh(w_c\,[h_{t-1}, x_t] + b_c), \]
\[ C_t = f_t \odot C_{t-1} + i_t \odot \tilde C_t, \quad o_t = \sigma(w_o\,[h_{t-1}, x_t] + b_o), \quad h_t = o_t \odot \tanh(C_t). \quad (2) \]

One problem with LSTM is that temporal modeling is done directly on the input features. However, higher-level modeling of the features can help clarify the underlying factors of variation in the input, which makes it easier to learn the temporal structure between successive time steps. Studies have shown that DCNNs can learn speaker-adapted, discriminatively trained features that reduce variation in the input. Therefore, it is beneficial to have several DCNN layers in front of the LSTM. The specific implementation of the combined DCNN-LSTM is divided into three steps. First, we reduce the frequency variation of the input signal by passing the input through several convolutional layers. The output of the last DCNN layer is large due to the number of feature maps and time-frequency contexts; therefore, we add a linear layer to reduce the feature size before passing it to the LSTM layers, and we find that adding this linear layer after the DCNN layers reduces the number of parameters without loss of accuracy. Next, after frequency modeling, the DCNN output is passed to the LSTM layers, which are suitable for modeling the signal in time. Finally, after frequency and temporal modeling, we pass the output of the LSTM to several fully connected DNN layers. These higher layers are suitable for generating higher-order feature representations of the different classes that are easier to distinguish. Therefore, we believe that combining DCNN and LSTM helps to improve the performance of Chinese speech recognition.
DNN Component.
DNN is composed of an input layer, hidden layers, and an output layer. The difference between DNN and CNN/LSTM is that the DNN is a fully connected structure and does not include convolutional units or temporal associations. This full-connection property can effectively map the input features into a separable space, which is helpful for data classification. Figure 3: The structure of the long short-term memory network.
Training Data and Environment Setup.
In the experiments, we use THCHS-30, an open-source Chinese acoustic data set. THCHS-30 consists of more than 30 hours of news audio, as shown in Table 1, recorded in quiet environments by college students who speak Mandarin fluently. The sampling rate of the recordings is 16,000 Hz, and the sample size is 16 bits. The 1000 sentences (text prompts used in recording) of THCHS-30 were selected from a large volume of news. Of the whole data set, 80% is used as training data and the remaining 20% as the test set.
All our experiments are implemented on a server with powerful computational capabilities, and Python is used as the programming language. We use TensorFlow to build the acoustic model and choose the TensorFlow-GPU version to speed up training. Among the packages used during the experiments, python_speech_features and SciPy handle the audio data processing, and matplotlib is used to draw the result diagrams. The specific configuration information is given in Table 2. The results are summarized in Table 3: the WER of the DCNN on the test set is 22.23%. When the Sigmoid activation of the DCNN's convolutional layers is replaced with ReLU, the WER of the DCNN (ReLU) is 21.76%, which is 0.47% lower than that of the DCNN. Due to the introduction of CTC, DCNN-CTC has the best result among the three models, 20.18%, an improvement of 2.05% compared with the DCNN. According to the results in Table 3, the introduction of CTC improves the performance of the acoustic model to a certain extent, and the recognition is better with ReLU as the activation function.
Results of DLD Acoustic Model.
We trained the network with different numbers of LSTM layers and DNN layers to find the best architecture based on DCNN-CTC. First, we test the DCNN-CTC acoustic model combined with LSTM to determine the optimal number of LSTM layers. Then, we test the DCNN-LSTM acoustic model combined with DNN and determine the best architecture of the DLD acoustic model. The number of nodes of the LSTM and DNN differs in each layer; Table 4 lists the number of nodes corresponding to each layer of the LSTM and DNN.
Due to its memory function, the LSTM can effectively process time-series-related content, which indicates that LSTM can play a useful role in the acoustic model. Therefore, we test LSTMs with different numbers of layers to find the best layer count for the model. Table 5 shows that the WER drops by 0.52% from DCNN + 2 LSTM to DCNN + 3 LSTM. However, when the number of LSTM layers increases to 4, the WER does not change significantly, while the speed of model training decreases. Thus, integrating LSTM can effectively improve the performance of audio recognition, and the number of layers has an optimal value. The fully connected DNN layers are used for classification; if too many layers are used, the model parameters increase, reducing the training speed.
Therefore, we only test models combining two layers and three layers of DNN. From Table 6, on the basis of the DCNN-3LSTM model, the WER of the model incorporating a 2-layer DNN is 13.87%, while that with a 3-layer DNN is 13.49%; in comparison, the recognition effect is improved by 0.38%. Therefore, we can determine that the DCNN-CTC acoustic model, after integrating 3 layers of LSTM and 3 layers of DNN in turn, has the best performance. In addition, we performed 10-fold cross-validation. First, we divide the entire training set S into 10 disjoint subsets; assuming that the number of training examples in S is m, each subset has m/10 examples. In each fold, one held-out subset serves as the validation set, used to detect overfitting or underfitting and to choose the best number of training rounds, and the model is then evaluated on the test set to obtain the classification rate. Finally, the average of the 10 classification rates is taken as the true classification rate of the model. This method makes full use of all samples, but the computation is cumbersome, requiring 10 rounds of training and testing. We found that the DLD acoustic model performs best with 3 layers of LSTM and 3 layers of DNN. From the above experimental results, the final WER of the DLD acoustic model is 13.49%, while that of the DCNN-CTC acoustic model is 20.18%; the WER has thus dropped by about 7%. The change trends of the accuracy of the DLD and DCNN-CTC acoustic models can also be clearly seen in Figure 5.
Comparison with Other Acoustic Models.
The SE-MCNN-CTC acoustic model [26], proposed by Zhang et al., consists of MCNN and SENet (squeeze-and-excitation networks) built on the basis of the DCNN-CTC acoustic model. MCNN is a multipath convolutional neural network, which combines multiple branch networks in parallel.
The DCNN-BGRU-CTC acoustic model [27], proposed by Lv, integrates a bidirectional gated recurrent unit (GRU) into DCNN-CTC. The bidirectional GRU is used to capture context information, and CTC is used as the loss function for end-to-end training. Table 7 shows the comparison among the DCNN-CTC model designed at the beginning, the optimized DLD, and the other two models. According to the table, our DLD has a 14.28% WER, the lowest among all the results. This also shows that our DLD has better recognition performance on THCHS-30 than the other models.
Conclusions
In this paper, we mainly optimize the acoustic model in Chinese speech recognition. Based on the DCNN-CTC acoustic model, we propose the deep convolutional neural network-long short-term memory-deep neural network (DCNN-LSTM-DNN, DLD) acoustic model. This model adds a batch normalization (BN) layer, a long short-term memory network (LSTM), and a deep neural network (DNN) structure. As a result, the WER of the acoustic model is reduced by about 7%. As shown in our work, batch normalization and the integration of LSTM and DNN are found to improve the training speed and accuracy of the acoustic model. However, since the THCHS-30 dataset we used was recorded in a quiet environment by college students who speak Mandarin fluently, accuracy will decrease on accented speech or noisy data in real applications. In future work, we will try to introduce a self-attention mechanism into the language model to strengthen its ability to learn homophones, so as to further improve Chinese speech recognition accuracy.
Data Availability
The data set used to support this research, THCHS-30, is completely free for academic users. THCHS-30 is a classic Chinese speech data set, which provides a toy database for new researchers in the field of speech recognition (https://www.openslr.org/18/).
Conflicts of Interest
The authors of this paper declare that there are no conflicts of interest regarding the publication of this paper.
Ultimate precision limit of noise sensing and dark matter search
The nature of dark matter is unknown and calls for a systematic search. For axion dark matter, such a search relies on finding feeble random noise arising from the weak coupling between dark matter and microwave haloscopes. We model this process as a quantum channel and derive the fundamental precision limit of noise sensing. An entanglement-assisted strategy based on two-mode squeezed vacuum is thereby demonstrated to be optimal, while the optimality of a single-mode squeezed vacuum is found to be limited to the lossless case. We propose a 'nulling' measurement (squeezing and photon counting) to achieve the optimal performance. In terms of the scan rate, even with 20 decibels of squeezing strength, single-mode squeezing still underperforms the vacuum limit, which is achieved by photon counting on vacuum input; meanwhile, the two-mode squeezed vacuum provides a large and close-to-optimum advantage over the vacuum limit, so more exotic quantum resources are not required. Our results highlight the necessity of entanglement assistance and microwave photon counting in dark matter search.
A fundamental question that puzzles us today is the nature of the hypothetical dark matter (DM) that makes up a large portion of the entire Universe's energy density, as inferred from multiple astrophysical and cosmological observations and simulations [1][2][3]. Due to its weak interaction with ordinary matter, DM is extremely challenging to search for. Moreover, as the frequency of DM is unknown, a search requires a scan over a huge frequency range from terahertz to hertz, involving different systems ranging from opto-mechanical [4][5][6][7][8][9][10][11][12] to microwave [13][14][15][16][17], which can easily take hundreds of years with state-of-the-art technology [17,18]. As much attention has been devoted to utilizing quantum metrology, empowered by quantum resources such as squeezing [14,15,17] and entanglement [19], to boost the DM search, it is crucial to understand the ultimate precision limits of DM search allowed by quantum physics.
Axion dark matter search relies on microwave haloscopes-microwave cavities in the presence of a magnetic field [13][14][15][16][17]-that allow axion particles to convert to microwave photons. Such a search process can be modelled as a quantum sensing problem over a covariant bosonic quantum channel [20,21], whose additive noise level reveals the existence of DM. The ultimate precision limit of DM search can therefore be understood from the ultimate precision limit of additive noise sensing. However, while the ultimate limits of phase sensing [22], displacement sensing [23,24], loss sensing [25,26] and amplifier gain sensing [27] have been explored, little is known about the limit of noise sensing [28] in bosonic quantum channels when there is an energy constraint.
In this paper, we derive the ultimate precision limit of energy-constrained [26,27] noise sensing in covariant bosonic Gaussian channels, and thereby reveal the DM search performance limit allowed by quantum physics. Via quantum Fisher information (QFI) calculations, we show that an entangled source in the form of two-mode squeezed vacuum (TMSV) is optimal for noise sensing in the parameter region of interest. On the other hand, a (single-mode) squeezed vacuum source is only optimal in the lossless case, and even underperforms the vacuum limit when the loss is large.
Next, we consider measurement protocols. Although it has been shown [15,17,29] that a squeezed vacuum input improves the performance of homodyne measurement, our analyses show that protocols with homodyne detection are sub-optimal in general. Instead, a 'nulling' measurement strategy based on squeezing and photon counting is optimal and beats homodyne detection by orders of magnitude. For vacuum input, the nulling strategy simplifies to direct photon counting. In particular, when implemented ideally in the lossless case, it takes 15 dB of squeezing for a homodyne-based strategy to overcome the (photon-counting) vacuum limit.
Finally, we interpret our results for DM search in the setting of microwave haloscopes. We show that the total Fisher information is proportional to the previously well-accepted figure of merit, the scan rate (information acquisition rate across all frequencies [15,17,19]), in homodyne detection. Then, we recover squeezing's advantage over vacuum homodyne. However, we show that these strategies are below the vacuum-limit scan rate (vacuum + photon counting) in the practical range of squeezing (< 20 dB). In contrast, an entanglement-assisted strategy based on TMSV enables an optimal advantage over the vacuum limit at an arbitrary squeezing level, achievable with our nulling receiver based on photon counting. This provides a guideline for the next-generation haloscopes: developing good microwave photon counting measurement is of high priority, while more exotic quantum sources such as the Gottesman-Kitaev-Preskill state [30] are not necessary since a simple TMSV state is optimal. Our results also apply to a sensor network [19,31]; all quantum advantages hold, with an additional scaling advantage due to coherent signal processing.
A. DM search as additive noise sensing

We consider dark matter search for the axion DM model (see Fig. 1), while we note that our results may also apply to other DM hypotheses. In a search for axion dark matter, an important experimental set-up involves a cavity in the presence of electric and magnetic fields (microwave haloscopes), where axion DM can couple to cavity modes.

[Figure 1. Conceptual plot of the entanglement-enhanced microwave haloscope for axion dark matter search. A pair of entangled probes, including the signal (red beam) and the ancilla (blue beam), are generated at the bottom-right box. Note that the blue beam becomes inaccessible when entanglement assistance is forbidden. The signal probe is then shined on the input port of a microwave cavity, which is coupled with the axion (top-right beam) via a strong magnetic field. At this moment, the unknown parameter of the axion is encoded on the signal probe, while the ancilla is shared with the receiver intact. The receiver applies a nonlinear processing to the returned probes jointly, e.g. a two-mode squeezer for the entanglement-enhanced case. Finally, the receiver collects the photon counts and estimates the unknown parameter of the axion based on the readout. Background image credit: ©James Webb Space Telescope.]

Due to the large number density, axion DM is assumed to behave as classical waves [19]: the mean field at position $\bm{x}$ has the form
$$a(\bm{x}, t) \propto \cos(\omega_0 t - \bm{k}_0 \cdot \bm{x} + \phi_a),$$
where the center frequency $\omega_0$ is determined by the axion DM mass $m_a$, $\bm{k}_0$ is the wave factor and $\phi_a$ is a phase factor. As the potential DM-induced cavity signal is weak, to determine the presence or absence of DM, one considers a long observation time, during which $\phi_a$ is completely random in $[0, 2\pi)$. Due to the randomness of the axion field, the input-output relation of the cavity at each frequency $\omega$ can be effectively modeled as
$$\hat{a}_{\rm out}(\omega) = \chi_{mm}(\omega)\,\hat{a}_{\rm in}(\omega) + \chi_{m\ell}(\omega)\,\hat{a}_B(\omega) + \chi_{ma}(\omega)\,\mu_a, \qquad (2)$$
where $\chi_{mm}, \chi_{m\ell}, \chi_{ma}$ are the susceptibilities determined by the cavity coupling rates ($\gamma_m$, $\gamma_\ell$ and $\gamma_a$ for the measurement port, loss and axion), and $\hat{a}_B$ describes the thermal background with a mean photon number determined by the cavity temperature. In general, $\gamma_a \ll \gamma_m, \gamma_\ell$. For simplicity, we have omitted the noise-independent phase factors, as they do not affect our analyses. Most importantly, the DM-induced signal contributes to the additive noise: $\mu_a$ is a complex Gaussian random number with variance equaling $n_a$, the number of axion particles in the cavity. The search for DM is therefore a parameter estimation task of the additional additive noise $\chi^2_{ma} n_a$ from axion DM. The input-output relation in Eq. (2) is a special case of a phase-covariant bosonic Gaussian channel (BGC) [20,21] $\mathcal{N}_{\kappa,n_B}$ with transmissivity $\kappa$ and dark count noise of mean photon number $n_B$, which maps a vacuum input state to a thermal state with mean photon number $n_B$ and an input mean field $\alpha$ to the output mean field $\sqrt{\kappa}\,\alpha$. The transmissivity $\kappa$ ranges from 0 to $\infty$: for $0 \le \kappa < 1$, $\mathcal{N}_{\kappa,n_B}$ is a thermal-loss channel, which corresponds to the one in dark matter search, Eq. (2); for $\kappa = 1$, $\mathcal{N}_{\kappa,n_B}$ is an additive white Gaussian noise (AWGN) channel; for $\kappa > 1$, $\mathcal{N}_{\kappa,n_B}$ is a thermal amplifier channel. For the channel to be physical, the dark photon count must be larger than the intrinsic amplification noise: $n_B \ge \max\{\kappa - 1, 0\}$. Note that our definition of the noise $n_B$ differs from some other conventions for the purpose of simplifying our notations; see Appendix A.
For the case of Eq. (2), the quantum channel is the thermal-loss case, with transmissivity $\kappa(\omega) = \chi^2_{mm}(\omega)$ and the noise coming from the thermal background and the DM axion,
$$n_B(\omega) = \left[1 - \chi^2_{mm}(\omega)\right] n_T + \chi^2_{ma}(\omega)\, n_a,$$
where the thermal photon number of the background environment mode, $n_T \equiv 1/[\exp(\hbar\Omega/k_B T) - 1]$, is approximately taken at the center frequency $\Omega$. Before we proceed with our analyses, we provide some realistic parameter settings. At the practical operating condition [17,19] of temperature T = 35 mK and frequency f ≈ 7 GHz, the environment thermal photon number is $n_T \sim 10^{-4}$ from the Bose-Einstein distribution. According to theoretical predictions, the axion density $n_a/V \sim 10^{15}\, \lambda_{\rm km}\, {\rm cm}^{-3}$ [19] is noticeably large in the microwave region, where λ is the de Broglie wavelength of the DM. At the same time, the coupling between the axion and the cavity, $\chi^2_{ma}(\omega)$, is extremely weak, such that the added noise $\chi^2_{ma}(\omega)\, n_a \ll 1$ is infinitesimal and therefore hard to verify or nullify.
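As a quick numerical cross-check of the quoted operating point, a short Python computation of the Bose-Einstein occupancy (standard physical constants; not code from the paper):

```python
# Worked check: at T = 35 mK and f ~ 7 GHz the thermal photon number
# n_T = 1/(exp(hbar*Omega/(kB*T)) - 1) is of order 1e-4, as quoted.
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def thermal_photons(f_hz, T_kelvin):
    return 1.0 / np.expm1(hbar * 2 * np.pi * f_hz / (kB * T_kelvin))

print(thermal_photons(7e9, 35e-3))   # ~ 7e-5, consistent with n_T ~ 1e-4
```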
In the above we have modeled a single sensor case. However, as we will discuss at the end of the paper, identical sensor arrays can be reduced to the above single sensor case [19], and therefore our results below can be adopted to sensor-networks.
B. General noise sensing strategies
We consider the estimation of the additive noise $n_B$ in BGCs, assuming that the transmissivity is known from prior calibration.

[Figure 2. Schematic of the entanglement-assisted strategy. A probe system S is allowed to be entangled with an ancilla A. The probe S is input to the channel $\mathcal{N}^{\otimes M}_{\kappa,n_B}$. The output system R is jointly measured with the ancilla A.]

To measure the additive noise $n_B$, one can in general input a probe system S in state $\hat\rho$ and measure the output system R of the channel $\mathcal{N}^{\otimes M}_{\kappa,n_B}(\hat\rho)$. Here, to achieve a good performance, we have considered probing the channel M times with an input state $\hat\rho$ potentially entangled across the M probings, and a joint measurement on the entire output. To model a physically meaningful setting operating with finite energy, the total mean photon number of the input state is constrained to be $M N_S$ over the M modes.
A general strategy can also rely on entanglement to boost the sensing performance [32][33][34]. As shown in Fig. 2, this is implemented by allowing an ancilla A entangled with the probe, such that the joint state of system AS is pure. In general, one can write the joint input-ancilla pure state as
$$|\psi_0\rangle_{AS} = \sum_{\bm{n}} \sqrt{p_{\bm{n}}}\, |\chi_{\bm{n}}\rangle_A |\bm{n}\rangle_S, \qquad (7)$$
where $|\bm{n}\rangle = \otimes_{\ell=1}^{M} |n_\ell\rangle$ is the number-state basis of the M-mode signal S, the $p_{\bm{n}}$'s are probabilities normalized to unity, and the ancilla states $\{|\chi_{\bm{n}}\rangle\}$ are normalized but not necessarily orthogonal [26,27]. We adopt the vector notation $\bm{n} = \{n_1, \cdots, n_M\}$. For such a state, the energy constraint is specified as $\sum_{j=1}^{M} \langle \hat{a}^\dagger_{S,j} \hat{a}_{S,j} \rangle = \sum_{\bm{n}} p_{\bm{n}} \sum_j n_j \le M N_S$, where $\hat{a}_{S,j}$ is the j-th signal mode. In an entangled strategy, one can perform a measurement on the output R and the ancilla system in the quantum state
$$\hat\rho = \left(\mathcal{I}_A \otimes \mathcal{N}^{\otimes M}_{\kappa,n_B}\right)\left(|\psi_0\rangle\langle\psi_0|_{AS}\right),$$
where the identity channel $\mathcal{I}$ models perfect ancilla storage.
In both strategies, we quantify the performance via the root-mean-square (rms) error δn B . In the following, we provide the ultimate limits on δn B , when one is allowed to optimize any entangled input-ancilla state |ψ 0 AS (subject to the energy constraint) and any measurement strategy. Then we consider practical protocols consisting of a source and a measurement to achieve the limit in the parameter region of interest.
C. Ultimate limit on noise sensing

Given a fixed state $\hat\rho(n_B)$ dependent on the parameter $n_B$, the rms error in estimating $n_B$ when allowing an arbitrary measurement is lower bounded by the asymptotically tight quantum Cramér-Rao bound [35][36][37],
$$\delta n_B \ge 1/\sqrt{J[\hat\rho(n_B)]}.$$
The quantity $J[\hat\rho(n_B)]$ is the QFI [38], defined via
$$J[\hat\rho(\theta)] = \lim_{d\theta \to 0} \frac{8\left[1 - F\big(\hat\rho(\theta), \hat\rho(\theta + d\theta)\big)\right]}{d\theta^2}, \qquad (10)$$
where the fidelity between two quantum states $\hat\rho_0, \hat\rho_1$ is defined as $F(\hat\rho_0, \hat\rho_1) \equiv \mathrm{Tr}\sqrt{\sqrt{\hat\rho_0}\,\hat\rho_1\sqrt{\hat\rho_0}}$. As the QFI $J[\hat\rho(n_B)]$ depends on the input-ancilla state via Eq. (7), in order to understand the ultimate limit of noise sensing precision, we need to maximize $J[\hat\rho(n_B)]$ over all 2M-mode general quantum states $|\psi_0\rangle_{AS}$, subject to the total photon number constraint of $M N_S$ on the input system S. This is in general a challenging task, as the states can be arbitrary and entangled across 2M modes; however, we are able to obtain the following upper bound on $J[\hat\rho(n_B)]$ by making use of the fidelity interpretation of the QFI in Eq. (10). We detail the full proof based on the unitary extension (UE) of channels in Appendix B.
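As an illustration of Eq. (10), the following hedged sketch estimates the QFI of a thermal state by finite differences of the fidelity, assuming the QuTiP library is available; it should reproduce the vacuum-limit value $1/[n_B(n_B+1)]$ quoted below.

```python
# Finite-difference QFI from the fidelity definition, J = 8(1 - F)/dtheta^2.
# Assumes QuTiP; the Fock-space truncation `dim` must be large enough.
import qutip

def qfi_noise(n_B, dn=1e-3, dim=60):
    rho0 = qutip.thermal_dm(dim, n_B)
    rho1 = qutip.thermal_dm(dim, n_B + dn)
    F = qutip.fidelity(rho0, rho1)   # Tr sqrt(sqrt(rho0) rho1 sqrt(rho0))
    return 8.0 * (1.0 - F) / dn**2

n_B = 0.1
print(qfi_noise(n_B), 1.0 / (n_B * (n_B + 1)))   # both ~ 9.09
```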
Theorem 1. The quantum Fisher information per mode for energy-constrained additive-noise sensing of a phase-covariant bosonic Gaussian channel $\mathcal{N}_{\kappa,n_B}$ has an upper bound $J_{UB,UE}$, given in Eq. (11), where $N_S$ is the input mean photon number per mode. Furthermore, the upper bound is additive: $J[\hat\rho(n_B)] \le M J_{UB,UE}$ for any 2M-mode input-ancilla state subject to the mean photon number constraint $M N_S$.
The additivity of the above upper bound can also be proven in a more general setting, where multiple channels depend on a global noise parameter θ. Consider a compound channel $\otimes_{\ell=1}^{K} \mathcal{N}_{\kappa_\ell, n_{B,\ell}(\theta)}$, where the noise of each sub-channel, $n_{B,\ell}(\theta)$, is a general smooth function of θ. Suppose one utilizes $N_{S,\ell}$ mean photons on each sub-channel; the total Fisher information about θ is upper bounded by
$$J \le \sum_{\ell=1}^{K} \left[\partial_\theta n_{B,\ell}(\theta)\right]^2 J_{UB}\left(N_{S,\ell}, \kappa_\ell, n_{B,\ell}\right), \qquad (12)$$
where $J_{UB}(N_S, \kappa, n_B)$ makes the functional dependence in Eq. (11) explicit. The detailed proof is presented in Appendix B. This additivity property is non-trivial, as in general the inputs to different channels can be entangled. Before proceeding to apply the upper bound, we would like to compare with some known results. In Ref. [28], there is an upper bound from teleportation (TP) stretching that holds for the energy-unconstrained problem of noise estimation. In our notation, it provides an upper bound $J_{UB,TP}$, Eq. (13), which holds true for arbitrary values of $N_S$. In our analyses, we will use the best upper bound combining both Eq. (11) and Eq. (13). As shown in Fig. 3, in the practical region of squeezing (the region below the cyan dashed line), $J_{UB} = J_{UB,UE}$ is adopted; in the large-squeezing region (above the cyan dashed line), $J_{UB} = J_{UB,TP}$ is adopted. It is noteworthy that the teleportation-based bound $J_{UB,TP}$ does not depend on the photon-number constraint $N_S$. This is because its derivation allows infinite energy: it is the QFI achieved by the Choi state of the channel, which is the channel output when the TMSV input becomes infinitely squeezed [28]. Naturally, it is a loose bound for finite $N_S$. In contrast, our unitary extension bound $J_{UB,UE}$ is much tighter for small $N_S$, but the assumption that the receiver can access the environment of the unitary extension makes $J_{UB,UE}$ loose in the limit of large $N_S$: when $N_S$ increases, eventually the environment contains too much information about the noise level; therefore, the assumed access to the environment increases the QFI drastically and makes the resulting QFI upper bound loose.
D. Performance of Gaussian sources
With the ultimate limit in hand, we now consider the QFI enabled by different types of input-ancilla states $|\psi_0\rangle$. We consider M identical probes, each with mean photon number $N_S$, i.e. $\langle \hat{a}^\dagger_S \hat{a}_S \rangle \le N_S$. Due to the additive nature of the QFI for multiple copies, we will just consider the Fisher information for a single probe. All sources considered here are Gaussian [21] and the QFI can be evaluated analytically, as we detail in Appendix C.
We begin with the $N_S = 0$ case of vacuum input. The vacuum-limit (VL) QFI can be evaluated as
$$J_{VL} = \frac{1}{n_B(n_B+1)}, \qquad (15)$$
which also coincides with $J_{UB}$ at zero input photon number. In this case the performance is limited by vacuum noise fluctuations; thus, we name the corresponding QFI the vacuum limit. As we will address later, this vacuum limit is much better than the performance of vacuum input with homodyne detection, the latter often considered as the benchmark in previous works [15,17]. Now we consider exotic quantum resources to overcome the vacuum limit. We begin with the squeezed vacuum state, in the absence of any entangled ancilla. Squeezed vacuum states have been considered in DM search [15]; however, the QFI enabled by them remains unclear. A squeezed vacuum state is prepared by applying the single-mode squeezing $\hat{S}(r) = \exp\left[-r\left(\hat{a}_S^2 - \hat{a}_S^{\dagger 2}\right)/2\right]$ to a vacuum mode, where $\hat{a}_S$ is the annihilation operator of the initial mode and r is the squeezing parameter. The resulting mode has mean photon number $N_S = \sinh^2 r$ and quadrature variances $G \equiv \exp(2r)$ and $1/G \equiv \exp(-2r)$ for position and momentum, where we have chosen the vacuum variance as unity. Here G is often quoted in decibels (dB) as the squeezing strength. A single-mode squeezed vacuum (SV) yields the QFI $J_{SV}$ of Eq. (16). First, as a sanity check, with zero mean photon number $N_S = 0$, the QFI result $J_{SV} = 1/[n_B(n_B+1)] = J_{VL}$ agrees with the vacuum limit. When the input mean photon number $N_S \gg 1$ is large, $J_{SV} \simeq 2/(1-\kappa+2n_B)^2$ converges to a finite value. From the above, we see that single-mode squeezing can even worsen the performance when $n_B \ll 1$ in the lossy case of $\kappa < 1$, while in the lossless case of $\kappa = 1$, the squeezed-state QFI $J_{SV} = J_{UB}$ achieves the upper bound when $n_B \ll \min[1, 1/N_S]$. These asymptotic behaviors can be verified in an example in Fig. 3(a) and Fig. 4(a), where we plot the ratios relative to the upper bound and the vacuum limit. Overall, we see close-to-optimal performance of the single-mode squeezed vacuum only when $\kappa \sim 1$ is close to the lossless limit. Note that the optimality at $\kappa \sim 0$ is trivial and input-independent.
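For concreteness, the conversions among the squeezing strength G in dB, the squeezing parameter r, and the photon number $N_S$ used above can be scripted as follows (illustrative numbers only):

```python
# Conversions between squeezing strength in dB, the squeezing parameter r,
# and the signal photon number N_S = sinh(r)^2.
import numpy as np

def squeeze_params(G_dB):
    G = 10 ** (G_dB / 10)        # quadrature variance ratio, G = exp(2r)
    r = 0.5 * np.log(G)
    N_S = np.sinh(r) ** 2
    return G, r, N_S

for G_dB in (10, 15, 20):
    print(G_dB, squeeze_params(G_dB))
# e.g. 20 dB (G = 100) corresponds to N_S = sinh(ln 10)^2 ~ 24.5 photons
```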
To further improve the performance, we consider entanglement-assisted strategies, where one stores an ancilla A entangled with the input signal S and jointly measures the signal and the ancilla for noise estimation. In this work, we consider entanglement in the form of TMSV, which is readily available in both the optical and the microwave domain. A TMSV state can be prepared by applying the two-mode squeezing
$$\hat{S}_2(r) = \exp\left[r\left(\hat{a}_S^\dagger \hat{a}_A^\dagger - \hat{a}_S \hat{a}_A\right)\right]$$
to two vacuum modes. After the two-mode squeezing, the signal mode has mean photon number $N_S \equiv \langle \hat{a}^\dagger_S \hat{a}_S \rangle = \sinh^2 r$. Similar to the single-mode squeezing case, we define the squeezing strength $G = \exp(2r)$, as such a TMSV state becomes two independent squeezed states of strength G after passing through a balanced beamsplitter. The QFI for noise estimation enabled by TMSV can be evaluated as Eq. (17). Different from single-mode squeezing, we find that TMSV always overcomes the vacuum limit, as quantified in Eq. (18).

[Figure caption: The values of the maximum and minimum in each subplot are highlighted in red. The range between two adjacent contours is 2 decibels. $n_B = 10^{-3}$. Note that in the larger-$n_B$ region neither source has much of an advantage compared to vacuum.]
In particular, for the ideal lossless scenario of κ = 1, the TMSV source can be proven to be optimal in the weak-noise limit, namely $J_{TMSV}/J_{UB} \simeq 1$ when $n_B \ll \min[1, 1/N_S]$. Furthermore, the TMSV source achieves the teleportation bound exactly in the limit of large squeezing, $N_S \to \infty$. We verify the above conclusions numerically. In Fig. 3(b), we indeed see that the ratio $J_{TMSV}/J_{UB}$ is close to unity in most of the parameter space, not limited to κ = 1. In Fig. 4(b), we see that the TMSV source yields an appreciable advantage over the vacuum limit, which survives in the entire parameter region and is largest in the high-squeezing and high-transmissivity region, as expected from Eq. (18).
E. Measurement protocols on Gaussian sources
Now we consider the measurement to achieve the QFI for the various types of input quantum states.
When the input is vacuum, the output state is a thermal state with mean photon number $n_B$, which is a photon-number-diagonal state. Consequently, the vacuum limit can be achieved by a photon-counting measurement. Note that in this case homodyne detection on vacuum input is strictly sub-optimal, with the Fisher information
$$I_{Vac-hom} = \frac{2}{(1+2n_B)^2}. \qquad (19)$$
The performance degradation from the vacuum limit is large when $n_B$ is small, as $I_{Vac-hom}/J_{VL} \sim 2n_B$ in this weak-noise limit. This vacuum-homodyne performance is often regarded as the 'standard quantum limit' in the literature [15][16][17]. The quantum-optimum vacuum limit $J_{VL}$ has an infinite-fold advantage over the vacuum homodyne $I_{Vac-hom}$ as $n_B \to 0$. Via this Fisher information analysis, we make the advantage of photon counting proposed in Ref. [16] rigorous. Now we proceed to consider measurements for single-mode squeezed vacuum input. We propose two strategies, shown in Fig. 5. First, let us begin with the homodyne measurement shown in Fig. 5(a). As we detail in Appendix D 1 a, a simple homodyne detection on the squeezed quadrature (here the momentum quadrature $\hat{p}$) of a single-mode squeezed vacuum state provides the Fisher information $I_{SV-hom}$ of Eq. (20). Note that the protocol of squeezing followed by anti-squeezing in the HAYSTAC experiment [17] in theory yields the same Fisher information as direct homodyne (see Appendix D 1 a). In the HAYSTAC experiment, anti-squeezing is applied to make the signal robust against additional detection noises.

[Figure 6. Entanglement-assisted measurement strategies. (a) Bell measurement; (b) nulling receiver based on two-mode squeezing $\hat{S}_2(r_2)$ and photon detection (PD).]
Indeed, assuming homodyne detection, squeezed vacuum input provides a better performance, I SV−hom ≥ I Vac−hom , with equality achieved at N S = 0 as expected. When the squeezing strength G is limited, the bottleneck is the homodyne measurement that fails to achieve the full potential of the squeezed vacuum source. With unlimited energy budget such that G → ∞, we have I SV−hom = 2/(2n B + 1 − κ) 2 which converges to the squeezed-vacuum quantum limit J SV . However, compared with the vacuum limit J VL in Eq. (15), I SV−hom for squeezing-homodyne is only advantageous when κ is very close to unity. Fig. 7 confirms the gap between the I SV−hom (purple) and the QFI of squeezed vacuum source J SV (blue dashed), also the promised advantage of J SV over the vacuum case I Vac−hom .
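The homodyne Fisher information quoted above follows from the standard result for a zero-mean Gaussian readout, $I = (\partial_{n_B}\sigma^2)^2/(2\sigma^4)$. A short symbolic sketch, with the squeezed-quadrature variance written in the vacuum-variance-1/2 convention of Appendix D (the variance expression is taken from there):

```python
# Symbolic check of the Gaussian-readout Fisher information for homodyne
# on a squeezed vacuum through the thermal-loss channel (n_T = 0 case).
import sympy as sp

n_B, kappa, G = sp.symbols("n_B kappa G", positive=True)
s2 = kappa / (2 * G) + (1 - kappa) / 2 + n_B       # squeezed-quadrature variance
I = sp.simplify(sp.diff(s2, n_B) ** 2 / (2 * s2 ** 2))
print(I)                           # equivalent to 2/(kappa/G + 1 - kappa + 2 n_B)^2
print(sp.simplify(I.subs(G, 1)))   # G -> 1 recovers 2/(1 + 2 n_B)^2, vacuum homodyne
```

Taking $G \to \infty$ in this expression recovers the limit $2/(2n_B + 1 - \kappa)^2$ stated above.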
To exploit more of the advantage from single-mode squeezing, as shown in Fig. 5(b) we design a nulling receiver, which is proven to be optimum in the $n_B \to 0$ limit. Specifically, the receiver first aims to null the return mode for squeezed-state sources by performing the anti-squeezing $\hat{S}(-r)$. In experiments, such an anti-squeezing can be realized via optical parametric amplification. Indeed, it successfully nulls the return mode to vacuum for an identity channel, while it leaves residual noise for a general BGC. At this point, the photon count is subject to a probability distribution involving the Legendre function, which yields the Fisher information in Eq. (D8). In the identity-channel limit $n_B \to 0$, κ → 1, we find that the nulling receiver achieves the SV limit, $I_{SV-null} \simeq J_{SV}$ (see Appendix D). As shown in Fig. 7, the numerical results of our nulling receiver (blue) achieve the optimal performance allowed by the squeezed vacuum source $J_{SV}$ (blue dashed) in both the κ = 1 (subplot a) and κ = 0.6 < 1 (subplot b) cases. The nulling receiver secures the optimal advantage over the vacuum limit (gray dashed) when κ = 1, whereas the squeezed-state source per se fails to beat the vacuum limit when κ = 0.6.
In the entanglement-assisted case, the receiver has access to both the ancilla A and the return R. We propose two measurement schemes, shown in Fig. 6: one is based on a Bell measurement [subplot (a)], and the other is an extension of the nulling receiver proposed above [subplot (b)]. To begin with, we introduce the Bell measurement, where one performs homodyne detection after passing the return mode $\hat{a}_R$ and the ancilla mode $\hat{a}_A$ through a balanced beamsplitter. The Bell measurement on the TMSV input yields the classical Fisher information $I_{Bell}$ of Eq. (22), which is sub-optimal in general. In particular, in the limit $n_B \to 0$, one can analytically show that $I_{Bell}$ is worse than $I_{SV-hom}$, the Fisher information of single-mode squeezing with homodyne. This is confirmed in Fig. 7: in the κ = 1 case of subplot (a), $I_{Bell}$ (orange) is constantly 3 dB worse than $I_{SV-hom}$ (purple); in the κ < 1 case of subplot (b), $I_{Bell}$ drops after the squeezing strength G surpasses a threshold, as expected from Eq. (22).
By contrast, the nulling receiver, now based on two-mode anti-squeezing, is again near optimum in the $n_B \to 0$ limit for TMSV input. Specifically, the receiver aims to null the returned signal mode to vacuum for TMSV sources by $\hat{S}_2(r_2)$, with $r_2$ given in Eq. (24). The return mode is nulled to vacuum over a pure-loss channel in the limit $n_B \to 0$ (which does not work for amplifier channels or $n_B > 0$). At this point, the joint photon-count statistics of the signal and ancilla modes can be solved analytically, which yields the Fisher information of Eq. (D25). In the low-noise limit $n_B \to 0$, we find that the nulling receiver achieves the TMSV limit, $I_{TMSV-null} \simeq J_{TMSV}$ (see Appendix D). We numerically evaluate it in Fig. 7. In a wide range of G, the nulling receiver (red solid) is shown to achieve the QFI of the TMSV source (red dashed). Remarkably, when κ < 1, the EA nulling receiver yields an appreciable advantage over the quantum limit of the single-mode squeezed-state source (blue dashed).
In the above, we have considered photon counting on both the signal and ancilla.
It is noteworthy that the optimality still holds if the receiver only measures the signal, $I_{TMSV-null,signal} \simeq J_{TMSV}$; if one only measures the ancilla, however, the performance is much worse. On the other hand, the actually implemented nulling parameter $r_2'$ in experiments can deviate from our proposed value $r_2$. In Appendix D 2 b, we numerically compare the measure-both strategy and the measure-signal strategy: the measure-both strategy is much more robust than the latter against such a deviation.
Overall, in the noise-sensing scenario, nulling receivers based on (single-mode or two-mode) squeezing and photon counting perform much better than the quadrature measurements (homodyne for single-mode squeezing and the Bell measurement for two-mode squeezing). Remarkably, even the classical vacuum source yields an achievable advantage of up to ∼30 dB with the assistance of photon-number-resolving measurement (see the gray dashed lines in Fig. 7). This is in contrast to the phase-sensing scenario: in noise sensing the photon number carries the information, while in phase sensing the quadratures carry the information.
Notes added.-Upon the completion of our manuscript, a related work [39] appeared.
There, a different model of displacement statistics is taken and the anti-squeezing and photon counting strategy for single-mode squeezed vacuum source is proposed, while no entanglement assistance is considered. Ref. [39] considers no additional loss or noise and therefore does not directly apply to dark matter haloscopes, where the predominant scenario is lossy for any off-resonance detuning or any detuning under unbalanced coupling, as we will explain in the next section.
F. Implications on axion dark matter search
Now we focus on the axion DM search with microwave cavity haloscopes and analyze the performance boost in more detail. To maximize the initial search efficiency, the typical cavity linewidth is made much larger than the predicted bandwidth of axion DM; thus an axion signal can be considered monochromatic [18]. The formal input-output relation can be found in Eq. (2). In this paper, we will frequently use the normalized coupling rates $\tilde\gamma_m \equiv \gamma_m/\gamma_\ell$ and $\tilde\gamma_a \equiv \gamma_a/\gamma_\ell$. In the formalism of the bosonic Gaussian channel, the input probe at detuning ω is subject to transmissivity $\kappa(\omega) = \chi^2_{mm}(\omega)$. From Eq. (2), the noise $n_B$ has contributions from both the environment thermal bath and the DM perturbation. As the contribution from DM is $\chi^2_{ma} n_a$, we can relate the Fisher information $J_{n_a}$ about the DM density $n_a$ and the Fisher information $J_{n_B}$ about the bosonic-channel additive noise $n_B$ via the parameter-change rule, $J_{n_a} = \chi^4_{ma} J_{n_B}$. The classical Fisher information achieved by a measurement, denoted as I, can be related similarly.
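The parameter-change rule is just the reparametrization property of Fisher information; a one-line symbolic check (the hypothetical constant `c` stands for the thermal contribution to $n_B$):

```python
# Fisher information transforms with the squared derivative of the
# reparametrization n_B(n_a) = c + chi_ma^2 * n_a.
import sympy as sp

n_a, chi2, c = sp.symbols("n_a chi2 c", positive=True)  # chi2 = chi_ma^2
n_B = c + chi2 * n_a
print(sp.diff(n_B, n_a) ** 2)   # chi2**2, i.e. the chi_ma^4 factor
```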
Now we consider the dark matter search process in more detail. As the center frequency of the axion is unknown, one has to search through the whole frequency domain with a uniform prior. Consider a search protocol consisting of 2n + 1 measurements with large n, where the cavity resonance frequency is tuned such that the detuning ω relative to a fixed frequency covers the range [−n∆ω, n∆ω] with a small discrete step ∆ω. Since Fisher information is additive for independent measurements (c.f. a joint quantum measurement), the total Fisher information about the axion DM occupation number at the fixed frequency is the summation of the Fisher information of each measurement, $\sum_{k=-n}^{n} J_{n_a}(k\Delta\omega)$. Taking the continuous limit ∆ω → 0, n → ∞, the total Fisher information $\lim_{\Delta\omega\to 0} \frac{1}{\Delta\omega}\sum_{k=-\infty}^{\infty} J_{n_a}(k\Delta\omega)\,\Delta\omega$ is proportional to the continuous-spectrum total Fisher information [40]
$$J \equiv \int_{-\infty}^{\infty} d\omega\, J_{n_a}(\omega), \qquad (25)$$
up to a constant prefactor of 1/∆ω. This prefactor does not lead to divergence in practice because the scanning step is finite. Our approximation of the sum by the integral is valid as long as the susceptibility functions $\chi_{ma}(\omega), \chi_{mm}(\omega)$ are smooth enough relative to the discrete step ∆ω, which is indeed the case in a DM scan [19]. The same procedure applies to define the continuous-spectrum total classical Fisher information I for a particular measurement protocol, Eq. (26). As we will show later, considering homodyne measurement, the Fisher information $I_{n_a}$ (see Eq. (A4) for noisy vacuum input and Eq. (F1) for noisy squeezed vacuum) is equivalent to the squared signal visibility $\alpha^2(\omega)$ [15,19], up to a constant factor. This Fisher-information interpretation of the scan rate allows us to obtain insights into the DM scan rate from the total Fisher information I. At the same time, J characterizes the quantum limit of the scan rate given a specific input source, and its upper bound gives the ultimate limit of the DM scan rate.
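The continuous-spectrum total Fisher information is a plain integral over detuning; below is a hedged numerical sketch with a stand-in (hypothetical) Lorentzian-type spectrum, not the actual cavity susceptibilities:

```python
# Illustrative numerics for J = integral of J_{n_a}(omega) over detuning.
# The spectrum below is an assumed stand-in profile, not chi_ma^4(omega).
import numpy as np
from scipy.integrate import quad

gamma = 2 * np.pi * 1e5          # assumed linewidth (rad/s)
J_peak = 1.0                     # assumed peak Fisher information (arb. units)

def J_spectrum(omega):
    return J_peak / (1 + (omega / gamma) ** 2) ** 2   # Lorentzian-squared envelope

J_total, _ = quad(J_spectrum, -np.inf, np.inf)
print(J_total, np.pi * gamma * J_peak / 2)   # analytic value for this profile
```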
Upper bound on DM scan rate
To obtain an upper bound on the performance of DM search, we assume that the input port can be arbitrarily engineered, without being affected by any additional thermal noise that often appears in experiments.
In general, we will utilize Eq. (14) to obtain the upper bound. However, as most of our evaluations are with limited $N_S$ of 10 or 20 dB of squeezing, we will focus on Eq. (11) to obtain an analytical solution, while our numerical evaluation utilizes Eq. (14). For convenience, we will denote all results as 'UB' without specifying 'TP' or 'UE'. From Eq. (11) and the parameter-change rule, the Fisher information upper bound about $n_a$ can be derived as Eq. (27), where we have not made the frequency dependence on ω explicit, and $n_B$ can be taken as $(1 - \chi^2_{mm}) n_T$ because the axion signal is weak.
Thanks to the additivity property of Eq. (12), the total Fisher information upper bound can be directly obtained through integration, $J^{UB} = \int d\omega\, J^{UB}_{n_a}(\omega)$. Note that this upper bound is general: it allows arbitrary entanglement across all frequencies. While the closed-form solution is lengthy (see Appendix E), in the low-temperature limit $n_T \ll 1$ the scan-rate upper bound takes a simple form, whose maximum is obtained in the over-coupling limit $\tilde\gamma_m \to \infty$. It reveals an increasing quantum advantage, proportional to $N_S$, over the vacuum limit ($N_S = 0$).
With the upper bound in hand, in the following we consider the performance of different sources and measurements. As homodyne measurement using vacuum probes is prevalent in current experimental proposals [15,17,19], we take it as the benchmark for classical schemes, to be surpassed by the non-classical probes and receivers. We will focus on parameters on par with the experiment reported in Ref. [17], where the cavity is cooled to 61 mK and the cavity resonant frequency is around 10 GHz.
Ideal input engineering
Here we focus on the ideal input-engineering case, where the input port is not affected by thermal noise before the probing. In this case, all of our previous results can be directly translated to DM search with the parameter-change rule of Fisher information: the Fisher information about $n_a$ at detuning ω is $J_{n_a}(\omega) = \chi^4_{ma}(\omega)\, J_{n_B}$, Eq. (29). Similar results also apply to the performance of the nulling receivers for both the single-mode squeezing input and the TMSV input.
We evaluate the spectrum of the Fisher information for various probes and receivers in Fig. 8, normalized to $I^{Vac-hom}_{n_a}(\omega = 0)|_{\gamma_m = \gamma_\ell}$, the peak Fisher information of the classical benchmark at the critical coupling ratio $\gamma_m = \gamma_\ell$. In general, we see Lorentzian-type envelopes due to the $\chi^4_{ma}(\omega)$ term in Eq. (29). In subplot (a), the critical-coupling case ($\gamma_m/\gamma_\ell = 1$), the transmissivity $\kappa(\omega) = 0$ at resonance ω = 0, and any input trivially converges to vacuum. When ω deviates from resonance, the Fisher information generally falls away from the peak value (peak sensitivity). Interestingly, the single-mode squeezed vacuum performance limit $J^{SV}_{n_a}$ (blue dashed) is worse than the vacuum limit (gray dashed) over the range of detunings ω of interest, due to the loss in the probing. This can be intuitively understood: in the large-loss limit, the anti-squeezed quadrature contributes greatly to the noise while the squeezed quadrature is still almost vacuum. This is also seen in Fig. 4(a), where the ratio to the vacuum limit decays sharply as κ increases near κ = 0. By contrast, the TMSV state (red dashed) demonstrates a huge advantage over the whole frequency domain. Remarkably, in the presented scenarios $J^{TMSV}_{n_a}$ achieves the upper bound (black dot-dash) almost everywhere, which indicates that the TMSV is the optimal input state here. The squeezed-vacuum homodyne performance (purple solid) is better than vacuum homodyne (gray solid) but worse than the vacuum limit (gray dashed) enabled by photon counting. In subplot (b), we consider the over-coupling case: the Fisher information spectrum broadens, while the peak sensitivity at ω = 0 decreases, as expected. In both subplots, we find the nulling receiver (blue solid and red solid) to be optimal, as expected.
Now we proceed to analyze the scan rate from the total Fisher information. We begin with the performance enabled by homodyne detection, where closed-form solutions can be obtained. We put the lengthy expressions in Appendix E and present special cases and asymptotic analyses here. For vacuum input ($N_S = 0$), we obtain Eq. (30); when $n_T \ll 1$, the optimum is achieved at $\tilde\gamma_m = 2$ and $I^{Vac-hom} = 2\pi\gamma_\ell\,\tilde\gamma_a^2 \times 16/27$. As shown by the gray solid line in Fig. 9(a), the optimal coupling ratio $\tilde\gamma_m \equiv \gamma_m/\gamma_\ell$ can be verified numerically, with the peak value being zero dB due to the normalization. For a highly squeezed quantum source ($N_S \gg 1$) at low temperature ($n_T \ll 1$), the optimum $I^{SV-hom} \simeq 2\pi\gamma_\ell\,\tilde\gamma_a^2 \times 8N_S/3^{3/2}$ is achieved at $\tilde\gamma_m \simeq 8N_S \simeq 2G$. Indeed, in Fig. 9(a) we see that the peak of squeezing homodyne (purple solid) occurs when the coupling ratio is about 2G = 13 dB for 10 dB of squeezing. We see that squeezing homodyne provides an advantage of ∼2.60 $N_S$ over vacuum homodyne, as we confirm in Fig. 9(b). Our analyses of the homodyne-based strategy indeed recover previously known results in Refs. [15,19], even more precisely so when we consider the extra thermal noise in the practical input-source engineering case (see Appendix F). We also note that the optimal scan rate increases linearly with the squeezing gain G (equivalently $N_S$). This is due to the effective bandwidth growing with G, while the peak sensitivity saturates to a G-independent constant. Now we evaluate the performance limits. For the vacuum limit, the total Fisher information has a closed-form solution, which simplifies in the $n_T \ll 1$ limit. For the single-mode squeezing performance limit, instead of presenting the lengthy closed-form result (see Appendix E), we plot the results in Fig. 9. In subplot (a), a peak emerges for $J^{SV}$ (blue dashed) at $\gamma_m/\gamma_\ell = 1$ (0 dB), due to the peak in Fig. 8 that emerges only at critical coupling. For strong squeezing $N_S \gg 1$, the maximum total Fisher information is achieved in the over-coupling limit $\tilde\gamma_m \to \infty$. When $n_T \ll 1$, we can compare the performance of homodyne versus the limit enabled by squeezing: $I^{SV-hom}/J^{SV} = 2/3\sqrt{3} \simeq 0.385 \simeq -4.1\,\mathrm{dB}$. This constant-factor difference can be verified numerically in Fig. 9(b). At the same time, we note that it takes almost 40 dB of squeezing for the performance enabled by squeezed vacuum (purple solid and blue dashed) to reach the vacuum limit (gray dashed). This indicates the importance of good photon-counting detection in dark matter search.
In Fig. 9(b), we also observe a minimum of the total Fisher information with respect to the squeezing gain $G \equiv 1 + 2N_S + 2\sqrt{(1+N_S)N_S}$ for the squeezed vacuum source (blue dashed), in contrast to the monotonicity of all the other sources. Indeed, the peak at $\tilde\gamma_m = 1$ competes with the over-coupling limit $\tilde\gamma_m \to \infty$, as shown in Appendix E. After G increases beyond a specific threshold, the over-coupling limit always dominates, and it increases linearly with G. The linear growth at large G verifies Eq. (35), as $G \sim 4N_S$ when $N_S$ is large.
Finally, we address the total Fisher information enabled by the TMSV source (closed form in Appendix E), taking the low-temperature limit $n_T \ll 1$ in the last step. In the over-coupling limit $\tilde\gamma_m \to \infty$, the maximum achieves the upper bound, as we can verify in Fig. 9 by comparing $J^{TMSV}$ (red dashed) with the upper bound $J^{UB}$ (black dot-dash). We also note that, in general, the TMSV performance overwhelms single-mode squeezing by a large factor $1/n_T$. As expected, the TMSV source yields an increasing advantage, proportional to G, over the vacuum limit.
In the above, we have assumed that the squeezed vacuum and two-mode squeezed vacuum can be prepared perfectly. In practice, their preparation is also noisy, and we analyze this practical state engineering in Appendix F. In this regime, the TMSV QFI still achieves a large advantage over the vacuum limit, while it now falls below the upper bound with a constant gap of around 6.9 dB.
III. DISCUSSIONS
In this work, we have shown that an entanglement-assisted strategy with two-mode squeezed vacuum as the source and a nulling receiver (anti-two-mode squeezing + photon counting) as the detector is optimal for noise sensing. In terms of dark matter search, such a strategy provides the optimal scan rate and outperforms the single-mode squeezed vacuum with homodyne strategy by orders of magnitude. In this regard, developing a quantum-limited photon-counting detector, such as that in Ref. [16], is crucial for the next-generation microwave haloscopes.
At the same time, our results reaffirm that other types of more exotic resources such as the Gottesman-Kitaev-Preskill (GKP) state [30] are not necessary for microwave haloscopes [19]. For an energy constrained case, GKP states also obey the QFI upper bound, which is already achieved with squeezed vacuum states. Even when the energy constraint is relaxed, practical considerations also forbid GKP states to be worthwhile engineering in microwave haloscopes [19].
In our dark matter search model, we have not considered the case of a local array of microwave cavities [19,[41][42][43], where the dark-matter-induced noise at different sensors is correlated. However, as Ref. [19] showed, due to the correlation, the signals can be coherently combined and the problem can be reduced to a single sensor, especially for identical sensors (see Appendix G). The coherent combining of M identical sensors provides an M² boost to the scan rate, in addition to the quantum advantages considered here. One can adapt the protocols addressed in this work to sensor networks by applying a passive linear network to the signal, sending it to all sensors, and then recombining with another passive linear network. Such a sensor-network approach provides another route to scan-rate boost without the need for a quantum-limited photon-counting detector.
Appendix A

1. Bosonic Gaussian channel
A phase-covariant bosonic Gaussian channel $\mathcal{N}_{\kappa,n_B}$ is characterized by the transmissivity/gain κ and the additive Gaussian noise $n_B$. Specifically, given a signal mode described by the annihilation operator $\hat{a}_S$, which satisfies the canonical commutation relation $[\hat{a}_S, \hat{a}_S^\dagger] = 1$, the annihilation operator of the return mode is given by the linear input-output relation
$$\hat{a}_R = \sqrt{\kappa}\,\hat{a}_S + \sqrt{1-\kappa}\,\hat{a}_E$$
for $0 \le \kappa < 1$, and
$$\hat{a}_R = \sqrt{\kappa}\,\hat{a}_S + \sqrt{\kappa-1}\,\hat{a}_E^\dagger$$
for $\kappa > 1$.
For $0 \le \kappa < 1$, the channel mimics a beamsplitter: it attenuates the mean of the input signal mode $\hat{a}_S$ by $\sqrt{\kappa}$ and mixes in the environment mode $\hat{a}_E$ attenuated by $\sqrt{1-\kappa}$. The environment mode $\hat{a}_E$ has mean thermal photon number $n_E$. Overall, the additive noise mixed into the return is $n_B = (1-\kappa)\, n_E$. Concretely, given a coherent-state input with mean α, the output is a displaced thermal state with mean $\sqrt{\kappa}\,\alpha$ and mean thermal photon number $n_B$.
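At the covariance-matrix level, the action of the thermal-loss channel described above can be sketched in a few lines (vacuum quadrature variance normalized to 1, as in the main text):

```python
# Minimal sketch of the phase-covariant BGC on a single-mode Gaussian state:
# V_out = kappa * V_in + (1 - kappa) * (2 n_E + 1) * I, with n_B = (1 - kappa) n_E.
import numpy as np

def thermal_loss(V_in, kappa, n_B):
    n_E = n_B / (1 - kappa)                     # environment occupancy, kappa < 1
    return kappa * V_in + (1 - kappa) * (2 * n_E + 1) * np.eye(2)

V_vac = np.eye(2)                               # vacuum input
print(thermal_loss(V_vac, 0.5, 0.01))           # thermal output, variance 1 + 2 n_B
```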
2. Practical source engineering

In an experimentally feasible scenario, the input is inevitably affected by thermal noise. To begin with, vacuum input is never perfect in an experiment: practical vacuum input still carries some weak thermal noise $n_T$. In this case, the output of Eq. (2) is a thermal state with mean photon number $\chi^2_{mm} n_T + (1-\chi^2_{mm}) n_T + \chi^2_{ma} n_a = n_T + \chi^2_{ma} n_a$. From Eq. (15), the vacuum limit for axion sensing is therefore
$$J^{VL}_{n_a} = \frac{\chi^4_{ma}}{\left(n_T + \chi^2_{ma} n_a\right)\left(1 + n_T + \chi^2_{ma} n_a\right)}. \qquad (A3)$$
From Eq. (19), we have the performance of vacuum homodyne,
$$I^{Vac-hom}_{n_a} = \frac{2\chi^4_{ma}}{\left(1 + 2n_T + 2\chi^2_{ma} n_a\right)^2}. \qquad (A4)$$
Similarly, the nonclassical sources are also affected by thermal noise. Instead of single- or two-mode squeezing on vacuum, the squeezing operations are performed on thermal states with mean photon number $n_T$. To characterize the nonclassical sources, we use the squeezing strength G for both the single-mode and two-mode squeezers. The input photon number $N_S$ is contaminated by $n_T$ as $N_S = \left[2(G^2+1)\, n_T + (G-1)^2\right]/4G$. The upper bound Eq. (14), with $N_S$ as the mean photon number of the processed input, still applies; however, it is much looser due to the inevitable initial noise.
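A quick numerical check of the contaminated photon number $N_S$ written above (at $n_T = 0$ it must reduce to $\sinh^2 r$ with $G = e^{2r}$):

```python
# Check: N_S = [2(G^2+1) n_T + (G-1)^2] / (4G) reduces to sinh(r)^2 at n_T = 0.
import numpy as np

def input_photons(G, n_T):
    return (2 * (G ** 2 + 1) * n_T + (G - 1) ** 2) / (4 * G)

G = np.exp(2 * 1.0)                               # r = 1
print(input_photons(G, 0.0), np.sinh(1.0) ** 2)   # both ~ 1.3811
print(input_photons(G, 1e-4))                     # slightly larger with thermal noise
```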
With the above input state adopting the thermal noise, the procedures for further analyses are the same as in the maintext: the squeezed sources are shined on the measurement port of the cavity, which is modelled by a phase-covariant BGC N χ 2 mm (ω),nB(ω) ; finally the receiver measures the returned quantum states.
which is much easier to evaluate.
To obtain the purifications, we adopt the Stinespring representation: as shown in Fig. 10, the channel $\mathcal{N}^{S\to R}_{\kappa,n_B}$ is extended to a unitary transform $U_{\mathcal{N}}^{SE_1E_2 \to RE_1'E_2'}$ by further including two environment modes $E_1, E_2$ in the vacuum state. By such means, the output state remains pure if the input is pure. To obtain a simple form of the extension, we decompose a phase-covariant BGC $\mathcal{N}_{\kappa,n_B}$ into a concatenation of a quantum-limited loss and a quantum-limited amplifier as
$$\mathcal{N}_{\kappa,n_B} = \mathcal{A}_{g(n_B)} \circ \mathcal{L}_{\eta(n_B)},$$
with $g(n_B) = 1 + n_B$ and $\eta(n_B) = \kappa/g(n_B) = \kappa/(1+n_B)$, where $\mathcal{L}_\eta = \mathcal{N}_{\eta,0}$ and $\mathcal{A}_g = \mathcal{N}_{g,g-1}$ are special cases of the general BGC. Therefore, the unitary extension also decomposes $U_{\mathcal{N}}$, as shown in Fig. 10. Now let the overall input state be $|\psi_0\rangle_{AS} \otimes |0\rangle_{E_1} \otimes |0\rangle_{E_2}$, where we have considered the M channel uses, with the environments in a product of vacuum states. Using the decomposition of the unitary extension for each of the M channel uses, the output can be expressed as
$$|\psi(n_B)\rangle = \sum_{\bm{n},\, \bm{k} \le \bm{n},\, \bm{\ell}} \sqrt{p_{\bm{n}}}\; A_{\bm{n}-\bm{k},\bm{\ell}}\, B_{\bm{n},\bm{k}}\, |\chi_{\bm{n}}, \bm{n}-\bm{k}+\bm{\ell}, \bm{k}, \bm{\ell}\rangle_{ARE_1E_2},$$
where the summation is over vectors with non-negative integer elements, $\bm{k} \le \bm{n}$ holds element-wise, and the coefficients are given in Eq. (B6), following Ref. [44]. Here we define the distribution of the total photon number $p_n = \sum_{\bm{n}: ||\bm{n}||_1 = n} p_{\bm{n}}$ and the one-norm $||\bm{n}||_1 = \sum_j n_j$.

[Figure 10. The Stinespring representation of the channel $\mathcal{N}^{S\to R}_{\kappa,n_B}$. In general, the unitary extension of a thermal bosonic Gaussian channel takes two environment modes $E_1, E_2$, due to the decomposition Eq. (B3).]
With Ineq. (B2) and Eq. (B6), from further simplification we have the upper bound of QFI J UB,UE in Eq. (11) of the main text and the resulting additivity property.
Compound channel with heterogeneous structure. Now we generalize the additivity to a compound channel $\otimes_{\ell=1}^{K} \mathcal{N}_{\kappa_\ell, n_{B,\ell}(\theta)}$ with a heterogeneous structure of channel noises $n_{B,\ell}(\theta)$ determined by a single unknown parameter θ. In this case, Eq. (B6) stops at the first equality, because now $\zeta_1$ and $\zeta_2$ are non-identical across the channels and should be defined accordingly as vectors $\bm{\zeta}_p \equiv [\zeta_p(n_{B,1}, n_{B,1}'), \ldots, \zeta_p(n_{B,K}, n_{B,K}')]^T$ for p = 1, 2. However, we can show that the second derivative of the fidelity is linear in the photon numbers of each input mode. Here we have used the facts that $\zeta_1|_{\theta'=\theta} = 1$, $\zeta_2|_{\theta'=\theta} = 1$, $\partial_{\theta'}\zeta_1|_{\theta'=\theta} = 0$ and $\partial_{\theta'}\zeta_2|_{\theta'=\theta} = 0$, so that only the terms with second-order derivatives of the same variable remain nonzero. Due to the photon-number linearity of the fidelity, intermodal correlation never increases the quantum Fisher information. Formally, let us define the marginal probability of $n_\ell$ as $p_{n_\ell} \equiv \sum_{n_1, \ldots, n_{\ell-1}, n_{\ell+1}, \ldots, n_K} p_{\bm{n}}$. Applying the upper bound to each channel, we obtain the additivity.

Appendix C: Gaussian-state evaluation

Here we derive formulas of the QFI for two well-studied Gaussian-state quantum probes: the single-mode (ancilla-free) squeezed vacuum state and the two-mode squeezed vacuum (TMSV) state. To describe an n-mode state $\hat\rho$, we define a vector of annihilation operators $\hat{\bm{a}} = [\hat{a}_1, \hat{a}_1^\dagger, \ldots, \hat{a}_n, \hat{a}_n^\dagger]$, satisfying the commutation relation $[\hat{a}_i, \hat{a}_j] = \Omega_{ij}$, where the symplectic metric is $\Omega = \oplus_{k=1}^n iY$ and Y is the Pauli-Y matrix. We define the mean $\bm{d} \equiv \langle \hat{\bm{a}} \rangle$ and the covariance matrix $\Sigma$, where $\langle \hat{A} \rangle \equiv \mathrm{tr}(\hat\rho \hat{A})$ denotes the expectation value. A Gaussian state is entirely characterized by its mean and covariance matrix [21]. We are interested in the zero-mean case $\bm{d} = 0$ in our analyses.
In a quantum sensing problem, the transmitter prepares a quantum state (a Gaussian state in this work) and passes it through a bosonic Gaussian channel $\mathcal{N}_{\kappa,n_B}$ defined in the main text. The receiver performs a measurement on the output state and estimates the unknown parameter from the obtained quantum state.
In practice, the input source comes with a thermal noise $n_T$: a squeezed thermal state and a two-mode squeezed thermal state. When $n_T > 0$, the input photon number is contaminated by the thermal noise $n_T$ as $N_S = \left[2(G^2+1)\, n_T + (G-1)^2\right]/4G$, where G is the single-mode or two-mode squeezing strength. Thus we characterize the nonclassical sources using the squeezing strength G, for both the single-mode and two-mode squeezers, acting on thermal inputs with mean photon number $n_T$. The covariance matrix of the channel output from a noisy squeezed state is given in Eq. (C2), and that of the channel output from a TMSV state in Eq. (C4), where X, I are the Pauli matrices. Although we have not made the input-state covariance matrices explicit above, they can be directly obtained by setting κ = 1, $n_B = 0$ in Eq. (C2) and Eq. (C4). Based on the covariance matrices, the QFIs of these zero-mean Gaussian states are accessible via the formula of Ref. [45], with $\Sigma_\pm \equiv \Sigma \pm \Omega/2$ and $R \equiv \Sigma \otimes \Sigma + \Omega \otimes \Omega/4$. Here θ can be an arbitrary parameter, while we focus on the estimation of the additive Gaussian noise $n_B$ in this paper. We present the results for ideal $n_T = 0$ input sources in the main text; we omit the results for general $n_T > 0$ cases as they are too lengthy.
Appendix D: Details on the measurement designs
In this section, we will frequently use the quadrature covariance matrix, which completely characterizes a Gaussian state. Based on the annihilation operator $\hat{a}$, the position and momentum quadratures are defined as $\hat{q} = \hat{a} + \hat{a}^\dagger$, $\hat{p} = -i(\hat{a} - \hat{a}^\dagger)$. For an M-mode Gaussian state, one can define the quadrature vector $\hat{\bm{x}} = [\hat{q}_1, \hat{p}_1, \ldots, \hat{q}_M, \hat{p}_M]^T$. For zero-mean Gaussian states, the quadrature covariance matrix is defined as $V \equiv \langle \hat{\bm{x}} \hat{\bm{x}}^T \rangle$, which is equivalent to the annihilation-operator covariance matrix up to a bilinear transform.
Following the formalism in Appendix C, we consider Gaussian input states to probe a bosonic Gaussian channel N κ,nB defined in the main text. Here we consider practical (noise present with mean photon number n T ) input source without loss of generality, as the ideal (noiseless) input source reduces to the n T = 0 practical source, while maintaining n B the same.
Single-mode squeezing
After the channel $\mathcal{N}_{\kappa,n_B}$, the squeezed state has a quadrature covariance matrix expressed in terms of µ and ν. As a reminder, µ, ν are defined in Eq. (C3).
a. Homodyne measurement
Homodyne detection measures the squeezed quadrature of the output state, here the momentum quadrature $\hat{p}$. The readout is a zero-mean Gaussian random variable with variance $\sigma^2 = \kappa\left[2n_T - (G-1)\right]/2G + n_B + 1/2$. It yields the Fisher information $I_{SV-hom} = 1/(2\sigma^4)$, which can be expressed using the notation $C_p = N_S(N_S+1)$. Note that the above performance is invariant if any further squeezing is performed before the final homodyne detection.
b. Nulling receiver
In the nulling receiver, one squeezes the return mode with the anti-squeezing $\hat{S}(-r)$. For an identity channel (κ = 1, $n_T = 0$, $n_B = 0$), this nulls the return mode to vacuum. The derivation of the covariance matrix in the general $n_T \neq 0$ case is straightforward but the result is too lengthy, so here we present the result for the ideal $n_T = 0$ case. The photon-count distribution of such a covariance involves the Legendre function [46], where $P_n$ is the Legendre function of the first kind, $A = [\mu^2(G + \frac{1}{G}) + 2\kappa\nu - 2]/4$ and $B = \mu^2(\frac{1}{G} - G)/4$. The Fisher information is evaluated accordingly. Remarkably, in the asymptotic identity-channel limit $n_B \to 0$, κ → 1, we derive $I_{SV-null} \simeq J_{SV}$; compared to the asymptotic value of Eq. (16) of the main text, we find that the nulling receiver is optimum for SV probes in this limit. For κ < 1, the output state is highly involved and the performance of SV probes degrades rapidly, so we omit that analysis in this paper.
Entanglement-assisted strategy
After the channel $\mathcal{N}_{\kappa,n_B}$, the TMSV state has a quadrature covariance matrix expressed in terms of µ, ν (defined in Eq. (C3)) and the Pauli matrices I, Z.
a. Bell measurement
In a Bell measurement, one first passes the return mode and the ancilla mode through a balanced beamsplitter, outputting a two-mode Gaussian state. Then the quadrature measurements on $\hat{p}_R$ and $\hat{q}_A$ give two i.i.d. Gaussian variables, and the joint 2-D Gaussian distribution yields the classical Fisher information $I_{Bell}$; for the ideal $n_T = 0$ input, $G = 1 + 2N_S + 2C_p$. In the nulling receiver, one squeezes the return mode and the ancilla mode via a two-mode squeezing process $\hat{S}_2(r_2)$. For $n_T = 0$, $n_B = 0$, it nulls the returned signal mode to vacuum. The derivation of the covariance matrix in the general $n_T \neq 0$ case is straightforward, but the result is too lengthy to display, so here we present the result for the ideal $n_T = 0$ case. The photon-count distribution of the resulting two-mode state follows a hypergeometric form, where $_2F_1$ is the regularized hypergeometric function, $x = c^2 - (e+1)s + e + 1$ and $y = c^2 - (e-1)(s+1)$. The Fisher information is evaluated as Eq. (D25).

[Figure 11. The ratio of the Fisher information of the nulling receiver over that of direct photon detection, both using TMSV probes. $n_B = 10^{-3}$.]
Note that the nulling step is indispensable. As shown in Fig. 11, the Fisher information of a nulling receiver is strictly larger than that of direct photon detection. The advantage expands significantly as $N_S$ increases. For κ → 1 the advantage is negligible for small $N_S$; nevertheless, we still see an increasing trend with $N_S$.
Remarkably, in the asymptotically low-noise limit $n_B \to 0$, we derive $I_{TMSV-null} \simeq J_{TMSV}$; compared to the asymptotic value of Eq. (17) of the main text, we find that the nulling receiver is optimum for TMSV probes in this limit. One may also consider the strategy of measuring only one of the two ports. No matter which port one measures, the reduced state is always a thermal state. A thermal state with mean photon number N has a photon count n satisfying the distribution
$$P(n) = \frac{N^n}{(1+N)^{n+1}}.$$
Considering N = N($n_B$), one immediately obtains the Fisher information about $n_B$ as
$$I = \frac{\left[\partial_{n_B} N(n_B)\right]^2}{N(N+1)}.$$
The mean photon numbers for measuring only the ancilla and only the returned signal follow from the corresponding reduced states. Indeed, only measuring the returned signal also achieves the Fisher-information scaling of measuring both when the noise $n_B \ll 1$, Eq. (D31); however, if one only measures the idler, the performance is much worse, Eq. (D32). In general, we can consider an arbitrary nulling parameter $r_2' = R \cdot r_2$, where R is the nulling (deviation) factor that characterizes its deviation from our proposal $r_2$; ideally R = 1. We find that measuring both the signal and ancilla modes significantly improves the robustness against such a deviation, as shown in Fig. 12. When the nulling factor R deviates from 1, the Fisher information degrades from the $I_{TMSV-null}$ achieved at $r_2$, while the measure-both strategy (blue) decays much more slowly than the measure-signal-only strategy (red).

[Figure 12. The ratio of the Fisher information of the nulling receiver with nulling (deviation) factor R over the ideal $I_{TMSV-null}$ with R = 1, using TMSV probes. (a) κ = 0.6; (b) κ = 0.9. The measurement on both the returned signal and ancilla modes (blue) is compared with that on the returned signal mode only (red). $n_B = 10^{-3}$, G = 10 dB ($N_S = 2.025$).]
Appendix E: Fisher information in dark matter search
Before presenting the formulas, we clarify our treatment of the thermal background of input source in derivations. For the upper bound of the total quantum Fisher information, we consider the energy constraint on the input state to the channel, rather than the output state of the channel. Such a treatment greatly simplifies the optimization over source states and yields analytical formulas. In contrast, for all the other Fisher information quantities, in consistence with Appendix C, we regard the input state as a possibly noisy quantum state contaminated by the thermal noise.
For the upper bound, we present the full formula for Eq. (27) of the main text; the derivation has been described in the main text. In the ideal-source case, the spectra of the Fisher information quantities can be derived by simply substituting Eqs. (15), (19), (16), (20) and (17) into Eq. (29) of the main text. We derive the total Fisher information upper bound and the total Fisher information for various quantum sources and measurements as follows.
$$\frac{\gamma_a \gamma_m n_T (n_T + 1)(\gamma_a + \gamma_l)}{\left[4\gamma_m n_T(\gamma_a + \gamma_l) + (\gamma_a + \gamma_l + \gamma_m)^2\right]^{3/2}} \times \Big[2\gamma_l \gamma_a (2 N_S n_T + N_S + n_T + 1) + \gamma_m (2 n_T + 1)(N_S n_T + n_T + 1) + 2\gamma_a \gamma_m (2 n_T + 1)(N_S n_T + n_T + 1)\Big]$$

Here we omit the formula for J_SV as it is too lengthy to display. At the same time, we provide an additional Fig. 13 to explain the optimization of J_SV over γ̃ mentioned in the main text. We see that when the squeezing G is very small, the optimal coupling ratio is γ̃ = ∞ (green dashed), the same as the vacuum limit; when the squeezing G is in a certain intermediate range, the optimal coupling ratio is γ̃ = 1 (magenta dashed); and in the large-squeezing region above a threshold, the optimal coupling ratio is again γ̃ = ∞ (green dashed). The γ̃_m = 1 peak always decays as G increases, because in this case the on-resonance peak, which suffers severe loss for κ(ω) ≈ 0, contributes most of the total Fisher information (see Fig. 8 of the main text), and after such severe loss the squeezing of the source degenerates into harmful noise. A similar phenomenon occurs in the overcoupling limit for small N_S, as Eq. (35) of the main text shows.
Appendix F: Practical input engineering
As discussed in Appendix A, in an experimentally feasible scenario the input will also be affected by thermal noise. Here we give a detailed analysis of this effect.
The squeezed-vacuum homodyne performance can be obtained as in Eq. (F1) (see Appendix F 1 for derivations), where γ ≡ γ_m + γ_l + γ_a is the total coupling strength. From Eq. (F1), we can see that the continuous-spectrum Fisher information is identical to α²(ω)/(2n_a²), i.e., it differs only by a constant factor from the square of the visibility α(ω) defined in Eq. (1) of Ref. [15]. Therefore, our results provide a Fisher-information interpretation of the visibility for the DM signal.
In the practical input case, the closed-form formulas for J_TMSV^na and J_SV^na are too lengthy to display (see Appendix F 1 for derivations); instead, we directly plot the results for comparison. As shown in Fig. 14, due to the additional thermal noise, the performance of the different sources and measurements is overall worse than in the ideal case. In this practical case, we cannot show the optimality of the noisy TMSV, as there is a gap between the upper bound (black dot-dashed) and the TMSV performance (red dashed). The homodyne detection performance (purple and gray solid) is still worse than the vacuum limit (gray dashed), while the same nulling receivers remain optimal given the noisy TMSV and noisy single-mode squeezed-vacuum sources (red solid and blue solid). Now we consider the total Fisher information to obtain insights into the scan rate. Consider homodyne detection on a vacuum input (up to weak thermal noise); combining Eq. (A4) and Eq. (25) of the main text yields the total Fisher information. The result agrees with the ideal case, Eq. (30), in the low-temperature limit n_T = 0, as expected. We can see that the total Fisher information is indeed proportional to the scan rate [17,19]. Similarly, the maximal total Fisher information is achieved at γ̃_m = 2, as we numerically verify in Fig. 15(a) with the gray solid plot. When there is squeezing, from Eq. (F1) and Eq. (25) of the main text we can obtain the corresponding total Fisher information, which is again proportional to the scan rate in Refs. [15,19]. For a highly squeezed quantum source with G ≫ 1, the maximum total Fisher information is again achieved at γ̃_m = 2G (the same as the ideal case). Similarly, the optimal coupling rate is verified in Fig. 15(a) by the purple solid line. Now we consider the quantum limits in the practical source case. For the vacuum source (up to weak thermal noise), from Eq. (A3) and Eq. (24) of the main text we obtain the vacuum-limit total Fisher information, which is achieved by photon counting. When the noise n_T is small at low temperature, we see that J_VL has an advantage scaling as 1/n_T compared with homodyne detection. More specifically, at γ̃_m = 2, the optimal vacuum-limit total Fisher information J_VL satisfies the relation

$$\frac{J_{VL}}{I_{\rm Vac-hom}} = \frac{(1 + 2n_T)^2}{2 n_T (1 + n_T)}. \tag{F18}$$

Here ν is defined in Eq. (C3). The formula for J_SV is too long to display.
Appendix G: Memory channels and distributed sensing

So far we have focused on memoryless channels, where the M-mode probe travels through an M-fold product of independent, identical channels. Nevertheless, a highlighted scenario is to estimate fully correlated thermal noise, which is modelled by an M-mode memory channel. Here J is the QFI defined by Eq. (10) of the main text.
Here the second equality holds because B is a unitary transform, and the third equality holds because for any optimal state σ that maximizes the QFI for channel N

| 14,743.4 | 2022-08-29T00:00:00.000 | ["Physics"] |
Enhancing Alzheimer's Disease Diagnosis: The Efficacy of the YOLO Algorithm Model
Abstract—The diagnosis and early detection of Alzheimer's Disease (AD) and other forms of dementia have become increasingly crucial as our aging population grows. In recent years, deep learning, particularly the You Only Look Once (YOLO) architecture, has emerged as a promising tool in the field of neuroimaging and machine learning for AD diagnosis. This comprehensive review investigates recent advances in the application of YOLO to AD diagnosis and classification. We scrutinized five research papers that have explored the potential of YOLO, delving into the methodologies, datasets, and results presented. Our review reveals the remarkable strides made in AD diagnosis using YOLO, while also highlighting challenges such as data scarcity and research gaps. The paper provides insights into the growing role of YOLO in the early detection of AD and its potential to transform clinical practices in the field. This review aims to inspire further research and innovation to enhance AD diagnosis and, ultimately, patient care.
I. INTRODUCTION
There is a surging interest in the application of Artificial Intelligence (AI) within the realm of healthcare. Healthcare-related AI research has seen a rapid acceleration in publication growth since 2012, with a 45.1% increase in the past five years, driven by technological breakthroughs, and is expected to continue doubling approximately every two years based on this growth trend [1]. AI has solidified its position as a transformative power in the healthcare sector, completely reshaping the approaches to diagnosis, treatment, and medical condition management. In recent years, AI has emerged as an indispensable asset in the healthcare industry, offering groundbreaking solutions to some of the most formidable challenges in medicine, particularly when addressing neurological diseases. Neurological diseases, encompassing a diverse spectrum of conditions such as Alzheimer's disease (AD), stroke, and Parkinson's disease, pose intricate challenges in terms of diagnosis and treatment [2,3]. AI has decisively altered the landscape in this context. AI applications in the realm of neurological diseases are both diverse and promising. AI, particularly machine learning (ML) and deep learning (DL) architectures, has the capability to scrutinize extensive volumes of brain imaging data, encompassing magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) scans, in order to unearth subtle anomalies that might elude human perception [4,5,6]. In contrast to conventional diagnostic and treatment methodologies, these AI-driven approaches address several limitations inherent in traditional methods, such as subjectivity, delayed diagnoses often resulting from inconspicuous early-stage symptoms, or findings imperceptible to human observers. This proficiency in the early detection of neurological disorders offers the potential for swifter and more precise diagnoses.
In particular, the deep learning object detection algorithm known as You Only Look Once (YOLO) shows great promise in enhancing the accuracy, efficiency, and automation of diagnosing neurological diseases, with a special emphasis on Alzheimer's disease. The primary aim of this brief review is to investigate the present applications of YOLO in the classification of neurological diseases, with a particular focus on Alzheimer's disease. Additionally, we will delve into the methods used and the challenges faced when applying AI to the diagnosis and treatment of neurological diseases.
A. Artificial Intelligence in AD Diagnosis
AD is a formidable and complex neurological condition that has captured the attention of scientists, healthcare professionals, and society at large. Named after Dr. Alois Alzheimer, who first described the disease in the early 20th century [7], Alzheimer's is a progressive and degenerative brain disorder that predominantly affects memory, cognitive function, and daily life activities. The impact of AD extends far beyond the affected individuals themselves, as it profoundly affects their families and caregivers, often placing an immense emotional and practical burden on them. It is the most common cause of dementia, a term that encompasses a range of cognitive impairments that interfere with an individual's ability to think, reason, remember, and communicate. AD is a devastating and relentless neurological disorder that presents a profound challenge to both the medical community and society as a whole [8]. It is estimated that over 50 million people worldwide are currently affected by AD [9]. As the global population ages, this number is projected to escalate significantly in the coming decades. This ailment has grown into one of the most prevalent and impactful health concerns of our time [10].
As is the case with numerous other neurological disorders [11], early diagnosis holds a crucial position in the care and strategic planning for Alzheimer's disease (AD). The classification of AD is based on different levels, which include Alzheimer's disease (AD), mild cognitive impairment (MCI), and cognitively normal (CN). Early identification at the MCI level empowers individuals and their families to take proactive steps in addressing critical aspects of their future, encompassing healthcare preferences, support requirements, and financial and legal considerations [12,13]. Additionally, early detection allows for proactive safety measures to reduce the risk of wandering or disorientation-related incidents. Moreover, it opens up the possibility of participating in clinical trials for innovative treatments during the disease's early stages, contributing to advancements in research.
Despite recent advancements in clinical trials related to Alzheimer's disease, several challenges have emerged. These challenges include the difficulty of distinguishing AD from normal age-related cognitive changes, limited access to specialized diagnostic tools in certain geographic regions, and the growing number of individuals affected by the disease [14]. Consequently, the role of computer applications in AD diagnosis has become increasingly crucial. Among these, deep learning, which falls under the umbrella of machine learning and constitutes a pivotal element of artificial intelligence, has showcased impressive accomplishments in fields like object recognition and computer vision [15]. This has led to the extensive integration of deep learning in the realm of neuroimaging analysis, where its neural network architecture, featuring non-linear activation functions, plays a pivotal role in tasks like image classification [16], particularly in the domain of neuroimaging and AD neuroimaging [17]. This encompasses various modalities, including MRI, PET, CT, fMRI, and more [18].
B. Advances in Machine Learning in Neuroimaging
Brain imaging can be categorized into distinct types based on various criteria. One such classification pertains to imaging modality, which can be divided into structural and functional imaging. Structural imaging, exemplified by MRI, offers high-resolution images that unveil detailed brain anatomy, encompassing gray and white matter as well as cerebrospinal fluid. It detects changes in brain volume and atrophy patterns, key indicators of Alzheimer's disease. While primarily used for functional studies, fMRI can also provide insights into structural connectivity through techniques like resting-state functional connectivity. Alterations in functional connectivity can be associated with structural changes in AD. In recent times, deep learning architectures have demonstrated the capability to handle complete 3D brain images seamlessly from start to finish (end-to-end) [19,20,21]. However, the foremost challenge is the high computational cost, which demands substantial processing power and can result in extended training times. Overfitting is another issue of concern, as is the need to ensure model interpretability. Data preprocessing is a critical stage in preparing both 2D and intricate 3D data, albeit with the introduction of added complexities.
In more detail, data preprocessing is a fundamental process in the preparation of raw data for machine learning algorithms. Its significance stems from the fact that real-world data can be noisy, incomplete, or poorly formatted. By cleaning and structuring the data, data preprocessing significantly enhances the accuracy and effectiveness of machine learning models. Within the domain of neuroimaging analysis, the pivotal stages of data preprocessing and feature extraction hold an indispensable role. These critical components serve to enhance data quality, mitigate noise, establish data consistency, augment statistical power, facilitate data interpretation, and enhance research precision. Nevertheless, it is essential to recognize that data preprocessing may also introduce certain inherent limitations that warrant consideration in the research process.
C. Limitations of Deep Learning in Alzheimer's Disease Diagnosis
The importance of deep learning in Alzheimer's Disease (AD) classification has become increasingly apparent, resulting in a notable upswing in research endeavors from 2017 onward [17]. These investigations have yielded a spectrum of reported accuracy levels, spanning from 70% to 99% [22]. Notably, Sarraf et al. (2016) achieved outstanding accuracy rates of 98.84% for MRI [23] and an impressive 99.99% for fMRI [24] pipelines, while Suk et al. (2013) [25] attained an accuracy of 98.8%. However, a common reliance on diverse MRI pre-processing techniques to attain optimal results and a predominant focus on Convolutional Neural Networks (CNN) have contributed to a distinct research gap in the domain of deep learning for object detection. Consequently, there exists a pressing need to explore new research avenues that minimize the dependence on these pre-processing techniques.
D. Advancement of YOLO for Alzheimer's Disease Diagnosis
The diligent efforts of numerous researchers have been dedicated to the deployment of deep learning models for object detection within the realm of medical imaging, particularly within the domain of Alzheimer's Disease diagnosis. This dedication has culminated in the emergence of the YOLO model and its various iterations, representing significant milestones in the development of this innovative approach.
E. Convolutional Neural Networks
A key technique within the domain of deep learning is the Convolutional Neural Network (CNN) [26]. These networks take inspiration from the human visual system and are designed to conduct hierarchical learning using sophisticated algorithms. This process involves the modeling of features at various levels, allowing the extraction of abstract representations from the input data. CNNs are constructed with multiple layers, including convolutional, activation, and pooling layers. To produce final output predictions, one or more Fully-Connected (FC) layers are added to the network. Ang et al. (2017) illustrated the architecture of a CNN using a diagram (see Fig. 1). Various notable variations in the field of deep learning have been developed, with some well-known models leading the way. These models include LeNet [28], AlexNet [29], ResNet [30], and GoogLeNet [31]. Moreover, these models can be categorized into two main types: one-stage architectures and two-stage architectures. In a two-stage CNN, such as the Faster R-CNN (Region-based Convolutional Neural Network) [32], the object detection process is divided into two distinct steps: region proposal and classification. Initially, the model generates region proposals, which are essentially candidate regions within an image where objects might be situated. Once these region proposals are generated, each one is passed through a classifier to determine whether it contains an object and, if so, to identify the class of the object. On the other hand, one-stage CNNs are designed for a more streamlined approach, where object detection occurs in a single step, without the need for a separate region proposal stage. These models directly predict bounding boxes and class labels for objects within an image, making them efficient and suitable for real-time object detection. However, it's worth noting that they may not always achieve the same level of accuracy as two-stage models in certain situations. Examples of one-stage CNNs include YOLO and the Single Shot MultiBox Detector (SSD).
F. LeNet Architecture
LeNet, a condensed form of "LeNet-5," represents an architectural framework introduced by LeCun et al. in 1998 [28], as depicted in Fig. 2. This landmark innovation has played an integral role in shaping the landscape of deep learning and CNNs. It was one of the first successful applications of neural networks to computer vision tasks, in particular handwritten digit recognition, specifically for recognizing digits in postal codes and zip codes.
LeNet's structure is distinctly organized into two core components: the Convolutional Part and the Fully-Connected Part. Within the Convolutional Part, three vital layer types are evident: an Input Layer designed to handle 32x32 grayscale images (though adaptability is included for zero-padding, as seen in datasets like MNIST), two Convolutional Layers (CL) employing 5x5 filters, and two Max-Pooling Layers tasked with efficient feature map downsampling. Meanwhile, the Fully-Connected Part incorporates three FC (also known as Dense) layers, responsible for capturing intricate data relationships, concluding with an Output Layer featuring a softmax function to categorize handwritten digits, as exemplified in the MNIST dataset, which consists of black-and-white images of the digits 0-9. Nevertheless, it was primarily designed for the specific task of recognizing handwritten digits, limiting its applicability to a broader range of image classification tasks.
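To make the layer stack concrete, here is a hypothetical PyTorch sketch of the LeNet-5 layout described above (our code, not from the original paper; max pooling is used to match the description in this review):

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """Classic LeNet-5 layout: two conv/pool stages, then three dense layers."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # 32x32 -> 28x28
            nn.Tanh(),
            nn.MaxPool2d(2),                   # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),   # 14x14 -> 10x10
            nn.Tanh(),
            nn.MaxPool2d(2),                   # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),        # softmax is applied inside the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LeNet5()
logits = model(torch.randn(1, 1, 32, 32))  # one 32x32 grayscale image
print(logits.shape)  # torch.Size([1, 10])
```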
G. AlexNet Architecture
In 2012, Krizhevsky et al. [29] introduced AlexNet, a pioneering convolutional neural network (CNN) that revolutionized deep learning. This innovation significantly enhanced the depth of CNNs and incorporated effective parameter optimization strategies, marking a breakthrough in the prestigious ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet's remarkable achievement was evident in its top-5 error rate of just 15.3%, outperforming traditional computer vision methods and setting a new standard at the time. The concept of AlexNet is illustrated in Fig. 3.
AlexNet marked a significant milestone in the realm of deep convolutional neural networks by pioneering the training of complex models on an extensive dataset, comprising more than 15 million images and involving millions of model parameters. This achievement underscored the capacity of deep networks to extract intricate features from massive datasets. Moreover, AlexNet popularized the adoption of Rectified Linear Units (ReLU) [33] as an activation function, which not only improved computational efficiency but also expedited training convergence. Furthermore, to combat overfitting, a key concern in deep learning, the technique of dropout was introduced. This involved randomly setting 50% of the hidden neuron outputs to zero during training, effectively excluding them from the backpropagation process. These innovations not only contributed to AlexNet's success but also inspired the design of subsequent modern architectures.

Fig. 2. The concept of LeNet [28].

Fig. 3. The concept of AlexNet [29].
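To illustrate the two AlexNet ingredients just mentioned, here is a hypothetical PyTorch fragment (a sketch of the general idea, with illustrative layer sizes rather than AlexNet's exact configuration) that applies ReLU activations and 50% dropout inside one fully-connected stage:

```python
import torch
import torch.nn as nn

# An AlexNet-style fully-connected stage: ReLU for fast, non-saturating
# activations and Dropout(p=0.5) to zero half the hidden outputs in training.
fc_stage = nn.Sequential(
    nn.Linear(9216, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),
)

fc_stage.train()          # dropout active: random hidden units are zeroed
x = torch.randn(4, 9216)
print(fc_stage(x).shape)  # torch.Size([4, 1000])

fc_stage.eval()           # dropout disabled at inference time
print(fc_stage(x).shape)
```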
H. GoogLeNet Architecture
In the 2014 ILSVRC, GoogLeNet, also known as Inception-V1, achieved first place [31] (Fig. 4). A significant innovation of GoogLeNet lies in its use of inception modules, which are tailored to capture features at multiple spatial scales. These modules employ convolutional filters of different sizes, including 5x5, 3x3, and 1x1, to effectively integrate channel and spatial information across a range of spatial resolutions, enabling the network to extract features at both fine and coarse levels simultaneously. This design enhances feature learning efficiency.
Additionally, GoogLeNet incorporates 1x1 convolutions, which have the effect of reducing the dimensionality of feature maps, resulting in a computationally efficient architecture. This not only permits the construction of deeper networks but also significantly reduces the number of parameters to 5 million, as compared to AlexNet's 61 million. These designs make GoogLeNet well-suited for real-time and resource-efficient applications. However, GoogLeNet's limitations include its complexity, resource-intensive training, and reduced suitability for tasks beyond image classification.
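To make the multi-branch idea concrete, here is a simplified, hypothetical inception-style module in PyTorch: parallel 1x1, 3x3, and 5x5 branches (the 3x3 and 5x5 preceded by 1x1 dimensionality reduction, as described above) plus a pooling branch, concatenated along the channel axis. The channel counts are illustrative, not GoogLeNet's exact values.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1/3x3/5x5 and pooling branches, concatenated channel-wise."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        self.b3 = nn.Sequential(                       # 1x1 reduction, then 3x3
            nn.Conv2d(in_ch, 48, kernel_size=1),
            nn.Conv2d(48, 64, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(                       # 1x1 reduction, then 5x5
            nn.Conv2d(in_ch, 8, kernel_size=1),
            nn.Conv2d(8, 16, kernel_size=5, padding=2),
        )
        self.bp = nn.Sequential(                       # pooling branch
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, 16, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

m = InceptionModule(64)
print(m(torch.randn(1, 64, 28, 28)).shape)  # torch.Size([1, 128, 28, 28])
```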
I. ResNet Architecture
ResNet, introduced by He et al. [30], made a significant breakthrough in deep learning by winning the ILSVRC 2015 competition with a remarkably deep architecture of 152 layers, over 20 times deeper than AlexNet. The core challenge that ResNet addresses is the training of such deep neural networks, which previously suffered from issues like vanishing gradients and a decline in accuracy with increased depth. In order to overcome these challenges, ResNet introduces a groundbreaking concept known as residual connections, commonly denoted as skip connections. These ingenious connections serve to ease the training of exceptionally deep networks by promoting the efficient flow of gradients during the training process. Each residual block in a ResNet contains a "shortcut connection" that bypasses one or more layers, enabling the network to learn residual functions. Essentially, this results in a combination of a traditional feedforward network and a residual connection. These residual functions capture the difference between the desired output and the current layer's output, making it easier for the network to learn identity mappings. ResNet models are available in various depths, including ResNet-50, ResNet-101, and ResNet-152, which are widely adopted for image classification tasks.
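A minimal, hypothetical residual block in PyTorch is sketched below: the block computes a residual function F(x) with two 3x3 convolutions and adds the shortcut x back before the final activation, so the block only has to learn the difference from the identity mapping.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: out = ReLU(F(x) + x), with F = conv-BN-ReLU-conv-BN."""
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.f(x) + x)  # the shortcut connection bypasses F

block = ResidualBlock(64)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```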
J. Faster R-CNN
Ren et al. [32] proposed the Faster R-CNN algorithm, with the idea of integrating region proposal generation within a deep neural network. Faster R-CNN introduces the Region Proposal Network (RPN), a neural network module designed to generate region proposals directly from the input image. This replaces the need for external algorithms like selective search or edge boxes.
The RPN takes an image of any size and suggests candidate object bounding boxes based on features learned from the image. The RPN employs anchor boxes, which are pre-defined bounding box shapes at various scales and aspect ratios. These anchor boxes are used to propose object regions efficiently. Faster R-CNN uses a two-stage detection approach. In the initial stage, the Region Proposal Network (RPN) is responsible for generating region proposals. Subsequently, the second stage entails the involvement of another CNN, known as Fast R-CNN [34], which carries out object detection and precise bounding box regression based on the generated region proposals.
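To illustrate the anchor-box idea, the hypothetical NumPy snippet below enumerates anchors at one feature-map location for a few scales and aspect ratios; the specific scales and ratios are illustrative choices, not the values used in the Faster R-CNN paper.

```python
import numpy as np

def make_anchors(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Anchor boxes (x1, y1, x2, y2) centered at (cx, cy).

    Each anchor has area scale**2; its width/height follow the aspect ratio
    w/h = ratio while preserving that area.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)
            h = s / np.sqrt(r)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)

A = make_anchors(cx=100, cy=100)
print(A.shape)   # (9, 4): 3 scales x 3 ratios per location
print(A[0])      # the smallest, wide anchor
```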
K. YOLO Architecture
The primary innovation in Faster R-CNN lies in its Region Proposal Network (RPN), which generates high-quality region proposals directly within the network.This advancement results in faster inference times while upholding the required accuracy for object detection tasks.However, Faster R-CNN's two-stage architecture introduces a complex pipeline, demanding precise tuning of each stage independently, resulting in a system with significant computational overhead.
In an attempt to simplify the process and make it more efficient, YOLO (see Fig. 5), created by Redmon and his team [35], takes a unique approach. YOLO partitions the input image into an S x S grid of cells; a grid cell is tasked with detecting an object if the object's center is located within it. Each grid cell makes predictions for B bounding boxes, complete with confidence scores, together with C class probabilities. These predictions are organized as a tensor with dimensions S x S x (B * 5 + C). Within this framework, the input image is effectively partitioned into S x S sub-images, where 'five' signifies the per-box attributes: the central coordinates, height, width, and confidence score of each bounding box.
Moreover, YOLO consolidates the various aspects of object detection into a unified neural network, utilizing information from the entire image to make predictions for each bounding box. This integration enables YOLO to simultaneously forecast bounding boxes for all categories within a given image. YOLO's architecture offers the advantages of end-to-end training and real-time processing speed, all while upholding a high level of precision in object detection. Taking cues from the architectural advancements of GoogLeNet, YOLO is structured with a series of 24 CL, supplemented by 2 FC layers. In contrast to GoogLeNet's inception modules, YOLO follows a more straightforward approach, integrating 1x1 reduction layers followed by 3x3 CL. Additionally, YOLO exhibits certain similarities with R-CNN, particularly Faster R-CNN, where each grid cell generates potential bounding boxes and assigns scores to them. Subsequently, a Non-Maximum Suppression (NMS) mechanism is employed to eliminate redundant or overlapping bounding boxes after predictions are computed across all grid cells using convolutional features.
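The NMS step mentioned above can be sketched in a few lines. The hypothetical NumPy implementation below greedily keeps the highest-scoring box and discards any remaining box whose intersection-over-union (IoU) with it exceeds a threshold; production detectors use tuned, vectorized variants of the same idea.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]   # process boxes from best to worst score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the near-duplicate of box 0 is suppressed
```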
Since its initial introduction in 2016, YOLO has undergone a series of evolutionary iterations, adapting to the specific requirements of diverse fields within human life. Each subsequent version of YOLO has been meticulously refined to meet the ever-evolving challenges and demands of real-time object detection and various computer vision applications.

IV. DISCUSSION

Uddin et al. [40] conducted a comparative analysis of three distinct deep learning architectures, namely YOLOv4, AlexNet, and Faster R-CNN. Their research encompassed a substantial dataset comprising 6400 MRI images, making it the largest dataset among the studies reviewed. However, a notable aspect of their dataset was its class distribution: the CN (cognitively normal) class stood at 2560 training images, while AD (Alzheimer's disease) and MCI (mild cognitive impairment) images were comparatively scarce. This composition, characterized by an abundance of CN images and a scarcity of AD and MCI images, raised concerns about the potential for overfitting. The resulting models exhibited a propensity to classify most images as CN due to the skewed distribution of classes. This highlights the need for improved dataset balance, including a more representative inclusion of AD and MCI images. Addressing this class imbalance could lead to more reliable and accurate classification results, reducing the risk of overfitting and enhancing the model's overall performance.
In a study conducted by Alon et al. [36], the YOLOv3 architecture demonstrated an accuracy rate of 80%, which was notably the lowest among the studies under review. It's important to highlight that this study employed a significant dataset comprising 1000 MRI images for training and validation, achieving impressive results with training accuracy reaching 98.617%, validation accuracy at 98.8207%, and a mean average precision (mAP) of 96.17%. However, it's crucial to consider certain factors that might impact the reliability and generalizability of these findings. One notable concern is the study's reliance on a relatively small subset of only 20 MRI images for testing. The limited size of the testing dataset introduces an element of uncertainty into the model's performance, as it may not fully capture the intricacies and variations present in a more extensive dataset. Additionally, the absence of information regarding any pre-processing procedures applied to the dataset raises questions about the data's quality and its readiness for deep learning analysis. To enhance the credibility of these findings and ensure their generalizability, it is advisable to conduct further evaluations on larger and more diverse datasets. This would not only provide a more comprehensive assessment of the model's robustness but also validate its performance across a broader range of MRI images.
In a concurrent research effort, Islam et al. [37] undertook a comprehensive investigation into the use of various YOLO versions for image classification. Their study aimed to evaluate the performance of different YOLO iterations in the context of object recognition. Comparatively, the findings revealed that YOLOv3 and YOLOv4 outperformed YOLOv5. This difference in performance was attributed to the adaptable Darknet backbone, a crucial component of YOLOv3 and YOLOv4, which excels in the task of object detection. The Darknet backbone's architecture and capabilities enhanced the accuracy and efficiency of these YOLO versions. A noteworthy advancement came in the form of YOLOv6 and YOLOv7, which surpassed the capabilities of YOLOv4. This improvement was achieved by passing the input through multiple CNN layers in the backbone, resulting in increased computational efficiency and better overall performance. However, it's important to note that these models primarily focused on single-class detection, which may limit their applicability in scenarios where multi-class detection is required. The detailed results are presented in Table II. Fong et al. [38] (see Table III) embarked on an extensive investigation aimed at streamlining the preprocessing stage in the context of medical image analysis, which they achieved by implementing YOLOv3 on a dataset of raw MRI images. Abd-Aljabar et al. [39] also utilized YOLOv2 with a dataset of 300 raw MRI images, achieving a result of 98% accuracy, which is slightly lower than Fong et al.'s 99.8%. Nevertheless, this outcome reaffirms the effectiveness of YOLO variations in handling raw and unprocessed MRI images, offering an alternative approach to streamline the pre-processing stage in medical image analysis. These findings collectively emphasize the adaptability and robustness of YOLO-based models in handling diverse image data without the need for extensive pre-processing, potentially simplifying the workflow for neuroimaging analysis.
In summary, YOLO has proven to be a promising tool for tasks related to Alzheimer's disease diagnosis and classification. However, it's crucial to acknowledge the persistent challenges that hamper progress in the field of neuroimaging research. These challenges encompass the scarcity of available data, a pronounced imbalance in class distribution within datasets, and a noticeable research gap. Addressing these issues through further data collection, careful dataset curation, and expanded research efforts is essential to fully unlock the potential of YOLO and other deep learning approaches in the critical domain of neuroimaging research.
V. CONCLUSION
In conclusion, our review provides a comprehensive exploration of the evolving landscape in the application of the You Only Look Once (YOLO) architecture for the diagnosis of AD. In a world where an aging population underscores the critical need for early and accurate AD detection, deep learning methods have emerged as a promising solution. YOLO, with its lightweight design, rapid processing, and impressive accuracy, showcases immense potential for reshaping the landscape of neuroimaging in AD classification. As we look ahead, further research in YOLO and deep learning is strongly encouraged. Moreover, techniques like explainable AI (X-AI) could be applied, or specific architectures based on or inspired by YOLO could be developed. This continued exploration promises to advance the quality of care for individuals afflicted by AD and various neurodegenerative diseases.
Fig. 5. The concept of YOLO [35].

III. RESULTS

While YOLO's recent trends have leaned towards real-time applications, its potential in the medical imaging field, particularly for diagnosing Alzheimer's disease (AD), has drawn significant interest. Originally developed for object recognition, the adoption of YOLO in AD diagnosis has shown promise. Nevertheless, the need for further research, as highlighted in Table I, emphasizes the importance of ongoing investigations to advance AD diagnosis and treatment.
TABLE I. SUMMARY OF ALZHEIMER'S DISEASE DIAGNOSIS STUDIES

| 5,532.8 | 2023-01-01T00:00:00.000 | ["Computer Science", "Medicine"] |
Multimedia Real-Time Transmission Protocol and Its Application in Video Transmission System
The aim is to provide a corresponding quality of service (QoS) guarantee for real-time video data transmission. To ensure high-quality, smooth playback of the video sequence at the receiving end, a multimedia transmission scheme is designed. In view of the shortcomings of selective frame loss, this paper adopts an active frame loss algorithm, which discards non-key frames according to a probability. As the frame loss rate at the transmitter increases, the proportion of decodable frames increases rapidly and reaches its maximum when the frame loss rate is 0.1. It is proved that active frame loss can control the bit rate more accurately, making full use of bandwidth resources and avoiding their waste.
Introduction
With the increasing demand for network video, how to give users a good viewing experience when watching network video has become a hot research topic [1]. Because network video playback is subject to the user's equipment and network transmission, it suffers adverse effects such as low definition or insufficient smoothness during browsing. How to develop an adaptive video player and transmit appropriate video to users according to their equipment performance parameters and network performance is an important research topic in the field of adaptive video transmission [2]. Among the many adaptive control methods for video transmission, selective frame loss has very low time complexity and good real-time performance, and can be applied in most situations. However, most existing frame loss algorithms adopt a hierarchical rate adjustment method, which is equivalent to a coarse-grained bandwidth matching method; this makes the code rate and decoding quality change step by step and cannot achieve accurate matching with the channel bandwidth. Finally, this paper combines the active frame loss strategy with a reduced segment length: it improves the parallelism of data transmission by reducing the number of transmitted video frames and giving up part of the bandwidth for the additional overhead caused by the increased number of video frame segments, so as to meet the real-time requirements of delay-sensitive data transmission, achieving good results.
Literature Review
As the problems of traditional streaming media transmission methods become more prominent, adaptive streaming has gradually become the mainstream technology for video transmission, as shown in Figure 1, but few adaptive algorithms have been designed for the new standards. This leaves room for competition among scholars [3]. Neil et al. proposed a buffer-based, fuzzy-logic-controlled bit rate adaptive algorithm in which the video bit rate is controlled according to certain rules as the buffer occupancy changes [4]. Qi and others described a typical rate adaptive control algorithm based on slow changes in playback [5]. Mok and others started from the quality of experience (QoE): through an analysis of the factors that affect the QoE value, a video bit rate adaptive algorithm is proposed that maximizes QoE [6]. Kim and others used a monitoring mechanism in their algorithm: every cycle, the available bandwidth in the network, the size of the client cache data, and the amount of frame loss are monitored as parameters for deciding whether to switch the code rate; when the current available bandwidth is lower than the current video stream bit rate, the average frame loss rate is greater than 10%, or the remaining buffer length is less than the set active buffer length B_a, the client switches to a lower video bit rate [7]. Verkatraman et al. introduced the concept of a sliding window into the algorithm, using the average download time within the sliding window as a parameter to measure the state of the network; they used the logistic equation to model the network throughput and rate reduction factor, limited the amplitude of bit rate jumps, maintained the video buffer within the equilibrium interval, and reduced the number of code rate switches [8]. Meng and others pointed out that bandwidth estimation and bit rate selection are the core of the client-side bit rate adaptive algorithm and used the Q-learning reinforcement learning algorithm to train a decision-making Q matrix; according to the current network status and buffer saturation, the Q matrix is consulted to determine the action that obtains the maximum return value; the algorithm can achieve a higher bit rate level and less bit rate switching time, and balances the amount of buffered data [9]. Hassan and others used historical download data to predict the network bandwidth in the next stage, using the standard deviation method to replace the original SF algorithm for the calculation of volatility parameters [10]; this made the bandwidth prediction result smoother, slowed the occurrence of the "burr" phenomenon, and, combined with the buffer management strategy as the basis for the code rate decision, avoided the shortcomings of relying on a single parameter for code rate selection. The algorithm has high stability, but when the state of the cache is in the balance zone, even if the available bandwidth resources of the network become larger, the algorithm pursues stability and will not switch to a higher video bit rate. Altaf et al. implemented a video bit rate switching algorithm in an OpenFlow network, using the OpenFlow network controller to obtain real-time traffic information from all switch ports. The accuracy and usability of the algorithm are improved, and interference from other additional data is avoided [11].
But Mongay Batalla and others found that, because the network bandwidth fluctuates greatly when data are transmitted over the network, making bit rate decisions based only on network throughput is somewhat one-sided. If the cache state is not considered in the decision, it is likely to cause buffer overflows and frequent bit rate switching, which affect the user's viewing experience [12]. Hooft et al., addressing the shortcomings of current algorithms, proposed an algorithm that prioritizes caching. When the network changes significantly, priority is given to the client's cache status, and then, according to the available bandwidth of the network, it is judged whether or not to switch the code rate [13].
The Best Frame Loss Rate and Its Determination.
In the case of insufficient bandwidth, the method of actively discarding non-key frames can significantly improve the decoding quality of video [14]. In that case, each B frame sent is discarded with a probability P_drop calculated by the following formula. The purpose is to adjust the data sending rate to match the bandwidth. In order to determine whether the probability P_drop used here is the parameter that maximizes the ratio of decodable frames, we need to examine the relationship between different frame loss rates and the ratio of decodable frames. Here, through simulation experiments, the relationship between the frame loss rate and the ratio of decodable frames is studied, and the network topology is simulated. The video trace file Verbose_StarWarsIV.dat is used to generate data traffic, and the required transmission bandwidth is 320 kb/s. The fixed network bottleneck link, that is, the link bandwidth between R1 and R2, is 304 kb/s, i.e., 95% of the requirement, to simulate the situation of insufficient bandwidth. Here, the frame loss rate P_drop is adjusted from 0 to 0.2, and the change in the decodable frame ratio is observed [15]. The experimental results are shown in Figure 2. As can be seen from the figure, when the frame loss rate is 0, because the available network bandwidth is insufficient, the network is in a congested state, and heavy packet loss destroys inter-frame dependencies during transmission. Although the available bandwidth reaches 95% of the required bandwidth, the proportion of decodable frames is only a little over 77%; random packet loss has a serious adverse effect on the decoding quality of the video. As the frame loss rate at the sender increases, the proportion of decodable frames increases rapidly, and it reaches its maximum value when the frame loss rate is 0.1 [16]. When the frame loss rate is 0.1, the amount of data that the sender actively drops is 5% of the total video data, because we only discard non-critical B frames and the B-frame data volume accounts for half of the total data volume. Therefore, only 95% of the data is sent to the network, and the network is in a bandwidth-matched state at this time. Controlling the frame loss rate from 0 to 0.1 is the process of adjusting the sender's data transmission rate to match the bandwidth. Although the network is still congested during this period, as the degree of congestion decreases, the forced random packet loss in the network is reduced, as shown in Figure 3. The adverse impact of random packet loss on video decoding is alleviated, and the proportion of decodable frames increases rapidly. When the frame loss rate continues to increase beyond 0.1, because the data transmission rate is lower than the transmission capacity of the network, the packet loss rate is reduced to 0; at this point, as the total amount of data sent by the sender decreases, the video data received by the receiving end is also reduced accordingly. Therefore, the proportion of frames that can be decoded decreases as the frame loss rate increases. At this time, the network is under light load.
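A minimal sketch of this active-dropping policy (hypothetical Python, not the paper's network simulator): each outgoing B frame is independently discarded with probability p_drop, I and P frames always pass, and with B frames holding half the bytes a drop rate of 0.1 sends about 95% of the data, as described above.

```python
import random

def send_frames(frames, p_drop, seed=0):
    """Actively drop non-key (B) frames with probability p_drop.

    frames: list of (frame_type, size_bytes) tuples; I and P frames always pass.
    Returns the list of frames actually sent.
    """
    rng = random.Random(seed)
    sent = []
    for ftype, size in frames:
        if ftype == "B" and rng.random() < p_drop:
            continue  # actively dropped before entering the congested link
        sent.append((ftype, size))
    return sent

# Toy GOP in which B-frame bytes are half of the total, as in the paper's setup.
gop = [("I", 12000)] + [("P", 6000), ("B", 3000), ("B", 3000), ("B", 3000)] * 4
video = gop * 100

for p in [0.0, 0.1, 0.2]:
    sent = send_frames(video, p)
    fraction = sum(s for _, s in sent) / sum(s for _, s in video)
    print(f"p_drop={p}: fraction of bytes sent = {fraction:.3f}")
```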
To sum up, for a given available bandwidth that is insufficient to meet the minimum demand of video transmission, the maximum decodable frame ratio can be obtained by determining the optimal frame loss rate and implementing the active frame loss strategy at that rate. Here, the best frame loss rate is the one that matches the data transmission rate to the network bandwidth. This also verifies the correctness of the formula for calculating the frame loss rate in our algorithm above. Using only the UDP protocol, the network can provide only best-effort service. This was originally designed for data-service transmission; for video data with strong real-time requirements, large data volume, sensitivity to random data loss, and high requirements on transmission delay and jitter, the quality of service cannot be guaranteed. In order to improve the quality of video data transmitted through traditional networks, the IETF (Internet Engineering Task Force) formulated the RTP and RTCP protocols in 1996, which provide a real-time transmission standard for the network [17,18]. The Real-Time Transport Protocol (RTP) is responsible only for the transmission of real-time data; it is a data transfer protocol used for the transmission of multimedia data streams on the Internet. The Real-Time Transport Control Protocol (RTCP) is used during data transmission and provides feedback on network status and service quality to data senders. The RTCP protocol defines several packet types, as shown in Figure 4, to achieve its functions: the sender report (SR), the receiver report (RR), the source description packet (SDES), the goodbye packet (BYE), and the application-specific packet (APP). Among them, the sender reports and receiver reports are used by RTP participants to exchange statistical information with each other [19,20].
Measurement and Improvement of Continuous Frame Loss.

The selective frame loss algorithm is a "deterministic" algorithm. The certainty here means that when the control parameters are determined, for a given video sequence, whether a frame is discarded is certain; generally, there will be no continuous frame loss unless the algorithm specifically arranges it.
This may cause continuous frame loss. Continuous frame loss will cause jumps and interruptions in video playback at the receiving end and affect the quality of video playback. Number the B frames at different positions in a GOP: the first one is 1, the second is 2, the third is 3, ..., the eighth is 8. Use the probability p(x_i) to denote, in the active frame loss algorithm, the probability that the frame numbered i is discarded; the conditional probability p(x_{i+1}|x_i) then denotes the probability that the frame numbered i + 1 is discarded given that the frame numbered i is discarded [21,22]. Obviously, the conditional probability p(x_{i+1}|x_i) can describe the continuity of dropped frames: the greater the probability, the greater the possibility of continuous frame loss, and the worse the uniformity. For different i, the conditional probability p(x_{i+1}|x_i) is not necessarily the same, so we take its statistical average in order to measure the continuity of frame loss of an algorithm. Call H(X_{i+1}|X_i) the average conditional probability: the larger its value, the greater the possibility of continuous frame loss and the worse the uniformity of dropped frames. We use the average conditional probability H(X_{i+1}|X_i) to examine the uniformity of three different frame-dropping algorithms. Let the average frame loss rate be P_a; for each B frame, regardless of its number, the same probability P_a is applied [23,24]. The discard probabilities are shown in Table 1.
At this time, because the discard probabilities of the frames are independent of each other, the conditional probabilities are as shown in Table 2.
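As a quick sanity check of the independence claim above, the hypothetical Python snippet below simulates equal-probability dropping and estimates the conditional probability that frame i + 1 is dropped given that frame i was; by independence it should converge to the average frame loss rate P_a itself.

```python
import random

def estimate_conditional_drop(p_a, n_frames=1_000_000, seed=1):
    """Estimate P(frame i+1 dropped | frame i dropped) under i.i.d. dropping."""
    rng = random.Random(seed)
    drops = [rng.random() < p_a for _ in range(n_frames)]
    both = sum(1 for i in range(n_frames - 1) if drops[i] and drops[i + 1])
    first = sum(drops[:-1])
    return both / first

for p_a in [0.1, 0.3, 0.5]:
    print(f"P_a={p_a}: estimated conditional drop probability = "
          f"{estimate_conditional_drop(p_a):.4f}")
```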
Considering that the rate adjustment needs to take TCP friendliness and stability into account, two methods are given here to set the optimal frame loss rate: the detection method and the model method. After using the RTCP protocol to obtain the state information of the network, the detection method imitates the additive-increase, multiplicative-decrease behavior of TCP according to the congestion state of the network and adjusts the setting of the frame loss rate accordingly. The model method calculates the effective bandwidth of the network according to the TFRC congestion control algorithm and then directly calculates and sets the optimal frame loss rate according to the formula. Since the congestion control algorithms on which the two methods are based are both TCP-friendly rate control algorithms, the frame loss rate adjustment method here is also TCP-friendly.
Results and Analysis
In this method, we use an equal-probability frame loss algorithm: regardless of the number of each B frame, all frames are discarded according to the same probability p_a [25,26]. Although, compared with Section 3, this algorithm has better frame-loss uniformity than the arithmetic-series drop probabilities, and the uniformity of the geometric-series drop probabilities is also good, the probability of continuous frame loss is still relatively high, especially when the average frame loss rate P_a is large. In order to improve the uniformity of dropped frames, consider the distance between the current frame and the last actively discarded frame when determining the discard probability: the closer the distance, the smaller the probability of discarding; the greater the distance, the greater the probability of discarding. This reduces the possibility of continuous frame loss. The discard probability of the current frame is then related only to the distance from the last dropped frame, so we can use a Markov chain to design an algorithm that meets these requirements. Let k represent the distance between the current frame and the last actively discarded frame, and let S = {1, 2, 3, ...} be the state space of the Markov chain. The one-step transition probabilities are specified as p_{k,k+1} = 1/h^k and p_{k,1} = 1 − 1/h^k, where h > 1 and k is the distance between the current frame and the last discarded frame. Here h is a parameter corresponding to the average frame loss rate p_a, which can be used to control the average frame loss rate. Evidently, p_{k,k+1} = 1/h^k is the probability that the current frame is not discarded when its distance from the last actively discarded frame is k, and p_{k,1} = 1 − 1/h^k is the probability that it is discarded at distance k. The greater the k, the greater the probability of the current frame being discarded; the smaller the k, the smaller the probability of being discarded. The frame loss uniformity of the active frame loss algorithm is thereby improved. Next, we study the relationship between the control parameter h and the average frame loss rate p_a, according to the one-step transition probability matrix P,
where f^(n)_{11} is the probability of, starting from state 1, returning to state 1 for the first time after n steps, that is, the probability of discarding a video frame at an interval of n steps. From this we obtain the average number of steps to return to state 1, that is, the average interval between dropped frames. Because the average frame loss interval is the reciprocal of the average frame loss rate p_a, the relationship between the control parameter h and p_a follows. Therefore, once p_a is determined, h is also determined; the relationship between the two is shown in Figure 5. Now we use the improved algorithm and the equal-probability frame-dropping algorithm and compare the uniformity of dropped frames. We only need to look at H(X_{i+1}|X_i): the larger the value, the worse the uniformity of frame loss. The relationship between h and p_a is determined by formula (9). We plot the H(X_{i+1}|X_i) of the two algorithms under different average frame loss rates p_a, as shown in Figure 6. As can be clearly seen from the figure, for the same average frame loss rate, the H(X_{i+1}|X_i) of the improved algorithm is much smaller, and it makes the frame loss more even.
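The following hypothetical simulation sketches the Markov-chain dropper defined above and empirically recovers both the average frame loss rate and the conditional drop probability, so the uniformity claim can be checked numerically for a chosen h.

```python
import random

def simulate_markov_dropper(h, n_frames=1_000_000, seed=2):
    """Drop frames per the Markov chain: at distance k since the last drop,
    keep with probability 1/h**k and drop with probability 1 - 1/h**k."""
    rng = random.Random(seed)
    drops, k = [], 1
    for _ in range(n_frames):
        if rng.random() < 1.0 - 1.0 / h**k:
            drops.append(True)
            k = 1          # return to state 1 after a drop
        else:
            drops.append(False)
            k += 1
    return drops

drops = simulate_markov_dropper(h=1.5)
p_a = sum(drops) / len(drops)
consecutive = sum(1 for i in range(len(drops) - 1) if drops[i] and drops[i + 1])
cond = consecutive / sum(drops[:-1])
print(f"empirical p_a = {p_a:.4f}")
print(f"P(drop i+1 | drop i) = {cond:.4f}  (vs p_a itself for i.i.d. dropping)")
```

For h = 1.5 the conditional drop probability after a drop is 1 − 1/h ≈ 0.33, noticeably smaller than the empirical p_a, which is exactly the improved uniformity the text describes.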
Conclusion
This paper proposes an improved method to determine the optimal frame loss rate. When the frame loss rate is 0, although the available bandwidth reaches 95% of the required bandwidth, the proportion of decodable frames is only a little more than 77%; random packet loss has a serious adverse impact on the decoding quality of the video. With the increase of the frame loss rate at the transmitting end, the proportion of decodable frames increases rapidly and reaches its maximum value when the frame loss rate is 0.1 [27,28]. When the frame loss rate is 0.1, the amount of data actively dropped by the sender is 5% of the total video data. Because we only discard non-critical B frames and the amount of B-frame data accounts for half of the total data, only 95% of the data is sent to the network. At this time, the network is in a state of bandwidth matching. The process of controlling the frame loss rate from 0 to 0.1 is the process of adjusting the data transmission rate of the transmitter to match the bandwidth.
The active frame loss algorithm may produce continuous frame loss. In order to solve this problem, the concept of average conditional probability is used to measure the frame loss uniformity of a frame loss algorithm, and its calculation method is explained with an example. In order to reduce the probability of continuous frame loss events, the equal-probability frame loss algorithm is improved to increase the uniformity of frame loss.
This paper studies the relationship between the control parameters of the improved algorithm and the average frame loss rate, gives the method to determine the control parameters, and verifies its correctness through simulation experiments. Finally, we compared two groups of experiments and investigated the average frame loss interval and the variance of the frame loss interval of the two algorithms under the same network conditions. The conclusion shows that the improved algorithm greatly improves the uniformity of frame loss.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.

| 4,783.8 | 2022-05-23T00:00:00.000 | ["Computer Science"] |
The Odd Gamma Weibull-Geometric Model: Theory and Applications
In this paper, we study a new four-parameter distribution called the odd gamma Weibull-geometric distribution. Having the qualities suggested by its name, the new distribution is a special member of the odd-gamma-G family of distributions, defined with the Weibull-geometric distribution as baseline, and benefits from their respective merits. Firstly, we present a comprehensive account of its mathematical properties, including shapes, asymptotes, quantile function, quantile density function, skewness, kurtosis, moments, moment generating function, and stochastic ordering. Then, we focus our attention on the statistical inference of the corresponding model. The maximum likelihood estimation method is used to estimate the model parameters. The performance of this method is assessed by a Monte Carlo simulation study. An empirical illustration of the new distribution is presented through the analysis of two real-life data sets. The results of the proposed model prove to be better than those of the useful beta-Weibull, gamma-Weibull, and Weibull-geometric models.
Introduction
The parametric models based on standard (probability) distributions are not always suitable to reveal the finer detail of the underlying structure of a data set. This limitation has triggered the creation of new families of distributions, often defined by compounding or weighting existing distributions. The most useful of them can be found in the surveys of [1,2]. In particular, the families of distributions defined with gamma generators have demonstrated a high ability to construct flexible models, showing nice fits for various kinds of real-life data sets. See, for instance, [3][4][5][6]. The odd-gamma-G family introduced by [5] will be at the heart of our study. For this reason, it is briefly described below. Let G(x) be a cumulative distribution function (cdf) of a continuous univariate distribution and g(x) be the corresponding probability density function (pdf). Then, the odd-gamma-G family of distributions is constructed from the gamma distribution and the odd transformation given by odd_G(x) = G(x)/Ḡ(x), with Ḡ(x) = 1 − G(x). The corresponding cdf is given by

$$F(x) = \frac{1}{\Gamma(\alpha)}\, \gamma\!\left(\alpha, \frac{G(x)}{\bar{G}(x)}\right), \quad (1)$$

where $\gamma(\alpha, z) = \int_0^z t^{\alpha-1} e^{-t}\, dt$, α > 0, z ≥ 0, is the lower incomplete gamma function and $\Gamma(\alpha) = \int_0^{+\infty} t^{\alpha-1} e^{-t}\, dt$ is the gamma function. The corresponding pdf is given by

$$f(x) = \frac{g(x)}{\Gamma(\alpha)\, \bar{G}(x)^2} \left[\frac{G(x)}{\bar{G}(x)}\right]^{\alpha-1} \exp\!\left(-\frac{G(x)}{\bar{G}(x)}\right). \quad (2)$$

This modification can significantly enrich the former model related to G(x). This is supported by [5] with the uniform distribution as baseline and by [6] with the exponentiated version of the uniform distribution and the exponentiated version of the Weibull distribution as baselines. See also more general distributions in [7]. In terms of modelling, it is shown that they enjoy better goodness-of-fit properties than other useful competitors.
On the other side, Ref. [8] introduced and studied another generalization of the Weibull distribution called the Weibull-geometric distribution. As indicated by the name, it is obtained by compounding the Weibull and geometric distributions. The corresponding cdf is given by

$$G(x) = \frac{1 - e^{-(\beta x)^c}}{1 - p\, e^{-(\beta x)^c}},$$

where β > 0, c > 0 and p ∈ [0, 1). The corresponding pdf is given by

$$g(x) = \frac{c\, \beta^c x^{c-1} (1 - p)\, e^{-(\beta x)^c}}{\left(1 - p\, e^{-(\beta x)^c}\right)^2}.$$

One can remark that the Weibull distribution arises as a special case when p = 0. It is shown in [8] that the Weibull-geometric pdf and hrf can take more general forms than those of the standard Weibull distribution. Among others, thanks to the presence of the parameter p, the related model is of interest for modeling unimodal failure rates (contrary to the standard Weibull model).
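A small numerical check of the two expressions above (hypothetical Python; scipy's quad is used only for verification): integrating g over (0, x) should reproduce G(x).

```python
import numpy as np
from scipy.integrate import quad

def wg_cdf(x, c, beta, p):
    """Weibull-geometric cdf."""
    e = np.exp(-(beta * x) ** c)
    return (1 - e) / (1 - p * e)

def wg_pdf(x, c, beta, p):
    """Weibull-geometric pdf."""
    e = np.exp(-(beta * x) ** c)
    return c * beta**c * x ** (c - 1) * (1 - p) * e / (1 - p * e) ** 2

c, beta, p = 1.5, 0.8, 0.4
for x in [0.5, 1.0, 2.0]:
    integral, _ = quad(wg_pdf, 0, x, args=(c, beta, p))
    print(f"x={x}: integral of pdf = {integral:.6f}, cdf = {wg_cdf(x, c, beta, p):.6f}")
```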
In this paper, we focus our attention on a new distribution with cdf defined by compounding the odd-gamma-G cdf given by (1) and the Weibull-geometric cdf given by (2). The obtained distribution is then called the odd gamma Weibull-geometric distribution (OGWG for short). We thus aim to benefit from the respective merits of the two compounded distributions to create a new one having great flexibility in modelling. Among others, we show that the OGWG pdf can have reversed-J, right-skewed, left-skewed and approximately symmetric shapes, and the OGWG hrf can have increasing, decreasing and bathtub shapes. These aspects are welcome in the construction of new flexible models for a wide variety of data sets.
The rest of the paper is organized as follows. In Section 2, the main functions related to the OGWG distribution are presented, with analytical results and graphical illustrations of their shapes. The mathematical properties of the new distribution are derived in Section 3. In Section 4, estimators for the model parameters are obtained by the method of maximum likelihood estimation. A simulation study is then performed to show the numerical performance of the estimators. In Section 5, two real data sets are considered for analysis, showing the nice fit of the proposed model in comparison to useful competitors. Some concluding remarks are given in Section 6.
Main Probability Functions
Let us recall that the cdf of the OGWG distribution is defined by (1) with $G(x)$ given by (2). By noticing that $\mathrm{odd}_G(x) = (e^{(\beta x)^c} - 1)/(1 - p)$, the corresponding cdf is given by
$$F(x) = \frac{1}{\Gamma(\alpha)}\,\gamma\!\left(\alpha, \frac{e^{(\beta x)^c} - 1}{1 - p}\right), \quad x > 0, \qquad (3)$$
with $c > 0$, $\alpha > 0$, $p \in [0, 1)$ and $\beta > 0$. When needed, the OGWG distribution will be denoted by OGWG$(c, \alpha, p, \beta)$ in order to specify the parameters. By differentiation (almost surely), the pdf corresponding to (3) is given by
$$f(x) = \frac{c\,\beta^c x^{c-1} e^{(\beta x)^c}}{(1 - p)^\alpha\,\Gamma(\alpha)} \left(e^{(\beta x)^c} - 1\right)^{\alpha - 1} \exp\!\left(-\frac{e^{(\beta x)^c} - 1}{1 - p}\right). \qquad (4)$$
The survival function of the OGWG distribution is given by $S(x) = 1 - F(x)$. The corresponding hazard rate function (hrf) is given by
$$h(x) = \frac{f(x)}{1 - F(x)}. \qquad (5)$$
Remark 1. One can remark that the OGWG distribution is a special case of the general gamma-Weibull-Weibull distribution introduced by ([7], Section 6) (with the notations of [7], it corresponds to $\beta = 1$, simplifying the complexity of the distribution, $\alpha = 1/(1 - p)$, $\lambda = 1/\beta$ and $k = c$). It is also an extension of the so-called exponentiated exponential power distribution thanks to the presence of the parameter $p$.
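The OGWG functions above are straightforward to evaluate numerically. The following Python sketch (not code from the paper; the function names are ours) implements the cdf (3), pdf (4) and hrf (5) via scipy's regularized lower incomplete gamma function, computing the pdf on the log scale for numerical stability:

```python
import numpy as np
from scipy.special import gammainc, gammaln  # gammainc(a, x) = gamma(a, x)/Gamma(a)

def ogwg_cdf(x, c, alpha, p, beta):
    """OGWG cdf (3): F(x) = gamma(alpha, (e^{(beta x)^c} - 1)/(1 - p)) / Gamma(alpha)."""
    odd = np.expm1((beta * x) ** c) / (1.0 - p)
    return gammainc(alpha, odd)  # regularized, so no explicit division by Gamma(alpha)

def ogwg_pdf(x, c, alpha, p, beta):
    """OGWG pdf (4), evaluated on the log scale."""
    u = (beta * x) ** c
    odd = np.expm1(u) / (1.0 - p)
    log_f = (np.log(c) + c * np.log(beta) + (c - 1.0) * np.log(x) + u
             + (alpha - 1.0) * np.log(odd) - odd - np.log(1.0 - p) - gammaln(alpha))
    return np.exp(log_f)

def ogwg_hrf(x, c, alpha, p, beta):
    """OGWG hazard rate (5): h(x) = f(x) / (1 - F(x))."""
    return ogwg_pdf(x, c, alpha, p, beta) / (1.0 - ogwg_cdf(x, c, alpha, p, beta))
```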
Analytical Properties of the Shapes
We now investigate the critical points and asymptotes for the OGWG pdf, i.e., $f(x)$ given by (4), and the OGWG hrf, i.e., $h(x)$ given by (5). In order to deal with tractable equations for the critical points, we work with the logarithmic functions $\log[f(x)]$ and $\log[h(x)]$. Thus, the critical points of $f(x)$ are the solutions $x_0$ of the nonlinear equation
$$\frac{c - 1}{x_0} + c\,\beta^c x_0^{c-1}\left[1 + (\alpha - 1)\,\frac{e^{(\beta x_0)^c}}{e^{(\beta x_0)^c} - 1} - \frac{e^{(\beta x_0)^c}}{1 - p}\right] = 0.$$
The nature of $x_0$ can be determined by the study of the sign of the second derivative of $\log[f(x)]$ at $x_0$. For given parameters $c$, $\alpha$, $p$ and $\beta$, this aspect can be evaluated numerically by using standard software (R, Matlab, Mathematica...). In a similar way, the critical points of $h(x)$ are the solutions $x_*$ of the analogous nonlinear equation obtained from $d\log[h(x)]/dx = 0$, and again the nature of $x_*$ can be determined by the study of the sign of the second derivative at $x_*$. The asymptotic properties of $F(x)$, $f(x)$ and $h(x)$ are now studied. Since $\gamma(\alpha, u) \sim u^\alpha/\alpha$ when $u \to 0$, for $x \to 0$, we have
$$F(x) \sim \frac{(\beta x)^{\alpha c}}{\alpha\,\Gamma(\alpha)\,(1 - p)^\alpha}, \qquad f(x) \sim \frac{c\,\beta^{\alpha c} x^{\alpha c - 1}}{\Gamma(\alpha)\,(1 - p)^\alpha}.$$
In this case, note that $f(x)$ tends to $0$ when $\alpha c > 1$, tends to $\beta/[\alpha\,\Gamma(\alpha)(1 - p)^\alpha]$ when $\alpha c = 1$ and tends to $+\infty$ when $\alpha c \in (0, 1)$. The same results hold for $h(x)$.
Since $\gamma(\alpha, u) \sim \Gamma(\alpha) - u^{\alpha - 1} e^{-u}$ when $u \to +\infty$, for $x \to +\infty$, we have
$$1 - F(x) \sim \frac{1}{\Gamma(\alpha)}\left(\frac{e^{(\beta x)^c} - 1}{1 - p}\right)^{\alpha - 1} e^{-(e^{(\beta x)^c} - 1)/(1 - p)}, \qquad h(x) \sim \frac{c\,\beta^c x^{c-1} e^{(\beta x)^c}}{1 - p}.$$
Let us remark that, in this case, $f(x)$ tends to $0$, whereas $h(x)$ tends to $+\infty$, for all the possible values of the parameters. Also, the parameter $\alpha$ plays no role in the asymptotic behavior of $h(x)$.
Figures 1 and 2 display some plots of $f(x)$ and $h(x)$ when $\beta = 1$ for different values of $c$, $\alpha$ and $p$. The plots in Figure 1 reveal that $f(x)$ can have reversed-J, right-skewed, left-skewed and approximately symmetric shapes. The plots in Figure 2 indicate that $h(x)$ can have increasing, decreasing and bathtub shapes.
Quantile Function
The quantile function of the OGWG distribution, denoted by $Q(y)$, $y \in (0, 1)$, is characterized by the non-linear equation $F(Q(y)) = y$. After some algebra, we obtain
$$Q(y) = \frac{1}{\beta}\left[\log\!\left(1 + (1 - p)\,\gamma^{-1}(\alpha, y\,\Gamma(\alpha))\right)\right]^{1/c}, \qquad (6)$$
where $\gamma^{-1}(\alpha, x)$ is the inverse lower incomplete gamma function, i.e., satisfying $\gamma^{-1}(\alpha, \gamma(\alpha, x)) = x$ for $x > 0$. The median of the OGWG distribution is given by $M_* = Q(0.5)$. For $i \in \{1, 2, 3\}$, the $i$-th quartile is given by $Q(i/4)$, and for $j \in \{1, \ldots, 7\}$, the $j$-th octile is given by $Q(j/8)$. Among others, we can use the quartiles and the octiles to investigate the effect of the parameters $c$, $\alpha$, $p$ and $\beta$ on the skewness and kurtosis of the OGWG distribution. One of the earliest skewness measures is the Bowley skewness introduced by [9] and defined by
$$B = \frac{Q(3/4) - 2\,Q(1/2) + Q(1/4)}{Q(3/4) - Q(1/4)}.$$
For the kurtosis, one can use the Moors kurtosis introduced by [10] and defined by
$$M = \frac{Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)}{Q(6/8) - Q(2/8)}.$$
The sign of $B$ is informative on the skewness nature of the distribution. Indeed, if $B = 0$ then the distribution is symmetric, if $B > 0$ then the distribution has a right-skewed tail and if $B < 0$ then the distribution has a left-skewed tail. On the other side, the heaviness of the tail is evaluated numerically by $M$; a large $M$ corresponds to a heavy tail. Also, we can derive the quantile density function $q(y)$, $y \in (0, 1)$, by differentiation of $Q(y)$.
This function is useful to define numerous statistical quantities (asymptotic confidence intervals, inference procedures...). We refer to [11].
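The quantile-based measures above are easy to compute numerically. The following Python sketch (our own illustration, reusing the naming of the earlier sketch) evaluates (6) via scipy's regularized inverse incomplete gamma function, and from it the Bowley skewness and Moors kurtosis:

```python
import numpy as np
from scipy.special import gammaincinv  # inverse of the regularized gammainc

def ogwg_quantile(y, c, alpha, p, beta):
    """Q(y) = (1/beta) * [log(1 + (1 - p) * gamma^{-1}(alpha, y * Gamma(alpha)))]^{1/c}."""
    z = gammaincinv(alpha, y)  # gammainc is regularized, so this is gamma^{-1}(alpha, y*Gamma(alpha))
    return (np.log1p((1.0 - p) * z)) ** (1.0 / c) / beta

def bowley_skewness(c, alpha, p, beta):
    q = lambda y: ogwg_quantile(y, c, alpha, p, beta)
    return (q(0.75) - 2.0 * q(0.5) + q(0.25)) / (q(0.75) - q(0.25))

def moors_kurtosis(c, alpha, p, beta):
    q = lambda y: ogwg_quantile(y, c, alpha, p, beta)
    return (q(7/8) - q(5/8) + q(3/8) - q(1/8)) / (q(6/8) - q(2/8))
```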
Some Characterizations
Let $U$ be a random variable following the uniform distribution over $(0, 1)$ and $Q(y)$ be the quantile function given by (6). Then, the random variable $X$ defined by $X = Q(U)$ follows the OGWG$(c, \alpha, p, \beta)$ distribution. By noticing that $Z = \gamma^{-1}(\alpha, U\,\Gamma(\alpha))$ follows the gamma distribution with parameters $1$ and $\alpha$, i.e., with cdf $R(x) = (1/\Gamma(\alpha))\,\gamma(\alpha, x)$, we can also write $X = (1/\beta)\left[\log(1 + (1 - p)Z)\right]^{1/c}$. This characterization is useful to generate data according to the OGWG$(c, \alpha, p, \beta)$ distribution. Furthermore, if $X$ is a random variable following the OGWG$(c, \alpha, p, \beta)$ distribution, then the random variable $Y = (e^{(\beta X)^c} - 1)/(1 - p)$ follows the gamma distribution with parameters $1$ and $\alpha$.
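The gamma characterization gives a direct sampler, sketched below in Python (our own illustration; `ogwg_rvs` is a hypothetical name): draw $Z$ from a unit-scale gamma distribution and transform it, which avoids evaluating the inverse incomplete gamma function for every draw.

```python
import numpy as np

def ogwg_rvs(n, c, alpha, p, beta, rng=None):
    """Draw n samples from OGWG(c, alpha, p, beta) via the gamma characterization."""
    rng = np.random.default_rng(rng)
    z = rng.gamma(shape=alpha, scale=1.0, size=n)  # Z = gamma^{-1}(alpha, U * Gamma(alpha))
    return (np.log1p((1.0 - p) * z)) ** (1.0 / c) / beta

# Example: 10,000 draws from OGWG(1.5, 0.5, 0.5, 0.5)
sample = ogwg_rvs(10_000, c=1.5, alpha=0.5, p=0.5, beta=0.5)
```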
Series Expansion of the OGWG pdf
The following result presents a series expansion for the OGWG pdf.
Proposition 1. The OGWG pdf can be expressed as sums of Weibull pdfs. More precisely, there exists a sequence of real numbers $(v_{r,s})_{(r,s) \in \mathbb{N}^2}$ such that the pdf $f(x)$ given by (4) can be expressed as
$$f(x) = \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} v_{r,s}\,\ell_{r,s}(x), \qquad (7)$$
where $\ell_{r,s}(x) = c\,(r + s + 1)\,\beta^c x^{c-1} e^{-(r+s+1)(\beta x)^c}$, and one can remark that $\ell_{r,s}(x)$ is the pdf of the Weibull distribution with parameters $(r + s + 1)^{1/c}\beta$ and $c$.
Proof. Let us consider the expression of $f(x)$ depending on $G(x)$ and $g(x)$, i.e., the odd-gamma-G pdf recalled in the Introduction. By virtue of the power series expansion of the exponential function, i.e., $e^x = \sum_{i=0}^{+\infty} x^i/i!$, and the generalized binomial series expansion, i.e., $(1 - x)^{-a} = \sum_{k=0}^{+\infty} \binom{a+k-1}{k} x^k$ for $|x| < 1$, the pdf can be expanded into powers of $G(x)$. By applying the generalized binomial formula twice and putting all the above equalities together, we obtain the double series (7), with the coefficients $v_{r,s}$ determined by these expansions. This ends the proof of Proposition 1.
The result of Proposition 1 is useful to obtain sum expressions of various probability measures, especially those of the form $\int_0^{+\infty} k(x)\,f(x)\,dx$, where $k(x)$ denotes a certain function. Indeed, when the dominated convergence theorem can be applied, we have the following series expansion:
$$\int_0^{+\infty} k(x)\,f(x)\,dx = \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} v_{r,s} \int_0^{+\infty} k(x)\,\ell_{r,s}(x)\,dx.$$
It follows from the asymptotic study of $f(x)$ performed in Section 2.2 that, for any integer $m$, the $m$-th moment of $X$ exists (by using Riemann integrals). Furthermore, it is given by $\mu_m = E(X^m) = \int_0^{+\infty} x^m f(x)\,dx$. Several expressions of this integral are given below. First of all, by applying the natural change of variable $y = (e^{(\beta x)^c} - 1)/(1 - p)$, i.e., $x = (1/\beta)[\log(1 + (1 - p)y)]^{1/c}$, we obtain
$$\mu_m = \frac{1}{\beta^m\,\Gamma(\alpha)} \int_0^{+\infty} \left[\log(1 + (1 - p)y)\right]^{m/c} y^{\alpha - 1} e^{-y}\,dy. \qquad (8)$$
To the best of our knowledge, there is no closed form for this integral. However, for given parameters $c$, $\alpha$, $p$ and $\beta$, it can be computed numerically by using scientific software (see Table 1 below).
The following result proposes bounds for $\mu_m$. This gives an approximate analytical view of the roles of the parameters in the possible values of $\mu_m$.
Proposition 2. Let us set
$$L_m = \frac{(1 - p)^{m/c}\,\Gamma(\alpha + m/c)}{\beta^m\,\Gamma(\alpha)}\left(1 + \frac{m(1 - p)}{c}\right)^{-(\alpha + m/c)}, \qquad U_m = \frac{(1 - p)^{m/c}\,\Gamma(\alpha + m/c)}{\beta^m\,\Gamma(\alpha)}.$$
Then, we can bound $\mu_m$ as $L_m \leq \mu_m \leq U_m$.
Proof. Let us consider the expression (8) for $\mu_m$. The following bounds hold for the logarithmic function: for any $x > 0$, $x/(1 + x) \leq \log(1 + x) \leq x$. On the other hand, we have $e^x \geq 1 + x$ for any $x \in \mathbb{R}$. Therefore, for any $x > 0$, we have $x e^{-x} \leq \log(1 + x) \leq x$. Hence, by using $x e^{-x} \leq \log(1 + x)$ with $x = (1 - p)y$ in (8) and a change of variable, we obtain $\mu_m \geq L_m$. In a similar way, by using $\log(1 + x) \leq x$, we have $\mu_m \leq U_m$. By combining the previous bounds, we end the proof of Proposition 2.
Alternatively, one can directly use the quantile function given by (6). Indeed, by the change of variable $x = Q(y)$, we have $\mu_m = \int_0^1 [Q(y)]^m\,dy$.
As a final approach, one can use the result of Proposition 1 and, more specifically, the expansion (7). Hereafter, let $X_{r,s}$ be a random variable following the Weibull distribution with parameters $(r + s + 1)^{1/c}\beta$ and $c$, i.e., with the pdf given by $\ell_{r,s}(x) = c\,(r + s + 1)\,\beta^c x^{c-1} e^{-(r+s+1)(\beta x)^c}$, $x > 0$. Then, we have
$$\mu_m = \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} v_{r,s}\,E(X_{r,s}^m) = \frac{\Gamma(1 + m/c)}{\beta^m} \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} \frac{v_{r,s}}{(r + s + 1)^{m/c}}.$$
From any of the above expressions of $\mu_m$, we can derive central measures such as the mean of $X$ given by $E(X) = \mu_1$, the variance of $X$ given by $V(X) = \mu_2 - (\mu_1)^2$ and the $m$-th central moment of $X$ given by
$$\mu_m' = E[(X - \mu_1)^m] = \sum_{k=0}^{m} \binom{m}{k} (-\mu_1)^{m-k}\,\mu_k.$$
We can also determine other standard measures such as the coefficients of skewness and kurtosis, respectively given by $CS = \mu_3'/V(X)^{3/2}$ and $CK = \mu_4'/V(X)^2$. Table 1 presents the numerical values of $\mu_1$, $\mu_2$, $\mu_3$, $\mu_4$, $V(X)$, $CS$ and $CK$ for selected values of the parameters.
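The quantile-based expression $\mu_m = \int_0^1 [Q(y)]^m\,dy$ is particularly convenient for numerical work. The following Python sketch (our own illustration, reusing `ogwg_quantile` from the earlier sketch; note the integrand has an integrable singularity at $y = 1$) computes the raw moments by quadrature and derives $V(X)$, $CS$ and $CK$:

```python
from scipy.integrate import quad

def ogwg_moment(m, c, alpha, p, beta):
    """m-th raw moment via mu_m = int_0^1 Q(y)^m dy."""
    return quad(lambda y: ogwg_quantile(y, c, alpha, p, beta) ** m, 0.0, 1.0)[0]

def ogwg_summary(c, alpha, p, beta):
    mu = [ogwg_moment(m, c, alpha, p, beta) for m in (1, 2, 3, 4)]
    var = mu[1] - mu[0] ** 2
    # central moments via the binomial expansion of E[(X - mu_1)^m]
    mu3c = mu[2] - 3 * mu[0] * mu[1] + 2 * mu[0] ** 3
    mu4c = mu[3] - 4 * mu[0] * mu[2] + 6 * mu[0] ** 2 * mu[1] - 3 * mu[0] ** 4
    return {"mean": mu[0], "var": var, "CS": mu3c / var ** 1.5, "CK": mu4c / var ** 2}
```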
Moment Generating Function
The moment generating function of $X$ is given by $M(t) = E(e^{tX}) = \int_0^{+\infty} e^{tx} f(x)\,dx$. It is well defined for $t \in \mathbb{R}$. By applying the change of variable $y = (e^{(\beta x)^c} - 1)/(1 - p)$, we can express $M(t)$ as
$$M(t) = \frac{1}{\Gamma(\alpha)} \int_0^{+\infty} \exp\!\left(\frac{t}{\beta}\left[\log(1 + (1 - p)y)\right]^{1/c}\right) y^{\alpha - 1} e^{-y}\,dy.$$
Alternatively, by the change of variable $x = Q(y)$, we have $M(t) = \int_0^1 e^{t\,Q(y)}\,dy$.
For given parameters c, α, p, β and t, the integrals above can be computed numerically.
A series expansion of $M(t)$ can be derived from (7). Indeed, we have
$$M(t) = \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} v_{r,s} \int_0^{+\infty} e^{tx}\,\ell_{r,s}(x)\,dx.$$
As always, we have the following relation between the moments and the moment generating function: $\mu_m = M^{(m)}(0)$, the $m$-th derivative of $M(t)$ at $t = 0$.
Incomplete Moments
Let $1_A$ be the indicator function over an event $A$. Then, the $m$-th incomplete moment of $X$ is defined by $\theta_m(t) = E(X^m\,1_{\{X \leq t\}}) = \int_0^t x^m f(x)\,dx$. By applying the change of variable $y = (e^{(\beta x)^c} - 1)/(1 - p)$, we obtain
$$\theta_m(t) = \frac{1}{\beta^m\,\Gamma(\alpha)} \int_0^{(e^{(\beta t)^c} - 1)/(1 - p)} \left[\log(1 + (1 - p)y)\right]^{m/c} y^{\alpha - 1} e^{-y}\,dy.$$
One can determine bounds for $\theta_m(t)$ by proceeding as in the proof of Proposition 2.
Alternatively, by the change of variable $x = Q(y)$, we have immediately $\theta_m(t) = \int_0^{F(t)} [Q(y)]^m\,dy$.
Again, for given parameters $c$, $\alpha$, $p$, $\beta$ and $t$, we can evaluate these integrals numerically. Also, another expression comes from (7):
$$\theta_m(t) = \sum_{r=0}^{+\infty} \sum_{s=0}^{+\infty} v_{r,s} \int_0^t x^m\,\ell_{r,s}(x)\,dx.$$
With the first incomplete moment of $X$, one can define several kinds of mean deviations. For instance, there are the mean deviation of $X$ about the mean $\mu_1$, given by $\delta_1 = E(|X - \mu_1|) = 2\mu_1 F(\mu_1) - 2\theta_1(\mu_1)$, and the mean deviation of $X$ about the median $M_*$, given by $\delta_2 = E(|X - M_*|) = \mu_1 - 2\theta_1(M_*)$. We can also express the Bonferroni and Lorenz curves, respectively given by
$$B(y) = \frac{\theta_1(Q(y))}{y\,\mu_1}, \qquad L(y) = \frac{\theta_1(Q(y))}{\mu_1}, \qquad y \in (0, 1).$$
As an example of the use of higher orders of the incomplete moments, the $m$-th moment of the residual life of $X$ is given by
$$K_m(t) = E[(X - t)^m \mid X > t] = \frac{1}{1 - F(t)} \int_t^{+\infty} (x - t)^m f(x)\,dx.$$
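The quantile form $\theta_m(t) = \int_0^{F(t)} [Q(y)]^m\,dy$ makes the curves above simple to compute. The following Python sketch (our own illustration, reusing `ogwg_cdf`, `ogwg_quantile` and `ogwg_moment` from the earlier sketches) evaluates the incomplete moments and the Bonferroni and Lorenz curves:

```python
from scipy.integrate import quad

def ogwg_incomplete_moment(m, t, c, alpha, p, beta):
    """theta_m(t) = int_0^{F(t)} Q(y)^m dy."""
    upper = ogwg_cdf(t, c, alpha, p, beta)
    return quad(lambda y: ogwg_quantile(y, c, alpha, p, beta) ** m, 0.0, upper)[0]

def bonferroni_curve(y, c, alpha, p, beta):
    mu1 = ogwg_moment(1, c, alpha, p, beta)
    t = ogwg_quantile(y, c, alpha, p, beta)
    return ogwg_incomplete_moment(1, t, c, alpha, p, beta) / (y * mu1)

def lorenz_curve(y, c, alpha, p, beta):
    mu1 = ogwg_moment(1, c, alpha, p, beta)
    t = ogwg_quantile(y, c, alpha, p, beta)
    return ogwg_incomplete_moment(1, t, c, alpha, p, beta) / mu1
```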
Stochastic Ordering
Under some assumptions on the parameters, the result below shows that the OGWG$(c, \alpha, p, \beta)$ distribution is ordered with respect to the likelihood ratio ordering. Further details and applications on stochastic ordering can be found in [12]. Proposition 3. Let $X$ be a random variable following the OGWG$(c, \alpha_1, p_1, \beta)$ distribution and $Y$ be a random variable following the OGWG$(c, \alpha_2, p_2, \beta)$ distribution. Suppose that $\alpha_1 \leq \alpha_2$ and $p_2 \leq p_1$. Then $X$ is smaller than $Y$ in the likelihood ratio order, i.e., the function defined as the ratio of the pdf of $X$ over the pdf of $Y$ is decreasing.
Proof. Let $f_1(x)$ be the pdf of $X$ and $f_2(x)$ be the pdf of $Y$. Then, we have
$$\frac{f_1(x)}{f_2(x)} = \frac{(1 - p_2)^{\alpha_2}\,\Gamma(\alpha_2)}{(1 - p_1)^{\alpha_1}\,\Gamma(\alpha_1)} \left(e^{(\beta x)^c} - 1\right)^{\alpha_1 - \alpha_2} \exp\!\left[-\left(e^{(\beta x)^c} - 1\right)\left(\frac{1}{1 - p_1} - \frac{1}{1 - p_2}\right)\right].$$
Let us consider the logarithmic function to have a more tractable expression. We have
$$\frac{d}{dx} \log\!\left[\frac{f_1(x)}{f_2(x)}\right] = c\,\beta^c x^{c-1} e^{(\beta x)^c}\left[\frac{\alpha_1 - \alpha_2}{e^{(\beta x)^c} - 1} - \left(\frac{1}{1 - p_1} - \frac{1}{1 - p_2}\right)\right].$$
Since $\alpha_1 \leq \alpha_2$ and $p_2 \leq p_1$, this derivative is the sum of two negative terms, so $f_1(x)/f_2(x)$ is decreasing. The proof of Proposition 3 is completed.
Estimation of Parameters
Hereafter, we focus our attention on the applied aspect of the OGWG distribution, considering it as a statistical model. Indeed, motivated by its flexibility discussed in the above sections, the OGWG model is appropriate for the analysis of data sets with a non-trivial structure, such as those frequently encountered in engineering, medicine, hydrology, economics and finance.
Maximum Likelihood Estimation
Several parameter estimation methods are available in the literature. Among them, thanks to its strong theoretical guarantees, the maximum likelihood method remains the most popular. In particular, it can be used to construct confidence intervals for the model parameters and also in test statistics. For these reasons, we consider the estimation of the unknown parameters for the OGWG model from complete samples with this method only. Let $x_1, \ldots, x_n$ be a sample of size $n$ from the OGWG$(c, \alpha, p, \beta)$ distribution. The log-likelihood function for the vector of parameters $\Theta = (c, \alpha, p, \beta)$ is given by
$$\ell(\Theta) = n\log c + nc\log\beta - n\log\Gamma(\alpha) - n\alpha\log(1 - p) + (c - 1)\sum_{i=1}^n \log x_i + \sum_{i=1}^n (\beta x_i)^c + (\alpha - 1)\sum_{i=1}^n \log\!\left(e^{(\beta x_i)^c} - 1\right) - \frac{1}{1 - p}\sum_{i=1}^n \left(e^{(\beta x_i)^c} - 1\right). \qquad (9)$$
The corresponding score vector is given by $U(\Theta) = (\partial\ell/\partial c,\ \partial\ell/\partial\alpha,\ \partial\ell/\partial p,\ \partial\ell/\partial\beta)^\top$. By solving the system $U(\Theta) = (0, 0, 0, 0)^\top$, we obtain a solution denoted by $\hat\Theta = (\hat{c}, \hat\alpha, \hat{p}, \hat\beta)$ (assuming that it is unique). Hence, $\hat{c}$, $\hat\alpha$, $\hat{p}$ and $\hat\beta$ are the maximum likelihood estimates (MLEs) of $c$, $\alpha$, $p$ and $\beta$, respectively. The analytical expressions of these estimates do not exist in our case. However, the system can be solved numerically by using iterative techniques (quasi-Newton BFGS, Newton-Raphson algorithms...). Further details can be found in [13]. Assuming that the parameters are in the interior of the parameter space and not on the boundary, the distribution of $\hat\Theta$ can be approximated by a 4-dimensional normal distribution with mean $\Theta$ and covariance matrix given by $J(\hat\Theta)^{-1}$, where $J(\Theta)$ denotes the $4 \times 4$ symmetric matrix defined by $J(\Theta) = -\partial^2\ell(\Theta)/\partial\Theta\,\partial\Theta^\top$, whose elements are given in Appendix A. From this asymptotic property, one can construct approximate confidence intervals for $c$, $\alpha$, $p$ and $\beta$. More precisely, for $h \in \{c, \alpha, p, \beta\}$, an approximate confidence interval for $h$ at the level $100(1 - \omega)\%$ is given by
$$CI_h = [\hat{h} - z_{\omega/2}\,s_{\hat{h}},\ \hat{h} + z_{\omega/2}\,s_{\hat{h}}], \qquad (10)$$
where $s_{\hat{h}}$ is the square root of the diagonal element of $J(\hat\Theta)^{-1}$ at the same position as $h$ and $z_{\omega/2}$ is the quantile $100(1 - \omega/2)\%$ of the standard normal distribution. Also, we are able to compute the likelihood ratio (LR) statistics for testing the goodness-of-fit of the OGWG model against its sub-models.
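In practice, the MLEs can be obtained by directly minimizing the negative of (9). The following Python sketch (our own illustration, not the paper's implementation; `ogwg_fit` and the starting values are assumptions) uses L-BFGS-B with box constraints to keep the parameters in the admissible region:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def ogwg_negloglik(theta, x):
    """Negative of the log-likelihood (9)."""
    c, alpha, p, beta = theta
    u = (beta * x) ** c
    odd = np.expm1(u) / (1.0 - p)
    ll = (np.log(c) + c * np.log(beta) + (c - 1.0) * np.log(x) + u
          + (alpha - 1.0) * np.log(odd) - odd - np.log(1.0 - p) - gammaln(alpha))
    return -np.sum(ll)

def ogwg_fit(x, theta0=(1.0, 1.0, 0.5, 1.0)):
    bounds = [(1e-4, None), (1e-4, None), (0.0, 1 - 1e-6), (1e-4, None)]
    res = minimize(ogwg_negloglik, theta0, args=(np.asarray(x),),
                   method="L-BFGS-B", bounds=bounds)
    return res.x, res  # MLEs (c, alpha, p, beta) and the full optimizer output
```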
Monte Carlo Simulation Study
Now we assess the asymptotic properties of the MLEs for the parameters of the OGWG model using Monte Carlo simulations. The simulation study is repeated $N = 5000$ times, each with sample sizes $n = 50, 100, 200$ and with the following parameter scenarios: I: $c = 1.5$, $\alpha = 0.5$, $p = 0.5$ and $\beta = 0.5$; II: $c = 1.5$, $\alpha = 0.5$, $p = 0.1$ and $\beta = 0.5$; and III: $c = 1.5$, $\alpha = 1.5$, $p = 0.5$ and $\beta = 0.8$. We investigate the empirical bias (Bias), mean square error (MSE) and coverage probability (CP) at the nominal level 95%. For $h \in \{c, \alpha, p, \beta\}$, they are respectively defined by
$$\mathrm{Bias}_h = \frac{1}{N}\sum_{i=1}^{N} (\hat{h}_i - h), \qquad \mathrm{MSE}_h = \frac{1}{N}\sum_{i=1}^{N} (\hat{h}_i - h)^2, \qquad \mathrm{CP}_h = \frac{1}{N}\sum_{i=1}^{N} 1_{\{h \in [\hat{h}_i - z_{0.975}\,s_{\hat{h}_i},\ \hat{h}_i + z_{0.975}\,s_{\hat{h}_i}]\}},$$
where $\hat{h}_i$ denotes the MLE of $h$ obtained at the $i$-th repetition of the simulation and $z_{0.975}$ is the quantile 97.5% of the standard normal distribution, i.e., $z_{0.975} \approx 1.95996$. Table 2 gives the values of these measures under the scenarios and different sample sizes as indicated above. In most of the cases, we see that the empirical biases tend to zero when $n$ increases, the empirical MSEs decay toward zero as $n$ increases, and the empirical CPs are quite close to the level 95%. Thus, based on these simulation results, we conclude that the MLEs perform well in estimating $c$, $\alpha$, $p$ and $\beta$. Therefore, the MLEs and their asymptotic results can be adopted for estimating and constructing approximate confidence intervals for $c$, $\alpha$, $p$ and $\beta$.
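The structure of such a study is simple to reproduce. The following Python sketch (our own illustration, reusing `ogwg_rvs` and `ogwg_fit` from the earlier sketches) runs one scenario and reports empirical bias and MSE; computing CP would additionally require extracting $s_{\hat{h}_i}$ from the observed information matrix $J(\hat\Theta)^{-1}$, and in practice failed fits should be filtered out:

```python
import numpy as np

def mc_study(c, alpha, p, beta, n=100, N=1000, seed=0):
    rng = np.random.default_rng(seed)
    true = np.array([c, alpha, p, beta])
    est = np.empty((N, 4))
    for i in range(N):
        x = ogwg_rvs(n, c, alpha, p, beta, rng=rng)  # sampler sketched earlier
        est[i], _ = ogwg_fit(x, theta0=true)         # fitter sketched earlier
    bias = est.mean(axis=0) - true
    mse = ((est - true) ** 2).mean(axis=0)
    return bias, mse

# Scenario I of the paper: c = 1.5, alpha = 0.5, p = 0.5, beta = 0.5
bias, mse = mc_study(1.5, 0.5, 0.5, 0.5, n=100, N=1000)
```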
Data Analysis
In this section, the OGWG distribution is used as a model to analyze two real-life data sets. We compare the fits of the OGWG model with the beta-Weibull (BW) (see [14]), Weibull-geometric (WG) (see [8]) and gamma-Weibull (GW) models (i.e., the OGWG model with $p = 0$); the corresponding pdfs can be found in the cited references. We estimate the model parameters by using the maximum likelihood method as presented in Section 4. We compare the goodness-of-fit of the models using the Cramér-von Mises ($W^*$) and Anderson-Darling ($A^*$) statistics, which are described in detail by [15]. In addition, we consider the Kolmogorov-Smirnov (K-S) statistic, AIC and BIC. In general, the smaller the values of these statistics, the better the fit to the data. Two analyses are performed on two different data sets, as described below.
Data analysis 1:
The first data set, taken from [16], represents the failure times of the air conditioning system of an airplane. Some descriptive statistics are given in Table 3. The skewness is positive (right-skewed data) and the kurtosis is positive. The boxplot and the TTT plot are given in Figure 3. In particular, the TTT plot shows a possibly monotonically increasing or constant hrf, indicating that the OGWG model could be appropriate for fitting this data set. The MLEs (with SEs in parentheses), the $A^*$, $W^*$ and K-S statistics, AIC and BIC are listed in Table 4. For each criterion, the smallest value is attained by the OGWG model, indicating that it provides the best fit. For a visual approach, the estimated pdf and cdf of the OGWG model are displayed in Figure 4, along with the P-P and Q-Q plots. All the graphics show nice fits for the OGWG model. Finally, the asymptotic confidence intervals of the OGWG parameters given by (10) are presented in Table 5 at the levels 95% and 99%.
Data analysis 2:
The second data set, taken from [17], represents the tensile strength, measured in GPa, of 69 carbon fibers tested under tension at gauge lengths of 20 mm. The summary statistics are given in Table 6. The skewness and kurtosis are positive but close to zero. Figure 5 shows the boxplot and the TTT plot. Since the curve in the TTT plot is concave, it seems to correspond to a monotonically increasing hrf, so the OGWG model could be suitable for fitting data set 2. The MLEs (with SEs in parentheses), the $A^*$, $W^*$ and K-S statistics, AIC and BIC are listed in Table 7. All are favorable to the OGWG model. Also, the estimated pdf and cdf of the OGWG model are displayed in Figure 6, as well as the P-P and Q-Q plots. All the graphics show nice fits for the OGWG model. Finally, the asymptotic confidence intervals of the OGWG parameters given by (10) are presented in Table 8 at the levels 95% and 99%. Among others, the results obtained for this data set suggest that the OGWG family of distributions can also be utilized in calibration and errors-in-variables modeling (see [18]).
Concluding Remarks
This paper proposed a new distribution called the odd gamma Weibull-geometric (OGWG) distribution and developed its merits from the mathematical and practical points of view. In particular, we studied its shapes, asymptotes, quantile function, quantile density function, skewness, kurtosis, moments, moment generating function and stochastic ordering. Then, the statistical inference of the OGWG model was studied, with the maximum likelihood estimation method as benchmark. A simulation study was performed to show the usefulness of the obtained estimators. Then, analyses of two real-life data sets were explored, revealing that the OGWG model fits better than the useful beta-Weibull, gamma-Weibull and Weibull-geometric models. It is hoped that the new perspectives of application presented by the OGWG distribution will attract statisticians and practitioners in general.
Table 2 .
Biases, MSEs and CPs of the simulation study.
Table 3 .
Descriptive statistics for data set 1.
Table 4 .
MLEs, their SEs (in parentheses) and goodness-of-fit measures for data set 1.
Table 5 .
Confidence intervals of OGWG for data set 1.
Table 6 .
Descriptive statistics for data set 2.
Table 7 .
MLEs, their SEs (in parentheses) and goodness-of-fit measures for data set 2.
Table 8 .
Confidence intervals of OGWG for data set 2.
"Mathematics"
] |
Introducing Time based Competitive Advantage in IT Sector with Simulation
Incompletion of projects in time leads to project failure, which is a major dilemma of the software industry. Different strategies are used to gain a competitive advantage over competitors in business. From a software perspective, time is an incredibly critical factor: software products should be delivered on time to gain competitive advantage. However, to date, there is no strategy that covers the time perspective. In this paper, a time-based strategy for software products is introduced. More specifically, the importance of the time-based strategy is highlighted by analyzing its associated factors using simulations. Keywords—business strategy; competitive advantage; time-based; competitor; simulation; software industry
INTRODUCTION
A strategy is responsible for designing a plan of actions and assigning the required resources to achieve the long-term goals of an organization [1]. Strategy is viewed as the process of creating a unique and valuable position by means of a set of activities in a way that creates a synergistic pursuit of the objectives of a firm [2]. In terms of its importance, strategy helps to gain substantial advantages and is considered a vital source for generating favorable situations between the firm and its competitors [3], [4]. Strategy is a pattern of resource allocation that enables firms to maintain or improve their performance [5] and to identify future trends and opportunities. A firm can strive to gain competitive advantage over its competitors only when it maintains a difference with its competitors [3]. The entire vision of the firm is created by its business strategy, which describes the internal and external conditions required for competing with competitors. It is crucial that the goals and missions of the organization be clear to everyone; strategy helps to stabilize the firm's goals.
Since the 1980s, competitive advantage has been one of the most significant concerns of business administration. In a business perspective, competitive advantage is described as an attribute and set of unique features through which an organization outperforms its competitors in the targeted market [6]. Researchers have different opinions about the concept of competitive advantage, which has been widely studied in [3], [7]-[9] to analyze firm performance. Competitive advantages can take a number of forms; these can include organizational structure and process [10] and knowledge and capital derived from employees [11], all of which constitute resources residing internally within the firm. There are three basic categories of competitive strategies [12] that can be applied by companies in order to achieve sustainable competitive advantages: low-cost leadership, differentiation, and focus. Differentiation and cost leadership are the two major strategies used to compete with opponent firms [6], [13]-[15]. Finally, by forming a business strategy, a firm can achieve competitive advantage and eventually learn more about its current and future situation.
In time-based competition, time is a critical resource and the most important factor for gaining competitive advantage in a worldwide context. "Time-based competition will be the rule of the day" [16]. Strategic timing is the primary choice of firms, whether to be the first, second, or last mover to the market [17]. The purpose of the time-based strategy is to trim down the time for the completion of the task. "Time-based competitors are offering greater varieties of products and services, at lower costs and in less time than are their more pedestrian competitors" [18]. Researchers, practitioners, and companies have demonstrated through case studies, surveys, and empirical approaches that business and IT (Information Technology) performances are tightly coupled [19]-[27], and enterprises cannot be competitive if their business and IT strategies are not aligned. Time consideration in IT firms is even more critical. According to the Chaos report, only 16.2% of projects complete on time; the rest may fail due to delays in completion [28]. As time is a very critical factor in the production of software, in this paper a time-based strategy for software products is introduced. Multiple factors associated with the time perspective are identified and, finally, the positive and negative effects of these factors are analyzed using simulations so that the importance of time-based factors can be highlighted. Time-based simulations provide a systematic view of when and how to launch a software product to get the maximum competitive advantage, whether it should be an in-time, pre-time, or post-time launch.
The remaining paper is structured as follows: Section II comprises extensive background knowledge. Section III discusses the importance of time-based competitive advantage. Simulation results are described in Section IV. Section V offers a conclusion and future work.
II. BACKGROUND KNOWLEDGE
Business competitive strategies have been used to improve business performance and to gain a competitive advantage for the firm. A survey of the literature was conducted covering business strategies, authors and years, along with their main purpose, as shown in Table 1.
Researchers have proposed different competitive strategies that can be used globally in different contexts of business. There were three basic strategies introduced by Porter in 1980, i.e., cost leadership, differentiation, and focus. With the passage of time, other strategies were added to and derived from the basic ones. Customer-oriented and market-oriented strategies were used in 2006 for building strategies to fulfill customer and market needs. Market differentiation is a sub-strategy of differentiation used in 2007 for producing a unique product in terms of marketing. Similarly, quality differentiation, service differentiation, and innovation differentiation were used in 2007, 2014, 2015 and onward for building service-wise and quality-wise unique products. Then, in 2016, the innovation strategy was further extended into product innovation and process innovation for the introduction and implementation of innovative products and processes, respectively.
Contemporary studies have shown that time is a critical factor, but in existing studies, time-based factors are merged into other strategies as sub-factors. So, there is a need for a time-based strategy for gaining a competitive advantage against competitors.
III. IMPORTANCE OF TIME-BASED COMPETITIVE ADVANTAGE
Traditional business strategies, i.e., differentiation, cost leadership and/or focus, consider "time" a subfactor having less influence on competitive advantage. It may or may not be a subfactor in businesses other than IT, but here it is a critical one. Based on the literature, it is found that time is a critical factor, so a trinity of factors that can influence performance is presented in Fig. 1. A company can gain competitive advantage against its competitor by focusing on differentiation, time, and cost. A company can use a strategy to manage low cost to get competitive advantage, or it may invest more money to make a unique product. If it is investing money to deliver a unique product by utilizing the differentiation strategy and launching it without focusing on time, then there is a chance that its competitor might take the competitive advantage by launching a product at an adequate time, i.e., pre-time, in time, or post time. Fig. 2 shows that individual competitive strategies have a positive impact on competitive advantage; for example, if a company uses differentiation as a business strategy, then it will enhance its competitive advantage. The same is the case for cost leadership (minimizing overall cost). As for time leadership, or the time-based strategy, it will positively impact competitive advantage by completing the product within the planned time.
The introduction of the product to the market depends upon the competitor's launch. This is because both competitors are aware of each other's commercialization, launching time, and product features by different means. The most common way is to track the commercialization events organized by competitors to get updates about the product being launched. Competitors add more features if needed to make the product more ideal to launch. If the features take more time, then a post-time launch is targeted; otherwise, a pre-time launch is suitable. An in-time product launch is ideal when unique features are added to the product or the competitor is not launching its product at the same time. Fig. 3 shows some common factors that affect all three business strategies to gain competitive advantage. If all three strategies were applied to any product development, the result would be an ideal product, i.e., a unique product developed at low cost within minimum time.
A unique product with innovation will impact differentiation positively, but it will increase time to market and require more maintenance; time is important here because uniqueness takes more time. A flexible process claims low cost, while commercialization will increase the cost. More suitable and concrete planning will aid time leadership, along with time to market and market needs, which also impact it positively. Here, time to market has a positive impact on time leadership; therefore, we cannot neglect the individuality of both [36].
Table 1 summarizes the surveyed strategies, their descriptions, and representative references:

Cost Leadership — Reducing the cost to the organization of delivering products or services.
Differentiation — Making your products or services unique from, or more striking than, those of your rivals.
Focus — Focusing on a narrow fragment and, within that fragment, trying to accomplish either cost advantage or differentiation. (Gaurav & Himanshu 2016 [39], Wa'el 2015 [40])
Customer Relation Management (CRM) — Organizing a company's relations with existing and prospective customers to improve business relationships and retain customers. (John et al. 2006 [29])
Customer Oriented — Concentrating on fulfilling a customer's needs rather than only increasing profit. (Devakumar & Barani 2016 [37])
Market Penetration — Practical for successfully placing your product when the company enters a new market; helpful to increase product demand and raise the market profile. (John et al. 2006 [29], Bulent et al. 2007 [36])
Marketing and Market Oriented — Focusing on how to increase sales by getting and keeping customers; focusing on the needs of the market. (Devakumar & Barani 2016 [37])
Sale Service Support — Providing maintenance after delivering the services.
Staff Development — An information strategy with supporting process documents for staff development topics. (Jeffrey & Joohyung 2016 [31])
Family Owned — Two or more family members are involved, and the majority of rights or control lies within a family; using this strategy, creativity, human resource efficiency, structural R&D factors, and returns and business growth may increase.
Innovation Differentiation — Finding ways to optimize a precise set of differentiators that are most relevant to a specific set of needs. (Amir et al. 2014 [30], Amir et al. 2015 [34])
Quality Differentiation — Used to differentiate the product in terms of quality. (Amir et al. 2014 [30], Amir et al. 2015 [34])
Marketing Differentiation — Used to differentiate the product in terms of marketing.
IV. SIMULATION RESULTS
We present simulations for limited scenarios here; simulation results for more concrete and real scenarios will be investigated in an upcoming paper. According to the simulation results, the right decision can be made by foreseeing the impact of the result in the future. To investigate the factors that have an impact on competitive advantage with respect to time, an extensive literature review was carried out. The most influential factors were extracted and mapped with respect to time. Time-to-market of one's own product versus a competitor's, resources, HR skills, project management, features of one's own product and the competitor's, and maintenance were the factors affecting competitive advantage.
A Likert scale was initially used for all factors to convey more simply the idea of company A winning competitive advantage against its competitors. Time to market was calculated for two companies to start the simulation, i.e., company A and its competitor, and then the difference was analyzed. If company A has more weeks to spare before launch than its competitor, and the number of features it has built is larger and/or more unique, then company A has more chances of winning competitive advantage by launching the product in time. In contrast, if it has fewer features, then it has to apply a sub-strategy: launch the product post-time after completing the features, adding some more features, and improving on the limitations found in the product recently launched by the competitor. It is obvious that all competitors keep track of each other's product updates by different means. If the competitor has more chances of launching the product in time, as it has a more positive value of time to market compared to company A, then company A has no option other than to launch post-time. Here, another strategy of increasing resources can be applied, by simulating the time difference and feature differences, but it will increase the cost. Now, let us simulate the impact of the competitive advantage dependency. After performing simulations, the generated results are presented in graphs to represent the trends of how different factors exert influence when a time-based strategy is introduced. Fig. 4 shows that when the time difference is greater than 0 and the feature difference is equal to or greater than 0, the competitive advantage will be greater than zero and the product can be launched. The competitive advantage can also be negative if the differences go negative overall; in that case, time-based sub-strategies are applied, as shown in Fig. 5. A minimal sketch of this decision logic is given below.
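The paper does not publish its exact scoring model, so the following Python sketch is purely hypothetical: the function name `launch_decision`, the additive score, and the week/Likert-scale units are our assumptions, meant only to make the described decision logic concrete.

```python
def launch_decision(time_diff_weeks, feature_diff):
    """time_diff_weeks: own spare time to market minus the competitor's;
    feature_diff: own feature count/uniqueness minus the competitor's (Likert-like scale)."""
    competitive_advantage = time_diff_weeks + feature_diff  # assumed additive score
    if time_diff_weeks > 0 and feature_diff >= 0:
        return competitive_advantage, "launch in time (or pre-time)"
    if competitive_advantage < 0:
        return competitive_advantage, "apply sub-strategy: post-time launch after adding features"
    return competitive_advantage, "add resources (raises cost) or launch post-time"

print(launch_decision(time_diff_weeks=3, feature_diff=1))
```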
V. CONCLUSION AND FUTURE WORK
We have tried to highlight time-based competitive advantage, focusing on the time to complete and launch the product pre-time, in time, or post time to get the maximum advantage over the competitor. In the literature, time is used as a sub-factor; however, our intention was to highlight its primary importance. For example, in Information Technology, time, cost, and quality are the most critical factors for completing a product, and the majority of software does not meet its deadline; hence, competitors take the advantage by launching their product at an adequate time and attract the market towards them. A simulation was performed to check the influence of different factors on time in gaining a competitive advantage. Small strategies were applied to get the competitive advantage with respect to the competitors. Simple scaling and raw data were used, which will be replaced by a more concrete scale and an empirical study in a coming paper. Based on these results, we claim that time-based competitive advantage deserves primary importance as a strategy alongside cost leadership and differentiation. In future work, we also intend to introduce the time-based competitive advantage strategy in the local as well as the global software development (GSD) context. In GSD, time zone difference is the main reason for increasing problems in synchronous communication and for not completing the project within time, which leads to the failure of projects or delays in the launching of the product. By introducing the time-based strategy in the global context, we may gain a competitive advantage in round-the-clock development as compared to our competitors.
Fig. 1. Trinity of time, cost, and differentiation to gain competitive advantages.
"Business",
"Computer Science"
] |
Intermittent saltation drives Mars-like sand transport on Titan
Introduction
The Cassini-Huygens mission has revealed that Titan's low-latitude surface presents a variety of landforms [Lorenz et al., 2006, Lopes et al., 2019, MacKenzie et al., 2021], including gigantic linear dunes similar in shape to those of the Namib desert [Radebaugh et al., 2008, 2010]. Analyses of Cassini spectral data, combined with atmospheric and radiative transfer modeling, have further revealed that Titan presents an active dust cycle [Charnay et al., 2015, Rodriguez et al., 2018]. This observational evidence suggests that, much like on Earth, Titan dunes actively evolve by an aeolian, or wind-driven, transport process known as saltation: after being lifted and accelerated by the wind, surface grains hop along the granular bed, rebounding and splashing other grains into the airflow [Kok et al., 2012, Pähtz et al., 2020]. Titan's sand grains are not made of silicates as on Earth but mainly of solid organics precipitated from the atmosphere [Lorenz, 2014]. Even though their physical properties are not precisely known, previous studies have suggested that these organic grains could be less dense and more cohesive than quartz sand [Imanaka et al., 2012, Hörst and Tolbert, 2013, He et al., 2017, Méndez-Harper et al., 2017, Yu et al., 2017, 2020a,b]. This, in combination with Titan's denser atmosphere and reduced gravity, leads to fundamental differences in dune formation on Earth and Titan. One of the main, yet poorly understood differences is that Titan's dunes appear to be shaped by surface winds opposite in direction to the prevailing atmospheric circulation [Tokano, 2008, McDonald et al., 2016, Ewing et al., 2015]. A commonly accepted explanation is that the threshold wind speed required to initiate saltation, the so-called fluid threshold, lies above the speed of the prevailing easterly winds [Burr et al., 2015] but below the speed of stronger westerlies generated by equatorial methane storms at equinox [Charnay et al., 2015]. This is based on the consensus that particle lifting on Titan is done primarily by aerodynamic forces and that transport cannot be sustained below the fluid threshold [Kok et al., 2012]. Recent studies, however, have suggested that saltation of cohesive grains, such as those on Titan, can be sustained through rebound and granular splash at much lower wind speeds than those required to initiate grain motion [Comola et al., 2019a, Pähtz et al., 2021]. The role of the fluid threshold in Titan's dust cycle and landscape evolution may therefore be less relevant than previously thought.
Here, we aim to shed light onto aeolian transport processes on Titan through a combination of laboratory experiments, theory, and numerical modeling. For this purpose, we propose novel parameterizations for the aerodynamic entrainment and granular splash processes that account for the effect of cohesive forces among surface grains. We specify the key physical parameters of these parameterizations, namely grain density, elasticity, and cohesion, based on recent experimental investigations [Yu et al., 2017] and test their performance against the results of a discrete element model. We then account for the proposed entrainment parameterizations in the saltation model COMSALT [Kok and Renno, 2009] to investigate how sediment mass flux scales with friction velocity on Titan. We finally include the mass flux parameterization in the general circulation model TAM [Lora et al., 2015, 2019] to quantify yearly sediment transport rates and drift directions on Titan. We find that Titan's prevailing circulation drives intermittent sediment transport of the order of what is found in mobile Martian dune fields.
Fluid threshold on Titan
A correct estimation of the fluid threshold is essential to understand the conditions that allow for aerodynamic entrainment of grains on Titan. To estimate the fluid threshold, we use the well-known parameterization [Shao and Lu, 2000]
$$u_{*,ft} = \sqrt{A_N\left(\frac{\rho_p}{\rho_f}\,g\,d + \frac{\gamma}{\rho_f\,d}\right)}, \qquad (1)$$
where $A_N = 0.0123$ is an empirical dimensionless parameter, $g \approx 1.35$ m s$^{-2}$ is the gravitational constant, $d$ is the particle diameter, $\rho_f \approx 5.2$ kg m$^{-3}$ is the air density, $\rho_p \approx 950 \pm 450$ kg m$^{-3}$ is the particle density (uncertainty estimations throughout the paper refer to standard errors), and $\gamma$ is a cohesion coefficient.
Cohesion is related to the intrinsic stickiness of the material (the surface energy), the particle shape and roughness, the stiffness of the contacting grains, and the moisture conditions [Israelachvili, 1986]. It is usually assumed that $\gamma \propto \beta = F_\phi/d$, that is, the ratio between the cohesive force $F_\phi$ and the particle size $d$. It is important to note that $\beta$ represents the average cohesive force between grains, whereas $\gamma$ represents the cohesive force acting on the grains that are more readily lifted by the wind. Because of this discrepancy, the proportionality constant between $\gamma$ and $\beta$ is generally unknown. We therefore estimate $\gamma$ for Titan grains by assuming that the ratio of $\gamma$ and $\beta$ is equal on Earth and Titan, that is,
$$\frac{\gamma_T}{\beta_T} = \frac{\gamma_E}{\beta_E}. \qquad (2)$$
Measurements of the fluid threshold for quartz sand suggest that $\gamma_E \approx 0.33 \pm 0.17$ mN m$^{-1}$ [Shao and Lu, 2000].
Furthermore, laboratory measurements of the cohesive forces for quartz sand and Titan-analog grains, known as tholins, suggest that $\beta_E \approx 1.2$ mN m$^{-1}$ [Corn, 1961] and $\beta_T \approx 27 \pm 20$ mN m$^{-1}$ [Yu et al., 2017]. Based on equation (2) and accounting for error propagation, we estimate that $\gamma_T \approx 7.3 \pm 6.7$ mN m$^{-1}$. We use this range of $\gamma_T$ in equation (1) to estimate the variation in fluid threshold with particle size (Figure 1). The results indicate that the minimum shear velocity required to lift a grain on Titan (red curve in Figure 1) is $u_* \approx 0.12$ m s$^{-1}$, which is approximately three times larger than expected if cohesive forces among organic grains on Titan were equal to those among sand particles on Earth (green curve in Figure 1). Furthermore, this minimum value corresponds to a particle size $d \approx 2$ mm, meaning that the particles that are easiest to lift on Titan are roughly one order of magnitude larger than the sand particles that are easiest to lift on Earth (blue curve in Figure 1). Critically, we find that previous measurements carried out in a wind tunnel with environmental conditions similar to those on Titan (black circles in Figure 1) may have significantly underestimated the fluid threshold and the size of the more mobile grains on Titan due to the low cohesion of the sediments used for the experiments [Burr et al., 2015, Yu et al., 2017].
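These threshold estimates are easy to reproduce from equation (1). The following Python sketch (our own illustration, using the parameter values quoted above and the central estimate of $\gamma_T$) evaluates the fluid-threshold curve over a range of particle sizes and locates its minimum:

```python
import numpy as np

A_N = 0.0123          # empirical coefficient [Shao and Lu, 2000]
g = 1.35              # m s^-2, Titan gravity
rho_f = 5.2           # kg m^-3, air density
rho_p = 950.0         # kg m^-3, particle density
gamma_T = 7.3e-3      # N m^-1 (7.3 mN m^-1, central estimate for Titan grains)

d = np.logspace(-5, -2, 400)  # particle diameter, m
u_ft = np.sqrt(A_N * ((rho_p / rho_f) * g * d + gamma_T / (rho_f * d)))

i = np.argmin(u_ft)
print(f"minimum u_*ft = {u_ft[i]:.3f} m/s at d = {d[i] * 1e3:.2f} mm")
# -> roughly 0.12 m/s at d ~ 2 mm, consistent with the text
```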
Impact threshold on Titan
Our analyses have so far suggested that the fluid threshold on Titan may be significantly higher than previously thought due to the high cohesion of surface grains. We now investigate the effect of cohesion on the minimum wind speed required to sustain saltation through the granular splash process, the so-called impact threshold $u_{*,it}$ [Pähtz and Durán, 2018]. Granular splash is a complex and highly stochastic process controlled by interparticle collisions and cohesive bonds among neighboring grains. To predict the mean velocity of the splashed grains $v_s$, we extend an expression for loose granular materials [Kok and Renno, 2009] with an additional term that accounts for the effect of cohesion (equation (3); see supporting information section S1 for the analytical derivation). In equation (3), $\phi$ is the elastic energy released upon the breaking of cohesive bonds, and $\delta \approx 0.3$ is the fraction of elastic energy dissipated. The elastic energy $\phi$ is a function of the cohesive force $F_\phi$ and the effective bond elastic modulus $E$, which we estimate from experiments on tholin particles [Yu et al., 2017] (see supporting information section S1 and Table S2). Further, $\mu \approx 0.15$ is the average fraction of impacting momentum spent on splashing surface particles [Kok and Renno, 2009]. The proportionality coefficients $a \approx 0.03$ and $b \approx 1.2$, which scale the contributions of collisional and cohesive forces to the ejection velocity, are assigned based on literature values [Kok and Renno, 2009] and by fitting data from discrete element simulations of the splash process over cohesive surfaces (see supporting information section S1).
To estimate the mean number of splashed grains $N_s$, we adopt a splash model derived from the energy and momentum conservation equations [Comola and Lehning, 2017]. This model was shown to be in good agreement with a variety of experimental results, including granular splash data of cohesive snow and ice grains. For a granular bed of uniform spherical grains, the average number of splashed grains predicted by the energy conservation equation (equation (4)) depends on $\epsilon_r$, the fraction of impact energy retained by the rebounding grain, $P_r$, the probability of rebound [Anderson and Haff, 1991, Andreotti, 2004], and $\epsilon_f$, the fraction of energy dissipated to the bed. Furthermore, the average number of splashed grains predicted by the horizontal momentum conservation equation (equation (5)) depends on $\mu_r$, the fraction of momentum retained by the rebounding grain, $\mu_f$, the fraction of momentum lost to the bed, $\alpha_i$, the vertical impact angle, $\cos\alpha_s$, the cosine of the vertical splash angle, and $\cos\beta_s$, the cosine of the horizontal splash angle. The values of all parameters in equations (4) and (5) are assigned based on experimental measurements [Willetts and Rice, 1986, 1989, Rice et al., 1995, 1996, Nalpanis et al., 1993, Ammi et al., 2009] (see Table S1 in the supporting information). Following previous approaches [Kok and Renno, 2009, Comola and Lehning, 2017], we take the number of splashed grains as the smaller of the two predictions, to represent the transition from a momentum-limited to an energy-limited splash process. We discuss the generalizations of equations (3)-(5) for mixed-sized granular beds in the supporting information (section S1).
We test the predictions of equations (3)-(5) against the results of a discrete element model that was previously used to investigate the role of cohesion in the granular splash process [Comola et al., 2019a] (see the supporting information section S2 for details on the model equations). Equations (3)-(5) closely reproduce the variation in velocity and number of splashed grains with cohesion predicted by the discrete element simulations for different combinations of grain size and impact velocity (Figure 2). The results suggest that the splash process is weakly sensitive to cohesion when the energy released by cohesive bonds is small compared to the gravitational potential energy ($\phi/(mgd) \lesssim 10^{-1}$). Conversely, cohesion exerts a relevant control on the splash process for larger values of $\phi/(mgd)$ by increasing the mean splash velocity (Figures 2a and 2c) and decreasing the number of splashed grains (Figures 2b and 2d). The physical reason for these results is that stronger cohesive bonds, albeit more unyielding, release a larger amount of elastic energy upon breaking, thereby increasing the grain ejection velocity [Comola et al., 2019a]. We find that cohesive forces have a small impact on the granular splash of organic grains on Titan. The cohesive energy $\phi$, estimated from the experimental measurements [Yu et al., 2017], is in fact barely sufficient to affect the granular splash process of particles of size $d = 0.25$ mm (gray areas in Figures 2a and 2b). Cohesion is even less relevant for the granular splash of coarser grains of size $d = 2.5$ mm (gray areas in Figures 2c and 2d), which are primarily splashed through chains of interparticle collisions, similar to how sand grains on Earth are splashed (black markers in Figures 2a and 2b) [Crassous et al., 2007].
To investigate the effect of cohesion on the impact threshold $u_{*,it}$ on Titan, we implement equations (3)-(5) in the comprehensive saltation model COMSALT [Kok and Renno, 2009, Kok, 2010a,b] and simulate Titan saltation for a wide range of cohesive forces (see the supporting information sections S3 and S4 for the implementation details). Critically, we find that, even though cohesive forces greatly increase the fluid threshold on Titan, they only slightly affect the impact threshold of grains larger than 0.1 mm (blue lines in Figure 3), and only moderately increase the impact threshold of smaller grains. Most importantly, the minimum impact threshold $u_{*,it} \approx 0.03$ m s$^{-1}$ is a factor of four smaller than the minimum fluid threshold, suggesting that Titan saltation may be sustained at wind speeds much smaller than those required to initiate it. Furthermore, the minimum impact threshold corresponds to a particle size $d \approx 0.1$ mm, which is one order of magnitude smaller than the size of particles most easily lifted by aerodynamic forces.
Size of saltating grains on Titan
Our results have thus far indicated that the minimum fluid threshold corresponds to a particle size $d \approx 2$ mm, whereas the minimum impact threshold corresponds to a particle size $d \approx 0.1$ mm. It follows that the size of grains in saltation may depend on the wind speed, that is, coarser near the transport initiation threshold and finer near the transport cessation threshold.
To investigate the size range of saltating grains, we assume that Titan's surface presents mixed-sized grains in the range 0.05–2 mm, similar to sand grains on Earth. We further assume that, whenever the wind speed exceeds the minimum fluid threshold, all surface grain sizes are susceptible to motion according to the equal susceptibility principle [Martin and Kok, 2019]. We follow a similar approach to previous studies [Greeley and Iversen, 1985, Nishimura and Hunt, 2000, Sullivan and Kok, 2017] and investigate the size distribution of grains in saltation by evaluating the ratio $w_s/u_*$, where $w_s$ is the terminal fall velocity as a function of the grain size (green curve in Figure 3). Values of $w_s/u_*$ near unity indicate that gravitational and turbulent forces are of the same order of magnitude and grain transport is therefore transitional between saltation and suspension. We find that, near the threshold for transport initiation ($u_* \approx u_{*,ft}$), $w_s/u_* > 1$ for $d > 0.2$ mm, whereas, near the threshold for transport cessation ($u_* \approx u_{*,it}$), the ratio $w_s/u_* > 1$ for $d > 0.05$ mm.
These results suggest that the size of saltating grains at the onset of transport lies in the range $d \approx 0.2$–2 mm, as smaller grains become suspended in turbulent eddies. Conversely, close to the cessation of transport, the size of saltating grains lies in the lower range $d \approx 0.05$–0.1 mm, because the wind speed is not sufficient to sustain saltation of larger grains through rebound and splash.
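The $w_s/u_*$ diagnostic can be approximated with any standard settling-velocity formula. The Python sketch below (our own illustration) uses the formula of Ferguson and Church (2004) as a convenient stand-in, since the paper does not specify its drag law; the viscosity value for Titan's nitrogen atmosphere is likewise our assumption, so the printed ratios are only indicative.

```python
import numpy as np

g, rho_f, rho_p = 1.35, 5.2, 950.0   # Titan values quoted in the text
nu = 6.3e-6 / rho_f                  # kinematic viscosity, assuming mu ~ 6.3e-6 Pa s (N2 near 94 K)
R = (rho_p - rho_f) / rho_f          # submerged specific gravity

def settling_velocity(d, C1=18.0, C2=1.0):
    """Ferguson & Church (2004): smooth transition from Stokes to turbulent settling."""
    return R * g * d**2 / (C1 * nu + np.sqrt(0.75 * C2 * R * g * d**3))

for d in (0.05e-3, 0.2e-3, 2e-3):
    ws = settling_velocity(d)
    print(f"d = {d * 1e3:.2f} mm: ws/u*_it = {ws / 0.03:.2f}, ws/u*_ft = {ws / 0.12:.2f}")
```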
Mass flux scaling on Titan
Our analyses indicate that initiation and cessation of saltation on Titan occur at very different wind speeds, yielding a ratio between the impact and fluid thresholds $u_{*,it}/u_{*,ft} \approx 0.25$, much smaller than previously thought [Kok et al., 2012]. This suggests that saltation on Titan can be sustained at much lower wind speeds than those required to initiate it, similarly to the transport mechanisms on Mars [Sullivan and Kok, 2017]. We find that the surface wind speeds in Titan's equatorial band (30°S–30°N) predicted by general circulation models [Tokano, 2010, Lebonnois et al., 2012, Lora et al., 2015, Newman et al., 2016] exceed the impact threshold 15–30% of Titan's year and can therefore sustain sediment transport (see the supporting information section S5). To quantify the sediment transport rates driven by the prevailing circulation, we derive a saltation mass flux parameterization for Titan conditions and test its accuracy against COMSALT simulations.
Previous studies have suggested that the general expression for the steady-state saltation mass flux reads $Q = \rho_f\,(u_*^2 - u_{*,it}^2)\,L/\Delta v$, where $L$ is the mean hop length of saltating grains and $\Delta v$ is the mean difference in grain horizontal velocity before and after impacting the bed [Durán et al., 2011, Kok et al., 2012]. In steady-state saltation, the impact velocity is bound to yield a mean replacement capacity equal to 1, that is, to generate on average one splashed grain for every impactor that fails to rebound [Ungar and Haff, 1987]. It follows that $\Delta v$ is independent of $u_*$ and rather scales as $\Delta v \sim u_{*,it}$. Conversely, the hop length $L$ is determined in part by particle speeds higher up in the saltation layer. For saltation on Earth, $L$ is only a weak function of $u_*$ and is often assumed to scale as $L \sim u_{*,it}^2/g$ [Durán et al., 2011, Martin and Kok, 2017]. However, saltation on Titan is characterized by much longer hop times than on Earth due to the higher air density, thus higher air drag, and smaller gravity. It follows that particle speeds in the upper part of the saltation layer can scale with $u_*$ without producing a strong increase in impact velocity.
Assuming similar proportions in the populations of grains in saltation and in reptation near the surface [Andreotti, 2004, Lämmel et al., 2012], the mean hop length on Titan scales as $L \sim u_{*,it}\,(u_* + u_{*,it})/g$.
The proposed scalings for $L$ and $\Delta v$ on Titan are confirmed by COMSALT simulations (see supporting information section S6) and yield a mass flux
$$Q = A\,\eta_q\,\frac{\rho_f}{g}\,(u_*^2 - u_{*,it}^2)\,(u_* + u_{*,it}), \qquad (6)$$
where $A \approx 2.3$ is a dimensionless scaling coefficient and $\eta_q \in (0, 1)$ is the intermittency factor that quantifies the fraction of time that saltation is active when the unsteady wind speeds oscillate between the impact and fluid thresholds. We calculate $\eta_q$ using the parameterization of Comola et al. [2019b], which was validated using extensive field data from three different locations on Earth. This parameterization predicts transport intermittency based on the friction velocity and the Obukhov stability parameter, which quantify the shear-generated and buoyancy-generated turbulence driving the variability in wind speed (see supporting information section S5 for details). We find that the mass fluxes predicted with equation (6) are in good agreement with steady-state mass fluxes obtained with COMSALT for a variety of particle sizes and friction velocities (Figure 4a).
The mass flux scaling $Q \propto u_*^3$ of equation (6) is typical of particle flows that dissipate energy through a combination of fluid drag, particle-bed collisions, and binary collisions between airborne grains [Pähtz and Durán, 2020] and is found in another mass flux parameterization by Kawamura [1951], which has been commonly used in planetary saltation studies [e.g., White, 1979, Lee and Thomas, 1995, Charnay et al., 2015, Gebhardt et al., 2020]. However, in the original parameterization by Kawamura [1951], it is assumed that fluid lifting drives continuous sediment transport and that the friction velocity at the bed, for which the threshold friction velocity is a proxy in the mass flux equation, is equal to the fluid threshold. We find that our parameterization accounting for transport intermittency (equation (6)) predicts a significantly larger mass flux than the continuous transport parameterization by Kawamura [1951] when the wind speed lies between the impact and fluid thresholds, as is often the case on Titan (Figure 4b).
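The comparison is easy to reproduce numerically. The following Python sketch (our own illustration) contrasts the flux of equation (6), as reconstructed above, with the classical Kawamura form; the Kawamura coefficient $C_K \approx 2.78$ and the fixed $\eta_q$ used here are assumptions for illustration only, since the paper computes $\eta_q$ from atmospheric stability.

```python
import numpy as np

rho_f, g = 5.2, 1.35            # Titan air density and gravity
u_it, u_ft = 0.03, 0.12         # minimum impact and fluid thresholds, m/s
A, C_K = 2.3, 2.78              # scaling coefficients (C_K is an assumed literature value)

def q_eq6(u, eta_q=0.5):
    """Equation (6): active for u* > u*_it, modulated by the intermittency factor."""
    return np.where(u > u_it, A * eta_q * rho_f / g * (u**2 - u_it**2) * (u + u_it), 0.0)

def q_kawamura(u):
    """Kawamura (1951): continuous transport above the fluid threshold only."""
    return np.where(u > u_ft, C_K * rho_f / g * (u - u_ft) * (u + u_ft) ** 2, 0.0)

u = np.linspace(0.0, 0.3, 7)
print(np.column_stack([u, q_eq6(u), q_kawamura(u)]))
# Between u*_it and u*_ft, equation (6) predicts a nonzero flux while Kawamura gives zero.
```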
Aeolian activity on Titan
We assess the aeolian transport potential of Titan's general circulation by implementing the proposed mass flux parameterization (equation (6)) in the Titan Atmospheric Model (TAM) [Lora et al., 2015, 2019], accounting for the effect of large-scale topography [Corlies et al., 2017] (see supporting information section S7 for additional detail). We perform runtime calculations of the wind-driven saltation mass flux for five Titan years, using surface friction velocities and intermittency factors computed at the model time step of 10 minutes, horizontal resolutions of approximately 5.6 degrees, and a vertical grid with 48 levels of varying pressure thickness. We assign the instantaneous mass flux direction equal to the corresponding wind direction at the first node above the surface.
The simulated yearly mass fluxes on Titan show a significant spatial variability in magnitude and direction (red arrows in Figure 5a). We find that accounting for Titan's large-scale topography leads to drift directions that diverge significantly from Titan's prevailing easterly circulation (the drift directions in the absence of topography are shown in Figure S7 of the supporting information). The effect of large-scale topography might thus partly explain the inconsistency between the direction of the prevailing winds and the eastward orientation of the linear dunes, which previous studies have thus far attributed to the occurrence of eastward-propagating methane storms [Charnay et al., 2015], long climate cycles [Ewing et al., 2015], and orbital forcing [McDonald et al., 2016]. The model results indicate that Titan presents regions of significant aeolian activity, with yearly mass fluxes of the order of $10^5$ kg m$^{-1}$ year$^{-1}$ (note that one Titan year corresponds to approximately 29.5 Earth years). Furthermore, sediment transport is active approximately 30% of the year (Figure 5b), with higher saltation activity during the summer season in the northern and southern regions (blue and red lines in Figure 5b) and with little seasonality in the equatorial region (green line in Figure 5b).
Discussion
We combined experimental results, theory, and modeling to investigate the conditions that lead to sediment transport initiation and cessation on Titan. We found that the minimum fluid threshold ($u_* \approx 0.12$ m s$^{-1}$) corresponds to a particle size $d \approx 2$ mm, whereas the minimum impact threshold ($u_* \approx 0.03$ m s$^{-1}$) corresponds to a particle size $d \approx 0.1$ mm (Figure 3). Furthermore, the impact threshold is smaller than the fluid threshold for grains smaller than 2 mm, whereas the fluid threshold is smaller than the impact threshold for larger grains. The granular splash process is thus more effective than aerodynamic forces in lifting submillimeter grains from the surface. Conversely, transport of super-millimeter grains is primarily sustained by aerodynamic entrainment, which typically occurs in dense fluid flows such as fluvial environments on Earth [Pähtz et al., 2020]. It is noteworthy that the fluid threshold values predicted by equation (1) are representative of wind tunnel conditions, where turbulence scales are much smaller than in the atmospheric boundary layer. Because aerodynamic entrainment is predominantly caused by turbulent fluctuation events [Pähtz et al., 2020], it is possible that the fluid threshold on Titan may be up to 50% smaller than what is predicted by equation (1) due to the large turbulent motions in the thick boundary layer [Pähtz et al., 2018]. Despite these uncertainties, the separation between the minimum fluid and impact thresholds on Titan is likely to be significantly larger than on Earth [Martin and Kok, 2018, Ho et al., 2011]. Much like saltation on Mars, Titan saltation may therefore be characterized by a process of hysteresis whereby the occurrence of transport below the fluid threshold depends on the history of the wind, that is, saltation occurs only if transport was initiated ($u_* > u_{*,ft}$) more recently than it was terminated ($u_* < u_{*,it}$) [e.g., Kok, 2010a].
We investigated the size of saltating grains on Titan by evaluating the ratio between settling velocity and friction velocity, w_s/u*, for a wide range of grain sizes. We found that the size range of saltating grains may depend on the wind speed, varying from 0.2–2 mm near the transport initiation threshold to 0.05–0.1 mm near the transport cessation threshold. Note that our analysis, based on the equal-susceptibility assumption, may provide incorrect estimates of the size of saltating grains if some grain sizes are more susceptible to motion than others. For instance, Sullivan and Kok [2017] found that 0.1 mm grains are prevalent in actively migrating ripples on Mars even though w_s/u*,ft is much larger than one for this particle size.
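The settling velocity entering the ratio w_s/u* can be obtained from a standard force balance on a spherical grain. The sketch below uses approximate values for Titan's near-surface density and viscosity and an assumed organic grain density; it illustrates the method, not the exact calculation behind Figure 3.

```python
import numpy as np

RHO_ATM = 5.1    # approximate Titan near-surface air density [kg m^-3]
MU = 6.3e-6      # approximate dynamic viscosity of cold N2 [Pa s]
G = 1.352        # Titan surface gravity [m s^-2]
RHO_P = 1000.0   # assumed organic grain density [kg m^-3]

def settling_velocity(d, n_iter=200, tol=1e-9):
    """Terminal fall speed of a sphere of diameter d [m], balancing
    weight, buoyancy, and drag with a Reynolds-dependent drag
    coefficient Cd = 24/Re (1 + 0.15 Re^0.687)."""
    w_s = 1e-3  # initial guess [m s^-1]
    for _ in range(n_iter):
        re = max(RHO_ATM * w_s * d / MU, 1e-12)
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)
        w_new = np.sqrt(4.0 * (RHO_P - RHO_ATM) * G * d / (3.0 * cd * RHO_ATM))
        if abs(w_new - w_s) < tol:
            break
        w_s = w_new
    return w_s

# Grains saltate roughly when w_s/u* is of order one or smaller.
for d in (5e-5, 2e-4, 2e-3):
    w_s = settling_velocity(d)
    print(f"d = {d*1e3:.2f} mm: w_s = {w_s:.3f} m/s, "
          f"w_s/u*_it = {w_s/0.03:.1f}, w_s/u*_ft = {w_s/0.12:.1f}")
```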
Our analyses further indicated that the saltation mass flux on Titan scales with the third power of the wind friction velocity, that is, Q ∝ u*³ (equation (6) and Figure 4). This suggests a higher sensitivity of the transport rate to the wind speed compared to Earth conditions, where Q ∝ u*² [Martin and Kok, 2017].
However, the larger separation between the fluid and impact thresholds on Titan, combined with the typically low wind speeds of the prevailing circulation, is more likely to cause intermittent transport than on Earth [Comola et al., 2019b]. We implemented the proposed mass flux scaling in the Titan general circulation model TAM and estimated that the regions with more intense aeolian activity present transport rates of the order of 10⁵ kg m⁻¹ per Titan year (Figure 5a). This is similar to the transport rate observed on the most mobile dune fields on Mars [see, e.g., Bridges et al., 2012], where the atmosphere is more energetic but less dense than on Titan. Our TAM simulations indicate that transport intermittency causes saltation to be active approximately 30% of the year, with significant seasonal variations (Figure 5b). Given the poor constraints on Titan's topography and the limitations involved in resolving convective processes in current Titan GCMs, large uncertainties remain in how methane storms and fine-scale topographic features affect Titan's aeolian activity. Nevertheless, our analyses indicated that Titan's prevailing winds are capable of generating a significant "background" aeolian activity and that the effect of large-scale topography on near-surface winds is critical to explaining Titan's geomorphology and landscape evolution.
Acknowledgments
We thank Tetsuya Tokano, Claire Newman, Kirby Runyon, and Benjamin Charnay for sharing their Titan GCM and RCM model outputs. The authors also wish to thank Thomas Pähtz for the insightful discussions on the uncertainty in the fluid threshold value and the effect of the viscous sublayer on the impact threshold. All data presented in this paper will be made available at the following repository: doi:10.17632/97j874sph6.1.
Figure 1: Variation in the fluid threshold with particle size on Earth and Titan. The red curve refers to organic grains on Titan, the blue one to quartz grains on Earth, and the green one to the hypothetical fluid threshold of quartz grains on Titan. The shaded areas indicate standard errors, obtained by propagating the uncertainties in the cohesion coefficient γ and particle density ρ_p. Black diamond markers indicate wind tunnel measurements of the fluid threshold in Earth conditions [Bagnold, 1937, Chepil, 1945, Zingg, 1953, Iversen et al., 1976, Fletcher, 1976]. Black circles indicate fluid threshold measurements carried out in the Titan wind tunnel [Burr et al., 2015] for sediments with weaker cohesive bonds than organic grains on Titan (silica sand, basaltic sand, glass spheres, walnut shells).
Figure 2: Variation in velocity and number of splashed grains with cohesion. Colored lines refer to analytical results of equations (3)–(5), colored markers refer to discrete element simulations, and black markers refer to experimental results. Marker and line colors indicate different values of impact velocity v_i, with red indicating v_i = 1 m s⁻¹, green v_i = 3 m s⁻¹, and blue v_i = 5 m s⁻¹. (a) Velocity and (b) number of splashed grains from a monodisperse granular bed with particle size d = 0.25 mm. The number of splashed grains is normalized by the Froude number of the impacting grain, Fr_i = v_i/√(gd). Black markers indicate experimental results for sand particles of similar size (d ≈ 0.3 mm) [Willetts and Rice, 1985, 1986, Rice et al., 1995, Willetts and Rice, 1989, Rice et al., 1996, Gordon and McKenna-Neuman, 2011]. (c) Velocity and (d) number of splashed grains from a monodisperse granular bed with particle size d = 2.5 mm. The shaded gray areas indicate the estimated range of cohesion for organic grains on Titan.
Figure 3: Variation in fluid threshold, impact threshold, and settling velocity of grains on Titan. The fluid threshold u*,ft (red curve, also shown in Figure 1) is estimated with equation (1). The impact threshold u*,it is estimated with the saltation model COMSALT for three different values of the cohesion coefficient β (blue curves), which span the whole uncertainty range of cohesive forces among organic grains on Titan. The settling velocity w_s (green curve) is calculated by balancing the gravitational, drag, and buoyancy forces acting on spherical grains in still air. The shaded areas indicate one standard error from the mean.
Figure 4: Saltation mass flux scaling for Titan conditions. (a) Comparison of mass fluxes predicted by COMSALT and estimated with equation (6) for different combinations of particle size and wind friction velocity. Because COMSALT simulates continuous transport, we assumed η_q = 1 in equation (6). (b) Titan mass flux scaling predicted with equation (6) in conditions of intermittent transport (η_q < 1, dashed red line) and with the mass flux equation by Kawamura [1951] commonly used in planetary aeolian transport studies (solid blue line). We assumed that grain sizes on Titan lie within the range 0.05–2 mm, similar to sand on Earth, and set the impact and fluid thresholds equal to the corresponding minima in this range, that is, u*,it = 0.03 m s⁻¹ (dashed black line) and u*,ft = 0.12 m s⁻¹ (dotted black line). The gray area indicates the range of saltation intermittency between the impact and fluid thresholds. We computed the intermittency factor η_q assuming a neutrally stable atmosphere and a Titan boundary layer height equal to 3 km [Lorenz et al., 2010] (see supporting information for details on the calculation of η_q).
Figure 5: Sediment transport rates and intermittency on Titan. (a) Cumulated mass fluxes (yellow and red arrows) predicted by the Titan general circulation model TAM using equation (6) for one Titan year (approximately 29.5 Earth years). Dashed black lines and background blue colors indicate surface elevation at the model resolution. (b) Annual variability of the intermittency factor in the equatorial region (green line), southern region (red line), and northern region (blue line). Higher values of η_q indicate more intense saltation activity. The shaded areas indicate one standard error from the mean.
Novel 3D Pixel Sensors for the Upgrade of the ATLAS Inner Tracker
The ATLAS experiment will undergo a full replacement of its inner detector to face the challenges posed by the High Luminosity upgrade of the Large Hadron Collider (HL-LHC). The new Inner Tracker (ITk) will have to deal with extreme particle fluences. Due to its superior radiation hardness, the 3D silicon sensor technology has been chosen to instrument the innermost pixel layer of ITk, which is the most exposed to radiation damage. Three foundries (CNM, FBK, and SINTEF) have developed and fabricated novel 3D pixel sensors to meet the specifications of the new ITk pixel detector. These are produced in a single-side technology on either Silicon-On-Insulator (SOI) or Silicon-on-Silicon (Si-on-Si) bonded wafers by etching both n- and p-type columns from the same side. With respect to previous generations of 3D sensors, they feature thinner active substrates and smaller pixel cells of 50 × 50 and 25 × 100 µm². This paper reviews the main design and technological issues of these novel 3D sensors and presents their characterization before and after exposure to large radiation doses, close to those expected for the innermost layer of ITk. The performance of pixel modules, in which the sensors are interconnected to the recently developed RD53A prototype chip for the HL-LHC, has been investigated in the laboratory and at beam tests. The results of these measurements demonstrate the excellent radiation hardness of this new generation of 3D pixel sensors, which enabled the project to proceed with the pre-production for the ITk tracker.
INTRODUCTION
The Large Hadron Collider (LHC), located at CERN, is the world's largest and most powerful particle accelerator. ATLAS is one of two general-purpose detectors at the LHC. In order to advance our understanding of elementary particles and their interactions, the LHC accelerator will be improved to reach about seven times its current nominal instantaneous luminosity. The High Luminosity LHC (HL-LHC) is currently foreseen to start operations in 2027. To cope with the higher particle rate, hit occupancy, and radiation damage associated with the HL-LHC period, and to maintain the overall detector performance, several ATLAS sub-systems will have to be upgraded. In particular, the current ATLAS tracking system will be replaced by a full silicon detector, called ITk, composed of an inner pixel detector surrounded by a microstrip system. Located in the immediate proximity of the proton-proton beam collision region, the silicon pixel system is critical for the precise determination of particle tracks and vertices, allowing the identification of b-jets (b-tagging). The HL-LHC presents an unprecedented challenge to silicon pixel sensor technologies: the detector has to provide excellent position resolution while sustaining radiation levels exceeding 1 × 10¹⁶ n_eq/cm² during its lifetime.
A new generation of 3D sensors has been designed to fulfill the requirements in terms of occupancy and radiation hardness for the innermost layers of the ITk pixel detector at the HL-LHC. 3D sensors are an established technology that has already been employed in experiments at the LHC, such as in the ATLAS Insertable B-Layer (IBL) [1] and for the tracker of the AFP experiment [2]. With respect to these designs, the new ITk 3D sensors feature a reduced pixel cell size of 25 × 100 and 50 × 50 µm² with one collecting electrode. In the following, these pixel cell geometries will be abbreviated as 25 × 100−1E and 50 × 50−1E. A 25 × 100 µm² pixel cell design with two electrodes (25 × 100−2E) was also investigated, but eventually dropped because of technical difficulties in achieving a satisfactory production yield. All these sensor designs are compatible with the future ATLAS readout chip, the ITkPix. Moreover, in order to lower the occupancy, the thickness of the active substrate of these new sensors is reduced to 150 µm, in comparison to the previous generation of 230 µm thick 3D sensors. To achieve such a thickness, the use of a single-side technology was necessary. This paper describes the design and the technological challenges of the recent 3D sensor productions for ITk, as well as the results of the characterization performed on assembled module prototypes to validate the performance of these novel 3D sensor designs.
THE ITK PIXEL DETECTOR
The ATLAS detector [3] at the CERN LHC accelerator complex is the largest volume detector ever constructed for a particle collider. ATLAS has the dimensions of a cylinder, 46 m long, 25 m in diameter, and sits in a cavern 100 m below ground. The ATLAS detector weighs 7,000 tonnes and consists of six different detecting subsystems wrapped concentrically in layers around the collision point to record the trajectory, momentum, and energy of outgoing particles, allowing them to be individually identified and measured. A huge magnet system bends the paths of the charged particles so that their momenta can be measured. As mentioned above, the ATLAS subsystems have to be upgraded in view of the challenges of the HL-LHC. The current tracking system will be completely replaced by the new ITk tracking detector.
Layout
The ITk tracker is composed of an inner Pixel Detector, followed by the Strip Detector (Figure 1). The Strip Detector covers a pseudo-rapidity of |η| < 2.7 and a radial region of 40–100 cm, while the five-layered Pixel Detector extends the coverage to |η| < 4.0 and has an inner radius of r ≈ 3.4 cm. The Pixel and Strip Detector volumes are separated by a Pixel Support Tube (PST). In addition, the inner two pixel layers are separated from the outer three layers by an Inner Support Tube (IST), which facilitates a replacement of the two inner layers. The combined Strip plus Pixel Detectors provide a total of nine space points in the |η| < 4.0 volume. The new detector has been designed with less inactive material in the tracking volume and aims at maintaining, and possibly improving, the performance of the existing detector, but in a much more hostile tracking environment with an average of up to 200 proton-proton collisions per beam crossing. The innermost pixel layer will include sensors with a 25 × 100 µm² pixel geometry in the flat barrel region and a 50 × 50 µm² geometry in the rings.
The HL-LHC Challenges
The HL-LHC presents two key challenges to the ATLAS experiment: while the detector has to cope with unprecedented radiation levels, it also has to be able to disentangle the huge number of simultaneous events that will be generated in the proton-proton beam collisions. This is especially critical for the innermost silicon pixel layer, since it suffers the highest radiation levels and has to operate in the densest particle environment. Thus, the inner layers of the ITk tracker demand the most radiation-hard technology, but at the same time small pixels and thin active regions to mitigate the effect of the high particle multiplicity. These two requirements could be seen as competing, since, in standard silicon devices, thinner sensors usually collect less charge and are thus more sensitive to the charge trapping effects introduced by radiation damage.
The HL-LHC period is expected to deliver a total integrated luminosity of 4,000 fb⁻¹; however, the ITkPix front-end is not expected to survive the large radiation doses associated with this scenario. The current strategy foresees a replacement of the two inner layers. From the sensor point of view, the expected maximum particle fluence in the innermost layer (r ≈ 3.4 cm) after 2,000 fb⁻¹ is 1.2 × 10¹⁶ n_eq/cm² (with a specification of 1.7 × 10¹⁶ n_eq/cm² including safety factors).
The ITk Pixel Front-End Chip
The front-end chip integrates the charge generated in the sensor by crossing particles, amplifies and digitizes the signal, and sends the hit information downstream to the DAQ system. The ITk pixel readout chip, called ITkPix, will present a matrix of 400 × 384 pixels of 50 × 50 µm², which determines the active area of the chip. The ITkPix is based on the prototype chip developed by the RD53 Collaboration, the RD53A [4]. The RD53A readout chip presents a matrix of 400 × 192 pixels of 50 × 50 µm², with a total size of 20.0 × 11.8 mm². The ASIC, also fabricated in 65 nm CMOS technology, includes three different analog front-end designs to allow performance comparisons: the synchronous, linear, and differential front-ends. The latter has been selected by the ATLAS Collaboration for the development of the final ITkPix chip for ITk.
Measurements of bare RD53A chips and modules (the chip interconnected to a pixel sensor) have shown that the ASIC achieves the desired functionality and can operate at a low threshold, which is a fundamental parameter to ensure high hit reconstruction efficiency after irradiation, as will be discussed below.
Sensor Requirements
Sensors shall be interconnected to the ATLAS ITk pixel front-end chips, which impose a maximum capacitance per pixel of 100 fF. In the innermost layer the 3D modules are attached in groups of three to a single flex readout board (triplet), sharing the same bias voltage line connecting them in series. To ensure the proper operation of the sensors in this configuration, before irradiation the depletion voltage (V_dep) has been specified to be lower than 10 V and the breakdown voltage larger than the foreseen operational voltage of V_op = V_dep + 20 V. A limit of 2.5 µA/cm² has been imposed on the leakage current at V_op.
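For concreteness, the acceptance criteria listed above can be expressed as a simple check. The helper below is an illustrative reading of those specifications, not an official ITk quality-control script:

```python
def meets_preirradiation_specs(v_dep, v_bd, i_leak_at_vop, area_cm2):
    """Check a sensor against the pre-irradiation criteria quoted in
    the text.

    v_dep         : depletion voltage [V], required to be below 10 V
    v_bd          : breakdown voltage [V], required above V_op = V_dep + 20 V
    i_leak_at_vop : leakage current at V_op [uA]
    area_cm2      : active area [cm^2]; the limit is 2.5 uA/cm^2
    """
    v_op = v_dep + 20.0
    return (v_dep < 10.0
            and v_bd > v_op
            and i_leak_at_vop / area_cm2 < 2.5)

# Example: an RD53A-size sensor (~2.4 cm^2) with V_dep = 5 V,
# V_bd = 60 V, and 1.2 uA at V_op passes all three criteria.
print(meets_preirradiation_specs(5.0, 60.0, 1.2, 2.4))  # True
```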
As mentioned above, the innermost layer imposes strict requirements in terms of performance after irradiation. With the replacement strategy, the fluence for the innermost pixel layer will exceed 1 × 10¹⁶ n_eq/cm². In these conditions the hit efficiency is required to be larger than 97% (96%) before (after) irradiation at normal beam incidence, and larger than 98% (97%) at 14–15° incidence (the final orientation of the modules in the ITk detector). During the whole lifetime of the detector, the power dissipation of the 3D sensors shall remain below 40 mW/cm² at the benchmark sensor temperature of −25°C. The maximum operational bias voltage after irradiation should not exceed 250 V.
Single-Side Approach
Double-sided technologies, which proved to offer several advantages in terms of process complexity and throughput for the IBL production [5], are not favored for the fabrication of thin 3D pixels due to mechanical integrity considerations. As a result, a single-side approach with a support wafer has finally been adopted by all the processing foundries. Compared to the technology developed at Stanford in the late 1990s, the process has been modified to be compatible with Si-on-Si wafers, which consist of a high-resistivity float-zone active layer of the desired thickness directly bonded to a low-resistivity Czochralski handle wafer of adequate thickness, which might differ depending on the wafer diameter. All columnar electrodes are etched from the front side and are (at least partially) filled with poly-Si: the junction (readout) columns stop at a safety distance of ∼25 µm from the handle wafer, so as to avoid early breakdown, whereas the ohmic (bias) columns are etched deeper and penetrate into the handle wafer. By doing so, the sensor bias can be applied from the back side, which is a definite advantage for the assembly of the sensors in a real detector system. If the handle wafer is thick enough to pose a possible issue for multiple scattering, it can be partially removed as a post-processing step; a thin portion can be left that still allows a back-side metal to be deposited to apply the bias voltage. A schematic cross section of a device in this latter configuration is shown in Figure 2.
Note that surface insulation layers (p-stop or p-spray) are not shown.
Small Pixel Cells
On the one hand, small pixel cells involve small inter-electrode distances, which inherently improves the radiation hardness, since they can be made comparable to the maximum drift length of charge carriers [6]. On the other hand, a high density of columnar electrodes could also result in a loss of geometrical efficiency, since the electrodes themselves are not efficient, as well as in an increase of the sensor capacitance (and noise). However, both these problems are attenuated by the use of thin active layers: 1) the electrode capacitance scales to a large extent with its depth, so thinner substrates naturally lead to smaller capacitance; 2) assuming the aspect ratio (depth to diameter) attainable with Deep Reactive Ion Etching (DRIE) to be constant (typical values range from ∼20:1 to ∼30:1), thinner substrates allow for narrower electrode diameters (typical values range from ∼5 to ∼8 µm), which also contribute to decreasing the capacitance and improving the geometrical efficiency.
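The interplay between substrate thickness and electrode diameter follows directly from the fixed DRIE aspect ratio, as the following small sketch (using the illustrative ranges quoted above) shows:

```python
def electrode_diameter(depth_um, aspect_ratio):
    """Minimum column diameter achievable by DRIE for a given column
    depth, assuming a fixed attainable aspect ratio (depth : diameter)."""
    return depth_um / aspect_ratio

# With the ~20:1 to ~30:1 range quoted in the text, a 150 um deep
# column needs a 5.0-7.5 um diameter, whereas a 230 um deep column
# (the previous-generation substrate thickness) needs 7.7-11.5 um.
for depth in (150.0, 230.0):
    d_min = electrode_diameter(depth, 30.0)
    d_max = electrode_diameter(depth, 20.0)
    print(f"{depth:.0f} um deep column: {d_min:.1f}-{d_max:.1f} um")
```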
The layout of small 3D pixel cells is quite straightforward, provided the device geometries are all down-scaled. As an example, Figure 3 shows the pixel layouts, specific to the FBK technology, for the three considered geometries.
It can be seen that the main constraint is due to the fixed size (∼20 µm diameter at the metal level) of the bump pads, which is relatively large with respect to the pixel sizes. In addition, it should be noted that the pad footprint is also fixed (50 × 50 µm²) due to the layout of the readout chip. While this is not an issue for the 25 × 100−1E and 50 × 50−1E pixels, it represents a critical problem for 25 × 100−2E pixels, where the bump pads are very close to both the readout and the ohmic columns. This makes the layout very sensitive to lithography misalignments of just a few micrometers, typical of mask aligners, making it difficult to obtain a good fabrication yield.
FOUNDRIES AND PRODUCTIONS
Productions of 3D sensors compatible with RD53A chips have been carried out at CNM, FBK, and SINTEF. These productions all employ a single-side approach, with either Silicon-On-Insulator (SOI) or Silicon-on-Silicon (Si-on-Si) wafers. Even if in some productions the thickness is still lower than the design specification for the final sensor (130 µm at FBK, and 50 µm or 100 µm for SINTEF sensors), most of the results can be considered worst-case scenarios due to the reduced number of initial charge carriers generated by Minimum Ionising Particles (MIPs). Indeed, a thicker active substrate is expected to perform better in terms of charge collection and hit efficiency before and after irradiation. Additional significant differences between the foundry prototype designs are: the edge termination, obtained with an edge doping (active edge) at SINTEF and with p-type column fences at CNM and FBK (slim edge design [5]); and the inter-pixel surface insulation, where FBK and SINTEF use p-spray, while CNM uses p-stop rings around the n-type columns.
CNM: Runs 9761 and 11119
To reduce the thickness of the active volume, CNM developed ultra-thin 3D single-sided detectors for Neutral Particle Analysers (NPA) and thermal neutron detection in 2009 [9]. This technology was the basis for the development of the sensors for the upgrade of the ATLAS Inner Tracker. Initially, the detectors were fabricated on SOI wafers with a p-type backside implant and a total thickness of 400 µm [10]. The active thickness is composed of a 150 µm (or 100 µm) thick p-type wafer with a nominal resistivity in the range 10–50 kΩ·cm; it is separated from a low-resistivity handle wafer by a 1 µm layer of Buried OXide (BOX).
The fabrication procedure is performed on four-inch wafers and requires eight photolithography mask levels in single-sided processing; a total of 140 steps are carried out in the controlled environment of the clean room facility. At the end of the process a temporary metal layer is deposited on the surface of the detectors to short the pixels together and test the electrical characteristics of the devices. This layer is then removed.
Run 9761 was the first run including pixel designs compatible with the RD53A chip. The majority of the wafer layout of this run is devoted to RD53A sensors, named after the corresponding readout chip, with 50 × 50−1E pixel geometry. Each of these sensors holds a matrix of 400 × 192 pixels with a total size of 20 × 11.8 mm², including 150 µm per side of p-type column fences filling the space between the last pixel cell and the dicing edge. Each wafer also includes two RD53A sensors with 25 × 100−1E and two with 25 × 100−2E pixel geometry. These are both matrices of 200 × 384 pixels with the same total size as the 50 × 50−1E design, including the p-type column fences at the edge.
Typical electrical characteristics of the different sensor geometries in this run, as measured on dedicated 3D diodes, are shown in Figure 4. The C-V curves show a depletion voltage lower than 5 V, at which the capacitance saturates, with small differences depending on the specific pixel geometry, in agreement with the simulation reported in [11]. The I-V characteristics show that a rectifying p-n junction has been formed between the electrodes. The breakdown voltage can be larger than 100 V for the 50 × 50−1E and 25 × 100−1E geometries and is in general lower for the 25 × 100−2E, but it should be considered that the full depletion voltage foreseen for these devices is very low. Therefore, the detectors can be operated in over-depleted mode without any constraint. A larger reverse leakage current density and a lower breakdown voltage are measured for sensors with the 25 × 100−2E pixel geometry, which have a smaller inter-electrode distance. This may be due to the stronger electric field between the electrodes for the same applied bias.
In order to simplify the fabrication process and avoid the etching of the handle wafer, Si-on-Si bonded wafers were chosen for the production of 3D detectors for the ATLAS Inner Tracker. The first run with these Si-on-Si wafers (11119) was produced with a total thickness of 350 µm, of which 150 µm is the active layer and 200 µm the handle wafer. Around 70% of the RD53A 50 × 50−1E sensors and more than 50% of the RD53A 25 × 100−1E sensors produced in this run have a breakdown voltage higher than 25 V, whereas the 25 × 100−2E configuration suffers from a very low yield and could be very problematic for a large production.
FBK: 2nd and 3rd Batches
FBK started to develop 3D sensor technologies on four-inch wafers in 2004, in collaboration with INFN. For a few years the focus was on double-side fabrication processes, which led to the development of two device versions: one with columnar electrodes partially passing through the substrate [12], and another with full-through electrodes [13]. The latter was the technology of choice for the ATLAS IBL 3D pixel production: for this application FBK introduced the slim-edge design based on p-type column fences and the temporary metal for the on-wafer electrical characterization of pixel sensors before bump bonding [5].
After the completion of the IBL production, FBK upgraded its clean room to allow production on six-inch wafers, which require a minimum thickness of 300 µm to ensure sufficient mechanical robustness. Therefore, given the requirement of a thin active layer for the HL-LHC upgrades, FBK deemed it more appropriate to develop a new generation of small-pitch, thin 3D sensors with a single-side process, pioneering the use of Si-on-Si substrates [14].
After producing a first R&D batch in 2015 [15], mainly aimed at the technology development, FBK produced two batches oriented to the ITk project. All these batches were made on six-inch diameter wafers. The second batch was made in 2017 using mask aligner lithography [16]. The wafer layout included 18 pixel sensors of different geometries compatible with the RD53A chip: 25 × 100−1E (three samples), 25 × 100−2E (seven samples), and 50 × 50−1E (eight samples). Many other sensors compatible with the ATLAS FE-I4 and CMS PSI46dig readout chips were also present, as well as test structures (mainly 3D diodes and strips). This second batch consisted of ten wafers (five SOI and five Si-on-Si) with 130 µm active thickness. The third batch was made in 2019 using stepper lithography. The wafer layout was arranged to host 47 RD53A-compatible pixel sensors of different geometries: 25 × 100−1E (ten samples), 25 × 100−2E (24 samples), and 50 × 50−1E (13 samples), as well as test structures. The third batch consisted of eight wafers (all Si-on-Si) with 150 µm active thickness.
Although the very large variety of layout options and process splits implemented in both the second and third batches prevents an accurate calculation of the fabrication yield, some preliminary conclusions can be drawn from the electrical characterization of the RD53A-compatible pixel sensors. In the second batch, about 60% of the 50 × 50−1E and 25 × 100−1E sensors meet the specifications described in Section 2.4, although with large non-uniformities from wafer to wafer. By contrast, this percentage falls to less than 20% for the more critical 25 × 100−2E sensors. This motivated the use of stepper lithography in the third batch, in order to improve the pattern definition and layer alignment, which are essential for such a dense layout.
Despite the choice to accommodate many design variants, which caused the stepper to be operated in a non-optimal way, in the third batch the percentage of 50 × 50−1E and 25 × 100−1E sensors meeting the specifications was still about 60%, as in the second batch. For the 25 × 100−2E sensors, on the contrary, the percentage doubled with respect to the second batch, reaching almost 40%. While this result is still worse than for the other geometries, it confirms that by using stepper lithography in an optimized way, FBK would also be able to produce this critical sensor layout.
Electrical Characteristics
Besides measurements of the I-V characteristics of pixel sensors using the temporary metal, the quality of 3D sensors from different batches is monitored at FBK by measuring test diodes. These devices reproduce the electrode configurations and layout details (including the edge region) of their parent pixel sensors, but, due to their small size (∼2 mm²), they are often free from process-related defects, making it possible to investigate the intrinsic properties of the different structures.
As an example, the I-V and C-V curves of diodes from the third batch are shown in Figure 5. The leakage currents of Figure 5 Left are very small: after normalization to the number of columns present in each diode, their values at 25 V reverse bias (much larger than the depletion voltage) are ∼1 pA/column, regardless of the geometry. Also the breakdown voltage (∼130 V) is roughly the same for all devices, whereas the slopes of the curves increase as the inter-electrode distance decreases, in agreement with the expectation based on the increasing electric fields.
In the C-V curves of Figure 5 Right, the knee at low voltage (∼2 V) corresponds to the full depletion of the active volume between the columns. However, the curves do not show a perfectly flat saturation, mainly due to the increasing depletion of the p-spray region at the surface as the voltage increases.
SINTEF: Run 4 and Run 5
The SINTEF 3D sensor technology was developed as single-sided from the beginning, initially on four-inch SOI wafers and, more recently, on six-inch Si-on-Si wafers. The technology features columnar electrodes completely filled with polysilicon and an active edge, a doped trench around the sensor that allows for termination of the active volume with minimal inactive material along the edge of the detectors.
SINTEF has currently produced a total of five batches of 3D silicon sensors. The early production runs delivered good electrical yield but they showed some weakness from the point of view of the mechanical yield. This issue was solved in more recent iterations, which also delivered improved electrical yield.
The first 3D sensor batch fabricated at SINTEF including small-pitch 3D pixel detectors was completed in early 2018. This sensor production, known as "run 4", was carried out on Si-on-Si wafers with active thicknesses of 50 and 100 µm. The layout of run 4 was mostly focused on FE-I4 pixel detectors but also included two detectors compatible with the RD53A chip, in 50 × 50−1E configuration. In addition, other flavors of pixel detectors were available, compatible with the MediPix, FE-I3 and CMS PSI46 readout chips.
The latest 3D sensor batch fabricated at SINTEF, "run 5", was completed in January 2020. Produced on Si-on-Si wafers with active thickness of 150 µm, run 5 featured two different layouts (12 wafers per layout) including both RD53A and ITkPix compatible sensors. The first layout was fully tailored to the 50 × 50−1E pixel configuration, while the second layout was split into two parts, the top part of the wafers for 25 × 100−1E sensors and the bottom part of the wafer for 25 × 100−2E sensors. Each layout had 3D diodes and strip detectors with the same electrode pitch as the main pixel detectors in addition to standard planar test structures for process monitoring. Both layouts still featured the active edge. The fabrication procedure was similar to the one used in run 4, with some minor modifications that also included extra quality assurance procedures at critical steps in the process. Particular care was taken in monitoring oxide thicknesses and evolution of the wafer curvature during the process. After deposition of the temporary metal layer, the electrical characterization of the sensors was carried out on wafer using an automatic probe station. After completion of the measurements, the temporary metal layer was removed and the final metal layer deposited. The final passivation was then deposited and patterned and a final inspection of the wafers was completed.
The overall electrical yield of run 4 was excellent, with over 70% of sensors on average across the different pixel configurations showing properties compatible with the requirements described in Section 2.4. Unfortunately, due to the tight layout of the active edge, the pads on the temporary metal for the small-pitch RD53A pixel configuration were too small to be contacted manually and a full on-wafer estimate of the yield was not possible for these devices.
A reliable estimate of the yield for small-pitch 3D detectors produced at SINTEF could only be carried out in run 5. For 50 × 50−1E devices, an average yield of 47% was achieved for RD53A sensors, while ITkPix sensors showed a considerably lower yield of around 21%. Similar figures were achieved for 25 × 100 µm² devices, with the 1E implementation showing considerably higher yield due to the much more relaxed distances between electrodes of opposite type. In fact, the 25 × 100−1E design showed a yield as high as 65% for the RD53A-compatible devices and 35% for the ITkPix-compatible devices. The 25 × 100−2E configuration showed very low yield, below 20% for all sensor implementations. The outcome of run 5 was satisfactory, but a large variation in yield was found from wafer to wafer, with some wafers exhibiting average yields above 60% and others well below 50%. The reason for the large variation in yield is related to the short distance between the active edge and the n⁺ columns, in combination with the many challenges posed by the lithographic step creating the trench. The issues encountered in the formation of the active edge also explain why the ITkPix implementation generally shows lower yield (longer trenches mean a higher chance of photolithographic defects). For the next fabrication run, SINTEF will implement the common layout used by all foundries, which will not feature the active edge, hopefully solving the issues identified in run 5.
Electrical Characteristics
The measurements performed on the temporary metal can introduce spurious effects due to the positioning of the pads and to the geometry of the metal grid necessary for the measurement. In order to isolate these effects, further electrical tests can be performed on 3D test diodes.
The SINTEF wafer layout includes 3D diodes in all flavors, placed around the main detectors. The devices have an active area of roughly 1 × 1 mm². The layout has an active-edge termination on three sides and a slim-edge termination (column fence) on the fourth side, in order to fit the pads. I-V and C-V measurements were performed on 3D diodes of each configuration at the end of the process, on the final metal layer. The results are reported in Figure 6. The I-V curves (Figure 6 Left) saturate at very low voltage and remain almost flat (slope equal to 1) up to the breakdown, which typically occurs between 120 and 140 V. Current levels are higher for devices with a larger number of columns (e.g., 25 × 100−2E) and scale correctly for the other configurations. The C-V curves (Figure 6 Right) show an initial saturation at around 5 V, but continue to decrease as the p-spray layer is progressively depleted. At 30 V a step is present in the C-V curves, which indicates that the depletion region has extended through the entire slim-edge termination. In this layout the slim edge only featured three rows of ohmic columns; these measurements suggest that an additional row should be added to further prevent the depletion region from extending beyond the active area. The capacitance value scales correctly with the number of columns, with the 25 × 100−2E diode showing the highest capacitance.
BUMP BONDING AND ASSEMBLY
As mentioned above, the final 3D sensors will be read out by the ITkPix front-end. Since this chip is not yet available, results obtained with RD53A prototypes will be presented. Each pixel of the 3D sensors is connected to the corresponding readout channel of the RD53A chip through a process called bump bonding. The process consists of depositing a metal layer on top of the ASIC and sensor pads (Under-Bump Metallization, or UBM), followed by solder balls (SnAg, SnPb, or Indium). The devices are then interconnected through a thermal compression cycle called flip-chip.
The wafers of the FBK second and third batches were sent for hybridization to the Leonardo Company (Rome, Italy), which performed electroplated UBM and bump bonding to RD53A chips with Indium bumps. Wafers from SINTEF run 4 were instead processed at IZM (Berlin, Germany) and flip-chipped to RD53A chips with solder bumps (SnAg). Also in the case of CNM sensors, RD53A chips with solder bumps were used, but run 9761 was processed with electroless UBM at CNM and flip-chipped at IFAE, while run 11119 was sent to IZM, processed with standard electroplated UBM, and flip-chipped partially at IZM and partially at IFAE. For module testing, the assemblies are then attached and wire-bonded to a dedicated PCB designed by the University of Bonn.
IRRADIATION
The spectrum of particles expected in the ITk pixel detector consists mostly of charged hadrons (more than 80%) and, in particular, is dominated by pions. Other contributions, coming from neutral hadrons, electrons, positrons, muons, and photons, are minor. Presently, none of the existing irradiation facilities is able to offer a pion beam with sufficient flux to reach the required high doses in a reasonable time. Nevertheless, many facilities offer proton beams with energies ranging from tens of MeV up to GeV, which enable the study of the radiation hardness of sensors in terms of damage caused by both Non-Ionizing Energy Loss (NIEL) and Total Ionizing Dose (TID) effects.
Proton beams with relatively low energies have been employed for RD53A module irradiation in three different facilities: at the cyclotron of the Karlsruher Institut für Technologie (KIT) in Germany, where 23 MeV protons are provided; at the Medical Physics cyclotron of the University of Birmingham in the UK, with proton beams of approximately 27 MeV; and at the Cyclotron Radio Isotope Center (CYRIC) of Tohoku University in Japan, using 70 MeV protons. In all these facilities, during irradiation, the modules are kept at temperatures below 0°C and the beam is moved over the sensor surface to obtain a uniform dose. Irradiations to reach the final target fluence of 1 × 10¹⁶ n_eq/cm² were mostly performed in sequential steps. This allowed mitigation of the TID damage to the chip, which in most cases was not able to sustain a direct irradiation up to such a high dose. Since higher-energy protons deposit a lower TID and thus cause less damage to the chip, facilities offering higher-energy beams have been preferred when available.
More RD53A modules have been irradiated with a 23 GeV proton beam from the PS accelerator at the IRRAD facility of CERN. The high energy of the particles provided by this facility allows irradiation of several modules at the same time and reaches directly the target fluence of 1 × 10¹⁶ n_eq/cm² without affecting the performance of the chip. In contrast to the other facilities, at IRRAD the temperature of the modules is not controlled during irradiation, and significant annealing is expected due to the heat generated by the large particle flux. In addition, the irradiation dose over an RD53A module is not uniform due to the limited size of the beam, which is Gaussian with a standard deviation of about 12 mm, and due to the impossibility of scanning over the sensor surface. On one hand, these conditions add uncertainties to the effective fluence and do not allow extraction of reliable leakage current or power dissipation measurements for the sensors. On the other hand, they allow probing of the effects of non-uniform irradiation on the module performance.
FIGURE 6 | SINTEF 3D diode measurements performed on wafer at the end of the process on the final metal. I-V (Left) and C-V (Right) curves are shown. A systematic uncertainty, not shown in the figures, of less than 10% has been estimated for both current and capacitance measurements, due to temperature variations and the LCR circuit, respectively. The systematic uncertainties due to the test instrument for the considered voltage and current range are smaller than 1%.
Irradiations have also been performed with neutrons in the TRIGA Mark II research reactor of the Jožef Stefan Institute (JSI) in Ljubljana, Slovenia. The neutron reactor allows investigation of only the NIEL damage, since the TID due to gamma emission is very low, about 0.1 Mrad per 10¹⁴ n_eq/cm² [17]. Even if such irradiation is not representative of the conditions expected for ITk, it has the advantage of causing negligible radiation damage in the chip, which allows the performance of the chip to be disentangled from that of the sensor. Unfortunately, the tantalum contained in the RD53A chip is activated by the interaction with neutrons, resulting in several inconveniences such as long cool-down times, expensive transport of radioactive material, and difficult handling. For this reason only a few RD53A modules have been irradiated at JSI, while this facility has been mostly used for diodes.
A summary of the RD53A modules employed for the following studies and their corresponding irradiation history can be found in Table 1. For proton irradiation, the dose received and the corresponding equivalent neutron fluence are measured from the activation of an aluminum foil placed in front of the sensors during irradiation. In the case of neutron irradiation, the fluence is instead extrapolated from the predicted neutron flux and the immersion time in the reactor. In both cases a systematic uncertainty of 10% is associated with the fluence values.
Electrical Properties
The leakage current is a basic property of silicon detectors that indicates their electrical quality. It has been measured as a function of the bias voltage for 3D diodes and for 3D sensors bump-bonded to RD53A chips, before and after irradiation. The 3D modules are kept at constant temperature and at a relative humidity of less than 50% within a climate chamber. Before irradiation the measurements are carried out at 20°C, while after irradiation the sensors are cooled down to −25°C. Diodes are measured in the same conditions, either on a probe station with a temperature-controlled chuck or on dedicated Printed Circuit Boards (PCBs) inside a climate chamber. Systematic uncertainties for the measurements of the leakage current are dominated by the variation of the temperature, which is measured with a precision better than 1°C, corresponding to an uncertainty on the measured leakage current of less than 10%.
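That temperature sensitivity can be quantified with the scaling law commonly used for silicon sensor leakage currents, I(T) ∝ T² exp(−E_eff/2kT). The sketch below uses the usual literature value E_eff ≈ 1.21 eV, which is our assumption since the paper does not quote it:

```python
import math

K_B = 8.617e-5   # Boltzmann constant [eV/K]
E_EFF = 1.21     # effective band-gap energy [eV]; standard literature
                 # value for leakage-current scaling (assumed here)

def scale_leakage_current(i_meas, t_meas_c, t_ref_c):
    """Scale a measured leakage current to a reference temperature
    using I(T) ~ T^2 exp(-E_eff / 2kT)."""
    t1 = t_meas_c + 273.15
    t2 = t_ref_c + 273.15
    return i_meas * (t2 / t1)**2 * math.exp(
        -E_EFF / (2.0 * K_B) * (1.0 / t2 - 1.0 / t1))

# A 1 degC shift near -25 degC changes the expected current by ~11%,
# the same order as the <10% systematic uncertainty quoted above.
print(scale_leakage_current(1.0, -24.0, -25.0))  # ~0.89
```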
The leakage current as a function of the bias voltage for CNM and FBK RD53A sensors with different pixel geometries is compared in the plots in Figure 7. The properties before irradiation are mostly consistent with the measurements at wafer level with the exception of a few sensors which show a degradation of the I-V curves both in terms of leakage current level and breakdown voltage.
After irradiation the leakage current increases with the fluence and the breakdown is usually shifted towards larger voltages. Most RD53A sensors measured after irradiation show a breakdown larger than 120 V, while for two sensors (one from CNM and one from FBK) a steep increase of the leakage current is observed between 50 and 100 V. Conversely, the other sensors show very similar electrical behavior after irradiation, despite their different I-V curves observed before irradiation. No significant difference is observed in the I-V curves of these modules for different pixel cell geometries after similar irradiation levels.
To increase the statistical data set and understand the variability of the I-V characteristics after irradiation, several diodes from the FBK and CNM productions have been irradiated at JSI with neutrons. Only diodes that met the specifications set in Section 2.4 before irradiation have been selected for this study. Their leakage current as a function of the bias voltage after irradiation up to 1.5 × 10¹⁶ n_eq/cm² is shown in Figure 8.
TABLE 1 | When no irradiation facility or beam test facility is specified, the module was not irradiated or was not measured at beam tests, but only in the laboratories. An uncertainty of 10% is associated with the irradiation fluence (Φ) measured at the different irradiation facilities.
The different leakage currents observed are consistent with the separation of the electrodes of the different pixel cell geometries, which determines the strength of the electric field inside the active substrate. The diodes with the 25 × 100−2E pixel geometry show the highest leakage current levels and in many cases exhibit a breakdown with a sudden increase of the current (hard breakdown) before reaching 250 V. The diodes with the 50 × 50−1E pixel geometry have lower currents, and in a few cases from the CNM production on Si-on-Si wafers (run 11119) a hard breakdown is observed at less than 250 V. Most of the CNM diodes have a smooth increase of the current (soft breakdown) around 150 V, which is consistent with the measurements on RD53A modules, while FBK diodes have a soft breakdown starting between 100 and 150 V. The 25 × 100−1E pixel geometry shows the lowest current levels and the best performance in terms of breakdown voltage, for both FBK and CNM diodes; for this design no hard breakdown is observed in the measured diodes up to 250 V.
Only a small increase of the current can be seen between the diodes irradiated to 1 × 10¹⁶ n_eq/cm² and those irradiated to 1.5 × 10¹⁶ n_eq/cm². This is more evident in the CNM diodes, since they have a more uniform behavior after irradiation, while the FBK diodes show a larger spread of leakage current levels.
The larger current measured in the FBK diodes with respect to the CNM ones is consistent with the smaller dimensions of the former and the consequently larger fraction of the current coming from the edges with respect to that generated in the active area. A similar difference is observed on average between the two sizes of diodes.
Beam Test Measurements
Pixel modules have been measured before and after irradiation in two different beam test facilities: at the Super Proton Synchrotron (SPS) of CERN, using pions of 120 GeV, and at the Deutsches Elektronen-Synchrotron (DESY) in Hamburg, with electrons of about 5 GeV [18]. In both cases an EUDET-type telescope [19] has been used to reconstruct the trajectories of the particles in the beam and determine the impact point on the studied devices (Detectors Under Test, or DUTs). The telescope has two arms, each consisting of three planes made of MIMOSA26 [20] monolithic sensors with an active area of 2 × 1 cm², and can provide a pointing resolution of up to 2 µm depending on the beam energy, the amount of material between the telescope planes, and the consequent multiple scattering. The coincidences of up to four scintillators placed at the two ends of the telescope and covering the MIMOSA26 sensor area are used to generate the trigger signal for the readout. The DUTs are placed between the two arms of the telescope inside a cooling box. At SPS a custom-designed cooling box based on a commercial chiller allows the DUTs to reach temperatures as low as −50°C to study irradiated modules, as well as to keep a constant temperature of 20°C for the operation of non-irradiated modules. The same solution could not be used at DESY due to the lower particle energy and the large mass of the chiller-based box, which would lead to a significant amount of multiple scattering. In this facility a lightweight Styrofoam box is used instead. Here, cooling for operation with irradiated modules is provided by dry ice, which allows them to reach temperatures as low as the chiller-based box, but without the flexibility of a precise temperature control. Two different data acquisition systems (DAQ) have been used to equalize the threshold and operate the RD53A chips: the BDAQ53 [21], developed by the Bonn Silicon Laboratory, and the YARR system, developed at LBNL [22]. Due to the large integration time of the MIMOSA26 sensors (about 300 µs) with respect to the DUTs (of the order of 25 ns), an additional reference plane made of a hybrid module composed of an FE-I4 chip and a planar sensor is used to provide timing information for track selection with 25 ns precision. Table 1 lists the sensors that have been investigated with particle beams at beam tests.
FIGURE 9 | Hit efficiency as a function of the bias voltage for RD53A modules with CNM 3D sensors from run 9761 before and after irradiation. On the left (right), results of sensors with 50 × 50−1E (25 × 100−1E) pixel geometry obtained with the Differential (Linear) front-end are shown. The BDAQ readout system has been used for tuning and data-taking. The modules are tuned to a mean threshold of 1 ke. A systematic uncertainty of 0.3% is associated with all efficiency measurements. Partially adapted from References [25, 26].
Hit Efficiency
The hit efficiency is defined as the fraction of events in which a particle passing through the active part of the DUT causes a recorded hit. Hence, it is highly influenced by the number of electron-hole pairs generated (proportional to the active thickness) and by the chip threshold settings. The minimum hit efficiency target for ITk is 97% throughout the whole lifetime, since lower values would cause problems for track pattern recognition [23]. Experimentally, the efficiency is determined in beam tests by extrapolating the reference track position to the DUT and searching for a hit cluster in the surroundings, within a matching distance of up to two times the DUT pixel size. To disentangle the performance of the sensor from that of the RD53A chip and of the hybridization process, noisy as well as disconnected channels are excluded from the efficiency calculation.
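A schematic of the matching procedure described above, with hypothetical array inputs (the actual beam test analyses use full track reconstruction frameworks), could look as follows:

```python
import numpy as np

def hit_efficiency(track_xy, cluster_xy, pitch_xy, excluded=None):
    """Fraction of telescope tracks with a matched DUT cluster.

    track_xy   : (N, 2) extrapolated track impact points on the DUT [um]
    cluster_xy : (M, 2) reconstructed cluster positions [um]
    pitch_xy   : (2,) pixel pitch [um]; the matching window is twice
                 the pitch, as in the text
    excluded   : optional (N,) boolean mask for tracks pointing to
                 noisy or disconnected channels, which are excluded
    """
    track_xy = np.asarray(track_xy, dtype=float)
    cluster_xy = np.asarray(cluster_xy, dtype=float)
    window = 2.0 * np.asarray(pitch_xy, dtype=float)
    if excluded is not None:
        track_xy = track_xy[~np.asarray(excluded)]
    matched = 0
    for t in track_xy:
        d = np.abs(cluster_xy - t)
        if np.any((d[:, 0] < window[0]) & (d[:, 1] < window[1])):
            matched += 1
    return matched / len(track_xy)

# Toy example with a 50 x 50 um pitch: three of four tracks find a
# cluster within the 100 um matching window.
tracks = [(10, 10), (60, 40), (500, 500), (80, 90)]
clusters = [(25, 20), (70, 55), (130, 140)]
print(hit_efficiency(tracks, clusters, (50.0, 50.0)))  # 0.75
```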
Results of the hit efficiency measured as a function of the bias voltage are shown in Figures 9, 10, and 11 for RD53A modules with sensors produced by CNM, FBK, and SINTEF, respectively.
Before Irradiation
The hit efficiency maps of the pixel cells are obtained by displaying the reconstructed track impact point expressed in pixel coordinates and projecting the data for all identical structures onto the same image; from top to bottom, the maps in Figure 12 show modules measured before irradiation at perpendicular beam incidence (Φ = 0, ϕ = 0°), after irradiation to 5 × 10¹⁵ n_eq/cm² at perpendicular beam incidence (Φ = 5 × 10¹⁵ n_eq/cm², ϕ = 0°), and at a 15° beam incidence angle (Φ = 5 × 10¹⁵ n_eq/cm², ϕ = 15°) [25]. Inefficiencies mainly correspond to particles crossing the fully passing p-type columns, which are inactive, as can be appreciated from the efficiency distribution over one pixel cell shown in Figure 12. Indeed, the efficiency increases to over 99% when the sensors are tilted by 15°, as can be seen from the results of RD53A modules using CNM 3D sensors in Figure 9. In this case the efficiency distribution over the pixel cell is uniform, since the particles do not pass all the way through the p-type columns.
After Irradiation
Results of RD53A modules with 50 × 50−1E sensors from all three production sites have demonstrated the possibility of reaching a hit efficiency larger than 96% for perpendicularly incident tracks after irradiation to 5 × 10¹⁵ n_eq/cm² with about 40 V, and after irradiation to 1 × 10¹⁶ n_eq/cm² in the voltage range of 80–150 V, with thresholds around 1 ke or lower. The 25 × 100−1E sensor from CNM, after irradiation to a particle fluence of 5 × 10¹⁵ n_eq/cm² and operated at a threshold of 1 ke, shows results compatible with the 50 × 50−1E design. The RD53A module with the 25 × 100−1E sensor from FBK has instead been operated with a threshold of 1.5 ke and, after irradiation to a fluence of 5 × 10¹⁵ n_eq/cm², reaches 96% hit efficiency at 80 V. After irradiation to 1 × 10¹⁶ n_eq/cm², the 25 × 100−1E design has been measured only on one CNM sensor, both at perpendicular track incidence and by tilting the device by 15° around the long pixel side. In the first case a hit efficiency over 98% is obtained with a bias voltage of 140 V or larger, while tilting the device yields an efficiency of more than 97% with just 100 V. In both cases the device has been tuned to a threshold of 1 ke.
These results are also compatible with FBK modules from the second production batch measured by the CMS Collaboration [24], where an efficiency close to 97% was obtained for both 50 × 50−1E and 25 × 100−1E modules operated at 150 and 120 V, respectively.
The hit efficiency results of the different modules and productions can be considered consistent within the tuning uncertainties (~200 e), taking into account the different active thicknesses of the prototypes, and therefore meet the requirements for the innermost pixel layer of the ATLAS ITk.
Charge Sharing and Cross-Talk
Cross-talk and, in general, larger charge sharing have been observed in 25 × 100−1E sensor prototypes before irradiation. The cross-talk threshold, defined as the average charge injected in a pixel for which a hit is observed in a neighboring pixel, has been measured on FBK RD53A modules and is shown in Figure 13. In the 25 × 100−1E design, cross-talk thresholds between 12 and 25 ke have been measured by injecting one pixel and reading out the pixels adjacent to its 100 µm sides. In the 50 × 50−1E design, a cross-talk threshold of about 150 ke has been estimated by injecting the four neighboring pixels at the same time and scaling the result. Larger charge sharing in the 25 × 100−1E design with respect to the 50 × 50−1E design has also been observed in beam test measurements with CNM sensors. Before irradiation, about 50% of the perpendicularly incident tracks passing through the 25 × 100−1E sensor resulted in a cluster size larger than 1, while only 20% did so in the 50 × 50−1E sensor. After irradiation the charge sharing is mostly suppressed, and for both designs the fraction of clusters with a size larger than 1 is lower than 20% at perpendicular track incidence. This level of cross-talk and charge sharing is not considered critical for the readout and occupancy of the ITk innermost layer.

Power Dissipation

Figure 14 shows the power dissipation of RD53A modules and diodes after irradiation. The curves are calculated from the I-V characteristics presented in Section 7.1. The annealing times of the modules before the I-V measurements are of the order of 3-4 days. The diodes have instead been irradiated with neutrons up to 1.5 × 10^16 n_eq/cm^2 and measured on a probe station with the cold chuck set to −25°C after 7 days of annealing at room temperature. As expected, a higher current is measured for pixel cell designs with larger inter-electrode distances, especially for the 25 × 100−2E design. Nevertheless, no significant difference is observed in the power dissipation between 50 × 50−1E and 25 × 100−1E sensors at this irradiation level in the operational voltage range between 80 and 150 V, where more than 96% efficiency has been measured. Results of the RD53A modules irradiated with protons can be considered compatible with the measurements of diodes irradiated with neutrons, given the uncertainties in the annealing times and irradiation levels. A few diodes and RD53A sensors show a larger power dissipation below 100 V due to an early breakdown. Nevertheless, the results presented above showed the possibility of operating RD53A modules irradiated to 1 × 10^16 n_eq/cm^2 at voltages as low as 60 V with more than 90% efficiency.
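As an illustration of how curves like those in Figure 14 are obtained from I-V characteristics, a minimal sketch follows; the unit convention and the example numbers are hypothetical, chosen only to show the arithmetic, and are not taken from the text.

```python
import numpy as np

def power_density(v_bias, i_leak_mA, area_cm2):
    """Power dissipation density [mW/cm^2] from an I-V characteristic:
    P/A = V * I / A, with the current in mA so that V * mA = mW."""
    return np.asarray(v_bias) * np.asarray(i_leak_mA) / area_cm2

# hypothetical example: 2 uA at 80 V and 4 uA at 150 V over a 4 cm^2 sensor
print(power_density([80, 150], [0.002, 0.004], area_cm2=4.0))
```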
CONCLUSIONS
Novel 3D pixel sensor prototypes featuring small pixel cells and thin active substrates have been produced at FBK, CNM, and SINTEF using a single-sided approach. These sensors have been assembled with the new RD53A readout chip and evaluated for instrumenting the innermost pixel layer of the ITk. Results showed that these sensors meet the ITk specifications before irradiation in terms of capacitance, depletion voltage, and leakage current. After irradiation up to 1 × 10^16 n_eq/cm^2, both the 50 × 50−1E and 25 × 100−1E designs were demonstrated to reach a hit efficiency of 97% with a bias voltage lower than 150 V and a corresponding power dissipation within 40 mW/cm^2. Thanks to the excellent radiation hardness of this novel technology, both the 50 × 50−1E and 25 × 100−1E 3D sensor designs have been chosen to instrument the innermost pixel layer and rings of the ATLAS ITk. Due to the manufacturing complexity and the consequent low yield, the 25 × 100−2E design has instead been discarded, at least until the replacement of the inner layers. The ITk pixel project is now advancing to the production phase of the full-size ITkPix 3D sensor pre-series, aiming at establishing the reliability of the manufacturing process for the final production. Further studies of the performance of these 3D sensors are foreseen after their irradiation up to 2 × 10^16 n_eq/cm^2 to address the safety factors required by the ITk specifications.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ACKNOWLEDGMENTS

The authors thank [...] for great support and discussions at the beam test, as well as A. Dierlamm and F. Bögelspacher (KIT), K. Nakamura (CYRIC), L. Gonella and A. Hunter (University of Birmingham), and V. Cindro (JSI) for the excellent support with the irradiation at the different facilities. The measurements leading to these results have been partially performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
"Physics"
] |
Accounting for Variability in ULF Wave Radial Diffusion Models
Many modern outer radiation belt models simulate the long‐time behavior of high‐energy electrons by solving a three‐dimensional Fokker‐Planck equation for the drift‐ and bounce‐averaged electron phase space density that includes radial, pitch‐angle, and energy diffusion. Radial diffusion is an important process, often characterized by a deterministic diffusion coefficient. One widely used parameterization is based on the median of statistical ultralow frequency (ULF) wave power for a particular geomagnetic index Kp. We perform idealized numerical ensemble experiments on radial diffusion, introducing temporal and spatial variability to the diffusion coefficient through stochastic parameterization, constrained by statistical properties of its underlying observations. Our results demonstrate the sensitivity of radial diffusion over a long time period to the full distribution of the radial diffusion coefficient, highlighting that information is lost when only using median ULF wave power. When temporal variability is included, ensembles exhibit greater diffusion with more rapidly varying diffusion coefficients, larger variance of the diffusion coefficients and for distributions with heavier tails. When we introduce spatial variability, the variance in the set of all ensemble solutions increases with larger spatial scales of variability. Our results demonstrate that the variability of diffusion affects the temporal evolution of phase space density in the outer radiation belt. We discuss the need to identify important temporal and length scales to constrain variability in diffusion models. We suggest that the application of stochastic parameterization techniques in the diffusion equation may allow the inclusion of natural variability and uncertainty in modeling of wave‐particle interactions in the inner magnetosphere.
Introduction
The Van Allen outer radiation belt is a typically quiescent torus-shaped region in near-Earth space between 13,000 and 40,000 km radial distance, consisting mainly of electrons between 100s of keV and multiple MeV trapped by the Earth's geomagnetic field. Protons are also present and modeled in the radiation belts (Vacaresse et al., 1999), but here we focus on the high-energy electron population. The behavior of electrons in the outer radiation belt is affected by multiple processes, some of which are immediate responses to solar wind forcing, whereas some are more indirect energy pathways involving energy stored in the substorm cycle. Numerical modeling is a powerful tool to provide deep understanding of the behavior of the outer radiation belt, allowing us to quantify the effects of different processes (e.g., Glauert et al., 2014; Reeves et al., 2012; Shprits et al., 2008).
From a more practical standpoint, the ability to model these physical processes is becoming increasingly important as Earth becomes more dependent on space-based technologies. As of 31 March 2020 there were 135 satellites operating in medium Earth orbit (MEO; 2,000-35,786 km) and 554 in geostationary orbit (GEO; 35,786 km), therefore operating in the heart of the belt (https://www.ucsusa.org). Outer radiation belt electrons can be hazardous to these spacecraft, but there are insufficient in situ measurements available to monitor the radiation environment directly. There remains a pressing need to develop accurate models of the outer radiation belt for operational purposes, in addition to promoting further physical understanding.
One effective method to study the dynamics of the outer belt electrons is to model the evolution of electron phase space density (PSD) f(M, J, Φ; t) by a Fokker-Planck equation as a function of the three adiabatic invariants and time (Schulz & Lanzerotti, 1974). Here M, J, and Φ are the first, second, and third adiabatic invariants, respectively. It is helpful to consider Φ in terms of the adiabatic reference parameter L*, defined by L* = 2πB_0 R_E^2/Φ (Roederer, 1970), where B_0 is the equatorial magnetic field strength at the Earth's surface and R_E is the Earth's radius. Since a first-principles model of wave-particle interactions in the outer radiation belt is intractable across its large volume and long timescales, the physics within the outer radiation belt is instead effectively described by diffusive processes. Each type of diffusion (pitch angle, energy, and radial) by each wave mode is described in the Fokker-Planck equation by a diffusion coefficient D_ij. A myriad of different wave-particle interactions is important for the radiation belts. For example, very low frequency (VLF) whistler mode chorus waves mediate energy diffusion (Thorne et al., 2013), whereas VLF whistler mode hiss (Lyons & Thorne, 1973; Meredith et al., 2007) and ULF electromagnetic ion cyclotron (EMIC) waves (Kersten et al., 2014) predominantly diffuse in pitch angle and therefore contribute to loss. ULF wave-driven radial diffusion at Pc-5 frequencies is considered to be an important and effective mechanism to transport and accelerate relativistic electrons in the outer radiation belt (Elkington et al., 2003; Mann et al., 2013; Ozeke et al., 2017, 2018; Shprits et al., 2008).
In this paper we focus on radial diffusion as a result of ULF waves, which in the diffusion framework can be modeled as a straightforward one-dimensional problem. All of the physics is contained in the radial diffusion coefficient D_LL, which is proportional to ULF wave power. A wealth of data exists both on the ground and in space to calculate ULF wave power and construct D_LL (Dimitrakoudis et al., 2015; Li et al., 2017; Liu et al., 2016; Ozeke et al., 2012, 2014; Ukhorskiy et al., 2009). Empirical models formulate analytic expressions for D_LL from ULF wave power data over long timescales, aiming to capture the spatiotemporal evolution of D_LL in such a way that, although rapid changes cannot be accurately captured, the long timescale behavior of the outer radiation belt may be adequately described (e.g., Ozeke et al., 2018). In this paper, we wish to highlight the numerical consequences of using different methods for modeling the temporal and spatial variability of D_LL with more realistic values that represent the underlying probability distribution of ULF wave power.
Many theoretical approximations exist for the radial diffusion coefficient D_LL, based on a variety of assumptions (Ali et al., 2016; Birmingham, 1969; Cornwall, 1968; Elkington et al., 2003; Fälthammar, 1966, 1968; Fei et al., 2006; Lejosne et al., 2013; Liu et al., 2016; Schulz & Lanzerotti, 1974). All of these approximations are constrained by some statistical parameterization of ULF wave power obtained from many years of space- or ground-based observations. The most widely used D_LL parameterizations in radiation belt models are parameterized by the geomagnetic index Kp (Brautigam & Albert, 2000; Ozeke et al., 2012, 2014). These parameterizations are deterministic, with a single output for each value of Kp.
Typical approaches in radiation belt modeling follow a classical parameterization approach whereby average or median D_LL values are used. These values only change when the fit parameters change, and therefore there is a chance that the full range of variability of D_LL is not captured in this classical approach. In numerical weather prediction and climate modeling, classical parameterizations have proven to be insufficient. Instead, stochastic parameterizations are used to capture the whole distribution of behavior in underlying physical processes to yield improved results. Note that previous attempts to capture more realistic variability in ULF-mediated radial diffusion have used observations to recreate event-specific models of diffusion (Perry et al., 2005; Riley & Wolf, 1992; Tu et al., 2012). These types of study, although potentially more accurate, are limited to test cases with available data in space and time. We propose that in cases where direct data are lacking, it is still possible to capture the full range of behavior in the problem using stochastic parameterizations (e.g., Watt et al., 2017), and we demonstrate a simple implementation of this technique in this paper.
Here we present a series of idealized numerical experiments of radial diffusion over a hypothetical period of constant geomagnetic activity. These experiments offer a proof of concept intended to explore the spatiotemporal impacts of including stochastic variability in comparison with the Ozeke et al. (2014) ULF radial diffusion coefficients in the radial diffusion equation and to highlight current deterministic model limitations. Any significant discrepancies between the deterministic and stochastic models should motivate further research questions to better understand the physical processes underlying ULF wave-driven radial diffusion to include in our models for improved accuracy. The remainder of this paper is structured as follows. Sections 2-4 describe the radial diffusion problem, the implementation of stochastic parameterization, and the setup and description of the idealized experiments, respectively. Section 5 presents the results from the numerical experiments. Section 6 discusses the impact of the results in the wider context of the outer radiation belt. Section 7 describes conclusions and remarks from this paper.
Modeling the Radial Diffusion Equation
We focus on the radial diffusion equation as a simplified approximate model of electron behavior in the outer radiation belt. Although the one-dimensional description of radial diffusion has successfully reproduced electron behavior during some events (e.g., Ozeke et al., 2018; Shprits et al., 2005), the diffusion framework itself is not always accurate. Previous studies have calculated radial diffusion coefficients directly in "event-specific" analyses and demonstrate that diffusion-based models can have difficulty accurately rendering event-specific dynamics (Ukhorskiy et al., 2009). Here, we intend these numerical experiments as a straightforward demonstration of the concept of stochastic parameterization. Radial diffusion is also a valid and important part of more complicated outer radiation belt models, where it is joined by diffusion processes in velocity space due to other wave modes. Over the long timescales studied in diffusion models, we observe that empirical models for D_LL, in whichever theoretical framework they are constructed, naturally have some uncertainty. Investigating the consequences of that uncertainty is our aim in this work.
In this demonstration we simplify the behavior of high-energy electrons in the outer radiation belt and focus on radial diffusion across Roederer L* (Roederer, 1970), hereon denoted L. Here, the first and second adiabatic invariants, M and J, are conserved. The evolution of the distribution function of trapped particles f(M, J, Φ; t) can be related to the distribution function at time t + Δt (without sources or sinks) by

f(Φ; t + Δt) = ∫ f(Φ − ϕ; t) Π(Φ − ϕ, ϕ; t) dϕ, (1)

where Π(Φ − ϕ, ϕ, t) is the probability that a particle with an invariant shell coordinate Φ − ϕ at time t will end up with coordinate Φ at time t + Δt. By Taylor expanding f and Π to first order in t on the left and to second order in Φ in the integral, we obtain the one-dimensional Fokker-Planck equation

∂f/∂t = −∂(D_Φ f)/∂Φ + ∂^2(D_ΦΦ f)/∂Φ^2. (2)

Here D_Φ and D_ΦΦ are the first- and second-order Fokker-Planck diffusion coefficients, respectively. If we assume the following relation for D_Φ, the average change of Φ per unit time for one particle on the shell Φ during that time interval,

D_Φ = ∂D_ΦΦ/∂Φ, (3)

then Equation 2 reduces, after transforming from Φ to L, to the radial diffusion equation

∂f/∂t = L^2 ∂/∂L (D_LL/L^2 ∂f/∂L). (4)

For radial diffusion to be effective, a radial gradient in the PSD is required, which we assume here. A precipitation loss term is often also added to Equation 4, which is ignored here in the idealized case. Radial diffusion is considered across L = 2.5-6. Dirichlet and Neumann boundaries are imposed on the inner and outer boundaries, respectively:

f(L = 2.5, t) = f(L = 2.5, 0) ∀t; ∂f/∂L |_{L=6} = 0 ∀t. (5)

In reality the gradient across the outer boundary will not be 0, and many radiation belt models either determine the outer boundary from electron flux data observed by spacecraft (e.g., Drozdov et al., 2017; Glauert et al., 2018; Shin & Lee, 2013) or use plasmasheet characteristics (Christon et al., 1988, 1991) and magnetic activity dependencies (Bourdarie & Maget, 2012) for analytic fits (Maget et al., 2015).
In Equation 4, D_LL represents the ULF wave radial diffusion coefficient. Constructed through a coordinate transform of the flux invariant diffusion coefficient D_ΦΦ, D_LL is formally defined (Roederer & Zhang, 2014) in terms of the dipole-distortion parameter R_s, its relative fluctuation ΔR_s/R_s, and the drift period τ_d, where ⟨⟩ denotes the drift-average operator. In a realistic setting, R_s would be represented by a parameter that globally describes magnetospheric activity, such as Kp or ULF wave power. Applications of different frameworks to describe large-scale fluctuations of electric and magnetic fields (e.g., Brautigam & Albert, 2000; Brautigam et al., 2005; Lejosne et al., 2013; Ozeke et al., 2012, 2014) employ different assumptions, but many ultimately require some estimate of the power spectral density of ULF fluctuations in electric and/or magnetic fields. We note that from this formal definition and from theoretical estimates of D_LL, there are inherent minimum temporal scales on which D_LL is constructed: by definition D_LL is constructed for timescales longer than the drift period of the electrons, longer than a few periods of the ULF wave fluctuations, and of the same order as or longer than the solar wind driving processes that induce the ULF fluctuations. In many cases, ULF power spectral density is estimated from observations over a period of at least an hour (see Ozeke et al., 2014), and so we employ this as the smallest timescale of variability in our study.
We consider as a deterministic reference model the empirical L- and Kp-parameterized D_LL presented by Ozeke et al. (2012, 2014). This model is a simplification of the theoretical analysis presented by Fei et al. (2006) and assumes that median ULF wave power is representative of expected ULF wave power. The most notable feature of this model is that the uncertainty in the statistical representation of ULF power spectral density has been quantified, allowing us to perform this demonstration using observationally derived constraints. Other models exist, which are similarly parameterized by Kp activity, with some following the same theoretical framework as Fei et al. (2006) (e.g., Brautigam et al., 2005) and others pursuing other frameworks (e.g., Lejosne et al., 2013), but none explicitly state and characterize the uncertainty in their models as in Ozeke et al. (2012, 2014). We note that assessing the accuracy of the theoretical framework used to estimate D_LL is beyond the scope of this paper, and we direct the interested reader toward Lejosne (2019) for a thorough review of such frameworks. We reiterate that the Ozeke et al. (2014) empirical D_LL model contains explicit estimates of uncertainty, which makes it appropriate for use in our demonstration.
Since the azimuthal electric field radial diffusion coefficient, D_LL^E, typically dominates, in these idealized experiments we omit the compressional magnetic component and base our stochastic parameterization around the model for D_LL = D_LL^E, expressed per day by

D_LL^E = 2.16 × 10^-8 × L^6 × 10^(0.217 L + 0.461 Kp). (8)

We describe in the following section how we implement our estimates of D_LL^E(t) by perturbing Equation 8 in such a way as to recover a better representation of the underlying distribution of D_LL^E across a period of time.
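As a concrete illustration, a minimal sketch of this deterministic parameterization follows, assuming the coefficients quoted in Equation 8; the conversion to s^-1 matches the 1 s time step used later in the numerical scheme.

```python
import numpy as np

def d_ll_electric(L, Kp):
    """Deterministic electric-field radial diffusion coefficient, in days^-1,
    with coefficients as quoted in Equation 8 above."""
    return 2.16e-8 * L**6 * 10**(0.217 * L + 0.461 * Kp)

def d_ll_per_second(L, Kp):
    # convert days^-1 to s^-1 for use with a 1 s time step
    return d_ll_electric(L, Kp) / 86400.0

L = np.arange(2.5, 6.0 + 1e-9, 0.1)
D = d_ll_per_second(L, Kp=3)  # Kp = 3, the "unsettled" activity level used here
```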
We solve the radial diffusion equation using a modified Crank-Nicolson second-order finite-difference scheme presented by Welling et al. (2011), which is semi-implicit and unconditionally stable. The chosen grid and time steps for our numerical experiments are 0.1 L and 1 s, respectively, following extensive verification of the numerical scheme to determine a suitable trade-off between numerical error and computational cost for the experiments (see the supporting information).
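For illustration, the following is a minimal sketch of one semi-implicit Crank-Nicolson update for Equation 4 on a uniform L grid, with the boundary conditions of Equation 5. It uses a dense linear solve for clarity, whereas a tridiagonal solver would be preferred in practice, and it is a sketch of the approach rather than the exact Welling et al. (2011) implementation.

```python
import numpy as np

def crank_nicolson_step(f, L, dt, D_mid):
    """One Crank-Nicolson step for df/dt = L^2 d/dL ( D_LL / L^2 df/dL ).

    f     : PSD on the radial grid
    L     : uniform radial grid (e.g. 2.5 to 6 in steps of 0.1)
    dt    : time step [s]
    D_mid : D_LL at the cell interfaces L[i] + dL/2 (length len(L) - 1, s^-1)
    """
    n = len(L)
    dL = L[1] - L[0]
    # interface "conductances" k_{i+1/2} = D_{i+1/2} / L_{i+1/2}^2
    Lmid = 0.5 * (L[:-1] + L[1:])
    k = D_mid / Lmid**2
    # tridiagonal spatial operator A f ~ L^2 d/dL (k df/dL) at interior points
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = L[i]**2 * k[i - 1] / dL**2
        A[i, i] = -L[i]**2 * (k[i - 1] + k[i]) / dL**2
        A[i, i + 1] = L[i]**2 * k[i] / dL**2
    I = np.eye(n)
    lhs = I - 0.5 * dt * A          # implicit half step
    rhs = (I + 0.5 * dt * A) @ f    # explicit half step
    # Dirichlet inner boundary: f fixed at L = 2.5
    lhs[0, :] = 0.0; lhs[0, 0] = 1.0; rhs[0] = f[0]
    # Neumann outer boundary: zero gradient at L = 6
    lhs[-1, :] = 0.0; lhs[-1, -1] = 1.0; lhs[-1, -2] = -1.0; rhs[-1] = 0.0
    return np.linalg.solve(lhs, rhs)
```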
Stochastic Parameterization
We suggest that the most physically intuitive method to implement stochastic parameterization is to focus efforts on the representation of the diffusion coefficient, since it is the variable that contains all the information about the wave-particle interaction. The diffusion coefficient parameterization has been shown to be subject to a large amount of variability, especially during storm times (Murphy et al., 2016). In this work, we choose a straightforward method to model D_LL(L, t) that involves constructing a noisy temporal or spatial series that retains the key known properties of the distribution of D_LL. More sophisticated techniques, such as autoregressive moving average (ARMA) models, can be used to create spatiotemporal series of the diffusion coefficients with the appropriate autocorrelative properties. However, these rely on important characteristic scales of spatial and temporal variability that are not yet known.
We do, however, have access to some information constraining the expected distribution of D_LL. Bentley et al. (2018) found that the probability distribution of ground-based ULF wave power appears log-normal (LN). We infer from this that D_LL is also likely to be approximately LN; indeed, Ozeke et al. (2014) confirm that the distribution of D_LL in space is not Gaussian and is log-symmetric, since the interquartile range (IQR) is reported to lie between one third and three times the median. Hence, it is appropriate to construct a noisy time series for D_LL by multiplying the median D_LL by a random LN noise factor ϵ, resulting in a time series that, when aggregated over a long period of time, reproduces the required LN distribution. If we constructed a noisy temporal or spatial series by adding Gaussian noise to the median D_LL instead, the resulting distribution of D_LL could not be LN, since it would have the potential to include negative values of diffusion, which would also be difficult to interpret in this context.
To investigate the consequences of variability, we consider ensembles of numerical experiments. In each case we compute the solutions of the radial diffusion equation using the Crank-Nicolson scheme described above, where D_LL(t) is separately constructed each time using the methods described below. Our recreations of D_LL(t) do not alter the underlying Fokker-Planck diffusion theory but produce realizations of D_LL that better recover the underlying distribution of ULF power spectral density. Future work will seek to identify the most appropriate methods to model both the diffusion coefficient and its variability, but the straightforward methods we adopt here serve to illustrate the behavior of the radial diffusion equation when stochastic parameterization is adopted using known constraints.
In each numerical experiment we run an ensemble with 250 ensemble members, providing a span of possible realizations of 48 hr D_LL time series resulting from the inclusion of stochastic variability. Convergence testing of our numerical experiments (see the supporting information) demonstrates that 250 ensemble members are sufficient to capture the behavior of the experiment.
In all experiments we choose Kp = 3, corresponding to "unsettled" geomagnetic activity. Unsettled geomagnetic activity allows us to explore stochastic variability during periods where the radial diffusion coefficients are large enough to produce visible changes after 48 hr. We also wish to avoid the illogical situation of having a very high level of geomagnetic activity while enforcing a constant outer boundary. For the demonstrations presented in this paper, a compromise of Kp = 3 was felt to be appropriate. The initial PSD is chosen to provide a peak inside the computational domain, as expected in the outer radiation belt, and a zero gradient at the outer boundary for ease of computation in these illustrative experiments; the profile combines a Gaussian term and an error-function (erf) term, where we have chosen A = 9 × 10^4, μ = 4, σ = 0.38, B = 0.05, and γ = 5. Such a profile is reasonable when compared to satellite observations (e.g., see Figures 1 and 2 in Boyd et al., 2018).
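The exact functional form of the initial profile is not reproduced here, so the following sketch is a plausible reconstruction only: it combines a Gaussian peak with an erf term using the quoted parameter values, and the way the two terms are combined is an assumption.

```python
import numpy as np
from scipy.special import erf

# parameters quoted in the text; the combination below is hypothetical
A, mu, sigma, B, gamma = 9e4, 4.0, 0.38, 0.05, 5.0

def initial_psd(L):
    peak = A * np.exp(-(L - mu)**2 / (2 * sigma**2))  # Gaussian peak at L = 4
    tail = B * (1 + erf(gamma * (L - mu)))            # assumed form of the erf term
    return peak + tail
```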
If one wanted to carry out the equivalent calculation in Φ space (with a transformed diffusion equation), it suffices to use the coordinate transform between D_LL and D_ΦΦ given by Roederer and Zhang (2014). The initial PSD profile and proposed boundary conditions result in the expected radial diffusion process drawing PSD from central L toward both boundaries.
Experiment 1: Temporal Variability of D_LL
Our first experiment focuses on the temporal variation of D_LL across a range of timescales. We employ a simple method, where the D_LL in Equation 8 is multiplied by a random factor ϵ, which changes every Δt. The same factor ϵ is applied at each value of L in the model. The choice of distribution of ϵ is guided by the statistical analysis presented by Ozeke et al. (2014), who found that the IQR of observed wave power implies that D_LL lies between a third of and three times the model value 50% of the time. We use this information to control the variance of the noise. Combined with recent studies that suggest that ULF wave power spectral densities appear LN (Bentley et al., 2018), we construct a log-normally distributed variability, with log(ϵ) drawn from a normal distribution with mean 0 and standard deviation σ = 2 log(3)/1.34896; note that for a normally distributed random variable, the IQR is approximately 1.34896 multiplied by the standard deviation. We consider variability Δt = 1, 3, 6, 12, and 24 hr, and example ensemble members for each of these cases are shown in Figure 1. They are effectively artificial representations of what might be observed in situ.
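A minimal sketch of this construction, assuming a piecewise-constant ϵ(t) held fixed within each variability interval:

```python
import numpy as np

rng = np.random.default_rng(42)

def lognormal_factor_series(n_steps, dt_hours, block_hours,
                            sigma=2 * np.log(3) / 1.34896):
    """Piecewise-constant multiplicative noise eps(t) for D_LL.

    A new log-normal factor (median 1) is drawn every `block_hours`, so that
    aggregated over many draws eps has quartiles at 1/3 and 3, matching the
    observed IQR of D_LL."""
    n_blocks = int(np.ceil(n_steps * dt_hours / block_hours))
    eps_blocks = rng.lognormal(mean=0.0, sigma=sigma, size=n_blocks)
    steps_per_block = int(block_hours / dt_hours)
    return np.repeat(eps_blocks, steps_per_block)[:n_steps]

# one ensemble member: 48 hr sampled hourly, a new factor every 3 hr
eps = lognormal_factor_series(n_steps=48, dt_hours=1, block_hours=3)
```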
Experiment 2: Spatial Variability of D_LL
In Experiment 1, D_LL was constructed with perfect correlation across all L, with the same ϵ applied to all L-shells. This is one extreme of spatial correlation in L, with the Ozeke et al. (2014) D_LL scaling as a smooth, monotonically increasing profile. We hereon refer to this approach as global variability. However, we must consider that although the statistical profile of D_LL(L) is smooth, individual realizations of D_LL(L, t) may be less smooth. In this experiment, we investigate how radial diffusion responds to a realized D_LL, which may vary on local spatial scales and not necessarily be a smooth, monotonically increasing function of L.
We now consider the log-normally distributed variability applied every 3 hr, comparing the global variability with local spatial correlation scales. We consider cases where D_LL varies independently on spatial scales of 1 L, 0.5 L, and 0.1 L. Example ensemble members for each of these cases are shown in Figure 2. The final case denotes the other extreme, where measures of D_LL(L, t) are independent at all grid points; that is, an independent ϵ is applied at each grid point in L to create an ensemble of D_LL varying both spatially and temporally.
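The corresponding spatial construction can be sketched as below; the block-wise assignment of independent factors is our simple reading of the scales described above.

```python
import numpy as np

rng = np.random.default_rng(7)

def spatially_varying_eps(L_grid, scale_L, sigma=2 * np.log(3) / 1.34896):
    """One realization of eps(L): independent log-normal factors on blocks of
    width `scale_L`, constant within each block. scale_L = None reproduces
    the global case with a single factor applied at all L."""
    if scale_L is None:
        return np.full(L_grid.size, rng.lognormal(0.0, sigma))
    dL = L_grid[1] - L_grid[0]
    pts_per_block = max(1, int(round(scale_L / dL)))
    n_blocks = int(np.ceil(L_grid.size / pts_per_block))
    eps_blocks = rng.lognormal(0.0, sigma, size=n_blocks)
    return np.repeat(eps_blocks, pts_per_block)[:L_grid.size]

L = np.arange(2.5, 6.0 + 1e-9, 0.1)
eps_half = spatially_varying_eps(L, scale_L=0.5)  # independent on 0.5 L blocks
eps_grid = spatially_varying_eps(L, scale_L=0.1)  # independent at every grid point
```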
We have retained temporal variability in this experiment to maintain our goal of creating D_LL time series that represent realistic values. Ground magnetometer ULF wave power measurements, and consequently D_LL, do not typically remain constant over 2 days (e.g., Olifer et al., 2019). Results from differing spatial variability scales can therefore be interpreted in conjunction with the 3-hourly temporal variability.
In a more physical realization, we would expect spatial correlations across L to be less crude and abrupt, and likely to exhibit smoother variations with appropriate length scales. However, for the purpose of this demonstration, we have chosen the simplest way to apply spatial variability in the model, to motivate the importance of understanding the spatial structure of radial diffusion across L.

Experiment 3: Width of the D_LL Probability Distribution

The empirical Ozeke et al. (2014) D_LL parameterization is based on the median of statistical ULF wave power, and uncertainty in the parameterization has the multiplicative IQR [D_LL/3, 3 D_LL] mentioned previously. We compare the IQR suggested by Ozeke et al. (2014) with larger and smaller IQRs, namely multiplicative factors of ±2, ±3, ±6, and ±10 of the deterministic D_LL (cf. Figure 5). Larger variances may be necessary if the variability of D_LL is not simply due to the variability in observed ground-based ULF power spectral density. Smaller variances have been considered to see the effect of an "improved" parameterization (i.e., one where the parameters are chosen in a way that minimizes the variance). In each of these cases, ensemble D_LL time series are formulated by applying variability globally across L every 3 hr, with the distribution of the variability LN.
Experiment 4: Shape of the D_LL Probability Distribution
Each of Experiments 1-3 utilized a log-normally distributed variability, chosen based on statistical studies of ULF wave power spectral densities parameterized by solar wind variables (Bentley et al., 2018). The IQR presented by Ozeke et al. (2014) describes the uncertainty in the deterministic parameterization, but we do not know how the D_LL values are distributed in a Kp-based model. Adopting the values and log-symmetric nature of the Ozeke et al. (2014) IQR in order to preserve statistical averages (a zero mean and median in the logarithm), a range of log-symmetric distributions for the variability is tested. We consider log-uniform (LU), LN, log-Laplace (LL), and log-Cauchy (LC) distributions, which provides a set of distributions ranging from bounded to heavy tailed (for further information about each of these distributions, please see the supporting information). Since the heavy tailed distributions can easily produce variabilities resulting in a D_LL which is unrealistically many orders of magnitude larger than the deterministic solution, for this experiment we bound the variability by 3 orders of magnitude (i.e., the variability can increase/decrease D_LL up to a maximum/minimum of 3 orders of magnitude compared to the reference value). The respective probability density functions (PDFs) of the variability distributions are, for x > 0,

p_LU(x) = I_[a,b](log x) / (x (b − a)),
p_LN(x) = exp(−(log x)^2 / (2 σ_N^2)) / (x σ_N √(2π)),
p_LL(x) = exp(−|log x| / σ_L) / (2 σ_L x),
p_LC(x) = σ_C / (π x (σ_C^2 + (log x)^2)),

where I_[,] is the characteristic function. Here the quantities a, b, σ_N, σ_L, and σ_C are the parameters of the underlying uniform, normal, Laplace, and Cauchy distributions, respectively. The parameters were calculated from their corresponding cumulative density functions in order to preserve the IQR specified by Ozeke et al. (2014) (see the supporting information).
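A minimal sketch of sampling these bounded log-symmetric variabilities follows; the quartile-matching algebra fixes each scale parameter so that the IQR of ϵ is [1/3, 3], and the hard clipping at three orders of magnitude is one simple way to implement the stated bound.

```python
import numpy as np

rng = np.random.default_rng(3)

def bounded_log_symmetric_eps(dist, size, bound_decades=3):
    """Multiplicative variability eps with median 1, IQR [1/3, 3], truncated
    to +/- `bound_decades` orders of magnitude."""
    q = np.log(3)  # upper quartile of log(eps)
    if dist == "log-uniform":
        y = rng.uniform(-2 * q, 2 * q, size)        # quartiles at +/- q
    elif dist == "log-normal":
        y = rng.normal(0.0, q / 0.67449, size)      # Phi^-1(0.75) = 0.67449
    elif dist == "log-laplace":
        y = rng.laplace(0.0, q / np.log(2), size)   # Laplace CDF(q) = 0.75
    elif dist == "log-cauchy":
        y = rng.standard_cauchy(size) * q           # tan(pi/4) = 1
    else:
        raise ValueError(dist)
    bound = bound_decades * np.log(10)
    return np.exp(np.clip(y, -bound, bound))
```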
Results
The figures showcasing results for each experiment generally follow the same format. The initial PSD and the resulting PSD from the constant deterministic D_LL are shown. By the log-symmetric nature of the D_LL probability distributions in each experiment, the constant deterministic D_LL is precisely the median diffusion coefficient from the ensemble and a natural reference for comparison. The mean diffusion coefficient is considered in section 6. There is no convention regarding which statistical measure is most appropriate in ensemble modeling (Knutti et al., 2010), and we have therefore shown two natural measures, the ensemble mean and median. By ensemble mean (median) PSDs, we mean the PSD profile resulting from taking the mean (median) across all ensemble members at each L, not a specific member of the ensemble. The kernel density estimates (KDEs) of the ensembles are also shown. Kernel density estimation is a mathematical process of finding an estimated PDF of a random variable, inferring attributes of a population based on a finite data set. In the case of our ensembles, the contribution of each ensemble member value in L-PSD space is smoothed out into a region of space surrounding it. Aggregating each of these smoothed points provides an image of the overall ensemble structure and density function. Ensemble modes, another useful measure of the ensemble result, can be estimated from this density function (Kourentzes et al., 2014).
The KDEs shown in our figures are normalized within each column, meaning that if a single L column were extracted, the result would be a PDF estimate of the PSD at that particular L. KDEs are therefore useful in an ensemble setting, since they allow us to see where ensemble member solutions cluster in the phase space. In our estimates the KDEs are calculated over 100 bins.
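A minimal sketch of such a column-wise KDE, using scipy's Gaussian KDE as a stand-in for whatever estimator was actually used:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ensemble_column_kde(psd_members, psd_axis):
    """Per-L kernel density estimates of an ensemble of PSD profiles.

    psd_members : (n_members, n_L) array of final PSD profiles
    psd_axis    : (n_bins,) PSD values at which each column's KDE is evaluated
    Returns an (n_bins, n_L) image; each column is a PDF estimate of the PSD
    at that L. Columns where all members coincide (e.g., the fixed inner
    boundary) have a singular covariance and need special handling."""
    n_L = psd_members.shape[1]
    image = np.zeros((psd_axis.size, n_L))
    for j in range(n_L):
        image[:, j] = gaussian_kde(psd_members[:, j])(psd_axis)
    return image
```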
Experiment 1-Temporal Scales
Results of the ensembles for the variety of temporal variability scales are shown in Figure 3. For ensemble medians, inclusion of a LN variability results in more diffusion than the constant deterministic D_LL at all variability temporal scales less than 24 hr, with the magnitude of diffusion increasing as the temporal scale decreases. The ensemble median for a temporal variability of 24 hr is identical to the deterministic solution, suggesting that on long timescales a deterministic parameterization of D_LL is sensible for a D_LL with daily variation. Results for the ensemble mean are similar, except we observe more diffusion than the constant D_LL at all temporal scales. This is unsurprising, since the Ozeke et al. (2014) D_LL is based on the median of log-symmetric distributions, where means are larger than medians. Therefore, the ensemble D_LL time series at all temporal scales will have a mean larger than both the deterministic approximation and the ensemble median, resulting in more diffusion. An interesting result lies in the comparison of ensemble medians and means. At the most rapid temporal D_LL variability of 1 hr, results from the ensemble mean and median are identical. As the temporal variability becomes less rapid, both exhibit less diffusion, but the profiles separate, with the ensemble median displaying increasingly less diffusion than the mean as it approaches the deterministic solution at daily variability.
Over all temporal variability scales, the occurrence of possible states in the set of all ensemble solutions spans similar regions. For the rapid 1 hr variability, the set of all solutions is more diffusive than the deterministic case. The deterministic solution becomes increasingly closer to the denser region of ensemble solutions with larger temporal scales, falling exactly in the region of highest probability for daily variation. We see that increasing the frequency of D_LL variability tends to a single-mode solution in density, which is more diffusive than that produced by the deterministic model. Inclusion of the variability expressed by Ozeke et al. (2014) in their 3-hourly deterministic model produces a span of solutions which vary greatly from the deterministic case at all L, most of which are more diffusive. The use of the median-based deterministic parameterization may therefore not be robust. When we allow the stochastic D_LL to vary daily, however, the deterministic solution falls exactly in the regions of highest probability, emphasizing again that the deterministic approximation is more suitable for a daily varying D_LL. When including variability, the deterministic parameterization frequently produces lower estimates of radial diffusion, so understanding the temporal variability of ULF wave power spectral density is important to know the extent of potential underestimation.
Experiment 2-Spatial Scales
Ensemble results for Experiment 2 are shown in Figure 4. We find that on average all spatial scales of variability result in similar levels of diffusion, but all exhibit more diffusion than the deterministic solution. In each case the ensemble means and medians are almost identical. Most importantly, we observe variance reduction in the set of ensemble solutions as independence of D_LL measurements occurs on increasingly smaller spatial scales, with the distributions tending toward a single-mode solution of diffusion similar to those exhibited by the ensemble median and mean. A smaller variance implies the possibility of a stronger parameterization with reduced uncertainty. It is important to investigate instantaneous observations of ULF wave power across multiple latitudes to better understand spatial correlations and coherence across L*, since regions of independent power measurements could allow for better parameterizations of D_LL.
Experiment 3-Variance
Figure 5 shows the ensemble results for Experiment 3, with each variance expressed in terms of the variability IQR. It is evident that radial diffusion is very sensitive to the width of the variability distribution. Just doubling the multiplicative scaling of the IQR suggested by Ozeke et al. (2014) results in significantly more diffusion in both ensemble averages, reducing the peak in PSD by around 20,000. The shape of the distribution for the set of all ensemble solutions also changes drastically, with a large density of solutions tending to the asymptotic result controlled by the boundary conditions. Although a wider variability distribution equally allows for both significantly larger and smaller values of D_LL, the radial diffusion equation is clearly heavily sensitive to the larger values, which drive radial diffusion to significant levels beyond the deterministic approximation.
As seen in the other experiments, the introduction of any variability, regardless of its width, results in more diffusion than the deterministic solution when considering ensemble averages. However, if the uncertainty in the deterministic model were to have a slightly smaller multiplicative IQR of ±2 times the Ozeke et al. (2014) D_LL, the variance of all ensemble solutions would decrease significantly. With this smaller variance, the ensemble mean and median PSDs are closer to the deterministic model, which also falls within the set of ensemble solutions. This suggests that parameterization of ULF radial diffusion coefficients should prioritize variance reduction in order to be better representative of the underlying physical process; this draws attention to the efficiency of binning by geomagnetic index Kp, from which most of the uncertainty arises (Ozeke et al., 2014).
Experiment 4-Underlying Distribution
Ensemble results for Experiment 4 are shown in Figure 6. Differences between the heavy and non-heavy tailed distributions are apparent in the ensemble medians. Although studies suggest that ground-based ULF power spectral density is LN when parameterized by solar wind variables (Bentley et al., 2018), the distribution of uncertainty in the Kp-based Ozeke et al. (2014) model is not disclosed. If the distribution were to be heavy tailed or LU (which may be considered to have the heaviest tail, as all values in the uniformly distributed component have an equal chance of being sampled), we see more than double the median diffusion compared with a log-normally distributed variability. For scenarios where the expected ULF wave power is not a statistical average, the assumed LN variability can exhibit as much diffusion as some of the heavy tailed variabilities, but this is less likely, as shown in the KDEs. In any case, with the inclusion of variability in D_LL for all probability distributions, we see significantly more diffusion than the deterministic solution, with notable variance in ensemble solutions for all variability distributions. The heavier tailed variabilities have denser regions approaching that of the asymptotic solution, and the shape of the KDEs across L-shells is quite distorted, contrary to the smoothness seen for a LN D_LL. Since there are multiple components of interest in the ensemble results, studies investigating the true underlying probability distribution of ULF wave power are vital to quantifying the shortfall and uncertainty introduced by a deterministic empirical D_LL based upon statistical averages.
Discussion
In the outer radiation belt, radial diffusion has the ability both to accelerate electrons to relativistic energies and to produce fast losses, where the efficiency of the acceleration increases with increasing ULF wave activity (Elkington et al., 2003; Shprits et al., 2008). Many models use an empirical deterministic radial diffusion coefficient dependent on L and Kp, which may sacrifice accuracy (Brautigam & Albert, 2000; Brautigam et al., 2005; Ozeke et al., 2012, 2014). In this paper we present idealized numerical experiments which investigate the impact of including variability in the radial diffusion equation. Our experiments reintroduce the variability into a parameterized model, where D_LL has been binned by Kp. We use the observationally constrained variability to construct a variable D_LL that reproduces a realistic distribution of values and compare against the constant parameterized value. We employ constant boundary conditions and only study one value of the controlling parameter Kp. In this way, we isolate only the variability of D_LL due to its parameterization by Kp.
In all experiments we found that the mean and median of the ensembles exhibit increased diffusion above that for the deterministic approximation. One way to interpret these results is that when the likelihood of strong radial diffusion is large over a particular period (either because the variance in the parameterization is large or because the underlying distribution has a heavy tail), the diffusion exceeds what one would expect from using a constant diffusion coefficient. It is important to bear in mind that the times when diffusion is weak will not counteract the times when diffusion is strong, because there is no means of reversing the diffusion; hence, the periods when diffusion is much stronger than the median will dominate the temporal evolution of the experiment. When the diffusion varies more rapidly, each member of the ensemble is more likely to contain a period of strong diffusion over the fixed 48-hr experiment length, thus contributing to stronger diffusion in the mean/median of the ensemble. The ensembles are also sensitive to the size of the variance (see Experiment 3), again suggesting that it is the likelihood of ensemble members containing periods of very strong diffusion that dominates the ensemble results.
The collected range of numerical experiments suggests that over extended time periods, infrequent instances of very efficient ULF wave-particle interactions make important contributions to radial diffusion and should be included in models in some way. We also note that by using an ensemble framework, the uncertainty in the PSD is explicitly quantified, providing a range of confidence in the model for more accurate radiation belt modeling. The quantification of uncertainty in D_LL is also important for future data assimilation methods.
Experiment 1 indicates that the amount of diffusion depends upon how rapidly the diffusion coefficient varies. Hence, it is important to understand the timescales of variability. ULF wave power can vary on a range of timescales, which would ideally be accounted for in the radial diffusion coefficient. For example, ULF wave power can increase and persist on the order of tens of minutes during an auroral activation due to substorms (Rae et al., 2011), while decaying on hourly timescales during strong poloidal wave events (Liu et al., 2011). Parameterization of D_LL with Kp may therefore not be optimal, since it may not vary quickly enough.
We found that variation of D_LL with the added inclusion of local spatial variabilities on a range of length scales resulted in more diffusion than the deterministic solution (see Experiment 2). However, when considering the ensemble averages, all levels of spatial coherence across L* performed similarly. Since applying variability on subglobal spatial scales still allows for an enhanced D_LL at several L, this result is somewhat counterintuitive given those found in the other experiments. While instances of weaker diffusion cannot counteract the temporal evolution imposed by instances of stronger diffusion, counteractions can occur across spatial scales, creating a net diffusion that seems to follow that observed with a globally applied variability. More interestingly, we found that the variance of the possible states in the set of all ensemble solutions decreases significantly with variability applied on increasingly smaller subglobal spatial scales. It is important to understand and quantify these spatial scales. Rae et al. (2019) showed the evolution of ground-based ULF wave power during geomagnetic storms. ULF wave power can exhibit spatial coherence across ranges of L but does not rise and fall everywhere simultaneously, due to the complicated evolution of cold plasma density and magnetic field strength in the inner magnetosphere. They also present evidence that the temporal variability of ULF wave power may vary with L. It may also be that spatial coherence varies with time and geomagnetic activity. The spatial variability (in the radial direction) of drift-averaged diffusion due to ULF waves throughout the outer radiation belt promises a rich vein of future work.
The sensitivity of radial diffusion to the variance of the full probabilistic distribution of D_LL was explored in Experiment 3. For small variances, the diffusion results approach those of the deterministic model, as expected. But as the variance is increased, the diffusion results rapidly diverge. These results suggest that it is worth seeking alternative parameterizations that focus on variance reduction in the construction of the diffusion model. Another way to reduce the variance in the parameterization may be to focus on the calculation of D_LL itself. For example, D_LL^E in the Ozeke et al. (2014) model was constructed via a mapping technique that utilized several assumptions: constant (low) wave number m = 1, constant width of the wave activity in latitude, and constant ionospheric conductance parameters (Ozeke et al., 2009). These quantities are typically not constant; they contribute to the uncertainty in the deterministic model and should be included in the stochastic parameterization. The theoretical background from which D_LL is derived may also produce uncertainties. Several analytical diffusion rates based on magnetic and electric field assumptions exist, with L dependence ranging from L^6 to L^11 and frequency dependence on a range of wave modes (e.g., Birmingham, 1969; Cornwall, 1968; Elkington et al., 2003; Fälthammar, 1966, 1968; Fei et al., 2006; Schulz & Lanzerotti, 1974). The more of the underlying variability in the deterministic model that is known, the better the variability in the stochastic models can be characterized or accounted for. It should be mentioned, however, that natural variability might exist which cannot be parameterized by any means. Deducing levels of natural variability in ULF wave-driven radial diffusion is necessary to understand the information always lost by a deterministic model. If these levels are substantial, our results suggest that a stochastic approach to modeling radial diffusion may be more robust.
The response of radial diffusion to higher likelihoods of an enhanced D_LL, which dominates temporal evolution, was explored in Experiment 4. It is evident that significantly more radial diffusion occurs for heavier tailed variabilities, indicating that the amount of diffusion is controlled by the relative importance of the large values of D_LL in the distribution. A global upper bound for possible ULF wave power is justified, since it is counterintuitive for ULF waves to have infinitely large power in a finite-sized magnetosphere. The shape of the distribution is therefore important. It may also be that the shape of the distribution of D_LL is not constant. During quiet times, when the outer radiation belt is relatively quiescent, the variability might be better represented by a distribution heavily skewed to the left with a single small upper bound on ULF wave power. In a storm-time model where ULF wave activity is enhanced during the main and recovery phases (Murphy et al., 2011; Murphy et al., 2015; Rae et al., 2011), a right-skewed ULF wave power distribution that favors larger ULF wave powers might be more suitable. Further research into the tail values of the distribution of ULF wave power is important to constrain the physical upper bound of power variability to include in stochastic models.
In each of our experiments, ensemble averages and KDEs were compared to the Ozeke et al. (2014) constant deterministic solution, which is based on the median of statistical ULF wave power. However, it may be fairer to compare the evolution of our numerical ensembles with an experiment where D_LL is kept constant but at the mean value of the distribution, especially since the ethos of constructing a diffusion coefficient is to consider the average behavior of the waves. Figure 7 shows the results of a number of numerical experiments with constant D_LL (mean, solid pink; upper quartile, dashed pink; lower quartile, dash-dot pink) compared with the ensemble result using a LN distribution with Δt = 1 hr. We observe that the mean-based D_LL only causes slightly more diffusion than the median-based one and is also significantly less diffusive than the ensemble averages. While inclusion of the LQ- and UQ-based D_LL does result in a broad span of possible PSD solutions, the UQ produces diffusion only as strong as the ensemble averages, falling short of the regions of highest density seen in the ensemble solutions. It is apparent that a deterministic representation of D_LL fails to represent the underlying distribution of radial diffusion solutions found from the stochastic D_LL time series, which better represent the true underlying distribution of ULF wave power. Our ensemble modeling highlights where efforts should be placed to obtain a better description of D_LL, so that we can aim for a parameterization with a quantified uncertainty that truly represents the underlying distribution of possible solutions of the radial diffusion equation.
Diffusion due to other types of wave-particle interactions is important in the outer radiation belt, and similar modeling strategies may be required. Diffusion in pitch angle and energy due to higher-frequency waves is also highly variable (Watt et al., 2019), potentially with different time and length scales depending on location in the magnetosphere. It will be necessary to repeat similar numerical experiments to determine the parameters necessary for stochastic parameterizations of pitch angle and energy diffusion and then design observational analyses that can best constrain those parameters.
Conclusions
Our idealized experiments highlight the spatiotemporal impacts of including stochastic parameterizations in ULF wave-driven radial diffusion. We have shown that diffusion is increased above the deterministic model when the diffusion coefficients vary more rapidly, when the spatial correlation of the diffusion across L-shells ranges from fully coherent to completely independent, and when the variance of the distribution is increased or a more heavy-tailed distribution is used. We have demonstrated that future research should focus on the temporal evolution of ULF wave power, the spatial correlations of diffusion across L-shells, and the underlying distribution and variance of the radial diffusion coefficients. The successful implementation of a stochastic radial diffusion model requires variability parameters that are derived appropriately; that is, the spatial and temporal scales of the variability may themselves vary in time and space. Our research motivates further investigation of stochastic methods for use in radiation belt diffusion models as a method to include the variability of wave-particle interactions in the inner magnetosphere.
Figure 1. Example ensemble member D_LL time series shown for a range of temporal variability scales. In each case, the constant Ozeke et al. (2014) deterministic D_LL is multiplied by a log-normal variability at the relevant hour of variability, constrained by the empirical model and ULF wave power observations, and persists until the next hour of variability, where the process is repeated. Examples are shown for variability temporal scales of 1, 3, 6, 12, and 24 hr, along with the constant D_LL with no variability. D_LL shown here has units of s^-1, in line with the 1 s time step used in our numerical scheme.
Figure 2. Example ensemble member D_LL time series shown for a range of spatial variability scales. In each case, every 3 hr the constant Ozeke et al. (2014) deterministic D_LL is multiplied by log-normal variabilities on a variety of local spatial variability scales, constrained by the empirical model and ULF wave power observations, and persists for 3 hr, after which the process is repeated. Examples are shown for variability spatial scales of 1 L, 0.5 L, and 0.1 L, along with the global variability case and the constant D_LL with no variability. D_LL shown here has units of s^-1, in line with the 1 s time step used in our numerical scheme.
Figure 3. Ensemble results for the final PSD at the end of Experiment 1 for a range of temporal variability scales (1, 3, 6, 12, and 24 hr, respectively). The median (dashed) and mean (dash-dot) ensemble profiles are shown, as well as the initial PSD profile (dotted) and the deterministic solution with constant deterministic D_LL (solid). Ensemble kernel density estimates of the resulting electron PSD are also shown.
Figure 4. Ensemble results for the final PSD at the end of Experiment 2 for a range of spatial variability scales (global, 1 L, 0.5 L, and 0.1 L, respectively). The descriptions of lines and KDEs are as in Figure 3.
Figure 5. Ensemble results for the final PSD at the end of Experiment 3 for a range of log-normal variability IQRs (±2, ±3, ±6, and ±10 of the deterministic D_LL, respectively). The descriptions of lines and KDEs are as in Figure 3.
Figure 6. Ensemble results for the final PSD at the end of Experiment 4 for a range of variability probability distributions (log-normal, log-Laplace, log-uniform, and log-Cauchy, respectively). The descriptions of lines and KDEs are as in Figure 3.
Figure 7. PSD resulting from the radial diffusion equation after 2 days with constant Kp = 3, shown for a constant deterministic D_LL based on the mean (solid pink), LQ (dash-dot pink), and UQ (dashed pink) of ULF wave power. These plots are laid over the first subplot in Figure 3.
"Physics"
] |
A Splice Isoform of DNedd4, DNedd4-Long, Negatively Regulates Neuromuscular Synaptogenesis and Viability in Drosophila
Background Neuromuscular (NM) synaptogenesis is a tightly regulated process. We previously showed that in flies, Drosophila Nedd4 (dNedd4/dNedd4S) is required for proper NM synaptogenesis by promoting endocytosis of Commissureless from the muscle surface, a prerequisite step for muscle innervation. DNedd4 is an E3 ubiquitin ligase with a C2-WW(x3)-Hect domain architecture that has several splice isoforms, the most prominent of which are dNedd4-short (dNedd4S) and dNedd4-long (dNedd4Lo). Methodology/Principal Findings We show here that while dNedd4S is essential for NM synaptogenesis, the dNedd4Lo isoform inhibits this process and causes lethality. Our results reveal that unlike dNedd4S, dNedd4Lo cannot rescue the lethality of dNedd4 null (dNedd4T121FS) flies. Moreover, overexpression of UAS-dNedd4Lo specifically in wildtype muscles leads to NM synaptogenesis defects, impaired locomotion, and larval lethality. These negative effects of dNedd4Lo are ameliorated by deletion of two regions (the N-terminus and Middle region) unique to this isoform and by inactivating the catalytic activity of dNedd4Lo, suggesting that these unique regions, as well as the catalytic activity, are responsible for the inhibitory effects of dNedd4Lo on synaptogenesis. In accord with these findings, we demonstrate by sqRT-PCR an increase in dNedd4S expression relative to that of dNedd4Lo during the embryonic stages when synaptogenesis takes place. Conclusion/Significance Our studies demonstrate that splice isoforms of the same dNedd4 gene can exert opposite effects on NM synaptogenesis.
In higher eukaryotes, there are several Nedd4 family proteins, including the closely related Nedd4-1 (Nedd4) and Nedd4-2 (Nedd4L). In mammals, Nedd4-2 is known to regulate the stability of ion channels such as ENaC, which has PY motifs that interact with the Nedd4-2 WW domains to promote ENaC endocytosis [7,8,9,10]. Mutations in the ENaC PY motifs found in Liddle syndrome (a hereditary hypertension) result in increased retention of ENaC at the plasma membrane in the kidney [11,12]. The interaction between ENaC and Nedd4-2 can be negatively regulated through the phosphorylation of Nedd4-2 by the Ser/Thr kinase Sgk1 and its close relative Akt1 [13,14]. In contrast to Nedd4-2, mammalian Nedd4-1 has been implicated in the regulation of cellular and animal growth [15], T cell activation [16], and heart [17] and nervous system [18] development.
Recently, Nedd4, which is expressed in muscles, was found to regulate neuromuscular (NM) synaptogenesis in flies [19] and mammals [20]. Specifically, Drosophila Nedd4 (dNedd4) was shown to regulate endocytosis of commissureless (Comm) from the muscle surface to allow proper initiation of NM synaptogenesis [19]. Comm contains two PY motifs (PPCY and LPSY) and 10 Lys residues (ubiquitin acceptor sites) in its intracellular domain. The WW domains of dNedd4 bind to the PY motifs of Comm, leading to Comm ubiquitylation [21]. In Drosophila, each body wall hemisegment contains 30 muscle fibers that are innervated by ~40 motor neurons in a specific, precisely timed manner [22]. Internalization of Comm (expressed on the muscle cell surface) into the muscle is a prerequisite for proper initiation of NM synaptogenesis [23] and is facilitated by dNedd4 [19]. Of note, the dNedd4 gene is subject to alternative splicing, with several isoforms that can be generally divided into two groups represented by a short isoform (dNedd4S, or dNedd4) and a long isoform, dNedd4Lo. While dNedd4S (dNedd4) was previously shown to enhance NM synaptogenesis, the function(s) of dNedd4Lo was unknown.
Here we characterize dNedd4Lo and show that, in contrast to dNedd4S, it has a negative function in NM synaptogenesis, leading to defects in neuromuscular synapse formation and abnormal larval locomotion.
Results
Our previous work showed that dNedd4 is involved in regulating neuromuscular synaptogenesis, and revealed two isoforms of dNedd4 expressed in the body wall muscle [19]. In accord, our immunoblotting analysis of embryo lysates revealed two major splice isoforms of dNedd4, which we named dNedd4-short (dNedd4S, ~92 kDa) and dNedd4-long (dNedd4Lo, ~112 kDa) (Fig. 1A,B). DNedd4Lo possesses the same functional domains as dNedd4S: a conserved C2 domain, 3 WW domains (including the high-affinity WW3 binding domain [24]) and a Hect domain, which is catalytically active, much like that of dNedd4S (Fig. S1A). In addition, dNedd4Lo contains a unique N-terminal (Nterm) region that includes a putative Akt phosphorylation site, as well as a unique middle (Mid) region (Fig. 1A). Our previous work focused on the function of dNedd4S [19]. Here we investigated the biological role of dNedd4Lo and compared it to that of dNedd4S.
DNedd4S, but not dNedd4Lo, can partially rescue lethality of dNedd4 null mutant flies

To determine the biological importance of the two splice isoforms of dNedd4, we tested their ability to rescue dNedd4 null mutant flies. The dNedd4 null (dNedd4 T121FS homozygote) flies contain a frameshift mutation in the dNedd4 gene that truncates the protein products of all dNedd4 splice isoforms at Thr121, rendering dNedd4 inactive. DNedd4 T121FS flies are heterozygous viable and homozygous lethal at the embryonic stage. For rescue experiments, flies containing either a UAS-dNedd4S or UAS-dNedd4Lo transgene under the transcriptional control of the ubiquitous Act-GAL4 driver [25] (Fig. 1C) were crossed into dNedd4 T121FS homozygote flies. Two independent crosses were performed for each of the UAS-dNedd4S and UAS-dNedd4Lo transgenes, as well as for their respective Act-GAL4 driver alone and UAS transgene alone controls. A total of 800 embryos were collected for each UAS line and observed under a fluorescence microscope to follow the survival of rescued larvae, which do not express GFP (non-GFP). Rescue was determined by measuring the percentage of viable dNedd4 null mutant embryos (hatched larvae without GFP expression) out of the expected number of mutant embryos. The expected fraction of the genotype of interest for rescue with dNedd4S (UAS-dNedd4S/X; Act-GAL4/CyO (or Sp); dNedd4 T121FS/dNedd4 T121FS) was calculated to be 12.5%, and with dNedd4Lo (+; UAS-dNedd4Lo/CyO (or Sp); dNedd4 T121FS/dNedd4 T121FS) was calculated to be 25%. This excludes all other possible genotypes. Our results show that expression of dNedd4S throughout the embryo partially rescued the lethality of dNedd4 null embryos (from 5.5% or 7.25% to 36%), whereas dNedd4Lo did not rescue the lethality (from 5.5% or 3.75% to 4.5%), and larvae died soon after egg hatching (some did not fully crawl out of their egg shells) (Fig. 1D). These results suggest that the two splice isoforms of dNedd4, both of which are expressed endogenously in embryos (Fig. 1B), have distinct roles during embryo development.
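As a worked example of this rescue calculation, a minimal sketch follows; the hatchling counts are hypothetical, chosen only to reproduce the 36% and 4.5% rescue figures quoted above.

```python
def rescue_efficiency(n_non_gfp_hatched, n_embryos, expected_fraction):
    """Percent of the expected null-mutant embryos that hatched."""
    return 100.0 * n_non_gfp_hatched / (n_embryos * expected_fraction)

# 800 embryos collected per UAS line; 12.5% expected for the dNedd4S cross,
# 25% for the dNedd4Lo cross. Hatchling counts below are hypothetical.
print(rescue_efficiency(36, 800, 0.125))  # 36.0 -> partial rescue
print(rescue_efficiency(9, 800, 0.25))    # 4.5  -> no rescue
```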
Overexpression of dNedd4Lo in the muscle reduces fly viability

Given the different effects of dNedd4S and dNedd4Lo on rescuing dNedd4 null mutant flies, we next examined the effect of overexpression of dNedd4Lo and dNedd4S. Our results show that ubiquitous expression of all transgenic lines of UAS-dNedd4Lo crossed with the Act-GAL4 driver resulted in lethality before the third instar larval stage, while flies expressing UAS-dNedd4S survived to the adult stage. In an attempt to observe the effect of slight variations in expression levels on fly survival, different ubiquitous GAL4 drivers, including da-GAL4 [26], Actin-GAL4 and Tubulin-GAL4 [27], were used at 25°C, room temperature (RT, ~22°C), or 18°C. All ubiquitous drivers at all temperatures yielded developmental lethality when crossed to UAS-dNedd4Lo, but not to UAS-dNedd4S (Table S1). To rule out the possibility that lethality was due to a defect in sub-cellular localization of the protein, we examined the distribution of dNedd4Lo and dNedd4S in salivary glands. The proteins did not form aggregates and localized properly in the cytosol and on the plasma membrane (Fig. S1B), as was previously observed for endogenous dNedd4 [19,28]. In an attempt to determine the cause of lethality, various tissue-specific GAL4 drivers were used to drive overexpression in the CNS using elav c155-GAL4 [29], in motor neurons using D42-GAL4 [29], in muscles using 24B-GAL4 [29] or 5-GAL4 [19], in the eye using GMR-GAL4 [30], in the fat body using Ppl-GAL4 [31], and in the epithelial lining of the digestive and respiratory systems using 48Y-GAL4 [29]. Each test was performed at 25°C, RT, or 18°C, to determine whether slight variations in expression levels had any effect on fly survival. We found that all UAS-dNedd4Lo transgenic flies died during development only when the transgene was overexpressed in muscle using the 24B-GAL4 or 5-GAL4 drivers, while UAS-dNedd4S transgenic flies survived to adulthood (Table S2). Once again, the lethality was not due to a defect in protein sub-cellular localization, since we found that dNedd4Lo was properly localized in the muscle (see below).
Overexpression of UAS-dNedd4Lo in muscle results in aberrant synaptic innervation along the SNb branch from body wall muscle 13 to 12

Since we previously demonstrated that dNedd4S is involved in neuromuscular (NM) synaptogenesis [19], potential defects in NM synapse formation were analyzed by immunofluorescence staining of dissected body wall muscles of third instar larvae from UAS-dNedd4Lo transgenic lines. There are two major categories of innervation defect: pathfinding errors and axonal overgrowth/undergrowth errors. The abnormal innervations identified and scored in this experiment included backward innervation from muscle 12 onto muscle 13, a pathfinding error, and increased branching on muscle 12, an overgrowth error (Fig. 2A). The frequency of abnormalities was calculated as the number of abnormal innervations (backward innervation and/or pathfinding error) identified over the total number of neuromuscular junctions at muscles 13 and 12 that were scored. By scoring ~100-130 NM synapses at muscles 13 and 12 from each of the UAS-dNedd4Lo and UAS-dNedd4S transgenic lines, the numbers of normal and abnormal innervations for each UAS-dNedd4 transgenic were compared with those of the muscle driver control (5-GAL4 muscle driver fly used to drive overexpression of the UAS transgenics) (Fig. 2B). Our results show that the UAS-dNedd4Lo transgenics had a significant number of abnormal innervations relative to the control (two-tailed Fisher's exact test, p < 0.0001), which was not observed for the UAS-dNedd4S transgenics (p = 0.4734). Taken together, these results demonstrate that muscle-specific overexpression of UAS-dNedd4Lo in a wild-type background causes a high frequency of abnormal motor neuron innervation on muscles 13 and 12, whereas that of UAS-dNedd4S does not cause significant abnormalities.
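The comparison in Fig. 2B is a two-tailed Fisher's exact test on a 2×2 table of normal vs. abnormal junction counts. A minimal sketch follows; the counts are hypothetical stand-ins for the ~100-130 junctions scored per genotype.

```python
from scipy.stats import fisher_exact

# Rows: genotypes; columns: (normal, abnormal) NM junctions scored.
table = [[80, 30],    # UAS-dNedd4Lo/5-GAL4 (hypothetical counts)
         [118, 4]]    # 5-GAL4 driver control (hypothetical counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```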
Overexpression of UAS-dNedd4Lo in muscle results in reduction of larval locomotor activity

To determine the consequences of the abnormal muscle innervation in larvae that overexpress dNedd4Lo in the muscle, we analyzed larval locomotor activity by measuring the total path length travelled by pre-wandering third instar larvae over a 150 sec period. Our results reveal a severe reduction in the locomotor activity of larvae that overexpress dNedd4Lo in the muscle (+/UAS-dNedd4Lo; +/24B-GAL4) (73.76 ± 3.80 mm) relative to the +/UAS-dNedd4Lo (120.35 ± 5.05 mm) or +/24B-GAL4 (137.23 ± 5.46 mm) controls (post hoc analysis, p < 0.001) (Fig. 2C,D).
Adverse effects of dNedd4Lo are not a result of dAkt regulation

Differences between the two isoforms of dNedd4 include an alternate start codon site resulting in a longer N-terminal region in dNedd4Lo, and an extra exon inserted between the WW1 and WW3 domains (Fig. 1A and 3A). In the unique N-terminal region of dNedd4Lo, there is a putative Akt phosphorylation site (S39) with the consensus sequence RxRxxS/T. Another putative Akt phosphorylation site was found common to both dNedd4Lo (S645) and dNedd4S (S444) (Fig. 3A). Interestingly, a close relative of dNedd4 in mammals, Nedd4-2, also contains a consensus sequence, RxRxxS/T, that can be phosphorylated by the Ser/Thr kinase Sgk1 and its close relative, Akt1 [13]. Therefore, we tested the possibility that the putative Akt phosphorylation site(s) in dNedd4Lo are phosphorylated by Drosophila Akt (dAkt) in Drosophila S2 cells. We found that while the site containing Ser39 in the unique N-terminal region of dNedd4Lo was indeed phosphorylated, the site containing Ser653 in dNedd4Lo, which is also found in dNedd4S (Ser543), was not phosphorylated by dAkt (Fig. 3B). A Ser39→Ala mutation in dNedd4Lo abolished its phosphorylation. This observation led us to ask whether dNedd4Lo's unique function is mediated by dAkt phosphorylation. To test this possibility, we generated UAS transgenic lines that express a dNedd4Lo S39A;S645A double mutant (UAS-dNedd4Lo 2S→A), in which the Ser residue of each putative Akt phosphorylation site was mutated to Ala. A UAS-dNedd4S S444A (UAS-dNedd4S S→A) mutant transgenic line was also generated for comparison (Fig. 3A). We performed the same lethality tests using ubiquitous and muscle-specific GAL4 drivers as described above for UAS-dNedd4Lo and UAS-dNedd4S (wildtype, WT). No difference was found between dNedd4Lo WT and its 2S→A mutant, nor between dNedd4S WT and its S→A mutant (Table S3). Next, we scored muscle innervation defects in UAS-dNedd4Lo 2S→A and UAS-dNedd4S S→A transgenic flies as described above for their WT counterparts. Again, no significant difference was found between UAS-dNedd4Lo WT and its 2S→A mutant (p = 0.5288) (Fig. 2B).

Figure 2. Neuromuscular innervation and locomotion defects in larvae overexpressing dNedd4Lo in the muscle. (A) Muscle-specific overexpression of UAS-dNedd4Lo (dNedd4Lo/5-GAL4) leads to aberrant synaptic innervation along the SNb branch from body wall muscles 13→12 of third instar larvae (HRP stain, red). The muscle driver control line (+/5-GAL4) was included (left panel) to show the normal motor neuron innervation pattern on muscles 13→12. Muscles were stained with Phalloidin (green). Scale bars, 10 μm. (B) Quantification of the muscle innervation defects. Numbers in brackets denote the number of muscles scored (n). * p < 0.0001 (two-tailed Fisher's exact test). (C,D) Overexpression of dNedd4Lo adversely affects larval locomotor activity measured by total path length: (C) Representative paths travelled by third instar larvae of muscle driver alone (24B-GAL4), UAS-dNedd4Lo alone (UAS-dNedd4Lo), or larvae overexpressing dNedd4Lo in the muscle (UAS-dNedd4Lo/24B-GAL4). Scale bar, 10 mm. (D) Quantification of the locomotor activity shown in panel C for 55-60 larvae (per genotype). Numbers in brackets denote the exact number of larvae scored (n). * indicates a significant difference between the UAS-dNedd4Lo/24B-GAL4 line and both control lines (UAS-dNedd4Lo alone or 24B-GAL4 alone) (p < 0.001, post hoc analysis). doi:10.1371/journal.pone.0027007.g002
Because removing the dAkt phosphorylation sites of dNedd4Lo did not alter the muscle innervation defects, we conclude that the negative role of dNedd4Lo in NM synaptogenesis is not due to dAkt regulation.
Both unique sequences and the catalytic activity of dNedd4Lo are involved in its regulation of neuromuscular synaptogenesis

Since the dAkt phosphorylation site in the unique N-terminal region of dNedd4Lo did not explain the functional difference between dNedd4Lo and dNedd4S, we next studied the roles of the unique N-terminal (Nterm) and middle (Mid) regions of dNedd4Lo to determine whether either region is involved in the adverse function of dNedd4Lo in NM synaptogenesis. Two dNedd4Lo deletion mutants were created: one with the middle unique sequence deleted (dNedd4LoΔMid) and the other, dNedd4LoΔNterm, with the N-terminal unique sequence replaced by that of dNedd4S. In addition, a catalytically inactive Cys961→Ala (C→A) mutant of dNedd4Lo was created to test the effect of abolishing the ubiquitin ligase activity of dNedd4Lo on its function in NM synaptogenesis (Fig. 4A,B). These mutants were analyzed in comparison to dNedd4Lo WT. Ubiquitous overexpression of UAS-dNedd4LoΔNterm crossed with the da-GAL4 [26], Actin-GAL4, or Tubulin-GAL4 [27] driver in a wild-type background at 25°C, RT or 18°C resulted in developmental lethality, while UAS-dNedd4LoΔMid and UAS-dNedd4Lo C→A transgenic flies survived to adulthood (Table S3). Muscle-specific overexpression of UAS-dNedd4LoΔNterm, UAS-dNedd4LoΔMid and UAS-dNedd4Lo C→A crossed with the 24B-GAL4 or 5-GAL4 muscle driver at 25°C, RT or 18°C did not result in any lethality (Table S3). Next, we scored muscle innervation defects in UAS-dNedd4LoΔNterm and UAS-dNedd4LoΔMid transgenic flies as described above for UAS-dNedd4Lo WT and UAS-dNedd4S WT. We found that while both UAS-dNedd4LoΔMid and UAS-dNedd4LoΔNterm still exhibited abnormalities compared to the control (5-GAL4 muscle driver fly alone), these abnormalities were seen at a much lower frequency than with UAS-dNedd4Lo WT (Fig. 4C). Furthermore, there was no significant difference between UAS-dNedd4LoΔMid and UAS-dNedd4LoΔNterm (p = 0.3729). These results show that removing either the N-terminal or the middle unique region of dNedd4Lo reduces the frequency of abnormal motor neuron innervation on muscles 13→12 observed upon muscle-specific overexpression of UAS-dNedd4Lo WT. This suggests that the N-terminal and middle unique regions contribute to the innervation defects caused by dNedd4Lo. Furthermore, mutating the catalytic Cys of dNedd4Lo (UAS-dNedd4Lo C→A mutant) abolishes the abnormality in muscle innervation, indicating that the catalytic activity of dNedd4Lo is required for its adverse effect on NM synaptogenesis (Fig. 4C). Our previous work showed that overexpressing the catalytically inactive mutant of dNedd4S, dNedd4S C→A, caused significant abnormal muscle innervation (which was not seen upon overexpression of dNedd4S WT [19]). Interestingly, the results we show here for dNedd4Lo are opposite to those we obtained previously for dNedd4S. Thus, these combined data suggest that while dNedd4S has a positive role in neuromuscular synaptogenesis, dNedd4Lo plays a negative role in this process.
The negative role of dNedd4Lo in NM synaptogenesis is not caused by inhibition of the catalytic activity of dNedd4S, nor by interference with dNedd4S-mediated regulation of Comm

To explain the negative role of dNedd4Lo and the positive role of dNedd4S in NM synaptogenesis, and to determine if/how they coordinately regulate this process, we investigated the possibility that dNedd4Lo inhibits the function of dNedd4S by interfering with its catalytic activity. Since the N-terminal and middle unique regions of dNedd4Lo appear to be responsible for its negative effects on fly viability and NM synaptogenesis, we investigated the possibility that they inhibit the catalytic activity of dNedd4S. This was tested in in vitro ubiquitylation and binding assays. Our results show that adding recombinant proteins corresponding to the Nterm or Mid unique regions of dNedd4Lo to the reaction mixture, either alone or together, did not affect the ubiquitylation activity of dNedd4S (Fig. 5A and Fig. S2), nor did these proteins directly bind dNedd4S (Fig. 5B). In contrast, the C-terminal region of Comm, containing the PY motifs, was able to bind well to dNedd4S (as we previously demonstrated [19]) and hence was used as a positive control for the binding (Fig. 5B).
Another possible way in which dNedd4Lo could negatively regulate NM synaptogenesis is through the regulation of dNedd4S substrates. Since Comm is a known target of dNedd4S in NM synaptogenesis and removal of Comm from the muscle surface is required for initiation of this event [19], the effect of dNedd4Lo overexpression on endocytosis of Comm from the cell surface was studied in Drosophila S2 cells, which endogenously express dNedd4S. Comm-GFP (WT or the 2PY→A mutant, which cannot bind dNedd4) was co-expressed with Flag-dNedd4Lo or Flag-dNedd4S. We hypothesized that overexpression of dNedd4Lo WT would interfere with the dNedd4S-mediated endocytosis of Comm, if indeed dNedd4Lo were to oppose the function of dNedd4S towards Comm. Surprisingly, we found that when co-expressed with dNedd4Lo or dNedd4S, Comm was properly internalized from the cell surface in most cells and co-localized with dNedd4Lo or dNedd4S in intracellular vesicles. As controls, we showed that the Comm 2PY→A mutant remained on the plasma membrane of most cells when co-expressed with either dNedd4S or dNedd4Lo (Fig. 6A,B). Likewise, staining for Comm and dNedd4 in muscles of third instar larvae revealed proper internalization of Comm from the muscle surface in muscles that overexpress dNedd4Lo, much like those overexpressing dNedd4S (Fig. 6C). Thus, the negative effect of dNedd4Lo on NM synaptogenesis was likely not caused by interference with Comm endocytosis, since dNedd4Lo was able to promote Comm internalization similar to dNedd4S.
Expression of dNedd4Lo is suppressed during NM synaptogenesis
Given the adverse effects of dNedd4Lo on NM synaptogenesis and larval locomotion, in contrast to dNedd4S (which positively regulates NM synaptogenesis), it is expected that the timing of their expression is tightly regulated. Indeed, our semi-quantitative RT-PCR revealed that expression of dNedd4Lo in the embryo is down-regulated soon after the onset of NM synaptogenesis (13 hrs after egg laying), while expression of dNedd4S remains high (Fig. 7). These results suggest that the negative function of dNedd4Lo has to be suppressed during NM synaptogenesis, and that the constant overexpression of UAS-dNedd4Lo driven by the GAL4 muscle drivers throughout embryogenesis could have contributed to the observed lethality and muscle innervation defects, as well as the abnormal locomotor activity.
Discussion
We previously showed that dNedd4S is involved in NM synaptogenesis in Drosophila [19], most likely by promoting endocytosis of Comm from the muscle surface, a prerequisite for initiation of NM synaptogenesis [23]. Consistent with this model, knock down of dNedd4 during early muscle development or overexpression of Comm mutants that cannot bind dNedd4 yielded the same defects in NM synaptogenesis [19]. Here, we provide genetic evidence that, in contrast to dNedd4S, the splice isoform dNedd4Lo has a negative role in NM synaptogenesis and embryo development in flies. In accord, expression of dNedd4Lo is reduced (and that of dNedd4S is increased) during synaptogenesis, permitting synaptogenesis to proceed. This negative role of dNedd4Lo does not involve Comm or phosphorylation of dNedd4Lo by Akt, nor does it involve an adverse effect of the unique regions of dNedd4Lo on the catalytic activity of the Hect domain of dNedd4S. Instead, it is likely that the unique Nterm and Mid regions of dNedd4Lo contribute to inhibition of NM synaptogenesis by interacting with other cellular factors or complexes, which are not yet known. Studies analyzing differences in the general pattern of ubiquitylation upon overexpression of dNedd4Lo vs. dNedd4S in S2 cells did not reveal overt differences (Fig. S3), most likely due to insufficient sensitivity of the system to detect changes in ubiquitylation of specific substrates among the many ubiquitylated cellular proteins.
Our studies here demonstrate that muscle-specific overexpression of dNedd4Lo causes abnormal motor neuron innervation along the SNb branch on body wall muscles 13→12. The types of defects we found include inappropriate backward innervation from muscles 12→13 and an increased number of nerve branches on muscle 12. The backward innervation defect was previously observed upon overexpression of Comm 2PY→A (which cannot bind dNedd4) and Comm 10K→R (which cannot become ubiquitylated) mutants, as well as in dNedd4 RNAi mutants [19]. The other common defect we observed was increased motor nerve branching on muscle 12. It is known that disruption of genes involved in cell adhesion processes causes nerve branching defects; such genes include position-specific (PS) β-integrin, fasciclin II (FasII), Calcium/Calmodulin-dependent Kinase II (CaMKII), and DLG (a PDZ-domain scaffold protein) [36,37]. These proteins form a post-synaptic complex in the muscle that coordinately regulates defasciculation of the nerve terminal endings and fine-tunes the interaction between motor neurons and their muscle targets. It has been proposed that β-integrin regulates recruitment of the cell adhesion molecule FasII on the muscle surface [36]. Down-regulation of β-integrin and up-regulation of FasII on the muscle surface lead to nerve defasciculation. Whether or not the adverse effects of dNedd4Lo on NM synaptogenesis involve these (or other) proteins is currently unknown.

Figure 6 legend (continued). ...(two-tailed) on 100 cells per treatment, showing no statistical difference in Comm localization in the presence of overexpressed dNedd4S or dNedd4Lo (dNedd4Lo+Comm WT vs. dNedd4S+Comm WT, P = 0.2134; dNedd4Lo+Comm 2PY→A vs. dNedd4S+Comm 2PY→A, P = 0.8646). Scale bar in A, 10 μm. (C) Comm and dNedd4 localization in the muscles (muscles 12 and 13 are shown) of third instar larvae overexpressing Flag-tagged dNedd4Lo (UAS-dNedd4Lo/5-GAL4) or dNedd4S (UAS-dNedd4S/5-GAL4), stained with anti-Comm antibodies (red) or anti-Flag (dNedd4) antibodies (green). The plasma membrane of the muscles was stained with ConA (blue). 40-50 larvae (per genotype) were analyzed, and 100% showed the same localization pattern depicted in panel C. Scale bar in panel C, 30 μm. doi:10.1371/journal.pone.0027007.g006
One consequence of muscle innervation defects could be abnormal locomotor activity. Indeed, we found a significant reduction in the locomotor activity of larvae that overexpress dNedd4Lo specifically in the muscle. Furthermore, muscle-specific overexpression of dNedd4Lo leads to lethality during development. However, the muscle drivers 24B-GAL4 and 5-GAL4 used in this experiment drive expression in the entire mesoderm [29,32], which gives rise to somatic (body wall) muscles for movement [33,34], visceral (gut) muscles for digestion, and cardiac muscles [35]. Thus, while the innervation defects on body wall muscles may contribute to the larval lethality, defects in heart and/or gut muscle function might contribute as well, since heart activity and feeding are essential for larval survival.
Similar muscle innervation defects were also observed for the mouse homologue of dNedd4, mNedd4 (mNedd4-1) [20]. In mNedd4 mutant embryos, motor nerves defasciculate upon reaching their skeletal muscle targets, and the number of pre-synaptic nerve terminal branches is increased. It was also demonstrated that mNedd4 mutants had increased spontaneous miniature endplate potential (mEPP) frequency, which is consistent with this ultrastructural alteration. In addition, β-catenin, a subunit of the cadherin protein complex, was proposed to be a potential substrate for mNedd4 in NM synapse formation and function. β-catenin-deficient muscles show nerve defasciculation defects similar to those of mNedd4 mutants [38]. Similarly, molecular manipulation of β-integrins, which are also involved in the cell adhesion process, in muscles of mice also leads to abnormal development of pre-synaptic nerve terminals [39].
Phosphorylation is an important mechanism for the regulation of Nedd4 proteins and other E3 ubiquitin ligases. For example, Nedd4-2 is known to be regulated by Akt/Sgk-mediated phosphorylation, which inhibits its ability to interact with its substrate ENaC [13,14]. However, mutating the dAkt phosphorylation sites (S→A) in dNedd4Lo did not affect the abnormal muscle innervation. Since the dAkt phosphorylation site in the unique N-terminal region of dNedd4Lo did not explain its negative function, we removed the whole unique N-terminal region or the middle region to determine their roles in the negative effect of dNedd4Lo on viability and NM synaptogenesis. We demonstrated that removing either the N-terminal or the middle region rescued the lethality and alleviated the muscle innervation defects. Therefore, both regions are involved in the negative function of dNedd4Lo in this event. We thus investigated two possible mechanisms underlying the negative regulation of NM synaptogenesis by dNedd4Lo. First, we tested inhibition of the function of dNedd4S through the unique regions of dNedd4Lo. It is known that the catalytic activity of Nedd4 proteins can be regulated through an auto-inhibition mechanism. For example, the WW domains of Nedd4-2 [40] and of a close relative of Nedd4, Itch [41], as well as the C2 domain of Smurf2 [42], were shown to bind their own Hect domains and inhibit their catalytic activity.
However, our data suggest that the unique N-terminal and middle regions of dNedd4Lo neither bind nor inhibit the catalytic activity of dNedd4S in vitro. Second, we investigated the effect of dNedd4Lo overexpression on dNedd4S-mediated Comm endocytosis in Drosophila S2 cells and body wall muscles. We hypothesized that if dNedd4Lo acts to inhibit the function of dNedd4S, it would interfere with Comm endocytosis. However, our results show that overexpression of dNedd4Lo did not affect internalization of Comm. Therefore, the unique regions of dNedd4Lo regulate NM synaptogenesis by as yet unknown mechanisms, possibly by targeting other substrates.
Interestingly, differential regulation of substrates is known for isoforms of the E3 ligase Cbl, namely dCblL (long) and dCblS (short). While the long isoform down-regulates EGFR signaling, the short isoform preferentially controls Notch signaling through regulation of the Notch ligand Delta [43]. DNedd4 might use a similar mechanism to regulate Drosophila embryo development, particularly NM synaptogenesis. In addition, temporal regulation of expression of dNedd4S and dNedd4Lo differs, allowing NM synaptogenesis to proceed at the appropriate time in development.
Materials and Methods

Generation of UAS-dNedd4LoΔNterm, UAS-dNedd4LoΔMid, and UAS-dNedd4Lo C→A mutants. UAS-dNedd4LoΔNterm: the DNA fragment flanked by the AarI & Bsp1407I sites of Flag-dNedd4Lo WT in the pRmHa3 vector was subcloned to replace the region in Flag-dNedd4S WT-pRmHa3 flanked by the same restriction sites. UAS-dNedd4LoΔMid: the DNA fragment flanked by the AarI & Bsp1407I sites of Flag-dNedd4S WT in the pRmHa3 vector was subcloned to replace the region in Flag-dNedd4Lo WT-pRmHa3 flanked by the same restriction sites.
UAS-dNedd4Lo C→A: the DNA fragment flanked by the XhoI & BamHI sites containing the Cys→Ala mutation (generated by site-directed mutagenesis) of Flag-dNedd4S C→A in the pRmHa3 vector was subcloned to replace the region in Flag-dNedd4Lo WT-pRmHa3 flanked by the same restriction sites. The constructs were subsequently subcloned into the pUAST KpnI & BamHI sites to generate transgenic lines, as described [29].
Generation of GST-dNedd4Lo N-terminus (Nterm) and GST-dNedd4Lo Middle (Mid) constructs for bacterial expression and purification. The PCR-amplified DNA fragment corresponding to the N-terminal unique sequence of dNedd4Lo (bp 1 to 189) was subcloned into the pGEX6P1 BamHI & SalI sites, in frame with an N-terminal GST tag. The PCR-amplified DNA fragment corresponding to the middle unique sequence of dNedd4Lo (bp 912 to 1419) was subcloned into the pGEX6P1 BamHI & EcoRI sites, in frame with an N-terminal GST tag. All fly crosses are summarized in Table S4.
Rescue of dNedd4 null mutant flies
The fly lines and final crosses generated to rescue dNedd4 T121FS homozygote mutant flies with UAS-dNedd4Lo WT or UAS-dNedd4S WT are listed in Table S4. Adult flies from the final crosses were put on grape agar plates to lay eggs, which were collected and transferred to new plates after ~16 hrs. Plates were observed under a fluorescence microscope to look for rescued larvae, which did not express GFP (non-GFP).
Semi-quantitative RT-PCR
Oregon-R wild-type flies were put on fruit agar plates to lay eggs, which were collected every 2 hours during the 24-hr embryonic development period at ~22°C. Embryos were incubated in 50% bleach for 5 min, collected using an embryo collection net (Netwell insert, Corning, Acton, MA), and rinsed in PBS (phosphate-buffered saline). Total RNA was extracted from embryos (Qiagen RNeasy Mini Kit) and reverse transcribed to cDNA (Invitrogen SuperScript III Synthesis System). Forward primer ccatggctgcaataacagtg and reverse primer gcgtagttcgcgtgttatga were used to amplify dNedd4S, giving a PCR product of 1124 bp, using Platinum Taq DNA Polymerase (Invitrogen). Forward primer tacactcctcgcagatcgtt and reverse primer gtgtctctgaccccgatgtt were used to amplify dNedd4Lo, giving a PCR product of 1239 bp. For the Actin 5C internal control, forward primer tgtgtgacgaagaagttgctg and reverse primer cctcctccagcagaatcaag were used, yielding a PCR product of 1193 bp. 50 ng of each cDNA sample was added to a 50 μl PCR mixture. PCR conditions were as follows: 94°C for 30 sec, 55°C for 30 sec, and 68°C for 2 min. A series of amplification cycles from 28 to 42 was tested to determine the exponential phase, and 34 cycles were chosen for the PCR. Equal volumes of the resulting PCR reactions were analyzed by electrophoresis on a 1.5% agarose gel. Band intensities were measured using ImageJ software to quantify the expression level of each mRNA at each given embryonic stage.
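The quantification step reduces to normalizing each band to the Actin 5C loading control and comparing the two isoforms over developmental time. A minimal sketch follows; all intensity values are hypothetical.

```python
# Hypothetical ImageJ band intensities at successive embryonic time points.
actin    = [1000, 980, 1010, 990]
dnedd4s  = [400, 520, 610, 650]
dnedd4lo = [380, 300, 150, 90]

def normalize(target, control):
    """Express each band relative to the Actin 5C loading control."""
    return [t / c for t, c in zip(target, control)]

s_rel  = normalize(dnedd4s, actin)
lo_rel = normalize(dnedd4lo, actin)
ratio  = [lo / s for lo, s in zip(lo_rel, s_rel)]
print(ratio)  # a declining dNedd4Lo/dNedd4S ratio across the time course
```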
Larva dissections and immunostaining
Pre-wandering third instar larvae were dissected using a standard fillet preparation technique [44], fixed in 4% paraformaldehyde (PFA) (20 min), washed in PBT (0.1% Tween-20 in PBS), blocked, and stained with Cy3-conjugated anti-horseradish peroxidase (HRP) antibody to visualize motor neurons and with Alexa Fluor 488 Phalloidin to visualize body wall muscles. Fluorescence imaging of the neurons and muscles was performed using an LSM 510 confocal microscope.
In vitro ubiquitylation assays
Full-length dNedd4S protein immunopurified from HEK293T cells was incubated in reaction mixtures containing 100 nM human E1 ubiquitin-activating enzyme (Boston Biochem), 200 nM human E2 ubiquitin-conjugating enzyme (UbcH5; Boston Biochem), 1 μg of ubiquitin (Sigma) and 4 mM ATP in a reaction buffer (25 mM Tris/HCl pH 7.5, 50 mM NaCl, 0.1 mM dithiothreitol and 4 mM MgCl2). Reactions were incubated for 1 hr at RT. Protein ubiquitylation was detected on western blots. Membranes were blocked and incubated with mouse anti-ubiquitin antibody and HRP-conjugated goat anti-mouse antibody to detect ubiquitin (Table S5). To detect Flag-dNedd4S, membranes were incubated with M2 anti-Flag antibody and HRP-conjugated goat anti-mouse antibody. Mouse anti-glutathione S-transferase (GST) antibody and HRP-conjugated goat anti-mouse antibody were used to detect bacterially expressed GST-dNedd4Lo Nterm and GST-dNedd4Lo Mid.
In vitro binding assay
Flag-dNedd4S protein immunopurified from HEK293T cells was immobilized on M2 anti-Flag agarose beads (Sigma) and incubated with bacterially expressed GST-dNedd4Lo Nterm or GST-dNedd4Lo Mid at 4°C for 2 hrs. The supernatant was removed and the beads were washed with lysis buffer. Mouse anti-GST antibody and HRP-conjugated goat anti-mouse antibody were used to detect GST-dNedd4Lo Nterm or GST-dNedd4Lo Mid (Table S5). Flag-dNedd4S was detected using M2 anti-Flag antibody and HRP-conjugated goat anti-mouse antibody.
Larval Locomotor Activity
Fly crosses generated to analyze larval locomotion are listed in Table S4. The 150-sec path length was determined for the following heterozygous genotypes: (1) driver control line, +/24B-GAL4; (2) UAS control line, +/UAS-dNedd4Lo; and (3) over-expression line, +/UAS-dNedd4Lo; +/24B-GAL4. All experiments were performed at 24 ± 1°C in a climate-controlled room with 45-50% humidity and a 12/12 hr light/dark cycle. All animals were tested at the pre-wandering physiological stage, and the stage was confirmed as previously described [45]. The same number of 5-day-old adult flies were put on grape agar plates to lay eggs and transferred to new plates every 24 hrs. After 28 hrs, hatched larvae were cleared from the plate and all newly hatched larvae were collected 2 hrs later. Twenty larvae (3 repetitions per genotype) were separated into vials containing standard fly food medium. Larvae from over-expression lines showed a delay in development, taking 172 ± 4 h after egg laying to reach the wandering stage, while control larvae reached this stage after 164 ± 4 h. Thus, for all experiments, larvae were aged 168 ± 2 h for the over-expression line and 158 ± 2 h for controls. To measure locomotor activity, larvae were rinsed and placed in a Petri dish containing 3% grape agarose for 30 sec to adapt to the experimental conditions. The crawling path of each larva was traced for 150 sec on the lid of the dish and scanned for subsequent digital analysis. ImageJ software (National Institutes of Health) was used for quantification of crawling path length.
Statistical Analyses
GraphPad Prism 5.0 software (GraphPad Software, San Diego, CA) and Fisher's exact test on a 2×2 contingency table were used to analyze the data on motor neuron innervation of body wall muscles, as well as the localization of dNedd4 and Comm proteins in S2 cells. A p-value of <0.05 was considered significant. For analysis of locomotor activity, the behavioral data were Box-Cox transformed to correct problems of distribution and non-homogeneity of variance observed before transformation. Normal distribution was confirmed by the Kolmogorov-Smirnov test. The common variance of the three lines was tested by the Levene test (F(2,168) = 2.47, p = 0.087). One-way ANOVA and subsequent Tukey post-hoc comparisons were performed to evaluate differences among lines. A p-value of <0.01 was considered significant. Statistical analyses were conducted using Statistica 8.0 (StatSoft) software. For the semi-quantitative RT-PCR analysis, Student's t-test was used to compare the mRNA expression levels of dNedd4Lo versus dNedd4S at each given developmental stage during embryogenesis. A p-value of <0.01 was considered significant.

Supporting Information

Figure S1. Catalytic activity of dNedd4S and dNedd4Lo, and normal cellular localization of these isoforms ectopically expressed in salivary glands of third instar larvae. (A) Catalytic activity of dNedd4S and dNedd4Lo: an in vitro ubiquitylation assay of wildtype (WT) dNedd4S or dNedd4Lo was performed by incubating E1, E2 (UbcH5), E3 (Flag-tagged dNedd4S or dNedd4Lo), ubiquitin and ATP, and the extent of ubiquitylation (most likely reflecting autoubiquitylation) was analyzed by immunoblotting with anti-ubiquitin antibodies. Note the loss of ubiquitylation of the catalytically inactive C→A mutants of the dNedd4 isoforms. Lower panels: the blot was stripped and reblotted with anti-Flag antibody to show equal amounts of Flag-dNedd4S WT and its C→A mutant (left blot) or Flag-dNedd4Lo WT and its C→A mutant (right blot) present in the reactions. (B) Similar to dNedd4S, Flag-dNedd4Lo WT and its S→A mutant do not form aggregates and localize ubiquitously in the cytosol and on the plasma membrane, but not in the nucleus (stained with DraQ5 in blue). The w1118 fly was used as a negative control to show no background staining of the anti-Flag antibody (green). Scale bars, 10 μm. (TIF)

Figure S2. The unique regions of dNedd4Lo do not inhibit the catalytic activity of dNedd4S. In vitro ubiquitylation activity of dNedd4S in the presence of GST alone (control), GST-tagged dNedd4Lo Nterm, Mid, or both unique regions, detected using anti-ubiquitin antibody on western blots. E1, E2 (UbcH5), E3 (dNedd4S), ubiquitin and ATP were included in the ubiquitylation reactions, as well as increasing concentrations (0.9 μM, 1.8 μM and 3.6 μM) of each potential inhibitor (GST, GST-Nterm, GST-Mid, or GST-Nterm+GST-Mid). Middle panel: the blot was stripped and re-blotted with anti-Flag antibody to show equal amounts of Flag-dNedd4S present in all lanes. Bottom panel: the blot was also stripped and re-blotted with anti-GST antibody. The catalytically inactive dNedd4S C→A mutant was included as a negative control to demonstrate that the ubiquitylation activity observed was mediated by dNedd4S. (TIF)

Figure S3. Ubiquitylation of cellular proteins in S2 cells ectopically overexpressing dNedd4Lo or dNedd4S.
S2 cells were untransfected or transfected with Flag-tagged dNedd4S (WT or its catalytically inactive C→A mutant) or dNedd4Lo (WT or its catalytically inactive C→A mutant), and the extent/pattern of ubiquitylation of cellular proteins was analyzed by immunoblotting (IB) with anti-ubiquitin antibodies (top panel). Bottom panels depict controls for dNedd4Lo and dNedd4S expression and for loading (lamin). In the bottom panels, double the amount of protein was loaded on the gel compared with the respective top (ubiquitylation) panel. (TIF)
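The locomotor statistics pipeline described in Statistical Analyses (Box-Cox transform, Levene test, one-way ANOVA, Tukey post-hoc) can be reproduced with standard libraries. The sketch below uses synthetic stand-ins for the path-length measurements; group sizes and means are hypothetical.

```python
import numpy as np
from scipy.stats import boxcox, levene, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical path lengths (mm) for the three genotypes (n = 57 each).
driver    = rng.normal(137, 10, 57)   # 24B-GAL4 alone
uas       = rng.normal(120, 10, 57)   # UAS-dNedd4Lo alone
overexpr  = rng.normal(74, 8, 57)     # UAS-dNedd4Lo/24B-GAL4

pooled = np.concatenate([driver, uas, overexpr])
transformed, lam = boxcox(pooled)      # correct distribution problems
groups = np.split(transformed, [57, 114])

print(levene(*groups))                 # homogeneity of variance
print(f_oneway(*groups))               # one-way ANOVA
labels = np.repeat(["24B-GAL4", "UAS-dNedd4Lo", "overexpression"], 57)
print(pairwise_tukeyhsd(transformed, labels, alpha=0.01))
```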
"Biology"
] |
In silico modeling predicts drug sensitivity of patient-derived cancer cells
Background Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling (“omics”) data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Methods Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Results Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. Conclusions These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.
Introduction
Cancer remains a major unmet clinical need despite advances in clinical medicine and cancer biology. Glioblastoma (GBM) is the most common type of primary adult brain cancer, characterized by infiltrative cellular proliferation, angiogenesis, resistance to apoptosis, and widespread genomic aberrations. GBM patients have poor prognosis, with a median survival of 15 months [1]. Molecular profiling and genome-wide analyses have revealed the remarkable genomic heterogeneity of GBM [2,3]. Based on tumor profiles, GBM has been classified into four distinct molecular subtypes [4]. However, even with existing molecular classifications, the high intertumoral heterogeneity of GBM makes it difficult to predict drug responses a priori. This is even more evident when trying to predict cellular responses to multiple signals following combination therapy. Our rationale is that a systems-driven computational approach will help decipher pathways and networks involved in treatment responsiveness and resistance.
Though computational models are frequently used in biology to examine cellular phenomena, they are not common in cancers, particularly brain cancers [5,6]. However, models have previously been used to estimate tumor infiltration following surgery [7] or changes in tumor density following chemotherapy in brain cancers [8]. More recently, brain tumor models have been used to determine the effects of conventional therapies including chemotherapy and radiation [5]. Brain tumors have also been studied using an agent-based modeling approach [9]. Multiscale models that integrate hierarchies in different scales are being developed for application in clinical settings [10]. Unfortunately, none of these models have been successfully translated into the clinic so far. It is clear that innovative models are required to translate data involving biological networks and genomics/proteomics into optimal therapeutic regimens. To this end, we present a deterministic in silico tumor model that can accurately predict sensitivity of patient-derived tumor cells to various targeted agents.
Description of In Silico model (Version 7.3 Cellworks)
We performed simulation experiments and analyses using the predictive tumor model - a comprehensive and dynamic representation of signaling and metabolic pathways in the context of cancer physiology. This in silico model includes representation of important signaling pathways implicated in cancer, such as growth factor receptor pathways (EGFR, PDGFR, FGFR, c-MET, VEGFR and IGF-1R); cytokine and chemokine pathways (IL1, IL4, IL6, IL12, TNF); GPCR-mediated signaling; mTOR signaling; cell cycle regulation; tumor metabolism; oxidative and ER stress; autophagy and proteasomal degradation; DNA damage repair; p53 signaling; and the apoptotic cascade. The current version of this model includes more than 4,700 intracellular biological entities and ~6,500 reactions representing their interactions, regulated by ~25,000 kinetic parameters. This comprises comprehensive and extensive coverage of the kinome, transcriptome, proteome and metabolome. Currently, we have 142 kinases and 102 transcription factors modeled in the system.
Model development
We built the basic model by manually curating data from the literature and aggregating functional relationships between proteins. The detailed procedure for model development is explained in Additional file 1 (Section 2) using the example of the epidermal growth factor receptor (EGFR) pathway block (Additional file 1: Figure S1 and Figure S2). We have also presented examples of how the kinetic parameters are derived from experimental data, in Additional file 1: (Section 2). We have validated the simulation model prospectively and retrospectively, at phenotype and biomarker levels using extensive in vitro and in vivo studies [11][12][13][14][15][16][17][18][19][20].
Disease phenotype definitions
Disease phenotype indices are defined in the tumor model as functions of the biomarkers involved. The Proliferation Index is an average function of the active CDK-cyclin complexes that define cell cycle checkpoints and are critical for regulating overall tumor proliferation potential. The biomarkers included in calculating this index are CDK4-CCND1, CDK2-CCNE, CDK2-CCNA and CDK1-CCNB1. These biomarkers are weighted, and their permutations provide an index definition that gives maximum correlation with experimentally reported trends for cellular proliferation (based on literature).
We also generate a Viability Index based on two sub-indices: a Survival Index and an Apoptosis Index. The biomarkers constituting the Survival Index are AKT1, BCL2, MCL1, BIRC5, BIRC2 and XIAP; these biomarkers support tumor survival. The Apoptosis Index comprises BAX, CASP3, NOXA and CASP8. The overall Viability Index of a cell is calculated as the ratio Survival Index/Apoptosis Index. The weighting of each biomarker is adjusted so as to achieve maximum correlation with the experimental trends for the endpoints (based on literature).
In order to correlate the results from experiments such as MTT Assay, which are a measure of metabolically active cells, we have a 'Relative Growth' Index that is an average of the Survival and Proliferation Indices.
The percent change seen in these indices following a therapeutic intervention helps assess the impact of that particular therapy on the tumor cell. A cell line in which the Proliferation/Viability Index decreases by <20% from the baseline is considered resistant to that particular therapy.
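As a sketch of how such indices can be computed and thresholded, the snippet below implements the weighted-average construction with placeholder equal weights; the model's actual weights are calibrated against experimental trends and are not given in the text.

```python
# Placeholder equal weights; the real weights are fitted to experimental data.
SURVIVAL = {"AKT1": 1, "BCL2": 1, "MCL1": 1, "BIRC5": 1, "BIRC2": 1, "XIAP": 1}
APOPTOSIS = {"BAX": 1, "CASP3": 1, "NOXA": 1, "CASP8": 1}
PROLIFERATION = {"CDK4-CCND1": 1, "CDK2-CCNE": 1, "CDK2-CCNA": 1, "CDK1-CCNB1": 1}

def weighted_index(levels, weights):
    """Weighted average of biomarker levels (arbitrary units)."""
    return sum(w * levels[b] for b, w in weights.items()) / sum(weights.values())

def viability_index(levels):
    return weighted_index(levels, SURVIVAL) / weighted_index(levels, APOPTOSIS)

def relative_growth_index(levels):
    return 0.5 * (weighted_index(levels, SURVIVAL) + weighted_index(levels, PROLIFERATION))

def is_resistant(baseline, treated, threshold=0.20):
    """Resistant if the index drops by less than 20% from baseline."""
    return (baseline - treated) / baseline < threshold

# Example: is_resistant(1.00, 0.85) -> True (only a 15% drop from baseline)
```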
Creation of cancer cell line and its variants
To create a cancer-specific simulation model, we start with a representative non-transformed epithelial cell as control. This cell is triggered to transition into a neoplastic state, with genetic perturbations like mutation and copy number variation (CNV) known for that specific cancer model. We also created in silico variants for cancer cell lines, to test the effect of various mutations on drug responsiveness. We created these variants by adding or removing specific mutations from the cell line definition. For example, DU145 prostate cancer cells normally have RB1 deletion. To generate a variant of DU145 with wild-type RB1 (WT), we retained the rest of its mutation definition except for the RB1 deletion, which was converted to WT RB1 (Additional file 1).
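A minimal sketch of this variant-creation step, representing a cell line definition as a dictionary of lesions; only the RB1 deletion of DU145 is taken from the text, and any other entries are illustrative.

```python
# Illustrative lesion set for DU145; only the RB1 deletion comes from the text.
DU145 = {"RB1": "deletion", "TP53": "mutation"}

def make_variant(profile, gene, status=None):
    """Copy a cell line definition, reverting (or setting) one lesion."""
    variant = dict(profile)
    if status is None:
        variant.pop(gene, None)  # wild type = no perturbation entry
    else:
        variant[gene] = status
    return variant

du145_rb1_wt = make_variant(DU145, "RB1")  # RB1 reverted to wild type (WT)
print(du145_rb1_wt)                        # {'TP53': 'mutation'}
```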
Simulation of drug effect
To simulate the effect of a drug in the in silico tumor model, the targets and mechanisms of action of the drug are determined from published literature. The drug concentration is assumed to be post-ADME (Absorption, Distribution, Metabolism and Excretion).
Creation of simulation avatars of patient-derived GBM cell lines
To predict drug sensitivity in patient-derived GBM cell lines, we created simulation avatars (in silico profiles) for each cell line, as illustrated in Figure 1B. First, we simulated the network dynamics of GBM cells by using experimentally determined expression data (Additional file 1: Table S1; Additional file 1: Section 7). Next, we overlaid tumor-specific genetic perturbations on the control network in order to dynamically generate the simulation avatar. For instance, the patient-derived cell line SK987 is characterized by overexpression of AKT1, EGFR, IL6, and PI3K, among other proteins, and knockdown of CDKN2A, CDKN2B, RUNX3, etc. (Additional file 1: Table S1). After adding this information to the model, we further optimized the magnitude of the genetic perturbations based on the responses of this simulation avatar to three molecularly targeted agents: erlotinib, sorafenib and dasatinib. The responses of the cells to these drugs (from in vitro experimental data) were used as an "alignment data set". In this manner, we used "alignment drugs" (erlotinib, sorafenib, and dasatinib) to optimize the magnitude of genetic perturbation in the trigger files and their impact on the key pathways targeted by these drugs. For example, most GBM cell lines demonstrated dominance of EGFR signaling, as they had gains in copy number of the EGFR gene; hence, the effect of an EGFR inhibitor is a good indicator of the relative dominance of this signaling pathway. This is illustrated in further detail in Additional file 1 using an example of two cell line profiles that have EGFR over-expression but differential responses to an EGFR inhibitor. Similarly, sorafenib helped determine and align MEK/ERK activation, while dasatinib did so for activation of SRC signaling.
Simulation protocol
The simulation protocol included 3 states:

1. Control State - The in silico model was simulated for 50,000 seconds, during which the different biological entities (called species) attain a steady-state concentration. This concentration depends on the balance between the rates of the reaction nodes producing the species and the reaction nodes utilizing/degrading the species. This is an untriggered system and is representative of a non-transformed epithelial cell.

2. Disease State - At 50,000 seconds of simulation time, we introduced the mutation data (specific to the patient-derived GBM cell lines to be created) and simulated for an additional 125,000 seconds. During this time, the system attained a new steady state that aligns with the network dynamics of the cell line.

3. Disease State + Drug Treatment - Following the 125,000-second simulation run for the Disease State, we introduced the drug into the system by perturbing the target reaction nodes, as explained above. We then simulated the model for a further 200,000 seconds (drug treatment). The percent change in the indices for cell survival (described earlier) indicates the therapeutic potential of the drug. Iterative simulations with varying concentrations of the drug generate dose-response curves from which IC50 values can be determined.

Figure 1A is a schematic of the representative simulation protocol that we used for the retrospective analysis of the gene mutation-drug effect associations reported in the study by Garnett and co-workers. Figure 1B illustrates the workflow for the simulation studies on patient-derived GBM cell lines. For the patient-derived GBM cell line predictions, we prospectively compared in silico responses to experimentally obtained results (in vitro data from patient-derived GBM cell lines) and determined the corroboration between in silico and in vitro data. As per the dose-response plots generated by the in silico predictions, a cell line was considered sensitive to a drug if it demonstrated a >20% decrease in relative growth. The 20% threshold was used for both in silico predictions and in vitro experimental data.

Figure 1. Simulation workflow for the in silico tumor model. A, A representative simulation protocol used for retrospective analysis of the gene mutation-drug sensitivity associations reported in the Garnett study. The protocol included 3 states: the Control (untriggered) state is simulated for 50,000 seconds to allow the biological entities to attain steady-state concentrations; at 50,000 seconds, mutation data are introduced and simulated for an additional 125,000 seconds to attain the Disease state; for the Drug-treated state, a drug is introduced into the system by perturbing the target reaction nodes and the model is simulated for 200,000 seconds, at the end of which the percent change in the indices for cell survival is calculated. B, The simulation workflow for creation, optimization and testing of patient-derived GBM cell line profiles in silico. The key steps in developing the simulation avatars of the patient-derived GBM cell lines are: input of profiling data reporting the relative expression of different proteins in the cell lines; iterative testing and alignment of simulation avatars to match experimental data for the alignment drugs (erlotinib, sorafenib and dasatinib); locking of the simulation avatars; and in silico predictions with in vitro testing.
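The three-state protocol can be expressed as pseudocode around a stand-in model. ToyTumorModel below is our own single-species toy stand-in for the proprietary network (its API, rate constants, and the EGFR fold-change are invented for illustration), but the state durations and the 20% sensitivity call follow the text.

```python
class ToyTumorModel:
    """Toy single-species stand-in for the network model: one 'growth'
    readout whose production is boosted by lesions and cut by drug."""
    def __init__(self):
        self.production, self.decay, self.level = 1.0, 0.01, 0.0

    def simulate(self, duration, dt=50.0):
        # Explicit Euler integration toward the steady state production/decay.
        for _ in range(int(duration / dt)):
            self.level += dt * (self.production - self.decay * self.level)

    def apply_perturbations(self, fold_changes):
        for fold in fold_changes.values():
            self.production *= fold

    def inhibit_targets(self, inhibition):
        self.production *= (1.0 - inhibition)

model = ToyTumorModel()
model.simulate(50_000)                    # 1. control steady state
model.apply_perturbations({"EGFR": 2.0})  # 2. disease trigger (hypothetical)
model.simulate(125_000)
baseline = model.level
model.inhibit_targets(0.8)                # 3. drug on its target nodes
model.simulate(200_000)
drop = 100.0 * (baseline - model.level) / baseline
print(f"relative growth fell {drop:.0f}% -> sensitive: {drop > 20}")
```

Repeating the third state over a range of doses would trace out the dose-response curve from which an IC50 is interpolated.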
Patient-derived glioblastoma cell lines
Fresh human glioblastoma samples were acquired from brain tumor patients undergoing clinically indicated surgery (University of California San Diego Human Subjects Protocol) and cultured as previously reported [21,22]. GBM4 and GBM8 cells were a kind gift from C. David James (University of California San Francisco). Briefly, the dissociated tissue was washed, filtered through a 30 μm mesh and plated onto ultra-low adherence flasks at a concentration of 500,000 to 1,500,000 viable cells/ml. The stem cell isolation medium included human recombinant EGF (20 ng/ml), human bFGF (10 ng/ml) and heparin (2 μg/ml). Sphere cultures were passaged by dissociation with Accutase (Sigma), washed, resuspended in neural stem cell culture medium (#05750, Stemcell Technologies), and plated on ultra-low adherence 96-well plates at 2000 cells per well for all subsequent drug testing. We characterized all patient-derived glioblastoma lines using histopathologic and integrated genomic analyses. The glioblastoma lines were profiled using the Affymetrix GeneChip Human Gene 1.0 ST Array.
Drug screening
Drug screens were performed on patient-derived GBM cell lines plated at 2000 cells per well in 96-well microtiter plates and incubated overnight. After 72 hours of incubation with drugs, cell viability was quantified by the Alamar Blue assay. Briefly, after incubation, Alamar Blue (#BUF012B, AbD Serotec) was added directly to the culture medium, and the fluorescence was measured at 560/590 nm to determine the number of viable cells (Infinite M200, Tecan Group Ltd.).
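A small sketch of the viability calculation implied by this readout, assuming background-subtracted fluorescence normalized to a vehicle control; the well values and drug names here are hypothetical.

```python
def percent_viability(signal_treated, signal_vehicle, signal_blank):
    """Alamar Blue readout: background-subtracted fluorescence vs. vehicle."""
    return 100.0 * (signal_treated - signal_blank) / (signal_vehicle - signal_blank)

# Hypothetical 560/590 nm readings from a 96-well screen.
blank, vehicle = 1200.0, 52000.0
for drug, reading in {"erlotinib": 30500.0, "dasatinib": 47800.0}.items():
    v = percent_viability(reading, vehicle, blank)
    # >20% decrease in viable signal counts as sensitive (same threshold as in silico)
    print(f"{drug}: {v:.0f}% viable; sensitive: {100 - v > 20}")
```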
Results
Our study involved a retrospective component, in which we predicted the gene mutation-drug sensitivity associations defined in a recent hypothesis-independent study [23]. In addition, we predicted the sensitivity of our profiled patient-derived GBM cell lines to targeted agents and compared these in silico predictions to in vitro experimental data.
Retrospective validation of in Silico tumor model
In the first part of the study, we evaluated the ability of the in silico tumor model to predict drug responses that were reported in the study by Garnett and colleagues [23]. A comparison of our predictions with the associations reported in the Garnett study indicated the predictive capability of our in silico tumor model.
Our modeling library has definitions for 45 of the 639 cell lines used in this study (Additional file 1: Table S2) and supports 70 of the 130 drugs studied (Additional file 1: Table S3). Further, we can represent 51 of the 84 genes screened for mutations (Additional file 1: Table S4). Of the 448 significant gene mutation-drug response associations reported, our in silico model was able to accurately predict 22 of the 25 testable associations from the Garnett study (>85% agreement; Additional file 1: Table S5). The gene mutation-drug response correlations from the Garnett study that are currently not supported by the system are listed in Additional file 1: Table S6. From the 25 gene mutation-drug response associations tested from the Garnett study (Additional file 1: Table S5), a few examples of the correlations are explained below. Figure 1A depicts a representative schematic of this retrospective analysis using the simulation (in silico tumor model).
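The agreement figure quoted above is simply the fraction of matched sensitivity calls across the testable associations; a trivial check, with the 22-of-25 split taken from the text and the call vectors otherwise arbitrary:

```python
# 22 of 25 testable associations matched (per the text); which three differ
# is arbitrary here, only the match count matters for the statistic.
predicted = [True] * 22 + [False] * 3
reported  = [True] * 25

agreement = sum(p == r for p, r in zip(predicted, reported)) / len(reported)
print(f"{agreement:.0%}")  # 88%, i.e. the >85% agreement quoted
```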
BRAF Mutations and Drug Sensitivity
The Garnett study showed that cells with BRAF mutation were sensitive to the MEK1/2 inhibitor AZD6244 [23]. To examine this association, we modeled cancer cell variants with wild-type BRAF in silico. Modeling data showed that cells with wild-type BRAF were resistant to AZD6244 when compared to the parent tumor cells with mutant BRAF. Thus, BRAF mutation conferred sensitivity to the MEK1/2 inhibitor in silico; this prediction validates the finding reported in the Garnett study (Figure 2A). 40-60% of melanoma patients carry BRAF mutations that activate MAPK signaling [24,25], and this association could have therapeutic implications for the treatment of patients with BRAF-mutant melanoma.
Effect of different mutations on sensitivity to tyrosine kinase inhibitors
ERBB2 (HER2) amplification is a biomarker for sensitivity to EGFR-family inhibitors [26]. In the in silico model, we tested for sensitivity to the EGFR-family inhibitors lapatinib and BIBW2992. Specifically, we examined the sensitivity of cancer cells in the presence of mutations and/or over-expression of BRAF, CDH1, ERBB2, CCND1, and MET. These predictions from simulations were compared with results obtained in the Garnett study to determine the predictive capability of our model.
In silico predictions indicated that BRAF mutation decreases the sensitivity of cells to lapatinib (Figure 2B), whereas CDH1-mutant lines demonstrated higher sensitivity to lapatinib than variants with wild-type CDH1 (Figure 2C). Further, cMET over-expression increased sensitivity to lapatinib, as indicated by the decrease in viability in cells with cMET over-expression (Figure 2D). Additionally, ERBB2 and CCND1 over-expression correlated positively with lapatinib sensitivity (Additional file 1: Table S5). In all these simulation experiments testing sensitivity to lapatinib, our in silico predictions agreed with the associations reported in the Garnett study.

Figure 2 Retrospective analysis tests in silico predictions of gene mutations and sensitivity to EGFR-family inhibitors. Associations reported in the Garnett study were tested in a blinded manner using our in silico model, and the predictions obtained were compared to the reported results. A, We created wild-type BRAF variants of four cancer cell lines (COLO205, HT29, MDAMB231, and U266) in silico and compared the effect of the MEK1/2 inhibitor AZD6244 on these cell lines and on the corresponding parent lines expressing mutant BRAF. Our data demonstrated that BRAF mutation increases sensitivity to AZD6244. B, We simulated three cell lines (H1650, H1975, and SW48) with wild-type or mutant BRAF and tested for sensitivity to the EGFR-family kinase inhibitor lapatinib; BRAF mutation decreases sensitivity of cells to lapatinib. C, Similarly, when four cell lines (AGS, H1437, MKN1, and MKN45) were tested for sensitivity to lapatinib, we observed that CDH1 mutation increases sensitivity to lapatinib. D, We generated cell lines with wild-type MET or MET over-expression and tested the effect of lapatinib (A549, AGS, H358, and HT29 cell lines); MET over-expression increases sensitivity to lapatinib.
CDKN2A mutation and drug sensitivity
The Garnett study reported associations between tumor suppressor gene mutations and several anti-cancer drugs, and we tested these associations in our in silico tumor model. In the in silico analysis, cells harboring wild-type CDKN2A were resistant to erlotinib, whereas CDKN2A mutation was associated with erlotinib sensitivity (Figure 3A). Similarly, cell lines with mutant CDKN2A showed increased sensitivity to dasatinib (Figure 3B), bortezomib (Figure 3C), and the CDK4/6 inhibitor PD0332991 (Figure 3D). These simulation predictions accurately matched the data from the Garnett study.
Other gene mutation-drug response associations examined in our simulation models are listed in Additional file 1: Table S5. In addition, Additional file 1: Table S6 lists correlations between gene mutations and drug responses reported in the Garnett study that are currently not supported by our modeling technology. In spite of these limitations, we obtained ~85% agreement between our simulation data and the findings reported by Garnett and colleagues [23].
Prospective evaluation of the tumor model with patient-derived GBM cell lines
Identifying drug sensitivities in tumors with different mutations is important for designing individualized cancer therapies. To this end, we created in silico avatars of 8 patient-derived GBM cell lines using genomic data (Methods and Additional file 1: Table S1) and predicted their sensitivity to various targeted therapeutic agents. We then tested these in silico predictions prospectively by comparing them with experimental data obtained by in vitro testing on the same patient-derived GBM cell lines (Figure 1B).
The patient-derived GBM cell lines were obtained from surgically resected patient tumors and cultured in vitro (details in Methods). We profiled these lines using the Affymetrix Gene Chip Human Gene 1.0 ST Array. Using whole-exome sequencing, we recently validated that these cultured cells accurately represent the original tumor profiles and are therefore suitable for the development and testing of personalized targeted therapies [27]. The patient-derived GBM cell lines are designated GBM4, GBM8, SK102, SK262, SK429, SK748, SK987, and SK1035.

Figure 3 Retrospective analysis evaluates the CDKN2A mutation-drug response association by in silico modeling. Using simulation modeling, we tested the role of the tumor-suppressor protein CDKN2A on sensitivity to different inhibitors and compared these predictions to those reported in the Garnett study. A, Cells expressing mutant CDKN2A and their wild-type variants were simulated in the in silico tumor model for four lines (BxPC3, H1437, H1650, and SW48). CDKN2A mutation increased sensitivity of cells to erlotinib compared to wild-type CDKN2A. B, Cells with mutant CDKN2A were more sensitive to dasatinib than cells with wild-type CDKN2A (A549, BxPC3, HCT116, and H460). C, COLO205, HT29, H1437, and SW48 cell lines with mutant CDKN2A were more sensitive to bortezomib than cells expressing the wild-type variants. D, CDKN2A-mutant cells (BxPC3, H1437, H1975, and HT29) also showed higher sensitivity to the CDK4-Cyclin D1 inhibitor PD0332991 over the CDKN2A wild-type variants.
After generating in silico profiles of these cells, we optimized the simulation avatars in terms of the strength of the functional effect of each mutation on key pathways such as EGFR, RAS, and Src/PI3K. The rationale for this optimization is that expression data on these cells do not provide an accurate measure of the dominance of different intracellular pathways. To interrogate which pathways play a dominant role in each tumor line (such as EGFR, RAS, and PI3K), we used 3 anti-cancer agents (erlotinib, sorafenib, and dasatinib) targeting these pathways. This achieves "alignment" and trains the simulation avatars for further analyses (details in Additional file 1). Alignment for these 3 drugs was best achieved in the following cell lines: GBM8, SK262, SK429, SK748, and SK1035. In cell lines GBM4 and SK987, there was a mismatch for sorafenib, where the predictive trends were reversed: GBM4 was sensitive to sorafenib experimentally but predicted to be resistant in silico, while SK987 was resistant experimentally but predicted to be sensitive. Similarly, the experimental resistance of SK102 to dasatinib could not be matched predictively. The correlation of predictive trends with the alignment drugs is shown in Figure 4A-F.

Figure 4 In silico modeling analysis and experimental in vitro data for drug responsiveness to the 3 alignment drugs. A, Predictive dose-response data for erlotinib with percent change in viability; cells showing a decrease in viability of 20% or greater are considered sensitive to the drug. B, In vitro experimental results for the effect of 1 μM erlotinib on viability in patient-derived GBM cell lines; viability was determined at 72 h using the Alamar Blue assay. C, D, Predictive and experimental data for sorafenib. E, F, Predictive and experimental data for dasatinib. All drugs were tested in vitro at 1 μM. Dose-response curves for the in silico data demonstrate the effects of increasing concentrations of the drugs (erlotinib, sorafenib, and dasatinib) on the viability of the profiled patient-derived GBM cell lines in the simulation model.
Predictions obtained by simulation modeling are presented as dose-response plots for viability; a decrease in viability of >20% was considered sensitive. Experimentally, viability was determined by the Alamar Blue assay in response to a 1 μM concentration of the respective inhibitors at 72 h. These data represent viability as mean values from triplicate samples.
We tested ten anti-cancer drugs in silico on the simulation avatars of the 8 patient-derived GBM cell lines in a blinded prospective study. These simulations generated predictions that we compared with in vitro experimental data (Additional file 1: Table S7A-D). Of the 80 in silico predictions, 61 (76.25%) agreed with the in vitro experimental results. The analysis of drug sensitivity correlation for all 8 GBM patient-derived cell lines across all 13 drugs (the ten blinded drugs plus the three alignment drugs) is summarized in Additional file 1: Table S7. Figures 5A-H and 6A-H show a drug-by-drug comparison of in silico predictions (dose-response curves) and in vitro experimental results generated by testing a 1 μM concentration of each drug on these cell lines.
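The agreement figure above is a simple concordance over all (drug, cell line) calls. A minimal sketch of that bookkeeping follows; the call matrices are illustrative placeholders, not the reported Table S7 data.

```python
# Hedged sketch of the prediction/experiment agreement calculation
# (61 of 80 calls = 76.25% in the paper). The calls below are invented
# for illustration.
def agreement(pred, expt):
    """Fraction of (drug, line) calls where in silico and in vitro agree.
    pred/expt map drug name -> list of 'S'/'R' calls, one per cell line."""
    total = matches = 0
    for drug in pred:
        for p, e in zip(pred[drug], expt[drug]):
            total += 1
            matches += (p == e)
    return matches, total, 100.0 * matches / total

# Order of calls: GBM4, GBM8, SK102, SK262, SK429, SK748, SK987, SK1035.
pred = {"imatinib": ["R", "S", "R", "R", "R", "R", "R", "R"]}
expt = {"imatinib": ["R", "S", "R", "R", "R", "R", "R", "R"]}
print(agreement(pred, expt))  # (8, 8, 100.0) for this illustrative drug
```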
Effect of tyrosine kinase inhibitors on patient-derived GBM cells
For the EGFR-family inhibitor lapatinib, simulation studies predicted SK429, SK748, and SK1035 to be resistant, predictions that were confirmed by in vitro data. Similarly, modeling predicted GBM8, SK102, SK262, and SK987 to be sensitive, and these predictions were in agreement with experimental data (Figure 5A and B). However, modeling predicted GBM4 to be resistant to lapatinib, while in vitro data showed GBM4 to be highly sensitive (Figure 5B). For the tyrosine kinase inhibitor nilotinib, the model predicted GBM8 to be sensitive and all the other profiles to be resistant (Figure 5C). In vitro studies demonstrated that GBM8 was indeed sensitive to nilotinib as predicted, but there was a mismatch with the experimental results for two lines, SK262 and SK1035: experimentally, SK262 was found to be sensitive, whereas SK1035 was on the borderline between sensitivity and resistance (Figure 5D). For imatinib, the simulation predicted that all GBM lines except GBM8 were resistant (Figure 5E), and the experimental results corroborated this in silico prediction (Figure 5F). Sunitinib was the other multi-tyrosine kinase inhibitor tested. Our simulation predicted GBM8, SK102, and SK987 to be sensitive to sunitinib; however, only GBM8 was found to be sensitive in vitro. SK262 was predicted to be resistant to sunitinib, but in vitro data found it to be moderately sensitive. On the other hand, GBM4, SK429, SK748, and SK1035 were found to be resistant in both simulation and experimental data (Figure 5G-H).
Effect of other drugs on patient-derived GBM cells
Besides the tyrosine kinase inhibitors, the correlation between in silico predictions and experimental results for the 8 patient-derived GBM cell lines was also tested for pitavastatin (HMG-CoA reductase inhibitor), everolimus (mTOR inhibitor), celecoxib (COX2 inhibitor), and bortezomib (proteasome inhibitor) (Figure 6A-H). For bortezomib, all profiles were predicted to be sensitive, and these predictions matched the in vitro experimental results (Figure 6A and B). For everolimus, in vitro results were in agreement with simulation predictions for all lines except SK429 (Figure 6C and D). Our in silico model predicted GBM4, SK262, SK429, SK748, and SK1035 to be resistant to celecoxib; these predictions matched the in vitro results. However, GBM8, SK102, and SK987 were predicted to show moderate sensitivity to celecoxib but were found to be resistant in vitro (Figure 6E and F). For pitavastatin, the simulation predicted 5 patient-derived GBM cell lines to be sensitive (GBM8, GBM4, SK102, SK262, and SK987), of which SK987 was found to be resistant in vitro. On the other hand, of the cell lines predicted to be resistant (SK429, SK748, and SK1035), SK1035 was sensitive in vitro and did not match the prediction (Figure 6G and H).

Figure 5 In silico modeling and experimental in vitro data for drug responsiveness to tyrosine kinase inhibitors. This figure demonstrates in silico predictions of sensitivity and in vitro viability (respectively) in response to treatment with the tyrosine kinase inhibitors: A, B, lapatinib; C, D, nilotinib; E, F, imatinib; and G, H, sunitinib. Cells were exposed in vitro to 1 μM tyrosine kinase inhibitors for 72 h, and viability was determined using the Alamar Blue assay. The dose-response for the in silico predictions is generated by iterative simulations with increasing drug concentrations in the model, from which the viability index is calculated. Cells showing a decrease in viability of 20% or greater are considered sensitive to the drug.
These data demonstrate a 76.25% agreement between in silico predictions of drug response and in vitro experimental data in patient-derived GBM cell lines.
Discussion
Developing an in silico model that takes into account the complex genotypes/phenotypes of cancer to accurately predict drug response will help personalize therapy more efficiently. In this study, we developed and validated a virtual tumor model by retrospectively testing it against a dataset from a recent screening study [23]; we obtained ~85% agreement between our predictions and the results from that study. Following this retrospective validation, we generated in silico predictions to prospectively test the sensitivity of patient-derived GBM cell lines to targeted agents. These analyses also demonstrated a high degree of agreement (>75%) between in vitro experimental findings and in silico predictions. Together, these studies validate our in silico tumor model and the simulation-based approach and provide critical proof-of-concept for a priori prediction of responses to targeted therapies. Thus, this model provides an effective platform for testing and developing personalized therapeutic regimens for cancer patients.
The genomic inputs that we used to create simulation avatars for the patient-derived GBM cell lines were copy number variation data. A more comprehensive and accurate profile would require additional data (gene mutations, methylation status, etc., along with copy number variation); this would help us develop a more representative avatar, would likely improve the accuracy of our drug response predictions, and would provide higher correlation with experimental data.
Genotypes of cancer cell lines have traditionally been used to correlate with drug sensitivity [28,29]. A similar recent study makes efficient use of gene expression profiles to categorize colorectal cancers into different molecular and clinically actionable subtypes [30]. Moreover, it is clear that using molecular tumor profiles to stratify patients for therapy affects response and progression-free survival [31]. However, the increasing amounts of data from genomic, proteomic, transcriptomic, and metabolomic profiling will likely require integration of these varied datasets and the development of predictive systems modeling, which may hold the key to effective cancer therapy.
Rapid screening of patient samples in real time with models such as the one we have developed can drive critical therapeutic decision-making. Although our current model makes only cell-intrinsic predictions, we have been able to achieve a high rate of agreement between in silico predictions and in vitro findings. Future versions of this model are being refined to incorporate tumor microenvironment including aspects of angiogenesis, hypoxia, and tumor-associated inflammation. We believe that incorporating these features into our model would more accurately represent the tumor in a patient. Importantly, this will further help improve our predictions for designing therapeutic regimens for GBM patients. This model can also be adapted to identify potential mechanisms of resistance a priori and to design rational drug combinations that prevent emergence of resistance and development of escape pathways.
Our in silico model aligns with NCI guidelines that emphasize evaluation of similar predictor models to determine their accuracy [12,15,32,33]. We intend to test this model in clinical trials and utilize it as a tool to expedite clinical decision-making and determine drugs/ combinations most likely to benefit a patient. Additionally, models such as these will play important roles in testing new biological hypotheses. This is critical to the discovery of molecular drivers and critical networks in cancer pathophysiology and the development of better diagnostics and effective therapeutics.
Additional file
Additional file 1: Supplementary Information.
Competing interests
The following authors are employed by Cellworks, Inc.: Zeba Sultana, Taher Abbasi, Shweta Kapoor, Ansu Kumar, Shahabuddin Usmani, Ashish Agrawal, and Shireen Vali. The other authors report no competing financial interests.
Authors' contributions
SCP designed the study, performed research, analyzed data, and wrote the manuscript; ZS executed simulation studies, analyzed data, and wrote the manuscript; SP designed the study, performed research, analyzed data, and wrote the manuscript; PJ performed research and analyzed data; RM performed research and analyzed data; YC performed research and analyzed data; ISB performed research; NN performed research; MM performed research and analyzed data; TA analyzed data, developed analytics, and wrote the manuscript; SK developed the predictive simulation-based tumor cell technology; AK developed the predictive simulation-based tumor cell technology; SU executed simulation studies and developed the predictive simulation-based tumor cell technology; AA developed the predictive simulation-based tumor cell technology; SV analyzed data, developed the predictive simulation-based tumor cell technology, developed analytics, and wrote the manuscript; SK (Santosh Kesari) designed the study, planned and directed research, analyzed data, and wrote the manuscript. All authors read and approved the final manuscript.

Figure 6 In silico modeling and experimental in vitro data for drug responsiveness to different drugs. This figure demonstrates in silico predictions of sensitivity and in vitro viability in response to treatment of patient-derived GBM cell lines with A, B, bortezomib; C, D, everolimus; E, F, celecoxib; and G, H, pitavastatin. All drugs were tested in vitro at 1 μM for 72 h, and viability was assayed using the Alamar Blue assay. Cells showing a decrease in viability of 20% or greater are considered sensitive to the drug.

| 7,604.8 | 2014-05-21T00:00:00.000 | ["Biology", "Computer Science", "Medicine"] |
Pressure, motion, and conformational entropy in molecular recognition by proteins
The thermodynamics of molecular recognition by proteins is a central determinant of complex biochemistry. For over a half-century, detailed cryogenic structures have provided deep insight into the energetic contributions to ligand binding by proteins. More recently, a dynamical proxy based on NMR-relaxation methods has revealed an unexpected richness in the contributions of conformational entropy to the thermodynamics of ligand binding. Here, we report the pressure dependence of fast internal motion within the ribonuclease barnase and its complex with the protein barstar. In what we believe is a first example, we find that protein dynamics are conserved along the pressure-binding thermodynamic cycle. The femtomolar affinity of the barnase-barstar complex exists despite a penalty by −TΔSconf of +11.7 kJ/mol at ambient pressure. At high pressure, however, the overall change in side-chain dynamics is zero, and binding occurs with no conformational entropy penalty, suggesting an important role of conformational dynamics in the adaptation of protein function to extreme environments. Distinctive clustering of the pressure sensitivity is observed in response to both pressure and binding, indicating the presence of conformational heterogeneity involving less efficiently packed alternative conformation(s). The structural segregation of dynamics observed in barnase is striking and shows how changes in both the magnitude and the sign of regional contributions of conformational entropy to the thermodynamics of protein function are possible.
INTRODUCTION
The change in the Gibbs free energy underlying molecular recognition and other complex protein functions such as allosteric regulation has, in principle, net contributions from both entropy and enthalpy. The latter is comprised of the internal energy and a pressure-volume work term. Detailed analysis of static low-temperature structural models has historically provided great insight into the internal energy and has promoted significant advances in understanding protein functions such as ligand binding through simulation and theory (1). Nevertheless, the origins of protein conformational entropy and its contribution to functions such as allostery remain much less well defined (2). Measurement of equilibrium fluctuations offers a powerful way to describe transitions between and occupancy of states that cannot be observed with classical methods of structural biology, and NMR relaxation has proven particularly useful in this regard (3)(4)(5). Over the past two decades, numerous studies of fast internal side-chain motion by NMR methods, particularly that of methyl-bearing amino acids, have revealed an unexpected complexity without distinguishing structural correlates (6).
Here, we take advantage of the fact that the Gibbs free-energy change associated with a change in state contains a pressure-volume work term. Volume changes represent the natural variable, and application of pressure can illuminate otherwise unobservable details of the thermodynamics of protein functions such as ligand binding and allostery. Protein molecules respond, both dynamically and structurally, to pressure in a complicated way. Pressure can compress proteins (7)(8)(9), remodel active sites (9), and facilitate excursions to higher-lying (10,11), locally unfolded (12)(13)(14)(15), or globally unfolded states (16,17), thereby revealing various aspects of the ensemble nature of proteins. Here, we use high-pressure NMR (18) relaxation to probe the fast internal motion of methyl-bearing side chains in the small enzyme barnase and use this motion as a proxy for changes in conformational entropy (ΔSconf) (2,4).
Sample preparation
pET-DUET expression plasmids containing the genes for barnase and barstar under the control of their own T7 promoters were obtained from GenScript Biotech Corporation (Piscataway, NJ, USA). An N-terminal 6xHis-tag followed by a Factor Xa cleavage site (MGSSHHHHHHSQAPIEGR) was added to barnase, while barstar remained untagged. Expression was carried out in BL21(DE3) E. coli cells. Barstar expressed and purified with the N-terminal Met residue present. NMR-relaxation samples were prepared largely as described elsewhere (4). Deuterium (19) and 15N relaxation (20) experiments on the free proteins were performed on a 1:2 mixture of uniformly 15N-labeled protein and uniformly 13C-labeled protein expressed in 60% D2O media to generate the 13CH2D isotopomer. The complex was studied by combining 15N-labeled protein (barnase or barstar) with 13CH2D-labeled binding partner (barstar or barnase). Prochiral methyl assignment samples were expressed during growth on 10% 13C6-glucose and 90% unlabeled glucose with uniform 15N labeling (21).
The barnase-barstar complex was isolated by Ni-NTA affinity chromatography, and the complex was dissociated with 6 M guanidine HCl (pH 7.9). Barstar was collected in the flow-through (20 mL) and refolded by dilution into 1 L of water. Refolded barstar was further purified on a DEAE ion-exchange column, with a wash of 25 mM imidazole (pH 7.9) and 10 mM KCl and elution with 500 mM NaCl, then spin concentrated (3 kDa cutoff) and polished by size-exclusion chromatography on Superdex SEC-75 equilibrated with 25 mM imidazole (pH 7.9) and 10 mM KCl. Barnase was eluted from the Ni-NTA column with 500 mM imidazole, spin concentrated (3 kDa cutoff), and buffer exchanged into 25 mM imidazole (pH 6.2), 10 mM KCl, and 5 mM CaCl2. The His-tag was cleaved by adding 4 mg of Factor Xa per mg of barnase and mixing overnight at room temperature. The solution was passed through a 1 mL Ni-NTA column coupled to a SEC-75 column in 50 mM imidazole (pH 7.9) and 50 mM KCl, spin concentrated, and buffer exchanged into 25 mM imidazole (pH 6.2) and 10 mM KCl. NMR experiments were performed on samples prepared in 25 mM imidazole (pH 6.2), 10 mM KCl, 5% D2O, and 0.02% NaN3 (w/v). Samples were stable for about 1 month for the free proteins and several months for the complex at 35°C.
NMR assignment and relaxation of free and bound barnase
All experiments were carried out at 35°C. Assignment experiments were done on a uniformly 13C,15N-labeled sample, with only one protein in the complex labeled to reduce spectral crowding. Nonuniform sampling was used extensively for triple-resonance assignment spectra (22). Assignments were mapped to high pressure by collecting 13C and 15N heteronuclear single quantum coherence (HSQC) spectra every 500 bar. These experiments were collected at either 500 or 600 MHz. Carbon and nitrogen HSQC spectra were collected at 1, 50, 500, 1,000, 1,500, 2,000, 2,500, and 3,000 bar with a waiting period of 1 h between each. Spectra were collected during ramp up and ramp down of pressure, with no detectable difference observed between them. Chemical shift analysis utilized the gyromagnetic-ratio-weighted change in chemical shift of 1H and 15N (or 13C) of bonded atoms resolved in two-dimensional correlation spectra.
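A minimal sketch of such a gyromagnetic-ratio-weighted chemical shift perturbation is given below. The weighting factors (0.154 for 15N, 0.251 for 13C) are common literature choices assumed here, since the exact scaling used in the study is not stated.

```python
# Sketch of a weighted chemical shift perturbation (CSP) for an H-X pair.
# Weighting factors are assumed literature values, not the paper's own.
import math

def weighted_csp(d_h, d_x, nucleus="N"):
    """Combined 1H/X CSP in ppm: sqrt(d_H^2 + (alpha * d_X)^2)."""
    alpha = {"N": 0.154, "C": 0.251}[nucleus]
    return math.sqrt(d_h ** 2 + (alpha * d_x) ** 2)

# Illustrative shift changes between 1 bar and 3 kbar for one amide (ppm).
print(f"{weighted_csp(0.045, -0.31, 'N'):.3f} ppm")
```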
Longitudinal and transverse relaxation was measured using HSQC spectra with nine interleaved delay points and three duplicates (delays 2, 5, and 8) for uncertainty estimation (each applied to itself and the neighboring delay points 1-3, 4-6, and 7-9) (25). Maximum peak intensities and their uncertainties were used to fit single-exponential decay curves with three parameters. 1H-15N nuclear Overhauser enhancement experiments were measured with a 5 s mixing time with and without irradiation of 1H. Relaxation was measured at 500 and 600 MHz (1H) for high-pressure experiments and at 500, 600, and 750 MHz (1H) for ambient-pressure experiments. Deuterium relaxation employed IzCzDz and IzCzDy experiments with "on-the-fly" IzCz compensation (26). High-pressure NMR relaxation experiments were carried out in a 3 kbar-rated 5 mm o.d./3 mm i.d. ceramic NMR tube connected to a high-pressure Xtreme-60 pressure generator (Daedalus Innovations, Aston, PA, USA). The pressure medium was degassed water with a mineral oil interface with the sample. The effect of pressure on imidazole's pKa is small (27). Relaxation measurements on the complex (24 kDa) were performed on uniformly 15N-labeled barstar and uniformly 13C-labeled barnase expressed during growth in 60% D2O media.
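For illustration, the three-parameter single-exponential fit can be sketched as follows; the delay times, intensities, and uncertainties are synthetic stand-ins for the interleaved HSQC series described above.

```python
# Minimal sketch of the three-parameter single-exponential relaxation fit.
# All data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, rate, offset):
    """Single-exponential decay with amplitude, rate, and offset."""
    return i0 * np.exp(-rate * t) + offset

delays = np.array([0.01, 0.03, 0.05, 0.08, 0.12, 0.18, 0.26, 0.38, 0.50])  # s
# Synthetic intensities: true R = 8.5 1/s plus 1% multiplicative noise.
intens = 1e6 * decay(delays, 1.0, 8.5, 0.02)
intens *= np.random.default_rng(1).normal(1.0, 0.01, delays.size)
sigma = 0.01e6 * np.ones_like(intens)  # uncertainty from duplicate delays

popt, pcov = curve_fit(decay, delays, intens, p0=(intens[0], 5.0, 0.0),
                       sigma=sigma, absolute_sigma=True)
print(f"R = {popt[1]:.2f} s^-1 +/- {np.sqrt(pcov[1, 1]):.2f}")
```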
The macromolecular rotational correlation model and Lipari-Szabo squared generalized order parameters of the amide N-H bond vectors (O²NH) were determined from 15N relaxation experiments (28). Tumbling models for the complex were determined using data from 15N-labeled barstar backbone relaxation measurements and were chosen according to the Akaike and Bayesian information criteria and F-tests to obtain probability values for each model (29). Simple model-free parameters were determined using a grid search in a C++/AMP implementation of Relxn2A (4,30). The analysis used an N-H bond length of 1.02 Å (31), ignoring any influence of angular motion of the bonded H; a general 15N tensor breadth of −170 ppm (32); a quadrupolar coupling constant of 167 kHz (33); and a methyl rotation order parameter O²rot of 0.1107, assuming perfect tetrahedral geometry of the methyl carbon.
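As a reference point for the model-free analysis, the sketch below implements the original Lipari-Szabo spectral density; the order parameter, correlation times, and field are illustrative values, not fitted parameters from this study.

```python
# Sketch of the simple Lipari-Szabo model-free spectral density:
# overall tumbling time tau_m, internal time tau_e, and order parameter O2.
import numpy as np

def j_model_free(omega, o2, tau_m, tau_e):
    """J(omega) for the original Lipari-Szabo form (units of s/rad)."""
    tau = 1.0 / (1.0 / tau_m + 1.0 / tau_e)  # effective internal time
    return 0.4 * (o2 * tau_m / (1 + (omega * tau_m) ** 2)
                  + (1 - o2) * tau / (1 + (omega * tau) ** 2))

# 2H Larmor frequency at a 500 MHz (1H) field, in rad/s (illustrative).
omega_d = 2 * np.pi * 76.8e6
print(j_model_free(omega_d, o2=0.6, tau_m=6e-9, tau_e=50e-12))
```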
Voronoi volumes (34) are ideally suited to investigate the volume of buried atoms, as "the sum of polyhedral volumes is exactly equal to the total space occupied by the points" (35). Voronoi volumes were determined with an in-house Cython program. Only structural models with a nominal resolution of <2.5 Å, based on data obtained at cryogenic temperature for the protein in the same biological context as that of the NMR experiment (e.g., free versus complexed), were considered. Deposited structures were used without modification. When multiple copies of the protein were present in the asymmetric unit, the copy with the strongest electron density and highest-quality model was identified by visual inspection. The calculation finds the edges of the coordinates and defines a box with 5 Å padding. A cubic grid is created with a step size of 0.01 Å. Each voxel is interrogated for the nearest heavy atom and assigned to it. Voxels within the van der Waals radius (36) of any atom were excluded. The sum of all voxels assigned to an atom represents the atom's Voronoi polyhedron and is used to calculate its volume. Side-chain volumes were summed starting at the Cβ and ending with the atoms with the same number of dihedral angles as the methyl group of interest. For example, Ile Cγ2 methyls include the volume of both Cγ carbons and the Cβ atom, while Ile Cδ methyls include the Cδ, both Cγ carbons, and the Cβ atom. The surface was defined using a large probe (2.4 Å radius) to avoid fitting inside any internal cavities. The surface algorithm ignored ligands and waters, so binding pockets were considered open surfaces as well. The probe was moved through the grid to find all voxels where the probe fit without steric overlap, and protein atoms that came within 1 voxel of the probe were flagged as belonging to the surface. If any atom of a side chain (starting at the Cβ and ignoring the backbone) contained a surface atom, the side chain was not considered buried. Alternative rotamers were analyzed by including all rotamers in the calculation, summing the volumes of all rotamers of a given side chain, and subtracting the van der Waals volume only once. All unoccupied void volume is obtained by this calculation, including that which remains from perfect packing of spheres.
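The voxel-grid logic of this calculation can be sketched compactly. The toy version below uses three atoms and a much coarser grid (0.5 Å rather than 0.01 Å) so that it runs in seconds, and it omits the surface-probe and rotamer steps described above; coordinates and radii are illustrative.

```python
# Coarse toy version of the voxel-based Voronoi void-volume calculation.
import numpy as np

coords = np.array([[0.0, 0.0, 0.0], [1.8, 0.0, 0.0], [0.9, 1.5, 0.0]])
vdw = np.array([1.7, 1.7, 1.55])  # van der Waals radii, Angstrom

pad, step = 5.0, 0.5
lo, hi = coords.min(0) - pad, coords.max(0) + pad
axes = [np.arange(a, b, step) for a, b in zip(lo, hi)]
grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)

d = np.linalg.norm(grid[:, None, :] - coords[None, :, :], axis=2)
nearest = d.argmin(1)          # assign each voxel to its nearest atom
keep = ~(d < vdw).any(1)       # exclude voxels inside any vdW sphere

voxel_vol = step ** 3
for i in range(len(coords)):
    vol = voxel_vol * np.count_nonzero(keep & (nearest == i))
    print(f"atom {i}: unoccupied Voronoi volume ~ {vol:.1f} A^3")
```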
RESULTS
Barnase-barstar is one of the strongest protein-protein interactions known in biology. Its fM affinity derives from large enthalpic contributions at the interface (37). Individual entropic contributions sum to yield a negligible (≈zero) contribution to the binding free energy (37). We sought here to learn the dynamical character of barnase in its primary functional states, i.e., free and bound to the inhibitor barstar, and also to learn how elevated hydrostatic pressure influences this interaction. We quantify the disorder of the methyl symmetry axis in terms of the Lipari-Szabo squared generalized order parameter (O²axis) (38) obtained using deuterium NMR-relaxation methods (19). The O²axis can range from a value of one, corresponding to complete rigidity within the molecular frame, to zero, which effectively corresponds to isotropic disorder. Importantly, only motion faster than the overall molecular reorientation of the protein contributes to O²axis. The four states of barnase examined have quite different average O²axis values, variances about those averages (Table 1), and distributions within the molecular structure (Fig. 1).
At ambient pressure, we find that complexation is accompanied by an overall rigidification of the methyl-bearing side chains of barnase (Fig. 2; Table 1). Motion of side-chain torsions, both within a rotamer well and between rotamers, indirectly captures conformational entropy expressed on the timescales represented by the NMR-relaxation phenomena used here (2). Unfortunately, motions leading to interconversion of states slower than macromolecular tumbling are rendered invisible (38). For example, long-time molecular dynamics simulations of T4 lysozyme beyond the macromolecular tumbling regime suggest the presence of "excess" entropy (39), but it is not clear how higher-order couplings (40) influenced that estimate. Earlier analysis of albeit somewhat shorter molecular dynamics simulations of seven proteins, which removed the influence of coupled motions using the MIST algorithm (41), suggests that the vast majority of rotamer entropy is indeed expressed in order parameters obtained by solution NMR-relaxation approaches (42). Furthermore, Brüschweiler and co-workers utilized a clever strategy to sample timescales slower than macromolecular tumbling and found little influence on the methyl symmetry axis order parameters in ubiquitin (43). Nevertheless, to ameliorate this and other issues in the use of classical NMR-relaxation phenomena to characterize conformational entropy, the so-called NMR "entropy meter" was developed to provide an empirical calibration that avoids specific motional models and uses motion on the ps-ns timescale to capture changes in rotamer entropy (2). Long-time (i.e., rare) fluctuations or coupling and shorter-timescale correlated motion are meant to be absorbed into the calibration of the entropy meter (2,4) and expressed in the limits of its determined precision. Accordingly, the dynamical proxy for conformational entropy (4) indicates that the overall rigidification of barnase upon complex formation corresponds to an unfavorable contribution (ΔSconf < 0) to the binding free energy of +11.7 ± 1.2 kJ/mol (as −TΔSconf) at 300 K. Application of high hydrostatic pressure to free barnase yields an unexpected clustering of changes in motion (ΔO²axis) into two spatial regions, one that rigidifies with pressure and one that activates dynamically (Fig. 2; Table 1). Region I, which becomes more rigid with pressure, is defined by 21 methyl-bearing side chains that are largely localized to the N-terminal domain of the protein. Eight of these side chains are fully buried. Region II, which becomes more dynamic with applied pressure, is comprised of 17 methyl-bearing side chains in the C-terminal domain, 11 of which are fully buried. Nine methyl-bearing side chains are outside of these regions. All methyl probes are 7 Å or more from the barnase-barstar interface, which is highly polar and extensively hydrated (44).
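Only the functional form of the "entropy meter" is illustrated below: a linear empirical calibration converting the mean change in O²axis into a conformational entropy term. The slope and torsion count are loudly hypothetical placeholders, not the published calibration constants; they are chosen so the toy output lands near the +11.7 kJ/mol scale discussed above.

```python
# Hedged sketch of the linear "entropy meter" form; M_CAL and N_TORSIONS
# are assumed placeholder values, NOT the published calibration.
M_CAL = -0.01      # kJ/(mol*K) per torsion per unit <O2_axis>; assumed
N_TORSIONS = 120   # hypothetical count of side-chain torsional degrees of freedom

def neg_T_dS_conf(d_o2_axis_mean, temperature=300.0):
    """-T*dS_conf (kJ/mol) from the binding-induced change in <O2_axis>."""
    d_s = M_CAL * N_TORSIONS * d_o2_axis_mean
    return -temperature * d_s

# Rigidification on binding (positive d<O2_axis>) -> entropic penalty (> 0).
print(f"-T*dS_conf ~ {neg_T_dS_conf(+0.033):+.1f} kJ/mol")
```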
A thermodynamic cycle from free barnase was created with barnase either bound to its inhibitor barstar, subjected to high hydrostatic pressure (3 kbar), or both (Fig. 2). At 3 kbar, binding of barnase to barstar elicits opposite responses from the N- and C-terminal groups of side chains. Motion in region I is activated by elevated pressure, which is opposite to the response of free barnase. Region II rigidifies upon barnase binding barstar at both ambient and high pressure (Fig. 1; Table 1). Application of high pressure to the barnase-barstar complex leads to a general increase in the internal motion of barnase, with the largest change centered in region II. As might be expected, the pressure sensitivity of a side chain's motion is reduced as the O²axis approaches the rigid limit of one at ambient pressure. Of the four states of barnase examined, the complexed state at ambient pressure is the most rigid (⟨O²axis⟩ = 0.693) (Table 1).
The heterogeneous and regional response of side-chain dynamics to pressure is in stark contrast with the other metrics examined. For example, fast backbone motions are generally suppressed in response to pressure and without apparent grouping into regions I and II (Table 1). The regional grouping in the dynamical character of the protein is also not apparent from the more usual tactic of characterizing the pressure dependence of NMR chemical shifts (Fig. S1). Nonlinear pressure-induced changes in chemical shifts in the free protein are heterogeneously distributed throughout the protein, while the complex shows very few chemical shifts with significant nonlinearity and provides little insight (Fig. S1).
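A common way to quantify such nonlinearity is to fit each resonance's shift versus pressure to a quadratic and flag significant second-order coefficients; a sketch with illustrative data follows.

```python
# Sketch of pressure-dependence nonlinearity analysis for one resonance.
# Pressures match the acquisition grid above; shifts are illustrative.
import numpy as np

pressures = np.array([1, 50, 500, 1000, 1500, 2000, 2500, 3000]) / 1000.0  # kbar
shifts = np.array([8.210, 8.211, 8.232, 8.259, 8.291, 8.329, 8.372, 8.420])  # ppm

coef, cov = np.polyfit(pressures, shifts, deg=2, cov=True)
b2, b2_err = coef[0], np.sqrt(cov[0, 0])  # second-order (curvature) term
flag = "nonlinear" if abs(b2) > 2 * b2_err else "linear"
print(f"B2 = {b2:+.4f} +/- {b2_err:.4f} ppm/kbar^2 ({flag} response)")
```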
In contrast, local conformational heterogeneity can be inferred from the response of crosspeak volumes to pressure in both free barnase and barnase in complex with barstar (Fig. 3). Crosspeak intensities in the free protein show a general increase with pressure, suggestive of a less heterogeneous conformational landscape (45), but also highlight three interfacial residues whose intensities collapse significantly over the span of 3 kbar. Spatial grouping of a nonlinear response to pressure is seen in the complex for 11 backbone and five methyl resonances, the large majority of which are located in region I (Fig. 3). As in the free state, most of these resonances show negative curvature and an initial gain in intensity with pressure, indicating that pressure leads to a more ordered backbone. Though locally heterogeneous in detail, the overall response of protein dynamics to changes along the thermodynamic cycle is, of course, conserved for both backbone and side chains (Fig. 2, inset table). Indeed, the indicated precision is remarkable.
DISCUSSION
Despite the development of high-pressure NMR sample tubes suitable for multidimensional heteronuclear NMR of proteins some time ago (46), there has been only one previous study of the pressure dependence of the fast ps-ns motions of methyl-bearing amino acid side chains (47). In that work, the motions of methyl-bearing side chains of human ubiquitin were found to be significantly perturbed by the application of hydrostatic pressures reaching 2.5 kbar. As observed here for barnase, both free and in complex with its natural inhibitor barstar, ubiquitin side-chain motion showed a heterogeneous response with small volumes of "clustered" (i.e., spatially localized) perturbations of similar magnitude. However, the influence of applied pressure on methyl-bearing side-chain motion in free barnase and the barnase-barstar complex is more striking.
The localized response of motion to pressure is not easily explained by the ⟨O²axis⟩ values of regions I and II at ambient pressure, suggesting the presence of some distinguishing property that results in their segregation. The effect of pressure is fundamentally related to changes in system volume. To examine potential contributions to the pressure sensitivity by the protein itself, we carried out a fine-grained volumetric analysis of the crystal structure (see materials and methods). Focusing on methyl-bearing side chains without solvent-accessible surface area, we find that region I side chains in the ambient-pressure structure, on average, have ≈35 Å³ more unoccupied volume surrounding the side chain than those of region II (111 ± 23 and 78 ± 33 Å³, respectively; p < 0.022). Compression of voids explains the rigidification by pressure observed in region I. In contrast, a more densely packed region II may not be able to compress further and will respond to pressure through other mechanisms that decrease the system volume, such as local structural transitions or changes in hydration. These initial observations have prompted a broader examination of experimentally determined methyl symmetry axis order parameters to investigate the influence of surrounding void volume more generally. Those results will be presented elsewhere.
The barnase-barstar complex has perhaps the highest affinity known for a noncovalent heterodimer, with a dissociation constant in the low femtomolar range (37). High-affinity binding selectively alters the response of barnase to pressure. Region I rigidifies as free barnase is compressed, indicating that there is conformational heterogeneity involving less efficiently packed alternative conformation(s). Interestingly, this region corresponds to a putative late-folding intermediate (48). In distinct contrast, pressure favors increased disorder on the subnanosecond timescale in region II. As noted above, only very localized spatial clustering of the response of fast motion to pressure was observed in ubiquitin (47), but the extent of structural segregation in barnase is more pronounced and involves larger volumes of protein. The femtomolar affinity of the barnase-barstar complex exists despite a determined −TΔSconf penalty of +11.7 kJ/mol. But at high pressure, the overall change in side-chain dynamics is zero, and binding occurs with no conformational entropy penalty. This observed response of side chains to pressure is consistent with an important role of conformational entropy, reflected by changes in fast side-chain motion (4), in the adaptation of protein function to extreme environments (41). Furthermore, these results make clear that changes in both the magnitude and the sign of regional contributions of conformational entropy to the thermodynamics of protein function are possible.
DATA AVAILABILITY
The barnase relaxation data reported here have been deposited in the BMRB under accession codes 50791 and 50792.
DECLARATION OF INTERESTS
A.J.W. is a founding member of Daedalus Innovations, LLC (Aston, PA, USA), a manufacturer of high-pressure NMR apparatus.

| 4,797.8 | 2022-12-01T00:00:00.000 | ["Chemistry"] |
Subtype-associated epigenomic landscape and 3D genome structure in bladder cancer
Muscle-invasive bladder cancers are characterized by their distinct expression of luminal and basal genes, which could be used to predict key clinical features such as disease progression and overall survival. Transcriptionally, FOXA1, GATA3, and PPARG are shown to be essential for luminal subtype-specific gene regulation and subtype switching, while TP63, STAT3, and TFAP2 family members are critical for regulation of basal subtype-specific genes. Despite these advances, the underlying epigenetic mechanisms and 3D chromatin architecture responsible for subtype-specific regulation in bladder cancer remain unknown. We determine the genome-wide transcriptome, enhancer landscape, and transcription factor binding profiles of FOXA1 and GATA3 in luminal and basal subtypes of bladder cancer. Furthermore, we report the first-ever mapping of genome-wide chromatin interactions by Hi-C in both bladder cancer cell lines and primary patient tumors. We show that subtype-specific transcription is accompanied by specific open chromatin and epigenomic marks, at least partially driven by distinct transcription factor binding at distal enhancers of luminal and basal bladder cancers. Finally, we identify a novel clinically relevant transcription factor, Neuronal PAS Domain Protein 2 (NPAS2), in luminal bladder cancers that regulates other subtype-specific genes and influences cancer cell proliferation and migration. In summary, our work identifies unique epigenomic signatures and 3D genome structures in luminal and basal urinary bladder cancers and suggests a novel link between the circadian transcription factor NPAS2 and a clinical bladder cancer subtype.
Introduction
Urinary bladder cancer (BLCA) is the second most commonly diagnosed urologic malignancy in the USA, with over 81,400 total new cases diagnosed in 2019 [1,2]. As BLCA is a morbid disease that is costly to treat, an increased molecular understanding is required [3]. Expression of luminal (FOXA1, GATA3, PPARG, etc.) and basal (KRT1, KRT5, KRT6A, etc.) [4,5] genes has been used to molecularly characterize muscle-invasive BLCA. In particular, the presence of basal BLCA, which is often enriched for squamous differentiation, is associated with significant morbidity, disease progression, and lower survival [6,7].
In addition to directly regulating transcription, studies show that TFs regulate gene expression through epigenetic histone modifications and open chromatin accessibility in breast cancers [21][22][23][24]. However, the degree to which the specific repertoire of TFs, open chromatin TF accessibility, histone modifications, and 3D genome architecture cooperate to drive subtype-specific expression is unknown. Therefore, we performed the most comprehensive set of genome-wide experiments to date to systematically map the epigenome, transcriptome, TF binding, and 3D chromatin loops. To our knowledge, this is the first report identifying 3D genome architecture in bladder cancer. Our work highlights the relevance of epigenetic modifications, open chromatin accessibility, and the TF repertoire, and identifies a new basic helix-loop-helix (bHLH) TF, NPAS2, all of which cooperate in the coordination of subtype-specific gene expression in bladder cancer.
Comprehensive epigenomic profiling in both BLCA lines and primary tumors
In this project, we performed RNA-Seq, ChIP-Seq for histone 3 lysine 27 acetylation (H3K27ac), Assay for Transposase-Accessible Chromatin using sequencing (ATAC-Seq), and genome-wide chromatin conformation capture experiments (Hi-C) on 4 bladder cancer cell lines (Fig. 1a), two of which (RT4 and SW780) were previously annotated as luminal and two of which (SCABER and HT1376) were characterized as basal [8,25]. Based on the RNA-Seq data generated in this study, we used a previously reported molecular subtyping approach [26] to confirm assignment to luminal and basal states. Our results confirmed RT4 and SW780 as belonging to the Luminal-papillary subtype, while SCABER and HT1376 belong to the Basal/squamous subtype (Additional file 1: Table S1). Each experiment in the bladder cancer cell lines has at least two biological replicates (Additional file 2: Table S2), and we observed a high correlation between the replicates (Additional file 3: Table S3). More importantly, we performed the same set of experiments on four patient muscle-invasive bladder tumors. Using the same molecular subtyping method, we determined their subtypes as follows: T1 is Luminal-papillary, T3 is Stroma-rich, and T4 and T5 are Basal/squamous.
Luminal and basal transcriptional BLCA subtypes are associated with distinct promoter and distal enhancer activity at the epigenetic level
Enrichment of H3K27ac signals has been used to predict both active promoters and distal enhancers [27,28]. Therefore, we first performed ChIP-Seq for H3K27ac in all four cell types and four patient samples. Biologic replicates following H3K27ac ChIP-Seq always clustered together, indicating that our results are highly reproducible (Additional file 4: Figure S1A). Further, we found that the two luminal cell lines (RT4 and SW780) clustered together, while the two basal cell lines (SCABER and HT1376) grouped together as well (Additional file 4: Figure S1A). These clustering results suggest that global epigenomic profiles accurately reflect cell identity. The hierarchical clustering of the cell lines based on H3K27ac signals was also mirrored by global mRNA expression from RNA-Seq data (Additional file 4: Figure S1B). We performed differential gene expression analysis on the two groups of cell types (RT4 and SW780 vs. SCABER and HT1376) and identified 427 basal-specific (Additional file 5: Table S4) and 524 luminal-specific genes (Fig. 1b, Additional file 6: Table S5).

Fig. 1 Luminal and basal transcriptional BLCA subtypes are associated with distinct promoter and distal enhancer activity at the epigenetic level. a Overall design of the study. b Differential expression gene (DEG) analysis of luminal cell lines (RT4 and SW780) and basal cell lines (SCABER and HT1376) shows 427 basal-specific and 524 luminal-specific upregulated genes. c Heatmap of differential H3K27ac ChIP-Seq at promoters (left); H3K27ac signal intensity profiles for each cluster of BLCA cells (right). d Genome browser signal tracks (H3K27ac ChIP-Seq, ATAC-Seq, and RNA-Seq) for a panel of luminal and basal genes in RT4, SW780, SCABER, and HT1376 cells. e Promoter H3K27ac and associated RNA-Seq signals for selected luminal and basal genes show remarkable similarity. f Integrated model associating H3K27ac peaks at distal enhancers with RNA-Seq gene expression identifies putative enhancer-gene regulation; the top 10,000 most variable enhancers (left heatmap) are plotted along with their corresponding gene expression (right heatmap). g Correlations of genome-wide H3K27ac signals between the bladder cancer cell lines and tumor samples demonstrate similarity of the enhancer landscape.
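For orientation, a minimal sketch of the two-group comparison underlying such a DEG call follows. A dedicated framework (e.g., DESeq2) would be used in practice; the expression matrix here is random placeholder data, and the fold-change and p-value cutoffs are illustrative.

```python
# Naive sketch of a luminal (RT4, SW780) vs basal (SCABER, HT1376) DEG call.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene{i}" for i in range(5)]
# Rows: genes; columns: RT4, SW780, SCABER, HT1376 (log2 expression).
log_expr = rng.normal(5, 1, size=(5, 4))

for g, row in zip(genes, log_expr):
    lum, bas = row[:2], row[2:]
    lfc = lum.mean() - bas.mean()          # log2 fold change, luminal vs basal
    t, p = stats.ttest_ind(lum, bas)       # naive per-gene two-sample test
    if abs(lfc) > 1 and p < 0.05:
        side = "luminal" if lfc > 0 else "basal"
        print(f"{g}: {side}-specific (lfc={lfc:+.2f}, p={p:.3f})")
```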
Next, we examined promoter usage based on H3K27ac signals at known genes. We confirmed that promoter H3K27ac intensities closely mirror gene expression (Fig. 1c), and clustering analysis based on promoter H3K27ac intensity was able to distinguish luminal and basal models of BLCA (Additional file 4: Figure S1C). For example, we observed that the two luminal subtype BLCA cell lines, RT4 and SW780, have similar H3K27ac patterns at the luminal genes FOXA1, GATA3, and PPARG (Fig. 1d, e), while the two basal cell lines share similar promoter marks at genes encoding the basal/squamous markers KRT5/14. Interestingly, although HT1376 is classified as a basal/squamous subtype based on global gene expression, it shows a luminal-like promoter H3K27ac pattern at luminal genes (GATA3, KRT7/8/18; Fig. 1e).
Distal H3K27ac peaks away from gene promoter regions have been used as markers for active enhancers [27,29]. We took the same approach here, and on average, we predicted 59,466 (40,506) enhancers in each cell line (Additional file 7: Table S6). To link the distal enhancers to their target genes, we performed a correlation-based distal-enhancer peak-gene association as described in [30] and identified the top 10,000 variable distal enhancers that show a significant correlation to their linked genes (correlation ≥0.5, p < 0.01; a total of 58,509 pairs satisfied our criteria; Fig. 1f and Additional file 8: Table S7). We observed that the enhancers show clear clustering according to cell type, and their target genes show similar cell-type-specific patterns (Fig. 1f and Additional file 4: Figure S1D). Moreover, to understand the clinical relevance of our findings, we performed H3K27ac ChIP-Seq in four muscle-invasive bladder patient samples. Our results show a remarkable correlation between the tumors and the cell lines (Fig. 1g). In summary, we show in these cell lines and in a limited tumor cohort that epigenetic regulation is correlated with molecular subtype assignment.
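A minimal sketch of the correlation-based enhancer-gene linking is shown below, using the r ≥ 0.5 and p < 0.01 cutoffs stated above; the per-sample signal vectors are illustrative placeholders.

```python
# Sketch of correlation-based linking of one distal enhancer to one gene.
# Signal vectors across samples are invented for illustration.
import numpy as np
from scipy import stats

samples = ["RT4", "SW780", "SCABER", "HT1376", "T1", "T3", "T4", "T5"]
enh_h3k27ac = np.array([9.1, 8.7, 2.2, 3.0, 8.5, 5.1, 2.8, 2.4])
gene_expr = np.array([7.8, 7.5, 1.9, 2.6, 7.2, 4.4, 2.1, 2.0])

r, p = stats.pearsonr(enh_h3k27ac, gene_expr)
if r >= 0.5 and p < 0.01:   # cutoffs from the text
    print(f"link enhancer -> gene (r={r:.2f}, p={p:.1e})")
```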
Next, we performed motif analysis of these open chromatin regions (Additional file 11: Table S10). We observed that binding sites for CTCF and AP-1 complex are enriched in all cell lines ( Fig. 2b and Additional file 4: Figure S2G). Further ranking of TF motifs by enrichment p-value revealed luminal open chromatin regions (shared between RT4 and SW780) were enriched with binding motifs for GRHL2, TP53, and [30] TP63 while basal open chromatins (shared between SCABER and HT1376) were enriched for TEAD1/4 and KLF factor (Fig. 2b) binding motifs. GRHL2 [31] was previously reported to be a luminal gene, thereby validating our findings. Interestingly, binding motifs for AP-1 complex proteins FOSL1/2, JUN/JUNB, ATF3, and BATF TFs [32] were the topmost enriched motifs for both luminal and basal-squamous open chromatins. We then comprehensively mapped all the enriched TF motifs in luminal, basalsquamous and shared open chromatins of distal enhancers to examine the relationship between TFs and BLCA subtypes (Additional file 11: Table S10). We discovered that at distal enhancers, the luminal BLCA subtypes are associated with previously reported steroid hormone receptor TFs [31] suggesting that their binding sites may be primed early during development. We also discovered that the stem-cellassociated pioneering TFs such as KLF factors (KLF10/14), ATF factors (ATF1/2/4/7), and NANOG were enriched in basal-associated enhancers. This is interesting because there exists a progenitor cell population within basal urothelium that can contribute to urothelial development and differentiation [33,34].
FOXA1 and GATA3 bind at luminal open chromatins at regulatory distal enhancers to drive expression of luminal-specific genes
We hypothesized that TFs such as FOXA1 and GATA3 bind at open chromatin regions to pioneer luminal enhancers and activate associated gene expression. To test this hypothesis, we performed GATA3 ChIP-Seq in the RT4 luminal BLCA cell line and obtained FOXA1 ChIP-Seq data in RT4 cells from our previously published work (Additional file 12: Table S11) [8]. As predicted, the luminal TFs FOXA1 and GATA3 showed enriched binding at the open chromatin loci of luminal-associated (FOXA1, GATA3, PPARG, FGFR3, and FABP4) distal enhancers (Fig. 2c). More specifically, we discovered 1325 distal enhancers that show co-binding of both FOXA1 and GATA3 in RT4 (Fig. 2c). Similarly, FOXA1 and GATA3 showed enriched binding at the open chromatin loci of luminal marker gene (FOXA1, ERBB3, KRT19, GPX2, and FABP4) promoters (Additional file 4: Figure S2F).
GO term analysis of genes proximal to these distal enhancer sites showed regulation of TGF beta production, epithelium development, regulation of transcription involved in cell fate commitment, and cell-cell adhesion biological processes (cadherin binding and adherens junction assembly) as terms associated with FOXA1. In addition, regulation of cellular component, cell size, and apical plasma membrane biological processes were terms identified with GATA3-bound genes proximal to these distal enhancers, suggesting a strong involvement of both TFs in commitment to cell fate and luminal differentiation (Fig. 2d). In regard to proximal genes associated with distal enhancers bound by both FOXA1 and GATA3, terms identified were associated with various developmental processes and the regulation of mucus secretion and fat cell differentiation, both important metabolic attributes of differentiated urothelium (Fig. 2d).
We then proceeded with motif analysis of the FOXA1-only, GATA3-only, and co-bound sites. Surprisingly, AP-1 complex motifs were enriched in all distal enhancers, in addition to FOXA or GATA motifs (Fig. 2e). The order of binding of these three factors remains to be investigated. Finally, to understand the clinical relevance of our findings, we compared our four BLCA cell lines to the TCGA muscle-invasive bladder tumor ATAC-Seq data [30] and discovered that the genome-wide open chromatin profiles of our cell lines cluster with distinct clusters of tumors (Fig. 2f), suggesting that the open chromatin regions in these cell lines share similar patterns with patient tumors.
Luminal and basal subtypes of BLCA show potentially distinct 3D genome organizations
Previous studies have shown that 3D chromatin organization is associated with the epigenetic activation or silencing of genes [35]. For example, the majority of heterochromatin is known to be compressed in nuclei and located near the lamina-associated periphery of the nuclear envelope [35]. To obtain initial insights into the genome-wide 3D landscape of luminal and basal BLCA, we performed high-resolution Hi-C experiments on all four cell lines (at least 800 M reads each) and five bladder tumor patients (>800 M reads each) (Additional file 4: Figure S3). We used our recently developed software, Peakachu [36], a machine learning-based chromatin loop detection approach, to predict loops at 10 kb bin resolution. First, we identified an average of 56,315 loops (range 38,271 to 69,032) in the four cell lines (probability > 0.8; Additional file 13: Table S12). Then, using the probability score output from Peakachu, we assigned subtype-specific chromatin loops, as shown in the Aggregate Peak Analysis (APA, Fig. 3a and Additional file 14: Table S13) [37]. Based on our approach, we observed more potentially luminal-specific loops in RT4 and SW780 (2299) than basal-specific loops in the basal BLCA models SCABER and HT1376 (2144). We then compared each of these categories with loops detected in the five patient samples (Fig. 3b): ~30-40% of the luminal-assigned and basal-assigned 3D chromatin loops identified in the cell lines were also observed in these five tumor samples.
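Aggregate Peak Analysis amounts to averaging the Hi-C submatrices centered on each called loop pixel; a toy sketch on a dense random contact matrix with planted loops follows (real Hi-C matrices are sparse and would be normalized first).

```python
# Toy sketch of Aggregate Peak Analysis (APA) over called loop pixels.
import numpy as np

def apa(hic, loops, flank=5):
    """Mean (2*flank+1)^2 window over all loop pixels (i, j bins)."""
    w = 2 * flank + 1
    stack = []
    for i, j in loops:
        if flank <= i < hic.shape[0] - flank and flank <= j < hic.shape[1] - flank:
            stack.append(hic[i - flank:i + flank + 1, j - flank:j + flank + 1])
    return np.mean(stack, axis=0) if stack else np.zeros((w, w))

rng = np.random.default_rng(2)
hic = rng.poisson(5, size=(200, 200)).astype(float)  # toy contact matrix
loops = [(40, 90), (60, 120), (100, 170)]
for i, j in loops:                 # plant enrichment at each loop pixel
    hic[i, j] += 40
m = apa(hic, loops)
print(f"center/corner enrichment: {m[5, 5] / m[0, 0]:.1f}")
```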
Finally, we examined enhancer and promoter loops in each category for their association with subtype-specific gene expression. Examples are shown in Fig. 3c, in which the luminal gene FOXA1 and the basal gene KRT5 showed an increased number of enhancer-promoter loops in luminal and basal cell lines, respectively. Overall, we observed that ~40% of the chromatin loops exist between enhancers and promoters (Fig. 3d). Furthermore, we found a significant enrichment of FOXA1 and GATA3 binding sites at these loop anchors, indicating the involvement of these pioneer factors in the regulation of the 3D genome (Fig. 3e). This finding is in agreement with previous studies reporting the enrichment of FOXA1 binding sites in enhancer-promoter loops [38].
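An anchor-enrichment claim like the one in Fig. 3e can be tested, in its simplest form, by comparing how often loop anchors overlap a TF's peaks against a matched background, for example with a Fisher's exact test. The tallies below are hypothetical placeholders, and real analyses typically build the background from shuffled or distance-matched bins.

```python
# Sketch: 2x2 enrichment test of FOXA1 peaks at loop anchors vs. random bins.
# All four counts are hypothetical placeholders, not results from the paper.
from scipy.stats import fisher_exact

anchors_with_peak, anchors_without = 6100, 3900   # loop anchor bins
random_with_peak, random_without = 2200, 7800     # matched background bins

odds, p = fisher_exact([[anchors_with_peak, anchors_without],
                        [random_with_peak, random_without]])
print(f"odds ratio = {odds:.2f}, p = {p:.2e}")
```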
Copy number variation (CNV) and chromatin loops in bladder cancer
A hallmark of cancer is large structural variations (SVs), which include inversions, deletions, duplications, and translocations. Recently, it has been shown that alterations in CNVs and SVs can lead to alterations in 3D genome structure, including the formation of new topologically associated domains ("neo-TADs") [39] and resultant "enhancer hijacking" [40]. Neo-TADs refer to scenarios in which an SV event leads to the formation of new chromatin domains, which can in turn affect the expression profiles of the genes located in those regions. In the "enhancer-hijacking" model, altered 3D genome organization results in abnormal enhancer interactions, with enhancers brought into close proximity to the wrong target gene (usually an oncogene), resulting in inappropriate target activation.
We first systematically identified copy number variations (CNVs) and SV events from the Hi-C data with the HiNT [41] and HiCBreakfinder [42] software. We identified tens of large SVs, including inversions, deletions, and translocations (Fig. 4a, b, Additional file 4: Figures S4-S5, Additional file 15: Table S14). As might be expected, we observed fewer CNVs in the patient samples than in cell lines. More importantly, we were able to reconstruct the local Hi-C maps surrounding the breakpoints of the SVs, in which we observed interesting enhancer-hijacking events and the formation of neo-TADs (Fig. 4c-h). These observations provide an important resource for further study of the function of the rearranged enhancers in the context of bladder cancer.
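Intuitively, an inter-chromosomal translocation shows up in Hi-C as a block of unexpectedly strong contacts between two chromosomes. The toy sketch below flags such bins with a crude z-score rule; dedicated callers such as HiNT and HiCBreakfinder model coverage, mappability, and local background far more carefully, so this is illustration only.

```python
# Toy sketch: flag candidate translocation bins as inter-chromosomal contact
# counts far above background. The z-score cutoff is an arbitrary assumption.
import numpy as np

def candidate_breakpoint_bins(inter_matrix, z_cutoff=10.0):
    """inter_matrix: contact counts between bins of chrA (rows) and chrB (cols)."""
    m = np.asarray(inter_matrix, dtype=float)
    mu, sd = m.mean(), m.std()
    hits = np.argwhere(m > mu + z_cutoff * sd)
    return [(int(i), int(j)) for i, j in hits]   # (chrA bin, chrB bin) pairs

# usage: bins = candidate_breakpoint_bins(hic_chr9_chr22_counts)
```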
Neuronal PAS Domain Protein 2 (NPAS2) is a novel luminal BLCA TF which regulates luminal gene expression and cell migration
Genome-wide open chromatin analysis of BLCA cell lines provides an ideal platform for the identification of novel transcriptional regulators of BLCA cell fate and phenotype. Here we performed motif analysis of luminal-associated, basal-associated, and shared open chromatin regions, resulting in the identification of distinct TFs in each cluster. Many of these represent known families of subtype-specific regulators, such as the GATA, FOX, and ETS families at luminal-associated ATAC-Seq peaks. Among them, we noticed a potential novel bHLH-containing regulator, NPAS2, which is enriched in the luminal-associated and shared clusters but not in basal-associated ATAC-Seq peaks (Fig. 5a). We examined its binding profile using the latest ENCODE data (HepG2 cells) [43] and found that NPAS2 binds at the FOXA1 promoter region (Fig. 5b) but not at regulatory regions of basal marker genes, suggesting the possibility that NPAS2 is an upstream regulator of FOXA1. We then checked the TCGA data and found that a high expression level of NPAS2 is significantly correlated with overall patient survival (Fig. 5c).
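A survival association of this kind is commonly assessed by splitting patients on expression and comparing Kaplan-Meier curves with a log-rank test. The sketch below, using the lifelines package, assumes a hypothetical table with NPAS2 expression, follow-up time, and event status, and a median split; the TCGA analysis may have used a different cutoff.

```python
# Sketch: Kaplan-Meier comparison of NPAS2-high vs. NPAS2-low TCGA BLCA
# patients. The input file, column names, and median split are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_blca_npas2.tsv", sep="\t")  # columns: npas2_expr, time, event
high = df["npas2_expr"] >= df["npas2_expr"].median()

kmf = KaplanMeierFitter()
for label, grp in [("NPAS2-high", df[high]), ("NPAS2-low", df[~high])]:
    kmf.fit(grp["time"], event_observed=grp["event"], label=label)
    kmf.plot_survival_function()

res = logrank_test(df[high]["time"], df[~high]["time"],
                   event_observed_A=df[high]["event"],
                   event_observed_B=df[~high]["event"])
print(f"log-rank p = {res.p_value:.3g}")
```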
To further determine whether NPAS2 expression influences downstream target expression and phenotype, we overexpressed NPAS2 in the basal-squamous BLCA cell line SCABER. First, we performed trans-well migration assays and found that overexpression of NPAS2 in SCABER cells decreased trans-well migration (Fig. 5d). We then performed RT-qPCR experiments and found that basal marker genes (such as KRT5, KRT6A, and TFAP2C) were significantly downregulated following NPAS2 overexpression (Fig. 5e), suggesting that NPAS2 represses the expression of a subset of basal marker genes.
Because our functional genomics analysis suggests that FOXA1 and GATA3 cooperate to regulate luminal target genes [8], we individually overexpressed FOXA1 and GATA3 in SCABER cells to test their ability to regulate NPAS2 expression. We observed increased expression of NPAS2 upon overexpression of either FOXA1 or GATA3 (Fig. 5f).
Discussion
Muscle-invasive BLCA is a morbid and expensive disease to treat [3,[44][45][46]]. With the recent development of immunotherapies such as anti-PD-1 [47] and anti-PD-L1 [48] agents, as well as targeted approaches including FGFR3 inhibitors, clinical management has been revolutionized [49]. However, response rates to these and other standard approaches are suboptimal, suggesting the need for increased molecular understanding. In keeping with this, recent National Comprehensive Cancer Network (NCCN) guidelines have encouraged biomarker and molecular-based subtype studies to further stratify patients for recent targeted therapies [50].

Fig. 4 legend (fragment): ... SW780 (b). c A large intra-chromosomal translocation on chr9. d-h Inter-chromosomal translocations. The breakpoints were identified by the HiCBreakfinder software. We then reconstructed the local Hi-C maps across the breakpoints. RNA-Seq and H3K27ac ChIP-Seq tracks from the same cell type are shown below the Hi-C maps.
It has been suggested that RNA-Seq-based molecular subtyping of BLCA is prognostic of clinical outcomes in patients [6][7][8]. While TCGA and other studies have identified mRNA-based molecular subtypes, the epigenetic differences underlying these expression subtypes are unknown. The Encyclopedia of DNA Elements (ENCODE) Consortium has contributed greatly to the current understanding of how epigenetic modifications across multiple tissues vary to regulate tissue-specific gene expression [29]. Histone modification states such as H3K27ac, among various other epigenetic states, mark enhancers and promoters that form a complex interacting network hub to regulate gene expression [51,52]. TCGA has incorporated DNA methylation data into its subtype analyses [53]. DNA methylation states have been shown to be coupled with histone modifications, particularly at CpG sites at promoters [54]. However, large changes in epigenetic histone modification states that influence gene expression lie in distal regions, in enhancers and other sites that orchestrate the 3D genome (CTCF) [29,38,[55][56][57]]. Hence, our study utilized large-scale genomic experiments such as ATAC-Seq and H3K27ac ChIP-Seq, as well as FOXA1 and GATA3 ChIP-Seq and Hi-C combined with RNA-Seq, to construct a comprehensive molecular map of luminal and basal BLCA in both cell line models and patient tumors. We further utilized TCGA datasets to orthogonally validate our findings and derive inferences regarding their clinical importance.
We found evidence for regulation of luminal and basal bladder cancer genes by proximal promoters and distal enhancers that form long-range chromatin loops and potentially drive oncogenic programs. Our findings are largely in agreement with previous work on the role of FOXA1 and GATA3 in the regulation and maintenance of oncogenic programs in luminal bladder cancers [8,[19][20][21]]. Interestingly, we found a novel co-regulation by FOXA1 and GATA3 with the AP-1 complex as a common binding partner, as in breast cancers [58,59], that appears to drive activity of distal enhancers but not promoters. Our comprehensive 3D genome map shows distinct chromatin loop interaction networks available to luminal and basal BLCA. To our knowledge, this is the first report of a comprehensive 3D genomic map of bladder tumor patients. Our analysis further demonstrates the regulation of subtype-specific BLCA genes through physical loops between distal enhancers and promoters, as reported in other studies [37,38,52]. Through our analysis, we have identified a novel bHLH TF regulator of luminal BLCA, NPAS2, whose expression correlates with overall survival of BLCA patients included in the TCGA cohort. Through several biological experiments, we showed that NPAS2 regulates the expression of several genes that serve as markers of basal-squamous BLCA and diminishes the migration ability of basal BLCA cells. Most importantly, our work highlights how these TFs can cooperatively regulate molecular subtypes and drive clinical associations.
The clinical implication of identifying potential regulators of primary luminal differentiation such as NPAS2 is that, once identified, these factors can be leveraged or targeted. In cases in which the cancers are less luminal and more basal, shifting the biology of the tumor to a more luminal gene expression subtype by activating NPAS2 (and other required factors) could slow the growth of a basal tumor. Alternatively, 30% of muscle-invasive tumors are luminal, with upregulated luminal pathways, and blocking the function of these luminal regulators could potentially improve survival in patients with luminal BLCA.
Although our studies provide an excellent starting point by identifying associations between the epigenetic landscape and 3D genome architecture and tumor subtype, increased numbers of tumor specimens will be required; the cost of sequencing limited our current ability to include a large set of patient tumors. An additional limitation of our study is the lack of a consortium-level analysis of ChIP-Seq data for histone modifications and all major TFs, which would increase the precision of our regulatory analysis. Previous studies, however, were limited to single TFs or small combinations of TFs in the context of gene regulation. We therefore believe that our study will serve as a solid, comprehensive resource from which to launch a further series of hypothesis-driven biological experiments based on gene and epigenetic perturbations, unveiling both novel molecular targets and biomarkers.
Cell lines and patient tumor samples
Bladder cancer cell lines RT4, SCABER, SW780, and HT1376 were obtained from ATCC and cultured as previously described [8]. Bladder tumor samples were obtained from the Penn State Hershey College of Medicine's biobank storage at the Institute of Personalized Medicine (IPM) with appropriate protocol approval from the institutional review board (IRB number: STUDY00001117). The samples from Northwestern University were also obtained with proper approval from the institutional review board (IRB number: STU00088853). Samples were selected based on tissue availability (50 mg) for several rounds of sequencing experiments.
Cell culture
Bladder cancer cell lines were cultured in growth medium consisting of base media supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin (Corning). RT4, SW780, SCABER, and HT1376 cells were cultured in McCoy's 5A (Gibco), RPMI-1640 (Corning), Eagle's minimum essential medium (MEM; GE Life Sciences), and Eagle's MEM with 1% non-essential amino acids (Corning), respectively. Cells were plated in tissue culture plates (TCPs, Corning; T-25, T-75, or 15-cm dishes) and grown and expanded in a humidified 5% CO2 incubator for the different sequencing experiments. For storage, cells were preserved in growth medium containing 5% DMSO in the vapor phase of liquid nitrogen. For passaging, cells were washed with phosphate-buffered saline (PBS, Corning) and trypsinized for 5 min to detach them from the TCPs. They were then spun down at 200×g to pellet and washed with PBS for further experiments.
RNA-Seq
For RNA-Seq, RNA was extracted from frozen cell pellets using the RNeasy Mini Kit (Qiagen). Extracted RNA was quantitated using a NanoDrop (Thermo Scientific). The SureSelect strand-specific RNA library preparation kit (Agilent) was used to generate cDNA libraries, with polyA RNA pulled down using 2 μg of oligo(dT) beads. The RNA was then fragmented, reverse transcribed, end repaired, 3′-end adenylated, adaptor ligated, and subsequently amplified and bead purified (Beckman Coulter). Barcode sequences were used to multiplex high-throughput sequencing. The cDNA library was QC'ed for size distribution and concentration using a BioAnalyzer High Sensitivity DNA Kit (Agilent) and the Kapa Library Quantification Kit (Kapa Biosystems). Final libraries were then pooled, diluted to 2 nM, and sequenced on the Illumina HiSeq, HiSeq X Ten, or NovaSeq platform (Illumina).
Chromatin crosslinking and ChIP-Seq library preparation
Each cell line was grown in 15-cm dishes (×4) with 25 mL growth medium, detached from the TCPs using trypsin as described above, and pelleted. Pellets of approximately 10-20 million cells (two biological replicates) were crosslinked immediately with 1% formaldehyde in PBS at RT for 10 min and subsequently quenched with 0.125 M glycine for 5 min. Crosslinked cells were then washed in PBS, and 100 μL freshly prepared lysis buffer (1% SDS, 50 mM Tris-HCl pH 8, 20 mM EDTA, and 1x complete protease inhibitor) was added. Lysed cells were then diluted in 900 μL TE buffer and sonicated using a focused-beam ultrasonicator (Covaris). Sonication was repeated for extended periods of time (up to 1.5 h) until a chromatin size distribution of ~200-300 bp was achieved. Sonicated DNA-chromatin complexes were then pulled down with anti-H3K27ac antibody and washed several times with RIPA buffer to remove non-specific binding. Pulled-down samples as well as input controls were de-crosslinked at 65°C overnight. Samples were treated with RNase and Proteinase K digestion at 37°C and 55°C, respectively, followed by DNA extraction using the phenol-chloroform method. The library was then prepared using the Kapa Hyper Prep Kit (KAPA), amplified using the KAPA HiFi PCR kit (KAPA) for 6-11 cycles, and purified with KAPA Pure Beads (KAPA). The final library was quantified using the Qubit high sensitivity DNA assay (Thermo Fisher) and then sequenced on the Illumina HiSeq 2500, HiSeq X Ten, or NovaSeq platform (Illumina).
Nuclei extraction and ATAC-Seq library preparation
Each cell line was pelleted following detachment from the TCPs, as described above. Cells were washed with PBS, counted, and kept on ice as a pellet. Fifty thousand cells were used for ATAC-Seq library preparation as described by Greenleaf [60]. First, pelleted cells were reconstituted in freshly made lysis buffer to remove unwashed mitochondrial DNA, spun down, and the buffer was discarded. The pelleted nuclei were then tagmented with Tn5 transposase (Illumina ref: 15027866 and 15027865) in a 50 μL volume for 30 min at 37°C, and the DNA was subsequently purified using the Qiagen MinElute kit (Qiagen). The library was then amplified using the KAPA HiFi PCR kit (KAPA) with Nextera non-barcoded Ad1 and barcoded Ad2.* primers for 6 cycles and purified using KAPA Pure Beads (KAPA). The library was quantified using the Qubit high sensitivity DNA assay (Thermo Fisher) and further assessed for quality on a Bioanalyzer (Agilent). Libraries were sequenced on the Illumina HiSeq 2500, HiSeq X Ten, or NovaSeq platform (Illumina).
Chromatin crosslinking and HiC library preparation
Each cell line was grown in T-25 flasks (×4) with 5 mL growth medium, trypsinized as described above, and pelleted. Approximately 4-5 million cells per pellet were crosslinked immediately with 2% formaldehyde in PBS at RT for 10 min. Crosslinked cells were then washed in PBS and frozen as 1-1.5 million-cell aliquots at −80°C for up to several months before library preparation. We used the ARIMA Hi-C kit for making libraries (ARIMA Genomics). As per the manufacturer's protocol, we performed QC on the samples and sequenced 300-600 M reads per sample on the Illumina NovaSeq platform (Illumina).
Overexpression of NPAS2
Overexpression of FLAG (DYKDDDDK)-tagged NPAS2 protein was performed using GenScript plasmids, which were expanded following transformation into competent Escherichia coli and clone picking. We then transiently transfected SCABER cell lines with 2 μg plasmid/well in 6-well plates (2 wells). For transfection, we used Lipofectamine 3000 reagent (Invitrogen) as per the manufacturer's protocol and cultured the cells for up to 2 days before collecting them for the various analyses.
Quantitative reverse transcription PCR (RT-qPCR)
Gene-specific primers (including primers for NPAS2) were used for RT-qPCR to detect mRNA levels.
RNA was extracted from frozen cell pellets using the RNeasy Mini Kit (Qiagen). Samples were treated with DNase to digest any DNA carried over during extraction. DNase-free RNA was then converted to cDNA using a reverse transcriptase kit (Invitrogen) according to the manufacturer's protocol. Reverse-transcribed cDNA was then assayed by RT-qPCR using KAPA SYBR FAST qPCR Master Mixes (Roche) with a 60°C annealing temperature and quantitated on a Bio-Rad quantitative PCR system. Ct values obtained through the quantitation were normalized to beta-actin and further transformed to the relative expression values shown in the plots.
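The Ct-to-relative-expression transform described above is the standard 2^(-ΔΔCt) method; a small worked example follows. The Ct values are hypothetical, chosen only to illustrate the arithmetic.

```python
# Worked example of the 2^(-delta-delta-Ct) transform: Ct values are
# normalized to beta-actin, then expressed relative to the control group.
# All Ct values below are hypothetical.
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    delta_ct = ct_target - ct_actin                 # normalize to beta-actin
    delta_ct_ctrl = ct_target_ctrl - ct_actin_ctrl  # same for the control
    return 2 ** -(delta_ct - delta_ct_ctrl)         # fold change vs. control

# e.g., a basal marker in NPAS2-overexpressing vs. control SCABER cells
fold = relative_expression(ct_target=26.8, ct_actin=17.1,
                           ct_target_ctrl=24.9, ct_actin_ctrl=17.2)
print(f"relative expression = {fold:.2f}")  # 0.25, i.e., ~4-fold down
```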
Transwell migration assay
The transwell migration assay was performed using 8 μm PVDF inserts (Corning) in transwell chambers fitting into 24-well plates (Corning). For each condition (control cells and cells overexpressing NPAS2 for 2 days), 50,000 cells were seeded per transwell (3 replicates) in FBS-free medium containing 1% PS. Cells were allowed to migrate through the transwell inserts for 24 h toward medium containing regular 10% FBS and 1% PS. Transwell chambers were removed, washed with PBS, and stained with crystal violet. Cells that had not migrated through the insert were removed using a Q-tip. Migrated cells were then visualized by microscopy and scored by counting stained spots, and counts were compared between experimental groups. A t-test was used to calculate significance between groups.
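For concreteness, the group comparison above amounts to an unpaired two-sample t-test on migrated-cell counts; a minimal sketch follows, with hypothetical counts standing in for the real replicate data.

```python
# Sketch: unpaired t-test on migrated-cell counts (three replicates per
# group). The counts are hypothetical placeholders.
from scipy.stats import ttest_ind

control_counts = [412, 388, 450]     # stained spots, control SCABER
npas2_oe_counts = [205, 231, 198]    # stained spots, NPAS2-overexpressing

t_stat, p_value = ttest_ind(control_counts, npas2_oe_counts)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```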
Computational analysis methods are provided in the Supplementary information as Additional file 16.
Additional file 1: Table S1. Classification results for each sample using consensusMIBC.
Additional file 2: Table S2. Cell Lines and Experiment table.
Additional file 4: Figure S1. Epigenetic landscape analysis of histone modifications in luminal and basal bladder cancers. a Genome-wide H3K27ac signals show that biological replicates and molecular subtypes (basal and luminal) cluster together. b Hierarchical clustering of genome-wide RNA-Seq results for 4 cell lines recapitulates the luminal and basal gene expression-based molecular subtypes. c Integrated H3K27ac peaks at promoters and RNA-Seq gene expression association model identifies putative promoter and gene regulation. Top 10,000 most variable promoters (left heatmap) are plotted along with their corresponding gene expression (right heatmap). Luminal (cyan) and basal (magenta) genes are highlighted for their specific linked enhancers. d Corresponding enhancer H3K27ac and its linked RNA-Seq signals based on our predicted model for selected luminal and basal genes shows remarkable similarity. Figure S2. Figure S4. Copy number profiles for four bladder cancer cell lines (HT1376, RT4, SW780, and SCABER) and five tumor samples (Tumor T1, Tumor T2, Tumor T3, Tumor T4, and Tumor T5). CNVs were computed using Hi-C data. Figure S5. Intra- and inter-chromosome structure variation (SV) events. Circos plots showing intra- and inter-chromosome SVs in HT1376 (a), RT4 (b), Tumor T1 (c), Tumor T2 (d), Tumor T3 (e), Tumor T4 (f), and Tumor T5 (g).
Funding
F.Y. is supported by 1R35GM124820, R01HG009906, U01CA200060, and R24DK106766. D.J.D. is supported by RSG1723301TBE (American Cancer Society) and the Bladder Cancer Support Group at Penn State Health.
Declarations
Ethics approval and consent to participate
Bladder tumor samples were obtained from Penn State Hershey, College of Medicine's biobank storage at the Institute of Personalized Medicine (IPM) with appropriate protocol approval from the institutional review board (IRB Number: STUDY00001117). The samples were also obtained from Northwestern University with proper approval from the institutional board (IRB number: STU00088853). All patients had previously provided written informed consent for tumor collection and subsequent analysis. This study was performed in compliance with the Helsinki Declaration.
Sclerodermatous GVHD after Allogeneic Bone Marrow Transplant: A Review
Chronic graft versus host disease (cGVHD) is the leading cause of non-relapse mortality after allogeneic hematopoietic bone marrow transplantation (HCT) for blood malignancy in patients who survive for more than two years. cGVHD can significantly affect quality of life and cause decreased mobility, amongst other grave consequences such as end-organ damage, contributing to morbidity and mortality rates for recipients of HCT. Unlike acute GVHD (aGVHD), the chronic variant of graft versus host disease (GVHD) has complex immunopathology involving both humoral and cellular immunity. It typically affects the integumentary system, though it is also known to affect myofascial and mucocutaneous tissues as well as cause end organ damage, ultimately resulting in death. Sclerodermatous cGVHD is a type of cGVHD characterized by involvement of the skin, subcutaneous tissue, and fascia without evidence of disease in the viscera. Manifestations of this disease are often evocative of autoimmune disease, a self-directed inflammatory reaction of the innate and adaptive immune systems against various tissues or multiple organ systems. This inflammatory reaction gives rise to autoantibodies as well as B cell- and T cell-mediated direct toxicity, which can cause chronic inflammatory changes of tissues, ultimately resulting in tissue scarring and end organ dysfunction. We aim to review the literature on this grave disease and elucidate aspects of the immunopathology of chronic sclerodermatous GVHD in hopes that it may lead to revelations inspiring novel therapies after its diagnosis or preventative measures before stem cell transplantation for malignancy.
1. Sclerodermatous Chronic Graft Versus Host Disease
Chronic graft versus host disease (cGVHD) remains the predominant cause of non-relapse mortality in patients who receive allogeneic hematopoietic bone marrow transplantation (HCT) for blood malignancy and survive for more than two years [1]. The risk of cGVHD is increased with older HCT recipient age, transplant from an unrelated donor, use of peripheral blood as the HCT source, and treatment with donor-lymphocyte infusion [2,3]. cGVHD can greatly impact quality of life and mobility for those afflicted with it, contributing to morbidity and mortality for recipients of HCT. Unlike its acute counterpart, the chronic variant of graft versus host disease (GVHD) has complex immunopathology and most commonly affects the integumentary system, though it is also known to affect myofascial and mucocutaneous tissues as well as cause end organ damage [1]. Sclerodermatous cGVHD is characterized by involvement of the skin, subcutaneous tissue, and fascia without evidence of disease in the viscera [4]. Recipients of HCT who present with sclerodermatous skin changes have been demonstrated to have elevated levels of antibodies to TGF-β and PDGF [4]. Many manifestations of this disease are evocative of autoimmune pathology, a self-directed inflammatory reaction of the innate and adaptive immune systems against various tissues or multiple organ systems. This inflammatory reaction gives rise to autoantibodies which can cause chronic inflammatory changes of tissues, ultimately resulting in tissue scarring and end organ dysfunction [5]. The most characteristic histopathological feature of acute GVHD is dyskeratotic epidermal keratinocytes surrounded by lymphocytes, called "satellitosis" [6,7]. cGVHD in the skin begins with an appearance of lichenoid tissue inflammation which progresses to a condition with scleroderma-like features [6,7]. On histology, sclerodermatous cGVHD features thickened collagen bundles in the dermis [6,8].
Clinical Features
Most cases of cGVHD occur months or years after HCT, even when not preceded by acute GVHD (aGVHD). cGVHD symptoms are reminiscent of a variety of autoimmune diseases such as systemic sclerosis, Sjögren syndrome, systemic lupus erythematosus, primary biliary cirrhosis, bronchiolitis obliterans, and immune cytopenias [8]. Pathognomonic characteristics include sclerosis, lichen planus-like lesions, poikiloderma, esophageal webs, fasciitis, and bronchiolitis obliterans. Skin is the most common site involved in cGVHD, affected at initial diagnosis in about 75% of subjects [8]. Clinical manifestations of immune-mediated fibrosis are observed in ocular, oral, esophageal, integumentary, joint, fascial, pericardial, pleural, and genital tissues with varying degrees of severity, and can ultimately result in renal failure and premature cardiovascular and endocrine disease. We focus on the integumentary manifestations of cGVHD in this review. Patients afflicted with cutaneous manifestations of cGVHD are at risk of joint contractures secondary to sclerodermatous skin changes, skin atrophy with ulceration, esophageal strictures, lichen planus-like lesions of mucosa and skin, and keratoconjunctivitis sicca [8]. Severity of cutaneous fibrotic disease can be measured noninvasively in terms of skin thickness and density by 20 MHz high-frequency ultrasonography, and in terms of skin elasticity via a non-invasive suction skin elasticity meter (Cutometer MPA 480). These two noninvasive parameters are imperfect measurements, and the gold standard diagnostic test is skin biopsy, but they can be employed to assess disease severity, though typically only for research purposes [9].
Immunopathology
cGVHD is a complex disease process involving interactions between alloreactive and dysregulated T and B cells as well as innate immune defenses, namely macrophages, dendritic cells, toll-like receptors (TLRs), and neutrophils, which ultimately incite profibrotic pathways and disease manifestation. The immunological pathology of autoimmune disease generally is a failure of correct identification of self-antigens when they are presented. It was previously believed that cross-reactions with antigens carried by foreign particles such as microbes were responsible for inciting auto-aggressive phenomena. Studies have now shifted to investigating the microenvironment of the presentation process by which self is identified [5,10]. This may contribute to understanding the pathology of cGVHD after hematopoietic stem cell transplantation in the context of a cytotoxic environment affected by the immune-depleting chemotherapeutics necessary in pre-transplantation protocols.
Our basic understanding of chronic GVHD from the study of experimental animal models is that it is a three-step process. First, there is an activation of host antigen-presenting cells (APCs), induced by the HCT preconditioning protocols, in an acute inflammation and tissue-injury phase [1]. This has been described as a reaction of innate immunity mediated by cytokines, toll-like receptor agonists, neutrophils, and platelets released in response to cytotoxic agents, infections, and acute GVHD [1]. This is followed by a phase of chronic inflammation and dysregulated adaptive immunity, distinguished by proliferation and migration of effector T cells, B cells, antigen-presenting cells, and NK cells [11,12]. It is during the final "effector phase" that involvement of innate and adaptive immunity is observed, in a process directed by dysregulated donor lymphocyte populations via transforming growth factor-β (TGFβ), PDGFα, TNFα, IL-17, macrophages, and fibroblasts, which ultimately cause organ damage and fibrotic skin changes [11,13]. The release of these profibrotic mediators causes macrophage and fibroblast activation, collagen deposition, fibrosis, and irreversible end organ damage.
In cGVHD, TGFβ has been shown to be central to the development of skin fibrosis via Th2/Th17 pathways. Monocyte-produced TGF-β1, a potent stimulus for collagen synthesis, is thought to drive the fibrosis that eventually causes debilitating sclerodermatous skin changes and skin contractures [6,14].
Highly cytotoxic ablative chemotherapy may predispose to an auto-aggressive innate immune response via release of cytotoxic agents that directly trigger the Th1 and Th17 as well as TGF-β and IL-6 pathways, inciting adaptive regulatory lymphocyte production [10,15,16]. CD4+ CD25+ regulatory T cells (Tregs) are thought to influence the size of the peripherally activated CD4 pool prepared to identify and attack exogenous antigens or develop auto-inflammatory reactions and subsequent tissue damage [17]. A balance between memory and activated peripheral CD4 cells exists under the direction of the Treg pool, and disruption of this balance in the healthy individual can result in derangements in the peripheral lymphocyte population, thereby inciting immune deficiencies or clonal expansions that result in autoimmune disease and its manifestations [11].
The role of humoral immunity in GVHD immunopathology has also been examined in great detail of late. New roles for B cell-mediated immunostimulation through antigen presentation and immunoregulation have been recognized by Shimabukuro-Vornhagen et al. [18]. B cell-mediated immune responses are carried out by antibody-mediated and antibody-independent mechanisms.
Antibodies produced by B cells after activation can effect complement activation, antibody-mediated direct cytotoxicity, and Fc-receptor antigen uptake resulting in phagocytosis [19]. B cells can subsequently secrete a large number of pro-inflammatory cytokines, including IL-2, TNF-α, IL-6, IL-12, MIF, and interferon-γ, which consequently activate a large number of immune cells such as T cells (including Th17 cells), macrophages, and natural killer (NK) cells, which, as previously recapitulated in this review, have been shown to have direct roles in the graft versus host (GVH) reaction and clinical morbidity [18]. It is also understood that antigen presentation by activated B cells, which upregulate major histocompatibility complex and costimulatory molecules such as CD80 and CD86, leads to CD4+ and CD8+ T-cell activation and differentiation and has ultimately also been shown to affect GVH-related skin fibrosis post allogeneic bone marrow transplant [18].
Cytokines in acute versus chronic GVHD are another area of interest, particularly in terms of the role of chemokines as potential therapeutic targets. RT-PCR analysis of cytokine expression across various severities of disease reveals increased expression of interferon-γ (IFN-γ) and interleukin (IL)-10 mRNA as well as upregulated IL-4, IL-5, and IL-13 in aGVHD [11,14]. This evidence of a Th2 pathway was supported by a finding of enhanced CCL17 and CCL22, which were found to be downregulated in chronic forms of GVHD [20]. In contrast, a Th1-mediated immune response was predominant in chronic sclerodermatous GVHD, as evidenced by increased expression of IFN-γ, CXCL9, CXCL10, and CCL5 [11,20].
Current Therapeutics in Sclerodermatous cGVHD
Evaluating therapies for sclerodermatous cGVHD is a challenge on account of the heterogeneous group of patients afflicted with single- or multiple-organ cGVHD involvement. This heterogeneity makes meta-analysis difficult, though increasing attention has focused on the role of chemokines and their potential as a therapeutic target in both acute and chronic GVHD. The use of agents interfering with these particular molecules has shown promising results in animal models of aGVHD but yielded no significant advantage in human patients [6]. Imatinib (a tyrosine kinase inhibitor, TKI) has been used for steroid-refractory sclerodermatous GVHD with initially promising results: in two cases reported in 2008 by Magro et al., patients who developed refractory sclerodermatous cGVHD following allogeneic stem cell transplant received imatinib at a dose of 400 mg/day, and in both patients the sclerodermatous GVHD symptoms resolved within 3 months of initiation of treatment [21]. In larger studies, however, the efficacy of imatinib for severe sclerodermatous cGVHD was limited [22,23].
Profibrotic cytokines (such as TGFβ and PDGF) have key roles in the pathogenesis of the autoimmune disease scleroderma as well as sclerodermatous cGVHD [14]. Both of these cytokines are upregulated in the skin of idiopathic scleroderma patients and strongly stimulate matrix synthesis by fibroblasts in the dermis; this phenomenon is also seen histologically in skin samples of patients with sclerodermatous cGVHD [8]. In accordance with these findings, blockade of TGFβ or PDGF signaling has been found to reduce the development of skin fibrosis in various experimental models, but only inconsistently in human trials [21]. Long-wavelength UVA treatment as well as administration of bone marrow-derived mesenchymal stem cells have been sparsely studied and yield somewhat promising results, though they have not been studied at large scale to assess their true efficacy as of yet [23][24][25]. The impact of these modalities on the development of chronic GVHD or skin malignancies is also unknown. Similarly, no clear differences in the incidence of cGVHD or sclerodermatous cGVHD between TKI-exposed and unexposed patients were observed in retrospective studies with imatinib and dasatinib [4,26].
Therapies directed toward B cell-mediated GVH have been an attractive target in sclerodermatous GVHD as well as other forms of cGVHD. B cell depletion was considered a potential therapeutic route after the discovery that rituximab (a chimeric human-murine monoclonal IgG antibody targeted at the B cell CD20 receptor), given as treatment for immune thrombocytopenia, actually improved a patient's sclerodermatous GVHD symptoms [27]. Rituximab depletes B cells by various mechanisms, resulting in a concomitant decline in T cell activation and an increase in the Treg population [18]. A meta-analysis and systematic review of the efficacy of rituximab in cGVHD, in which seven studies were included (three of which were prospective trials), was recently published and revealed a broad range of response rates, from 13-100% [27]. Due to the heterogeneity of disease type and burden, very few meta-analyses exist in the literature at this time, and this compilation is of utility to the scientific and clinical community. The meta-analysis data show that rituximab is a treatment option for patients with extensive steroid-refractory cGVHD as well as patients with steroid-refractory cGVHD manifesting as thrombocytopenia or with sclerodermatous, other cutaneous, or rheumatologic symptoms [27]. Yet another multi-center phase II clinical study, performed by Kim et al., found a cutaneous response rate of 77% among a cohort of 22 patients with sclerodermatous GVHD receiving weekly rituximab followed by monthly rituximab administration [28].
Future Therapeutic Investigations
While the incidence and severity of cGVHD have not decreased, several randomized trials are hoping to show a lower rate of cGVHD. Current therapies are not more effective or less toxic, but some promising therapies are in clinical trials, and others appear to be in development to improve outcomes of HCT as well as to attempt to prevent the occurrence of cGVHD [29,30]. New therapies may target the specific pathophysiologies of cGVHD, as opposed to the pan-immunosuppressive agents currently available [29]. Specifically, the described role of Th1 cytokines in cGVHD, especially of the skin and liver, is potentially targetable in therapeutics for cGVHD, as are Th2 cytokines and mediators in efforts to prevent preceding acute GVHD [14,31]. In addition to targeting fibrosis with imatinib, and B cell-mediated antibody depletion to prevent downstream activation of pro-fibrotic pathways, biologics are continuously studied and developed in attempts to stabilize or improve the debilitating and deforming cutaneous manifestations of cGVHD. One such biologic is tocilizumab, an anti-IL-6 receptor antibody therapy, which revealed mixed cutaneous response rates and concern for worsening hyperbilirubinemia in a cohort of 8 patients with both aGVHD and cGVHD [32].
Pravastatin, a 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitor, was investigated for a potential role in cGVHD, though the results were promising in murine disease and disappointing in human subjects [33]. Abl kinase and PDGF receptor inhibitors (dasatinib and nilotinib) are also under investigation for a role in cGVHD treatment. A phase II open-label trial specific to cutaneous GVHD is underway at the National Cancer Centre, which is expected to yield useful data on current GVHD therapeutics [34]. Among the emerging therapies for sclerodermatous GVHD, imatinib and rituximab are the most convincing. Though sclerodermatous cGVHD is better understood than it was even 5 or 10 years ago, therapeutic advancements after its onset are few once high-dose steroid therapy is exhausted. For patients and practitioners alike, succumbing to complications of the HCT therapy for a malignancy in remission is painfully unfortunate. Future research into the pathogenesis of sclerodermatous cGVHD may reveal crucial novel targetable therapies.
"Medicine",
"Biology"
] |
m6A methylation controls pluripotency of porcine induced pluripotent stem cells by targeting SOCS3/JAK2/STAT3 pathway in a YTHDF1/YTHDF2-orchestrated manner
Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) hold great promise for regenerative medicine, disease treatment, and organ transplantation. Given the ethical issues surrounding human ESCs and the similarity of the pig to humans in genome and physiological characteristics, porcine iPSCs (piPSCs) have become an ideal alternative study model. N6-methyladenosine (m6A) methylation is the most prevalent modification in eukaryotic mRNAs, regulating the self-renewal and differentiation of pluripotent stem cells. However, the explicit m6A-regulating machinery remains controversial. Here, we demonstrate that m6A modification and its modulators play a crucial role in mediating piPSCs pluripotency. In brief, loss of METTL3 significantly impairs self-renewal and triggers differentiation of piPSCs by interfering with JAK2 and SOCS3 expression, further inactivating the JAK2-STAT3 pathway, which then blocks the transcription of KLF4 and SOX2. By m6A-seq analysis, we identify that both JAK2 and SOCS3 carry m6A modification in their 3′UTRs. Dual-luciferase assays show that METTL3 regulates JAK2 and SOCS3 expression in an m6A-dependent way. RIP-qPCR validates that JAK2 and SOCS3 are the targets of YTHDF1 and YTHDF2, respectively. The lower m6A levels of JAK2 and SOCS3 induced by siMETTL3 lead to inhibition of YTHDF1-mediated JAK2 translation and a block of YTHDF2-dependent SOCS3 mRNA decay. Subsequently, the altered protein expression of JAK2 and SOCS3 inhibits the JAK2-STAT3 pathway and then the pluripotency of piPSCs. Collectively, our work uncovers the critical role of m6A modification and its modulators in regulating piPSCs pluripotency and provides insight into an orchestrated network linking m6A methylation and the SOCS3/JAK2/STAT3 pathway in pluripotency regulation.
Introduction
Embryonic stem cells (ESCs) offer great hope for regenerative medicine, organ transplantation, and drug development. These cells also provide a powerful model system for studies of cellular identity and early mammalian development 1 . However, there are ethical issues regarding destroying human embryos and fetuses for cell isolation. The pig is an excellent model for human disease and clinical medicine applications because of its similarity to humans in genome and physiological characteristics 2,3 . Nevertheless, no authentic porcine embryonic stem cells (pESCs) have been isolated successfully. Induced pluripotent stem cells (iPSCs) are embryonic stem cell-like pluripotent stem cells that have indefinite self-renewal capacity and can differentiate into all types of cells 4 . Moreover, iPSCs and ESCs are extremely similar in morphology, gene and protein expression, differentiation ability, and epigenetic modification status. Therefore, porcine induced pluripotent stem cells (piPSCs) have become an ideal alternative resource, which holds unprecedented promise for human regenerative medicine, disease treatment, and organ transplantation. However, the mechanisms of porcine embryonic development and the pluripotency regulation network remain largely unknown.
Recent studies have revealed a crucial role for m6A methylation and METTL3 in regulating the pluripotency and differentiation of stem cells [15][16][17][18]. Nevertheless, the function of m6A modification in ESCs has been investigated with discrepant results among different studies. One model reported that m6A modification destabilizes developmental regulators and maintains pluripotency 15 .
Other studies proposed that m6A is not required for ESC maintenance but for the cell fate transition of ESCs to differentiated lineages 16,17 . Thus, the explicit biological role of m6A modification in the self-renewal and differentiation of pluripotent stem cells remains to be elucidated.
In the present study, we provide strong evidence for the vital role of m6A and its modulators in maintaining self-renewal and pluripotency of piPSCs. We demonstrate that METTL3 depletion significantly impairs self-renewal and triggers differentiation of piPSCs by inactivating the JAK2-STAT3 pathway. Further study shows that METTL3 regulates the JAK2-STAT3 pathway by mediating the expression of SOCS3 (a negative regulator of JAK2-STAT3) and JAK2 in an m6A-YTHDF1/YTHDF2-dependent manner. For the first time, our findings illustrate an orchestrated network linking m6A methylation and the SOCS3/JAK2/STAT3 pathway in pluripotency regulation.
METTL3 is required for piPSCs self-renewal and pluripotency
We first examined expression of the m6A methyltransferase METTL3 in piPSCs during retinoic acid (RA)-induced differentiation and observed a gradual decrease in METTL3 levels (Fig. 1a). To explore the regulatory role of METTL3 in piPSCs self-renewal and pluripotency, we next conducted loss-of-function assays using small interfering RNA (siRNA), which inhibited at least 90% of endogenous METTL3 RNA and protein expression in piPSCs (Fig. 1b, c). Liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis of the global m6A level in purified mRNA from cells with or without METTL3 knockdown showed that METTL3 ablation leads to a significant reduction (~80%) of m6A on mRNA (Fig. 1d), confirming the methylation activity of METTL3 in piPSCs.
To test their differentiation ability, control and METTL3-depleted piPSCs were transferred to differentiation media without 2i/LIF to form embryoid bodies (EBs) for 5 days. Next, EBs were disaggregated and re-plated under piPSCs growth conditions for 7 days. AP staining revealed that only control EBs efficiently regenerated stable piPSCs (Fig. 1i). Consistently, the mRNA levels of most developmental regulators were significantly upregulated in METTL3-deficient cells relative to control cells (Fig. 1j). Taken together, these results illuminate that METTL3 is essential to maintain the pluripotent state of piPSCs.
METTL3 regulates piPSCs pluripotency via STAT3-KLF4-SOX2 signal axis
It is well established that signal transducer and activator of transcription 3 (STAT3), a latent transcription factor activated upon phosphorylation, has a critical role in the maintenance of embryonic stem cell pluripotency [19][20][21]. KLF4, a direct JAK-STAT3 downstream target, is transcriptionally activated by STAT3 phosphorylation and preferentially activates SOX2 22 . Thus, we hypothesized that loss of METTL3 downregulates gene expression of SOX2 and KLF4 by inhibiting phosphorylated STAT3 (pSTAT3). Indeed, knockdown of METTL3 significantly reduced STAT3 phosphorylation levels compared with control cells (Fig. 2a). Consistent with the qPCR results, the protein expression of KLF4 and SOX2 was decreased upon METTL3 knockdown (Fig. 2a). Moreover, overexpression of METTL3 enhanced STAT3 phosphorylation and increased the protein abundance of KLF4 and SOX2 (Fig. 2b), indicating a positive correlation between METTL3 and STAT3 phosphorylation.
STAT3 is phosphorylated on a tyrosine residue (Tyr-705), dimerizes, and then translocates from the cytoplasm to the nucleus to activate transcription of target genes in stem cells 20 . To investigate whether METTL3 affected piPSCs pluripotency through STAT3 phosphorylation, we examined nuclear-cytoplasmic shuttling of pSTAT3 following METTL3 knockdown. As expected, we observed dramatically decreased nuclear retention and correspondingly increased cytoplasmic localization of pSTAT3 in METTL3 knockdown cells (Fig. 2c). Furthermore, the nuclear expression of KLF4 and SOX2 was repressed in METTL3 knockdown piPSCs relative to control cells (Fig. 2c). In support, immunofluorescence analysis indicated that METTL3 depletion reduced the expression of pSTAT3 in nuclear speckles (Fig. 2d). Consistently, decreased nuclear accumulation of SOX2 was also observed (Fig. 2e).

Fig. 1 METTL3 is required for piPSCs self-renewal and pluripotency. a Real-time quantitative PCR (qPCR) analysis of METTL3 expression in piPSCs during RA-induced differentiation. GAPDH was used as an internal control. b, c METTL3 knockdown efficiency was measured by qPCR b and western blot c. For the immunoblot, β-actin was used as loading control. d Liquid chromatography-tandem mass spectrometry (LC-MS/MS) quantification of the m6A/A ratio in mRNA from piPSCs with or without METTL3 knockdown. e Morphology and alkaline phosphatase (AP) staining of piPSCs with or without METTL3 knockdown. f Quantification of AP-positive colonies of piPSCs with or without METTL3 knockdown. g Cell proliferation assay of piPSCs with or without METTL3 knockdown. h qPCR analysis of SOX2, KLF4, NANOG, OCT4, SMAD2, ID3, ZFX, FOXD3, and C-MYC expression in piPSCs with or without METTL3 knockdown. i AP staining of EBs differentiated from piPSCs with or without METTL3 knockdown. PiPSCs with or without METTL3 knockdown were transferred to serum-based media without 2i/LIF for 5 days, to promote cell differentiation. The cells were then disaggregated, re-plated on feeder cells, and grown in piPSCs conditions for 7 days. j qPCR analysis of PAX6, FGF5, BRACHYURY, FOXA2, and GATA6 expression in piPSCs with or without METTL3 knockdown. Data were presented as mean ± SD of three independent experiments. * P < 0.05, ** P < 0.01, *** P < 0.001 compared with the control group.
To further confirm the role of STAT3 phosphorylation in METTL3-mediated pluripotency of piPSCs, we treated control and METTL3-overexpressed piPSCs with or without Stattic, a selective inhibitor of STAT3 phosphorylation 23 . AP staining analysis showed that forced expression of METTL3 enhanced piPSCs pluripotency, which could be effectively reversed by Stattic treatment (Fig. 2f, g). Consistently, Stattic also reversed the increased mRNA and protein levels of SOX2 and KLF4 caused by METTL3 overexpression (Fig. 2h, i). Together, our findings indicate that METTL3 maintains piPSCs pluripotency by activating STAT3-KLF4-SOX2 signaling.
METTL3 controls the STAT3-KLF4-SOX2 pathway by targeting JAK2 and SOCS3
A previous study demonstrated that the JAK2-STAT3 signaling pathway has an indispensable role in embryonic stem cell self-renewal 19 . JAK2, a non-receptor tyrosine kinase, can phosphorylate STAT3 and activate the JAK2-STAT3 pathway to transduce the intracellular signal 24 . SOCS3 is a key negative regulator of the JAK2-STAT3 signaling pathway and has an important role in stem cell self-renewal 25 . Based on the above findings, we investigated whether METTL3 affects STAT3 phosphorylation through JAK2 and/or SOCS3. Compared with control cells, the mRNA level of SOCS3 was increased in METTL3 knockdown cells, whereas JAK2 mRNA expression was unchanged (Fig. 3a). We also measured the protein expression of JAK2 and SOCS3 following METTL3 knockdown. Intriguingly, loss of METTL3 downregulated JAK2 protein abundance and upregulated SOCS3 protein abundance (Fig. 3b). Moreover, overexpression of METTL3 increased JAK2 protein abundance and decreased SOCS3 protein abundance (Fig. 3c).
To further validate whether METTL3 regulates the STAT3-KLF4-SOX2 pathway and pluripotency of piPSCs by targeting JAK2 and SOCS3, we performed rescue experiments and found that knockdown of JAK2 reversed the activation of STAT3-KLF4-SOX2 signaling and the increased nuclear retention of pSTAT3 in METTL3-overexpressing cells (Fig. 3d, e). In addition, the increased mRNA levels of KLF4 and SOX2 in METTL3-overexpressing cells could be reversed by JAK2 knockdown (Fig. 3f). Furthermore, we observed that knockdown of SOCS3 could rescue the inhibition of STAT3-KLF4-SOX2 signaling and the decreased nuclear retention of pSTAT3 in METTL3-depleted piPSCs (Fig. 3g, h). Silencing of SOCS3 also restored the gene expression of KLF4 and SOX2 in METTL3 knockdown cells (Fig. 3i), indicating that METTL3 knockdown suppresses STAT3-KLF4-SOX2 signaling by attenuating JAK2 and elevating SOCS3. Collectively, these findings indicate that METTL3 maintains the activation of the STAT3-KLF4-SOX2 signaling pathway by mediating JAK2 and SOCS3 to preserve piPSCs pluripotency.
METTL3 mediates protein expression of JAK2 and SOCS3 in an m6A-dependent manner
To explore the underlying regulatory mechanism of METTL3 on JAK2 and SOCS3 expression, we tested whether the methyltransferase activity of METTL3 is required. We first constructed plasmids to express either wild-type (METTL3-WT) or catalytic-mutant METTL3 (METTL3-MUT, aa395-398, DPPW→APPW) based on published data 26 , and confirmed the effect by m6A dot blot (Fig. 4a). Ectopic expression of METTL3-WT, but not METTL3-MUT or an empty vector, significantly increased JAK2 protein abundance and decreased SOCS3 protein abundance (Fig. 4b), implying that METTL3 modulates the expression of JAK2 and SOCS3 in a methyltransferase activity-dependent manner. Moreover, compared with METTL3-MUT or the empty vector, ectopic expression of METTL3-WT elevated the self-renewal ability of piPSCs (Fig. 4c, d). Consistently, the mRNA and protein levels of KLF4 and SOX2 were significantly augmented in cells expressing METTL3-WT, rather than METTL3-MUT (Fig. 4e, f). These results demonstrate that the m6A methylation activity of METTL3 is required for piPSCs pluripotency.
Fig. 2 Inhibition of METTL3 impairs piPSCs pluripotency by suppressing STAT3/KLF4/SOX2 signaling. a Western blot analysis of pSTAT3, STAT3, KLF4, SOX2, NANOG, and OCT4 in piPSCs with or without METTL3 knockdown. β-Actin was used as loading control. b Western blot analysis of METTL3, pSTAT3, STAT3, KLF4, and SOX2 in piPSCs transfected with control or METTL3 plasmid. c Western blot of nuclear and cytoplasmic distribution of pSTAT3, KLF4, and SOX2 in piPSCs with or without METTL3 knockdown. Histone H3 and Tubulin serve as nuclear and cytoplasmic markers, respectively. d Immunofluorescence analysis of pSTAT3 in piPSCs transfected with siControl or siMETTL3 after 24 h and 48 h. Scale bar, 10 μm. e Immunofluorescence analysis of SOX2 in piPSCs with or without METTL3 knockdown. Scale bar, 10 μm. f AP staining of piPSCs transfected with control or METTL3 plasmid and treated with DMSO or 1 μM Stattic. g Quantification of AP-positive colonies of piPSCs transfected with control or METTL3 plasmid and treated with DMSO or 1 μM Stattic. h qPCR analysis of piPSCs transfected with control or METTL3 plasmid and treated with DMSO or 1 μM Stattic. GAPDH was used as an internal control. i Western blot analysis of pSTAT3, STAT3, KLF4, and SOX2 of piPSCs transfected with control or METTL3 plasmid and treated with DMSO or 1 μM Stattic. Data were presented as mean ± SD of three independent experiments. ** P < 0.01, *** P < 0.001 compared with the control group.

Fig. 3 (legend). a qPCR analysis of JAK2 and SOCS3 in control and METTL3 knockdown piPSCs. GAPDH was used as an internal control. b Western blot analysis of JAK2 and SOCS3 in piPSCs with or without METTL3 knockdown. β-Actin was used as loading control. c Western blot analysis of JAK2 and SOCS3 in piPSCs with or without METTL3 overexpression. d Western blot analysis of JAK2, pSTAT3, STAT3, KLF4, and SOX2 in piPSCs with or without METTL3 overexpression and transfected with negative control or JAK2 siRNA. e Western blot of nuclear and cytoplasmic distribution of pSTAT3, KLF4, and SOX2 in piPSCs with or without METTL3 overexpression and transfected with negative control or JAK2 siRNA. f qPCR analysis of SOX2 and KLF4 expression in piPSCs with or without METTL3 overexpression and transfected with negative control or JAK2 siRNA. g Western blot analysis of SOCS3, pSTAT3, STAT3, KLF4, and SOX2 in piPSCs with or without METTL3 knockdown and transfected with negative control or SOCS3 siRNA. h Western blot of nuclear and cytoplasmic distribution of pSTAT3, KLF4, and SOX2 in piPSCs with or without METTL3 knockdown and transfected with negative control or SOCS3 siRNA. Histone H3 and Tubulin serve as nuclear and cytoplasmic markers, respectively. i qPCR analysis of SOX2 and KLF4 expression in piPSCs with or without METTL3 knockdown and transfected with negative control or SOCS3 siRNA. Data were presented as mean ± SD of three independent experiments. * P < 0.05, ** P < 0.01, *** P < 0.001 compared with the control group.

To identify and localize m6A sites at a transcriptome-wide level, we performed m6A sequencing (m6A-seq) on mRNA purified from piPSCs. The consensus "GGACU" was identified as the most enriched motif in the m6A peaks (Fig. 4g), resembling the common m6A motif described in mammalian cells 8,9 . Consistent with previous studies, the m6A peaks were especially enriched around stop codons and in 3′ untranslated regions (3′UTRs) (Fig. 4h), suggesting an evolutionary conservation of m6A among eukaryotic species ranging from human and mouse to pig.
Fig. 4 (legend, panels f-l). f Western blot analysis of KLF4 and SOX2 in piPSCs transfected with control, WT, and MUT METTL3 plasmid. g Top consensus motif identified by HOMER with m6A-seq peaks in piPSCs. h Distribution of m6A peaks across the length of mRNA transcripts. Each region of 5′UTRs, CDSs, and 3′UTRs was binned into 100 segments, and the percentage of m6A peaks that fall within each bin was determined. i The m6A abundances in JAK2 and SOCS3 mRNA transcripts in piPSCs as detected by m6A-seq. The m6A peaks are shown in the black rectangles. j Methylated RNA immunoprecipitation (MeRIP)-qPCR analysis of m6A levels of JAK2 and SOCS3 in piPSCs with or without METTL3 knockdown. k MeRIP-qPCR analysis of m6A levels of JAK2 and SOCS3 in piPSCs transfected with control or METTL3 plasmid. l Relative luciferase activity of WT or MUT (A-to-T mutation) SOCS3-3′UTR (or JAK2-3′UTR) luciferase reporter in piPSCs transfected with control, WT, or MUT METTL3 plasmid. Firefly luciferase activity was measured and normalized to Renilla luciferase activity. Data were presented as mean ± SD of three independent experiments. ** P < 0.01, *** P < 0.001 compared with the control group.

From our m6A-seq data of piPSCs, we found that the JAK2 and SOCS3 mRNA 3′UTRs have highly enriched and specific m6A peaks (Fig. 4i), which is consistent with published mouse embryonic stem cell and T cell transcriptome-wide m6A profiling data sets 16,27 .
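The metagene profile described for Fig. 4h can be reduced to a simple binning computation: rescale each transcript's 5′UTR, CDS, and 3′UTR to 100 bins apiece and tally peak midpoints per bin. The sketch below assumes illustrative input structures (peak midpoints in transcript coordinates and region lengths per transcript) and nonzero region lengths.

```python
# Minimal sketch of a metagene profile: each transcript's 5'UTR, CDS, and
# 3'UTR is rescaled to 100 bins and m6A peak midpoints are tallied per bin.
# Input structures are assumptions for illustration.
import numpy as np

def metagene_profile(peaks, transcripts, bins_per_region=100):
    """peaks: iterable of (tx_id, pos) with pos in transcript coordinates;
    transcripts: {tx_id: (utr5_len, cds_len, utr3_len)}, all lengths > 0."""
    profile = np.zeros(3 * bins_per_region)
    for tx_id, pos in peaks:
        u5, cds, u3 = transcripts[tx_id]
        if pos < u5:                                   # peak in the 5'UTR
            b = int(pos / u5 * bins_per_region)
        elif pos < u5 + cds:                           # peak in the CDS
            b = bins_per_region + int((pos - u5) / cds * bins_per_region)
        else:                                          # peak in the 3'UTR
            b = 2 * bins_per_region + int((pos - u5 - cds) / u3 * bins_per_region)
        profile[min(b, profile.size - 1)] += 1
    return profile / profile.sum()   # fraction of peaks per bin; enrichment
                                     # near bin 200 sits just past the stop codon
```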
To ascertain whether JAK2 and SOCS3 transcripts are substrates of METTL3, we performed methylated RNA immunoprecipitation combined with qPCR (MeRIP-qPCR) to determine the JAK2 and SOCS3 m6A methylation levels following METTL3 knockdown. Indeed, our analysis confirmed that METTL3 knockdown decreased the m6A levels of JAK2 and SOCS3 (Fig. 4j). Furthermore, the m6A levels of JAK2 and SOCS3 were elevated in METTL3-overexpressing piPSCs relative to control cells (Fig. 4k). More importantly, to determine whether m6A modifications on target mRNAs are essential for METTL3-mediated gene regulation, we performed dual-luciferase reporter and mutagenesis assays. Forced expression of METTL3-WT, but not METTL3-MUT, substantially promoted the luciferase activity of a reporter carrying the wild-type 3′UTR fragment of JAK2 and decreased the luciferase activity of a reporter containing the wild-type 3′UTR fragment of SOCS3, relative to the control (Fig. 4l). These changes were abrogated when the m6A sites were mutated (A replaced with T) (Fig. 4l). Overall, METTL3 regulates the expression of JAK2 and SOCS3, and thereby controls the pluripotency of piPSCs, through an m6A-dependent mechanism.
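As a small worked example of the normalization stated in the Fig. 4 legend, each well's Firefly signal is divided by its Renilla signal and then expressed relative to the control reporter; the raw values below are hypothetical.

```python
# Worked example: Firefly/Renilla normalization for a dual-luciferase assay.
# All raw luminescence values are hypothetical.
firefly = {"control": 5200, "METTL3-WT": 9800, "METTL3-MUT": 5400}
renilla = {"control": 2600, "METTL3-WT": 2500, "METTL3-MUT": 2700}

normalized = {k: firefly[k] / renilla[k] for k in firefly}
relative = {k: v / normalized["control"] for k, v in normalized.items()}
print(relative)   # e.g., JAK2-3'UTR reporter activity relative to control
```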
Loss of METTL3 impairs YTHDF1-mediated translation of JAK2
We next explored the regulatory mechanism by which m6A modification regulates the expression of JAK2 and SOCS3. It is known that m6A must be selectively recognized by specific m6A-binding proteins to exert its biological functions 7 . YTH m6A RNA-binding protein 1 (YTHDF1) is known to promote translation of m6A-methylated transcripts 12 . The expression of JAK2 appeared to be promoted by m6A methylation, which raises the possibility that it is a target of YTHDF1. Overexpression of YTHDF1-FLAG significantly increased the protein expression of JAK2 in piPSCs (Fig. 5a), confirming that YTHDF1 is involved in the regulation of JAK2. As expected, RIP-qPCR analysis revealed that JAK2 is a target gene of YTHDF1 (Fig. 5b). Moreover, ectopic YTHDF1 significantly upregulated luciferase activity in reporters carrying the wild-type 3′UTR fragment of JAK2 (Fig. 5c). Such an increase was abrogated when the m6A consensus sites were mutated (Fig. 5c), suggesting an m6A-dependent regulation. In the case of m6A near stop codons or in 3′UTRs, YTHDF1 binds to select transcripts at m6A sites in their 3′UTRs and enhances cap-dependent translation 12 . Rapamycin, a specific inhibitor of cap-dependent protein translation, inhibits 4E-BP1 phosphorylation and causes increased association between 4E-BP1 and eIF-4E 28 . To determine whether YTHDF1 regulates JAK2 expression by promoting cap-dependent translation, we treated control and YTHDF1-overexpressing piPSCs with or without rapamycin. The results showed that rapamycin treatment markedly inhibited the increase of JAK2 protein expression in YTHDF1-overexpressing cells (Fig. 5d), indicating that YTHDF1 mediates mRNA translation of JAK2 in a cap-dependent manner.
Furthermore, ectopic expression of YTHDF1 recovered the decreased protein abundance of JAK2 in METTL3-depleted piPSCs (Fig. 5e). Overexpression of YTHDF1 could partially rescue the loss of pluripotency caused by METTL3 knockdown (Fig. 5f, g). In addition, the reduced mRNA and protein levels of SOX2 and KLF4 were also restored by overexpression of YTHDF1 (Fig. 5e, h). Taken together, our results demonstrate that METTL3 regulates JAK2 protein expression by modulating translation in an m6A-YTHDF1-dependent pathway.
Knockdown of METTL3 enhances SOCS3 mRNA stability via a YTHDF2-dependent pathway

YTH m6A RNA-binding protein 2 (YTHDF2) is reported to recognize and decay m6A-modified mRNA 11. Given the negative correlation between m6A methylation and expression of SOCS3, we hypothesized that SOCS3 transcripts might be recognized and subsequently degraded by YTHDF2. To test this hypothesis, we overexpressed YTHDF2-FLAG in piPSCs and observed a marked decrease in SOCS3 protein levels (Fig. 6a). RNA immunoprecipitation followed by qPCR (RIP-qPCR) validated that SOCS3 mRNA interacts with YTHDF2-FLAG (Fig. 6b), suggesting that SOCS3 is a target of YTHDF2. Moreover, dual-luciferase assays revealed that ectopic YTHDF2 significantly downregulated luciferase activity of reporters carrying the wild-type 3′UTR fragment of SOCS3 (Fig. 6c). This decrease was completely abrogated by mutations in the m6A consensus sites (Fig. 6c), suggesting an m6A-dependent regulation of SOCS3 expression by YTHDF2.
To examine the role of YTHDF2 in our system, we knocked down YTHDF2 and confirmed the knockdown efficiency by qPCR (Fig. 6d). Depletion of YTHDF2 significantly increased the protein level of SOCS3 in piPSCs (Fig. 6e). Measuring the decay of SOCS3 mRNA after blocking new RNA synthesis with actinomycin D showed that silencing YTHDF2 strikingly elevated SOCS3 mRNA stability (Fig. 6f). Similar results were also observed upon METTL3 knockdown, suggesting that YTHDF2 destabilized SOCS3 mRNA in an m6A-dependent manner.
Furthermore, YTHDF2 overexpression could reverse the increased protein level of SOCS3 in METTL3-depleted piPSCs (Fig. 6g). AP staining analysis suggested that adding back YTHDF2 was able to partially rescue the loss of self-renewal capacity caused by METTL3 knockdown (Fig. 6h, i). Consistently, the inhibition of SOX2 and KLF4 expression by siMETTL3 could be effectively recovered by overexpression of YTHDF2 (Fig. 6g, j). Together, these results demonstrate that YTHDF2 plays an important role in METTL3-mediated regulation of SOCS3 expression by affecting mRNA stability.
Discussion
Because of their capacity for unlimited proliferation and their ability to give rise to all cell types, iPSCs represent an invaluable resource to investigate human disease. Thus, an in-depth understanding of the epitranscriptomic mechanisms controlling self-renewal and transitions to differentiated cell fates is essential if iPSCs are to fulfill their great promise in the field of regenerative medicine 29. Here, we identify a critical role for METTL3 in modulating piPSC pluripotency by mediating the JAK2-STAT3 signaling pathway through m6A-based and YTHDF1/YTHDF2-dependent post-transcriptional regulation (Fig. 7). In brief, METTL3 promotes STAT3 phosphorylation and further enhances expression of the core pluripotency genes KLF4 and SOX2 by targeting JAK2 and SOCS3. METTL3 increases the m6A levels of JAK2 and SOCS3 mRNA, enhancing YTHDF1-mediated translation of JAK2 and attenuating YTHDF2-dependent mRNA stability of SOCS3; the resulting increase in JAK2 and decrease in SOCS3 protein expression activates the JAK2-STAT3 pathway and facilitates piPSC pluripotency.
Prior work documented that m6A methylation has a critical role in the regulation of mouse ESC self-renewal and differentiation; the explicit function and role of m6A modification, however, remain controversial. Wang et al. reported that m6A modification on developmental regulators blocks the binding of HuR and destabilizes such transcripts, thereby maintaining pluripotency 15. By contrast, Batista et al. 16 demonstrated that METTL3 knockout promotes mESC self-renewal in an m6A-dependent way. Geula et al. 17 demonstrated that depletion of METTL3 in both naive mouse ESCs and primed (epiblast stem cell, EpiSC) states resulted in upregulation of pluripotent and developmental regulators, respectively, which was explained by the fact that METTL3 targeted the dominating transcripts in either state to increase the expression of already-expressed genes. More recently, another study showed that Zc3h13 anchors the m6A regulatory complex in the nucleus to facilitate m6A methylation and mESC pluripotency 18. Consistently, we suggest that m6A methylation acts as a safeguard of pluripotency factors to maintain pluripotency of piPSCs, which is supported by the fact that METTL3 expression levels in piPSCs gradually decreased during RA-induced differentiation. These studies demonstrate that the function of m6A methylation in pluripotency could be highly conserved between mouse and pig. Further studies are needed to confirm the extent to which the in vitro observations correlate with in vivo development.

Fig. 5 (legend): a Western blot analysis of FLAG, YTHDF1, and JAK2 in piPSCs transfected with control and YTHDF1-FLAG plasmid. β-Actin was used as loading control. b RIP analysis of the interaction of JAK2 with FLAG in piPSCs transfected with YTHDF1-FLAG plasmid. Enrichment of JAK2 with FLAG was measured by qPCR and normalized to input. c Relative luciferase activity of WT or MUT JAK2-3′UTR luciferase reporter in piPSCs transfected with control or YTHDF1 plasmid. Firefly luciferase activity was measured and normalized to Renilla luciferase activity. d Western blot analysis of JAK2 in piPSCs transfected with control or YTHDF1 plasmid and treated with or without 20 nM rapamycin (Rap). e Western blot analysis of JAK2, KLF4, and SOX2 in piPSCs with or without METTL3 knockdown and transfected with control or YTHDF1 plasmid. f AP staining of piPSCs with or without METTL3 knockdown and transfected with control or YTHDF1 plasmid. g Quantification of AP-positive colonies of piPSCs with or without METTL3 knockdown and transfected with control or YTHDF1 plasmid. h qPCR analysis of SOX2 and KLF4 expression in piPSCs with or without METTL3 knockdown and transfected with control or YTHDF1 plasmid. Data are presented as mean ± SD of three independent experiments. ** P < 0.01, *** P < 0.001 compared with the control group.
Pluripotent cells exhibit a core transcriptional regulatory circuitry that activates stem cell-specific genes and represses developmental regulators 30. It is well known that JAK2-STAT3 signaling has a critical role in maintaining mESC pluripotency by activating the downstream target KLF4 and subsequently activating SOX2 22. A previous study reported that loss of JAK2 is lethal by embryonic day 12 in mice 31. SOCS3 is a vital physiological inhibitor of the JAK2-STAT3 signaling pathway and has important roles in regulating stem cell proliferation and differentiation 32,33. STAT3 activation is required for self-renewal of ESCs 20,21. Leukemia inhibitory factor (LIF) signaling maintains pluripotency by inducing JAK-mediated phosphorylation of STAT3 Y705 (pY705) 34. In agreement with these findings, we unveil that METTL3 maintains pluripotency of piPSCs by sustaining JAK2 expression, inhibiting SOCS3 expression, and activating the STAT3/KLF4/SOX2 signaling axis. The JAK2-STAT3 pathway plays important roles in a variety of biological processes, and a dysfunctional JAK2-STAT3 pathway may contribute to diseases such as cancer, heart disease and obesity [35][36][37]. The regulation of the JAK2-STAT3 signaling pathway by m6A methylation could be a common mechanism that affects a range of other biological processes, which should be further investigated.

Fig. 6 (legend): Silencing of METTL3 elevates SOCS3 mRNA stability via a YTHDF2-dependent mechanism. a Western blot analysis of FLAG, YTHDF2, and SOCS3 in piPSCs transfected with control and YTHDF2-FLAG plasmid. β-Actin was used as loading control. b RNA immunoprecipitation (RIP) analysis of the interaction of SOCS3 with FLAG in piPSCs transfected with YTHDF2-FLAG plasmid. Enrichment of SOCS3 with FLAG was measured by qPCR and normalized to input. c Relative luciferase activity of WT or MUT SOCS3-3′UTR luciferase reporter in piPSCs transfected with control or YTHDF2 plasmid. Firefly luciferase activity was measured and normalized to Renilla luciferase activity. d qPCR analysis of YTHDF2 in control and YTHDF2 knockdown piPSCs. GAPDH was used as an internal control. e Western blot analysis of SOCS3 and YTHDF2 in piPSCs with or without YTHDF2 knockdown. f mRNA stability analysis of SOCS3 mRNA in control, METTL3-depleted or YTHDF2-depleted piPSCs treated with actinomycin D for 3 and 6 h. g Western blot analysis of SOCS3, KLF4, and SOX2 in piPSCs with or without METTL3 knockdown and transfected with control or YTHDF2 plasmid. h AP staining of piPSCs with or without METTL3 knockdown and transfected with control or YTHDF2 plasmid. i Quantification of AP-positive colonies of piPSCs with or without METTL3 knockdown and transfected with control or YTHDF2 plasmid. j qPCR analysis of SOX2 and KLF4 expression in piPSCs with or without METTL3 knockdown and transfected with control or YTHDF2 plasmid. Data are presented as mean ± SD of three independent experiments. ** P < 0.01, *** P < 0.001 compared with the control group.
The functional consequences of these dynamic and distinct RNA modifications converge mostly on regulating protein synthesis. Thus, a coordinated network of post-transcriptional modification pathways may ultimately modulate cell fate determination or stress responses by coordinating the mRNA stability, translation efficiency and splicing of transcripts that maintain the cell type-specific proteome. In this study, we identify that m6A modification regulates JAK2-STAT3 signaling in a YTHDF1/YTHDF2-orchestrated manner. Mechanistically, YTHDF1 recognizes and binds the m6A-containing mRNA of JAK2, promoting its translation and protein expression, whereas YTHDF2 selectively targets and destabilizes the m6A-modified mRNA of SOCS3, resulting in reduced protein abundance of SOCS3. Similarly, a recent study demonstrated that both YTHDF1 and YTHDF2 are involved in regulating AKT signaling to promote the proliferation and tumorigenicity of endometrial cancer cells 38. As m6A modification requires selective recognition by specific binding proteins to exert its biological functions 7, other signaling pathways could also be coordinately regulated by m6A and multiple m6A readers, which will be a new direction to explore in the future.

Fig. 7 (legend): A working model summarizing the mechanism of mRNA m6A modification and its modulators in regulation of piPSC pluripotency. The m6A methyltransferase METTL3 increases the m6A levels of JAK2 and SOCS3 mRNA, enhancing YTHDF1-mediated translation of JAK2 and attenuating YTHDF2-dependent mRNA stability of SOCS3; the resulting increase in JAK2 and decrease in SOCS3 protein expression activates STAT3 phosphorylation and enhances expression of the core pluripotency genes KLF4 and SOX2 to facilitate piPSC pluripotency.

In summary, we identify the m6A methyltransferase METTL3 as a key regulator of pluripotency that facilitates piPSC self-renewal. For the first time, our studies suggest that m6A methylation controls pluripotency by targeting SOCS3/JAK2/STAT3 signaling in a YTHDF1/YTHDF2-orchestrated manner. These results provide a better understanding of the molecular regulatory mechanisms of m6A methylation and its modulators in stem cell biology. The exact functions and mechanisms of m6A mRNA modification in iPSC pluripotency and early development are of high clinical value and certainly worth continued investigation. Ultimately, by understanding the fundamental aspects of RNA modifications, we will be able to develop small-molecule inhibitors or gene therapy tools targeting these proteins, which could lead to new ways of controlling gene expression or protein translation. Such discoveries might lead to the development of novel therapeutic strategies to treat complex diseases, including developmental disorders and cancer.
Cell culture and differentiation in vitro
The mESC-like piPSCs used in this study were generated from pig embryonic fibroblasts and provided by Professor Jianyong Han 39. These cells were maintained on mitomycin-treated mouse embryonic fibroblasts (feeder cells) in Dulbecco's modified Eagle medium (DMEM) supplemented with 15% serum replacement (SR) (Gibco), nonessential amino acids, L-glutamine, penicillin/streptomycin (all from Gibco, CA, USA), β-mercaptoethanol (Sigma, St. Louis, MO, USA), human LIF (Gibco, CA, USA), and 2i (CHIR99021 and PD0325901) (Selleck, Shanghai, China) (together, 2i plus LIF medium). The medium was changed every day. To induce differentiation with RA, LIF and 2i were removed, and RA (Sigma, St. Louis, MO, USA) was added to the differentiation medium at a concentration of 5 mM. Embryoid body (EB) formation was performed by the "hanging drop" method as described previously 40,41. In brief, piPSCs were digested and suspended in differentiation medium without 2i/LIF. The cell suspension was placed onto the inner surface of the lids of bacteriological-grade dishes, which were then placed carefully in the incubator. All cells were maintained at 37°C in a humidified 5% CO2 incubator.
Cell transfection, plasmids, and RNA knockdown
Cell transfection was performed using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) for plasmids and Lipofectamine RNAiMAX (Invitrogen, Carlsbad, CA, USA) for siRNAs following the manufacturer's protocols. The wild-type METTL3-CDS expression plasmid was generated by cloning the full-length ORF of the pig METTL3 gene (XM_003128580.5) into the pLVX vector. The catalytically mutant METTL3 (D395A and W398A) was amplified by PCR and cloned into the pLVX vector based on published data 27,42,43. Lentiviral vectors expressing METTL3 in piPSCs were purchased from Hanbio (Shanghai, China). METTL3 overexpression was achieved by lentiviral transduction in the presence of 4 µg/mL polybrene according to the manufacturer's protocols. The FLAG-YTHDF1 and FLAG-YTHDF2 expression plasmids were constructed in the pcDNA3.1 mammalian expression vector. The siRNA sequences are listed in Table S1.
AP staining and immunofluorescence
For AP staining, piPSCs were stained with an Alkaline Phosphatase Activity Detection Kit (Sidansai Biotechnology Company, Shanghai, China) according to the manufacturer's instructions. For immunofluorescence analysis, cells were washed with phosphate-buffered saline (PBS), fixed with 4% paraformaldehyde for 10 min at room temperature, and permeabilized with Triton X-100 for 10 min. Cells were subsequently washed with PBS three times and blocked with immunostaining blocking buffer (Beyotime Biotechnology, Shanghai, China) for 1 h. Primary antibodies were incubated at 4°C overnight. Secondary antibodies were incubated at room temperature for 1 h. Nuclei were stained with DAPI (Beyotime Biotechnology, Shanghai, China) for 5 min at room temperature. The primary antibodies used in this work were as follows: SOX2 (1:100, sc-365964, Santa Cruz, CA, USA), pSTAT3 (1:300, ab76315, Abcam, MA, USA). The secondary antibodies used in our work were as follows: goat anti-rabbit Alexa Fluor 594 (1:500, A11037, Invitrogen, CA, USA), goat anti-mouse Alexa Fluor 594 (1:500, A11032, Invitrogen).
Real-time quantitative PCR (qPCR)
Total RNA was extracted using TRIzol reagent (Invitrogen, CA, USA) according to the manufacturer's protocol. cDNA was synthesized with M-MLV reverse transcriptase (Invitrogen, CA, USA) using 2 μg of extracted RNA per sample. qPCR analysis was performed using SYBR Green PCR Master Mix (Roche) with the ABI Step-One Plus™ Real-Time PCR System (Applied Biosystems). GAPDH was used as an internal control. The primers used for qPCR are listed in Table S2.
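Relative expression from qPCR data of this kind is commonly computed with the 2^-ΔΔCt method (normalization to GAPDH, then to the control group); the authors do not state their exact scheme, so the sketch below is illustrative only, and all Ct values are hypothetical placeholders rather than data from this study.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative mRNA level by the 2^-ddCt method."""
    d_ct_sample = ct_target - ct_reference            # normalize to internal control (GAPDH)
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                # normalize to the control group
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for SOX2 after METTL3 knockdown
fold = relative_expression(
    np.array([24.1, 24.3, 24.0]),   # SOX2, siMETTL3
    np.array([16.2, 16.1, 16.3]),   # GAPDH, siMETTL3
    np.array([22.9, 23.0, 22.8]),   # SOX2, control
    np.array([16.0, 16.2, 16.1]),   # GAPDH, control
)
print("fold change: %.2f +/- %.2f" % (fold.mean(), fold.std(ddof=1)))
```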
Protein extraction and western blot
Cells were washed twice with ice-cold PBS and lysed on ice in radioimmunoprecipitation assay (RIPA) lysis buffer containing a protease and phosphatase inhibitor cocktail (Beyotime Biotechnology, Shanghai, China). Equal volumes of lysates were loaded and separated by 10%-15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to polyvinylidene difluoride membranes. Membranes were blocked with 5% non-fat milk at room temperature for 1 h and incubated sequentially with primary and secondary antibodies. The immunoblots were visualized by chemiluminescence (ECL Plus detection system). Quantification of bands was performed using ImageJ software. The primary antibodies used for western blot were as follows:
Extraction of cytoplasmic and nuclear proteins
A nuclear and cytoplasmic protein extraction kit (Beyotime Biotechnology, Shanghai, China) was used to separate the two cellular fractions according to the manufacturer's instructions. First, cells were harvested in cytoplasmic protein extraction buffer supplemented with phenylmethylsulfonyl fluoride (PMSF). After vortexing for 5 s and incubation on ice for 10-15 min, the second cytoplasmic protein extraction buffer was added. The samples were then incubated on ice for 5 min and centrifuged at 13,000 rpm for 5 min at 4°C. The supernatants were collected as the cytoplasmic extracts. Next, the resulting pellet was resuspended in nuclear protein extraction buffer supplemented with PMSF and incubated on ice for at least 30 min. The resulting supernatant was collected as the nuclear extract following centrifugation at 13,000 rpm for 10 min. The cytoplasmic and nuclear fractions were then subjected to western blot analysis.
Analysis of m6A levels by LC-MS/MS
Quantitative analysis of RNA m6A levels by LC-MS/MS was performed as described previously 44,45. In brief, total RNA was extracted using TRIzol reagent (Invitrogen, CA, USA) and purified using a Dynabeads mRNA DIRECT kit and RiboMinus Eukaryote Kit (Ambion, CA, USA) following the manufacturer's instructions. About 200 ng of mRNA was digested by nuclease P1 (2 U) in 25 μl of buffer containing 10 mM NH4OAc (pH = 5.3) at 42°C for 2 h, followed by the addition of NH4HCO3 (1 M, 3 μl) and alkaline phosphatase (0.5 U, Sigma, St. Louis, MO, USA) with incubation at 37°C for 2 h. The sample was then diluted to a total volume of 90 µl and filtered (0.22 μm pore size, Millipore). In total, 10 µL of the solution was injected into the LC-MS/MS system (Agilent Technologies, CA, USA). Quantification was performed by comparison with the standard curve obtained from pure nucleoside standards. The m6A level was calculated as the ratio of m6A to A.
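To make the final quantification step concrete, the sketch below computes the m6A/A ratio from linear calibration curves fitted to pure nucleoside standards. All numerical values (standard concentrations and peak areas) are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

def fit_standard_curve(conc, peak_area):
    """Least-squares line (area = slope * conc + intercept) from nucleoside standards."""
    slope, intercept = np.polyfit(conc, peak_area, 1)
    return slope, intercept

def concentration(peak_area, slope, intercept):
    """Invert the calibration line to recover nucleoside concentration."""
    return (peak_area - intercept) / slope

# Hypothetical calibration data: concentration (nM) vs. LC-MS/MS peak area
a_slope, a_int = fit_standard_curve(np.array([1.0, 10, 100, 1000]),
                                    np.array([210.0, 2050, 20400, 203000]))
m6a_slope, m6a_int = fit_standard_curve(np.array([0.1, 1, 10, 100]),
                                        np.array([33.0, 310, 3150, 31200]))

# Hypothetical peak areas measured in a digested mRNA sample
a_level = concentration(152000.0, a_slope, a_int)
m6a_level = concentration(780.0, m6a_slope, m6a_int)
print("m6A/A ratio: %.4f" % (m6a_level / a_level))
```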
Methylated RNA immunoprecipitation coupled with quantitative real-time PCR (MeRIP-qPCR)

mRNA was prepared as described above and fragmented using Ambion RNA Fragmentation Reagent (Ambion, Carlsbad, CA, USA) at 70°C for 15 min. A small portion (10%) of the RNA fragments was saved as the input sample. MeRIP-qPCR was performed according to a previously published protocol 9. In brief, fragmented mRNA was immunoprecipitated with anti-m6A antibody (Synaptic Systems) in immunoprecipitation buffer (RNase inhibitor, 50 mM Tris-HCl, 750 mM NaCl and 0.5% (vol/vol) Igepal CA-630 (Sigma, St. Louis, MO, USA)) at 4°C for 2 h with rotation. The m6A antibody-RNA mixture was then incubated with Dynabeads Protein A (Invitrogen, CA, USA) at 4°C for 2 h with rotation. The bound RNA was eluted twice by competition with m6A 5′-monophosphate sodium salt (Sigma, St. Louis, MO, USA) at 4°C for 1 h. Following ethanol precipitation, the input RNA and immunoprecipitated m6A RNAs were reverse transcribed into cDNA using M-MLV reverse transcriptase (Invitrogen, CA, USA). m6A enrichment was determined by qPCR analysis. The primers used for MeRIP-qPCR are listed in Table S2.
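Enrichment from MeRIP-qPCR of this kind is typically expressed as percent of input (correcting for the 10% of fragmented RNA saved as input) and then compared between conditions. A minimal sketch with hypothetical Ct values follows; it is illustrative, not the authors' analysis script.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.10):
    """Percent-of-input for one MeRIP-qPCR target.

    The saved input is only a fraction of the material used for IP, so the
    input Ct is first adjusted by the corresponding dilution factor.
    """
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

# Hypothetical Ct values for JAK2 in control vs. METTL3-knockdown piPSCs
ctrl = percent_input(ct_ip=26.5, ct_input=24.8)
kd = percent_input(ct_ip=28.2, ct_input=24.9)
print("relative m6A level (siMETTL3 / control): %.2f" % (kd / ctrl))
```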
RNA immunoprecipitation-qPCR (RIP-qPCR)
This procedure was performed according to a previously published report 46. piPSCs transfected with FLAG-YTHDF1, FLAG-YTHDF2, or control plasmid were washed twice with PBS and lysed for 30 min at 4°C in lysis buffer (150 mM KCl, 10 mM HEPES, 2 mM EDTA, 0.5% NP-40, 0.5 mM dithiothreitol (DTT), 1x Protease Inhibitor Cocktail, and RNasin Plus RNase inhibitor (Promega, WI, USA)). The cell lysates were centrifuged and the supernatant was passed through a 0.45-μm membrane syringe filter. A 50-μl aliquot of cell lysate was saved as input, and the remaining sample was incubated with IgG antibody-conjugated magnetic beads or anti-FLAG magnetic beads (Sigma, St. Louis, MO, USA) for 4 h at 4°C and then washed six times with wash buffer (50 mM Tris, 200 mM NaCl, 2 mM EDTA, 0.05% NP-40, 0.5 mM DTT, RNase inhibitor). The beads were then eluted in wash buffer containing 0.1% SDS and 10 μL proteinase K, and incubated at 55°C for 30 min. The input and immunoprecipitated RNAs were isolated with TRIzol reagent (Invitrogen, CA, USA) and reverse transcribed into cDNA using M-MLV reverse transcriptase (Invitrogen, CA, USA) according to the manufacturer's instructions. Fold enrichment was determined by qPCR.
Dual-luciferase reporter and mutagenesis assays

SOCS3-3′UTR and JAK2-3′UTR fragments, either wild-type or mutant (m6A site A replaced by T), were inserted downstream of the luciferase gene in the pmirGLO Dual-Luciferase vector (Promega, WI, USA). For the dual-luciferase reporter assay, cells seeded in 24-well plates were co-transfected with wild-type or mutant SOCS3-3′UTR (or JAK2-3′UTR) reporter and METTL3-WT (or METTL3-MUT, or YTHDF1, or YTHDF2, or empty vector). At 48 h post-transfection, the activities of firefly luciferase and Renilla luciferase in each well were determined with a Dual-Luciferase Reporter Assay System (Promega, WI, USA) according to the manufacturer's protocol. To determine mRNA stability, cells were treated with actinomycin D (Sigma, St. Louis, MO, USA) at a final concentration of 5 μg/mL for 0, 3, or 6 h. The cells were collected and RNA samples were extracted for reverse transcription. The mRNA transcript levels of interest were detected by qPCR 11.
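The actinomycin D time course can be summarized as an mRNA half-life by fitting first-order decay to the normalized qPCR measurements at 0, 3, and 6 h. The sketch below illustrates such a fit; the remaining-fraction values are hypothetical placeholders, not data from this study.

```python
import numpy as np

def mrna_half_life(t_hours, remaining_fraction):
    """Fit ln(fraction) = -k * t (first-order decay) and return t1/2 = ln(2) / k."""
    k = -np.polyfit(t_hours, np.log(remaining_fraction), 1)[0]
    return np.log(2.0) / k

t = np.array([0.0, 3.0, 6.0])
# Hypothetical SOCS3 mRNA levels (normalized to t = 0) after actinomycin D
control = np.array([1.00, 0.52, 0.27])
si_ythdf2 = np.array([1.00, 0.81, 0.66])
print("t1/2 control:  %.2f h" % mrna_half_life(t, control))
print("t1/2 siYTHDF2: %.2f h" % mrna_half_life(t, si_ythdf2))
```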
Sequencing data analysis
The sequencing data were processed with Trimmomatic to remove low-quality reads and adaptor sequence contaminants under default parameters. Reads were aligned to the reference genome (Sscrofa11.1) using TopHat (v2.0.14) 47. Gene structure annotations were downloaded from Ensembl release 94 (Sscrofa11.1). For m6A peak calling, the longest isoform was used if multiple isoforms were detected. The m6A-enriched peaks in each m6A immunoprecipitation sample were identified by the MACS2 peak-calling software (version 2.1.1) with the corresponding input sample serving as control. MACS2 was run with default options except for '--nomodel' and '--keep-dup all' to turn off fragment-size estimation and to keep all uniquely mapping reads, respectively. A stringent q-value cutoff of 5 × 10−2 was used to obtain high-confidence peaks. Each peak was annotated based on Ensembl (release 94) gene annotation information using BEDTools' intersectBed (v2.24.0).
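For orientation, the peak-calling step described above corresponds to an invocation along the following lines (file names and the effective genome size are placeholders; the MACS2 options mirror those stated in the text, but this is a sketch rather than the authors' exact pipeline):

```python
import subprocess

# Call m6A peaks with MACS2, using the input sample as control.
subprocess.run([
    "macs2", "callpeak",
    "-t", "m6A_IP.bam",          # m6A immunoprecipitation alignment (placeholder)
    "-c", "input.bam",           # corresponding input control (placeholder)
    "-f", "BAM",
    "-g", "2.5e9",               # approximate pig genome size (assumption)
    "-n", "piPSC_m6A",
    "-q", "0.05",                # stringent q-value cutoff (5e-2)
    "--nomodel",                 # turn off fragment-size estimation
    "--keep-dup", "all",         # keep all uniquely mapping reads
], check=True)

# Annotate peaks against Ensembl release 94 gene models (placeholder BED file).
with open("piPSC_m6A_peaks.annotated.bed", "w") as out:
    subprocess.run([
        "intersectBed", "-wa", "-wb",
        "-a", "piPSC_m6A_peaks.narrowPeak",
        "-b", "ensembl94_genes.bed",
    ], check=True, stdout=out)
```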
Motif identification within m6A peaks
The motif identification within m6A peaks was performed as described previously 48. The motifs enriched in m6A peaks were analyzed by HOMER (v4.10.1). Motif length was restricted to 6 nucleotides. All peaks mapped to mRNAs were used as the target sequences, and background sequences were constructed by randomly shuffling peaks across total mRNAs on the genome using BEDTools' shuffleBed (v2.24.0) 49. All piPSC m6A peaks are listed in Table S3.
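A sketch of the background construction and motif search is shown below. Paths, the genome label, and the chromosome-sizes file are placeholders, and the shuffling call is a simplified stand-in for the exact randomization used by the authors.

```python
import subprocess

# Build background regions by shuffling peaks within the mRNA space (bedtools).
with open("background_peaks.bed", "w") as out:
    subprocess.run([
        "shuffleBed",
        "-i", "piPSC_m6A_peaks.bed",       # peaks mapped to mRNAs (placeholder)
        "-incl", "mrna_regions.bed",       # restrict shuffling to total mRNA regions
        "-g", "sscrofa11.1.chrom.sizes",   # chromosome-sizes file (placeholder)
    ], check=True, stdout=out)

# HOMER motif discovery restricted to 6-nt motifs, with the shuffled background.
subprocess.run([
    "findMotifsGenome.pl",
    "piPSC_m6A_peaks.bed", "susScr11", "homer_motifs/",
    "-len", "6",
    "-bg", "background_peaks.bed",
    "-rna",                                # search single-stranded (RNA) motifs
], check=True)
```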
Statistical analysis
The data are presented as mean ± SD. The statistical significance of differences was determined using the unpaired Student's t test with GraphPad Prism 6 (GraphPad Software). p < 0.05 was considered statistically significant.
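For completeness, the test described here corresponds to the following computation (the triplicate values are illustrative placeholders, not data from this study):

```python
from scipy import stats

# Hypothetical triplicate measurements (e.g., relative SOX2 expression)
control = [1.00, 1.05, 0.95]
treated = [0.42, 0.47, 0.40]

t_stat, p_value = stats.ttest_ind(control, treated)  # unpaired Student's t test
print("t = %.2f, p = %.4f" % (t_stat, p_value))
```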
"Biology"
] |
Time-of-Failure Probability Mass Function Computation Using the First-Passage-Time Method Applied to Particle Filter-based Prognostics
This work analyzes the impact on the estimation of the ToF-PMF (probability mass function) when particle-filter-based prognostic algorithms are used to perform long-term predictions of the fault indicator and to compute the probability of failure considering specific hazard zones (which may be characterized by a deterministic value or by a failure likelihood function). A hypothetical self-regenerative degradation process is used as a case study to evaluate the performance of the proposed methods.
INTRODUCTION
Monitoring the state-of-health (SoH) of a system (and/or its components) is essential to improve overall performance and reduce the costs associated with corrective maintenance (Wang, Lu, Cheng, & Jiang, 2019). Accordingly, prognostics and health management (PHM) plays a crucial role in the estimation of system conditions. According to Si (2015), PHM provides a set of tools used to guarantee the system's reliability, estimate its real condition, and avoid risks that can affect the operation or cause irreversible damage to the system. Prognostics allows the identification of the requirements of a system (and/or its components) in the future; that is, the time-of-failure (ToF) and the remaining useful life (RUL). The ToF is a relevant parameter in PHM and is defined as the time at which the failure threshold is reached (Skaf, 2015). The RUL, in turn, is defined as the difference between the ToF and the current time instant (Wei, Dong, & Chen, 2018). In this article, we prefer to use the ToF instead of the RUL, since it is more general and applicable to a broader range of cases (Orchard & Vachtsevanos, 2009).
Currently, various methods are used to estimate the ToF. Some authors have classified these methods into two major categories: model-based and data-driven. The first category relies on a set of equations that include the physical characteristics of the phenomenon under study. In contrast, the second category uses large amounts of data obtained from sensors and monitoring; these data are then used to infer the behavior of the process. For example, Pola et al. (2015) present a model-based approach combined with a particle filter (PF) to estimate the end of discharge (EoD) of lithium-ion batteries, while Liu, Zhao, and Peng (2019) present a data-driven approach based on long short-term memory networks and Bayesian model averaging to determine the RUL.
The SoH is an indicator widely used to characterize degradation processes. According to Qu, Liu, Ma, and Fan (2019), an accurate estimation of the SoH of a system directly affects the RUL prediction. In this line, failure prognostic algorithms use long-term predictions to describe the future behavior of such indicators in order to estimate the ToF of the faulty system. To achieve this, failure prognostic algorithms require a correct characterization of the indicator under study, whose accuracy has a direct impact on the estimation of the ToF probability mass function (ToF-PMF). An example of these algorithms is the one based on PFs, which are widely used by many researchers in diverse applications within the PHM community.
The characterization of specific degradation processes continues to be a topic of interest in PHM for applications related to self-regenerative processes, such as the discharge process of lithium-ion (Li-ion) batteries (Ng, Xing, & Tsui, 2014; Zou, Hu, Ma, & Li, 2015). According to Xu et al. (2019), the self-regenerative process in Li-ion batteries can be defined as the process by which the battery increases its capacity for the next operation cycle if a long standby time is considered. If this self-regenerative behavior is not treated adequately by prognostic algorithms, then the accuracy and precision of the ToF (or RUL) estimation may be considerably affected. Therefore, it is of utmost importance to characterize and consider aspects related to these processes through methodologies that allow computing a correct ToF (or RUL) estimation. Examples of advances in this area are the works presented in Orchard et al. (2015) and Xu et al. (2019). This paper presents an approach to compute the ToF-PMF based on the concept of the first-passage-time (FPT) method. The FPT is defined as the first time at which a stochastic process crosses a specified threshold (Jaskowski & van Dijk, 2015). This method is commonly used in the areas of economics and finance (Bakshi & Panayotov, 2010; Janssen, Manca, & Manca, 2013). However, in recent years, the concept has been extended to different areas, such as the analysis of animal movement to establish the distance relationship between an animal and its prey (McKenzie, Lewis, & Merrill, 2009). Another case is the study of a stochastic degradation model under bivariate time scales, in which the concept of FPT is used to predict the RUL of the degrading components (Pei et al., 2019). Finally, the approach proposed by Si (2015) combines the FPT concept with the Kalman filter for RUL estimation. In that work, the author also establishes that the use of the extended Kalman filter or the PF combined with the FPT may be an interesting approach for future research.
Considering all of the above, in this research effort, we propose three methods to improve the efficiency of sampling strategies in the implementation of particle-filtering-based prognostic algorithms. These methods allow working either with deterministic or probabilistic definitions of the failure hazard zone.
As a case study, a hypothetical self-regenerative degradation process is considered for the computation of the ToF-PMF. The performance of the three methods is evaluated through the Jensen-Shannon Divergence and the computation time, as a function of the number of particles used by the particle-filtering-based prognostic algorithm. This paper is organized as follows: Section 2 describes the main concepts related to ToF estimation, FPT and particle-filtering-based prognostic algorithms, Section 3 describes the proposed methodology, Section 4 introduces the case study, Section 5 shows the simulations and obtained results, and finally, Section 6 presents the conclusions.
Time-of-Failure estimation and First-Passage-Time
Failure prognostic algorithms use long-term predictions to describe the future trend in time of a SoH indicator of a failing system (or subsystem, or component), aiming to estimate its ToF (Figure 1). To compute these long-term predictions, failure prognostic algorithms require a complete understanding of the underlying dynamics of the SoH indicator, a proper characterization of the related uncertainty sources, and the future usage profiles of the system (Diaz et al., 2020). Thus, it is more accurate to state that failure prognostic algorithms allow estimating the ToF-PMF of a failing system.
Generally, the SoH indicator is directly related to a degradation process or a deterioration phenomenon (e.g., erosion, corrosion, or cracking) (Deng, Barros, & Grall, 2016). This entails that monotonicity assumptions may usually be made about the SoH indicator, because most SoH degradation/deterioration processes exhibit this kind of behavior (Park & Bae, 2010).
According to this, the system under observation would incur a catastrophic failure condition only once; moreover, this event could be characterized by a situation where the SoH indicator crosses a predetermined threshold (Zhang, Si, & Hu, 2015). In this context, the concept of threshold represents a failure condition defined by data collected from historical failures of the system. In most cases, the threshold is characterized by a deterministic value; however, if there is enough historical failure data, it is also possible to characterize the probability of failure events by a likelihood function (Orchard & Vachtsevanos, 2009). The latter is also called a hazard zone, and it is the general way to define a failure condition in PHM. Taking this into account, we can observe that a deterministic threshold corresponds to the simplest hazard zone that can be defined (Acuña & Orchard, 2018).

Figure 1. Long-term predictions generated by failure prognostic algorithms.
In cases where the catastrophic failure condition is assumed to occur only once and where the hazard zone is represented by a deterministic threshold $\mathcal{T}$, the ToF can be estimated using the concept of FPT (Deng et al., 2016; Jaskowski & van Dijk, 2015). Mathematically speaking, let $\{x_k, k \geq 0\}$, with $x \in \mathbb{R}$, $k \in \mathbb{N} \cup \{0\}$, be a scalar discrete-time stochastic process that characterizes the evolution in time of the SoH indicator, where it is assumed that the degradation condition monotonically increases its severity in time. Then, the ToF of the stochastic process $x_k$, given that we have acquired measurements $y \in \mathbb{R}$ until time $k_p$, is computed according to Eq. (1):

$$\mathrm{ToF}(k_p) = \min\left\{k \geq k_p : x_k \geq \mathcal{T} \,\middle|\, y_{1:k_p}\right\} \quad (1)$$

It is important to mention that, for simplicity in the notation, we consider $\mathrm{ToF} = \mathrm{ToF}(k_p)$ in upcoming equations.
With the definition presented in Eq. (1) and the Law of Total Probability, the authors in Jaskowski and van Dijk (2015) obtained a recursive way to compute the ToF-PMF, $P(\mathrm{ToF} = k)$. Starting from Eq. (2),

$$P(x_k \geq \mathcal{T}) = \sum_{j=k_p}^{k} P(x_k \geq \mathcal{T} \mid \mathrm{ToF} = j)\, P(\mathrm{ToF} = j), \quad (2)$$

and considering, in Eq. (3), that

$$P(x_k \geq \mathcal{T} \mid \mathrm{ToF} = k) = 1, \quad (3)$$

the ToF-PMF is given by Eq. (4):

$$P(\mathrm{ToF} = k) = P(x_k \geq \mathcal{T}) - \sum_{j=k_p}^{k-1} P(x_k \geq \mathcal{T} \mid \mathrm{ToF} = j)\, P(\mathrm{ToF} = j). \quad (4)$$

As stated above, the expression in Eq. (4) allows computing the ToF-PMF considering a deterministic threshold; besides, in some particular cases of the SoH indicator random process, an exact and closed form of the ToF-PMF can be obtained (Si, Wang, Chen, Hu, & Zhou, 2013). However, if a probabilistic threshold is considered as the failure condition, the recursion in Eq. (4) is no longer valid, and there is no closed-form solution to the problem of computing the ToF-PMF of the faulty system.
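A direct numerical translation of this recursion is straightforward once the probabilities P(x_k ≥ T) and P(x_k ≥ T | ToF = j) are available (e.g., estimated by simulation). The following sketch assumes these quantities are supplied as arrays; it illustrates the recursion in Eq. (4) and is not the authors' implementation.

```python
import numpy as np

def tof_pmf_recursion(p_exceed, p_exceed_given_tof):
    """ToF-PMF via the first-passage-time recursion of Eq. (4).

    p_exceed[k]              : P(x_k >= T), indexed from k = k_p
    p_exceed_given_tof[k, j] : P(x_k >= T | ToF = j), for j < k
    (P(x_k >= T | ToF = k) is taken as 1, as in Eq. (3).)
    """
    n = len(p_exceed)
    pmf = np.zeros(n)
    for k in range(n):
        # Remove the mass of trajectories that already crossed at some j < k.
        pmf[k] = p_exceed[k] - sum(
            p_exceed_given_tof[k, j] * pmf[j] for j in range(k)
        )
    return np.clip(pmf, 0.0, 1.0)  # guard against small numerical negatives
```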
Particle-filter-based prognostic algorithms for Time-of-Failure estimation
In PHM, different failure prognostic algorithms have been proposed to address the prognostic problem; however, many researchers have preferred probability-based methods, owing to the possibility of including the notion of uncertainty. A widely used failure prognostic algorithm is based on the PF, which is suitable both for on-line learning and for state estimation of uncertain systems (Arulampalam, Maskell, Gordon, & Clapp, 2002). In this sense, particle-filtering-based prognostic algorithms aim at approximating the SoH indicator PDF by a set of weighted samples (called "particles"). The authors in Orchard and Vachtsevanos (2009) define a theoretical framework and provide the necessary procedures to estimate the ToF-PMF, as well as to obtain a proper characterization of it, in accordance with Eq. (5):

$$\hat{P}(\mathrm{ToF} = k) \approx \sum_{i=1}^{N_p} P(\text{failure} \mid \hat{x}_k^{(i)})\, w_k^{(i)}, \quad (5)$$

where $P(\text{failure} \mid \hat{x}_k^{(i)})$ is the probability of being in the event of catastrophic failure, conditional on a specific particle, and $\{w_k^{(i)}\}_{i=1}^{N_p}$ are the particle weights at time $k$. Like the expression in Eq. (4), Eq. (5) is also valid only for a system that will reach the condition of catastrophic failure only once, given a deterministic threshold. It is noteworthy to highlight these two elements because, in the case of using a probabilistic threshold, the mathematical expression in Eq. (5) is not valid. Moreover, if the SoH indicator represents a regenerative system, the particles of the particle-filtering-based prognostic algorithm could cross the threshold several times, which could lead to a mathematically erroneous estimation of the ToF-PMF (the probabilities of the PMF summing to more than 1). For this reason, in this work we propose three methods inspired by the concept of FPT, which are capable of obtaining, in an efficient way, an empirical approximation of the ToF-PMF for particle-filtering-based prognostic algorithms. These methods can be used for both non-regenerative and regenerative systems, as well as for any kind of hazard zone.
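For a deterministic threshold, combining Eq. (5) with the FPT idea amounts to accumulating, at every prediction step, the weights of the particles that cross the threshold for the first time and then removing them from further consideration. A minimal sketch follows; the long-term prediction model `step` is a user-supplied placeholder, not a model from this paper.

```python
import numpy as np

def particle_tof_pmf(x0, w0, step, threshold, horizon, rng):
    """Empirical ToF-PMF from weighted particles using first passage times.

    x0, w0    : initial particle positions and weights (weights sum to 1)
    step      : callable x_next = step(x, k, rng); the process model
    threshold : deterministic hazard-zone threshold T
    """
    x, w = x0.copy(), w0.copy()
    alive = np.ones(len(x), dtype=bool)      # particles that have not yet failed
    pmf = np.zeros(horizon)
    for k in range(horizon):
        x[alive] = step(x[alive], k, rng)
        crossed = alive & (x >= threshold)   # first crossings only
        pmf[k] = w[crossed].sum()
        alive &= ~crossed                    # discard failed particles
    return pmf
```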
The Jensen-Shannon Divergence
Divergence is a mathematical concept used in different research topics with the purpose of measuring the dissimilarity between PMFs. An example is the Kullback-Leibler (KL) Divergence. However, the KL-Divergence is not a true metric of the dissimilarity between PMFs because it does not comply with the symmetry property. Instead, we use the Jensen-Shannon (JS) Divergence, which is based on the KL-Divergence. From a mathematical point of view (Osán, Bussandri, & Lamberti, 2018), given two random discrete distributions $P = \{p_1, p_2, ..., p_n\}$ and $Q = \{q_1, q_2, ..., q_n\}$, the KL-Divergence is defined by

$$D_{KL}(P \| Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}, \quad (6)$$

and the symmetric JS-Divergence is defined as

$$D_{JS}(P, Q) = \frac{1}{2} D_{KL}(P \| M) + \frac{1}{2} D_{KL}(Q \| M), \quad M = \frac{1}{2}(P + Q). \quad (7)$$

3. TIME-OF-FAILURE ESTIMATION METHODS FOR PARTICLE FILTER-BASED PROGNOSTICS ALGORITHMS

As stated above, particle-filtering-based prognostic algorithms approximate the PDF of the SoH indicator by a set of particles at each time instant in the absence of measurements, and allow computing the ToF-PMF when the particles reach (or cross) the predefined hazard zone. These so-called hazard zones can be classified into two types according to how the threshold is defined. The first type uses a deterministic threshold, while the other uses a probabilistic threshold through a failure likelihood function. For both approaches, the ToF computation depends on the kind of hazard zone considered. In other words, when a deterministic threshold is considered, the ToF for each particle, given that we have acquired measurements until time $k_p$, is computed according to Eq. (8):

$$\mathrm{ToF}^{(i)} = \min\{k \geq k_p : \hat{x}_k^{(i)} \geq \mathcal{T}\}. \quad (8)$$

Otherwise, when a probabilistic hazard zone is considered, the ToF for each particle is computed by means of Eq. (9):

$$\mathrm{ToF}^{(i)} = \min\{k \geq k_p : F(\hat{x}_k^{(i)}) = 1\}, \quad (9)$$

where $F(\cdot)$ is the failure likelihood function that denotes the failure condition of the given particle, which is defined by

$$F(\hat{x}_k^{(i)}) \sim \mathrm{Bernoulli}\big(p(\hat{x}_k^{(i)})\big). \quad (10)$$

$F(\cdot)$ can be understood as a realization of a Bernoulli process, where the probability of the event is a function of the failure likelihood function that defines the hazard zone and the position of each particle $\hat{x}_k^{(i)}$. In this case, $p$ is a function that denotes the nonlinear mapping $\mathbb{R} \rightarrow [0, 1]$ according to

$$p(x) = \begin{cases} 0, & x < \mathcal{T}^-, \\ \int_{\mathcal{T}^-}^{x} f(\tau)\, d\tau, & \mathcal{T}^- \leq x \leq \mathcal{T}^+, \\ 1, & x > \mathcal{T}^+, \end{cases} \quad (11)$$

where $f(\cdot)$ is the normalized failure likelihood over the hazard zone, and $\mathcal{T}^-$ and $\mathcal{T}^+$ indicate the lower and upper bounds of the hazard zone.
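A minimal realization of the failure status in Eqs. (9)-(11) is sketched below. The piecewise-linear mapping p(·) between T− and T+ used here is only one simple choice consistent with the description above (an assumption), not necessarily the likelihood function used in the experiments.

```python
import numpy as np

def failure_probability(x, t_lo, t_hi):
    """Nonlinear mapping R -> [0, 1]: 0 below T-, 1 above T+.

    Assumed piecewise-linear inside the hazard zone, for illustration only.
    """
    return np.clip((x - t_lo) / (t_hi - t_lo), 0.0, 1.0)

def failure_status(x, t_lo, t_hi, rng):
    """F(x): one Bernoulli realization per particle with probability p(x)."""
    return rng.random(len(x)) < failure_probability(x, t_lo, t_hi)
```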
Based on the ToF computation previously exposed, the ToF-PMF can be computed by counting the particles (and summing their respective weights) that have reached the failure condition for the first time.
Considering all the prior concepts, we propose three methods, inspired by particle-filtering-based prognostic algorithms and the FPT concept, to compute the ToF-PMF. Each method differs from the others in the treatment of the particles and their weights. Specifically, each method considers the weights of the particles as follows:
• Method 1: All the particles have the same weight.
• Method 2: The particles have different weights.
• Method 3: The weights of the particles are re-computed depending on the number of particles that enter into the failure condition.
Furthermore, it is noteworthy to recall that the proposed methods are also based on the concept of FPT; therefore, the three methods only count the particles that enter into the failure condition for the first time. This includes the cases where regenerative systems are prognosticated, since the particles may reach the failure condition more than once. In this case, the failure condition is considered using one of the following criteria: i) when the particles cross the threshold $\mathcal{T}$, for a deterministic hazard zone, or ii) when $F(\hat{x}_k^{(i)}) = 1$, for a probabilistic hazard zone.
First Method
The first method considers the scenario in which all the particles have the same weight when long-term predictions are performed by the particle-filtering-based prognostic algorithm (e.g., when a resampling step is applied before starting the long-term predictions). With this in mind, the ToF-PMF computation will only consider the first time instant at which each particle reaches the failure condition.
An illustration of the main idea on which this method is based is shown in Figure 2. Moreover, the procedure to compute the ToF-PMF is detailed in Algorithm 1 for a deterministic hazard zone and in Algorithm 2 for a probabilistic hazard zone. It is important to specify that, for all proposed algorithms, we use the term Prediction horizon to denote the time period over which long-term predictions are computed, whereas the Prediction step corresponds to a predictive update of the state vector that is computed using the process model.

Figure 2. Computation of ToF-PMF using the first method. Every time a blue particle enters the hazard zone, its failure status is considered to form the ToF-PMF, and it is then discarded (magenta particles). The weight of each particle is illustrated with a line; every line represents the same weight.
Algorithm 1. ToF-PMF computation when a deterministic hazard zone is considered.

Second Method

The second method considers the scenario in which the particles have different weights when long-term predictions are performed by the particle-filtering-based prognostic algorithm (e.g., when the resampling step is not applied before starting the long-term predictions). An example of this method is shown in Figure 3. The general procedure to compute the ToF-PMF through this method is also detailed by Algorithms 1 and 2.

Figure 3. Computation of ToF-PMF using the second method. As in the first method, the blue particles are used to form the PMF, but now their weights are not uniform; they are sampled from a normal distribution N(E(k), σ(k)).
Third Method
The third method is similar to Method 1 in that the long-term predictions are performed using particles with the same weight, and the first time instant at which each particle reaches the failure condition is considered. However, this method is proposed to improve the ToF-PMF characterization by increasing the number of particles.
In this method, at every time instant at which a particle enters the failure condition ($\hat{x}_k^{(i)} \geq \mathcal{T}$ or $F(\hat{x}_k^{(i)}) = 1$), a new particle is inserted into the "healthy particles" ($H_p$) set by using the multinomial resampling approach (Douc & Cappe, 2005). With the addition of this new particle, the weights of the $H_p$ set are recalculated, with the aim of keeping all these particles with the same weight, while ensuring that the weights of the whole set of particles sum to 1. Figure 4 illustrates the proposed methodology. The $H_p$ set is represented in blue. As time passes, some particles enter the hazard zone. These particles are colored in magenta, and they are replaced with new particles, colored in red. Once the substitution is made and the weights of the $H_p$ set are recalculated, the algorithm continues. Here the weights change according to the particles that are resampled. The weight of the remaining particles is distributed among the blue and red particles to keep the total probability mass of the ToF-PMF equal to one.
The procedure to compute the ToF-PMF by Method 3 is detailed in Algorithm 3 for a deterministic hazard zone and in Algorithm 4 for a probabilistic hazard zone.
Algorithm 3. ToF-PMF computation when a deterministic hazard zone is considered. (Fragment of the pseudocode: resample the particles whose failure status B == 1; ... end while.)
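The particle bookkeeping of Method 3 can be sketched as follows for a deterministic threshold: each failed particle contributes its (uniform) weight to the ToF-PMF and is replaced by a multinomial draw from the surviving set, so the healthy set keeps a constant size and the remaining probability mass stays uniformly spread. The propagation model `step` is a user-supplied placeholder; this is an illustration of the idea, not a transcription of Algorithms 3 and 4.

```python
import numpy as np

def method3_tof_pmf(x0, step, threshold, horizon, rng):
    """Method 3 sketch: constant-size healthy set via multinomial resampling."""
    x = x0.copy()
    n = len(x)
    mass_left = 1.0                          # probability mass not yet in the PMF
    pmf = np.zeros(horizon)
    for k in range(horizon):
        x = step(x, k, rng)
        failed = x >= threshold
        if failed.any():
            w = mass_left / n                # current common particle weight
            pmf[k] = failed.sum() * w
            mass_left -= pmf[k]
            survivors = x[~failed]
            if survivors.size == 0:
                break
            # Multinomial resampling: clone survivors into the failed slots.
            x[failed] = rng.choice(survivors, size=int(failed.sum()), replace=True)
    return pmf
```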
CASE STUDY
A hypothetical self-regenerative system was considered as a case study to analyze the proposed methodology. This hypothetical degradation process was designed to represent a strong self-regenerative phenomenon, with the aim of evaluating the proposed methods in a challenging scenario. It is also important to note that the self-regenerative model is only used to generate the long-term predictions of the state variable, which means that the proposed case study represents only the prognosis stage of a PHM application. Therefore, any previous estimation stage is assumed to be correct, and the initial state (x(0)) for the proposed self-regenerative model corresponds to the prognostic starting point of a particle-filter-based prognostic algorithm (i.e., k_p in Figures 1, 2, 3, and 4).
The self-regenerative system is modeled by Eq. (12), with x(0) = 1. Model parameters a and b are set to 1.06 and 0.935, respectively. In addition, the process noise is modeled as a Gaussian distribution, ω_k ∼ N(0, 6.4 × 10−3), and the prediction horizon H is equal to 156. The hazard zone is characterized by the threshold value T = 20 in the deterministic case, and by the Gaussian distribution N(20, 0.5) in the probabilistic scenario.
To compute the ground-truth ToF-PMF for the proposed case study, Monte Carlo (MC) simulations were performed. With the purpose of obtaining an acceptable representation of the ground-truth ToF-PMF, one million MC simulations were considered. In Figure 5, an example of the MC simulations for the regenerative system is shown. Figure 6 shows the corresponding ground-truth ToF-PMF for both kinds of hazard zones.
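The ground-truth construction can be sketched as follows. Since Eq. (12) is not reproduced above, the state-transition model `step` is left as a user-supplied placeholder implementing the self-regenerative dynamics with x(0) = 1, a = 1.06, b = 0.935 and ω_k ∼ N(0, 6.4 × 10−3).

```python
import numpy as np

def ground_truth_tof_pmf(step, x0, threshold, horizon, n_runs, seed=0):
    """Ground-truth ToF-PMF from Monte Carlo first-passage times."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(horizon)
    for _ in range(n_runs):
        x = x0
        for k in range(horizon):
            x = step(x, k, rng)
            if x >= threshold:          # first passage of this trajectory
                counts[k] += 1
                break
    return counts / n_runs              # mass beyond the horizon is dropped

# Paper's setting: T = 20, H = 156, one million runs (step must implement Eq. (12)):
# pmf = ground_truth_tof_pmf(step, x0=1.0, threshold=20.0, horizon=156, n_runs=10**6)
```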
RESULTS
To evaluate the ToF-PMF characterization of the proposed methods and their efficiency, the three algorithms were tested using the self-regenerative process presented in the case study. The ToF-PMF estimations obtained by the three methods were compared to the ground-truth ToF-PMF obtained in the case study. The software used for the simulations was Matlab R2017, and the hardware corresponds to an Intel® Core™ i5-8250U CPU @ 1.60 GHz (1.80 GHz) with 12 GB of RAM.
The ToF-PMF characterization was evaluated both through the JS-Divergence (Osán et al., 2018) and through the execution time.
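The JS-Divergence used as the comparison metric can be computed as in the following sketch (zero-probability bins are handled by restricting the sums to the support of the first argument, which suffices here since the mixture M is positive wherever P or Q is; this is an illustration, not the authors' code):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), over bins where p_i > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence via the mixture M = (P + Q) / 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```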
In both cases, the results were represented as a function of the number of particles (N_p) used by the particle-filtering-based prognostic algorithm. The number of particles was varied from 100 to 500 in increments of 100. Additionally, one hundred iterations for each value of N_p were considered in order to carry out a statistical analysis.
Finally, the ToF-PMF estimation for the case study considered the two kinds of hazard zones described above: a deterministic threshold and a failure likelihood function. Accordingly, the results are organized according to the kind of hazard zone, as stated below.
Deterministic hazard zone
The results for the three methods, when a deterministic threshold is considered, are shown in Figure 7. The upper plot compares the results obtained for the JS-Divergence as a function of the number of particles, while the lower plot compares the execution times for the three methods. For a given number of particles, the best characterization of the ToF-PMF is obtained using Method 3, as seen in the upper plot of Figure 7, followed by Methods 2 and 1. This can be explained by the formulation of Method 3, which has the effect of maintaining a constant number of particles available to compute the ToF-PMF at every time instant. However, this method includes a resampling algorithm that requires greater computational effort to compute the ToF-PMF in comparison with Methods 1 and 2. This is illustrated in the execution-time plot of Figure 7, where Methods 1 and 2 appear in ascending order, since their complexity also increases, and the method with the greatest complexity is the slowest of the three.
According to Figure 7 and the obtained results regarding the JS-Divergence, the performance of the three methods presents a considerable dispersion when a small number of particles is used, but this dispersion tends to decrease as the number of particles considered to compute the ToF-PMF with each method increases. Furthermore, the execution time increases for the three methods as the number of particles increases.
Finally, Figure 8 shows the comparison between the ToF-PMF obtained by each method using 200 particles and the ground-truth ToF-PMF.
Probabilistic hazard zone
When the hazard zone is characterized by a failure likelihood function, the three methods behave similarly to the deterministic case. In the upper plot of Figure 9, the JS-Divergence for the three methods shows that Method 3 is the one that best characterizes the ground-truth ToF-PMF, while Methods 1 and 2 behave in the same manner as for the deterministic threshold. Moreover, when the execution time is compared, all three methods become less time-efficient as the number of particles increases. This can be noted in the lower plot of Figure 9. This behavior can be explained by the implementation of the Bernoulli process in the computation of the failure status of the particles, since this new step takes almost 70, 60 and 40 percent of the total execution time of each method, respectively. Also, the dispersion does not show a trend for any of the proposed methods, but for Method 3 a greater dispersion in execution time is measured, and this dispersion behaves independently of the number of particles. Finally, it is important to mention that simulations with fewer particles were performed, although the observed behavior was similar to the one reported in this article.
CONCLUSIONS
In this paper, three methods were proposed to compute the ToF-PMF based on the FPT concept and considering two kinds of hazard zones. The three proposed methods were evaluated using a hypothetical self-regenerative degradation process.
The three methods were capable of approximating the ground-truth ToF-PMF at a reasonable level and in an efficient manner. Therefore, the three proposed methods may be used in particle-filtering-based prognostic implementations for different kinds of processes and hazard zones.
Regarding execution-time efficiency, the results showed that the proposed methods can be implemented under real conditions by particle-filtering-based prognostic algorithms. However, the trade-off between ToF-PMF approximation quality and execution time must always be considered.
For future work, we propose to test this methodology on real data, to compare its performance with state-of-the-art methodologies, and to evaluate the behavior of each of the algorithms in a context with limited computational power.

He has authored and co-authored more than 100 papers on diverse topics, including the design and implementation of failure prognostic algorithms, statistical process monitoring and system identification. His research work at the Georgia Institute of Technology was the foundation of novel real-time failure prognosis approaches based on particle filtering algorithms. His current research interests include the study of theoretical aspects related to the implementation of real-time failure prognosis algorithms, with applications to battery management systems, electromobility, the mining industry, and finance. Dr. Orchard is a Fellow of the Prognostics and Health Management Society.
"Computer Science"
] |
Nek family of kinases in cell cycle, checkpoint control and cancer
Early studies in lower eukaryotes defined a role for members of the NimA-related kinase (Nek) family of protein kinases in cell cycle control. Expansion of the Nek family throughout evolution has been accompanied by their broader involvement in checkpoint regulation and cilia biology. Moreover, mutations of Nek family members have been identified as drivers behind the development of ciliopathies and cancer. Recent advances in studying the physiological roles of Nek family members, utilizing mouse genetics and RNAi-mediated knockdown, are revealing intricate associations of Nek family members with fundamental biological processes. Here, we aim to provide a comprehensive account of our understanding of Nek kinase biology and their involvement in cell cycle, checkpoint control and cancer.
Introduction
Deregulation of the cell cycle is a hallmark of neoplastic transformation and plays a central role in the initiation and progression of cancer. The fidelity of the cell cycle is tightly maintained by numerous regulatory proteins, most notably kinases. Cyclin-dependent kinases (CDKs), in complex with their partner cyclins, are considered the master regulators of the cell cycle. Members of the Aurora and Polo families are also critical components of the cell cycle machinery. More recently, the NimA-related kinase (Nek) family of protein kinases has begun to emerge as an important player in the regulation of the eukaryotic cell cycle, both during normal cell cycle progression and in response to genotoxic stress. This review aims to provide a systematic account of our understanding of Nek kinase biology and their involvement in disease, drawn from biochemical, cell biology, animal model and genetic studies.
Nek kinase family
The Never in mitosis A (NimA) kinase of the filamentous fungus Aspergillus nidulans is the founding member of the Nek family of serine-threonine kinases and an essential regulator of mitosis [1,2]. NimA is required for transport of active CDC2 into the nucleus, thus allowing initiation of mitosis [3]. Moreover, NimA promotes mitotic chromosome condensation through phosphorylation of histone H3 at serine 10 and may regulate nuclear membrane fission during mitotic exit [4,5].
The critical role of NimA in promoting cell cycle progression in A.nidulans raised the possibility that homologues of NimA exist in higher eukaryotes. Consistent with this, overexpression of NimA in S.pombe and in human HeLa cells induced chromosome condensation in the absence of other mitotic events, such as microtubule spindle assembly or Cdc2 activation [6,7]. Indeed, NimA-related kinases have been identified throughout higher eukaryotes, with a significant expansion of the family through evolution. While a single NimA homologue exists in yeast, 2, 4 and 11 NimA-related kinases have been identified in D.melanogaster, C.elegans and mammals, respectively.
NimA consists of an N-terminal catalytic domain, coiled-coil domains, which mediate oligomerization, and PEST sequences, which participate in ubiquitin-dependent proteolysis, a process that may be required for A.nidulans to exit mitosis [8] (Figure 1). NimA kinase activity exhibits a preference for N-terminal hydrophobic residues and a phenylalanine at position -3 relative to the phosphorylated residue (F-R/K-R/K-S/T, with S/T the target residue) [9]. Despite low overall sequence homology, the organizational features of NimA are broadly conserved among mammalian Nek kinases. For instance, all Nek kinases except Nek10 contain N-terminal catalytic domains, whereas Nek4, 6 and 7 are the only family members that do not contain coiled-coil motifs. Moreover, 6 of 11 mammalian Nek kinases contain putative PEST sequences (Figure 1).
Outside regions of homology, certain Nek kinases contain unique protein domains that point to the acquisition of novel functions relative to the ancestral NimA protein. Nek8 and Nek9 contain regulator of chromosome condensation (RCC1) repeats, which are homologous to RCC1, a guanine nucleotide exchange factor (GEF) for the small GTPase Ras-related nuclear protein (Ran). While the role of the RCC1 domain has not been characterized in Nek8, in Nek9 this domain acts as a negative regulator of Nek9 catalytic activity and can interact with Ran. However, there is no evidence that Nek9 can act as a GEF towards Ran [10]. Additional unique domains in Nek family members include a predicted DEAD-box helicase-like domain in Nek5 and a cluster of armadillo repeats in Nek10 (Figure 1).
A recent determination of the three-dimensional structure of Nek7 revealed a novel autoinhibitory sequence within the kinase domain. This tyrosine-down motif within the nucleotide-binding lobe projects into the active site of the kinase, generating an inactive conformation. Activation of Nek6/7 occurs in two distinct ways: by interaction with Nek9's non-catalytic C-terminal tail, which relieves the autoinhibition, and by direct Nek9-mediated phosphorylation within the activation loop [10,11]. An equivalent autoinhibitory tyrosine can be found in 8 of 11 Nek kinases (including Nek2 and Nek6) (Figure 1) and in 10% of all human kinases [11].
A divergence in function between mammalian Neks and the ancestral NimA is highlighted by the fact that only nim-1 from the related fungus Neurospora crassa can functionally complement the nimA mutation [12]. Neither the yeast nimA homologues (fin1 in S.pombe; KIN3 in S.cerevisiae) nor Nek2, the closest mammalian nimA homologue, is able to rescue the cell cycle defect incurred by defects in nimA [13,14]. While mammalian Nek kinases do not phenocopy the NimA mutation, they are involved in many aspects of cell cycle progression. Notably, many of these functions can be attributed to the regulation of microtubules and microtubule-containing structures. More recently, several Nek family members have also been shown to participate in control of cell cycle checkpoints following cellular stress and DNA damage, as well as in the development of cancer.
Nek kinases, microtubules and microtubule-based organelles

a) Nek2 in control of centrosome splitting

Based on sequence homology within the kinase domain, Nek2 is the closest mammalian NimA homologue. Unlike NimA, however, Nek2 is not essential for mitotic entry, but instead regulates centrosome separation during mitosis [15,16]. Nek2 localizes to centrosomes during interphase and early mitosis, where it interacts with and phosphorylates several centrosomal proteins including cNap-1, Rootletin and β-catenin [16][17][18][19]. Nek2 localization and its ability to phosphorylate cNap-1 and Rootletin are mediated by interaction with members of the Hippo pathway, Mst2 and hSav1 [20]. Inhibition of Nek2 catalytic activity, or knockdown of its substrates cNap-1, Rootletin or β-catenin, inhibits centrosome separation and spindle assembly, and leads to the formation of multinucleated cells [15,[18][19][20]. In addition to the centrosome, Nek2 localizes to the condensed chromatin, the midbody and the kinetochores of dividing NIH3T3 cells [21]. Significantly, knockdown of Nek2 causes displacement of the centromeric protein Mad2 from the kinetochores and impairs chromosome segregation [21]. Taken together, these studies indicate that Nek2 may coordinate cell division on multiple levels.
A fundamental role of Nek2 in the control of cell cycle progression and division is strongly corroborated by its function in early embryogenesis. Downregulation of Nek2 in one-cell mouse embryos through microinjection of dsRNA prevented 75% of the embryos from reaching the blastocyst stage, with most arresting at the four-cell stage [22,23]. Most embryos displayed morphological defects in both mitotic and interphase blastomeres, forming abnormal spindle structures and displaying irregular nuclear morphologies, including dumbbell-shaped nuclei, nuclear bridges, and micronuclei.

b) Nek6, 7 and 9 and the mitotic spindle

Nek6 and Nek7 are highly related and are almost entirely composed of catalytic domains, which share 87% identity. While they were originally identified based on their ability to phosphorylate p70 S6 kinase in vitro, the physiological significance of this interaction remains unclear [24,25]. Instead, Nek6 and Nek7 were found to act downstream of Nek9 to regulate the mitotic spindle and cytokinesis [26]. Specifically, Nek6 or Nek7 depletion led to fragile spindle formation during mitosis and prolonged the activation of the spindle assembly checkpoint (SAC), preventing progression to anaphase [26]. In addition to regulating spindle formation, Nek6/7 contribute to the final stage of cell division, as cells treated with pharmacological inhibitors of the SAC continue to progress through mitosis but arrest again during cytokinesis [26]. Consistent with these findings, a Nek9 function in spindle dynamics has also been demonstrated, whereby inhibition of Nek9 through microinjection of α-Nek9 antibodies impaired spindle assembly and chromosome alignment during metaphase [10]. Finally, Nek6, 7 and 9 have recently been implicated in centrosome splitting [27]. In HeLa cells, Nek9 is activated by sequential phosphorylation by CDK1 and PLK1 during mitosis, which leads to Nek6/7-dependent phosphorylation of Eg5 and its accumulation at centrosomes, an event required for centrosome separation [27].
Taken together, these cell-based studies suggested that Nek6/7/9 might be critical for the regulation of microtubule organization during mitosis. Indeed, targeted disruption of the Nek7 gene in mice revealed that this kinase is indispensable for murine development, with only rare homozygous-null animals surviving to one month of age [28]. At birth, Nek7-deficient mice weighed slightly less than their littermates, but thereafter exhibited severe growth retardation, weighing roughly half as much as their littermates by twenty days of age. Furthermore, Nek7-/- MEFs were frequently found to be bi/multinuclear or mononuclear with enlarged nuclei. Analysis of metaphase chromosome spreads revealed increased polyploidy and genetic instability leading to aneuploidy. Evidence of multiple centrosomes in the binucleated cells, as well as a more frequent incidence of chromosomal lagging and bridges at anaphase or telophase, was further indicative of cytokinesis failures. Interestingly, judging by the strong phenotypes elicited by Nek7 deletion, Nek6, despite its strong homology, could not compensate for the loss of Nek7 in either cultured cells or the whole organism. This may in part be explained by the differential tissue distribution and subcellular localization of Nek6 and 7 [26,29].
In addition to Nek6, 7 and 9, Nek3 and Nek4 are also implicated in control of microtubule dynamics. For example, in post-mitotic neurons, expression of a Nek3 mutant lacking the regulatory phosphorylation site (T475) within the PEST sequence, believed to act as a dominant negative, resulted in disruption of microtubule deacetylation, polarity and overall neuronal morphology [30]. Finally, knockdown of Nek4 in MCF7 cells altered the cellular sensitivity to the microtubule poisons taxol and vincristine, suggesting that Nek4 may also regulate microtubule dynamics [31].
c) Nek1, Nek8 and Ciliogenesis
Nek kinases prominently feature in the biology of cilia, which are microtubule-based organelles that are structurally and functionally similar to flagella (reviewed in [32]). Two types of cilia exist. Motile cilia function to move extracellular fluid and debris and are found on certain cell types, such as the tracheal epithelia, where they work to sweep debris out of the airway. Primary cilia, on the other hand, are present on most cell types and coordinate cellular responses with the extracellular environment. Primary cilia form during interphase from the mother centriole and disassemble prior to mitosis (reviewed in [33,34]). Ciliary protein mutations are the basis of a number of human genetic disorders termed ciliopathies, including retinal degeneration, polycystic kidney, liver and pancreatic diseases, abnormalities in neural tube closure, and skeletal defects (reviewed in [35]).
Nek kinases were first linked with ciliogenesis by the discovery that mutations in Nek1 and Nek8 are the causal events in independent mouse models of polycystic kidney disease (PKD) [36,37]. The Kat and Kat2J strains harbor mutations in the NEK1 gene that result in the production of truncated Nek1 proteins. Mice carrying these mutations display facial dysmorphism, dwarfing, male sterility due to testicular hypoplasia and reduced spermatogenesis, anemia, and progressive polycystic kidney disease [38,39]. Another model of PKD is the Jck mouse strain, which harbors a G448V missense mutation in the C-terminal RCC1 domain of NEK8 [36,40,41]. The Kat, Kat2J and Jck strains recapitulate the characteristics of PKD seen in humans to varying degrees, with the phenotype of the Jck mice, in particular, strongly resembling the autosomal dominant human disease. Specifically, Jck mice recapitulate many of the hallmark features of the human condition, including the onset and sites of the disease, as well as the abnormal epidermal growth factor receptor (EGFR) expression and increased cAMP signaling [41]. Recently, loss-of-function Nek1 mutations in two families were identified and found to be the underlying cause of the ciliopathy autosomal-recessive short-rib polydactyly syndrome [42].
In vitro work with cultured cells has provided further insight into the roles of Nek1 and Nek8 in ciliogenesis. In wild-type kidney epithelial cells, Nek8 localizes to primary cilia, while in cells derived from Jck mice, mutant NEK8 exhibits cytoplasmic and perinuclear localization, which correlates with increased cilia length [41]. In Jck mice, the expression of the polycystins PC-1 and PC-2 is elevated, and while they are ordinarily restricted to the basal bodies of wild-type cilia, both proteins are found along the length of the cilia of kidney cells [43]. Notably, the accumulation of polycystins in cilia has been reported in other polycystic kidney disease models, and mutations in PC-1 and PC-2 themselves can lead to PKD [44]. In the case of Nek1, a role in cilia formation was demonstrated in IMCD3 cells. Overexpression of Nek1 in these normally ciliated cells, derived from the inner medullary collecting duct of the murine kidney, led to inhibition of ciliogenesis [45]. This is likely dependent on Nek1 catalytic activity, as a catalytically inactive mutant of Nek1, while localizing to cilia, failed to affect cilia formation [45].
It has been proposed that the ability to coordinate the primary cilium with the cell cycle coevolved with the expansion of the Nek family [34]. For example, A.nidulans and yeast are non-ciliated and contain only a single NimA-related kinase. In D.melanogaster and C.elegans, which have 2 and 4 NimA-related kinases respectively, ciliated cells are terminally differentiated and thus do not coordinate cilia function with the cell cycle. In contrast, organisms such as mammals, Chlamydomonas and Tetrahymena, which feature proliferating ciliated cells, display an expansion of the Nek family, containing 11, 10 and 35 members respectively [34].
Nek Kinases and Checkpoint Control
In addition to their established functions during mitosis, certain Nek kinases also participate in cell cycle regulation following genotoxic stress. All eukaryotic cells have multiple molecular mechanisms to identify and repair damaged DNA and preserve genomic integrity (reviewed in [46]). An important aspect of this process is the activation of a checkpoint and induction of cell cycle arrest, to allow the cell time to repair the damage. Cell cycle arrest can be triggered at the G1/S, intra-S and G2/M phases of the cell cycle following damage caused by endogenous sources, such as stalled replication forks, or by exogenous agents, including ultraviolet (UV) radiation, ionizing radiation (IR), reactive oxygen species (ROS) and certain chemotherapeutic agents. Upon successful repair, the cell will re-enter the cell cycle.
Checkpoint activation is initiated by the PIKK family serine/threonine kinases ATM (ataxia-telangiectasia mutated) and ATR (ATM and rad3-related), and their effector kinases Chk1/2 (checkpoint kinase 1/2). Parallel to Chk1/2 signaling, p38 MAPK and its downstream kinase MK2 (MAPK activated protein kinase 2) have also been identified as key regulators of cell cycle arrest (reviewed in [47]). Ultimately, the two checkpoint pathways culminate in the inactivation of CDKs.
Some of the key molecular targets that mediate checkpoint engagement are the transcription factor p53 and the CDK-activating phosphatases Cdc25A, B and C. Activation of the ATM/ATR/Chk1/2 cascade leads to stabilization of p53 and subsequent upregulation of a number of antiproliferative genes, including p21 [48][49][50][51][52][53]. While p53 likely contributes to all checkpoints, it is absolutely required for the G1/S cell cycle arrest. Many human tumors and immortalized cell lines exhibit compromised p53 activity and G1/S arrest following damage. In such cells, the G2/M checkpoint takes on increasing importance for maintaining genomic stability. Cdc25A, B and C are inactivated via phosphorylation by multiple kinases, including Chk1/2 and Nek11 (reviewed in [54,55]). Following genotoxic stress, Cdc25A undergoes ubiquitin-mediated degradation, which occurs in a Chk1/2-dependent manner [56,57]. On the other hand, Chk1/2 and/or MK2 phosphorylation of Cdc25B and C leads to association with 14-3-3 and their cytoplasmic sequestration, away from their CDK targets [58][59][60][61].
Amongst the Nek family, the contribution of Nek11 to checkpoint control has been best characterized. Melixetian et al. demonstrated that in response to IR, Nek11 is activated via phosphorylation on S273 by the ATM effector kinase Chk1, which also phosphorylates Cdc25A on S76, priming it for further phosphorylation within the DSG motif [55]. Significantly, Nek11 acted as the Cdc25A DSG motif kinase, promoting its ubiquitination and degradation. Consistent with this, HeLa cells depleted of Nek11 display elevated levels of Cdc25A protein and fail to undergo IR-induced G2/M arrest [55].
Nek1 and Nek2 also participate in IR-induced checkpoints. For instance, IR of Cos-7 cells results in a reduction of Nek2 catalytic activity, likely in an ATM/protein phosphatase 1 (PP1)-dependent manner, integral to the IR-induced inhibition of centrosome splitting [62]. Unlike Nek2, in HK2 and HeLa cells, Nek1 expression and catalytic activity are elevated in response to IR [63]. Highlighting the importance of Nek1 levels following IR, Nek1-/- cells displayed defective G1/S and G2/M checkpoints and were unable to repair their DNA, leading to an accumulation of double strand breaks [64]. Nek1 subcellular localization is also regulated by IR. While in unstimulated cells Nek1 is predominantly cytoplasmic, following treatment with various genotoxic agents including IR, UV, etoposide and cisplatin, Nek1 localizes to γ-H2AX-positive nuclear foci [63,64]. Significantly, unlike Nek11 and Nek2, IR-induced changes in Nek1 activity and localization occur independently of ATM/ATR [65].
Work from our laboratory on Nek10, a previously uncharacterized Nek family member, has uncovered its role in G2/M checkpoint control [66]. In response to UV irradiation, HEK293 and MCF10A cells depleted of Nek10 displayed an impaired G2/M arrest. Intriguingly, these studies revealed that Nek10 can promote autoactivation of MEK1 in response to UV irradiation, but not mitogenic stimuli. Ectopic expression of Nek10 enhanced, while its depletion inhibited, UV-induced MEK1/2 and ERK1/2 phosphorylation. Nek10 was shown to interact with both Raf-1 and with MEK1 in a Raf-1-dependent manner. Surprisingly, while Raf-1 was required for Nek10 complex formation with MEK1, its catalytic activity was dispensable for the activation of MEK1 in response to UV irradiation. Instead, MEK1 underwent autoactivation upon exposure to UV irradiation. Integrin-stimulated MEK1 autophosphorylation has previously been described in the context of cell adhesion [67], but unlike the response to UV irradiation, it required prior phosphorylation at S298 by PAK1.
Regardless of the nature of the upstream signal, MEK1 autoactivation represents an alternate means of ERK pathway activation. Significantly, ERK1/2 activation has been linked to checkpoint control upon genotoxic stress, as well as to recovery from cell cycle arrest and DNA repair [68][69][70][71]. MEK1 autophosphorylation can be detected following UV irradiation, as well as other stressors such as anisomycin and sorbitol treatment, but not following EGF or PMA stimulation (Moniz L. and Stambolic V., unpublished observation), consistent with the notion that MEK autoactivation occurs in a stimulus-specific manner. Other means of communication between Nek kinases and the ERK signalling cascade may also exist. For instance, during the first meiotic prophase, Nek2 activity is sensitive to U0126-mediated MEK inhibition, while in vitro it can be phosphorylated and activated by p90Rsk2, a downstream target of ERK1/2 [72]. Moreover, Nek2A directly interacts with ERK2 and may specify its localization to centrosomes [73].
Nek Kinases and Cancer

Nek1
Renal tubular epithelial cells established from Kat2J mice exhibit abnormal nuclear morphologies including multinuclei, micronuclei, and bridging chromosomes [65]. Multipolar spindles, lagging chromosomes, improper chromosome movements, and incomplete cytokinesis were also observed during mitosis. As a consequence of these mitotic defects, populations of Kat2J cells manifest progressively worsening aneuploidy, with three quarters of cells having greater than 4N DNA content after several passages. Indicative of their transformation, xenograft injection of Kat2J mutant, but not wild-type, renal tubular cells led to the formation of tumors [65]. Consistently, 89% of mice heterozygous at the Kat2J locus (Nek1+/-) developed lymphomas between 17 and 24 months of age, compared to 30% of wild-type mice [65]. Importantly, the lymphoma cells were devoid of Nek1 immunoreactivity, suggestive of loss of heterozygosity at this locus.
Nek2
Elevated levels of Nek2 have been found in certain human cancers, raising the possibility that it may represent a potential therapeutic target. Cholangiocarcinoma is an aggressive cancer originating in the liver bile duct epithelium with a markedly poor clinical prognosis. A cDNA array analysis comparing gene expression in cholangiocarcinoma and normal liver tissue revealed Nek2 upregulation in these tumors, which was further confirmed in a subsequent evaluation of seven cholangiocarcinoma cell lines [74]. Significantly, siRNA-mediated knockdown of Nek2 in xenografts generated by femoral injection of HuCCT1 cholangiocarcinoma cells attenuated cancer progression. Similar observations were made in several breast cancer cell lines, both ER-positive and ER-negative [75]. Namely, siRNA-mediated knockdown of Nek2 in the MCF7, MDA-MB-231 and Hs578T mammary carcinoma cell lines suppressed their proliferation, invasiveness, and anchorage-independent growth in vitro [75]. Further, Nek2 siRNAs significantly reduced the tumor burden in mice femorally injected with either MCF7 (ER-positive) or MDA-MB-231 (ER-negative) cells [75]. Elevated Nek2 expression has also been noted in colorectal cell lines, as well as in tumor biopsies [76]. Similar to the effects in breast cancer cell lines, Nek2 siRNA impaired the in vitro proliferation of the DLD-1 and Colo320 carcinoma cell lines, as well as xenografts generated by injection of DLD-1 cells [76]. Finally, Nek2 siRNA and cisplatin displayed an additive suppressive effect in treating DLD-1 xenografts, suggesting a possible therapeutic opportunity in targeting Nek kinases [76].
Nek6
Similar to Nek2, Nek6 is overexpressed in tumors from a variety of tissues including breast, uterus, colon, ovary, thyroid, and cervix, as well as in a number of associated carcinoma cell lines [77]. A recent study linking Nek6 to p53-induced senescence has shed light on how Nek6 may promote tumorigenesis. In both human lung fibroblasts and EJ human bladder carcinoma cells, Nek6 expression decreased upon p53-induced senescence [78]. Importantly, ectopic expression of Nek6 in EJ cells reduced markers of senescence, including cell-cycle arrest, production of reactive oxygen species (ROS) and senescence-associated β-galactosidase activity, caused by p53 expression or treatment with chemotherapeutic agents such as doxorubicin [78,79]. Consistently, knockdown of Nek6 suppressed the anchorage-independent growth of several carcinoma cell lines, including colon (HCT-15), stomach (NCI-N87) and cervix (HeLa), as well as the growth of HeLa xenografts [77].
Nek10
A potential association of Nek10 with cancer was uncovered by a comprehensive genome-wide association study (GWAS) involving over 37,000 breast cancer samples and 40,000 controls, which identified a strong breast cancer susceptibility locus within human chromosome 3p24 (p value = 4.1 × 10⁻²³) [80]. Importantly, the subregion of 3p24 identified by this GWAS contains only two genes, Nek10 and the solute carrier family 4, sodium bicarbonate co-transporter, member 7 (SLC4A7) [80]. Interestingly, this susceptibility locus associates with an increased risk of breast cancer for BRCA2 but not BRCA1 mutation carriers [81].
Nek10 may also be the subject of direct mutations in cancer. Namely, whole genome sequencing of 210 primary tumors and immortalized human cancer cell lines uncovered more than 1000 somatic mutations within the coding sequences of the 518 predicted human protein kinases [82,83]. One parameter for distinguishing driver and passenger mutations is the ratio of nonsynonymous to synonymous mutations appearing in distinct cancers. In this regard, Nek10 is noteworthy in having thirteen catalogued missense mutations in six cancers. Based on mutation frequency, Nek10 was defined as one of 120 kinases predicted to contain a driver mutation [82]. This raises the possibility that disrupted Nek10 function contributes to oncogenesis, though this remains to be formally tested through rigorous experimentation. Of note, Nek10 mutations were found at the same frequency (4/33) as mutations of B-Raf and liver kinase B1 (LKB1), kinases previously firmly implicated in tumorigenesis [82]. Nek10 mutations were found in both primary tumors (ovarian (A66K, V568I, D875Y, F50L), lung (R878M), brain (I783V)) and cultured cell lines (skin (E379K), lung (P1115L), pancreas (D665Y), stomach (R878K, R103C)) [82]. While none of the identified mutations map to the catalytic domain of Nek10, their effect on protein function is currently unknown.
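The driver/passenger heuristic described above reduces to a simple ratio comparison. A minimal sketch follows; the counts are invented placeholders (only the thirteen missense mutations for Nek10 echo the text), and the rough neutral expectation stands in for the formal statistics of [82].

```python
# Compare per-gene nonsynonymous:synonymous mutation ratios against the
# ratio expected by chance under no selection (roughly 2:1 for random
# coding mutations; a simplifying assumption here).
def ns_ratio(nonsynonymous: int, synonymous: int) -> float:
    """Ratio of nonsynonymous to synonymous somatic mutations."""
    if synonymous == 0:
        return float("inf")  # no synonymous mutations observed
    return nonsynonymous / synonymous

observed = {"NEK10": (13, 2), "GENE_X": (4, 5)}  # hypothetical counts
NEUTRAL_EXPECTATION = 2.0                        # assumed neutral ratio

for gene, (ns, s) in observed.items():
    label = "candidate driver" if ns_ratio(ns, s) > NEUTRAL_EXPECTATION else "likely passenger"
    print(f"{gene}: N/S = {ns_ratio(ns, s):.1f} -> {label}")
```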
Summary
Early phenotypic analyses of fungal mutants of the archetypal Nek kinase revealed their involvement in cell cycle regulation. Subsequent studies in yeast and frogs, and more recently mice, have uncovered the fascinating intricacy of the control of the cell cycle and its checkpoints by various members of the Nek family. Further, mutations of Nek family members have also been identified as drivers behind the development of ciliopathies and cancer. The recent emergence of comprehensive cancer genomes is highlighting certain members of the Nek family as targets of frequent mutations. Despite remarkable progress in understanding the biology of the Nek family, the most interesting work is yet to be done, fuelled by the advent of gene knockout, RNAi-mediated knockdown, naturally occurring mutant and xenograft tumor models.
"Biology"
] |
NiO nanoparticle-decorated SnO2 nanosheets for ethanol sensing with enhanced moisture resistance
In a high relative humidity (RH) environment, it is challenging for ethanol sensors to maintain a high response and excellent selectivity. Herein, tetragonal rutile SnO2 nanosheets decorated with NiO nanoparticles were synthesized by a two-step hydrothermal process. The NiO-decorated SnO2 nanosheet-based sensors displayed a significantly improved sensitivity and excellent selectivity to ethanol gas. For example, the 3 mol% NiO-decorated SnO2 (SnO2-3Ni) sensor reached its highest response (153 at 100 ppm) at an operating temperature of 260 °C. Moreover, the SnO2-3Ni sensor had substantially improved moisture resistance. The excellent properties of the sensors can be attributed to the uniform dispersion of the NiO nanoparticles on the surface of the SnO2 nanosheets and the formation of NiO-SnO2 p–n heterojunctions. Considering the long-term stability and reproducibility of these sensors, our study suggests that the NiO nanoparticle-decorated SnO2 nanosheets are a promising material for highly efficient detection of ethanol.
Introduction
Metal oxide semiconductors (MOX) have attracted substantial attention in the field of gas detection over the past few decades due to their ease of use and reproducible response to various gases [1][2][3]. As a representative n-type MOX, SnO2 has been extensively investigated and used for commercial gas detectors [4]. To further improve the sensor performance, diverse SnO2-based nanostructures, such as nanoparticles [5], nanosheets [6], nanowires [7], nanotubes [8], hollow spheres [9], and some hierarchical architectures [10][11][12], have been developed. In these reports, two-dimensional (2D) SnO2 nanostructures exhibit a rather high catalytic activity on certain surface sites, which promotes their sensing performance [1]. On the other hand, SnO2-based sensors can also be substantially improved by the addition of appropriate dopants, such as Pd [13], Sb [14], Ce [15], and Ni [16].
Gas sensing mechanisms related to doping effects, junction formation, surface catalytic effects, and synergistic effects have been explored to explain improved sensor performance [17,18]. Among these, NiO is often used as a catalyst, and it may also form p-n heterojunctions at the interface between the NiO and the SnO2 [19,20]. In particular, p-type NiO enables an increase in the adsorption of oxygen that can react with target gases [21].
According to previous studies, NiO-decorated SnO2 nanostructures have been synthesized by various methods, with beneficial effects on ethanol sensing. NiO/SnO2 composite nanofibers prepared via electrospinning were used for ethanol detection, and a response of up to 25.5 (100 ppm) was achieved at 300 °C, which was 12.7 times larger than that of the pure SnO2 nanofibers [19]. Ultrafine NiO/SnO2 nanoparticles obtained by thermal treatment of the precursor exhibited a fast sensing process, with response and recovery periods of 2 s and 3 s, respectively [5]. The 3D structures of Ni-doped SnO2, such as hollow spheres [22], microflowers [20], and other hierarchical nanostructures [23], were produced by hydrothermal methods or chemical solution routes, which successfully improved the response with excellent selectivity for ethanol detection. Today, ethanol testing is needed not only for drunk-driving enforcement and alcohol brewing but also for the production of biochemical products. It is imperative that researchers carry out significant work on the sensitivity, selectivity, and long-term stability of ethanol sensors. However, it should be noted that moisture resistance is often the most overlooked aspect of gas sensors in actual use scenarios. On the other hand, NiO-doped SnO2 hierarchical nanostructures could be applied to reduce the influence of environmental humidity and demonstrate a fast response time and excellent gas response [24]. Even so, it is still necessary to further clarify the state of the NiO (dopant or individual phase) added to SnO2 nanostructures, because this may extend our understanding of their gas sensing mechanisms. It is also well known that NiO shows a high affinity for water molecule absorption [25,26].
This work reports the synthesis of NiO-decorated SnO2 nanosheets by a facile two-step hydrothermal process. The effects of the NiO content on the structural, morphological, and gas sensing properties of the SnO2 nanosheet-based sensors were analyzed in detail. The gas sensing results confirmed that the NiO-decorated sensors indeed exhibited highly sensitive and selective ethanol sensing properties, with excellent long-term stability and reproducibility. In particular, the 3 mol% NiO-decorated sensor had a remarkable enhancement in moisture resistance compared with the pure SnO2 sensor, which makes it more promising for practical application.
Structural and morphological characteristics
As illustrated in Fig. 1a, SnO2 nanosheets can be readily decorated with NiO nanoparticles during the preparation procedure. First, precipitates formed immediately when the SnCl2·2H2O was added to deionized water because of Sn2+ hydrolysis. The added NaOH also reacted with the Sn2+ ions and accelerated their hydrolysis; hence, the solution turned slightly white at first. Oxidation of the Sn(OH)2 precipitates occurred under the high-pressure, high-temperature (180 °C) conditions. Following the so-called "oriented attachment" mechanism, excess OH− ions preferentially attached to the (110) planes of rutile SnO2 and bound relatively weakly to (001) [27,28]. With control of the pH value (pH = 13), the basic units gradually aggregated to form the SnO2 nanosheets, which grew along the [110] direction. In the secondary hydrothermal process, urea was used to ensure the homogeneous precipitation of Ni(OH)2 on the surface of the 2D SnO2 nanosheets. After annealing at 500 °C in air, the NiO nanoparticle-decorated SnO2 nanosheets were obtained.
The crystal structures of the pure and NiO-decorated SnO2 samples were analyzed by X-ray diffraction (XRD), as shown in Fig. 1d. It should be noted that the peaks for Cu and C shown in Fig. 1e originated from the copper grid of the TEM specimen. Figure 2a-c display scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the pure SnO2 sample. The nanosheets were in the size range of 100-500 nm with a smooth surface morphology. Compared with the pure SnO2 nanosheets, the 3 mol% NiO-decorated nanosheets (Fig. 2d-f) had rough surfaces and diverse shapes, which might be due to the NiO decoration. More details of the morphologies of the SnO2 samples with NiO decoration amounts of 1 mol%, 5 mol%, and 10 mol% are shown in Fig. S1. It can be clearly observed that the 2D nanosheet structure of pure SnO2 was well maintained in all the samples.
To further confirm the decoration of the NiO nanoparticles, we investigated the SnO2-3Ni nanosheets with high-resolution TEM (HRTEM), as shown in Fig. 2f. The HRTEM image demonstrates the presence of independent phases of NiO nanoparticles on the surface of the SnO2 nanosheets. Lattice fringes with d-spacings of 0.242 nm and 0.335 nm were observed, which match well with the (111) plane of NiO and the (110) plane of rutile SnO2, respectively [5].
X-ray photoelectron spectroscopy (XPS) was conducted to further investigate the surface compositions and chemical states of the samples. The Sn 3d core-level spectra are shown in Fig. 2h, and the peaks are consistent between the two samples. The two peaks at 485.9 eV and 494.3 eV were attributed to the spin-orbit components Sn 3d5/2 and Sn 3d3/2, respectively, corresponding to Sn4+ in a tetragonal rutile structure. The same binding energy of Sn 3d in the two samples suggests the formation of NiO/SnO2 rather than Ni-doped SnO2. The core-level Ni 2p spectra are shown in Fig. 2i, where the peaks at 855.3 eV and 872.9 eV were assigned to Ni 2p3/2 and Ni 2p1/2, respectively, and a spin-orbit splitting of 17.6 eV can be seen between the Ni 2p3/2 and Ni 2p1/2 peaks. The Ni 2p3/2 peaks were attributed to NiO or a Ni2+ pyramidal symmetry, according to previous literature reports [5,31]. Based on the findings above, the core-level Ni 2p spectra further confirmed the formation of the NiO decoration on the SnO2 nanosheets.
Gas-sensing properties
To determine the optimum operating temperature, the responses of the sensors based on pure and NiO-decorated SnO2 nanosheets to 100 ppm ethanol were investigated from 200 to 320 °C, as shown in Fig. 3a. For all sensors, the response first increased, reached a maximum value at an optimum operating temperature, and then decreased with increasing temperature. Evidently, the optimum operating temperature of all the sensors was approximately 260 °C. It is worth noting that all the NiO-decorated sensors exhibited significantly improved ethanol sensing properties compared with the pure SnO2-based sensor. In particular, the SnO2-3Ni sensor exhibited the best performance of the samples considered in this study, with a high response of 153 achieved at 260 °C. We also noticed that an excessive amount of NiO decoration resulted in a decrease in the response. The responses of the sensors based on SnO2, SnO2-1Ni, SnO2-3Ni, SnO2-5Ni, and SnO2-10Ni (at 260 °C) were 28, 107, 153, 87, and 65, respectively. Figure 3b displays the responses of the pure SnO2 and SnO2-3Ni-based sensors to ethanol at concentrations ranging from 5 to 10,000 ppm. The response of the pure SnO2 sensor increased significantly at ethanol concentrations below 500 ppm and then tended to saturate at 2000 ppm. In comparison, the response of the SnO2-3Ni sensor increased rapidly in the range of 5-2000 ppm and then continued to increase with ethanol concentration up to 10,000 ppm, which suggests a higher upper limit of detection. The dynamic response curves are shown in Fig. 3c. The response increased sharply once the sensor was exposed to ethanol and returned to its original value after re-exposure to air. Another critical factor in meeting the practical demands required of gas sensors is selectivity among different gases. As shown in Fig. 3d, the sensor responses to various gases were measured at 260 °C with a fixed concentration of 100 ppm. All the sensors showed the highest response to ethanol among the six gases. For instance, the responses of the SnO2-3Ni sensor were 153, 46.5, 18.0, 13.9, 9.0, and 2.7 to ethanol, methanol, acetone, acetic acid, ammonia, and toluene, respectively. In other words, the SnO2-3Ni sensor demonstrated good selectivity for ethanol gas.
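Although the study reports discrete response values, response-concentration curves like those in Fig. 3b are often summarized with the empirical power law S = a·C^b commonly used for MOX sensors. The sketch below fits that form to hypothetical points (not the measured data of this work) and inverts it to estimate concentration from a measured response.

```python
import numpy as np

# Fit S = a * C**b in log-log space. S is the response Ra/Rg and C is the
# ethanol concentration in ppm; the points below are illustrative only.
conc = np.array([5, 20, 100, 500, 2000], dtype=float)   # ppm
resp = np.array([12, 40, 153, 420, 900], dtype=float)   # Ra/Rg

b, log_a = np.polyfit(np.log10(conc), np.log10(resp), 1)
a = 10**log_a
print(f"S ≈ {a:.2f} * C^{b:.2f}")

def estimate_ppm(response: float) -> float:
    """Invert the calibration to estimate concentration from a response."""
    return (response / a) ** (1.0 / b)

print(f"Response of 100 -> ~{estimate_ppm(100):.0f} ppm")
```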
Reproducibility and long-term stability are important requirements for the practical application of gas sensors. Figure 3e displays the response curve of the SnO2-3Ni sensor towards 50 ppm ethanol over four continuous measurement cycles at 260 °C. The response curves repeated well during the four cyclic measurements, reflecting good reproducibility. In addition, the response values of all the sensors were measured over four weeks. As shown in Fig. 3f, all the response values remained near their initial values with little fluctuation during the 4-week measurement period. The response of the SnO2-3Ni sensor was maintained at 143 after 4 weeks.
The effect of humidity is a major concern for the performance and stability of SnO2-based gas sensors [2,14,24]. As shown in Fig. 4a, the response of the sensor based on SnO2-3Ni maintained 71% of its initial value when the relative humidity increased from 20% to 80% RH, while that of the sensor based on pure SnO2 decreased to 32%. This comparison indicates that the resistance of a gas sensor to a humid environment can be significantly improved with the help of the NiO nanoparticles. To investigate the impact of humidity on the sensors based on NiO-decorated SnO2 nanosheets, the SnO2-3Ni sensors were analyzed by electrochemical impedance spectroscopy (EIS) under various humidities at 260 °C (the optimum operating temperature) during the measurement. As shown in Fig. 4b, the semicircles were fitted with an equivalent QR model (shown in the inset of Fig. 4b). The value of R1 extracted from the semicircles was influenced by the humidity. Additionally, Q is a constant phase element that remained almost unchanged during the investigation, and R2 is the contact resistance during the measurement, which is far smaller than R1. With increasing RH, R1 decreased as the water molecules reacted with the adsorbed oxygen species. The EIS plots in Fig. S2 show the same tendency, which confirms our assumption and the QR model. The major difference is that the resistance of the SnO2-3Ni was much larger than that of the pure SnO2. As mentioned for Fig. 4a, the SnO2-3Ni sensor maintained a high response to ethanol in an environment with high relative humidity. Figure S3 also shows that the moisture resistance of the SnO2-3Ni sensor was mainly determined by its resistance change in air (Ra). The EIS plots can directly indicate a change in Ra and, to some extent, reflect the moisture resistance of the gas sensors.
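A minimal sketch of the equivalent-circuit impedance implied by the QR model follows, assuming the standard constant-phase-element form Z_CPE = 1/(Q(jω)^n), with R1 in parallel and the contact resistance R2 in series. All parameter values are hypothetical, chosen only to trace a semicircle like those in Fig. 4b.

```python
import numpy as np

def z_qr(freq_hz, R1=1e6, R2=50.0, Q=1e-9, n=0.9):
    """Impedance of R2 in series with (R1 parallel to a CPE)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_cpe = 1.0 / (Q * (1j * omega) ** n)          # constant phase element
    return R2 + (R1 * z_cpe) / (R1 + z_cpe)        # parallel combination + R2

freqs = np.logspace(0, 6, 7)                       # 1 Hz to 1 MHz
for f, z in zip(freqs, z_qr(freqs)):
    print(f"{f:9.0f} Hz  Z' = {z.real:12.1f}  -Z'' = {-z.imag:12.1f}")
```

Plotting -Z'' against Z' over a dense frequency sweep reproduces the depressed semicircle characteristic of a CPE (n < 1), whose low-frequency intercept tracks R1 and hence the humidity-dependent resistance discussed above.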
Gas-sensing mechanisms
The gas-sensing mechanism of SnO2 (an n-type MOX) is generally explained as a resistance change resulting from gas adsorption-dissociation on the surface of the sensing material. The adsorbed oxygen species capture electrons from the conduction band, forming electron depletion layers that raise the sensor resistance in air. Once exposed to ethanol vapor, the ethanol molecules react with the adsorbed oxygen species, which thins the electron depletion layers and decreases the resistance. Figure 5 compares the ethanol sensing mechanisms of the NiO-decorated SnO2 nanosheets with those of the pure SnO2 nanosheets. As previously reported, p-n heterojunctions at the interface between the NiO and SnO2 bend the bands of the p-type and n-type semiconductors in the depletion layers, resulting in the equalization of the Fermi levels [5,16,32,33]. In air, both electrons in the conduction band of SnO2 and holes in the valence band of NiO ionize the adsorbed oxygen molecules, which broadens the electron depletion layers on the surface of the SnO2 nanosheets and the hole accumulation layers on the surface of the NiO nanoparticles. It should be noted that the NiO-decorated sensors presented a higher sensor resistance in air (Ra) than the pure SnO2-based sensors at the same operating temperatures. When the sensors were exposed to ethanol gas, the electrons generated by the reaction between the ethanol molecules and the oxygen species passed through the NiO/SnO2 interface owing to the p-n heterojunction. The electron depletion layers and hole accumulation layers became narrower, which led to a broader conductive channel in the SnO2 nanosheets and decreased the sensor resistance (Rg).
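In textbook treatments of n-type MOX sensors, the resistance modulation described here is often cast as a double-Schottky-barrier conduction model in which G ∝ exp(-eVb/kT). The sketch below uses that generic picture, with assumed barrier heights, merely to show how a modest barrier change yields responses of the observed magnitude; it is not a model fitted in this work.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def response_from_barriers(vb_air_eV: float, vb_gas_eV: float, T_K: float) -> float:
    """Ra/Rg = G_gas/G_air = exp((Vb_air - Vb_gas) / kT) in the
    grain-boundary barrier picture; barrier heights are assumptions."""
    return np.exp((vb_air_eV - vb_gas_eV) / (K_B * T_K))

T = 260 + 273.15  # operating temperature of this study, in kelvin
# An assumed barrier drop of 0.23 eV upon ethanol exposure gives ~150:
print(f"Ra/Rg ≈ {response_from_barriers(0.55, 0.32, T):.0f}")
```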
We can also see that an excessive amount of NiO decoration led to a decrease in the response. This may be explained as follows: (1) an excessive amount of NiO further broadens the electron depletion region between the NiO nanoparticles and the SnO2 nanosheet, making it difficult to modulate the electron transfer in the SnO2 nanosheets; (2) as a p-type MOX, an excessive amount of NiO also captures some free electrons during the ethanol sensing process, which hinders the decrease in Rg. Consequently, an appropriate amount of NiO is of great importance for promoting the sensor performance of NiO/SnO2. On the other hand, the selectivity of a sensor is always affected by the operating temperature (or determined by the ratio of the adsorbed oxygen species to the target gases). In this work, the NiO-decorated SnO2 sensors showed a strong catalytic capacity toward ethanol at 260 °C, which merits further discussion [34]. In addition, NiO can act as a catalyst to facilitate the oxidation reaction on the surface of the SnO2 nanosheets [23,33]. The amount of oxygen adsorbed on NiO is markedly larger than that on SnO2 due to charge compensation through the oxidation of Ni2+ to Ni3+ [26]. Considering the more efficient carrier regulation enabled by the NiO decoration, the NiO-decorated SnO2 nanosheets indeed exhibited improved ethanol sensing properties.
When the sensor operated in a high-RH environment, many oxygen species were adsorbed onto the NiO nanoparticles, and these interacted with the water molecules, preserving a good response to ethanol. Moreover, NiO is more capable of adsorbing water molecules than SnO2 [25,26]. Therefore, the SnO2 nanosheets decorated with NiO nanoparticles maintained an excellent ethanol sensing performance with little response loss in a high-RH environment, owing to the NiO-SnO2 p-n heterojunctions and the increased oxidation reaction facilitated by the NiO decoration.
Conclusions
In summary, tetragonal rutile SnO2 nanosheets decorated with NiO nanoparticles were successfully prepared by a template-free two-step hydrothermal method. The SnO2 nanosheets decorated with NiO nanoparticles exhibited excellent sensing performance for ethanol detection. With an optimum NiO decoration amount of 3 mol%, a high response of 153 to 100 ppm ethanol gas was achieved at 260 °C, compared to 28 for the sensor with the pure SnO2 nanosheets. All the sensors demonstrated good selectivity for ethanol over other reducing gases (methanol, acetone, acetic acid, ammonia, and toluene), good reproducibility, and excellent long-term stability. These findings were attributed to the p-n junctions forming between the NiO nanoparticles and the SnO2 nanosheets. The SnO2-3Ni sensor also exhibited high moisture resistance in a high-RH environment. Hence, SnO2 nanosheets decorated with NiO nanoparticles are promising candidates for ethanol sensing applications.
Synthesis of NiO-decorated SnO2 nanosheets
All the reagents were of analytical grade and were used without further purification. NiO-decorated SnO2 nanosheets were obtained by a two-step hydrothermal process, as illustrated in Fig. 1a. In the first step, 6 mmol of SnCl2·2H2O was dissolved in 20 mL of deionized water. The solution was then adjusted to pH = 13 with 0.4 M NaOH solution. The mixture was stirred for 30 min and transferred into a 50 mL Teflon-lined stainless steel autoclave. The autoclave was sealed, kept in an oven at 180 °C for 12 h, and cooled naturally to room temperature. The SnO2 nanosheets were collected by centrifugation, washed successively with deionized water and absolute ethanol several times to remove any residual ions, and finally dried at 80 °C overnight [27]. In the second step, the as-obtained powder (0.1 g) was fully dispersed in 20 mL of deionized water by sonication. A certain amount of nickel chloride (NiCl2, 0.2 M) solution and urea (molar ratio NiCl2:urea = 1:10) were added to the above suspension under continuous magnetic stirring. The mixture was then transferred into the autoclave again and maintained at 80 °C for 6 h. The final product was collected and washed as described previously and calcined at 500 °C for 2 h in air. For comparison, SnO2 nanosheets with different contents of NiO (1, 3, 5, and 10 mol%) were prepared and are referred to as SnO2-1Ni, SnO2-3Ni, SnO2-5Ni, and SnO2-10Ni, respectively.
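A back-of-envelope check of the Ni quantities implied by this recipe is sketched below, assuming "x mol% NiO" means x mol Ni per 100 mol SnO2 (the paper does not state the convention explicitly); the molar mass is a standard value and the volumes are rounded estimates.

```python
# Estimate the volume of 0.2 M NiCl2 stock needed per 0.1 g batch of SnO2
# nanosheets for each nominal decoration level.
M_SNO2 = 150.71          # g/mol, molar mass of SnO2
mass_sno2 = 0.1          # g of SnO2 powder per batch
c_nicl2 = 0.2            # mol/L NiCl2 stock solution

for x in (1, 3, 5, 10):                    # target mol% NiO
    n_sno2 = mass_sno2 / M_SNO2            # mol SnO2
    n_ni = n_sno2 * x / 100.0              # mol Ni required
    v_ml = n_ni / c_nicl2 * 1000.0         # mL of stock solution
    print(f"{x:2d} mol%: {n_ni*1e6:6.1f} umol Ni -> {v_ml:5.3f} mL stock")
```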
Characterization

X-ray diffraction (XRD) patterns were recorded on an X-ray diffractometer (Rigaku Smartlab) using Cu Kα radiation. The morphologies of the samples were characterized by scanning electron microscopy (SEM, Zeiss Gemini) and high-resolution transmission electron microscopy (HRTEM, FEI Tecnai G2 F30), where the high-resolution transmission electron microscope was equipped with energy dispersive X-ray spectroscopy (EDX). X-ray photoelectron spectroscopy (XPS) was carried out on an ESCALAB 250Xi.
Fabrication and sensor measurement
Gas-sensing measurements were performed on a commercial WS-30B system (Weisheng Instruments Co., Zhengzhou, China). Figure 1b displays a schematic diagram of the ceramic tube device used in our gas sensing measurements. Two ring-shaped Au electrodes were pasted at each end of the Al2O3 tube as the testing electrodes, and each Au electrode was connected to two Pt wires. A Ni-Cr coil was placed inside the tube to control the operating temperature. Figure 1c displays a photograph of the as-fabricated sensor with the SnO2-based materials coated on the Al2O3 tube. In brief, the as-obtained products were mixed with a proper amount of binder (ethyl cellulose:terpineol = 10:90 wt%) and pasted onto the Al2O3 tube [35]. After drying at 80 °C, all the sensors were heated at 400 °C for 2 h in air. During the tests, the operating temperature was varied from 200 to 320 °C at a constant humidity of 20% RH. The gas response is defined as Ra/Rg (Ra: sensor resistance in air; Rg: sensor resistance in the target gas). Impedance measurements were performed with an E4990A impedance analyzer (Agilent Technologies, Inc.). The heating power was supplied by a PWS2721 DC power supply (Tektronix, Inc.). Different RH conditions were provided by saturated salt solutions at room temperature; specifically, 11.3%, 23.1%, 33.1%, 43.2%, 55.9%, 69.9%, 75.5%, 85.1%, and 97.6% RH were generated by saturated solutions of LiCl, CH3COOK, MgCl2, K2CO3, Mg(NO3)2, KI, NaCl, KCl, and K2SO4, respectively [36].
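Given the definition S = Ra/Rg above, extracting the response and a 90% response time from a logged resistance trace is straightforward. A sketch with a synthetic trace follows; the gas-on window, sampling rate, and resistance levels are all invented for illustration.

```python
import numpy as np

# Synthetic resistance trace: gas on between 60 s and 180 s (idealized step).
t = np.linspace(0, 300, 601)                        # s, 0.5 s sampling
R = np.where((t > 60) & (t < 180), 2.0e4, 3.0e6)    # ohm

Ra = R[t < 60].mean()                # baseline resistance in air
Rg = R[(t > 60) & (t < 180)].min()   # steady resistance in the target gas
print(f"S = Ra/Rg = {Ra / Rg:.0f}")

# Response time: time to reach 90% of the total resistance change after gas-on.
target = Ra - 0.9 * (Ra - Rg)
on = t > 60
t_resp = t[on][np.argmax(R[on] <= target)] - 60.0
print(f"t90 response ≈ {t_resp:.1f} s")
```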
"Materials Science"
] |
Modeling and Simulation of Ballistic Penetration of Ceramic-Polymer-Metal Layered Systems
Numerical simulations and analysis of ballistic impact and penetration by tungsten alloy rods into composite targets consisting of layers of aluminum nitride ceramic tile(s), polymer laminae, and aluminum backing are conducted over a range of impact velocities on the order of 1.0 to 1.2 km/s. Computational results for ballistic efficiency are compared with experimental data from the literature. Simulations and experiments both demonstrate a trend of decreasing ballistic efficiency with increasing impact velocity. Predicted absolute residual penetration depths often exceed corresponding experimental values. The closest agreement between model and experiment is obtained when polymer interfaces are not explicitly represented in the numerical calculations, suggesting that the current model representation of such interfaces may be overly compliant. The present results emphasize the importance of proper resolution of geometry and constitutive properties of thin layers and interfaces between structural constituents for accurate numerical evaluation of performance of modern composite protection systems.
Introduction
Modern protection systems often consist of layers of ceramic, metallic, and/or polymer-based components. Interfaces between layers may strongly influence the performance of such systems under ballistic impact. However, the importance of interfacial characteristics-for example, interface thickness, material type, and bonding strength-is not fully understood in many cases. Furthermore, the accuracy of available computational tools to assess such effects has not heretofore been thoroughly quantified. The purpose of this study is the assessment of one computational tool-with typical/default user options enabled-for modeling ballistic impact and penetration of a layered target consisting of one or more ceramic tiles backed by a thick metallic plate, with thin layers of polymer between the tiles in some cases. The current focus is the evaluation of the fidelity of the existing material models (including corresponding property parameters) and related numerical methods; modification of constitutive models or calibration of user-defined parameters to best match experimental ballistic results is beyond the scope of the present study.
As discussed in detail later, the penetrator-target configuration studied in this work duplicates that featured in the ballistic experiments of Yadav and Ravichandran [1]. Prior to description of the specific problem investigated here and in [1], an overview of the literature on the subject is warranted. Other experiments on ballistic impact and penetration of ceramic targets with various interlayers and/or backing materials include those described in [2][3][4][5]. Analytical models used to describe and/or predict ballistic penetration and possible perforation of such systems include those presented in [6][7][8]. Numerical simulations invoking finite element methods, for example, those of ballistic impact of ceramic systems, are described in [9][10][11]. Principles of dimensional analysis applied to relate properties and performance of armor ceramics are developed in [12,13]. Comprehensive descriptions of terminal ballistics with applications to brittle solids can be found in several additional lengthy references [14][15][16].

Figure 1: Ballistic problem (a) projectile and target (three tiles) [1]; (b) finite element mesh.
Problem Statement
In particular, the penetrator-target configuration simulated in this work replicates that examined in the experiments of [1].
As shown in Figure 1(a), a WHA (tungsten heavy alloy) penetrator, cylindrical in shape with a flat nose, impacts a target at velocities ranging from ≈1000 m/s to ≈1200 m/s at null obliquity. The respective length and diameter of the penetrator are 50.6 mm and 8.43 mm (L/D = 6).
The target consists of one, three, or six tiles of aluminum nitride (AlN), an isotropic polycrystalline armor ceramic. The total thickness of the tile(s) is 38.1 mm in all cases. A thin polyurethane laminate separates neighboring tiles in the experiments when the target contains multiple tiles. Ballistic performance of the ceramic-polymer system (or a single tile in some cases) is quantified by the residual penetration depth into a 6061-T6 aluminum (Al) backing block of thickness 76.2 mm, which was sufficient to fully stop the penetrator in all reported experiments [1].
The main result ascertained from the experimental study was that ballistic efficiency was highest (best) for three tiles each of thickness 12.7 mm, intermediate for a single tile of thickness 38.1 mm, and lowest (worst) for six tiles each of thickness 6.35 mm [1]. Lateral tile dimensions were 101.6 mm × 101.6 mm. It was speculated that the soft polymer layers in the three-tile configuration enabled dispersion of the initial, primary compressive shock wave that caused more severe damage in the single-tile configuration. On the other hand, bending and tensile failure modes were posited to strongly and negatively influence the penetration resistance of the six-tile configuration, offsetting any benefits obtained by dispersion or attenuation of the initial compressive shock attributed to the presence of compliant polymer layers and weak interfaces.
The computational tool implemented in this study is the EPIC (Elastic Plastic Impact Calculation) finite element code [20] (2013 release). This numerical analysis tool was chosen for two primary reasons: (i) its existing library of material constitutive models and property database are extensive and were thought to be sufficient for the representation of the behaviors of each component (i.e., the ceramic, polymer, and metals as listed in Table 1), and (ii) its graphical user interface permits rapid generation of finite element meshes for ballistic penetration simulations of layered targets, as shown, for example, in Figure 1(b).
Mathematical Theory and Numerical Methods
Given initial and boundary conditions, the finite element method for dynamic analysis in a Lagrangian framework seeks solutions of the governing equations of continuum mechanics-conservation of mass, momentum, and energy-written, respectively, in local form as [21]

∂ρ/∂t + ρ ∇ ⋅ v = 0, ρ ∂v/∂t = ∇ ⋅ σ, ρ ∂e/∂t = σ : D. (1)

Mass density is ρ; the particle velocity vector is v = ∂u/∂t; internal energy per unit mass is e; the spatial gradient operator is ∇; partial time derivatives ∂/∂t are taken with respect to fixed material coordinates (i.e., material time derivatives). For a general finite deformation mechanics problem involving an elastic-inelastic solid, the deformation gradient F and volume ratio J obey, with ∇0 being the reference gradient operator, 1 being the second-order unit tensor, and u being the particle displacement vector,

F = 1 + ∇0 u, J = det F. (2)

The deformation gradient is decomposed multiplicatively into elastic (superscript e) and inelastic or plastic (superscript p) parts: F = F^e F^p. Assuming small elastic deformation, or assuming an additive decomposition of the rate of stretching into elastic (D^e) and inelastic (D^p) parts in the spatial frame independent of such a multiplicative split, the spatial velocity gradient can be written, with W being the skew spin tensor,

∇v = D + W = D^e + D^p + W. (3)

The Cauchy stress tensor σ is symmetric and can be split into deviatoric (σ') and hydrostatic parts, letting p denote pressure; the scalar (Mises) effective stress σ̄ is defined in the following equation:

σ = σ' − p 1, p = −(1/3) tr σ, σ̄ = [(3/2) σ' : σ']^(1/2). (4)

Denoting by C a state-dependent tangent elastic modulus tensor of order four, the objective Jaumann rate of Cauchy stress obeys a general constitutive equation of the form

σ∇ = C : D^e. (5)

The state of stress, temperature T (or internal energy e), cumulative inelastic deformation ε̄, and cumulative damage D generally depend on the history of the displacement gradient at each material point via a constitutive model that depends on material type. In practice, for the isotropic solids considered herein, (5) is replaced by distinct constitutive equations for pressure (an equation-of-state depending on J and e) and deviatoric stress components. When the material deforms elastically, σ̄ < Y, where Y is the effective strength of the solid that generally depends on strain, strain rate, temperature, pressure, and damage. When plastic deformation occurs, the yield condition σ̄ = Y is enforced numerically via a radial return algorithm. Adiabatic conditions are assumed in (1), a standard practice for dynamic impact problems, leading to the energy balance below, with c being the specific heat per unit mass:

ρ c ∂T/∂t = σ : D^p. (6)

The damage variable D is updated via an incremental constitutive equation of the form

ΔD = Δε̄ / ε^f, (7)

where ε^f is the instantaneous equivalent strain to fracture that generally depends on state variables. Numerical discretization of the global forms of the equations in (1) is described in [20], for example. Specific materials considered herein are described by particular constitutive equations and associated model parameters for pressure p, strength Y, and fracture strain ε^f. Such equations are listed next for the material classes covered in Table 1.
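The radial return enforcement of the yield condition mentioned above can be sketched in a few lines. A minimal version for a von Mises solid with constant strength follows; the shear modulus, strength, and strain increment are placeholder values, and the strain-rate, temperature, and damage dependence of the actual EPIC models is omitted.

```python
import numpy as np

def radial_return(s_dev, d_eps_dev, G=26.0e9, Y=0.4e9):
    """One radial-return step. s_dev: deviatoric stress (3x3, Pa);
    d_eps_dev: deviatoric strain increment (3x3); G: shear modulus;
    Y: constant yield strength."""
    s_trial = s_dev + 2.0 * G * d_eps_dev                    # elastic predictor
    sigma_eff = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))
    if sigma_eff <= Y:
        return s_trial, 0.0                                  # purely elastic step
    scale = Y / sigma_eff                                    # scale back to yield surface
    d_eps_plastic = (sigma_eff - Y) / (3.0 * G)              # equivalent plastic strain
    return scale * s_trial, d_eps_plastic

s0 = np.zeros((3, 3))
de = np.diag([1e-2, -5e-3, -5e-3])                           # traceless (deviatoric) increment
s1, dep = radial_return(s0, de)
print(f"effective stress after return: {np.sqrt(1.5*np.tensordot(s1, s1)):.3e} Pa")
print(f"equivalent plastic strain increment: {dep:.3e}")
```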
For the metallic solids (aluminum backing and tungsten rod), constitutive equations of Mie-Gruneisen and Johnson-Cook form dictate the pressure, strength, and failure behaviors:

p = K1 μ + K2 μ^2 + K3 μ^3 + Γ ρ e, μ = ρ/ρ0 − 1; (8)

Y = (A + B ε̄^n)(1 + C ln ε̇*)(1 − T*^m); (9)

ε^f = [D1 + D2 exp(D3 σ*)](1 + D4 ln ε̇*)(1 + D5 T*), σ* = −p/σ̄. (10)

Here, ε̇* is a dimensionless, normalized total strain rate, T* is the homologous temperature, Γ is the Gruneisen coefficient, and A, B, C, n, and m are other material parameters calibrated from experimental data, as are the failure constants D1 through D5.
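A direct evaluation of the strength relation (9) is shown below; the parameter values are illustrative placeholders loosely typical of a tungsten heavy alloy, not the calibrated constants of the EPIC material library.

```python
import numpy as np

def jc_strength(eps_p, eps_rate_norm, T_hom, A=1.5e9, B=0.18e9, n=0.12,
                C=0.016, m=1.0):
    """Johnson-Cook flow stress, Y = (A + B*ep^n)(1 + C*ln(rate*))(1 - T*^m).
    All parameter values here are hypothetical placeholders."""
    return (A + B * eps_p**n) * (1.0 + C * np.log(eps_rate_norm)) * (1.0 - T_hom**m)

# Strength at 10% plastic strain, a normalized strain rate of 1e4,
# and a homologous temperature of 0.2:
print(f"Y ≈ {jc_strength(0.10, 1.0e4, 0.2):.3e} Pa")
```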
For the aluminum nitride ceramic, the pressure and strength equations take the Johnson-Holmquist form [17]:

p = K1 μ + K2 μ^2 + K3 μ^3 + Δp. (11)

Here, Δp is the pressure increment due to bulking and β is a material parameter, as are other terms with i and f subscripts that may take different values for intact and comminuted material [17]. The model also accounts for a possible phase change, and corresponding details can be found in [17].
For the polymer, specifically a crushable polyurethane foam, p_c denotes the crush pressure beyond which the pressure-volume response is nonlinear, with μ_c being the corresponding (elastic) volumetric strain. In (12), ε_v^f is a material parameter describing the volumetric inelastic strain at failure, which also enters a modified form of (7) accounting for volume change as well as cumulative deviatoric plastic strain. Further details can be found in [2,18]. As shown in Table 2, cases with and without polymer layers were simulated. In the former, the thickness of the polymer layers was restricted by constraints imposed by the mesh generator to a minimum value of 1.054 mm, about four times thicker than the value of 0.254 mm tested experimentally [1]. Resolution of the latter very small thickness would require extremely small finite elements, which in turn would drastically increase computational cost through time step reductions imposed by the Courant condition [20], written explicitly later in (14).
Material models were selected from code library options that best matched those of the experiments; details can be found in Table 1. A notable discrepancy is that the density of the polyurethane used in the experiments is substantially larger (by a factor of 3.8) than that of the densest polyurethane foam among the available constitutive models. Default options for element failure were imposed in all simulations: tetrahedral elements were eroded [22] when scalar effective strains exceeded a value of 1.5. Nodal masses were conserved upon element erosion, but strength and pressure were zeroed for failed/eroded elements. Frictionless contact between projectile and target was imposed by default along slide-lines. Interfaces were assigned one of two conditions: (i) tied bonding, corresponding to shared nodes and perfect coherence, or (ii) free contact, corresponding to duplicate nodes along distinct, interacting frictionless surfaces. In some simulations involving multiple tiles, the polymer layers were excluded. The very thin coating of epoxy used to glue the rearmost tile to the backing block in the experiments was not modeled explicitly. Far-field boundary conditions corresponded to free surfaces; that is, the targets were unconfined as in the experiments, though effects of interaction with the mounting apparatus were necessarily excluded in the simulations to maintain a reasonable problem size.
Prior to simulations of the ceramic-polymer-metallic targets, simulations of penetration of the bare backing metal were conducted, similar to those reported experimentally [1]. The thickness of the bare metal target was not listed in the experimental study; a value of 6 was used in the simulations, ensuring independence of the residual penetration depth P0 from the target thickness. A simulation time of 1.0 ms was sufficient for cessation of motion of the residual eroded projectile mass relative to that of the target. Impact velocities of 1030, 1100, and 1160 m/s were considered.
Next, numerical simulations of the layered targets were conducted for the same three impact velocities, as listed in Table 2. Ballistic efficiency η is defined in terms of P and P0, where P is the residual penetration depth into the aluminum backing behind the interface between the backing and the rearmost aluminum nitride tile, and P0 is the residual penetration depth into the bare backing at the same impact velocity V.
When the projectile completely penetrated the backing metal thickness of 76.2 mm [1], a value of zero was assigned to η. In such cases, the residual velocity of the penetrator at a time of 1.0 ms was recorded (see Table 2) and used as a metric for performance comparisons. Tetrahedral finite element meshes were generated using the EPIC preprocessor, most often with the default fine mesh setting and expanded grid, with the latter feature leading to progressive mesh coarsening with increasing distance from the penetration zone. This mesh density was found to yield sufficiently mesh size-independent results for residual penetration depths and the resulting ballistic efficiency η; in fact, an even coarser medium mesh setting was usually deemed sufficient but was not used. Refer to Table 3 for details comparing fine and medium mesh densities for particular simulations involving one ceramic tile with free bonding, impacted at 1160 m/s. The verification that the results are independent of the time step restriction imposed for numerical integration of the rate (e.g., linear momentum and stress update) equations is also shown in Table 3 for the same target configuration. In particular, Δt_max is the maximum allowable time step in a corresponding simulation, which in all cases is smaller than that imposed by the Courant condition necessary for stability of solutions obtained by explicit numerical integration of the equations of motion [20]:

Δt ≤ h_min / c, (14)

with h_min and c being the minimum lineal element size in the meshed domain and the effective longitudinal sound speed, respectively. Numerical simulations were executed in parallel mode on 16 processors using the available 2013 version of the EPIC code on the Spirit cluster at the US Air Force Research Laboratory (AFRL). Wall-clock execution times were always less than 24 hours.
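The Courant restriction (14) can be checked directly. The sketch below uses nominal handbook elastic constants for 6061 aluminum and an assumed minimum element size, not the actual mesh metrics of the simulations.

```python
import math

# Nominal textbook values for 6061 aluminum (assumptions, not EPIC inputs):
E = 68.9e9        # Pa, Young's modulus
nu = 0.33         # Poisson's ratio
rho = 2700.0      # kg/m^3, density

# Constrained (longitudinal) elastic wave speed in a solid:
c_l = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))

h_min = 0.5e-3    # m, assumed minimum element size in the penetration zone
dt_max = h_min / c_l
print(f"c_l ≈ {c_l:.0f} m/s, Courant dt_max ≈ {dt_max*1e9:.1f} ns")
```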
Results
Predictions are compared with experiments for the bare backing metal (Al) in Figure 2(a), wherein a linear fit to the data was sufficient to describe computational results for the three impact velocities considered in each case:

P0/L = b0 + b1 V, (15)

with particular values of the dimensionless constant b0 and the constant b1 [s/m] embedded within Figure 2(a). The deformed finite element geometry corresponding to residual penetration at 1.0 ms is shown in Figure 2(b); notice that the damaged zone exceeds the penetration depth of the partially eroded projectile in this case.
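A sketch of the fit (15) and of an efficiency computation follows. The penetration depths and velocities are invented placeholders, and η = 1 − P/P0 is an assumed normalization consistent with the tabulated values, since the source's exact definition (13) is garbled in the available text.

```python
import numpy as np

# Linear fit of normalized bare-backing depth versus impact velocity,
# P0/L = b0 + b1*V; b0 is dimensionless and b1 carries units of s/m.
V = np.array([1030.0, 1100.0, 1160.0])        # m/s
P0 = np.array([78.0, 85.0, 91.0]) * 1e-3      # m, hypothetical depths
L = 50.6e-3                                   # m, penetrator length

b1, b0 = np.polyfit(V, P0 / L, 1)             # slope first, then intercept
print(f"P0/L ≈ {b0:.3f} + {b1:.2e} [s/m] * V")

def efficiency(P, P0):
    """Assumed efficiency normalization: eta = 1 - P/P0."""
    return 1.0 - P / P0

print(f"eta = {efficiency(0.029, 0.078):.3f}")  # placeholder depths in meters
```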
Predicted penetration depths significantly exceed experimental values. Reasons for the differences in results cannot be isolated in the present set of complex simulations, but possibilities include the following: the WHA material may be weaker than that depicted by the model, or the Al material may be stronger than that depicted by the model; the erosion criterion invoked in the simulations may be too liberal for the Al or too strict for the WHA; omission of friction and commensurate wear between target and eroding projectile may result in larger penetration depths in simulations than those observed in experiments; and/or far-field boundary conditions may artificially affect depth of penetration results at later computation times in finite element simulations.
Representative results from various target configurations and impact velocities are shown in Figure 3, all corresponding to a solution time of 1 ms. In particular, in Figure 3(a), the penetrator barely defeats the single ceramic tile and resides just inside the metal backing plate (P/P_0 = 0.136 in Table 2). In Figure 3(b), the entire target, including three ceramic tiles, two layers of polymer, and the metal backing plate, has been perforated by the projectile, and all layers of polymer laminate have been highly eroded. The latter result agrees qualitatively with the experimental observation of severe damage in polymer layers of recovered targets [1]. In Figure 3(c), the initially unbonded six ceramic tiles have been shattered by the projectile, which remains lodged at the rear free surface of the aluminum backing at t = 1 ms.
Ballistic efficiencies from simulations and experiments are compared in Figure 4. Note that overlapping data points in Figure 4, for example, those where the efficiency approaches zero in many instances, can be discerned by examining corresponding numerical values listed in Table 2. In Figure 4(a), simulation results for the efficiency of a single ceramic tile exceed those from experiments when the ceramic is perfectly bonded (tied) to the backing plate, while agreement with experiment is closer for free contact between ceramic and backing. For the results of the three-tile configurations shown in Figure 4(b), experimental values of efficiency exceed simulation predictions regardless of numerical bonding representation or inclusion of polymer layers, though the closest agreement is obtained when polymer layers are omitted in the simulations. In Figure 4(c), the same conclusion is drawn for the six-tile configurations; that is, the closest agreement is obtained when the polymer layers are not explicitly represented in the calculations. The simulations do tend to reflect the experimentally observed trend of decreasing ballistic efficiency with increasing impact velocity. When ranked by descending ballistic penetration resistance, experimental results [1] suggest an ordering of three, one, and then six tiles, while simulation results suggest an ordering of one, three, and then six tiles.
Residual velocities from simulations at 1.0 ms for three- and six-tile target configurations are shown in Figures 5(a) and 5(b), respectively. Recall that complete perforation did not occur in any reported experiment [1]. Residual velocities are similar for free interfaces and for tied bonding with polymer, confirming failure and commensurate erosion of the polymer layers, consistent with the efficiency results shown in Figure 4.
Analysis and Discussion
As inferred from examination of solution data in Table 2 and Figures 4 and 5, results suggest that incorporation of compliant polymer layers promotes bending modes and tensile fracture in the ceramic layers, leading to decreased ballistic efficiency relative to simulations wherein polymer is omitted. For example, consider the efficiency and residual velocity predictions for the three-tile configurations impacted at 1030 m/s and listed in Table 2. When bonding is free, ballistic efficiency decreases from 0.628 to 0.518 when polymer interlayers are inserted between the ceramic tiles. When bonding is tied, efficiency decreases to zero, and residual velocity becomes nonzero (specifically, a residual velocity ratio of 0.115). Similar, but not identical, trends are evident for the six-tile configurations, whereby projectile defeat occurs only when polymer is omitted, with nonzero residual velocities reported whenever polymer layers are included. Furthermore, increasing the number of tiles, while decreasing the individual tiles' thickness, exacerbates this weakness of the target package, especially because more polymer layers accompany an increasing total number of tiles. Stress wave propagation for simulations with three tiles at an impact velocity of 1030 m/s is evident in Figure 6, which specifically shows hydrostatic pressure contours (positive in compression) at a time of 0.25 ms after initial impact. The penetration cavity is wider and shallower without polymer (Figure 6(a)), with minor differences in pressure waves emanating from the cavity evident among all cases shown.
As noted already in the context of penetration results for the bare backing metal, the source of discrepancy between model and experiment could not be isolated in these complex multimaterial calculations, but several possibilities can be suggested. Uncertainties in material properties and erosion criteria, omission of contact friction, and possible artifacts of far-field boundary conditions may adversely affect accuracy or precision of results. Another likely source of model discrepancy is the thicker, more compliant polymer representation than that tested experimentally, which would tend to promote target defeat for reasons explained above.
In summary, numerical results listed in Table 2 and shown in Figures 4 and 5 demonstrate how resolution of the geometry and behavior of thin interfaces between layers of stiff material in armor systems strongly affects predicted ballistic efficiency. It follows that representation of interfaces should be carefully considered by the numerical analyst when constructing finite element or finite difference models for performance evaluations of such systems. Concurrent experiments and validation simulations on systems of lower complexity are recommended for future work, such that sources of discrepancy between model and experiment can be more precisely identified. Cohesive zone representations of interfacial separation [23][24][25] offer the potential for more realistic modeling of interfacial physics than the fully bonded or free surface interactions prescribed herein among layers.
Constitutive models with a more rigorous basis in finite deformation kinematics [26] and thermodynamics [21] may enable improvements in descriptions of the bulk behavior of metals [23,27] and ceramics [28], albeit at increased model complexity and computational expense. Phase field models [29] of structural transformations (e.g., for high-pressure phase transitions [17] and fracture in AlN) and nonlocal models for inelasticity and damage mechanisms [30] may also offer improvement over usual continuum mechanical treatments available in simulation codes such as EPIC, for example, potential benefits with regard to regularization of numerical solutions.
Experiments in [1] represented by simulations in the present numerical study do not address potential failure mechanisms observed in all possible kinds of ballistic impact problems. For example, adiabatic shear banding, plugging, and/or petal formation in metallic targets (often thin) reported for other armor systems [14,31-35] are not of primary interest in the present case. Plasticity, fracture, and solid-solid phase transitions are appropriately addressed here for aluminum nitride, but mechanisms prevalent in other brittle targets not relevant here include pore collapse, for example, in concrete targets [34,36] or impacted rocks and minerals [37,38], and stress-induced amorphization, as observed in boron carbide [38,39] and quartz [40]. For these different classes of targets not considered herein or in [1], appropriate constitutive models should always be chosen or constructed to represent dominant failure mechanisms observed in corresponding experiments.
Conclusions
Numerical simulations of ballistic impact and penetration of targets consisting of layers of aluminum nitride ceramic tile(s), polymer laminae, and aluminum backing have been conducted over a range of impact velocities on the order of 1.0 to 1.2 km/s. Results for ballistic efficiency have been compared with experimental data. Predicted residual penetration depths often tended to exceed corresponding experimental values, though simulations and experiments both demonstrated a trend of decreasing efficiency with increasing impact velocity. The closest agreement was obtained when polymer interfaces of small but finite thickness were not explicitly resolved, suggesting that the model representation of such interfaces is overly compliant. Results emphasize the importance of proper resolution of geometry and constitutive properties of thin layers and interfaces in numerical evaluation of the performance of modern composite protection systems.
Figure 2: Penetration into bare aluminum backing material: (a) depth versus impact velocity for simulation and experiment [1]; (b) simulation result at impact velocity of 1030 m/s.
Table 1: Materials and constitutive models.
Table 3: Convergence results for mesh size and time integration.
"Materials Science"
] |
MYC Oncogene Contributions to Release of Cell Cycle Brakes
Promotion of the cell cycle is a major oncogenic mechanism of the oncogene c-MYC (MYC). MYC promotes the cell cycle not only by activating or inducing cyclins and CDKs but also through the downregulation or impairment of the activity of a set of proteins that act as cell-cycle brakes. This review is focused on the role of MYC as a cell-cycle brake releaser, i.e., how MYC stimulates the cell cycle mainly through the functional inactivation of cell-cycle inhibitors. MYC antagonizes the activities and/or the expression levels of p15, ARF, p21, and p27. The mechanism involved differs for each protein. p15 (encoded by CDKN2B) and p21 (CDKN1A) are repressed by MYC at the transcriptional level. In contrast, MYC activates ARF, which contributes to the apoptosis induced by high MYC levels. At least in some cell types, MYC inhibits the transcription of the p27 gene (CDKN1B) but also enhances p27's degradation through the upregulation of components of ubiquitin ligase complexes. The effect of MYC on cell-cycle brakes also opens the possibility of antitumoral therapies based on synthetic lethal interactions involving MYC and CDKs, for which a series of inhibitors are being developed and tested in clinical trials.
Introduction
The oncogene c-MYC (referred to herein as MYC) was the first gene described to encode an oncogenic transcription factor with the ability to transform cells in culture. MYC is overexpressed by different mechanisms in 60-70% of human solid and hematopoietic tumors [1][2][3][4][5]. The MYC family of proteins is composed of three members: c-MYC, N-MYC, and L-MYC. The existence of multiple MYC family members with distinct expression patterns reflects different requirements for MYC during development and in the adult animal, which is consistent with the specific way each gene is deregulated in certain cancer types [6].
MYC is a transcription factor of the helix-loop-helix-leucine zipper (HLH-LZ) family that regulates the activation or repression of many target genes [7,8]. Regulation of transcription by MYC depends on the formation of heterodimeric complexes with the MAX protein [9]. The MYC-MAX heterodimer is the active form, which binds to specific DNA sequences called E-boxes (canonical sequence CACGTG) in the regulatory regions of target genes. The MYC network (also known as the MAX-MLX network) includes other components of the HLH-LZ family such as the MXDs, MNT, MLX, and others, with different functions in gene expression regulation upon binding to E-boxes in the DNA (for recent reviews see [10,11]).
The number of MYC-binding sites revealed by genome-wide technologies ranges between 7000 and 15,000 in different models. Indeed, MYC is bound at one or more sites of the regulatory regions of 10-15% of genes.

Figure 1. (b) Transcriptional activation through MYC-associated complexes. Upper: MYC-MAX heterodimers bind E-box sequences and interact with co-activators such as TRRAP, GCN5, and others; these complexes mediate histone acetylation to transactivate MYC target genes. Middle: CBP/p300 also mediates MYC acetylation and increased stability. Bottom: BRD4 is a reader of acetylated histones and promotes the activity of the P-TEFb complex, composed of Cyclin T1 and CDK9; MYC interacts with P-TEFb, which phosphorylates the C-terminal domain of RNA polymerase II to trigger elongation. (c) Transcriptional repression through MYC-associated complexes. Upper: MYC interacts with MIZ-1, displacing coactivators with HAT activity such as CBP/p300; the MYC/MIZ-1 complex binds to Initiator element (Inr) sequences and recruits the DNA methyltransferase DNMT3A to repress transcription. Middle: the SP1-SMAD complex is repressed by MYC; recruitment of HDAC1 contributes to histone deacetylation near Inr sequences. Bottom: MYC also recruits HDAC3 to E-box sequences, reducing histone acetylation.
We will review here the role of MYC as a cell-cycle brake releaser, i.e., how MYC stimulates the cell cycle mainly through the repression of cell-cycle inhibitors (Figure 2). Cell-cycle progression is regulated by serine/threonine protein kinases composed of a catalytic subunit, or CDK (cyclin-dependent protein kinase), and a regulatory subunit, the cyclin [43,44]. CDK1, 2, 4, and 6 and the A-, B-, E-, and D-type cyclins constitute the major regulators of the mammalian cell cycle. D-type cyclins (D1, D2, and D3) preferentially bind and activate CDK4 and CDK6 at the early G1 phase of the cell cycle, leading to the phosphorylation of the retinoblastoma protein (RB) and the release of the E2F transcription factors [45,46]. Cyclin E1/2-CDK2 complexes in the late G1 phase further phosphorylate RB, allowing the expression of E2F target genes required for the transition to S-phase [47]. Later, CDK2 complexes with Cyclin A2. Cyclin A is required for DNA replication and is expressed through the S and G2 phases. The M-phase transition is regulated by CDK1 activated by B-type cyclins (B1 and B2) [43,48]. CDK inhibitory proteins (CKIs) accomplish an additional level of regulation of the cell cycle. CKIs are divided into two families (Figure 2). The INK4 family (consisting of p16INK4A, p15INK4B, p18INK4C, and p19INK4D) binds and inhibits the CDK4 and CDK6 kinases, impairing their association with D-type cyclins. The CIP/KIP family (consisting of p21CIP1, p27KIP1, and p57KIP2) inhibits progression at every cell-cycle phase upon binding to several already formed cyclin-CDK complexes [49]. CDK inhibitors are involved in the regulation of a variety of biological processes beyond cell-cycle regulation [50], and some of them play important roles in cancer [51].
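The pairings just described can be condensed into a lookup table; the sketch below simply restates the text as a data structure (illustrative only, not drawn from any external database).

```python
# Lookup-table summary of the cyclin-CDK pairings and CKI families described
# above; illustrative only, following the text rather than any database.

CYCLIN_CDK = {
    "cyclin D1/D2/D3": ("CDK4", "CDK6"),  # early G1: RB phosphorylation
    "cyclin E1/E2":    ("CDK2",),         # late G1: further RB phosphorylation
    "cyclin A2":       ("CDK2",),         # S/G2: DNA replication
    "cyclin B1/B2":    ("CDK1",),         # M-phase transition
}

CKI_FAMILIES = {
    "INK4":    {"members": ("p16INK4A", "p15INK4B", "p18INK4C", "p19INK4D"),
                "mode": "bind CDK4/6, blocking cyclin D association"},
    "CIP/KIP": {"members": ("p21CIP1", "p27KIP1", "p57KIP2"),
                "mode": "bind formed cyclin-CDK complexes at every phase"},
}

# e.g., which CDKs do D-type cyclins activate?
print(CYCLIN_CDK["cyclin D1/D2/D3"])  # ('CDK4', 'CDK6')
```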
Figure 2. Impact of MYC on cell-cycle regulation. MYC stimulates cell-cycle progression and cellular proliferation through the regulation of genes related to cell-cycle control. MYC induces positive cell-cycle regulators such as several cyclins, CDKs, and E2F transcription factors (green arrows). Cyclin-CDK complexes phosphorylate RB, releasing E2Fs from the inhibitory interaction with RB and allowing the expression of E2F target genes and progression through the cell-cycle phases. MYC also represses genes encoding cell-cycle inhibitors such as p15, p21, or p27 (red bars) by different mechanisms. The regulatory mechanisms by which MYC antagonizes the activity of cell-cycle inhibitors are detailed in the text.
MYC and the INK4A/ARF/INK4B Locus
The INK4A/ARF/INK4B gene locus is located on chromosome 9p21 in humans and encodes three related proteins: p15INK4B (p15 hereafter), p14ARF in humans or p19ARF in mice (ARF hereafter), and p16INK4A (p16 hereafter). p15 and p16 are characterized by their direct interaction with CDK4 and CDK6, blocking the formation of cyclin D-CDK4/6 complexes and arresting proliferation by preventing phosphorylation of RB and S-phase entry [52]. On the other hand, the ARF protein is unrelated to the INK4 family of CDK inhibitors, but its gene shares exons 2 and 3 with the p16INK4A gene, while the first exon of each gene is totally different. ARF is transcribed from an alternative reading frame (hence its name) within the same locus, and thus the two amino acid sequences lack any similarity. ARF induces cell-cycle arrest in the G1 and G2 phases [53] and/or apoptosis, mainly through the regulation of the ARF/MDM2/p53 apoptotic pathway, although induction of p53-independent apoptosis has also been reported to be mediated by ARF [54,55]. Although activation of the p53 apoptotic pathway is commonly mediated by DNA damage or cellular stress responses, ARF acts as an unusual tumor suppressor, being activated by oncogenic signals such as MYC [56] among others (reviewed in [57]). This response is considered a security measure to avoid aberrant and uncontrolled proliferation due to sustained growth signaling. In fact, the expression of the INK4A/ARF/INK4B locus is lost in a wide range of human tumors (reviewed in [58]). Disruption of exon 2 of INK4A, an alteration that affects both p16 and ARF, makes mice more prone to tumor development. However, specific deletion of the ARF exon 1 in mice led to the same phenotype while harboring intact p16, confirming ARF as a tumor suppressor playing a key role in protecting cells from aberrant proliferation [59]. In agreement, immortalization of primary mouse embryonic fibroblasts (MEFs) normally implies loss of either ARF or p53 [60,61], and MYC can immortalize MEFs [62,63] through a process that is normally accompanied by loss of function of either ARF or p53 [56]. Here we will review the regulation of ARF by MYC and vice versa, as ARF controls MYC's activity to prevent abnormal proliferation and oncogenic transformation.
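The "alternative reading frame" point can be made concrete: shifting the frame by one base regroups the same nucleotides into entirely different codons, which is why ARF and p16 share exons yet share no amino acid sequence. A minimal sketch with an arbitrary placeholder sequence (not the actual INK4A/ARF locus):

```python
# Sketch of why two proteins can share exons yet share no amino acids: the
# same nucleotide sequence read in a shifted frame yields different codons.
# The sequence below is an arbitrary placeholder, not the INK4A/ARF locus.

seq = "ATGGCGCCTCGTGGACTT"

def codons(s, frame):
    """Split a sequence into codons starting at the given frame offset."""
    s = s[frame:]
    return [s[i:i + 3] for i in range(0, len(s) - len(s) % 3, 3)]

print(codons(seq, 0))  # ['ATG', 'GCG', 'CCT', 'CGT', 'GGA', 'CTT']
print(codons(seq, 1))  # ['TGG', 'CGC', 'CTC', 'GTG', 'GAC'] - all different
```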
MYC and p15 INK4B Regulation
The cell-cycle inhibitor p15 arrests proliferation in the G1 phase by specifically inhibiting cyclin D-CDK4/6 complexes [64]. Moreover, high levels of p15 redistribute p27 from cyclin D-CDK4/6 complexes to cyclin E-CDK2 complexes, leading to arrested proliferation [65]. Treatment of lung epithelial cells with TGFβ led to a rapid downregulation of MYC levels, while p15 was highly induced. However, exogenous MYC expression resulted in the inhibition of TGFβ-mediated p15 induction [66]. In fact, MYC inhibits the activation of a reporter gene under the control of the proximal region of the p15 promoter. This region contains the TGFβ responsive element (TGFβ-RE) and the transcriptional initiator site (Inr) [66]. The repression of p15 by MYC occurs through mechanisms that may or may not involve the Inr element. The Inr element consists of a weak consensus sequence located at the transcription start site (TSS) of different promoters, through which MYC is known to exert part of its repression activity (reviewed in [67,68]). Different proteins have been described to cooperate with MYC in the binding to the Inr element, such as YY1, TFII-I, and MIZ-1 [69][70][71]. The zinc-finger protein MIZ-1 recognizes and binds the Inr element of its target genes, promoting their activation, as occurs for INK4B upon TGFβ treatment. MYC-MAX heterodimers impair INK4B expression by interacting with MIZ-1 at the Inr element of its promoter, preventing p300 recruitment by MIZ-1 [35]. TGFβ inhibited the interaction of MYC with MIZ-1, leading to INK4B induction by MIZ-1 through its interaction with SMAD proteins [42]. On the other hand, MYC can repress INK4B expression independently of the Inr element. This mechanism involves the interaction of MYC with SP1 and SMAD proteins. MYC binds to activated SMAD, forming a repressor complex together with SP1 and leading to the inactivation of INK4B expression upon TGFβ treatment [38].
MYC Regulation of ARF Expression
Although MYC is generally related to enhanced proliferation and cell growth, deregulated MYC expression paradoxically triggers apoptosis under cellular stress conditions such as serum deprivation [56,72]. This process takes place mainly through the p53-dependent apoptosis pathway [73,74], although it has been reported to also happen in a p53-independent manner [75] in some cell types. Thus, cells overexpressing MYC are subjected to a high selection pressure to proliferate in the absence of growth factors, under which programmed cell death mechanisms need to be abrogated. MYC-induced apoptosis is mainly mediated by the induction of ARF expression at the mRNA level, leading to the inactivation of MDM2 by its sequestration to the nucleolus and thus to the stabilization and activation of p53. Activation of p53 results in subsequent induction of p21 and other proteins involved in the p53-dependent apoptosis pathway [76]. In fact, p53-null cells showed resistance to MYC-induced apoptosis, while the effect observed in ARF-null cells was less compromised [56]. MYC has been found to induce p53 expression in an ARF-independent manner, although p53-dependent apoptosis was significantly compromised in ARF-null cells [56]. Furthermore, lymphomagenesis induced by MYC in Eµ-MYC transgenic mice [77] selectively inactivates either ARF or p53 in most tumors, with both genes found mutated at similar frequencies [78]. In agreement with previous results obtained in MEFs, Eµ-MYC-derived pre-B cells showed high rates of apoptosis and increased ARF levels, while p53 levels remained constant when compared to control cells. Thus, high rates of spontaneous cell death in this model correlated with ARF activation [78]. Although in most cases ARF and p16 were inactivated by mutations within their shared DNA sequences, the retained expression of non-altered p16 found in some of these tumors brought to light the importance of ARF, but not p16, for B-cell lymphoma development [78]. Thus, loss of ARF attenuates MYC-induced apoptosis in vivo, allowing the prevalence of MYC oncogenic activity and leading to high rates of tumor formation. In agreement, INK4A/ARF−/−-Eµ-MYC mice were more prone to develop lymphomas and displayed apoptotic defects despite the presence of wild-type p53, a phenotype similar to the one observed in p53-null lymphomas [79]. Other studies using mouse models with expression of the MYC oncogene restricted to the epidermis and other epithelial tissues [80,81] showed that apoptosis was nearly completely abrogated in a p53-null background [81] and highly reduced in ARF-null mice [82], consistent with previous studies. Moreover, ARF specifically modulated MYC-mediated apoptosis, while MYC-mediated stimulation of proliferation was not affected in the absence of ARF in the epidermis.
The mechanism by which MYC induces ARF expression remains largely unclear, although it seems to operate indirectly, through the regulation of other factors that directly activate ARF expression. MYC induces FoxO transcription factors, which bind to and regulate the INK4A/ARF locus, activating ARF expression. Thus, constitutive MYC signaling induces both nuclear FoxO levels and ARF expression [83]. On the other hand, the transcription factor E2F1 directly induces ARF [84], although this pathway does not seem to be conserved in the mouse [85]. As MYC is known to directly regulate E2F1 expression, MYC-mediated ARF upregulation through E2F1 has been suggested [86]. MYC has also been reported to modulate ARF protein stability by interfering with ARF ubiquitination and degradation. ARF is very unstable in normal cells, while its degradation is inhibited in cancerous cells. The ubiquitin ligase ULF has been reported to ubiquitylate ARF, leading to its degradation in vitro and in vivo. Furthermore, MYC can interact with ULF, impeding ARF ubiquitination and thus increasing its stability [87]. This control of ARF stability is thought to be a mechanism by which the cell senses and distinguishes between normal and overexpressed MYC. Thus, only at oncogenic MYC levels is ULF-mediated ARF degradation inhibited and the apoptotic response activated [88]. Consistently, physiological levels of MYC did not activate the ARF promoter [89].
ARF-Mediated Regulation of MYC Activity
Apart from the p53-dependent ARF induction of apoptosis and arrested proliferation through MDM2 sequestration, ARF has been proposed to have p53- and MDM2-independent functions to suppress cell proliferation [75]. Moreover, ARF has been suggested to interact with targets other than p53 and MDM2 to inhibit proliferation [54]. ARF was found to interact with MYC and relocalize it from the nucleoplasm to the nucleolus, thus inhibiting MYC-activated transcription and leading to G1 arrest in a p53-independent manner [90]. An ARF mutant lacking the N-terminal domain of the protein failed to interact and colocalize with MYC and thus was not able to inhibit MYC-activated transcription [90]. In contrast, other studies have shown that upon ectopic MYC expression, ARF is relocalized from the nucleolus to the nucleoplasm, where it colocalizes with MYC. The same result was obtained upon activation of MYC-ER, a chimeric protein consisting of MYC fused to the estrogen receptor and activatable by 4-hydroxy-tamoxifen [91]. This discrepancy has been attributed most likely to the different systems used in each study and the different ratios between ARF and MYC levels in each model. Thus, MYC/ARF relocalization is bidirectional. MYC interacts with ARF through two different domains, one through the TAD situated at the N-terminus and the other through the HLH-LZ domain located at the C-terminus of MYC [17]. Although deletion of the C-terminal domain had only a minimal effect on the ARF interaction, depletion of the TAD completely abrogated the MYC-ARF interaction [91]. Notably, ARF antagonizes the SKP2-mediated ubiquitylation of the MYC TAD [92]. A MYC-p14ARF interaction has also been demonstrated and takes place through the MBII of MYC; this interaction leads to inhibition of MYC-induced transcription and nucleolar localization of MYC [93]. Chromatin immunoprecipitation assays showed that ARF was recruited to active MYC target genes, forming complexes with MYC-MAX heterodimers and impairing MYC's transactivating activity without affecting its transrepressing activity [57]. Thus, this mechanism of ARF blocking MYC transactivation impairs MYC-mediated hyperproliferation, probably through ARF-mediated interference with the interaction of the TAD with MYC coactivators [91]. Many target genes repressed by MYC are involved in anti-apoptotic functions. The fact that ARF impairs MYC transactivation activity but does not interfere with MYC repression mechanisms would favor the pro-apoptotic response within the cells upon deregulated MYC activity [94][95][96].
MYC and p21 Regulation
The CIP/KIP cell-cycle inhibitor p21Cip1/Waf1 (p21), encoded by the CDKN1A gene, plays key roles in controlling cellular processes such as proliferation, senescence, cell differentiation, and apoptosis (reviewed in [97,98]). Similar to its relative p27, p21 interacts with cyclin-CDK complexes, inhibiting cell-cycle progression [99,100] in response to different stimuli. p21 is a transcriptional target of p53, essential for p53-induced cell-cycle arrest in the G1 and G2 phases upon DNA damage [101,102]. One of the first pieces of evidence that MYC counteracts p21-mediated cell-cycle arrest was reported by Perez-Roger and colleagues, who showed that MYC promoted p21 sequestration through the induction of D-type cyclins [103]. While a strong RAF signal was found to promote cell-cycle arrest through p21 induction in NIH 3T3-derived cells [104], MYC-ER activation was able to counteract this effect through an increase in cyclin D2-p21 binding proportional to the increase in cyclin D2 expression mediated by MYC [103]. However, MYC-ER activation did not lead to increased cyclin D1 expression in this system, in agreement with the lack of increased binding of p21 to cyclin D1 under these conditions [103]. One of the major mechanisms by which MYC induces S-phase entry relies on MYC's ability to activate cyclin E-CDK2 complexes [13]. Thus, apart from the induction of cyclin E expression (among others), MYC-mediated release of cyclin E-CDK2 inhibition through the induction of cyclin D2 and the consequent sequestration of p21 in cyclin D-CDK4/6 complexes [13] constitutes a remarkably important process in MYC's role as a pro-proliferative agent.
MYC-Mediated p21 Repression by Direct Recruitment to Its Core Promoter Region
The best characterized and most studied mechanism by which MYC is known to counteract the antiproliferative activity of p21 operates at the transcriptional level (Figure 3). In fact, p21 has been reported to be one of the major targets of MYC repression [105]. This regulation of p21 by MYC is a clear example of MYC acting as a transcriptional repressor, an idea that is becoming widely accepted and studied and that seems to account for at least half of MYC's activity as a transcriptional regulator, as revealed by transcriptomic analyses upon enforced MYC expression.
Several mechanisms by which MYC is able to repress transcription have been reported (reviewed in [34,36]); however, further research needs to be performed to better understand how this process takes place. Histone deacetylase recruitment to promoter regions is a well-known mechanism of transcriptional repression. Indeed, treatment with trichostatin A (a histone deacetylase inhibitor) has been shown to induce p21 expression [106]. Different studies have found that MYC-mediated CDKN1A transcriptional repression occurs in an HDAC-independent manner [40,107]. Besides, cells stably expressing the MYC-ER construct repressed the expression of CDKN1A upon MYC-ER activation, even in the absence of de novo protein synthesis. The inhibition of de novo protein synthesis diminishes the possibility that an intermediate protein could be responsible for this effect, meaning that MYC directly triggers p21 repression [40]. The CDKN1A promoter contains three non-canonical E-box sequences, two of them close to the transcription start site (TSS) (−5 to +1 bp and −20 to −15 bp) and another one around 150 bp upstream of the TSS (−162 to −157 bp) (Figure 3a). Whether MYC repression activity relies on MYC's ability to recognize and interact with DNA through E-boxes is not yet determined. In the case of CDKN1A, direct MYC DNA binding has not been reported so far; thus, its activity on the CDKN1A promoter is E-box independent. Different studies reported that a short sequence around the transcription start site (from around −150 to +16 bp) is enough for MYC to repress the CDKN1A promoter's activity [40,41,107]. This promoter region contains several responsive elements, as shown in Figure 3. MYC is recruited to the promoter DNA sequence by interacting with other transcription factors involved in the regulation of CDKN1A expression, with SP1/SP3 and MIZ-1 being the main ones described so far [40,41,108]. TGFβ treatment of murine and human keratinocytes leads to MYC downregulation followed by p21 induction and cell-cycle arrest [107]. Luciferase assays using different CDKN1A promoter fragments revealed that the TGFβ responsive element is not needed for MYC-mediated p21 repression. A luciferase construct containing the region from −62 to +16 bp of the CDKN1A promoter was enough for MYC to mediate promoter repression; thus, MYC exerts its regulation independently of the other elements that act upstream of that sequence, such as p53 or C/EBP [40]. Within the vicinity of the CDKN1A transcription start site that is sufficient for MYC to repress p21 expression, there are multiple SP1 binding sites and a potential Inr sequence. The initiator-binding protein (TFII-I) induces gene transcription from the Inr of certain TSSs, and MYC is known to interact with TFII-I, impeding its activity in other models. However, that was not the case for CDKN1A, as deletion of the Inr sequence (+7 to +16 bp from the TSS) did not affect MYC repression of the CDKN1A promoter in colorectal adenocarcinoma cells. Instead, MYC was found to interact with the SP1 and SP3 transcription factors, which play important roles in the induction of p21 expression [108]. The central part of the MYC protein, from amino acids 143 to 352, is essential for MYC to interact with the zinc-finger domain of SP1 and sufficient to counteract SP1 induction of CDKN1A expression in CaCo cells [40].
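The promoter-mapping logic above reduces to interval containment: which annotated elements fall inside a tested reporter fragment. A minimal sketch, using the TSS-relative coordinates quoted in the text (the function name is hypothetical):

```python
# Sketch: which annotated CDKN1A promoter elements lie inside a tested
# fragment? Coordinates are TSS-relative base pairs, as quoted in the text.

ELEMENTS = {
    "E-box 1":  (-5, 1),
    "E-box 2":  (-20, -15),
    "E-box 3":  (-162, -157),
    "Inr":      (7, 16),
}

def elements_in_fragment(start, end):
    """Return elements fully contained in the fragment [start, end]."""
    return [name for name, (s, e) in ELEMENTS.items() if start <= s and e <= end]

# The -62..+16 construct repressed by MYC contains the TSS-proximal E-boxes
# and the Inr, but not the upstream E-box at -162..-157:
print(elements_in_fragment(-62, 16))  # ['E-box 1', 'E-box 2', 'Inr']
```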
The mechanism by which MYC represses CDKN1A promoter activity seems to be cell-type dependent. MYC also represses CDKN1A expression by interacting with the initiator-binding transcription factor MIZ-1. During hematopoietic differentiation, MIZ-1 levels increase and trigger CDKN1A expression, while ectopic MYC expression repressed basal or TPA-induced CDKN1A levels [41]. The MYC-responsive region of the CDKN1A promoter in this model was found to be between −49 and +16 bp from the transcription start site, a sequence already reported in other studies, as mentioned above. Nevertheless, opposite to previous reports [40], the Inr sequence was essential for the MIZ-1-dependent recruitment of MYC to impair CDKN1A expression in these studies [41]. Again, MYC binding to the DNA was not necessary, as the basic domain of the MYC protein is not needed for CDKN1A repression. Instead, MYC was recruited to the DNA by interacting through its HLH domain with MIZ-1 [41,109]. The MYC V394D mutant (mutated in the HLH domain), unable to interact efficiently with MIZ-1 although still capable of interacting with MAX, allowed p21 expression and cell differentiation, bringing to light that the MYC-MIZ-1 interaction is essential for CDKN1A repression [41].
More recently, MYC has been shown to form a ternary complex with MIZ-1 and GFI-1 that is able to bind the CDKN1A core promoter, resulting in p21 repression [110] (Figure 3b). GFI-1 is a nuclear transcriptional repressor with important roles in hematopoietic cells [111][112][113] as well as in other tissues [114][115][116], and it has been reported to cooperate with MYC in lymphomagenesis [117,118]. GFI-1 regulates CDKN1A expression by recruitment of HDAC1 and G9a [119,120]. Nevertheless, although GFI-1 has two binding sites located 1.4 and 2.8 kb upstream of the CDKN1A TSS, GFI-1 repression of CDKN1A expression happened through a mechanism that is independent of its DNA binding ability [119,120]. Instead, and according to this study [110], recruitment of both MYC and GFI-1 is dependent on MIZ-1, leading to the formation of a ternary complex that binds the CDKN1A core promoter. Knocking down MIZ-1 expression leads to a significant decrease in MYC and GFI-1 occupancy at the CDKN1A promoter region. Indeed, MIZ-1 binds GFI-1 through its zinc fingers (ZFs) 1-12, while the regions flanking the ZFs are required for the MYC interaction [109]. Besides, TGFβ not only induces p21 through the reduction of MYC expression [107] but also reduces the levels of GFI-1, an effect that may contribute to the disruption of the MIZ-1/MYC/GFI-1 complex at the CDKN1A promoter region, allowing p21 expression [110]. Another ternary complex, involving MYC and MIZ-1 together with DNMT3A, has been described to inhibit CDKN1A expression by inducing CpG methylation within the CDKN1A core promoter (Figure 3b) [39]. Combined ectopic expression of MYC and DNMT3A was found to strongly repress CDKN1A, while downregulation of DNMT3A restores its expression [39]. MYC recruits DNMT3A to the core promoter of CDKN1A through MIZ-1, forming a ternary complex in which MYC is essential for bringing together MIZ-1 and DNMT3A [39]. Moreover, inhibition of DNA methyltransferase activity with 5-aza-cytidine abolished the MYC-mediated repression of CDKN1A, proving that DNA methyltransferase activity is needed for MYC to accomplish p21 downregulation [39]. On the other hand, histone demethylation activity has been reported to cooperate with MYC in CDKN1A repression.
MYC forms a complex with TFAP2C (AP2C hereafter) and the histone demethylase KDM5B that is capable of binding and repressing the core promoter of CDKN1A through the AP2-binding site located −111 to −103 bp from the TSS (Figure 3b). Although AP2C and MYC are each capable of repressing p21 expression alone, recruitment of KDM5B is dependent on both transcription factors and required for optimal CDKN1A repression [121]. Thus, MYC not only blocks the expression of p21 by interfering with factors that upregulate its expression; it also actively modulates CDKN1A transcription by recruiting DNA methyltransferase and histone demethylase activities to its core promoter.
MYC-Dependent Switch from Cell-Cycle Arrest to Apoptosis by Inhibiting p53-Dependent Activation of p21 Expression
Activation of the p53 pathway upon DNA damage can lead to two different outcomes: either cell-cycle arrest, mediated by the direct p53 induction of CDKN1A transcription, or apoptosis, mediated by p53 induction of PUMA and PIG3, among other target genes. MYC plays a very important role in the choice of this response. By interacting with MIZ-1, MYC is recruited to the proximal promoter region of CDKN1A, leading to the inhibition of p53-mediated p21 expression in HCT116 cells upon MYC overexpression [122]. MYC did not affect p53 binding to the CDKN1A promoter, nor to those of PUMA or PIG3, but specifically inhibited p21 expression, promoting PUMA-mediated apoptosis instead of p21-dependent cell-cycle arrest [123]. Similar results were obtained in K562 cells, in which p53 activation led to apoptosis or cell-cycle arrest, while MYC overexpression significantly impaired p21 induction by p53 without affecting BAX expression [124].
MYC-Mediated Inhibition of RAS-Induced CDKN1A Expression
Cooperation between RAS and MYC in cellular transformation was the first example of oncogene cooperation and has been widely studied since then [62]. Apart from its pro-proliferative activity, RAS is known to induce cell-cycle arrest and senescence in different models of primary cells [125,126] and in chronic myeloid leukemia (CML) cells [127]. This mechanism of RAS-induced cell-cycle arrest involves the induction of cell-cycle inhibitors such as p16 (leading to RB inactivation), ARF, and p53, which subsequently activates p21 expression and cell-cycle arrest. The mechanism through which RAS mediates p21 induction was first described to act mainly through SP1 sites 2 and 4 in Cos7 cells [128]. A few years later, RAS induction of p21 expression was reported to be dependent on RAF in a model of CML (K562 cells). In this study, SP1 sites 2 and 5 accounted for the main RAS transactivation activity on the CDKN1A promoter, although sites 3 and 4 also contributed to it [108]. As in other models already described, MYC was able to impair RAS-induced CDKN1A expression by binding to SP1 and inhibiting SP1-mediated CDKN1A expression regardless of the SP1 site analyzed [108] (Figure 3b). The HLH and MB2 domains were needed for MYC to exert its repression on the CDKN1A promoter upon RAS activation, in a process independent of MIZ-1 [108]. Thus, MYC plays a major role in keeping the CDKN1A promoter in a silent state in CML, promoting cell-cycle progression and contributing to tumorigenesis. However, in agreement with the fact that cell context is essential to determine the outcome of a biological process, MYC will exert its repressive activity on the CDKN1A promoter through one mechanism or another depending on the signal that induces p21 expression. In fact, MYC seems to adapt its regulatory activity to the factor that mainly drives p21 expression in a given cellular context. These multiple mechanisms of MYC-induced p21 repression bring to light the importance of p21 regulation for MYC to promote cell proliferation and transformation.
MYC-Indirect Repression of CDKN1A Expression
Apart from the direct regulation of p21 transcription by MYC through its recruitment to the core promoter of the CDKN1A gene, mediated by protein-protein interactions with other CDKN1A regulators, MYC can induce transcription factors and miRNAs that are directly involved in the regulation of CDKN1A expression. TFAP4 (AP4 hereafter) is a direct MYC target gene that belongs to the bHLH-LZ family of transcription factors. Its basic DNA-binding domain is essential to mediate CDKN1A repression through recognition of the E-boxes located at the core promoter of this gene [129,130]. AP4 only forms homodimers, so it is very unlikely that AP4 exerts its repression by interacting with other transcription factors. Instead, it may compete for the occupancy of the E-boxes with other bHLH-LZ transcription factors known to induce CDKN1A expression [131]. AP4 is known to repress gene expression by recruitment of HDACs (HDAC1 and HDAC3) to core promoters [132,133]. Nevertheless, inhibition of HDAC activity is not enough to abolish AP4-mediated CDKN1A repression [134], in agreement with the HDAC-independent MYC-mediated p21 repression already addressed [40,107]. Other studies have described another potential mechanism for AP4-mediated p21 repression, in which AP4 would impair the TBP interaction with the TATA-box within the TSS, preventing the assembly of the RNA polymerase II complex [132,135].
Finally, MYC has been shown to regulate p21 expression at the post-transcriptional level by modulating miRNA expression (Figure 4). p21 is a major target of the miR-17 family of miRNAs, and it has also been reported that silencing of p21 due to aberrant regulation of miR-17 contributes to tumorigenesis [136][137][138]. Moreover, the miR-17 family members correlate with MYC expression [139][140][141]; indeed, miR-17-5p, miR-20a, and miR-106a, all belonging to the miR-17 family, are induced by MYC and downregulate p21 expression [142]. Thus, miRNA regulation by MYC indirectly regulates p21 expression, contributing to the promotion of cell proliferation by MYC.
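miRNA-mediated repression of this kind relies on seed pairing: the reverse complement of the miRNA's seed (nucleotides 2-8) must occur in the target's 3'UTR. A minimal sketch of the search follows; both the miR-17-like sequence and the UTR are illustrative placeholders, not verified CDKN1A sequences.

```python
# Sketch of miRNA seed matching: a miRNA can repress an mRNA when the reverse
# complement of its seed (nucleotides 2-8) occurs in the 3'UTR. The sequences
# below are illustrative placeholders, not verified miR-17/CDKN1A sequences.

def seed_match_sites(mirna, utr):
    """Return 0-based UTR positions matching the miRNA seed (positions 2-8)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                  # nucleotides 2-8
    site = "".join(comp[n] for n in reversed(seed))    # reverse complement
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "CAAAGUGCUUACAGUGCAGGUAG"   # placeholder miR-17-like sequence
utr = "AAUGCACUUUAGGCUAGCACUUUA"    # placeholder 3'UTR with two seed matches
print(seed_match_sites(mirna, utr))  # [3, 16]
```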
MYC and p27 Regulation
The cell-cycle inhibitor p27Kip1 (p27), encoded by the CDKN1B gene, is known to induce proliferation arrest in G1 by blocking the kinase activity of cyclin-CDK complexes, with the inhibition of cyclin E-CDK2 exerting the main role in cell-cycle control. Besides, p27 behaves as a transcriptional regulator involved in a variety of cellular functions and in cancer (recently reviewed in [143]). Since MYC is a well-known, potent inducer of the transition from G1 to S-phase, the antagonism between MYC and p27 in the control of cell-cycle progression has been a matter of study for many years. This is consistent with the fact that MYC−/− cells showed increased levels of p27 and inhibition of cyclin-CDK activity, together with reduced proliferation rates [144]. Moreover, the inverse correlation between high levels of MYC and low levels of p27 has been found in many human tumors and is considered a marker of poor prognosis [145,146]. There are several mechanisms through which MYC counteracts p27 activity, thus enabling the G1-S transition: (i) repression of p27 at the transcriptional level; (ii) induction of miR-221 and miR-222, which downregulate p27 expression; (iii) induction of D-type cyclins and of CDK4 and CDK6, which sequester p27 away from cyclin E-CDK2 complexes; (iv) induction of CCNE expression, directly or through E2F, leading to activation of cyclin E-CDK2 complexes that antagonize p27 function; and (v) induction of different components of the SCF-SKP2 ubiquitin ligase complex (i.e., CKS1, CUL1, and SKP2) that targets p27 for proteasomal degradation (Figure 4). Mechanisms (i) and (ii) lead to CDKN1B regulation at the mRNA level, through either promoter repression or post-transcriptional regulation. However, these two mechanisms of repression account for a minor percentage of MYC-mediated p27 regulation. The last three mechanisms contribute to p27 downregulation to a much greater extent, mostly involving p27 protein sequestration and degradation [147]. These mechanisms are discussed below.
Repression of CDKN1B Expression
One of the mechanisms that accounts for the inhibition of p27 by MYC involves MYC-mediated transcriptional repression of the CDKN1B core promoter, as already described for the related CDKN1A gene. CDKN1B mRNA expression levels inversely correlate with MYC expression in immune cells and other models. B cell receptor (BCR) engagement in immature B cells (upon anti-IgM treatment) led to MYC downregulation, followed by p27 expression and induction of apoptosis [148], an effect that is reproduced upon siRNA-mediated MYC downregulation [149] and blocked by MYC [148][149][150]. Thus, there is an inverse correlation between MYC and CDKN1B mRNA expression levels in this model upon IgM treatment. Luciferase assays showed that the CDKN1B promoter region spanning −2002 to +154 bp responded to anti-IgM treatment, leading to an increase in CDKN1B promoter activity [151]. The CDKN1B promoter contains an Inr element at the TSS which, as already described for other MYC-repressed target genes, has been found to be crucial for CDKN1B downregulation, and MYC has been reported to interact with it in different models [151]. Indeed, CDKN1B upregulation upon BCR engagement is abrogated by ectopic MYC expression [151]. MYC interaction with the Inr element relies on its MBII; accordingly, a MYC P115L mutant, carrying a substitution within the region of MYC needed for its transcriptional suppression function, enhances its repressor activity [151], consistent with other known MYC repression mechanisms [67,152,153]. Later studies showed that MYC represses the CDKN1B promoter by direct interaction with and inhibition of Foxo3a, a transcription factor known to upregulate CDKN1B expression [154]. In fact, immature B cells subjected to anti-IgM treatment showed an increase in Foxo3a expression [155], which can be abrogated by MYC expression [154]. Opposite to what was found for CDKN1A, MYC interacts with the CDKN1B Inr element through MAX, blocking CDKN1B expression [151].
MYC-Induced Repression of p27 Through miRNA Up-Regulation
Regulation of CDKN1B at the post-transcriptional level by miRNAs has been recurrently reported in recent years. Indeed, aberrant upregulation of miRNA clusters that regulate p27 expression has been linked with cancer development, progression, and invasion [156][157][158], bringing to light the importance of p27 regulation at this level. Screening of the miRNAs involved in the regulation of p27 revealed that the miR-221 family of miRNAs directly regulates the expression of p27 by targeting its 3'UTR sequence (Figure 4). miR-221 and miR-222, both belonging to this miRNA family, were predicted and verified to downregulate p27 expression in cell culture models [159]. MYC plays a key role in the regulation of non-coding RNAs and thus modulates the expression of their target genes, a mechanism that has recently become more evident. MYC regulation of miRNA expression has been linked mainly with miRNAs targeting mRNAs involved in cell-cycle regulation. In fact, MYC directly regulates the miR-221 family of miRNAs, which has been found to target p27 (and p57) [160]. Besides, miR-221 and miR-222 are consistently overexpressed in liver tumors, showing an inverse correlation with the low levels of p27 caused by this aberrant post-transcriptional regulation. Furthermore, miR-221 (but not miR-222) has been reported to enhance tumorigenesis not only in vitro but also in vivo [161].
Sequestration of p27 by Cyclin D-CDK4/6 Complexes
Inhibition of the cell cycle by p27 is controlled, to a great extent, by its recruitment to cyclin D-CDK4/6 complexes. The shift of p27 from cyclin E-CDK2 to cyclin D-CDK4/6 complexes relieves cyclin E-CDK2 from p27-mediated inhibition, allowing progression through the cell cycle [162]. Intriguingly, p27 binds constitutively to cyclin D-CDK4/6 complexes. Although considered a CDK inhibitor, p27 has been found associated with both active and inactive cyclin D-CDK4/6 complexes, depending on the cell proliferation state and on the phosphorylation status of p27 [163,164]. In arrested cells, unphosphorylated p27 impairs the activation of cyclin D-CDK4/6 complexes by blocking the CDK ATP-binding pocket. However, upon mitogenic stimuli, p27 is phosphorylated at Tyr74, Tyr88, and/or Tyr89, leading to a conformational change that releases the blockade of the ATP-binding site, and the CDK is further activated by the CAK [164]. Moreover, p27 (as well as p21) is known to stabilize these complexes, as p27 depletion leads to more unstable D-type cyclins and fewer cyclin D-CDK4/6 complexes. MYC induces the expression of D-type cyclins and of CDK4 and CDK6 [13,103], thus leading to the formation of cyclin D-CDK4/6 complexes able to sequester p27 away from cyclin E-CDK2. Activation of MYC in mouse cells containing the MYC-ER chimera promoted the interaction of p27 with D-type cyclins to an extent that correlated proportionally with the levels of cyclin D induced by MYC and with the activation of cyclin E-CDK2 complexes [103]. Although it has been reported that MYC directly induces cyclins D1 and D2, there has been some controversy concerning cyclin D1 regulation: different studies reported opposite effects of MYC on cyclin D1, depending on the cell types and models used [103,165-168]. On the other hand, cyclin D2 is well known to be induced by MYC, as recurrently reported [103,168-170]. Moreover, CDK4 is a bona fide MYC target gene [171] that is activated by MYC, presumably through the E-boxes located along its promoter region. Indeed, MYC has been reported to induce CDK4 at the transcriptional level in human and rodent cells, and it has been found to activate the CDK4 promoter in reporter assays [171]. Finally, CDK6 is induced by MYC at the mRNA level, although this induction does not correlate with CDK6 protein levels [172,173]. D-type cyclins and CDK4 and CDK6 are repressed by different miRNAs (as are many other genes involved in cell-cycle progression), such as the let-7 family of miRNAs, miR-34a, miR-15a/16-1, and miR-26a. MYC has been reported to induce the expression of D-type cyclins and CDK4/6 by repressing these miRNAs. Altogether, MYC induces the formation of cyclin D-CDK4/6 complexes, promoting the switch of p27 from cyclin E-CDK2 to cyclin D-CDK4/6 complexes and thus inducing the G1-S transition, as reviewed in [13].
Induction of p27 Degradation Through the MYC/CDK2/SKP2 Axis
The most important regulation of p27 levels, and thus of p27 activity, takes place in the nucleus and relies on p27 protein stability. Upon mitogenic stimuli, p27 levels within the cell need to be reduced to allow cyclin-CDK activation and cell-cycle progression. The most efficient way for the cell to overcome p27 inhibition is its degradation via the proteasome. Proteasomal degradation of p27 is mainly mediated by the SCF^SKP2 ubiquitin ligase complex [174][175][176] (Figure 5), which, like most SCF complexes, relies on a specific phosphorylation state of its target protein to recognize and ubiquitylate it (reviewed in [177,178]). In the case of p27, phosphorylation at Thr187 is essential for SCF^SKP2 recognition [176]. p27 phosphorylation and subsequent degradation are induced by MYC, whereas mutation of threonine 187 of p27 impaired this effect [179,180]. Phosphorylation of Thr187 of p27 is mainly mediated by cyclin E-CDK2 complexes, although it can also be phosphorylated by cyclin A-CDK2 and cyclin B-CDK1, albeit to a lesser extent and in vitro [181]. Moreover, cells lacking CDK2 have shown phosphorylation of p27 at the Thr187 residue, suggesting that, in the absence of CDK2, other kinases are able to trigger this phosphorylation [182]. In the absence of CDK2, CDK4, and CDK6, the phosphorylation of p27 at Thr187 can be carried out by CDK1 [183]. In vivo, p27 phosphorylated at Thr187 is found in complexes with cyclin E/A-CDK2, but not with D-type cyclin complexes [180]. MYC activation of cyclin E-CDK2 complexes during G1 phase was first described in a Rat1-MYC-ER model [184], while the absence of MYC impaired cyclin E-CDK2 activation in exponentially growing conditions [185]. Moreover, CCNE was later reported to be a direct MYC target gene, and MYC can also induce its expression via E2F1, another MYC target gene needed for the G1-to-S-phase transition [186,187]. In turn, some E2F factors (E2F1, 2, 3) can repress MYC, whereas E2F7 transactivates MYC [188]. On the other hand, MYC directly represses certain miRNAs that target CCNE, such as miR-34a and miR-26a (reviewed in [13]). Thus, MYC activation of cyclin E-CDK2 complexes relies mainly on MYC's ability to induce cyclin E expression and form new, active cyclin E-CDK2 complexes. Nevertheless, activation of cyclin-CDK complexes depends not only on the regulatory subunit of the kinase (the cyclin) but also on the phosphorylation of a specific residue within the CDK (Thr160 in CDK2 and its structural equivalents in CDK1, CDK4, and CDK6). Phosphorylation of these residues is mediated by the CAK (CDK-activating kinase) [189][190][191], which consists of three subunits: cyclin H, CDK7, and MAT1. MYC increases CAK activity by augmenting the translation rates of the mRNAs of its three components, leading to higher protein levels [192]. In addition, MYC activates CDK7 expression: MYC binds to its promoter sequence in mouse ES cells, and CDK7 expression is reduced in MYC-null rat cells [144,193]. Thus, MYC actively participates in the regulation of the CAK, promoting cyclin-CDK complex activation and, in the case of cyclin E-CDK2 complexes, favoring phosphorylation and subsequent inactivation of p27 [194]. Phosphorylation of p27 by cyclin E-CDK2 led to p27 ubiquitination in vitro, whereas a p27 T187A mutant did not show this effect, suggesting that phosphorylated p27 is a target of the ubiquitin-proteasome degradation system [181,195].
In fact, the F-box protein SKP2, which is part of an E3 ubiquitin ligase of the SCF complex, specifically recognizes p27 phosphorylated at Thr187, promoting p27 degradation, and it is needed for the transition from the quiescent state to S-phase. This process leads to the activation of cyclin A-CDK2 complexes, inducing S-phase entry and DNA synthesis (Figure 5). Moreover, the T187A p27 mutant suppresses SKP2-induced cyclin A activation and S-phase entry [176,196,197].
The SCF^SKP2 complex is composed of RBX1, CUL1, SKP1, and the F-box protein SKP2 [177,178]. Cell-free extract assays revealed that SKP2 binds p27 phosphorylated at the C-terminal domain, whereas the lack of that phosphorylation totally abolished the interaction. Immunodepletion of CUL1, SKP1, or SKP2 abolished p27 degradation [176]. Unlike any other SCF substrate, p27 ubiquitination requires the accessory protein CKS1, which appears to be necessary to bridge p27 and SCF^SKP2: the N-terminal portion of p27 packs against SKP2, the central Glu185 side chain inserts between SKP2 and CKS1, and the C-terminal portion containing phosphorylated Thr187 binds to CKS1 [198]. Further, cyclin A-CDK2 complexes facilitate the recruitment of p27 to SCF^SKP2-CKS1, stimulating p27 ubiquitination [181,199,200]. Cyclin A interacts with SKP2 while CKS1 interacts with CDK2, and both interactions are essential, as disruption of either abolished p27 ubiquitination [200][201][202].
MYC induces p27 proteasomal degradation through the upregulation of the SCF^SKP2 complex. This is achieved because several components of this complex have been reported to be MYC target genes: CUL1, CKS1, and SKP2 [179,203,204]. While CKS1 is indirectly induced by MYC, most likely through other MYC-regulated transcription factors involved in CKS1 transcriptional regulation [203], CUL1 and SKP2 have been described as direct MYC target genes. Activation of MYC-ER by 4-hydroxy-tamoxifen resulted in increased mRNA levels of CUL1 and SKP2, even in the absence of de novo protein synthesis. Both genes contain canonical E-boxes within their core promoters, which have been reported to be essential for MYC transcriptional regulation of these genes [179,204]. Depletion of any of the three (CUL1, CKS1, or SKP2) leads to increased p27 protein levels and arrested proliferation, which MYC is unable to counteract. Moreover, overexpression of CUL1 or CKS1 in MYC-null cells, which reduced p27 levels within the cells, as well as siRNA-mediated depletion of p27, restored the MYC wild-type phenotype, leading to normal proliferation rates. Thus, the SCF^SKP2 complex is essential for MYC's activity as a pro-proliferative transcription factor, by means of reducing p27 levels to allow cell-cycle progression. Altogether, this highlights the critical role MYC plays in the regulation of p27 degradation via the proteasome through the combination of different regulatory mechanisms.
MYC-Mediated Synthetic Lethality and the Cell Cycle
MYC would be a good target for therapy. First, MYC deregulation occurs frequently in human cancer. Second, MYC addiction has been shown in several models, such that inactivation or depletion of MYC leads to tumor regression [205][206][207]. Oncogene addiction is defined as the phenomenon by which some tumors exhibit a dependence on a single oncogenic protein or pathway for sustaining growth and proliferation [208].
Third, whole-body inactivation of MYC in mouse models by expression of a dominant-negative MYC form (Omomyc, a peptide that interferes with the MYC-MAX interaction [209]) provokes only mild side effects. This suggests that pharmacological inhibition of MYC could likely be implemented without major side effects [210,211]. However, to date no anti-MYC drug has reached clinical use. Like other transcription factors, MYC has the reputation of being a non-druggable target. Despite that, several approaches have targeted MYC. Inhibitors of the bromodomain protein BRD4 (JQ1, OTX015, and derivatives), which repress MYC expression [212,213] (Figure 6a), have been tested in clinical trials in lymphoma, but these drugs are not specific for MYC and also repress other genes whose transcription depends on BRD4 [214]. Several molecules have been described that bind MYC and impair its function. Most of these molecules interrupt the MYC-MAX interaction (Figure 6b), as the peptide Omomyc does [215]. Several such compounds, e.g., 10058-F4 and 10074-G5, were discovered using a two-hybrid system [216]. These inhibitors are specific for MYC and have been broadly used in preclinical studies, but they have not reached clinical use due to their low potency and rapid degradation [217].
A more promising approach is to target MYC indirectly via synthetic lethal approaches (Figure 6c). Several putative synthetic lethal genes have been identified [218,219], including CDKs. Indeed, the first synthetic lethal MYC interactor described was CDK2, and most of the synthetic lethal combinations with MYC reported so far involve enzymes that function in the cell cycle. They will be briefly discussed below.
MYC and CDK1 Inhibitors
CDK1 is essential for mammalian cell division [220] and is the only CDK required for completion of the cell cycle in animal cells [221]. A number of small-molecule inhibitors of CDK1 have been developed. Most of them induce an arrest in G2 phase, and some are being used in clinical trials [222,223]. Accordingly, a CDK1 inhibitor induces cell death in Burkitt lymphoma and multiple myeloma cell lines depending on MYC levels, and CDK1 inhibition in Eµ-Myc mice results in extended survival [224]. Similar observations were made in breast cancer cells [225]. These results suggest that CDK1 inhibition is synthetic lethal in MYC-expressing cells. However, purvalanol A is selective but not specific for CDK1 and shows some activity against other CDKs [222]. Therefore, the possibility existed that other kinases could contribute to the synthetic lethal effect. However, we have recently shown, using genetic approaches, that CDK1 inhibition is sufficient for the synthetic lethality with MYC in mouse embryo fibroblasts, as it occurs in cells deficient in CDK2, CDK4, and CDK6 [183]. It is worth noting that CDK1 not only arrests the cell cycle but also plays a role in DNA replication and DNA repair [226]. On the other hand, MYC-induced carcinogenesis is associated with genomic instability, as demonstrated in cell culture and in mouse models (reviewed in [227,228]). MYC impairs DNA repair [229] and induces unscheduled DNA replication [230][231][232][233][234][235]. Therefore, it is conceivable that not only cell-cycle arrest but also the impairment of DNA repair is part of the molecular mechanism of the synthetic lethality between MYC overexpression and CDK1 inhibition.
MYC and Aurora Kinase Inhibitors
Aurora kinases A and B (AURKA and AURKB) are serine/threonine kinases required for mitosis [236]. MYC regulates Aurora kinase A [237]. Expression of MYC, but not that of other oncogenes, made cells much more sensitive to Aurora kinase inhibitors (e.g., AS703569), with AURKB being the central target in this model. Another Aurora kinase inhibitor, VX-680, was demonstrated to selectively kill cells that overexpress MYC [238]. Indeed, MYC expression levels may provide a biomarker to identify tumors that may respond to Aurora B kinase inhibitors. Moreover, the drug inhibited AURKB in vivo in mouse models that develop either B-cell or T-cell lymphomas in response to MYC overexpression, and the lethal response is independent of the p53-p21 pathway [239]. This is relevant since TP53 is frequently mutated in cancer and usually confers an adverse prognosis.
MYC and CHK1 Inhibitors
One of the effects of MYC overexpression is to induce DNA replicative stress [13], which in turn activates CHK1 (checkpoint kinase 1). CHK1 is a serine/threonine kinase that functions as a major component of the DNA damage response. CHK1 regulates cell-cycle checkpoints following genotoxic stress to prevent the entry of cells with damaged DNA into mitosis and coordinates various aspects of DNA repair, and a number of molecules have been described as CHK1 inhibitors [240,241]. In cells from human and murine B-cell lymphomas there is a correlation between MYC and CHK1 levels, although CHK1 seems to be an indirect target of MYC [242]. Silencing of CHK1 with siRNA technology or inactivation with a small molecule results in the selective death of MYC-overexpressing cells. This evidence makes CHK1 an attractive therapeutic target. A CHK1 inhibitor (Chekin) was tested in the λ-Myc mouse model, in which MYC induces lymphomas; in this model, CHK1 inhibition significantly slowed disease progression [242].
MYC and CDK9 Inhibition
CDK9 is not a kinase involved in cell-cycle progression but in transcription initiation. However, this interaction is worth noting, given its similarity to the cell-cycle CDKs. Inhibition or depletion of CDK9 (with shRNAs) in cells and mouse models of hepatocellular carcinoma delays growth, and the extent of the effect correlates with MYC levels, suggesting a synthetic lethal interaction [243].
Concluding Remarks
The importance in cancer research of the set of proteins acting as physiological brakes of the cell cycle is well established. On the other hand, the impairment of CKI activities is a major mechanism for the tumorigenic effects of MYC. Therefore, deciphering the molecular clues of the mechanisms leading to MYC-mediated inhibition of p21, p27, and p15 functions or expression is critical for the design of therapeutic approaches to cancers with MYC deregulation.
Funding: The work in the laboratory of the authors is funded by grant SAF2017-88026-R from MINECO, Spanish Government, to J.L. and M.D.D.; L.G.-G. was the recipient of a fellowship from the FPI program of MINECO. The funding was co-sponsored by the FEDER program of the European Union. | 13,972 | 2019-03-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
On the beam spin asymmetries of electroproduction of charged hadrons off the nucleon targets
We study the beam single-spin asymmetries $A_{LU}^{\sin\phi_h}$ for charged hadrons produced in semi-inclusive deep inelastic scattering process, by considering the $e H_1^\perp$ term and the $g^\perp D_1$ term simultaneously. Besides the asymmetries for charged pions, for the first time we present the analysis on the asymmetries in the production of charged kaons, protons and antiprotons by longitudinally polarized leptons scattered off unpolarized proton and deuteron targets. In our calculation we use two sets of transverse momentum dependent distributions $g^\perp(x,\bm k_T^2)$ and $e(x,\bm k_T^2)$ calculated from two different spectator models, and compare the numerical results with the preliminary data recently obtained by the HERMES Collaboration. We also predict the beam spin asymmetries for $\pi^\pm$, $K^\pm$, $p/\bar{p}$ electroproduction in semi-inclusive deep-inelastic scattering of 12 GeV polarized electrons from unpolarized proton and deuteron targets.
Introduction
As a powerful tool to reach a more detailed understanding of the structure of hadrons, single-spin asymmetries (SSAs) appearing in high-energy scattering processes have attracted extensive attention in the last two decades [1][2][3][4]. In recent years, substantial SSAs for the electroproduction of pions and kaons in semi-inclusive deep-inelastic scattering (SIDIS) were measured by several collaborations, such as the HERMES Collaboration [5][6][7][8][9][10][11], the Jefferson Lab (JLab) [12][13][14][15][16][17], and the COMPASS Collaboration [18][19][20][21][22][23]. In a particular case of SSAs, an asymmetry with a $\sin\phi_h$ modulation (the so-called beam SSA) has been observed in SIDIS by colliding a longitudinally polarized electron [12,[15][16][17] or positron beam [9] with an unpolarized nucleon target. Since the magnitude of the observed asymmetry, of several percent, cannot be explained by perturbative QCD [24], several mechanisms have been proposed to generate such an asymmetry. One mechanism involves the $e H_1^\perp$ term [25,26], which indicates that the asymmetry results from the coupling of the distribution $e$ [27,28] with the Collins fragmentation function (FF) $H_1^\perp$ [29]. Another mechanism relates to the $h_1^\perp E$ term [30], which suggests that the beam SSA is contributed by the convolution of the Boer-Mulders function $h_1^\perp$ [31] and the FF $E$ [25,30]. Apart from the above two mechanisms, a new source giving rise to the beam SSA at the twist-3 level has been found through model calculations [32,33]. This mechanism involves a new twist-3 transverse momentum dependent (TMD) distribution function (DF) $g^\perp$ [34], which appears in the decomposition of the quark correlator if the dependence on the light-cone vector is included. As a T-odd and chiral-even TMD, $g^\perp$ can be regarded as an analog of the Sivers function [35] at the twist-3 level, because both of them require quark transverse motion as well as initial- or final-state interactions [36][37][38][39] via soft-gluon exchanges to receive nonzero contributions. Therefore, studying beam SSAs may provide a unique opportunity to unravel the role of quark spin-orbit correlations at twist 3.
In a recent work [40], we studied the impact of $g^\perp(x, \bm{k}_T^2)$ on the beam SSA for neutral pion production. For this we calculated $g^\perp$ for the valence quarks inside the proton using a spectator model [41] with scalar and axial-vector diquarks. By comparing our results with the experimental data measured by CLAS [15] and HERMES [9], we found that the T-odd twist-3 DF $g^\perp$ may play an important role in the beam SSA in SIDIS. In Ref. [42], we extended the calculations of the twist-3 TMD DFs $e$ and $g^\perp$ in the context of different spectator models for comparison. We considered two options for the propagator of the axial-vector diquark, as well as two different relations between quark flavors and diquark types, to obtain two sets of TMD DFs. Using the model results, we estimated the beam SSAs for neutral and charged pions at HERMES and CLAS, considering the $e H_1^\perp$ term and the $g^\perp D_1$ term simultaneously. Our numerical results show that different choices for the diquark propagator lead to different magnitudes and signs of the distribution functions, and that they can result in different sizes of the asymmetries. The contributions to the beam SSAs given by the $e H_1^\perp$ term and the $g^\perp D_1$ term also differ substantially between the two sets.
Most recently, new preliminary measurements of the beam SSAs of charged hadrons with increased statistics were performed by the HERMES Collaboration [43], not only on a proton target but also on a deuteron target. In particular, the beam SSAs of $K^+$, $K^-$, protons, and antiprotons have been measured for the first time. The new experiments adopted kinematics different from those in Ref. [9] and extended the measurements to larger $x$ and $P_T$ regions. The preliminary data show that the beam SSAs for charged pions off the proton target are slightly positive, consistent with our theoretical results [42] calculated from the TMD DFs in Set 1. For charged-kaon, proton, and antiproton production, the data indicate that the beam SSAs are consistent with zero. In this work, we confront the spectator-model results [40,42] for the beam SSAs with the preliminary data from HERMES. In particular, we not only present the beam SSAs for charged pions with the new kinematic cuts at HERMES, but also give theoretical results for charged kaons, protons, and antiprotons, which has not been done before. In the calculation we consider only the contribution from the TMD DFs of valence quarks; therefore, the analysis of charged kaons can be used to test the role of sea quarks in the beam SSA. Furthermore, we calculate the asymmetries for both proton and deuteron targets. The contributions from the $e H_1^\perp$ term are expected to be small in the case of the deuteron target; thus a measurement with a deuteron target may provide clean evidence of the $g^\perp D_1$ term in the beam SSA, similar to the case of neutral pion production.
The rest of the paper is organized as follows. In Sect. 2, we present the formalism of the beam SSA in SIDIS. In Sect. 3, we use two sets of TMD DFs resulting from two different spectator models to calculate the beam SSAs for charged hadrons in the new kinematic region of HERMES. We also present predictions for the beam SSAs in the electroproduction of different charged hadrons at JLab with a 12 GeV electron beam. Finally, we give our conclusion in Sect. 4.
Formalism
In this section, we present the formalism of beam SSA in SIDIS, which will be applied in our phenomenological analysis later.
We adopt the reference frame in which the momentum of the virtual photon defines the $z$ axis, as shown in Fig. 1. We use $\bm{k}_T$ and $\bm{P}_T$ to denote the intrinsic transverse momentum of the quark inside the nucleon and the transverse momentum of the detected hadron $h$, respectively, while $\bm{p}_T$ denotes the transverse momentum of the hadron with respect to the direction of the fragmenting quark.
[Fig. 1: The kinematic configuration for the SIDIS process. The lepton plane (the $x$-$z$ plane) is defined by the initial and scattered lepton momenta, while the hadron production plane is identified by the detected hadron momentum together with the $z$ axis.]
Following the Trento convention [44], the azimuthal angle of the hadron plane with respect to the lepton plane is defined as $\phi_h$. The differential cross section of SIDIS for a longitudinally polarized beam with helicity $\lambda_e$ scattered off an unpolarized hadron is generally expressed in terms of structure functions [45], with $\gamma = 2Mx/Q$ and with $\varepsilon$ denoting the ratio of the longitudinal and transverse photon flux. In the parton model, the unpolarized structure function $F_{UU}$ and the spin-dependent structure function $F_{LU}^{\sin\phi_h}$ in Eq. (2) can be expressed as convolutions of twist-2 and twist-3 TMD DFs and FFs, using the tree-level factorization adopted in Ref. [45]. With the help of the convolution notation $\mathcal{C}$, $F_{UU}$ and $F_{LU}^{\sin\phi_h}$ take the forms given in [45], where $M_h$ is the mass of the final-state hadron and $\hat{\bm{P}}_T = \bm{P}_T/P_T$ with $P_T = |\bm{P}_T|$.
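For concreteness, the relevant expressions can be sketched as follows, assuming the standard SIDIS conventions of Ref. [45] and keeping only the beam-helicity terms used here (a reconstruction under these assumptions, not a verbatim quote of the original equations):
$$
\frac{d\sigma}{dx\, dy\, d\psi\, dz\, d\phi_h\, dP_T^2} = \frac{\alpha^2}{x y Q^2}\, \frac{y^2}{2(1-\varepsilon)} \left(1+\frac{\gamma^2}{2x}\right) \left\{ F_{UU,T} + \varepsilon F_{UU,L} + \lambda_e \sqrt{2\varepsilon(1-\varepsilon)}\, \sin\phi_h\, F_{LU}^{\sin\phi_h} \right\},
$$
$$
\varepsilon = \frac{1-y-\tfrac{1}{4}\gamma^2 y^2}{1-y+\tfrac{1}{2}y^2+\tfrac{1}{4}\gamma^2 y^2}.
$$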
We point out that our calculation of the structure function $F_{LU}^{\sin\phi_h}$ is based upon a generalization of TMD factorization to the twist-3 level. Therefore the correctness of our results relies on the validity of twist-3 TMD factorization. However, the TMD factorization formalism in QCD at twist 3, i.e., at order $1/Q$, has not been established yet. The main challenge is that the extension of the twist-2 factorization formula to twist 3 at higher orders of $\alpha_S$ is not trivial [46,47]. Also, for T-odd twist-3 observables, direct calculation shows that there are light-cone divergences [46], and it is not yet understood how to control them at order $1/Q$. This does not necessarily mean that twist-3 TMD factorization cannot be developed; further study is needed to overcome this difficulty. Nevertheless, we will still use Eq. (6) as our starting point to study the beam SSA.
The beam SSA $A_{LU}^{\sin\phi_h}$ as a function of $P_T$ can therefore be written as the ratio of the polarized and unpolarized structure functions, with the appropriate kinematic factors; the $x$-dependent and $z$-dependent asymmetries are defined in a similar way. Equation (6) shows that four terms contribute to the structure function $F_{LU}^{\sin\phi_h}$, each expressed as a convolution of a twist-3 TMD DF or FF with a twist-2 one. In the following calculation, we neglect the $h_1^\perp \tilde{E}$ term and the $f_1 \tilde{G}^\perp$ term, based on the Wandzura-Wilczek approximation [48]. Thus, two terms remain that may contribute to the structure function $F_{LU}^{\sin\phi_h}$. One is the $e H_1^\perp$ term, which has been applied to analyze the beam SSA of $\pi^+$ production in Refs. [25,26]. The other is the $g^\perp D_1$ term, which has recently been adopted to calculate the beam SSA of neutral and charged pion production [40,42]. In this work, we take both terms into consideration. For the twist-3 TMD DFs $e$ and $g^\perp$ of the $u$ and $d$ valence quarks, we apply the results from our previous work [42], in which we obtained two sets of TMD DFs by using two different spectator diquark models. Among them, Set 1 is calculated from the spectator diquark model developed in Ref. [41], while Set 2 is from the spectator diquark model used in Ref. [49]. There are two differences between these models. One is the choice of the propagator of the axial-vector diquark, which corresponds to a different sum over the polarizations of the axial-vector diquark. The other is the relation between quark flavors and diquark types. In this work we adopt both sets of TMD DFs to calculate the beam SSAs for comparison. The relevant diagrams for the spectator-model calculation are shown in Fig. 2, in which the propagators of the diquarks are denoted by dashed lines. In the following we explain some details of how the two sets of TMD DFs are obtained. In the calculation of the Set 1 TMD DFs, we choose for the propagator of the axial-vector diquark the form of Ref. [41], which corresponds to the summation over the light-cone transverse polarizations of the axial-vector diquark [50]. At the same time, we choose a relation between quark flavors and diquark types in which $a$ and $a'$ denote the vector isoscalar diquark $a(ud)$ and the vector isovector diquark $a'(uu)$, respectively, and $c_s$, $c_a$, and $c_{a'}$ are the parameters of the model. In this calculation, the values of these model parameters are taken from Ref. [41], where they were fixed by reproducing the parameterizations of the unpolarized [51] and longitudinally polarized [52] parton distributions. To calculate the Set 2 TMD DFs, we adopt the alternative form [49] $d^{\mu\nu}(P-k) = -g^{\mu\nu}$, while for the relation between quark flavors and diquark types we employ the approach commonly used in previous spectator-model calculations [49,53], in which the coefficients in front of $f_X$ are obtained from the SU(4) spin-flavor symmetry of the proton wave function. It is worthwhile to point out that another propagator of the axial-vector diquark was investigated in [54], in which a complete polarization sum was considered. For the Collins function $H_1^\perp$, we adopt for the charged pions the standard relations in terms of the favored and unfavored Collins functions $H_1^{\perp\,\mathrm{fav}}$ and $H_1^{\perp\,\mathrm{unf}}$, for which we apply the fitted results from Ref. [55].
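A minimal sketch of the two retained contributions and of the asymmetry built from them, assuming the tree-level twist-3 convolutions of Ref. [45] with $\hat{\bm{h}} = \hat{\bm{P}}_T$ (the $\tilde{G}^\perp$ and $\tilde{E}$ pieces are the ones dropped above by the Wandzura-Wilczek approximation):
$$
F_{LU}^{\sin\phi_h} \simeq \frac{2M}{Q}\, \mathcal{C}\!\left[ -\frac{\hat{\bm{h}}\cdot\bm{k}_T}{M_h}\, x\, e\, H_1^\perp + \frac{\hat{\bm{h}}\cdot\bm{p}_T}{M}\, x\, g^\perp D_1 \right],
\qquad
A_{LU}^{\sin\phi_h} = \frac{\sqrt{2\varepsilon(1-\varepsilon)}\, F_{LU}^{\sin\phi_h}}{F_{UU,T}+\varepsilon F_{UU,L}},
$$
with the $P_T$-, $x$-, or $z$-dependent asymmetries obtained by integrating the numerator and denominator over the remaining variables.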
Since there are currently no parameterized Collins functions for kaons [56] or protons/antiprotons, we assume that they satisfy analogous relations for the favored FFs and for the unfavored FFs, meaning that the ratios of the favored and unfavored Collins functions of the kaon and proton/antiproton are proportional to the ratios of the favored and unfavored unpolarized FFs of the pion. For mesons, the relations in Eqs. (16), (18), and (19) may be motivated by the Artru model [57], which suggests that all the favored (or unfavored) Collins functions describing fragmentation into spin-zero mesons have the same sign. For the Collins functions of quarks fragmenting into spin-1/2 hadrons, there is currently no theoretical implication or experimental constraint; as a first approximation, we assume that they can be connected to the Collins fragmentation of mesons through Eqs. (17) and (20). For the TMD unpolarized FF $D_1^q(z, \bm{p}_T^2)$, we assume that its $p_T$ dependence has a Gaussian form, where $\langle p_T^2 \rangle$ is the Gaussian width of $\bm{p}_T^2$. We choose $\langle p_T^2 \rangle = 0.2\ \mathrm{GeV}^2$ in the calculation, following the fitted result of Ref. [58]. For the integrated FFs $D_1^q(z)$ for the different cases of hadron production, we adopt the leading-order set of the DSS parameterization [59].
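For reference, the Gaussian ansatz mentioned above, written with the standard normalization (assumed here) that recovers the integrated FF:
$$
D_1^q(z, \bm{p}_T^2) = D_1^q(z)\, \frac{1}{\pi \langle p_T^2 \rangle}\, e^{-\bm{p}_T^2/\langle p_T^2 \rangle},
\qquad
\int d^2\bm{p}_T\, D_1^q(z, \bm{p}_T^2) = D_1^q(z).
$$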
Finally, in this work we impose throughout our calculation the kinematic constraints of Ref. [60] on the intrinsic transverse momentum of the initial quarks. They are obtained by requiring that the energy of the parton be less than the energy of the parent hadron (the first constraint) and that the parton move in the forward direction with respect to the parent hadron (the second constraint) [60]. In the region $x < 0.5$ two upper limits for $\bm{k}_T^2$ apply at the same time; it is understood that the smaller one should be chosen.
HERMES
To perform the numerical calculation of the beam SSAs for charged-hadron production in SIDIS at HERMES, we adopt the kinematic cuts of Ref. [43], where $W$ is the invariant mass of the hadronic final state, and $E_{\mathrm{beam}}$ and $E_h$ are the energies of the electron beam and of the detected final-state hadron in the target rest frame, respectively.
In the left, central, and right panels of Fig. 3, we plot the beam SSAs for charged-pion, charged-kaon, and proton/antiproton production in SIDIS off the proton target at HERMES, as functions of $z$, $x$, and $P_T$. The upper panels show the results calculated from the TMD DFs in Set 1, while the lower panels show the results from the TMD DFs in Set 2. The curves are compared to the preliminary HERMES results for the asymmetries using the data collected during the years 1998-2007 [43]. To distinguish the different origins of the asymmetry, we use dashed and dotted curves to show the contributions from the $e H_1^\perp$ term and the $g^\perp D_1$ term, while the solid curves stand for the total contribution.
Comparing the theoretical results with the preliminary experimental data, we find that for $\pi^+$ production the result in Set 2 shows a positive asymmetry at the 1-2% level, which describes the preliminary HERMES data well. For $\pi^-$ production, the model result from Set 1 is positive, agreeing with the sign of the preliminary HERMES data, which show a slightly positive asymmetry, although the calculation overestimates the data in the large-$x$ and large-$P_T$ regions. Our new results are the predictions for charged-kaon, proton, and antiproton production, for which we obtain rather small asymmetries in both sets. These results are consistent with the preliminary HERMES data, although the uncertainties are large. This indicates that the valence-quark approximation could be valid for the asymmetries of charged kaons, protons, and antiprotons produced at HERMES. Furthermore, the contributions from the $e H_1^\perp$ term are almost negligible in both sets.
One of the main results of this work is our prediction for the beam SSAs of charged-hadron production on a deuteron target at HERMES, shown in Fig. 4. Again we plot the asymmetries for charged-pion, charged-kaon, and proton/antiproton production in the left, central, and right panels. The sizes of the asymmetries are similar to those for the proton target. For the pion asymmetries on the deuteron target, we find that the calculation in Set 1 describes the preliminary data well, especially for $\pi^-$ production; the agreement between the theoretical curves and the preliminary data is better than for the proton target. Another difference from the proton target is that the dominant contributions are given by the $g^\perp D_1$ term for almost all hadrons, while the contributions from the $e H_1^\perp$ term are small compared to the $g^\perp D_1$ term. The dominance of the $g^\perp D_1$ term is more evident in Set 1. This is not surprising, because in the case of the deuteron target the $e H_1^\perp$ term involves the combination $H_1^{\perp h/u} + H_1^{\perp h/d}$, which corresponds to the sum of the favored and unfavored Collins functions. Since the favored and unfavored Collins functions are similar in size but opposite in sign, the $e H_1^\perp$ contribution for the deuteron target is largely suppressed. For charged-hadron production, it would therefore be more promising to probe the distribution $g^\perp$ using the deuteron target than the proton target at HERMES.
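Schematically, and assuming isospin symmetry so that the deuteron couples to the isoscalar quark combination, the suppression can be written as
$$
F_{LU}^{\sin\phi_h}\Big|_{eH_1^\perp}^{d\ \mathrm{target}} \propto \left[ e^{u} + e^{d} \right] \otimes \left[ H_1^{\perp h/u} + H_1^{\perp h/d} \right] \approx \left[ e^{u} + e^{d} \right] \otimes \left[ H_1^{\perp\,\mathrm{fav}} + H_1^{\perp\,\mathrm{unf}} \right] \approx 0,
$$
since the favored and unfavored Collins functions largely cancel in the sum.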
CLAS 12 GeV
In this subsection, we present our predictions for the beam SSAs of charged-hadron production at JLab with a 12 GeV longitudinally polarized electron beam scattered off nucleon targets, which could be measured in the near future. We adopt the constraints on $\bm{k}_T$ given in Eq. (22) and apply the kinematic cuts of Ref. [61] in the calculation. In Fig. 5 we plot the beam SSAs for charged hadrons produced in SIDIS by a longitudinally polarized 12 GeV electron beam scattered off an unpolarized proton target at JLab, as functions of $z$, $x$, and $P_T$. In our previous work [40], we already presented the results for $\pi^0$ production at JLab 12 GeV, where we considered the $g^\perp D_1$ term and used the distribution $g^\perp$ calculated in Set 1. Here we show the beam SSAs for $\pi^+$ and $\pi^-$ in Set 1 and Set 2 in the left panel of Fig. 5. The result for $\pi^+$ production at JLab 12 GeV in Set 1 shows that the asymmetries contributed by the two different sources almost cancel, leading to a rather small total asymmetry. In the other cases the pion asymmetries do not vanish. Similarly, we plot the asymmetries for $K^\pm$ and $p/\bar{p}$ in the central and right panels of Fig. 5. We find that the asymmetries for $K^\pm$ and $p/\bar{p}$ in Set 1 are quite sizable, while the asymmetries for those hadrons in Set 2 are consistent with zero. Therefore, precise measurements of the beam SSAs for $K^\pm$ and $p/\bar{p}$ production at JLab 12 GeV could be used to distinguish between the different spectator models. For completeness, in Fig. 6 we plot the same asymmetries for charged-hadron production at JLab 12 GeV on the deuteron target, in case a deuteron target becomes available. We find that the sizes and signs of the asymmetries on the deuteron target are similar to those for the proton target.
Conclusion
In this work, we performed an analysis of the beam SSAs for $\pi^\pm$, $K^\pm$, protons, and antiprotons in SIDIS at the kinematics of HERMES, as well as at the kinematics of JLab 12 GeV. We considered the cases in which the nucleon target is a proton or a deuteron. In our calculation we included the contributions from the $e H_1^\perp$ term and the $g^\perp D_1$ term, and we used two sets of TMD DFs calculated from two different spectator models. We compared the theoretical curves with the preliminary data recently obtained by the HERMES Collaboration. We find that for pion production the two sets of TMD DFs lead to rather different results, and the roles of the $e H_1^\perp$ term and the $g^\perp D_1$ term differ between the sets. The asymmetries for charged kaons, protons, and antiprotons are small in both sets and are consistent with the preliminary HERMES data. For the deuteron target, we find that the role of the $e H_1^\perp$ term is small compared to the $g^\perp D_1$ term; therefore, the contribution to the beam SSAs related to the $g^\perp D_1$ term could be studied without a significant background from the mechanism related to the $e H_1^\perp$ term. Finally, the analysis of the beam asymmetries for charged-hadron production at JLab indicates that precise measurements of the beam SSAs for $K^\pm$ and $p/\bar{p}$ production, which can be performed at JLab with a 12 GeV electron beam in the near future, could be used to distinguish between different spectator models and shed light on the mechanism of the beam SSAs in terms of TMD DFs. | 5,138 | 2014-01-19T00:00:00.000 | [
"Physics"
] |
Nichols algebras associated to the transpositions of the symmetric group are twist-equivalent
Using Schur's theory of covering groups we prove that the two Nichols algebras associated to the conjugacy class of transpositions in S_n are equivalent by twist and hence have the same Hilbert series. These algebras appear in the classification of pointed Hopf algebras and in the study of the quantum cohomology ring of flag manifolds.
Introduction
Nichols algebras play a fundamental role in the classification of finite-dimensional pointed Hopf algebras over $\mathbb{C}$. They are graded Hopf algebras in the category of Yetter-Drinfeld modules over a Hopf algebra $H$, and a Nichols algebra $B(V)$ is uniquely determined by $V$, its homogeneous component of degree one.
Let $H$ be the group algebra of a finite group $G$. In the study of Nichols algebras a basic question is to describe those Yetter-Drinfeld modules $V$ over $H$ for which $B(V)$ is finite-dimensional. Whereas deep results have been found in the case where $G$ is abelian [5,16,17], the situation is widely unknown for non-abelian groups $G$.
The first examples of finite-dimensional pointed Hopf algebras with non-abelian coradical appeared in [21], as bosonizations of Nichols algebras related to the transpositions in $S_3$ and $S_4$. The analogous Nichols algebra over $S_5$ was computed by Graña, see [14]. These Nichols algebras are computed from the conjugacy class of transpositions and a 2-cocycle (cocycle for short) associated to this conjugacy class. The cocycles arise from a cohomology theory defined for racks (see for example [3,9,13]). In [2, Theorem 1.1] it is proved that for all $n \in \mathbb{N}$, $n \geq 4$, there are precisely two rack 2-cocycles associated to the conjugacy class of transpositions in $S_n$ that might have finite-dimensional Nichols algebras. Explicitly, one of these cocycles is the constant cocycle $-1$. The other one is the cocycle $\chi$ given by
(1) $\chi_{\sigma,\tau} = 1$ if $\sigma(i) < \sigma(j)$, and $\chi_{\sigma,\tau} = -1$ if $\sigma(i) > \sigma(j)$,
for all transpositions $\sigma$ and $\tau = (i\ j)$ with $i < j$. For $n \in \{4, 5\}$ the Nichols algebra associated to the conjugacy class of transpositions in $S_n$ and
either of the two cocycles $-1$, $\chi$ is finite-dimensional. Moreover, both of these algebras have the same Hilbert series. It is not known whether these algebras are finite-dimensional for $n > 5$. The main result of this work is to connect these two algebras by a twist of the cocycle. More precisely, we prove that the constant cocycle $-1$ and $\chi$ are equivalent by twist. This gives an affirmative answer to a question of Andruskiewitsch, see [1, Question 7]. The problem, however, arose even earlier in the literature: for example, in the last paragraph of [19], Majid discusses the relationship between these two algebras and the related quadratic algebras. To reach our main result, we use the existence of projective representations of $S_n$. Projective representations of $S_n$ were originally studied by Schur in 1911; see [22] for an English translation of his fundamental paper on the subject. As a corollary of our result we obtain that for all $n \geq 4$ both Nichols algebras associated to the conjugacy class of transpositions of $S_n$ have the same Hilbert series.
We briefly recall another application of Nichols algebras which may have connections with the main result of this work. In [8], Borel identified the cohomology ring of a flag manifold with $S_W$, the algebra of coinvariants of the associated Coxeter group $W$. This algebra admits certain divided-difference operators which create the classes of Schubert manifolds. In [11], Fomin and Kirillov introduced a new model for the Schubert calculus of a flag manifold, realizing $S_W$ as a commutative subalgebra of a noncommutative quadratic algebra $E_W$ when $W$ is a symmetric group. In [6], Bazlov proved that Nichols algebras provide the correct setting for this model of Schubert calculus on a flag manifold. It is an open problem whether the Nichols algebra associated to $\chi$ coincides with the quadratic algebra $E_W$ [19,21].
Preliminaries
2.1. Racks and cohomology. We briefly recall basic facts about racks, see [3] for more information and references.
A rack is a pair $(X, \triangleright)$, where $X$ is a non-empty set and $\triangleright : X \times X \to X$ is a map such that $x \mapsto i \triangleright x$ is bijective for all $i \in X$, and the self-distributivity law $i \triangleright (j \triangleright k) = (i \triangleright j) \triangleright (i \triangleright k)$ holds for all $i, j, k \in X$. In particular, the conjugacy class of transpositions in $S_n$, with $\sigma \triangleright \tau = \sigma\tau\sigma^{-1}$, is a rack; it will be denoted by $X_n$.
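For instance, in $X_3$ the rack operation is conjugation, so
$$
(1\,2) \triangleright (2\,3) = (1\,2)(2\,3)(1\,2) = (1\,3),
\qquad
(1\,2) \triangleright (1\,3) = (1\,2)(1\,3)(1\,2) = (2\,3),
$$
and $x \mapsto (1\,2) \triangleright x$ permutes the three transpositions, as the definition requires.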
In this work we are interested only in racks which can be realized as a finite conjugacy class of a group. Let $X$ be such a rack. A map $q : X \times X \to \mathbb{C}^\times$ is a 2-cocycle if and only if $q_{x,\, y \triangleright z}\, q_{y,z} = q_{x \triangleright y,\, x \triangleright z}\, q_{x,z}$ for all $x, y, z \in X$. We write $Z^2_R(X, \mathbb{C}^\times)$ for the set of all rack 2-cocycles.
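As a quick check, the constant map $q \equiv -1$ satisfies this condition for any rack $X$, since both sides of the equality equal $(-1)(-1) = 1$ for every triple $x, y, z$; hence $-1 \in Z^2_R(X, \mathbb{C}^\times)$.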
for all $x, y \in X$. Since $\sim$ is an equivalence relation and $Z^2_R(X, \mathbb{C}^\times)$ is stable under $\sim$, it is possible to define the second rack cohomology group as the quotient $H^2_R(X, \mathbb{C}^\times) = Z^2_R(X, \mathbb{C}^\times)/\!\sim$. All these notions are based on the abelian cohomology theory of racks proposed independently in [9] and [13]. For more details about cohomology theories of racks see [3, §4].
2.2. Nichols algebras. We refer to [4] for an introduction to Yetter-Drinfeld modules and Nichols algebras.
Let $n \in \mathbb{N}$. We recall the well-known presentation of the braid group $B_n$ by generators and relations: $B_n$ is generated by $\sigma_1, \ldots, \sigma_{n-1}$ subject to the braid relations $\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}$ for $1 \leq i \leq n-2$ and $\sigma_i \sigma_j = \sigma_j \sigma_i$ for $|i - j| \geq 2$. There exists a canonical projection $B_n \to S_n$ that admits the so-called Matsumoto section $\mu : S_n \to B_n$ such that $\mu((i\ i+1)) = \sigma_i$. This section satisfies $\mu(xy) = \mu(x)\mu(y)$ for any $x, y \in S_n$ such that $\ell(xy) = \ell(x) + \ell(y)$, where $\ell$ denotes the length. If $c$ is a solution of the braid equation, we say that $(V, c)$ is a braided vector space; such a solution induces a representation of $B_n$ on $V^{\otimes n}$. By [3, Theorem 4.14], Yetter-Drinfeld modules over group algebras can also be studied in terms of racks and rack 2-cocycles. Therefore we are interested in Nichols algebras of braided vector spaces arising from racks and 2-cocycles.
Let $(X, \triangleright)$ be a rack and let $q \in Z^2_R(X, \mathbb{C}^\times)$. We consider $V = \mathbb{C}X$, the vector space with basis $\{x : x \in X\}$, and define $c : V \otimes V \to V \otimes V$ by $c(x \otimes y) = q_{x,y}\, (x \triangleright y) \otimes x$ for $x, y \in X$. Then $c$ is a solution of the braid equation. The Nichols algebra associated to the pair $(X, q)$ is the Nichols algebra of the braided vector space $(V, c)$. This algebra will be denoted by $B(X, q)$.
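For illustration, taking $X_3$ with the constant cocycle $q \equiv -1$, the braiding acts on basis vectors as
$$
c\big((1\,2) \otimes (2\,3)\big) = -\,(1\,3) \otimes (1\,2),
\qquad
c\big((1\,2) \otimes (1\,2)\big) = -\,(1\,2) \otimes (1\,2).
$$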
Recall that $X_n$ is defined as the rack associated to the conjugacy class of transpositions in $S_n$. In [2, Theorem 1.1] it is proved that there are exactly two rack 2-cocycles associated to $X_n$ that might have a finite-dimensional Nichols algebra: the constant 2-cocycle $-1$, and the 2-cocycle $\chi$ given by Equation (1).
Remark 2.2. It can be checked directly that the 2-cocycles $-1$ and $\chi$ associated to the rack $X_3$ are cohomologous. Then the Nichols algebras $B(X_3, \chi)$ and $B(X_3, -1)$ are isomorphic and hence they have the same Hilbert series.
Example 2.3. The Nichols algebras $B(X_4, -1)$ and $B(X_4, \chi)$ both have dimension 576. In both cases the Hilbert series is $(2)_t^2 (3)_t^2 (4)_t^2$, where $(k)_t = 1 + t + \cdots + t^{k-1}$. These algebras appeared first in [11,21]. For more information about these algebras see [
Example 2.4. The Nichols algebras $B(X_5, -1)$ and $B(X_5, \chi)$ both have dimension 8294400. In both cases the Hilbert series is $(4)_t^4 (5)_t^2 (6)_t^4$. These algebras were first computed by Graña [14]. For more information about these algebras see [
2.3. Twisting. In [1, Section 3.4] it is shown how to relate two rack 2-cocycles by a twisting in such a way that some properties of the corresponding Nichols algebras are preserved. This method is based on the twisting method of [10] and its relationship with the bosonization given in [20].
Let $X$ be a subrack of a conjugacy class of a group $G$. Let $q$ be a rack 2-cocycle on $X$ and let $\phi$ be a group 2-cocycle on $G$. Define $q^\phi : X \times X \to \mathbb{C}^\times$ by
(2) $q^\phi_{x,y} = \phi(x, y)\, \phi(x \triangleright y, x)^{-1}\, q_{x,y}$ for $x, y \in X$.
Remark 2.5. Let $X$ be a rack and $q \in H^2_R(X, \mathbb{C}^\times)$. For a map $\phi : X \times X \to \mathbb{C}^\times$ define $q^\phi$ by Equation (2). Then $q^\phi$ is a rack 2-cocycle if and only if a certain compatibility condition holds for all $x, y, z \in X$. Thus, if $X$ is a subrack of a group $G$ and $\phi$ is a group 2-cocycle, $\phi \in Z^2(G, \mathbb{C}^\times)$, then $\phi|_{X \times X}$ satisfies this condition.
Definition 2.7. The 2-cocycles $q$ and $q'$ on $X$ are equivalent by twist if there exists $\phi : X \times X \to \mathbb{C}^\times$ such that $q' = q^\phi$ as in (2).
3. The Schur cover of S_n
3.1. Projective representations and covering groups. We review some aspects of Schur's theory of projective representations and construct the Schur cover of $S_n$. See [7,18,22] for details.
A projective representation of a finite group $G$ is a group homomorphism $G \to \mathrm{PGL}(V)$. Equivalently, such a representation may be viewed as a map $f : G \to \mathrm{GL}(V)$ such that $f(x)f(y) = \phi(x, y)\, f(xy)$ for all $x, y \in G$ and suitable scalars $\phi(x, y) \in \mathbb{C}^\times$. The map $G \times G \to \mathbb{C}^\times$, $(x, y) \mapsto \phi(x, y)$, is called a factor set. The associativity of the group $\mathrm{GL}(V)$ implies the 2-cocycle condition $\phi(x, y)\,\phi(xy, z) = \phi(y, z)\,\phi(x, yz)$ for all $x, y, z \in G$. Two projective representations $\rho_1 : G \to \mathrm{GL}(V_1)$ and $\rho_2 : G \to \mathrm{GL}(V_2)$ are equivalent if there exist an isomorphism $T : V_1 \to V_2$ and scalars $b_x \in \mathbb{C}^\times$ such that $\rho_2(x) = b_x\, T \rho_1(x) T^{-1}$ for all $x \in G$. Two factor sets $\phi$ and $\phi'$ are equivalent if they differ only by a factor $b_x b_y / b_{xy}$ for some $b : G \to \mathbb{C}^\times$. The Schur multiplier of $G$ is the abelian group of factor sets modulo equivalence. It is isomorphic to the second cohomology group $H^2(G, \mathbb{C}^\times)$.
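To spell out the associativity argument, evaluating the product $f(x)f(y)f(z)$ in the two possible ways gives
$$
(f(x)f(y))f(z) = \phi(x,y)\,\phi(xy,z)\, f(xyz),
\qquad
f(x)(f(y)f(z)) = \phi(y,z)\,\phi(x,yz)\, f(xyz),
$$
and comparing the two right-hand sides yields the 2-cocycle condition stated above.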
Recall that a central extension of $G$ is a pair $(E, p)$, where $p : E \to G$ is a surjective group homomorphism such that $\ker p$ is contained in the center of the group $E$. Schur proved that every finite group $G$ has a central extension $(E, p)$ with the property that every projective representation $\rho$ of $G$ lifts to an ordinary representation $\tilde{\rho}$ of $E$ such that the diagram
commutes.
There exist such extensions with $\ker p \simeq H^2(G, \mathbb{C}^\times)$. Moreover, $H^2(G, \mathbb{C}^\times)$ is the unique minimal possibility for $\ker p$. These minimal central extensions of $G$ are called Schur covering groups of $G$.
Theorem 3.1. Given $n \geq 4$, define the group $T_n$ as follows. Then $T_n$ is a Schur covering group of $S_n$. Therefore, there exists a central extension $1 \to A \xrightarrow{i} T_n \xrightarrow{p} S_n \to 1$ with $A = \langle z \rangle$.
Remark 3.2. Let $t \in T_n$. For any $\sigma \in S_n$ we have that $p^{-1}(\sigma) = \{\tilde{\sigma}, \tilde{\sigma}z\}$.
Since the involution $z$ is a central element of $T_n$, the group $S_n$ acts on $T_n$ by conjugation: $\sigma \triangleright t = \tilde{\sigma} t (\tilde{\sigma})^{-1} = (\tilde{\sigma}z) t (\tilde{\sigma}z)^{-1}$. Therefore it is possible to write the conjugation in $T_n$ as $\sigma \triangleright t = \sigma t \sigma^{-1}$, where $t \in T_n$ and $\sigma \in S_n$.
Definition 3.3. For $i, j \in \mathbb{N}$ such that $1 \leq i, j \leq n$, $i \neq j$, let $[i\ j]$ be an element of $T_n$ defined inductively as follows.
Proof (of Lemma 3.4). Multiplying both sides by $z$ if needed, we may assume that $i < j$. If $\{k, k+1\} \cap \{i, j\} = \emptyset$ then the claim follows from [22, Paragraph 6, III]. If $k = i-1$ then the claim follows from Definition 3.3. The case $k = i$ follows from the case $k = i-1$ by applying $s_{i-1}$. Since $s_j \triangleright t_{j-1} = s_{j-1} \triangleright t_j$, a straightforward computation settles the case $k = j$. Finally, the case $k = j-1$ follows from the case $k = j$ by applying $s_j$.
Proposition 3.5. Let $l \in \mathbb{N}$, $\sigma = s_{i_1} s_{i_2} \cdots s_{i_l} \in S_n$, and $i, j \in \{1, \ldots, n\}$.
Proof. Follows from Lemma 3.4 by induction on l.
3.3. Nichols algebras over symmetric groups. Recall that $X_n$ is the rack of transpositions in $S_n$. There are two rack 2-cocycles that we want to consider: the constant cocycle $-1$, and the 2-cocycle $\chi$ given by Equation (1).
Lemma 3.7. There exists a section s : S n → T n such that if τ = (i j), i < j, then for all σ.
Proof. By Theorem 3.1 there exists a central extension $1 \to A \xrightarrow{i} T_n \xrightarrow{p} S_n \to 1$, where $A = \langle z \rangle$. Take any set-theoretical section $\bar{s} : S_n \to T_n$ such that $\bar{s}(\mathrm{id}) = 1$ and define a map $s : S_n \to T_n$ by
(6) $s(\pi) = \bar{s}(\pi)$ if $\pi \notin X_n$, and $s(\pi) = [i\ j]$ if $\pi = (i\ j) \in X_n$ with $i < j$.
Then $ps = \mathrm{id}$ and $s(\mathrm{id}) = 1$. Since $\sigma \in X_n$, the length of $\sigma$ is 1. Remark 3.2 and Proposition 3.5 imply the required identity, and hence the claim follows. Let $s : S_n \to T_n$ be the section of Lemma 3.7 and let $\phi(x, y) \in A$, with $A = \langle z \rangle$, be defined by the equation $s(x)s(y) = i(\phi(x, y))\, s(xy)$. | 3,287.2 | 2010-11-24T00:00:00.000 | [
"Mathematics"
] |
Peso Education and Resource Assistance (PERA) Program: A Case Study in Barangay T. Padilla, Cebu City, Philippines
ABSTRACT
INTRODUCTION
The COVID-19 pandemic has dramatically affected the world economy, and many companies are struggling to stay in business. Even so, these challenging economic times present an opportunity: launching a small business can generate income and help individuals weather the economic effects of the pandemic. Rather than depending on traditional employment, which may not be secure during the pandemic, running a small business can be a reliable way to earn money. Small businesses may also give individuals greater financial freedom: instead of being at the mercy of the economy as a whole, businesspeople can earn their own income and adapt their business practices in response to market volatility. Because many people have lost their jobs or income as a consequence of the pandemic, this appeals to them, especially since it gives a feeling of security when things are uncertain. Growing a small business can also provide a sense of fulfillment and significance. For many businesspeople, starting a business is about more than just earning money; it is also about pursuing one's aspirations and creating something meaningful. Many individuals are dissatisfied with their occupations or struggle to obtain employment, and small business ownership can provide them with direction and purpose. Beginning a new venture, particularly a small firm, can be as complicated as it is gratifying, and a strong foundation in business management and finance is one of the keys to success. Starting a small business requires understanding how to do everything from scratch, including creating a business plan and finding the right customers and markets. Without a firm foundation, people with limited business knowledge may struggle to get their business off the ground or may make costly mistakes; by learning the fundamentals of building a business, they can increase their chances of success. Running a small business also requires specific skills and knowledge: small business proprietors must be able to manage their products and services, keep track of their finances, and maintain the operation of their business. Proprietors who lack an understanding of these concepts may feel pressured or unable to meet their company's demands, whereas those who understand the fundamentals of business administration can remain organized, efficient, and profitable. Finally, to launch a flourishing small business, one must comprehend how money functions: to make sound choices and successfully manage their finances, small business owners should familiarize themselves thoroughly with financial concepts such as profit and loss, cash flow, and return on investment, all of which appear in financial statements. Businesses that comprehend finance can make more informed decisions and increase their prospects of long-term success. In light of the foregoing, the researchers found it necessary to conduct this study. This article proposes the creation of a program called "Peso Education and Resource Assistance (PERA)," specifically in Barangay T.
Padilla, which provides an accessible, practical, and hands-on approach to business management, promoted through a series of trainings and seminars. In addition, the selected recipients will receive financial assistance for the construction and launch of their microbusinesses. The program would also encourage entrepreneurs to solicit funds from numerous individuals through online platforms (Nunan et al., 2022). Entrepreneurs face other challenges besides access to initial capital: regulatory barriers, lack of infrastructure, and limited market access are common (Nunan et al., 2022), and entrepreneurs in low-density areas may face additional difficulties such as a lack of human capital and limited social networks (Silva et al., 2023). Therefore, besides providing initial capital, policymakers and practitioners must address these broader challenges to support entrepreneurship in low-density areas. Additionally, the COVID-19 pandemic has emphasized the need for entrepreneurship education to adapt to the changing business environment (Ratten & Jones, 2020). Online entrepreneurship education and remote mentoring can provide aspiring entrepreneurs with the necessary skills and support to tackle the challenges posed by the pandemic (Ratten & Jones, 2020). The need to manage financial resources well is crucial during the pandemic: Pasaoa et al. (2023) demonstrate that innovative financial techniques significantly affect social enterprises' income and cash flow. By and large, studies dealing with the impact assessment of extension activities have used the program recipients or beneficiaries as respondents (Herrera, 2010; Dilao, 2010), which attests that such undertakings are worth pursuing. From a broader perspective, however, an impact assessment of extension projects must examine the entire picture of the community, for the simple reason that impacts pertain to changes in the conditions of the community, and these changes must be evaluated and verified based on the project's outcomes. On the other hand, according to Amosah et al. (2023), record-keeping methods are not frequently encouraged in small-scale businesses, so those enterprises cannot benefit from them. Results indicate that most small-business owners lack the expertise and understanding needed to employ modern record-keeping systems, and that seminars and livelihood programs are vital in building lucrative enterprises capable of overcoming deficiencies in entrepreneurial knowledge; for this purpose, a resource-based approach to the development and execution of strategy is applicable (Bryson et al., 2007). The researchers took an active interest in the present investigation because of their involvement in community extension services; they have been serving as faculty coordinators of extension projects under their academic program. The question that guided them in materializing this work was whether the College of Business and Accountancy delivered the impact that was intended to be apparent in the partner barangay.
Theoretical Framework
This study is anchored to the leading idea of the Resource-Based View Theory of Birger Wernerfelt, supported by two sub-theories: the Resource Dependence Theory of Gerald R. (Jerry) Salancik and Jeffrey Pfeffer, and Abraham Maslow's Hierarchy of Needs. According to the resource-based view (RBV) of Wernerfelt (1984), a corporation's distinctive assets and capabilities are the fundamental factors that determine its level of competitive advantage. Applied to launching a new company, the RBV emphasizes the significance of identifying and using the resources and capabilities that will allow the new enterprise to thrive in a highly competitive market. This article emphasizes education, financial literacy, and initial investment among the resources available for establishing a firm. Knowledge is, without a doubt, among the most critical resources when beginning a company. A program that teaches potential business owners the necessary skills, information, and resources to launch and run a profitable company can provide a significant advantage over the competition. For instance, prospective business owners may learn how to discover and analyze potential business prospects, create an action plan for their company, and practice efficient sales and marketing techniques by enrolling in entrepreneurship education classes; such programs also provide guidance and the opportunity to network. In addition to education, financial literacy is an essential resource for those who want to start their own businesses. Financial literacy is the capacity to manage money efficiently through activities such as creating a budget, understanding how to invest, and being familiar with financial statements. Aspiring business owners who understand finance are better equipped to manage the financial aspects of their businesses, such as capital creation, cash flow management, and financial report preparation; this allows them to make informed decisions, avoid expensive blunders, and gain a significant advantage over their competitors. Initial funding is another crucial resource for getting a firm off the ground: inadequate funding can make it difficult to launch a business and keep it viable. Funding options range from personal savings and loans to grants and investors, and access to capital may give entrepreneurs a competitive edge because it allows them to invest in advertising, product development, and other critical areas of the business. In sum, the resource-based view emphasizes the significance of using one's unique resources and abilities to achieve a competitive advantage: in starting a company, education, financial literacy, and beginning capital are crucial for prospective business owners who wish to prosper in a highly competitive market, and the ability to locate and use the relevant resources improves the likelihood of successfully establishing and growing a firm. Resource Dependence Theory (RDT), as formulated by Pfeffer and Salancik (2019), discusses how organizations rely on their surroundings for the resources necessary for their continued existence and growth; the idea is that companies amass resources to lessen their dependency on external sources and to increase the degree of control they exercise over their environment.
RDT emphasizes the relevance of finding and securing essential resources when beginning a company, in order to limit reliance on outside sources and increase the likelihood of the venture's success. Knowledge is recognized by RDT as one of these crucial resources. In the context of beginning a company, education provides the information and skills necessary to discover and acquire the resources needed to initiate and grow a successful business. An education program that teaches prospective entrepreneurs the skills, information, and resources necessary to launch and run a prosperous firm is therefore an invaluable tool that may help a company become less reliant on outside resources. RDT likewise acknowledges that a solid understanding of finances is an essential resource. Aspiring company owners who are knowledgeable about finances are better positioned to handle their businesses' finances, such as raising money, controlling cash flow, and creating financial reports. This gives the organization greater control over its financial resources while reducing its dependence on other sources of support. Finally, RDT highlights the relevance of securing initial capital as a critical resource for starting a firm. Beginning a firm and growing operations can be challenging if appropriate funding is unavailable; successfully raising initial capital increases the organization's command over its financial resources and reduces its dependency on other sources of finance.
In conclusion, the Resource Dependence Theory emphasizes minimizing dependence on external sources by acquiring the resources required to develop and grow a successful firm. This may be accomplished by accumulating resources such as capital, human capital, and physical capital. Education, financial literacy, and initial investment are necessary tools that help an organization become less reliant on outside sources and more in control of its surrounding environment. Acquiring and effectively using these resources may significantly boost the odds of success for determined businesspeople operating in notoriously competitive markets. The model of human motivation conceived and refined by Maslow (1943) is a fundamental idea within psychology. According to Maslow, human motivation is governed by a hierarchy of needs that must be fulfilled in a precise order, with physiological and safety needs being the most fundamental. Individuals are driven to meet their basic needs first, followed by higher-order needs such as self-esteem and self-actualization; only then are people motivated to satisfy further wants and desires. On Maslow's account, the drive to start a company and become financially literate is rooted in physiological needs, since income is what secures food, shelter, and safety; this makes it a strong incentive to act. Financial literacy is equally vital since it equips individuals with the information and skills required to manage their resources and satisfy their fundamental requirements efficiently. The hierarchy of needs also implies that a person's level of self-esteem and capacity for self-actualization shape the desire to start a company and learn about personal finance. Establishing and running a profitable company is one of the best ways for people to boost their self-worth, and education in personal finance fosters self-actualization by providing the knowledge and skills needed to become financially independent and pursue one's life ambitions. The conclusion drawn from Maslow's theory is that the desire for self-esteem and self-actualization, together with the need to satisfy core physiological demands, is the primary driver of the incentive to start a company and gain financial literacy. Individuals may make the most of their desire to succeed in business by grasping these underlying motives.
Research Questions
This study aimed to assess the Peso Education Resources and Resources Assistance (PERA) program, specifically within Barangay T. Padilla, Cebu City, Philippines.
1. How does the program contribute to the economic capability of the beneficiaries?
2. What circumstances contribute to the success or failure of their business?
3. How can the program be enhanced based on its current status?
MATERIALS AND METHODS

Research Design
This qualitative investigation uses in-depth, personal interviews to gather information from selected beneficiaries and officials of Barangay T. Padilla as key informants. These interviews were central to the research design because they allowed the researchers to gather insights and viewpoints that would be difficult to access through other methods.
Research Environment
The researchers conducted this study in Barangay T. Padilla, a neighborhood in Cebu City, Philippines. Aid recipients in each barangay were selected from among the poorest residents, particularly those interested in establishing micro businesses. The program started in January 2020 and ended in December 2022.
Research Informants
In this study, the research informants are residents of Barangay T. Padilla in Cebu City, Philippines, particularly those belonging to the poorest segment of the community who had limited business knowledge and experience. The program conducted a series of seminar-workshops on entrepreneurship, basic finance, management, and accounting. Those who passed the screening process were considered for seed capital. Seed capital of P2,000 was given to start a micro business, particularly street cart vending. Monitoring of each beneficiary was conducted every semester to determine the condition of their business.
Data Collection
In-depth, personal interviews are a robust qualitative method for collecting detailed and nuanced data. The method involves one-on-one conversations with key informants to gather their perspectives, experiences, and opinions on a specific topic, and it is a valuable way to develop a deep understanding of a particular case. To conduct the interviews in T. Padilla, we first identified key informants knowledgeable about the community and the program.
Data Analysis
In analyzing the data, the researchers transcribed the recorded responses from the informants and categorized them to find common patterns across the informants' different responses. After several stages of coding, the researchers constructed various themes, which were then regrouped to form emergent themes.
Ethical Consideration
In compliance with research standards and the ethics protocol, the researchers always asked permission from the barangay staff and obtained consent from the informants. The rights and privacy of informants were the top priority, and the researchers respected informants' right to decline questions they were uncomfortable answering. All information gathered was carefully secured and kept confidential at all times.
RESULTS AND DISCUSSION
Twelve cluster themes were found and regrouped from the established core meanings, resulting in six emerging themes. From the responses of the study's informants, the following emerging themes were constructed:

Effects of the PERA Program on the Beneficiaries
1. Deducing different techniques.
2. Improving lives and uplifting from the poverty line and challenging moments.
3. Imparting additional knowledge and skills in running and circulating the capital.
Effects of the PERA Program on the Beneficiaries

Deducing Different Techniques
This particular theme refers to identifying and developing various methods, approaches, and strategies to solve problems or achieve a specific goal.
1. When asked about the program, informant number 3 shared: The money is not enough because, as of today, different products' prices are increasing, so the holding of seminars and teaching the communities the various livelihood programs and the giving of additional capital can help our community and us, especially since we came from the pandemic experience (IDI 3).
2. Another informant shared the same opinion about the program: There should be capital as of today's current situation, and there is also a need for a seminar for a livelihood program (IDI 5).
3. When the same question was asked, another informant shared his opinion about the program: Of course, as the capital was given, they just needed to earn profit out of it. They mainly venture into processed foods, but it takes time, so they need more capital as it is too costly. In contrast, if they venture into reselling those processed foods and buy enough to sell for a day in the afternoon, there will already be a profit to purchase more the following day. Nevertheless, they are the ones who will make it. In that case, it is just okay, but not for those persons who have children in elementary, because nowadays in Zapatera and Tejero, children in the school need to be guided in school as there are already many cases of kidnapping from other parts of the country, and the school experienced one parent claiming a student when, in fact, she was not her child. So, let us pick those who will focus on their business or livelihood, like the one I know whom we appoint as a secretary for seniors but who is not a senior citizen. He will make sideline sales of processed foods like lumpia, which he will cook and then sell house to house, and at three o'clock, he will sell another product that was already booked before the time. He is now meeting at the Cebu Business Hotel with the other two officers, the president, vice president, and secretary. He is the acting secretary because other senior citizens have already moved out of the area because of demolition, and others are already living with their children who already have families. Others are already in Liloan, Sugod. Here in T. Padilla, we can still see many people with potential that we can recruit for the said program, which will significantly help our barangay (IDI 1).
Improving Lives and Uplifting from the Poverty Line and Challenging Moments
This particular theme refers to the efforts and strategies to improve the standard of living and socio-economic status of individuals and communities experiencing poverty and hardship.
1. When asked about their situation, informant number 2 shared: If they do it diligently, they can improve their lives by running back the money and profiting from it. However, others have yet to focus on their first business: where they had planned to sell a particular product, another type of product will be sold the next day, and others would venture into carrying water. Because of this, most beneficiaries have not improved their livelihood out of the capital given by the PERA Program (IDI 2).
2. When the same question was asked, informant number 2 shared his opinion about the program: Yes, it has improved our life even in a little way, in that I could sell anything like pancit canton, noodles, and eggs, where I have sold a lot; I also cook for my customers (IDI 2).
3. Another informant shared the same opinion about the program:
Much of what I learned at the PERA Program lecture has already been used in my gardening company. (IDI 5).
Imparting Additional Knowledge and Skills in Running and Circulating the Capital
Providing people with the information and training they need to manage and use their financial resources properly is referred to as financial literacy training. It involves providing education and training on financial management, investment, and entrepreneurship to help individuals achieve their financial goals and objectives.
1. When asked about their view of the program, informant number 3 shared: Yes, it helped (PERA Program), and because of this, I could begin a sari-sari store. Our business grew because of the seminar, and the PERA Program shows how it helped our needs (IDI 3).
2. When the same question was asked, another informant shared his opinion about the program: The PERA Program has given me business and additional capital and an idea of how I will roll back my money, and it has developed and improved my means of livelihood (IDI 4).
3. When the same question was asked, informant number 5 shared his opinion about the program: It is okay to have a seminar in our barangay to give us an idea of how and what we can do to alleviate poverty in our family because if we cannot reflect, then we cannot discipline our family and ourselves (IDI 5).
Lessens the Poverty Line in the Community of Barangay T. Padilla
This theme refers to the activities and initiatives intended to decrease poverty and raise the quality of life in Barangay T. Padilla, a neighborhood in a particular geographic location.
1. When asked about their view of the program, informant number 3 shared: Our sari-sari store has helped me with our day-to-day expenses, which has helped us and our community to lessen the poverty we have been experiencing for so long (IDI 3).
Factors Affecting the Business of the Beneficiary

Unforeseen Calamities
Refers to unexpected and sudden events that cause widespread damage, disruption, diseases, and loss of life.
1. When asked about their view of the program, informant number 2 shared: The last time I could work or run my business was during the pandemic since it was too complicated. I sell in the afternoon, and some officials examine the area, and people are not allowed to go out, so I cannot concentrate on selling my items. However, the business significantly reduces my and my family's day-to-day expenditures (IDI 2).
CONCLUSION
According to the informants, the PERA Program has had both positive and negative effects on the people of Barangay T. Padilla who benefit from it. Most program beneficiaries claimed that they were able to enhance their quality of living as a direct effect of their participation in decision-making and that the program has reduced poverty in their neighborhood. However, some recipients said the financial help needed to be increased, with extra capital and other livelihood campaigns required to sustain their enterprises. To fully reap the advantages of the PERA Program, beneficiaries must be provided with the information and training they need to manage and use their resources effectively. Furthermore, seminars and livelihood programs that aid the community in building lucrative enterprises capable of overcoming the challenges posed by the economy would be beneficial. A resource-based approach to the development and execution of strategy is applicable here (Bryson et al., 2007); it may be helpful in the private and public sectors for planning, determining specific skills, and constructing livelihood schemes, and it offers general training and consultation that benefits various public and private organizations. In sum, the outcomes of the program merit further investigation.
RECOMMENDATION
There are a few directions that could make the PERA Program more effective. Beneficiaries may receive improved assistance if the program investigates and puts into practice strategies such as expanding access to finance and organizing seminars on livelihood programs. In addition, the program might concentrate on improving recipients' lives by expanding the training it offers and providing more help in running businesses. This might involve educating beneficiaries to concentrate on their principal business and to avoid diversification that could produce unfavorable results. Additionally, the program might continue to provide education and training on financial management, investing, and entrepreneurship to assist recipients in managing the resources at their disposal more successfully. To achieve its goal of lowering the poverty threshold in the community of Barangay T. Padilla, the program might execute measures targeted at lessening the severity of poverty and enhancing people's quality of life. That might involve providing beneficiaries with increased assistance and resources and exploring collaborations with other organizations and programs whose efforts complement those of the PERA Program. | 5,913.6 | 2023-06-14T00:00:00.000 | [
"Economics",
"Education"
] |
Utilizing Smartphones for Approachable IoT Education in K-12
Distributed computing, computer networking, and the Internet of Things (IoT) are all around us, yet only computer science and engineering majors learn the technologies that enable our modern lives. This paper introduces PhoneIoT, a mobile app that makes it possible to teach some of the basic concepts of distributed computation and networked sensing to novices. PhoneIoT turns mobile phones and tablets into IoT devices and makes it possible to create highly engaging projects through NetsBlox, an open-source block-based programming environment focused on teaching distributed computing at the high school level. PhoneIoT lets NetsBlox programs—running in the browser on the student’s computer—access available sensors. Since phones have touchscreens, PhoneIoT also allows building a Graphical User Interface (GUI) remotely from NetsBlox, which can be set to trigger custom code written by the student via NetsBlox’s message system. This approach enables students to create quite advanced distributed projects, such as turning their phone into a game controller or tracking their exercise on top of an interactive Google Maps background with just a few blocks of code.
Introduction
Most of the applications we use on our computers and mobile devices every day are distributed and use the Internet to provide their functionality. Networked sensors and actuators, the Internet of Things (IoT), are also becoming ubiquitous, with smart homes and health monitoring leading the way. However, hardly any of the enabling technologies are taught in introductory computer science (CS) classes in K-12. There do exist classes and makerspaces where some students are exposed to embedded computers, providing opportunities to program Raspberry Pis or micro:bits with simple sensors and actuators, such as LEDs, using connectivity based on either a USB cable or Bluetooth. However, while these experiences are fun, they are fairly disconnected from the IoT that otherwise surrounds us. In addition, not many schools offer these types of classes due to cost, logistics, and a lack of teachers who have experience with these tools.
However, over 84% of teenagers in the United States already own a mobile device [1] that comes with a rich set of powerful sensors, including an accelerometer, gyroscope, microphone, camera, GPS, and many more, and is Internet-enabled out of the box. Thus, smartphones offer an excellent opportunity to expose students to key networked sensing topics, such as polling and streaming access paradigms and event-based computing, and make computing more engaging by enabling students to be creators and not just users of compelling applications. However, one important question remains: how can we make these powerful technologies accessible to novice programmers?
This work introduces PhoneIoT, an open-source mobile app for Android and iOS that allows users to programmatically access their own smartphones and tablets as IoT devices through NetsBlox [2], a block-based programming environment based on Snap! [3]. NetsBlox introduced two powerful networking and distributed computing abstractions to block-based languages: Remote Procedure Calls (RPCs) and message passing. RPCs make a rich set of online services and data sources accessible to student programs, such as Google Maps, earthquake data from USGS (United States Geological Survey), climate change datasets from NOAA (National Oceanic and Atmospheric Administration), The Movie Database, gnuplot (accessed online through NetsBlox so that students need not have it installed), and many more [4]. Message passing lets NetsBlox projects running anywhere in the world communicate with one another through the exchange of custom packets of structured data, making it possible to create multi-player games and other distributed programs. RPCs and messages are also used to control WiFi-enabled devices such as educational robot vehicles [5].
PhoneIoT provides two main features: the ability to read and/or stream live sensor data, and the ability to create an interactive, configurable user interface on a mobile device. Together, these features address the fundamental requirements of an educational IoT tool, including the ability to access sensors through common paradigms such as polling and streaming, as well as a way to directly interact with the device to create more engaging projects, such as integrating sensor data and custom user input to provide feedback on the device's display. Importantly, all PhoneIoT features are accessible through the typical NetsBlox primitives of RPCs and message passing, making its usage both simple for novices and familiar for existing NetsBlox users. By using a simple yet powerful block-based interface, it becomes possible to abstract away much of the complexity of networking and distributed computing while allowing students to explore and learn the most important concepts in a convenient framework.
The primary contributions of this paper are: an overview of user-aware PhoneIoT design choices that were made to facilitate targeted curriculum topics and ease of use, as well as a preliminary, proof-of-concept study with students who used PhoneIoT in 1-2 week IoT coding camps. We note that this is an extended work from the 2021 conference paper Your Phone as a Sensor: Making IoT Accessible for Novice Programmers [6].
Previous Work
There are several existing approaches that allow for the creation of standalone mobile apps that can be constructed online with a block-based programming interface, including Thunkable, App Inventor, and Kodular (formerly known as AppyBuilder) [7][8][9]. Pocket Code, part of the Catrobat project, is similar to these, although its app designer is built into the app itself and is more focused on creating games and simulations [10]. Thunkable is perhaps the most similar to PhoneIoT, as it allows access to Internet resources (e.g., cloud-based speech recognition and translation utilities) similar to NetsBlox services, as well as several onboard device sensors, such as the accelerometer and gyroscope. However, PhoneIoT is fundamentally different from these projects in that it does not aim to be an app creation tool; rather, the custom controls in PhoneIoT are merely a means of interacting with NetsBlox code running in the browser on the student's computer. That is, Thunkable and similar projects are not tools for teaching distributed computing or IoT, as all user interaction and sensor data are kept internal to the device running the app. Additionally, because PhoneIoT is tailored to a distributed computing environment, it offers more possibilities for creating engaging educational projects. For instance, PhoneIoT could be used to turn a phone into a custom game controller, with accelerometer input and soft (virtual) buttons on the phone's screen making sprites move or shoot on the computer's screen. The phone could also be used to control real robots [5] in the same way, using a single NetsBlox program to control multiple components of a distributed system: one or more mobile devices, one or more robots, and the laptop running the project code, creating an engaging distributed application.
Another project similar to PhoneIoT in terms of intent and network architecture is Sensor Fusion, an education-focused project which collects sensor data from a mobile device and streams it to a computer for analysis [11]. This is similar to the core sensor-based functionality of PhoneIoT but is more heavily focused on a scientific perspective, namely sensor fusion, which is the combination of data from multiple sensors to achieve greater accuracy or precision. In contrast, PhoneIoT is part of a distributed computing environment, empowering students to utilize incoming live data streams, as well as content from other NetsBlox services, and reconfigure the phone's display in real time based on the desired application (e.g., a game controller, data viewer, or fitness tracker). This is not possible with Sensor Fusion, as its display and interactive components are not configurable. PhoneIoT's programmability is a key factor in creating engaging educational projects for young learners.
PhoneIoT
Mobile devices already come with a wide variety of hardware sensors, from simple cameras/microphones, accelerometers, and gyroscopes to more specialized hardware such as contact pressure sensors or relative humidity detectors. Although a typical device does not contain all of these potential sensors, there are several sensors that are reliably present even on older devices, simply due to basic system requirements. These include an accelerometer (which is typically used for automatic landscape/portrait screen rotation) and, for smartphones, a microphone and proximity sensor (to disable the touchscreen when the phone is held to the user's ear). While not essential for core device functionality, virtually all modern smartphones and tablets also have a camera, although access to this sensor through PhoneIoT is handled differently due to privacy concerns (see Section 3.1). Additionally, through services such as Google's Fused Location Provider API, any mobile device connected to the Internet can retrieve live location data, if not by GPS, then by estimation from the connected local network.
The PhoneIoT app is capable of accessing all of these common sensors and more. If a sensor is not present on the device, is disabled, or otherwise blocked by app permissions, it is simply logically disabled as a target for IoT interactions through the NetsBlox interface. PhoneIoT continuously monitors data from all available sensors and makes it available to the NetsBlox server when requested by an authenticated student's program. The server also handles other specialized requests, such as GUI configuration and forwarding user interactions as messages to linked NetsBlox clients. Figure 1 visualizes this system architecture.
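To make this gating concrete, here is a minimal Python sketch of the idea of logically disabling unavailable sensors; the class, method, and sensor names are our own illustrative assumptions, not PhoneIoT's actual (mobile-native) implementation.

```python
# Illustrative sketch: sensors that are missing or blocked by permissions are
# logically disabled, so requests against them fail cleanly rather than
# returning stale data. All names here are hypothetical.
from typing import Callable, Dict, List, Optional

class SensorRegistry:
    def __init__(self) -> None:
        self._readers: Dict[str, Callable[[], List[float]]] = {}
        self._enabled: Dict[str, bool] = {}

    def register(self, name: str, reader: Callable[[], List[float]],
                 available: bool) -> None:
        self._readers[name] = reader
        self._enabled[name] = available

    def read(self, name: str) -> Optional[List[float]]:
        # An unavailable sensor is simply not a valid target for IoT requests.
        if not self._enabled.get(name, False):
            return None
        return self._readers[name]()

registry = SensorRegistry()
registry.register("accelerometer", lambda: [0.1, 0.0, 9.8], available=True)
registry.register("humidity", lambda: [45.0], available=False)  # absent sensor

print(registry.read("accelerometer"))  # [0.1, 0.0, 9.8]
print(registry.read("humidity"))       # None -> logically disabled
```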
Privacy
Due to its extensive access to live device sensors, PhoneIoT introduces a number of potential privacy issues, primarily due to exposing access to live location data, the camera, and the microphone over the Internet. These concerns become especially important given that minors will be using the app, as it is largely meant to be a K-12 educational tool. For instance, it would not be acceptable for someone to be unknowingly tracked or spied upon through the NetsBlox interface due to forgetting to close the app. Because of this, unless explicitly requested with the "run in background" setting in the menu, the app ceases communication with the server and rejects all incoming requests upon being put into the background (minimized) or when the device's display turns off, e.g., due to inactivity. As a further precaution, the app generates a random password which must be provided for any IoT interaction. For added security, this password is set to expire one day after generation (or upon user request), at which point a new random password is created, effectively cutting off any active connections. This one-day window is sufficient for most uses of the app while still providing necessary privacy guarantees.
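The sketch below models this password policy in Python; the token length and class names are assumptions made for illustration and do not reflect PhoneIoT's actual code.

```python
# Illustrative sketch of a randomly generated session password that expires
# one day after generation, at which point a new one is rolled.
import secrets
import time

EXPIRY_SECONDS = 24 * 60 * 60  # one day

class SessionPassword:
    def __init__(self) -> None:
        self.regenerate()

    def regenerate(self) -> None:
        # Rolling a fresh random password cuts off any active connections.
        self.value = secrets.token_hex(4)
        self.created = time.monotonic()

    def check(self, attempt: str) -> bool:
        if time.monotonic() - self.created > EXPIRY_SECONDS:
            self.regenerate()  # expired: reject and roll a new password
            return False
        return secrets.compare_digest(attempt, self.value)

pw = SessionPassword()
print(pw.check(pw.value))       # True while the password is still valid
print(pw.check("wrong-guess"))  # False
```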
The password and expiry behavior is sufficient to make location data reasonably secure, but the camera and microphone could still be problematic. To solve the microphone issue, PhoneIoT only exposes the current volume level, rather than the actual waveform/content. To solve the camera issue, only images stored in image displays (described in Section 3.3) are accessible; that is, the app does not allow a network request to take a new picture from the camera without user interaction. We believe these behaviors are sufficient to allow any K-12 audience to use the app while still affording them reasonable internet privacy.
Network Exchanges
As the app was meant to be used by young audiences in classroom settings, connecting PhoneIoT to a NetsBlox server deployment is very easy, only requiring a single button press from the app menu, which is already open when the app is started. Once pressed, the app connects to the server, announces its presence as an IoT device, provides the server with a unique identifier for further communications, and begins accepting network requests forwarded from the server.
The targeted server address is displayed as a URL in the app menu; this defaults to the primary NetsBlox server deployment but can be configured to any address. This is especially important in some classrooms around the world where a stable high-speed internet connection is not available, in which case a local deployment of NetsBlox can be used on the local network. The server address field is persistent across app restarts, so classrooms in these less-ideal circumstances would only have to configure their app settings once.
The UDP protocol was selected for PhoneIoT's server interactions both for speed and because PhoneIoT's data exchange model is already packet-based, making UDP a more natural model than streaming protocols such as TCP. Although UDP has the potential issue of dropping packets, for our purposes, this is actually desirable due to providing real-world lessons on error-handling in fallible network transactions. For instance, an early project for students could be to make robust wrappers for some PhoneIoT functions by repeating the operation until it succeeds.
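A student's "robust wrapper" might look like the following, transliterated from blocks into Python for brevity; the simulated RPC and its drop rate are invented for the example.

```python
# Sketch of the retry-until-success exercise: wrap a fallible request so that
# occasional dropped packets do not crash the program.
import random

def flaky_rpc() -> list:
    # Stand-in for a PhoneIoT RPC over UDP; about one call in five "drops".
    if random.random() < 0.2:
        raise TimeoutError("request packet dropped")
    return [0.0, 0.1, 9.8]

def robust_call(rpc, max_attempts: int = 5):
    for attempt in range(1, max_attempts + 1):
        try:
            return rpc()
        except TimeoutError:
            if attempt == max_attempts:
                raise  # give up after a bounded number of retries

print(robust_call(flaky_rpc))
```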
The networking primitives used by the NetsBlox side of PhoneIoT are composed of "messages", the same concept used throughout NetsBlox. In essence, a "message" is a structured block of data that is identified by name and has a set of fields associated with it. Messages can be sent with the "send msg" block and received (typically on a different computer) with a "when I receive" block. As an example, there is a default message type called "message" which has a single field called "msg". Figure 2 shows a simple example of how to send and receive a message of this type. PhoneIoT provides two primary ways of accessing sensor data: polling for instantaneous values through explicit RPC requests, or streaming up-to-date values by registering a message type as a sensor update event that the device will send periodically based on the requested update interval. The explicit request style is similar to other pre-existing networking APIs in NetsBlox, and thus is a good introductory point for using PhoneIoT. However, in practice, many real-world IoT devices are accessed by continuous data streams, so lesson plans involving PhoneIoT quickly transition to this method.
Because PhoneIoT has the chance of dropped packets due to using the UDP protocol, performing polling in a loop can result in an occasional error, depending on the network connection; this would have to be checked by students to avoid bugs in their code. Thus, streaming access can be introduced naturally as a more elegant solution, removing the need for both the loop and the error-checking code, as well as halving latency by inverting the problem and instructing the PhoneIoT device to send periodic update messages to the student's project, rather than the project having to request each one. In this way, dropped packets go from causing errors to being merely absent update messages, which the student's project ignores without issue. Figure 3 shows example code which registers for and receives sensor updates from the accelerometer every 100 ms. From examining Figure 3, one detail that may not be clear is how to know the name of the sensor to listen to, in this case "accelerometer". For the most part, the names of the various sensors are identical to their polling RPCs; for instance, the "getLinearAcceleration" RPC has a matching sensor name of "linearAcceleration". However, this is not always the case, in particular for soft sensors that PhoneIoT adds on top of the existing hardware sensors; for example, the "getCompassHeading" RPC's sensor is in fact "orientation" (matching the "getOrientation" RPC). This information is available to users through the RPC help menu, which can be accessed by right-clicking on a "call" block configured to, e.g., a polling RPC and selecting "help". Figure 4 shows the help information for the "getCompassHeading" (polling) RPC, which displays the name of the sensor/message type for the streaming access method, as well as the available fields that can be received on each update.
Custom GUI Controls
A key feature of PhoneIoT is its customizable interactive display. The static GUI for the main screen of the PhoneIoT app is intentionally minimalistic, containing only a button to toggle the pull-out app menu that contains all other static controls. Importantly, the menu is where the device's id and password are shown, as well as the controls for connecting to the NetsBlox server; this menu is shown in Figure 5a. When the menu is closed, the entirety of the screen, aside from the menu toggle button, is a single blank canvas which can be populated with content via various RPCs from the user's program. PhoneIoT supports many standard GUI control types, such as labels, buttons, text fields, image displays, and toggle switches, as well as some controls tailored for designing game controllers, such as virtual joysticks and touchpads. Each of these controls has fully customizable text content, location, size, color, orientation, and several other options depending on the specific control. A non-exhaustive example of custom control types is given in Figure 5b,c.
An important consideration when designing PhoneIoT was to ensure that projects could be easily shared between students, who are potentially using different devices. Unfortunately, all of these different devices have different screen resolutions. To counteract this, PhoneIoT uses a relative, percent-based scale for specifying the x/y position and the width/height of controls. Specifically, PhoneIoT uses the standard GUI coordinate layout where (0, 0) is the top left corner of the canvas and (100, 100) is the bottom right. Additionally, the app automatically scales fonts depending on the DPI of the display, which allows fonts to be approximately the same size on all screens. These simple accommodations result in a coordinate system that is easy for students to use and is roughly invariant of the specific device display being used (up to mostly-minor aspect ratio stretching).
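The mapping itself is simple enough to state in a few lines; the following Python sketch is our own illustration of the scaling PhoneIoT performs internally.

```python
# Percent-based layout: positions are given on a 0-100 scale and mapped to
# device pixels at render time, so one layout fits screens of any resolution.
def percent_to_pixels(x_pct: float, y_pct: float,
                      screen_w: int, screen_h: int) -> tuple:
    # (0, 0) is the top-left corner of the canvas, (100, 100) the bottom-right.
    return (round(x_pct / 100 * screen_w), round(y_pct / 100 * screen_h))

# The same "centered" control lands mid-screen on two very different devices.
print(percent_to_pixels(50, 50, 1080, 2340))  # (540, 1170)
print(percent_to_pixels(50, 50, 750, 1334))   # (375, 667)
```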
Beyond displaying custom content on the device, many of these control types are intended to receive input from the user, such as button presses and joystick movement. To support this, the various RPCs that are used to add controls to the device accept an optional configuration setting called "event", which is the name of a message type that the device will send when a user interacts with the control. For instance, button events are triggered when pressed, text fields are triggered when the text is modified and submitted (i.e., there is not a separate update for each keystroke), image displays are triggered when a new image is saved in them from the camera (which can only be done manually by the user; see Section 3.1), and joysticks/touchpads send an event when initially touched, continuously while moving (currently throttled to 10 Hz), and when released. Figure 6 gives an example of creating a joystick control with an event called "joyMoved" that is triggered each time the stick is moved and displays the x/y position of the stick on the NetsBlox stage. Message names for GUI events are customizable, but each type of control sends a different pre-determined set of values that users can receive by including them as fields on the message type. For instance, text field events include the new text, joystick/touchpad events include the x/y position, as well as a "tag" field that is one of "down", "move", or "up" to determine the type of interaction, and so on. Additionally, all events send the ID of the control (which can also be obtained by the return value of the RPC used to create the control) and the device ID on which the control was located (to differentiate controls on projects that configure multiple phones with the same GUI layout, such as a quiz game or group chat app). The specific information concerning what fields are available for each type of control can be found in the detailed documentation that is linked at the bottom of the (basic) RPC help menu. An example of this full, detailed documentation for the "addJoystick" RPC is shown in Figure A1.
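To make the event flow concrete, the sketch below mimics in Python how a handler for a "joyMoved" message might be dispatched by message type; the field names follow the description above, while the dispatch mechanics are invented for illustration (NetsBlox itself uses "when I receive" blocks).

```python
# Illustrative event dispatch: GUI events arrive as named messages whose
# fields carry the interaction data plus the control and device IDs.
from typing import Callable, Dict

handlers: Dict[str, Callable[[dict], None]] = {}

def on_message(msg_type: str):
    def register(fn: Callable[[dict], None]):
        handlers[msg_type] = fn
        return fn
    return register

@on_message("joyMoved")
def handle_joystick(fields: dict) -> None:
    # Joystick events carry x/y plus a tag of "down", "move", or "up".
    print(f"stick at ({fields['x']:.2f}, {fields['y']:.2f}) [{fields['tag']}]")

def deliver(msg_type: str, fields: dict) -> None:
    if msg_type in handlers:
        handlers[msg_type](fields)

deliver("joyMoved", {"x": 0.3, "y": -0.7, "tag": "move",
                     "id": "joystick-1", "device": "phone-abc123"})
```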
When a control is created, it is automatically assigned a control ID, which is returned by the RPC that was used to add the control (but if not needed, can be discarded by using a "run" block rather than a rounded "call" block). This control ID can then be used after creation to get and set state information about the specific control, or to delete the control while leaving all others intact. For instance, controls that contain text (e.g., labels, text fields, and buttons) can be used with the "getText" and "setText" RPCs, image displays with "getImage" and "setImage", and toggle-based controls with the "getToggleState" and "setToggleState" RPCs. This type of dynamic update and query behavior after control creation is vital, and can be used to perform tasks such as real-time updating of information displayed on the device screen.
This interactive component is important for teaching IoT to younger K-12 audiences because it immediately gives the students a useful tool related to things they already know, such as game controllers or content sharing with text/image displays. Due to how important phones are to today's youth, introducing them to new ways of engaging with and controlling their devices can be especially motivating for continued interest in CS topics. The networking and IoT components are added to this to provide even more functionality and to teach the concepts to an already eager audience as a "side effect".
Example Projects
This section will cover several example projects to demonstrate how phone-based IoT through the NetsBlox platform enables powerful applications with very little code or specialized knowledge.
GPS Tracker
The NetsBlox platform already supports many online services, one of which is Google Maps. With this service, a program can obtain and display a map of the current location, specified by latitude and longitude, or obtain the screen position of a latitude and longitude point on the map and vice versa. By reading live GPS data from a mobile device running PhoneIoT, it is possible to track the location of the device on a map and use NetsBlox's built-in drawing utilities (inherited from Snap!) to plot the course. Thus far, this has all been performed on the NetsBlox client (for user logic and drawing) and the NetsBlox server (for performing API requests), with the device running PhoneIoT only being used as a sensor. However, by using PhoneIoT's custom GUI elements, we can add an image display to the screen and send periodic updates to the mobile device. Essentially, this creates a stripped-down form of the Google Maps front-end that can be built in under ten blocks. See Figure 7 for the blocks that set up the display, Figure 8 for the update logic running in NetsBlox, and Figure 5c for the custom GUI shown on the mobile app.
Figure 8. Location message handler. It reads live GPS data from the mobile device, plots the track on a map on the stage, and sends the map/track back as an image to the device. The "add point" custom block (function) consumes the location sequence and updates the total "distance" variable using a Google Maps RPC to get the distance between the current and previous locations.
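For readers unfamiliar with block code, the following Python sketch captures the "add point" bookkeeping; the project itself uses a Google Maps RPC for the distance, so the haversine computation here is an assumption made only to keep the sketch self-contained.

```python
# Sketch of track accumulation: each GPS update extends the track and adds
# the leg distance to a running total.
import math

track: list = []
distance_m: float = 0.0

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def add_point(lat: float, lon: float) -> None:
    global distance_m
    if track:
        distance_m += haversine_m(*track[-1], lat, lon)
    track.append((lat, lon))

add_point(36.1447, -86.8027)  # hypothetical GPS updates
add_point(36.1460, -86.8010)
print(f"{distance_m:.1f} m over {len(track)} points")
```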
Accelerometer Plotter
A common topic in introductory IoT is analyzing a live data stream coming from a device. We have already seen that receiving a data stream from PhoneIoT is as simple as one RPC call, followed by listening for NetsBlox messages. Once the data are received, students can perform whatever analysis is needed and output results to their NetsBlox client display. A simple project that could be conducted on a student's first day of working with PhoneIoT is to receive live accelerometer data and plot its x, y, and z components. This can be done with the Chart service, a pre-existing NetsBlox service for generating graphs from data points. Figure 9 shows the code required to do this, as well as the NetsBlox display (running on the student's computer) after the phone was picked up from rest, rotated slowly around one axis, then another, and finally dropped onto a pillow.
Figure 9. Code running the accelerometer plotter project. The "add sample" custom block (function) constructs three lists (xvals, yvals, zvals). The "update display" script is only shown for the x sprite.
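The per-axis bookkeeping behind the "add sample" block can be sketched in a few lines of Python; the window size and sample values below are invented for illustration.

```python
# Sketch of the plotter's data flow: each streamed sample is split into three
# per-axis series, which a charting service would then graph.
from collections import deque

WINDOW = 200  # keep only the most recent 200 samples per axis
xvals, yvals, zvals = (deque(maxlen=WINDOW) for _ in range(3))

def add_sample(x: float, y: float, z: float) -> None:
    xvals.append(x)
    yvals.append(y)
    zvals.append(z)

# Hypothetical accelerometer updates arriving every 100 ms.
for sample in [(0.0, 0.1, 9.8), (0.3, 0.1, 9.7), (1.2, -0.4, 8.9)]:
    add_sample(*sample)

print(list(xvals), list(yvals), list(zvals))
```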
Robot Remote Controller
One of the many services supported by NetsBlox is RoboScape, which allows students to interact with physical robots that can be shared in a classroom setting. Students can send commands to the robots via RPCs, and can asynchronously receive messages from robots based on basic sensor input or other events. The interfaces for these features are very similar to PhoneIoT (in fact, PhoneIoT copied this interaction pattern due to its proven success with students [5]), with commands being issued by normal RPCs that are proxied through the NetsBlox server and event-based messages being sent back from the device asynchronously after registering to receive them with the "listen" RPC. A later development was RoboScape Online [12], which is a fully virtual environment embedded in NetsBlox as an extension; it gives access to virtual robots that connect into the same existing RoboScape networking layer and so can be controlled by the very same student programs that would be used with their physical counterparts. RoboScape Online makes it possible to give students access to any number of robots (i.e., classroom sharing is no longer required), works well in virtual classroom settings, and offers unique motivating features such as automated scoring for challenges and access to virtual simulations of many different types and combinations of sensors (e.g., LiDAR, radiation sensor, light sensor, GPS, or compass). See Figure 10 for an example of the RoboScape Online environment.
Both PhoneIoT and RoboScape Online were included in the IoT version of two 1-week Computer Science Frontiers (CSF) camps. On the first day of the camps, students were introduced to NetsBlox basics, such as control flow, lists, loops, RPCs, and message passing. The following two days introduced RoboScape Online and had the students complete several increasingly complex robotics challenges, with a focus on autonomous control based on various virtual sensors. The following day introduced PhoneIoT, which was made easier by students already having become familiar with the same interaction schemes through RoboScape and RoboScape Online. Finally, the week concluded with two projects which had students turn their phones into custom robot remote controllers by combining RoboScape with PhoneIoT's interactive GUI and sensory features. Here, we present one such controller that students could create. The concept for this controller is to have a "throttle" slider on the phone screen that can be used to control the robot's speed in real time; meanwhile, orienting the robot in a particular direction will be done via the phone's internal compass using the orientation sensor. The only RoboScape RPC we will need is "send", which allows us to issue simple text commands to the robot; e.g., we can control movement by sending the command "set speed L R" where "L" and "R" are the speeds for the left and right wheels, respectively, which are integers between -128 and 128. To match the robot's virtual heading to the phone's real heading, the only additional piece of information required is querying the robot's heading. This can be done with the PositionSensor service's "getHeading" RPC; all RoboScape Online sensors that were not present in RoboScape proper are hosted as dynamically-created "Community" services such as PositionSensor. Figure 11 contains all the code needed to create the throttle slider, listen to updates from the phone's orientation sensor, continuously turn the robot to face the desired direction, and proceed forward to the target with the desired speed; also included is an image of the configured GUI on the phone screen. Note that, although this would be a complex robotics task to perform from scratch, PhoneIoT and RoboScape Online provide sufficient abstractions that students can create these advanced behaviors with only a handful of blocks.
Figure 11. An example project that acts as a custom robot remote controller using PhoneIoT.
(a) PhoneIoT code to set up the speed slider and start receiving target headings from the phone's orientation/compass sensor; (b) RoboScape code to match a target heading and advance forward when facing the correct way; (c) the PhoneIoT app with its configured interface for throttle control.
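For readers who prefer text to blocks, this Python sketch restates the controller logic of Figure 11. The "set speed L R" command format comes from RoboScape as described above; the proportional gain and the 10-degree alignment threshold are illustrative choices, not values taken from the student projects.

```python
# Sketch of the remote controller: turn in place toward the phone's compass
# heading, then drive forward at the throttle slider's speed once aligned.
def wheel_speeds(robot_heading: float, target_heading: float,
                 throttle: int) -> tuple:
    # Signed heading error wrapped into (-180, 180].
    error = (target_heading - robot_heading + 180) % 360 - 180
    if abs(error) > 10:  # not yet facing the target: turn in place
        turn = max(-128, min(128, int(2 * error)))  # simple proportional gain
        return (turn, -turn)
    return (throttle, throttle)  # aligned: proceed at the slider's speed

left, right = wheel_speeds(robot_heading=90.0, target_heading=135.0, throttle=80)
print(f"set speed {left} {right}")  # command text issued via the "send" RPC
```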
Use with Students
PhoneIoT is still relatively new and has not yet been thoroughly evaluated with students. However, we were able to include some of its curriculum in two 1-week CSF camps (as described in Section 4.3), as well as the second week of a 2-week fully-remote cybersecurity camp involving 17 high school students with no required previous programming experience. The first week of the camp consisted of an introduction to NetsBlox basics such as RPCs and message passing. This was primarily used to control 3D simulated robots in RoboScape Online and perform various manual and autonomous robotics challenges. During week two, students were introduced to PhoneIoT through several projects, including an "Avoid the Holes" game, which used sensor streaming with the accelerometer to apply a tilting-based force to a ball sprite rolling through a maze of holes. Students were later introduced to the interactive graphical display by implementing the GPS Tracker project shown in Section 4.1. The remaining three days of camp consisted of fusing PhoneIoT and RoboScape by converting their phones into custom robot controllers, an example of which was provided in Section 4.3. We note that the specific robot controller example given in Section 4.3 was not a student project; however, its complexity is comparable to other robotics tasks that students completed. All students successfully implemented the projects involving PhoneIoT; additionally, students seemed to pick up PhoneIoT relatively quickly, which could be explained by the fact that PhoneIoT's NetsBlox interfaces are very similar to RoboScape's.
The camp included pre- and post-tests; however, the questions primarily gauged student interest in various aspects of CS, and none of the questions were specifically about PhoneIoT. The results of the study included increased scores for student interest in computer networking, data analysis for scientific issues, desire to use CS in their careers, confidence in CS, desire to pursue CS in college, and several other similar categories. However, the sample size was small (17), and the number of post-test respondents was even lower (10). Coupled with the small sample size and the lack of questions specifically targeting PhoneIoT, it is unclear from this study what effect PhoneIoT alone had on the results. However, in a few free response portions, several students expressed that they enjoyed PhoneIoT, with over half of respondents (6) including PhoneIoT projects in their top three most useful projects of the camp. For these reasons, we intend to pursue another study with a new pre-/post-test including questions specifically concerning PhoneIoT to better gauge its effectiveness with students.
CSF Course
The CSF camps previously mentioned have been 1- and 2-week studies; these are effectively condensed versions of the existing CSF curriculum, which is quite extensive in its entirety. In fact, it has recently received approval to become a full-fledged high school course and is being piloted at Martin Luther King Jr. Magnet High School in Nashville, Tennessee. The course consists of four 9-week modules: Distributed Computing; IoT and Cybersecurity (including PhoneIoT and RoboScape Online, among other topics); Artificial Intelligence and Machine Learning; and Software Engineering. The full range of curriculum is beyond the scope of this paper, but we will overview the IoT and Cybersecurity module, which is largely hands-on, with students solving challenges or creating original applications using what they have learned in previous lessons/projects.
In the first week of the module, students are introduced to the fundamental concepts of IoT through the use of several tools such as ThingSpeak, which provides unified access to many different types of IoT sensors around the world [13]. The following two weeks introduce PhoneIoT, with the first week consisting of sensor-based projects and the second week being GUI-focused. Some of the PhoneIoT projects that are covered are similar to those seen in Section 4, though they are typically more in-depth versions with additional features and/or creative extensions added by students as part of open-ended programming projects. The next two weeks focus on RoboScape Online with various manual and autonomous tasks, including its fusion with PhoneIoT for creating custom remote controllers (see Section 4.3). The next two weeks cover several cybersecurity topics with RoboScape, which are beyond the scope of this paper. Finally, the last 1-2 weeks of the module (depending on pacing) are reserved for extended individual or team projects; these are meant to be large creative projects that consolidate all the content learned up to that point and give students more time and creative liberty than the other, more focused open-ended tasks throughout the module.
Conclusions
In this brief overview, we have shown that PhoneIoT is a low-cost method for allowing K-12 students to access device sensors for learning the key concepts of networked sensing. These concepts include API requests in fallible conditions, methods for handling failures, sensor data processing, event-based programming via message passing, and potentially many other topics depending on usage (e.g., error mitigation for noisy sensor modalities such as GPS location). Additionally, the custom display on the phone allows students to come up with novel ways to interact with their code running on NetsBlox. We believe students will find PhoneIoT an enjoyable educational tool that will allow them to envision and create innovative distributed applications. Along the way, they will learn important cutting-edge computing concepts rarely taught in K-12 today.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Figure A1. Example of the detailed documentation page for the "addTouchpad" RPC. The detailed documentation contains hierarchically organized information such as this for all NetsBlox services and RPCs. | 7,806.4 | 2022-12-01T00:00:00.000 | [
"Computer Science"
] |
User Adaptive and Context-Aware Smart Home Using Pervasive and Semantic Technologies
Ubiquitous computing is moving interaction away from the human-computer paradigm and towards the creation of smart environments that users and things, from the IoT perspective, interact with. User modeling and adaptation remain consistently present, with the human user as a constant, but pervasive interaction introduces the need to incorporate context, leading towards context-aware smart environments. The current article discusses both aspects: user modeling and adaptation as well as context awareness and incorporation in the smart home domain. Users are modeled as fuzzy personas, and these models are semantically related. Context information is collected via sensors and corresponds to various aspects of the pervasive interaction, such as temperature and humidity, but also smart city sensors and services. This context information enhances the smart home environment via the incorporation of user-defined home rules. Semantic Web technologies support the knowledge representation of this ecosystem, while the overall architecture has been experimentally verified using input from the SmartSantander smart city and applying it to the SandS smart home within the FIRE and FIWARE frameworks.
Introduction
Although in their initial definition and development stages pervasive computing practices did not necessarily rely on the use of the Internet, current trends show the emergence of many convergence points with the Internet of Things (IoT) paradigm, where objects are identified as Internet resources and can be accessed and utilized as such. At the same time, the Human-Computer Interaction (HCI) paradigm in the domain of domotics has widened its scope considerably, placing the human inhabitant in a pervasive environment and in continuous interaction with smart objects and appliances. Smart homes that additionally adhere to the IoT approach consider that the data continuously produced by appliances, sensors, and humans can be processed and assessed collaboratively, remotely, and even socially. In the present paper, we try to build a new knowledge representation framework in which we first place the human user at the center of this interaction. We then propose to break down the multitude of possible user behaviors into a few prototypical user models and then to resynthesize them using fuzzy reasoning. Then, we discuss the ubiquity of context information in relation to the user and the difficulty of proposing a universal formalization framework for the open world. We show that, by restricting user-related context to the smart home environment, we can reliably define simple rule structures that correlate specific sensor input data and user actions and that can be used to trigger arbitrary smart home events. This rationale is then evolved into a higher-level semantic representation of the domotic ecosystem in which complex home rules can be defined using Semantic Web technologies.
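As a concrete illustration of such simple rule structures, the Python sketch below correlates sensor readings with triggered smart home events; the sensor names, thresholds, and actions are invented for the example, and the actual system expresses such rules with Semantic Web technologies rather than imperative code.

```python
# Illustrative home rules: each rule pairs a condition over the sensed context
# with a smart home event to trigger.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class HomeRule:
    condition: Callable[[Dict[str, float]], bool]
    action: str

rules = [
    HomeRule(lambda ctx: ctx["indoor_temp_c"] > 26.0, "turn_on_cooling"),
    HomeRule(lambda ctx: ctx["city_air_quality_idx"] > 150, "close_windows"),
]

def evaluate(context: Dict[str, float]) -> List[str]:
    # The context may mix home sensors with smart city feeds.
    return [rule.action for rule in rules if rule.condition(context)]

print(evaluate({"indoor_temp_c": 27.5, "city_air_quality_idx": 180}))
# ['turn_on_cooling', 'close_windows']
```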
It is thus observed that a smart home using pervasive and semantic technologies, in which the human user is at the center of the interaction, has to be adaptive (its behavior can change in response to a person's actions and environment) and personalized (its behavior can be tailored to the user's needs and expressed using more advanced and complex home rules). In the case of smart homes, user acceptance has become one of the key factors that determine the success of the system. If the home system aims to be universally usable, it will have to accommodate a diverse set of users [1] and adjust to fulfill their needs in case they change. With the aim of helping practitioners improve their user modeling techniques, some researchers have established rules to follow, for example, the set of user modeling guidelines for adaptive interfaces created by [2]. A context-sensitive smart home should adapt dynamically to accommodate the needs of users, taking into account a wide range of users and context or behavior situations. This user-centric functioning of smart home systems has to be supported by an adequate user model. The intelligence and interface of the system have to be aware of the user's abilities and limitations in order to interact with the person properly. The user model must include information about the person's cognitive level and sensorial and physical disabilities.
To be more precise, a user model [3] is a computational representation of the information existent in a user's cognitive system, along with the processes acting on this information to produce an observable behavior. The user stereotype or persona is a quite common approach in UM due to its correlation with the actors and roles used in software engineering systems and its flexibility, extensibility, reusability, and applicability [4]. The "personas" concept was originally introduced by Cooper in [5], where, according to his definition, "personas are not real people, but they represent them throughout the design process. They are hypothetical archetypes of actual users." There are two different types of personas: primary personas, which represent the main target group, and secondary personas, which can use the primary personas' interfaces but have specific additional requirements [6, 7]. Even though personas are fictional characters, they need to be created with rigor and precision; they tell stories about potential users in ways that allow designers to understand them and what they really want. Characteristics like name, age, profession, or any other relevant information are given to each persona in order to make them look more realistic or "alive." The most accurate way of creating personas, also known as a "cast of characters," is to go through a phase of observation of real users within the environment in which the system will exist and eventually interview them with the intention of finding a common set of motivations, behaviors, and goals among the end-users. However, this method is expensive and time-consuming. A low-cost approach is to create them based on Norman's assumption personas [8], where designers use their own experience to identify the different user groups. Thus, in the same way, in our work, the personas technique fulfills the need of mapping and grouping a huge number of users based on profile data, aims, and behavior, which can be collected during both design time and run time (users and usage design, respectively).
Recently, the emergence of ubiquitous or pervasive computing technologies that offer "anytime, anywhere, anyone" computing by decoupling users from devices has introduced the challenge of context-aware user modeling. So far, most context-aware systems focus on the external context, known as physical context, which refers to context data collected by physical sensors. Thus, they involve context data of the physical environment: distance, temperature, sound, air pressure, lighting levels, and so forth. The external context is important and very useful for context-aware systems, as context-aware systems provide recommended services. However, from a broader scope, context may be considered as any information used to characterize the situation of an entity [9]. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the location, time, activities, and preferences of each entity. A user model is context-aware if it can express aspects of the user's contextual information and subsequently help the system adapt its functionality to the context of use. Many aspects of contextual information used in modeling are discussed in [10, 11]. Nevertheless, to provide personalized services according to user preferences, task, and emotional state, cognitive domains such as situational monitoring are needed; so far, few authors have addressed utilizing the cognitive elements of a user's context and the semantics of the relations between the user and the system's entities. Several researchers have proposed models to capture the internal elements of context. Our proposed model differs from many of the previous approaches, as it focuses on extracting a user's cognitive activities, rather than extracting the user's movement based on the physical environment. Cognitive context information, given through a semantic formalization, is key information for satisfying users by providing personalized context-aware computing services.
The semantic formalization idea is to provide a functional ontological and reasoning platform that offers unified data access, processing, and services on top of the existing IoT-A ubiquitous services and to integrate heterogeneous home sensors and actuators in a uniform way. From an application perspective, a set of basic services encapsulates the sensor and actuator network infrastructures, hiding the underlying layers with the network communication details, heterogeneous sensor hardware, and lower-level protocols. A heterogeneous networking environment indeed calls for means to hide the complexity from the end-user as well as from applications by providing intelligent and adaptable connectivity services, thus providing an efficient application development framework. Thus, to face the coexistence of many heterogeneous sets of things and home appliances, a common trend in IoT applications is the adoption of an abstraction layer capable of harmonizing access to the different devices with a common language and procedure [12]. Our approach is to further encapsulate this abstraction layer into "if this, then that" rule sets and then into OWL ontologies that, combined with home rules defined in the Semantic Web Rule Language (SWRL), form the domotic intelligence that continuously adapts home environment conditions to the user's actions and preferences.
The scope of most applications or services with respect to smart homes has so far focused on small regions like a laboratory, school, hospital, smart room, and so forth. Furthermore, algorithmic and strategic models for gaining revenue by using context-aware systems are very few. Additionally, technologies related to context-aware systems are barely standardized. The architecture, the context modeling method, the algorithm, and the network implementation as well as the devices of users differ in each project. Moreover, middleware, applications, and services make use of different levels of context and adapt the way they behave based on the current context. Therefore, according to the level and type of context along with the goal of the context-aware system, the context modeling process, the inference algorithm, and the interaction method of personas (humans known as personas for computational representation purposes) change. Although the interaction between personas and the cooperation between components of the same architecture have been investigated, standard interaction, cooperation, and operation across different context-aware systems have not been studied. Thus, the novelty of our proposed approach is to provide a common context-aware architecture system in which the user (the "eahouker" in SandS) is able to control his household appliances in a collective way via the SNS (Social Network Service) and in an intelligent way via the adaptive social network intelligence. As our system is human-centered, the UM (user modeling) is related to the user's activity inside the ESN (Eahoukers Social Network), while the context-aware environment refers to the contextual information that characterizes the situation and conditions of the system's entities.
Finally, the modeling of the contextual information is completed through the capture of the semantics of the relationships between the user and the various entities of the ecosystem (other users, appliances, and recipes) to further improve the overall user experience. The semantic description framework of our proposed approach is based on a number of home rules that are defined for a specific household and eahouker. Since the SandS architecture consists of two layers, high and low, respectively, we have on the one hand recipes for common household tasks, produced and exchanged in the SandS Social Network, that are described in near-natural language. On the other hand, we have every user's context, which consists of the actual appliances that the user has in the house with their particular characteristics (type, model, brand, etc.). Finally, to ensure the executability and compatibility of a recipe and to deal as well with any uncertainty and vagueness in modeling the contextual information, a number of axioms, enforcing constraints on all objects (things in the IoT paradigm) of the ecosystem, have been introduced in the Web Ontology Language (OWL) representation that was adopted. To conclude, experimental results for the above framework are presented, which have been conducted inside the "Social & Smart" (SandS) [13] FP7 European Project, which aims to highlight the potential of IoT technologies in a concrete user-centric domestic pervasive environment. Large-scale experiments are planned at SmartSantander [14], a city-scale experimental research facility in support of typical applications and services for a smart city, comprising a very large number of online ambient sensors inside a real-life human environment.
Related Work.
As correctly stated in [15], user modeling is the process through which systems gather information and knowledge about users and their individual characteristics. Therefore, a user model is considered a source of information about the user of the system which contains several assumptions about relevant behavior or adaptation data. Approaching user modeling from the HCI perspective, there is the potential that user modeling techniques will improve the collaborative nature of human-computer systems. During the last 20 years, a lot of work has been done in this area, with authors attempting to cover all possible scenarios through the development of different definitions of users and user modeling approaches, respectively.
Reviewing how "user models" term has been approached, within the HCI literature, it is indicated that users are part of an enlarged communication group in which users change through time and according to the environmental conditions and the experience they gain.Thus, in the end, there are three types of users: "novel," "intermediate," and "expert" [15].Another more oriented work is that of [16], as it focuses on the specific group of elderly people with none, one, or more than one disability, whose needs and capabilities change as they grow older, underlying the need for having more diverse and dynamic computing systems for modeling users.A few years later, in terms of maintaining rich and adaptive output information, ontology-based approaches have been used for the design of the Ec(h)o audio reality system for museums to further support experience design and functionality related to museum visits, through user models.This work has been later extended [17] by incorporating rich contextual information such as social, cultural, historical, and psychological factors related to the user experience.
Within the area of multimedia content, the work presented in [18] is the first to introduce a triple-layered sensation-perception-emotion user model to evaluate the experience in a video scenario. In this work, low-level characteristics such as light variation are combined with the knowing and learning cognition process and emotions for entertainment product designs. In a similar way, in [19], the authors consider four crucial parameters for the interaction between people and technology: the user, the product, the contextual environment, and the tasks that specify the interaction process.
Based on ontology approaches to characterize users' capabilities within adaptive environments, in 2007 the GUMO ontology was proposed [20], which takes into account the emotional state, the personality, and the physiological state of the user, and particularly stress. Five years later, Evers and his colleagues [21] implemented an automatic and self-sufficient adaptation interface to measure the user's stress levels. Finally, in 2004, research in user modeling had already started to shift its focus from users' capabilities to users' needs. This line of work incorporated the "persona" concept [22], which was introduced to distinguish between different user groups within an adaptive user interface domain. These "persona" concepts have proved really useful, as a wide range of potential users can be covered by assigning values to characteristics like age, education, profession, family conditions, and so forth. It is thus observed that, from product design to multimedia and user interface adaptation, the approaches described above share the same goal, even though the personal data characteristics collected to improve the system, the user's satisfaction, and the product or service usability differ a lot. For a more extended review, the reader is directed to [23].
Typically, a user model represents a collection of personal data associated with a specific user of a system. Following a similar definition, a user model [3] is a computational representation of the information existent in a user's cognitive system, along with the processes acting on this information to produce an observable behavior. Thus, the act of user modeling identifies the users of the application and their goals for interacting with the application. As a result, a user model is considered to be the foundation of any adaptive changes to the system's behavior. The main question to answer when dealing with this kind of information is which data is included in the model; as expected, the type of data used depends on the purpose of each application and the domain where the latter is applied. A user model can in principle include personal information, such as users' names and ages, their interests, their skills and knowledge, their goals and plans, their preferences, and their dislikes, or data about their behavior and their interactions with the system.
As one may expect, there are also different design patterns for user models, though often a mixture of them is used [24]. In an attempt to describe a system's users in the most relevant way, one may start from the humble "actor," which provides a common name for a user type. In use case modeling, actors are people who interact with the system, and they are often described using job titles or a common name for the type of user. On the other hand, a "role" names a relationship between a user type and a process or a software tool. A user role generally refers to a user's responsibility when using an application or participating in a business process. To help us understand the characteristics of our users that might have a bearing on our design, we may then construct a "profile," containing information about the type of user relevant to the application being created. Still, user profiles contain general characteristics about groups of users. The user stereotype or "persona" is a quite common approach in UM due to its correlation with the actors and roles used in software engineering systems and its flexibility, extensibility, reusability, and applicability [4].
A persona is an archetypal user derived from specific profile data to create a representative user containing general characteristics about the users and user groups; it is used as a powerful complement to other usability methods, as it is more tangible, less ambiguous, easier to envision, and easier to empathize with. The use of personas is an increasingly popular way to customize, incorporate, and share research about users [25]. The personas technique fulfills the need of mapping and grouping a huge number of users based on profile data, aims, and behavior, which can be collected during both design and run time (users and usage design, respectively).
Personas development supports the design process by identifying and prioritizing the roles and user characteristics of a system's key audience. In the general case, personas development is initiated by introducing assumptions about user profiles, based on data from the initial research steps conducted. Through interviews and observation, researchers expand and validate the profiles by identifying goals, motivations, contextual influences, and typical user stories for each profile. Having such a fictional person (persona) representing a profile grounds the design effort in so-called "real users." For each persona, the user modeling description typically includes key attributes and user characteristics, such as name, age, and information that distinguishes each persona from others.
Basic Characteristics.
The herein proposed approach for modeling user information following a personas-based inspiration is discussed within this subsection. More specifically, according to the notation followed within our system, the so-called "eahouker profile" (P_e) is a set of properties of the system's users (the "eahoukers," e ∈ E) that can be exploited for determining eahoukers with similar characteristics. These properties are stored in a database, that is, the Eahoukers Social Network's Database (EDB), and are continuously updated. The profile contents are rather static in the sense that the information is present in the database when the eahouker joins the SandS system and seldom changes in everyday activities. The interested reader should at this point note that a quasistatic approach would have been more accurate, since a number of user attributes, like, for instance, a user's marital status and the number of children she/he may have, can change over time. Basic information about the user is also included in the profile and consists of gender, age, number of children, social status, and his/her house appliances and geographical position.
In a more formal manner, the profile of an eahouker e, denoted by P_e, contains the following information about the user: P_e = ⟨gender, age, children, city, houseRole, socialStatus⟩, where gender ∈ {male, female} is the gender of e, age ∈ ℕ is the age of e, children ∈ ℕ denotes the number of children of e, city is a string describing the city of e, houseRole ∈ {owner, junior, senior} is the house role of e, and socialStatus ∈ {single, married, young} corresponds to the marital status of e. With the above user profile definition at hand, the semantic description framework of the eahoukers can be directly interfaced and queried, but, more importantly, it enables us to define a personas-based user similarity measure. The latter is considered to outperform a traditional rating-based user similarity measure and is described in the following.
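For concreteness, the following is a minimal Python sketch of this profile structure; the class and attribute names are illustrative and are not part of the actual SandS implementation (the appliances field mirrors the extended tuples used in the use case example that follows).

```python
# A minimal sketch of the eahouker profile P_e; names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EahoukerProfile:
    gender: str           # "male" | "female"
    age: int
    children: int
    city: str
    house_role: str       # "owner" | "junior" | "senior"
    social_status: str    # "single" | "married" | "young"
    appliances: List[str] = field(default_factory=list)

# One of the example profiles from the use case below:
nikos = EahoukerProfile("male", 4, 0, "athens", "young", "single", ["UN55F6300"])
```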
As a last point to consider, and in order to further illustrate the herein proposed approach, we provide an example of a typical eahouker persona: the Papadopoulos family, composed of four family members, namely, the parents, John and Maria, and their children, Nikos and Ioanna. Their household is located in Athens, Greece, and it contains five smart household appliances:

(1) A Samsung 55" TV set, model UN55F6300
(2) An AEG washing machine, model AEG L60260
(3) A Nescafe coffee machine, model KP1006
(4) An LG refrigerator, model LFX31995ST
(5) A GE bread maker, model GE106732

Potential users are of course {John, Maria, Nikos, Ioanna}; however, rather obviously, Nikos and Ioanna are not allowed to interact directly with the above devices apart from the TV. Following the above notation, their profiles are modeled as follows:

(i) P_John = ⟨male, 37, 2, athens, owner, married, UN55F6300, AEGL60260, KP1006, LFX31995ST, GE106732⟩
(ii) P_Maria = ⟨female, 36, 2, athens, owner, married, UN55F6300, AEGL60260, KP1006, LFX31995ST, GE106732⟩
(iii) P_Nikos = ⟨male, 4, 0, athens, young, single, UN55F6300⟩
(iv) P_Ioanna = ⟨female, 1, 0, athens, young, single, UN55F6300⟩

2.3. Fuzzification.

Let us consider a set of eahoukers E that interact with information objects and a set of meanings M that can be found or referred to in items. Within our approach, each eahouker is described through a set of semantic entities that the eahouker has interest in to varying degrees. This interpretation provides fairly precise, expressive, and unified representational grounding, in which both user interests and content meaning are represented in the same space, in which they can be conveniently compared [26].
In addition, the use of ontologies for capturing knowledge from a domain of interest has grown significantly lately; thus, we also consider a domain ontology O herein. According to one of the core ideas of the Semantic Web, that is, that of sharing, linking, and reusing data from multiple sources, the availability of semantically described data sources and thus the uptake of Semantic Web technologies is important to applications in which rich domain descriptions can play a significant role. Still, considering the inherent complexity of a decent knowledge representation formalism (e.g., the Web Ontology Language (OWL) [27]), convincing domain experts and thus potential ontology authors of the usefulness and benefits of using ontologies is one of the major barriers to broader ontology adoption [28].
Efficient user model representation formalisms using ontologies [29, 30] present a number of advantages. In the context of this work, ontologies are suitable for expressing user modeling semantics in a formal, machine-processable representation. As an ontology is considered to be "a formal specification of a shared understanding of a domain," this formal specification is usually carried out using a subclass hierarchy with relationships among classes, where one can define complex class descriptions (e.g., in Description Logics (DLs) [29] or OWL).
As far as the relevant mathematical notation is concerned, given a universe X of eahoukers E, one may identify two distinct kinds of sets of concepts, namely, a crisp (i.e., nonfuzzy) set and a fuzzy set. A crisp set S of concepts on X may be described by a membership function μ_S : X → {0, 1}, and the actual crisp set may be defined as S = {c_i}, i = 1, ..., n. Quite similarly, a fuzzy set F on S may be described by a membership function μ_F : S → [0, 1]. We may describe the fuzzy set using the well-known sum notation for fuzzy sets introduced by Miyamoto [31] as

F = c_1/w_1 + c_2/w_2 + ⋅⋅⋅ + c_n/w_n,    (1)

where c_i ∈ S, n = |S| is the well-known cardinality of the crisp set S, and w_i = μ_F(c_i), or more simply w_i = F(c_i), is the membership degree of concept c_i ∈ S. Consequently, (1) for a concept c_i ∈ S can be written equivalently as F(c_i) = w_i. Apart from the above described set of concepts, we need to introduce and illustrate a set depicting potential relations between the aforementioned concepts. Thus, we introduce R to be the crisp set of fuzzy relations, defined as R = {R_j}, j = 1, ..., m, and discussed within Section 2.4.
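As a toy illustration of this notation, a fuzzy set can be sketched in Python as a mapping from concepts to membership degrees; the concept names below are our own and merely echo the running example.

```python
# A minimal sketch of a fuzzy set F over a crisp set of concepts S,
# following the sum notation above: each concept c_i is paired with its
# membership degree w_i = mu_F(c_i). Concept names are illustrative.
FuzzySet = dict  # concept -> membership degree in [0, 1]

interests: FuzzySet = {
    "washing_machine": 0.9,  # w_1
    "coffee_machine": 0.4,   # w_2
    "bread_maker": 0.1,      # w_3
}

def membership(f: FuzzySet, concept: str) -> float:
    """mu_F(c): the degree to which concept c belongs to fuzzy set f."""
    return f.get(concept, 0.0)

print(membership(interests, "coffee_machine"))  # -> 0.4
```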
Fuzzy Personas Similarity.
In order to define, extract, and use a set of concepts, we rely on the semantics of their fuzzy semantic relations. As discussed in Section 2.3, a fuzzy binary relation on S is defined as a function R_i : S × S → [0, 1], i = 1, ..., m. The inverse of a relation R_i(x, y), x, y ∈ S, is defined as R_i^{-1}(x, y) = R_i(y, x), following the prefix notation R(x, y) for fuzzy relations. The definitions of the intersection, union, and sup-t composition of any two fuzzy relations R_1 and R_2 on the same set of concepts S are given by

(R_1 ∩ R_2)(x, y) = t(R_1(x, y), R_2(x, y)),
(R_1 ∪ R_2)(x, y) = u(R_1(x, y), R_2(x, y)),
(R_1 ∘ R_2)(x, y) = sup_{z ∈ S} t(R_1(x, z), R_2(z, y)),

where t and u are a fuzzy t-norm and a fuzzy t-conorm, respectively. The standard t-norm and t-conorm are the min and max functions, respectively, but others may be used if considered more appropriate. The operation of the union of fuzzy relations can be generalized to m relations: if R_1, R_2, ..., R_m are fuzzy relations in S × S, then their union R is a relation defined in S × S such that, for all (x, y) ∈ S × S, R(x, y) = u_i(R_i(x, y)). A transitive closure of a relation is the smallest transitive relation that contains the original relation and has the fewest possible members. In general, the closure of a relation is the smallest extension of the relation that has a certain specific property such as reflexivity, symmetry, or transitivity, as the latter are defined in [32]. The sup-t transitive closure Tr_t(R) of a fuzzy relation R is formally given by

Tr_t(R) = ⋃_{k=1}^{∞} R^{(k)}, where R^{(k)} = R ∘ R^{(k-1)} and R^{(1)} = R.

Based on the relations R_i, we construct a combined relation T:

T = Tr_t(⋃_{i=1}^{m} R_i^{p_i}),    (6)

where the value of p_i is determined by the semantics of each relation R_i used in the construction of T. The latter may take one of three values, namely, p_i = 1, if the semantics of R_i imply it should be considered as is; p_i = -1, if the semantics of R_i imply its inverse should be considered; and p_i = 0, if the semantics of R_i do not allow its participation in the construction of the combined relation T. The transitive closure in (6) is required in order for T to be taxonomic, as the union of transitive relations is not necessarily transitive, independently of the fuzzy t-conorm used. In the above context, a fuzzy semantic relation defines, for each element s ∈ S, the fuzzy set of its ancestors and its descendants. For instance, if our knowledge states that "LG refrigerator" is produced before "Samsung TV" and "Nescafe coffee machine" is produced before "Samsung TV," it is not certain that it also states that "LG refrigerator" is produced before "Nescafe coffee machine." A transitive closure would correct this inconsistency.
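Under the standard min/max choices for t and u, these operations can be sketched in plain Python over relations stored as dictionaries; this is an illustrative reimplementation under those assumptions, not the paper's actual code.

```python
# An illustrative sketch of the fuzzy-relation operations above, with the
# standard min t-norm and max t-conorm. A relation is a dict mapping a
# (concept, concept) pair to a degree in [0, 1]; missing pairs default to 0.
from itertools import product

def t_norm(a, b):
    return min(a, b)

def t_conorm(a, b):
    return max(a, b)

def union(r1, r2, concepts):
    """(R1 u R2)(x, y) = u(R1(x, y), R2(x, y))."""
    return {(x, y): t_conorm(r1.get((x, y), 0.0), r2.get((x, y), 0.0))
            for x, y in product(concepts, repeat=2)}

def sup_t_compose(r1, r2, concepts):
    """(R1 o R2)(x, y) = sup_z t(R1(x, z), R2(z, y))."""
    return {(x, y): max(t_norm(r1.get((x, z), 0.0), r2.get((z, y), 0.0))
                        for z in concepts)
            for x, y in product(concepts, repeat=2)}

def transitive_closure(r, concepts):
    """Sup-t transitive closure: iterate R u (R o R) until a fixed point."""
    closure = dict(r)
    while True:
        step = union(closure, sup_t_compose(closure, closure, concepts), concepts)
        if step == closure:
            return closure
        closure = step

# Toy usage: a chain a -> b -> c gains the implied pair (a, c) in the closure.
concepts = {"fridge", "tv", "coffee_machine"}
r = {("fridge", "tv"): 0.8, ("tv", "coffee_machine"): 0.6}
tr = transitive_closure(r, concepts)
print(tr[("fridge", "coffee_machine")])  # -> 0.6, i.e., min(0.8, 0.6)
```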
Last but not least, a final thing to consider in our approach is the actual selection of meaningful relations for the production of the combined relation T. T has been generated with the help of fuzzy taxonomic relations, whose semantics are derived primarily from both the MPEG-7 standard and the specific user requirements. The utilized relations are summarized within Table 1. This approach is ideal for the user modeling interpretation followed herein because, when dealing with generic user information, focus is given to the semantics of high-level abstract concepts.
It is worth noticing that all relations depicted within Table 1 are traditionally defined as crisp relations. However, in this work, we consider them to be fuzzy, where fuzziness has the following meaning: high values of Bel(a, b), for instance, imply that the meaning of b approaches the meaning of a, while, as Bel(a, b) decreases, the meaning of b becomes narrower than the meaning of a. A similar meaning is given to the fuzziness of the rest of the semantic relations of Table 1 as well. Based on the fuzzy roles and semantic interpretations of these relations, it is easy to see that the combined relation T combines them in a straightforward and meaningful way, utilizing inverse functionality where it is semantically appropriate.
More specifically, in our implementation, relation T utilizes a subset of the relations summarized in Table 1. Relation T is of great importance, as it allows us to define, extract, and use contextual aspects of a set of concepts. All relations used for its generation are partial taxonomic relations, thus abandoning properties like synonymity. Still, this does not entail that their union is also antisymmetric. Quite the contrary, T may vary from being a partial taxonomic relation to being an equivalence relation. This is an important observation, as true semantic relations also fit in this range (total symmetricity as well as total antisymmetricity often has to be abandoned when modeling real-life relationships). Still, the taxonomic assumption and the semantics of the used individual relations, as well as our experiments, indicate that T is "almost" antisymmetric, and we may refer to it as ("almost") taxonomic. Considering the semantics of the relation, it is easy to realize that, when the concepts in a set are highly related to a common meaning, the context will have high degrees of membership for the concepts that represent this common meaning. Understanding the great importance of the latter observation, we plan to integrate such contextual aspects of user models in our future work.
As observed in Figure 1, the concepts household appliance and eahouker are the antecedents of the concepts household and appliance manufacturer in relation T, whereas the concept eahouker is the only antecedent of the concept recipe.
So far, and in compliance with the notion introduced in [33], the herein introduced fuzzy ontology contains both concepts and relations and may be formalized, using the crisp set of concepts S described by the ontology and the crisp set of fuzzy semantic relations amongst these concepts R, as O = ⟨S, R⟩. In order for us to provide a measure for the evaluation of the similarity between two eahoukers' profiles, we first need to establish an evaluation of similarity for each profile component. In the following, we define a set of functions {CS_i | 1 ≤ i ≤ size(P_e)}, one for each attribute of the eahouker's profile.
User Profile Similarity Functions
(i) Two eahoukers are considered identical with respect to gender, city, role in the house, and marital status if the corresponding attribute values are the same. This property is expressed through the crisp similarity functions CS_1, CS_4, CS_5, and CS_6, each of which equals 1 when the corresponding attribute values match and 0 otherwise.

Having introduced the functions for the evaluation of profile similarity, we can define a function that uses these evaluations to provide the level of similarity of two eahoukers. Let a_i denote the i-th attribute of a profile P. In addition, let P_u and P_v be the profiles of eahoukers u and v, respectively. The eahouker profile similarity function S is then defined as follows:

S(P_u, P_v) = (1/n) ∑_{i=1}^{n} CS_i(a_i^u, a_i^v),

where n is actually the cardinality of P (which equals six in the herein presented use case example).
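A possible rendering of S in Python is sketched below. The crisp indicator functions mirror CS_1, CS_4, CS_5, and CS_6; the normalized numeric similarities for age and children, and their scales, are our own assumption for illustration, since only the identity-based functions are spelled out above.

```python
# A sketch of the profile similarity S. Categorical attributes use crisp
# indicator functions (CS_1, CS_4, CS_5, CS_6); the numeric similarities
# and their scales are assumptions made for illustration.
from collections import namedtuple

# Mirrors the earlier profile sketch, minus the appliances field.
EahoukerProfile = namedtuple(
    "EahoukerProfile",
    ["gender", "age", "children", "city", "house_role", "social_status"])

def crisp_sim(a, b) -> float:
    return 1.0 if a == b else 0.0

def numeric_sim(a: float, b: float, scale: float) -> float:
    return max(0.0, 1.0 - abs(a - b) / scale)

def profile_similarity(p: EahoukerProfile, q: EahoukerProfile) -> float:
    scores = [
        crisp_sim(p.gender, q.gender),                # CS_1
        numeric_sim(p.age, q.age, 100.0),             # assumed
        numeric_sim(p.children, q.children, 10.0),    # assumed
        crisp_sim(p.city, q.city),                    # CS_4
        crisp_sim(p.house_role, q.house_role),        # CS_5
        crisp_sim(p.social_status, q.social_status),  # CS_6
    ]
    return sum(scores) / len(scores)  # n = 6 attributes

john = EahoukerProfile("male", 37, 2, "athens", "owner", "married")
maria = EahoukerProfile("female", 36, 2, "athens", "owner", "married")
print(round(profile_similarity(john, maria), 3))  # -> 0.832
```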
Context
3.1. Related Work.

Filling a home with sensors and controlling devices by a computer are nowadays not only possible but also common. Sensors are available off the shelf which localize movement in the home, provide readings for light and temperature levels, and monitor usage of doors, phones, and appliances. Small, inexpensive sensors are attached to objects not only to register their presence but also to record histories of recent social interactions [34].
Social interaction is an aspect of our daily life; social signals have long been recognized as important for establishing relationships, but only with the introduction of sensed environments have researchers become able to monitor these signals. Hence, it is possible to look at socialization within smart homes and cities (such as entertaining guests, interacting with residents, or making phone calls) and examine the correlation between socialization parameters and productivity, behavioral patterns, or even health. These results will help researchers not just to understand social interactions but also to design products and behavioral interventions that will promote more social interactions.
The proliferation of sensors in the home results in large amounts of raw data that must be analyzed to extract relevant information. Most smart home data from environmental sensors can be processed with a small computer. Once data is gathered from wearable sensors and smartphones (largely accelerometers and gyroscopes, sometimes adding camera, microphone, and physiological data), the amount of data may get too large to handle on a single computer, and cloud computing might be more appropriate. Cloud computing is also useful if data are collected for an entire community of smart homes to analyze community-wide trends and behaviors.
Collecting and handling the enormous amounts of concurrent ubiquitous data, information, and knowledge that have different formats within SmartSantander [14] is a hard task. According to the level of abstraction of context-aware systems in HCI, context is divided into low-level context and high-level context, respectively. The raw data of low-level context are usually gathered from different physical sensors. The data types, formats, and abstraction levels from different physical sensors differ. Devices and physical sensors of context-aware systems use various scales and units, and low-level context has different elements. Context-aware systems store data, information, and knowledge that have different relationships, formats, and abstraction levels in the context base. Furthermore, context-aware systems collect context history, storing sensor data over time to offer proactive services. The context history stores huge amounts of data on location, temperature, lighting level, task, utilized devices, selected services, and so forth. To quickly provide suitable services to users, context-aware systems should manage the variety, diversity, and sheer amount of context. However, previous research has suggested only a concept for controlling this problem. Therefore, our methodology ensures semantic interoperability by bridging the gap between the expressively rich natural language vocabulary used in the recipes and the low-level machine-readable instructions with very precise and restricted semantic content.
Context-Aware HCI.
In everyday social contextual situations, humans are able to, in real time, perceive, combine, process, respond to, and evaluate a multitude of information, including the semantic meaning of the content of an interaction, nonverbal information such as facial and body gestures, subtle vocal cues, and context, that is, events happening in the environment. Multimodal cues unfold, sometimes asynchronously, and continuously express the interlocutors' underlying affective and cognitive states, which evolve through time and are often influenced by environmental and social contextual parameters that entail ambiguities. These ambiguities with respect to the contextual aspect range from the multimodal nature of emotional expressions in different situational interactional patterns [35], the ongoing task [36], the natural expressiveness of the individual, and his/her personality [37] to the intra- and interpersonal relational context [38, 39]. Additionally, in human communication, the literature indicates that people evaluate situations based on contextual information such as past visual information [40], general situational understanding, past verbal information [41], cultural background [42], the gender of the participants, knowledge of the phenomenon that is taking place [36], discourse and social situations [43], and personality traits under varied situational contexts [44]. Without context, even humans may misinterpret observed affective cues such as facial, vocal, or gestural behavior.
Understanding human behavior in terms of its decision-making processes is inherently a multidisciplinary problem involving different research fields, such as psychology, linguistics, computer vision, and machine learning; there is thus no doubt that progress in the machine understanding of human interactive behavior and personality is contingent on progress in the research in each of those fields.
Attempting to provide a formal definition for context-aware applications and Human-Computer Interaction (HCI) systems, a starting point would be to investigate how the term context has been defined. The word "context" has a multitude of meanings even within the field of Computer Science (CS).
To illustrate this, we group the different definitions of the term context in the area of artificial intelligence, natural language processing, image analysis, and mobile computing, where every discipline has its very own understanding of what context is.
According to the first work which introduced the term context awareness in CS [45], the important aspects of context are as follows: who you are with, when, where you are, and what resources are nearby. Thus, context-aware applications look at the who, where, when, and what (the user is doing) entities and use this information to determine why the situation is occurring. In a similar definition, Brown et al. [36] define context as the location, the identities of the people around the user, the time of day, the season, the temperature, and so forth. Other approaches, such as that of Ryan et al. [46], include context as the user's location, environment, identity, and time, while others have simply provided synonyms for context, for example, referring to context as the environment [47] or situation [48]. However, to characterize a situation, the categories provided by [45] have been extended to include the activity and timing of the HCI. Reference [49] views context as the state of the application's surroundings, and [50] defines it to be the application's setting. Reference [51] included the entire environment by defining context as the aspects of the current situation. However, even though there has been development in the area, both definitions by example and those which use synonyms for context are extremely difficult to apply in practice. For a more extended overview of context awareness, the reader is referred to [52].
Based on the broader approach to context of [52], context can be formalized as a combination of four contextual types, identity, time, location, and activity, which are the primary context types for characterizing the situation of a particular entity and which also act as indices to other sources of contextual information.
With an entity's location, we can determine what other objects or people are near the entity and what activity is occurring near the entity. From these examples, it should be evident that the primary pieces of context for one entity can be used as indices to find secondary context (e.g., geolocalization) for that same entity as well as primary context for other related entities (e.g., proximity to other homes). This context model was later enlarged [9] to include an additional context type called Relations, to define dependencies between different entities (information specific to the social network itself). Relations describe whether the entity is a part of a greater whole (multiparty interactions within Brown's family) and how it can be used in the functionality of some other entities.
Recently, the term Relations has been used to refer to the relation between the individual and the social context in terms of perceived involvement [35] and to the changes detected in a group's involvement in a multiparty interaction [43].
Identity specifies personal user information like gender, age, children, social and marital status, and so forth. Time, in addition to its intuitive meaning, can utilize overlay models to depict events like working hours, holidays, days of the week, and so on. Location refers either to a geographical location or to a symbolic location (e.g., at home, in the shop, or at work). Activity relates to what is occurring in the situation. It concerns both the activity of the entity itself and the activities in the surroundings of the entity.
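To make the four primary context types plus the Relations extension concrete, a minimal encoding could look as follows; the field names and types are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of an entity's context: the four primary context types
# (identity, time, location, activity) plus the Relations extension of [9].
# Field names and types are illustrative.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Context:
    identity: str      # who: personal user information (gender, age, ...)
    time: datetime     # when: can be mapped to overlays such as "working hours"
    location: str      # where: geographic or symbolic (e.g., "at home")
    activity: str      # what is occurring in the situation
    relations: List[str] = field(default_factory=list)  # links to other entities

ctx = Context("john", datetime(2016, 5, 12, 19, 30), "at home", "cooking",
              relations=["member_of:papadopoulos_family"])
```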
For real-world context-aware HCI computing frameworks, context is defined as any information that can be used to characterize the situation that is relevant to the interaction between the users and the system [45]. Thus, this definition better approaches the understanding of human affect signals. An even more suitable definition is the one that summarizes the key aspects of context with respect to human interaction behavior: who is involved (e.g., dyadic/triadic interactions among persons), what is communicated (e.g., "recipes" to perform a specific task), how the information is communicated (the person's cues), why, that is, in which context, the information is passed on, where the proactive user is, what his current task is, and which (re)action should be taken to participate actively in content creation [53].
All these context-aware systems that model the relevant context parameters of the environment depend on the application domain and hence face difficulties in modeling context in an independent way, as well as a lack of models to compare against. Setting aside the fact that domains such as context-aware computing, pervasive environments, and Ubiquitous Computing sometimes entail similarities with respect to the necessity of managing context knowledge, the concrete applications and approaches differ. In the area of pervasive computing, the work of [54] refers to context in environments taking into account the user's activity, the devices being used, the available resources, the relationships between people, and the available communication channels. To allow developers to consider richer information, such as activities and abstract knowledge about the current global context, and to model specific knowledge of the current subdomain, an ontology-based approach has been proposed [55] in which context information is modeled into two separate layers (high and low level, resp.). Modeling high-level information allows performing deeper computations taking into account behavioral characteristics, trend information, and so forth. On the other hand, modeling low-level information, such as location, time, and environmental conditions, is used to achieve the system's final goal, which is the adaptation of the user interface. Besides, several approaches consider user-related characteristics to fulfill their purposes. For example, Schmidt and his colleagues [56] also remark on social environments as relevant for context modeling. Another interesting point highlighted in this work is the user's tasks. This topic has also been studied in the past [52, 54, 57], where the aspect of activities has been used to enrich contextual information about the user. Nevertheless, as occurs with user information, sometimes the collected data might lead to misunderstandings. In [58], ambiguity and uncertainty of user data are addressed through an ontology-based process which allows modeling them within a smart environment. A related work that deals with the uncertainty of context data in intelligent applications [59] extends the OWL web ontology language with fuzzy set theory, to further capture, represent, and reason with such types of information. For a more extended review on representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web, the reader is referred to [60].
Unfortunately, such ambiguities in understanding human behavior data usually cannot be resolved in a context-independent manner, since human behavioral signals are easily misinterpreted if the information about the situation in which the shown behavioral cues have been displayed is not taken into account. Thus, to date, the proposed methodologies have approached one or more of the above presented contextual aspects either separately or in groups of two or three, using the information extracted from multimodal input streams [37]. Overall, further research is needed in approaching this contextual information in a continuous way.
Ubiquitous Contextual Information.

An issue related to the use of continuously collected data [61] is that both psychologists and engineers tend to acquire their data in laboratories and artificial settings [62], to explicitly elicit the specific phenomena they want to observe. However, this is likely to simplify the situation excessively and to artificially improve the performance of the automatic approaches. For the last 20 years, well-established datasets and benchmarks have been developed for automatic affect analysis. Nevertheless, there are some important problems with respect to the analysis of facial behavior, such as (a) the estimation of affect in a continuous dimensional space (e.g., valence and arousal) in videos displaying spontaneous facial behavior and (b) the detection of the activated facial muscles. That is, the majority of the publicly available corpora for the above tasks contain samples that have been captured in controlled recording conditions and/or under a specific social contextual environment. Arguably, in order to make further progress in the automatic analysis of affect behavior, datasets that have been captured in the wild and in various contextual social environments have to be developed.
Recently, many face analysis research works have gradually shifted to facial images captured in the wild with the introduction of Labelled Faces in the Wild (LFW) [63], FDDB for face detection [64], and the 300-W series of databases for facial tracking [65, 66]. To a great extent, the progress we are currently witnessing in the above face analysis problems is largely attributed to the collection and annotation of "in-the-wild" datasets. The contributions of the already developed datasets and benchmarks for the analysis of facial expression in the wild have been demonstrated during the challenges in Representation Learning (ICML 2013) [67], in the series of Emotion Recognition in the Wild challenges (EmotiW 2013, 2014, 2015 [61, 68-70], and 2016 (https://sites.google.com/site/emotiw2016/)), and in the recently organized workshop on context-based affect recognition (CBAR 2016 (http://cbar2016.blogspot.gr/)). For a more extended overview of datasets collected in the wild, the reader is referred to [71].
Aligned with the aforementioned trend of collecting contextual data in nonstandard situations (in the wild), there has also been much work in creating large-scale semantic ontologies and datasets. Typically, such vocabularies are defined according to utility for retrieval, coverage, diversity, availability, and reusability. Moreover, semantic concepts such as objects, locations, and activities in visual data can be easily automatically detected [72]. Recent approaches have also turned towards semantic concept-level analysis approaches.
Nevertheless, not all of them are rich in meta-information such as the entities involved, the situational context, the demographic aspects, their social status, their cultural background, and their dialect, and, thus, it is not certain whether such tasks can be used to make reliable generalizations about natural conversation [73]. For these reasons, researchers have started to record smart home or work situations to achieve even higher levels of social naturalistic data. Representative examples are the collections of natural telephonic data that have been gathered by recording large numbers of real phone conversations, as in the Switchboard corpus [74], audio corpora of nontelephonic spoken interaction, and even collections of everyday interactions gathered by having subjects wear a microphone during their daily lives for extended periods, thanks to the great level of advancement in the area of pervasive computing [75-77].
However, the main criticism of that type of data is that it does not address all aspects of social interactions. Consequently, the existing resources have to be revisited and repurposed every time new research questions arise. The above presented reasons explain the quality of the data that we have so far, where the context is relatively stable (meetings, radio programs, laboratory sessions, etc.) and the variability related to such a factor is limited. Thus, there is a need for mechanisms to collect feedback from users in the wild (such as software systems on smartphones that run continuously in the background to monitor the user's mood and emotional states), to further establish large-scale spontaneous affect databases efficiently and at very low cost [77]. This need has begun to be fulfilled by two great advancements: the diffusion of mobile devices equipped with multiple sensors [78] and the advent of Big Data [79].
Mobile devices can collect a large amount of contextual information (geographic position, proximity to other people, audio environment, etc.) for extended periods of time. Big Data analytics can make sense of that data and provide information about context and its effect on behavior. Thus, it is possible to overcome limitations such as the collection of affect-related data in a large population, as well as keeping participants involved in an experiment for long periods. With the advent of powerful smart devices with built-in microphones [80], Bluetooth patterns, cameras, usage logs, and so on, it is possible for researchers to identify new ways of capturing spontaneous face expression databases. Unfortunately, these studies have been carried out mainly in a social context (person-person communication) and only through acted scenarios. Further studies are needed in a variety of contexts to establish a better understanding of this relationship and to identify whether and how these models could be generalized over different types of tactile behavior, activity, context, and personality traits. However, most of the approaches concentrate on offline analysis, and no results that take context into account, which could clarify ambiguities in the interpretation of social cues, have been presented so far.
Due to the huge growth of collecting wearable data in the wild and the access to more contextual information, respectively, affect analysis has recently started to move into the realm of Big Data. For example, in terms of physiological data, having enough participants who own and wear sensors at all times and who are willing to allow contextual data to be collected from their phones might allow a large collection of physiological signals with high-confidence affect labels. Data could then be labelled with both self-reports and contextual information, such as the time of day, the weather, the activity, and who the subject was with, so as to make an assessment of affective state. Consequently, with sufficient ground-truth datasets, it will likely be possible to develop better contextually aware algorithms for individuals and like groups, even if the sensor data are noisier. These algorithms will enable HCI in a private, personal, and continuous way and allow our sensors to both know us better and be able to communicate more effectively on our behalf with the world around us. Taking into account the fact that personalization is desirable, that is, that the system adapts itself to the user by regarding their behavior, emotions, and intentions, this leads specifically to technologies with companion-like characteristics [81-83] that can interact with a certain user in a more efficient way, independent of the contextual social situation and the environment.
Another important issue is the interplay among the personality, the situational context, and the contextualized behavior. The problem of context has been controversial in the HCI community [37, 84-86]. The ultimate goal is to have context-aware technology that is capable of working and interacting differently depending on the context (e.g., a phone should not ring during a meeting). The key issue is how to encode and represent context, even in the case of identifying a set of features of the surrounding environment, the location, the identities of the people around the user, and so forth [36]. Furthermore, of equal importance is the understanding of how people achieve and maintain a mutual understanding of the context according to their dependency [9], how social relations are structured in small [87] and large groups (friends, colleagues, families, students, etc.), and, finally, how changes in individuals' behaviors [43, 88] and attitudes occur due to their membership in social and situational settings.
So far, the issue is still open for technologies dealing with social and psychological phenomena like personality [89]. Besides the difficulties in representing context, current approaches for human behavior understanding (facial expression analysis, speaker diarization, action recognition, etc.) are still sensitive to factors like illumination changes, environmental noise, or sensor placement. It is not clear whether personality should be considered as a stable construct or as a process that involves changes and evolution over time, as this decision depends on how it is measured and aggregated [90]. In this view, personality ranges from highly stable and trait-like to highly variable and adaptive to context.
In particular, data from smart wearable devices can indicate personality traits using machine learning approaches to extract useful features, providing fruitful pathways to study relationships between users and personalities by building social networks with the rich contextual information available in application usage, call, and SMS logs. "Designing" smart homes in terms of enhancing comfort is also challenging for mobile emotion detection. The friendly design of an intelligent ecosystem responsive to our needs, which can make users feel more comfortable with affective feedback collection and may change users' social behavior, is very promising for boosting affect detection performance and exploring the possibility of further HCI techniques.
Moreover, it is necessary to discover new emotional features, which may exist in application logs, smart device usage patterns, locations, order histories, and so forth. There is a great need to thoroughly monitor and investigate the new personality and behavioral features. In other words, establishing new HCI databases in terms of new social features could be a very significant research topic and could bring "ambient intelligence" in the home closer to reality.
Gradually, the new multidisciplinary area that lies at the crossroads between Human-Computer Interaction (HCI), social sciences, linguistics, psychology, and context awareness is distinguishing itself as a separate field. It is thus possible to better recognize, interpret, and process "recipes," to incorporate contextual information, and, finally, to understand the related ethical issues about the creation of homes that can enhance shelter. For applications in fields such as real-time HCI and big social data analysis [91], deep natural language understanding is not strictly required; a sense of the semantics associated with text and some extra information, such as social parameters associated with such semantics, are often sufficient to quickly perform tasks such as capturing and modeling social behavior.
Semantic context concept-based approaches [92-95] aim to grasp the conceptual and affective information associated with natural language semantic rules. Additionally, concept-based approaches can analyze multiword expressions that do not explicitly convey emotion but are related to concepts that do. Rather than gathering isolated rules about a whole item (e.g., iPhone 5), users are generally more interested in comparing different products according to their specific features (e.g., iPhone 5's versus Galaxy S3's touchscreen) or even subfeatures (e.g., the fragility of iPhone 5's versus Galaxy S3's touchscreen). This taken-for-granted information, referring to obvious things people normally know and usually leave unstated/uncommented, is in particular necessary to properly deconstruct natural language text into rules, for example, to appraise the concept small room as negative for a hotel review and small queue as positive for a post office, or the concept "go read the book" as positive for a book review but negative for a movie review.
Context-level analysis also ensures that all gathered rules are relevant for the specific user. In the era of social context (where intelligent systems have access to a great deal of personal identities and social dependencies), such rules will be tailored to each user's preferences and intent. Irrelevant opinions will accordingly be filtered with respect to their source (e.g., a relevant circle of friends or users with similar interests) and intent.

Pervasive Context Awareness Environments

3.4.1. Sensors. The pervasive context information exploited by the smart home environment is provided by the following sources:

(i) In-place sensors, such as temperature, humidity, luminosity, noise, or human presence sensors, located in the various rooms or outside, in the vicinity of the house
(ii) Power and water consumption meters of the house
(iii) Smart city sensors providing additional information such as pollution levels, temperature, and the total electrical power consumption of the city, optionally with geospatial information

3.4.2. Home Rules. Users sometimes need their appliances to perform a specific action in their house taking into account the context information. For example, they may not want to wash clothes when it is raining or when the temperature in the city is quite low. For this reason, there are defined actions for the smart home system. These actions are called home rules. These home rules handle whether the appliances should be switched on or off. In a more high-level approach, the structure of the home rules can be customized as "if it is valid, do/do not do that." Figure 2 illustrates an example of that.

The "if it is valid, do/do not do that" structure consists of three parts:

(i) "If it is valid," a trigger that consists of the following:
(a) An input type and the value of the input, which is defined by pervasive and context information such as the ones described in Section 3.4
(b) An operator (<, ≤, =, ≠, ≥, >)
(c) A reference value, which is input by the user (e.g., 20 degrees Celsius)
(ii) "Do/do not," that is, what to do when the rule is triggered, where any smart home system action/reaction can be inserted
(iii) "That," which consists of an optional parameter (e.g., lower the house blinds by that percentage)

Moreover, more complex rules, such as keeping the temperature within a specific interval of values, are expressed with multiple rules that are logically joined together.
Semantic Representation
In this section, semantic technologies are used in order to represent the knowledge of the ecosystem. In general terms, an ecosystem with respect to the Internet of Things (IoT), which is often considered the next step in Ubiquitous Computing [96], is a particular IoT implementation (a smart grid, a smart home, a smart city, or personalized wearables) focusing on standards, protocols, or abilities from the technical perspective, while at the same time analyzing the social relationships of the users from a social perspective. According to the formal definition given in [97], an ecosystem consists of a set of solutions that enable, support, and automate the activities and transactions by the actors in the associated social environment. Furthermore, it enables relationships among the sensors, the actuators (complex devices), and their users. The relationships are based on a common platform and operate through the exchange of information, resources, and artifacts [98]. In our work, we merge the two areas of IoT ecosystem implementation: home automation systems (smart homes) and IoT-based solutions for smart cities. In particular, our ecosystem consists of cities, comprising a number of houses. Additionally, in every city and in every house, a number of sensors are located which provide data on the environmental context, for example, humidity and temperature. They are also able to give more specific information, such as noise and pollution levels or information about human presence inside the house. All these data are received from the sensors and are stored in a database.
In this ecosystem, we can define a number of rules, which we will call home rules, for example, defining under which conditions house appliances should be switched on or off. Another more concrete example would be "do not operate the air-condition when the outside temperature is high." The OWL 2 Web Ontology Language (OWL 2) [99], an ontology language for the Semantic Web with formally defined meaning, was adopted for the semantic representation of our ecosystem. OWL 2 ontologies provide classes, properties, individuals, and data values, and they are stored as Semantic Web entities. The following sections (Section 4.1 to Section 4.4) explain in more detail how the ecosystem is represented by our ontology. The ontology was created using the open-source Protégé 4.2 platform [100].

4.1. Ontology Hierarchy. Figure 3(c) illustrates the ontology's hierarchy. The ontology's classes describe different aspects of the ecosystem, as follows: (i) the Appliances class, which contains all the different types of the ecosystem's appliances, such as (a) the refrigerator, (b) the washing machine, (c) the air-condition, and (d) the television; (ii) the Location class, which contains both the house and the city; (iii) the Sensor class, which contains the individuals of all the existing sensors; (iv) the Person class, which contains all the individuals; and (v) the Gender, HouseRole, and SocialStatus classes, which implement the user model for the different types of gender, house roles, and social status.

4.2. Properties. The ontology also comprises a series of properties, both object properties and data properties. Object properties provide ways to relate two objects (also called predicates): they relate two objects (classes), of which one is the domain and the other is the range. The object properties of this ontology are mainly used to relate the sensors with a specific location and the inhabitants of the house with the appliances. Data properties, on the other hand, are similar to object properties with the sole difference that their ranges are typed values. In our ontology, they relate the actual sensor values with a sensor, the power on/off status of the appliances, and the user properties with numerical features. Some of them are described below: (i) hasNoise, which relates a sensor with the actual captured noise value, for example, 40 dB; (ii) hasTemperature, which relates a sensor with the actual captured temperature value, for example, 25 °C; (iii) isOn, which has a true value if the appliance is turned on and is false otherwise; (iv) numberOfChildren, which relates a person with the number of his/her children, which must be a nonnegative integer. The object and data properties of the ontology appear in Figure 3.
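As a rough illustration of how such properties could be asserted outside Protégé, here is a minimal sketch using the rdflib Python library; the namespace IRI and the isLocatedIn property name are assumptions, while the remaining class and property names are taken from the text above:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

ECO = Namespace("http://example.org/sands-ecosystem#")  # hypothetical IRI

g = Graph()
# A Sensor individual related to a location via an object property
# (isLocatedIn is an assumed name for the sensor-location relation).
g.add((ECO.kitchenSensor1, RDF.type, ECO.Sensor))
g.add((ECO.kitchenSensor1, ECO.isLocatedIn, ECO.kitchen))
# Data properties relate individuals to typed literals, as described above.
g.add((ECO.kitchenSensor1, ECO.hasTemperature, Literal(25, datatype=XSD.integer)))
g.add((ECO.washingMachine1, ECO.isOn, Literal(True, datatype=XSD.boolean)))
g.add((ECO.john, ECO.numberOfChildren,
       Literal(2, datatype=XSD.nonNegativeInteger)))

print(g.serialize(format="turtle"))
```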
4.3. Individuals.
In all, the ecosystem contains a large number of appliances, sensors, and people. Every single appliance, sensor, and person is represented in the ontology as an individual of the Appliances, Sensor, or Person class, respectively. Figure 3(d) illustrates a small set of the individuals contained in the ontology.
4.4. Rules and Consistency Check

In the current section, we provide a novel semantic representation of the home rules of the ecosystem. These home rules are expressed using the Semantic Web Rule Language (SWRL) [101]. SWRL has the full power of OWL DL, but at the price of decidability and practical implementations. However, decidability can be regained by restricting the form of admissible rules, typically by imposing a suitable safety condition. Rules have the form of an implication between an antecedent (body) and a consequent (head), which can be read as follows: "whenever the conditions that are specified in the antecedent hold, the conditions that are specified in the consequent must also hold." A critical property of our ontology is that it should always be consistent, a condition that is verified with the use of the Pellet reasoner [102]. Therefore, whenever a home rule is violated, an inconsistency must be detected. Taking this into account, each home rule's violation condition is transformed into the antecedent of the respective SWRL rule.
For this reason, a data restriction has to be created in the Appliances class. A data property called "restriction" is created; its domain is an appliance and its range is boolean, and the Appliances class is restricted so that no individual may carry the restriction property. Then, every home rule is transformed into a SWRL rule, and if the left side (antecedent) of the rule is satisfied, this leads to the creation of the "restriction" property for an appliance. This makes our ontology inconsistent; in other words, the appliance is restricted from starting to work. So every time a database record changes or a new one is added, the ontology individuals are populated with the new values by querying the database. Then, using the Pellet reasoner, the system checks for the existence of any inconsistency. Finally, the inconsistency is handled by forcing the appliance to switch off or on. Using Semantic Web technologies, the restriction is added to every appliance so that, after reasoning, no restriction data property may exist for any individual of the class. In this subsection, some indicative home rules transformed to SWRLs are presented.
(1) Do not operate any washing machine when the external temperature is greater than 26 °C. As is clear, the SWRL built-ins, such as "equal," "lessThan," "greaterThan," "lessThanOrEqual," and "greaterThanOrEqual," are used for comparisons. By using these built-ins, it is possible to create home rules in which a comparison of environmental values is needed, such as the temperature, the humidity, and the noise level, or of more elaborate boolean values, such as the human presence detection in a house. Additionally, rules can be used in conjunction with each other in order to express more elaborate rules, such as the third home rule. A plain-Python approximation of this rule-to-restriction workflow is sketched below.
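The following sketch approximates the violated-rule, restriction, inconsistency, and switch-off workflow described above. It stands in for the actual SWRL/Pellet machinery, and all names are hypothetical:

```python
# One polling cycle of the consistency check, approximated without a reasoner.
def check_and_resolve(appliances, rules, context):
    for rule in rules:
        if rule["triggered"](context):                # SWRL antecedent holds
            appliances[rule["appliance"]]["restriction"] = True  # SWRL head
    for appliance in appliances.values():
        if appliance.get("restriction"):              # Pellet would flag this
            appliance["isOn"] = False                 # handled by switching off
            appliance["restriction"] = False          # consistency restored

appliances = {"washingMachine1": {"isOn": True}}
rules = [{"appliance": "washingMachine1",             # first home rule above
          "triggered": lambda ctx: ctx["city_temperature"] > 26}]
check_and_resolve(appliances, rules, {"city_temperature": 28})
print(appliances["washingMachine1"]["isOn"])          # False
```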
Experiments
In this section, we present the rudiments of what constitutes SandS, our smart home environment, which we define as a city in which information and communication technologies are merged with traditional infrastructures, coordinated and integrated using IoT technologies. These technologies establish the functions of the city, provide ways in which citizen groups can interact in augmenting their understanding of the city, and also provide essential engagement in the design and planning process. We first sketch our vision by defining three goals which concern us: feeding the home rules with the signals provided by the smart city system, which represents a simple interoperability test; introducing limitations on the use of the appliances related to environment conditions, like the power or water consumption reckoned by the city environment sensors, the short-term weather forecasting, and so forth, which represents a logical test on the DI scheduler and consistency checker; and managing alarm messages sent by the municipality. We begin by presenting how our data have been collected within a social network in order to create and exchange content in the form of so-called recipes and to develop collective intelligence which adapts its operation through appropriate feedback provided by the user. Additionally, we approach SandS from the user's perspective and illustrate how users and their relationships can be modeled through a number of fuzzy stereotypical profiles (user-centered experimental validation). Furthermore, the context modeling in our smart home paradigm is examined through appropriate representation of context cues in the overall interaction (pervasive experimental validation).
Data Collection.
In this subsection, we present our approach towards the vision of a smart home that supports inhabitants' high-level goals, emphasizing that our data were collected in the wild, in the sense of having been captured in real-world and unconstrained conditions. Thus, our smart home technologies deal with interference with IoT technologies and react to nonstandard situations. More precisely, data was collected by the SandS consortium and partners during a small-scale mockup using "in-house" and "out-house" sensors, such as mobility sensors, traffic and parking sensors, environmental sensors, and park and garden irrigation sensors, respectively. Finally, the context data information collected through the sensors is sent periodically to the ecosystem. These values are stored in a specific table of a database, overwriting the previous record that was stored.
User Models.
Regarding the experimental dataset used to validate the formation of personas, data was collected by the SandS consortium and partners during a small-scale mockup. SandS also opened up its user base towards the FIRE and related communities, such as the Open Living Labs. The dissemination call for user participation pointed to a user registration form, illustrated in Figure 4.
This registration form comprised several user-related fields: first name, last name, date of birth, senior/junior, gender, single/married, and city.
Smart City Sensors.
In large-scale tests of the unified user in a smart home in a smart city, SandS will use context sensor data gathered at SmartSantander. SmartSantander [14], born as a European project, is turning into a living experimental laboratory as part of the EU's Future Internet initiative. Major companies involved in the project include Telefonica Digital, the company's R&D wing, along with other smaller suppliers as well as utility and service companies. In terms of application areas, five main areas have initially been targeted in the trials so far: traffic management and parking, street lighting, waste disposal management, pollution monitoring, and parks and garden management. To this aim, the city of Santander, Spain, has been equipped with a large number of sensors (Figure 5) used to collect a huge amount of information. We can divide the sensors into several categories based on the data they collect.
(i) Mobility sensors: they are placed on buses, taxis, and police cars. (iv) Park and garden irrigation sensors: in order to control the irrigation in certain parks and gardens and make it more efficient, these sensors register information about wind speed, quantity of rain, soil temperature, soil humidity, atmospheric pressure, solar radiation, air humidity, and temperature, as well as water consumption. At the moment, the data collected by these sensors are stored in the USN/IDAS SmartSantander cloud storage platform. This platform stores in its databases all the observations and measurements gathered by the sensors; it contains live and historical data. These databases are migrating to the Filab platform as an instance of the FIWARE [103] ecosystem.
In very minimal terms, our experiments will manage the integration of the two systems in only one direction: by exploiting SmartSantander data in favor of SandS, with special regard to the empowerment of the home rules used by the domestic infrastructure (DI), which is the core of the proposed system and handles the home rules and the appliances, manages the users, and updates the database with any new value gathered from a sensor. Hence, the contact between the two systems will happen via the home rules, which may be fed by the smart city sensor data either in their current version or in an enlarged one capable of profiting from the data. Available sensor data related to the SandS domain include the following: temperature, noise, light, humidity, and quantity of rain. Other data, for instance those concerning traffic, could be considered in a more long-term planning and scheduling approach.
Finally, our goal would be to stress the following case studies: (1) feeding the home rules with the signals provided by the smart city system, which represents a simple interoperability test; (2) introducing limitations on the use of the appliances related to environment conditions, such as the power or water consumption reckoned by the city environment sensors and the short-term weather forecasting, which represents a logical test on the DI scheduler and consistency checker; and (3) managing alarm messages sent by the municipality, which will represent a stress test for the entire system.

5.1.3. Sensor Integration. In the ecosystem, there are sensors both in every house and for the whole city. These sensors periodically send information about the temperature, the luminosity, and the humidity: both the in-house and the city sensors send their values periodically to the ecosystem, and these values are stored in a specific table of a database, overwriting the previous record that was stored. The in-house sensors send information about the humidity in the house, the inside house temperature, the human presence in it, the power consumption and the water consumption of all the appliances inside it, the location where the sensor is installed (e.g., the kitchen, the bathroom, or the bedroom), the noise, and the local timestamp. Moreover, the city sensor values are collected at a specific moment using the FIWARE Ops tools (https://data.lab.fiware.org/dataset?tags=sensor&organization limit=0&organization=santander) [104]. The data of the sensors are periodically sent to the system in a JSON format using an HTTP connection; the JSONs are then parsed and the information is stored in the database. The city sensors, like the SmartSantander ones [14], send information about the noise inside the city, the temperature, and the exact location where they are installed. By adding all these pieces of sensor information to a database, it is always feasible for the system to identify the exact condition inside and outside the house where the sensors are installed, just by doing a simple query on the database. Due to the structure of the home rules, it is possible in a very short time for the ecosystem to know if a home rule is triggered and if an appliance in a house should be switched on or off.
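A minimal sketch of this ingestion path, assuming a one-row-per-sensor table and hypothetical JSON field names (the actual SandS wire format is not given in the text):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sensor_values (
                    sensor_id TEXT PRIMARY KEY,  -- one row per sensor,
                    kind      TEXT,              -- overwritten on each update
                    value     REAL,
                    ts        TEXT)""")

def store(payload: str) -> None:
    """Parse one JSON message (received over HTTP) and overwrite the row."""
    r = json.loads(payload)
    conn.execute("INSERT OR REPLACE INTO sensor_values VALUES (?, ?, ?, ?)",
                 (r["sensor_id"], r["kind"], r["value"], r["timestamp"]))

store('{"sensor_id": "city-noise-7", "kind": "noise", "value": 43.0,'
      ' "timestamp": "2016-08-01T10:00:00"}')

# A home rule check then reduces to a simple query:
(noise,) = conn.execute("SELECT value FROM sensor_values"
                        " WHERE sensor_id = 'city-noise-7'").fetchone()
print(noise > 40)  # True -> the noise-related home rule is triggered
```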
User-Centered Experimental Validation.
A user can get the best recipe by comparing his request for a recipe with other users' requests, using the fuzzy similarity method presented in Section 2.4. The fuzzy similarity method takes into account both the similarity of the users (e.g., their gender, age, and house role) and the similarity between the request parameters. A request parameter for a bread-baking recipe might be the crustiness, the amount of water that should be used for the dough, or the type of flour that is going to be used. Figure 6 illustrates a form where a user can insert his database ID and some request parameters in order to get the similarity with other requests. Then, clicking the submit button, a table with all the requests of other users, ranked by their total similarity, is returned, as illustrated in Figure 7. The first column shows the total similarity, taking into account both the user similarity and the similarity of the request parameters; the sixth column shows only the user similarity and the fourth only the request parameter similarity. The fifth column shows the satisfaction of the users that have used this recipe in the past: one means "fully satisfied" and zero means "not satisfied at all."
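A hedged sketch of how such a total similarity could be combined. The paper's actual fuzzy membership functions (Section 2.4) are not reproduced here, so the age window, the linear decay, the equal weighting, and the assumption that request parameters are normalized to [0, 1] are all illustrative choices:

```python
def age_similarity(a1, a2, window=5):
    # Users whose age difference is below the window are treated as identical;
    # beyond it, the similarity decays linearly (an assumed decay, not the
    # paper's membership function).
    d = abs(a1 - a2)
    return 1.0 if d < window else max(0.0, 1.0 - (d - window) / 30.0)

def user_similarity(u1, u2):
    s_gender = 1.0 if u1["gender"] == u2["gender"] else 0.0
    s_role = 1.0 if u1["house_role"] == u2["house_role"] else 0.0
    return (s_gender + s_role + age_similarity(u1["age"], u2["age"])) / 3.0

def request_similarity(r1, r2):
    # Request parameters are assumed normalized to [0, 1].
    keys = r1.keys() & r2.keys()
    return sum(1.0 - abs(r1[k] - r2[k]) for k in keys) / len(keys)

def total_similarity(u1, r1, u2, r2, w=0.5):
    return w * user_similarity(u1, u2) + (1.0 - w) * request_similarity(r1, r2)

me = {"gender": "F", "house_role": "parent", "age": 30}
other = {"gender": "F", "house_role": "parent", "age": 32}
print(total_similarity(me, {"crustiness": 0.8}, other, {"crustiness": 0.7}))
```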
Pervasive Experimental Validation.
The system periodically queries the database and, more specifically, the collection where the sensor values are stored. Then, using the home rules that have been added to the ecosystem, it checks whether the consistency of the ontology still holds for the new sensor values. If any of the home rules is triggered, an inconsistency has been detected by the system for a specific appliance; this specific appliance is switched off until none of the home rules related to it is inconsistent. As mentioned previously, a home rule can be triggered both by in-house sensor value changes and by value changes detected by the SmartSantander sensors. To make this clear, an example is presented. Figure 8(a) illustrates the noise levels in the house, which follow a Gaussian profile. These values are received by the in-house noise detection sensors and stored in the database. In addition, Figure 8(b) presents the human presence in the house over the same period of time: a value equal to one means that there is a human in the house during that specific period, while a value equal to zero means there is no one in the house. Considering that the house is part of the ecosystem where the home rules presented in Section 4.4 are defined, the second home rule is triggered. At the beginning, the washing machine is switched on, executing the clothes' washing program, until the noise volume rises above 40 dB at 10:00. Then, the appliance is switched off until 18:00, when the noise levels fall below 40 dB. In case a washing program was interrupted during its execution, the program starts its execution from the beginning or continues from the step at which it was stopped, depending on the user's choices. If an inconsistency is detected but the washing machine is neither executing any laundry program nor scheduled to start one immediately, then the washing machine is simply switched off, without affecting any scheduled process. Moreover, in case the system receives from a city sensor, such as the SmartSantander sensors, temperature values equal to or greater than 26 °C, then the first home rule is triggered because an inconsistency has been detected; as a result, the house's washing machine is switched off. The temperature values of such an occasion are presented in Figure 9. Between 11:00 and 15:00, a city sensor receives temperature values higher than 26 °C. Consequently, an inconsistency is detected, which forces the house's washing machine to switch off. Finally, after 15:00, when the temperature is again lower than 26 °C, the washing machine is switched on again.
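A toy replay of this scenario; the hourly samples below are synthetic stand-ins for the recorded traces in Figures 8 and 9:

```python
# The second home rule: switch the washing machine off while the in-house
# noise exceeds 40 dB and someone is at home; switch it back on otherwise.
hours    = [8,    9,    10,   12,   15,   18,   19]
noise_db = [35.0, 38.0, 44.0, 47.0, 45.0, 39.0, 36.0]
presence = [1,    1,    1,    1,    1,    1,    1]

for h, n, p in zip(hours, noise_db, presence):
    violated = (n > 40.0) and (p == 1)   # antecedent of the home rule
    state = "OFF" if violated else "ON"  # inconsistency handled by switching off
    print(f"{h:02d}:00  noise={n:4.1f} dB  washing machine {state}")
```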
Conclusions and Future Work
In this paper, we illustrated how the emerging semantics of smart home environments can be captured through a novel formalism and how expert knowledge can be used to ensure semantic interoperability. User stereotypes or personas provide, on the one hand, flexibility, extensibility, reusability, and applicability; on the other hand, knowledge management is incorporated as an efficient user and context model representation formalism. In addition, this formal, machine-processable representation is used to define, extract, and use a set of concepts and their fuzzy semantic relations. This user modeling approach is put into a rich smart home context representation which abstracts raw sensor data to a high-level semantic representation language in which complex home rules can be defined.
Future work includes further incorporation of user, usage, and context information, through a unified semantic representation, driving an adaptation mechanism that aims to provide a personalized service and optimize the user experience. Among the aspects of the architecture that will be stressed through experimental validation are the computational cost and the scaling of SandS to a wider user group. Based on the SandS architecture, the cloud infrastructure ensures the optimal handling of the computational load, since the intermediate processes are not computationally demanding. On the other hand, issues that may arise from the scaling of the platform application are part of the experimental validation, since the load is directly correlated with user activity. The large-scale validation at SmartSantander will provide us with useful insights about the latter.
Figure 3: An example of the ontology properties, the hierarchical structure, and the individuals used for our experiments.
Figure 8: In-house sensor values of the noise and the human presence. (a) Noise levels per second over a one-day window. (b) Human presence in the house per second over a one-day window; "1" means that there is a human in the house and "0" means that no one is.
Figure 9: SmartSantander sensor values of the temperature for a specific period in a day.
Table 1: Semantic relations used for the generation of the combined relation.
CS5, and CS6, which are collectively represented in the user profile similarity functions as CS1/4/5/6. (ii) Two eahoukers are considered identical if their difference in age is less than 5 years. Indeed, their behavior and habits inside the house can be considered the same even if they have a slight difference in age. For example, two people, one at the age of 30 and one at the age of 32, would probably have the same behaviors, according to their age. On the other hand, a person at the age of 30 would have quite different behaviors from a person at the age of 50 or 60. This property is expressed by the function CS2. | 17,753.4 | 2016-08-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
ZnO nanoneedle/H2O solid-liquid heterojunction-based self-powered ultraviolet detector
ZnO nanoneedle arrays were grown vertically on fluorine-doped tin oxide-coated glass by a hydrothermal method at a relatively low temperature. A self-powered photoelectrochemical cell-type UV detector was fabricated using the ZnO nanoneedles as the active photoanode and H2O as the electrolyte. This solid-liquid heterojunction offers an enlarged ZnO/water contact area and a direct pathway for electron transport simultaneously. By connecting this UV photodetector to an ammeter, the intensity of UV light can be quantified using the output short-circuit photocurrent without a power source. High photosensitivity, excellent spectral selectivity, and fast photoresponse at zero bias are observed in this UV detector. The self-powered behavior can be well explained by the formation of a space charge layer near the interface of the solid-liquid heterojunction, which results in a built-in potential and makes the solid-liquid heterojunction work in photovoltaic mode.
Background
Ultraviolet (UV) detectors play an essential role in a wide range of civil and military applications, including UV astronomy, environmental monitoring, flame sensing, secure space-to-space communications, and chemical/biological analysis [1][2][3]. As a wide-bandgap material, ZnO has emerged as one of the most promising materials for UV detectors due to its exceptional photosensitivity and high radiation hardness [4][5][6]. ZnO has a direct wide bandgap of 3.37 eV, eliminating the need for the costly filters used to achieve visible-blind operation in traditional photomultipliers and silicon photodetectors. Its bandgap can be tuned over a wide range simply by doping with a small mole fraction of Al, Mg, or Cd, which enables ZnO to be used in different detection ranges. In the past, most ZnO-based photodetectors were fabricated in planar type based on ZnO thin films grown by sputtering, pulsed laser deposition, or molecular beam epitaxy. Different kinds of UV detectors based on ZnO have been investigated with metal-semiconductor-metal [7][8][9][10], p-i-n [4,11,12], p-n junction [5,13,14], or Schottky barrier-type [15][16][17] structures. However, factors such as high cost, difficulty of integration with Si substrates, and complicated fabrication processes have held back the potential application of planar-type ZnO photodetectors.
Recently, there has been growing interest in UV detectors based on one-dimensional (1D) nanostructures of ZnO, like nanowires [18][19][20] or nanobelts [21], due to their highly susceptible photoelectric properties by means of electron-hole generation or recombination under UV illumination. ZnO nanowire-based UV sensors exhibit a high on/off ratio between photoresponse current and dark current because of the large surface-to-volume ratio and the high crystal quality. Additionally, characteristics such as fast response and recovery times, visible-light blindness, and potential for flexible electronics [22,23] further contribute to 1D UV detectors' competence. However, the very low photoresponse current due to the small size of individual nanowires is an essential hindrance to single ZnO nanowire-based UV detectors [18,20,24]. Efficient routes like integrating multiple nanomaterials or assembling nanoarrays often lead to a complicated, time-consuming, and uneconomic device fabrication process [24][25][26]. On the other hand, these photodetectors typically require an external bias as the driving force to prevent the recombination of photogenerated electron-hole pairs. For large-area two-dimensional arrays that contain huge numbers of small UV sensors, large-scale use of batteries as a power source will lead to environmental pollution [27][28][29].
In this letter, we introduce a self-powered UV detector based on a ZnO nanoneedle/water solid-liquid heterojunction structure. ZnO nanoneedle arrays were grown on a fluorine-doped tin oxide (FTO)-coated glass substrate by spin coating and a subsequent hydrothermal method without any costly epitaxial process. X-ray diffraction (XRD) and scanning electron microscope (SEM) results proved a high-quality, vertically aligned ZnO nanoneedle array structure. A self-powered photoelectrochemical cell-type UV detector was assembled using the ZnO nanoneedles as the active photoanode and H2O as the electrolyte, which has almost the same structure as that of a conventional dye-sensitized solar cell but without dye adsorption. The solid-liquid heterojunction possesses an inherent built-in potential across the interface, which behaves in a Schottky barrier manner. The built-in potential acts as the driving force to keep the electron-hole pairs from recombining and to generate the photocurrent [28][29][30]. Hence, this ZnO/water heterojunction-based UV detector operates in photovoltaic mode without the need for an external electric bias, which demonstrates great potential for realizing self-powered UV detection and a self-driven integrated nanopower-nanodevice system [31].
Growth of ZnO nanoneedle arrays by hydrothermal process
ZnO nanoneedle arrays were grown using a solution deposition method on FTO glass covered with a ZnO seed layer. Zinc acetate dihydrate was dissolved in a mixed solution of ethanolamine and 2-methoxyethanol to yield a homogeneous and stable colloid solution, which served as the seed solution. The ZnO seed layer was formed by spin coating the colloid solution at 3,000 rpm followed by annealing in a furnace at 400°C for 1 h. The subsequent hydrothermal growth was carried out at 90°C for 6 h in a Teflon bottle by placing the seeded substrates vertically in aqueous growth solutions containing 20 mM zinc nitrate, 20 mM hexamethylenetetramine, and 125 mM 1,3-diaminopropane. Then the FTO glass with ZnO nanoneedle arrays was rinsed thoroughly with deionized water and annealed at 500°C for 1 h to remove any residual organics and to improve the crystalline structure.
Assembly of the solid-liquid heterojunction-based UV detector
The solid-liquid heterojunction-based UV detector was assembled in the same structure as that of a dye-sensitized solar cell, except that no dye molecules were adsorbed and the electrolyte used in this case was deionized water, as discussed in our previous work [32]. Figure 1 shows the schematic structure of the nanocrystalline ZnO/H2O solid-liquid heterojunction-based UV detector. For device fabrication, FTO glass with vertically aligned ZnO nanoneedle arrays was used as the active electrode. A 20-nm-thick Pt film deposited on FTO glass by magnetron sputtering formed the counter electrode. Afterwards, the work electrode (ZnO/FTO) and the counter electrode (Pt/FTO) were adhered together face to face with a 60-μm-thick sealing material (SX-1170-60, Solaronix SA, Aubonne, Switzerland). Finally, deionized water was injected into the space between the top and counter electrodes. A ZnO/H2O solid-liquid heterojunction-based UV detector was thus fabricated with an active area for UV irradiation of about 0.196 cm².
Characterization of ZnO nanoneedle arrays and the UV photodetector
The crystal structure of the ZnO nanoneedle arrays was analyzed by XRD (XD-3, PG Instruments Ltd., Beijing, China) with Cu Kα line radiation (λ = 0.15406 nm). The surface morphology was characterized using a scanning electron microscope (Hitachi S-4800, Hitachi, Ltd., Chiyoda, Tokyo, Japan). The optical transmittance was measured using a UV-visible dual-beam spectrophotometer (TU-1900, PG Instruments, Ltd., Beijing, China). The photoresponse characteristics of the UV detector under illumination were recorded with a programmable voltage-current sourcemeter (2400, Keithley Instruments Inc., Cleveland, OH, USA). A 500-W xenon lamp (7ILX500, 7Star Optical Instruments Co., Beijing, China) equipped with a monochromator (7ISW30, 7Star Optical Instruments Co.) was used as the light source. For the photoresponse switching behavior measurement, the photocurrent was measured by an electrochemical workstation (RST5200, Zhengzhou Shirusi Instrument Technology Co. Ltd, Zhengzhou, China).

Figure 2a shows the typical XRD pattern of ZnO nanoneedle arrays grown on the FTO substrate. All of the diffraction peaks can be indexed within experimental error as a hexagonal ZnO phase (wurtzite structure) from the standard card (JCPDS 76-0704). No characteristic peaks from impurities such as Zn(OH)2 are detected. Compared to powdered ZnO XRD patterns, the (002) diffraction peak is significantly enhanced, which indicates that the ZnO nanoneedles are highly oriented along the c-axis direction with the growth axis perpendicular to the substrate surface, as is also reflected in the narrow full width at half maximum (FWHM) of the ZnO (002) peak.

As shown in Figure 3, the optical property of the ZnO nanoneedle arrays was characterized by the UV-visible transmittance spectrum in the range of 220 to 800 nm. In the visible light region, ZnO shows low transmittance (30% to 50%), which comes from the strong light scattering effect of the nanoneedle array structure. An obvious sharp absorption edge appears at about 385 nm, which can be attributed to the bandgap of the wurtzite ZnO nanoneedle arrays. Not much difference can be found in the absorption edge of the nanocrystalline ZnO as compared with that of bulk ZnO in this case, as the size of the ZnO nanoneedle is well above the ZnO Bohr exciton diameter. The inset of Figure 3 shows the transmittance spectrum of a typical FTO substrate, with an average transmittance of 80% within the visible light region and a sharp absorption edge at about 310 nm. Taking both the absorption spectra of ZnO and FTO glass into consideration, we can draw the conclusion that light with a wavelength of 310 to 385 nm can be well absorbed by the ZnO nanoneedle arrays and contribute to the photoresponse, which is further confirmed by the following photoresponsivity spectrum. The inherent built-in potential arises from the Schottky barrier (SB)-like ZnO-water interface, acts as a driving force to separate the photogenerated electron-hole pairs, and produces the photocurrent. Therefore, this device can operate in photovoltaic mode without any external bias. Figure 4b shows the spectral photoresponsivity of the ZnO nanoneedle array/water heterojunction-based UV detector at 0-V bias. The incident light wavelength ranges from 350 to 550 nm. A strong peak appears at 385 nm, corresponding to the bandgap of wurtzite ZnO. The maximum responsivity, located at around 385 nm, is about 0.022 A/W cm², which is suitable for UV-A range (320 to 400 nm) applications.
Note that the full width at half maximum of the photoresponse is about 18.5 nm (0.15 eV), as shown in Figure 4b, which demonstrates excellent spectral wavelength selectivity in the UV-A range. The photoresponsivity decreases rapidly to nearly zero as the wavelength becomes longer than 450 nm because of the low absorption for photons with energies smaller than the bandgap. The responsivity also drops fast on the short-wavelength side because of the strong electron-hole recombination effect. As illustrated in Figure 2c, the ZnO nanoneedle array has a dense, compact layer at the base (closest to the FTO). The absorption coefficient of ZnO at wavelengths shorter than 375 nm is very high. When illuminated through the FTO glass, the majority of photons will be absorbed by this ZnO layer close to the FTO. This absorption occurs well away from the junction. Due to the high electron-hole recombination rate in this layer, only carriers excited near the junction region contribute to the photocurrent in the photodetector. Therefore, UV light below 375 nm only creates a poor photocurrent response. The photocurrent under different incident light intensities was also measured. The measurement of this self-powered UV detector was carried out at 0-V bias and under 365-nm UV light irradiation. As shown in Figure 4c, under weak UV light intensity, the photocurrent increases almost linearly with increasing incident UV light intensity. A gradual saturation of the photocurrent was observed under higher UV irradiances. One possible reason for this saturation is the poor hole transport ability of water.
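For illustration, the quoted FWHM can be read off a sampled responsivity curve by interpolating the half-maximum crossings. A minimal sketch, assuming a synthetic Gaussian spectrum peaked at 385 nm in place of the measured data:

```python
import numpy as np

wl = np.linspace(350.0, 550.0, 2001)          # wavelength, nm
sigma = 18.5 / 2.355                          # FWHM = 2.355 * sigma for a Gaussian
resp = 0.022 * np.exp(-0.5 * ((wl - 385.0) / sigma) ** 2)  # responsivity, A/W

half = resp.max() / 2.0
above = np.where(resp >= half)[0]
lo, hi = above[0], above[-1]

def crossing(i, j):
    # Linear interpolation of the exact half-maximum crossing between samples.
    return wl[i] + (half - resp[i]) * (wl[j] - wl[i]) / (resp[j] - resp[i])

fwhm = crossing(hi, hi + 1) - crossing(lo, lo - 1)
print(f"FWHM = {fwhm:.1f} nm")                # ~18.5 nm by construction
```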
Results and discussion
The real-time photocurrent response of the self-powered UV detector at 0-V bias is shown in Figure 5 under incident UV light with a wavelength of 385 nm, corresponding to the bandgap of the ZnO nanoneedle arrays. The incident radiation is switched with an on/off interval of 10 s. Six repeated cycles are displayed in Figure 5a, in which the photocurrent is observed to be consistent and repeatable, with no degradation found during the detection process. From the magnified rising and decaying edges of the photocurrent shown in Figure 5b,c, respectively, a fast photoresponse can be seen clearly. The rising time (defined as the time to increase from 10% to 90% of the maximum photocurrent) and the decaying time (defined as the time to recover from 90% to 10% of the maximum photocurrent) are both approximately 0.1 s, indicating rapid photoresponse characteristics.
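A small sketch of extracting the 10%-90% rise time from a sampled photocurrent trace; the exponential turn-on curve is synthetic, with its time constant chosen to reproduce roughly the reported 0.1 s:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 20001)          # time, s (0.1 ms sampling)
i_ph = 1.0 - np.exp(-t / 0.0455)          # normalized photocurrent turn-on

def first_crossing(signal, times, level):
    return times[np.argmax(signal >= level)]   # first sample at/above level

i_max = i_ph.max()
rise = (first_crossing(i_ph, t, 0.9 * i_max)
        - first_crossing(i_ph, t, 0.1 * i_max))
print(f"rise time = {rise * 1000:.0f} ms")     # ~100 ms
# The decaying edge is handled symmetrically with 90% -> 10% thresholds.
```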
To clarify the working principle of this self-powered UV detector, a simple energy band diagram is schematically shown in Figure 6. Since the Fermi level of the n-type semiconductor (ZnO) is higher than the redox potential of the aqueous electrolyte (deionized water), when the semiconductor is placed in contact with the electrolyte, electric current initially flows across the junction until electric equilibrium is reached [28][29][30]. In this case, electrons transfer from the semiconductor (ZnO) into the electrolyte (deionized water), which produces a region on each side of the heterojunction where the charge distribution differs from the bulk material, known as the space charge layer. Electron depletion from the solid into the solution results in a positive excess charge carried by immobile ionized donor states. Hence, an electric potential difference across the solid-liquid interface is set up, which works in a Schottky barrier mode, as reflected by the upward bending of the energy bands of the n-type semiconductor. When incident light travels through the FTO glass and reaches the active layer of ZnO nanoneedle arrays, photons with energy exceeding that of the ZnO bandgap are absorbed and electron-hole pairs are generated. The built-in potential across the interface works as the driving force to separate the electron-hole pairs. Negative charge moves along the ZnO nanoneedle, is collected by the FTO electrode, and is easily fed into the external circuit, since the work function of FTO matches the conduction band of ZnO. The positive holes are driven to the surface and are captured by the reduced form of the redox molecule (h+ + OH− → OH·). Fast removal of holes can be expected across the heterojunction due to the large surface area. The oxidized form of the redox molecule is reduced back to the reduced form OH− at the counter electrode (Pt/FTO) by the electrons that re-enter the UV detector from the external circuit (e− + OH· → OH−). The circuit is completed in this manner, demonstrating a self-powered UV detection property.
Overall, the ZnO nanoneedle array/water solid-liquid heterojunction is one type of regenerative UV detector. Considering the tunability of the absorption edge of ZnO by simply changing the concentration of a doping element like Al [33,34] or Mg [35,36], and the excellent spectral selectivity of this system, we suggest that the spectral response could be tailored by elemental doping [37] over a relatively wide range, which presents promising versatility. In addition, the photoresponsivity and time performance of the solid-liquid heterojunction can also be improved by seeking an optimized electrolyte solution. The simple fabrication technique, low cost, and environmental friendliness (nontoxic composition) further add to the solid-liquid UV detector's potential for commercial application.
Conclusion
In conclusion, c-axis-preferred ZnO nanoneedle arrays have been successfully prepared on a transparent conductive FTO substrate via a simple hydrothermal method. A new type of self-powered UV detector based on a ZnO nanoneedle array/water solid-liquid heterojunction structure was fabricated, which exhibits prominent performance for UV light detection. The photocurrent responds rapidly to UV light on-off switching irradiation under ambient conditions. The mechanism of the device is suggested to be associated with the inherent built-in potential across the solid-liquid interface, which works in a Schottky barrier manner that separates the electron-hole pairs generated under UV irradiation. The large relative surface area and high crystal quality further promote the photoresponse. This new type of self-powered solid-liquid heterojunction-based UV detector can be a particularly suitable candidate for practical applications for its high photosensitivity; fast response; excellent spectral selectivity; uncomplicated, low-cost fabrication process; and environment-friendly features. | 3,550.4 | 2013-10-08T00:00:00.000 | [
"Chemistry"
] |
A survey of users’ perspectives and preferences as to the value of JISIB - a spot-check
The Journal of Intelligence Studies in Business (JISIB) has performed a survey, or spot-check, to learn more about its users at the end of three years of publication. Users were found via the journal's site on LinkedIn, and a web survey was sent from there as an announcement. 18 respondents answered completely. This was only 3.2% of the total member group, but we still think a number of conclusions can be drawn from it, also when compared with feedback gathered over the years. Users are looking for more case study material in the articles. There is an even balance between those who think there is too much technical material and those who think there is too little. The discussion about what languages to publish articles in is likely to continue; it is not a given that this should be exclusively English in the future. At the same time, publishing non-English articles presents a number of challenges.
Introduction
The Journal of Intelligence Studies in Business (JISIB) has now existed for three years. During that time it has been accepted into EBSCO and SCOPUS. As the journal is open source, it is also available through DOAJ. As its platform it uses the software system Open Journal Systems (OJS).
The content and format of the journal were largely decided based on previous experience with other similar journals. The process to start up the journal took about two years. During that time the failure of the previous journal was much discussed, and a consensus was formed around the possibilities of forming a new journal. The most important venues for these discussions were Competitive Intelligence (CI) conferences. Users' preferences and perspectives were not considered, simply because there were none yet. To find out what users think, a survey was conducted. By "users" we refer to a larger group than "readers," even though the latter is the more common term for these surveys. Many contributors are not necessarily ardent readers of the journal. Consultants, likewise, may just check out a model in an article. Some companies may be interested in the journal more for publicity, etc. Similar articles are also often referred to as "reader spot-checks" or "reports to readers."
Theory and Method
There cannot be said to be much relevant theory for this field, as it is highly applied. JISIB has previously published a review of two previous CI journals (Solberg Søilen, K., 2013), but that was by no means an analysis of users or readers. Other papers have found that readers want more material that is interesting for practitioners, but also more case studies, for example Fairlie, R., & Holder, D. (2010). Some journals operate with a kind of annual report to readers where surveys are a part, for example Sullivan, R. N. (2014). There are many potential dimensions which can be surveyed. Anonymous (2003) lists high marks for "article length," "career applicability," and "timeliness of topics." The survey went out by email to 569 members of the JISIB group on LinkedIn. After one week, 18 users had responded with complete answers to the survey. That is a 3.2% response rate (18/569 ≈ 0.032). This is a low rate, also considering that the users were well targeted, as all were members of the JISIB site on LinkedIn, and the questions to be answered were few. The introduction letter asked for 5 minutes of the users' time.
The first question consisted of four statements about the value of JISIB, answered on a five-grade Likert scale. The second question was about what topics users would like to see in the journal. The third question was about how to improve the quality (not the popularity) of the journal. The last question was about the role the user could imagine playing for the journal, for example, being an author or a reviewer or getting involved during conferences.
Results and Discussion
The average score for "the value of the SIIB journal to me" was 3,78 which means that most users think that the journal has value to them.The Average score for the value of the journal for the development of intelligence studies was even higher, 4,22.This was the highest score for the survey.For the moment there are two other journals which focus specifically on intelligence in business; both are open source.There are also journals on intelligence studies in the political field and of course in the military domain.We do not know if the users are familiar with these or if they thought that the question was only for business related journals.The lowest score was given to the question if the journal was of value to their company/organization, with average of 3,28.Even though this was the lowest score it was still positive/above neutral (=3).The second highest score was related to whether or not JISIB publishes good science.The average here was 3,89.It is clear that questions 2 and 4 assume the respondents know what good science is.From question 4 we could see that most users were in fact academics and researchers themselves (the survey was anonymous, but here users could write their contact info if they wanted to and many did).Many have also contributed directly to the journal.
Table 1: Answers on value of JISIB
The second question was about what topics users would like to see published in the journal. The information given here was very useful and again showed that the users who answered were in many cases experts working with or in intelligence-related areas. One response was given two times, which indicated it was the same person. The most common request was to publish more case studies. Second, it is not clear whether or not users want to see IT-related material in the journal, as has been the tendency so far: one user says he is against it, while another wants to see more on big data. Other suggestions include articles on competitive strategy, more material related to developing countries, more critical studies (critical theory), and more articles related to innovation. All of these topics have indeed been covered in the journal. We have also published case studies, including in this issue. One conclusion could be to try to find even more case studies; this has also been requested by CI consultants. There is one problem with critical theory and case studies from a scientific perspective, which is that it tends to become more difficult to be acknowledged as scientific: in most ratings and evaluations, "scientific" implies a dominance of empirical articles. We have solved this question by dividing the articles into regular articles and "opinions." In some recent issues the number of "opinion" pieces has been rather large. This may be a difficult trade-off, as many readers want "opinions" and evaluators/peers want science/empirical material.
Table 2: User preferences as to JISIB content
The third question was about quality improvement. It is implicit here that a comparison between the answers to questions two and three is interesting, as it shows whether the improvements suggested for better quality are the same as the material users want to see more of in the journal.
We see that for the most part this is not the case. Instead there is a list of specific suggestions directly related to quality. The first point is the editing and, implicitly, the grammar and syntax. This has been a major issue for the journal. If we were to reject articles which are not written in proper English, we would have to disregard a large number. This would also have the effect that most articles would be from authors from Anglo-Saxon countries. To a certain extent we have tried to help some authors, but this has also been difficult due to time constraints. We will continue to make efforts to improve this part. Another user suggests the invitation of guest editors. This is absolutely a possibility, and the same person received an invitation directly, as he has also published with us before and has been active in the community for many years.
The next suggestion is to expand the editorial committee. It is quite possible that this can be done, and we will look into it, but at the same time, few journals have a more diverse editorial committee. In addition, JISIB has an active co-editor on each continent. Committee members are evaluated every second year based on their net contribution; new members will then have the possibility to enter and contribute. A certain turnover here is probably only healthy for the well-being of the journal. Another suggestion is to allow for more articles in more languages. At the start of JISIB there was some talk of having a bilingual journal, in French and English. It is still an open question. At the same time, the language of science tends to be English, even though there is a growing number of articles in other languages, above all in Chinese. If we play with the idea of having articles in several other languages, it is a question of how many of our users would in fact be able to read the articles. One user also wants us to use more appealing images in the articles. This is possible but normally not associated with scientific articles; it also takes many resources, which we do not have. There are some good exceptions, though, like the journals "Science" and "Nature", but these stand in a class by themselves.
Table 3: User perceptions about quality improvements of JISIB
The last question was more of an open invitation to get users more involved with the journal. When the journal started it was clear that it would only be possible if a large number of people volunteered their own free time. This is still the building block for the journal five years down the road. As the survey was anonymous, we could not see who sent in the different answers. We used the web service Qualtrics to gather the actual data; it shows only the approximate GPS coordinates for the respondent's IP number. I personally consider this information not to be acceptable, but did not know about the function until afterwards, as I have used other services before. Still, it was not possible for us to see who the respondents were. However, in question four the respondents could disclose who they were, and many did. Their information is not presented in the table below, which is therefore more of a figure.
Many users showed here that they are already active, writing articles, being reviewers, and participating at conferences. Some users also volunteered to do work (write, review, and even edit), which is a great thing for the journal.
Conclusion
To keep the conclusion short: users think the overall value of the journal is high, but they are looking for more case study material in the articles. There is an even balance between those who think there is too much technical material and those who think there is too little. One conclusion that was not suggested by any one user, but which could be explored, is to invite guest editors to publish a whole issue in their own language. There could be a special French issue, as many contributions continue to come from France, and a special Spanish issue, as we have several contributions from Mexico and Spain. It could also
Figure 1: What role users would like to fill in JISIB. | 2,571.8 | 2014-11-25T00:00:00.000 | [
"Business",
"Computer Science"
] |
Accurate Detection for Zirconium Sheet Surface Scratches Based on Visible Light Images
Zirconium sheet has been widely used in various fields, e.g., chemistry and aerospace. The surface scratches on zirconium sheets caused by the complex processing environment have a negative impact on performance, e.g., working life and fatigue fracture resistance. Therefore, it is necessary to detect the defects of zirconium sheets. However, it is difficult to detect such scratch images due to a lot of scattered additive noise and complex interlaced structural texture. Hence, we propose a framework for adaptively detecting scratches on the surface images of zirconium sheets, including noise removal and texture suppression. First, the noise removal algorithm, i.e., an optimized threshold function based on the dual-tree complex wavelet transform, uses selected parameters to remove scattered and abundant noise. Second, the texture suppression algorithm, i.e., an optimized relative total variation enhancement model, employs selected parameters to suppress interlaced texture. Finally, by connecting disconnections based on two types of connection algorithms and replacing the Gaussian filter in the standard Canny edge detection algorithm with our proposed framework, we can detect the scratches more robustly. The experimental results show that the proposed framework achieves higher accuracy.
Introduction
Zirconium and its compounds have unique physicochemical properties, e.g., amazing corrosion resistance, extremely high melting point, and ultrahigh hardness and strength, accounting for their applications in engineering and science [1,2]. As a relatively rare metal material, zirconium sheet plays a significant role in machinery manufacturing, aerospace, nuclear reactor, chemical industry, ceramic industry, and other fields [3][4][5][6][7][8]. However, in various production and processing processes, scratches will appear on the surface of zirconium sheets due to various factors. The surface quality of zirconium sheets will directly affect the performance and quality of final products; therefore, it is necessary to detect the location and shape of scratches. At present, the scratches on the surface of zirconium sheets are usually detected manually. However, manual detection has many shortcomings, such as high false detection rate and low efficiency. Therefore, we urgently need a method for scratch detection of zirconium sheets.
At the same time, with the development of modern industry and science and technology, nondestructive testing (NDT) is widely used in various fields, e.g., aerospace, machinery industry, shipbuilding, automobile, etc. [9][10][11][12][13]. NDT technology includes ultrasonic, machine vision, radiographic, eddy current, and other methods. Among them, compared with other methods, machine vision based on image processing technology has great advantages, e.g., automation, high precision, easy operation, etc. Therefore, we choose machine vision to detect scratches on the surface of zirconium sheets [14].
However, due to the interference of the surface features of zirconium sheets, namely noise and texture, it is difficult to use machine vision methods to detect scratches. In other words, noise will mask the details of images, resulting in breakpoints in the extracted scratch contour. In addition, texture will mistakenly become branches of the contour, destroying the size and shape of the contour [15][16][17]. Therefore, removing noise and suppressing texture before scratch detection is key to obtaining the region of scratches. Traditional algorithms typically detect targets based on image features, e.g., gradient, color, and texture. Because the detection objects lie in a complex texture background, the difference between the results obtained from color-based image segmentation and the ground truth is significant [18][19][20]. Researchers from different countries have proposed the following noncolor methods to solve such problems [21][22][23].
Renuka et al. [24] presented an objective calculation method of denoising threshold based on dual-tree complex wavelet transform (DTCWT), which has good edge preservation and denoising ability. Liu et al. [25] presented a nonreference image denoising method based on enhanced DTCWT and bilateral filter. Compared with other algorithms, the results show that the denoising effect of this method is better than other methods. Li et al. [26] proposed a you only look once (YOLO)-attention based on YOLOv4 for complex defect types and noisy detection environments in wire and arc additive manufacturing (WAAM), achieving fast and accurate defect detection for WAAM. This method achieved an average accuracy of 94.5% in dynamic images. Kelishadrokhi et al. [27] proposed a new method based on the combination of color and texture features to solve the problem of finding more similar images from a large database. They proposed an extended version of local neighborhood difference pattern (ELNDP) to achieve discriminative features and optimized the color histogram features in the HSV color space to extract color features. This method has better retrieval performance compared to other methods. Xu et al. [28] proposed a new texture structure extraction system. Experimental results show that the algorithm is effective and does not need a priori condition. Zhou et al. [29] proposed a method to extract machine tool defects from high-speed milling workpiece surface images, which reduces the influence of workpiece surface background texture. The application example shows that this method can effectively extract machine tool defects. Su et al. [30] proposed a new edge-preserving texture suppression filter, which uses the joint bilateral filter as a bridge to achieve the dual purposes of texture smoothing and edge-preserving. Isar et al. [31] proposed a two-stage denoising system structure to separate threshold calculation and noise removal so as to improve the denoising performance. Tian et al. [32] designed a lighting method combining plane illumination mode with multiangle illumination mode to automatically detect five kinds of defects by different detection methods.
However, most of the above contour extraction methods have the disadvantages of nonadaptive denoising, the need for a large number of datasets, or texture suppression without edge preservation. Therefore, it is necessary to propose a method that addresses the weaknesses of the above methods. In order to remove noise and suppress texture simultaneously, while preserving as much detail as possible before scratch detection, we propose an adaptive scratch extraction framework that detects scratches with a two-stage system, suppressing interference features. This method is able to detect scratches on the surface of zirconium sheets effectively despite the interference of background features.
The key of this method is to obtain the wavelet decomposition level, the threshold, and the texture size.
The rest of this article is organized as follows. In Section 2, we first introduce the theoretical part. Then, the experiment and its results are introduced in Section 3. The conclusion and further work are shown in Section 4.
Methodology
Due to the interference of background features, we propose an adaptive scratch extraction framework for the surface of zirconium sheets to detect scratches. First, this algorithm uses the DTCWT to decompose images at the selected wavelet decomposition level and then designs an optimized adaptive local threshold to improve the noise removal ability. Second, it selects the texture size and then designs a relative total variation enhancement (RTVE) algorithm to suppress texture. Finally, it detects the scratches and quantitatively analyzes these defects from three aspects, i.e., area, length, and position deviation. Therefore, this section provides the theory related to noise removal and texture suppression, e.g., the objective function for calculating the adaptive threshold, the RTVE model, the parameter selection formula, the evaluation function, etc. The flow chart of the algorithm is shown in Figure 1 and Algorithm 1. In Algorithm 1, D_n represents the decomposition level and T_ps represents the texture size. The first step is to input the original image and view the 3D grayscale image; the second step is to observe the 3D grayscale images of the wavelet coefficients at all levels, select an appropriate decomposition level, and remove noise; the third step is to test the texture size and then suppress texture; the final step is to extract the region of interest (ROI) and compare it with the results to obtain the accuracy.
(Algorithm 1 ends by comparing the detection result with the ROI and returning f_c together with the comparison result.)
Adaptive Noise Removal in Wavelet Domain
Property 1. The two types of double hook functions are described as

y = ax + b/x, a, b ≠ 0 (normally, a = b = 1), (1)

y = ax + b/x, with a = 1, b = −1, (2)

where a and b are constants. The double hook function has two asymptotes: one vertical asymptote (x = 0) and one oblique asymptote (y = ax). Equation (1) has an oblique asymptote and no intersection with the x axis. Equation (2) differs from Equation (1) in that it intersects the x axis.
Property 2. The hard threshold function and the soft threshold function are described as [33,34]

HT(W) = W if |W| ≥ λ, and HT(W) = 0 otherwise, (3)

ST(W) = sgn(W)(|W| − λ) if |W| ≥ λ, and ST(W) = 0 otherwise, (4)

where λ is the threshold, W is the wavelet coefficient, and HT and ST are the hard and soft threshold functions, respectively. Equations (3) and (4) are shown in Figure 2c.

Figure 2. (a) Prototype I (the red line represents the asymptote and the blue curve represents Equation (1)); (b) Prototype II (the red line represents the asymptote and the blue curve represents Equation (2)); (c) threshold functions.
We propose an improved threshold model based on Equation (2) of Property 1, which combines the advantages of the soft and hard threshold functions. The semisoft threshold formula is described as

Ŵ_SST = sgn(W)(|W| − λ_A²/|W|) if |W| ≥ λ_A, and Ŵ_SST = 0 otherwise, (5)

where λ_A is the threshold, sgn denotes the sign function, and Ŵ_SST is the filtered coefficient. Furthermore, λ_A is described as

λ_A = σ̂_n²/σ̂, (6)

where σ̂_n² is the estimated variance of the noise component and σ̂ is the estimated standard deviation of the noiseless component.
In the first stage of removing noise, the soft or hard threshold function is usually used to extract the noise components of each sub-band. However, as Figure 2c shows, wavelet coefficients processed by the hard threshold function exhibit jump points at ±λ_A, lose the smoothness of the original information, and produce oscillations after reconstruction. This disadvantage leads to sudden changes in the pixels of the scratch image, i.e., significant differences in the grayscale values of some adjacent pixels. Wavelet coefficients processed by the soft threshold function carry a fixed bias relative to the hard threshold function, which directly affects the similarity between the reconstructed and true coefficients. This disadvantage causes an overall shift of the pixel grayscale values in the scratch images, resulting in a blurry effect.
From Figure 2c, we can intuitively see the shapes and features of the hard and soft threshold functions: the red line represents the soft threshold function, the green line the hard threshold function, and the blue curve the semisoft threshold function. A feature common to both classical functions is that, for large |W|, they run parallel to the line Ŵ_SST = W. Considering the drawbacks of both functions, we therefore need an asymptotic semisoft threshold function that compensates for the discontinuity of the hard threshold function and the bias of the soft threshold function, i.e., one that eliminates sudden changes in pixel grayscale values and weakens blurring effects. Since the line Ŵ_SST = W is an oblique asymptote and Equation (2) has exactly this feature, we construct the formula shown in Equation (5).
To verify that Equation (5) converges to the straight line Ŵ_SST = W, we consider two cases. When W > λ_A (so W > 0), Equation (5) can be rewritten as

Ŵ_SST = W − λ_A²/W; (7)

applying the limit to Equation (7) gives

lim_{W→+∞} (Ŵ_SST − W) = lim_{W→+∞} (−λ_A²/W) = 0. (8)

When W < −λ_A (so W < 0), Equation (5) can be rewritten as

Ŵ_SST = W − λ_A²/W = W + λ_A²/|W|; (9)

applying the limit to Equation (9) gives

lim_{W→−∞} (Ŵ_SST − W) = 0. (10)

Based on these results, we conclude that Equation (5) has only one oblique asymptote, Ŵ_SST = W. As |W| increases, the value of λ_A²/|W| gradually decreases, so Ŵ_SST approaches W and the deviation between the estimated and actual wavelet coefficients shrinks, overcoming the shortcomings of the two classical threshold functions.
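As a concrete illustration, the following NumPy sketch implements Equations (3)-(5) as reconstructed above; the small epsilon guard against division by zero is our addition, not part of the paper. Note that the semisoft rule is continuous at ±λ_A, since |W| − λ_A²/|W| vanishes there.

```python
import numpy as np

def hard_threshold(W, lam):
    # HT (Eq. (3)): keep coefficients with |W| >= lam, zero the rest.
    return np.where(np.abs(W) >= lam, W, 0.0)

def soft_threshold(W, lam):
    # ST (Eq. (4)): shrink surviving coefficients toward zero by lam.
    return np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

def semisoft_threshold(W, lam):
    # Semisoft rule (Eq. (5)), built on the double hook prototype (2):
    # the bias lam**2/|W| decays as |W| grows, so the curve approaches
    # the oblique asymptote W_hat = W (Eqs. (7)-(10)).
    shrunk = np.sign(W) * (np.abs(W) - lam**2 / np.maximum(np.abs(W), 1e-12))
    return np.where(np.abs(W) >= lam, shrunk, 0.0)

coeffs = np.linspace(-4.0, 4.0, 9)
print(semisoft_threshold(coeffs, lam=1.0))
```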
It is worth noting that, in the first stage, we use a square directional window to calculate the variance: since we need to extract as much noise as possible from the image, the square window meets this requirement. In the second stage, we use elliptical directional windows to calculate the variance so as to preserve as much detail as possible while denoising. However, the prototype elliptical windows have relatively poor directional selectivity and cover a large area.
Therefore, in the second stage of noise removal, we calculate the local threshold with a parameter-adjusted elliptical directional window that is smaller and more directionally selective than the elliptical window prototype [35]. The window is parameterized by the principal axis direction θ ∈ [−π, π]; when θ is ±15°, ±45°, or ±75°, the window corresponds to one of the six high-frequency sub-bands, and m and n are the pixel coordinates. Finally, the denoised image is obtained by the inverse transform.
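As an illustration, the sketch below builds a rotated elliptical mask and uses it for a local, BayesShrink-style threshold in the spirit of Equation (6). The semi-axis lengths, the variance estimator, and the function names are our assumptions: the paper's exact window parameterization from [35] is not reproduced here.

```python
import numpy as np

def elliptical_window(shape, center, a, b, theta):
    # Boolean mask of an ellipse with semi-axes a (along the principal
    # axis) and b (across it), rotated so the principal axis points in
    # direction theta (radians), e.g. +/-15, +/-45, +/-75 degrees.
    rows, cols = np.indices(shape)
    y = rows - center[0]
    x = cols - center[1]
    u = x * np.cos(theta) + y * np.sin(theta)    # coordinate along the axis
    v = -x * np.sin(theta) + y * np.cos(theta)   # coordinate across the axis
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def local_threshold(coeffs, mask, sigma_n):
    # lambda_A = sigma_n^2 / sigma (Eq. (6)), with the noiseless standard
    # deviation estimated from the coefficients inside the window.
    sigma = np.sqrt(max(np.var(coeffs[mask]) - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma

# Example: threshold for one pixel of a 64x64 sub-band at +45 degrees.
rng = np.random.default_rng(0)
band = rng.normal(size=(64, 64))
win = elliptical_window(band.shape, center=(32, 32), a=6, b=2, theta=np.pi / 4)
print(local_threshold(band, win, sigma_n=0.5))
```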
Texture Suppression and Edge Preservation
We propose an RTVE model that maintains the defect contour while suppressing texture. Using RTVE to suppress the texture of the image f_d, the result f_c is defined as

f_c = arg min_S { ‖f_d − S‖₂² + λ_t Σ_p F_RTVE(p) }, (13)

where the data term ‖f_d − S‖₂² keeps the result close to the contrast-enhanced input f_d, whose contrast enhancement is defined as follows: the lowest 1% of pixels (P_lowest 1%) are set to the minimum pixel value and the highest 1% (P_highest 1%) to the maximum pixel value. (14)

Furthermore, F_RTVE(p) is described as

F_RTVE(p) = D^E_x(p)/(L^E_x(p) + ε) + D^E_y(p)/(L^E_y(p) + ε), (15)

where ε is a small positive number avoiding a zero denominator, D^E_x(p) and D^E_y(p) denote the total variations of the window around pixel p in the x and y directions, respectively, and L^E_x(p) and L^E_y(p) are the inherent variations of the window around pixel p in the x and y directions, respectively.
All parameters of Equation (15) are described as follows. D^E_x(p) is described as

D^E_x(p) = Σ_{q∈W_p} g^E_{p,q} |∂_x f_q|, (16)

and D^E_y(p) is described as

D^E_y(p) = Σ_{q∈W_p} g^E_{p,q} |∂_y f_q|; (17)

L^E_x(p) is described as

L^E_x(p) = |Σ_{q∈W_p} g^E_{p,q} ∂_x f_q|, (18)

and L^E_y(p) is described as

L^E_y(p) = |Σ_{q∈W_p} g^E_{p,q} ∂_y f_q|, (19)

where W_p is the local window centered on pixel p, and the weight g^E_{p,q} is the product of a spatial-distance weight and a grayscale-difference weight, with i and j the abscissa and ordinate of the central pixel, m and n the abscissa and ordinate of the input pixel, I(i, j) the value of the central pixel, and I(m, n) the value of the input pixel. Usually, relative total variation (RTV) is used to suppress texture [28]. The objective function of RTV is defined as [28]

f_c = arg min_S { ‖f_d − S‖₂² + λ_t Σ_p F_RTV(p) }, (22)

where λ_t is the weight of the regularization term, and F_RTV(p) is defined as

F_RTV(p) = D_x(p)/(L_x(p) + ε) + D_y(p)/(L_y(p) + ε). (23)

When the RTV method is applied to the surface of metal materials with structural texture, it is difficult for RTV to suppress the texture background completely while ensuring that small defects are not lost, because RTV considers only a spatial-distance weight. Equation (15) remedies this by computing two weights: not only is the spatial-distance weight considered, but the grayscale difference between the central pixel and the other pixels in the neighborhood is also used as the basis for a second weight.
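To make the two-weight idea concrete, here is a minimal sketch of one plausible form of g^E_{p,q} as a product of Gaussian kernels on spatial distance and grayscale difference. The Gaussian form and the sigma values are our assumptions (the paper's formula does not survive extraction at this point), but the structure, a bilateral-style spatial weight times a range weight, is exactly what the text describes.

```python
import numpy as np

def combined_weight(I, i, j, m, n, sigma_s=3.0, sigma_r=0.05):
    # g^E_{p,q} as a product of two weights: a spatial-distance weight
    # between the central pixel (i, j) and the input pixel (m, n), and a
    # grayscale-difference weight -- the term RTVE adds over plain RTV,
    # which uses the spatial weight alone.
    spatial = np.exp(-((i - m) ** 2 + (j - n) ** 2) / (2.0 * sigma_s ** 2))
    gray = np.exp(-((I[i, j] - I[m, n]) ** 2) / (2.0 * sigma_r ** 2))
    return spatial * gray

# Example on a tiny image normalized to [0, 1].
img = np.array([[0.1, 0.2, 0.1],
                [0.2, 0.9, 0.2],
                [0.1, 0.2, 0.1]])
print(combined_weight(img, 1, 1, 0, 0))  # bright center vs. dark corner
```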
After preprocessing, the integrity of the scratches must be ensured: the convex hull and a morphological closing operation are used to connect broken segments. The principle of the convex hull is shown in Figure 3a. The scratch extraction on real images is evaluated against the manually extracted ROI in terms of area, length, and position deviation. The comprehensive accuracy is the arithmetic mean of the area and length accuracies:

comprehensive accuracy = (area accuracy + length accuracy) / 2.

The position deviation is expressed by the Euclidean distance between the centroids,

d_e = √((x₁ − x₂)² + (y₁ − y₂)²),

where x₁ and y₁ are the centroid coordinates of the real value, x₂ and y₂ are the centroid coordinates of the actual (detected) value, and d_e is the distance between the two. As long as the distance is less than 100 pixels, we consider the position to be qualified.
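A compact sketch of this evaluation protocol follows, assuming binary masks for the manually extracted ROI and the detected scratch. The relative-error form of the area and length accuracies is our reading of the text, not a formula quoted from the paper.

```python
import numpy as np

def evaluate_scratch(mask_true, mask_pred, length_true, length_pred):
    # Area and length accuracies relative to the manually extracted ROI
    # (masks are boolean arrays and assumed non-empty).
    area_acc = 1.0 - abs(mask_pred.sum() - mask_true.sum()) / mask_true.sum()
    length_acc = 1.0 - abs(length_pred - length_true) / length_true
    comprehensive = (area_acc + length_acc) / 2.0  # arithmetic mean

    # Position deviation: Euclidean distance between the two centroids;
    # a deviation below 100 pixels counts as qualified.
    c_true = np.array(np.nonzero(mask_true), dtype=float).mean(axis=1)
    c_pred = np.array(np.nonzero(mask_pred), dtype=float).mean(axis=1)
    d_e = float(np.linalg.norm(c_true - c_pred))
    return comprehensive, d_e, d_e < 100.0
```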
Experiments
As shown in Figure 4, the scratch images used in this paper are collected by a machine vision system composed of a three-dimensional motion device, an industrial camera with a pixel resolution of 4024 × 3036, a white light source, a computer, and a test piece. Three-dimensional movement of the camera and light source is realized through control software to ensure reliable collection of scratch images.
The surface images of zirconium sheets are shown in Figure 4b; an aluminum plate is used as a stand-in for the zirconium sheet. Four kinds of scratch images are considered in this paper: single scratches, multiple scratches, cross scratches, and other scratches, as shown in Figure 5. Note that these images are later resized as required by the wavelet transform. Before collecting images, we wipe the surface of the sample with alcohol to remove contaminants that would otherwise interfere. In addition, before the experiment, we test the lighting intensity, since lighting has a significant impact on imaging quality. Different lighting methods affect the detection object differently; for example, diffuse light sources can make the detection surface glare, requiring special light sources to solve the reflection problem. Choosing an appropriate light source and lighting intensity is therefore crucial for machine-vision-based nondestructive testing. By analyzing the surface properties of the sample and surveying the types and application scenarios of light sources, we design a machine vision inspection system built around a high-precision camera and a coaxial light source, because the surface of the detected object is highly reflective and has a complex texture. First, we use a coaxial light source to reduce the impact of reflected light, since ordinary light sources cannot suppress surface reflection. Second, we choose an appropriate lighting intensity through experiments to ensure the integrity of the scratch area while minimizing the texture as much as possible. We test scratch images obtained under different lighting intensities with the algorithm proposed in this article and compare the results with those of other algorithms. According to the data, it is advisable to increase the light intensity as much as possible while preserving the integrity of the scratches, i.e., to choose a value within the appropriate range of light intensity.

Before scratch detection, the inputs must be denoised and their texture suppressed; the first half of Section 3 covers noise removal and texture suppression, respectively. From Figure 6, observing the change in the wavelet coefficients at 45° from the first to the fourth level and their 3D grayscale representations (the diagonal coefficients show the most significant characteristics), we can draw the following conclusions. First, from Figure 6a-d, the wavelet coefficients of the first and second levels cannot reflect detailed information. Second, from Figure 6g,h, the wavelet coefficients of the fourth level are distorted due to the loss of pixels. Finally, from Figure 6e,f, the wavelet coefficients of the third level retain rich scratch information while still reflecting the unavoidable background interference. As the decomposition level runs from 1 to n, the details of the scratches go from blurry to clear and back to blurry, and at decomposition level 3 the scratch details are most distinct. Therefore, we choose the third decomposition level, i.e., D_n = 3.
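This level inspection can be reproduced with the open-source `dtcwt` Python package, assuming its standard `Transform2d` interface; the subband-index-to-orientation mapping noted in the comment is our assumption and should be checked against the package documentation.

```python
import numpy as np
import dtcwt  # pip install dtcwt

img = np.random.rand(256, 256)  # stand-in for a grayscale scratch image
transform = dtcwt.Transform2d()
pyramid = transform.forward(img, nlevels=4)

# pyramid.highpasses is a tuple with one complex array per level; the last
# axis holds six directional subbands (roughly +/-15, +/-45, +/-75 degrees).
for level, hp in enumerate(pyramid.highpasses, start=1):
    # Inspect the magnitude of one diagonal subband per level, as in
    # Figure 6; index 1 is assumed here to be the ~45-degree orientation.
    print(level, np.abs(hp[:, :, 1]).mean())
```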
As shown in Figure 7, the image pixels after hard thresholding show abrupt jumps, and the pixels after soft thresholding are blurred and overly smooth, while the pixels after semisoft thresholding fall between the two: the rule weakens sudden changes in pixel values while maintaining smoothness.
On the basis of the denoised image, the structural texture must be suppressed. First, the parameter selection method is used to estimate the texture size, which is determined by the height shown in Figure 8. Analyzing Figure 8, we estimate the texture size and select the parameter within the range 0-6: when the autocorrelation coefficient is greater than 0.01, the texture size is roughly between 3 and 6, and when it is less than 0.01, the texture size is roughly between 0 and 3. T_ps can then be set from the texture size. The height difference between the yellow plane and the blue plane indicates the texture size; from Figure 8, the autocorrelation coefficients are 0.0238, 0.0150, 0.0170, and 0.0187, respectively. Therefore, the texture size can be roughly determined to be over 3.
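The paper's exact estimator does not survive extraction, so the following is only one plausible reading of the 0.01 rule: compute a normalized autocorrelation coefficient of the grayscale image at small lags and bucket the texture size accordingly. The lag direction and normalization are our assumptions.

```python
import numpy as np

def autocorr_coefficient(gray, lag):
    # Normalized autocorrelation of the image at a horizontal lag,
    # used as a proxy for the texture period (size).
    a = gray[:, :-lag].ravel() - gray.mean()
    b = gray[:, lag:].ravel() - gray.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_texture_size(gray, max_lag=6, cutoff=0.01):
    # Coarse rule from the paper: coefficients above the cutoff point to a
    # texture size of roughly 3-6 pixels, below it to roughly 0-3.
    coeffs = [autocorr_coefficient(gray, lag) for lag in range(1, max_lag + 1)]
    return (3, 6) if max(coeffs) > cutoff else (0, 3)

print(estimate_texture_size(np.random.rand(64, 64)))
```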
Then, the results obtained using RTVE are compared with those obtained using RTV. Figure 9 shows the surface of the scratch images after texture suppression: Figure 9a,c,e,g are processed by RTV, and Figure 9b,d,f,h are processed by RTVE. Analyzing Figure 9, we can see that the integrity of the scratch edges is higher after processing with the RTVE algorithm, whereas the scratch areas processed by RTV always lose some edges, indicating that the RTVE algorithm proposed in this paper has advantages that RTV does not. The proposed noise removal and texture suppression methods are then applied to the real scratch images. To quantitatively evaluate the performance of the proposed algorithm, the error with respect to the real value is calculated in terms of area, length, and position deviation, using the manually extracted region of interest as the real value. The results are shown in Figure 10 and Table 1. The red boxes in Figure 11a-d mark the areas where the scratches are located. From Table 1 and Figure 11, there are four types of scratches, with multiple instances of each, totaling 17 scratches. The real value indicates the ROI, with scratches manually extracted from the scratch images; the actual value represents the results obtained by the framework proposed in this paper. By comparative analysis, the accuracy of this algorithm exceeds 85%, giving it a clear advantage in terms of accuracy.

The framework proposed in this article is also compared with current scratch detection algorithms, with the results shown in Table 2. These algorithms are the edge-based snake, fuzzy edge detection, LoG (Laplacian of Gaussian), K-means, fuzzy c-means, and a method combining superpixels with K-means (the original Canny is not included because Canny alone cannot remove texture) [36]. The results show that the algorithm proposed in this article performs better, obtains more complete scratches, and is superior to the other gradient-based algorithms. Compared with LoG, the proposed algorithm is more accurate and stable. The edge-based snake algorithm requires masks, and its number of iterations is difficult to determine, resulting in widely fluctuating results. Although the texture is suppressed, the fuzzy edge algorithm still recognizes residual texture as scratch edges, producing incorrect results. Compared with the image segmentation algorithms, the algorithm proposed in this article can accurately obtain complete scratch contours.
Conclusions
This study proposes a scratch detection framework for zirconium sheets with surface interference. As described above, the framework includes two parts: noise removal and texture suppression. An adaptive threshold algorithm based on the DTCWT is proposed for denoising, and an optimized texture suppression algorithm based on RTVE is proposed for suppressing texture. Finally, an optimized Canny algorithm is used to detect scratch contours, and the result is compared with the real value.
This surface scratch detection solution removes noise and suppresses texture, solving the problem of background interference. The experimental results show that the comprehensive accuracy exceeds 85% and that the framework can effectively improve the accuracy of scratch detection. The optimized scratch detection framework amounts to a semiadaptive local threshold method, which is suitable not only for metal parts but also for other materials with similar surfaces, such as scratched tiles. In future work, fully adaptive scratch detection can be realized through threshold calculation, morphological processing, and image enhancement, so that defects of different degrees and types can be detected. | 5,593 | 2023-08-01T00:00:00.000 | [
"Computer Science"
] |
An Approach to the Green Area Parameter in Urban Transformation
In this study, the green area value is obtained from the feasibility reports prepared for a 4.6-hectare region declared a risky area within the framework of the Law on Transformation of Areas Under Disaster Risk (No. 6306) and its Implementation Regulation, and the green area per capita (m²/person) is determined. In urban planning, all land-use, social, and technical infrastructure parameters need to be considered together at an optimum level; this article shows that, even if only the green area data are taken into consideration, the analysis supports the importance of making a transformation decision for the region. In this context, an analysis and calculation model is proposed, with parameters defined in a suggested form and bounded by limit-value conditions in the light of international and national data. The amounts of green area in the current situation, under the development plans, and in the draft case are compared, and an approach to the green area ratio per capita is given.
Introduction
The concept of open space is one of the important basic elements of urban texture; open spaces are defined as open or vacant areas outside the architectural structures and transportation areas. In other words, they are perceived as unbuilt areas that hold potential for recreational use. For example, water surfaces, vegetation elements, and a limited number of squares and transportation areas are classified as open areas [1]. In addition, in the Planned Landscapes Type Zoning Directive, green areas are defined as the whole of the playgrounds, children's grounds, recreation, picnic, and coastal areas reserved for community use [2].
Green spaces can be described as public spaces in urban areas where social relations are generally established and social solidarity develops. The forms and types of green spaces vary, including children's playgrounds, recreation areas, zoological gardens, and botanic parks. The green area of a region or a project area refers to the collective sum of all these areas.
The green areas created in cities differ significantly from country to country and from city to city. The types of green spaces, their contents, and their forms and sizes depend on population size, the characteristics of the settlement, the natural characteristics of the city such as topography, soil capability and plant cover, and climatic conditions.
Green spaces in the city provide the necessary assurance of safe foot traffic, especially by separating vehicle traffic from pedestrian, recreation, and residential areas [3]. Besides, green areas soften the rigid mass formed by the formal buildings and give an organic character to the urban area [4].
However, unplanned development results in social infrastructure, especially green areas, that is proportionally deficient or inefficient. In this study, the green area, which forms a basis for urban transformation decisions, has been evaluated together with the results of local and universal applications.
Applying a mathematical model to find the size of the green area per capita facilitates the development of city-specific criteria [5]. According to Polat, the green area parameter is incorporated into a relational matrix together with recommended limit values set out in the light of international and national data [6].
In this study, a green area approach has been adopted, with boundary values determined according to international and national parameters. In this context, an area of 4.6 hectares in the Bagcilar district of Istanbul, declared risky within the framework of Law No. 6306, has been evaluated with respect to its current green spaces and its need for transformation, focusing only on the green areas. Thus, the need for green space in urban areas has been examined on universal and local platforms, and an ideal solution proposal has been sought [7].
Methodology
The green area diagram in Figure 1 shows the current situation in a project area together with the boundary-value criteria, including the proposed model's data base and the national, international, and recommended standards likely to apply once an urban transformation decision is implemented.
Figure 1. Green area diagram
In this analysis, six parameters in total are evaluated: three for the current situation and three for the draft model (Table 1).
Green Area in Project Area
Table 2 shows the green area data for the current situation. The green area data based on the current population (persons) and green area (m²), obtained from the research report for the region where the urban transformation is to be carried out, appear in the variable input column, and the calculation and analysis results are shown in the output column as the area per person. Since there is no green area in the project area, the current green area per capita is 0.00 m²/person as a result of Equation 1 (green area per capita = green area / population). The fact that the 1,400 people living in the region have no green space at all reveals how necessary urban renewal is, even in this respect alone. The presence of green spaces is of great importance for building up quality of life, urban sustainability, and more livable urban areas. From this point of view, it would not be an exaggeration to describe this city section, which has no green areas, as consisting only of building stock and roads (Figure 2).
Green Area in Draft Model
The draft model analysis for the project area is adopted once the necessity of urban transformation is assumed to be established. In the green area data bank section of the draft model, the appropriate green area per capita (m²/person), determined in accordance with the needs of the region where the urban transformation is to be performed, appears in the input column, and the calculated draft green area (m²) is shown in the output column. The draft population (persons) is an output obtained from the interaction between the project area (ha) and the draft density (persons/ha), a parameter used in the green area calculations [8]. The values derived from national and international criteria and from the suggestions are given in the boundary value column, while the suitability and condition columns show the limit values of the outputs and their status (Table 3). The green area data of the draft model are examined under the heading "international values," where the general results of the literature research are gathered, and "situation in Turkey," where the relevant Turkish laws and regulations and the current situation are collected. From these data, the limit values suggested by synthesizing and interpreting the figures obtained for the world and for Turkey have been determined. "Analysis of the project area," where the model is applied and the values determined for the draft case are integrated, is then compiled (Table 4).

Gedikli (2002) aims to find the size of green area per capita that should be determined by the individual and familial characteristics of the society. The study develops a mathematical model proposal that can be used to evaluate the per capita size of open green space in cities. It argues that the amount of green area specific to each region should be determined by conducting socially oriented research and surveys and analyzing the answers.
It can be said that the quality of life in an urban area correlates fairly directly with the per capita green space in developed world cities with high living standards. The amount of green space per capita is a basic element of social infrastructure: the more this value is increased, the closer that place comes to sustainable living spaces and the ideal city.
The green space datum is one of the main arguments of planning thought. This parameter is the dominant determinant of the amount of social infrastructure in a project area: as shown in the green area data interaction diagram, the green area (m²) is the first factor affecting the total area of social infrastructure (Figure 3).

According to Table 5, which lists the amount of green area per person by country, green area values are higher in Western societies. In countries with a high quality of life, such as the United States, England, and Australia, the amount of green space is 10 m²/person, while it is 3.5 m²/person in the Netherlands, 3.0 m²/person in Italy, and 0.21 m²/person in Iran; the average is calculated as 6.02 m²/person. These values reflect the averages of cities equivalent to metropolitan cities in Turkey, such as Istanbul, or the standards in those countries' regulations. The per capita green space of some important European cities is listed in Table 6 [17,18]. While the green area reaches 19.90 m²/person in The Hague, Munich and Copenhagen also show substantial values (11.60 m²/person). In the chart, Ankara (1.00 m²/person) and Istanbul (2.10 m²/person) are the lowest. Moreover, the low values of Turkey's two largest cities clearly contribute to keeping the overall average (7.17 m²/person) below 10 m²/person.

According to the World Cities Culture Forum, the green area ratio in Istanbul is 2.20%, while the average of the world's major cities is 21.10% and the average of the major European cities is 23.83% (Table 7). Indeed, the ratio is 54.00% in Moscow, 47.00% in Singapore, and 46.00% in Sydney; the cities below 5% are generally in the Far East, such as Tokyo (Japan), Taipei (Taiwan), and Shanghai (China) [19]. The Spatial Planning Regulations in Turkey include a chart of green spaces by population group in urban areas (Table 8); according to this table, the green area per capita is set at 10 m²/person under the headings of children's park, park, botanical park, zoological garden, promenade, and recreation area [20].

In Istanbul, green areas are shrinking day by day, and the green area per capita is decreasing in both quantity and quality; with its high population density and overcrowding, the city is turning into a city of concrete. In Table 10, the values obtained by Akdoğan (1972) for the districts of Ankara, in the context of research on the qualifications and planning principles of children's playgrounds, are tabulated; these values, as in Istanbul, are quite low (average: 1.02 m²/person) [21].

According to the literature research and the current situation in the world, there is a consensus that the green area per capita should be around 10 m²/person; this value contributes to high living standards. However, this value is far from applicable for large cities like Istanbul, which have high density and where the great majority of the green areas have already been consumed.
Within the framework of all these data and opinions, and as in the case of the Bağcılar Kemalpaşa region studied here, the appropriate green space coefficient and the appropriate green space boundary value have been determined below for areas dominated by high density and distorted construction, especially areas that are neither protected nor historical.
Appropriate green area per capita (m²/person): the idealized green area per person is set close to the limits of the regulation applicable in Turkey, although choosing a higher value benefits the living conditions in the project area. The green area is calculated in m² per person, but it can also be expressed as a percentage of the urban area (in Table 7, the green area is represented as a percentage of the urban area). In this context, taking into consideration the international values and the existing conditions in Turkey, the appropriate green area per capita is set at a minimum of 8.00 m²/person. Appropriate green area boundary value (m²): the value defined as the green area coefficient multiplied by the population of the region.
Also in this study, given the large role of the green area in urban planning, two separate threshold values are defined in addition to the limit value for the green area coefficient above. According to the 1st subsidiary boundary value, a green area lower than the existing amount of green space should not be selected in the new planning study; although this situation is not very common in Turkey, the recommendation is made considering that special circumstances might arise. According to the 2nd subsidiary boundary value, it is recommended not to go below the percentages of the world and European cities in Table 7. In this case, for the green area corresponding to the second subsidiary boundary value, it is considered more appropriate to designate an area no smaller than 23.83%, the average green area ratio of Europe's largest cities.
In the project area where the model is applied, the green area coefficient is chosen as 10.00 m²/person, and the population enters the calculations at its current value for the region. According to Equation 2 (appropriate green area = green area coefficient × population), the 14,000.00 m² obtained by selecting 10.00 m² of green area per person exceeds the appropriate green area boundary value of 11,442.47 m² implied by the stated limit value of 8.00 m²/person.
1st subsidiary boundary value: appropriate green area (m²) ≥ current green area, i.e., 14,000.00 m² ≥ 0.00 m².

2nd subsidiary boundary value: appropriate green area ratio = 14,000.00 / 46,000.00 = 30.43%, and 30.43% ≥ 23.83%, the average green area ratio of Europe's largest cities.

Also, according to the research reports of Bağcılar Municipality and the current implementation, the area allocated to green space is 7,857.66 m², and the green area per capita is calculated as 2.62 m² in a possible transformation study covering 4.6 hectares with a population of 1,400 people [22].
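The arithmetic above is simple enough to script. The sketch below recomputes Equation 2 and both subsidiary boundary checks; note that 8.00 m²/person times the stated population of 1,400 gives 11,200 m², slightly below the 11,442.47 m² boundary value reported in the text, which suggests the paper used a marginally different population input, so the function takes the boundary value as an explicit argument.

```python
def green_area_checks(project_area_m2, population, coeff_per_capita,
                      current_green_m2, boundary_m2, europe_avg_ratio=0.2383):
    # Equation 2: appropriate green area = coefficient x population.
    appropriate = coeff_per_capita * population      # 10.00 * 1,400 = 14,000 m^2
    ok_limit = appropriate >= boundary_m2            # vs. 11,442.47 m^2
    # 1st subsidiary boundary value: never plan less green area than exists.
    ok_1st = appropriate >= current_green_m2
    # 2nd subsidiary boundary value: ratio >= European big-city average.
    ratio = appropriate / project_area_m2            # 14,000 / 46,000 = 30.43%
    ok_2nd = ratio >= europe_avg_ratio
    return appropriate, round(100 * ratio, 2), ok_limit and ok_1st and ok_2nd

# Values for the Bagcilar Kemalpasa project area as stated in the text.
print(green_area_checks(46_000.0, 1_400, 10.0, 0.0, 11_442.47))
# -> (14000.0, 30.43, True)
```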
Results and Discussions
In this study, six parameters specific to the green areas of the current and draft situations in the project area are emphasized, and an evaluation and comparison are made through calculations and analyses. A project area of 4.6 hectares in the Kemalpasa quarter of the Bagcilar district, declared a risky area within the framework of Law No. 6306 and housing 1,400 inhabitants, is investigated. The green area is obtained from the research reports, and the green area per capita is determined.
In Table 11, a comparative table brings together the current situation of the project area, the draft model, and the values obtained under the current plan principles of Bağcılar Municipality, where the risky area is located. The green area parameter, which belongs to the social infrastructure data and is essential to the mathematical model, is analyzed, and the results for the project area in need of urban transformation are compared with the boundary values determined in the draft model. According to the table, the green area, which does not exist at all in the current situation, covers 14,000.00 m² in the appropriate situation (draft model); that is, 31.00% of the whole project area is allocated to green area, and the green area per capita is optimized at 10.00 m²/person. In the Municipality's development plans, the green area is 7,857.66 m², corresponding to 17.03% of the project region and 2.62 m²/person. The per capita green space of 0.00 m²/person in the current situation and of 2.62 m²/person under the current planning codes of the Bagcilar Municipal Development Plan Implementation are both far below the boundary value of 8.00 m²/person.
Conclusion
As a result, in the light of the green area parameter calculated and analyzed within the scope of the urban transformation model, the green area, which does not exist in the current situation, is calculated as 14,000.00 m², taken as 10.00 m²/person per capita. One of the most striking findings is that the Development and Implementation Plan allocates a very low level of green area, 2.62 m²/person, even in a newly planned area. Through these analyses and evaluations, the amount of green space, which is inadequate in the present case and is the most important measure of the social infrastructure concept, has been increased to an appropriate and adequate level.
Figure 2. Project area satellite photo
Figure 3. Green area data interaction diagram | 3,959.2 | 2017-12-10T00:00:00.000 | [
"Economics"
] |
Consistent interactions of Curtright fields
Consistent self-interactions of Curtright fields (Lorentz tensors with (2,1) Young diagram index symmetry) are constructed in dimensions 5, 7 and 9. Most of them modify the gauge transformations of the free theory but the commutator algebra of the deformed gauge transformations remains Abelian in all cases. All of these interactions contain terms cubic in the Curtright fields with four or five derivatives, which are reminiscent of Yang-Mills, Chapline-Manton, Freedman-Townsend and Chern-Simons interactions, respectively.
Introduction
This work concerns consistent interactions of Curtright fields [1]. Curtright fields are Lorentz tensors T^a_{µνρ} whose Lorentz indices µ, ν, ρ have the permutation symmetries T^a_{µνρ} = −T^a_{νµρ}, T^a_{[µνρ]} = 0 (1.1). The additional index a is no Lorentz index but only enumerates the Curtright fields, i.e. we also examine models with more than one Curtright field. The Lagrangian that we use for free (non-interacting) Curtright fields is L^{(0)} given in equation (1.2), wherein F^a_{µνρσ} = ∂_µ T^a_{νρσ} + ∂_ν T^a_{ρµσ} + ∂_ρ T^a_{µνσ} and F^a_{µν} = F^a_{µνρ}{}^ρ (1.3), and Lorentz indices are lowered and raised with a flat metric η_{µν} and its inverse η^{µν}. Curtright fields are particularly interesting in D = 5 dimensions because there a Curtright field is the elementary field (counterpart of the metric field) in a dual formulation of linearized general relativity [2,3].
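For readability, here is a typeset restatement of the symmetry conditions and field strength definitions recovered from the text above; the body of the free Lagrangian (1.2) itself did not survive extraction and is deliberately not reconstructed here.

```latex
\begin{align}
  T^{a}_{\mu\nu\varrho} &= -\,T^{a}_{\nu\mu\varrho},
  & T^{a}_{[\mu\nu\varrho]} &= 0, \tag{1.1}\\
  F^{a}_{\mu\nu\varrho\sigma} &= \partial_{\mu} T^{a}_{\nu\varrho\sigma}
    + \partial_{\nu} T^{a}_{\varrho\mu\sigma}
    + \partial_{\varrho} T^{a}_{\mu\nu\sigma},
  & F^{a}_{\mu\nu} &= F^{a}_{\mu\nu\varrho}{}^{\varrho}. \tag{1.3}
\end{align}
```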
Equation (1.4) implies the descent equations sω_{1,D−1} + dω_{2,D−2} = 0, sω_{2,D−2} + dω_{3,D−3} = 0, etc., with increasing ghost number and decreasing form-degree, which can be written compactly as (s + d)Ω_D = 0 (1.5) (see section 9 of [7] and section 3 of [8] for reviews), wherein Ω_D is a "total form" with "total degree" D, and m is some form-degree at which the descent equations terminate (the value of m varies from case to case).
BRST differential
In our case the master action corresponding to the Lagrangian (1.2) can be taken as in equation (2.1), wherein S^a_{µν} and A^a_{µν} denote ghost fields, C^a_µ denote ghost-for-ghost fields, and T^{⋆µνρ}_a, S^{⋆µν}_a, A^{⋆µν}_a denote the antifields for T^a_{µνρ}, S^a_{µν} and A^a_{µν} respectively (the antifields for C^a_µ are denoted C^{⋆µ}_a). The ghost fields and antifields have the corresponding index symmetries. The fields, antifields, spacetime coordinates x^µ and differentials dx^µ have the ghost numbers (gh), antifield numbers (af), Graßmann parities (| |) and BRST transformations (s) listed in table (2.2). E^a_{µνρ} and E^a_µ are traces of a gauge invariant tensor E^a_{µνρστ}; these tensors fulfill certain identities. For later purpose we also introduce the totally tracefree part W^a_{µνρστ} of E^a_{µνρστ} in dimensions D > 3, defined in equation (2.7). We remark that F_{µνρσ}, E^a_{µνρστ}, W^a_{µνρστ}, E^a_{µνρ} and E^a_µ are the counterparts of the linearized Levi-Civita-Christoffel connection, Riemann-Christoffel tensor, Weyl tensor, Ricci tensor and curvature scalar of general relativity, respectively. E^a_{µνρ} and E^a_µ vanish on-shell in the free theory, and E^a_{µνρστ} equals W^a_{µνρστ} on-shell in the free theory, wherein T^{⋆µ}_a = T^{⋆µν}_{a ν} (2.11) and ≈ denotes equality on-shell in the free theory (sT^{⋆µνρ}_a is the Euler-Lagrange derivative of L^{(0)} with respect to T^a_{µνρ}, i.e. the BRST-transformations sT^{⋆µνρ}_a are the "left hand sides" of the equations of motion of the free theory).
The forms defined in equations (3.1) fulfill the relations (3.2) and (3.3), and likewise for the total (D − 2)-forms Ω^{aµνρ}_{D−2}: the forms defined in equations (3.4) and (3.5) fulfill analogous relations. Comments: (i) The total (D − 3)-forms Ω^{aµ}_{D−3} defined in equations (3.4) derive from simpler total (D − 3)-forms Λ^{aµ}_{D−3}, with ω^{aµ}_{−3,D} and ω^{aµ}_{−2,D−1} as in equations (3.4). Using table (2.2) it can be readily checked that the total forms Λ^{aµ}_{D−3} are (s + d)-cocycles. Furthermore it can readily be shown that Λ^{aµ}_{D−3} is no (s + d)-coboundary. Beyond a gauge invariant improvement of λ^{aµ}_{0,D−3}, we proceed one step further and remove also all terms from the exterior (D − 3)-form (3.13) which vanish on-shell in the free theory. Using equations (2.8)-(2.10) one finds that these terms are the BRST-transformation of an exterior (D − 3)-form η^{aµ}_{−1,D−3} (3.14). We arrive at the improved total form (3.4), which is thus a direct consequence of (3.9). As Λ^{aµ}_{D−3} is nontrivial in the cohomology of (s + d), Ω^{aµ}_{D−3} is also nontrivial in that cohomology. Notice also that the exterior (D − 2)-form ω^{aµ} could be improved further (removing terms involving ∂_σ T^{⋆σ}_b) by subtracting a total form (s + d)η^{aµ}_{−2,D−2} from Λ^{aµ}_{D−3}, and afterwards also the s-trivial terms in the resultant redefined exterior (D − 1)-form and exterior D-form; this however appears to be merely of academic interest and therefore is not done here (the exterior p-forms with p > D − 2 in Ω^{aµ}_{D−3} anyway do not contribute to the deformations constructed below).
We remark that it is impossible to improve Λ aµ D−3 to an (s + d)-cocycle with a gauge invariant and x-independent exterior (D − 3)-form. Indeed, such an improvement would require the existence of x-independent exterior forms η aµ 0,D−4 and η aµ −1,D−3 that fulfill (3.10) but it can easily be shown that such forms do not exist. The improvement of Λ aµ D−3 thus necessarily depends explicitly on the coordinates x. Furthermore the improvement is crucial for the construction of consistent deformations involving Ω aµ D−3 , as will become clear below.
and Ω aµν̺ D−2 in (3.5) can also be improved so as not to contain terms that vanish on-shell in the free theory. Using equation (2.10) one can write the terms of Ω aµν̺ 2 that vanish on-shell in the free theory as sη aµν̺ −1,2 with an exterior 2-form η aµν̺ −1,2 and redefine Ω aµν̺ Furthermore one can write the terms of ω aµν̺ 0,D−2 that vanish on-shell in the free theory as sη aµν̺ −1,D−2 with an exterior (D − 2)-form η aµν̺ −1,D−2 and redefine Ω aµν̺ and Ω aµν̺ D−2 can be constructed likewise (and equivalently) with the redefined total forms.
Comments: With this multi-index notation the total forms Ω^{YM}_D in equations (4.1)-(4.3) can all be written in a common form, and one obtains, using (s + d)Ω (which holds owing to (3.3)), the desired cocycle property, where we used that f_{abc} = f_{(abc)} for k = 2m and f_{abc} = f_{[abc]} for k = 2m + 1. We remark that (4.13) actually vanishes for k > 4 because in dimensions D = 2k + 1 > 9 there is no way to contract the nine free Lorentz indices of Ω^{aµ₁µ₂µ₃}_{2k−1} Ω^{bµ₄µ₅µ₆}_1 Ω^{cµ₇µ₈µ₉}_1 in a Lorentz invariant way. For the same reason there is no Ω^{YM}_D in even dimensions D. (iii) Using the same multi-index notation as above, one can construct further Chern-Simons type solutions of (1.5) in odd dimensions; the Chern-Simons type solution (4.9) can be written in this form.
Consistent deformations in first order formulation
To explore whether or not the consistent first order deformations derived in the previous section exist to all orders we employ the first order formulation [9] of the free theory. The classical fields of that formulation are denoted ϕ^a_{µνρ} and B^a_{µνρσ}, whose Lorentz indices have the corresponding permutation symmetries, and we take the Lagrangian of the first order formulation accordingly. The B-fields are auxiliary fields which can be eliminated using the algebraic solution of their equations of motion. Elimination of the B-fields reproduces the Lagrangian (1.2) (up to a total divergence ∂_µ R^µ) with the appropriate definitions. The ghost fields of the first order formulation of the free theory are denoted D^a_{µν} and Ĥ^a_{µνρ} = Ĥ^a_{[µνρ]}, the ghost-for-ghost fields again C^a_µ, and the antifields again with a ⋆ and indices corresponding to the indices of the respective field. These fields and antifields have the ghost numbers, antifield numbers, Graßmann parities and BRST transformations corresponding to the master action Ŝ^{(0)}. We also note that D^a_{µν} = S^a_{µν} + 3A^a_{µν}, i.e. S^a_{µν} = D^a_{(µν)} and A^a_{µν} = (1/3) D^a_{[µν]}. We now introduce total 1-forms and 2-forms analogously to (3.1), and total (D − 3)-forms analogously to (3.4), wherein Ŵ^a_{νρσµτ} is defined analogously to W^a_{νρσµτ} in (2.7), with Ê in place of E. The total forms (5.7) and (5.8) involve H^{aµνρ} and Ω^{aµνρ}_1 as in (3.1). Hence, in the first order formulation of the free theory the total 1-form Ω̂′^{aµνρ} plays the corresponding role. This implies indeed that Ω^CM_5, Ω^CM_7, Ω^FT_5 and Ω^CS_5 are equivalent in H(s + d) to Ω̂^CM_5, Ω̂^CM_7, Ω̂^FT_5 and Ω̂^CS_5, respectively, and that the deformations of the free theory which arise from these (s + d)-cocycles are equivalent as well, respectively. Now, the first order deformations Ŝ^{(1)} which arise from the solutions Ω̂^CM_5, Ω̂^CM_7, Ω̂^FT_5 and Ω̂^CS_5 of (1.5) fulfill (Ŝ^{(1)}, Ŝ^{(1)}) = 0 simply because the exterior D-forms present in these solutions do not depend on the fields ϕ, and the only antifields on which these exterior D-forms depend are the antifields ϕ^⋆ of ϕ (of course, Ω^CS_5 and Ω̂^CS_5 do not depend on antifields at all and therefore it is actually not necessary to substitute Ω̂^CS_5 for Ω^CS_5 in order to get (S^{(1)}, S^{(1)}) = 0 for this deformation by itself; however this changes when one considers linear combinations of Ω^CM_5, Ω^FT_5 and Ω^CS_5). Hence, these first order deformations Ŝ^{(1)} in fact already provide a complete deformation S = Ŝ^{(0)} + g Ŝ^{(1)} of the master action Ŝ^{(0)} of the first order formulation of the free theory. This implies that the first order deformations arising from the solutions (4.5), (4.6), (4.8) and (4.9) of (1.5) indeed exist to all orders, and the complete deformations in the second order formulation of the free theory with Lagrangian (1.2) can be obtained from Ŝ by eliminating the auxiliary fields B (e.g., perturbatively). It should also be noticed that this reasoning does not only apply to the Chapline-Manton, Freedman-Townsend and Chern-Simons type solutions in D = 5 individually but also to any linear combination thereof.
The author has not yet found an analogous line of reasoning for the Yang-Mills type deformations. The reason is that it does not appear straightforward to find B-dependent total forms Ω̂ analogous to (5.7) and (5.8) for the Yang-Mills type deformations which would allow a reasoning similar to comment (ii) in section 4.
Conclusion
The first order deformations L^{(1)} of the Lagrangian (1.2) that arise from the solutions of (1.5) given in section 4 in the respective dimensions D = 5, 7, 9 are obtained from the antifield independent parts L^{(1)} d^D x of the exterior D-forms of these solutions. Here we assumed that the flat metric has signature (−, +, +, +, +); other conventions can result in a minus sign in (6.7) and a plus sign in L^N in (6.8). We remark that all results presented in this work are actually valid also for non-Minkowskian metrics, with possibly reversed signs in (6.7) and in L^N. Notice that the first order deformations (6.1), (6.3) and (6.4) exist for any number of Curtright fields (and in particular for only one Curtright field), whereas the first order deformations (6.5) and (6.6) require at least two Curtright fields, and the first order deformations (6.2) and (6.7) require at least three Curtright fields because of equations (4.4), (4.7) and (4.10). Furthermore notice that all the above first order deformations are Lorentz invariant, in spite of the explicit x-dependence of the deformations (6.4), (6.5) and (6.6). Notice also that all the above first order deformations are cubic in the Curtright fields, and that the deformations (6.1)-(6.3) contain four derivatives of the Curtright fields (terms ∂²T ∂T ∂T) whereas the deformations (6.4)-(6.7) contain five derivatives of the Curtright fields (terms ∂²T ∂²T ∂T), respectively. Furthermore, all deformations (6.1)-(6.6) are accompanied by deformations of the gauge transformations of the free theory. The first order deformations of these gauge transformations are obtained from the corresponding solutions of (1.5) given in section 4, more precisely from the terms with antifield number 1 in the exterior D-forms of these solutions. We leave it to the interested reader to write out these deformations of the gauge transformations explicitly. The commutator algebra of the first order deformed gauge transformations remains Abelian in all cases, however. This corresponds to the fact that the exterior D-forms of the solutions of (1.5) given in section 4 do not contain terms with antifield number exceeding 1.
The deformations derived here are thus compatible with the results of [10,11] where it was shown that Poincaré invariant first order consistent deformations of the free theory that modify nontrivially the gauge transformations leave the commutator algebra of the deformed gauge transformations Abelian on-shell, and that there are actually no nontrivial consistent deformations of this type containing at most three derivatives of the Curtright fields. In fact it can easily be shown that x-independent and Lorentz invariant consistent deformations that do not deform nontrivially the gauge transformations of the free theory and contain at most four derivatives do not exist either. Indeed, according to the results of [10,11] such deformations can be taken to be quadratic in the tensors E aµν̺στ but all such quadratic terms actually vanish on-shell up to a total divergence because of (2.5)-(2.9). Therefore it seems that the above deformations might actually provide the simplest possible Lorentz invariant nontrivial deformations of the free theory in dimensions D = 5, 7, 9 at first order.
As shown in section 5 the above first order deformations (6.4)-(6.7) can in fact be extended to all orders, most readily using the first order formulation of the theory. Furthermore in D = 5 any linear combination of the deformations (6.4), (6.6) and (6.7) can be extended to all orders. Whether or not the first order deformations (6.1)-(6.3) can be extended to higher orders is left open here.
We also remark that in all above first order deformations the tensors E^{aµνρστ} can be replaced by the traceless tensors W^{aµνρστ} (2.7) and vice versa because of E^{aµνρστ} ≈ W^{aµνρστ}, see also remark (iii) in section 3 (such replacements provide equivalent deformations and modify the deformed gauge transformations).
The author admits that he has no complete proof yet that the above deformations are really nontrivial. Therefore some (or all) of these deformations may actually turn out to be trivial. The proof of nontriviality is hampered by the possible explicit x-dependence of the terms (forms) that may make the deformations trivial. The author plans to investigate this issue, and whether or not the first order deformations (6.1)-(6.3) can be extended to higher orders in a future work (unless someone else does the job). However, the similarity of (6.1)-(6.7) to Yang-Mills [12], Chapline-Manton [13], Freedman-Townsend [14] and Chern-Simons [15] interactions, respectively, in combination with some BRST-cohomological considerations, suggests the nontriviality of the deformations.
Let me therefore briefly comment on similarities (and differences) of the deformations (6.1)-(6.7) to Yang-Mills, Chapline-Manton, Freedman-Townsend and Chern-Simons interactions. To that end standard p-form gauge potentials are denoted A^a_p = (1/p!) A^a_{µ₁...µ_p} dx^{µ₁} ··· dx^{µ_p}, the corresponding field strength (p + 1)-forms F^a_{p+1} = dA^a_p, and the Hodge duals of the field strength forms F̃^a_{D−p−1}. Yang-Mills interactions in D dimensions are F̃^a_{D−2} A^b_1 A^c_1 f_{abc}. This is analogous to (4.1)-(4.3) with Ω^{a···}_1 of (3.1) corresponding to A^a_1, and Ω^{a···}_{D−2} of (3.5) corresponding to F̃^a_{D−2}. I stress that the terminology "Yang-Mills type interactions" used in the present work only relates to this structure of the interactions and not to the commutator algebra of the deformed gauge transformations (i.e. it is not related to the question whether or not this algebra is Abelian).
Cubic Chapline-Manton interactions in D dimensions with two 1-form gauge potentials A^a_1 are F̃^a_{D−3} F^b_2 A^c_1 e_{abc}. This is analogous to (4.5) and (4.6) with Ω^{a···}_1 of (3.1) corresponding to A^a_1, Ω^{a···}_2 of (3.1) corresponding to F^a_2, and Ω^{a·}_{D−3} of (3.4) corresponding to F̃^a_{D−3}. Cubic Freedman-Townsend interactions in 5 dimensions are F̃^a_1 F̃^b_1 A^c_3 d_{abc}. This is analogous to (4.8) with Ω^{a·}_2 of (3.4) corresponding to F̃^a_1, and Ω^{a···}_1 of (3.1) corresponding to A^a_3. The correspondence here does not match the form-degrees and total degrees but concerns the structure F̃ F̃ A.
Cubic Chern-Simons interactions in 5 dimensions are F^a_2 F^b_2 A^c_1 c_{abc}. This is analogous to (4.9) with Ω^{a···}_1 of (3.1) corresponding to A^a_1, and Ω^{a···}_2 of (3.1) corresponding to F^a_2. The difference of the deformations (6.1)-(6.7) as compared to standard Yang-Mills, Chapline-Manton, Freedman-Townsend and Chern-Simons interactions results on the one hand from the additional Lorentz indices of the Ω's as compared to standard p-form gauge potentials A_p and, on the other hand, from the fact that the action ∫ L^{(0)} d^D x does not correspond to the standard Maxwell type action for free p-form gauge potentials A_p containing terms F_{p+1} F̃_{D−p−1}.
As far as the author knows the self-interactions of Curtright fields obtained in this paper have not been disclosed anywhere else in the literature so far. Nevertheless, self-interactions of "mixed symmetry gauge fields" similar to the Chapline-Manton type interactions (6.4) and (6.5) have been found in [11]. They are disclosed under item (iv) in section 8.1 of the arXiv-version of [11]. The self-interactions disclosed there also depend explicitly on the coordinates x and have a structure analogous to the Chapline-Manton type interactions (4.5) and (4.6). In the particular case (p, q) = (2, 1) (corresponding to a Curtright field) and s = 1 (using the notation of [11]) the interactions given there will very likely in D = 5 provide a self-interaction of a Curtright field equivalent to the Chapline-Manton type interaction (6.4) (for one Curtright field) when the Lorentz structure of the fields is taken into account. Let me finally remark that it is quite straightforward to construct interactions of Curtright fields with other fields in appropriate dimensions similar to the above self-interactions using the approach of the present paper. For instance, similarly to (6.7), in D = 5 one can construct

L^N = −A_µ j^µ, j^µ = ǫ^{µν₁ν₂ρ₁ρ₂} W̃^a_{ν₁ν₂}{}^σ W̃^b_{ρ₁ρ₂σ} g_{ab}, W̃^a_{ν₁ν₂σ} = ǫ_{ν₁...ν₅} W^{aν₃ν₄ν₅}{}_{σρ} x^ρ, (6.8)

wherein g_{ab} = g_{ba} are constant symmetric coefficients and L^N is a Noether coupling of the gauge field A_µ and an ("improved") Noether current j^µ of the free theory (∂_µ j^µ ≈ 0). Analogously one constructs in D = 5 Chern-Simons type interactions of Curtright fields and a standard Abelian 1-form gauge potential from the solution Ω^{aµνρ}_2 Ω^b_{2µνρ} Ω_1 k_{ab} of (1.5), wherein k_{ab} = k_{ba} are constant symmetric coefficients and Ω^{aµνρ}_2 are the 2-forms of (3.1). Cubic interactions ∂T ∂T ∂²h of a Curtright field T with a symmetric 2-tensor field h_{µν} = h_{νµ} representing the metric field of linearized general relativity were obtained in section 5 of [16] (see equation (5.14) there). These interactions are reminiscent of the Yang-Mills type self-interactions (6.1)-(6.3) and may be constructible analogously to (4.1)-(4.3) using a total curvature (D − 2)-form for the h-field in place of Ω^{aµ₁µ₂µ₃}_{D−2}. This indicates that the approach used here may also be useful for the construction of consistent interactions of other "mixed symmetry" or higher spin fields. | 4,915.4 | 2020-03-11T00:00:00.000 | [
"Mathematics"
] |
Immunotherapy for Recurrent Glioma—From Bench to Bedside
Simple Summary

Glioma is the most common and aggressive brain tumor worldwide, and most patients suffer a recurrence. Additionally, recurrent glioma is often resistant to chemotherapies and radiotherapy; hence, immunotherapy has come into focus. The most widely used immunotherapy is immune checkpoint blockade (ICB), which has shown encouraging efficacy when combined with other immune strategies, especially with antiangiogenic antibodies. Other promising immune regimens include multiple immunotherapies that function through different mechanisms, such as oncolytic viruses, chimeric antigen receptor T cell therapies, and vaccination strategies. In this review, we discuss current immune therapies applied to recurrent glioma, based on the literature on preclinical animal models and ongoing clinical trials published in the last 5 years. These immunotherapies have proved safe and well tolerated, and some combination regimens have achieved satisfactory efficacy in subgroups of patients with specific gene mutation backgrounds. Though great progress has been made, further exploration of different combination strategies is needed.

Abstract

Glioma is the most aggressive malignant tumor of the central nervous system, and most patients suffer a recurrence. Unfortunately, recurrent glioma often becomes resistant to established chemotherapy and radiotherapy treatments. Immunotherapy, a rapidly developing anti-tumor therapy, has shown potential value in treating recurrent glioma. Multiple immune strategies have been explored. The most-used ones are immune checkpoint blockade (ICB) antibodies, which are barely effective as monotherapy. However, when combined with other immunotherapies, especially with anti-angiogenesis antibodies, ICB has shown encouraging efficacy and an enhanced anti-tumor immune response. Oncolytic viruses and CAR-T therapies have shown promising results in recurrent glioma through multiple mechanisms. Vaccination strategies and immune-cell-based immunotherapies are promising in some subgroups of patients, and multiple new tumor antigenic targets have been discovered. In this review, we discuss currently applicable immunotherapies and related mechanisms for recurrent glioma, focusing on multiple preclinical models and clinical trials from the last 5 years. By reviewing current combinations of immune strategies, we aim to provide substantive ideas for further novel therapeutic regimens for treating recurrent glioma.
Introduction
Glioma, with an incidence of 6 per 100,000 population worldwide, is the most common and aggressive primary tumor of the central nervous system [1]. According to the WHO 2021 classification of central nervous system tumors (WHO CNS5), gliomas can be divided into four grades according to clinical features, histological diagnosis and molecular biomarkers (including gene mutation) [2]. Grade 3 and grade 4 gliomas are defined as "high-grade" gliomas, which have a 2-year survival rate of less than 20% [3]. The leading cause of death in high-grade gliomas is tumor recurrence. More than 90% of grade 4 glioma patients experience a recurrent tumor in situ, even with the standard of care (SOC) [4].
The current SOC for initial glioma is maximal safe resection (for tumor volume reduction, accurate pathological diagnosis, and gene mutation detection), followed by radiotherapy and daily temozolomide (TMZ); low-intensity tumor-treating fields (TTF) can additionally be applied [5]. However, there is no SOC for recurrent or therapy-resistant glioma, and the options are less well defined. One obstacle for new drug development and delivery strategies is the blood-brain barrier (BBB), which prevents most antitumor drugs from entering the brain. The other is the complicated tumor immune microenvironment (TME), which is the main reason for glioma immune escape and recurrence.
Multiple studies have shown that glioma has an immunosuppressive nature and that crosstalk between tumor cells and the TME can lead to resistance and recurrence. On the one hand, glioma cells express higher levels of programmed cell death 1 ligand (PD-L1) and indoleamine 2,3-dioxygenase (IDO), which limit the presentation of antigens [6]. On the other hand, glioma has an immunosuppressive TME, leading to less effective tumor killing. Tumor-infiltrating lymphocytes and macrophages constitute the major infiltrating immune cells in the TME. In the glioma microenvironment, the detection frequency of M2 macrophages is related to rapid tumor recurrence after radiotherapy [7]. At tumor sites, the proportions of exhausted CD4+ and CD8+ T cells (defined as PD1+, TIM3+, LAG3+ T cells) are higher than those detected in matched peripheral blood mononuclear cells [8]. Thus, a comprehensive understanding of glioma-suppressive immunity and the tumor microenvironment can inform better immunotherapy strategies.
Since the discovery that negative immune checkpoint regulators can be inhibited, immune checkpoint blockade (ICB) has held a leading position in cancer immunotherapy. However, the therapeutic effect of single-agent ICB for recurrent glioma remains controversial [9]. As mechanistic research probes deeper into the nature of glioma, many possible therapeutic targets have been discovered, such as vascular endothelial growth factor (VEGF) and the IL-13 receptor α2 (IL-13Rα2). In addition, novel therapeutics such as oncolytic viruses (OVs) and chimeric antigen receptor T (CAR-T) cell therapies have opened new possibilities for recurrent glioma.
In this review, we concentrate on articles describing current immunotherapies for recurrent glioma. While focusing on the literature on preclinical animal models and clinical trials from the last 5 years, we extended our search to some earlier articles on therapeutic mechanisms where they help make the context easier to understand. The aim of this review is to elucidate the different immune strategies for recurrent glioma and to discuss possibilities for further studies of novel therapeutic regimens.
Recurrent Glioma Features
Recurrent glioma usually refers to grade 3 and grade 4 gliomas, comprising astroglioma and glioblastoma (GBM). According to the fifth edition of the WHO classification of tumors of the central nervous system [2] (published in 2021), astroglioma is IDH-mutant while GBM is IDH-wildtype. In this review, we focus on recurrent IDH-mutant grade 3/4 astroglioma and IDH-wildtype GBM. Astroglioma is IDH-1/-2-mutated and often harbors TP53/ATRX mutations without 1p/19q codeletion. A recently published article [10] reported that, in recurrent astroglioma, TMZ treatment can induce hypermutation, leading to higher levels of proliferating stem-like neoplastic cells, deletion of the cell-cycle regulator CDKN2A and amplification of CCND2. The immune cell composition also changes at recurrence: compared with primary astroglioma, recurrent astroglioma shows a clearly decreased brain-resident microglia signature and increased acquisition of HLA loss of heterozygosity. In recurrent GBM, the tumor-neuron interaction produces a higher leading-edge content, a proneural subtype that retains neural tissue characteristics [10]. The myeloid compartment in recurrent GBM shows a strong blood-derived macrophage signature, an immunosuppressive phenotype expressing PDCDLG1 and IDO1. These features suggest that, through the initial treatment with TMZ and radiotherapy, glioma undergoes changes in cell state associated with genetic and microenvironmental changes, necessitating other kinds of treatment.
Immune Checkpoint Blockade
Immune checkpoint blockade (ICB) has been a turning point in anti-tumor treatment across many cancers. So far, more than ten immune checkpoints have been identified and shown to be promising therapeutic targets, such as lymphocyte activation gene-3 (LAG-3), indoleamine 2,3-dioxygenase 1 (IDO1) and T cell immunoglobulin and mucin-domain containing-3 (TIM-3). However, only antibodies targeting cytotoxic T lymphocyte antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1)/programmed cell death-ligand 1 (PD-L1) have been approved by the FDA and are widely used (Table 1).
CTLA-4/B7 Axis
CTLA-4, expressed by T cells, is a homolog of CD28. CTLA-4 competes with CD28 for binding to CD80 (B7-1) and CD86 (B7-2); unlike CD28-B7 engagement, which drives T cell proliferation, survival and differentiation, CTLA-4 binding blocks this costimulatory signal [11,12] (Figure 1). Although the anti-CTLA-4 antibody ipilimumab has been tested and found effective in several cancers, preclinical glioma models have given unsatisfactory results for anti-CTLA-4 monotherapy [11,13]. Single-agent anti-CTLA-4 therapy neither improved symptom-free survival in the GL261 glioma mouse model nor enhanced the costimulatory capacity of antigen-presenting cells (APCs) [13]. However, multiple preclinical studies combining the anti-CTLA-4 antibody with other therapeutics have had encouraging results. The same study using the GL261 glioma mouse model showed complete tumor regression with a sequential regimen of an anti-CTLA-4 antibody and a whole-tumor-cell vaccine [13]. Other studies combining anti-PD1 with anti-CTLA-4 in GL261 glioma mouse models showed decreased tumor growth and improved symptom-free survival [14,15]. Thus, clinical trials of the FDA-approved anti-CTLA-4 antibody ipilimumab focus on combination strategies. A recently registered phase 2 randomized clinical trial (ISRCTN84434175) aims to compare the efficacy of a combination of ipilimumab and the chemotherapeutic agent TMZ with TMZ monotherapy [16].
Clinical trials combining anti-CTLA-4 with anti-PD1/anti-PD-L1 will be reviewed in Section 3.2.
PD-1/PD-L1 Axis
In preclinical glioma mouse models, strategies combining an anti-PD-1 antibody with other therapeutics have proved effective. In the immunogenic GL26 tumor model, anti-PD-1 monotherapy prolonged median survival from 24 days (vehicle control) to 28.5 days; when combined with TMZ and an anti-Na+/H+ exchanger isoform 1 (anti-NHE1) agent, median survival extended to about 41 days [23]. Other studies combining anti-PD-1 with TMZ obtained similar survival outcomes with increased CD8+ T cell/Treg ratios [24,25]. The effects of anti-PD-L1 combination regimens are also encouraging. One study showed that 60% of mutant isocitrate dehydrogenase 1 (mIDH1)-glioma-bearing mice achieved complete tumor regression, with a reduction in exhausted T cells and generation of memory CD8+ T cells, after receiving a combination of anti-PD-L1 antibody, 2-hydroxyglutarate (D-2HG) inhibition, irradiation and TMZ [26]. In a TMZ-refractory glioma mouse model, the combination of an anti-PD-L1 antibody and a p38 MAPK inhibitor significantly improved the 60-day survival rate from 0 (vehicle control) to 60%; flow cytometry of dissociated post-treatment tumors showed a decrease in F4/80+/CD11b+ macrophages/microglia [27].
Figure 1. Anti-VEGF antibody prevents the binding of VEGF-A to the VEGF receptor tyrosine kinase (VEGFR) and thus neovascularization. About half of recurrent-glioma patients have amplification of epidermal growth factor receptor (EGFR); anti-EGFR antibody prevents the binding of EGF to EGFR, thereby preventing tumor cell proliferation. IL-12 is an anti-tumor cytokine that stimulates shifting of CD4+ T cells to the Th1 phenotype and increases IFN-γ secretion. Oncolytic viruses (OVs) lyse glioma cells directly, and the released tumor cell lysates can be recognized by antigen-presenting cells (APCs) to induce an anti-tumor immune response. Dendritic cells (DCs) are efficacious APCs that can activate cytotoxic T lymphocytes (CTLs) to kill glioma cells; DCs co-cultured with tumor cell lysates can act as a DC vaccine. Tumor cell lysates or glioma stem cells themselves can also be used as vaccines to stimulate an anti-tumor immune response. Autologous CD4+ T helper cells expressing endogenous cancer/testis (CT) antigens can serve as vaccines to activate CTLs and NK cells. NK cells are innate immune cells that can be used directly to lyse tumor cells. Peptides mimicking several proteins can also be utilized as vaccines to induce specific anti-tumor immune responses. Tumor-associated antigens (TAAs) such as IL13Rα2 and EGFRvIII can be recognized by CTLs through promotion of MHC-I expression; these TAAs are being exploited as targets of genetically modified CAR-T cells.
Nivolumab and pembrolizumab are both FDA-approved anti-PD1 antibodies commonly used for advanced-stage melanoma. In recurrent glioma, the safety of both antibodies has been verified in several retrospective and phase 1 studies, while the efficacy of monotherapy remains controversial [28-35]. In addition to patient selection, multiple combination regimens have been studied in clinical trials for recurrent glioma. One of the most-used strategies is the combination of nivolumab with ipilimumab, an anti-CTLA4 antibody. One study analyzed the differentiation status of CD8+ tumor-infiltrating lymphocytes (TILs) in primary glioma patients to identify the mechanisms underlying non-responsiveness to ICB treatment [36]. The results suggested that PD1+ CD8+ T cells exhibited a more terminally differentiated phenotype (Eomes-high/T-bet-low) that correlated with the response to anti-PD1 therapy. Under combined anti-CTLA4 and anti-PD1 therapy, patients with few Eomes-high/T-bet-low CD8+ TILs showed additional increases in CD8+ TIL proliferation. In a nonrandomized phase 1 cohort of the CheckMate 143 study (NCT02017717), nivolumab monotherapy proved better tolerated than its combination with high-dose (3 mg/kg) or low-dose (1 mg/kg) ipilimumab, while the therapeutic effects were comparable [37]. To address the low activity of ICB in recurrent glioma, a phase 1 study explored intracerebral administration of nivolumab and ipilimumab into the resection cavity after re-surgery. Median overall survival did not improve compared with intravenous administration, although intracerebral administration was safe and feasible [38]. The combination of an anti-PD1 antibody with the anti-vascular endothelial growth factor (VEGF) antibody bevacizumab (BEV) has also been evaluated in multiple clinical trials. In a randomized phase 3 arm of the CheckMate 143 study (NCT02017717), 369 patients with a first recurrence of glioma were randomly assigned to nivolumab or BEV monotherapy. While the 6-month progression-free survival (PFS-6) and overall survival (OS) rates were comparable between groups, the objective response rate was higher in BEV-treated patients (23.1% vs. 7.8%) [39]. Another phase 2 study compared the efficacy of pembrolizumab alone or in combination with BEV; although both regimens were well tolerated, they brought limited benefit to patients with recurrent glioma [34]. Other combinations, for example with axitinib (a tyrosine kinase inhibitor of VEGF receptors), cyclophosphamide, or hypofractionated stereotactic irradiation (HFSRT), have proved safe but of little benefit [40-42]. Clinical trials of anti-PD-L1 antibodies such as atezolizumab (NCT01375842) and durvalumab have demonstrated the safety, but also the ineffectiveness, of monotherapy and of combination strategies with BEV [9,43].
Although the above combination strategies in recurrent glioma were not satisfactory, post-treatment analyses revealed that therapeutic benefit correlated with indices such as PD-L1 expression level, baseline steroid use, peripheral lymphocyte counts and gene expression profile, suggesting directions for novel combination regimens and patient selection [9,35].
Anti-Angiogenesis Therapy
VEGF is a well-known inducer of angiogenesis and is upregulated in glioma; binding of VEGF-A to the VEGF receptor tyrosine kinase (VEGFR) activates the VEGF signaling pathway to promote neovascularization [44,45] (Figure 1). Since the first report, in 1993, of an anti-VEGF antibody decreasing tumor volume in multiple preclinical cancer models, the effects of the anti-VEGF antibody bevacizumab have been investigated in several clinical trials [44].
Bevacizumab (BEV) is the first humanized monoclonal antibody that prevents the interaction of VEGF-A and VEGFR by binding circulating VEGF-A [46]. After the encouraging results of the pivotal AVF3708g study in relapsed glioblastoma, BEV was approved for the treatment of recurrent and progressing glioma. Multiple clinical studies have since explored various regimens in recurrent glioma. In a prospective study of 29 recurrent-glioma patients receiving BEV alone (10 mg/kg, i.v., every 2 weeks until progression), a baseline neutrophil count below 3.9 G/L and a Treg count above 0.011 G/L were related to prolonged OS [47]. Adding BEV (10 mg/kg, i.v.) on days 1 and 15 of a 28-day cycle of TMZ (100 mg/m²) improved PFS-6 up to 52%, suggesting that BEV plus bi-weekly TMZ may be a viable regimen [48]. However, two other phase 2 studies (TAMIGA and TAVAREC) did not yield such promising results. In the TAMIGA trial (NCT01860638), 123 newly diagnosed glioma patients were randomized at first disease progression to lomustine (CCNU) plus BEV or plus placebo, but no survival benefit was observed (median survival 6.4 vs. 5.5 months, respectively) [49]. Likewise, the combination of TMZ and BEV did not improve overall survival compared with single-agent TMZ in the multicenter phase 2 TAVAREC trial (NCT01164189) [50]. Further analysis of 122 samples collected in the TAVAREC trial revealed the predictive value of homozygous CDKN2A/B deletions for survival benefit [51]. Beyond the disappointing trials mentioned above, clinical trials combining BEV with signaling-pathway inhibitors (e.g., Src, VOR and PI3K inhibitors) drew similarly discouraging results [40,52-58].
Anti-EGFR Therapy
Amplification of epidermal growth factor receptor (EGFR) is observed in about 50% of glioma patients [59] (Figure 1), making anti-EGFR therapy a possible choice for glioma patients. Unfortunately, a phase 3, randomized, double-blind clinical trial failed to show a survival benefit of the EGFR deletion-mutation vaccine rindopepimut (CDX-110) in newly diagnosed EGFRvIII-positive glioma patients [60]. Rindopepimut is an EGFRvIII-specific peptide conjugated to keyhole limpet haemocyanin. After enrollment, patients were randomly assigned to rindopepimut or control (keyhole limpet haemocyanin), concurrent with standard oral TMZ. At the final analysis, the median OS (mOS) was 20.1 months for the rindopepimut group and 20.0 months for the control group. Even though around 50% of patients harbored the EGFRvIII deletion and EGFR amplification was maintained at the time of recurrence [61,62], neither EGFR inhibitors nor monoclonal antibodies targeting extracellular EGFR showed a survival benefit in recurrent-glioma patients. Depatuxizumab mafodotin (depatux-M) is an antibody-drug conjugate composed of an EGFR monoclonal antibody (depatuxizumab) and the microtubule inhibitor monomethyl auristatin F (mafodotin).
In preclinical mouse U87MG and U87MG-EGFRvIII models, the combination of depatux-M, TMZ and radiotherapy inhibited tumor growth more than depatux-M alone or TMZ plus radiotherapy [63]. The preclinical models also verified that depatux-M works efficaciously in recurrent glioma [63]. In a phase 1 study (NCT01800695), 1.25 mg/kg intravenous depatux-M every two weeks was well tolerated in 66 EGFR-amplified recurrent-glioma patients [64]. In another phase 1 study comparing depatux-M plus TMZ with depatux-M monotherapy in recurrent-glioma patients (NCT01800695), PFS6 was higher in the monotherapy group than in the combination group (40% vs. 26.7%, respectively), while OS was higher in the combination group (17.9 vs. 7.2 months); this may reflect the higher EGFR amplification and mutation rate in the monotherapy group (8/9 vs. 9/15 in the combination group) [65,66]. In a subsequent multicenter phase 2 study (NCT02343406), long-term analysis of 199 events showed that the combination of depatux-M and TMZ was effective compared with the control group (hazard ratio 0.66), while single-agent depatux-M was not (hazard ratio 0.96) [67]. An analysis of health-related quality of life (HRQoL) showed that depatux-M had no impact on HRQoL in EGFR-amplified recurrent-glioma patients [68]. Common adverse events (AEs) across all clinical trials were ocular problems, including blurred vision, dry eyes and photophobia, most of which were grade 3/4 AEs [64-69].
Cytokine Therapy
Cytokines are secreted by immune cells and can regulate the immune response against tumors. Among the most-used cytokines are the interleukins, of which IL-12 has shown promising anti-tumor efficacy (Figure 1). IL-12 functions mainly by increasing IFN-γ secretion and shifting CD4+ T cells to a Th1 phenotype [70]. In an advanced-glioma mouse model, combining intratumoral IL-12 with anti-CTLA-4 led to tumor eradication, which neither monotherapy achieved [71]. To overcome the systemic inflammatory toxicity of IL-12, locoregional injection and a regulatable "turn-on" switch were developed. The regulatable hIL-12 vector (Ad-RTS-hIL-12) was injected into the cranial cavity after surgery (NCT02026271), and patients were given different doses (from 10 mg to 40 mg) of veledimex (VDX), the oral activator of the IL-12 vector. The 20 mg VDX dose showed the best drug compliance, with a mOS of 12.7 months [72]. Adverse events such as systemic inflammation were VDX dose-related and were reversed upon VDX discontinuation [72,73].
Oncolytic Viruses
Oncolytic viruses are genetically modified viruses that preferentially infect and kill cancer cells. Upon oncolysis, new infectious virus particles are released to infect and kill the remaining cancer cells. The most-used viral vectors include retrovirus, adenovirus and herpes simplex virus type 1 (HSV-1) (Figure 1, Table 2).
Retrovirus
Unlike HSV and adenovirus vectors, modified retroviral replicating vectors (RRVs) can specifically infect tumor cells without exerting direct oncolytic effects on them, which makes RRVs a good platform for tumor-targeted gene therapy. In the past few years, the most widely used RRV in glioma therapy has been Toca 511, which is modified to encode a transgene for yeast cytosine deaminase (yCD2). When the virus is administered into the resection cavity after surgery, it specifically infects the remaining cancer cells; yCD2 then converts the oral prodrug 5-fluorocytosine (5-FC; Toca FC) into the cytotoxic 5-fluorouracil (5-FU) and kills cancer cells [74]. In a phase 1 trial (NCT01470794), median survival reached 11.9 months for all 53 patients [75]. Other studies further analyzed tumors and peripheral blood samples in this and other phase 1 trials [74,76]. Toca 511 was detected mainly in tumor samples and only transiently in the peripheral blood of some patients. A tumor microenvironment with more activated memory CD4+ T cells and M1 macrophages, and fewer resting NK cells and M0 macrophages, was related to a better response to Toca 511. Responders also showed post-treatment elevation of E-selectin and MIP-1-beta in peripheral blood. However, in a later multicenter, randomized phase 2/3 clinical trial (NCT02414165), Toca 511 showed no advantage over SOC [77]. In this trial, patients were given either Toca 511/FC or SOC (investigator's choice of single-agent therapy such as lomustine, TMZ or BEV). Median OS was 11.1 and 12.2 months for the Toca 511/FC and SOC groups, respectively, while the secondary endpoints and AE rates did not differ between groups.
Adenovirus
Oncolytic adenoviruses can not only directly lyse tumor cells but also induce anti-tumor immune responses [78,79]. The adenovirus VB-111 (ofranergene obadenovec) is the most-studied oncolytic adenovirus for recurrent glioma. It consists of a non-replicating adenoviral vector carrying a human Fas-chimera transgene under the control of a modified murine pre-proendothelin promoter (PPE-1-3x) [78]. VB-111 is promising for recurrent tumors because it disrupts neovascularization independently of the pro-angiogenic signaling pathway and induces infiltrating CD4+ and CD8+ T cells [78]. Although VB-111 has proved well tolerated in clinical trials, its efficacy remains controversial. In a phase 1/2 study, median survival improved significantly, to 414 days, for patients receiving VB-111 combined with BEV [78]. The same team later conducted a phase 3 study (NCT02511405) with a different result: only patients who had both smaller tumors and a post-treatment febrile reaction showed improved survival in the combination group [80]. Another adenovirus-based oncolytic virus, DNX-2401 (Delta-24-RGD; tasadenoturev), has also proved effective and well tolerated in a phase 1 study [79]. Post-treatment tumor samples showed infiltrating CD8+ T cells and T-bet+ cells, along with decreasing transmembrane immunoglobulin mucin-3, indicating an anti-tumor immune microenvironment [79]. Aglatimagene besadenovec (AdV-tk) is a modified oncolytic adenoviral vector expressing the herpes simplex virus (HSV) thymidine kinase (tk) gene, developed to implement a gene-mediated cytotoxic immunotherapy (GMCI) anti-tumor strategy [81]. After local delivery of AdV-tk, an anti-herpetic prodrug is given to activate the STING (stimulator of interferon genes) pathway, turning a "cold" tumor into an immune "hot" tumor. GMCI using AdV-tk has proved safe and tolerable in both adult and childhood recurrent glioma, with efficacy yet to be established [81,82].
HSV-1
HSV-1 is one of the most-studied oncolytic viral vectors, as it can infect most cell types and requires only a low multiplicity of infection for total cell killing. Furthermore, HSV-1 has a large genome into which large or multiple transgenes can be inserted [83]. The first oncolytic HSV-1 used in clinical trials was G207, which has deletions in both copies of the γ34.5 gene and a lacZ insertion inactivating the ICP6 gene [84,85]. In a syngeneic neuroblastoma mouse model, G207 showed both direct oncolytic activity and induction of anti-tumor immunity through increased cytotoxic T cell activity [86]. Recent work analyzed biopsies taken before and after G207 treatment from six recurrent-glioma patients [87]: RNA-seq analysis revealed that enrichment of genes involved in intrinsic IFN-mediated antiviral and adaptive immune responses correlated with survival duration. However, further clinical trials in glioma showed unsatisfactory efficacy of G207. Subsequently, G47∆ was made by deleting the α47 gene from the G207 genome [88]; this modification further attenuated the virus in normal cells and enhanced anti-tumor immunity. Two clinical trials conducted by the same team showed promising results for G47∆ in adult recurrent glioma in a Japanese population [83,89]. In the phase 1/2 study (UMIN-CTR Clinical Trial Registry UMIN000002661), G47∆ proved safe and well tolerated, with a median overall survival of 7.3 months [83]. In a subsequent phase 2 study (UMIN-CTR Clinical Trial Registry UMIN000015995), post-treatment biopsies revealed increasing tumor-infiltrating CD4+/CD8+ lymphocytes and persistently low numbers of Foxp3+ cells [89].
Parvovirus
Beyond the oncolytic viruses mentioned above, rat parvovirus has also been applied in clinical trials of recurrent glioma. In an 18-patient phase 1/2a study of recurrent glioma, rat H-1 parvovirus (H-1PV) was safe and well tolerated, with a PFS6 of 27% [90]. H-1PV crossed the BBB, and activated macrophages and infiltrating cytotoxic T cells were detected in infected tumor samples [90].
Vaccines and Cell-Based Immunotherapies
Vaccines and cell-based immunotherapies are based on the notion of a tumor-specific immune response towards injected exogenous antigens. Currently, cell vaccines, cell-based immunotherapies and peptide vaccines have been applied in clinical trials of recurrent glioma (Figure 1, Table 3).
DC Vaccines
Dendritic cells (DCs) derive from bone marrow and are morphologically and functionally heterogeneous cells that present antigens to CD4+ and CD8+ T cells. As the largest population of APCs, DCs were the first immune cell vaccine to be studied for cancer immunotherapy. Although they are difficult to obtain, the efficacy of autologous DCs has been tested in several clinical trials. The HGG-2006 phase I/II trial evaluated a DC vaccine in newly diagnosed GBM [91]: seventy-seven patients received four weekly vaccinations after 6 weeks of chemoradiotherapy, plus four booster vaccinations during maintenance chemotherapy. This regimen was feasible without major AEs, and the PFS6 was 70.1%. This possibly beneficial result led to further clinical trials of DC vaccines in both newly diagnosed and recurrent glioma. In a double-blind, placebo-controlled phase 2 study, an autologous DC vaccine loaded with glioblastoma-stem-cell-like lines (GSC) prolonged OS in IDH1-wildtype, TERT-mutant, B7-H4-low newly diagnosed GBM patients, with increasing levels of plasma CCL22 and IFN-γ [92]. In another clinical trial, 10 recurrent-glioma patients were given a fusion of an autologous DC vaccine and glioma cells after TMZ resistance [93]; median PFS was 10.3 months, and a specific immune response against chemoresistance-associated peptides (CAPs), such as WT-1, gp-100 and MAGE-A3, was detected. A further study tested the safety and efficacy of a DC vaccine pulsed with lysates from a glioblastoma (GBM)-stem-cell-like cell line [94]: in 25 recurrent-glioma patients, PFS6 was 24%, and the regimen proved safe and well tolerated. These clinical trials indicated that DC vaccines are safe and well tolerated. Subsequently, other clinical trials tested combinations of chemotherapy and DC vaccines [95-97]. In one study [95], recurrent-glioma patients were implanted after resection with Gliadel wafers, composed of biodegradable carmustine, followed by an autologous DC vaccine pulsed with tumor cell lysates; median PFS was 3.6 months from the start of vaccine therapy. In another clinical trial (HGG-2010) [96], a similar DC vaccine was given before and during maintenance TMZ chemotherapy or after it. There was no difference in OS between groups; the median OS was 19 months, and increasing CD69+ CTLs and decreasing Tregs correlated with better OS. In a phase 3 clinical trial (NCT00045968), an autologous tumor-lysate-loaded DC vaccine (DCVax-L) plus TMZ was given to 64 recurrent-glioma patients [97]; the median OS from relapse was 13.2 months, versus 7.8 months for control patients who received SOC TMZ only. These combinations were safe; however, the efficacy of DC vaccine plus chemotherapy remains controversial compared with DC vaccine therapy alone. As mentioned above, autologous DCs obtained from patients' peripheral blood are difficult to prepare; hence, allogeneic DC vaccines have been proposed as an alternative. Several studies have demonstrated that cytomegalovirus (CMV) proteins are expressed in more than 90% of glioblastomas [98,99]. Thus, John H. Sampson and his colleagues developed a DC vaccine targeting the CMV protein pp65 and conducted three clinical trials (NCT00639639) in newly diagnosed glioma [100]. In the first blinded, randomized phase II clinical trial, nearly one third of patients were without tumor recurrence at 5 years from diagnosis. In the second clinical trial, the survival rate was 36% at 5 years from diagnosis. In the third study, the first two-arm trial, migration of the DC vaccine to draining lymph nodes was observed, a phenomenon recapitulated in a larger confirmatory clinical study (NCT02366728) conducted by the same team.
Glioma (Stem) Cell Vaccines
An interesting phenomenon observed in patients with malignant tumors is that those with autoimmune diseases may have a better prognosis. This raises the question of whether a mimicked "autoimmune" state might help eradicate cancer cells. In a syngeneic rat glioma model, allogeneic GBM cell vaccines proved effective against established tumors [101]. This was the first preclinical proof of ERC1671, a vaccine containing autologous and allogeneic (from other patients) GBM tumor cells and lysates. Later, in a double-blind, randomized phase 2 clinical trial in recurrent glioma [102], ERC1671 was administered with cyclophosphamide and granulocyte-macrophage colony-stimulating factor (GM-CSF) plus BEV. Compared with the control group (placebo plus BEV), patients receiving ERC1671 obtained a 4.5-month-longer OS (12 vs. 7.5 months), which correlated positively with maximal CD4+ T lymphocyte counts. Another two-arm study in low-grade glioma tested the safety and effectiveness of GBM6-AD, an allogeneic cell-lysate-based vaccine derived from a glioma stem cell line isolated from a glioma patient [103]. Patients were randomized into two arms: the first arm received the vaccine before surgery, while the second did not; both arms received adjuvant vaccine after surgery. GBM6-AD was co-administered with the TLR3 ligand polyinosinic-polycytidylic acid stabilized with poly-lysine and carboxymethylcellulose (poly-ICLC), which aided trafficking of the vaccine to the central nervous system. The median PFS was 11 months for all patients, while upregulation of type-1 cytokines and chemokines and an increase of CD8+ T cells in peripheral blood were found only in the neoadjuvant arm. In addition, neoadjuvant vaccination led to effector-phenotype CD8+ T cell clones and their migration to the TME.
T-Cell-Based Immunotherapy
One major mode of adoptive immune eradication of cancer cells is the recognition of tumor-associated antigens by cytotoxic T lymphocytes (CTLs). Cancer/testis (CT) antigens, a group of more than 100 proteins of different families with unknown functions, are one of the major classes of heterogeneous antigens recognized by CTLs [104]. In cancer cells, CT antigens are usually epigenetically derepressed by DNA demethylation. Accordingly, Walter and colleagues [104] generated autologous CD4+ T helper cells expressing endogenous CT antigens by treating them with a DNA-demethylating agent; these CD4+ T helper cells can act as APCs to generate CTLs and natural killer (NK) cells in vivo. In the subsequent 25-patient phase 1 trial, 10 of the 25 patients who received all three rounds of treatment survived to the 20-week evaluation; among these, three had tumor regression.
NK-Cell-Based Immunotherapy
Using NK cells to treat malignant tumors is not, strictly speaking, a vaccination strategy. NK cells are innate immune cells that lyse stressed cells and tumor cells without needing prior stimulation [105]. Additionally, tumor cells often downregulate self-markers such as major histocompatibility complex (MHC) class I to escape T cell cytotoxicity; this confers another advantage on NK cell immunotherapy, as it renders tumor cells more susceptible to NK cell lysis [105]. In a clinical trial conducted in 2004 [106], autologous NK cells co-cultured with an irradiated human feeder cell line (HFWT) in RHAM-alpha medium supplemented with 5% autologous plasma and IL-2 were injected into nine adult recurrent-glioma patients. Clinical evaluation revealed three cases of partial response (PR), two of minor response (MR), four of no change (NC) and seven of progressive disease (PD) across sixteen courses of NK cell treatment. In another clinical trial, the safety of intraventricular NK cell infusion was demonstrated in pediatric recurrent-glioma patients [107].
Peptide Vaccines
Survivin (BIRC5) is a member of a group of anti-apoptotic proteins highly expressed in glioma. SurVaxM (SVN53-67/M57-KLH) is a synthetic long-peptide mimic spanning amino acids 53 through 67 of the human survivin protein sequence [108], modified at amino acid M57 to enhance binding of the core survivin epitope to HLA-A*0201 (human leukocyte antigen) molecules; keyhole limpet hemocyanin (KLH) serves as the vaccine adjuvant. Preclinical murine glioma models showed that SurVaxM could stimulate anti-tumor immunity. In a safety-evaluation clinical trial, nine patients with survivin-positive recurrent glioma who were positive for the HLA-A*02 or HLA-A*03 MHC class I allele were included; six of eight evaluable patients developed cellular and humoral immune responses against glioma, with a median PFS of 17.6 weeks. Advances in tumor immunology have revealed that tumor-associated antigens (TAAs) can be used as cancer vaccines. The Wilms' tumor gene 1 (WT1) product is one such TAA that can be utilized as a peptide vaccine. A phase 1 clinical trial in recurrent-glioma patients tested the safety of a cocktail vaccine of WT1 HLA class I and II peptides [109]; eleven of the fourteen included HLA-A*24:02-positive patients completed vaccination, and the median OS and 1-year OS rate were 24.7 weeks and 36%, respectively. Another "cocktail" vaccine used in clinical trials is the personalized peptide vaccine (PPV), which contains four peptides chosen from forty-eight warehouse peptides according to the patient's HLA type and preexisting peptide-specific immunoglobulin (Ig) G levels [110]. In that study, 88 recurrent-glioma patients were randomly assigned to the PPV group or a placebo group at a ratio of 2:1; however, the trial failed to meet both its primary endpoint (OS) and its secondary endpoints. Isocitrate dehydrogenase (IDH) mutations, disease-defining mutations that produce the oncogenic IDH1R132H protein, occur frequently in glioma patients. A peptide vaccine targeting IDH1R132H (IDH1-vac) proved safe and effective in stimulating an immune response in a multicenter phase 1 trial (the NOA-16 trial) in newly diagnosed glioma patients [111]. A recently published paper describes a randomized, three-arm, window-of-opportunity, multicenter national phase 1 trial (AMPLIFY-NEOVAC, NCT03893903) in patients with resectable IDH1R132H-mutant recurrent glioma [112], in which patients will receive IDH1-vac, avelumab (AVE, an anti-PD-L1 antibody), or both. AMPLIFY-NEOVAC is an ongoing trial that aims to demonstrate the safety of this combination and enhanced IDH1-vac-induced T cells in peripheral blood.
CAR-T Therapy
Chimeric antigen receptor (CAR) T cell therapy, a special kind of T-cell-based therapy, has proved effective in multiple cancers, especially leukemia [113]. The chimeric antigen receptors on the T cell typically recognize unprocessed antigens on cancer cells. In glioma, IL13Rα2, human erb-b2 receptor tyrosine kinase 2 (HER2) and EGFRvIII are the commonly targeted proteins on the cancer cell surface (Figure 1, Table 3). In a pilot study, recurrent-glioma patients expressing EGFRvIII were enrolled and given anti-EGFRvIII CAR T cells, with no clinical benefit observed [114]. Another study analyzed tumor samples after EGFRvIII CAR T cell therapy and found antigen decrease in five of seven patients [115]; an increase in inhibitory molecules and infiltration of Tregs were also detected in the tumor microenvironment after CAR-T therapy, indicating that adaptive changes in the local microenvironment and antigen heterogeneity are related to CAR-T therapy efficacy. In a phase 1 study [116], 17 patients with HER2-positive glioma were given autologous HER2-CAR T cell infusions; the infusions were well tolerated, and median OS was 11.1 months. In another phase 1 clinical trial (NCT03500991), locoregional delivery of medium-length-spacer HER2-CAR T cells was evaluated in children and young adults with recurrent glioma [117]; interim reports showed high concentrations of CXCL10 and CCL2 in the cerebrospinal fluid, indicating local CNS immune activation. To overcome the limitations of autologous CAR T cells, an off-the-shelf, allogeneic CAR T cell product was made [118]: IL13Rα2-targeted CAR+ (IL13-zetakine+) cytolytic T lymphocytes (CTLs) obtained from a healthy donor were genetically engineered using zinc finger nucleases (ZFNs) to permanently disrupt the glucocorticoid receptor (GR) (GRm13Z40-2) and thereby confer resistance to glucocorticoid treatment. Six patients with unresectable recurrent glioma were recruited to evaluate the safety and feasibility of intracranial GRm13Z40-2 T cells combined with recombinant human IL-2 (rhIL-2, aldesleukin). The regimen comprised four doses of 10^8 GRm13Z40-2 T cells over a two-week period, along with aldesleukin (nine infusions of 2500-5000 IU); it was well tolerated, and transient tumor reduction and/or tumor necrosis at the site of T cell infusion was observed in four of six patients. In addition, the combination of humanized IL13Rα2 CAR T cells with an anti-CTLA4 antibody proved more effective than either single agent in a glioma mouse model [119], an encouraging result for the further use of allogeneic CAR T therapy in recurrent glioma.
Conclusions and Perspective
Although current evidence shows no treatment benefit of immunotherapy for newly diagnosed glioma patients [120,121], great progress in translational immunotherapy for recurrent glioma has been made in recent years. Multiple immune regimens, especially combination therapies, have proved safe and well tolerated. Although immune combination strategies may not be effective for all patients, the studies reviewed here encourage further exploration of combination strategies targeting specific patients and specific tumor backgrounds, in addition to improving present drug dosage forms.
Author Contributions: Conceptualization, S.S.; writing-original draft preparation, Y.P. and G.Z.; writing-review and editing, S.S. and Y.C.; reference collection and figure design, Y.P., G.Z. and K.Z.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.
"Biology",
"Medicine"
] |
Thin film epitaxial [111] Co$_{50}$Pt$_{50}$: structure, magnetisation, and spin polarisation
Ferromagnetic films with perpendicular magnetic anisotropy are of interest in spintronics and superconducting spintronics. Perpendicular magnetic anisotropy can be achieved in thin ferromagnetic multilayer structures, when the anisotropy is driven by carefully engineered interfaces. Devices with multiple interfaces are disadvantageous for our application in superconducting spintronics, where the current perpendicular to plane is affected by the interfaces. Robust intrinsic PMA can be achieved in certain Co$_x$Pt$_{100-x}$ alloys and compounds at any thickness, without increasing the number of interfaces. Here, we grow equiatomic Co$_{50}$Pt$_{50}$ and report a comprehensive study of the structural, magnetic, and spin-polarisation properties of the $L1_1$ and $L1_0$ ordered compounds. Primarily, interest in Co$_{50}$Pt$_{50}$ has been in the $L1_0$ crystal structure, where layers of Pt and Co are stacked alternately in the [100] direction. There has been less work on the $L1_1$ crystal structure, where the stacking is in the [111] direction.
For the latter $L1_1$ crystal structure, we find magnetic anisotropy perpendicular to the film plane. For the former $L1_0$ crystal structure, the magnetic anisotropy is perpendicular to the [100] plane, which is neither in-plane nor out-of-plane in our samples. We obtain a value for the ballistic spin polarisation of the $L1_1$ and $L1_0$ Co$_{50}$Pt$_{50}$ of $47 \pm 3\%$.
Ferromagnetic films with perpendicular magnetic anisotropy (PMA) are of wide interest for applications in established and nascent technologies such as ultrahigh-density magnetic hard drives 1, MRAM 2, superconducting spintronics 3, and energy-efficient spin-orbit torque memory 4. PMA can be achieved in Pt/Co/Pt multilayer systems as a result of interfacial anisotropy; however, above a critical Co thickness, typically about 1 nm, the anisotropy falls in-plane. Increasing the total ferromagnetic layer thickness further therefore involves adding additional interfaces to the multilayer. A multilayer structure introduces interfacial resistance and interfacial spin-flip scattering 5, which are disadvantageous for applications such as ours, which require the transport current perpendicular-to-plane [6][7][8]. Alternatively, robust intrinsic PMA can be achieved in certain Co$_x$Pt$_{100-x}$ alloys and compounds at any thickness, without increasing the number of interfaces.
Here, we study the equiatomic Co$_{50}$Pt$_{50}$ alloy, hereafter referred to as CoPt. Through growth at elevated temperatures it is possible to form the $L1_1$ and $L1_0$ chemically ordered compounds of CoPt as epitaxial films. Previous experimental studies of such compounds tend to use MgO substrates as the basis for high-temperature epitaxial growth. Visokay and Sinclair report the $L1_0$ crystal structure on MgO [001] substrates for growth temperatures above 520 °C 9. Iwata et al. report growth of the $L1_1$ crystal structure on MgO [111] substrates at a growth temperature of 300 °C 10.
Early thin-film studies of the chemically ordered CoPt (and the related FePt and FePd) compounds were motivated by the large out-of-plane anisotropy and narrow domain wall widths, which make them candidates for high-density storage media. Recently, renewed interest in these compounds has been driven by the discovery of self-induced spin-orbit torque switching in these materials, which can be used as the switching mechanism for a low-dissipation magnetic memory 54-61. Our motivation for studying [111] CoPt is to incorporate this PMA ferromagnet in an all-epitaxial heterostructure suitable for superconducting Josephson devices [6][7][8] or MRAM 2. For these applications growth in the [111] direction is favourable. In Josephson junctions, the superconductor of choice is Nb, which can be grown epitaxially as a bcc structure in the [110] direction 62. In MRAM, the seed layer of choice is Ta, which has almost identical structural properties to Nb. On Ta or Nb [110] layers, Pt and Co are known to grow with [111] orientation 63.

We fabricate and report the properties of three sets of samples. The first set is designed to determine the optimal growth temperature; we therefore fix the thickness $d_{CoPt}$ = 40 nm and vary the substrate heater temperature in the range from 27 to 850 °C. The next two sample sets are thickness series grown by varying the growth time at fixed temperature and magnetron powers. The temperatures chosen for the thickness series, guided by the results of the first temperature series, produce either the $L1_1$ (350 °C) or $L1_0$ (800 °C) crystal structure. Thicknesses are varied over the range 1 nm ≤ $d_{CoPt}$ ≤ 128 nm.
On each sample set we report systematically on the structural and magnetic properties of the CoPt. Additionally, on the thickest 128 nm $L1_1$ and $L1_0$ samples we perform point-contact Andreev reflection (PCAR) measurements with a Nb tip at 4.2 K to determine the spin polarisation. The use of Nb as the tip, the temperature of this measurement, and the ballistic transport regime probed are relevant for our proposed application of the CoPt in Josephson devices [6][7][8].
Results and discussion
CoPt properties as a function of growth temperature. We expect that, as the growth temperature is increased, the CoPt will form first a chemically disordered A1 alloy phase, then the chemically ordered $L1_1$ crystal structure, then an intermediate-temperature A1 phase, and finally the chemically ordered $L1_0$ crystal structure. In order to map out the growth parameters we report on Al$_2$O$_3$ (sub)/Pt (4 nm)/CoPt (40 nm)/Pt (4 nm) sheet films grown at set temperatures ranging from room temperature (RT) to 850 °C.
Structure. In order to understand the magnetic phases of sputter-deposited CoPt, it is necessary to understand the underpinning structure and the influence of the growth temperature. The structural characteristics and film quality were therefore investigated using X-ray diffraction (XRD) as a function of growth temperature; the results are shown in Fig. 1a-d. In Fig. 1a, for a growth temperature of 350 °C, the additional feature at 2θ ≈ 40° corresponds to the superimposed Pendellösung fringes and the Pt [111] structural peak (bulk Pt has a lattice constant of 3.92 Å). This additional Pt structural peak is present for growths up to 550 °C (see Supplemental Information online), which is within the expected optimal temperature range for sputter-deposited Pt films on Al$_2$O$_3$ 64. To maintain a consistent Pt structure for CoPt growths at different temperatures, future work could grow the Pt layer at its optimum growth temperature.

Using the Scherrer equation and the full widths at half maximum of the Gaussian fits to the CoPt [111] structural peaks, the CoPt crystallite sizes can be estimated. The crystallite size determined from the main [111] structural peak is given in Fig. 1e. In the expected range of the ordered $L1_1$ crystal structure, between 200 and 400 °C, the estimated CoPt crystallite size is 37 nm, which, compared to the nominal thickness of 40 nm, indicates that the CoPt has high crystallinity. On the other hand, for RT growth and intermediate temperatures between 400 and 800 °C, the estimated CoPt crystallite size is much lower, with a minimum value of 22 nm at 700 °C, where we expect A1 growth. Interestingly, the disorder in the A1 growth appears to involve both chemical disorder (random positions of the Co and Pt atoms in the unit cell) and a poorer crystallite size compared to the chemically ordered crystal structures. The disappearance of the Pendellösung fringes at these intermediate growth temperatures is related to the increased roughness of the A1 films. Finally, upon increasing the temperature further to 800 °C and 850 °C, the crystallite size reaches a maximum of 49 nm. This value corresponds approximately to the entire thickness of the Pt/CoPt/Pt trilayer, indicating that the Pt buffer and capping layers have fully interdiffused into the CoPt layer.

Figure 1f shows the CoPt c-plane spacing calculated from the centre of the fitted Gaussian to the main [111] structural peak. At all temperatures the measured c-plane spacing is very close to the expected value based on single-crystal studies 65, indicating low out-of-plane strain. The trend with temperature, however, is non-monotonic and shows a discrete increase between the samples grown at 550 °C and 650 °C. The transition associated with the $L1_0$ crystal structure is expected to take the crystal from cubic to tetragonal with c/a = 0.979 in the single crystal 65. In the single-crystal study, however, the c-axis is in the [001] orientation; in our [111]-orientated film, the small structural transition from cubic to tetragonal is not expected to be visible in the [111] peak studied here. Nonetheless, Fig. 1f shows features consistent with the CoPt undergoing structural transitions.
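As a concrete illustration of the crystallite-size estimate described above, the sketch below applies the Scherrer equation. The wavelength, shape factor, and peak values are illustrative assumptions, not numbers quoted in the text.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Estimate crystallite size (nm) from the FWHM of a diffraction peak.

    fwhm_deg      : full width at half maximum of the peak (degrees, 2-theta)
    two_theta_deg : peak position (degrees, 2-theta)
    wavelength_nm : X-ray wavelength; Cu K-alpha1 assumed here
    K             : shape factor, ~0.9 for roughly equiaxed crystallites
    """
    beta = np.radians(fwhm_deg)            # FWHM in radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers only: a [111] peak near 2-theta ~ 41 deg with a
# 0.2 deg FWHM gives a crystallite size of roughly 40 nm.
print(f"{scherrer_size(0.2, 41.0):.0f} nm")
```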
X-ray reflectivity (XRR) was performed to further investigate the growth-temperature-dependent trends in the structural properties of the films. Figure 2a,c,e show the low-angle XRR for selected growth temperatures along with the best fits to the data; the corresponding models are shown in Fig. 2b,d,f. Further XRR data are available in the Supplemental Information online. The fitting is performed using the GenX package 66, which models each layer as a box with independent thickness, roughness, and density fitting parameters. Across 13 samples grown at varying temperatures, the average total sample thickness is 47.2 ± 0.3 nm, compared to a nominal total thickness of 48.0 nm, confirming the sputter rate calibration and that the growth temperature has not impacted the growth rate. For growth temperatures of 450 °C and below, the XRR is best modelled as a Pt/CoPt/Pt trilayer, for example Fig. 2a,b. At growth temperatures of 550 °C and above there is a clear change in the XRR; modelling the data suggests that interdiffusion has become so large that a trilayer model is no longer required, and for the data shown in Fig. 2c-f a single-layer model is used to fit the reflectivity. Figure 2g shows the extracted roughness parameter for the top surface of the sample. The surface roughness shows a temperature dependence corresponding to the underpinning structure of the CoPt film: at the optimal growth temperature for the $L1_1$ crystal structure the surface roughness is lowest; in the A1 growth regime the disorder in the crystal structure evident in the XRD is also present in the film surface roughness; and the lower roughness is recovered at the higher optimal growth temperature for the $L1_0$ crystal structure. Figure 2h shows the extracted interfacial roughness parameter at the CoPt/Pt interface, which we interpret as a measure of the interdiffusion between the layers. As expected, there is a clear trend of increasing interdiffusion with increasing temperature. At the highest growth temperatures, where a single-layer model is used to fit the data, there is no interfacial roughness parameter to extract, as the layers have completely interdiffused.
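GenX itself performs a full box-model refinement; as a rough, model-free cross-check (not part of the paper's analysis), the total film thickness can also be read off the Kiessig fringe period, since successive fringes are spaced by Δq ≈ 2π/d. A minimal sketch with hypothetical fringe positions and an assumed Cu K-α wavelength:

```python
import numpy as np

WAVELENGTH = 0.15406  # nm, Cu K-alpha; an assumption, not stated in the text

def thickness_from_fringes(two_theta_deg):
    """Estimate total film thickness from successive Kiessig fringe positions.

    two_theta_deg: fringe maxima (or minima) positions in degrees (2-theta).
    Uses q = (4*pi/lambda)*sin(theta) and d ~ 2*pi / (mean fringe spacing in q).
    Refraction near the critical angle is neglected.
    """
    theta = np.radians(np.asarray(two_theta_deg) / 2)
    q = 4 * np.pi / WAVELENGTH * np.sin(theta)
    return 2 * np.pi / np.mean(np.diff(q))

# Hypothetical fringe positions consistent with a ~48 nm total thickness
print(f"{thickness_from_fringes([1.000, 1.184, 1.368, 1.552]):.0f} nm")
```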
Magnetic characterisation. The magnetisation versus field data are shown in Fig. 3 for Al$_2$O$_3$ (sub)/Pt (4 nm)/CoPt (40 nm)/Pt (4 nm) sheet film samples. Further magnetisation data for all samples in this study are available in the Supplemental Information online. The 350 °C, 550 °C, and 800 °C samples are plotted here as they are representative of the magnetic response of the $L1_1$ crystal structure, the chemically disordered A1 phase, and the $L1_0$ crystal structure respectively. Magnetisation is calculated from the measured total magnetic moments, the areas of the sample portions, and the nominal thicknesses of the CoPt layer.
For the chemically ordered $L1_1$ crystal structure shown in Fig. 3a, the OOP hysteresis loop shows a wasp-waisted behavior associated with the formation of magnetic domains at remanence. Such behavior is common in CoPt alloys and multilayer thin films 9. The wasp-waisted OOP hysteresis loop, along with the low IP remanence and higher IP saturation field, indicates that the 40 nm CoPt samples with the $L1_1$ crystal structure have strong PMA. We can estimate the effective anisotropy using the expression $K_{eff} = \mu_0 M_s H_s / 2$, where $\mu_0$ is the vacuum permeability, $M_s$ the saturation magnetisation, and $H_s$ the saturation magnetic field. We estimate from the hysteresis loop that $H_s$ in-plane is 0.8 T (based on when the magnetisation reaches 97.5% of the fitted saturation value), and therefore $K_{eff} = 0.4 \pm 0.1$ MJ/m³. The effective anisotropy includes the uniaxial and shape anisotropy.
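The arithmetic behind this estimate is a one-liner; in the sketch below the saturation magnetisation is an assumed, illustrative value (chosen to reproduce the quoted $K_{eff}$), since $M_s$ is not stated at this point in the text.

```python
def k_eff(Ms, mu0_Hs):
    """Effective anisotropy K_eff = mu0*Ms*Hs/2 in J/m^3.

    Ms     : saturation magnetisation in A/m
    mu0_Hs : in-plane saturation field expressed as mu0*H in tesla,
             so the product Ms * mu0_Hs already carries the factor mu0.
    """
    return Ms * mu0_Hs / 2

# mu0*Hs ~ 0.8 T is quoted in the text; Ms = 1.0e6 A/m is an assumed value.
print(f"K_eff = {k_eff(1.0e6, 0.8) / 1e6:.2f} MJ/m^3")  # K_eff = 0.40 MJ/m^3
```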
For samples grown at intermediate temperatures with the chemically disordered A1 structure, the magnetism favours IP anisotropy at 40 nm, shown for growth at 550 °C in Fig. 3b.
For the chemically ordered $L1_0$ crystal structure shown in Fig. 3c, there is a significant increase in the coercivity and squareness ratio ($M_r/M_s$) for both the IP and OOP field orientations. The increased coercive field suggests that the $L1_0$ CoPt is magnetically hard compared to the $L1_1$ and A1 samples. The magnetisation of the $L1_0$ 40 nm CoPt sample does not show clear IP or OOP anisotropy from these measurements.
The magnetisation vs growth temperature is shown in Fig. 3d. At growth temperatures below 550 °C the magnetisation remains approximately constant; however, at higher temperatures the magnetisation begins to decrease with increasing temperature. The possible cause of this decrease is the higher growth temperature contributing to interdiffusion between the Pt and CoPt layers, creating magnetic dead layers. The saturation field and squareness ratio vs growth temperature are shown in Fig. 3e,f respectively. The general trends can be seen in the differences observed in the hysteresis loops of Fig. 3a-c. In the L1₀ crystal structure, Fig. 3c, the high squareness ratio for both field orientations suggests that the anisotropy axis of the material is neither parallel nor perpendicular to the film. Instead, it is possible that the magnetic anisotropy is perpendicular to the layer planes, which are stacked in the [100] direction.
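As a consistency check (standard cubic crystallography, not taken from the paper), the angle between a cubic <100> axis and the [111] growth direction follows from the inter-axial angle formula and matches the dihedral angle quoted in the Hall-bar analysis below:

```latex
% Angle between the cubic [100] and [111] directions:
\cos\theta = \frac{h_1 h_2 + k_1 k_2 + l_1 l_2}
  {\sqrt{h_1^2+k_1^2+l_1^2}\,\sqrt{h_2^2+k_2^2+l_2^2}}
  = \frac{1\cdot 1 + 0\cdot 1 + 0\cdot 1}{\sqrt{1}\,\sqrt{3}}
  = \frac{1}{\sqrt{3}},
\qquad \theta \approx 54.74^{\circ}.
```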
To further investigate the anisotropy, we pattern our L1₁ 350 °C and L1₀ 800 °C samples into Hall bars and perform angular-dependent Hall resistivity, R_xy(θ), measurements (Fig. 4). The fabricated Hall bar and measurement geometry are shown in Fig. 4a. R_xy(θ) for the L1₁ 350 °C and L1₀ 800 °C Hall bars is shown in Fig. 4b,c, respectively. For the L1₁ 350 °C sample with out-of-plane anisotropy, R_xy(θ) shows a plateau close to out-of-plane field and a uniform response for angles in between. The plateau is interpreted as an angle forming between the magnetisation and the applied field because of the anisotropy axis 67. In comparison, the L1₀ 800 °C sample also shows an R_xy(θ) plateau for out-of-plane applied field, plus an additional plateau for applied field angles between about 45° and 60°. We interpret the additional plateau in R_xy(θ) for the L1₀ 800 °C CoPt sample as evidence for an additional anisotropy axis, which we propose is perpendicular to the [100] direction. The [100] plane has a dihedral angle of 54.75° with the [111] growth plane. Additional sources of anisotropy in our samples are interface anisotropy at the Pt/CoPt interfaces, which would favour out-of-plane magnetisation for thin layers, and shape anisotropy, which for our thin films would favour in-plane magnetisation. Figure 5 shows the extracted coercive field as a function of in-plane rotator angle for the L1₀ 800 °C sample. The data show a clear six-fold symmetry. This is consistent with an easy axis of [001] for the tetragonal L1₀ phase when grown on {111} planes: the <100> directions of the parent cubic structure are inclined at ±45° from the plane and are coupled with the three-fold symmetry of {111}. The magnetometry data therefore strongly suggest that over the sample the [001] L1₀ axis can lie along any of the three possible <100> directions of the parent cubic structure, without a strong preference for which of the possible twins grows.

CoPt properties as a function of thickness. Magnetometry for sheet films is shown in Fig. 6a,b for the L1₁ (350 °C) and L1₀ (800 °C) crystal structures respectively. Hysteresis loops over the full thickness range are given in the Supplemental Information online. The moment/area at saturation (or 6 T) versus nominal CoPt thickness is presented in Fig. 6c,d. We calculate the magnetisation (M) by fitting the moment/area versus nominal CoPt thickness data. In order to account for interfacial contributions to the magnetisation of the CoPt, we model the system as a magnetic slab with possible magnetic dead layers and/or polarised adjacent layers. Magnetic dead layers can form as a result of interdiffusion, oxidation, or at certain interfaces with non-ferromagnetic layers. At some ferromagnet/non-ferromagnet interfaces, the ferromagnetic layer can create a polarisation inside the non-ferromagnetic layer by the magnetic proximity effect. Polarisation is particularly common at interfaces with Pt [68-72]. To take these into account, we fit to the expression

moment/area = M (d_CoPt − d_i),  (1)

where d_i is the thickness correction accounting for any magnetic dead layers or polarisation. The resulting best fit and the moment/area versus the nominal CoPt thickness are shown in Fig. 6.
For L1₁ growth at 350 °C, the result of fitting Eq. (1), shown in Fig. 6c, gives M = 750 ± 50 emu/cm³ and d_i = 0.38 ± 0.05 nm. From the XRR data and fitting presented in Fig. 2, the interdiffusion between the Pt seed and capping layers and the CoPt layer is minimal at this growth temperature, which is consistent with the small dead layer d_i.
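To illustrate how such a fit can be carried out in practice, here is a minimal sketch (Python, not the authors' code; the thickness and moment values are invented placeholders, not the measured data). Since moment/area = M(d − d_i) is linear in the thickness d, the slope and intercept of a straight-line fit give M and d_i directly; the same procedure applies to the L1₀ data discussed next.

```python
import numpy as np

# Hypothetical placeholder data: nominal CoPt thickness d (nm) and
# measured moment per unit area m/A (memu/cm^2) -- not the paper's values.
d = np.array([2.0, 4.0, 8.0, 16.0, 40.0, 128.0])             # nm
m_per_area = np.array([0.12, 0.27, 0.57, 1.17, 2.97, 9.57])  # memu/cm^2

# m/A = M * (d - d_i) is linear in d: slope = M, intercept = -M * d_i.
slope, intercept = np.polyfit(d, m_per_area, 1)

M = slope                   # magnetisation in memu / (cm^2 * nm)
d_i = -intercept / slope    # effective dead-layer / polarisation thickness, nm

# Unit conversion: 1 memu/(cm^2 * nm) = 1e-3 emu / (1e-7 cm * cm^2)
#                = 1e4 emu/cm^3.
print(f"M   = {M * 1e4:.0f} emu/cm^3")
print(f"d_i = {d_i:.2f} nm")
```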
For L1₀ growth at 800 °C, the thinnest 1 nm and 2 nm films did not display any magnetic response and are excluded from the analysis in Fig. 6. This suggests the formation of a magnetic dead layer, or alternatively a large enough change in the stoichiometry that those films were no longer magnetic. From the XRR data and fitting presented in Fig. 2, there is significant interdiffusion between the Pt seed and capping layers and the CoPt. The result of fitting Eq. (1), shown in Fig. 6d for the samples thicker than 2 nm, gives M = 520 ± 50 emu/cm³ and d_i = −2 ± 1 nm. Interestingly, we find a significant difference in M between the two crystal structures, which is consistent with a previous report of CoPt growth on MgO substrates 43. It is possible that the difference in M corresponds to a true difference in the saturation magnetisation of the two crystal structures. An alternative scenario is that the 6 T applied field is not large enough to fully saturate the L1₀ samples, leading to a reduced measured M. Another possibility is that the interdiffusion of the Pt seed and capping layers during growth at 800 °C modifies the stoichiometry of the resulting L1₀ film, and hence reduces the magnetisation. The thickness dependence of the magnetic switching of the L1₁ samples is well summarised by the squareness ratio shown in Fig. 6e. The thickest 40 and 128 nm samples are wasp-waisted, as presented in Fig. 3. At reduced thicknesses, between 2 and 8 nm, the L1₁ CoPt no longer displays the wasp-waisted switching for the out-of-plane field orientation, and instead has a square loop, shown in Fig. 6a. The 16 nm sample showed an intermediate behaviour. At 1 nm, the magnetic switching showed "S"-shaped hysteresis loops for both in- and out-of-plane applied fields with small remanent magnetisation; see Supplemental Information online.
The thickness dependence of the L1₀ crystal structure samples is significantly different from that of the L1₁. In the thinnest films of 1 and 2 nm there is no evidence of ferromagnetic ordering in the hysteresis loops; see Supplemental Information online. For L1₀ growth at 800 °C, the XRR measurements (Fig. 2) suggest interdiffusion at the Pt/CoPt interfaces at the high growth temperature. The interdiffusion may account for magnetic dead layers, which in the thinnest samples may prevent ferromagnetic ordering. Upon increasing the thickness to 4 nm, a ferromagnetic response is recovered; however, the hysteresis loops and extracted squareness ratio (Fig. 6f) indicate that the 4 nm and 8 nm L1₀ samples have in-plane magnetisation. The in-plane magnetisation in the thinner films suggests that the long-range L1₀ ordering may not have been established at those thicknesses.
The thickness dependence of both crystal structures suggests that the Pt/CoPt/Pt trilayers grown on Al₂O₃ substrates are not suitable for applications where ultrathin magnetic layers are required. To improve the magnetic properties of the thinnest samples in this study, our future work will focus on replacing the Pt layers with seed and capping layers where interdiffusion may be weaker.
Spin polarisation.
To estimate the spin polarisation in the chemically ordered L1₀ and L1₁ CoPt samples, we perform point contact Andreev reflection (PCAR) spectroscopy experiments 30,36,73-76. In the PCAR technique, the spin polarisation in the ballistic transport regime can be determined by fitting the bias dependence of the conductance with a modified Blonder-Tinkham-Klapwijk (BTK) model 77.
We measure the Al₂O₃(sub)/Pt(4 nm)/CoPt(128 nm)/Pt(4 nm) samples grown at 350 °C, corresponding to the L1₁ crystal structure, and at 800 °C, corresponding to the L1₀ crystal structure. The PCAR experiment was performed with a Nb wire tip at 4.2 K. Exemplar conductance spectra with fits to the BTK model are given in Fig. 7a. The interpretation of PCAR data is rife with difficulties 78, and a common issue with the PCAR technique is the presence of degenerate local fitting minima. To ensure that a global best fit is obtained, the fitting code makes use of a differential-evolution algorithm, and we then consider the spin polarisation and barrier strength parameter for a large number of independent contacts to the same sample. Figure 7c shows the polarisation as a function of the square of the barrier strength, Z². The dashed lines in Fig. 7 are linear fits to the data. The value of the true spin polarisation is often taken to correspond to Z = 0; however, this is strictly nonphysical. Nevertheless, in an all-metal system it is possible to produce contacts approaching the ideal case, and extrapolating to Z = 0 is close to the (finite) minimum. We find that P = 47 ± 3% for both the L1₁ and L1₀ CoPt samples. This compares to ≈42% for L1₀ FePt 30 and ≈50% for L1₀ FePd 36.
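The Z → 0 extrapolation described above can be sketched as follows (a minimal illustration with invented placeholder (Z, P) pairs, not the measured contacts):

```python
import numpy as np

# Hypothetical placeholder data: barrier strength Z and fitted polarisation P
# for a set of independent point contacts (not the paper's measurements).
Z = np.array([0.15, 0.25, 0.35, 0.45, 0.55])
P = np.array([0.46, 0.44, 0.41, 0.37, 0.32])   # fractional polarisation

# P is commonly observed to fall roughly linearly with Z^2; the intercept
# at Z = 0 is taken as the estimate of the intrinsic spin polarisation.
slope, intercept = np.polyfit(Z**2, P, 1)

print(f"P(Z=0) ~ {100 * intercept:.0f}%")
```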
Conclusions
The major conclusions of this work may be summarised as follows. On c-plane sapphire, the growth temperature alone selects between the CoPt crystal structures: the L1₁ phase at 350 °C, the chemically disordered A1 phase at intermediate temperatures, and the L1₀ phase at 800 °C, with correspondingly distinct anisotropy and magnetic switching, and a transport spin polarisation of P = 47 ± 3% for both chemically ordered phases.

Methods
Growth. For alloy growth, we use the co-sputtering technique. To achieve as close to a Co₅₀Pt₅₀ stoichiometry as possible, single-layer samples of Co or Pt are first grown at room temperature on 10 × 10 mm thermally oxidised Si substrates while varying the magnetron power. From this initial study, it is found that a growth rate of 0.05 nm s⁻¹ is achieved for a Co power of 45 W and a Pt power of 25 W. These growth powers are fixed for the rest of the study.
For the growth of the CoPt samples, 20 × 20 mm c-plane sapphire substrates are used. The substrates are heated by a ceramic substrate heater mounted directly above the substrate holder. The measured substrate heater temperature is reported; we note that the temperature at the substrate surface is most likely below the reported heater temperature. The substrate heater is ramped up from room temperature to the set temperature at a rate of 3-5 °C min⁻¹. Once at the set temperature, the system is given 30 min to reach equilibrium before starting the sample growth.
Once the system is ready for growth, a 4 nm Pt seed layer is deposited. The seed layer is immediately followed by the CoPt layer, which is deposited at a rate of 0.1 nm s⁻¹ by co-sputtering from the two targets at the determined powers. Finally, a 4 nm Pt capping layer is deposited to prevent the samples from oxidising. The final sample structure is Al₂O₃(sub)/Pt(4 nm)/CoPt(d_CoPt)/Pt(4 nm). Following deposition, the samples are post-growth annealed for 10 min at the growth temperature before the substrate heater is ramped down to room temperature at 10 °C min⁻¹.
Characterisation. Magnetisation loops are measured using a Quantum Design MPMS 3 magnetometer.
Angular-dependent magnetization measurements are performed using the Quantum Design Horizontal Rotator option. X-ray diffraction and reflectivity are performed on a Bruker D8 diffractometer with an additional four-bounce monochromator to isolate Cu K-α at a wavelength of 1.5406 Å. Sheet films are patterned into Hall bars of 5 µm width using conventional photolithography and Ar ion milling. The resulting devices are measured in a 4-point-probe transport geometry to obtain the Hall resistance of the films, using a combined Keithley 6221-2182A current source and nano-voltmeter.
PCAR.
Our experimental setup for performing PCAR measurements is described elsewhere 30,36,[74][75][76] . The Nb tips are prepared from commercial 99.9% pure Nb wires with a diameter of 0.5 mm. An AC lock-in detection technique using Stanford Research Systems SR830 lock-in amplifiers is used for the differential conductance measurements. The tip position is mechanically adjusted by a spring-loaded rod driven by a micrometer screw. The experiment is carried out in liquid He at a fixed temperature of 4.2 K and at zero applied magnetic field.
Data availability
The datasets generated and/or analysed during the current study are available in the University of Leeds repository, https://doi.org/10.5518/1275. | 6,343.4 | 2023-08-01T00:00:00.000 | [
"Physics",
"Materials Science"
] |
European Green Deal: Slowing Down Multi-Speed Integration with Innovation
With the creation of the Green Deal, the European Union aims to achieve climate-neutrality by 2050. Multi-speed European integration is very likely to take place during the transition if the proper precautions are not taken. Although this would not necessarily be a cause for concern in other cases, in the case of climate-neutrality it would be most distressing. That is because of the lack of justness a multi-speed transition carries and the nature of the goal itself. Although the EU has created a whole mechanism to ensure that the transition will take place in a fair and just way, it concentrates on regions with specific features (such as fossil and carbon dependency), leaving behind other, less concerning regions. This Policy Brief aims to create a link between the Just Transition Mechanism and the possibility of multi-speed integration, and to propose an additional pillar that, even though it might seem gentler, may be enough to prevent this issue. That is, the creation of several model-like projects by the European Commission that could be proposed and applied in almost any region. One of these projects could be the innovative Vertical Agriculture scheme.
Introduction
Having the European Green Deal in mind, the EU aims to reform and reshape as many sectors as possible. Although it is, and should be, the first priority for the EU to invest in regions where the transition would be most drastic, the question is how the funding will be distributed from member-state to member-state, and their regions, without causing inequality and creating different scales of development.
This Policy Brief aims to approach that issue from a more versatile perspective. Starting with understanding what multi-speed integration really is and how it may interfere with the goals of Cohesion Policy, the focus will then shift towards the importance of the problem regarding the implementation of the European Green Deal project. After realizing just how urgent it is to take action against any possibility of creating different development levels, and how important it is for the European Commission to simultaneously focus on and create models of projects which, by their nature, can be applied in any region, the proposal of the Vertical Agriculture Project will arise as a fair and just scheme that will potentially benefit not only each and every member-state (and their regions), but also the whole Continent, and the EU in general.
Explanation of the problem
In this part, the main goal is to provide the reader with a basic idea of the multi-speed European Union, why this could potentially interfere with the Cohesion Policy's interests, and how important it is for the EU to avoid multi-speed integration for the sake of climate-neutrality.
Multi-speed European Union
Before showing just how urgent it is for the EU to avoid setting out towards climate-neutrality with a multi-speed approach, it is only natural to explain the meaning of multi-speed integration. According to the European Union's White Paper on the future of Europe (European Commission, 2017), there are several possible approaches to the way the European Union will develop in the future. One of the possibilities is for the member-states which are on the same level to cooperate with each other and continue their integration, while others join them only when they are equally prepared. There are pros and cons to that.
Yet, when it comes to climate-neutrality, such an approach is pointless. That is because this goal is only achievable and viable in the long term when every single member-state and its regions participate in the transition.
There are two reasons for that. Firstly, if the EU reaches its goals of climate-neutrality without every member-state transitioning, it will mean that some member-states drastically changed their economies while others took small, if any, steps to change, for example, their carbon-dependent regions. This outcome would devastate the unity of the Union and would surely deepen the gap between some member-states, leading to even more injustice (Rosamond, 2004). Secondly, climate problems by default do not respect borders. As much as some member-states may try to achieve climate-neutrality all by themselves, the real change will come only with the participation of all of them.
Besides, after the first shock of the COVID-19 crisis (Melidis & Tzagkarakis, 2020), the EU began a massive campaign under the "NextGenerationEU" title, in which the European Commission found a window of opportunity to promote the Union's integration through billions in investments, taking a step even further for the unity of the EU and decreasing in that way the possibility of two- (or three-) speed integration (Mitsos, 2020).
Cohesion Policy
In order to avoid a multi-speed transition while reaching for climate-neutrality, it is important for the EU to respect the goals of Cohesion Policy. By containing differences, the Cohesion Policy aims to invigorate economic cohesion and social connectivity between all the member-states. The Cohesion Policy's main, but not only, goal is to support the less developed regions (Andreou, 2016). In this case, even though it still remains of great importance to support the most vulnerable regions, it is vital to prevent any further injustice in doing so.
The current period of the Cohesion Policy runs from 2021 until 2027, and during this period the European Union will focus more on innovation, ecology and regional development (European Commission, 2019). The EU funds each region of each member-state through the European Regional Development Fund (ERDF), the European Social Fund (ESF) and the Cohesion Fund (CF).
Approximately 75% of ERDF and CF resources will be distributed to create a smarter Europe, with innovative cities and developed regions, and a greener, carbon-free Europe based on a circular economy. These two priorities will be the main focus of regional investments (European Commission, 2018). Since it is the purpose of Cohesion Policy to stabilize the differences and achieve cohesion, what better solution than to also fund model-like projects that have the potential to be applied in almost any region. Thus, this proposal could also benefit from Cohesion Policy funds (European Commission, 2021).
The Importance of the Problem
There is a main issue regarding the implementation of the EU's Green Deal projects: the possibility of an increasingly multi-speed Union (Melidis & Russel, 2020), since some countries, like Sweden, are far closer to climate-neutrality than, for example, Poland 3. This underlines the need to focus not only on some specific regions but on any region, regardless of its economic, population and geographic status, and it shows exactly why it is of vital importance for the European Commission, now more than ever, to create smaller, regional-friendly, innovative projects and provide them to the member-states.
This Policy Brief will thus propose a pioneering project, as an example, that could tackle this issue and still provide the EU with one more means to reach the 2050 goal of climate-neutrality.
So, the goal is, firstly, to find a way to avoid from the beginning any further multi-speed integration due to the ambitious and complex goal of a climate-neutral European Union, and secondly, to maintain the focus on the number of regions where projects may apply. This means that, apart from the regions that will need the most attention due to the transition (for instance, those with highly carbon-intensive activities), the EU should simultaneously invest in creating, promoting and proposing projects that have the potential to be applied to as many regions as possible, and create a whole new approach to the Just Transition Mechanism (according to the European Commission's 2020 press publication: "[…] the territorial just transition plans will identify the most impacted territories that should be supported").
The Green Deal, at its core, is meant to be applied to all the member-states. This means that climate-neutrality is a goal that can only be reached when certain steps are followed by as many regions as possible. It is thus of utmost importance to focus on a just transition, not only from region to region inside each member-state, but also between the regions of different member-states. In other words, it is vital for the EU's Green Deal to adopt the values and goals of Cohesion Policy, in order to avoid an even greater division of integration.
Even though there is undoubtedly a focus on a just distribution of the funds via the projects, the truth is that there is more to be done. The reason is that the Just Transition Mechanism (European Commission, 2020), as thoughtful and well-functioning as it is, is not enough. To really make sure that the transition towards a climate-neutral Europe does not take place while the gap between the development of the regions of each member-state becomes greater than ever, the European Commission should give a fair amount of time and energy to creating pioneering projects (for example, with an additional role for the InvestEU Advisory Hub or TSI programs 4, not only creating certain projects for each member-state separately, but also promoting and proposing model-like projects) that could apply in any member-state, adapting them of course to the specific features of each one, and thus avoid the creation of a developmental gap between them.
The Solution: Vertical Agriculture Project
The Vertical Agriculture Project, which is this Policy Brief's proposition for a just and climate-neutral project, was first conceived as an innovative idea by Dr. Dickson Despommier of Columbia University. It fits perfectly with the EU's purposes, since it focuses on agriculture, environment and energy goals at a local-regional level. It may not drastically change the performance of the whole Continent, yet small steps are equally important not only to achieve but also to maintain climate-neutrality.
This project will, on the one hand, provide potential new job vacancies in any EU region, and will also enhance regional productivity and boost local economies, bringing the goal of becoming a low-carbon, eco-friendly, climate-neutral Continent one step closer to its achievement. In addition, all the fresh goods that will be cultivated are going to be sold in the local markets, focusing mainly on the Green Deal's Agriculture and Food goal (and specifically the "Farm to Fork Strategy"), but also covering themes from other goals, such as environment and energy.
The Vertical Agriculture Project could be proposed as a model-like solution for every region where buildings no longer serve their purpose: buildings where renovation is most costly, buildings whose construction has stopped due to lack of funds and, finally, abandoned buildings. Those buildings could change their form and become vessels of innovation. The main idea is, instead of demolishing or completely abandoning them, to install a local interior vertical garden, with all the equipment necessary for it to be self-functional. It would be something similar to recycling, but with buildings.
In order to promote and propose this project top-down (from the European Council to the member-states and their regions), randomization is the key. That is because, firstly, by counting on randomness the European Commission will be released from a blame-game of intentionally supporting some regions over others, and secondly, because of the overall justness it carries. Random selection is a fair way of choosing one thing over another due to its lack of biases and the condition that each region has equal chances to implement the project (Duxbury, 2012). In this Policy Brief, randomness lies in the location of abandoned, old and left-under-construction buildings. Wherever there is such a building, there is an opportunity for its region to thrive.
Additionally, it goes without saying that this proposal is only complementary, and certainly not as urgent as the support of regions where the transition will drastically change the lives, jobs and economic situation of their people. Yet, even if it may seem trivial to some extent for now, such a small addition could drastically change the course of the EU's unity.
Finally, the key for this project to really thrive is for each region to make adaptations according to its population (diversity, density, demographics), local conditions, access to labor and the preferences of its consumers (Martin et al., 2016).
Proposals and Benefits for the EU and its Member-States
In this unit, by providing the reader with several proposals, the focus will be on the technical issues regarding the implementation of the Vertical Agriculture Project and the potential benefits of its adoption for both the European Union and its member-states.
Proposals
While keeping in mind that each member-state (and each region of each member-state) has different needs and possibilities, the main goal is to present them with a basic model that they can either follow by the book or adapt with any necessary changes. The final call, though, remains at the discretion of the region or the State.
The buildings may keep their shape as it is, though they will obtain a new interior and exterior form. The main goal is to recreate those abandoned, semi-built or old buildings so that they become fully functional.
Regarding the energy goal, on the rooftop of each building there will be placed solar smart-flowers, a kind of photovoltaic that absorbs almost 50% more solar energy than a traditional photovoltaic. In that way, this renewable source of energy could be used in two ways: firstly, to provide the energy needed for the functioning of the building, and secondly, the rest could be used where necessary or stored for local use (Tatang et al., 2018).
As for the environment goal, the outer side of each building will be covered with plants of several different species, gaining the so-called vertical agriculture look while also respecting the biodiversity of each region. This will improve air quality, balance out the temperature of the nearby roads and blocks, and provide passers-by with a more beautiful sight.
Finally, and most importantly, for the agriculture and food goal, seasonal fruits and vegetables (like tomatoes, lettuce, cucumber, sweet potato and strawberries) are going to be cultivated inside those buildings vertically, using mainly aquaponics. Aquaponics is a combination of hydroponics and fish farming. It utilizes only approximately one tenth of the water that soil-based cultivation does, and is a process that does not use pesticides. Certain kinds of fish can be used for it, like tilapia (Al-Kodmany, 2018).
In each building there will be a system to monitor the crops, for the convenience of the people who work there, and a tank of used water or rainwater, which passes through a filter system and can then be used for the aquaponics. In that way, there will be no extra consumption of the city's water. Lastly, there will be a feeding system, where a mechanism directs a programmed amount of water and light to the individual crops. This process will hence give EU regions a self-sufficiency boost, producing a great deal of energy on their own while depending on renewable energy sources; it will also give them the opportunity to improve their productivity, provide their residents with jobs, and sell fresh, reasonably priced goods like vegetables, fish and fruits to the people in need, thus reaching the Farm to Fork goal of the Green Deal.
Benefits for the European Union
Firstly, and most importantly, this project both prevents a problem and provides a solution, while simultaneously creating a new area for the implementation of the Green Deal's goals. It is less time-consuming than any other project, since it is already fixed, while also being very adaptable and flexible. It has the potential to be applied almost everywhere.
While also focusing on huge projects and drastic changes, the EU will, at the same time, provide the regions with an opportunity to flourish at a local level, which in fact proves the inclusiveness and adaptability of its climate-neutral goals.
By investing in this project, the EU will also achieve a global goal, combining innovation and the protection of the environment in a project on such a numerically large scale. Nowadays, proving that it can stay strong and coherent despite the COVID-19 crisis and its aftermath is of vital importance.
Benefits for the Member States
Since it is only a proposal, member-states and their regions are not in the slightest obliged to accept and continue with it. Nevertheless, it is certain that they will. That is because, firstly, there will be a project already given as a proposal by the European Commission, which provides the opportunity for every region of every member-state to choose an already created and approved project, and secondly, because the aftermath of this project will only be positive. It will provide the people in need with jobs and food (thus reducing the unemployment rate and poverty) and give its citizens a cleaner atmosphere. These buildings could even become an attraction for tourists, boosting the local economy even more. Especially for member-states, such as Poland, that may not be ready for a wide transition just yet, starting with small regional projects could be a great solution for both the EU and the country itself.
Conclusions
Climate-neutrality by 2050 seems like a huge transition with many variables to consider. Different member-states have different potentials to reach this goal, which may lead to a multi-speed EU problem. Focusing mainly on regions (and on a larger scale, member-states) highly dependent on fossil fuels and carbon exploitation is a very reasonable and yet incredibly complex task, and there is no doubt that it will create different layers of development across the EU's regions. This Policy Brief proposed an additional, gentler approach through a small-scale vertical agriculture project, with a less massive impact than those mentioned before, but still with many benefits to provide. | 4,160.8 | 2021-07-28T00:00:00.000 | [
"Economics",
"Environmental Science",
"Political Science"
] |
hoDCA: higher order direct-coupling analysis
Background Direct-coupling analysis (DCA) is a method for protein contact prediction from sequence information alone. Its underlying principle is parameter estimation for a Hamiltonian interaction function stemming from a maximum entropy model with one- and two-point interactions. Vastly growing sequence databases enable the construction of large multiple sequence alignments (MSAs). Thus, enough data exists to include higher-order terms, such as three-body correlations. Results We present an implementation of hoDCA, an extension of DCA that includes three-body interactions in the inverse Ising problem posed by parameter estimation. In a previous study, these three-body interactions improved contact prediction accuracy on the PSICOV benchmark dataset. Our implementation can be executed in parallel, which results in fast runtimes and makes it suitable for large-scale application. Conclusion Our hoDCA software allows improved contact prediction using the Julia language, leveraging the power of multi-core machines in an automated fashion.
Background
Thanks to rapidly growing sequence databases, the prediction of protein contacts from sequence information has become a promising route for computational structural biophysics [1-4]. The so-called direct-coupling analysis (DCA) uses a multiple sequence alignment (MSA) to predict residue contacts in a maximum entropy approach. Its high accuracy was shown in various studies [5-11] and has also made it suitable for protein structure prediction software [12-14].
The DCA approach leads to a Potts model with the probability of a sequence σ = (σ_1, ..., σ_N) given as

P(σ) = exp(H(σ)) / Z, with H(σ) = Σ_i h_i(σ_i) + Σ_{i<j} J_ij(σ_i, σ_j),

consisting of local fields h_i and two-body interactions J_ij, with N being the length of the sequences. Z = Σ_{σ ∈ A^N} exp(H(σ)) is the partition function as the sum over all sequences where each position is chosen from the alphabet A. After estimation of the parameters h_i, J_ij from empirical sequences σ^(b), a contact prediction score for residues i and j can be obtained by taking the l₂-norm ‖J_ij‖₂. In a recent study [15], an improved prediction accuracy was shown by incorporating three-body interactions V_ijk(σ_i, σ_j, σ_k) into H, obtaining the three-body Hamiltonian

H₃(σ) = H(σ) + Σ_{i<j<k} V_ijk(σ_i, σ_j, σ_k).

Here, we present an implementation of this method, which we call hoDCA.

Implementation
hoDCA is implemented in the julia language (0.6.2) [16], and depends directly on a) the ArgParse [17] module for command-line arguments and b) the GaussDCA [18] module for performing preprocessing operations on the MSA, along with the implicit dependencies of those packages. A typical command-line call is

julia hoDCA.jl Example.fasta Example.csv -No_A_Map=1 -Path_Map=A_Map.csv -MaxGap=0.9 -theta=0.2 -Pseudocount=4.0 -No_Threads=2 -Ign_Last=0

with input Example.fasta and output Example.csv. The latter consists of a list of all two-body contact scores ‖J_ij‖₂ for residue pairs separated by at least one residue along the backbone. The meaning of the remaining (optional) parameters will become clear in the following.
General notes. For inference of the parameters h_i, J_ij, V_ijk, we use the mean-field approximation as described in [15], with a reduced alphabet for the three-body couplings. This is accomplished by a mapping μ: {1, ..., q} → {1, ..., q_red}, with q being the full alphabet size of the MSA and q_red ≤ q.
On the one hand, this accounts for the so-called curse of dimensionality [19], which occurs if the size of the MSA is too small to observe all possible q³ combinations for each V_ijk. On the other hand, it significantly reduces memory usage and allows for a faster computation of contact prediction scores. The mapping μ can be specified by Path_Map, which is a csv file with every row representing a mapping; No_A_Map tells which row to choose. As the bottleneck is still the calculation of the three-body couplings, it can be performed using parallel threads by specifying the No_Threads flag. In traditional DCA, the last amino acid state q usually represents the gap character and is not taken into account for score computation within the l₂-norm. In hoDCA, each two-body coupling state l ≤ q contains contributions from {n ≤ q | μ(n) = μ(l)} due to the reduced alphabet. We therefore take gap contributions into account by default, which can be changed by the Ign_Last flag.
MSA preprocessing. The MSA is read in by the GaussDCA module, ignoring sequences with a higher fraction of gaps than MaxGap, and is subsequently converted into an array of integers. However, in contrast to GaussDCA, we check for the actual number of amino acid types contained in the given MSA and then reduce the alphabet from q = 21 to the number of characters (amino acid types) present. Afterwards, the reweighting for every sequence σ^(b) is obtained by the GaussDCA module via

w_b = 1 / |{a ∈ {1, ..., B} : difference(σ^(a), σ^(b)) ≤ theta}|,

where the difference is computed as the percentage Hamming distance [6]. The aim of the reweighting is to reduce potential phylogenetic bias.
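As an illustration of this reweighting step, here is a minimal sketch (Python rather than Julia, independent of the GaussDCA internals; the toy MSA is a placeholder):

```python
import numpy as np

def sequence_weights(msa: np.ndarray, theta: float) -> np.ndarray:
    """Reweighting w_b = 1 / |{a : hamming(a, b) <= theta}|.

    msa   -- (B, N) integer-encoded alignment (B sequences of length N)
    theta -- similarity threshold as a fraction of positions
    """
    # Pairwise fraction of differing positions (percentage Hamming distance).
    # Note: this broadcast needs O(B^2 N) memory; chunk it for large MSAs.
    diff = (msa[:, None, :] != msa[None, :, :]).mean(axis=2)   # (B, B)
    # For each sequence, count how many sequences (including itself)
    # lie within the similarity threshold.
    neighbours = (diff <= theta).sum(axis=1)
    return 1.0 / neighbours

# Toy usage: 4 sequences of length 5 over a small alphabet.
msa = np.array([[1, 2, 3, 4, 5],
                [1, 2, 3, 4, 1],
                [5, 4, 3, 2, 1],
                [1, 2, 3, 4, 5]])
print(sequence_weights(msa, theta=0.2))   # -> [1/3, 1/3, 1.0, 1/3]
```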
Frequency computation. Empirical frequency counts for the full alphabet are computed according to [6] via

f_i(l) = (1 / (λ_c + B_eff)) [ λ_c/q + Σ_{b=1}^{B} w_b δ(σ_i^(b), l) ],
f_ij(l, m) = (1 / (λ_c + B_eff)) [ λ_c/q² + Σ_{b=1}^{B} w_b δ(σ_i^(b), l) δ(σ_j^(b), m) ],

with δ being the Kronecker delta, B the number of sequences in the MSA, B_eff = Σ_{b=1}^{B} w_b and λ_c = Pseudocount · B_eff. The Pseudocount parameter shifts the empirical data towards a uniform distribution. This is necessary to ensure invertibility of the empirical covariance matrix in the mean-field approach.
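A minimal sketch of the pseudocount-regularised one-point counts (Python, illustrative only; the two-point counts follow the same pattern):

```python
import numpy as np

def single_site_frequencies(msa, w, q, pseudocount):
    """Pseudocount-regularised one-point frequencies f_i(l).

    msa -- (B, N) integer alignment with states in {0, ..., q-1}
    w   -- (B,) sequence weights from the reweighting step
    """
    B, N = msa.shape
    B_eff = w.sum()
    lam = pseudocount * B_eff
    f = np.zeros((N, q))
    for l in range(q):
        # Weighted count of state l at every position, plus a uniform
        # pseudocount; normalised so each row of f sums to one.
        f[:, l] = (lam / q + (w[:, None] * (msa == l)).sum(axis=0)) / (lam + B_eff)
    return f
```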
Frequency counts for the reduced alphabet are computed analogously over the mapped states μ(σ). The computation of the three-point frequencies takes some time and is executed on No_Threads threads. For this, we parallelized their calculation over the sequence length N, meaning that the i-th process computes f^red_ijk for all k ≥ j ≥ i at fixed i. Besides the parallelization scheme, three-point frequencies are preprocessed in the same manner as one- and two-point frequencies.
Contact prediction scores. Contact prediction scores follow directly from the two-body couplings. Within the mean-field approximation, the two-body couplings are obtained as

J_ij(l, m) = −g_ij(l, m),

where g_ij(l, m) is the inverse of the empirical two-point covariance matrix e_ij(l, m) = f_ij(l, m) − f_i(l) f_j(m). The quantity g^red_ijk(α, β, γ) is given by a relation to the three-point covariance matrix over the reduced alphabet, where g_ij(α, β) is the inverse of the two-point covariance matrix over the reduced alphabet (see [15] for more details). For the calculation of scores, the J_ij are transformed into the so-called zero-sum gauge, satisfying Σ_{l=1}^{q} Ĵ_ij(l, ·) = Σ_{m=1}^{q} Ĵ_ij(·, m) = 0, where "·" stands for an arbitrary state, via

Ĵ_ij(l, m) = J_ij(l, m) − ⟨J_ij(·, m)⟩ − ⟨J_ij(l, ·)⟩ + ⟨J_ij(·, ·)⟩,

with ⟨·⟩ denoting the average over the marked indices. The purpose of the gauge transformation is to shift local bias from the two-body couplings into the local fields [8, 20]. The above calculations are the most time-consuming parts and run on No_Threads threads. The final scores result from the average product correction (APC) [21] of the l₂-norm via

F_ij = ‖Ĵ_ij‖₂ − ⟨‖Ĵ_i·‖₂⟩⟨‖Ĵ_·j‖₂⟩ / ⟨‖Ĵ_··‖₂⟩ and ‖Ĵ_ij‖₂ = ( Σ_{l,m=1}^{q} Ĵ_ij(l, m)² )^{1/2}.
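The APC step can be sketched compactly (Python, illustrative only; F stands for the matrix of norms ‖Ĵ_ij‖₂):

```python
import numpy as np

def apc(F: np.ndarray) -> np.ndarray:
    """Average product correction of a symmetric score matrix F:
    F_apc[i, j] = F[i, j] - mean_i(F) * mean_j(F) / mean(F).
    (For brevity the diagonal is included in the means here;
    implementations usually exclude it.)"""
    row_mean = F.mean(axis=1)      # <F_i.>
    total_mean = F.mean()          # <F_..>
    return F - np.outer(row_mean, row_mean) / total_mean

# Toy usage on a random symmetric "norm" matrix with zeroed diagonal.
rng = np.random.default_rng(0)
F = rng.random((5, 5))
F = (F + F.T) / 2
np.fill_diagonal(F, 0.0)
print(apc(F))
```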
Discussion
A performance benchmark on the PSICOV dataset [10], consisting of 150 proteins, is presented in [15]. For evaluating the performance on a single protein, the so-called area under the precision curve, A, was used, where C is the total number of contacts and p_i is the number of true positives among the first i predictions. Figure 1 shows the predicted contact map of the protein data bank entry 1fx2A as an exemplary case. For this particular protein, the classical two-body DCA has an A-value of A ≈ 0.5, while hoDCA shows a superior A ≈ 0.72. Interestingly, the majority of hoDCA's false positives are located in the lower and upper right corners of the contact map. We hypothesize that this finding is due to correlated gap regions in the corresponding MSA: for this particular pdb entry, many sequences were too short and had to be extended by gaps on both termini. This, in turn, leads to intra and inter correlations between the left and right termini. Figure 2 shows the two-point gap-gap frequencies of the non-preprocessed MSA (i.e. without sequence reweighting, pseudocount modification or deletion of sequences). As can be seen, there is indeed an accumulation of gap regions at the beginning and end of the protein, possibly leading to false correlations. Figure 3 shows the runtime behaviour of hoDCA when No_Threads threads are used for the calculation of the three-body terms. We used entry 1tqhA for the benchmark, which has one of the largest MSAs in the PSICOV dataset (N = 242, B = 18,170), and parameters as in Eq. (2). The overall speedup is about five-fold when executed on n ≥ 12 threads in comparison to a single CPU core. A fit of Amdahl's law, T = T₀ · (1 − p · (1 − 1/n)), with T₀ being the single-threaded runtime and n = No_Threads, reveals the proportion of parallelized routines as p ≈ 0.86. The serial runtime proportion of ≈ 0.14 comes mainly from the computation of the two-body terms. Also note that we did not modify the standard julia parameters, meaning, e.g., a parallel computation of the matrix inverse by default.

[Fig. 3] Runtime behaviour of hoDCA for PSICOV entry 1tqhA. The benchmark system was a Debian-operated server with two Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40 GHz. Runtimes were taken for julia-compiled code, thus potential initialization overhead is omitted. The solid line shows a fit of Amdahl's law.
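As a quick numerical check of the quoted figures (simple arithmetic, not taken from the paper):

```latex
% Amdahl's law with p = 0.86 evaluated at n = 12 threads:
\frac{T}{T_0} = 1 - p\left(1 - \frac{1}{n}\right)
            = 1 - 0.86\left(1 - \frac{1}{12}\right) \approx 0.212,
\qquad \text{speedup} = \frac{T_0}{T} \approx 4.7 .
```

This reproduces the reported roughly five-fold speedup; the limiting speedup for n → ∞ is 1/(1 − p) ≈ 7.1.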
Conclusions
Higher-order interactions have been shown to have a strong influence on contact prediction in certain proteins [15, 22, 23]. Here, we implemented hoDCA, an extension of DCA that incorporates three-body couplings into the Hamiltonian. The accessible command-line user interface and the significant speedup from parallel execution make hoDCA suitable for contact prediction in a variety of proteins, using biochemically inspired alphabet-reduction schemes. We hope to have made this method easily accessible to other researchers with this software release.
Availability and requirements
Project name: hoDCA
Project home page: http://www.cbs.tu-darmstadt.de/hoDCA/
Operating systems: Linux, Windows, macOS
Programming language: julia (0.6.2)
Other requirements: julia packages ArgParse, GaussDCA
License: GNU General Public License v3, http://www.gnu.org/licenses/gpl-3.0.html
Any restrictions to use by non-academics: Any commercial use is subject to a contractual agreement between involved parties.
Abbreviations
APC: Average product correction; DCA: Direct-coupling analysis; MSA: Multiple sequence alignment | 2,377.2 | 2018-12-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Research of LOB Data Compression and Read-Write Efficiency in Oracle Database
Aiming at the problems of huge storage space, low exchange speed and low read-write speed in a specific Oracle database, read-write and exchange speed tests are performed on compressed and uncompressed Clob and Blob data using three compression algorithms: Bzip2, Gzip and GzipIO. The read speed test is performed with the direct read, substr read, and substr+threadPool read techniques. The results show that: (1) Blob is superior to Clob in terms of storage, exchange and read-write speed; (2) for the specific database, Blob+Gzip is the optimal storage structure for the minute and second data. The read-write speed is greatly improved, and the overall capacity of the database is reduced to 7% (or less). The exchange rate of the second data is at least 7.89 times the present rate, and the station data can be exchanged to the disciplinary center within 2-3 hours (currently 1.5 days); (3) the simplest and most widely used direct read method has poor database read efficiency, while the substr+threadPool technique shows higher database read efficiency for both Clob and Blob, compressed or uncompressed, which brings a leap-forward improvement in the read speed of LOB data. The results of this paper provide a valuable reference for LOB data storage design and software development.
Introduction
At the end of 2007, the "10th Five-Year" system of a specific network was officially completed and put into operation. The software system is a four-level interconnected distributed system consisting of station, provincial bureau, national center and disciplinary center levels. In order to facilitate data exchange at all levels, a unified database management system (Oracle10g) and a unified database table structure are adopted nationwide (Zhou Kechang et al., 2009; Liu Gaochuan, 2008). There are two main software systems: the management system (B/S architecture, running on the server) and the processing system (C/S architecture, running on client PC machines). The former is responsible for daily data collection and storage, while the latter is responsible for daily data preprocessing and product data calculation. The management system exchanges the station data to the provincial bureau, national center, and disciplinary center on a daily schedule (Liu Gaochuan, 2008).
The national center is the collection center for specific data across the country and hosts the largest specific database. As of August 2018, the data outputted by 3328 sets of observation instruments (364 sets of second-sampling instruments, 2126 sets of minute-sampling instruments, and 838 sets of hourly and daily sampling instruments) is stored in the database. The total database size is about 8000 GB, and it is still increasing by 800 GB every year. In addition, the data with a time resolution of minutes and seconds accounts for more than 95% of the total space of the database.
As all the minute and second data is stored in the format of "uncompressed Clob+Ascii", the database suffers from huge data storage space, low data exchange speed, low read-write speed, and operation and maintenance difficulties. For example, it takes about 4 minutes for the processing system to remotely read the second-sampling data of 6 elements of one instrument, and it takes at least 1.5 days to exchange the updated station observation data to the disciplinary center. Besides, 10 days are required to continuously copy a cold backup of the national central database (8000 GB) to another server, during which the database and all services must be shut down. This cold backup method is obviously unrealistic; the hot-backup system (autonomously developed for the specific system) can only correspond to one server due to software limitations. If there is a problem with both the main database and the backup database, data loss will be catastrophic.
With the development of the information society, people are faced with rapidly growing information, and the pressure of storing, transmitting and processing such massive information is increasing. In this case, data compression is an inevitable choice. In order to save information storage space and improve information transmission efficiency, a large amount of actual data must be effectively compressed. Data compression has been highly valued as a supporting technology for the storage and transmission of massive information (ZHENG Cui-fang, 2011).
Data compression techniques are generally classified into lossy compression and lossless compression. Lossless compression means that the reconstructed compressed data (restored by decompression) must be identical to the original data, and is suitable for cases where the reconstructed signal is required to be identical to the original signal (LI Lei-ding et al., 2009; ZHENG Cui-fang, 2011). Lossless data compression algorithms are mainly divided into two categories according to the compression model: statistical compression algorithms and dictionary compression algorithms. Statistical compression algorithms mainly include run-length coding, Huffman coding, arithmetic coding, etc.; dictionary compression algorithms mainly include LZ77, LZ78, LZW, LZSS, etc. (LI Lei-ding et al., 2009; XU Xia et al., 2009; ZHENG Cui-fang, 2011; ZHANG Ai-hua et al., 2017). The compression algorithm must be able to provide a high data compression rate to support the real-time mass data storage characteristics of the database, and both the compression and decompression processes must offer good speed performance (LIU Hong-xia et al., 2010).
Bzip2 is a data compression algorithm and program developed by Julian Seward and released under a free software/open source software agreement. Seward released Bzip2 0.15 for the first time in July 1996. In the following years, the stability of this compression tool improved and it became more popular. Seward released Version 1.0 and Version 1.0.3 in 2000 and 2007 respectively (J. Seward, 2002, 2007). Bzip2 is a lossless compression algorithm based on the Burrows-Wheeler Transform (BWT). With its compression rate advantage, it has been widely applied. BWT is a transform method independent of the internal repeatability of data, and it can effectively bring together the same characters in data to create conditions for further compression (Li Bing et al., 2015). Bzip2 is able to compress common data to 10% to 15% of its original size, and offers high compression and decompression efficiency. It is widely used in many versions of UNIX & LINUX, and supports most compression formats, including tar and Gzip. Its main advantages include: Bzip2 is open source and free of charge; it supports repairing media errors, so when it is required to obtain the data in an erroneous compressed file, Bzip2 can still perfectly decompress the unbroken part; and it can run on any 32-bit or 64-bit host containing an ANSI C compiler (Jeff Gilchrist, 2008; V. Pankratius et al., 2009; M. Mccool et al., 2012; JS Salazar et al., 2017).
The SharpZipLib library provides two data compression algorithms, Bzip2 and Gzip, which can be called through the dll interface file ICSharpCode.SharpZipLib.dll. The System.IO.Compression namespace of Microsoft .Net also provides another Gzip compression algorithm, which is referred to as GzipIO in this paper.
In Oracle databases, Clob and Blob (abbreviated as LOB) are two typical large object data storage structures, which are widely used in databases at all levels. Clob can only store single-byte character data, and is mostly used to store long text data. Blob is used to store unstructured binary data, mainly including formatted images, videos, audio and Word documents (NIE Hong-mei et al., 2006; ZHANG Jing et al., 2011; ZHANG Hui et al., 2012; XIE Yi et al., 2015).
Based on the Microsoft .Net development platform, this paper uses the Bzip2, Gzip, and GzipIO compression algorithms to test and compare the read-write speed and exchange speed of compressed and uncompressed Clob and Blob data. Three techniques, including direct read, substr read and substr+threadPool read, are applied in the read speed test. The advantages and disadvantages of each compression algorithm and database read method are summarized in order to determine the "optimal" compression algorithm and database read method for the specific database.
Test data and research method
1. Test data
The test data selected in this paper are 6 elements of minute and second data over 31 days, outputted by one instrument from January 1 to January 31, 2009. The instrument adopts second sampling, and each element includes 86400 second-sample data points per day. The minute data is calculated from the second-sample data through Gaussian filtering, and each element includes 1440 minute-sample data points per day.
All the tests in this paper were completed in an office in Lanzhou, Gansu Province. The local server is located in the information room of the work unit, while the remote server is located in the information room of a research institute in Beijing. The test data and table structures of the local and remote databases are identical. The test software is client software written on the Microsoft .Net development platform (running on an office PC).
Minute and second data table structure
The minute and second data table structure in the specific database is shown in Table 1. The observation data is stored in the format of "uncompressed Clob+Ascii", and one record is made by each set of instruments per day.
LOB data compression and decompression method
After the interface file ICSharpCode.SharpZipLib.dll provided by SharpZipLib is referenced in Microsoft .Net, the BZip2OutputStream and BZip2InputStream methods are called through the namespace ICSharpCode.SharpZipLib to complete Bzip2 compression and decompression, and the GzipOutputStream and GzipInputStream methods are called to complete Gzip compression and decompression, respectively. GzipIO compression and decompression are completed by calling the System.IO.Compression.GzipStream method.
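For readers who want to reproduce the comparison outside the .Net stack, an equivalent lossless round-trip can be sketched with Python's standard-library bz2, gzip and zlib modules (a minimal illustration, not the paper's test code; the payload is a placeholder):

```python
import bz2
import gzip
import zlib

# A stand-in for one day of Ascii minute/second data (placeholder payload).
payload = ("20090101 " + "12345.6 " * 86400).encode("ascii")

for name, compress, decompress in [
    ("bzip2", bz2.compress, bz2.decompress),
    ("gzip",  gzip.compress, gzip.decompress),
    ("zlib",  zlib.compress, zlib.decompress),
]:
    blob = compress(payload)
    assert decompress(blob) == payload        # lossless round-trip
    print(f"{name}: {len(blob) / len(payload):.2%} of original size")
```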
Database connection and LOB read-write method
The Microsoft .Net framework uses ADO.NET to access the database: OracleConnection for the database connection, OracleDataAdapter and DataTable for LOB data reading and temporary storage, and OracleCommand for LOB data writing.
1) Direct read: For software developers, the simplest and most common database read method for LOB data is to read directly using Select LobName.
2) Substr read: Both Clob and Blob can use the substr function in Oracle's own DBMS_LOB package to read data segmentally, namely Select DBMS_LOB.substr(lobName, n, pos), where lobName is the LOB field name, n is the number of bytes to read, and pos is the starting position of the read. The maximum length that Clob can read is 4000 bytes at a time, while the maximum length that Blob can read is 2000 bytes at a time. Therefore, the substr read must be executed cyclically, and the starting position (pos) must be reset each time. After the loop, the data segments of the same date are spliced in order.
3) Substr+threadPool read: The ThreadPool class provides thread-pool management in Microsoft .NET. The SQL statements from the substr read are placed into the thread pool in turn, which executes the substr reads in parallel (multiple threads reading at the same time). A separate sub-thread class is required, which creates a new database connection to execute the SQL statement of each substr read; after all tasks are added to the thread pool, a while loop is used to wait until all threads have finished before performing subsequent operations.
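The same pattern can be sketched in Python (illustrative only: fetch_chunk is a hypothetical helper standing in for one Select DBMS_LOB.substr(...) round trip on its own connection, and the chunk size follows the Blob limit quoted above):

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 2000  # bytes per call; 4000 for Clob, 2000 for Blob as noted above

def fetch_chunk(pos: int) -> bytes:
    # Hypothetical stand-in for executing
    #   Select DBMS_LOB.substr(lobName, CHUNK, pos)
    # on its own database connection; replaced here by a dummy buffer.
    return DUMMY_LOB[pos - 1 : pos - 1 + CHUNK]

def read_lob(total_len: int, workers: int = 8) -> bytes:
    # Oracle's DBMS_LOB.substr positions are 1-based.
    positions = range(1, total_len + 1, CHUNK)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so the chunks splice back correctly.
        chunks = pool.map(fetch_chunk, positions)
    return b"".join(chunks)

DUMMY_LOB = bytes(range(256)) * 100   # placeholder 25600-byte "LOB"
assert read_lob(len(DUMMY_LOB)) == DUMMY_LOB
```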
LOB data compression and exchange speed test
2.1 LOB data compression test
Bzip2, Gzip and GzipIO compression algorithms are used to test the minute and second data outputted by one instrument in January 2009. Table 2 shows the average compression rate per record. For both minute and second data, the compression rate of Bzip2 is the highest, followed by Gzip and GzipIO; the compressed record capacity of Bzip2 is the smallest, meaning its compressed records occupy the least storage space. However, Bzip2 takes the longest compression and decompression time, far longer than the other two algorithms, which means the database read (decompression) and write (compression) operations consume more time. The compression time of Gzip is about 2.5 times that of GzipIO, but the difference in their decompression times is very small, while Gzip's binary compression rate for minute and second data is 5% and 3% higher, respectively.
LOB data exchange speed test
Currently, the specific management system software adopts the "dbLink+Insert" technique for data exchange. The core command of the data exchange is "insert into XX select * from XX@dbLinkName", where dbLinkName is the dbLink of the remote database. The statement directly inserts the data from a remote table into the same table in the local database (when records already exist locally, they are first deleted and then re-inserted). Under the current specific data exchange mechanism, there is no need to parse the LOB data during the exchange process. Therefore, the compression and decompression efficiency has no effect on the exchange speed, and only the capacity of each record affects the exchange speed. After logging into the remote (Beijing) database, the command is run directly, and its execution time is taken as the actual exchange time, which refers to the average time for each record to be transmitted from the local site (Lanzhou) to the remote site (Beijing). The data exchange speed test results for an instrument in Lanzhou in January 2009 are shown in Table 3. The estimated exchange rate = capacity of "uncompressed Clob" / other capacity, and the actual exchange rate = exchange time of "uncompressed Clob" / other exchange time. Table 3 shows the average data exchange speed per record. For both Clob and Blob, the actual exchange rate of the three compression algorithms for minute and second data is not as good as the estimated exchange rate; the actual exchange rate of the Blob-compressed second data is improved 7-9 times, but the actual exchange rate of the minute data is only slightly increased. Comparing the uncompressed Blob and Clob structures for second data, the actual exchange rate is improved by 1.84 times for the same storage capacity.
Read-write speed test for direct read method
The direct read method is used to test the read-write speed of the four storage structures (three compressed structures + one uncompressed structure). Table 4 shows the read-write speed test results for the four storage structures.
(1) The database write speed of GzipIO is the highest for both Clob and Blob, closely followed by Gzip; the difference between the two is very small. As Bzip2 requires a long compression time, its database write speed is much lower than that of the others.
(2) The database read speed in Bzip2 for Clob and Gzip for Blob is the highest. Even if it is sometimes slower than other methods, the difference between the read speed and the highest speed is the smallest.
(3) For the same compressed or uncompressed structure, the database write speed of the two LOB types is basically the same, but the database read speed of Blob is much higher than that of Clob.
Read speed test for three LOB read methods
The three LOB read methods are used to test the read speed of the compressed and uncompressed structures. Table 5 shows the read speed test results for the three LOB read methods.
(1) The direct read method has the worst read efficiency for the uncompressed Clob, with a read speed much lower than that of the other two methods. For Blob, apart from its high local read speed for one-day second data, its read efficiency is almost always the worst, and as the number of read days increases, the gap between its read speed and that of the substr+threadPool method widens.
(2) The substr read method reads the uncompressed Clob significantly faster than the direct read method, but it is unstable when reading Blob second data, and its read time is often much longer than that of the other methods.
(3) The substr+threadPool method has the highest read speed for the uncompressed Clob, far ahead of the other two methods. Even when it is occasionally slower than another method, the gap between its read speed and the highest speed is the smallest.
(4) Whatever the read method, the read speed of Gzip is higher than that of GzipIO.
Overall, the substr+threadPool method delivers the highest database read efficiency of the three methods, whether for Clob or Blob, compressed or uncompressed; in particular, its read speed for the uncompressed Clob is far higher than that of the other two methods. The storage structure "Blob+Gzip" combined with the "substr+threadPool" read method therefore makes the read performance of the specific database optimal.
Discussion
The tests show that Blob is superior to Clob in storage performance, but Clob has advantages in improving the retrieval speed of long text data (Zhang Jing et al., 2011). The above test results again verify that conclusion. Blob is superior to Clob in terms of storage, exchange and read-write speed, but the "uncompressed Clob+Ascii" format can use the DBMS_LOB.substr function to read partial data (obtaining the starting position of each value from separators), and such a partial read is much faster than reading the whole record, which Blob cannot achieve because of its binary storage. For the specific database, however, reading only part of a record is rare; the vast majority of practical applications require an overall read (data processing, drawing, downloading, etc.). The ideal compression algorithm would combine the highest compression rate with the highest compression and decompression speed, which is difficult to achieve in practice. Bzip2 has the highest compression rate but long compression and decompression times; Gzip and GzipIO have short compression and decompression times but slightly lower compression rates. Good speed is needed for both compression and decompression, and the two goals pull in opposite directions; the study of compression algorithms aims to find the balance between them and achieve optimal overall performance (Liu Hong-xia et al., 2010). If only the data read-write speed is considered, Gzip and GzipIO are better than Bzip2. Compared with GzipIO, Gzip is superior in read speed and GzipIO in write speed; the difference between the two in read-write speed is very small, but the compression rates of Gzip for minute and second data are 5% and 3% higher than those of GzipIO, which saves more disk storage space and provides faster data exchange.
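For reference, such a partial read can be expressed directly in SQL, as in the hedged sketch below; the table, column names and predicate are placeholders. DBMS_LOB.SUBSTR(lob, amount, offset) returns only the requested slice, which works on a character Clob with a known separator layout but not on a binary-compressed Blob.

    -- Illustrative only: read one slice of a separator-delimited Clob record.
    SELECT DBMS_LOB.SUBSTR(obs_value, 4000, 1) AS first_chunk
      FROM qz_minute_data
     WHERE station_id = '62001'
       AND obs_month  = '2009-01';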
For the specific database, if the Blob+Gzip storage structure is adopted, the overall capacity of the database is reduced to 7% of the original (or lower), and the data read-write speed is greatly improved. The second-data exchange rate is at least 7.89 times the present rate, so the station data can be exchanged to the disciplinary center in the shortest time, improving the timeliness of the specific data. At present it takes 1.5 days to exchange the data from the station to the disciplinary center, and generally four exchanges per day are performed. After the compressed structure is adopted, more than 24 exchanges per day can be performed (once per hour), and exchanging the station data to the disciplinary center will take only 2-3 hours.
The direct read method is the simplest and most widely used database read method among software developers, but it is less efficient. The substr read method can read a maximum of 4000 bytes at a time from a Clob and a maximum of 2000 bytes from a Blob. For the same record capacity, the number of read cycles for Blob is therefore twice that for Clob, which reduces the read efficiency of Blob. This is likely the root cause of the unstable performance and frequently longer read times of the Blob second-data read compared with the other two methods. The substr+threadPool method adopts a multi-thread parallel read technique, which makes up for exactly this deficiency and shows high read efficiency for both Clob and Blob, compressed or uncompressed.
The disadvantage of the substr+threadPool method is that it consumes a large number of database connections during reading, so the database must have enough open cursors (Open_Cursors). Thread pool management in .NET has a default limit of up to 25 threads per available processor, and the maximum number of concurrent threads we have monitored so far is only 19. That is, although the total number of threads opened during a LOB read may be as high as 200 to 300, in fact at most 25 threads can read concurrently while the others wait. The total number of Open_Cursors (in the national specific Oracle database) is set to 30000, so about 1200 users can be supported to read data simultaneously by this method, and database access must take place within the specific industry network. This configuration is sufficient to support the substr+threadPool method within the specific system.
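The worker-thread ceiling can be inspected (and, if desired, raised) through the ThreadPool API; the snippet below is a generic illustration rather than code from the system described here. The 25-threads-per-processor default mentioned above corresponds to early .NET Framework versions.

    // Inspect the thread pool limits that cap concurrent substr reads.
    using System;
    using System.Threading;

    class PoolInfo
    {
        static void Main()
        {
            ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
            ThreadPool.GetAvailableThreads(out int freeWorkers, out int freeIo);
            Console.WriteLine($"worker threads: {freeWorkers} free of {maxWorkers}");
            // Raising the ceiling (use with care: more threads mean more
            // open connections and cursors on the database side):
            // ThreadPool.SetMaxThreads(maxWorkers * 2, maxIo);
        }
    }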
Conclusion
Aiming at the problems of huge storage space, low exchange speed and low read-write speed in the current specific Oracle database, read-write and exchange speed tests were performed on compressed and uncompressed Clob and Blob data using three compression algorithms: Bzip2, Gzip and GzipIO. The read speed tests used the direct read, substr read and substr+threadPool read techniques. The results show that: (1) Blob is superior to Clob in terms of storage, exchange and read-write speed.
(2) For the specific database, Blob+Gzip is the optimal storage structure for the minute and second data. The read-write speed is greatly improved, and the overall capacity of the database is reduced to 7% of the original (or less). The exchange rate of the second data is at least 7.89 times the present rate, and the station data can be exchanged to the disciplinary center within 2-3 hours (currently 1.5 days).
(3) The direct read method, the simplest and most widely used among software developers, has poor database read efficiency, whereas the substr+threadPool technique shows higher read efficiency whether for Clob or Blob, compressed or uncompressed, bringing a leap-forward improvement in the read speed of LOB data.
"Computer Science"
] |
Customization, extension and reuse of outdated hydrogeological software
A. Serrano-Juan, R. Criollo, E. Vázquez-Suñé, M. Alcaraz, C. Ayora, V. Velasco, L. Scheiber, 2020. DOI: 10.1344/GeologicaActa2020.18.9. CC BY-SA
INTRODUCTION
Over the past few decades, the rapid evolution of computer processing power has enabled the scientific community to solve various problems in the vast variety of geoscience fields, such as mineralogy, petrology, geochemistry, geology, geophysics, hydrology, and hydrogeology, among others. As a result, most scientists are aware of the importance of computer-aided analysis, since geoscience algorithms manage many variables, resulting in laborious calculations that are impossible to conduct without a computer tool.
For decades, scientists have searched for repeatable and predictable processes that would improve the productivity and the quality of computer architecture and programming languages to facilitate geoscientific calculations. The first programming languages, such as FORTRAN, COBOL, and BASIC, appeared in the mid-1960s and were widely used until the 1990s. Most were devised for the creation of individual programmes for handling specified tasks and short sets of data (at that time, data were limited and sometimes difficult to collect). The compilers generated the well-known ".exe" files, which typically required additional ".txt" files, such as input, output or conditional data, during execution (Wang et al., 2012), resulting in a set of many files that contained the information for one analysis. Many geoscientists have developed tools based on these programming languages (e.g. Bea, 2009).
The development of new technologies in both computer architecture and programming languages continues apace, thereby modifying the landscape. Current programming languages such as Python, Matlab, Visual Basic and Visual C are known as visual languages and are more user-friendly than their predecessors. Most integrate all the required information (e.g. input, output, and sources) into a single file and enable the user to directly conduct the whole analysis. Furthermore, the higher computing power has been accompanied by increasing data availability. In the last few decades, digital data collection, aggregation and integration have increased exponentially (e.g. streaming in from a growing number of satellites and sensors and the Internet). Geoscientists are overrun by data while having access to ever-increasing computing power.
In addition, Graphical User Interfaces (GUIs) became commonly used to facilitate rapid, rigorous and interactive analysis (Jones et al., 2014). Many GUIs have been developed in geoscience (e.g. Phong et al., 2012) to make software more user-friendly (e.g. screen selection of the input and output arrangement for instant comprehension of the results). Since new software programmes are dynamic, visual and interactive, some old-fashioned programming-language-based software programmes, such as FORTRAN-based programmes, are becoming outdated due to their complex analysis processes (preparing input text files, analysing the output text files and displaying limited graphical options). However, despite the limitations of these geoscientific software programmes, some remain the best option for resolving specified problems.
The academic (e.g. Ibrahim, 2009) and scientific communities (e.g. Asuncion, 2013) have also widely accepted the combination of spreadsheets with Visual Basic for Applications (VBA) for the development of software applications. This acceptance has mainly occurred because i) spreadsheet interfaces are user-friendly and facilitate numerical and statistical computations; ii) data can be easily queried, analysed and visualized; iii) a macro programming interface provides satisfactory end-user guidance that facilitates the user in writing correct and more reliable programmes (Cunha et al., 2014); iv) this approach saves time due to its low barrier since most researchers are already adept at manipulating spreadsheets and v) there are available tools that have been specially designed for the correction of potential errors (Jannach et al., 2014) and inconsistent data storage (Cunha et al., 2014). Consequently, a substantial variety of new tools are available that facilitate geoscientific calculations (e.g. Aliane, 2010; Jones et al., 2014; Wang et al., 2013). In hydrogeology, many spreadsheets have been developed for the facilitation of calculations in the analysis and interpretation of pumping tests, hydrogeochemical data, and analytical and numerical solutions for groundwater flow and pollution problems, among others (e.g. Elmore, 2007; Molano, 2013).
For instance, MIX (Carrera et al., 2004) is a FORTRAN-based software that computes mixing ratios with uncertain end-members. It is the only available tool that estimates mixing ratios while considering the uncertainty in the end-member concentrations. However, the use of MIX is highly time-consuming since it is difficult to prepare all input text files (MIX is highly sensitive to typing errors, among other errors) and it is difficult to analyse the output files (which contain more than 10,000 text lines). Thus, it is necessary to improve MIX to automate input and output data treatment, reduce errors and accelerate the analysis. More information about the code can be found in Carrera et al. (2004) and Vázquez-Suñé et al. (2010), as well as in its previous applications to real case studies (Canovas et al., 2012; Jurado et al., 2016; Scheiber et al., 2018; Tubau et al., 2014). Additional examples are EasyQuim and EasyBal. EasyQuim is a widely used tool (see section 3.4) for representing hydrochemical data and performing calculations such as ionic relationships, unit conversions and balance errors. However, EasyQuim was initially designed to plot up to 24 samples, while current projects typically collect many more samples. A similar difficulty is encountered with EasyBal, which is a software that evaluates the water balance per unit of soil. In this case, the programme is limited by a rigid data period range and requires a tedious input data process.
The scientific community is highly specialized. The combination of the field of research, the site of research and the tools that are utilized renders the scientist the most specialized person in his or her field of research and in the application of the tools that he or she uses in a specified site. Thus, he or she is the most suitable person for improving his or her tools by overcoming their limitations to realize faster and higher quality analysis. However, most scientists are not software developers. Hence, it is necessary to provide them with an easy approach that enables non-software developers to improve and customize their tools. This paper presents an approach for easily improving and customizing any hydrogeological software. It is the result of experiences with updating several interdisciplinary case studies. Since the programming language differs among case studies, it has been possible to determine whether this approach can be generalized. The main insights of this approach have been demonstrated using four examples: MIX (Carrera et al., 2004) (FORTRAN-based), BrineMIX (C++-based), EasyQuim and EasyBal (both spreadsheet-based). However, only MIX will be discussed in detail to enable the reader to easily follow a step-by-step application of the presented approach. This paper also attempts to answer the following research questions: Q1) Is it possible to easily update any hydrogeological software via this approach? Q2) Do the improved versions lead to fewer errors during the analysis compared to the original approaches? Q3) Are end users more efficient when using an improved version than when using the original version?
METHODOLOGY

General systems development
In both Object-Oriented Analysis (OOA) and the Systems Development Life Cycle (SDLC), programme creation can be regarded as the flow process ID → GUI → DT → RUN → GUI, where ID is the identification of the problem (SDLC (1), OOA (1)), GUI denotes the graphical user interface (SDLC (2), OOA (2)), DT represents the required data treatment, and RUN describes the solution computation (SDLC (3-5), OOA (3)). The maintenance phase of the SDLC has not been included.
The first step is problem identification (ID), which facilitates understanding of the problem and answering questions regarding, e.g., the available information and the desired outcome. Only when the programmer truly understands the nature of the problem is it possible to identify the necessary and available information, display it, arrange it, request it and determine which options should be offered. After identification, a GUI should be designed. Through this interface, the programme requests the input data, visualizes the output data and offers the possibility of setting up any option that the programme offers. Finally, all the data requested in the GUI may require Data Treatment (DT) to reach a format suitable for computation (RUN). Afterwards, the output data should again be displayed in the GUI, thereby maintaining a continuous interaction between the GUI and the computation of the programme.
Based on this scheme, an updating approach has been established as a decision flow chart (following Unified Modeling Language, UML, standards), as software programmes differ and require various types of updating (Figure 1).
Updating Approach
This paper presents an approach for easily improving and customizing software. This chapter follows the decision flow chart in Figure 1, and it describes each step and discusses the flow options. All the presented codes have been developed to run in an MS Excel environment.
Problem identification (ID). To fully investigate the problem, four main issues should be addressed: 1) input and output data, 2) computation, 3) improvements and 4) communication.
1) INPUT DATA AND OUTPUT DATA. What is computed? It is necessary to clearly identify all the data involved in the process, which include not only the required data but also the available data.
2) COMPUTATION. How is the result computed? Which software programmes are involved in the computation? Is it possible to recompile the available code (access to the source code)? At this step, the developer should understand how the programme works, the complexity of the algorithm, the accessibility (open code access or not) and the possibility of combining various software programmes to define various software configurations, among other aspects.
3) IMPROVEMENTS. Are any changes needed? Which improvements are possible? The strategy is not just to reuse and adapt outdated software but to add new features and functionalities that will improve the performance of the analysis (e.g. allowing data storage, enhancing graphic outputs or connecting the results to other software platforms such as Geographic Information Systems, GIS).
4) COMMUNICATION. What do I know? What does the final user know? Finally, it is necessary to understand who will use the software and to consider factors such as background knowledge (both in computers and in science) and language. The ways in which information is solicited and displayed are significant. The ID process takes longer than the subsequent steps, as the future GUI, the input and output DT and how the solution will be computed are all defined here.
A suitable input GUI (GUI_IN) should request and display sufficient data while being aesthetically pleasing, comprehensible, simple and responsive.
5) INPUT DATA AND OUTPUT DATA. From where is the input information obtained? In hydrogeology, the information is commonly obtained from maps (GIS), tables (matrices) or independent numbers (cells or input boxes), or is selected from an available dataset (e.g. buttons or lists). The process is similar for the output, where results of the analysis are commonly displayed as maps (GIS) or tables (matrices).
6) COMPUTATION and IMPROVEMENTS. How is the analysis conducted? VBA offers a large set of options, such as button clicks or events (e.g. when adding information or modifying the content of a cell).
7) COMMUNICATION. Again, who is the final user? Many options are available for displaying information or for ordering and selecting it. The MS Excel environment can substantially improve the power of the analysis by considering whether the results should be static (e.g. simple tables or maps) or dynamic (e.g. pivot tables and charts). Finally, it is necessary to adapt all the new programme capabilities to the knowledge of the final user. Verplank (1985) and Marcus (1995) defined general principles of GUI design and its effectiveness in visual communication. In addition, many reliable resources are available on the Internet, such as the Jisc Digital Media website (2019).
The input data are typically the available data, which are not necessarily the required data. As these available data are not always provided in the correct order, 8) INPUT DATA TREATMENT (DT_IN) is essential. Depending on the programme, filters, calculations, unit conversions and data rearrangement will sometimes be necessary to prepare the required input for analysis, whereas in other programmes the input will already be in the desired format. Non-Excel-based programmes will need to 9) EXPORT THE INPUT DATA (EX_IN) in various formats and call external executables to perform the analysis, whereas Excel-based programmes (e.g. solvers, macros) will 10) RUN (EXTERNAL RUN (RUN_EXT) and INTERNAL RUN (RUN_INT)) as a matter of course. Conversely, depending on the computational core format, 11) the OUTPUT DATA will be IMPORTED (IM_OUT) into the GUI or prepared to be computed by another external programme. As not all the output must be presented in the GUI, the Output Data (OD) can also be partially disregarded, rearranged into new tables and plotted. This 12) OUTPUT DATA TREATMENT (DT_OUT) is typically necessary to satisfy the 13) GUI output (GUI_OUT) requirements. Occasionally, it will be interesting to export the results to other software or platforms to obtain additional results and conduct in-depth analyses (e.g. connecting to GIS platforms adds a time-space dimension). Common considerations during the DT process are the decision to use code to evaluate formulas and create objects or to use pre-set formulas and charts in the spreadsheets. Typically, data storage (input/output) will be necessary before the data are recalled by the GUI or exported in various formats. Additionally, during the reuse process, the time needed for the development of each step was analysed. According to the analysis, the conceptual model design (identification of the problem, design of the GUI and identification of the necessary DT) requires more time than coding. Along this line, Buccella et al. (2013) present similar time distributions in their reuse-development case study in GIS, which is also similar to the Rational Unified Process (RUP) hump chart (Kruchten, 2003) and the unified process (Jacobson et al., 1999). Even though user experience can significantly affect the total time needed to improve any software, the time task distributions typically remain the same.
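As a concrete illustration of the EX_IN step, the VBA sketch below dumps an input matrix from a worksheet to a plain-text file for an external executable. The sheet name, file name and tab-separated layout are assumptions for illustration, not the actual code of any of the tools discussed here.

    ' Hypothetical EX_IN step: write the input matrix to a text file.
    Sub ExportInputFile()
        Dim ws As Worksheet, r As Long, c As Long
        Dim f As Integer, rowText As String
        Set ws = ThisWorkbook.Worksheets("InputMatrix")  ' assumed sheet name
        f = FreeFile
        Open ThisWorkbook.Path & "\model_input.txt" For Output As #f
        For r = 1 To ws.UsedRange.Rows.Count
            rowText = ""
            For c = 1 To ws.UsedRange.Columns.Count
                rowText = rowText & CStr(ws.Cells(r, c).Value) & vbTab
            Next c
            Print #f, rowText
        Next r
        Close #f
    End Sub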
APPLICATION EXAMPLES
Several application examples have been created and tested to develop the presented approach. The combination of spreadsheets and VBA has been used to implement the software improvement and customization.
The MIX software will be discussed at length to enable the reader to follow a step-by-step application process of the presented approach. This example will emphasize the improvements over the previous versions, e.g. automatic and instant graphical output interaction, automatic formula refill to avoid heavy documents, connection to non-Excel-based software such as FORTRAN or GIS, automatic graphical output generation, and automatic data selection and rearrangement.
Three additional examples will be briefly described to improve the understanding of how spreadsheet-based and C++-based software can be improved and customized via the same approach.
MIX 2.0
MIX (Carrera et al., 2004) was created for the assessment of a methodology for computing mixing ratios with uncertain end-members. Problem identification (ID). 1) INPUT DATA AND OUTPUT DATA. The input file contains information about different waters, which can be divided into "end-members" and "samples". Additional information, such as restrictions (impossible mixing ratios) or known mixing ratios, can also be set. 2) COMPUTATION. Since the software was developed in FORTRAN, it requires one input file and generates two output files, both of which are very long (the output files can exceed 10,000 lines). MIX considers three degrees of freedom in the generation of the input matrix: the number of chemical species, the number of wells and the number of end-members. This matrix, plus the user's decision to include initial solutions and restrictions, results in a complex input generation process. Moreover, the input file is highly sensitive to typing errors. The source code is not available for recompiling changes. 3) IMPROVEMENTS. The first requirement is automatic input file generation to eliminate typing errors. The new MIX should also offer the possibility of using the main input matrix as a database, allowing the user to select the chemical species, wells or end-members to consider in the analysis. A selection of the analysis results should be automatically displayed in the spreadsheet. 4) COMMUNICATION. The final user should barely feel, or not feel at all, that he or she is working with multiple platforms (in this case, MS Excel and FORTRAN). Moreover, the output files can be lengthy and monotonous to read, with much information that is unnecessary for the analysis. For a standard analysis, only a selection of the data from these files should be displayed, in tables and in various types of charts.
To design the GUI, both spreadsheets and a UserForm were chosen. This enables the user to predefine the magnitude of the problem in the UserForm and to represent the input data in the spreadsheet as a matrix. 5) INPUT DATA AND OUTPUT DATA. In this case, the input data tables are established in separate spreadsheets (concentrations, standard deviations, initial solutions and restrictions) and are activated when the user navigates through the buttons of the UserForm. The input information can be set directly in the matrix or imported from a GIS (using macros that enable spatial selection and filling of the matrix). This GUI also offers the possibility of interacting with Windows by opening folders and available files. 6-7) COMPUTATION, IMPROVEMENTS and COMMUNICATION. After introducing the data, selecting the data suitable for analysis and setting up the desired options, 8) INPUT DATA TREATMENT transforms all these data into a single text file and changes the formats, data types and units. This is the real input file that is 9) EXPORTED AND CALLED by the FORTRAN-based programme MIX from the Excel environment 10) (RUN_EXT). The FORTRAN executable is automatically called by Excel (e.g. using the Shell statement), thereby giving the user the impression that he or she is not working with two software programmes. See the appendix for further information regarding the code. To 11) import the results (IM_OUT), 12) DT_OUT is required despite the difficulty of managing the data. Storing the input numbers of chemical species, wells and end-members in variables and using functions to find key words enable us to select the information that merits consideration. After the DT, two types of plots are generated: pie charts that show the proportions of the end-members in each sample (well) and scatter plots of measured versus calculated values for each chemical species. Additional results, such as the contributions to the objective function and the eigenvalues, are also presented in the form of tables. If the user wishes to revise the two complete output files, these files are imported as two spreadsheets, and the user can also access the MIX Windows folder where all the files are stored.
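A minimal VBA sketch of this RUN_EXT step is given below; the executable and file names are placeholders, not the actual MIX code (the appendix contains the authors' own fragments). It launches the external program with Shell and then polls until the output file appears before continuing with the import.

    ' Hypothetical RUN_EXT step: launch the external executable and wait.
    Sub RunExternalModel()
        Dim exePath As String, outPath As String
        exePath = ThisWorkbook.Path & "\mix.exe"         ' assumed location
        outPath = ThisWorkbook.Path & "\mix_output.txt"  ' assumed output file
        If Dir(outPath) <> "" Then Kill outPath          ' remove stale output
        Shell exePath, vbMinimizedNoFocus                ' start the FORTRAN run
        Do While Dir(outPath) = ""                       ' crude completion check
            Application.Wait Now + TimeValue("0:00:01")
        Loop
        ' ... continue with IM_OUT: import and rearrange the results ...
    End Sub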
One of the advantages of this case study is that the number of automatically generated plots and tables changes according to the data set input by the user. Another advantage is that the new version is connected to the GIS-based software QUIMET (Velasco et al., 2014) and AKVAGIS (Criollo et al., 2019). This enables us to 13) export data (GUI_OUT) as a spatial representation in GIS and to import selected temporal and spatial data from the GIS environment to fill the input data tables for analysis in the new MIX. Additionally, the programme enables the storage of large amounts of data (for use as a database) and the selection of a portion of the data for analysis. Last, a UserForm automatically appears when the programme starts, presenting the title, the logo and the designers of the programme. The UserForm can be set to disappear when the user clicks a button, or it can automatically vanish after a few seconds. All the presented UserForms can be minimized to avoid inconveniencing the user when he or she is checking the data. Figure 2 compares the input and output software environments of the old and new versions.
In summary, the new MIX version satisfies the need for a GUI by providing one based on the MS Excel environment. This GUI prepares input templates based on the user's requirements for the analysis in external software. A subset of the generated output is plotted and rearranged in the GUI, while the user can still check the complete output data files. Additional advantages are its potential use as a database (by providing the opportunity to select combinations of chemical species, wells and end-members for analysis) and its connection to a GIS environment.
Other examples
EasyQuim was designed in 1999 for the graphical representation of hydrochemical data. It conducts calculations such as unit conversion, balance error and ionic relationship identification. It also plots Piper, Schöeller-Berkaloff, salinity and Stiff diagrams of up to 24 samples and enables the user to select which to present. Everything is set in spreadsheets with functions, except one small macro that activates the "No representation of samples" option, which can only be activated once. The new version provides three main advantages: first, the maximum number of samples is increased (up to 200); second, a "Sample Selector" is added; third, a space-time analysis is possible. The "Sample Selector" provides a powerful tool for using the updated EasyQuim as a database and for plotting various sample combinations, whereas the connection to several GIS-based software packages, such as QUIMET (Velasco et al., 2014), FREEWAT (Rossetto et al., 2018) and AKVAGIS (Criollo et al., 2019), enables analyses in the spatial and temporal dimensions.
EasyQuim is thus an example of energizing a spreadsheet originally created for plotting in hydrochemical data analysis. The new version adds functionalities such as the conversion of the main data spreadsheet into a database and the creation of a data selector, enabling the final user to decide which analyses merit comparison. Additionally, new programme connections, such as the connection with GIS, were established, enabling further temporal and spatial data analyses.
EasyBal was designed in 1999 for the evaluation of the water balance per unit of soil area as a function of precipitation, Potential EvapoTranspiration (PET), temperature and irrigation. The outputs are the deficit and the recharge of the aquifer. Older versions required up to six steps to introduce the input data into six Excel sheets. All data analysis periods had to fall between January 1970 and December 1997, and calculations and adaptations were required if the user needed a different period. Each month had to contain exactly 30 days instead of the real number of days. It was therefore necessary to eliminate the data period restriction by using conditional sums, which enable the automatic calculation of monthly and yearly totals. All functions were also reorganized to enable the autofill of each formula in a single line. These improvements enable the user to conduct the analysis at once and to obtain all the results clearly structured and organized in a single Excel sheet. Additional features are also included in the new EasyBal version: the user can select English or Spanish as the programme language, and the PET can be introduced as input data or automatically calculated (using the Hargreaves and Thornthwaite methods) and graphically compared with the input data, enabling the user to select the best option in a menu or graph.
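The conditional sums that remove the fixed 1970-1997 period could take a form like the sketch below; these worksheet formulas are illustrative assumptions, not the actual EasyBal formulas (dates assumed in column A, daily values in column B).

    ' Hypothetical conditional totals over an arbitrary data period.
    ' Monthly total for the month starting at the date in E2:
    '   =SUMIFS(B:B, A:A, ">=" & E2, A:A, "<" & EDATE(E2, 1))
    ' Yearly total for the year number in F2:
    '   =SUMPRODUCT((YEAR($A$2:$A$20000) = F2) * $B$2:$B$20000)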
EasyBal provides an example of an improvement to an existing calculation spreadsheet. In this case, the process involved reorganizing all data functions to realize automatic formula refill and adding the language selector and the PET graph selector. By changing formulas, it was possible to accept any input data period and to automatically calculate monthly and yearly totals.
FIGURE 2. Comparison between the old MIX txt input file and old MIX txt output file (more than 6,000 lines) and the new MIX MS Excel-based input dynamic table and output Graphic User Interface (GUI_OUT) (pie plots, data rearrangement and scatter plots).
In contrast to the earlier examples, BrineMIX is a new programme, not an update. In this case, BrineMIX seeks to create a GUI that automatically generates the input and reads the output of PHREEQC (Parkhurst and Appelo, 2013) for a specified water mixing analysis. In the input, only the chemical water samples, the mixing percentage and the mineral selection are set, whereas the output specifies the chemical composition of the final water and its chemical precipitates. The objective of this new programme is to simplify a specified PHREEQC analysis for a user who does not typically work with it.
BrineMIX provides an example of the externalization of part of a larger software. PHREEQC can conduct many analyses, but not all are necessary for non-advanced chemical users. BrineMIX was created to simplify specified analyses by using an Excel environment to facilitate these users in conducting them. Figure 3 shows the flow diagram paths that are followed in each of the presented case studies: EasyQuim, EasyBal, MIX and BrineMIX.
Software validation
The improvement of software is typically regarded as an empirical discipline. However, authors (e.g. Suri and Garg, 2008) have used quantitative and qualitative metrics to measure the benefits of improving software. These metrics are typically related to quality (such as error density, fault density, the ratio of major errors to total faults, rework effort, module deltas, and developer perception), to productivity (lines of code per effort) and to time-to-market (development cycle time). Many empirical studies in both industry and academia have assessed the relationship between software improvement and metrics (e.g. Devanbu et al., 1996).
Quantitative metrics show whether the same or better results are obtained in less operational time than with the original version. In our software, most of the codes cannot be recompiled; even if the time strictly required for computing the solution remains the same, the total time required for the whole analysis has been dramatically reduced. This is achieved by automating the preparation of the input files, the setting up of the problem, the reading of the output files and the preparation of the output for correct interpretation.
All four examples have been tested to evaluate the total necessary time for conducting a complete analysis: while EasyBal and MIX save at least three quarters of the time, EasyQuim and BrineMIX save half of it.
In contrast, qualitative metrics measure the quality of the response that the user obtains from the software. The addition of new functionalities, the display of the results in a suitable format and the addition of export options improve the performance of the analysis and the experience of the user. Automatic input/output data treatment not only saves time but can also substantially reduce errors during the process (e.g. FORTRAN-based programmes are highly sensitive to any typing error). It is also possible to obtain qualitative benefits through an increase in system reliability, namely by automating error-prone human processes or by displaying warnings when values are out of range. The main improvements in our examples rely on dynamic data comparison, a wide range of data values, the addition of a GUI and the automation of the input and output data treatments.
Finally, software validation can also be measured by its acceptance and use in academia and by professionals. This approach has been used in educational, research and technical projects. EasyQuim and EasyBal, the previous versions of which were widely used in the international hydrogeological community (especially in Latin American countries), are taught in various international master courses by institutions such as the Universitat Politècnica de Catalunya (www.upc.edu) or by nonprofits such as the Fundación Centro Internacional de Hidrología Subterránea (FCIHS, www.fcihs.org). All four improved software programmes have been applied in various technical and research projects, and Criollo et al. (2016, 2019), Scheiber et al. (2015, 2016, 2018), Serrano et al. (2016, 2018), Tubau et al. (2017) and Velasco et al. (2014) have applied this approach for reusing these programmes.
System requirements and program availability
The four software examples can be obtained by making a request to the author or by downloading them from the URL: https://www.idaea.csic.es/research-group/groundwater-and-hydrogeochemical/

CONCLUSIONS

This paper presents a new approach for improving and customizing any hydrogeological software and provides insights into the process for its application to four cases. According to the objectives and the stated questions, we summarize the main outcomes: i) It is possible to easily update hydrogeological software via this approach. Through these case studies, the reader can understand how software (e.g. in C++, FORTRAN, or VBA) can be improved via the same approach. Moreover, this approach enables the creation of new GUIs for the automatic generation of input and reading of output files from other analyses. Finally, the MIX case study has been discussed in detail to enable the reader to easily follow a step-by-step process for the application of the presented approach.
ii) The improved versions lead to fewer errors during the analysis compared to the original approaches. It has been demonstrated that the new versions are more user-friendly and avoid errors such as typing mistakes. An MS Excel environment enables us to perform the same action in a variety of ways. This is helpful since it enables the programme developer to design anything he or she considers suitable, resulting in highly personalized programmes. Moreover, VBA offers the possibility of using messages in pop-up windows or colour changes to caution the user, e.g. indicating which values are out of range or that the required values are numbers instead of letters.
iii) End users are more efficient when using an improved version than when using the original version. In addition, the new versions easily generate input files and show, rearrange and plot the most important parts of the output. Through VBA, it was possible to handle complex input matrix generation and difficult output selections and to generate several chart types. We also demonstrated how VBA interacts with Windows by executing other programmes and by opening Windows folders. In all cases, the GUI is highly important, as it not only makes each programme easier to manage but also improves its organization.
Additionally, this methodology was evaluated during the improvement processes of several case studies, and a qualitative trend in the time distribution was observed throughout the process. This supports the observation that conceptual model design requires more time than the other steps.
This approach has been used in education and research, and it is being applied in several technical projects.
Our approach realizes the objectives by providing the necessary steps for the facile development of any hydrogeological software, enabling any scientist to advance the current understanding in hydrogeology. The simplified methodology in a decision flow chart facilitates the programme developer in the assessment of any type of programme. However, although this approach has been developed for the reuse of hydrogeological software by hydrogeologists, it can also be applied to other fields, thereby creating synergies among scientists and expert programme developers.

APPENDIX

This appendix provides a compilation of the most basic code sentences that allow any programme developer to create and design software similar to that presented above. Each title contains different code examples for performing the corresponding action. Figure A1 locates each action in the decision flow diagram steps.
"Environmental Science",
"Computer Science",
"Engineering",
"Geology"
] |
A Direct Role for the Macrophage Low Density Lipoprotein Receptor in Atherosclerotic Lesion Formation*
To evaluate the contribution of the macrophage low density lipoprotein receptor (LDLR) to atherosclerotic lesion formation, we performed bone marrow transplantation studies in different mouse strains. First, LDLR(−/−) mice were transplanted with either LDLR(+/+) marrow or LDLR(−/−) marrow and were challenged with an atherogenic Western type diet. The diet caused severe hypercholesterolemia of a similar degree in the two groups, and no differences in the aortic lesion area were detected. Thus, macrophage LDLR expression does not influence foam cell lesion formation in the setting of extreme LDL accumulation. To determine whether macrophage LDLR expression affects foam cell formation under conditions of moderate, non-LDL hyperlipidemia, we transplanted C57BL/6 mice with either LDLR(−/−) marrow (experimental group) or LDLR(+/+) marrow (controls). Cholesterol levels were not significantly different between the two groups at baseline or after 6 weeks on a butterfat diet, but were 40% higher in the experimental mice after 13 weeks, mostly due to accumulation of β-very low density lipoprotein (β-VLDL). Despite the increase in cholesterol levels, mice receiving LDLR(−/−) marrow developed 63% smaller lesions than controls, demonstrating that macrophage LDLR affects the rate of foam cell formation when the atherogenic stimulus is β-VLDL. We conclude that the macrophage LDLR is responsible for a significant portion of lipid accumulation in foam cells under conditions of dietary stress.
The development of atherosclerosis involves the recruitment of monocyte-derived macrophages into the subendothelial space and their transformation into lipid-laden foam cells (1). Because foam cell transformation is a consequence of an excessive accumulation of lipid droplets in the cytoplasm, it has long been hypothesized that macrophage lipoprotein receptor expression may play a role in this process. The macrophage expresses several receptors capable of taking in native or modified lipoproteins, including the low density lipoprotein receptor (LDLR) (2), the LDLR-related protein, and the scavenger receptor (3). The association between elevated levels of LDL cholesterol and increased risk of atherosclerosis suggests that the LDLR might mediate the cholesterol accumulation by macrophage-derived foam cells. However, the uptake of fresh LDL by macrophages is at least one order of magnitude lower than that of acetylated LDL, suggesting that scavenger receptor expression is physiologically more relevant than LDLR expression in this cell type (4, 5). Observations from studies both in vivo and in vitro indicate that macrophage and leukocyte LDLR expression is not required for foam cell formation (4, 6, 7). Leukocytes express little LDLR activity, which is promptly down-regulated by incubation with LDL (8). Similarly, macrophage expression of LDLR is limited (2, 9) and easily inhibited by excess cholesterol, suggesting that the physiologic contribution of the LDLR to lipoprotein uptake by the macrophage may be limited in the presence of elevated LDL cholesterol levels (4). Most importantly, individuals with homozygous familial hypercholesterolemia, who lack functional LDLR, show accumulation of cholesteryl esters in macrophages (10), a proof that the LDLR is not necessary for foam cell transformation of macrophages. However, Tabas and co-workers (11, 12) have reported that J774 cells and mouse peritoneal macrophages bind and internalize unmodified LDL. In addition, the macrophage LDLR has the ability to take up other atherogenic lipoproteins, such as β-very low density lipoprotein (β-VLDL) and chylomicron remnants (13-15). In fact, β-VLDL is the only naturally occurring (unmodified) lipoprotein that induces transformation of macrophages into foam cells (16, 17). Therefore, macrophage LDLR expression may have a relevant impact on the metabolism and clearance of β-VLDL and may modulate foam cell formation when the main atherogenic stimulus is the diet-induced remnant.
Murine bone marrow transplantation (BMT) studies have been used to examine the role of the leukocyte LDLR in lipoprotein metabolism and atherosclerosis. LDLR-deficient (−/−) mice have increased plasma LDL levels and enhanced susceptibility to diet-induced atherosclerosis (18, 19). We and others (20, 21) have shown that reconstitution of wild-type LDLR expression in the hematopoietic system of LDLR(−/−) mice (LDLR(+/+) → LDLR(−/−)) has no measurable effects on plasma lipoprotein levels and turnover time. We also demonstrated that elimination of LDLR expression from the hematopoietic cells of C57BL/6 mice has no effect on plasma lipid parameters on a normal chow diet (20). Based on the qualitative observation that both LDLR(−/−) → LDLR(−/−) and LDLR(+/+) → LDLR(−/−) mice developed extensive atherosclerosis in the aortic valves after 20 weeks on a diet containing 1.25% cholesterol and 0.5% sodium cholate, Boisvert et al. (21) have suggested that the leukocyte LDLR may not play a major role in lesion development. Herijgers et al. (22) found similar results in LDLR(−/−) → LDLR(−/−) and LDLR(+/+) → LDLR(−/−) mice after 20 weeks on a diet containing 1.0% cholesterol. The dietary conditions in both of these studies induced severe hypercholesterolemia and complex atherosclerotic lesions. Therefore, a contribution of leukocyte LDLR expression to foam cell formation might have been obscured under these conditions of extreme hypercholesterolemia and advanced atherosclerosis.
The goal of the current study was to examine whether the reconstitution of macrophage LDLR activity in LDLR(−/−) mice, or its elimination in C57BL/6 mice, would have an impact on the extent of atherosclerosis in a setting of less severe hypercholesterolemia and during an early stage of atherosclerotic lesion formation. Although the macrophage LDLR is unlikely to play a significant role in the uptake of LDL, it is possible that its involvement in the endocytosis of β-VLDL is substantial (13-15). To test this hypothesis, we set up a series of experiments directed at analyzing the development of aortic atherosclerosis in LDLR(−/−) or C57BL/6 mice reconstituted with either LDLR(−/−) or LDLR(+/+) marrow. In LDLR(−/−) mice, extreme atherosclerosis developed irrespective of the kind of marrow received, indicating that, in the presence of massive elevations in LDL levels, the macrophage LDLR is not a modulator of foam cell formation. However, C57BL/6 mice that received LDLR(−/−) marrow had a mean aortic lesion area that was 70% smaller than that of mice that received LDLR(+/+) marrow. This effect was evident despite a 40% higher plasma cholesterol level in LDLR(−/−) → C57BL/6 mice, which was due to the accumulation of β-VLDL. Thus, our results are compatible with a major role of the macrophage LDLR in the regulation of foam cell transformation when the atherogenic stimulus is β-VLDL.
MATERIALS AND METHODS
Animals-A colony of C57BL/6J mice is established in our animal facility. The LDLR(−/−) mice were originally purchased from Jackson Laboratories (Bar Harbor, ME) and backcrossed into the C57BL/6 background. Recipient LDLR(−/−) mice were at the 7th backcross, whereas donors for the C57BL/6 study were at the 10th backcross into C57BL/6. LDLR genotype was determined by polymerase chain reaction as described previously (20). All mice were maintained in microisolator cages on a rodent chow diet containing 4.5% fat (PMI 5010, St. Louis, MO) and acidified water (pH 2.8). Atherogenic diets used included the Western type diet containing 21% milkfat and 0.15% cholesterol (Teklad, Madison, WI) and the butterfat diet containing 19.5% fat, 1.25% cholesterol and 0.5% cholic acid (ICN, Aurora, OH). Animal care and experimental procedures were performed in accordance with institutional guidelines and under approval from the Animal Care Committee of Vanderbilt University.
BMT-A week before and 2 weeks following BMT, 100 mg/liter neomycin and 10 mg/liter polymyxin B sulfate (both from Sigma) were added to the acidified water. Bone marrow was collected from donor mice by flushing femurs with RPMI 1640 medium containing 2% fetal bovine serum and 5 units/ml heparin (Sigma). Recipient mice were lethally irradiated (9 Gy), and 4 h later, 5 × 10^6 bone marrow cells in 0.3 ml were transplanted by tail vein injection.
Serum Cholesterol and Triglycerides Analysis-Nonfasting mice were anesthetized with methoxyflurane (Mallinckrodt Veterinary, Inc., Madelein, IL) and blood samples were collected by retro-orbital venous plexus puncture. Serum cholesterol levels were determined using Sigma kit 352 adapted for a microtiter plate assay as described (23). Serum triglyceride levels were determined using Sigma kit 339 on a microplate reader, and absorbance was read at 540 nm.
Separation of Lipoproteins-Mouse serum was fractionated on a Superose 6 column (Amersham Pharmacia Biotech) using an HPLC system model 600 (Waters, Milford, MA). A 100-l aliquot of serum was injected onto the column and separated using a buffer containing 0.15 M NaCl, 0.01 M Na 2 HPO 4 , 0.1 mM EDTA (pH 7.5) at a flow rate of 0.5 ml/min. Forty 0.5-ml fractions were collected, and fractions 11-40 were analyzed for cholesterol content. Fractions 13-17 contain VLDL and chylomicrons; fractions 18 -24 contain intermediate density lipoproteins (IDL) and LDL; fractions 25-31 contain high density lipoproteins (HDL), and fractions 32-40 contain nonlipoprotein-associated serum proteins.
Quantitation of Arterial Lesions-Mice were sacrificed and flushed with 30 ml of saline by slow injection through the left cardiac ventricle. The heart with ascending aorta was embedded in OCT and snap-frozen in liquid N 2 . Cryosections of 10-m thickness were taken from the region of the proximal aorta starting from the end of the aortic sinus and for 300 m distally, according to the procedure of Paigen et al. (24). Cryosections were stained with Oil-Red-O and counterstained with hematoxylin. Quantitative analysis of lipid-stained lesions was performed using an Imaging System KS 300 (Release 2.0, Kontron Electronik GmbH). Color threshold was used to delimit the Oil-Red-O stained lesion area, and the lesion area was determined as mean lesion area per section in square micrometers.
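For readers who wish to reproduce the thresholding step computationally, the sketch below shows one way the Oil-Red-O area measurement could be scripted. It is an illustrative Python/OpenCV stand-in, not the KS 300 workflow used here; the HSV bounds, file paths, and the um_per_pixel calibration are assumptions that would need tuning to the actual photomicrographs.

```python
import cv2
import numpy as np

def lesion_area_um2(image_path, um_per_pixel=1.0):
    """Approximate Oil-Red-O-stained lesion area by color thresholding.

    The red stain is isolated in HSV space; the threshold bounds below
    are illustrative and would need calibration to the imaging setup.
    """
    img = cv2.imread(image_path)                # BGR image of one cryosection
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in HSV, so combine two ranges.
    mask1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    mask2 = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.bitwise_or(mask1, mask2)
    stained_pixels = int(np.count_nonzero(mask))
    return stained_pixels * um_per_pixel ** 2   # area in square micrometers

# Mean lesion area per section, as reported in the paper:
# areas = [lesion_area_um2(p, um_per_pixel=2.5) for p in section_paths]
# mean_area = sum(areas) / len(areas)
```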
Immunocytochemical Analysis-Immunocytochemical staining of tissue samples for LDLR and macrophages was performed on 5-μm-thick serial cryosections from the proximal aortas. Sections were fixed in acetone and incubated with either rabbit antibodies to bovine LDLR, which cross-react with mouse LDLR (Rb.455, a gift from Dr. Innerarity, Gladstone Institute, San Francisco, CA; and Ab638, a gift from Dr. Herz, University of Texas Southwestern Medical Center, Dallas, TX), or with a rat monoclonal antibody to mouse macrophages, MOMA-2 (Accurate Chemicals, Westbury, NY). Primary antibodies were used at dilutions of 1:250, 1:300, and 1:30, respectively, and incubated overnight at 4°C. After washing, the sections were treated with goat biotinylated antibodies to rabbit and rat IgGs (both from PharMingen, San Diego, CA) and incubated with avidin-biotin complex labeled with alkaline phosphatase (Vector Laboratories, Inc., Burlingame, CA). The enzyme activity was visualized with Fast Red TR/naphthol AS-MX substrate (Sigma). Sections were counterstained with hematoxylin. Nonimmune rabbit and rat sera were used in place of primary antibodies as negative controls. Photomicroscopy was performed on a Zeiss Axiophot with Plan-Neofluar objectives (Zeiss, Thornwood, NY).
In Situ Hybridization-A 167-base insert consisting of nucleotides 2106-2273 of the mouse LDLR cDNA (primers were a gift from Dr. Ishibashi, University of Tokyo, Japan) was cloned into the pBluescript II SK phagemid (Stratagene, La Jolla, CA). Another 59-base fragment consisting of nucleotides 735-794 of the mouse LDLR gene was amplified using primers (CAGTGCTCCTCATCTGACTTGTC and GTGGTAGCAGTGAGTGTATCC) and cloned into the pGEM-T vector (Promega, Madison, WI). Antisense and sense riboprobes for LDLR were labeled with 35S-uridine (RNA transcription kit, Stratagene). Cryosections (5-μm thick) were fixed for 30 min in 4% paraformaldehyde-phosphate-buffered saline, treated for 15 min with proteinase K (5 μg/ml), prehybridized for 1 h at 55°C in a mixture (0.3 M NaCl, 20 mM Tris, pH 8.0, 5 mM EDTA, 1× Denhardt's solution, 10 mM dithiothreitol, 10% dextran sulfate, 50% formamide) and, after addition of the riboprobes, incubated overnight at 55°C. The sections were then treated for 30 min with RNase A (20 μg/ml), washed, coated with autoradiographic emulsion (Kodak NTB-2), and exposed for 2-3 weeks. After development, the slides were counterstained with hematoxylin. The sense probe was used in parallel as a negative control.
RESULTS
The role of the macrophage LDLR in foam cell formation and atherosclerosis was examined in two different murine bone marrow transplantation models, using dietary conditions that differed significantly in ambient levels of plasma lipids and lipoproteins. The duration of the atherogenic diet in each model was selected to induce lesions consisting primarily of macrophage-derived foam cells. For a model of severe hypercholesterolemia, lethally irradiated (9 Gy) male LDLR(−/−) mice were transplanted with either LDLR(+/+) marrow (experimental group; n = 15) or LDLR(−/−) marrow (controls; n = 14). Eight weeks post-BMT, the mice were challenged with an atherogenic diet containing 21% milkfat and 0.15% cholesterol for 9 weeks. To examine the contribution of the macrophage LDLR to foam cell formation under conditions of more moderate hypercholesterolemia, 8-week-old lethally irradiated (9 Gy) female C57BL/6 mice were transplanted with either LDLR(−/−) marrow (experimental group; n = 11) or LDLR(+/+) marrow (controls; n = 11). Eight weeks post-BMT, the mice were challenged with an atherogenic diet containing 19.5% butterfat, 1.25% cholesterol, and 0.5% cholic acid for 13 weeks.
In the LDLR(−/−) mice transplanted with either LDLR(+/+) or LDLR(−/−) marrow, there were no significant differences in serum cholesterol or triglyceride levels between the two groups at baseline, after 6 weeks on a chow diet, or after 6 or 9 weeks on the Western-type diet (Table I). We have previously reported that on a chow diet the lipoprotein profiles of the LDLR(+/+) → LDLR(−/−) mice and LDLR(−/−) → LDLR(−/−) mice are indistinguishable, with HDL as the predominant lipoprotein class and a significant accumulation of LDL cholesterol (20). After 6 weeks on the Western-type diet, examination of the distribution of cholesterol among the serum lipoprotein fractions by size-exclusion chromatography in the LDLR(−/−) → LDLR(−/−) mice revealed a massive accumulation of cholesterol in the VLDL/IDL/LDL range, with a relative decrease in HDL cholesterol compared with the lipoprotein profile on a normal chow diet (Fig. 1A). A similar pattern was seen in the LDLR(+/+) → LDLR(−/−) mice (data not shown). Thus, in the LDLR(−/−) mice, the Western-type diet induced severe hypercholesterolemia due to an accumulation of both LDL cholesterol and VLDL/IDL-sized remnant lipoproteins.
Consistent with our previous results, examination of serum cholesterol and triglyceride levels in the C57BL/6 mice transplanted with either LDLR(+/+) or LDLR(−/−) marrow revealed no significant differences on a chow diet 8 weeks post-BMT (Table II) (20). After 6 weeks on the atherogenic diet, no significant differences in serum cholesterol or triglyceride levels existed between the two groups, although the serum cholesterol levels had doubled from baseline (Table II). However, after 13 weeks on the butterfat diet, the mean serum cholesterol level in the LDLR(−/−) → C57BL/6 mice was significantly higher than in the LDLR(+/+) → C57BL/6 mice (Table II). Examination of the distribution of cholesterol among the serum lipoprotein fractions by size-exclusion chromatography after 13 weeks on the atherogenic diet revealed an accumulation of cholesterol in the VLDL/IDL range with a relative decrease in HDL cholesterol in both groups (Fig. 1B). Levels of HDL cholesterol in 8 LDLR(−/−) → C57BL/6 and 7 LDLR(+/+) → C57BL/6 mice were 68.6 ± 10.0 and 67.4 ± 9.37 mg/dl (mean ± S.D.), respectively (p = 0.820), and the ratio of total cholesterol to HDL cholesterol was higher in the LDLR(−/−) → C57BL/6 mice than in the controls (5.01 versus 3.63). Thus, in the C57BL/6 mice, the butterfat diet induced moderate hypercholesterolemia due to an accumulation of remnant lipoproteins.
In situ hybridization studies were performed to examine the expression of mouse LDLR mRNA in the atherosclerotic lesions of both the LDLR(−/−) and C57BL/6 transplant models. In control experiments, hepatic sections from LDLR(−/−) mice obtained 4 days after infection with an adenoviral construct coding for the human LDLR showed extremely high levels of LDLR expression when hybridized with the 35S-labeled 167-nucleotide mouse LDLR mRNA antisense riboprobe and no expression above background with the corresponding sense probe, as described previously (27). Mouse LDLR mRNA expression was detectable at low levels in hepatic sections from wild-type C57BL/6 mice but was absent in hepatic sections from LDLR(−/−) mice (data not shown). Hybridization of 5-μm sections from the proximal aorta of LDLR(+/+) → C57BL/6 mice with the 35S-labeled 167-nucleotide mouse LDLR mRNA antisense riboprobe revealed low-level expression of the LDLR in foam cells, which was absent in sections hybridized with the sense probe (Fig. 4). In contrast, expression of the LDLR in foam cell lesions of the LDLR(+/+) → LDLR(−/−) mice was not detectable by this assay (data not shown).
DISCUSSION
The current studies provide strong evidence for a direct role of the macrophage LDLR in foam cell formation and atherogenesis in vivo. The macrophage LDLR has been implicated in the binding and internalization of β-VLDL and chylomicron remnants by a number of in vitro studies (13-15). On an atherogenic diet, C57BL/6 mice develop relatively modest hypercholesterolemia due to an accumulation of β-VLDL, providing an attractive model for testing the hypothesis that the macrophage LDLR influences foam cell formation and atherogenesis in vivo. Therefore, female C57BL/6 mice were transplanted with either LDLR(−/−) marrow or LDLR(+/+) marrow and challenged with the butterfat diet. As expected, the mice in both groups developed moderate hypercholesterolemia (Table II). Although serum cholesterol levels were not significantly different between the two groups at baseline or after 6 weeks on the butterfat diet, the serum cholesterol levels were 40% higher in the experimental LDLR(−/−) → C57BL/6 mice compared with controls after 13 weeks. The main lipoprotein class accumulating under these conditions of dietary stress is β-VLDL remnants, and the higher cholesterol levels observed in the experimental mice were due primarily to higher levels of β-VLDL in this group. However, this significant additional accumulation of a potentially atherogenic lipoprotein did not have the expected consequences in the artery wall. In fact, quantitative analysis of the extent of atherosclerosis in the proximal aorta revealed that C57BL/6 mice reconstituted with LDLR(−/−) marrow developed a 63% smaller lesion area than the LDLR(+/+) marrow recipients. Thus, our results are compatible with a major role of the macrophage LDLR in foam cell formation when the atherogenic stimulus is β-VLDL.
It is noteworthy that no correlation was detected between individual serum cholesterol levels and the extent of lesion area in the experiment using C57BL/6 recipient mice, suggesting that the effect of the intervention was not mediated by plasma lipoprotein changes. After 13 weeks on the atherogenic diet, the LDLR(−/−) → C57BL/6 mice had higher serum total cholesterol levels and a higher total cholesterol to HDL cholesterol ratio than the control group. Levels of serum triglycerides did not differ between the two groups, and the lipoprotein distributions were qualitatively similar as determined by size-exclusion chromatography. Subtle changes in HDL or apolipoprotein B composition are unlikely to explain the difference in atherosclerosis. A 75% decrease in HDL cholesterol due to knockout of the apoAI gene does not affect the extent of diet-induced atherosclerosis in 129 × C57BL/6 hybrid mice (28). In apoE-deficient mice expressing only apoB100 or apoB48, serum cholesterol levels predict lesion area, but the differences in apoB-containing lipoproteins do not (29). In the current study, despite the presence of an apparently more atherogenic lipoprotein profile, the LDLR(−/−) → C57BL/6 mice developed significantly less atherosclerosis. Therefore, the lack of macrophage LDLR expression in these mice was apparently protective, resulting in less foam cell formation.
In the current studies, the role of macrophage LDLR expression in foam cell formation was examined in LDLR(+/+) → LDLR(−/−) mice and LDLR(−/−) → LDLR(−/−) controls under dietary conditions resulting in less severe hypercholesterolemia than in the studies of Boisvert et al. (21) and Herijgers et al. (22). The mice in both groups developed severe hypercholesterolemia due to accumulation of VLDL, IDL, and LDL cholesterol, but there were no significant differences in serum cholesterol between the two groups at baseline or after 6 or 9 weeks on the atherogenic diet. The extent of atherosclerosis was examined before the lesions had progressed beyond fatty streaks. Quantitative analysis of the extent of atherosclerosis demonstrated no differences between the two groups. Thus, in the presence of extremely high levels of serum cholesterol, macrophage LDLR expression did not influence the extent of foam cell lesion formation. Our results extend the findings of Boisvert et al. (21) and Herijgers et al. (22) by showing that reconstitution of LDLR expression in leukocytes and macrophages of LDLR(−/−) mice during the foam cell-rich fatty streak stage of atherogenesis does not influence the extent of atherosclerosis.
The contribution of leukocyte LDLR expression to foam cell formation and atherogenesis was examined in two different murine bone marrow transplantation models, which differed dramatically with respect to the levels of plasma lipids and lipoproteins. Although the studies in LDLR-deficient mice seem to indicate that the macrophage LDLR does not influence foam cell formation, a different picture emerges when one looks at the effect of eliminating the macrophage LDLR from C57BL/6 mice on a high fat diet. In this experimental model of moderate hypercholesterolemia due predominantly to the accumulation of β-VLDL, macrophage expression of the LDLR does play a physiologic role in foam cell formation in vivo, as evidenced by the significant (p = 0.031) 70% reduction in lesion area shown by mice transplanted with LDLR(−/−) marrow. Although there is inherent variation in the diet-induced model of atherosclerosis in C57BL/6 mice, the result is clearly statistically significant, and the extent of aortic atherosclerotic lesion area obtained in C57BL/6 mice under similar dietary conditions has been shown to be highly reproducible (30). Overall, our results emphasize the importance of genetic background, dietary conditions, and stage of atherosclerosis in designing experiments to elucidate the physiologic role of macrophage gene expression in atherosclerosis.
We and others (20, 21) have previously reported that plasma lipid and lipoprotein levels do not differ in LDLR(−/−) mice reconstituted with LDLR(+/+) or LDLR(−/−) marrow on a chow diet. Based on these studies, we concluded that leukocyte LDLR activity does not play a significant role in the clearance of LDL from plasma. In contrast, Herijgers et al. (22) reported that, 4 weeks post-BMT, LDLR(+/+) → LDLR(−/−) mice have significantly lower total serum cholesterol and LDL cholesterol levels than control LDLR(−/−) → LDLR(−/−) mice, but the decrease in LDL was less prominent by 12 weeks post-BMT, suggesting it was a transient effect (22). Consistent with our current results, Herijgers et al. (22) did not see any significant differences in total serum cholesterol levels between LDLR(+/+) → LDLR(−/−) and LDLR(−/−) → LDLR(−/−) mice on a diet containing 1% cholesterol, demonstrating that the leukocyte LDLR does not influence plasma cholesterol levels in the presence of severe hypercholesterolemia. In addition, we have reported that plasma lipid and lipoprotein levels do not differ in C57BL/6 mice reconstituted with LDLR(+/+) or LDLR(−/−) marrow on a chow diet (20). In C57BL/6 mice on a chow diet, the majority of cholesterol is found in HDL, making it unlikely that a contribution of leukocyte LDLR expression to the clearance of LDL from plasma would be detected. Yet, in the current studies, we found that after 13 weeks on the butterfat diet the serum cholesterol levels were 40% higher in the LDLR(−/−) → C57BL/6 mice than in the LDLR(+/+) → C57BL/6 mice because of the accumulation of β-VLDL. van Berkel and co-workers (31, 32) have shown that Kupffer cells contribute significantly to the clearance of LDL from plasma in the rat. Therefore, a lack of Kupffer cell LDLR expression in the LDLR(−/−) → C57BL/6 mice may be responsible for the increased level of serum cholesterol relative to the LDLR(+/+) → C57BL/6 mice. These results suggest that leukocyte LDLR expression can significantly influence plasma cholesterol levels under conditions of moderate hypercholesterolemia due to an accumulation of β-VLDL.
We have previously reported that, six weeks after bone marrow transplantation into LDLR(−/−) recipient mice, both the myeloid and lymphoid cells were essentially completely reconstituted by cells of donor origin (20). In addition, we have previously demonstrated that bone marrow transplantation results in reconstitution of the arterial wall with macrophages of donor origin (26). In the current study, we report that macrophages in the atherosclerotic lesions of LDLR(+/+) → LDLR(−/−) mice stain positive for the LDLR, a finding consistent with results reported by Boisvert et al. (21). These findings indicate that the LDLR is expressed by macrophage-derived foam cells even in the setting of extreme hypercholesterolemia with high levels of LDL cholesterol. However, the results of our in situ hybridization studies indicate that the level of foam cell LDLR expression in the LDLR(+/+) → LDLR(−/−) mice was down-regulated relative to the level of expression noted in the LDLR(+/+) → C57BL/6 mice. Thus, it is possible that the LDL receptor participates in foam cell formation when levels of LDL or total cholesterol are not high enough to completely down-regulate the macrophage LDLR.
In conclusion, the wider relevance of these results lies in the demonstration that expression of the LDLR by macrophages in the artery wall directly influences the progression of atherosclerosis and that protective changes in the macrophage can overcome atherogenic changes, such as diet-induced hyperlipidemia, in the plasma compartment. The contribution of the macrophage LDLR to foam cell formation and atherosclerosis may be substantial given the large body of evidence implicating triglyceride-rich remnant lipoproteins in human atherosclerotic disease (33, 34). In addition, the majority of people who die of coronary heart disease have normal to modestly elevated levels of cholesterol (35), a setting in which the macrophage LDLR may contribute significantly to foam cell formation. This concept emphasizes the need to develop therapeutic strategies, based either on drugs or on gene transfer, aimed at reducing the recruitment of monocytes into the artery wall or delaying macrophage transformation into foam cells, in order to reduce the development and progression of coronary atherosclerosis.
Fig. 4. In situ hybridization with antisense (A, C) and sense (B, D) riboprobes. Cryosections of aortic sinus were fixed in 4.0% PFA and hybridized overnight with 35S-labeled antisense or sense riboprobes. Sections were covered with emulsion that was developed after 4 weeks. The hybridization signal of LDLR mRNA expression appears as black grains located over macrophage-derived foam cells of the aortic lesion with the antisense probe on bright field (A; magnification, ×40) and as white dots on dark field of the same section (C; magnification, ×40). The sense probe did not show specific hybridization under the same conditions (B, D).
"Biology",
"Medicine"
] |
Application of Artificial Intelligence Technology in Martial Arts Education Governance
Martial arts education has a relatively comprehensive educational function and, compared with other educational methods, some unique features. When martial arts education carries out moral education, it not only attaches importance to the teaching of moral norms but also requires practitioners to practice those norms, so martial arts education is more practical in improving moral literacy. In fact, the role of martial arts education goes far beyond strengthening the body; this kind of prejudice conceals the diverse characteristics of martial arts education. This paper proposes applying artificial intelligence technology to martial arts education governance, using a target tracking algorithm based on deep learning to track and analyze the movement of martial arts practitioners. At the same time, this paper uses a pose estimation algorithm based on coordinate regression to predict the key points of the human body from a global perspective and then locate them from the extracted features. This greatly simplifies key-point prediction and addresses the problem of nonstandard movements of students in martial arts education. The experimental analysis covers the results of the impact of AI-based flipped classroom teaching on students' martial arts learning and the comparison of the two classes' martial arts learning after the experiment. The analysis results show that the P values of the four aspects of learning interest, active participation attitude, independent exploration ability, and analysis and problem-solving ability of the two classes are all less than 0.01, indicating a significant difference.
Introduction
Many studies have shown that martial arts education still faces problems in various respects. For example, martial arts education is not valued in modern society, mass martial arts education is often dismissed as having little practical use, and the educational value of martial arts has not been fully explored. Many previous studies have paid attention to realizing the value of martial arts education, and many good ideas and methods have been put forward; however, there is still no effective solution for the governance of martial arts education. According to the development law of martial arts, martial arts programs should be transformed in a focused manner to reflect their own characteristics, so that martial arts can better embody national characteristics and combine tradition and fashion. The charm of martial arts is that it not only promotes a supple body but also has more striking movements than aerobics, which people from other countries find hard to resist. It is necessary to understand how to realize the educational value of martial arts. As an educational program, martial arts have both similarities to and differences from other educational programs.
The application of artificial intelligence technology in martial arts education governance can make modern people like martial arts and readily accept martial arts education. It requires martial arts educators to learn from the development laws of other programs and to explore the laws governing the implementation of martial arts education.
Martial arts education is becoming more and more important all over the world. Sangjin aimed to develop a preschool exercise program utilizing basic mixed martial arts (MMA) techniques to improve the athletic ability of preschool children. He also validated the effect of the program on their athletic ability, development, and body composition by applying the program to the preschool curriculum [1]. In order to study the importance of physical activity to physical health, Wolfgang's research shows that martial arts shed light on the benefits of sports to the public. He also explored the huge but untapped potential of physical activity tailored for public health [2]. To evaluate the use of martial arts (MA) programs in secondary physical education (PE) settings, Rotunda found that MA teaching has the potential to produce physical and psychosocial benefits for both adult and adolescent participants, but systematic programs have seldom been implemented in schools [3]. Ujuagu et al. aimed to evaluate the pedagogy of junior high school martial arts physical education curricula and effective self-defense programs and techniques. Using a survey research design, the researchers found that the techniques used in practical teaching are difficult for teachers to implement, and practical ones are usually not taught [4]. At present, martial arts education is involved in the physical education courses of universities, but there are not many courses in primary, secondary, and high schools, so greater effort is still needed to incorporate martial arts into education. Akehurst's qualitative case study explored part of an extracurricular taekwondo program, measuring the benefits to student learning, health, and well-being. Studies have shown that traditional taekwondo can promote self-regulation in education [5]. In the process of martial arts teaching, the education and inheritance of culture is an insufficiency. Wang analyzed and studied the content of martial arts teaching in primary and secondary schools. Through a comprehensive understanding of the role of martial arts in the teaching process of primary and secondary schools, he reintegrated content and culture [6]. However, none of the above studies highlighted the importance of, or feasible solutions for, martial arts education governance.
It is highly important to formulate a scientific teaching and training method for the martial arts system. Han considered these problems together with currently popular artificial intelligence technology and built a neural network algorithm to address them [7]. Bobrova used computer-based testing to study functional asymmetry in students and schoolchildren practicing martial arts (taekwondo, karate), with software determining functional asymmetry from two visual tests [8]. The role of visual learning is critical for a new generation of learners. John and Martin used topic modeling and sentiment analysis to examine a YouTube text feedback data set containing keywords related to martial arts learning. Topic modeling shows that many discussion topics in martial arts are closely related to learning, arts, and humanities [9]. Martial arts are considered a cultural heritage in China, and exploring special learning systems has become a hot research topic. Shibiao discussed the design and implementation of a martial arts learning system based on Silverlight, taking Taijiquan as the research object. The conclusion shows that the proposed system is easy to use; therefore, users can better master Tai Chi [10]. However, none of the above studies closely integrated artificial intelligence with martial arts education governance. The novelty of this paper is that it approaches the governance of martial arts education from the perspective of structural functionalism. It analyzes the decisive role of the structural attributes of martial arts in its educational function, the formation process of each educational function in martial arts education, and the degree to which experts recognize these functions. The article deeply summarizes the problems of Wushu education, making suggestions about the shortcomings of current Wushu education as well as an analysis of the future development of Wushu education.
Problems and Solutions in Governance of Martial Arts Education with Artificial Intelligence Technology
Structural Attributes of Martial Arts and Their Influence on Educational Function
According to the theory of structural functionalism, a thing or system that has a specific function also has a specific structure. In a sense, the structure of a thing or system determines its function, and a change in the structure will also lead to a corresponding change in the function [11]. The reason for a change in the structure may be that one or some of the factors that constitute the structure have changed, or that the mode of action or connection between those factors has changed. In the practice of martial arts, the structure is a crucial part, and it includes the body structure, the movement structure, and the structure that connects the whole. The connections between these structures are very close, and they cooperate with each other to achieve the effect of martial arts. Martial arts education has its own specific functions, that is, functions that differ from those of other education methods. These arise from the three basic factors that constitute martial arts education and their interaction and interconnection. The greatest impact on the respective educational functions comes from the educational content in their intermediary systems. Chinese classic traditional martial arts are shown in Figure 1.
As shown in Figure 1, Chinese martial arts include Shaolin, Taiji, Bagua, Wing Chun, Baji, and so on. The specific, complex structure of martial arts determines its diverse properties. When martial arts exist in martial arts education as the main content of education, these attributes further determine that martial arts education has various educational functions. It can be said that the multifunctional characteristics of martial arts education are determined by the complex structure and diverse attributes of martial arts [12].
Influence of Martial Arts Attributes on Its Educational Function
The attribute of martial arts refers to martial arts as a social and cultural form: a martial art with national cultural characteristics [13]. This is the most fundamental attribute of martial arts, that is, its essential attribute, and it has the greatest impact on the educational function of martial arts. First, the martial arts attribute requires martial arts education to attach importance to moral etiquette education. Because of the violent characteristics of martial arts, martial arts education must pay attention to the promotion of benevolence, integrity, justice, responsibility, and other external morals, so as to regulate the use of violence by those who hold this capacity for violence. Because of the strongly confrontational character of martial arts combat, martial arts education must cultivate virtues such as bravery and self-confidence, because in order to defeat an opponent in actual combat, one must maintain a strong fighting spirit that overwhelms the opponent in momentum.
This requires cultivating students' bravery, self-confidence, and other qualities in everyday martial arts education. Especially for beginners in martial arts, absorbing the spirit and culture of martial arts requires cultivating excellent qualities such as bravery in practice. Because of the difficult characteristics of martial arts techniques, martial arts education must foster virtues such as perseverance, tenacity, and patience. Because it is difficult to achieve success in martial arts without these qualities of will, these qualities are strictly cultivated in students in martial arts education. In the study of martial arts, willpower is one of the most basic requirements: only after hard training can one achieve better results. Second, the martial arts attribute requires that martial arts education must develop intelligence.
The winning factors of martial arts combat confrontation have various characteristics. Defeating an opponent requires not only superb skills and ingenious tactics but also great wisdom. Martial arts confrontation has always been an activity of fighting with wits and courage, so everyday martial arts education must pay attention to developing the intelligence of martial arts practitioners [14].
The Influence of Martial Arts Cultural Attributes on Its Educational Function
The cultural attribute of martial arts refers to the rich cultural connotations of the Chinese nation carried in martial arts, which is also an important attribute of martial arts. Some scholars even regard the cultural attributes of martial arts as its essential attributes. On the one hand, martial arts are themselves a kind of culture, and learning martial arts is learning a distinctive national body culture. On the other hand, martial arts carry rich national cultural connotations, and receiving martial arts education helps to enrich one's own knowledge of national traditions.
Influence of Martial Arts Sports Attributes on Its Educational Function
The sports attribute of martial arts means that martial arts have a good function of keeping fit and strengthening the body, which determines that martial arts education has a good body-strengthening function [15]. On the one hand, many technical movements of martial arts meet the requirements of medical science, and learning these technical movements through martial arts education can play a good role in fitness. On the other hand, the process of martial arts education is also a process of improving physical function, developing physical strength, and enhancing physical fitness. Regular practice is also conducive to the formation of sports habits.
Basic Structure and Relationship of Martial Arts Teaching Mode
The teaching mode of martial arts is essentially the same as the teaching mode of physical education: both exist in a certain space and time. The space shows the established teaching theories and goals and the position of teachers and students in teaching and their relationship, while the time shows how the teachers' "teaching" and the students' "learning" are arranged [16]. Therefore, we can regard the basic structure of martial arts teaching as the established teaching theory, teaching objectives, and teacher-student arrangement that appear in time and space. The basic structure of the martial arts teaching mode and its relationships are shown in Figure 2.
As shown in Figure 2, the teaching guiding ideology of artificial intelligence-assisted martial arts elective courses should be the guiding ideology of physical education established on the basis of the national education policy, basic teaching theories, and teaching ideas. It is mainly reflected in humanized teaching that takes "students' development as the center," "learning to teach," and a "problem-oriented" approach. It can also raise students' knowledge of martial arts to a higher level, meaning that students' knowledge of martial arts does not just stay on the surface of martial arts movements. Through the combination of practical experience, the comprehensive ability of students is improved, and the teaching is "student-centered" throughout. It is also connected with the learning of knowledge before class, the internalization of knowledge in and after class, and students' autonomous learning and daily life. It finally achieves the purpose of promoting the all-round development of students' morality, intelligence, physique, and aesthetics. The classic martial arts movements are shown in Figure 3.
As shown in Figure 3, artificial intelligence is a product of a highly informatized society, and AI-assisted teaching differs from traditional martial arts classroom teaching. Traditional martial arts teaching is mainly "teacher-centered," occupying the entire classroom with knowledge and skills, explanations, and demonstrations. In addition, the teaching form of large-class elective courses (more than 40 people) cannot well cultivate students' interest in martial arts or develop students' personality and comprehensive practical ability [17]. On the contrary, the teaching of artificial intelligence-assisted martial arts electives builds a good online learning environment for students with a new-media (artificial intelligence) teaching platform. It provides equal opportunities for teachers and students to communicate before, during, and after class. Its training of students has changed from "indoctrination" to "targeted" guidance, allowing students to explore the mysteries of martial arts independently. Students are participants in and masters of their learning. Finally, they can share their research results, exchange topics, and share fun and learning experiences with the whole class, so as to realize the deep internalization of knowledge and skills. By cultivating students' interest in martial arts learning, it improves students' comprehensive practical ability and develops students' learning personality. This makes the teaching and learning process more enjoyable and the teacher-student relationship more harmonious [18].
Target Tracking Algorithm Based on Deep Learning
In this paper, combined with the background of actual camera shooting and the dynamic model of martial arts movement, we establish a new martial arts tracking system to achieve high-precision tracking. Once tracking fails, a target recognition calculation is used to recompute the position of the martial arts performer in the frame, and the target tracking algorithm then continues to execute. For the other part, the algorithms for estimating rotational speed and rotational direction require the use of martial arts spatial structures and camera models; three-dimensional spatial structure information is estimated from the two-dimensional image. Such an information structure enables better data transmission, higher efficiency in the system, and accurate positioning of human joints when modeling martial arts postures. Once enough data are obtained, a visualization system for martial arts data analysis is built to help martial arts students and coaches obtain the required information [19]. This paper proposes an end-to-end approach that combines human pose prediction and human action recognition, as shown in Figure 4.
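As a rough illustration of this track-then-recover loop, the sketch below pairs an off-the-shelf tracker with a re-detection fallback. It is a minimal stand-in under stated assumptions, not the system used in this paper: cv2.TrackerKCF (from opencv-contrib; newer releases expose it as cv2.TrackerKCF.create()) replaces the deep-learning tracker, and a stock HOG pedestrian detector replaces the target recognition module.

```python
import cv2

# Stand-in detector: a stock HOG pedestrian detector. The paper's system
# uses a deep-learning recognition module here instead.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_performer(frame):
    """Return an (x, y, w, h) box around the first detected person, or None."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return tuple(int(v) for v in rects[0]) if len(rects) else None

def track_video(frames):
    """Track-then-recover loop: keep updating the tracker; whenever the
    update fails, fall back to detection and re-initialize the tracker."""
    tracker, boxes = None, []
    for frame in frames:
        ok, box = (tracker.update(frame) if tracker is not None
                   else (False, None))
        if not ok:
            box = detect_performer(frame)
            if box is None:              # nothing found in this frame
                boxes.append(None)
                tracker = None
                continue
            tracker = cv2.TrackerKCF_create()   # requires opencv-contrib
            tracker.init(frame, box)
        boxes.append(tuple(int(v) for v in box))
    return boxes
```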
As shown in Figure 4, by combining the reflected spatial 3D information with the skeleton data, richer behavioral features can be obtained, and the final recognition rate can be improved [20]. Therefore, this paper proposes a two-stream fusion method to fuse video data and skeletal joint data, as shown in Figure 5.
As shown in Figure 5, after introducing the attention mechanism, each frame of the video is first processed by the convolutional network with the attention mechanism. The resulting features are then fed, in time series, into a convolutional long short-term memory network, and the results are extracted.
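A minimal sketch of the two-stream fusion idea follows, assuming each stream has already been reduced to a feature vector. The linear heads and the fusion weight alpha are illustrative assumptions; the paper's actual streams (attention CNN plus convolutional LSTM for video, and a skeleton branch) are abstracted away here as precomputed features.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Late fusion of a video (RGB) stream and a skeleton-joint stream.

    Both backbones are stand-ins: each stream is assumed to be already
    encoded into a fixed-length feature vector.
    """
    def __init__(self, video_dim, joint_dim, n_classes, alpha=0.5):
        super().__init__()
        self.video_head = nn.Linear(video_dim, n_classes)
        self.joint_head = nn.Linear(joint_dim, n_classes)
        self.alpha = alpha  # fusion weight between the two streams

    def forward(self, video_feat, joint_feat):
        v = self.video_head(video_feat).softmax(dim=-1)
        j = self.joint_head(joint_feat).softmax(dim=-1)
        # Weighted average of per-stream class probabilities.
        return self.alpha * v + (1 - self.alpha) * j

# Usage with illustrative feature sizes:
# scores = TwoStreamFusion(512, 256, n_classes=10)(video_feat, joint_feat)
```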
Constructing Spatiotemporal Graph Convolution
Graph convolution needs to deal with discrete feature points in space, and its definition differs from that of two-dimensional convolution. The traditional two-dimensional convolution algorithm is image-based, and the convolution operation can be achieved by taking the dot product of a filter with an image pixel matrix [21]. We can think of the input image and output feature map as two-dimensional matrix grids, and the two-dimensional convolution operation can be understood as a nonlinear mapping of input features to output features. The output of a 2D convolution operation at position m can be defined as

f_{out}(m) = \sum_{h=1}^{K} \sum_{w=1}^{K} f_{in}(K(m, h, w)) \cdot w(h, w). (1)

Then, by redefining the sampling function K and the weighting function w, this convolution formula can be extended to a graph convolution formula. The extraction of local features of key points in space is completed through the graph convolution operation in the spatial domain. A temporal convolution network (TCN) is connected after the spatial-domain graph convolution to extract the local features of key points between adjacent frames, as shown in Figure 6. As shown in Figure 6, in time-domain graph convolution, each convolution operation is equivalent to completing the convolution over a node across t frames. It then moves to the next frames according to the step size, completes the convolution over all frames for this node, and then performs the convolution for the next node [22].
Sampling Function and Weight Function
For a two-dimensional convolution operation, the sampling function is defined on a pixel matrix centered at position x, with the filter as the region. Therefore, the sampling function can be defined as

K(x, h, w) = x + p'(h, w), (2)

where p'(h, w) is the offset from the center x to each filter position. On the graph, the sampling function is instead defined on the neighbor set B(p_{bm}) = \{p_{bn} \mid d(p_{bn}, p_{bm}) \le D\} of a node p_{bm}. The weight function w defines a filter similar to that of a 2D convolution. Each position in the two-dimensional convolution filter provides a weight value, so the weight function of graph convolution can be constructed in the same way, giving the weight function

\omega(p_{bm}, p_{bn}). (3)
Constructing Spatial Graph Convolution
By using the sampling and weighting functions defined in formulas (2) and (3), formula (1) can be reconstructed to obtain the convolution expression for the spatial graph:

f_{out}(p_{bm}) = \sum_{p_{bn} \in B(p_{bm})} \frac{1}{Z_{bm}(p_{bn})} f_{in}(K(p_{bm}, p_{bn})) \cdot \omega(p_{bm}, p_{bn}). (4)

After substituting formulas (2) and (3) into formula (4), the final graph convolution formula in space is obtained as

f_{out}(p_{bm}) = \sum_{p_{bn} \in B(p_{bm})} \frac{1}{Z_{bm}(p_{bn})} f_{in}(p_{bn}) \cdot \omega(p_{bm}, p_{bn}), (5)

where Z_{bm}(p_{bn}) is a normalization term that balances the contributions of the neighbor subsets. However, the skeleton graph sequence can only represent the node information of each frame in the video, that is, information in the spatial dimension; it cannot represent the coherence between video frames [23]. In this way, the human body posture in martial arts learning is modeled, and the simulation of joint points extends into space, so that the simulation of human body posture can achieve high accuracy. The spatiotemporal modeling of the video frame sequence is shown in Figure 7.
As shown in Figure 7, the spatiotemporal modeling in this paper obtains the spatiotemporal model by connecting the same nodes between adjacent skeleton graphs; it is a data model built from the spatiotemporal structure.
The significance of this model is that it explores the movement trajectories of the same joint points over time, so as to judge the behavior of the person [24, 25].
There are two kinds of edges in the constructed spatiotemporal model. One is the spatial edge formed by the natural connectivity between nodes in space, and the other is the connecting edge between the same node in adjacent frames in the time dimension.
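The sketch below shows one way such a spatiotemporal block could be written, assuming the adjacency-matrix form of the graph convolution (a common simplification of formula (5)) followed by a temporal convolution across frames. The tensor layout, normalization scheme, and kernel sizes are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class STGraphConv(nn.Module):
    """One spatiotemporal block in the spirit of formulas (1)-(5):
    a spatial graph convolution over the skeleton, followed by a
    temporal convolution (TCN) over the same joint across frames.
    Input x: (batch, channels, frames, joints); A: joint adjacency (V, V).
    """
    def __init__(self, in_ch, out_ch, A, t_kernel=9):
        super().__init__()
        # Symmetrically normalized adjacency with self-loops,
        # D^{-1/2} (A + I) D^{-1/2}, playing the role of 1/Z in formula (5).
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1).pow(-0.5)
        self.register_buffer("A_norm", d[:, None] * A_hat * d[None, :])
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # weight function w
        self.temporal = nn.Conv2d(out_ch, out_ch,
                                  kernel_size=(t_kernel, 1),
                                  padding=((t_kernel - 1) // 2, 0))

    def forward(self, x):
        x = self.spatial(x)                                 # per-joint transform
        x = torch.einsum("nctv,vw->nctw", x, self.A_norm)   # aggregate neighbors
        return self.temporal(x)                             # convolve along time
```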
Pose Estimation Algorithm Based on Coordinate Regression
DeepPose was one of the first methods to use coordinate regression in deep neural networks. It uses an end-to-end approach to predict the human body key points from the global perspective of the human body and then locates the key points from the features, which greatly simplifies key-point prediction. The pose estimation algorithm based on coordinate regression takes a whole image as the input of the model and uses a simple 7-layer convolutional neural network as the feature extraction network. Finally, a fully connected layer outputs a multidimensional vector of the corresponding coordinates, with each (x, y) pair representing the coordinates of one key point. If five key points are to be regressed in total, then the vector output by the network and the supervision vector are both of length 10.
For the posture of the human body, the joint points are the best way to measure an action, and the overall movement of the human body can be simulated through the movement of the joint points. Suppose the human body has k joint points, represented by a vector

z = (x_1, y_1, \dots, x_k, y_k). (6)

Represent the predicted pose vector, in absolute coordinates, as \hat{z} = \psi(x; \theta). The loss function used is the L2 loss, so the model can be written as

\theta^{*} = \arg\min_{\theta} \sum_{i} \lVert z_i - \psi(x_i; \theta) \rVert_2^2. (7)

In essence, the convolutional neural network based on coordinate regression regresses the offset of each key point from the image boundary. However, the information provided by this kind of supervision is relatively sparse, which slows the convergence of the entire network and leads to large errors in actual model training.
For convolutional neural networks, the parameter count of a single convolutional layer is

params_l = weights + bias = c^{l}_{in} \times k^{l}_{width} \times k^{l}_{height} \times c^{l}_{out} + c^{l}_{out}. (8)

In particular, the depthwise part of a depthwise separable convolutional layer has

params_l = c^{l}_{in} \times k^{l}_{width} \times k^{l}_{height} + c^{l}_{in}. (9)
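These counts can be checked directly against a deep-learning framework; the snippet below verifies formulas (8) and (9) for a standard and a depthwise convolutional layer with illustrative channel and kernel sizes.

```python
import torch.nn as nn

def count_params(layer):
    """Total number of trainable parameters in a layer."""
    return sum(p.numel() for p in layer.parameters())

# Standard conv, formula (8): c_in * k_w * k_h * c_out + c_out
conv = nn.Conv2d(16, 32, kernel_size=3)
assert count_params(conv) == 16 * 3 * 3 * 32 + 32

# Depthwise conv (groups = c_in), formula (9): c_in * k_w * k_h + c_in
dw = nn.Conv2d(16, 16, kernel_size=3, groups=16)
assert count_params(dw) == 16 * 3 * 3 + 16
```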
Each motion detection algorithm has its own characteristics. If background subtraction is used, a background model consistent with the actual background must be built; the difference operation can only be performed once a valid background model has been established. In order to find the region of interest, it is necessary to build a background model, and building a solid background model is the most important step of background subtraction.
Assuming that the background image model is f_d(t) and the current frame image is f_c(t), the image after the difference operation is

D(a, b, t) = \lvert f_c(a, b, t) - f_d(a, b, t) \rvert. (10)

On the premise that the scene environment is not too complicated, statistical filtering can be used to infer the background image. Adaptive background correction can be achieved by performing repeated averaging operations on the background image:

f_d(t + 1) = (1 - \alpha) f_d(t) + \alpha f_c(t), (11)

where the parameter \alpha controls the correction of the background image by the statistical average. If the moving object is not always present in the background image, better results and a more accurate background model can be obtained by properly selecting \alpha. The mean filter method is most often used to construct the background model:

f_d(a, b) = \frac{1}{N} \sum_{t=1}^{N} f_c(a, b, t). (12)

The premise of this algorithm is a memory space that can store N frames of video images.
In moving target detection, the biggest advantages of background subtraction are that the operation is simple, the implementation is straightforward, and the amount of calculation is small. Therefore, the goal of real-time detection can basically be achieved, and the moving target can be detected correctly.
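A minimal sketch of background subtraction with the running-average update of formula (11) and the difference test of formula (10) is given below; the learning rate and threshold are illustrative values, not ones reported in the paper.

```python
import numpy as np

def detect_moving(frames, alpha=0.05, thresh=25):
    """Background subtraction with a running-average background model.

    Implements the update rule B <- (1 - alpha) * B + alpha * f of
    formula (11) and yields a binary foreground mask per frame via the
    absolute difference of formula (10). alpha and thresh would be
    tuned to the scene.
    """
    background = frames[0].astype(np.float32)
    for f in frames[1:]:
        f = f.astype(np.float32)
        diff = np.abs(f - background)              # |f_c(t) - f_d(t)|
        mask = (diff > thresh).astype(np.uint8)    # threshold the difference
        # Statistical averaging keeps the model adapted to slow changes.
        background = (1 - alpha) * background + alpha * f
        yield mask
```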
After minimizing the above formula, the filter is obtained in the Fourier domain:

H = \frac{\bar{G} \odot F}{\bar{F} \odot F + \lambda}. (13)

During the training process, the numerator and denominator of this formula are maintained as a whole and optimized iteratively. After the training is completed, when a new image region z arrives, its discrete Fourier transform Z is first calculated, and the response score of this region is then obtained by

y = \mathcal{F}^{-1}(\bar{H} \odot Z), (14)

where \mathcal{F}^{-1} stands for the inverse Fourier transform; the location of the maximum of y gives the position of the tracked martial arts performer.
When estimating the scale of martial arts performers, the calculation is similar to the above, except that the position and scale dimensions are considered at the same time, and f is the feature region.
There are a total of d scale dimensions; h and g are defined as above but carry the additional scale dimensions. The loss function to be optimized is

\varepsilon = \Bigl\lVert \sum_{l=1}^{d} h^{l} \star f^{l} - g \Bigr\rVert^{2} + \lambda \sum_{l=1}^{d} \lVert h^{l} \rVert^{2}, (15)

where \lambda represents the regularization term. Solving in the Fourier domain gives

H^{l} = \frac{\bar{G} F^{l}}{\sum_{k=1}^{d} \bar{F}^{k} F^{k} + \lambda}. (16)
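The sketch below implements the closed form of formulas (15) and (16) for a multi-channel feature map, together with the response computation of formula (14). It follows the standard DSST/MOSSE-style correlation-filter recipe as an assumption about the paper's formulation; function names, the regularization value, and the simplification of folding the denominator into H are illustrative.

```python
import numpy as np

def train_filter(features, target, lam=0.01):
    """Closed-form correlation filter in the Fourier domain:
    H^l = conj(G) * F^l / (sum_k conj(F^k) * F^k + lambda), cf. formula (16).
    `features` is a list of d feature channels, `target` the desired
    Gaussian response g; all arrays share one 2-D shape.
    """
    G = np.fft.fft2(target)
    F = [np.fft.fft2(f) for f in features]
    denom = sum(np.conj(Fk) * Fk for Fk in F) + lam
    return [np.conj(G) * Fl / denom for Fl in F]

def respond(H, features):
    """Response map y = IFFT( sum_l conj(H^l) * Z^l ), cf. formula (14);
    the tracked position is the argmax of y."""
    Z = [np.fft.fft2(z) for z in features]
    y = np.fft.ifft2(sum(np.conj(Hl) * Zl for Hl, Zl in zip(H, Z))).real
    return np.unravel_index(np.argmax(y), y.shape)
```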
Martial Arts Education Enriches Physical Knowledge and Increases Physical Skills.
In the process of their development, martial arts have been influenced and nurtured by traditional Chinese medicine and health preservation. Much scientific knowledge and theory of traditional Chinese medicine and health preservation has become the guiding ideology of martial arts practice, and martial arts technical movements have also formed under the guidance of traditional Chinese medicine theory. These technical movements also correspond to modern scientific knowledge and theories. Martial arts education has always been a traditional fitness program and holds an important position among the elderly. With the development of martial arts culture, more and more young people love fitness sports and combine them with modern fitness theory. The effect of martial arts education on enriching physical knowledge and increasing physical skills is shown in Figure 8.
As shown in Figure 8, the proportion of experts who believe that martial arts training is very useful for enriching sports knowledge is 70.2%, indicating that most experts recognize the role of martial arts training in enriching sports knowledge. In terms of increasing physical skills, the percentage of experts who thought martial arts training is very useful for increasing physical skills was 83%, indicating that most experts recognize this role as well.
Martial Arts Education Develops Practitioners' Physical Strength and Develops Sports Habits.
Physical capacity refers to the bodily ability to perform a sport or activity. It includes the ability to carry out sports, occupations, and a range of other physical movements. It is not identical to the concept of physical fitness but focuses on the expression of the functional level of physical movement. In China, it includes physical qualities such as strength, speed, and coordination, as well as protective qualities such as adaptability, endurance, and immunity. Physical capacity is a reflection of individual physical fitness. In China, the measurement of physical fitness is also a test that every college student needs to pass. Physical capacity is also an important indicator of a person's potential and ability.
Through martial arts education, the functions of body organs and tissues can be trained. This helps develop physical qualities such as strength, speed, stamina, and flexibility and improves the body's defenses such as fitness and stamina. The role of martial arts training in developing physical habits and fitness is shown in Figure 9.
As shown in Figure 9, the proportion of experts who think that martial arts education plays a very large or relatively large role in developing sports habits is 83.0%, which indicates that most experts agree that martial arts education plays a role in developing sports habits. The proportion of experts who think that martial arts education plays a very large or relatively large role in developing physical strength is 78.7%, which shows that most experts approve of the role of martial arts education in developing physical strength.
Martial Arts Education Cultivates Self-Awareness and Independence
In terms of student self-control, martial arts education cultivates students' effective self-regulation. Martial arts education attaches great importance to the education of students' independence, autonomy, and self-discipline. It requires students to learn to think independently, rely on themselves, control themselves, persist, and motivate themselves, because only with these abilities and qualities can a student achieve the dream of becoming a martial arts master.
This long-term process of education and edification is conducive to students' effective self-regulation. The effect of martial arts education on developing self-awareness and fostering independence is shown in Figure 10. As shown in Figure 10, 70.2% of experts believe that martial arts education plays a very large or relatively large role in developing self-awareness and independence. This shows that most experts agree on the role of martial arts education in developing self-awareness and independence.
Results of the Impact of AI-Based Flipped Classroom Teaching on Students' Martial Arts Learning.
The students were divided into an experimental group and a control group for the martial arts teaching experiment. After the teaching experiment, the martial arts skills, theory, active participation attitude, learning attitude, self-inquiry ability, and analysis and problem-solving ability of the students in the two classes were tested and analyzed. In this paper, an independent-samples t test was carried out on the data obtained, and a paired t test was carried out on the data of the students in the two groups before and after the experiment. Table 1 shows the comparative analysis of the students' martial arts learning in the two classes before the experiment. As shown in Table 1, the students in the two classes completed a questionnaire on their learning situation before the experiment. The learning situation is analyzed mainly along four dimensions: students' learning interest, active participation attitude, self-inquiry ability, and problem-solving ability.
This paper analyzes the data obtained from the questionnaire. The results show that the P value for students' learning interest in the two classes is 0.648, the P value for independent inquiry ability is 0.945, the P value for active participation attitude is 0.077, and the P value for the ability to analyze and solve problems is 0.062.
The P values of the four dimensions are all greater than 0.05, and the data show that the learning situation of the students in the two classes was basically the same, with no significant difference. Table 2 shows the comparative analysis of the students' martial arts learning situation in the control class before and after the experiment.
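The two significance tests described above are standard; the snippet below shows how they could be run with SciPy on questionnaire scores. The score vectors are illustrative placeholders, not the study's data.

```python
from scipy import stats

# Illustrative score vectors; the real inputs are the questionnaire scores.
class_a_scores = [3.8, 4.1, 3.5, 4.0, 3.9]
class_b_scores = [3.7, 4.0, 3.6, 4.1, 3.8]
pre_scores  = [3.2, 3.5, 3.1, 3.6, 3.4]
post_scores = [4.1, 4.4, 3.9, 4.5, 4.2]

# Independent-samples t test: compare the two classes on one dimension.
t, p = stats.ttest_ind(class_a_scores, class_b_scores)

# Paired t test: the same students before vs. after the experiment.
t2, p2 = stats.ttest_rel(pre_scores, post_scores)

# P < 0.05 is read as a significant difference, P < 0.01 as a very
# significant difference, matching how the tables are interpreted.
print(f"between-class p={p:.3f}; within-class (paired) p={p2:.3f}")
```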
As shown in Table 2, the analysis results show that there are some changes in the control-class students' learning interest, active participation attitude, self-exploration ability, and analysis and problem-solving ability, but the changes are not large.
The P value for learning interest is 0.725, the P value for independent inquiry ability is 0.078, the P value for active participation attitude is 0.835, and the P value for analysis and problem-solving ability is 0.000. Except for the ability to analyze and solve problems, the P values of the remaining three dimensions are all greater than 0.05, indicating that there was no significant difference in the students' active participation attitude, learning interest, or self-inquiry ability before and after the experiment. The P value for analysis and problem-solving ability is less than 0.01, which means that there is a very significant difference in the problem-solving ability of the students in the control class.
It can be seen from the above results that the students in the control class have improved their ability to analyze and solve problems under the traditional classroom learning. Learning interest, active participation attitude, and self-inquiry ability did not improve. e reason for this result may be the traditional classroom teaching method is relatively boring and single. e way of class has not changed for so many years. During the class, the teacher will explain, demonstrate, and correct errors. Most of the time in the class is the teacher's demonstration and the students imitate the movements. e whole practice process loses interest. It cannot fully mobilize students' enthusiasm and interest in learning and cannot make students fall in love with martial arts. In traditional classrooms, teachers will organize students to practice in groups throughout the teaching process, and students will discuss and analyze in groups. For the movements that do not know, we collectively discuss and practice. erefore, this is what makes the students' analytical and problem-solving abilities improve. Table 3 shows the test of martial arts learning of students in the experimental class before and after the experiment. As shown in Table 3, the analysis results show that the values of students' interest in learning, active participation attitude, self-exploration ability, and analytical problemsolving ability have changed, and the changes have been large. e P value of the students' learning interest before and after the experiment was 0.001, the P value of active participation attitude was 0.001, the P value of active inquiry ability was 0.001, and the P value of analysis and problem-solving ability was 0.001. e P values of the four are less than 0.01, which means that the students in the experimental class have very significant differences in their active participation attitude, learning interest, self-inquiry ability, and analysis and problem-solving ability. After 4 weeks of experiments, it is shown that AI flipped classroom teaching can improve students' interest in learning martial arts, drive students' enthusiasm for learning, and cultivate students' ability to actively explore and solve problems. e reason for this result may be students preview the Taijiquan video uploaded by the teacher before class and learn about the history and culture of Taijiquan through online Taijiquan materials and links. Before class, they restrained themselves according to the teacher's requirements, conducted online classroom learning, discussed with their classmates and teachers about the problems they did not understand, and completed the homework. During class, students learn with their own preclass questions, and the teacher will focus on teaching the students' feedback. e difficult points before the class are solved by the teacher's explanation in the class, and their own practice is solved. After class, they review carefully. Students review the content of the previous class online and preview the new content of the next class. Before class, during class, and after class, students will be organized to discuss difficult points to improve students' ability to explore and solve problems by themselves. Table 4 shows the test table of students' martial arts learning situation in the two classes after the experiment.
As shown in Table 4, the analysis results show that the P value of students' learning interest in the two classes is 0.001, the P value of active participation attitude is 0.001, and the P value of independent inquiry ability is 0.001. These three P values are all less than 0.01, which means there is a very significant difference. The P value of analytical problem-solving ability is 0.014, which is less than 0.05, indicating a significant difference. From the above results, we know that the overall effect of the AI flipped classroom is better than that of the traditional classroom, and the AI flipped classroom can not only mobilize students' interest in learning but also improve students' active participation attitude, self-inquiry ability, and problem-solving ability. The AI teaching model can improve students' learning outcomes. Much of this is due to vivid and flexible teaching videos and online self-learning without time and geographical restrictions, which to a large extent stimulated students' enthusiasm for learning and improved their interest in learning.
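For concreteness, the kind of significance testing reported above can be reproduced with a short script. The following is a minimal sketch in R with simulated questionnaire scores; the class sizes and numbers are hypothetical, not the study's data:

# Within-class pre/post comparison and between-class post-test comparison,
# mirroring the P-value analysis reported above.
set.seed(1)
pre_exp   <- rnorm(30, mean = 3.0, sd = 0.5)            # experimental class, pre-test
post_exp  <- pre_exp + rnorm(30, mean = 0.6, sd = 0.3)  # experimental class, post-test
post_ctrl <- rnorm(30, mean = 3.1, sd = 0.5)            # control class, post-test

# Within-class change (paired) for one indicator, e.g. learning interest
t.test(post_exp, pre_exp, paired = TRUE)

# Between-class difference after the experiment (independent samples)
t.test(post_exp, post_ctrl)

# Conventional thresholds used in the text:
# p < 0.05 -> significant difference; p < 0.01 -> very significant difference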
Conclusion
According to the actual teaching situation, teachers should flexibly use as many teaching methods, learning organization forms, and teaching aids as possible. This helps to stimulate students' interest in learning, improve students' enthusiasm and initiative, and thus promote the teaching effect. The design of teaching methods is directly related to the teaching effect that can be achieved. At present, Wushu education relies more on interest as the driving force for learning; in the future, it should be standardized with systematic teaching theories, so that teaching methods and teaching effects can promote each other. In the past, teaching methods were mainly based on lectures and demonstrations, and the organizational form of learning was relatively fixed. There is method in teaching, but no fixed method. Martial arts teachers should actively seek teaching methods that are suitable for martial arts teaching and can stimulate the interest of learners. They can also innovate teaching methods and means that suit martial arts teaching and that students like, so as to stimulate students' interest, improve teaching efficiency, and ensure the realization of teaching purposes. At present, more electronic teaching methods can be used in martial arts teaching.
This teaching method can be used not only in technical teaching but also in traditional culture teaching and martial arts education. By watching videos of martial arts techniques, students can gain a deeper understanding of a technique and learn it faster and more systematically. Watching different martial arts competition videos can also stimulate interest in martial arts and broaden students' horizons. Allowing students to appreciate educational martial arts movies, animations, and the like can edify and inspire them, which is beneficial to moral education. The role of the teacher in the overall classroom design is crucial, and a large part of the students' learning effect depends on the way the teacher teaches. This is especially true for martial arts teachers: without a rich theoretical foundation, it is not effective to teach movements unilaterally.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
"Education",
"Computer Science"
] |
Production and Composition of Group B Streptococcal Membrane Vesicles Vary Across Diverse Lineages
Although the neonatal and fetal pathogen Group B Streptococcus (GBS) asymptomatically colonizes the vaginal tract of ∼30% of pregnant women, only a fraction of their offspring develops invasive disease. We and others have postulated that these dimorphic clinical phenotypes are driven by strain variability; however, the bacterial factors that promote these divergent clinical phenotypes remain unclear. It was previously shown that GBS produces membrane vesicles (MVs) that contain active virulence factors capable of inducing adverse pregnancy outcomes. Because the relationship between strain variation and vesicle composition or production is unknown, we sought to quantify MV production and examine the protein composition, using label-free proteomics on MVs produced by diverse clinical GBS strains representing three phylogenetically distinct lineages. We found that MV production varied across strains, with certain strains displaying nearly twofold increases in production relative to others. Hierarchical clustering and principal component analysis of the proteomes revealed that MV composition is lineage-dependent but independent of clinical phenotype. Multiple proteins that contribute to virulence or immunomodulation, including hyaluronidase, C5a peptidase, and sialidases, were differentially abundant in MVs, and were partially responsible for this divergence. Together, these data indicate that production and composition of GBS MVs vary in a strain-dependent manner, suggesting that MVs have lineage-specific functions relating to virulence. Such differences may contribute to variation in clinical phenotypes observed among individuals infected with GBS strains representing distinct lineages.
INTRODUCTION
Group B Streptococcus (GBS) is an opportunistic pathogen that asymptomatically colonizes ∼30% of women either vaginally or rectally (Verani et al., 2010). In individuals with a compromised or altered immune state, including pregnant women, neonates, the elderly, and people living with diabetes mellitus, GBS can cause severe infections (Verani et al., 2010). Presentation of disease is variable between individuals: in elderly patients and neonates, GBS infection typically presents as septicemia, whereas in pregnant women it more commonly causes chorioamnionitis, preterm birth, or stillbirth (Doran and Nizet, 2004;Edwards and Baker, 2005).
Despite the high prevalence of GBS colonization during pregnancy, only a fraction of babies born to colonized mothers develops an infection. In the United States, pregnant individuals colonized with GBS are given antibiotics to reduce the risk of neonatal GBS infection, but even without such prophylaxis most neonates born to GBS-colonized mothers remain infection-free (Aronoff and Blaser, 2020). The factors that determine whether a neonate develops GBS sepsis are incompletely understood, but evidence implicates bacterial strain variation as a key factor. For example, certain polysaccharide capsular serotypes of GBS cause perinatal infections much more commonly than others (Bianchi-Jassir et al., 2020).
Application of multilocus sequence typing (MLST) has also demonstrated that GBS isolates comprise multiple sequence types (STs) that are differentially correlated with disease outcomes (Jones et al., 2003). While ST-12 strains have been associated with asymptomatic colonization (Manning et al., 2008), ST-1 and ST-17 strains have been linked to invasive disease in adults and neonates, respectively (Jones et al., 2003; Poyart et al., 2008; Manning et al., 2009; Flores et al., 2015). Moreover, our group has previously shown that different STs interact variably with host cells. ST-17 strains, for instance, had an enhanced ability to attach to gestational tissues, elicited stronger proinflammatory responses, and could persist longer inside macrophages than other STs (Korir et al., 2014, 2017; Flaherty et al., 2019). Conversely, ST-12 strains were found to display increased tolerance to ampicillin relative to ST-17 strains (Korir et al., 2017), highlighting the divergence of these lineages and variation in the ability to withstand different stressors. The mechanisms underlying these strain-dependent differences, however, are poorly understood.
Many bacteria produce membrane vesicles (MVs) of varying sizes (20-500 nm) containing toxins and other virulence factors that can modulate immune responses and influence pathogenesis (Brown et al., 2015). In addition, GBS can produce MVs that have been implicated in infection risk, though this remains an area in need of more research (Surve et al., 2016;Armistead et al., 2021). While the exact role of GBS MVs in pathogenesis is not clear, intra-amniotic injection of GBS MVs produced by an invasive ST-7 strain induced preterm birth and intrauterine fetal death in mice (Surve et al., 2016). GBS MVs were also found to contain active virulence factors that could weaken murine gestational membranes, stimulate immune cell recruitment, and lyse host cells (Surve et al., 2016;Armistead et al., 2021). Hence, an important, unanswered question is whether MVs derived from strains belonging to distinct phylogenetic lineages and clinical sources vary in composition and pathogenic potential.
In this study, we sought to compare the quantity and protein composition of MVs produced by genetically distinct GBS strains and evaluate the relationships between proteomic profiles, strain characteristics, and clinical presentation. To accomplish these goals, we isolated MVs from six clinical strains representing three phylogenetic lineages (ST-1, ST-12, and ST-17) and used label-free proteomics to define the protein composition. Using this approach, we report that MV production and composition vary in a strain- and ST-dependent manner, highlighting the importance of strain diversity on pathogenicity and virulence.
Because we had no prior knowledge of MV production across lineages, these strains were selected based on molecular data as well as epidemiological and clinical associations described previously. The ST-17 strains, for instance, have consistently been associated with invasive neonatal disease (Jones et al., 2003; Manning et al., 2008; Poyart et al., 2008) and were more likely to persist in mothers following childbirth and intrapartum antibiotic prophylaxis (IAP) (Manning et al., 2008). ST-12 strains, however, were more common during pregnancy and more readily lost following IAP (Manning et al., 2008, 2009). Although ST-1 strains have been linked to invasive disease in adults (Flores et al., 2015), they were more commonly recovered from women during pregnancy than from neonates in our studies (Manning et al., 2008, 2009). It is also notable that the ST-1 neonatal strain GB37 has unique traits in that it is non-pigmented and non-hemolytic (Singh et al., 2016). This diverse set of strains with varying characteristics and epidemiological associations was chosen to maximize our ability to detect differences in MV production across strains.
Strains were cultured using Todd-Hewitt Broth (THB) or Todd-Hewitt Agar (THA) (BD Diagnostics, Franklin Lakes, New Jersey, United States) overnight at 37 °C with 5% CO2. For enumeration of colony forming units (CFUs), bacteria were serially diluted in phosphate buffered saline (PBS) and plated onto THA. Plates with 20-200 colonies were counted and the number of CFUs per mL was determined. Growth curves were performed by diluting overnight THB cultures 1:50 into fresh THB. Cultures were grown for 6 h with OD600 measurements taken hourly. Growth curves were performed in triplicate for each isolate.
Membrane Vesicle Isolation and Purification
The isolation and purification of MVs were performed as described (Chutkan et al., 2013; Klimentová and Stulík, 2015; Surve et al., 2016; Nguyen et al., 2021), with some modifications. Briefly, overnight THB cultures were diluted 1:50 into fresh broth and grown to late logarithmic phase (optical density at 600 nm, OD600 = 0.9 ± 0.05). Aliquots of culture were serially diluted and plated on THA for bacterial enumeration. Cultures were centrifuged at 2,000 × g for 20 min at 4 °C. Supernatants were collected and re-centrifuged at 8,500 × g for 15 min at 4 °C, followed by filtration through a 0.22 µm filter and concentration using Amicon Ultra-15 centrifugal filters (10 kDa cutoff) (Millipore Sigma, Burlington, MA, United States). Concentrated supernatants were subjected to ultracentrifugation for 2 h at 150,000 × g at 4 °C. For quantification, pellets were washed by resuspending in PBS, re-pelleting at 150,000 × g at 4 °C, and resuspending in PBS; pellets were stored at −80 °C until use.
For proteomics, pellets were resuspended in PBS and purified using qEV Single size exclusion columns (IZON Science, Christchurch, New Zealand) per the manufacturer's instructions. MV fractions were collected and re-concentrated using Amicon Ultra-4 centrifugal filters (10 kDa cutoff) (MilliporeSigma, Burlington, Massachusetts, United States) and brought to a final volume of 100 µL in PBS. To preserve the integrity of vesicle proteins, ProBlock Gold Bacterial Protease Inhibitor Cocktail (GoldBio, St. Louis, Missouri, United States) was added. MVs were stored at −80 °C until use.
Electron Microscopy
To visualize GBS and the MVs associated with each strain, scanning electron microscopy (SEM) was performed on bacterial cultures grown to stationary phase in THB. Culture aliquots were fixed in equal volumes of 4% glutaraldehyde in 0.1 M phosphate buffered saline (pH 7.4), placed on poly-L-lysine coated 12 mm coverslips, and incubated for 5 min. The coverslips were washed with water and dehydrated through increasing concentrations of ethanol (25, 50, 75, and 95%) for 5 min each, followed by three 5-min changes in 100% ethanol. Samples were dried in a Leica Microsystems (model EM CPD300) critical point dryer using liquid carbon dioxide as the transitional fluid. Lastly, samples were mounted on aluminum stubs using epoxy glue (System Three Quick Cure 5, System Three Resins, Inc., Lacey, Washington, United States) and coated with osmium (∼10 nm thickness) using a NEOC-AT osmium coater (Meiwafosis Co., Ltd., Tokyo, Japan). Imaging was performed using a JEOL 7500F scanning electron microscope.
To evaluate MV morphology and purity without contaminating extracellular components, transmission electron microscopy (TEM) was performed on purified vesicles as described (Nguyen et al., 2021). MVs were fixed in 4% paraformaldehyde, loaded onto formvar-carbon coated grids, and counterstained with 2.5% glutaraldehyde and 0.1% uranyl acetate in PBS. Samples were imaged using a JEOL 1400 Flash transmission electron microscope. For proteomics experiments, only preparations with a high concentration of MVs and minimal extravesicular contamination were included for downstream analyses. Each proteomics preparation was imaged with TEM prior to analysis to confirm the presence of spherical MVs.
Quantification of Vesicle Production
Nanoparticle tracking analysis was performed to quantify MVs produced by each strain (n = 8-9 replicates per strain) using a NanoSight NS300 (Malvern Panalytical, Westborough, MA, United States) equipped with an automated syringe sampler as described previously (Nguyen et al., 2019, 2021). For each sample, MVs were diluted in phosphate buffered saline (1:100-1:1,000) and injected at a flow rate setting of 50. Once loaded, five 20-s videos were recorded at a screen gain of 1 and a camera level of 13. After capture, videos were analyzed at a screen gain of 10 and a detection threshold of 4, and data were subsequently exported to a CSV file for analysis using the R package tidyNano (Nguyen et al., 2019). Total MV counts were normalized by dividing by the colony forming units (CFUs) of each original bacterial culture following growth to an OD600 of 0.9 ± 0.05. Differences in MV quantities were assessed using the Kruskal-Wallis test followed by a post hoc Dunn's test with a Benjamini-Hochberg correction. Outliers were identified as values falling more than 1.5 times the interquartile range below the lower quartile or above the upper quartile.
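This workflow can be illustrated with a short R sketch; the counts below are simulated, and only the strain labels are taken from the text:

# Normalize NanoSight particle counts to CFUs and test for strain
# differences (Kruskal-Wallis + Dunn's test with Benjamini-Hochberg),
# flagging outliers with the 1.5 x IQR rule described above.
library(FSA)  # provides dunnTest()

set.seed(1)
nta <- data.frame(
  strain   = factor(rep(c("GB00020", "GB00037", "GB1455"), each = 8)),
  mv_total = rlnorm(24, meanlog = 18, sdlog = 0.4),  # particles per culture
  cfu      = rlnorm(24, meanlog = 20, sdlog = 0.2)   # CFUs per culture
)
nta$mv_per_cfu <- nta$mv_total / nta$cfu

# 1.5 x IQR fences computed within each strain (1 = outlier, 0 = retained)
fences <- function(x) {
  q <- quantile(x, c(0.25, 0.75))
  x < q[1] - 1.5 * IQR(x) | x > q[2] + 1.5 * IQR(x)
}
nta$outlier <- ave(nta$mv_per_cfu, nta$strain, FUN = fences)

kruskal.test(mv_per_cfu ~ strain, data = nta)
dunnTest(mv_per_cfu ~ strain, data = nta, method = "bh")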
Proteomics and Genomics
Proteomic LC-MS/MS analysis of MVs was performed in duplicate or triplicate by the Proteomics Core at the Michigan State University Research Technology Support Facility (RTSF). Protein concentrations of purified MVs were determined using the Pierce Bicinchoninic Acid Assay (Thermo Fisher Scientific, Waltham, Massachusetts) supplemented with 2% SDS in water to reduce the background signal from excess lipids contained within the vesicles. MVs (1.5 µg) were concentrated into a single band in a 4-20% Tris-Glycine SDS-PAGE gel (BioRad, Hercules, CA) that was fixed and stained using colloidal Coomassie blue (Dyballa and Metzger, 2009).
Protein bands were excised from the gels and stored in 5% acetic acid at 4 °C. Prior to analysis, in-gel trypsin digestion and peptide extraction were performed. Briefly, gel bands were dehydrated twice using 100% acetonitrile and incubated with 10 mM dithiothreitol in 100 mM ammonium bicarbonate (pH ~8.0) at 56 °C for 45 min. Bands were incubated in the dark with 50 mM iodoacetamide in 100 mM ammonium bicarbonate for 20 min followed by another dehydration. Sequencing grade modified trypsin (0.01 µg/µL in 50 mM ammonium bicarbonate) was added to each gel band and incubated at 37 °C overnight. Peptides extracted by bath sonication (in 60% acetonitrile, 1% trichloroacetic acid solution) were vacuum dried and re-suspended (in 2% acetonitrile/0.1% trifluoroacetic acid) prior to separation using a Thermo ACCLAIM C18 trapping column. Peptides were sprayed onto a Thermo Fisher Q-Exactive HFX mass spectrometer for 90 min; the top 30 ions per survey scan were analyzed further using higher-energy collisional dissociation. MS/MS spectra were converted into peak lists using Mascot Distiller v2.7.0 and searched against a SwissProt database containing all GBS sequences available through the National Center for Biotechnology Information (NCBI; accessed 2/08/2019). Contaminants were identified and removed using the Mascot searching algorithm v2.7.0, while protein identities were validated using Scaffold v4.11.1. Raw proteomic data were submitted to the MassIVE database and can be accessed via <EMAIL_ADDRESS> or at doi: 10.25345/C5RC1H.
Whole-genome sequencing was performed previously on GB00020 (Parker et al., 2017) and GB00037 (Singh et al., 2016). These genomes were examined more comprehensively to confirm the presence of specific genes found to be absent in the proteomics analysis. Raw reads were trimmed using Trimmomatic 0.39 (Bolger et al., 2014) followed by an assessment of sequence quality using FastQC (Babraham Bioinformatics). De novo genome assembly was performed on high-quality paired-end reads using SPAdes 3.13.1 (Prjibelski et al., 2020). Assembly quality was assessed using QUAST 5.0.2. Protein sequences were downloaded from GenBank and aligned to assembled contigs using tblastn. Proteins with 90% identity or higher were considered present.
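The presence/absence step can be sketched as follows; the file names are hypothetical, and the tabular format is the standard BLAST+ -outfmt 6 output:

# Example command run beforehand (standard BLAST+ tabular output):
#   tblastn -query proteins.faa -subject assembly_contigs.fasta -outfmt 6 > hits.tsv

# Call a protein "present" if any alignment reaches >= 90% identity,
# per the rule described above.
cols <- c("qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")
hits <- read.table("hits.tsv", sep = "\t", col.names = cols,
                   stringsAsFactors = FALSE)

presence <- tapply(hits$pident, hits$qseqid, function(p) any(p >= 90))
head(presence)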
Data Analysis
To compare MV proteins between strains, proteomic data from all strains were compiled and normalized for interexperimental variability using Scaffold. Only proteins with a minimum of two identified peptides meeting a 1% false discovery rate and a 95% protein threshold were considered for downstream analysis. Proteins identified as contaminants (via the Mascot searching algorithm v2.6.0) were removed, whereas proteins identified in both replicates for at least one strain were classified as MV-associated. Subcellular localization analysis was performed using pSORTdb with protein localization data for GBS strain 2603V/R (downloaded from pSORTdb on 3/6/2021). Data visualization and statistical analyses were performed using R version 4.1.0. Principal component analysis (PCA) was performed and visualized using the prcomp and fviz_pca functions, respectively. Hierarchical clustering was performed using the pheatmap function and clustered using Euclidean distances. Shapiro tests were used to determine whether data followed a normal distribution, and a Student's t-test (two-sided) or the Kruskal-Wallis one-way analysis of variance (ANOVA), in combination with Dunn's post hoc test, was used to test for differences between groups. Multiple hypothesis testing was corrected using the Benjamini-Hochberg or Bonferroni correction when necessary.
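As a minimal sketch of the visualization steps (the spectral count matrix and ST labels below are simulated placeholders, not the study's data):

library(factoextra)  # fviz_pca_ind()
library(pheatmap)

# Rows = proteomic samples, columns = proteins, values = normalized spectral counts
set.seed(1)
counts <- matrix(rpois(12 * 200, lambda = 10), nrow = 12,
                 dimnames = list(paste0("sample", 1:12), paste0("prot", 1:200)))
st <- factor(rep(c("ST-1", "ST-12", "ST-17"), each = 4))  # hypothetical labels

# PCA on the spectral count matrix, colored by ST with confidence ellipses
pca <- prcomp(counts, scale. = TRUE)
fviz_pca_ind(pca, habillage = st, addEllipses = TRUE)

# Heatmap of ln-transformed counts; pheatmap clusters rows and columns
# by Euclidean distance by default
pheatmap(log1p(t(counts)), show_rownames = FALSE)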
Membrane Vesicles Are Produced by Different Group B Streptococcus Strains Representing Common Sequence Types
Prior to MV isolation, each strain was monitored for growth, which did not differ significantly throughout the logarithmic phase (Supplementary Figure 1). Although a slight decrease in OD600 was observed for GB1455 in early stationary phase, all strains reached late logarithmic/early stationary phase at an OD600 of 0.9. In addition, all strains displayed a similar length of each growth phase, suggesting minimal differences in growth dynamics.
To determine whether each of the six GBS strains could produce MVs, we first used SEM to examine bacterial cultures grown overnight to stationary phase (Figure 1). Visualization using SEM revealed abundant production of MVs by all six strains and showed that some MVs were closely associated with bacterial cells, as described in prior studies (Brown et al., 2015). Because these overnight cultures likely contain cellular debris as well as MVs, further confirmation was necessary to rule out extra-vesicular contamination. To limit the possibility of detecting debris in the MV preparations, we grew each of the six strains to late logarithmic phase at an OD600 of 0.9 ± 0.05 prior to MV isolation and purification. Imaging by TEM revealed that MVs were produced by all six strains. On average, they ranged in diameter between ∼50 and 100 nm and displayed a spherical morphology with a lipid bilayer and a slightly electron dense interior (noted by arrows in Figure 2). The MVs appeared similar to other bacterial-derived MVs described in the literature (Brown et al., 2015) and for GBS strain A909 (Surve et al., 2016).
The Level of Membrane Vesicle Production Differs Across Group B Streptococcus Strains
Because electron microscopy suggested differences in MV production, we used NanoSight analysis to quantify MV size and production. MVs from each of the six strains displayed a uniform size distribution, ranging between 100 and 200 nm (Figure 3A). Similar size distributions were also observed by ST. For MV quantification, total MV counts were normalized to the number of CFUs in the original bacterial cultures. Among the six strains, the average number of MVs/CFU was 0.108 with a range of 0.048-0.206 MVs/CFU; however, there was considerable variation between strains (Figure 3B).

FIGURE 2 | Transmission electron microscopy (TEM) of membrane vesicles (MVs) isolated from the six group B streptococcal strains. An abundance of spherical structures with a bright membrane bilayer and slightly electron dense interior was observed, ranging in size between ∼50 and 100 nm. The MV population was isolated from each strain following late logarithmic growth. MVs were purified using ultracentrifugation and size exclusion chromatography (2-3 replicates per strain). TEM images were taken at a magnification of 20,000×; the scale bars indicate a length of 200 nm.

In total, 643 proteins were detected across the six strains (Supplementary Table 1). Of note, the number of unique proteins varied by strain. MVs from ST-1 strains, for instance, had fewer unique proteins relative to the other STs, with an average of 281 proteins compared to 601 and 493 for the ST-12 and ST-17 strains, respectively. Regardless of ST, however, pSORTdb predicted numerous proteins to be membrane (12-17%) and cell wall (2-11%) localized, while 22-52% were predicted to be localized in the cytoplasm (Figure 4A). Although many proteins had a predicted subcellular localization, a large proportion of proteins had an unidentified or unpredicted subcellular localization.
Among the total proteins detected, 62 were found in all biological replicates for the six strains (Supplementary Table 2). These proteins did not vary in spectral abundance between STs and represent the shared MV proteome. Of these 62 proteins, 11 were highly abundant with a mean spectral count greater than 50 (Supplementary Table 3). Putative, uncharacterized transporters constituted many of these shared proteins, accounting for 39-44% of membrane protein spectral counts. In addition, 19-25% of spectral counts were predicted to have a membrane-associated subcellular localization (Figure 4B).
FIGURE 4 | Subcellular localization analysis of membrane vesicle (MV) proteomes. The subcellular localization of (A) all 643 MV proteins identified, and (B) a subset of 62 shared MV proteins identified using a pSORTdb database for published Streptococcus agalactiae sequences (accessed 3/3/21). Percentages were determined from mean spectral counts for a given sequence type (ST).

FIGURE 5 | Distribution of proteins detected in membrane vesicles (MVs) among six strains. An Upset plot was generated to show the distribution of all 643 proteins detected across the six GBS strains examined. The y-axis indicates the total number of proteins detected for a given set of strains. Protein presence is defined as having a non-zero spectral count for a given protein in at least one biological replicate for a specific strain. The matrix at the base of the plot shows the strains ordered vertically by sequence type (ST), with filled bubbles indicating which strains are positive for the number of proteins detected, and overlaid bars representing the number of shared proteins.

In other species, studies have demonstrated that MV composition can vary across strains, which could confer strain-specific functionality (Jeon et al., 2016; Tandberg et al., 2016). Therefore, we sought to determine how many of these proteins were strain-specific or shared among the six strains examined. Of all 643 proteins detected, 192 (29.9%) were detected in at least one biological replicate for all six strains regardless of the clinical phenotype or ST (Figure 5). This analysis enhanced our certainty that a protein was present in a given strain, while permitting us to compare its abundance across strains even if it was not detected. Notably, 124 (19.3%) proteins were shared by the four ST-12 and ST-17 strains but were absent in the ST-1 strains, suggesting that the ST-1 MVs have a unique protein composition. To determine whether these compositional differences were due to genome divergence, analysis of whole-genome sequencing data revealed that 122 of the 124 corresponding protein genes were present in the ST-1 genomes. Interestingly, the two proteins absent from these genomes were ARC24477.1 and ARC24478.1, encoding a CHAP-domain containing protein and an abortive phage resistance protein, respectively, both of which are located within a putative phage. Although a minor proportion of proteins were ST- or strain-specific, none were shared by all invasive or all colonizing strains.
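The presence rule used for Figure 5 can be reproduced with a short sketch; all data below are simulated, and the strain labels are placeholders:

# Derive presence/absence calls with the "non-zero spectral count in at
# least one biological replicate" rule, then draw an UpSet plot.
library(UpSetR)

set.seed(2)
n_prot  <- 50
strains <- paste0("strain", 1:6)  # hypothetical strain labels

# One proteins x replicates spectral count matrix per strain
reps <- lapply(strains, function(s)
  matrix(rpois(n_prot * 2, lambda = 0.8), nrow = n_prot))
names(reps) <- strains

# Presence = non-zero count in at least one replicate; UpSetR expects 0/1 columns
presence <- as.data.frame(
  sapply(reps, function(m) as.integer(rowSums(m > 0) > 0)))

upset(presence, sets = strains, order.by = "freq")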
Compositional Protein Profiles Differ Across Group B Streptococcus Membrane Vesicles
Given that differences in protein abundance were observed, we next considered the relationship between protein composition and strain characteristics. Rather than performing differential protein abundance analysis, we assessed whole-proteome composition using PCA (Figure 6). This method takes into consideration the spectral abundance of all proteins simultaneously, giving a more thorough evaluation of population-level changes in composition. In our analysis we found that the first two principal components accounted for a high proportion of the total variation (50.1%). Although the protein composition of MVs from invasive and colonizing strains overlapped, it was segregated by ST. Some overlap, however, was observed between the ST-12 confidence ellipse and those for other STs. No overlap was seen between the ST-1 and ST-17 strains, highlighting their distinct proteomes. This distinct clustering was not observed when the relationship between protein composition and clinical phenotype was analyzed (Supplementary Figure 3). Specifically, invasive and colonizing samples displayed a high degree of overlap with little to no separation of their respective confidence ellipses.

FIGURE 7 | Hierarchical clustering of membrane vesicle (MV) proteomes shows sequence type (ST) specific clustering. A heatmap was generated using hierarchical clustering with the pheatmap function in R, which uses Euclidean distance to cluster rows and columns with similar profiles. Individual rows represent a single accession number for an identified protein, with the color gradient of individual boxes corresponding to the natural log (ln) transformation of spectral counts for a given protein of interest. Columns represent a single proteomic sample, which are color coded by strain.
To confirm the PCA results, we then applied a hierarchical clustering algorithm to our dataset, which utilizes a different statistical assessment to evaluate the relationship between MV composition and various strain characteristics. Indeed, hierarchical clustering of the protein data further demonstrated that MVs from strains belonging to the same ST had similar protein profiles, forming distinct clusters by ST regardless of the clinical phenotype (Figure 7). For instance, proteins from the ST-12 and ST-17 strains formed a distinct branch in the phylogeny that was separate from the ST-1 proteins, thereby indicating that their protein composition was more similar to each other than to the ST-1 strains. This observation supports the PCA, showing a higher degree of overlap between ST-12 and ST-17 strains compared to ST-1 strains. Nonetheless, ST-12 and ST-17 strains were still distinguishable, with distinct nodes forming based on protein composition, indicating their divergent composition. This analysis provided additional confirmation that ST-1 strains lacked several proteins that were highly abundant in both the ST-12 and ST-17 strains. Similarly, though to a lesser degree than in the ST-1 MVs, several proteins that were highly abundant in the ST-17 strains were absent in the ST-12 strains.
Differential Abundance of Key Virulence Factors in Membrane Vesicles From Distinct Group B Streptococcus Strains
To determine which proteins contributed most to the segregation observed in the PCA as well as the hierarchical clustering analysis, we more thoroughly examined the 335 proteins that were significantly enriched in at least one ST (Supplementary Table 4). Notably, several purported virulence factors including the C5a peptidase, hyaluronidase, and sialidase were highly enriched in a ST-dependent manner (Figure 8). Both the hyaluronidase and C5a peptidase were significantly more abundant in the two ST-17 strains compared to the ST-1 and ST-12 strains, whereas the sialidase was detected at significantly higher levels in ST-1 vs. ST-12 strains.
Several proteins of unknown function were also among the most highly abundant and differentially enriched proteins detected. One hypothetical protein, for instance, was significantly more abundant in the ST-1 strains relative to strains representing the other two lineages (Figure 8). Similarly, another hypothetical protein was more abundant in the ST-12 strains (Supplementary Figure 4); however, considerable variation was observed across replicates. Numerous phage-associated proteins, including a holin and a capsid protein, were also detected and found to be more abundant in the ST-17 strains, along with several proteins associated with cell division (Supplementary Figure 5). For example, the average abundance of the cell division proteins FtsE, FtsQ, FtsZ, and FtsY was significantly greater in the two ST-17 strains compared to those from other lineages. Differences in proteins linked to cell wall modification, such as penicillin-binding proteins and capsule biosynthesis proteins, were also detected (Supplementary Figure 6).
DISCUSSION
Current knowledge regarding GBS-derived MVs is restricted to one clinical strain (Surve et al., 2016; Armistead et al., 2021); hence, we sought to examine MV production and composition in a set of clinical strains with different traits. While no clear association was observed between clinical phenotype and the production or composition of MVs, we have demonstrated that the GBS MV proteome is ST-dependent. The same was observed for MV production, though some variation was noted between strains of the same ST. Together, these data indicate that GBS MVs have strain-dependent functions that could impact survival in hosts, immunomodulation, and virulence.
This study expands our current knowledge of GBS MVs by highlighting their potential impact on virulence. Specifically, we demonstrated that GBS MVs have a high abundance of immunomodulatory virulence factors including C5a peptidase, hyaluronidase, and sialidase (Cheng et al., 2002;Kolar et al., 2015;Yamaguchi et al., 2016). The bifunctional C5a peptidase has been shown to interact with fibronectin and degrade the proinflammatory complement component (C5a) while simultaneously promoting bacterial invasion into host cells (Cheng et al., 2002;Kolar et al., 2015). MVs from both ST-17 (cpsIII) strains examined herein contained high levels of C5a peptidase, whereas ST-1 and ST-12 strains lacked this protein.
Intriguingly, ST-17 strains were previously shown to possess distinct virulence gene profiles as well as unique alleles of scpB encoding the C5a peptidase (Brochet et al., 2006; Springman et al., 2009), suggesting that ST-17 strains may be primed to cause invasive infections. This suggestion is in line with epidemiological data showing that ST-17 strains are important for invasive disease in adults and neonates (Jones et al., 2003; Manning et al., 2009; Flores et al., 2015) as well as mechanistic studies showing an enhanced ability to attach to gestational tissues, induce stronger proinflammatory responses, and persist inside macrophages (Korir et al., 2014, 2017; Flaherty et al., 2019). Nonetheless, it is important to note that our clinical definitions of "invasive" vs. "colonizing" strain types may not be representative of each strain population. Although strains isolated from an active infection clearly demonstrate "invasive" potential, it is possible that strains designated as "colonizing" could also cause an infection in specific circumstances and host environments.
Although sialidases have no known role in GBS pathogenesis (Yamaguchi et al., 2016), these proteins were shown to be immunomodulatory in other bacterial species (Aruni et al., 2011; Sudhakara et al., 2019) while simultaneously promoting biofilm production and metabolism of host sugars (Hardy et al., 2017; Zaramela et al., 2019). The presence and abundance of sialidase were variable: the ST-1 and ST-17 MVs all contained sialidase, but the ST-12 MVs lacked it. In two prior studies of GBS MVs produced by an ST-7 strain, A909, neither C5a peptidase nor sialidase was identified (Surve et al., 2016; Armistead et al., 2021), further highlighting differences across strains. However, we cannot rule out the possibility that the abundance of these virulence factors was beneath the detection limit in those studies. Similarly, a previous analysis of GBS MVs highlighted the importance of hyaluronidase (Surve et al., 2016). This immunomodulatory factor has previously been shown to promote ascending infection, degrade host extracellular matrix components, and dampen the host immune response (Kolar et al., 2015). While we also found high levels of hyaluronidase in ST-17 MVs, our results further show that the ST-12 and ST-1 MVs contained significantly lower amounts of this protein.
Additionally, the ST-1 strains lacked 124 proteins found in MVs from other lineages. Analysis of the ST-1 genomes detected the presence of the genes encoding 122 of these proteins, suggesting that lineage-specific composition is not due to genome divergence. Because we have previously shown that virulence gene expression in clinical isolates varies during infection of host cells (Korir et al., 2014), variable gene expression profiles could drive MV compositional differences. Alternatively, in the absence of varied gene expression, it is possible that there is strain-specific packaging of proteins within MVs. Further studies, however, are required to determine the mechanisms behind this altered composition.

FIGURE 8 | Highly abundant proteins are present at variable levels in group B streptococcal membrane vesicles (MVs). The spectral counts of specific proteins were plotted after stratifying by the sequence type (ST). The median spectral count associated with each ST is represented within each box. The black dots represent a single biological replicate for a given strain. Statistical comparison was performed using a Kruskal-Wallis test. Multiple pairwise comparisons were then made using the pairw.kw function in R, which uses a conservative Bonferroni correction method to correct for multiple hypothesis testing. Comparisons with p-values < 0.05 are denoted with an asterisk.
It is also notable that multiple uncharacterized and hypothetical proteins were detected. Previous reports have demonstrated that in gram positive species, roughly 30-60% of all MV proteins map to the cytoplasm (Lee et al., 2009;Cao et al., 2020). While our results are consistent with this observation showing ∼22-52% of all proteins mapping to the cytoplasm, roughly 25-41% of the GBS MV proteins had an unidentifiable subcellular localization. Similar trends of ST-dependent enrichment of several hypothetical proteins were observed, with these representing some of the most highly abundant proteins. Although some uncharacterized proteins, such as those classified as putative ABC transporters, have predicted functions, their role in vesicle function or virulence is currently unknown. Future analyses must be undertaken to identify which proteins play a role in MV associated pathogenesis.
Through this study, we have also identified a shared proteome among MVs from phylogenetically distinct GBS strains. In total, 62 proteins were consistently found within GBS MVs regardless of the ST. Indeed, over 17% of these shared proteins were highly abundant, indicating that they may be important for MV functionality. Even though many of these proteins have yet to be characterized, we identified an abundance of transporter proteins in MVs, suggesting a potential role in MV function. Some of these shared proteins may be of value as potential MV markers in future studies.
While various mechanisms have been proposed for the biogenesis of gram positive MVs, those important for GBS MV biogenesis are unclear (Brown et al., 2015; Briaud and Carroll, 2020). Our data demonstrate that diverse GBS strains produce MVs with consistent size distributions, indicating that GBS MV production is ubiquitous. Purported mechanisms of MV biogenesis in other pathogens include phage-mediated biogenesis (Toyofuku et al., 2017, 2019), membrane budding during division (Vdovikova et al., 2017), and cell wall remodeling (Brown et al., 2015; Wang et al., 2018). Consistent with these mechanisms, our proteomics analysis revealed the presence of phage-associated proteins, division septum-associated proteins and cell wall-modifying enzymes. Several of these proteins were also differentially abundant, with some proteins being more highly enriched in certain STs than others. For instance, phage proteins were enriched in ST-17 strains but were nearly absent in ST-12 and ST-1 strains. Although we observed similar enrichment of cell division proteins in ST-12 and ST-17 strains relative to the ST-1 strains, cell wall modifying proteins were most abundant in the ST-17 strains. Taken together, these data indicate that MVs are produced by diverse strains with varying traits; however, it is possible that the mechanisms of MV biogenesis are strain-dependent. Additional studies are needed to test this hypothesis.
Although our study has enhanced our understanding of the proteomic composition of GBS MVs, it has a few limitations. Because strains of each GBS lineage possess the same capsule (cps) type, it is difficult to differentiate between ST and cps effects. Another concern when dealing with MVs is the presence of non-vesicular contaminants. In some eukaryotic and prokaryotic systems where the composition of MVs is well defined, markers are used to assess purity (Sarker et al., 2014; Rompikuntal et al., 2015; Vanaja et al., 2016). Due to the relatively unknown composition of GBS MVs, however, we were unable to target specific markers to evaluate the purity. Rather, we relied on size exclusion chromatography followed by TEM to further remove non-vesicular proteins from each MV preparation. While some contaminant proteins are likely present, the purity of our preparations exceeds that achieved in prior GBS studies (Surve et al., 2016; Armistead et al., 2021) and mimics protocols optimized for removing extravesicular macromolecules from Gram positive MVs (Surve et al., 2016; Dauros Singorenko et al., 2017; Mehanny et al., 2020; Armistead et al., 2021). Indeed, studies in Staphylococcus aureus and Streptococcus mutans have confirmed the presence of similar proportions of cytoplasmic and extracellular proteins within MVs (Lee et al., 2009; Cao et al., 2020). The MV isolation method used herein is standard for the field; however, it is important to note that the isolation is not complete, as a small proportion of MVs can remain associated with the bacterial surface post-isolation. Current protocols to isolate surface-associated MVs remain limited. Because our protocol was consistent across all production and proteomics experiments, the data could be directly compared across strains, thereby greatly enhancing our understanding of GBS MV composition across strains. Although other macromolecules have also been detected within GBS MVs (Surve et al., 2016), it is not clear whether these macromolecules have a ST-dependent composition and hence, further studies are warranted.
In summary, this analysis of GBS MVs from strains representing three phylogenetically distinct lineages demonstrates strain-dependent composition and production of MVs. Our data further show that MVs carry known virulence factors as well as proteins of unknown function that vary in abundance between strains, suggesting they may have an altered functionality or ability to promote virulence. Follow up studies elucidating virulence and immunomodulatory properties of GBS MVs isolated from a larger and more diverse strain collection are therefore warranted, particularly given the high level of variation in protein composition observed among only these six strains. Taken together, these findings further highlight the importance of strain variation in GBS pathogenesis and shed light on the potential role of MVs in virulence.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
AUTHOR CONTRIBUTIONS
CM, MGP, and SM designed the study. CM performed the laboratory work, conducted the analysis, and drafted the manuscript. MEP performed genome assembly and assisted with gene extraction analysis. MGP, SM, DA, and JG provided institutional support, guidance and resources. All authors contributed to and approved of the manuscript content.
FUNDING
This work was funded by the National Institutes of Health (NIH; AI154192 to SM and MGP) with additional support provided by AI134036 to DA, HD090061 to JG and BX005352 from the Office of Research, Department of Veterans Affairs. Graduate student support for CM was provided by the Reproductive and Developmental Science Training Program funded by the NIH (T32 HDO87166) as well as the Eleanor L. Gilmore Endowed Excellence Award.
"Biology"
] |
Opto-Nanomechanics Strongly Coupled to a Rydberg Superatom: Coherent vs. Incoherent Dynamics
We propose a hybrid optomechanical quantum system consisting of a moving membrane strongly coupled to an ensemble of N atoms with a Rydberg state. Due to the strong van der Waals interaction between the atoms, the ensemble forms an effective two-level system, a Rydberg superatom, with a collectively enhanced atom-light coupling. Owing to this collective enhancement, strong coupling between the membrane and the superatom is feasible for parameters within the range of current experiments. The quantum interface coupling the membrane and the superatom can be a pumped single-mode cavity, or a laser field in free space where the Rydberg superatom and the membrane are spatially separated. In addition to the coherent dynamics, we study in detail the impact of the typical dissipation processes, in particular the radiative decay as a source of incoherent superpositions of atomic excitations. We identify the conditions to suppress these incoherent dynamics and thereby a parameter regime for strong coupling. The Rydberg superatom in this hybrid system serves as a toolbox for the nanomechanical resonator, allowing for a wide range of applications such as state transfer, sympathetic cooling and non-classical state preparation. As an illustration, we show that a thermally occupied membrane can be prepared in a non-classical state without the necessity of ground state cooling.
Introduction
The remarkable experimental developments of Cavity Optomechanics have essentially been based on linear coupling of the mechanical oscillator to the light field. This includes the preparation of mechanical resonators [1,2,3,4] in both classical and non-classical states via sideband cooling, the observation of coherent coupling with light [5,6,7], and displacement detection at the standard quantum limit [8,9]. A main challenge in the field of opto-nanomechanics remains achieving non-linearities at the single-phonon level, and strong coupling of the mechanical resonator to a two-level system in particular [10]. Such nonlinearities are key to generating entangled states and can be utilized for enhanced readout, quantum information processing and teleportation [11,12,13].
In the present work we will study the coupling of a nanomechanical oscillator via light to a Rydberg superatom representing a two-level system. We will show that the strong coupling regime can be reached in this setup in the sense of a Jaynes-Cummings model [14], where coherent couplings between the oscillator and the atom dominate dissipative effects. Our particular setup is motivated both by recent advances in experimentally realizing hybrid systems of nano-mechanical oscillators coupled to cold trapped atoms in experimentally compatible setups [15,16,17] (see also [18,19,20,21]), as well as the remarkable experimental achievements in realizing Rydberg superatoms with cold atomic ensembles [22,23,24]. A Rydberg superatom consists of an ensemble of N cold atoms, which are excited by light to the Rydberg state, where the dipole blockade mechanism based on the strong van der Waals interaction between the Rydberg levels allows only a single collective excitation in the whole ensemble, thus forming an effective two-level system [22,23,24]. The collective nature of the excitation representing the superatom leads to a coupling to the light field enhanced by a factor of √N. In our proposal we exploit this enhanced coupling to achieve the strong coupling limit of Cavity Optomechanics with the Rydberg superatom via light field interactions. While the collective resonator-superatom coupling benefits from the √N enhancement, we show that in an appropriate parameter regime the dissipation (spontaneous emission of the atoms) scales only with the single-particle decay rate.
The paper is organized as follows. In Sec. 2 we provide an overview introducing the two model systems of interest. We first describe the conceptually simple setup, where both the mechanical resonator and the atomic ensemble are placed inside a single cavity, cf. Fig. 1(a), and then describe an alternative setup, where the mechanical resonator is coupled to a distant cloud of atoms (compare [18,19,20,21]), cf. Fig. 1(c-d). For the cavity-mediated case, we discuss in detail the incoherent and coherent parts of the dynamics in a microscopic model, cf. Sec. 3. Writing the dynamics in a collective basis in Sec. 4, we discuss an effective model and show that the strong coupling regime can be reached for experimentally accessible parameters. The alternative setup, in which the mechanical resonator and the atomic ensemble are spatially separated, is described in Sec. 5, where the derivation is summarized. Both setups make it possible to utilize the Rydberg superatom as a toolbox for optomechanical experiments in the strong coupling regime, cf. Sec. 6. We discuss sympathetic cooling, state transfer and the preparation of non-classical states for a membrane coupled to a heat bath, before we conclude the paper in Sec. 7.
Overview
Before giving details of the proposed setup we summarize in this section the important results and main features of the derivation to strongly couple nanomechanics to a Rydberg superatom.
The main goal is to realize a nonlinear interaction between a moving membrane and a two-level system in the strong coupling regime. This has applications such as non-classical mechanical state preparation. The coherent part of the dynamics is governed by a Jaynes-Cummings type of interaction,

H_eff = ω_m b†b + (ω_s/2) σ_z + G_eff (σ_+ b + σ_− b†),    (1)

where b (b†) is the annihilation (creation) operator of the mechanical mode of the membrane with frequency ω_m, and σ_±, σ_z are the Pauli operators for a two-level system with frequency ω_s. G_eff is the coupling constant and governs the excitation transfer between the membrane and the two-level system. In addition to the coherent dynamics we also have dissipative dynamics such as phonon heating and radiative decay of the excited atomic states. The complete dynamics of the system is governed by the following master equation:

ρ̇ = −i [H_eff, ρ] + (γ/2)(N_m + 1) D[b]ρ + (γ/2) N_m D[b†]ρ + (Γ/2) D[σ_−]ρ.    (2)

Here, the decay of the mechanical mode γ and of the two-level system Γ are written in Lindblad form, i.e. D[A]ρ := 2AρA† − {A†A, ρ}, and N_m denotes the thermal occupation of the mechanical bath. In order to reach the strong coupling regime, i.e. G_eff ≫ γ, Γ, we propose to use a Rydberg superatom as the two-level system, since it provides a strongly enhanced atom-light coupling [25,24]. Here, the question arises whether the benefit of the enhanced coherent dynamics is diminished by equally enhanced dissipative dynamics. A Rydberg superatom consists of an ensemble of N atoms with highly excited Rydberg states. These Rydberg states interact via a strongly repulsive van der Waals potential. As a consequence, if a single Rydberg excitation is present, neighbouring Rydberg states are shifted out of the laser resonance and further Rydberg excitations are suppressed within a so-called Rydberg blockade radius [22,23,24]. We are interested in a situation where this radius is larger than the size of the atomic ensemble, so as to allow only a single Rydberg excitation. In this limit a strong non-linearity is created via the enhancement of the Rabi frequency by the square root of the number of atoms in the ensemble. For the resonance conditions identified below, we find that only the coherent dynamics benefits from the collective enhancement and thereby dominates the incoherent part. Hence, the atomic ensemble can be described as an effective two-level system [26,24].
The setup we have in mind can be implemented in two different ways. One possibility is to consider both the mechanical oscillator and the Rydberg superatom inside a high finesse cavity that mediates the interactions, cf. Sec. 3 and 4 and Fig. 1(a). An alternative setup is discussed in Sec. 5 and depicted in Fig. 1(c-d). It consists of a mechanical oscillator inside a cavity [Harris paper] coupled via a mediating laser to a distant atomic Rydberg ensemble. This proposal thereby opens the possibility of strongly coupling a nanomechanical oscillator to a Rydberg superatom, which then serves as a toolbox for optomechanical experiments in the non-classical regime, cf. Sec. 6 .
In the following we start with a microscopic model for the full system. Subsequently, we derive the effective Jaynes-Cummings type dynamics given in Eq. (2).
Model
In this section we discuss the details of the setup as depicted in Fig. 1(a). It consists of a moving membrane, a high finesse cavity and an ensemble of N atoms (labelled i) with a Rydberg state |r⟩_i. We assume the Rydberg state is excited in a two-photon process from the ground state |g⟩_i via an intermediate excited state |e⟩_i (see inset of Fig. 1(a)) [25,24]. The full Hamiltonian of the setup reads

H = H_0 + H_int,    (3)

where H_int consists of the different interaction Hamiltonians and H_0 is the free evolution given by (ħ = 1)

H_0 = ω_m b†b + ω_0 a†a + ω_p a_p†a_p + Σ_i ( ω_ge |e⟩_i⟨e|_i + ω_gr |r⟩_i⟨r|_i ),    (4)

with the membrane phonon annihilation operator b (frequency ω_m), the cavity photon annihilation operator a (frequency ω_0), the Rydberg state |r⟩_i (transition frequency ω_gr) and the intermediate state |e⟩_i (transition frequency ω_ge), where the ground state energy is set to zero. In addition we have introduced an auxiliary cavity mode a_p (frequency ω_p), which is needed to enhance the optomechanical coupling without driving the atomic ensemble, as discussed below. The interaction between the moving membrane and the atomic ensemble of Rydberg atoms is mediated by the cavity and the laser field, see Fig. 1(b), such that the full interaction Hamiltonian reads

H_int = H_m-c + H_at-l + H_d-d,    (5)

where H_m-c describes the interaction between the membrane and the cavity, and H_at-l includes the interaction of the atomic ensemble with the cavity and with an external laser field. Finally, there is the dipole-dipole interaction H_d-d between the atoms in the ensemble. In the following we give details of these interaction Hamiltonians.

Figure 1 caption (continued): (b) Coupling mechanism: the membrane couples via the cavity to internal states of the atomic ensemble. (c) Modular long-distance setup of a membrane inside a cavity coupled to an ensemble of Rydberg atoms. The membrane can be kept in a cryogenic environment, and the atoms at a distance in a vacuum chamber, cf. [21]. (d) Advancement: cavity-enhanced long-distance coupling, cf. [17].
Membrane-Cavity Interaction
The membrane-cavity interaction is described by the general expression for the radiation pressure Hamiltonian [27] plus an additional driving field with amplitude E_p,

H_m-c = g_0 (a† a_p + a_p† a)(b + b†) + E_p ( a_p† e^{−iω_L^m t} + a_p e^{iω_L^m t} ),    (6)

with the radiation pressure force constant g_0, which is typically small. In order to enhance the coupling without resonantly driving the atomic ensemble, we propose to pump the auxiliary cavity mode a_p with an external laser (frequency ω_L^m) and assume this cavity mode to be detuned from the atomic transitions.
In the limit of an intense pumping field E_p the auxiliary cavity mode is in a coherent state, a_p → α + a_p, such that the radiation pressure coupling in Eq. (6) can be linearized [28]. The linearized membrane-cavity coupling in the rotating wave approximation then reads

H_m-c = G ( a† b + a b† ),    (7)

with G = α g_0 (α ∝ E_p) as the enhanced membrane-cavity coupling. A detailed derivation of this beam splitter interaction Hamiltonian is given in Appendix A.
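For orientation, the linearization step can be sketched as follows (a sketch assuming the cross-mode form of Eq. (6) reconstructed above; the careful treatment is deferred to Appendix A). Displacing the pumped mode, a_p → α + a_p with real α and |α| ≫ 1, and keeping the dominant term of order α gives

g_0 (a† a_p + a_p† a)(b + b†) → α g_0 (a + a†)(b + b†) + O(g_0) ≈ G (a† b + a b†),

where the counter-rotating terms a†b† and ab oscillate rapidly in the rotating frame and are dropped in the rotating wave approximation, and the residual term of order g_0 ≪ G is neglected.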
In the following we replace H_m-c by the linearized Hamiltonian in Eq. (7), such that the interaction Hamiltonian reads

H_int = G ( a† b + a b† ) + H_at-l + H_d-d.    (8)
Atom-Light Interaction
Benefitting from the collective enhancement of the cavity-superatom coupling is essential for the proposal. To maximize this effect we choose to couple the transition between the ground state |g⟩_i and the intermediate excited state |e⟩_i to the cavity, and to drive the transition from the intermediate excited state to the Rydberg state |r⟩_i with an external laser, cf. Fig. 1(a). We choose this particular setup since the external laser field can be adjusted in its intensity to compensate for the decrease of the dipole strength (∼ ν^{−3/2}) with increasing effective quantum number ν of the Rydberg state [29].‡ The atom-light interaction Hamiltonian in the dipole and rotating wave approximations reads

H_at-l = Σ_i ( g_i a |e⟩_i⟨g|_i + Ω e^{−iω_L t} |r⟩_i⟨e|_i + h.c. ),    (9)

where the atom-cavity coupling is denoted by g_i and the amplitude of the external laser by Ω (frequency ω_L). Note that g_i depends on the position of the atoms; here, we assume that the positions of the atoms do not change on the timescale of the system dynamics.

‡ ν := n − δ(n) includes the principal quantum number n and the quantum defect δ(n).
Atom-Atom Interaction
We take the dipole-dipole interaction between the highly excited Rydberg states into account to render an effective two-level system with the atomic ensemble:

H_d-d = Σ_{i<j} ∆_R^{ij} |r_i r_j⟩⟨r_i r_j|,    (10)

where |r_i r_j⟩ is the doubly excited Rydberg state and ∆_R^{ij} is the induced level shift, which depends on the interatomic distance and the type of Coulomb interaction, e.g. ∆_R^{ij} := −C_6/|r_i − r_j|^6 in the van der Waals regime [25] with C_6 ∝ ν^11. An important consequence of the level shift is the Rydberg blockade mechanism, i.e. the level shift prevents multiple Rydberg excitations within a Rydberg blockade radius R_b ∝ C_6^{1/6}. Typically, the blockade radius is on the order of microns. Here we assume that the Rydberg shift is large enough to allow only a single Rydberg excitation in the system (∆_R^{ij} ≫ Ω), cf. Fig. 1(a).
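For orientation, the blockade radius can be estimated with a rough sketch (the precise condition depends on which linewidth or drive strength is relevant) by equating the van der Waals shift with the drive strength,

|∆_R(R_b)| = C_6/R_b^6 = Ω  ⟹  R_b = (C_6/Ω)^{1/6},

so that for two atoms separated by less than R_b the doubly excited state is shifted out of resonance and double excitation is suppressed, consistent with the single-excitation condition ∆_R^{ij} ≫ Ω used here.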
3.4. Dissipation Processes
Apart from the coherent excitation transfer governed by the Hamiltonian in Eq. (3), various dissipation processes contribute to the incoherent dynamics. We include dissipation by considering the following master equation:

ρ̇ = −i[H, ρ] + (N_m + 1) D[J_b]ρ + N_m D[J_b†]ρ + D[J_a]ρ + Σ_i ( D[J_e^i]ρ + D[J_r^i]ρ ),   (10)

with the Lindblad form D[J]ρ := JρJ† − ½(J†Jρ + ρJ†J). The second and third contributions in Eq. (10) correspond to the coupling of the membrane to a bath of finite temperature: the membrane undergoes Brownian motion, which leads to a temperature-dependent finite phonon lifetime. In the Markov approximation, the phonon decay can be expressed via the jump operator J_b := √γ_m b with decay rate γ_m and thermal occupation of the mechanical mode N_m ≈ k_B T/ω_m [28]. The fourth contribution denotes the cavity decay with jump operator J_a := √κ a and cavity photon decay rate κ. Finally, the excited states in the atomic ensemble decay radiatively. This spontaneous emission process is described by J_e^i := √Γ_e |g⟩_i⟨e|_i for the decay of the intermediate excited state with rate Γ_e, and by J_r^i := √Γ_r |g⟩_i⟨r|_i for the decay of the Rydberg state [30].§ The cascaded Rydberg decay to the ground state is modelled as an effective single decay rate [32]. Furthermore, our model ignores black-body radiation and superradiance effects [33,34].

§ We assume identical radiative decay constants for the individual atoms. Collective enhancement factors of the radiative decay constants do not change the order of magnitude in typical Rydberg ensembles, cf. [30,31].
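A minimal sketch of this master equation for a toy system (one membrane mode, one cavity mode, and a single three-level atom) is given below, using QuTiP's Lindblad solver with the jump operators named above. All couplings and rates are illustrative assumptions, not the paper's values.

```python
# Toy version of the master equation (10): membrane + cavity + one
# three-level atom, with the jump operators J_b, J_a, J_e, J_r from
# the text. Rates and couplings are placeholders.
import numpy as np
from qutip import destroy, qeye, tensor, basis, mesolve

Nf = 4
g_, e_, r_ = [basis(3, k) for k in range(3)]
b = tensor(destroy(Nf), qeye(Nf), qeye(3))            # membrane
a = tensor(qeye(Nf), destroy(Nf), qeye(3))            # cavity
sge = tensor(qeye(Nf), qeye(Nf), g_ * e_.dag())       # |g><e|
sgr = tensor(qeye(Nf), qeye(Nf), g_ * r_.dag())       # |g><r|
ser = tensor(qeye(Nf), qeye(Nf), e_ * r_.dag())       # |e><r|

G, g, Om = 1.0, 1.0, 1.0
gamma_m, N_m, kappa, Gamma_e, Gamma_r = 0.01, 2.0, 1.0, 3.0, 0.003

H = (G * (a.dag() * b + a * b.dag())                  # linearized optomechanics
     + g * (a * sge.dag() + a.dag() * sge)            # cavity drives g <-> e
     + Om * (ser.dag() + ser))                        # laser drives e <-> r

c_ops = [np.sqrt(gamma_m * (N_m + 1)) * b,            # J_b: phonon decay
         np.sqrt(gamma_m * N_m) * b.dag(),            # thermal heating
         np.sqrt(kappa) * a,                          # J_a: cavity decay
         np.sqrt(Gamma_e) * sge,                      # J_e: |e> decay
         np.sqrt(Gamma_r) * sgr]                      # J_r: Rydberg decay

psi0 = tensor(basis(Nf, 1), basis(Nf, 0), g_)         # one phonon initially
out = mesolve(H, psi0, np.linspace(0, 5, 100), c_ops, [b.dag() * b])
print("final phonon number:", out.expect[0][-1])
```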
So far we have considered an ensemble of N three-level atoms. In the following section we proceed by transforming from the microscopic description of single atoms to the macroscopic description of the Rydberg superatom. To this end, we first introduce a collective basis and subsequently eliminate the intermediate state as well as the cavity degrees of freedom. As a result, we find that the effective dynamics can be described by a Jaynes-Cummings type of interaction.
4. Superatom Picture: Collective Dynamics
In this section we recast the previous microscopic description of our atomic ensemble and introduce a collective description. In doing so we first describe the dynamics on the atomic side and formulate conditions and limits in which we obtain an effective two-level description of the ensemble: the superatom. In particular, we discuss dissipation processes within the atomic ensemble, which lead to population of undesired non-symmetric collective states. We find that in the regime identified below only the coherent dynamics (g_i → √N g_i) benefits from the collective enhancement, in contrast to the radiative decay.
As a second step, we formulate an effective superatom-membrane interaction. In doing so, we eliminate the intermediate excited state of the atoms as well as the cavity degree of freedom leading to an effective Jaynes-Cummings type of interaction between the Rydberg superatom and the membrane as given in Eq. (2) in the overview. In the last part of this section we discuss the strong coupling conditions and show that the strong coupling regime can be reached within experimentally accessible parameters.
4.1. Collective Basis
In the following we introduce the collective basis by first discussing the coherent part of the dynamics, see Fig. 2(a). In a second step, we extend the discussion to incoherent population transfer and show how dissipative processes lead to population of non-symmetric states, cf. Fig. 2(a).
4.1.1. Coherent Dynamics

The coherent excitation dynamics are governed by the Hamiltonian in Eq. (3). We assume that all atoms are initially in the ground state |G⟩ := |g_1 ... g_N⟩. The cavity photons then excite the atomic ensemble with coupling strength g to symmetric superpositions |E_j⟩ of intermediate excited states (Eq. (11)), with normalization N_E^j := N!/(j!(N − j)!) [23]. From these intermediate excited states the atomic ensemble is driven by the external laser with Rabi frequency Ω to the collective Rydberg states |E_j R⟩ (Eq. (12)). Here, the |E_j R⟩ are the symmetric superpositions of all collective states with one atom excited to the Rydberg state, while the other atoms are either in the ground or the intermediate excited state. Since we assume a blockade radius larger than the size of the atomic ensemble, only a single Rydberg excitation can exist within the ensemble, see Sec. 3.3.
We assume a setup where the atoms couple equally to the cavity mode, i.e. g_i = g. Experimentally, this can be achieved by positioning the atoms inside the cavity via state-of-the-art trapping techniques, as in Ref. [35].

[Figure 2: (a) The radiative decay of the intermediate state as an example of a mechanism which transfers population from the symmetric subspace to the non-symmetric subspace; the radiative decay from non-symmetric (symmetric) states is denoted with a dotted (solid) arrow. (b) A quantum Monte Carlo trajectory for N = 9 three-level atoms; jumps due to the radiative decay from symmetric states (red, black) lead to an increase of excitations in the non-symmetric subspace (green) for tG_eff > 5. We plot ρ_GG := ⟨G|ρ|G⟩ and ρ_RR.]

In the case of purely coherent dynamics, only the symmetric states, Eqs. (11)-(12), are populated by the interaction Hamiltonian. However, due to dissipation processes non-symmetric states also become populated, as discussed in the following.

4.1.2. Incoherent Dynamics

In Fig. 2(a), we illustrate how dissipative processes transfer population from the symmetric subspace (left side) to the non-symmetric subspace (right side). The non-symmetric subspace consists of all states that are not permutation invariant, such as |E_1^a⟩ := (|e_1 g_2⟩ − |g_1 e_2⟩)/√2 in the case of two atoms. To give an instructive example of how these dissipation processes lead to population of non-symmetric states, we consider the radiative decay from the intermediate excited state (rate Γ_e) in the case of an ensemble of two atoms. Non-symmetric states are populated by this process, since the radiative decay acts on the individual atoms and not collectively on the whole ensemble.
Beginning with both atoms in the ground state |G⟩, the atom-cavity coupling creates a superposition state |E_1⟩ = (|g_1 e_2⟩ + |e_1 g_2⟩)/√2 with one atom excited to the intermediate state, see Sec. 4.1.1. This superposition state can either decay back to the ground state by spontaneously emitting a photon (with probability proportional to the radiative decay constant Γ_e), or it is driven by the external laser to a superposition state with a single Rydberg excitation, see Fig. 2(a). Then, if the atomic ensemble absorbs another cavity photon, a doubly excited state |E_1 R⟩ = (|e_1 r_2⟩ + |r_1 e_2⟩)/√2 is created, with one excitation each in the intermediate and the Rydberg state. This state either couples via the cavity interaction back to the symmetric collective state |E_0 R⟩ or decays radiatively. Considering the latter case, we find that the radiative decay of the intermediate excited state of, e.g., the first atom, J_e^1 = √Γ_e |g⟩_1⟨e|_1, leads to a single Rydberg excitation |g_1 r_2⟩. Rewriting this state in the collective basis yields |g_1 r_2⟩ = (|E_0 R⟩ − |R_a⟩)/√2, which corresponds to a superposition of a symmetric and a non-symmetric state, see Fig. 2(a). The latter is defined by |R_a⟩ := (|r_1 g_2⟩ − |g_1 r_2⟩)/√2. As an illustration, we give in Fig. 2(b) a typical quantum Monte Carlo trajectory [36] computed for N = 9 three-level atoms. Clearly, the population transfer from the symmetric subspace (red, black) to the non-symmetric subspace (green) is associated with a quantum jump due to the radiative decay, e.g. at tG_eff ≈ 5. The source of this population transfer is the strong radiative decay of the intermediate excited state. In consequence, if the population of the intermediate excited state is suppressed, the restriction to the symmetric collective basis is a good approximation. The detailed limits in which the intermediate state can be eliminated are discussed in the following section; see Appendix B for an instructive example of how to suppress the radiative decay via detuned excitation, and Appendix C for the full master equation in the symmetric collective basis.
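The following quantum-jump sketch mirrors the mechanism just described for the smallest nontrivial case, N = 2 three-level atoms plus one cavity mode: a single Monte Carlo trajectory with individual-atom decay J_e^i populates the non-symmetric state. QuTiP and all parameter values are assumptions for illustration.

```python
# Single quantum Monte Carlo trajectory (cf. Fig. 2(b)) for N = 2
# three-level atoms and one cavity mode. The individual-atom jump
# operators J_e^i transfer population into the antisymmetric state |E_1^a>.
import numpy as np
from qutip import destroy, qeye, tensor, basis, mcsolve

Nf = 2
g_, e_, r_ = [basis(3, k) for k in range(3)]
a = tensor(destroy(Nf), qeye(3), qeye(3))

def atom_op(single, i):
    """Embed a single-atom operator on atom i (i = 0 or 1)."""
    ops = [qeye(Nf), qeye(3), qeye(3)]
    ops[i + 1] = single
    return tensor(ops)

g, Om, Gamma_e = 1.0, 1.0, 0.5
H = 0
for i in range(2):
    H += g * (a * atom_op(e_ * g_.dag(), i) + a.dag() * atom_op(g_ * e_.dag(), i))
    H += Om * (atom_op(r_ * e_.dag(), i) + atom_op(e_ * r_.dag(), i))

c_ops = [np.sqrt(Gamma_e) * atom_op(g_ * e_.dag(), i) for i in range(2)]

E1a = (tensor(e_, g_) - tensor(g_, e_)).unit()       # antisymmetric state
P_asym = tensor(qeye(Nf), E1a * E1a.dag())

psi0 = tensor(basis(Nf, 1), g_, g_)                  # photon in, atoms in |G>
out = mcsolve(H, psi0, np.linspace(0, 20, 400), c_ops,
              e_ops=[P_asym], ntraj=1)
print("final non-symmetric population:", out.expect[0][-1])
```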
4.2. Superatom-Membrane Interaction
In this section we derive a Jaynes-Cummings type of interaction between the Rydberg superatom and the moving membrane as given in the overview, cf. Eq. (2).
To suppress the cavity loss and the radiative decay from the intermediate state, which justifies the restriction to the symmetric basis, we assume all states with a cavity or an intermediate-state excitation to be detuned, with ∆_c = ω_0 − ω_L^m − ω_m and ∆_e = ω_e − ω_L^m − ω_m respectively, by choosing the resonance condition ω_r = ω_L + ω_L^m + ω_m. For large detunings we can treat the dynamics in the adiabatic limit; the corresponding condition depends on the number of excitations n in the system. In the spirit of perturbation theory we derive the effective dynamics of the subspace consisting of the states |G, 0, n⟩ and |E_0 R, 0, n−1⟩, cf. Fig. 3 (green). Here, we have introduced the notation |A, i, j⟩, where the atomic ensemble is in state A with i photons and j phonons. By using the projection operator method [36] we derive an effective master equation (Eq. (14)) for the n-excitation subspace, where we define |R, n⟩ := |E_0 R, 0, n⟩ and |G, n⟩ := |G, 0, n⟩. The cavity loss κ and the radiative decay Γ_e of the intermediate state act as an effective phonon decay rate γ_m^eff ≈ κ(G/∆_c)² and as an effective decay of the Rydberg state Γ_r^eff ≈ Γ_e(Ω/∆_e)², respectively. In Appendix D we discuss the derivation in more detail and give the analytical expressions for the single-excitation limit (n = 1).
The effective Hamiltonian then reads as in Eq. (15), with the effective coupling and dispersive shifts (∆_G, ∆_Ω) defined in Eq. (16); the effective coupling scales as G_eff ∼ √N gΩG/(∆_c∆_e). Clearly, with this Hamiltonian our goal to engineer a Jaynes-Cummings type of interaction between the membrane and the Rydberg superatom is achieved, cf. Eq. (1). In Eq. (16) we see a superatom-membrane coupling that scales with the square root of the number of atoms. The detuned setup allows the suppression of the dissipative processes, which in consequence are not collectively enhanced. However, this comes at the price of a slower excitation transfer rate G_eff. Remarkably, we can fully benefit from the superatom-imposed enhancement factor: for very high atom numbers N the dissipation processes become negligible and strong coupling is possible. Finally, we remark that for dispersive shifts (∆_G, ∆_Ω) negligible compared to the effective coupling G_eff, we can write H_s = Σ_n H_s^n ≈ H_JCM as given in the overview, in the corresponding rotating frame. However, in a regime where the dispersive shift ∆_G is large compared to the phonon heating γ_m N_m, i.e. for very low effective temperatures, the different excitation manifolds can be addressed separately, since the dispersive shift scales with the number of excitations. It would be very interesting to go to such a parameter regime in the spirit of non-classical state preparation.
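Numerically, the suppressed loss rates quoted above can be compared with the coupling. In the sketch below, the effective rates use the expressions from the text; the explicit form assumed for G_eff (a √N-enhanced third-order coupling) is our reading of Eq. (16), not a formula quoted verbatim, and all parameter values are illustrative.

```python
# Comparing the effective coupling with the effective losses. The forms
# gamma_m_eff = kappa (G/Delta_c)^2 and Gamma_r_eff = Gamma_e (Omega/Delta_e)^2
# are quoted in the text; G_eff ~ sqrt(N) g Omega G / (Delta_c Delta_e) is an
# assumed reading of Eq. (16).
import numpy as np

N, g, Omega, G = 1000, 1.0, 10.0, 10.0    # units of 2*pi*MHz (assumed)
kappa, Gamma_e = 1.0, 3.0
Delta_c = Delta_e = 100.0                 # large detunings (assumed)

G_eff = np.sqrt(N) * g * Omega * G / (Delta_c * Delta_e)
gamma_m_eff = kappa * (G / Delta_c) ** 2
Gamma_r_eff = Gamma_e * (Omega / Delta_e) ** 2

print(f"G_eff = {G_eff:.3f}, gamma_m_eff = {gamma_m_eff:.3f}, "
      f"Gamma_r_eff = {Gamma_r_eff:.3f}")   # coupling exceeds both rates
```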
4.3. Discussion: Strong Coupling Regime
The strong coupling regime is reached if the effective coupling G_eff outrivals the losses, i.e. the effective phonon decay rate γ_m^eff, the decay rates of the Rydberg level Γ_r^eff and Γ_r, and the coupling of the phonons to the thermal environment, γ_m(N_m + 1):

G_eff ≫ γ_m^eff, Γ_r^eff, Γ_r, γ_m(N_m + 1).   (17)

By using the definitions of the effective coupling and decay constants, and further assuming G = Ω and ∆_c = ∆_e, we can reformulate the condition in (17) as Eq. (18). On the LHS we then have the atom-cavity coupling enhanced by the square root of the number of atoms in the ensemble. In contrast, on the RHS the losses are not enhanced, and strong coupling can be reached for experimentally accessible parameters.
As an example, we use the parameters of state-of-the-art experiments in cavity optomechanics [37,15] and Rydberg cQED [24]. Strong coupling can be achieved for a high-finesse cavity with a cavity loss κ = 2π × 1 MHz and an atom-cavity coupling of g = 2π × 1 MHz, a laser amplitude Ω = 2π × 10 MHz for 87Rb atoms, using e.g. transitions from the 5S_1/2 ground state to an intermediate state 5P_3/2 with a decay rate Γ_e = 2π × 3 MHz and a transition to the Rydberg state 60S_1/2 with a decay rate Γ_r ≈ Γ_e/1000, as in [24,31,25], together with G = 2π × 10 MHz and a phonon decay on the order of γ_m(N_m + 1) ≈ 10 kHz, as in [4,15,12]. The number of Rydberg atoms in typical experiments [38,24] ranges up to several thousand.
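Plugging these numbers into a simplified reading of the condition, √N g versus κ + Γ_e, shows how the collective enhancement opens the strong-coupling window. The comparison below is a sketch, not the full inequality (18).

```python
# Collective strong-coupling check with the quoted numbers
# (g = kappa = 2*pi x 1 MHz, Gamma_e = 2*pi x 3 MHz): the left-hand side
# sqrt(N) g grows with atom number while the losses on the right do not.
import numpy as np

g, kappa, Gamma_e = 1.0, 1.0, 3.0        # units of 2*pi*MHz
for N in (10, 100, 1000, 4000):
    lhs, rhs = np.sqrt(N) * g, kappa + Gamma_e
    print(f"N = {N:5d}: sqrt(N) g = {lhs:6.1f} {'>' if lhs > rhs else '<'} {rhs}")
```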
The fact that strong coupling can be achieved between the Rydberg superatom and the membrane is the main result of this article. In the next section, we discuss that our hybrid system can also be realized in a modular long-distance setup.
5. Long-distance Superatom-Membrane Coupling
Cavity-mediated coupling between atoms and a mechanical oscillator as described in Secs. 3 and 4 is very demanding, as it requires combining ultra-high vacuum (needed for experiments with cold atoms) with a cryogenic environment (needed for experiments with micromechanical systems). Motivated by this, we propose and discuss in this section an alternative setup that does not require cavity-mediated coupling.
The setup we have in mind is depicted in Fig. 1(c) and is similar to that of Hammerer et al. [21]. On one side we have the moving membrane with frequency ω_m in a cryogenic environment; on the other side we have the ensemble of Rydberg atoms, which eventually forms the Rydberg superatom, in a vacuum chamber. Both systems can in principle be spatially separated by distances on the order of meters. The atomic ensemble is driven by a laser with frequency ω_L perpendicular to the z-axis, and we assume that the two-photon excitation scheme for the Rydberg superatom is similar to the cavity-mediated case, see the inset of Fig. 1(a). Further, we choose the resonance condition such that a Rydberg excitation ω_r emerges from a two-photon process with one photon being a sideband photon of frequency ω_L^m + ω_m, i.e. ω_r = ω_L + ω_L^m + ω_m as in Sec. 3.
The coupling between the moving membrane and the superatom is mediated by a laser with frequency ω_L^m. As discussed in Ref. [39], the coupling yields cascaded dynamics, i.e. the order in which the systems interact with each other is relevant, because the setup is only driven from one side. First, assume the Rydberg superatom is initially in the ground state, such that an incoming coupling-laser photon at ω_L^m together with a pump-laser photon at ω_L is not resonant with the Rydberg excitation and therefore does not interact with the atomic ensemble. The coupling laser then interacts with the moving membrane and is reflected with imprinted sidebands at ω_L^m ± ω_m due to the motion of the membrane at frequency ω_m. Subsequently, the positive sideband can interact with the atomic ensemble and excite the Rydberg superatom. Conversely, an excited Rydberg superatom can emit sideband photons at ω_L^m + ω_m in the direction of the membrane, such that the membrane feels a change in radiation pressure.
Cavity-enhanced long-distance coupling as proposed in Ref. [17] has the advantage of a membrane-light coupling constant that is enhanced by the finesse. To also benefit from this enhancement we extend our proposal in the following to a so-called membrane-in-the-middle configuration [4] as depicted in Fig. 1(d).
In the following we first present the full Hamiltonian for such a long-distance setup, and then give the master equation for the effective Rydberg superatom-membrane coupling. The derivation of this effective master equation is outlined in Appendix E, where we crucially rely on the formalism of Ref. [39]. As a result, we find that long-distance coupling of the superatom to the membrane is possible and features an (atomic) position-dependent coupling, which allows switching coupling and dissipation channels on and off. Finally, we find a limit in which we recover a master equation similar to the one in the cavity-mediated case, see Eq. (14). However, the benefits of cavity-enhanced long-distance coupling come at the price of an additional dissipation channel due to membrane-light diffusion.
5.1. Hamiltonian
The Hamiltonian for the long-distance setup as depicted in Fig. 1(d) is given by

H̃ = H̃_0 + H̃_m-f + H̃_at-f,

where the free evolution H̃_0 (Eq. (20)) contains the membrane, the atomic levels and the continuum of field modes. The interaction Hamiltonian consists of the membrane-light field interaction H̃_m-f and the superatom-light field interaction H̃_at-f. The former is given by Eq. (13) in Ref. [17], with the light field quantized similarly to the treatment presented there. The superatom-light field interaction includes both the coupling of the classical pump laser ω_L with Rabi frequency Ω_L to the transition from the intermediate excited to the Rydberg state, and the interaction of the coupling laser ω_L^m with Rabi frequency Ω_L^m on the transition from the ground to the intermediate excited state. Note that, compared to the cavity-mediated case, where the membrane coupled to a single cavity mode, in the long-distance case we have a full continuum of field modes centered around the coupling-laser frequency ω_L^m. In Appendix E we give the expression for the full Hamiltonian and outline the derivation, which is similar to the methods used in [39], of an effective master equation for the superatom-membrane system. In the following section we present the resulting master equation and discuss a limit in which we recover a master equation similar to the cavity-mediated case, Eq. (14).
5.2. Master Equation Dynamics
Concluding the derivation in Appendix E, we find that the Hamiltonian in the collective basis, cf. Sec. 4.1, takes the form of Eq. (21), with dispersive shift ∆_at, wave vector k_L^m, mean atomic positions z̄_j, σ_ab = |a⟩⟨b|, and effective long-distance coupling constant Ḡ_eff; the coupling enters with a position-dependent factor sin(k_L^m z̄_j). The corresponding master equation is given by Eq. (22), where γ_m^diff is the membrane-light diffusion as defined in Ref. [17]. Note that all dissipation channels concerning the atoms, as well as the effective coupling in Eq. (21), are position dependent. This is due to the coupling to internal states of the atomic ensemble, in contrast to coupling to the motional states of the atoms, where a Lamb-Dicke expansion is applied [17]. Therefore, by appropriately choosing the positions of the atoms, dissipation channels can be switched on and off.
As a specific example, we consider the case where the coupling in the Hamiltonian in Eq. (21) becomes maximal, i.e. sin(k_L^m z̄) = 1, and thereby recover a master equation similar to the cavity-mediated case, see Eq. (14). Further taking into account dissipation due to the radiative decay of the Rydberg level and the heating of the membrane, we obtain the master equation (23). In contrast to the cavity-mediated case, we obtain an additional dissipation channel on the side of the membrane with rate γ_m^diff, resulting from the elimination of the coupling field, cf. [17]. Summarizing the long-distance version of the proposed setup, we find that in a certain limit we can recover the previous results, cf. Sec. 4. However, other limits can also be engineered due to the position dependence of the dissipation and the coupling.
6. Toolbox for Cavity Optomechanics
In this section, we point out how a Rydberg superatom can be used as a toolbox for cavity optomechanical experiments. Our hybrid system allows engineered dissipation of the Rydberg superatom and can be utilized to cool the membrane or to read out its state via spectroscopy techniques.
As concluded in Sec. 4, the proposed setup is not restricted to a Jaynes-Cummings type of interaction. For example, by increasing (decreasing) the intensity of the external laser Ω, one can decrease (increase) the Rydberg blockade radius and thus change the number of superatoms in the cavity. Thereby, a superatom-based Tavis-Cummings model [40] could be realized. In such a setup, superatoms can become entangled with each other even when they do not interact directly, and multipartite entanglement is generated [41]. Conversely, by including a second mechanical mode in the system, a multi-mode Jaynes-Cummings model is realized. This can be done either by addressing two modes on a single membrane or by inserting a second membrane into the system. Analogously to the multi-atom case, the multi-mode Jaynes-Cummings model allows the preparation of shared entanglement [42,43].
Below, we give three examples of possible applications: Fock-state transfer, superatom-mediated cooling, and non-classical state preparation for a membrane initially in a thermal state.
6.1. State transfer
In the strong coupling limit, a transfer of a Fock state with n = 1 from the membrane to the Rydberg superatom, and vice versa, can be achieved with very high fidelity. For example, to swap the excitation from the Rydberg superatom to the membrane, |R, 0⟩ → |G, 1⟩, a time t_g = π/(2G_eff) is necessary, and the fidelity F = ⟨G, 1|Ψ(t_g)⟩ scales with the ratio of the losses to the coupling. Due to the strong coupling condition in Eq. (18), it is clear that a high atom number leads to a coupling that outrivals the losses in the system, and thus high fidelities are possible. However, to obtain high fidelities it is necessary to have the membrane cooled to the ground state in order to avoid degrading effects due to heating.
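A sketch of this swap, including the effective losses as Lindblad terms, is given below; the superatom is modelled as a two-level system {|G⟩, |R⟩}, and all rates are illustrative assumptions.

```python
# Swap |R, 0> -> |G, 1> under the effective Jaynes-Cummings coupling,
# evaluated at t_g = pi/(2 G_eff) with effective losses included.
import numpy as np
from qutip import destroy, qeye, tensor, basis, mesolve, fidelity

Nf = 3
Gk, Rk = basis(2, 0), basis(2, 1)                # superatom |G>, |R>
b = tensor(qeye(2), destroy(Nf))                 # membrane
sm = tensor(Gk * Rk.dag(), qeye(Nf))             # |G><R|

G_eff, Gamma_r_eff, gamma_m_eff = 1.0, 0.02, 0.01
H = G_eff * (sm.dag() * b + sm * b.dag())
c_ops = [np.sqrt(Gamma_r_eff) * sm, np.sqrt(gamma_m_eff) * b]

psi0 = tensor(Rk, basis(Nf, 0))                  # |R, 0>
target = tensor(Gk, basis(Nf, 1))                # |G, 1>
t_g = np.pi / (2 * G_eff)
out = mesolve(H, psi0, np.linspace(0, t_g, 50), c_ops, [])
print("swap fidelity:", fidelity(out.states[-1], target))
```

With losses much smaller than G_eff, the printed fidelity stays close to one, in line with the scaling argument above.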
6.2. Sympathetic Cooling
Considerable experimental effort has been devoted to realizing ground-state cooling of mechanical systems [5,6,7]. In addition to the standard optomechanical cooling via the cavity decay [44], we discuss here sympathetic cooling [17] of the membrane by utilizing the well-developed AMO toolbox to extract Rydberg excitations from the atomic ensemble [45]. The sympathetic cooling of the membrane is achieved by the following steps. First, an excitation is transferred from the membrane via the coherent Jaynes-Cummings coupling to the Rydberg superatom. In a second step, a π pulse with amplitude Ω_d, in resonance with an auxiliary ground state |g'⟩ of the atoms, de-excites the superatom to this state: H_cool = Ω_d(|R⟩⟨g'| + h.c.). The superatom is then in its ground state with one atom fewer, N → N − 1, and the atom in the auxiliary ground state can be removed according to the jump operator J_cool := √γ_cl |vac⟩⟨g'|. Eliminating the auxiliary ground state, with γ_cl ≫ Ω_d and N ≫ 1, we can write an effective dissipation with rate γ_cool^R := Ω_d²/γ_cl acting on top of H_s^n in Eq. (15). The dissipation is steered by the laser with Rabi frequency Ω_d and can be switched on and off to remove excitations from the system. In a third step, another excitation is transferred from the membrane, and the cooling cycle can be repeated. In this way one can gradually cool the membrane and thereby prepare it in its ground state. In the strong coupling limit, G_eff ≫ N_m γ_m, the steady-state phonon number n_s is set by the ratio of the thermal heating rate to the engineered dissipation [20]. Provided strong coupling, the thermal occupation of the membrane can be reduced to a very low mean phonon number, and in principle to the ground state.
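The cooling cycle can be caricatured as a continuous process: Jaynes-Cummings coupling plus an engineered superatom decay drains phonons from a thermal membrane. The sketch below does this with assumed rates and a simplified two-level superatom.

```python
# Continuous caricature of superatom-mediated cooling: JC coupling plus
# the engineered decay J = sqrt(gamma_cool) |G><R| removing excitations,
# against weak rethermalization of the membrane. Rates are assumptions.
import numpy as np
from qutip import destroy, qeye, tensor, basis, thermal_dm, mesolve

Nf = 12
Gk, Rk = basis(2, 0), basis(2, 1)
b = tensor(qeye(2), destroy(Nf))
sm = tensor(Gk * Rk.dag(), qeye(Nf))

G_eff, gamma_cool = 1.0, 2.0
gamma_m, N_m = 0.005, 3.0

H = G_eff * (sm.dag() * b + sm * b.dag())
c_ops = [np.sqrt(gamma_cool) * sm,               # engineered removal of |R>
         np.sqrt(gamma_m * (N_m + 1)) * b,
         np.sqrt(gamma_m * N_m) * b.dag()]

rho0 = tensor(Gk * Gk.dag(), thermal_dm(Nf, N_m))
out = mesolve(H, rho0, np.linspace(0, 40, 200), c_ops, [b.dag() * b])
print(f"mean phonon number: {out.expect[0][0]:.2f} -> {out.expect[0][-1]:.2f}")
```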
6.3. State preparation
The possibility to coherently drive the superatom and to switch the interaction between the membrane and the superatom allows great control of the membrane-superatom interaction, along the lines of nonlinear quantum optomechanics via intrinsic two-level defects [10]. This gives rise to the prospect of deterministically preparing mechanical states by suitable protocols.
In Fig. 4, we numerically evaluate the master equation (23) of the long-distance coupling for a membrane coupled to a heat bath with mean phonon occupation number N_m = 15. At t = 0, a laser is switched on and drives the atomic ensemble, continuously creating a Rydberg excitation. As a result of the Jaynes-Cummings type of interaction, the Rydberg excitation is transferred to the membrane, and the phonon distribution changes from a Bose-Einstein distribution to a non-classical distribution peaked around n = 1, a signature of a thermal phonon distribution doped with one Fock phonon. With increasing time, Rabi oscillations become visible. First, the main contributions arise between the states |G, 1⟩ and |R, 0⟩ (G_eff t ≈ 5). Due to the pumping process, the mean phonon number increases, and the main contribution changes to the Rabi oscillation between the states |G, 2⟩ and |R, 1⟩ (G_eff t ≈ 10). These dynamics show that even for a membrane that is not in the ground state, the strong coupling allows non-classical state preparation.

[Figure 4 caption: Due to the strong coupling to a coherently pumped Rydberg superatom, the initial Bose-Einstein distribution is changed into a non-classical state with increasing Fock phonon numbers. If the thermal occupation is high, a stronger pumping is necessary to generate a non-classical state due to the strong dissipative dynamics between the phonon manifolds.]
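The protocol can be sketched numerically as a driven Jaynes-Cummings model starting from a thermal membrane state. The drive strength, the modelling of the light-induced diffusion, and the Fock truncation below are all assumptions; the readout is the phonon distribution p_n.

```python
# Driven JC sketch of the state-preparation dynamics: a continuously
# pumped superatom coupled to a membrane that starts in a thermal state
# with N_m = 15. The membrane-light diffusion is modelled here as extra
# symmetric heating/cooling (an assumption). Truncation Nf = 60 keeps
# most of the thermal weight.
import numpy as np
from qutip import destroy, qeye, tensor, basis, thermal_dm, mesolve

Nf = 60
Gk, Rk = basis(2, 0), basis(2, 1)
b = tensor(qeye(2), destroy(Nf))
sm = tensor(Gk * Rk.dag(), qeye(Nf))

G_eff, drive = 1.0, 0.3                          # coherent pumping |G> -> |R>
gamma_m, N_m, gamma_diff = 0.001, 15.0, 0.002

H = G_eff * (sm.dag() * b + sm * b.dag()) + drive * (sm + sm.dag())
c_ops = [np.sqrt(gamma_m * (N_m + 1) + gamma_diff) * b,
         np.sqrt(gamma_m * N_m + gamma_diff) * b.dag()]

rho0 = tensor(Gk * Gk.dag(), thermal_dm(Nf, N_m))
out = mesolve(H, rho0, np.linspace(0, 10, 50), c_ops, [])
p_n = out.states[-1].ptrace(1).diag().real       # phonon distribution p_n
print("p_n for n = 0..4:", np.round(p_n[:5], 3))
```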
7. Conclusion
We investigated a system consisting of a membrane and a Rydberg superatom formed by an atomic ensemble. Due to the superatom-imposed collective enhancement factor, we showed that strong coupling between the superatom and the membrane can be achieved for parameters within the range of current experiments. The strong coupling regime can be reached cavity-mediated or in a long-distance setup, and allows one to utilize the Rydberg superatom as a toolbox for optomechanical experiments such as ground-state cooling and (non-classical) state preparation. This hybrid system constitutes a new feasible implementation of a qubit strongly coupled to a harmonic oscillator. After completing this work we became aware of [46], where nanomechanics is also coupled to a Rydberg ensemble. In contrast to that work, we focus here, within a fully microscopic model for a single Rydberg state, on the competition between the incoherent and coherent parts of the dynamics.

Acknowledgments. This project was supported by the ERC Synergy Grant UQUAM, the SFB FoQuS (FWF Project No. F4006-N16) and the Marie Curie Initial Training Network COHERENCE. A.C. gratefully acknowledges support from the Alexander von Humboldt Foundation through the Feodor Lynen program.
Appendix A. Enhancement of the Optomechanical Coupling
In this section we demonstrate one possible way to obtain the enhancement of the optomechanical coupling without also driving the atomic transitions. We investigate a two-mode cavity system with frequencies ω_0 and ω_p, which mediates an excitation transfer between a moving membrane and an atomic ensemble. To enhance the radiation pressure coupling g_0 between the membrane with frequency ω_m and the cavity modes [27], a laser with frequency ω_L^m and Rabi frequency E_p pumps the cavity from the right, see Fig. 1(a). The two-mode cavity setup is chosen to avoid a strongly driven transition between the ground and intermediate excited states of the atoms. In a corresponding rotating frame the Hamiltonian of the full system is given by Eq. (A.1) (ħ = 1), with the detuning of the pump cavity field ∆_p = ω_p − ω_L^m, the detuning of the cavity field that couples to the atomic ensemble ∆_c = ω_0 − ω_L^m − ω_m, the detuning of the intermediate excited state ∆_e = ω_e − ω_m − ω_L^m, and the resonance condition ω_r = ω_m + ω_L^m + ω_L. Since the pump cavity field a_p is far detuned from the transition between the ground and intermediate excited states of the atoms, the coupling between them is neglected. The Langevin formalism accounts for the complete open dynamics of the system [36]. Explicitly, quantum Langevin equations (QLEs) are written for the cavity modes (a_p†, a†) and the membrane (b†) [28]. Here, the in-field operator a_in includes the steady-state average amplitude of the external field |E_p| as well as a fluctuating contribution δa_in, characterized by two-time correlation functions in which the thermal occupation N_ω ≈ k_B T/ω vanishes for optical frequencies.

Here, k_B is the Boltzmann constant and T is the temperature of the bath. The Brownian noise ξ(t) affecting the mirror due to the coupling to its support satisfies the standard correlation function and commutator in the Markovian limit, with the thermal occupation of the mechanical mode denoted by N_m. In the limit of an intense pumping field E_p, the cavity mode a_p can be described by a coherent state α plus an additional fluctuation, and the modified equilibrium position of the resonator is given by β. We model this by applying the displacements a_p → a_p + α and b → b + β. Assuming the steady-state condition, the equilibrium mean values follow, with α ≫ 1. In the steady-state limit, the classical (c-number) contribution vanishes in the QLEs. Including only optomechanical interaction strengths of order α and α², the QLEs for the fluctuation operators follow, with the detunings defined by ∆̃_n = ∆_n + g_0(β* + β) for n ∈ {p, 0}. Choosing a rotating frame with respect to ∆̃_p a_p†a_p + ω_m b†b, we require that the pump cavity mode fulfills ∆̃_p − ω_m = 0, and apply the rotating wave approximation for αg_0 ≪ |∆̃_p + ω_m|. Neglecting terms that oscillate fast in comparison to the coupling strength, the leading-order Hamiltonian reduces to the beam-splitter coupling G(a†b + a b†), where G = −αg_0. Relabeling ∆̃_c → ∆_c finally leads to the Hamiltonian in Eq. (3) in the corresponding rotating frame.
Appendix B. Suppression of the Radiative Decay
In the following, we show how the decay of the intermediate excited state is suppressed when we choose a detuned setup. To illustrate this suppression of the radiative decay, we numerically evaluate a semi-classical model of the driven three-level dynamics.
Appendix C. Master Equation in the Symmetric Subspace
To rewrite the master equation (10) in the symmetric subspace, we use the complete set of basis states for a single Rydberg excitation, with the subscript s for symmetric and the superscript N denoting up to N atomic excitations.+ The first part of the Hamiltonian governs the free evolution, and we assume the same resonance condition and rotating frame as in Sec. 4. In the interaction part, the first two lines correspond to the atom-cavity interaction and the last line to the atom-laser interaction. For increasing atomic excitation number j, the cavity-atom interaction scales with √j, whereas the collective enhancement factor N − (j − 1) decreases correspondingly. The Rabi frequency of the atom-laser interaction is also increased by √j, similar to the cavity-atom coupling.

+ Note that the dipole-dipole interaction is already incorporated by considering states with only a single Rydberg excitation.
The interaction Hamiltonian between the cavity mode and the membrane is left unchanged, such that H_s,m-c^N = H_m-c. Finally, we obtain the complete master equation in the symmetric subspace, where we used that |E_0⟩ = |G⟩. The important result is that the radiative decay scales only with the number of excitations and not with the number of atoms. Note that this master equation is valid for predominantly coherent dynamics, in which the radiative decay is much smaller than the coherent coupling elements.
Appendix D. Effective Hamiltonian in the Single- and Multi-Excitation Limit
To derive the effective Hamiltonian in the multi-excitation case, we use the Schrödinger equation with the Hamiltonian expressed in terms of the collective symmetric states. We apply the projector of the relevant subspace, with |R, 0, N⟩ := |E_0 R, 0, N⟩, to the wave function, together with the complementary projectors Q_1 and Q_2. As an example, Q_1 projects onto the states |G, 1, n−1⟩, |E_1, 0, n−1⟩, |E_0 R, 1, n−2⟩, and Q_2 onto the states |G, 2, n−2⟩, |E_1 R, 0, n−2⟩, |E_1, 1, n−2⟩, |E_0 R, 2, n−3⟩. It is clear that the coupling to Q_2 leads to a contribution of higher order in perturbation theory than second-order Born-Markov. We assume the detuning to be large enough that the effective Hamiltonian can be obtained at second order; after evaluating this expression, we obtain the effective Hamiltonian in Eq. (15).
Appendix E. Long-Distance Coupling
In the following we outline the derivation of an effective master equation for the coupling between the membrane and the superatom. The derivation proceeds in three steps: first, we formulate the full Hamiltonian for the long-distance setup; then we eliminate the mediating coupling field; and finally we transform the resulting master equation into the collective basis, as described in Sec. 4.1.1. The full Hamiltonian for the problem reads

H̃ = H̃_0 + H̃_m-f + H̃_at-f,   (E.1)

with H̃_0 defined in Eq. (20), H̃_m-f defined in [17], and H̃_at-f given by

H̃_at-f = ħ Σ_j [ Ω_L^m σ_eg^j ∫ dω c_ω sin(k_L^m z_j) + Ω_L e^(−iω_L t) σ_er^j + h.c. ],   (E.2)

where σ_eg^j = |e⟩_j⟨g|_j, with Rabi frequencies Ω_L^m and Ω_L. In contrast to the cavity-mediated case, where we coupled to a single cavity mode, we assume here a continuum of field modes c_ω, centered around the coupling-laser frequency ω_L^m, that mediates the interaction. The membrane-light field interaction can be linearized around a strong laser drive at frequency ω_L^m, with the laser-enhanced coupling element g_m derived in [17].
In order to eliminate the intermediate excited state, we choose the frequency of the driving field to be off-resonant with the transition from the ground to the intermediate state, i.e. ∆_e = ω_e − ω_L^m, to suppress resonant driving of the atomic ensemble. The resonance condition reads ω_r = ω_L + ω_L^m + ω_m, as in Sec. 3. Due to ∆_e ≫ Ω_L^m, Ω_L, we can then eliminate the intermediate excited state. We further need to assume the same delay throughout the atomic ensemble, i.e. τ_j = k z_j/c ≈ k z̄/c = τ, with an average position z̄.
We further need to eliminate the mediating quantized laser field, which is done in a very similar way in [39], using the framework of the quantum stochastic Schrödinger equation, cf. [21,36]. The elimination finally leads to a master equation for a cascaded quantum system, where the dynamics can no longer be described by a unique Hamiltonian. The coherent dynamics is governed by an induced Hamiltonian, where δ_ij is the Kronecker delta and γ_m^diff := 2g_m² is the membrane-light field diffusion, as in [17].
This master equation is still in the microscopic single-particle picture of the atomic ensemble. In order to rewrite the master equation in the collective basis for a single Rydberg excitation, and thereby reduce the atomic ensemble to an effective two-level atom, we assume that the atoms in the ensemble couple equally strongly to the light field, g_at^i = g_at, and remain at constant position z̄_j = z̄. We then have Ḡ_eff^j = Ḡ_eff and ∆_at^j = ∆_at. Under the condition that only a single Rydberg excitation is possible due to the Rydberg blockade, we can restrict the Hilbert space to the states |G⟩ and |E_0 R⟩ =: |R⟩. Using this, we conclude with the Hamiltonian in Eq. (21) and the associated master equation (22).
"Physics"
] |
Review on the Command Platform of Production and Rush-Repairs for Distribution Network
The command platform of production and rush-repairs for the distribution network is a supporting platform, based on information technology, for the business applications of the command center. This article briefly describes the basic functions and structural features of the platform, and discusses in detail the integration of business information, the contents and solutions to be focused on during inter-system data exchange, and the implementation techniques of the command platform. The benefits of the platform in improving distribution network production and rush-repairs are summed up at the end. All of the above is provided for reference.
Introduction
The key business of distribution network production is the operation and maintenance of distribution network assets, which includes equipment management, defect management, patrol inspection, fault repair, field operation, and instruments management, and involves related business such as outage management and power management.
With continuous investment in the information construction of power grid companies, most production and management business has gained information support in recent years; e.g., PMS realizes equipment management, defect management, patrol management, and so on. In terms of fundamental platform construction, electric power companies have mostly built GIS systems and dispatching automation systems, and distribution network automation has been under construction during the past two years. However, information support is still relatively weak for production and fault-repair business.
The following problems still exist in current power companies: 1) Lack of a unified technique for production command: production and management personnel cannot master and check the production operation situation in a timely manner, and operation crews cannot effectively master and check equipment information, real-time information, and operation-related information of the distribution network.
2) The operation field is not connected to PMS, DMS, and similar system information: PMS realizes whole-life-cycle management and the related patrol, repair, and other production business for distribution network equipment, but it cannot share information with field production operations. Even where mobile operation technology is adopted, it is mainly off-line, so real-time interaction cannot be realized to support the operation field.
Proposal of the Command Platform of Production and Rush-Repairs for Distribution Network
In 2011, State Grid issued distribution document No. 156 [1], which put forward standardized rush-repair for the distribution network: aggressively develop distribution network production standardization management, distribution network state management, online operation, standardized rush-repair operation, and similar system construction; strengthen management of the entire process of distribution network production; comprehensively improve the work quality of distribution network operation, maintenance, repair, and technical reform; and form a systematic and organizational guarantee for the improvement of supply reliability and service quality.
With the progression of standardization, the distribution network command organization has an ever-larger demand for information acquisition, integrated command, and unified resource dispatch. In 2012, the State Grid production technology department promulgated the construction of a practical and efficient distribution production and rush-repairs command platform [2], so as to improve the function of the information communication bus; strengthen resource integration of the geographic information system, production and management system, marketing system, distribution network repairs, 95598, and usage information acquisition; realize distribution network operation monitoring, supply risk analysis, equipment abnormality management, automatic fault isolation, remote repair command, and similar functions; and improve the quick-response ability of distribution network fault repair, providing technical support for the improvement of power supply reliability and service quality.
Fundamental Functions of the Command Platform of Production and Rush-Repairs for Distribution Network
The distribution network production and rush-repairs command platform is divided into a foundation application platform, production command application, failure diagnosis, repair command, and analysis and decision, as shown in Table 1.
As the support platform of the distribution network production and rush-repairs command application, the foundation platform mainly includes system management, log management, rights management, diagram and module library management, report management, visualization application support management, multimedia application support, and integration service management.
Production command provides guidance and auxiliary decision analysis for regular production, mainly including planned outage analysis management, fault plan management, power management, distribution network operational risk warning analysis, equipment on-line monitoring and warning, and outage plan optimization auxiliary decision-making. Following the accident-solving process, the command platform also contains production and rush-repairs situation analysis, production and rush-repairs command, and other related functions.
Failure diagnosis obtains information from various systems and identifies power failures, essentially including customer fault repair analysis, usage information acquisition system fault analysis, and fault identification analysis.
Repair command provides auxiliary decisions for fault repair so as to realize rapid and efficient repair, and mainly includes repair scheduling management, field repair operation terminal application management, production information situation analysis, video monitoring, and auxiliary decision-making for repair resource dispatch optimization.
Analysis and decision functions monitor and manage significant indexes and perform production and rush-repairs statistical analyses, mainly including distribution automation assessment index monitoring, reliability index monitoring, comprehensive repair statistical analysis, voltage-qualification rate monitoring, and report analysis and statistics.
Business Integration of the Command Platform of Production and Rush-Repairs for Distribution Network
Information integration of the distribution network production and rush-repairs command platform is the key link in guaranteeing the realization of its functions.
The information communication bus has to be used, and related application services have to be adopted from the established upstream and downstream application systems, so as to achieve the objective of information sharing. According to the State Grid unified information standard, information integration and business application among application systems must be based on the principle of "one source, one end, global sharing". Through information communication, resource sharing and function integration are realized between the distribution network production and rush-repairs command platform and related application systems. The command platform realizes information communication with the marketing system, 95598, distribution automation, usage information acquisition, PMS, GIS, and other related systems through an information communication bus that conforms to the distribution automation information communication standard. Production and rush-repairs command are the core applications, around which the information integration application is realized.
1) Integration with GIS. The distribution network production and rush-repairs command platform receives diagram and module data through the GIS system interface. Distribution network topology information and map information are visualized in the command platform, and network topology simulation analysis is realized using GIS topology information. The data interaction method is request-response.
2) Integration with PMS. The command platform receives outage plans, work tickets, scheduling command tickets, equipment defects, account information, and other information from the PMS interface; the interaction method is request-response. The command platform feeds fault, fault treatment, and other related information back to the PMS system; the interaction method is PMS push. In addition, the platform also accesses the PMS interface for distribution network on-line monitoring information, such as distribution network equipment online temperature, sub-section post environment, and SF6 gas concentration.
3) Integration with the distribution automation system. The distribution automation system pushes switch position information, faults, and their treatment information to the command platform through the information communication bus, and the command platform obtains switch state sections on demand. The data interaction method is active push by distribution automation.
4) Integration with the dispatching automation system. The command platform receives main network real-time information from the dispatching automation system interface. The dispatching automation system releases the following information through the bus: switch deflection information (real-time), fault information (real-time), and real-time information sections (fixed cycle, e.g., every 30 minutes). The data interaction method is active push by the dispatching automation system.

5) Integration with the marketing system. The marketing system (CIS) provides equipment account inquiry service, customer file inquiry service, important customer information, etc., for the command platform. The interaction method is request-response.

6) Integration with the 95598 system. The command platform receives distribution system fault repair work orders from the 95598 system in real time, and feeds repair process information back to the 95598 system. The interaction method is active push by the 95598 system. The 95598 system also provides a power failure inquiry service and releases power failure analysis results.
7) Integration with the electric energy data acquisition system. The electric energy data acquisition system actively finds abnormal distribution network power supply, analyzes fault locations in real time together with the PMS and GIS platforms, pushes fault location information to the command platform, assists the repair commander in determining whether it is a distribution fault, precisely locates the fault range, and provides service support for the 95598 system. The interaction method is active push.

8) Integration with the vehicle management system. The vehicle management system sends vehicle location information to the command platform. The interaction method is active push by the vehicle management system.

9) Integration with the inventory management system. The command platform receives instrument inventory information. The data interaction method is request-response.

Both interaction styles, request-response and active push, are illustrated in the sketch below.
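The following minimal sketch shows the two styles on a message bus: request-response (e.g., the platform pulling account data from PMS) and active push (e.g., distribution automation pushing switch deflections). All names, topics, and payloads are hypothetical, for illustration only.

```python
# Minimal sketch of request-response and active-push integration on an
# information communication bus. Service names and payloads are hypothetical.
from collections import defaultdict

class InformationBus:
    def __init__(self):
        self.services = {}                        # request-response endpoints
        self.subscribers = defaultdict(list)      # push (publish/subscribe)

    def register_service(self, name, handler):
        self.services[name] = handler

    def request(self, name, payload):             # request-response style
        return self.services[name](payload)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def push(self, topic, event):                 # active-push style
        for cb in self.subscribers[topic]:
            cb(event)

bus = InformationBus()
bus.register_service("pms.equipment_account",
                     lambda q: {"feeder": q["feeder"], "devices": ["B01", "S12"]})
bus.subscribe("da.switch_deflection",
              lambda ev: print("command platform received:", ev))

print(bus.request("pms.equipment_account", {"feeder": "F-101"}))
bus.push("da.switch_deflection", {"switch": "S12", "state": "open"})
```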
Implementation Techniques of the Command Platform of Production and Rush-Repairs for Distribution Network
The distribution network production and rush-repairs command platform should be built at the city-level power supply company [3], loosely coupled and integrated with related systems via a data communication bus or data center. A service-oriented architecture (SOA) is adopted, and services are released through the bus. Models and interaction standards comply with the IEC 61970/61968 CIM and SG-CIM standards and specifications. In this way, the resources of the original information systems can be fully reused, construction costs and the construction period can be reduced, and the comprehensive benefit of information system application can be improved.
Service-Oriented Architecture
Service-oriented architecture (SOA) is an emerging technology solution for enterprise application integration. It extracts discrete business functions from enterprise applications and organizes them into interactive, standards-based services. SOA offers a flexible and efficient system integration scheme by providing services to the enterprise: modular and portable services are combined and reused in composite applications to meet business needs rapidly. A service, which refers to a function defined by an interface specification (including format and transmission protocol) based on open and neutral standards, is the most important part of an SOA system.
Because the interface specification is independent of the specific hardware platform, operating system, and programming language, the caller and the service provider can communicate in a unified and standard way, and the service acts as a link connecting various business applications, technical standards, and implementation technologies.
Data Communication Bus
The data communication bus is responsible for the data transmission channel across isolation zones; it interfaces with upper application systems using standard web service interfaces, JMS messages, and the like, and supports transparent data transmission among upper application systems. From the viewpoint of the network, the work completed by the information interaction bus basically belongs to the transport layer: it transfers, but does not analyze, the data of the upper application systems. When an application system transfers data through the bus, an interface program should be developed, and the data should be encapsulated and parsed in accordance with the interface format agreed with the bus, as sketched below.
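The sketch below illustrates the interface-program responsibility just described: the sender encapsulates data into an agreed envelope, the bus transports the payload opaquely, and the receiver parses it. All field names are hypothetical.

```python
# Sketch of envelope encapsulation/parsing for bus transport. The bus
# itself never inspects the "payload" field; only the endpoints do.
import json, uuid, datetime

def encapsulate(source, target, business_type, payload):
    return json.dumps({
        "msg_id": str(uuid.uuid4()),
        "source": source, "target": target,
        "type": business_type,
        "timestamp": datetime.datetime.now().isoformat(),
        "payload": payload,                   # opaque to the bus itself
    })

def parse(envelope):
    msg = json.loads(envelope)
    return msg["type"], msg["payload"]

env = encapsulate("95598", "command_platform", "fault_repair_order",
                  {"order_no": "RO-0001", "address": "substation area 3"})
print(parse(env))
```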
Information Integration Technology
Information integration is a management process, based on the trend of information development and led by a certain organization, to realize orderly, shared, and controllable information resources and, further, information resource configuration optimization; to broaden the application fields of the information resources; and to maximize the information value. The purpose of information integration is to send the right information to the right user at the right time, in the right way, in a distributed environment.
Portal Technology
Considering the function of the production and rush-repairs command platform as a production command center, its own content needs to be displayed, and in addition, interface information from other systems will need to be integrated in the future, so portal technology is adopted. The interface presentation combines GIS, charts, Gantt charts, instrument panels, etc., through dynamic pages to meet the display demands of different roles (leaders, business specialists, and so on). As a comprehensive platform spanning multiple professional fields, the command platform has to combine and integrate with multiple professional systems at various levels. It is a challenge to make full use of the information resources of the professional systems while also accommodating the individual views of users in different business domains; interface integration is therefore one of the important techniques for combining the system with different professional applications.
Conclusions
The covered area of distribution network production has expanded in the context of the Three-Integrated and Five-Big reform, and professionalization raises the operational requirements of maintenance and repair. Technical support is lacking for the unified management of production operation, and existing information flow and interaction must be accelerated to improve the efficiency of production operation.
The distribution network production and rush-repairs command platform helps command staff grasp production operation information, including the people and tools involved. Field operation conditions can be monitored very intuitively on the platform, and interaction is realized among the crews. The platform will therefore become an intelligent, analytical, easy-to-operate, and practical production and repair management platform. It will further strengthen distribution network production and rush-repairs command, improve distribution network repair efficiency, and continuously raise power supply reliability and service levels.
"Engineering"
] |
Can Brain Signals Reveal Inner Alignment with Human Languages?
Brain Signals, such as Electroencephalography (EEG), and human languages have been widely explored independently for many downstream tasks, however, the connection between them has not been well explored. In this study, we explore the relationship and dependency between EEG and language. To study at the representation level, we introduced \textbf{MTAM}, a \textbf{M}ultimodal \textbf{T}ransformer \textbf{A}lignment \textbf{M}odel, to observe coordinated representations between the two modalities. We used various relationship alignment-seeking techniques, such as Canonical Correlation Analysis and Wasserstein Distance, as loss functions to transfigure features. On downstream applications, sentiment analysis and relation detection, we achieved new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieved an F1-score improvement of 1.7% on K-EmoCon and 9.3% on Zuco datasets for sentiment analysis, and 7.4% on ZuCo for relation detection. In addition, we provide interpretations of the performance improvement: (1) feature distribution shows the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language; (2) alignment weights show the influence of different language semantics as well as EEG frequency features; (3) brain topographical maps provide an intuitive demonstration of the connectivity in the brain regions. Our code is available at \url{https://github.com/Jason-Qiu/EEG_Language_Alignment}.
Introduction
Brain activity is an important parameter in furthering our knowledge of how human language is represented and interpreted (Toneva et al., 2020; Williams and Wehbe, 2021; Reddy and Wehbe, 2021; Wehbe et al., 2020; Deniz et al., 2021). Researchers from domains such as linguistics, psychology, cognitive science, and computer science* have made large efforts in using brain-recording technologies to analyze cognitive activity during language-related tasks and observed that these technologies added value in terms of understanding language (Stemmer and Connolly, 2012).

* Marked as equal contribution.
Basic linguistic rules seem to be effortlessly understood by humans, in contrast to machinery. Recent advances in natural language processing (NLP) models (Vaswani et al., 2017) have enabled computers to maintain long and contextual information through self-attention mechanisms. This attention mechanism has been maneuvered to create robust language models, but at the cost of tremendous amounts of data (Devlin et al., 2019; Liu et al., 2019b; Lewis et al., 2020; Brown et al., 2020; Yang et al., 2019). Although performance has significantly improved with modern NLP models, they are still suboptimal compared to the human brain. In this study, we explore the relationship and dependencies of EEG and language. We apply EEG, a popularized routine in cognitive research, for its accessibility and practicality, along with language, to discover connectivity.
Our contributions are summarized as follows:

• To the best of our knowledge, this is the first work to explore the fundamental relationship and connectivity between EEG and language through computational multimodal methods.
• We introduced MTAM, a Multimodal Transformer Alignment Model, that learns coordinated representations by hierarchical transformer encoders. The transformed representations showed tremendous performance improvements and state-of-the-art results in downstream applications, i.e., sentiment analysis and relation detection, on two datasets, ZuCo 1.0/2.0 and K-EmoCon.
• We carried out experiments with multiple alignment mechanisms, i.e., canonical correlation analysis and Wasserstein distance, and proved that relation-seeking loss functions are helpful in downstream tasks.
• We provided interpretations of the performance improvement by visualizing the original & transformed feature distribution, showing the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language.
• Our findings on word-level and sentence-level EEG-language alignment showed the influence of different language semantics as well as EEG frequency features, which provided additional explanations.
• The brain topographical maps delivered an intuitive demonstration of the connectivity of EEG and language response in the brain regions, which provides a physiological basis for our discovery.

Related Work

(Devlin et al., 2019) and found the relationships between these two modalities were generalized across participants. Huang et al. (2020) leveraged CT images and text from electronic health records to classify pulmonary embolism cases and observed that the multimodal model with late fusion achieved the best performance. However, the relationship between language and EEG has not been explored before.
Multimodal Learning of EEG and Language
Foster et al. (2021) applied EEG signals to predict specific values of each dimension in a word vector through regression models. Wang and Ji (2021) used word-level EEG features to decode corresponding text tokens through an open-vocabulary, sequence-to-sequence framework. Hollenstein et al. (2021) focused on a multimodal approach by utilizing a combination of EEG, eye-tracking, and text data to improve NLP tasks, but did not explore the relationship between EEG and language. More related work can be found in Appendix E.
Overview of Model Architecture
The architecture of our model is shown in Fig. 1. A bi-encoder architecture is helpful for projecting embeddings into a vector space for methodical analysis (Liu et al., 2019a; Hollenstein et al., 2021; Choi et al., 2021). Thus, in our study, we adopt the bi-encoder approach to effectively reveal hidden relations between language and EEG. MTAM, the Multimodal Transformer Alignment Model, contains several modules. We use a dual-encoder architecture, where each view contains hierarchical transformer encoders, and the inputs of the two encoders are EEG and language, respectively. For the EEG hierarchical encoders, each encoder shares the same architecture as the encoder module in Vaswani et al. (2017). In the current literature, researchers assume that the brain acts as an encoder for high-dimensional semantic representations (Wang and Ji, 2021; Gauthier and Ivanova, 2018; Correia et al., 2013). Under this assumption, the EEG signals act as low-level embeddings; by feeding them into their respective hierarchical encoder, we extract transformed EEG embeddings as input for the cross-alignment module. The language path is slightly different: we first process the text with a pretrained large language model (LLM) to extract text embeddings, which are then passed through the language hierarchical encoder before the cross-alignment module.

[Figure 1 caption: The architecture of our model, where EEG and language features are coordinately explored by two encoders. The EEG encoder and language encoder are shown on the left and right, respectively. The cross-alignment module is used to explore the connectivity and relationship within the two domains, while the transformed features are used for downstream tasks.]
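To make the cross-alignment idea concrete, the sketch below shows simplified, differentiable surrogates for the two alignment losses named in the paper: a correlation (CCA-style) term and a one-dimensional sliced Wasserstein term between paired EEG and text embeddings. This is a sketch of the idea under our own simplifications, not the authors' exact implementation (a full deep-CCA objective is more involved).

```python
# Simplified surrogates for the cross-alignment losses: a per-dimension
# correlation term (CCA-style stand-in) and a sliced Wasserstein term.
import torch

def correlation_loss(x, y, eps=1e-8):
    # Maximize per-dimension Pearson correlation between paired embeddings,
    # a lightweight stand-in for full canonical correlation analysis.
    xc = x - x.mean(dim=0)
    yc = y - y.mean(dim=0)
    corr = (xc * yc).sum(0) / (xc.norm(dim=0) * yc.norm(dim=0) + eps)
    return 1.0 - corr.mean()

def sliced_wasserstein(x, y, n_proj=64):
    # Project both batches onto random directions and compare sorted
    # 1-D marginals (closed-form Wasserstein-2 in one dimension).
    theta = torch.randn(x.size(1), n_proj, device=x.device)
    theta = theta / theta.norm(dim=0, keepdim=True)
    px, _ = torch.sort(x @ theta, dim=0)
    py, _ = torch.sort(y @ theta, dim=0)
    return ((px - py) ** 2).mean()

eeg = torch.randn(32, 128)     # transformed EEG features (batch, dim)
txt = torch.randn(32, 128)     # transformed text features (batch, dim)
loss = correlation_loss(eeg, txt) + sliced_wasserstein(eeg, txt)
print(float(loss))
```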
Experimental Results and Discussions
In this study, we evaluate our method on two downstream tasks, Sentiment Analysis (SA) and Relation Detection (RD), on two datasets: K-EmoCon (Park et al., 2020) and the ZuCo 1.0/2.0 dataset (Hollenstein et al., 2018, 2020b). Given a succession of word-level or sentence-level EEG features and their corresponding language, the Sentiment Analysis (SA) task aims to predict the sentiment label. For Relation Detection (RD), the goal is to extract semantic relations between entities in a given text. More details about the tasks, data processing, and experimental settings can be found in Appendix C.
In Table 1, we show the comparison results on the ZuCo dataset for Sentiment Analysis and Relation Detection. Our method outperforms all baselines, and the multimodal approach outperforms unimodal approaches, which further demonstrates the importance of exploring the inner alignment between EEG and language. The results on the K-EmoCon dataset are listed in Appendix D.
Ablation Study
To further investigate the performance of different mechanisms in the CAM, we carried out ablation experiments on the ZuCo dataset; the results are shown in Table 6 in Appendix D.2. The combination of CCA and WD performed better than using only one mechanism for sentiment analysis and relation detection in all model settings. We also conducted experiments on word-level, sentence-level, and concat word-level inputs, and the results are also shown in Table 6. We observe that word-level EEG features paired with their respective words generally outperform sentence-level and concat word-level inputs in both tasks.
Analysis
To understand the alignment between language and EEG, we visualize the alignment weights of word-level EEG-language alignment on the ZuCo dataset. In the word-level alignments in Figs. 2 and 3, the beta2 and gamma1 waves are most active. This is consistent with the literature, which showed that gamma waves are active in detecting emotions (Li and Lu, 2009) and that beta waves are involved in higher-order linguistic functions (e.g., discrimination of word categories). Hollenstein et al. (2021) found that beta and theta waves were most useful in terms of model performance in sentiment analysis. In Kensinger (2009), Kensinger explained that, generally, negative events are more likely to be remembered than positive events. Building on Kensinger (2009), negative words can embed a more significant and long-lasting memory than positive words, and thus may have higher activation in the occipital and inferior parietal lobes.

Figure 4: Brain topologies.
We performed an analysis of which EEG features improved the model's performance, since different neurocognitive factors during language processing are associated with brain oscillations at different frequencies. The beta and theta bands contributed the most positively, which is consistent with theta band power rising with increased language processing activity and with the band's relation to semantic memory retrieval (Kosch et al., 2020; Hollenstein et al., 2021). The beta band's contribution can be best explained by the effect of emotional connotations of the text (Bastiaansen et al., 2005; Hollenstein et al., 2021).
In Fig. 4, we visualized the brain topologies with word-level EEG features for important and unimportant words from positive and negative sentences in the ZuCo dataset. We deemed a word important if its definition had a positive or negative connotation. 'Upscale' and 'lame' are important positive and negative words, respectively, while 'will' and 'someone' are unimportant positive and negative words, respectively. There are two areas in the brain that are heavily associated with language processing: Broca's area and Wernicke's area. Broca's area is assumed to be located in the left frontal lobe, and this region is concerned with the production of speech (Nasios et al., 2019). The left posterior superior temporal gyrus is typically assumed to be Wernicke's area, and this locale is involved with the comprehension of speech (Nasios et al., 2019).
Similar to Figs. 2 and 3, we can observe that the beta2, gamma1, and gamma2 frequency bands have the most powerful signals for all words. In Fig. 4, activity in Wernicke's area is seen most visibly in the beta2, gamma1, and gamma2 bands for the words 'Upscale' and 'Will'. For the word 'Upscale', we also saw activity around Broca's area in the alpha1, alpha2, beta1, beta2, theta1, and theta2 bands. An interesting observation is that for the negative words, 'Lame' and 'Someone', we see very low activation in Broca's and Wernicke's areas. Instead, we see most activity in the occipital lobes and slightly over the inferior parietal lobes. The occipital lobes are noted as the visual processing area of the brain and are associated with memory formation, face recognition, distance and depth interpretation, and visuospatial perception (Rehman and Khalili, 2019). The inferior parietal lobes are generally found to be key actors in visuospatial attention and semantic memory (Numssen et al., 2021).
Conclusion
In this study, we explore the relationship between EEG and language. We propose MTAM, a Multimodal Transformer Alignment Model, to observe coordinated representations between the two modalities and employ the transformed representations for downstream applications. Our method achieved state-of-the-art performance on sentiment analysis and relation detection tasks on two public datasets, ZuCo and K-EmoCon. Furthermore, we carried out a comprehensive study to analyze the connectivity and alignment between EEG and language. We observed that the transformed features show less randomness and sparsity. The word-level language-EEG alignment clearly demonstrated the importance of the explored connectivity. We also provided brain topologies as an intuitive understanding of the corresponding activity regions in the brain, which could build an empirical neuropsychological basis for understanding the relationship between EEG and language through computational models.
Limitations
Since we proposed a new task of exploring the relationship between EEG and language, we believe there are several limitations that can be focused on in future work.
• The size of the datasets may not be large enough. Due to the difficulty and time consumption of collecting human-related data (in addition to privacy concerns), there are few publicly available datasets that pair EEG recordings with corresponding natural language. Compared to other mature tasks (e.g., image classification, object detection), datasets that combine EEG signals with other modalities are rare. In the future, we would like to collect more data on EEG signals with natural language to enhance innovation in this direction.
• The computational architecture, the MTAM model, is relatively straightforward. We agree that the dual-encoder architecture is one of the standard paradigms in multimodal learning. Since our target is to explore the connectivity and relationship between EEG and language, we used a straightforward paradigm. Our model's architecture may be less complex than those used in other tasks, such as image-text pre-training. However, we purposely avoided complicating the model's structure because of the size of the training data: when adding more layers of complexity, the model was more prone to overfitting.
• The literature lacks available published baselines. As shown in our paper, since the task is new, there are not enough published works that provide comparable baselines. We understand that comparison is important, so we implemented several baselines ourselves, including MLP, Bi-LSTM, Transformer, and ResNet, to provide a more convincing judgment and support future work in this area.
Ethics Statement
The goal of our study is to explore the connectivity between EEG and language, which involves human subjects' data and may reflect cognition in the brain, so we would like to provide an ethics discussion. First, all the data used in our paper come from publicly available datasets: K-EmoCon and ZuCo. We did not conduct any human-involved experiments ourselves, and we do not apply any technologies to the human brain. The datasets can be found in Park et al. (2020) and Hollenstein et al. (2018, 2020b). We believe this study can empirically provide findings about the connection between natural language and the human brain. To the best of our knowledge, we do not foresee any harmful uses of this scientific study.
A Three paradigms of EEG and language alignment

The three paradigms are illustrated in Fig. 5 and described in Appendix B.3.
B.1 Hierarchical Encoding of EEG and Text

Let $X_e \in \mathbb{R}^{D_e}$ and $X_t \in \mathbb{R}^{D_t}$ be the two normalized input feature matrices for EEG and text, respectively, where $D_e$ and $D_t$ denote the dimensions of the feature matrices. To encode the two feature vectors, we feed them to their hierarchical transformer encoders: $V_e = E_e(X_e; W_e)$ and $V_t = E_t(X_t; W_t)$, where $E_e$ and $E_t$ denote the separate encoders, $V_e$ and $V_t$ are the outputs (the transformed low-level features), and $W_e$ and $W_t$ denote the trainable weights for EEG and text, respectively. The outputs of the two encoders can be further expanded as $V_e = \{v_e^1, \dots, v_e^n\}$ and $V_t = \{v_t^1, \dots, v_t^k\}$, where $n$ and $k$ denote the number of instances in a given output vector and $v_e^n$ and $v_t^k$ denote the instances themselves. The details of the Transformer encoders are introduced in the section below.
B.2 Transformer Encoders
The transformer is based on the attention mechanism and outperforms previous models in accuracy and efficiency. The original transformer model is composed of an encoder and a decoder. The encoder maps an input sequence into a latent representation, and the decoder uses that representation along with other inputs to generate a target sequence. Our model adopts only the encoder, since we aim at learning representations of features.
First, we feed the input into an embedding layer, which is a learned vector representation. Then we inject positional information into the embeddings by:

$PE_{(pos,\,2i)} = \sin\left(pos / 10000^{2i/d_{model}}\right), \quad PE_{(pos,\,2i+1)} = \cos\left(pos / 10000^{2i/d_{model}}\right). \qquad (1)$

The attention model contains two sub-modules, a multi-headed attention model and a fully connected network. The multi-headed attention computes the attention weights for the input and produces an output vector with encoded information on how each feature should attend to all other features in the sequence.
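The sinusoidal encoding in Eq. (1) can be computed directly; a small self-contained sketch (array sizes are illustrative; an even model dimension is assumed):

```python
# Sinusoidal positional encoding as in Eq. (1).
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model / 2)
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)                  # even indices: sine
    pe[:, 1::2] = np.cos(angle)                  # odd indices: cosine
    return pe

pe = positional_encoding(32, 128)  # e.g., max length 32, model dim 128
```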
There are residual connections around each of the two sub-layers, followed by layer normalization; the residual connection means adding the multi-headed attention output vector to the original positional input embedding, which helps the network train by allowing gradients to flow through the network directly. Multi-headed attention applies a self-attention mechanism, where the input goes into three distinct fully connected layers to create the query, key, and value vectors. The output of the residual connection goes through layer normalization.
In our model, the attention module contains N identical layers, and each layer contains two sub-layers: a multi-head self-attention model and a fully connected feed-forward network. A residual connection and normalization are added in each sub-layer, so the output of a sub-layer can be expressed as $\mathrm{Output} = \mathrm{LayerNorm}(x + \mathrm{SubLayer}(x))$. Multi-head attention uses $h$ different linear transformations to project the query, key, and value, which are $Q$, $K$, and $V$, respectively, and finally concatenates the different attention results:

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\,W^O, \quad \mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V), \qquad (2)$

where the projections $W_i^Q$, $W_i^K$, $W_i^V$, and $W^O$ are parameter matrices, and the computation of attention adopts the scaled dot-product:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V.$

For the output, we use a 1D convolutional layer and a softmax layer to calculate the final output.
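The scaled dot-product step can be written compactly; the following sketch mirrors the equations above (tensor shapes are illustrative assumptions):

```python
# Scaled dot-product attention, matching the equations above.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # (batch, heads, seq, seq)
    weights = F.softmax(scores, dim=-1)             # attention weights
    return weights @ V, weights

# Multi-head attention projects Q, K, V with h different linear maps, applies
# the function above per head, and concatenates the h outputs.
Q = K = V = torch.randn(2, 8, 10, 16)  # batch=2, heads=8, seq=10, d_k=16
out, attn = scaled_dot_product_attention(Q, K, V)
```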
B.3 Cross Alignment Module
As shown in Fig. 5, there are three paradigms of EEG and language alignment. For the word level, the EEG features are divided by each word, and the objective of the alignment is to find the connectivity of different frequencies with the corresponding word. For the concat-word level, the 8 frequencies' EEG features are concatenated as a whole, and then concatenated again to match the corresponding sentence, so the alignment seeks the relationships within the sentence. At the sentence level, the EEG features are computed as an average over the word-level EEG features; there is no word boundary, so the alignment module encodes the embeddings as a whole and explores the general representations. In the Cross Alignment Module (CAM), we introduced a new loss function in addition to the original cross-entropy loss. The new loss is based on Canonical Correlation Analysis (CCA) (Andrew et al., 2013) and Optimal Transport (Wasserstein Distance). As in Andrew et al. (2013), CCA aims to concurrently learn the parameters of two networks to maximize the correlation between them. Wasserstein Distance (WD), which originates from Optimal Transport (OT), has the ability to align embeddings from different domains to explore their relationship (Chen et al., 2020).
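The three input constructions can be sketched as follows; the shapes (8 bands, 105 features per band, 12 words) are illustrative assumptions, not the datasets' exact dimensions:

```python
# Sketch of the three input paradigms built from word-level EEG features.
import numpy as np

n_words, n_bands, d = 12, 8, 105
word_eeg = np.random.randn(n_words, n_bands, d)   # word-level EEG features

# Word level: each word keeps its per-band features, paired with its word.
word_level = word_eeg                              # (n_words, 8, d)

# Concat-word level: concatenate the 8 bands per word, then concatenate the
# words to match the whole sentence.
concat_word = word_eeg.reshape(n_words, -1).reshape(-1)   # (n_words * 8 * d,)

# Sentence level: average the word-level features over the sentence.
sentence_level = word_eeg.mean(axis=0)             # (8, d)
```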
Canonical Correlation Analysis (CCA) is a method for exploring the relationships between two multivariate sets of variables. It learns linear transformations of two vectors that maximize the correlation between them and is used in many multimodal problems (Andrew et al., 2013; Qiu et al., 2018; Gong et al., 2013). In this work, we apply CCA to capture the cross-domain relationship. Let the low-level transformed EEG features be $V_e$ and the low-level language features be $V_t$. We assume $(V_e, V_t) \in \mathbb{R}^{n_1} \times \mathbb{R}^{n_2}$ has covariances $(\Sigma_{11}, \Sigma_{22})$ and cross-covariance $\Sigma_{12}$. CCA finds pairs of linear projections of the two views, $(w_1' V_e, w_2' V_t)$, that are maximally correlated:

$(w_1^*, w_2^*) = \operatorname*{argmax}_{w_1, w_2} \mathrm{corr}(w_1' V_e,\, w_2' V_t) = \operatorname*{argmax}_{w_1, w_2} \frac{w_1' \Sigma_{12} w_2}{\sqrt{w_1' \Sigma_{11} w_1 \; w_2' \Sigma_{22} w_2}}.$

In our study, we modified the structure of Andrew et al. (2013) while preserving its objective, replacing the neural networks with Transformer encoders. $w_1^*$ and $w_2^*$ denote the high-level transformed weights learned from the low-level EEG and text features, respectively.
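For intuition, the canonical correlations in this objective can be computed directly from the covariance matrices; a minimal numpy sketch (not the trainable deep-CCA module used in the model; the regularization term is an added assumption for numerical stability):

```python
# Top canonical correlation from the covariance definitions in the text.
import numpy as np
from scipy.linalg import inv, sqrtm

def cca_top_correlation(Ve, Vt, reg=1e-4):
    Ve = Ve - Ve.mean(0)
    Vt = Vt - Vt.mean(0)
    n = Ve.shape[0]
    S11 = Ve.T @ Ve / (n - 1) + reg * np.eye(Ve.shape[1])
    S22 = Vt.T @ Vt / (n - 1) + reg * np.eye(Vt.shape[1])
    S12 = Ve.T @ Vt / (n - 1)
    # Singular values of S11^{-1/2} S12 S22^{-1/2} are the canonical correlations.
    T = np.real(inv(sqrtm(S11)) @ S12 @ inv(sqrtm(S22)))
    return np.linalg.svd(T, compute_uv=False)[0]   # largest correlation

rho = cca_top_correlation(np.random.randn(100, 16), np.random.randn(100, 16))
```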
Wasserstein Distance (WD) is introduced in Optimal Transport (OT), which is a natural type of divergence for registration problems as it accounts for the underlying geometry of the space, and it has been used for multimodal data matching and alignment tasks (Chen et al., 2020; Yuan et al., 2020; Lee et al., 2019; Demetci et al., 2020; Qiu et al., 2022; Zhu et al., 2022). In Euclidean settings, OT introduces the WD $\mathcal{W}(\mu, \nu)$, which measures the minimum effort required to "displace" points across measures $\mu$ and $\nu$, where $\mu$ and $\nu$ are values observed in the empirical distribution. In our setting, we compute the temporal pairwise Wasserstein Distance on EEG features and language features, i.e., $(\mu, \nu) = (V_e, V_t)$. For simplicity and without loss of generality, assume $\mu \in P(X)$ and $\nu \in P(Y)$ denote two discrete distributions, formulated as $\mu = \sum_{i=1}^{n} u_i \delta_{x_i}$ and $\nu = \sum_{j=1}^{m} v_j \delta_{y_j}$, with $\delta_x$ the Dirac function centered on $x$. $\Pi(\mu, \nu)$ denotes all the joint distributions $\gamma(x, y)$ with marginals $\mu(x)$ and $\nu(y)$. The weight vectors $u \in \Delta_n$ and $v \in \Delta_m$ belong to the $n$- and $m$-dimensional simplex, respectively. The WD between the two discrete distributions $\mu$ and $\nu$ is defined as:

$\mathcal{W}(\mu, \nu) = \min_{T \in \Pi(u, v)} \sum_{i=1}^{n} \sum_{j=1}^{m} T_{ij}\, c(x_i, y_j),$

where $\Pi(u, v) = \{ T \in \mathbb{R}_{+}^{n \times m} \mid T \mathbf{1}_m = u,\; T^{\top} \mathbf{1}_n = v \}$, $\mathbf{1}_n$ denotes an $n$-dimensional all-one vector, and $c(x_i, y_j)$ is the cost function evaluating the distance between $x_i$ and $y_j$.
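In practice, this discrete OT problem is often approximated with entropy regularization; a minimal Sinkhorn-style sketch (not the authors' implementation; the squared Euclidean cost, eps, and the iteration count are assumptions):

```python
# Entropy-regularized approximation of the WD above via Sinkhorn iterations.
import numpy as np

def sinkhorn_wd(x, y, u, v, eps=0.05, n_iter=200):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # cost c(x_i, y_j)
    K = np.exp(-C / eps)
    a = np.ones_like(u)
    for _ in range(n_iter):                              # alternating scaling
        b = v / (K.T @ a)
        a = u / (K @ b)
    T = a[:, None] * K * b[None, :]                      # transport plan
    return (T * C).sum()

n, m = 50, 60
x, y = np.random.randn(n, 8), np.random.randn(m, 8)
wd = sinkhorn_wd(x, y, np.ones(n) / n, np.ones(m) / m)
```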
Loss Objective
The loss objective for the CAM module can be formalized as $\mathcal{L} = \ell_{CE} + \alpha_1 \ell_{CCA} + \alpha_2 \ell_{WD}$, where $\alpha_i \in \{0, 1\}$, $i \in \{1, 2\}$, controls the weights of the different parts of the alignment-based loss objective.
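Expressed as code, the objective is a simple weighted sum; in this sketch the CCA term is assumed to already be defined as a loss (e.g., a negated correlation):

```python
# The CAM training objective as stated above; alpha1/alpha2 act as 0/1 switches.
def cam_loss(l_ce, l_cca, l_wd, alpha1=1, alpha2=1):
    return l_ce + alpha1 * l_cca + alpha2 * l_wd
```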
Sentiment Analysis (SA) Given a succession of word-level or sentence-level EEG features and their corresponding language, the task is to predict the sentiment label. The ZuCo 1.0 dataset consists of sentences from the Stanford Sentiment Treebank, which contains movie reviews and their corresponding sentiment labels (i.e., positive, neutral, negative) (Socher et al., 2013). The K-EmoCon dataset categorizes emotion annotations as valence, arousal, happy, sad, nervous, and angry. For each emotion, the participant labeled the extent of the given emotion on a Likert-scale paradigm. Arousal and valence are rated 1 to 5 (1: very low; 5: very high). Happy, sad, nervous, and angry emotions are rated 1 to 4, where 1 means very low and 4 means very high. The ratings are predominantly labeled as very low and neutral; therefore, to combat class imbalance, we collapse the labels to binary and ternary settings.
Relation Detection (RD) The goal of relation detection (also known as relation extraction or entity association) is to extract semantic relations between entities in a given text. For example, in the sentence "June Huh won the 2022 Fields Medal.", the relation AWARD connects the two entities "June Huh" and "Fields Medal". The ZuCo 1.0/2.0 datasets provide the ground-truth labels and texts for this task. We use texts from the Wikipedia relation extraction dataset (Culotta et al., 2006), which has 10 relation categories: award, control, education, employer, founder, job title, nationality, political affiliation, visited, and wife (Hollenstein et al., 2018, 2020b).
C.2 Datasets and Data Processing

K-EmoCon Dataset K-EmoCon (Park et al., 2020) is a multimodal dataset including videos, speech audio, accelerometer, and physiological signals during a naturalistic conversation. After the conversation, each participant watched a recording of themselves and annotated their own and their partner's emotions. Five external annotators were recruited to annotate both parties' emotions, six emotions in total (arousal, valence, happy, sad, angry, nervous). The NeuroSky MindWave headset captured EEG signals from the left prefrontal lobe (FP1) at a sampling rate of 125 Hz in 8 frequency bands, including delta (0.5-2.75 Hz) and theta. We used Google Cloud's Speech-to-Text API to transcribe the audio data into text.
ZuCo Dataset The ZuCo dataset (Hollenstein et al., 2018, 2020b) is a corpus of EEG signals and eye-tracking data recorded during natural reading. The reading tasks can be separated into three categories: sentiment analysis, natural reading, and task-specific reading. During sentiment analysis, the participant was presented with 400 positive, neutral, and negative labeled sentences from the Stanford Sentiment Treebank (Socher et al., 2013). The EEG data used in this study can be categorized into sentence-level and word-level features. The sentence-level features are the word-level EEG features averaged over the entire sentence duration. The word-level EEG features cover the first fixation duration (FFD) of a specific word, meaning the EEG signals were recorded when the participant's eye first met the word. For both word- and sentence-level features, 8 frequency bands were recorded at a sampling frequency of 500 Hz: theta1 (4-6 Hz), theta2 (6.5-8 Hz), alpha1 (8.5-10 Hz), alpha2, beta1, beta2, gamma1, and gamma2.
C.3 Experimental Setup
The hierarchical transformer encoders follow the standard skeleton from Vaswani et al. (2017), with reduced complexity. To avoid overfitting, we adopt an oversampling strategy for data augmentation (Hübschle-Schneider and Sanders, 2019), which ensures a balanced distribution of classes within each batch. The train/test/validation split is (80%, 10%, 10%), as in Hollenstein et al. (2021). The EEG features are extracted from the datasets in 8 frequency bands and Z-score normalized over each frequency band, following previous work (S. Yousif et al., 2020; Fdez et al., 2021; Du et al., 2022). For comparability, the word and sentence embeddings are also Z-score normalized. We use pre-trained language models to generate text features (Devlin et al., 2019), where all texts are tokenized and embedded using the BERT-uncased-base model. Each sentence has an average length of 20 tokens, so we set a max length of 32 with padding. In the word-level case, each word has an average length of 4 tokens, and we set a max length of 10 with padding. The token vectors from the last four hidden layers of the pre-trained model are extracted and averaged to get the final sentence or word embedding. These embeddings are used in the sentence-level and word-level settings. For the concat word-level setting, we simply concatenate the word embeddings of the respective sentence. All the experimental parameters are listed in Appendix C.4.
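The embedding-extraction step described above can be reproduced with the HuggingFace transformers API; a minimal sketch (the example sentence is made up, and padding-token masking is omitted for brevity):

```python
# Sentence embedding as the average of token vectors from the last four hidden
# layers of bert-base-uncased, as described in the setup above.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

enc = tokenizer("The movie was surprisingly good.", return_tensors="pt",
                padding="max_length", truncation=True, max_length=32)
with torch.no_grad():
    hidden_states = model(**enc).hidden_states     # embeddings + 12 layers
last4 = torch.stack(hidden_states[-4:])            # (4, 1, 32, 768)
embedding = last4.mean(dim=0).mean(dim=1)          # average layers, then tokens
```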
In this section, we present implementation details for our multilayer perceptron (MLP), ResNet, and BiLSTM baseline models. Throughout all baseline results, we used a pre-trained BERT-uncased-base model to extract text features. In the case of EEG features, we used the signals as-is. Both text and EEG features were Z-score normalized before being input to the models. We also used the cross-entropy loss function for all baseline results. We configure the MLP with 6 hidden layers. At every step before the last output layer, we use a rectified linear unit activation function and a dropout rate of 0.3. Starting from the input layer, we use hidden layer sizes of 256, 128, and 64 for our baseline results. Our 1D ResNet architecture has 34 layers (Hong et al., 2020). The BiLSTM
D.4 t-SNE Feature Projections
In order to interpret the performance improvement, we visualized the original feature distribution and the transformed feature distribution. As shown in Fig. 7, the transformed feature distribution forms better clusters than the original one. The features learned by the CAM are more easily separable, showing the effectiveness of discovering and encoding the relationship between EEG and language. Figures 8, 9, and 10 show more t-SNE projection results on the Sentiment Analysis task.
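Such projections can be produced with standard tooling; a generic scikit-learn sketch with placeholder data (the perplexity setting is an assumption, not taken from the paper):

```python
# t-SNE projection of original vs. transformed features, as in the figures.
import numpy as np
from sklearn.manifold import TSNE

original = np.random.randn(200, 128)      # placeholder for raw features
transformed = np.random.randn(200, 128)   # placeholder for CAM outputs

proj_orig = TSNE(n_components=2, perplexity=30).fit_transform(original)
proj_tran = TSNE(n_components=2, perplexity=30).fit_transform(transformed)
# proj_* can then be scatter-plotted, colored by class label.
```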
D.5 Sentence-level Alignment
Figure 11 shows the negative and positive sentence-level alignment weights on the ZuCo dataset. In Figure 11, we find that the alpha1, beta1, and gamma1 frequency bands show larger differences in response between negative and positive sentences.
D.6 Baseline Results
In this section, we provide baseline results that directly use either EEG, language, or their fusion as input for the downstream applications. The results are shown in Table 7 and Table 8.
Figure 1: The architecture of our model, where EEG and language features are coordinately explored by two encoders. The EEG encoder and language encoder are shown on the left and right, respectively. The cross-alignment module is used to explore the connectivity and relationship within the two domains, while the transformed features are used for downstream tasks.
Fig. 2 and Fig. 3 show examples of negative and positive sentence word-level alignment, respectively. The sentence-level alignment visualizations are shown in Appendix D.5.
Figure 5: Three paradigms of EEG and language alignment.
Figure 7: t-SNE projection comparison of untransformed and transformed features of the ZuCo dataset, where different colors represent different classes.
Figure 8: Transformed feature projections of the K-EmoCon dataset on Sentiment Analysis, where different colors represent different classes.
Figure 9: Transformed feature projections of the ZuCo dataset on Sentiment Analysis, word-level, where different colors represent different classes.
Figure 10: Transformed feature projections of the ZuCo dataset on Sentiment Analysis, concat word-level, where different colors represent different classes.
Figure 11: Negative and positive sentence-level alignment of the ZuCo dataset.
Table 1: Comparison with baselines on the ZuCo dataset for Sentiment Analysis (SA) and Relation Detection (RD).
[…] connectivity-based loss function. In our study, we investigate two alignment methods, i.e., Canonical Correlation Analysis (CCA) and Wasserstein Distance (WD). The output features from the cross alignment module can be used for downstream applications. The details of each part are introduced in Appendix B.3.
Table 2: Experiment parameters used in the paper, where the best ones are marked in bold.
Pressure-induced superconductivity in the three-dimensional topological Dirac semimetal Cd3As2
The recently discovered Dirac and Weyl semimetals are new members of the family of topological materials. Starting from them, topological superconductivity may be achieved, e.g., by carrier doping or applying pressure. Here we report a high-pressure resistance and X-ray diffraction study of the three-dimensional topological Dirac semimetal Cd3As2. Superconductivity with Tc ≈ 2.0 K is observed at 8.5 GPa. Tc keeps increasing to about 4.0 K at 21.3 GPa, then shows a nearly constant pressure dependence up to the highest pressure of 50.9 GPa. The X-ray diffraction measurements reveal a structural phase transition around 3.5 GPa. Our observation of superconductivity in the pressurised topological Dirac semimetal Cd3As2 provides a new candidate for a topological superconductor, as argued in a recent point-contact study and a theoretical work. Cd3As2 is thus shown to become a topological superconductor candidate under pressure. Topological superconductors are typically obtained by modifying the surface electronic structure of topological insulators; doping and pressure have been demonstrated as two ways to achieve this. However, pressure has yet to be explored as a route to topological superconductivity in the recently discovered three-dimensional (3D) Dirac semimetals, whose Fermi surface exhibits 3D Dirac points. The electrical resistance of Cd3As2 single crystals has now been investigated under pressure: superconductivity initiates at around 8.5 GPa, and Tc increases to 4 K at approximately 21 GPa. A structural phase transition is additionally evidenced from X-ray diffraction data at roughly 3.5 GPa. This study demonstrates that pressure could be a feasible route to achieve topological superconductivity in 3D topological Dirac semimetals.
INTRODUCTION
In recent years, the search for topological superconductors (TSCs) has been a hot topic in condensed matter physics. 1,2 TSCs have a full pairing gap in the bulk and gapless surface states consisting of Majorana fermions. 1 This is in close analogy to topological insulators (TIs), which have a full insulating gap in the bulk and gapless edge or surface states. 1 The TSC is of great importance, as it is not only a new kind of exotic superconductor but also a source of Majorana fermions for future applications in quantum computation. 1,2 Experimentally, the simplest way to obtain a TSC candidate is to convert a TI into a superconductor by tuning parameters such as doping or pressure. For example, by doping, Cu x Bi 2 Se 3 and Cu x (PbSe) 5 (Bi 2 Se 3 ) 6 are considered candidates for TSCs, 3-6 while Sn 1−x In x Te is considered a candidate for a topological crystalline superconductor. 7,8 Under pressure, Bi 2 Te 3 , Bi 2 Se 3 , Sb 2 Te 3 and Sb 2 Se 3 become superconducting and are also regarded as TSC candidates. 9-14 Note that there are debates on whether these candidates are indeed TSCs; 9-17 therefore, further experimental work is needed to definitively identify a TSC and manipulate the Majorana fermions on its surface.
More recently, a new kind of topological material, the three-dimensional (3D) Dirac semimetal, was discovered, with examples including SrMnBi 2 , Na 3 Bi and Cd 3 As 2 . 18-29 As a 3D analogue of graphene, the Fermi surface of the 3D Dirac semimetal consists only of 3D Dirac points with linear energy dispersion in every momentum direction. 19,23 The exotic Fermi surface of Na 3 Bi and Cd 3 As 2 was confirmed by angle-resolved photoemission spectroscopy experiments. 20-22,24-26 The compound Cd 3 As 2 is of particular interest, as it is stable in air, unlike Na 3 Bi. On the basis of quantum transport measurements, a non-trivial π Berry's phase was obtained, which provides bulk evidence for the existence of the 3D Dirac semimetal phase in Cd 3 As 2 . 28,29 By symmetry breaking, this 3D Dirac semimetal may be driven into a topological insulator or a Weyl semimetal. 23 More interestingly, it was predicted that topological superconductivity may be achieved in Cd 3 As 2 by carrier doping, 23 but this has not been realised so far. As pressure is an effective way to induce superconductivity in TIs, 10-15 it is very interesting to check whether superconductivity can be achieved by applying pressure to Cd 3 As 2 .
Here we present resistance measurements on Cd 3 As 2 single crystals under pressures up to 50.9 GPa. After an initial increase with pressure, the low-temperature resistance starts to decrease with pressure above 6.4 GPa. Superconductivity appears at 8.5 GPa with T c ≈ 2.0 K; T c increases to about 4.0 K at 21.3 GPa and then persists to the highest pressure of 50.9 GPa. A structural phase transition around 3.5 GPa is also observed by X-ray diffraction (XRD) measurements. These results suggest that Cd 3 As 2 may be a new topological superconductor under high pressure. Figure 1a shows the crystal structure of Cd 3 As 2 . 30 The cubic Cd lattice with two vacancies resides in a face-centred cubic As lattice. Figure 1b plots a typical resistivity curve of a Cd 3 As 2 single crystal at 0 GPa. It is metallic and non-superconducting down to 1.5 K.
Pressure-induced superconductivity
In Figure 2, the resistance curves for a Cd 3 As 2 single crystal under various pressures are plotted. From Figure 2a, the temperature dependence of the resistance already changes to insulating behaviour (dR/dT < 0) at 1.1 GPa. With increasing pressure, it becomes more and more insulating up to 6.4 GPa. However, upon further increasing pressure, the resistance at low temperature decreases with pressure. In Figure 2b, the behaviour becomes more and more metallic up to 32.7 GPa. Figure 2c,d show the low-temperature part of the resistance curves above 8.5 GPa. A drop of resistance is observed below 2.0 K at 8.5 GPa, which resembles a superconducting transition.
At 11.7 GPa, the resistance drops to zero, and the transition temperature T c = 3.3 K is defined at the crossing of the two straight lines. T c increases to about 4.0 K at 21.3 GPa, then persists up to the highest pressure, 50.9 GPa.
To make sure the resistance drop in Figure 2 is a superconducting transition, we measured the low-temperature resistance under 13.5 GPa in magnetic fields applied perpendicular to the (112) plane, as shown in Figure 3a. The resistance drop is gradually suppressed to lower temperature with increasing field, which demonstrates that it is indeed a superconducting transition. Note that the superconductivity observed here is very unlikely to be due to contamination by pure As, as the highest T c of As under pressure is much lower, and the pressure dependence of its T c is quite different. 31 Figure 3b plots the temperature dependence of H c2 . Although limited by the temperature range we measured, one can see an apparently linear temperature dependence of H c2 . With a linear fit to the data, H c2 (0) ≈ 4.29 T is roughly estimated. This value is higher than the orbital limiting field $H_{c2}^{orb}(0) = 0.72\,T_c\,|dH_{c2}/dT|_{T=T_c} = 3.71$ T, according to the Werthamer-Helfand-Hohenberg formula. 32 It is much lower than the Pauli limiting field $H_P(0) = 1.84\,T_c = 7.89$ T, 33,34 suggesting an absence of Pauli pair breaking. The linear temperature dependence of H c2 in Figure 3b is actually very interesting. It may come from a two-band Fermi surface topology as in MgB 2 , 35-37 or an unconventional superconducting state as in the heavy-fermion compound UBe 13 . 38 A similar linear temperature dependence of H c2 has recently been observed in the pressurised TSC candidates Bi 2 Se 3 and Cu x Bi 2 Se 3 , in the natural TSC candidate Au 2 Pb, and in the noncentrosymmetric superconductor YPtBi under ambient and high pressures, which was considered an indication of an unconventional superconducting state. 11,39-41 We note that no superconductivity was observed up to 13.43 GPa in an earlier pressure study of a Cd 3 As 2 single crystal. 42 The reason may be that their sample was slightly different from ours, and pressure higher than 13.43 GPa was needed to induce superconductivity. Interestingly, we also note two recent point contact studies on Cd 3 As 2 polycrystal and single crystal, respectively. 43,44 In both studies, an indication of superconductivity was found around the point contact region on the surface, with T c comparable to ours. In particular, no superconductivity was observed with the 'soft' point contact technique, so it was suggested that the superconductivity observed around the point contact region under the 'hard' tip might be induced by the local pressure. 44 In this sense, our bulk resistance measurements under hydrostatic pressure confirm pressure-induced superconductivity in Cd 3 As 2 , although the local pressure under the 'hard' tip is more like uniaxial stress.
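Both limits follow from one-line formulas; in the sketch below, the T_c and slope values are back-solved from the quoted 3.71 T and 7.89 T purely for illustration, not refit from the data:

```python
# The two pair-breaking limits quoted above, as simple formulas.
def whh_orbital_limit(Tc, slope):
    """Werthamer-Helfand-Hohenberg: H_orb(0) = 0.72 * Tc * |dHc2/dT| (tesla)."""
    return 0.72 * Tc * abs(slope)

def pauli_limit(Tc):
    """Pauli paramagnetic limit: H_P(0) = 1.84 * Tc (tesla, Tc in kelvin)."""
    return 1.84 * Tc

# Illustrative inputs: Tc in K, slope in T/K (back-solved from the quoted values).
print(whh_orbital_limit(Tc=4.29, slope=-1.20))  # ~3.71 T
print(pauli_limit(Tc=4.29))                     # ~7.89 T
```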
Pressure-induced crystal structure phase transition

Before discussing whether the pressure-induced superconductivity is topological or not, it is important to know whether it is accompanied by a structural phase transition, as observed in pressurised TIs. 9-14 High-pressure powder XRD measurements on Cd 3 As 2 were performed up to 17.80 GPa. In Figure 4, the XRD patterns below 2.60 GPa can be well indexed as the tetragonal phase in space group I4 1 /acd. 30 All the peaks shift slightly to higher angle with increasing pressure, due to the shrinkage of the lattice. However, when the pressure increases to 4.67 GPa and above, a set of new peaks emerges, clearly different from that of the low-pressure tetragonal phase. This abrupt change indicates that a new crystal structure phase appears, and we roughly determine the transition pressure to be around 3.5 GPa. Similar high-pressure XRD patterns have been observed in an earlier work, and the new high-pressure phase was determined to be monoclinic in space group P2 1 /c. 42

The unusual T c -p phase diagram

In Figure 5, we plot the temperature versus pressure phase diagram for Cd 3 As 2 . As the resistance was only measured down to 1.8 K, we cannot judge whether the superconductivity emerges at the same time as the structural transition near 3.5 GPa, or inside the high-pressure phase. Nevertheless, after increasing from 1.8 to about 4.0 K, there is apparently a region of constant T c from 21.3 to 50.9 GPa. Such a phase diagram is very similar to that of the 3D TI Bi 2 Se 3 , which also shows a nearly constant T c from 30 to 50 GPa after an initial increase of T c starting from 12 GPa. 11 A constant T c over such a large pressure range is highly anomalous, as Kirshenbaum et al. 11 already pointed out. For Bi 2 Se 3 , two mechanisms with contrasting pressure-dependent T c may be balanced to produce a pressure-invariant T c over a wide range of pressure. 11 It was argued that the unique pressure evolution of T c and the anomalous linear temperature dependence of H c2 are two pieces of evidence for unconventional superconductivity in Bi 2 Se 3 . 11 The similarity between Cd 3 As 2 and Bi 2 Se 3 under pressure is worthy of further investigation.

Figure 1. Crystal structure and resistivity of Cd 3 As 2 . (a) The crystal structure of Cd 3 As 2 . The cubic Cd lattice with two vacancies resides in a face-centred cubic As lattice. (b) A typical resistivity curve of a Cd 3 As 2 single crystal at 0 GPa.
DISCUSSION
Now we discuss whether the superconducting state of Cd 3 As 2 under high pressure is topological or not. In ref. 44, the observation of a zero-bias conductance peak and double conductance peaks under the 'hard' tip reveals p-wave-like unconventional superconductivity in Cd 3 As 2 . Considering its special topological properties, the authors suggested that Cd 3 As 2 under high pressure is a TSC candidate. 44 Furthermore, a recent theoretical work also argued that Cd 3 As 2 likely realises a TSC with bulk point nodes and a surface Majorana fermion quartet. 45 Under high pressure, the symmetry-lowering effect may stabilise the TSC phase by increasing the condensation energy, as the point nodes in the TSC phase are gapped when C 4 reduces to C 2 (the structural phase transition from tetragonal to monoclinic). 45 These two works suggest that the superconductivity we observe under hydrostatic pressure is topological, although detailed band structure calculations for the high-pressure phase of Cd 3 As 2 are needed to give more information about this possible TSC phase.
In summary, we have performed resistance measurements on 3D Dirac semimetal Cd 3 As 2 single crystals under pressures up to 50.9 GPa. It is found that superconductivity with T c ≈ 2.0 K emerges at 8.5 GPa. T c increases to 4.0 K at 21.3 GPa, then shows an anomalous constant pressure dependence up to the highest pressure measured. High-pressure powder XRD measurements reveal a structural phase transition around 3.5 GPa. Our observation of superconductivity in Cd 3 As 2 under high pressure provides an interesting candidate for a topological superconductor.
MATERIALS AND METHODS
High-quality Cd 3 As 2 single crystals were grown from Cd flux. 28 The largest natural surface was determined as (112) […] hexagonal boron nitride. 9,13,14 The sample size is about 80 × 80 μm 2 in the (112) plane, with a thickness of ∼10 μm. The pressure was determined by the ruby fluorescence method at room temperature before and after each cooling. The high-pressure powder XRD measurements with synchrotron radiation were performed at the HPCAT of the Advanced Photon Source of Argonne National Lab (Lemont, IL, USA) using a symmetric Mao-Bell diamond anvil cell at room temperature. The X-ray wavelength is 0.434 Å.

Figure 4. Crystal structure phase transition of Cd 3 As 2 under pressure. The powder XRD patterns of Cd 3 As 2 under different pressures at room temperature. Below 2.60 GPa, the XRD patterns can be well indexed as the tetragonal phase in space group I4 1 /acd (shown by short black lines). A set of new peaks emerges on increasing pressure to 4.67 GPa and above, which shows a structural phase transition from the tetragonal to the monoclinic phase.

Figure 5. The phase diagram of Cd 3 As 2 . Temperature versus pressure phase diagram of Cd 3 As 2 . A structural phase transition occurs between 2.60 and 4.67 GPa. After increasing from 1.8 to about 4.0 K, there is apparently a region of constant T c from 21.3 to 50.9 GPa. Such a phase diagram is similar to that of the 3D topological insulator Bi 2 Se 3 .
"Physics"
] |
Salinity Alleviation and Reduction in Oxidative Stress by Endophytic and Rhizospheric Microbes in Two Rice Cultivars
Increased soil salinity poses serious limitations on crop yield and quality; thus, an attempt was made to explore microbial agents to mitigate the ill effects of salinity in rice. The hypothesis was that microbial inoculation can induce stress tolerance in rice. Since the rhizosphere and endosphere are two different functional niches directly affected by salinity, it could be crucial to evaluate both for salinity alleviation. In this experiment, endophytic and rhizospheric microbes were tested for differences in salinity stress alleviation traits in two rice cultivars, CO51 and PB1. Two endophytic bacteria, Bacillus haynesii 2P2 and Bacillus safensis BTL5, were tested with two rhizospheric bacteria, Brevibacterium frigoritolerans W19 and Pseudomonas fluorescens 1001, under elevated salinity (200 mM NaCl), along with Trichoderma viride as an inoculated check. The pot study indicated the presence of variable salinity mitigation mechanisms among these strains. Improvement in the photosynthetic machinery was also recorded. These inoculants were evaluated for the induction of antioxidant enzymes, viz. CAT, SOD, PO, PPO, APX, and PAL activity, along with the effect on proline levels. Modulation of the expression of the salt stress responsive genes OsPIP1, MnSOD1, cAPXa, CATa, SERF, and DHN was assessed. Root architecture parameters, viz. cumulative length of total root, projection area, average diameter, surface area, root volume, fractal dimension, and number of tips and forks, were studied. Confocal scanning laser microscopy indicated accumulation of Na+ in leaves using the cell-impermeant Sodium Green™ Tetra (Tetramethylammonium) Salt. It was found that each of these parameters was induced differentially by the endophytic bacteria, rhizospheric bacteria, and fungus, indicating different paths to complement one ultimate plant function. Biomass accumulation and the number of effective tillers were highest in T4 (Bacillus haynesii 2P2) plants in both cultivars, showing the possibility of a cultivar-specific consortium. These strains and their mechanisms could form the basis for further evaluating microbial strains for climate-resilient agriculture.
Introduction
Climate change poses a great challenge worldwide for food security, a high-priority area on the list of UN Sustainable Development Goals [1]. Increased abiotic stress is one of the greatest challenges arising from climate change. Several abiotic stresses hamper the quality and quantity of produce [2]. Salinity is one of the abiotic stressors severely hampering crop growth, and its effect is increasing day by day. Soil salinity causes serious limitations in achieving the yield potential of a cultivar.
Increasing soil salinity is a grave threat to the crop production system. There is a sharp decline in both quality and quantity of produce due to increasing soil salinity, and intensive agricultural practices have accelerated the expansion of saline soils. Salinity stress imparts serious ill effects on nutrient uptake, osmotic balance, membrane integrity, and overall growth, thus hampering the whole crop dynamics [2]. It also causes generation of excessive reactive oxygen species, which, besides acting as signaling molecules, can harm plant function and reduce productivity at higher concentrations [3]. A large acreage of quality land comes under salinity every year. This poses serious limitations on crop productivity and limits sustainable land use. Any attempt to reduce the effects of salinity on the plant system that could support improved growth under elevated salinity would be an important strategy for developing climate-resilient agriculture.
Microbes are reported to have a close association with plants for nutrient cycling and alleviation of biotic and abiotic stresses [4,5]. Microbes suitable for mitigating the deleterious effects of soil salinity on plant growth and productivity are being explored for sustainable agriculture. Microbes are reported to have a tremendous capacity to sustain plant growth under salinity, for example by improving nutrient uptake, osmotic balance, ionic balance, membrane stability, and overall growth [6]. Varietal development is an option for producing climate-resilient cultivars, but it is limited by the availability of tolerant donors in each crop, whereas salinity-alleviating microbes can be applied to existing varieties of a number of crops. Exploring endophytes and rhizospheric microbes that could alleviate such ill effects of salinity on different physiological parameters could be a great resource in crop packages and practices. Induced systemic tolerance is reported as one of the most crucial mechanisms by which microbes help plants mitigate the effects of salinity [7]. Plants have antioxidant enzymes that guard them against the damaging effects of the extremely high reactive oxygen species (ROS) created during stress, and these enzymes are induced by microbial inoculation [8].
Microbes have a vast functional diversity [9]. They can perform salinity alleviation as rhizosphere microbes or as endophytes. Rhizosphere microbes act in the rhizosphere and could be instrumental at the plant-soil interface where plants encounter salinity, while endophytic microbes act inside the plant system where the ill effects of salinity are realized [5]. Therefore, exploring the combined possibility of screening rhizosphere and endophytic microbes for conferring stress tolerance could provide the benefits of two different niches, which could complement each other as inoculants. It is also important to evaluate whether the difference in mechanisms of rhizospheric and endophytic microbes could bring significant changes in crop growth under salinity.
In the agriculture sector, rice (Oryza sativa L.) is considered a staple food for billions of people. Since rice cultivation is undertaken in flooded conditions, a huge amount of salt accumulates in the upper soil layer as the water evaporates, and this soil salinity affects crop development. Thus, rice could be considered a model system to study salt alleviation. For our study, we have taken two cultivars with distinct features. CO51 is a short-duration, high-yielding rice cultivar with a higher tolerance to stresses [10]. The second variety, Pusa Basmati 1 (PB1), is the world's first semi-dwarf Basmati variety, has higher yields, and is the most widely grown Basmati variety, but is relatively susceptible to some stresses [11]. Looking at the increasing detrimental effects of salinity, it is important to characterize different microbial systems for staple food crops, such as rice, so that the base for effective climate-resilient cropping strategies can be widened and Indian farming made future-ready. Therefore, it was considered worthwhile to evaluate the potential of endophytic and rhizospheric bacteria in the mitigation of salt stress in rice. With this objective, a pot trial of two rice varieties, Pusa Basmati-1 (PB1) and CO51, was conducted with two endophytes, two rhizospheric bacteria, and a Trichoderma strain as the standard inoculation.
Results
The halotolerant endophytes and rhizobacteria were screened in planta and were found to influence dry matter accumulation and root-shoot length, increase antioxidant activities, and supplement the plants' machinery for abiotic stress mitigation. Bacillus safensis BTL5, Bacillus haynesii 2P2, Brevibacterium frigoritolerans W19, and Pseudomonas fluorescens 1001 were effective in vitro as well as in the in planta trial (Figure 1). Trichoderma viride was taken as the standard inoculated check. The effects of salinity and inoculation were visible in plant growth and development (Figure 2). Significant differences were found in root development, and corresponding differences could be observed in shoot growth.
Chlorophyll and Carotenoids Content
In CO51, the highest chlorophyll content was obtained from T6 (2.06 mg g −1 FW), while T2 plants had a significantly lower (1.19 mg g −1 FW) chlorophyll content (Figure 3A). In PB1, the highest chlorophyll content was obtained from T4 and T3 (2.39 and 2.32 mg g −1 FW, respectively), while T2 plants recorded a significantly lower (1.10 mg g −1 FW) chlorophyll content. In both CO51 and PB1, the carotenoid content was highest in T5 plants (0.78 and 0.89 mg g −1 FW, respectively) and lowest in T2 plants (0.31 and 0.44 mg g −1 FW, respectively; Figure 3B).
Antioxidant Enzymes
In CO51, the highest CAT activity was found in T4 (3364.92 µmol ml −1 ), whereas plants in the T1, T3, and T7 treatments had significantly lower CAT activity. In the case of PB1, the highest CAT activity was obtained from T7 (3374.35 µmol ml −1 ), whereas plants in the T1, T4, and T5 treatments had the lowest CAT activity (Figure 4A).
In CO51, the highest SOD activity was found in T3 (8.25 Unit g −1 FW), whereas plants in the T1 and T5 treatments had significantly lower SOD activity. In the case of PB1, the highest SOD activity was obtained from T4 and T6 (8.21 and 7.97 Unit g −1 FW, respectively), whereas T1 plants had the least SOD activity (2.54 Unit g −1 FW; Figure 4B).
Shoot and Root Dry Weight
In CO51, the highest shoot dry weight was found in T5 (8.19 g), which was on par with T4 (8.10 g), while T1 plants had a significantly lower shoot dry weight (6.63 g; Table 1). In PB1, the highest shoot dry weight was found in T4 (12.08 g), while T1 plants had a significantly lower shoot dry weight (7.39 g). In CO51, the highest root dry weight was found in T4 (6.60 g), which was on par with T5 (6.27 g) and T7 (6.26 g), while T1 plants had a significantly lower dry weight (4.97 g). In PB1, the highest root dry weight was found in T4 (7.99 g), which was on par with T5 (7.61 g), while T1 plants had a significantly lower root dry weight (6.45 g).
Shoot and Root Length
In CO51, the highest shoot length was found in T6 (84 cm), while T7, T3, and T1 plants had a significantly lower shoot length (Table 1). In PB1, the highest shoot length was found in T6 (74 cm), which was on par with T2, T3, T5, and T7, while T1 plants had a significantly lower shoot length (68.33 cm). In CO51, the highest root length was found in T1 (34.33 cm), which was on par with T4 and T7, while T6 plants had a significantly lower root length (22.33 cm), which was on par with T2 and T3. In PB1, the highest root length was found in T3 (43 cm), while T7 plants had a significantly lower root length (33.67 cm).
Number of Tillers
In CO51, the highest number of tillers was found in T7 (8.92), while T2 plants had a significantly lower number of tillers (3.92) (Table 2). In PB1, the highest number of tillers was found in T7 (7.75), which was on par with T1 and T6, while T2 plants had a significantly lower number of tillers (5.62). In CO51, the number of effective tillers was highest in T7 (8.50), while T2 plants had a significantly lower number of effective tillers (3.67). In PB1, the highest number of effective tillers was found in T6 (6.17), which was on par with T1, T4, and T5, while T2 plants had a significantly lower number of effective tillers (3.95).
Root Parameters
In the case of CO51, the cumulative length of the total root system was significantly highest in the negative control (2861.81 cm) and T7 (2931.01 cm) treated plants (Table 3). The projection area, surface area, and number of forks were highest in the negative control, followed by T7. Average diameter, root volume, and fractal dimension were highest in T5 plants. The highest number of tips was found in T7. In PB1, the cumulative length, projection area, surface area, average diameter, fractal dimension, and number of tips and forks were highest in T7, followed by T4. The root volume was highest in T7 and T4.
Gene Expression Study
In this study, we assessed the expression of the OsPIP1, MnSOD1, cAPXa, CATa, SERF, and DHN genes (Figure 5). Changes in the expression of salt stress responsive genes were recorded from the treatments inoculated with microbial agents, and the two cultivars showed differential gene expression. In CO51, the expression of the OsPIP1 gene was highest in Bacillus safensis BTL5 (T5) and Brevibacterium frigoritolerans W19 (T6) inoculated plants, whereas inoculation of Trichoderma viride was found to downregulate OsPIP1 expression. However, in PB1, Bacillus safensis BTL5 (T5) resulted in the highest expression, followed by the positive control (T2) and Pseudomonas fluorescens (T7). In the case of MnSOD1, CO51 plants showed the highest expression in T6 and T3, whereas PB1 plants had the highest expression in T6 (Brevibacterium frigoritolerans). The cAPXa gene expression was upregulated the most by T6 and T5 in CO51, and by T7 and T6 in PB1. In CO51, CATa gene expression was highest in T4 and T5 plants, whereas in PB1, the highest fold change was recorded from T7 and T5 plants. In the case of the SERF transcription factor, CO51 plants recorded the highest fold increase in T5 and T3.
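The text does not state the exact quantification pipeline, but fold changes of this kind are commonly derived from qRT-PCR data with the 2^-ΔΔCt method; a hedged sketch with made-up Ct values:

```python
# Relative expression via the 2^-DeltaDeltaCt method; a hedged illustration
# only -- the Ct values below are invented, not taken from the study.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# e.g., a hypothetical target-gene measurement against a housekeeping gene:
print(fold_change(22.1, 18.0, 24.3, 18.2))   # -> 4.0-fold upregulation
```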
CSLM Study for Na + Accumulation in Leaf Tissues
In this study, a relatively higher Na + accumulation was seen in the leaves of both cultivars. In CO51, higher sodium accumulation was seen in the positive control and was lower in all inoculated treatments (Figure 6). In the positive control, the accumulation was higher between sclerenchyma cells, whereas among the other treatments, the accumulation between sclerenchyma cells was lower. T4 and T7 had a Sodium Green fluorescence similar to the negative control. In PB1, the positive control had the highest Sodium Green fluorescence, followed by T3, T4, and T5. The lowest accumulation was seen in the negative control, T4, and T7 treatments.
Discussion
Several microbial agents have been reported to play protective roles under elevated salinity [6]. Microorganisms have been found to re-establish ion homeostasis and reduce the ill effects of ion toxicity and oxidative stress. In this study, the potential of endophytic and rhizospheric bacteria to alleviate salt stress was tested in two rice cultivars under pot conditions.
In the present study, the decrease in biomass accumulation (root and shoot dry weight) and in the number of tillers following the addition of 200 mM NaCl in T2 reflects the ill effects of salinity relative to the negative control (T1). Cells maintain a balance among different ions for proper functioning; excess salt in the soil solution causes ion imbalance and ion toxicity in the cell, which disturbs normal cellular processes [5]. In addition, an excess of reactive oxygen species is generated in plant cells under stress, leading to reduced growth and biomass accumulation [6]. Inoculation with microorganisms (treatments T3-T7) under saline conditions identical to T2 improved biomass accumulation and the number of tillers ( Table 2). The number of tillers is directly associated with rice yield, and in both cultivars, inoculated treatments had a significantly higher number of effective tillers. This could be due to greater photosynthetic capacity in inoculated plants: our results showed that chlorophyll and carotenoid contents were higher in all inoculated treatments than in the positive control in both cultivars. High salinity damages the photosynthetic pigments and reduces the photosynthetic rate, as seen in the chlorophyll content of the positive control plants (T2). Other studies have reported that the application of endophytic and rhizospheric bacteria improves chlorophyll and carotenoid accumulation [6], and this increase in photosynthetic machinery is directly related to dry matter accumulation.
Alleviation of ion toxicity and oxidative stress would be one reason for the higher photosynthetic pigment contents. Inoculation with salinity-tolerant microbes is reported to reduce the accumulation of Na+ ions and improve the ionic balance of cells, as seen in Figure 6, where confocal scanning laser microscopy showed reduced accumulation of Na+ ions in inoculated plants under salinity. A similar result was reported by Sahu et al. [6] in tomatoes under salinity, with Na+ exclusion, K+ uptake, and compartmentalization in vacuoles offered as possible explanations for the reduced Na+ content. Plant tissues given different NaCl applications showed corresponding differences in Sodium Green™ Tetra (Tetramethylammonium) fluorescence [12]. Sodium Green™ is a light-excitable fluorescent Na+ indicator probe that reports Na+ concentrations with very high specificity over other monovalent cations, such as K+ [13].
This reduction in Na+ accumulation could be due to the activation of the salinity-stress-responsive regulatory genes SERF1 ( Figure 5E) and DHN ( Figure 5F), which activate multiple pathways for salinity tolerance. SERF1 (SALT-RESPONSIVE ERF1) activates other factors responsible for salinity stress mitigation, such as MAPKs, DREB2A, and zinc finger proteins [14], and also regulates ROS-mediated salinity stress signaling [15]. The DHN gene is a key regulator of abiotic stress responses in plants, plays an important role in scavenging excess ROS, and has been reported to alleviate salinity stress in rice [16].
Increased photosynthate translocation to the roots could additionally have helped plants respond to microbial signals, improving root growth and developing a robust root system to withstand salinity. Table 3 shows the improvements in different root architecture parameters upon microbial inoculation. Roots are strongly affected by excessive salinity [17,18], and root architecture improvement can help plants survive in highly saline soils [19]; improvements in parameters including root projection area, root volume, surface area, number of lateral roots, and numbers of forks and tips in inoculated plants could have helped the rice plants absorb nutrients and water. Consistent with this, modulation of the expression of the aquaporin gene OsPIP1 in the roots was found in T5 and other treatments ( Figure 5A). In line with these results on root architecture improvement by inoculation, several other workers have also reported the relevance of root parameters to salinity alleviation [20][21][22]. Secretion and regulation of plant hormones by these microbes could be an important mechanism influencing root development [23][24][25]. This study provides details on the different root architecture parameters of the CO51 and PB1 rice cultivars under elevated salinity.
The ability of endophytic and rhizospheric bacteria to reduce the effects of high salinity could have resulted in the increased biomass accumulation under 200 mM NaCl stress. Sahu et al. [5,6] have discussed several mechanisms by which these microbes could improve plant metabolism under salinity stress, one of which is the accumulation of compatible solutes such as proline. The proline accumulation study ( Figure 5) showed higher accumulation in plants treated with the potential microbes: in CO51, T6 and T4 plants had the highest proline accumulation, whereas in PB1, T6 and T3 plants accumulated more proline. This increase in proline could be partially responsible for the improved ion homeostasis and reduced osmotic imbalance, in line with the report of Das and Roychoudhury [26] that proline minimizes the harmful effects of ROS under stress. Similarly, the application of Staphylococcus haemolyticus and Bacillus subtilis improved the production of different osmolytes, such as proline, which enhanced plant performance under salinity [27]. Proline is also reported to induce the expression of several other stress-responsive genes in plants [28], and the reports of Sahu et al. [6] and Nguyen et al. [29] indicate that proline plays a dual role, as an antioxidant and in osmoregulation, in salinity stress alleviation. Some reports suggest that inoculation with bacterial agents upregulates the expression of genes responsible for proline biosynthesis [30]. Increased accumulation of compatible solutes is thus an important salt stress alleviation strategy [5]: it can effectively reduce the ill effects of salinity and allow cells to maintain cellular homeostasis.
Our study likewise showed modulation of the antioxidant enzymes (superoxide dismutase, catalase, peroxidase, phenylalanine ammonia lyase, and polyphenol oxidase) in plants inoculated with the different microbes. Although these antioxidant enzymes were differentially activated in the two cultivars by the applied microbes, all were enhanced under salinity stress. There are numerous instances of endophytes enhancing plants' antioxidant enzymatic activity as a crucial mechanism for reducing salt stress [6]. However, the pattern in our study varied among treatments and cultivars (Figure 4). For peroxidase, the T5 inoculation gave the highest activity in CO51, whereas Trichoderma inoculation gave higher activity in PB1. For catalase, CO51 showed the highest activity with T4 inoculation, but PB1 with T7. This could be due to the varied mechanisms by which the inoculants induce systemic tolerance; other studies have also indicated that distinct microbial inoculants use diverse strategies to reduce salt stress [6]. It also points to complementary roles of different microbial inoculants in host plant fitness. As a holobiome, the host plant interacts actively with numerous microbes, and all of these interactions may elicit different responses in the plant system; accordingly, all five antioxidant enzymes were activated differently in the two cultivars. These enzymes, activated through induced systemic tolerance, may account for the decreased ROS damage in the rice plants. The gene expression study also validated the modulation of genes for antioxidant enzyme biosynthesis (CATa, cAPX, MnSOD1) in both CO51 and PB1. Supplementing a plant's antioxidant machinery by microbial inoculation would be economically significant for reducing the negative impacts of salt stress. Performance differed between the cultivars, showing that they respond differently to microbial inoculation; this is similar to the findings of Sahu et al. [9], where differential microbial functions were reported in two different cultivars.
Preparation of Pots, Inoculation and Transplanting
Pots were filled with 5 kg of non-sterile field soil and farmyard manure (FYM) in a 3:1 ratio. A blanket application of NPK fertilizer (120:80:40 kg ha −1 ) was applied as the basal dose. Row and column randomization was done three times to avoid any heterogeneity in the soil. After randomization, three wetting and drying cycles were applied to bring the soil to natural compaction. Seven treatments were applied to two rice cultivars, CO51 and PB1, with three replications each. Nurseries for the two cultivars were raised as per the protocol of Nawaz et al. [31]. Microbial inoculation was done by seedling dip in the respective treatments (Table 4): seedlings were dipped in the respective culture broths (2 mL per liter) for 30 min with 0.01% carboxymethyl cellulose as a sticking agent, and after 1 h the seedlings were transplanted into the pots. Two paddy seedlings of equal height per hill, and two hills per pot, were transplanted. Plants were raised following standard cultivation practices and observations were taken periodically. In brief, rice plants were irrigated at field capacity, and 50% of the nitrogen was applied in two split doses at a 30-day interval after inter-culture operations. Pots were randomized twice during the growth period to avoid any heterogeneity in light interception.
Chlorophyll and Carotenoids Content
Leaves were sampled 60 days after transplanting to assess the chlorophyll and carotenoid contents, as described by Witham et al. [32]. Briefly, one gram of leaf tissue was crushed in 80% pre-chilled acetone and the volume was made up to 100 mL with pre-chilled acetone. The absorbance of the supernatant was recorded at 452, 663, and 645 nm using a UV-vis 1700 spectrophotometer (Shimadzu, Japan). The amounts of chlorophyll and carotenoids in the leaf tissue (mg/g) were calculated using the formulae given in Sadashivam and Manickam [33].
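For reference, chlorophyll a and b in 80% acetone extracts are commonly estimated from the 663 and 645 nm absorbances; the sketch below uses Arnon's classical coefficients as a stand-in for the formulae of Sadashivam and Manickam cited above (the A452 carotenoid coefficient is not reproduced here), so treat the numbers as illustrative.

```python
def chlorophyll_mg_per_g(a663, a645, extract_ml=100.0, fw_g=1.0):
    # Arnon (1949) coefficients for 80% acetone; mg per g fresh weight
    chl_a = (12.7 * a663 - 2.69 * a645) * extract_ml / (1000.0 * fw_g)
    chl_b = (22.9 * a645 - 4.68 * a663) * extract_ml / (1000.0 * fw_g)
    total = (20.2 * a645 + 8.02 * a663) * extract_ml / (1000.0 * fw_g)
    return chl_a, chl_b, total

# Hypothetical readings: A663 = 0.52, A645 = 0.28 for 1 g tissue in 100 mL
print(chlorophyll_mg_per_g(0.52, 0.28))
```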
Proline Content
Proline content was measured by crushing a 0.5 g sample in 10 mL of 3% aqueous sulphosalicylic acid, followed by filtering through Whatman no. 2 filter paper. To 2 mL of filtrate, 2 mL of glacial acetic acid and 2 mL of acid ninhydrin were added, and the mixture was kept in a boiling water bath for 1 h. The reaction was terminated by placing the tubes in an ice bath. Toluene (4 mL) was then mixed in by stirring for 20-30 s, and the solution was kept at room temperature for the toluene layer to separate. The upper layer was taken, and the absorbance of the red color was read at 520 nm with a UV-vis 1700 spectrophotometer (Shimadzu, Japan). Calculations were performed using a standard curve, as described in Sadashivam and Manickam [33].
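The standard-curve step reduces to fitting a line to the A520 values of known proline standards and inverting it for the samples; a minimal sketch with hypothetical standard values is shown below (unit conversion to µmol/g fresh weight would follow the cited text).

```python
import numpy as np

# Hypothetical proline standards (ug/mL) and their A520 readings
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_a520 = np.array([0.00, 0.11, 0.22, 0.45, 0.90])

slope, intercept = np.polyfit(std_conc, std_a520, 1)  # A520 = m*conc + c

def proline_ug_per_ml(a520):
    return (a520 - intercept) / slope

print(proline_ug_per_ml(0.52))  # concentration in the assayed extract
```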
Electrolyte Leakage
The leakage of electrolytes from rice leaves was assessed by the autoclaving method. Ten leaf discs were taken from each treatment of both cultivars, placed in 25 mL of deionized water, and incubated at ambient temperature for 4 h. After incubation, the contents were autoclaved at 121 °C for 30 min. The electrical conductivity was measured before and after autoclaving, and electrolyte leakage was calculated as per Khare et al. [34].
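Electrolyte leakage from this assay is conventionally expressed as the pre-autoclave conductivity as a percentage of the post-autoclave (total) conductivity; the sketch below assumes the formula of Khare et al. follows that convention.

```python
def electrolyte_leakage_percent(ec_before, ec_after):
    # EC before autoclaving / total EC after autoclaving, as a percentage
    return ec_before / ec_after * 100.0

# Hypothetical conductivities (dS/m) before and after autoclaving
print(electrolyte_leakage_percent(0.21, 0.84))  # -> 25.0
```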
Peroxidase (PO)
The enzyme extract was prepared using 1 g of fresh plant tissue, which was ground in phosphate buffer (0.1 M, 3 mL, pH 7.0) using a pestle and mortar. The homogenate was centrifuged at 12,000 rpm for 15 min. A 100 µL aliquot of the enzyme extract was mixed with 50 µL of 20 mM guaiacol solution and 3 mL of 50 mM phosphate buffer. Finally, 30 µL of 12.3 mM H2O2 was added to the cuvette to start the reaction; absorbance was recorded at 436 nm using a UV-vis 1700 spectrophotometer, and ∆t was calculated (Hammerschmidt et al. [35]).
Catalase (CAT)
Catalase activity was measured as per the protocol of Luck [36]. Tissue homogenization and preparation of the enzyme extract were performed as for peroxidase. The reaction mixture was prepared by mixing 100 µL of enzyme extract and 3 mL of 50 mM phosphate buffer. The addition of 30 µL of 12.3 mM H2O2 in the cuvette as the last step started the reaction. H2O2 degradation was recorded at 240 nm using a UV-vis 1700 spectrophotometer, and ∆t was calculated.
Superoxide Dismutase (SOD)
SOD activity was measured as per the protocol of Beauchamp and Fridovich [37]. The enzyme extract was prepared using 1 g of fresh plant tissue ground in phosphate buffer (0.1 M, 3 mL, pH 7.0). The 3 mL reaction mixture contained 50 mM phosphate buffer (pH 7.8), 13 mM methionine, 75 µM NBT, 2 µM riboflavin (added last), 1 mM EDTA, and 50 µL of the enzyme extract. The tubes were shaken and placed 30 cm below a light source consisting of two 15 W fluorescent lamps. The reaction was started by switching on the light and was allowed to run for 10 min, over which time it had previously been found to be linear. The reaction was stopped by switching off the light, and the tubes were covered with a black cloth. The absorbance of the reaction mixture was read at 560 nm; diffused room light had no measurable effect. The reaction mixture lacking the enzyme developed the maximum color, and color development decreased with increasing volume of added enzyme extract. Log A560 was plotted as a function of the volume of enzyme extract used in the reaction mixture.
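In this assay, activity is usually expressed as percent inhibition of NBT photoreduction, with one unit defined as the amount of enzyme causing 50% inhibition (Beauchamp and Fridovich's convention); the arithmetic, with hypothetical absorbances, is sketched below.

```python
def sod_activity(a560_no_enzyme, a560_with_extract):
    # a560_no_enzyme: reaction mixture lacking enzyme (maximum color)
    inhibition = (a560_no_enzyme - a560_with_extract) / a560_no_enzyme * 100.0
    units = inhibition / 50.0  # one unit = 50% inhibition
    return inhibition, units

print(sod_activity(0.80, 0.44))  # -> (45.0, 0.9)
```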
Phenylalanine Ammonia Lyase (PAL)
The enzyme extract was prepared following the protocol of Havir [38]. Borate buffer (0.5 mL), enzyme solution (0.2 mL), and water (1.3 mL) were mixed, and the reaction was initiated by adding 1 mL of L-phenylalanine solution. The mixture was incubated at 32 °C for 60 min. Trichloroacetic acid (1 M, 0.5 mL) was added to stop the reaction, and the absorbance was measured at 290 nm using a UV-vis 1700 spectrophotometer to assess phenylalanine ammonia lyase activity.
Polyphenol Oxidase (PPO)
Polyphenol oxidase activity was determined according to Gauillard et al. [39]. The enzyme extract was prepared by homogenizing a 100 mg leaf sample in 2 mL of 0.1 M phosphate buffer and centrifuging. Enzyme activity was measured by taking 1.4 mL of 0.1 M citrate-phosphate buffer with 0.5 mL TNB and 1 mL of 2 mM catechol solution. The reaction was started by adding 100 µL of enzyme extract, and absorbance was read at 412 nm at 30-s intervals for 3 min. Polyphenol oxidase activity was expressed as the change in absorbance (change in optical density; ∆OD) per min per mg fresh weight.
Ascorbate Peroxidase (APx)
Ascorbate peroxidase (APX) activity in rice leaves was assessed as per the method of Nakano and Asada [40]. To 10 µL of enzyme extract (prepared as described for PPO), 180 µL of 0.2 mM ascorbate and 0.2 mM hydrogen peroxide were added to start the reaction. The absorbance was recorded at 290 nm for 120 s using the spectrophotometer, and APX activity was calculated according to Maksimovic and Zivanovic [41].
Biomass Accumulation
Root and shoot dry weights were recorded by uprooting the plants and gently washing off the soil under running tap water until clean. Roots and shoots were detached, and the fresh tissues were weighed and kept in a hot air oven at 60 ± 5 °C until a stable weight was achieved. The dry weight was recorded after the samples had dried.
Shoot and Root Length and Number of Tillers
Root and shoot lengths were measured at the maximum vegetative phase; roots were carefully washed out of the pots for harvesting and measurement. The number of tillers was counted at 90 DAS. Tillers bearing a panicle were counted as effective tillers, and tillers without a panicle were counted as non-effective tillers.
Root Scanning
The roots were carefully washed out of the respective pots and cleaned twice with SDW. The roots were scanned on an Epson Expression 12000 XL scanner, and different architecture parameters were recorded using the WinRHIZO Pro software. Parameters such as root area, root volume, total length, number of tips, number of forks, fractal dimension, and average diameter were recorded.
Gene Expression Study
Total plant RNA was isolated using the PureLink™ RNA Mini Kit (Invitrogen, Waltham, MA, USA) following the manufacturer's instructions. Total RNA was immediately converted to cDNA using the High Capacity RNA-to-cDNA kit (Thermo Fisher Scientific, Waltham, MA, USA). The cDNA yield among the different treatments of the two cultivars was assessed using a NanoDrop, and a final concentration of 50 ng/µL was used uniformly for assessing differences in transcript levels among treatments. The cDNA was used for RT-qPCR studies of key salinity stress tolerance genes, with Actin as the endogenous control. Gene expression analyses were carried out in triplicate using RT-qPCR (Bio-Rad, Hercules, CA, USA) and the Eva Green SYBR Green Supermix Kit (Bio-Rad, Hercules, CA, USA). Gene-specific primers (1.5 µL each; Table S1) were used at 10 pmol/µL in a 10 µL reaction mixture along with 2 µL cDNA and 5 µL master mix. The RT-qPCR conditions are described elsewhere [6]. Data were normalized to the Ct values of Actin using the 2 −∆∆Ct method, as per Livak and Schmittgen [42].
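The Livak normalization reduces to simple arithmetic on Ct values; a minimal sketch with hypothetical Ct values (the gene and treatment named in the example are illustrative only) is shown below.

```python
def fold_change(ct_gene_treated, ct_actin_treated,
                ct_gene_control, ct_actin_control):
    # 2^-ddCt (Livak & Schmittgen), normalized to Actin and the control
    d_ct_treated = ct_gene_treated - ct_actin_treated
    d_ct_control = ct_gene_control - ct_actin_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for OsPIP1 in a T5-inoculated plant vs. control
print(fold_change(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold upregulation
```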
CSLM Study for Na + Accumulation in Leaf Tissues
In this study, relative Na+ accumulation in rice leaves was compared among the different treatments. Fine sections of leaf tissue were prepared and treated with the cell-impermeant Sodium Green™, Tetra (Tetramethylammonium) Salt. Unbound dye was removed by washing with sterile distilled water (SDW). The section was mounted on a grease-free glass slide and visualized under a 488 nm laser in a confocal scanning laser microscope (CSLM). X-Y plane images were captured using uniform camera settings and processed in the Nikon NIS Elements software.
Statistical Analysis
The pot trial was conducted under glasshouse conditions with seven treatments and three replications each ( Figure 1) in a randomized complete block design (RCBD). The aim was to evaluate microbial induction of stress tolerance in two different rice cultivars; the experiment investigated the efficiency of plant rhizospheric microorganisms and endophytes in mitigating salinity stress in rice (Oryza sativa). Inoculation was done through seed treatment. Data were analyzed, and means were compared with Duncan's multiple range test at p ≤ 0.05.
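Duncan's multiple range test is normally run after a significant one-way ANOVA; the ANOVA step can be scripted as below, while Duncan's test itself is typically performed in R (e.g., the agricolae package). The replicate values here are hypothetical.

```python
from scipy import stats

# Hypothetical effective-tiller counts, three replications per treatment
t1 = [8, 9, 8]    # negative control
t2 = [4, 5, 4]    # positive control (200 mM NaCl)
t4 = [9, 10, 9]   # Bacillus haynesii 2P2

f_stat, p_value = stats.f_oneway(t1, t2, t4)
if p_value <= 0.05:
    print(f"Treatment effect significant (F = {f_stat:.2f}, p = {p_value:.4f})")
```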
Conclusions
Inoculation with halotolerant endophytes and rhizobacteria was found to modulate salt stress tolerance in rice. Inoculation played a positive role in improving oxidative enzymes, proline, chlorophyll, carotenoids, shoot length, root length, shoot dry weight, root dry weight, and the number of effective tillers. In terms of plant biomass accumulation and the number of effective tillers, two key parameters for evaluating stress mitigation under salinity, T4 (Bacillus haynesii 2P2) performed best. In a generalized manner, enzyme activity was highest in T4, and gene expression was highest in T5 and T6 in CO51, whereas T6 and T7 were highest in the case of PB1. The summary of the different mechanisms indicated that T4 performed best overall and could be combined in a consortium with T5 (Bacillus safensis BTL5) in CO51 and with T7 (Pseudomonas fluorescens 1001) in PB1; the third member of this consortium could be T6 (Brevibacterium frigoritolerans W19) in both CO51 and PB1. The conclusions of this study would be practically useful for forming a consortium based on plant responses to different rhizospheric and endophytic microbes and could be a useful tool for evaluating microbial resources for climate-resilient agriculture. Further evaluation of these strains for consortial application and adaptability should be taken up. | 8,748.6 | 2023-02-21T00:00:00.000 | [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
] |
SARS-CoV-2 ORF6 Disrupts Bidirectional Nucleocytoplasmic Transport through Interactions with Rae1 and Nup98
ABSTRACT RNA viruses that replicate in the cytoplasm often disrupt nucleocytoplasmic transport to preferentially translate their own transcripts and prevent host antiviral responses. The Sarbecovirus accessory protein ORF6 has previously been shown to be a major inhibitor of interferon production in both severe acute respiratory syndrome coronavirus (SARS-CoV) and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Here, we show SARS-CoV-2-infected cells display an elevated level of nuclear mRNA accumulation compared to mock-infected cells. We demonstrate that ORF6 is responsible for this nuclear imprisonment of host mRNA, and using a cotransfected reporter assay, we show this nuclear retention of mRNA blocks expression of newly transcribed mRNAs. ORF6's nuclear entrapment of host mRNA is associated with its ability to copurify with the mRNA export factors, Rae1 and Nup98. These protein-protein interactions map to the C terminus of ORF6 and can be abolished by a single amino acid mutation in Met58. Overexpression of Rae1 restores reporter expression in the presence of SARS-CoV-2 ORF6. SARS-CoV ORF6 also interacts with Rae1 and Nup98. However, SARS-CoV-2 ORF6 more strongly copurifies with Rae1 and Nup98 and results in significantly reduced expression of reporter proteins compared to SARS-CoV ORF6, a potential mechanism for the delayed symptom onset and presymptomatic transmission uniquely associated with the SARS-CoV-2 pandemic. We also show that both SARS-CoV and SARS-CoV-2 ORF6 block nuclear import of a broad range of host proteins. Together, these data support a model in which ORF6 clogs the nuclear pore through its interactions with Rae1 and Nup98 to prevent both nuclear import and export, rendering host cells incapable of responding to SARS-CoV-2 infection. IMPORTANCE SARS-CoV-2, the causative agent of coronavirus disease 2019 (COVID-19), is an RNA virus with a large genome that encodes multiple accessory proteins. While these accessory proteins are not required for growth in vitro, they can contribute to the pathogenicity of the virus. We demonstrate that SARS-CoV-2-infected cells accumulate poly(A) mRNA in the nucleus, which is attributed to the accessory protein ORF6. Nuclear entrapment of mRNA and reduced expression of newly transcribed reporter proteins are associated with ORF6's interactions with the mRNA export proteins Rae1 and Nup98. SARS-CoV ORF6 also shows the same interactions with Rae1 and Nup98. However, SARS-CoV-2 ORF6 more strongly represses reporter expression and copurifies with Rae1 and Nup98 compared to SARS-CoV ORF6. Both SARS-CoV ORF6 and SARS-CoV-2 ORF6 block nuclear import of a wide range of host factors through interactions with Rae1 and Nup98. Together, our results suggest ORF6's disruption of nucleocytoplasmic transport prevents infected cells from responding to the invading virus.
RESULTS
SARS-CoV-2-infected cells accumulate mRNA in the nucleus. Numerous RNA viruses, including VSV and Zika virus, block host mRNA export in infected cells (7,12). We examined whether SARS-CoV-2 similarly blocks nuclear export of host mRNA by infecting the human lung adenocarcinoma cell line, Calu3, and a human bronchial epithelial cell line stably expressing the SARS-CoV-2 receptor angiotensin-converting enzyme 2 (ACE2), HBEC3-ACE2, with SARS-CoV-2. Twenty-four hours postinfection, we examined the mRNA distribution in the SARS-CoV-2-infected and mock-infected cells ( Fig. 1A and B; see also Fig. S1 in the supplemental material). In SARS-CoV-2-infected cells, mRNA was primarily localized to the nuclei, while the mock-infected cells displayed a more even distribution of mRNA in the nuclei and cytoplasm. This nuclear mRNA accumulation phenotype was observed in both SARS-CoV-2-infected Calu3 ( Fig. 1A and Fig. S1) and HBEC3-ACE2 (Fig. 1B) cells.
SARS-CoV-2 ORF6 blocks nuclear export of host mRNA. ORF6 interacts with the mRNA export factor Rae1 and the nuclear pore complex component Nup98 (4). VSV M and KSHV ORF10, which both interact with Rae1 and Nup98, produce an accumulation of mRNA in the nuclei of transfected cells (7,9). We investigated whether ORF6 was responsible for the nuclear localization of mRNA observed during SARS-CoV-2 infection ( Fig. 1A and B) by transiently transfecting human embryonic kidney, 293T, cells with either green fluorescent protein (GFP), GFP-tagged ORF6, or GFP-tagged VSV M. In cells transfected with GFP, mRNA was distributed throughout the cell, indistinguishable from the mRNA localization pattern in untransfected cells (Fig. 2). In contrast, mRNA in cells expressing wild-type (WT) ORF6 and VSV M was present in multiple foci within the nucleus, suggesting that the mRNA in these cells was accumulating in the nucleus (Fig. 2). Identical mRNA nuclear accumulation phenotypes were observed in Calu3 cells and the lung epithelial carcinoma cell line, A549, transiently transfected with ORF6 ( Fig. S2A and B).
SARS-CoV-2 ORF6 downregulates protein expression of newly transcribed genes. We next examined how the nuclear accumulation of host mRNA in cells expressing ORF6 affected host protein expression. We transiently transfected 293T cells with mCherry, mCherry-tagged ORF6, or mCherry-tagged VSV M and measured nascent protein expression in these cells 24 h posttransfection using a Click-iT labeling assay in which newly synthesized proteins incorporate L-azidohomoalanine instead of methionine. Nascent protein synthesis can then be quantified by labeling the L-azidohomoalanine residues with a fluorescent marker and compared across conditions by normalizing to the total number of cells labeled. Similar levels of nascent protein expression were observed in cells expressing mCherry (mean fluorescein isothiocyanate [FITC]/Hoechst ratio, 1.18), ORF6 (mean FITC/Hoechst ratio, 1.32; P = 0.31), and VSV M (mean FITC/Hoechst ratio, 1.25; P = 0.59) (Fig. S3), suggesting that ORF6 does not impact translation of existing cytoplasmic mRNA transcripts and likely blocks expression of only newly transcribed mRNA transcripts.
FIG 2 The SARS-CoV-2 accessory protein ORF6 is responsible for the nuclear mRNA accumulation phenotype observed in SARS-CoV-2-infected cells. 293T cells were transiently transfected with GFP, GFP-SARS-CoV-2 ORF6, or GFP-VSV M. Staining for poly(A) mRNA was conducted 24 h posttransfection. Cells expressing SARS-CoV-2 ORF6 or VSV M (white arrows) displayed an accumulation of mRNA in the nuclei, while those transfected with GFP displayed mRNA localization patterns identical to those of untransfected cells. Bars, 5 µm.
For VSV M and KSHV ORF10, which both prevent nuclear export of mRNA, downregulation of expression from newly transcribed transcripts has been measured using luminescent and fluorescent reporter assays (7,9). In these assays, cells are concurrently transfected with the viral protein and reporter constructs. Cells expressing the viral protein display a marked reduction in reporter expression, as the newly transcribed reporter transcripts are largely retained in the nuclei, inaccessible to the cell's translational machinery. To assess whether ORF6's blockage of nuclear export of mRNA similarly results in a reduction of expression from newly transcribed transcripts and to map the residues critical for the nuclear accumulation of mRNA, we constructed a series of N-terminal GFP-tagged ORF6 constructs (Fig. 3A). We included a mutant ORF6 protein, ORF6 Δ22-30, which has independently arisen in multiple clinical SARS-CoV-2 strains and in a serially passaged cultured SARS-CoV-2 isolate (see Table S1 and Fig. S4A to C in the supplemental material). We then cotransfected 293T cells with these ORF6 constructs and a reporter plasmid encoding mCherry. In VSV M, a motif consisting of a methionine residue surrounded by acidic residues is critical for reducing expression levels of cotransfected reporters. The methionine residue within the motif is conserved between VSV M and KSHV ORF10, and a similar motif with a methionine residue is present in the SARS-CoV-2 ORF6 C terminus (Fig. S4A). We changed this methionine residue in ORF6 to an alanine, generating the construct ORF6 Met58Ala (Fig. 3A). Transfection of ORF6 Met58Ala did not downregulate mCherry expression (MFI, 1.08; SE, 0.09) (Fig. 3B and C), suggesting that Met58 is critical for the function of ORF6. We then validated that the observed increase in mCherry expression in cells transfected with ORF6 Met58Ala compared to cells transfected with WT ORF6 was attributable to differences in mRNA localization. Staining of mRNA in transiently transfected 293T (Fig. 3E), A549 (Fig. S5C), and Calu3 (Fig. S5D) cells revealed distinct mRNA localization patterns in WT ORF6- and ORF6 Met58Ala-transfected cells. Unlike WT ORF6-expressing cells, ORF6 Met58Ala-expressing cells did not display an accumulation of mRNA in the nucleus, confirming the importance of Met58 to the functioning of ORF6.
The C terminus of SARS-CoV-2 ORF6 interacts with Rae1 and Nup98. In VSV M and KSHV ORF10, downregulation of cotransfected fluorescent and luminescent reporters and impairment of mRNA nuclear export occur due to interactions with the nuclear mRNA export factor Rae1 and nuclear pore complex component Nup98 (7,9). VSV M displaces single-stranded RNA in the Rae1·Nup98 complex to prevent nuclear export of host mRNA (8). We hypothesized the inability of the ORF6 C-terminal deletions to downregulate mCherry expression in a manner similar to that of WT ORF6 (Fig. 3B to D) was attributed to the loss of the interaction between these ORF6 constructs and Rae1 and Nup98. We transfected 293T cells with GFP-tagged ORF6 constructs (Fig. 3A) and rapidly affinity purified the GFP-tagged proteins. Western blotting on the eluates confirmed that WT ORF6, along with ORF6 constructs with N-terminal deletions, interacts with Rae1 and Nup98 (Fig. 4). The C-terminal deletion constructs, ORF6 Δ38-61 and ORF6 Δ50-61, did not pull down Rae1 or Nup98 (Fig. 4). These data suggest that the C terminus of ORF6 interacts with Rae1 and Nup98, while the N terminus is not essential for the observed interactions. This is consistent with the observation that C-terminal deletion mutants of ORF6 did not dramatically reduce expression of the mCherry reporter (Fig. 3B to D).
The methionine residue in the Rae1-Nup98 interacting motif of VSV M forms multiple intermolecular interactions with amino acid residues in the nucleic acid binding site of Rae1 and facilitates the interaction between VSV M and the Rae1·Nup98 complex (8). We hypothesized that Met58 of SARS-CoV-2 ORF6 is similarly responsible for interactions with Rae1 and Nup98. Affinity purification of ORF6 Met58Ala revealed that it does not interact with Rae1 or Nup98 (Fig. 4), confirming the importance of Met58 in the ORF6-Rae1 and ORF6-Nup98 interactions.
Overexpression of Rae1 restores mCherry reporter expression in cells transfected with ORF6. We next investigated whether we could restore mCherry expression in 293T cells transfected with ORF6 by overexpressing Rae1. Rae1 overexpression restored mCherry expression in a dose-dependent manner ( Fig. 5A and B). Subsequent Western blotting confirmed this Rae1 dose-dependent rescue of mCherry expression (Fig. 5C). These data indicate that ORF6's interaction with Rae1 is responsible for downregulating mCherry reporter expression in cell culture.
SARS-CoV-2 ORF6 more strongly copurifies with Rae1 and Nup98 compared to SARS-CoV ORF6. We next compared the relative ability of SARS-CoV ORF6 and SARS-CoV-2 ORF6 to downregulate reporter expression. SARS-CoV ORF6 and SARS-CoV-2 ORF6 share 69% identity by amino acid, including the same methionine residue surrounded by acidic residues (Fig. 6A). SARS-CoV ORF6 has been shown to downregulate expression of a cotransfected construct in a dose-dependent manner (11), suggesting that its C terminus may also interact with the Rae1·Nup98 complex.
We cotransfected 293T cells with GFP-tagged SARS-CoV ORF6 or GFP-tagged SARS-CoV-2 ORF6 and mCherry to assess the impact of these constructs on protein expression. Compared to cells transfected with GFP alone, cells transfected with SARS-CoV ORF6 displayed reduced mCherry expression (MFI of 1 and SE of 0.08 versus MFI of 0.71 and SE of 0.03); however, this difference was not significant (P = 0.06) (Fig. 6B and C). Cells transfected with SARS-CoV-2 ORF6 displayed a significant reduction in mCherry expression compared to cells transfected with SARS-CoV ORF6 (MFI, 0.3). Western blotting also demonstrated decreased expression of SARS-CoV-2 ORF6 relative to SARS-CoV ORF6, suggesting that expression levels do not explain the differential effects on reporter gene expression (Fig. 6D).
FIG 4 Affinity purification of GFP-tagged constructs. 293T cells were transiently transfected with GFP-tagged constructs. Forty-eight hours after transfection, the GFP-tagged proteins were rapidly captured using an anti-GFP resin. Western blotting revealed that ORF6 interacts with the mRNA nuclear export factor Rae1 and the nuclear pore complex protein Nup98. ORF6 constructs with C-terminal deletions or a substitution did not pull down Rae1 or Nup98.
We hypothesized the differences in mCherry expression between SARS-CoV ORF6 and SARS-CoV-2 ORF6 could be attributed to differences in copurification of Rae1 and Nup98. We transfected 293T cells with the GFP-tagged constructs and affinity purified the tagged proteins. Western blotting revealed that SARS-CoV ORF6 interacts with Rae1 and Nup98, similar to SARS-CoV-2 ORF6 (Fig. 6E). Densitometry on the ratio of prey to bait demonstrated that SARS-CoV-2 ORF6 copurified with 1.3-fold more Rae1 (Fig. 6F) and 2.7-fold more Nup98 (Fig. 6G) compared to SARS-CoV ORF6. These data suggest that SARS-CoV-2 ORF6 may more dramatically repress protein expression via a stronger interaction with the Rae1·Nup98 complex compared to SARS-CoV ORF6.
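The prey-to-bait comparison reduces to ratios of band intensities normalized between constructs; the sketch below uses hypothetical densitometry values chosen to reproduce the reported 1.3- and 2.7-fold enrichments, not the actual measurements.

```python
# Hypothetical band intensities (arbitrary densitometry units)
bands = {
    "SARS-CoV":   {"bait": 1000.0, "Rae1": 400.0, "Nup98": 150.0},
    "SARS-CoV-2": {"bait": 1000.0, "Rae1": 520.0, "Nup98": 405.0},
}

def prey_to_bait(virus, prey):
    return bands[virus][prey] / bands[virus]["bait"]

for prey in ("Rae1", "Nup98"):
    fold = prey_to_bait("SARS-CoV-2", prey) / prey_to_bait("SARS-CoV", prey)
    print(f"{prey}: SARS-CoV-2 ORF6 copurifies {fold:.1f}-fold more")
```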
Next, we examined the sequence variation of ORF6 across the Sarbecovirus subgenus. Three distinct clades of sarbecoviruses have been described thus far with SARS-CoV and SARS-CoV-2 belonging to clade 1 and clade 2, respectively. The ORF6 protein in the clade 1 sarbecoviruses is two amino acid residues longer than the ORF6 protein in the clade 2 sarbecoviruses (Fig. 6A and Fig. S6A) (13). Notably, the protein sequence of ORF6 in the clade 2 Rhinolophus affinis sarbecovirus RaTG13 (GenBank accession no. MN996532.2) is more similar to that in the clade 2 pangolin sarbecovirus Pangolin-CoV/Guangdong/1/2019 (EPI_ISL_410721) compared to that in the clade 1 Rhinolophus affinis sarbecovirus LYRa11 (GenBank accession no. KF569996.1). This suggests that sequence variation in ORF6 is unrelated to the zoonotic host of sarbecoviruses, consistent with Rae1 and Nup98 being highly conserved across eukaryotes (6).
SARS-CoV and SARS-CoV-2 ORF6 block nuclear import of a broad range of host factors. Both SARS-CoV and SARS-CoV-2 have been demonstrated to block nuclear import of the transcription factor, STAT1, during infection in cell culture (3,14). Consistent with previous reports, we found that nuclear import of STAT1 was impaired in cells expressing either SARS-CoV ORF6 or SARS-CoV-2 ORF6 (Fig. 7A). STAT1 accumulated in the nuclei following interferon beta (IFN-β) stimulation in cells expressing GFP or SARS-CoV-2 ORF6 Met58Ala (Fig. 7A) but remained in the cytoplasm after IFN-β stimulation in cells expressing SARS-CoV ORF6 or SARS-CoV-2 ORF6.
We reasoned that the blockade of nuclear import was unlikely to be specific to STAT1. The transcription factor glucocorticoid receptor (GR) is shuttled into the nucleus following stimulation with a steroid through interactions with importin β and the nucleoporin Nup62 (15). In cells expressing GFP or SARS-CoV-2 ORF6 Met58Ala (Fig. 7B), GR was translocated into the nuclei following dexamethasone stimulation. However, GR remained in the cytoplasm after dexamethasone stimulation in cells expressing SARS-CoV ORF6 or SARS-CoV-2 ORF6 (Fig. 7B), consistent with a broad blockade of nuclear import by ORF6. We then investigated how SARS-CoV ORF6 and SARS-CoV-2 ORF6 impacted the localization patterns of the importins KPNA2 and KPNA3. These importins bind cargo proteins and facilitate translocation of their cargo into the nuclei. KPNA2 and KPNA3 were nuclear localized in cells expressing GFP or SARS-CoV-2 ORF6 Met58Ala (Fig. 7C and D). In contrast, both KPNA2 and KPNA3 localized to the cytoplasm in cells expressing SARS-CoV ORF6 and SARS-CoV-2 ORF6, suggesting that ORF6, through its interactions with the Rae1·Nup98 complex, clogs the nuclear pore, preventing nuclear import of a broad array of host factors.
DISCUSSION
Here, we demonstrate that SARS-CoV-2 enacts a bidirectional block of nucleocytoplasmic transport at the nuclear pore, preventing both mRNA export from and stimulus-dependent host protein import into the nuclei of infected cells. We show the accessory protein ORF6 is responsible for this nuclear imprisonment of mRNA, which further results in downregulation of expression of newly transcribed transcripts. Inhibition of mRNA nuclear export by ORF6 is attributed to its interactions with the mRNA nuclear export factor Rae1 and the nuclear pore complex component Nup98. We demonstrate that inhibition of mRNA nuclear export, reporter repression, and the host-virus protein-protein interactions are critically dependent on a methionine residue in the ORF6 C terminus. Additionally, we show that an ORF6 allele with a 9-amino-acid deletion, which has arisen in multiple clinical SARS-CoV-2 isolates and a serially passaged culture isolate, maintains the ability to downregulate expression of a cotransfected reporter and to interact with Rae1 and Nup98. We find that SARS-CoV-2 ORF6 more strongly represses reporter expression and more strongly copurifies with Rae1 and Nup98 compared to SARS-CoV ORF6. Finally, we show that both SARS-CoV and SARS-CoV-2 ORF6 inhibit nuclear import of a broad range of host factors, including those that interact with nucleoporins besides Nup98. Together, these data indicate that the Sarbecovirus accessory protein ORF6 prevents bidirectional nucleocytoplasmic transport through its interactions with Rae1 and Nup98, leaving host cells incapable of responding to viral infection.
RNA viruses, including coronaviruses, that replicate in the cytoplasm have mechanisms to suppress cellular translation, which allows these viruses to use the host's translational machinery to preferentially express viral proteins (16)(17)(18). In SARS-CoV, ORF6 is not required for growth in vitro; however, expression of SARS-CoV ORF6 can increase the replication kinetics of SARS-CoV and the related murine hepatitis virus in vitro (19,20). In addition, recombinant SARS-CoV isolates containing ORF6 grow to higher viral loads than recombinant isolates lacking ORF6 (19). This enhancement in viral growth could be attributed either to SARS-CoV ORF6's ability to prevent host antiviral responses via its block of nuclear import or to its blockade of nuclear export of newly transcribed mRNAs. As SARS-CoV-2 ORF6 similarly interacts with Rae1 and Nup98, we speculate that ORF6 is required for optimal growth of SARS-CoV-2.
In addition to enhancing viral replication, preventing bidirectional nucleocytoplasmic transport doubly suppresses the host antiviral response (16)(17)(18). The ability of the M protein of VSV to bind Rae1 and Nup98 and prevent mRNA nuclear export is associated with suppressed interferon-β gene expression (21). Furthermore, VSV strains containing a mutation at the residue responsible for the VSV M-Rae1-Nup98 interactions induce significantly higher interferon-α protein levels than strains containing wild-type alleles of the M protein (22). SARS-CoV-2 ORF6 has been shown to be an interferon antagonist (23) and likely downregulates both the induction of antiviral genes and the export of their mRNAs.
Beyond interfering with interferon expression by restricting nuclear export of mRNA, SARS-CoV-2 ORF6 acts as an interferon antagonist by preventing nuclear import of the transcription factor STAT1 (14,24,25). Previous studies have suggested that SARS-CoV ORF6 similarly blocks STAT1 nuclear import by sequestering KPNA2 in the cytoplasm (3); however, recent work has argued that SARS-CoV-2 ORF6 prevents STAT1 nuclear import by preventing docking at Nup98 (14). Our results further suggest that SARS-CoV-2's blockage of nuclear import extends to additional host factors and to nuclear export. They support a model in which the interaction between SARS-CoV-2 ORF6 and the Rae1·Nup98 complex clogs the nuclear pore to prevent bidirectional nucleocytoplasmic transport of a broad array of factors. Together, our demonstration that SARS-CoV-2 ORF6 blocks both nuclear export of host mRNA and nuclear import of various host factors suggests that SARS-CoV-2-infected cells are likely incapable of responding to viral infection, consistent with SARS-CoV-2-infected cells displaying reduced expression of transcriptionally activated genes (26).
To date, SARS-CoV-2 has caused several thousand-fold more infections than SARS-CoV, in part due to the distinct clinical presentations of the two viruses. COVID-19 patients display peak viral loads and maximum infectivity upon the onset of symptoms rather than after the onset of symptoms, as is typical in patients with SARS (27). Furthermore, asymptomatic transmission was infrequently reported for SARS-CoV (28,29); however, presymptomatic and asymptomatic transmission have been a defining challenge of the current SARS-CoV-2 pandemic (30)(31)(32). An important scientific challenge is defining the virological basis for these radically different infection profiles despite the close homology of the two viruses. Both the delayed onset of clinical symptoms and the presymptomatic and asymptomatic transmission of SARS-CoV-2 could be attributed to more potent interferon antagonization by SARS-CoV-2 compared to SARS-CoV. ORF6 has already been shown to be a major interferon antagonist in both SARS-CoV and SARS-CoV-2 (3,23,33), and it is one of the least similar accessory proteins (69% identical by amino acid) between the two viruses. Coupled with our demonstration that SARS-CoV-2 ORF6 more strongly downregulates protein expression and copurifies with more Rae1 and Nup98 than SARS-CoV ORF6, the differences between SARS-CoV ORF6 and SARS-CoV-2 ORF6 could explain at least some of the differences in clinical presentation between SARS and COVID-19.
Large-scale SARS-CoV-2 genomic surveillance projects have demonstrated that deletions can arise within the accessory genes of SARS-CoV-2 (34)(35)(36). Notably, none of these previously described deletions have arisen in multiple SARS-CoV-2 lineages through multiple independent genomic rearrangement events. Our identification of seven unrelated clinical isolates with the same ORF6 deletion suggests that this deletion may be repeatedly selected for in SARS-CoV-2. This is further evidenced by the identification of a cultured SARS-CoV-2 that acquired the same deletion after successive passages in Vero cells (37). Similar to the wild-type ORF6 allele, the clinical allele, ORF6 Δ22-30, can repress expression of a cotransfected reporter and retains the Rae1·Nup98-interacting motif of ORF6. Further work is required to understand the functional role of the ORF6 N terminus and to determine the selective pressures repeatedly selecting for the observed deletion.
Our study has a number of limitations. We relied on an mCherry reporter assay to measure ORF6's impact on expression of newly transcribed transcripts. As such, our results may not perfectly reflect the degree to which expression of newly transcribed host transcripts is downregulated by ORF6 or during SARS-CoV-2 infection. More comparative work between SARS-CoV-2 and SARS-CoV ORF6 is needed in the context of viral replication. It would be intriguing to swap ORF6 between SARS-CoV and SARS-CoV-2 isolates to test the hypothesis that ORF6 is the major determinant of interferon antagonization and delayed symptom onset in animal models of SARS-CoV-2.
In summary, our results demonstrate the accessory protein ORF6 of SARS-CoV-2 imprisons mRNA in the nucleus, prevents nuclear import of a broad range of host factors, and strongly inhibits expression of newly transcribed transcripts via its interactions with the mRNA nuclear export factor Rae1 and the nuclear pore complex component Nup98. We hypothesize that the blockage of bidirectional nucleocytoplasmic transport by the Sarbecovirus accessory protein ORF6 likely leaves infected cells incapable of responding to the invading virus, allowing for the delayed host response and asymptomatic transmission observed in the current SARS-CoV-2 pandemic.
MATERIALS AND METHODS
Viral infection and oligo(dT) in situ hybridization. Calu3 and HBEC3-KT-ACE2 cells were plated in µ-Slide VI 0.4 ibiTreat slides at densities of 30,000 and 50,000 cells per lane, respectively, and grown to 90% confluence. SARS-CoV-2/USA-WA1/2020 (NR-52281) was obtained from BEI Resources and propagated in Vero cells (USAMRIID). Calu3 and HBEC3-KT-ACE2 cells were mock infected or infected with SARS-CoV-2 at a multiplicity of infection (MOI) of 1 in Opti-MEM supplemented with 2% fetal bovine serum (FBS) for 1 h, and the infection inoculum was replaced with Opti-MEM containing 2% FBS (Calu3) or airway epithelial growth medium (HBEC3-ACE2; PromoCell). Infection was performed within the biosafety level 3 (BSL3) facility at the University of Washington following biosafety protocols. At 24 h postinfection (h.p.i.), cells were fixed with 4% paraformaldehyde in phosphate-buffered saline (PBS) at room temperature for 15 min and washed with PBS supplemented with SUPERaseIn RNase inhibitor.
The fixed cells were permeabilized with methanol and rehydrated in 70% ethanol followed by 1 M Tris-HCl (pH 8.0) (Invitrogen). The monolayer was then covered with hybridization buffer (1 mg/ml yeast tRNA, 0.005% bovine serum, 10% dextran sulfate, and 25% formamide in 2× SSC buffer [1× SSC is 0.15 M NaCl plus 0.015 M sodium citrate]) containing an oligo(dT)30 probe with an Alexa Fluor 594 fluorophore (IDT) attached to the 5' end of the probe and incubated overnight at 37°C. The hybridization buffer was removed, and the cells were washed once with warmed 4× SSC buffer (Thermo Fisher), once with warmed 2× SSC buffer, and twice with room temperature 2× SSC buffer.
Constructs and cloning. The wild-type and N- and C-terminal mutant SARS-CoV-2 ORF6 constructs were amplified from double-stranded cDNA from a previously sequenced clinical SARS-CoV-2 isolate (WA12-UW8; EPI_ISL_413563) using the primers listed in Table S2 in the supplemental material. CloneAmp Hi-Fi PCR Premix (TaKaRa) and the following PCR conditions were used to generate the amplicons: 98°C for 2 min, followed by 35 cycles, with 1 cycle consisting of 98°C for 10 s, 55°C for 15 s, and 72°C for 30 s, followed by a final extension at 72°C for 5 min. ORF6 Δ22-30 was amplified from WA-UW-4572 (MT798143), and the matrix protein from vesicular stomatitis virus was amplified from pVSV eGFP dG (a gift from Connie Cepko; Addgene plasmid 31842) as described above using the primers listed in Table S2. A gBlock gene fragment (IDT) for ORF6 of SARS-CoV was synthesized based on the genome sequence of SARS-CoV isolate TW1 (GenBank accession no. AY291451.1). The resulting amplicons and gene fragment were then cloned into a modified pLenti CMV Puro plasmid (a gift from Eric Campeau and Paul Kaufman; Addgene plasmid 17448), which contains a 3' WPRE sequence following the insert and a 3' simian virus 40 (SV40) polyadenylation signal after the puromycin resistance cassette, with an N-terminal GFP or mCherry tag using the In-Fusion HD cloning kit (TaKaRa).
For cloning of Rae1, STAT1, NR3C1, KPNA2, and KPNA3, RNA was extracted from 293T cells using the RNeasy Miniprep kit (Qiagen), and cDNA was synthesized using Superscript IV and oligo(dT) (IDT). The genes were then amplified from the resulting cDNA using the primers listed in Table S2 and CloneAmp Hi-Fi PCR Premix under the following PCR conditions: 98°C for 2 min, followed by 35 cycles, with 1 cycle consisting of 98°C for 10 s, 55°C for 15 s, and 72°C for 1 min, followed by a final extension at 72°C for 5 min. The resulting amplicon for Rae1 was cloned into a modified pcDNA4-TO vector with a C-terminal FLAG tag, the STAT1 amplicon was cloned into a modified pLenti CMV Puro plasmid with a C-terminal mCherry tag, and the remaining constructs were cloned into a modified pLenti CMV Puro plasmid with an N-terminal mCherry tag using the In-Fusion HD cloning kit.
Specimen collection and whole-genome sequencing of SARS-CoV-2-positive clinical specimens. Whole-genome sequencing of SARS-CoV-2-positive clinical specimens was conducted as part of an ongoing University of Washington Institutional Review Board-approved study (STUDY00000408) (38)(39)(40)(41). Nasopharyngeal swabs were collected from patients suspected to have an infection with SARS-CoV-2 and stored in 3 ml of viral transport medium. RNA was extracted from 140 µl of medium using the Qiagen Biorobot. Sequencing libraries were prepared as previously described (34,42). Briefly, RNA was treated with Turbo DNase (Thermo Fisher), and first-strand cDNA was synthesized using Superscript IV (Thermo Fisher) and random hexamers (IDT). Double-stranded cDNA was created using Sequenase version 2.0 (Thermo Fisher) and purified using 1.6× volumes of AMPure XP beads (Beckman-Coulter). Multiplex amplicon sequencing libraries were constructed using Swift Biosciences' SARS-CoV-2 Multiplex Primer Pool and Normalase Amplicon kit and sequenced on a 2 × 300-bp run on an Illumina MiSeq.
The deletion identified within ORF6 of WA-UW-4752 was confirmed by reverse transcription-PCR and Sanger sequencing. For reverse transcription, single-stranded cDNA was constructed using Superscript IV. The resulting cDNA was used as the template for PCR with Phusion high-fidelity polymerase (Thermo Fisher) and the following primers: 5'-ATCACGAACGCTTTCTTATTAC-3' and 5'-CTCGTATGTTCCAGAAGAGC-3'. PCR was conducted using the following conditions: 98°C for 30 s, followed by 35 cycles, with 1 cycle consisting of 98°C for 10 s, 55°C for 15 s, and 72°C for 30 s, followed by a final extension at 72°C for 5 min. The resulting amplicons were run on a 2% agarose gel, extracted from the gel using the QIAquick gel extraction kit (Qiagen), and Sanger sequenced by Genewiz, Inc., with the same primers used for PCR.
Other strains with the same deletion in ORF6 were identified by querying GISAID (accessed 17 July 2020). The genetic relatedness of these strains was assessed by aligning the genomes of these strains as well as 110 other global clinical SARS-CoV-2 strains using MAFFT v7.453 (45). A phylogenetic tree was generated using RAxML version 8.2.11 (46) and visualized with R (version 3.6.1) using the ggtree package (47). Strains were further classified using the web-based lineage assigner, Pangolin (https://pangolin.cog-uk.io/) (48). A minimal scripted sketch of this alignment-and-tree step is given after the following paragraph.
Transient transfection and oligo(dT) in situ hybridization. 293T, A549, or Calu3 cells were plated in µ-Slide eight-well ibiTreat chamber slides at a density of 50,000 to 120,000 cells per well and grown overnight to 50 to 90% confluence. 293T cells were transfected with 300 ng of plasmid DNA using a 3:1 ratio of PEI MAX (Polysciences) in Opti-MEM (Thermo Fisher). A549 and Calu3 cells were transfected with 250 ng of plasmid DNA using Lipofectamine 3000 (Thermo Fisher) diluted in Opti-MEM. All three cell lines were incubated for 24 h posttransfection. The cells were then washed with PBS (pH 7.4; without Ca2+ or Mg2+) (Thermo Fisher) and fixed with 4% paraformaldehyde. Oligo(dT) in situ hybridization was performed as described above. The cells were then blocked with 1% bovine serum in PBS containing 0.1% Tween 20 (PBST) for 1 h. To detect the GFP-tagged proteins, the cells were incubated with a FITC-conjugated anti-GFP antibody (1:1,000; Abcam) for 1 h. The antibody was removed, and the cells were washed three times with PBST. The cells were mounted in Vectashield Vibrance antifade mounting medium with DAPI and visualized with a Leica SP8X confocal microscope.
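As referenced above, the alignment-and-tree step might be scripted as follows; file names, the run name, and the seed are placeholders, and the MAFFT/RAxML options shown are common defaults rather than the authors' exact invocations.

```python
import subprocess

# Align genomes with MAFFT (--auto selects a strategy; output on stdout)
with open("sarbecovirus_aln.fasta", "w") as aln:
    subprocess.run(["mafft", "--auto", "sarbecovirus_genomes.fasta"],
                   stdout=aln, check=True)

# Maximum-likelihood tree with RAxML v8 under GTR+GAMMA
subprocess.run(["raxmlHPC", "-s", "sarbecovirus_aln.fasta",
                "-n", "orf6_tree", "-m", "GTRGAMMA", "-p", "12345"],
               check=True)
```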
Measurement of nascent protein synthesis. 293T cells were plated at a density of 15,000 cells per well in poly-L-lysine-coated 96-well, clear-bottom, opaque-walled plates and grown overnight until they reached approximately 70% confluence. The cells were then transfected with 70 ng of mCherry-tagged constructs using a 3:1 ratio of PEI MAX in Opti-MEM and incubated for 24 h. The cells were then washed twice with DMEM containing no methionine (Thermo Fisher) and supplemented with 1× GlutaMAX and 200 nM L-cystine (Sigma) and incubated in this medium for 30 min. Nascent protein synthesis was measured using the Click-iT AHA Alexa Fluor 488 Protein Synthesis HCS assay kit (Thermo Fisher) following the manufacturer's recommendations. In brief, cells were incubated in DMEM containing no methionine and supplemented with 1× GlutaMAX, 200 nM L-cystine, and 50 µM Click-iT AHA reagent for 2 h. A control condition in which 2 mM puromycin was added to the labeling solution was included for each replicate. The cells were fixed with 4% paraformaldehyde and permeabilized with 0.5% Triton X-100 (Sigma). Nascent proteins that incorporated the Click-iT AHA reagent were then FITC tagged using the Click-iT reaction cocktail. The nuclei were then stained with Hoechst 33342, and the FITC and Hoechst 33342 fluorescence values in each well were measured with a Victor Nivo plate reader (Perkin Elmer). The relative level of nascent protein synthesized in each condition was determined by calculating the FITC/Hoechst 33342 ratio. Differences in the mean FITC/Hoechst 33342 ratio between experimental conditions were assessed in R using the unpaired t test.
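The per-well normalization and comparison described here amount to computing FITC/Hoechst ratios and running an unpaired t test (performed in R by the authors; a Python equivalent with hypothetical plate-reader values is sketched below).

```python
from scipy import stats

def fitc_hoechst_ratios(fitc, hoechst):
    # Normalize nascent-protein signal (FITC) to cell number (Hoechst)
    return [f / h for f, h in zip(fitc, hoechst)]

# Hypothetical per-well readings for mCherry- and ORF6-transfected wells
mcherry = fitc_hoechst_ratios([118, 121, 115], [100, 102, 98])
orf6 = fitc_hoechst_ratios([130, 135, 129], [99, 101, 100])

t_stat, p_value = stats.ttest_ind(mcherry, orf6)  # unpaired t test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```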
ORF6-mCherry transient cotransfections. Transient cotransfections with GFP-tagged constructs and a modified pLenti CMV Puro vector encoding the fluorescent reporter mCherry were conducted in six-well plates. The day prior to transfection, 500,000 293T cells were plated into each well of the six-well plate and grown overnight until they reached approximately 50% confluence. The cells were then transfected with 2 µg of GFP-tagged construct and 2 µg of mCherry using a 3:1 ratio of PEI MAX in Opti-MEM. A549 cells were plated into six-well plates at a density of 500,000 cells per well and grown overnight until they reached approximately 85% confluence. The A549 cells were then transfected with 1.5 µg of GFP-tagged construct and 1.5 µg of mCherry using Lipofectamine 3000 diluted in Opti-MEM. Cells were incubated for 24 to 48 h following transfection and visualized using the EVOS M5000 imaging system (Thermo Fisher) with GFP and Texas Red filter cubes. mCherry fluorescence intensities were measured with ImageJ v1.53a by an individual blinded to the experimental design. All images were 8-bit grayscale and 2,048 × 1,536 (3.1 megapixels). Background thresholds were set at the same level across all images, and mean fluorescence intensities of regions of interest greater than 200 pixels were calculated. Three fields were analyzed for each experimental condition. The mean fluorescence intensity for each field was calculated after adjusting for background fluorescence signal and normalized to the control condition. Differences in mean fluorescence intensities between experimental conditions were assessed in R using the unpaired t test.
Affinity purification of GFP-tagged constructs. The day prior to transient transfection, 10-cm plates were seeded with 4 × 10⁶ 293T cells and grown overnight to approximately 50% confluence. The cells were transfected with 7 μg of plasmid DNA using a 3:1 ratio of PEI MAX in Opti-MEM. Forty-four to 48 h after transfection, the cells were washed with PBS and collected using PBS containing 0.1 mM EDTA. The cells were pelleted, resuspended in 500 μl TEN buffer (50 mM Tris [pH 8.0], 150 mM NaCl, and 1 mM EDTA) with 0.5% NP-40, and lysed by rotation for 45 to 60 min at 4°C. The lysates were centrifuged at 13,000 rpm for 5 min at 4°C, and the supernatant was transferred to a new tube and cleared of residual IgG by rotation with Protein G Sepharose 4 Fast Flow (GE Healthcare Life Sciences) for 30 min at 4°C. Cleared lysates were transferred to new tubes and incubated overnight at 4°C with anti-GFP Nanobody Affinity gel (BioLegend). The affinity gel was then pelleted, washed twice with TEN buffer containing 0.1% NP-40, and resuspended in an equal volume of NuPage LDS sample buffer (Thermo) containing 143 mM 2-mercaptoethanol (Sigma-Aldrich). Western blotting of the eluates from affinity purification and the prepurified input lysates was performed as described above with the following primary antibodies: 1:1,000 anti-GFP, 1:1,000 anti-α-tubulin, 1:2,000 anti-Rae1 (Abcam; clone EPR6923), and 1:1,000 anti-Nup98 (Abcam; clone 2H10).
Rae1 rescue of mCherry expression. 293T cells were plated in six-well plates at a seeding density of 500,000 cells per well and grown overnight until they reached approximately 50% confluence. Cells were then transfected with 0.5 μg of the GFP-SARS-CoV-2 wild-type ORF6 construct, 0.5 μg of mCherry, and 0, 0.25, 0.5, 1, or 2 μg of Rae1-FLAG using a 3:1 ratio of PEI MAX in Opti-MEM. GFP expression and mCherry expression were visualized 44 to 48 h following transfection using the EVOS M5000 imaging system with GFP and Texas Red filter cubes. Western blotting was performed as described above with the following primary antibodies: 1:1,000 anti-GFP, 1:500 anti-mCherry, 1:1,000 anti-α-tubulin, and 1:1,000 anti-FLAG (Sigma; clone M2).
Nuclear import assays. For all nuclear import assays, 293T cells were plated at a density of 50,000 cells per well in μ-Slide eight-well ibiTreated chamber slides and incubated overnight. STAT1 nuclear import was analyzed by cotransfecting cells with 150 ng of GFP-tagged constructs and 150 ng of a STAT1 construct containing a C-terminal mCherry tag. The transfected cells were incubated for 24 h and stimulated with 100 IU/ml recombinant human interferon beta (R&D Systems) for 1 h. Glucocorticoid receptor nuclear import was analyzed by cotransfecting cells with 150 ng of GFP-tagged constructs and 150 ng of an mCherry-tagged glucocorticoid receptor construct. The cells were incubated for 24 h and stimulated with 100 nM dexamethasone (Sigma) for 30 min. KPNA2 and KPNA3 localization patterns were analyzed by cotransfecting cells with 150 ng of GFP-tagged constructs and 150 ng of mCherry-tagged KPNA2 or KPNA3 constructs. The localization patterns were visualized 24 h posttransfection. All wells were fixed with 4% paraformaldehyde, mounted in Vectashield Vibrance antifade mounting medium with DAPI, and visualized with a Leica SP8X confocal microscope.
Data availability. Sequencing reads and genome assemblies are available under NCBI BioProject accession no. PRJNA610428.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
ACKNOWLEDGMENTS
We thank Nathaniel Peters and the University of Washington W. M. Keck Microscopy Center for assistance and access to the Leica SP8X confocal microscope.
Gene Flow among Populations of Two Rare Co-Occurring Fern Species Differing in Ploidy Level
Differences in ploidy levels among different fern species have a vast influence on their mating system, their colonization ability and on the gene flow among populations. Differences in the colonization abilities of species with different ploidy levels are well known: tetraploids, in contrast to diploids, are able to undergo intra-gametophytic selfing. Because fertilization is a post-dispersal process in ferns, selfing results in better colonization abilities in tetraploids because of single spore colonization. Considerably less is known about the gene flow among populations of different ploidy levels. The present study examines two rare fern species that differ in ploidy. While it has already been confirmed that tetraploid species are better at colonizing, the present study focuses on the gene flow among existing populations. We analyzed the genetic structure of a set of populations in a 10×10 km study region using isoenzymes. Genetic variation in tetraploid species is distributed mainly among populations; the genetic distance between populations is correlated with the geographical distance, and larger populations host more genetic diversity than smaller populations. In the diploid species, most variability is partitioned within populations; the genetic distance is not related to geographic distance, and the genetic diversity of populations is not related to the population size. This suggests that in tetraploid species, which undergo selfing, gene flow is limited. In contrast, in the diploid species, which experience outcrossing, gene flow is extensive and the whole system behaves as one large population. Our results suggest that in ferns, the ability to colonize new habitats and the gene flow among existing populations are affected by the mating system.
Introduction
Gene flow is the successful movement of genes among populations by mating or by the migration of diaspores, and it is one of the key factors determining the spatial genetic structure of populations [1,2]. Gene flow is usually considered beneficial for population survival, preventing inbreeding depression and the loss of genetic variation in small populations due to genetic drift [3]. In some cases, gene flow can also be detrimental for small populations because it prevents differentiation through the local adaptations of populations in different extreme conditions and reduces individual fitness through outbreeding depression [4].
The intensity of gene flow is, to a vast degree, influenced by the mating system of the species [5,6]. The effects of the mating system on the intensity of gene flow have been extensively studied in seed plants (meta-analysis in [6]), and it was shown that gene flow among populations increases with an increasing level of outcrossing [6]. Compared to seed plants, the breeding system in ferns is more complex due to a specific life cycle involving independent haploid and diploid phases. Three types of fertilization occur in ferns [7]: (i) intra-gametophytic selfing (the fusion of sperm and egg from the same gametophyte, resulting in a complete homozygote); (ii) inter-gametophytic selfing (the fusion of sperm and egg from different gametophytes derived from the same parental sporophyte, which is equivalent to selfing in seed plants); and (iii) outcrossing (the fusion of sperm and egg from gametophytes derived from spores of different sporophytes).
The fact that fertilization in ferns occurs on the gametophyte, which originates from spores, is of crucial importance for dispersal. In seed plants, fertilization takes place prior to dispersal, before the seed is formed. Theoretically, one seed is enough to colonize a new habitat. In ferns, fertilization is a post-dispersal process on the haploid gametophyte. For ferns unable to use intra-gametophytic selfing, this means that two spores must fall in close proximity and under favorable conditions on a new habitat. Fertilization can only occur after a gametophyte with archegonia/antheridia develops from each of them. Colonization of a new habitat is problematic in this case [8]. Spores transported long distances are unlikely to establish close enough to allow for inter-gametophytic selfing or crossing, despite the huge number of spores produced by ferns. On the other hand, if the species is able to undergo intra-gametophytic selfing, one spore, which develops into a hermaphroditic gametophyte, is enough to establish a new, totally homozygous population. Many fern species that rely on intra-gametophytic selfing as the main breeding system have been shown to be great colonists on the continental scale [9][10][11].
The differences in colonization abilities naturally result in very different patterns of genetic variation. Thus, the structure of genetic diversity was used as a source of information about the mating system of ferns in many studies [12][13][14][15][16]. The type of mating system in ferns is often connected to the ploidy level of the species. The comparison of the distribution of genetic variation in populations of tetraploid species with diploid ancestors suggests that diploid species primarily undergo outcrossing and tetraploids primarily undergo selfing. Diploid parental species have a rather limited area with high genetic variability, while descendant tetraploids are widely distributed, but genetically uniform due to single spore colonization and inbreeding [15,16].
Studies on the distribution of genetic diversity within and among populations are limited because it is difficult to distinguish between the gene flow among already established populations and historical processes during the colonization of new habitats ([17] but see [12]). Despite this difficulty, distinguishing between these two processes is crucial, and many studies have not paid proper attention to it [18][19][20]. During colonization of empty habitats, selfing species have a higher probability of establishing themselves in a vacant area, because selfing strongly increases the probability of colonization. In contrast, outcrossing is more advantageous for gene flow between already established populations because it facilitates the incorporation of new genetic information into the gene pool of the given population, thus enhancing the gene flow among populations. To separate these two types of processes, information about the genetic structure of the population is not sufficient (but see [17]). Additional information about the colonization rates of the system is necessary because the current patterns of genetic diversity combine both processes, which can occur during the different phases of population development.
In the present study, we analyzed the genetic structure of the populations to investigate gene flow among the populations of two rare fern species, Asplenium adulterinum and A. cuneifolium, which differ in both ploidy level and their breeding system. In our model system, the two species occupy very similar habitats - serpentine rocks, which are scattered in an area of 10×10 km. From our previous study, we have additional information about the colonization rates of the species in the system [21] and on the relative speed of the metapopulation dynamics of the species [22].
Our study addressed the following questions: 1) What is the distribution of genetic diversity within and among the populations of Asplenium adulterinum and A. cuneifolium?, 2) What is the intensity of gene flow among the populations? and 3) Do the patterns differ between species, and do they correspond to the expected difference in mating systems? Furthermore, we compare the results from the current study about the genetic structure of the populations of the two species with information about the colonization rates in the system [21]. We discuss processes that might play a role during colonization and the subsequent gene flow among populations.
Study Species
The study involves two fern species, a tetraploid, Asplenium adulterinum Milde. and a diploid, Asplenium cuneifolium Viv. (Aspleniaceae), both of which are restricted to the serpentine substrate in Europe. Distribution of both species is highly scattered, following serpentine rocks in Europe from the Mediterranean to Norway and from Greece to Spain (A. cuneifolium, A. adulterinum is only found from Austria to France) [23]. In the study area, the Czech Republic, both species occur mainly in western Bohemia (Slavkovský les), with several localities in north-eastern Bohemia. There are only a few small populations found in the rest of the country. A. cuneifolium is generally more widespread than A. adulterinum.
Both species are rare and of interest to nature conservation groups throughout Europe [23]. Additionally, A. adulterinum is classified as a species of interest to the European ecological network Natura 2000 [24]. The species differ in ploidy level: A. adulterinum is allotetraploid (parental species A. viride L. and A. trichomanes Huds. subsp. trichomanes) [25], whereas A. cuneifolium is diploid [26].
Study Site
This study was carried out in the region of Slavkovský les in western Bohemia, Czech Republic. In this region of ca. 10×10 km, 98 serpentine rocks are scattered across the landscape [21]. The system is rather isolated from other populations of both species; the next closest population is 50 km away. Both A. adulterinum and A. cuneifolium are quite common in the area. A. adulterinum is more common and occupies rocks in both unforested habitat and the forest (dominated by Pinus sylvestris and Picea abies). In total, there are 66 populations located throughout the area, ranging from a few individuals to nearly 2000 individuals [27]. A. cuneifolium prefers rocks under the forest canopy. The unforested rocks are rarely inhabited and the populations are very small. In total, there are 48 populations of A. cuneifolium in the study region, mostly in the central area, with several more distant localities, and the populations range from several individuals up to several hundred individuals [27].
For both species, unoccupied suitable habitats exist in the area; A. adulterinum occupies 81% of the suitable habitats and A. cuneifolium occupies 73%, indicating metapopulation dynamics in the study system [21].
Sample Collection
Samples for isoenzyme analysis were collected from 14 localities in A. adulterinum and 12 localities in A. cuneifolium. Sampling design followed the distribution of the species in the study area (A. adulterinum has more populations). If available, 20 plants per population were sampled for each species. In total, we sampled 268 individuals of A. adulterinum and 227 individuals of A. cuneifolium.
Samples were evenly distributed over each locality to represent the entire range of variability within the population (under the assumption that geographically more distant plants are less closely related). We sampled 1-2 young leaves without spores per plant, being careful not to seriously damage the plant. The position of each plant was recorded using GPS (Global Positioning System) or marked on a map in the field, followed by digitization of the map.
Isoenzyme Analysis
Samples collected in the field were kept on ice for 24-48 hours until the isoenzymes were extracted in the laboratory. Electrophoresis was performed on the crude protein extracts of the leaf material. All enzymes were resolved on polyacrylamide gels using an 8.16% separating gel and a 4% stacking gel.
Nine enzymatic systems were studied, 7 of which provided an interpretable pattern and were variable at least for one of the study species: LAP, DIA, 6-PGDH, SHDH, PGM, ADH and AAT. For a detailed methodology of the isoenzyme extraction, electrophoresis and staining procedures, see Appendix S1.
Band Interpretation
Bands were interpreted in two ways. First, only the presence or absence of alleles was recorded, and the data were further treated as a dominant marker (for a similar approach see, e.g., [28]).
The dominant data approach was chosen because A. adulterinum is an allotetraploid, and it was rarely possible to assess the exact ratio of alleles from the intensity of the bands. Moreover, allotetraploids often have fixed pairs of alleles that always segregate together [11]. Thus, heritability may be disomic rather than tetrasomic, and it is often impossible to reliably distinguish which alleles segregate together as one de facto allele. As a result, recording only the presence or the absence of the alleles was the only appropriate way to treat the obtained pattern in all of the enzymatic systems. In A. cuneifolium, the same approach of treating the data as a dominant marker was used to compare the species.
We also evaluated the data as a co-dominant marker. This was possible for all enzymatic systems in A. cuneifolium, but only in one system in A. adulterinum - where we were able to distinguish how the alleles segregate in fixed pairs (see [11]). Because the data are very limited for A. adulterinum, they must be interpreted with caution. Despite this, the data brought interesting insight into the comparison of the mating systems of the two species.
Statistical Analysis
Dominant marker. Binary (presence/absence) data were prepared in the program FAMD [29] and then imported into the program Arlequin [30], where most of the analyses were performed. A Mantel test was performed using PopTools [31].
The mean gene diversity [32] was calculated for each population and averaged over all populations within a species. In addition, the number of haplotypes (band patterns) for each species and population was calculated. Distribution of genetic variability among and within populations was investigated using an AMOVA [33] and tested with a permutation test (1000 permutations). A Mantel test [34] was used to test for the relationship between pairwise F st values between populations and pairwise geographic distance (in meters) between centroids of localities hosting the populations (obtained in study [21]). The relationship between the total size of the population and its genetic diversity was examined using a simple linear regression in the program R [35].
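For illustration, the Mantel and regression steps can be sketched in R as below; the matrices and vectors are randomly generated stand-ins for the Arlequin Fst output, centroid distances, and population data, and vegan's mantel() stands in for the PopTools implementation cited above.

```r
# Mantel test between pairwise Fst and geographic distance matrices;
# all inputs below are illustrative stand-ins, and vegan::mantel()
# replaces the PopTools spreadsheet implementation used in the study.
library(vegan)
set.seed(1)
k   <- 14                                    # number of sampled populations
geo <- as.matrix(dist(cbind(runif(k), runif(k)) * 10000))  # centroid distances, m
fst <- as.matrix(dist(runif(k)))             # stand-in pairwise Fst values
mantel(as.dist(fst), as.dist(geo), permutations = 999)

# Simple linear regression of gene diversity on population size, as above.
pop_size <- rpois(k, 500)                    # illustrative population sizes
gene_div <- runif(k)                         # illustrative gene diversity
summary(lm(gene_div ~ pop_size))
```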
Co-dominant marker. We calculated the mean expected and observed heterozygosity and inbreeding coefficient [32] over all populations for both species using PopGene [36].
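For a single biallelic co-dominant locus, these quantities reduce to simple genotype-count arithmetic; a sketch with illustrative counts follows (PopGene was used for the actual calculations).

```r
# One biallelic locus in one population; genotype counts are illustrative.
n_AA <- 120; n_AB <- 6; n_BB <- 74          # hypothetical counts
n   <- n_AA + n_AB + n_BB
p   <- (2 * n_AA + n_AB) / (2 * n)          # frequency of allele A
Ho  <- n_AB / n                             # observed heterozygosity
He  <- 2 * p * (1 - p)                      # expected heterozygosity (Hardy-Weinberg)
Fis <- 1 - Ho / He                          # inbreeding coefficient
```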
Band Pattern
The 7 enzymatic systems provided a total of 9 interpretable loci: AAT, ADH-1, ADH-2, DIA, LAP-1, LAP-2, 6-PGDH, PGM and SHDH. In A. adulterinum, 4 loci were variable: LAP-1, DIA, 6-PGDH, SHDH. In A. cuneifolium, 7 loci were variable: AAT, ADH-1, ADH-2, LAP-1, LAP-2, PGM and SHDH. Because A. cuneifolium and A. adulterinum are not closely related species, the loci do not always correspond. However, for the purpose of these analyses, it is important to have the same number of loci for both species. Each of the 9 loci had 2 alleles, resulting in a data matrix of 18 (presence/absence of an allele) × the total number of samples.
In tetraploid A. adulterinum, only one locus (LAP-1) could be reliably evaluated as allelic data. This locus showed either fixed heterozygosity (balanced pattern AABB) or fixed homozygosity (pattern AAAA), indicating diploid inheritance (due to disomic heritability in allotetraploids). Rarely, a clear pattern of AAAB was observed (in 2.6% of examined plants, see Appendix S2). This unbalanced pattern was interpreted as a heterozygote of fixed allele pairs AA and AB (see [11]). In diploid A. cuneifolium, all polymorphic enzymatic systems were evaluated as diploid allelic data.
Dominant Marker
The mean gene diversity across the populations was 0.47 (ranging from 0.029 to 0.8) in tetraploid A. adulterinum and 0.94 (ranging from 0.93 to 0.99) in diploid A. cuneifolium. In A. adulterinum, only 14 haplotypes were present in the entire dataset, with separate populations containing 2-8 haplotypes. In A. cuneifolium, 96 haplotypes were present in the entire dataset, with separate populations containing 4-19 haplotypes.
In A. adulterinum, 40.6% of genetic variation was within populations and 59.4% was among populations (Fst = 0.594, p < 0.0001). In contrast, in A. cuneifolium, 81.0% of genetic variation was within populations and only 19.0% was among populations (Fst = 0.190, p < 0.0001) (Fig. 1). The correlation between geographic and genetic distance between populations was highly significant (Mantel r = 0.335, p = 0.001) in A. adulterinum. In contrast, in A. cuneifolium, the relationship was not significant (Mantel r = 0.093, p = 0.317) (Fig. 2). Larger populations of A. adulterinum host more genetic diversity. This relationship was, however, only marginally significant (R² = 0.152, p = 0.093). In A. cuneifolium, no relationship between the size of the population and the genetic diversity was observed (p = 0.290).
Co-dominant Marker
According to the one enzymatic system (LAP-1) that allowed reliable co-dominant scoring, there was a very high deficiency of heterozygotes in A. adulterinum. As expected, the observed heterozygosity in A. cuneifolium was much more balanced in all enzymatic systems (Table 1).
Discussion
The present study revealed considerable differences in the genetic structure of populations of two rare fern species differing in ploidy level. Populations of the allotetraploid, Asplenium adulterinum, show high genetic differentiation, and the difference increases with geographical distance. However, the individuals are rather uniform within populations. In contrast, populations of the diploid, A. cuneifolium, are very similar to each other, but individuals within populations are genetically variable. Based on these results, we can deduce the probable mating system of the species and the intensity of gene flow between populations. Together with the results from our previous study [21], we can discuss the probable mechanisms of colonization and gene flow among the populations of the two fern species.
Mating System
Our original assumption about the differences in the mating system between A. adulterinum (4n) and A. cuneifolium (2n) was based on their ploidy level and on the common expectation that diploids undergo outcrossing and polyploids undergo selfing [37]. However, recent studies do not confirm this strict division (e.g., [12]). Thus, it was necessary to obtain independent data to confirm or reject this expectation.
The mating system of a species is commonly estimated from the genetic structure of its populations, where the genetic variation is partitioned mainly within populations of outcrossing species and among populations of selfing species [16,38,39]. The results of our study corroborate those of previous studies - Asplenium adulterinum, which is expected to undergo selfing, has 59.4% of the genetic variability partitioned among its populations, while A. cuneifolium, which is an expected outcrossing species, has 19% of the genetic variability partitioned among populations (based on the dominant marker).
Additionally, the enzymatic system in which interpretation based on alleles was possible shows a striking lack of heterozygotes - we found only 2.6% heterozygotes in Asplenium adulterinum. This suggests a high level of inbreeding. In contrast, the observed heterozygosity in A. cuneifolium was rather balanced, as expected. Therefore, the genetic population structure suggests that selfing is the prevailing mating system for A. adulterinum, while outcrossing is the prevailing mating system for A. cuneifolium. Our data further showed that A. adulterinum is capable of outcrossing because we found heterozygotes, which are clearly a direct output of crossing two different genotypes. However, the frequency of individuals resulting from outcrossing is quite low. This finding corresponds to the conclusions of previous studies that gametophytic selfing is the main mating system in polyploid ferns [16,38].
Unfortunately, our data cannot provide information regarding whether A. cuneifolium, the species with prevailing outcrossing, is capable of intra-gametophytic selfing, and thus, single spore colonization. However, single spore colonization has been confirmed in other diploid species, such as Asplenium scolopendrium [12]. That study [12] further suggested that offspring originating from gametophytic selfing are competitively excluded (due to lower vitality) by offspring that originated from outcrossing. A similar scenario may be expected in both of our study species: the genetic structure of the sporophyte population may not fully reflect the frequency of the type of gametophytic mating (selfing/outcrossing). Because both inbreeding and outbreeding depression were clearly documented in ferns [40], young sporophytes originating from selfing in predominantly outcrossing diploids (or from outcrossing in predominantly selfing polyploids) might be excluded from the population via competition.
Gene Flow among Populations
Our study revealed that the genetic structure of the populations of the two species strongly differs on a regional spatial scale (ca. 10 km). Populations of A. adulterinum show strong genetic differentiation similar to other predominantly selfing fern species, e.g., [10,41]; the genetic distance between populations increases with the geographical distance, and larger populations contain more genetic variability. The gene flow among populations of this selfing species is rather limited. In contrast, in A. cuneifolium, there is no relationship between genetic and geographic distance, and most of the genetic diversity can be found within populations. This result suggests that the whole system of this species functions as one large population with frequent dispersal throughout the area and a high level of gene flow, as is often found in various outcrossing fern species (e.g., [42][43][44], but see also [45]).
Probable Processes Forming Genetic Structure
When we combine the results of the present study about the genetic structure of the populations with information on the colonization ability of the species [21], we can hypothesize which processes form the genetic structure of the two fern species during colonization and the subsequent gene flow.
A. adulterinum is predominantly a good colonist of empty patches [21], but subsequent gene flow between the populations is rather limited. The genetic diversity of populations of this species is likely affected by the founder effect. The patch is occupied by the few genotypes that arrive, and they do not mix (or do so only to a very limited degree) with other genotypes. If outcrossing occurs, its product may be excluded due to outbreeding depression [46]. As a result, several independent genotypes exist on patches and reproduce mostly via selfing, leading to highly differentiated populations.
In comparison with A. adulterinum, A. cuneifolium has a more difficult time colonizing empty patches. However, outcrossing facilitates effective gene flow between already established populations because the new genotype must be crossed with the local gametophytes to be incorporated into the population's gene pool (as suggested by Wubs [12]). Moreover, if inbreeding occurs in a predominantly outbreeding species, its product may have a disadvantage due to inbreeding depression [40] and may be excluded from the population. The resulting effective gene flow thus diminishes any spatial structure among populations.
Supporting Information
Appendix S1 Detailed methods of isoenzyme analysis.
Design and Optimization of Tapered Optical Fiber Probes for SERS Utilizing FDTD Method
In this work, we report a design strategy for Ag nanoparticle (Ag NP)-coated tapered optical fiber probes based on finite-difference time-domain (FDTD) simulations. Investigation shows that the Ag NP-decorated fiber tip has excellent electric field enhancement and light confinement capabilities. Moreover, we demonstrate the effect of key parameters such as tip radius, conical angle, Ag NP size, and the gaps between the NPs on the field enhancement at the typical excitation wavelengths of 532, 633, and 785 nm. To further improve the electric field effect, a noble metal substrate is introduced below the tip apex, which exhibits a higher field enhancement generated by tip-substrate coupling. The presence of the Au substrate does not lead to a significant change in the plasmon resonance peak of the probes at 490 nm. This study provides a useful reference for the fabrication of tapered optical fibers with plasmonic nanostructures and the design of robust tapered fiber-optic Raman sensors.
Introduction
Sensitive optical fiber, as a powerful sensing technology, can be used in various applications, such as food safety, environmental, chemical, and biological sensing [1][2][3]. With the development of optical fiber technology, several studies combining surface-enhanced Raman scattering (SERS) with optical fiber to develop promising localized surface plasmon resonance (LSPR) and SERS fiber probes have been reported [4,5]. Compared with traditional SERS substrates, the fiber probes can not only realize large SERS enhancement but also provide an ideal platform for remote measurements and make in situ sensing possible [6][7][8]. As a result, the development of excellent SERS-based sensors has received wide attention in both industry and academia.
The key principle of SERS fiber probes is the preparation of nanostructures at the end of the excitation fiber, modified with SERS-active sensing layers such as noble metal nanoparticles and metal films. In addition, it is a known fact that the detection sensitivity of the probes is related to the local electric field enhancement generated by the plasmonic micro/nanostructure, in which the fiber probe delivers both the excitation light to the sample and the backscattered SERS signal to the Raman spectrometer. To date, various types of optical fiber structures have been developed as SERS fiber probes [9][10][11][12][13]. Although many fiber probes have been prepared and excellent Raman signals obtained, the practical applications of SERS are still limited, especially regarding knowledge of the electromagnetic field enhancement mechanism [14,15]. The current theoretical analysis method for SERS fiber probes is numerical analysis, which is a powerful tool for analyzing electromagnetic field enhancement. To obtain high Raman enhancement, several performance improvement methods are usually combined with numerical simulations to optimize the geometry of the fiber probes and the shaping of the SERS-active layer. Tang et al. [16] successfully fabricated spherical SERS fiber probes and coated the spherical fiber tips with Ag NPs. A higher SERS enhancement factor can be obtained by optimizing the diameter of the fiber spheres. A grid-nanostructured SERS fiber sensor designed using numerical simulation analysis has been presented and shows a double characteristic peak, which offers the possibility of realizing a plasmon resonance excitation mode [17]. In addition, the field enhancement properties and parameter optimization of tapered fiber SERS probes coated with Au NPs on their tips have also been investigated using the FDTD method [18]. FDTD methods are also applied to the design of gas fiber probes to analyze the electric field distribution, polarization properties, and the interpretation of the physical mechanisms [19,20]. Three-dimensional numerical computation can be used to pinpoint the location of hot spots on the probe surface and to analyze the ultrashort pulse propagation of the tapered optical fiber probe [21]. Both the above experimental and theoretical studies show that the SERS sensitivity of the probe is strongly influenced by the fiber geometry and the NPs. At present, various methods are used to fabricate SERS fiber probes with randomly undefined shapes and sizes [22,23], mainly due to the effect of fiber size and the poor operability and controllability of the fabrication procedures. Therefore, it is necessary to combine numerical modeling and simulation in the design of probes, to gain further insight into the general importance of the identified relationships, reduce production costs, and improve the production efficiency of the probes. Among the candidate geometries, tapered fiber probes have a larger specific surface area than other probe shapes and show many advantages, including high light transmission efficiency and a large interaction area for the excitation light and SERS signal [24].
In this study, FDTD simulations are used to design and numerically simulate the tapered fiber probes. The results show that the field enhancement properties of the fiber tip-modified Ag NPs are significantly improved. Moreover, the relevant parameters, including tip radius, cone angle, Ag NP size, and the gaps between the NPs, are optimized. Finally, the electric field strength can be further improved by introducing a noble metal substrate below the fiber tip. This study provides a realistic theoretical reference for the application of tapered fiber probes in Raman spectroscopy.
Proposed Structure Description and FDTD Simulation
Firstly, we designed a tapered fiber probe with Ag NPs modified on the surface of the fiber tip. Default design parameters were provided [25]: taper angle (θ = 41.2°), tip radius (50 nm), Ag NP radius (50 nm), gap between Ag NPs (5 nm), and a 1 nm gap between the Ag NPs and the fiber tip surface. Subsequently, the electromagnetic field simulations were performed using the FDTD method via a commercial software package (Lumerical Solutions, Inc.) to investigate the local electric field enhancement and the SERS enhancement factor (EF). The base solver directly solves Maxwell's equations in both time and space on a spatial grid, with no simplifying approximations, making the analysis far more accurate. To improve simulation efficiency and trade off memory requirements against simulation time on an ordinary computer, a 2D simulation model of the tapered fiber coated with Ag NPs was designed in the x-y plane, as shown in Fig. 1. This appropriately simplified 2D scattering problem does not reduce the accuracy of the results [26]. A Gaussian beam was used as the excitation light with the same total width as the excitation boundary, and the excitation field amplitude (E0) was set to 1 V/m. The vector K was defined to propagate in the y direction with a polarization mode in either the x or z direction (p and s polarizations, respectively). Perfectly matched layers (PML) were used in the x-y directions as the boundary conditions to truncate the computational regions. According to the Drude model, the dielectric constant (εm) of Ag in the visible and near-IR can be written as [27]
εm(ω) = ε∞ − ωp² / (ω² + iωcω),

where ω is the angular frequency of the incident light, ε∞ is the high-frequency dielectric constant, ωp is the plasma frequency of Ag, which is the frequency of the oscillations of electron density in the metal, and ωc stands for the collision frequency, which corresponds to the damping of the electron density oscillations due to collisions among the electrons; the data are given by Johnson and Christy [28]. For computational accuracy, the mesh spacing is fixed at dx = 0.001 μm and dy = 0.001 μm to ensure accuracy, sufficiency, and stability for this study.
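As a quick numeric illustration of this dispersion model, the R sketch below evaluates εm(ω) over the visible range; the ε∞, ωp, and ωc values are illustrative order-of-magnitude figures for Ag, not the Johnson and Christy data used in the actual simulations.

```r
# Drude dielectric constant over the visible range; parameter values are
# illustrative for Ag (assumption), not the Johnson & Christy data.
eps_inf <- 3.7        # high-frequency dielectric constant (illustrative)
omega_p <- 1.38e16    # plasma frequency, rad/s (illustrative)
omega_c <- 2.7e13     # collision frequency, rad/s (illustrative)
c0      <- 2.998e8    # speed of light, m/s

lambda <- seq(450e-9, 800e-9, by = 5e-9)   # wavelength, m
omega  <- 2 * pi * c0 / lambda             # angular frequency, rad/s
eps_m  <- eps_inf - omega_p^2 / (omega^2 + 1i * omega_c * omega)

head(Re(eps_m))   # real part is large and negative, as expected for Ag
```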
Uncoated Tapered Optical Fiber
Before studying other parameters, the optical propagation characteristics in a tapered optical fiber were analyzed for two different polarization directions. The incident light is perpendicular to the excitation boundary, and the shape of the fiber tip is determined by a tip radius of 50 nm, an excitation boundary radius of 0.8 μm, and a cone angle of 41.2°. Figure 2 shows the spatial field distribution of the fiber tip calculated by 2D-FDTD simulations for the two polarizations: Fig. 2a is p-polarized and Fig. 2b is s-polarized light excited at 785 nm. The color map represents the maximum electric field value. From this figure, it can be observed that the intensity radiation shows an enhancement close to the fiber tip, and the electric field intensities (|E|/|E0|max) for p- and s-polarized light are 1.52 and 1.89, respectively. In addition, the tapered region narrows along the axial direction of the fiber tip, and the fiber's cross-section decreases due to the tapering of the fiber, so the diffractive effect is significantly enhanced and light escapes to the surrounding area via the evanescent field. The energy of the evanescent field is inherently related to the refractive index: the higher the refractive index of the medium surrounding the fiber, the less confined the light is in the fiber [25]. Therefore, the uncoated tapered optical fiber provides poor confinement of the light field in a small cross-sectional area.
Tapered Optical Fiber with Ag Nanoparticles
The local electric field enhancement of a tapered optical fiber is investigated when Ag NPs are coated on the fiber tip. As above, the shape of the fiber tip was determined using the default values. The radius of the Ag NPs is taken as 50 nm and the gap between them is 5 nm. The NPs are 1 nm away from the fiber tip surface. Figure 3 shows the simulated electric field distribution of the tapered optical fiber at the wavelengths of 532, 633, and 785 nm. These results demonstrate that the electric field can be significantly enhanced by p-polarized light, due to the light leaking out from the fiber tip surface and the excitation of localized surface plasmons (LSPs) between the Ag NPs. Furthermore, the "hot spot" that depends on the interaction between the NPs and the excitation light occurs between the Ag NPs, and its position varies with wavelength. The |E|/|E0|max reached 15.8 for the tapered optical fiber at 532 nm. For s-polarized light, Fig. 4 shows that the evanescent field does not couple to the LSPs between neighboring Ag NPs. Instead, the NPs provide a reflective layer, like a mirror, that reflects light back into the tip, creating a standing wave [25]. The maximum electric field intensity is 4.46, and the electric field characteristics do not exhibit obvious variations at any of the wavelengths. Thus, we focus on p-polarized light in the subsequent analyses.
Furthermore, the optical characteristics of the proposed probes were examined, and transmission spectra were recorded with a frequency-domain field and power monitor placed 20 nm beneath the fiber tip. In Fig. 5a, a dip is observed at 490 nm, attributed to the absorption occurring between the Ag NPs, which causes the concavity in the transmission spectrum. For SERS fiber probes, it is necessary to select appropriate lasers to excite the plasmon, which is advantageous for Raman applications and enhanced signals. Thus, we analyzed the electric field along a line connecting the Ag NPs at different wavelengths, as shown in Fig. 5b. The results suggest that the electric field intensity near 490 nm is significantly higher than elsewhere in the whole wavelength range from 450 to 800 nm. This is expected since, for the fiber tip coated with Ag NPs, the dip in the transmission spectrum is the signature of the excitation of the fundamental gap plasmon resonance [25,29]. On the other hand, the p-polarized light with the electric field perpendicular to the tapered surface of the Ag NPs is coupled to the surface plasmon polaritons (SPPs) supporting the TM mode in the probes [30].
Parameter Optimization of Tapered Optical Fiber
Notably, the SERS study indicates that optimizing the geometric parameters of the fiber probes is important, since optimal fiber probes can obtain the most efficient SERS signal and electric field enhancement. Therefore, we performed simulations for four different parameters, including tip radius, cone angle, Ag NP size, and the gap between the NPs, to record their effect on the electric field at the excitation wavelengths of 532, 633, and 785 nm. A one-variable-at-a-time method is used for optimization, with the other parameters set to the defaults: taper angle (θ = 41.2°), tip radius (50 nm), Ag NP radius (50 nm), gap between Ag NPs (5 nm), and a 1 nm gap between the Ag NPs and the fiber tip surface. As shown in Fig. 6a, the effect of the tip radius (50 to approximately 500 nm) on the |E|/|E0|max between the Ag NPs was calculated. Note that the electric field at the wavelength of 532 nm is larger than that at 633 and 785 nm, especially when the tip radius is less than 200 nm. As the tip radius decreases, more of the evanescent field outside the walls of the taper is coupled to the NPs, leading to an enhanced electric field between the Ag NPs. In general, the electric field effect and emission energy at 532 nm are stronger than at longer wavelengths. The effect of the cone angle from 10 to 70° was calculated, as shown in Fig. 6b. The electric field fluctuations at the excitation wavelengths of 532 and 633 nm are more pronounced than those at 785 nm, and the optimum field enhancement is obtained at a cone angle of 25 to 35°, while the electric field variations at the excitation wavelength of 785 nm are not significant at any cone angle. The effect of the gap between the Ag NPs on the electric field enhancement was investigated, as shown in Fig. 6c. The figure shows that the electric field intensity decreases rapidly as the spacing between the Ag NPs increases. In other words, the strongest hot spots exist when the gap distance in Ag NP dimers is in the sub-5-nm region [31]. Finally, the effect of the Ag NP radius was evaluated in the size range from 30 to 100 nm, as shown in Fig. 6d. As expected, the electric fields obtained show that the optimal electric field at each wavelength corresponds to a different NP size. For 532, 633, and 785 nm excitation, the maximum of |E|/|E0|max was obtained when the NP radius is 55, 70, and 95 nm, respectively. This can be explained by the fact that the LSP wavelengths differ slightly for given dimensions of the Ag NPs.
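The bookkeeping behind such one-variable-at-a-time sweeps can be sketched as below; run_fdtd() is a hypothetical stub standing in for a single solver run, since the actual simulations were driven from the commercial package.

```r
# One-variable-at-a-time sweep scaffold; run_fdtd() is a hypothetical
# stub standing in for one solver run that returns |E|/|E0|max.
run_fdtd <- function(params, wavelength_nm) {
  # Placeholder: a real implementation would drive the FDTD solver here.
  NA_real_
}

defaults <- list(tip_radius = 50,   # nm
                 cone_angle = 41.2, # degrees
                 np_radius  = 50,   # nm
                 np_gap     = 5)    # nm

sweep_one <- function(param, values, wavelength_nm) {
  sapply(values, function(v) {
    p <- defaults
    p[[param]] <- v            # vary one parameter, hold the rest at defaults
    run_fdtd(p, wavelength_nm)
  })
}

# Example: tip-radius sweep at the 532 nm excitation wavelength.
e_max_532 <- sweep_one("tip_radius", seq(50, 500, by = 50), 532)
```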
The Tapered Optical Fiber Probe with Substrate
In the above analysis, we discussed the field enhancement characteristics of the proposed probes with only the fiber tip coated with Ag NPs. To further improve the electric field enhancement, we designed a noble metal (Au) substrate located L = 23 nm below the probe tip, as shown in Fig. 7a. Higher enhanced electric fields can be found both at the tip-substrate coupling and between the Ag NPs, as illustrated in Fig. 7b. The coupling field enhancement mainly depends on the excitation wavelength, the substrate, and the relative distance between the fiber tip and the substrate. Here, we mainly pay attention to the excitation wavelength of 785 nm. As shown in Fig. 7c, the field strength of the tip-substrate coupling between the Ag NPs is significantly larger than that of the fiber tip without a substrate in the wavelength range of 450-800 nm. At the same time, both curves have similar electric field intensity variations near 490 nm. This again proves that the peak at 490 nm represents the plasmon resonance peak generated between the Ag NPs on the fiber tip surface, because the position of the peak is not related to the substrate. Figure 7d shows the variation of the electric field in the x-y plane with the gap between the fiber tip and the substrate. The gap distance is varied from 23 to 400 nm. The strong electric field is reduced as the distance L increases. When L is less than 26 nm, the maximum of |E|/|E0|max reaches 22.3, attributed to tip-substrate coupling. When L is greater than 26 nm, the electric field between adjacent Ag NPs is higher than that of the tip-substrate coupling, dominated by both plasmonic gap modes and the image-force effect [32]. On the other hand, as the distance increases, the field strength between the Ag NPs is enhanced, which can avoid the detection-distance problem in probe-based detection applications. In addition, we investigated the effective mode area in both models, i.e., the tip without substrate coupling (Fig. 7e) and the tip-substrate coupling (Fig. 7f). The latter has a higher enhancement of the electric field, and its effective mode area is severely reduced due to strong mode coupling.

(Fig. 6 caption: The electric field distribution of fiber sensors in the x-y plane for different (a) tip radii, (b) cone angles, (c) gaps between NPs, and (d) Ag NP radii under excitations of 532, 633, and 785 nm, respectively.)

(Fig. 7 caption: (a) Sketch of the tip-substrate coupling structure with a distance L of 23 nm; (b) simulated electric field distributions in the x-y plane under the excitation wavelength of 785 nm; (c) the electric field variation between Ag NPs at different wavelengths with and without the substrate; (d) the distribution of the electric field at the tip-substrate gap and between NPs as the distance L increases; (e) the distribution of the electric field and the effective mode area without the substrate and (f) with the Au substrate.)
Finally, we compared the field enhancement obtained with two substrates, Au and Ag, to evaluate the tip-substrate coupling efficiency. Table 1 shows that the field enhancement performance is best for the Ag substrate, followed by Au. In addition, the EF calculations showed that the probes with the Ag substrate have stronger EF values at the wavelength of 532 nm. In fact, Au substrates have a more stable EF across different wavelengths. In an actual SERS system, the Ag substrate obviously yields a larger SERS signal enhancement than the Au substrate. The result of our simulation is consistent with other model simulations obtained in previous reports [33][34][35], indicating that the model is feasible. In general, the above results indicate that the simulation model and calculation results provide a reference for applications of tapered SERS fiber probes in the Raman field.
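For orientation, enhancement factors can be estimated from the reported field maxima under the commonly used |E/E0|⁴ approximation; whether this is exactly the EF definition behind Table 1 is an assumption.

```r
# SERS enhancement factor via the common |E/E0|^4 approximation
# (assumed here; the exact EF definition of Table 1 may differ).
e_ratio <- c(no_substrate = 13.9, au_substrate = 22.3)  # |E|/|E0|max values
EF <- e_ratio^4
round(EF)   # ~3.7e4 without substrate vs. ~2.5e5 with the Au substrate
```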
Conclusion
In this study, we have reported the design of a tapered optical fiber probe using Ag NPs deposited on the fiber tip. The electric field enhancement and spectral properties of the fiber probe were obtained using the 2D-FDTD method. The analysis shows that the maximum electric field enhancement is related to the transmission spectrum. The probe has a stronger electric field intensity compared to the reported probes modified with Au NPs. The influence of the tip radius, conical angle, Ag NP size, and the gaps between the NPs on the electric field enhancement has been quantified under the typical excitation wavelengths of 532, 633, and 785 nm. Moreover, the electric field of the proposed probe is further enhanced from 13.9 to 22.3 times when an Au substrate is introduced under the tapered fiber tip. This study can provide theoretical support for the development of tapered fiber probes and has reference value for the preparation of Raman fiber probes for biosensing applications.
MACROECONOMIC STABILITY AND TRANSPORT COMPANIES’ SUSTAINABLE DEVELOPMENT IN THE EASTERN EUROPEAN UNION
The paper's primary aim is to evaluate the influence of macroeconomic stability on transport companies' sustainable development in the eastern EU from 2008 to 2019. The first part discusses the theoretical problems. The empirical part includes the methodology, results of the research and conclusions. To determine the relationship between variables, we use Pearson's R and the Ordinary Least Square Method. The contribution to knowledge is using the pentagon of macroeconomic stability to evaluate macroeconomic stabilisation's influence on transport companies' sustainable development. The results indicate that macroeconomic stability is one of the essential determinants of the transport companies' sustainable development. According to Pearson's R, the highest level of dependence is in Slovenia (0.96), Bulgaria (0.9), and Slovenia (0.83). The lowest is in Latvia (0.69). The OLS regression results indicate that the highest significance is in Slovakia (α1 = 1.994), and the lowest is in Lithuania (α1 = 0.691). The states' economic policies should favour the freedom to conduct business, create appropriate legal regulations, and support ecological investments. It is necessary to act for a stable and fair tax system and ensure access to finance. The issue is contemporary and requires further analysis.
Introduction
The relationship between macroeconomic stability (M_SP) and transport companies' sustainable development (SD_TC) is a current and important issue in the context of climate degradation. The literature on companies' sustainable development is gaining importance and requires more in-depth and broader analysis (Evers, 2018; Chang, 2020).
Researchers undertake theoretical analyses of sustainable development, focusing on its evaluation and development determinants (Bordon & Schmitz, 2015). Many of them focus on individual economic entities' situations (Mao et al., 2018), analyze reports on the sustainable development of companies (Harymawan et al., 2020), and attempt to evaluate and measure companies' sustainable development and determine its determinants (Misztal, 2019; Matinaro et al., 2019; Comporek et al., 2021). Some researchers analyze transport companies in terms of their impact on the natural environment (Brussel et al., 2019; Pieloch-Babiarz et al., 2021); other analyses focus on green supply chains and ecological innovations (Andersson & Forslund, 2018) or attempt to identify the determinants influencing the sustainable development of transport companies (Brussel et al., 2019).
Although macroeconomic stability as a factor in the development of companies is the subject of analyses and scientific considerations, there is a certain insufficiency, as there are no analyses of the influence of M_SP on SD_TC. Researchers indicate that the macroeconomic situation, including the level of GDP, inflation, unemployment, and the trade balance, affects the transport sector (Misztal & Kowalska, 2020; Comporek et al., 2021). Investigating the nature and direction of these links will increase the dynamics of companies' sustainable development and support the implementation of a more effective economic and environmental policy.
The paper's primary aim is to evaluate the influence of M_SP on SD_TC in the eastern EU from 2008 to 2019. The research supplements the literature on the subject and is important from the point of view of implementing states' economic policy. To evaluate the statistical relationship between variables, the Authors use the Ordinary Least Square Method, which is commonly used for similar analyses (Oberhofer & Dieplinger, 2014). The estimated model is linear and fulfils the conditions necessary for the application of this method.
The research sample includes transport companies from the countries of the eastern European Union. The research sample covers the years from 2008 to 2019. Transport companies were selected for the research sample due to their role in developing other economic sectors. Moreover, this sector has one of the largest negative impacts on the natural environment.
The structure of the paper is as follows: an introduction, a literature review, a research methodology, research results, conclusions, and references.
The Authors discuss selected theoretical issues connected with the sustainable development of transport companies in the context of macroeconomic stability. The empirical part of the paper presents the research results and conclusions. We build a single-equation model and use Pearson's R and the Ordinary Least Square Method (OLS) to verify the research hypothesis. The research has a significant limitation: it does not consider the situation before the economic crisis and its impact on companies' sustainable development. Also, only one explanatory variable was included in the model. Therefore, further research should be carried out to identify the key determinants of companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
The literature review
Sustainable development means achieving the best economic performance while respecting the environment and social development (Evers, 2018; Cohen et al., 2021). Over the years, the concept of sustainable development has evolved significantly, becoming a key reference area in many global programs and initiatives for the common good (Mao et al., 2018; Pieloch-Babiarz et al., 2021). Business activities are fundamental for stable economic growth. Unfortunately, they very often have a negative influence on the natural environment (Škare & Golja, 2013; Słupik & Lorek, 2019). Companies should implement the assumptions of sustainable development in their business processes (Salari & Bhuiyan, 2018; Powe, 2020). This requires achieving the best possible financial results, multidimensional management, testing various business models and scenarios, implementing continuous learning processes, and identifying and mitigating threats to achieving sustainable development goals (Misztal, 2019; Saygili et al., 2021). The implementation of sustainable development tasks provides a competitive advantage (Suprayoga et al., 2020).
Numerous empirical studies focus on the environmental activities of transport companies (Valjevac et al., 2018;Banik & Lin, 2019). It is necessary to minimize the negative impact of transport entities, create balanced transport systems, and implement eco-innovation (Zikic, 2018). Ecological activities should reduce emissions of harmful substances and waste, minimize the use of non-renewable resources, reduce noise, etc. (Misztal, 2019;Cohen et al., 2021).
Sustainable development of transport companies' factors is internal (a financial situation, environmental awareness, etc.) and external (micro and macroeconomic factors) factors (Bordon & Schmitz, 2015;Andersson & Forslund, 2018;Brussel et al., 2019). One crucial factor for sustainable development is macroeconomic stabilization, which means lasting economic balance (internal and external) in both the real and monetary aspects (establishing a macroeconomic system characterized by an equilibrium of flows and stocks alike). It eliminates uncertainty in business and boosts future economic activity growth (Kołodko, 1993;Sokolov Mladenović et al., 2019;Chang, 2020).
The company's sustainable development is strongly associated with the level of macroeconomic growth (Škare & Hasić, 2016;Comporek et al., 2021). A higher economic level means higher expenditure on research and development, greater availability of knowledge and greater environmental awareness of customers. Thus, stable economic growth leads to rationalization of decisions in environmental protection (Cek & Eyupoglu, 2020).
Macroeconomic stability, understood as stable conditions for economic growth, is of key importance for sustainable economic development. The improvement of stability is related to improving business conditions and stable legal regulations (Misztal & Kowalska, 2020; Lisiński et al., 2020). Most researchers emphasize that high GDP, low inflation, and a low unemployment rate increase confidence and improve sustainable development (Krajnakova et al., 2018; Misztal, 2019). Companies' sustainable development is dependent on interest rates, foreign investments, and government expenditure (Barkauskas et al., 2015).
Macroeconomic stability ensures full and productive employment and decent work for all people. Hence, a decrease in the unemployment rate has a positive effect on the sustainable development of companies (Fedulova et al., 2019). As for interest rates, they largely influence the investment decisions of companies. Higher interest rates mean a higher credit price and less ecological innovation (Wu et al., 2021).
Macroeconomic stability affects the sentiments and expectations of entrepreneurs about the future. A good economic situation is conducive to undertaking ecological investments (Kekre, 2016;Raczkowski, 2015;Harting, 2019). There is also a positive correlation between macroeconomic conditions and consumer expectations. There is pressure on companies in developed countries to take care of the environmental and social aspects (Pieloch-Babiarz et al., 2021).
The methodology of the research
The paper's primary aim is to evaluate the influence of macroeconomic stability on transport companies' sustainable development in the eastern EU from 2008 to 2019. The research period and the sample selection result from the adopted purpose and the availability of data. The study's significant limitation is that it does not consider the situation before the economic crisis and its impact on companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
We focus on eleven eastern European Union countries, which have several common characteristics, including geolocation, history, economic systems transformation, and business operations changes.
The study refers to the transport companies, which can contribute to the region (the sample was selected to ensure the results' statistical significance). Not without value is that transport companies emit several pollutants, which hurt the natural environment and human health and life.
The central research hypothesis is: "Macroeconomic stability has a statistically significant influence (p < 0.05) on the transport companies' sustainable development in the eastern European Union in the period 2008-2019". To evaluate the significance of the influence of the variable M_SP on the variable SD_TC, we test the null hypothesis H_0: α_j = 0 against the alternative hypothesis H_1: α_j ≠ 0 (p-value < 0.05).
Assumption: macroeconomic stability is one of the decisive determinants affecting green business investments.
We also formulate the following sub-hypotheses:
-H1: "The transport companies' sustainable development in the eastern part of the EU has a positive trend from 2008 to 2019". The dynamics are described by the equation SD_TC = α_1·t + α_0; we verify the hypothesis H_0: α_1 > 0 against the alternative hypothesis H_1: α_1 < 0 (see the sketch after this list of sub-hypotheses). Justification for the H1 hypothesis: actions taken by state and EU authorities to initiate environmental and social investments, including the introduction of standards and legal principles in environmental protection. The positive trend is also the result of the increased environmental awareness of entrepreneurs and customers.
-H2: "The macroeconomic stability in the eastern EU has a positive trend from 2008 to 2019". Analogously, we verify the hypothesis H_0: α_1 > 0 against the alternative hypothesis H_1: α_1 < 0. Justification for the H2 hypothesis: the research period covers the recovery from the economic slowdown and slow growth in corporate investment.
-H3: "The highest average value of the transport companies' sustainable development (SD_TC) is in countries with the highest mean value of the macroeconomic stability (M_SP)". Justification for the H3 hypothesis: M_SP means stimulating economic growth, increasing employment, ensuring internal balance (by reducing the inflation rate), and providing external balance (by striving to achieve balance of payments equilibrium). Thus, attaining M_SP has a positive effect on the level of investment in the companies' sector.
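As an illustration of the trend tests for H1 and H2, the sketch below fits the linear trend SD_TC = α_1·t + α_0 to synthetic yearly values and reports the slope and its p-value. The data, and the use of Python with statsmodels, are our own illustrative assumptions, not the paper's material.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic yearly SD_TC values for 2008-2019 (placeholders, not real data).
t = np.arange(2008, 2020)
rng = np.random.default_rng(42)
sd_tc = 0.2 + 0.05 * (t - 2008) + rng.normal(0.0, 0.02, t.size)

# Fit SD_TC = a1 * t + a0 by ordinary least squares.
X = sm.add_constant(t.astype(float))   # column of ones plus the year
fit = sm.OLS(sd_tc, X).fit()
a0, a1 = fit.params
print(f"slope a1 = {a1:.4f}, p-value = {fit.pvalues[1]:.4g}")
# A significantly positive slope supports the positive-trend hypothesis.
```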
The variables are stimulants (analytical variables whose increase raises the synthetic indicator) and destimulants (analytical variables whose increase lowers the sustainable development indicator).
We use the following variables to assess the indicators: economic development, where ΔGDP is the growth of gross domestic product, HICP is the Harmonised Index of Consumer Prices, U is the unemployment rate, G is government debt, and CA is the current account balance relative to gross domestic product. We use Pearson's R to measure the correlation between M_SP and SD_TC and build two types of regression models (the models meet the conditions for applying the least squares method).
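Since the regression formula itself did not survive extraction, the following is only a minimal sketch of the described procedure, Pearson's R between M_SP and SD_TC followed by a least-squares regression of SD_TC on M_SP, using synthetic placeholder series.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

# Synthetic indicator series for one country, 2008-2019 (placeholders).
rng = np.random.default_rng(0)
t = np.arange(12)
m_sp = 0.20 + 0.010 * t + rng.normal(0.0, 0.01, t.size)
sd_tc = 0.10 + 1.50 * m_sp + rng.normal(0.0, 0.02, t.size)

# Pearson's R between M_SP and SD_TC, as reported in Figure 4.
r, p_r = pearsonr(m_sp, sd_tc)
print(f"Pearson R = {r:.2f} (p = {p_r:.4g})")

# OLS regression SD_TC = a0 + a1 * M_SP; H0: a1 = 0 vs H1: a1 != 0.
fit = sm.OLS(sd_tc, sm.add_constant(m_sp)).fit()
print(fit.params, fit.pvalues, fit.rsquared)
```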
Results of the research
The research sample consists of 44% Polish (146 039), 12% Czech and Romanian (39 424 and 39 646), 9% Hungarian (28 926), 6% Bulgarian (20 625), 5% Slovak (15 266), 3% Slovenian, Lithuanian and Croatian (8 580, 11 286, and 9 460), 2% Latvian (6 672), and 1% Estonian (4 806) transport companies (Figure 1).

Figure 2 presents SD_TC from 2008 to 2019. All countries show a positive trend in SD_TC over the analyzed period, which should be assessed as a favourable situation: activities in the transport sector undertaken for economic, social, and environmental development are effective and efficient. The highest dynamics are in Hungary (SD_TC = 0.0523t + 0.2; R² = 0.9585) and in Estonia (SD_TC = 0.052t + 0.2485). SD_TC fell during the economic crisis of 2008, and after 2012 it began to rise rapidly in all countries.

Figure 3 presents M_SP in the eastern EU countries. There is a positive trend in M_SP in the analyzed countries. In most countries, its values slightly decreased during the crisis and then rebounded.

Figure 4 presents the correlations between M_SP and SD_TC. The Pearson's R between SD_TC and M_SP is significant at p < 0.05. The highest correlation is in Slovenia (0.96), the lowest in Latvia (0.69). The correlations between the variables are either strong or very strong, which proves a high degree of relation between the variables.

Table 1 presents the OLS regression. All factors have a positive influence on transport companies. The highest impact of M_SP1 is in Estonia (4.868), the lowest in Romania (0.495). The highest impact of M_SP2 is in Slovakia (2.392) and the lowest in Estonia (0.087). In most countries, M_SP1 and M_SP2 are statistically significant (the exception is M_SP1 in Slovakia). The coefficient of determination (R²) ranges from 0.573 (M_SP1, M_SP2 and SD_TC in Czechia) to 0.981 (M_SP1, M_SP2 and SD_TC in Romania). M_SP has a positive influence on the transport companies' sustainable development; the highest impact is in Slovakia (1.994), the lowest in Lithuania (0.691) (Table 1).

The results allow the research hypothesis (H) to be confirmed. The study gathered evidence that macroeconomic stabilization had a statistically significant impact on transport companies' sustainable development from 2008 to 2019. According to Pearson's R, the highest levels of dependence occurred in Slovenia (0.96) and Bulgaria (0.90), and the lowest in Latvia (0.69). The OLS regression results indicate that the highest impact of M_SP on SD_TC is in Slovakia (α_1 = 1.994), while the lowest is in Lithuania (α_1 = 0.691).
In the analyzed period, positive phenomena went hand in hand in the eastern part of the European Union: balanced economic growth and sustainable development of transport companies. Moreover, lasting economic balance leads to an increase in social well-being and changes the conditions for doing business. The sub-hypothesis H1 is confirmed because, in all countries, the trend of SD_TC is positive from 2009 to 2019. This means that entrepreneurs take actions for economic, social, and environmental development, and that the programs implemented by the European Union and the member states work well.
The sub-hypothesis H2 is also confirmed: in all analyzed countries, M_SP shows positive dynamics. This results from an improvement in the economic situation, an increase in investments, and improved consumer sentiment.
The sub-hypothesis H3 is rejected because only in Estonia is the highest mean value of the sustainable development of transport companies (SD_TC = 0.59) accompanied by the highest average value of the macroeconomic stabilization indicator (M_SP = 0.31).
The model with two explanatory variables, M_SP1 and M_SP2, does not indicate which group of factors, internal (M_SP1) or external (M_SP2), is crucial for the sustainable development of transport companies. The highest impact of the internal factor is in Estonia (α_1 = 4.868), while the lowest is in Romania (α_1 = 0.495). The highest impact of the external factor is in Slovakia (α_2 = 2.392), and the lowest in Estonia (α_2 = 0.087).
The sustainable development of transport companies is a very important research issue. This research focuses only on macroeconomic stability, which is a serious limitation. The most important conclusion is that the more economically advanced a country is, the greater the demand for its companies to comply with the SDGs.
Therefore, it is vital to create favorable circumstances for doing green business. From this perspective, the role of state authorities is essential for the countries' stable development in harmony with nature. Transparent legal regulations as well as substantive and financial support are also crucial for companies undertaking ecological investments.
Conclusions
The sustainable development of companies is conditioned by several factors, both internal and external. Internal factors include assets and financial possibilities, the adopted business model, the strategy, and the environmental management approach. External factors include the industry's competitiveness and ecological harmfulness, the country's socio-economic growth and its future prospects, and legal regulations in environmental protection.
The research results indicate that macroeconomic stability (stable economic growth) is one of the factors determining transport companies' sustainable development in the eastern EU countries. Pearson's R and the OLS regression indicate a high correlation between macroeconomic stabilization and transport companies' sustainable development. From 2008 to 2019, both SD_TC and M_SP show positive dynamics.
The research has significant limitations. It does not consider the situation before the economic crisis and its impact on companies' sustainable development, and only one explanatory variable was included in the model. Therefore, further research should be carried out to identify the key determinants of companies' sustainable development. Moreover, the model considers only quantitative data, which is also a significant limitation.
The research results are useful for setting the direction of governments' economic and environmental policies and for managing companies. The directions of the states' economic policies should favour the freedom to conduct business, create appropriate legal regulations, and support the development of ecological investments. It is necessary to act for a stable and fair tax system and ensure access to finance.
Authorities should use regulatory mechanisms and market control, from corporate governance to verifying the public finances sector (only to create appropriate self-regulating mechanisms). Achieving macroeconomic stability is a challenging task, especially for developing economies. In countries where economic transformation has also taken place, it is crucial to conduct macroeconomic policy to support ecological and pro-social companies' initiatives. Macroeconomic stability strengthens the economy's position and is the starting point for ecological development and reducing the negative influence of economic activities on the natural environment. It affects the credit policy, which is essential for making new environmental investments.
From business managers' perspective, the information about macroeconomic stabilization is vital in defining development strategies and building business models. Maintaining appropriate economic relations affects the moods and expectations of companies and customers. The persistent macroeconomic stabilization leads to an increase in society's welfare and changes the consumption model. Not only economical but also social and environmental issues are gaining in importance.
SD_TC and M_SP have a growing trend, which indicates that the actions taken so far in the analysed countries are right, although a more comprehensive approach to the development of the economies is required. It seems that these countries, apart from taking care of economic development, need to implement environmental protection and community support policies more actively and effectively.
The sustainable development of transport companies is significant as this sector is responsible for some of the highest emissions of harmful substances into the environment. Moreover, the development of the transport sector influences other sectors of the economy.
The research shows the relationship between sustainable development and macroeconomic stabilization, which argues for incorporating current and forecasted macroeconomic information into strategies and business models in business practice. The results also indicate the tasks faced by governments, whose role in creating conditions for companies' stable and sustainable development is undeniable.
Macroeconomic stability is only one of the factors influencing the sustainable development of economic entities. It is necessary to conduct further analyses devoted to isolating the determinants of economic, social, and environmental decision-making by companies. Further research will focus on assessing the influence of determinants on the transport companies' sustainable development in the EU. It is also essential to identify the determinants of sustainable development in other companies and conduct a comparative analysis. | 4,620 | 2021-12-14T00:00:00.000 | [
"Economics"
] |
PROBABILISTIC APPROACH FOR THE DETERMINATION OF CUTS PERMISSIBLE BRAKING MODES ON THE GRAVITY HUMPS
The paper presents the research results on cut braking modes on gravity humps. The objective of this paper is to develop methods for assessing braking modes of cuts under conditions of fuzziness of their rolling properties, as well as to select the permissible range of speeds of cuts coming out of retardant positions. As a criterion for assessing the modes of target control of cut rolling speed, it was proposed to use the average gap size on a classification track at the established norms of probable exceeding of the permissible speed of car collision and of car stops in retarders. As a criterion for evaluating the modes of interval control of cut rolling speed, the risk of non-separation of cuts on the switches was proposed. Using simulation modeling and mathematical statistics, the configuration of the range of permissible speeds of cuts coming out of retardant positions has been established. The conducted research simplifies the choice of cut braking modes in systems of automatic control of cut rolling speed.
Gravity humps are the basic technical facilities that provide breaking-up and making-up of freight trains. Automation of the sorting process is the main direction for improving safety in train breaking-up, improving working conditions, and reducing the operational costs of processing car traffic volume on hump yards. Nowadays, different automation systems for hump operations have been developed [1-5]. It should be pointed out that systems such as PROYARD III, Star II, and MSR-32 are expensive complexes, equipped with plenty of various sensors and complex systems for controlling stop blocks. At the same time, hump conductors successfully manage the sorting process based on their own experience. In this respect, the solution of sorting process automation tasks depends to a large extent on improving the algorithms for automated control of train splitting-up. This allows raising sorting process quality through software upgrades rather than through more complex technical facilities and, thus, reducing the cost of splitting-up control systems.
LITERATURE REVIEW AND DEFINING THE PROBLEM
On large Ukrainian hump yards, three-position gravity humps equipped with beam stop blocks are used. Control of the speed of cuts rolling over a hump is one of the basic tasks to be solved while breaking up trains. The purposes achieved thereby are related to ensuring the separation of cuts on point switches and stop blocks along their rolling routes, ensuring permissible speeds of cuts running onto stop blocks, permissible speeds of cuts reaching cars standing on marshaling tracks, and achieving maximum filling of marshaling tracks without gaps.
Automated control of the rolling speed of cuts is based on a mathematical model of their motion along the hump.
In the process of rolling, the gravity force acts on the cut, as well as motion resistance forces: the main resistance (friction of car parts against each other, friction of wheels over the rail track, wheel impacts over the rail track at joints, etc.), environmental and wind resistance, resistance due to points and curves, and the resistance of car stop blocks [6-7]. The rolling process of the cut can thus be described by a differential motion equation whose independent variable is the route, where: g' is the gravity acceleration, considering the inertia of rotating masses, m/s²; s is the distance from the top of the hump to the first axle of the cut being rolled down, m; v is the motion speed of the cut, m/s; i(s) is the reduced slope under the cut, ‰; w_r is the main specific motion resistance, N/kN; w_sc is the specific resistance due to points and curves, N/kN; w_ew is the specific environmental and wind resistance, N/kN; b_r is the specific resistance created by car stop blocks, N/kN.
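The equation itself did not survive extraction. A plausible reconstruction, assuming the standard hump-yard formulation in which the specific resistances (given in N/kN, numerically equivalent to ‰) are subtracted from the reduced slope, is:

```latex
v \, \frac{dv}{ds} = g' \left[ i(s) - \left( w_r + w_{sc} + w_{ew} + b_r \right) \right] \cdot 10^{-3}
```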
The abovementioned equation is solved through numerical computations [8].
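As a sketch of such a numerical computation, the following explicit Euler integration of the motion equation above uses a hypothetical slope profile and hypothetical resistance values; none of the constants come from the paper.

```python
import numpy as np

# Illustrative sketch of numerically solving the cut-motion equation
#   v * dv/ds = g' * [i(s) - (w_r + w_sc + w_ew + b_r)] * 1e-3
# with a simple explicit Euler scheme over the distance s.
G_PRIME = 9.5  # m/s^2, gravity acceleration reduced for rotating masses

def slope(s: float) -> float:
    """Reduced slope under the cut, in permille (hypothetical profile)."""
    return 30.0 if s < 100.0 else 1.5

def resistances(v: float) -> float:
    """Total specific resistance in N/kN (hypothetical, speed-dependent)."""
    w_r, w_sc, w_ew, b_r = 1.2, 0.3, 0.05 * v, 0.0
    return w_r + w_sc + w_ew + b_r

def roll(v0: float = 1.5, s_end: float = 400.0, ds: float = 0.1):
    """Integrate speed along the route; stop if the cut halts."""
    v, s = v0, 0.0
    while s < s_end and v > 0.05:
        dv_ds = G_PRIME * (slope(s) - resistances(v)) * 1e-3 / v
        v += dv_ds * ds
        s += ds
    return s, v

print(roll())  # distance reached and final speed of the cut
```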
The rolling speed of cuts is controlled through the action of stop blocks on cars, which create an additional resistance force. The controllable parameters are the speeds of cuts coming out of retarder positions specified by the control system; these speeds form the braking mode of the cut. The selection of braking modes is limited by a set of conditions. In particular, braking modes must ensure permissible speeds of cuts running into the second braking position, v_bp2,max, as well as of cuts approaching cars standing on marshaling tracks, v_t,max. With successive rolling of cuts, additional time intervals between them should be provided in order to throw over points and to forward cuts according to their rolling routes. Otherwise a cut will be forwarded to an incorrect track of the break-up yard, and additional shunting work will be necessary to move it to the appropriate track. An additional time interval should also be ensured for switching over stop blocks between the braking of different cuts; otherwise the braking mode of the cut will not conform to the calculated one.
The interval δt_i on a point or a stop block between successively rolling cuts is determined by the initial interval θ_i between the cuts of the i-th pair at the top of the hump, s, and by the rolling times of the two cuts: the time of the i-th cut rolling from breakaway until it unlocks the separating element, and the time of the next cut until it reserves the separating element, s, for i = 1, …, n−1, where n is the number of cuts in the train.
Cuts are separated en route if δt_i ≥ t_de, where t_de is the time necessary for the functioning of hump automation systems when throwing over a point or switching over a stop block, s.
Intervals between cuts in a train are related to each other: an increase of the interval with the preceding cut, as a rule, results in a decrease of the interval with the next one. For this reason, a group of three cuts is regarded as the design group when solving the task of selecting braking modes of cuts; in this group, the braking modes of the outer cuts are fixed and the braking mode of the middle cut varies.
A solution of the task of selecting braking modes of the controllable cut in the design group is given in [8]. The disadvantage of that solution is that it is obtained with known values of motion resistances and exact realization of the specified speeds of cuts coming out of retarder positions. In fact, the processes taking place on gravity humps are stochastic, and all the indicated values are random [9-11]. Besides, the fact that the conditions of regulation of cut rolling speed are inequalities indicates the existence of a great number of permissible braking modes of cuts. The purpose of this investigation is to select the range of permissible braking modes of the middle cut in the design triplet of cuts under conditions when the motion resistances of cuts and their speeds of coming out of retarder positions are random values.
METHODOLOGY
The investigation of cut rolling is made by means of a software package that simulates the process of rolling over a gravity hump [11]. The main window of the software package, showing the time curves t = f(S) of three successively rolling cuts, is shown in Fig. 1. A plan and a longitudinal profile of the rolling routes, the parameters of cuts, and the parameters of the distributions of the random values of cut motion resistances and of the speeds of coming out of retarder positions are specified as initial data for the simulation. In this investigation, rolling runs were performed for the conditions of a gravity hump with 32 tracks in the break-up yard and 5 separating points along the routes of cut motion. The gravity hump is equipped with two retarder positions on its sloping part, after the first and second separating points, as well as a yard retarder position on the marshaling tracks. The reference designation of the rolling route is shown in Fig. 1 below. When simulating the rolling of a cut, a set of parallel experiments with various initial values of the random-number generator is performed. From the results, the parameters of the random values of cut motion speeds at characteristic points are determined, as well as the times of cut rolling to these points.
In the formalization of the task of selecting braking modes of cuts, the variables are the specified speeds of cuts coming out of the retardant positions of the gravity hump.
It should be pointed out that not all specified speeds can be realized by retardant positions because of their limited capacity. In order to decrease the scale of the task, let us consider the process of target braking of the cut by the third retardant position. An ideal solution of this task is to determine such a speed v'''_d of the cut coming out of the third retardant position at which the cut connects to the cars standing on the marshaling track with permissible speed. However, owing to the lack of exact information about rolling conditions, such a solution is not applicable in all cases. An increase of v'''_d improves the filling of the marshaling track and decreases the probable size of the "gap" between cars, but simultaneously increases the probability of exceeding the specified speed of car collision; a reduction of v'''_d results in the opposite change of these indices, and with low values of v'''_d the probability of stopping the cut in the yard retardant position grows. Since the probability of exceeding the specified collision speed and the probability of stopping the cut in the stop block characterize the safety of train breaking-up, it is suggested that they be normalized. As a result, the task of selecting the specified speed of cuts coming out of the third retardant position is to minimize the probable gap size subject to p_t ≤ p_t,max and p_r ≤ p_r,max, where p_t,max and p_r,max are respectively the permissible probabilities of exceeding the specified speed of car collision and of stopping the cut in the stop block. It is worth noting that the value v'''_d does not influence the conditions of interval regulation of cut rolling speed, because the third retardant position is located after the last separating point. The speed v''_d of cuts coming out of the second retardant position is the factor that connects the conditions of target and interval regulation of cut rolling speed. The investigations show that, in solving task (7), for every value of v''_d a single value v'''_d(v''_d) can be specified that corresponds to the best indices of target regulation of cut rolling speed reachable under these conditions. From a set of simulation experiments for different values of v''_d, the dependence of the probable gap size on the speed of the cut coming out of the second retardant position can be established; its general view is represented in Fig. 2. The speed of the cut coming out of the second retardant position is restricted to the range from v''_d,min to v''_d,max. Below v''_d,min, the probability of stopping cuts in the third retardant position or before it exceeds the permissible value even with no braking at the latter; above v''_d,max, the probability of exceeding the specified speed of car collision in the break-up yard exceeds the permissible value even with full-rated braking of cuts at the third retardant position. Within the limits of permissible speeds of cuts coming out of the second retardant position, two sections can be distinguished. On the first section, from v''_d,min to v''_dt, the probable size of the gap on the marshaling track decreases as v''_d increases. On the second section, from v''_dt to v''_d,max, the probable size of the gap is constant regardless of v''_d. Thus, the conditions of target regulation of cut rolling speed restrict the permissible speeds of cuts coming out of the second retardant position, while the speed of cuts coming out of the third retardant position depends on the speed of coming out of the second one. For this reason, the representation of the braking mode of the cut (6) can be simplified.
For a graphical representation, each braking mode of a cut can be depicted as a point in the plane of the speeds of coming out of the retardant positions. The permissible rolling modes for a single cut then form a closed range Ω_t. Since rolling speeds of cuts are physically nonnegative values, all possible speeds of cuts coming out of retardant positions lie in the first quadrant. The marginal (maximum permissible) speeds of cuts coming out of retardant positions correspond to the boundary of the range Ω_t. A description of these restrictions is given in Table 1.
An important peculiarity of the range Ω_t is that it consists of two sub-ranges, Ω_t1 and Ω_t2, separated by the line v''_d = v''_dt. The braking modes of section 1 correspond to the braking modes in sub-range Ω_t1 (see Fig. 3).
Within this sub-range, a change in the speed of cuts coming out of the second retardant position changes the indices of target regulation of cut rolling speed. The braking modes of section 2 correspond to the braking modes in sub-range Ω_t2 (see Fig. 3). Within that sub-range, the indices of target regulation of cut rolling speed do not depend on the braking modes at the retardant positions on the sloping part of the hump and have constant values. Among the restrictions of Table 1 are:
4. Restriction on the permissible speed of car collision in the break-up yard; the restriction is linear.
5. Restriction on the permissible speed of the cut coming into the second retardant position; the restriction is linear.
6. Restriction on the capacity of the first retardant position, realized while rolling the cut with full-rated braking at the first retardant position; the restriction is linear.
7. Restriction on the capacity of the second retardant position, realized while rolling the cut with full-rated braking at the second retardant position; the restriction is nonlinear.
Additional restrictions on braking modes appear during the successive rolling of cuts from the hump. In this case, restrictions arise from the conditions of separation of the cut from the preceding and the next one on points and stop blocks [12]. Considering that the rolling process of cuts is stochastic, these conditions can be represented in the form p(δt_i < t_de) ≤ p_d, where p_d is the permissible probability of non-separation of cuts.
According to [13], the probability p(δt_i < t_de) can be determined on the basis of a statistical analysis of the simulation results of cut rolling, using an expression based on Laplace's function Φ(x). One of the restrictions of Table 2, the restriction on the permissible probability of separation from the next cut on a stop block of the second retarder position, is nonlinear. Restrictions 1, 2, and 8-10 select the range Ω_d of permissible braking modes of the cut under the condition that it is separated from the adjacent cuts. The configuration of this range depends on the rolling characteristics and conditions of the controllable cut as well as of the cuts adjacent to it.
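A sketch of how such a probability can be estimated from simulation statistics, assuming the interval δt_i is approximately normally distributed (Laplace's function is closely related to the standard normal CDF); the sample values below are hypothetical:

```python
import numpy as np
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF (closely related to the Laplace function)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def non_separation_probability(intervals: np.ndarray, t_de: float) -> float:
    """Estimate p(delta_t < t_de) from simulated intervals on a separating
    element, assuming the intervals are approximately normally distributed."""
    mu, sigma = intervals.mean(), intervals.std(ddof=1)
    return phi((t_de - mu) / sigma)

# Hypothetical simulated intervals (s) between two cuts on a point.
rng = np.random.default_rng(1)
sim = rng.normal(loc=4.0, scale=1.2, size=1000)
print(non_separation_probability(sim, t_de=2.0))
```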
When breaking up real trains consisting of more than three cuts, the number of restrictions implied by the conditions of interval regulation of rolling speed may increase owing to the separation of non-adjacent cuts [14].
INVESTIGATION RESULTS
An example of the selection of the range of permissible braking modes is shown in Fig. 3. In this figure, the range Ω_t is shown with a thick line, and the shaded areas correspond to the range Ω_d; Ω_a = Ω_a1 ∪ Ω_a2. In the range Ω_a1, when the braking modes are changed, the indices of both target and interval regulation of cut rolling speed change. In the range Ω_a2, when the braking modes are changed, only the indices of interval regulation of cut rolling speed change, while the indices of target regulation remain constant. The availability of many permissible braking modes allows solving the task of regulating cut rolling speed even in the absence of exact information about the rolling characteristics of cuts and the conditions of their rolling. The area S_Ω is an important characteristic of the range of permissible braking modes. The largest areas of the ranges Ω_a are typical for cases when the group includes multi-car cuts or when separation occurs on points 1 and 2 along the rolling route; in these cases, the area of the range Ω_a can be equal to the area of the range Ω_t. The smallest areas are typical for cases when single-car cuts separate on the last separating point in the first and second pairs. A zero area of the range Ω_a indicates the necessity to change the braking modes of adjacent cuts or to reduce the breaking-up speed.
CONCLUSION
The investigations allow the following conclusions to be drawn. Braking modes of cuts can be characterized by the speeds of the cut coming out of the retardant positions located on the sloping part of the hump. The speed of cuts coming out of the third retardant position depends on the speed of cuts coming out of the second one and is selected from the conditions of reaching the best indices of target braking. The range of permissible braking modes is an enclosed area in which the indices of target and interval regulation of cut rolling speed take permissible values. The basic restrictions on braking modes of cuts have been established. The investigations simplify the solution of the task of controlling cut rolling speed when there is no exact information about rolling performance and rolling conditions.
Fig. 1. Main window of the software simulating cuts rolling from the gravity hump.
Table 1. Restrictions of permissible rolling modes of a single cut.
The requirements of interval regulation of cut rolling speed are represented in the form of restrictions 8-10, which correspond to braking modes with limit values of the probability of cut separation. A description of these restrictions is given in Table 2. Table 2. Restrictions of permissible modes of cut rolling under the conditions of separation from adjacent cuts | 4,041.8 | 2016-01-01T00:00:00.000 | [
"Engineering"
] |
Generation Mechanism of Linear and Angular Ball Velocity in Baseball Pitching †
The purpose of this study was to quantify the functional roles of the whole body's joint torques, including the finger joints, in the generation of ball speed and spin during the baseball fastball pitching motion. The dynamic contributions of the joint torque term, the gravitational term, and the motion-dependent term (MDT), consisting of centrifugal and Coriolis forces, to the generation of the ball variables were calculated using two types of models. The motion and ground reaction forces of a baseball pitcher, who was instructed to throw a fastball at a target, were measured with a motion capture system and two force platforms. The results showed (1) that the MDT is the largest contributor to ball speed (e.g., about 31 m/s prior to ball release) when using the 16-segment model, and (2) that the horizontal adduction torque of the pitching-side shoulder joint plays a crucial role in generating ball speed, with conversion of the MDT into other terms using a recurrence formula.
Introduction
Throwing a fastball is a key competitive technique for baseball pitchers. In order to obtain high ball speed, pitchers accelerate the ball by rotating their joints through exerted joint torques (e.g., [1]). Previous studies on high-speed swing motions based on multi-body dynamics have reported that the motion-dependent term, which contains centrifugal and Coriolis forces, contributes largely to the speed generation of distal end points, such as the ball or hand in baseball pitching [2,3], the bat head in baseball batting [4,5], and the racket head in the tennis serve [6].
This study quantifies the functional roles of the whole body's joint torques, including the finger joints, in the generation of ball speed and ball angular velocity (i.e., ball-spin speed) using two types of human whole-body models, with and without consideration of finger joints. In addition to quantifying the contributions of the individual terms (e.g., the joint torque term and the gravitational term), the generating factors of the MDT were also considered using a recurrence formula for the generalized velocity vector consisting of the linear and angular velocities of all segments of the system.
Methods
An induced speed analysis was conducted to quantify the dynamic contribution of the joint torque inputs. Since the analysis needs the equation of motion for the target system as well as data of the pitching motion, this section mainly explains the modelling of the body, the derivation of the equation of motion, and the quantification of the dynamic contributions calculated from the equation of motion in consideration of the generating factors of the MDT.
Data Collection
A male collegiate right-handed baseball pitcher was instructed to pitch a ball as fast as possible at a target set 18 m away from him. Three-dimensional coordinate data of the pitching motion (body: 47 markers; fingers: 11 markers; ball: 6 markers) were captured using a 20-camera motion capture system (VICON-MX, Vicon Motion Systems, Oxford, UK) operating at 500 Hz. The ground reaction forces of the individual legs were measured using two force platforms (9281A and 9287B, Kistler Instruments AG, Winterthur, Switzerland) operating at 1000 Hz. This study was approved by the institution's ethics committee (No. 28-138).
Dynamical Model of Whole Body and Ball
The whole-body segments with a ball were modelled both as a system of twenty-two rigid linked segments (Figure 1a) and as a system of sixteen rigid linked segments (Figure 1b). Each lower limb is assumed to be connected with the ground via a virtual joint at the center of pressure (COP) of the foot when it is in contact with the ground. Anatomical constraint axes (e.g., the varus/valgus axis at the elbow and knee joints; the internal/external rotation axis at the wrist joint), about which the joints cannot rotate freely, are considered.
Equation of Motion for the Whole Body and Ball System
An analytical form of the equation of motion for the whole body with ball system expresses the generalized acceleration as the sum of the joint torque term A_Ta·T_a, the gravitational term A_G·G, the motion-dependent term (MDT), and the modelling error term A_Err (Equation (1)), where V is the generalized velocity vector consisting of the linear velocity vectors with respect to the centers of gravity (CG) and the angular velocity vectors of all segments; A_Ta and A_G are the coefficient matrices for the active joint torque vector T_a and the gravitational force vector G; the MDT consists of the forces and moments caused by centrifugal and Coriolis forces and the gyroscopic effective moments; and A_Err is the modelling error term, consisting of the residual joint force term, the residual joint moment term, and fluctuation terms caused by the segments' lengths and the anatomical constraint joint axes [7].
Contributions to Ball Variables
After integrating Equation (1) with respect to time, the ball variables are calculated from the generalized velocity vector (Equation (2)), where the matrix S_ball denotes the transformation matrix from the generalized velocity vector to the ball CG velocity vector x·_ball and the ball angular velocity vector ω_ball. The dynamic contributions of the individual terms to the generation of the ball variables are then decomposed (Equation (3)) into the terms C_Trq, C_MDT, and C_G, which respectively denote the contributions of the joint torque term, the MDT, and the gravitational term; C_Err, the contribution of the modelling error term; and C_V0, the contribution of the initial velocity term caused by the initial velocity state of the system at the start of the analysis. For example, the contribution of the joint torque term is obtained by propagating only the joint torque input through this relation (Equation (4)).
Contributions to Ball Variables Considering Generating Factors of MDT
The equation of motion for the system, Equation (1), was discretized with respect to time (Equation (5)), where k denotes the time step in the discrete-time system and the input vector collects the discretized joint torque, gravitational, and modelling error terms (Equation (6)). The generalized acceleration vector was expressed by a difference approximation with the time interval Δt of the discretized system, dV/dt(k) ≈ (V(k+1) − V(k))/Δt (Equation (7)). Combining Equations (5) and (7) yields a recurrence formula for the generalized velocity vector V (Equation (8)) [6]. Equations (6) and (8) provide the contributions of the input terms (i.e., the joint torque term, the gravitational term, and the modelling error term) at time k to the generation of the generalized velocity vector at time k + 1 in the discrete-time system, without use of the MDT.
The contribution of each term at every instant to the generation of the generalized velocity vector can be derived from Equation (8). For example, the generalized velocity vector at time k can be calculated from the time history of the input vector (Equation (9)), where Π denotes the product (factorial) function.
Consequently, the contribution of the active joint torques to the generation of the ball variables at time k is expressed by Equation (10). This contribution can be further divided into the contributions of the individual active joint torques about the axes of the whole-body joints. Since the generalized velocity vector of Equation (9) is calculated without use of the MDT component, Equation (10) gives the contribution that accounts for the generating factors of the MDT.
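A minimal sketch of this bookkeeping: in a discrete-time linear recurrence, each input stream (joint torque, gravity, modelling error) can be propagated separately, and by linearity their running sums reproduce the full state, so the motion-dependent effects are attributed back to the inputs that generated them. The matrices and inputs below are random stand-ins, not the paper's measured quantities.

```python
import numpy as np

# Recurrence V[k+1] = A[k] @ V[k] + U[k], where U[k] stacks the inputs.
rng = np.random.default_rng(0)
n, steps = 6, 50
A = [np.eye(n) + 0.01 * rng.normal(size=(n, n)) for _ in range(steps)]
U = {name: [0.01 * rng.normal(size=n) for _ in range(steps)]
     for name in ("torque", "gravity", "error")}

contrib = {name: np.zeros(n) for name in U}   # per-input running totals
for k in range(steps):
    for name in U:                            # propagate each stream alone
        contrib[name] = A[k] @ contrib[name] + U[name][k]

V_final = sum(contrib.values())               # equals the full recurrence
print({name: np.linalg.norm(c) for name, c in contrib.items()})
```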
A Solution for Closed Loop Problem in Fingers and Ball System
When using the 22-segment model, the ball is supported by three fingers (the index, middle, and thumb), which exert forces on the ball surface to grasp and manipulate the ball during the fastball pitching motion. It is impossible to determine the forces exerted by the individual fingers via an inverse dynamics calculation because of the kinetic redundancy of the ball-fingers system. A simulation with the 22-segment model was therefore conducted to estimate the contact forces by setting diagonal spring and damper elements between each fingertip and each contact point on the ball surface.
The position and velocity vectors of the individual fingertips are given by using contact points calculated from the ball variables (i.e., the ball CG's position and velocity, the ball orientation, and the ball angular velocity), under the assumption that the ball contact points of the fingertips maintain the same location with respect to the ball coordinate system until each finger loses contact with the ball. The exerted force is calculated as F_j = K_c(p_ft,j − p_bc,j) + C_c(v_ft,j − v_bc,j) (Equation (11)), where the vectors p and v denote the position and velocity vectors at the fingertip (ft), calculated from the ball variables measured in the experiments, and at the ball contact (bc) points of the individual fingers, calculated from the simulation described below; the matrices K_c and C_c denote diagonal matrices consisting of stiffness and viscous components, respectively. The estimated values of the exerted forces at the ball contact points are calculated under an additional condition that excludes pulling force components of Equation (11)
for j = index, middle, thumb (Equation (12)), where the vector r_cg-C,j denotes the position vector running from the ball CG to each fingertip's contact point.
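A minimal sketch of one such spring-damper contact element (Equation (11)); the stiffness and damping values are hypothetical placeholders, not the study's identified parameters.

```python
import numpy as np

# Diagonal spring-damper element between a fingertip and its contact
# point on the ball surface: F = K_c (p_ft - p_bc) + C_c (v_ft - v_bc).
K_c = np.diag([2000.0, 2000.0, 2000.0])   # N/m, stiffness (hypothetical)
C_c = np.diag([5.0, 5.0, 5.0])            # N*s/m, damping (hypothetical)

def contact_force(p_ft, v_ft, p_bc, v_bc):
    """Force exerted at one finger's contact point on the ball."""
    return K_c @ (p_ft - p_bc) + C_c @ (v_ft - v_bc)

# Example: fingertip 1 mm past the contact point with slight relative motion.
f = contact_force(np.array([0.001, 0.0, 0.0]), np.array([0.01, 0.0, 0.0]),
                  np.zeros(3), np.zeros(3))
print(f)  # -> [2.05, 0.0, 0.0]
```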
The simulation was carried out using the equations of ball motion for translational and rotational movements, where m_ball is the mass of the ball, x_ball,cg is the position vector of the ball's center of gravity (CG), g is the gravitational acceleration vector, and Î_ball is the inertia matrix of the ball expressed in the global reference coordinate system. The inverse dynamics calculation for the finger joints was then carried out using the estimated values of the ball contact forces exerted by the fingers. Data were analyzed over the normalized time 0-100%, from the instant when the ball is farthest from the catcher to the instant when the thumb's fingertip loses contact with the ball just prior to ball release.
Contribution to Ball Speed with Respect to 22-Segment Model
Figure 2a,b show the contributions of the individual terms and the major contributors to the ball speed for a trial with stable data acquisition. The sum of the contributions coincides with the measured speed (Figure 2a). The joint torque term is the largest contributor to the ball speed in this model, while the MDT shows a small contribution. The PIP-joint torques of the index and middle fingers are the major joint-torque contributors to the ball speed (Figure 2b).
Contribution to Ball Variables with Respect to 16-Segment Model
Figure 3a,b show the contributions of the individual terms to the ball speed and ball angular velocity with respect to the 16-segment model.In contrast to the result in Figure 2a, the MDT is the largest contributor to the ball variables just before ball release.
Main Contributors to Ball Speed in Consideration of Generating Factors of MDT
Figure 4a-c show the main joint-torque contributors of the shoulder joint, the elbow and wrist joints, and the torso joint to the generation of ball speed after converting the MDT into the other terms using Equations (9) and (10). The shoulder horizontal adduction torque is a positive contributor to the ball speed. The elbow flexion/extension-axial torque shows a large magnitude of negative/positive contribution and is one of the largest contributors just prior to ball release. The torso lateral flexion-axial torque and ante/retroflexion-axial torque are positive and negative contributors to ball speed, respectively.
Conclusions
This study quantified the generation mechanism of the linear and angular velocities of the ball using an induced speed analysis with and without finger segments. The results show that the role of the finger joints is to support the ball during the pitching motion, and that the torques of the pitching-side arm and torso joints are the major positive or negative contributors to the generation of the ball variables, utilizing the cumulative effects of the joint torque inputs.
Figure 1. A schematic representation of the whole body and ball model: (a) model with finger segments (22-segment model); (b) model without finger segments (16-segment model). The distal and middle segments of the index and middle fingers are treated as one segment in the analysis.
Figure 2. Contributions to the ball speed with to the 22-segment model: (a) contributions of the individual terms; (b) contributions of individual joint torque terms.The legends in Figure 2a,b are as follows, speed: measured ball speed; total-Trq: total contribution of joint torque terms; V0: the contribution of initial velocity term; sum of ctb: total sum of individual terms; PDF: palmar/dorsal flexion; FE: flexion/extension; HAA: horizontal abduction/adduction; IER: internal/external rotation.
Figure 3. Contributions of individual terms to the ball variables with respect to the 16-segment model: (a) contributions to the ball speed; (b) contributions to the ball angular velocity. | 2,798.6 | 2018-02-23T00:00:00.000 | [
"Engineering",
"Physics"
] |
How Single Amino Acid Substitutions Can Disrupt a Protein Hetero-Dimer Interface: Computational and Experimental Studies of the LigAB Dioxygenase from Sphingobium sp. Strain SYK-6
Protocatechuate 4,5-dioxygenase (LigAB) is a heterodimeric enzyme that catalyzes the dioxygenation of multiple lignin derived aromatic compounds. The active site of LigAB is at the heterodimeric interface, with specificity conferred by the alpha subunit and catalytic residues contributed by the beta subunit. Previous research has indicated that the phenylalanine at the 103 position of the alpha subunit (F103α) controls selectivity for the C5 position of the aromatic substrates, and mutations of this residue can enhance the rate of catalysis for substrates with larger functional groups at this position. While several of the mutations to this position (Valine, V; Threonine, T; Leucine, L; and Histidine, H) were catalytically active, other mutations (Alanine, A; and Serine, S) were found to have reduced dimer interface affinity, leading to challenges in copurifying the catalytically active enzyme complex under high salt conditions. In this study, we aimed to experimentally and computationally interrogate residues at the dimer interface to discern the importance of position 103α for maintaining the integrity of the heterodimer. Molecular dynamic simulations and electrophoretic mobility assays revealed a preference for nonpolar/aromatic amino acids in this position, suggesting that while substitutions to polar amino acids may produce a dioxygenase with a useful substrate utilization profile, those considerations may be off-set by potential destabilization of the catalytically active oligomer. Understanding the dimerization of LigAB provides insight into the multimeric proteins within the largely uncharacterized superfamily and characteristics to consider when engineering proteins that can degrade lignin efficiently. These results shed light on the challenges associated with engineering proteins for broader substrate specificity.
Introduction
The aromatic heteropolymer lignin accounts for 10-35% of lignocellulosic biomass, making it the second most abundant renewable organic material in the biosphere, after cellulose [1]. Production of fuels and fine chemicals from lignin has the potential for high sustainability and low environmental costs compared to other carbon mass sources [2]. However, effective degradation of the heterogeneous aromatic structure and conversion of lignin-derived aromatic compounds (LDACs) to high-value products remains a challenge for existing methods [1,3]. Therefore, there is considerable interest in microbial catabolism of lignin and LDACs, particularly the aromatic ring-cleaving dioxygenases, such as LigAB [4][5][6].
Structurally, LigAB is a homodimer of α/β heterodimers (Figure 2A) [10]. The iron-binding active site is located at the interface between the small alpha and the large beta subunits of each dimer (Figures 2B and S1). Interestingly, all but one catalytic residue, including those of the iron-binding motifs, are contributed by the LigB subunit (the β domain of the heterodimer). F103α, sitting at the interface of the allosteric pocket and the active site (Figures 2B and S1), is contributed by the LigA (alpha) subunit [10]. Crystallography data indicate that F103α is not involved in metal coordination or acid/base catalysis. Previous mutagenesis studies of LigAB from our lab revealed that the residue controls the enzyme's substrate specificity through interaction with the C5-functionality of bound substrates [18]. F103A and F103S mutations of LigA were also shown to prevent the LigA and LigB proteins from copurifying. This suggested an additional role for F103α in protein-protein interactions in the LigAB complex (Figures 2B and S1).
Figure 1. Lignin degradation pathway in which LigAB catalyzes the dioxygenation reaction of various phenolic compounds. (A) A portion of the lignin degradation pathway from Sphingomonas paucimobilis sp. strain SYK-6 in which LigAB is found. LigAB catalyzes the ring opening of its endogenous substrate, 4,5-protocatechuic acid, but also of other LDACs within and outside of the shown pathway. (B) The aromatic ring opening reaction of protocatechuic acid, the native substrate of LigAB. F103α recognizes functional groups at the C5 position (indicated with a blue arrow). The H at the C5 position of PCA can be substituted with hydroxyl and hydroxymethyl groups to yield gallate and 3MGA, respectively.
Phenylalanine plays an important role in the thermodynamic stability of interfaces in many proteins [19-21]. For instance, in signal transducer and activator of transcription 5 (STAT), the phenylalanine at position 706 facilitates the homodimerization of the protein by forming an intramolecular network of hydrophobic interactions with other nonpolar residues at the cognate domain of the same dimer [19-21]. The resultant hydrophobic interface substantially contributes to the recognition of the phenylalanine and its associated hydrophobic network on the other dimer. In other systems, such as ErbB2, a transmembrane protein, phenylalanine residues in the monomers associate first to assist in dimer formation, then rotate outwards so that the helices can align [22]. Overall, it has been proposed that phenylalanine residues contribute to the dimerization propensity of proteins [22,23]. In the case of the LigAB heterodimer, the single catalytic phenylalanine may have been evolutionarily selected to enhance the protein subunits' ability to dimerize.
In an effort to elucidate the role of F103α in LigAB stability, we further characterized a collection of mutants that were previously studied by Barry et al. [18] with both computational and experimental methods. Quantifying and comparing the percentage of each subunit that copurifies for the mutant proteins to the wild-type protein allowed for mutant induced changes to be observed. Computational protein mutagenesis and molecular dynamic simulations allowed for the calculation of thermodynamic changes to occur on the molecular level for each of the experimentally tested mutants. Molecular dynamic simulations revealed changes in free energies of the heterodimer depending upon the identity of residue 103α and enabled determination of their relative stability, which strongly correlated to the observable and significant changes on the experimentally determined, macroscale dimerization stability.
Protein Dimer Purification and SDS-PAGE Analysis
To purify and isolate a kinetically active form of LigAB, the gene pair is co-expressed, and the dimer is purified anaerobically. Of the two proteins, only LigA has an N-terminal His6-Tag, but the dimer associates readily in solution, and thus the dimer copurifies without any extensive steps. Since we have previously observed that SDS-PAGE analyses of both aerobically and anaerobically purified LigAB are indistinguishable (Figure S2), the purification described herein was completed aerobically. The aerobic affinity chromatography process was completed in the same manner as the previously described anaerobic purification process (Figure S2).
When purifying wild-type LigAB, in the third elution step, where the fractions are collected for further buffer exchange and analysis, the two proteins are present at approximately the same abundance. A small fraction of LigB is initially released from the resin at a slightly higher percentage than LigA. As this elution step continues, later fractions contain a greater percentage of LigA compared to LigB. Protein abundance in each fraction is assessed, and fractions are collected so that the final ratio of LigA to LigB is about 1:1 for catalytic and other characterization steps (Figure S3, Table 1).
Table 1. The calculated abundances of the α and β subunits for wild-type LigAB and F103α mutants. The ImageQuant-calculated abundance of each α and β subunit in the elution fractions E3-1 to E3-5 is compared to the relative abundances of the subunit in all the elution fractions (see Supplemental Figures S2 and S3). The ratios of α to β and β to α are also reported as a basis of comparison.
The purification and quantification processes were repeated for the other previously characterized mutants of F103α (Figure S4A-F). Interestingly, two of the three non-polar mutants (A and L) had one fraction with a greater amount of LigB in the second elution fraction compared to the third elution fraction (Figure S4A,E). When looking at fractions E3-1 to E3-5, the percentages of LigA are decreased by about 40% for these mutants, and the percentages of LigB are decreased by about 20% or 25% for F103L and F103A, respectively. Nevertheless, all three non-polar variants were still found to have a greater β:α ratio than wild-type when comparing E3-1 to E3-5. This suggests that these substitutions may destabilize the α-subunit or the dimer interface in some way, such that less of the protein is stably expressed and purified. However, the overall ratio of the protein subunits remains relatively consistent compared to wild-type.
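As an illustration of the ratio bookkeeping behind Table 1, the short sketch below sums hypothetical band intensities across elution fractions and reports the α:β and β:α ratios; the numbers are invented placeholders, not our densitometry data.

```python
# Hypothetical band intensities per elution fraction: (LigA, LigB).
fractions = {
    "E3-1": (120.0, 150.0),
    "E3-2": (180.0, 175.0),
    "E3-3": (200.0, 190.0),
}
a_total = sum(a for a, _ in fractions.values())
b_total = sum(b for _, b in fractions.values())
print(f"alpha:beta = {a_total / b_total:.2f}, "
      f"beta:alpha = {b_total / a_total:.2f}")
```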
As for the purification of the polar mutants (Figure S5), the opposite trend was observed. All polar mutations allowed for the isolation of LigAB with an α:β ratio greater than 1. F103S purified most similarly to wild-type (Figure S5A); conversely, the other two polar mutants deviated noticeably from the wild-type. The F103T mutant was observed to have a similar abundance of the α-subunit in the fractions of interest, but with a considerably lower abundance of the β-subunit, leading to a large α:β ratio (Figure S5E). In the case of F103H, the expression (total protein abundance) is diminished compared to wild-type (Figure S5C). In the fractions of interest, isolated LigA was slightly reduced compared to WT-LigAB (<10%), whereas the expression of LigB for the F103H mutant was approximately half of that for wild-type. This suggests that although the α-subunit itself may not be destabilized greatly by the mutation, the dimer interface may be weakened.
Native Gel Analysis
Native gel electrophoresis was carried out to understand the effect of the mutations on the formation of the catalytically active oligomerization state of LigAB (Figure 3). Due to the effective purification of the F103S and F103A mutants described above, they were used as markers for their group. LigA is the smaller of the two subunits, with a molecular weight of 17.711 kDa (when expressed with the His6-tag), and LigB has a molecular weight of 33.292 kDa.
Interestingly, the wild-type protein exists at a molecular weight corresponding to a trimer of dimers, although some other higher-order oligomeric states exist. As seen with our denaturing SDS-PAGE analysis, the F103S variant behaves similarly to wild-type. This mutant, similar to wild-type, exists primarily as an α3β3 dimer, but there is an increase in the abundance of the higher-order oligomers (which appear to be a hexamer of heterodimers). Conversely, F103A exists more as the heterodimer (α1β1) or as a higher-order oligomer. Additionally, the bands are less distinct, suggesting perhaps that other interactions are being observed. Perhaps this can help explain the excess of the beta subunit observed for the F103A, F103V, and F103L mutants. The bands for the F103A mutant enzyme are consistent with the mass of the α1β1 heterodimer at low concentrations, but the protein band appears more elongated and smeared at higher concentrations. It may be possible that the overexpression of the hydrophobic mutants allows for the formation of other intermolecular protein interactions, which warrants further study.
Heterodimer Stability Calculation
The free energies of the wild-type LigAB heterodimer and several F103α mutant variants were calculated to determine changes in the stability of the complex via a thermodynamic cycle (Figure 4A). During the simulations, the wild-type and mutant variants equilibrated with an overall root mean square deviation (RMSD) below approximately 3.5 Å (Figure S6). This demonstrates that the simulated complexes reached equilibrium, and without dramatic variability across the trajectory, these structures were suitable for further simulations to calculate the free energy difference of each mutation relative to wild-type. We observed that the overall fluctuations of the individual residues do not vary drastically when calculating the Cα root mean square fluctuations (Cα RMSF) for the wild-type and mutant proteins (Figures S7 and S8). Locally, the greatest changes in per-residue fluctuations occur downstream of the F103α residue in the mutant simulations, whereas globally, residues in chain B experience greater fluctuations due to these mutations (Figure S7B). The F103A, F103V, and F103L variants had free energy differences relative to the wild-type of 1.96 ± 0.17, −0.22 ± 0.17, and −0.22 ± 0.11 kcal/mol, respectively (Figure 4B). The F103S and F103T mutants have relative free energy differences of 3.77 ± 0.20 and 0.9 ± 0.30 kcal/mol, respectively. Because the histidine side chain has two nitrogen atoms that can harbor a proton in the neutral state, we calculated free energy differences for both the delta (HID) and epsilon (HIE) forms, which gave values of −0.38 ± 0.11 and 1.37 ± 0.07 kcal/mol, respectively, relative to the wild-type.
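As a sketch of how the thermodynamic cycle yields these numbers (a standard formulation; the exact legs of the cycle in Figure 4A may differ in detail), the relative stability is the difference between the alchemical F→X mutation free energies evaluated in the complex and in a reference state (e.g., the free subunit), and each leg is estimated from the Crooks fluctuation theorem relating the forward and reverse non-equilibrium work distributions:

```latex
\Delta\Delta G \;=\; \Delta G_{\mathrm{F}\to\mathrm{X}}^{\mathrm{complex}} \;-\; \Delta G_{\mathrm{F}\to\mathrm{X}}^{\mathrm{reference}},
\qquad
\frac{P_F(W)}{P_R(-W)} \;=\; e^{\,\beta\,(W-\Delta G)}
```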
Furthermore, the solvent accessible surface area of the entire LigAB complex was calculated, and the difference (ΔSASA; Figure S9) was determined relative to the wild-type. The serine mutation caused the burial of several other residues in the protein (Figure S9A). Threonine has the opposite effect: globally, it drives other residues to become more solvent exposed. These reorganizations are relatively subtle and do not cause changes in the radius of gyration, but they could contribute to the solvation of the protein-protein interface. The other variants have negligible effects on the global desolvation of other residues. All variants at the mutated residue are more solvent exposed than phenylalanine, except for the histidine variants (Figure S9B). The histidine protonated at the delta position (HID) is more buried, whereas the histidine protonated at the epsilon position does not display any large differences.
Discussion
The mutants of interest were previously identified as kinetically active mutants, F103V/T/L/H, in addition to two mutants that were catalytically inactive, F103A/S [18]. We hypothesized that if the affinity of the heterodimer interface was reduced, then an excess of the α-subunit should elute off the column, because this subunit carries the N-terminal His-tag. Additionally, because the β-subunit should not stick to the Ni-NTA affinity column on its own, it should be found at lower levels in the elution fractions. We analyzed the abundance of the LigA and LigB proteins and quantified the relative abundance of the subunits using Image Quant software.
In previous research, mutations at the F103α position to residues that were slightly smaller and still hydrophobic led to enhancements in catalysis for non-native substrates [18]. We hypothesized that these mutations enlarged the area of the active site, allowing substrates with larger C5 groups to more easily bind in chemically competent poses within the active site. During this previous investigation, other hydrophobic mutations were also predicted to enhance the reactivity of non-native substrates by allowing for larger functional groups at the C5 position, but these mutants did not enable LigAB to efficiently copurify.
When the mutants of interest for this study were purified using a lower salt concentration than previously reported, all of them copurified to some extent. Specifically, to enable copurification of all of the mutants, the buffer was changed from 20 mM HEPES, 500 mM NaCl, pH 7.4 to 20 mM HEPES, 300 mM NaCl, pH 8.0. The change in buffer conditions facilitated the purification of all mutants, especially the F103A and F103S mutants, which previously purified as only the LigA protein. Without this change in buffer conditions, careful energetic analyses and comparison to our computational data would not have been possible. We believe that the electrostatics of these purification conditions may have contributed significantly to the mutant proteins' ability to copurify.
A novel property of LigAB discovered here is its oligomerization state, as determined by the native gel. From its crystal structure, LigAB was believed to exist as a homodimer of heterodimers (α2β2) [10], but the molecular weight of wild-type LigAB in the native gel is more consistent with a trimer of heterodimers (α3β3), as shown in Figure 3. While this oligomerization state is unique amongst enzymes characterized in this class of dioxygenases, it is yet to be determined whether extradiol dioxygenase activity is reliant on any one oligomerization state [24]. Further experiments, such as size-exclusion chromatography or analytical ultracentrifugation, need to be completed to validate the results of the native gel. Additionally, constructs that allow LigA and LigB to be individually purified, so that the oligomerization state can be further studied, are planned. Furthermore, it would be valuable to determine whether LigAB is catalytically active in multiple oligomeric states. Continued studies of other related extradiol dioxygenase enzymes could provide insights into the dependence of catalytic activity on oligomerization state in this superfamily.
When the experimental and computational data are taken together, both the size and polarity of the residue at position 103α impact the stability of the heterodimer interface. All nonpolar mutants tested had diminished abundances of both the α- and β-subunits relative to wild-type. When considering the ratios of the subunits, less of the α-subunit is being isolated. This may be due to destabilization of the LigA protein alone or to a reduction in the protein adopting a catalytically competent dimer interface. Although all mutants with non-polar amino acids at position 103 led to protein with α:β ratios near 0.8 (suggesting an excess of the β-subunit), the F103A mutation had a positive free energy difference, indicating that only this nonpolar mutation may be destabilized in some way. Looking at the native gel, most of this enzyme exists as the single heterodimer (α1β1) and as other higher-order oligomers; it is not observed in the predominant oligomeric form of wild-type LigAB. This indicates that while the two subunits copurify, the dimer interface may not be sufficiently stable over time, which may be why previous purifications were unsuccessful for the F103A mutant enzyme.
When Phe103 was exchanged for polar amino acids, the α:β ratios were not consistent and did not follow any pattern based upon electronics or size. Previous studies by Barry et al. [18] indicated that F103H and F103T are kinetically active, while F103S is not, despite the observation that F103S has the α:β ratio most similar to wild-type. Substitution of polar residues for phenylalanine led to a positive relative free energy, indicating that these polar mutations are destabilizing. Furthermore, the native gel for F103S indicated that while the trimer of heterodimers was formed, other higher-order oligomers formed at greater abundances than for wild-type. This suggests that the observed purification and activity differences may result from destabilizing shifts in the heterodimer ↔ active dimer ↔ oligomer equilibrium.
Further evidence of heterodimer destabilization is provided by our molecular dynamics simulations. The hydrophobic residues (valine, leucine) have a small stabilizing effect, whereas the polar residues (serine, threonine) have a destabilizing effect. The hydrophobic residues are uniformly smaller than phenylalanine, and while they can pack more readily into the hydrophobic pocket of the active site, they provide a lower desolvation barrier than the wild-type phenylalanine. The alanine variant is destabilizing, which may be due to the small size and flexible nature of the residue: if phenylalanine acts similarly to a cap for the active site, then alanine could be small enough that it would not act as an effective barrier between the active site and the solvent. Polar residues such as serine and threonine are destabilizing because they force a global rearrangement in which the enzyme becomes drastically more solvent accessible (threonine) or inaccessible (serine). Serine appears to exert a global effect, which may be the source of its oligomeric destabilization. Interestingly, the effect of mutating phenylalanine to histidine depends upon which nitrogen in the ring is protonated. When histidine is protonated at the delta position, it is stabilizing because it makes contact with several residues in LigB. When protonated at the epsilon position, histidine sits in a hydrophobic pocket with no stabilizing interactions from other residues. Protonation of the histidine at the epsilon position (HIE) is more consistent with the experimental data, indicating that this may be the experimentally relevant form and thus explaining why this mutant has the lowest α:β ratio of all the mutants.
Materials and Methods
Commercially available reagents and solvents were purchased from Fisher Scientific, apart from ampicillin, which was purchased from Goldbio, and acrylamide/bisacrylamide, which was purchased from Bio-Rad Laboratories. DH5α and BL21 (DE3) chemically competent E. coli cells were purchased from New England Biolabs (Ipswich, MA, USA). Centrifugation was performed on a DuPont Instruments (Wilmington, DE, USA) Sorvall RC-5B centrifuge. All cells were lysed using an Avestin (Ottawa, ON, Canada) Emulsiflex-C5 high-pressure homogenizer.
Molecular Dynamics Simulations and Relative Binding Free Energy Calculation
All molecular dynamics simulations were performed with the GROMACS 2022.1 package and the Amber99sb force field, in triplicate to ensure reproducibility [25,26]. All wild-type and mutant structures were derived from the wild-type LigAB heterodimer (PDB: 1BOU) and were prepared from chains A and B. Missing loops and sidechains were modeled with Prime [27,28]. Protonation states for ionizable sidechains were determined with PROPKA [29,30]. The pmx extension was used to generate hybrid structures and topologies for the wild-type (λ = 0) and mutant (λ = 1) states [31,32]. Each system was placed into a dodecahedral periodic boundary box with a 10 Å buffer region. The system was solvated with the TIP3P water model and electroneutralized with counterions to a final concentration of 0.150 M. One thousand steps of steepest descent were used to energy-minimize the system, followed by isochoric/isothermal (NVT) and isobaric/isothermal (NPT) equilibrations for a total of 1 ns at 298 K. All-atom restraints were applied during equilibration and were gradually released during an extended 500 ps NPT run, starting with the sidechains and then the backbone. The equilibrated system was then subjected to a 50 ns simulation, and a total of 200 frames, evenly spread across the second half of the trajectory, were subjected to non-equilibrium simulations. Herein, the λ value was varied from 0 to 1 or vice versa over 200 ps with a softcore potential [33]. The free energy was determined based on the Crooks fluctuation theorem [34,35]. Simulations were performed with a 2 fs timestep. Long-range electrostatics were calculated with the particle mesh Ewald method with a grid spacing of 1.2 Å and fourth-order cubic interpolation. Short-range nonbonded interactions were calculated with an 11 Å cutoff. Temperature and pressure were coupled with the v-rescale thermostat and the Parrinello-Rahman barostat, respectively. Hydrogens were constrained with holonomic constraints through the LINCS method [36]. The solvent accessible surface area (SASA) was calculated in GROMACS based on the double cubic lattice method [37]. The SASA is defined as the surface traced by a sphere of the solvent probe radius around the van der Waals radii of the atoms in question without overlapping with other atoms; when computed for the entire protein, this defines the area that can be penetrated by a solvent molecule.
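To illustrate the final free-energy step, below is a minimal Python sketch of a Crooks Gaussian Intersection (CGI) estimator applied to forward and reverse work values harvested from the non-equilibrium transitions. CGI is one of the standard estimators in this kind of pmx workflow (alongside BAR and Jarzynski variants); whether it is the exact estimator used in this study is an assumption, and the function name and inputs are illustrative only.

```python
import numpy as np

def crooks_gaussian_intersection(w_forward, w_reverse):
    """Estimate dG from forward (0->1) and reverse (1->0) work values.

    By the Crooks fluctuation theorem, P_F(W) and P_R(-W) cross at W = dG.
    Fitting each work distribution as a Gaussian gives a closed-form crossing.
    Units of the returned dG match the units of the input work values.
    """
    wf = np.asarray(w_forward, dtype=float)
    wr = -np.asarray(w_reverse, dtype=float)      # negate the reverse work
    m1, s1 = wf.mean(), wf.std(ddof=1)
    m2, s2 = wr.mean(), wr.std(ddof=1)
    if np.isclose(s1, s2):                        # equal widths: midpoint
        return 0.5 * (m1 + m2)
    # Solve N(m1, s1)(x) = N(m2, s2)(x), a quadratic in x
    a = 1.0 / s2**2 - 1.0 / s1**2
    b = 2.0 * (m1 / s1**2 - m2 / s2**2)
    c = m2**2 / s2**2 - m1**2 / s1**2 + 2.0 * np.log(s2 / s1)
    roots = np.roots([a, b, c])
    # Keep the physical intersection lying between the two means
    lo, hi = sorted((m1, m2))
    real = [r.real for r in np.atleast_1d(roots)
            if abs(np.imag(r)) < 1e-9 and lo <= r.real <= hi]
    return real[0] if real else 0.5 * (m1 + m2)
```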
Protein Expression and Purification
All mutants for this study (A, S, V, L, H, T) were made via site-directed mutagenesis as previously described by Barry et al. [17]. An overnight culture of E. coli BL21 cells containing a pET15b plasmid carrying the genes for LigA and LigB (wild-type or mutant version) was inoculated into 15 mL of Luria Broth (LB) medium supplemented with ampicillin (100 µg/mL) and allowed to shake at 37 °C overnight. This seed culture was used to inoculate 2 L of LB supplemented with 100 µg/mL of ampicillin. Cells were allowed to shake at 200 rpm and 37 °C until an OD600 of 0.4-0.6 was reached, at which point gene expression was induced with the addition of isopropyl β-d-thiogalactopyranoside (IPTG) to a final concentration of 1 mM. The culture was allowed to grow for an additional 24 h after induction. Cells were harvested by centrifugation at 5422 × g for 10 min and lysed with an Avestin (Ottawa, ON, Canada) Emulsiflex-C5 high-pressure homogenizer with 5-7 passes at approximately 15,000 psi. The lysate was centrifuged at 21,728 × g for 40 min at 4 °C to pellet insoluble cellular debris. The supernatant was loaded onto 15 mL of HisPur Ni-NTA resin pre-equilibrated with bind buffer (50 mM HEPES, 300 mM NaCl, 10 mM imidazole, pH 8.0). The His-tagged LigAB enzyme was washed with wash buffer (50 mM HEPES, 300 mM NaCl, 20 mM imidazole, pH 8.0) and eluted with an imidazole step gradient. The elution buffers contained 50 mM HEPES and 300 mM NaCl at pH 8.0, with sequentially increasing imidazole: 62.5 mM (2.5 CV), 125 mM (2.5 CV), and 250 mM (5 CV). Fractions (5 mL) were collected for all elution steps.
SDS-PAGE and Analysis
One flow-through fraction (FT), one wash fraction (W), every other fraction eluted with 62.5 mM imidazole (E1), and every fraction eluted with 125 mM and 250 mM imidazole (E2 and E3, respectively) were analyzed by SDS-PAGE (15%/6% acrylamide). After electrophoresis, gels were incubated in a fixing solution (40% v/v ethanol, 10% v/v glacial acetic acid) for at least 20 min. The gels were then stained using a Coomassie stain (10% acetic acid, 40% methanol, 605 µM Coomassie Brilliant Blue) for at least 20 min. The bands were resolved using a destain solution (10% acetic acid, 20% methanol). The gels were immediately digitized on a Typhoon FLA 9000 using a 473 nm laser. The Image Quant program (GE Healthcare Biosciences, Chicago, IL, USA) was used to analyze each mutant gel and quantify the relative amount of protein in each subunit band.
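A note on how band intensities translate into the α:β ratios discussed throughout: Coomassie signal scales roughly with protein mass, so equal molar amounts of the two subunits give unequal band intensities. The sketch below shows one reasonable mass-normalized conversion; whether the reported ratios were normalized this way is an assumption, and the example values are illustrative.

```python
LIGA_KDA = 17.711   # alpha subunit, His6-tagged
LIGB_KDA = 33.292   # beta subunit

def alpha_beta_molar_ratio(intensity_alpha: float, intensity_beta: float) -> float:
    """Molar alpha:beta ratio, assuming stain intensity tracks band mass."""
    return (intensity_alpha / LIGA_KDA) / (intensity_beta / LIGB_KDA)

# Equal band intensities imply a molar excess of the smaller subunit:
print(alpha_beta_molar_ratio(1000.0, 1000.0))   # ~1.88
```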
Conclusions
While we previously reported that mutations of LigAB at the F103α position result in broader substrate utilization profiles, these mutations had variable impacts on the overall stability of the enzyme complex. Analysis of SDS-PAGE separations for these proteins indicates that the non-polar mutants (V, L, and A) purify with a greater proportion of LigB than wild-type. This may be because the mutation causes a minor destabilization of LigA, which causes it to purify at lower amounts during the elution steps, or because the LigB protein somehow interacts with the LigA/B dimer even in the absence of its cognate LigA. These mutations may prevent the proteins from properly forming the correct oligomeric form, instead forming only the monomeric heterodimer complex, higher-order oligomers, or versions with skewed α:β ratios. Conversely, the polar mutations, as they become bulkier, yield more LigA relative to LigB, which could be due to destabilization of the dimer interface resulting from the mutation. These mutations form the oligomers observed for wild-type but also allow for the formation of higher-order oligomers not observed for wild-type LigAB. This study provided insight into the stability and oligomeric states of a series of LigAB mutants that were previously identified to alter the substrate utilization profile of this enzyme, and caution is always warranted when attempting to reengineer protein specificity. Further studies of these LigAB mutants are planned to determine whether the subunits can be individually expressed and the active complex reconstituted, in order to enable determination of the thermodynamic parameters of dimer association. Additionally, investigation of the impact of oligomerization state(s) on protein catalysis and stability will clarify the role of the observed monomers and trimers of the αβ LigAB heterodimer, so that future efforts at enzyme redesign can ensure high levels of activity for mutant versions able to catabolize a variety of substrates.
"Biology",
"Chemistry"
] |
BIOCHEMICAL IMPLICATION OF ADMINISTRATION OF METHANOL EXTRACT OF OCIMUM GRATISSIMUM LEAF ON HAEMATOLOGICAL PROFILE OF WISTAR RATS
In the search for medicinal plants that could provide an ameliorative measure for patients with anaemic disorders, the methanol extract of O. gratissimum leaf was administered to Wistar rats to assess its effect on the haematological profile. Twenty-eight (28) male Wistar rats weighing 180-200 g were randomly picked and placed into plastic cages labeled A-D. Group A served as the control group while groups B-D were the test groups. The animals in group A were administered distilled water orally by gavage. Group B was administered 50 mg/kg body weight of methanol extract of O. gratissimum, group C was administered 100 mg/kg body weight, while group D was administered 200 mg/kg body weight of methanol extract of O. gratissimum for 14 days. Blood was collected from all the test and control rats by cardiac puncture using a disposable syringe and needle and then dispensed into tubes containing EDTA. The extract produced a significant increase (p<0.05) in RBC, Hb, PCV and platelet counts. Moreover, the extract produced no significant (p>0.05) difference in MCV, MCH, MCHC, RDW and WBC counts when compared with the normal control. Therefore, it is logical to conclude that the extract of O. gratissimum, when properly harnessed, might be useful in the management of anaemic conditions owing to its erythropoietic, haematopoietic and thrombopoietic effects.
INTRODUCTION
Plants such as Ocimum gratissimum are among the richest resources of drugs for traditional and modern systems of medicine, nutraceuticals, food supplements, pharmaceutical intermediates and chemical entities for synthetic drugs (Mbata and Saikia, 2006). O. gratissimum is used in therapeutic regimens for conditions such as epilepsy and diarrhoea, and also serves as a food supplement with medicinal effects. The medicinal use of plant products such as O. gratissimum can be traced to Ayurvedic origins. In developing countries, and Nigeria in particular, several plants of folkloric medicine are used in the treatment of diseases such as malaria, diabetes, obesity, atherosclerosis, anaemia and opportunistic infections associated with HIV/AIDS, as well as in microbial and anti-inflammatory management (Adeyemi et al., 2002).
Medicinal plants are plants which, when administered to man or animals (mammals), exert a pharmacological action on them. Medicinal remedies are seen to have the various advantages of traditional medicine, namely low cost, affordability, acceptability and, perhaps, low toxicity (reduced side effects). Herbs make up most of the plant sources for the production of useful drugs utilized by people worldwide (Agbo et al., 2000). Phytochemical evaluation of Ocimum gratissimum shows that it is rich in alkaloids, tannins, phytates, flavonoids and oligosaccharides (Ijeh et al., 2004). The plant O. gratissimum is one of those plants widely known and used for both medicinal and nutritional purposes. It is a perennial plant that is widely distributed in the tropics of Africa and Asia. It belongs to the family Labiatae and is the most abundant species of the genus Ocimum. The common names of the plant are Basil, Fever plant or Tea bush, and vernacular names include Daidoya tagida (Hausa), Nichonwu (Igbo), Tanmotswangiwawagi (Nupe), Ntong (Efik) and Efinrin (Yoruba) (Idris et al., 2011). It is woody at the base and has an average height of 1-3 meters. The leaves are broad and narrowly ovate, usually 5-13 cm long and 3-9 cm wide. It is a scented shrub with lime-green leaves. The plant is consumed by the Igbos as a leafy vegetable, and its nutritional importance is based on its usefulness as a seasoning due to its aromatic flavour. It is also used by the Igbos in the management of the baby's cord, as it is believed to keep the cord and wound surface sterile, as well as in the treatment of both viral and microbial infections. In the coastal area of Nigeria, the plant is used in the treatment of epilepsy, high fever and diarrhoea (Ladipo et al., 2010). This research was aimed at determining the effect of the methanol extract of O. gratissimum on the haematological profile.
Materials
Plant materials
Fresh leaves of O. gratissimum were collected from the Federal Secretariat Farms, Calabar, Cross River State, Nigeria. The leaves were taken to the Department of Botany, University of Calabar, for identification and authentication. A voucher specimen (number 201) was deposited at the Department's herbarium for future reference.
EXPERIMENTAL ANIMALS
The Wistar rats were obtained from the animal holding unit of the Department of Medical Biochemistry, Cross River University of Technology. The animals were allowed to acclimatize for a period of 7 days in a well-ventilated room at a room temperature of 29°C and a relative humidity of 70%, with a 12-hour natural light-dark cycle. They were allowed feed and water ad libitum. Good hygiene was maintained by daily cleaning and removal of faeces and spills from their cages.
Preparation of methanol extract of O. gratissimum leaves
The leaves of O. gratissimum were collected and dried at room temperature for a period of 21 days until a constant weight was obtained. The dried leaves were then pulverized to powder with a machine blender and sieved. Thereafter, 400 g of the pulverized plant material was soaked in 1200 ml of 70% methanol for 72 hours. This was followed by vacuum filtration, and the extract was concentrated using a rotary evaporator and a water bath at 40°C to obtain a solvent-free extract, which was stored in a refrigerator at 4°C.
Animal grouping and administration of extract
Twenty-eight (28) male Wistar rats were randomly picked and placed into plastic cages labeled A-D. Group A served as the control group while groups B-D were the test groups. The animals in group A were administered distilled water orally. Group B was administered 50 mg/kg body weight of methanol extract of O. gratissimum, group C was administered 100 mg/kg body weight, while group D was administered 200 mg/kg body weight of methanol extract of O. gratissimum.
Blood Sample Collection
Blood was collected from all the test and control rats by cardiac puncture using a disposable syringe and needle and then dispensed into tubes containing the anticoagulant ethylenediaminetetraacetic acid (EDTA). The specimens were labeled with identification letters/numbers. The EDTA samples were kept at room temperature until processing, which occurred within 30 minutes of collection.
Laboratory analysis
Full blood count was performed using a KN-21N Haematology analyzer (Sysmex, Kobe, Japan), a three-part auto-analyzer able to test parameters including Hb concentration, PCV, RBC count, MCH, MCV, MCHC, WBC count, and PLT count. Standardization, calibration of the instrument, and processing of the samples were done according to the manufacturer's instructions.
Procedures
Each blood sample was mixed well, and then approximately 20 μL was aspirated by placing the analyzer's sampling probe into the blood sample and depressing the start button. Results of the analysis were displayed after about 30 seconds, after which the analyzer generated a hard copy of the results on thermal printing paper.
Statistical analysis
The data obtained were analyzed using one-way Analysis of Variance (ANOVA) followed by a post hoc test at p<0.05. The Statistical Package for the Social Sciences (SPSS) software, version 20.0, was used for the analysis.
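For readers who wish to reproduce the stated analysis outside SPSS, the following is a minimal Python sketch of a one-way ANOVA with a Tukey post hoc test. The group values below are placeholders, not the study's data, and Tukey is only one of several post hoc tests the original may have used.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RBC counts (x10^6/uL) for the four groups, n = 5 each
nc     = np.array([6.9, 7.1, 7.0, 6.8, 7.2])   # normal control
og_50  = np.array([7.8, 8.0, 7.9, 8.1, 7.7])   # 50 mg/kg body weight
og_100 = np.array([8.2, 8.4, 8.1, 8.3, 8.5])   # 100 mg/kg body weight
og_200 = np.array([8.6, 8.5, 8.8, 8.7, 8.9])   # 200 mg/kg body weight

f_stat, p_value = f_oneway(nc, og_50, og_100, og_200)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons at alpha = 0.05
values = np.concatenate([nc, og_50, og_100, og_200])
groups = ["NC"] * 5 + ["OG50"] * 5 + ["OG100"] * 5 + ["OG200"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```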
RESULTS
The results below indicate the effect of administration of methanol extract of Ocimum gratissimum on the haematological profile of Wistar albino rats. The red blood cell (RBC), haemoglobin (Hb) and haematocrit (PCV) concentrations showed a significant increase (P<0.05) following administration of the extract at 50, 100 and 200 mg/kg body weight when compared with the normal control (Figure 1). It was also observed that the extract produced a significant (P<0.05) increase in platelet count at 50 mg/kg body weight when compared with the normal control (Figure 2). The extract produced no significant difference in white blood cell (WBC) count at 50, 100 and 200 mg/kg body weight when compared with the normal control (Figure 2). The effect of administration of the methanol leaf extract of O. gratissimum on MCV indicated a significant (p<0.05) increase at 100 mg/kg body weight when compared with the normal control, but no significant difference in MCV at 200 mg/kg body weight (Table 1). Likewise, the extract produced a significant (p<0.05) decrease in MCH at 50 mg/kg body weight when compared with the normal control, no significant difference at 100 mg/kg body weight, and a significant (p<0.05) increase at 200 mg/kg body weight. Moreover, the extract produced no significant difference in MCHC at 50 and 200 mg/kg body weight when compared with the normal control (Table 1). The extract produced no significant (P>0.05) difference in RDW at 50, 100 and 200 mg/kg body weight when compared with the normal control.
It was also observed that the administration of O. gratissimum produced no significant (P>0.05) difference in MPV and PDW at 50, 100 and 200 mg/kg body weight when compared with the normal control (Figure 3).
Figure 1: Effect of administration of methanol extract of O. gratissimum on red blood cells, haemoglobin and haematocrit (packed cell volume) of Wistar rats.
Values are expressed as mean ± SD; n = 5 rats per group. For bars of the same colour, a = significantly different from NC (P<0.05). Legend: NC = normal control; OG 50 = dose I group, which received 50 mg/kg body weight of methanol extract; OG 100 = dose II group, which received 100 mg/kg body weight of methanol extract; OG 200 = dose III group, which received 200 mg/kg body weight of methanol extract.
Figure 3: Effect of administration of methanol extract of O. gratissimum on MPV (µm³) and PDW (%).
Values are expressed as mean ± SD; n = 5 rats per group. No significant difference among the groups. Legend: NC = normal control; OG 50 = dose I group, which received 50 mg/kg body weight of methanol extract; OG 100 = dose II group, which received 100 mg/kg body weight of methanol extract; OG 200 = dose III group, which received 200 mg/kg body weight of methanol extract.
DISCUSSION
Haematological parameters are useful markers for ascertaining the adverse effects of plant extracts or drugs on blood constituents (Ashafa et al., 2010). Haematological parameters are determined in order to assess the degree of well-being of an animal (Ajayi and Raji, 2012); thus, they are good indicators of the physiological and biochemical status of animals (Khan and Zafar, 2005).

The major functions of the white blood cells and their differentials are to fight infections, defend the body by phagocytosis against invasion by foreign organisms, and transport and distribute antibodies in the immune response. Thus, animals with low white blood cell counts are exposed to a high risk of disease infection, while those with high WBC counts are capable of generating antibodies in the process of phagocytosis, have a high degree of resistance to disease (Soetan et al., 2013), and show enhanced adaptability to local environmental and disease-prevalent conditions (NseAbasi et al., 2014). A decreased WBC count can also indicate a deleterious, suppressive effect of an extract on leucocytes and their production from the bone marrow (Odesanmi et al., 2010). In the present research, the non-significant difference in WBC counts following administration of the methanol extract of O. gratissimum at 50, 100 and 200 mg/kg body weight suggests that the extract might contain no bioactive ingredient to fight against infection or defend the body against invasion by foreign organisms.

Packed cell volume (PCV), also known as haematocrit (Ht or Hct) or erythrocyte volume fraction (EVF), is the percentage (%) of red blood cells in blood (Purves et al., 2003). It measures the percentage volume of red blood cells in the blood; anaemic conditions are associated with low production of red blood cells (Guenter and Lawrence, 2005). Packed cell volume is also involved in the transportation of oxygen and absorbed nutrients; an increased PCV indicates better transportation and results in increased primary and secondary polycythemia (Isaac et al., 2013). The marked increase in PCV observed in this work suggests that the plant extract, at varying concentrations, may positively influence the osmoregulatory and haematopoietic systems of the blood and so enhance the management of anaemia. This is contrary to the report of Audu et al. (2014), which acknowledged that a significant reduction in PCV could indicate severe anaemia caused by destruction of erythrocytes or by haemodilution resulting from impaired osmoregulation across the epithelium.

In this study, treatment with the plant extract led to a significant increase in platelets (PLTs) in rats administered 50, 100 and 200 mg/kg body weight of the methanol extract. According to McLellan et al. (2003), an increase in PLTs in experimental rats indicates a good action on the blood's oxygen-transporting ability as well as on thrombopoietin. The observed increase in PLTs in this study therefore indicates that the extract may improve the blood's oxygen-transporting ability. Increases in red blood cells (RBCs) and Hb were also observed. Such increases in haematological indices could indicate erythrocyte synthesis (Dede et al., 2002). Therefore, the increases observed in RBC count and Hb may connote that the extract enhances haematopoiesis and/or erythropoiesis. Likewise, the oxygen-transporting ability of the blood and the oxygen supplied to the tissues may be improved following administration of the extract.
Other haematological parameters, such as MCH, MCHC and MCV, displayed no significant difference following administration of the extract of O. gratissimum at varying concentrations. Since MCH, MCHC and MCV were not affected by the treatment at 50, 100 and 200 mg/kg body weight, this suggests that there is neither incorporation of the extract of O. gratissimum into Hb or RBCs, nor an alteration in the morphology or fragility of the RBCs.
CONCLUSION
From the data obtained, we conclude that the extract of O. gratissimum might be useful in the management of anaemic conditions due to its erythropoietic, haematopoietic and thrombopoietic effects.
"Medicine",
"Biology"
] |
Inhibitor of Apoptosis (IAP)-like Protein Lacks a Baculovirus IAP Repeat (BIR) Domain and Attenuates Cell Death in Plant and Animal Systems*
A novel Arabidopsis thaliana inhibitor of apoptosis was identified by sequence homology to other known inhibitor of apoptosis (IAP) proteins. Arabidopsis IAP-like protein (AtILP) contained a C-terminal RING finger domain but lacked a baculovirus IAP repeat (BIR) domain, which is essential for anti-apoptotic activity in other IAP family members. The expression of AtILP in HeLa cells conferred resistance against tumor necrosis factor (TNF)-α/ActD-induced apoptosis through the inactivation of caspase activity. In contrast to the C-terminal RING domain of AtILP, which did not inhibit the activity of caspase-3, the N-terminal region, despite displaying no homology to known BIR domains, potently inhibited the activity of caspase-3 in vitro and blocked TNF-α/ActD-induced apoptosis. The anti-apoptotic activity of the AtILP N-terminal domain observed in plants was reproduced in an animal system. Transgenic Arabidopsis lines overexpressing AtILP exhibited anti-apoptotic activity when challenged with the fungal toxin fumonisin B1, an agent that induces apoptosis-like cell death in plants. In AtILP transgenic plants, suppression of cell death was accompanied by inhibition of caspase activation and DNA fragmentation. Overexpression of AtILP also attenuated effector protein-induced cell death and increased the growth of an avirulent bacterial pathogen. The current results demonstrated the existence of a novel plant IAP-like protein that prevents caspase activation in Arabidopsis and showed that a plant anti-apoptosis gene functions similarly in plant and animal systems.
All living organisms use a process of cell suicide to achieve and maintain homeostasis during normal development as well as in response to environmental stress or during pathogen challenge (1). This functionally conserved process, known as programmed cell death (PCD) or apoptosis, is genetically regulated and associated with distinct morphological and biochemical characteristics. Extensive study over the past decade has illuminated the biological and molecular mechanisms of the regulation of apoptosis in animal systems (2-7). Apoptosis is triggered by the sequential activation of cysteine proteases known as caspases, which results in protein cleavage and the breakdown of DNA molecules. This apoptotic cascade is regulated by both initiators and inhibitors and can be activated by diverse stimuli. Caspases are synthesized as zymogens that are activated by proteolytic cleavage at specific aspartic acid residues in the P1 position (8). Compartmentalization of caspases and their cofactors suggests that two major apoptotic pathways exist. One pathway of apoptosis, observed in animal systems, can be induced by the deprivation of serum from tissue culture cells, leading to the release of cytochrome c from mitochondria. Apoptosis activating factor-1 (Apaf1) and cytochrome c form a complex with procaspase-9, which is then activated. Active caspase-9 triggers the common caspase cascade by cleaving procaspase-3 (9-11). Caspase-3 is responsible either wholly or in part for the proteolytic cleavage of many key proteins, including poly(ADP-ribose) polymerase and lamin A (12-14). The existence of another apoptosis pathway derives from the observation that caspase-8 is activated when cells are challenged with tumor necrosis factor (TNF-α) or Fas ligand (15-18). Loss of caspase activity is observed in cells that express the viral proteins CrmA, from cowpox, and p35, from baculovirus (19-23). Furthermore, overexpression of these viral caspase inhibitors in insect, nematode, and mammalian cells results in resistance to apoptosis, providing evidence that the components of the apoptotic pathway are highly conserved throughout evolution. This has led to speculation that functional equivalents of these viral proteins may exist in higher organisms.
The inhibitor of apoptosis (IAP) family of proteins plays a central role in apoptotic and inflammatory processes, conferring protection against cell death. IAP family members interfere with the transmission of intracellular death signals by inhibiting caspase-dependent apoptotic pathways. The IAP proteins were initially identified in baculovirus as factors that prevented host cell apoptosis, allowing time for the virus to replicate (24,25). Since then, eight mammalian IAPs (XIAP, HIAP1, HIAP2, ILP2, MLIAP, NAIP, BRUCE, and survivin) and three Drosophila IAP homologs (DIAP1, DIAP2, and Deterin) have been identified (26-35). IAP proteins exhibit a modular structure characterized by the presence of one or more baculovirus IAP repeat (BIR) domains. The BIR domain is a zinc-binding fold of approximately 70 amino acid residues that is essential for the anti-apoptotic properties of IAP proteins. The fact that all known IAP members have a BIR domain suggests that this domain plays a pivotal role in mediating cellular protection. In addition, with the exception of NAIP, all known IAP family members also contain a RING domain in their C terminus, defined by seven cysteine residues and one histidine residue that together coordinate two zinc atoms (36,37). The RING domain confers E3 ubiquitin ligase activity and has been suggested to play a role in apoptosis regulation by directing the ubiquitination of target proteins for degradation by the proteasome (38-40). The RING domain is not essential for apoptosis inhibition by human IAP family members, which suggests that the BIR domain is sufficient to protect cells from apoptosis (41-43).
The genes that control PCD are functionally conserved across wide evolutionary distances (44-46). For example, homologues of the mammalian Bax-inducible cell death inhibitor BI-1 have been identified in several plants, including Arabidopsis, rice, tobacco, and barley (47-50). In addition, animal apoptotic regulators, such as human Bcl-2 and Bcl-xl as well as nematode CED-9, can either induce or suppress cell death in transgenic plants (51-53). In plants, PCD occurs during developmental processes, such as flower development, embryogenesis, seed germination, and vessel and trachea formation. Of note, PCD is crucial for a plant defense response termed the hypersensitive response (HR), which serves to restrict the spread of pathogens (54,55). Studies in plant systems have shown that the biochemical and morphological hallmarks of apoptosis, such as cytoplasmic shrinkage, nuclear condensation, and DNA laddering, are similar in animal and plant cells (56-58). The cytosolic caspase-mediated apoptotic pathway is well defined in animal cells but has yet to be demonstrated in plant cells. However, evidence from recent studies has suggested that there are some similarities between plant apoptosis and caspase-mediated apoptosis in animal cells, with the exception of the presence of IAP-like proteins. For example, in tobacco cells, caspase-1-like proteases participate in HR, and the presence and subcellular localization of caspase-3-like proteases in barley has been reported (59-62).
In the current study, we identified and characterized a novel Arabidopsis gene, AtILP (for Arabidopsis thaliana IAP-like protein), which encodes a RING finger protein with homology to mammalian IAPs. The expression of AtILP efficiently suppressed apoptosis induced by TNF-α/ActD and the fungal toxin fumonisin B1 (FB1) by blocking the activation of caspases in HeLa cells. Interestingly, despite lacking a BIR domain, an N-terminal fragment of AtILP conferred anti-apoptotic activity in Arabidopsis. Overexpression of the N-terminal domain of AtILP resulted in the suppression of FB1-induced cell death and attenuated cell death caused by the bacterial effector AvrRpt2. These results suggested that AtILP may act as a negative regulator of PCD in Arabidopsis.
Cell Culture and Cell Viability Assay-Human cervical epithelioid carcinoma (HeLa) cells were purchased from the American Type Culture Collection (ATCC). HeLa cells were cultured in Dulbecco's modified Eagle's medium (DMEM; Invitrogen) supplemented with 10% heat-inactivated fetal bovine serum (FBS; Invitrogen), 2 mM L-glutamine, 100 units/ml penicillin, and 100 units/ml streptomycin in a humidified CO2 incubator. Cells were transfected with the indicated expression vectors using Lipofectamine (63). Stable transfectants were selected in the presence of G418 (800 µg/ml).
Cell viability was determined by the crystal violet staining method. Briefly, HeLa cells plated in a 12-well dish were exposed to TNF-α (100 ng/ml)/ActD (100 ng/ml). Cells were stained with a solution of 0.5% crystal violet in 30% ethanol and 3% formaldehyde for 10 min at room temperature, after which the plates were washed three times with tap water. After drying, cells were lysed in 1% SDS, and dye uptake was measured at 550 nm using a 96-well plate reader. Cell viability was calculated as dye intensity relative to untreated samples.
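The viability calculation described above reduces to a simple normalization of dye uptake; a minimal Python sketch (the plate-reader values shown are placeholders, not data from this study):

```python
# Crystal violet viability: A550 of treated wells relative to untreated wells.
def percent_viability(a550_treated: float, a550_untreated: float) -> float:
    """Viability (%) as dye intensity relative to the untreated control."""
    return 100.0 * a550_treated / a550_untreated

print(percent_viability(0.42, 0.50))  # -> 84.0 (84% viable)
```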
DEVDase Activity Assay-Cell pellets were washed with ice-cold PBS and then resuspended in 100 mM HEPES buffer (pH 7.4) containing protease inhibitors (5 mg/ml aprotinin and pepstatin, 10 mg/ml leupeptin, and 0.5 mM phenylmethylsulfonyl fluoride). The cell suspension was lysed by three freeze-thaw cycles, and then the cytosolic fraction was obtained by centrifugation at 100,000 × g for 1 h at 4°C. DEVDase activity was evaluated by measuring proteolytic cleavage of the chromogenic substrate Ac-DEVD-pNA, which serves as a substrate for caspase-3-like proteases. Briefly, cell lysate (40 µg of protein) was mixed with 150 µl of reaction buffer containing Ac-DEVD-pNA (240 µM) in a 96-well plate. The reaction mixture was incubated at 37°C for 90 min. The increase in enzymatically released pNA was measured every 15 min by absorbance at 405 nm; DEVDase activity was calculated from the initial velocity.
For measuring DEVDase activity in plants, leaves were ground and homogenized in caspase extraction buffer (50 mM HEPES (pH 7.5), 1 mM EDTA, 1 mM DTT, 1% BSA, 1 mM PMSF, 20% glycerol). Samples were mixed with 50 µl of caspase assay buffer (caspase extraction buffer containing 150 µM Ac-DEVD-pNA) and then incubated at 37°C for 1 h. The increase in enzymatically released pNA was measured every 15 min by absorbance at 405 nm; DEVDase activity was calculated from the initial velocity.
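Both DEVDase assays reduce to an initial-velocity estimate from the timed A405 reads; a minimal sketch using a linear least-squares fit (the exact fitting procedure is not specified in the text, and the readings below are placeholders):

```python
import numpy as np

def initial_velocity(times_min, a405):
    """Slope (dA405/min) from a linear least-squares fit of the early reads."""
    slope, _intercept = np.polyfit(times_min, a405, 1)
    return slope

t = np.array([0, 15, 30, 45, 60, 75, 90])                  # minutes
a = np.array([0.05, 0.11, 0.17, 0.22, 0.28, 0.33, 0.38])   # released pNA (A405)
print(f"DEVDase activity ~ {initial_velocity(t, a):.4f} A405/min")
```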
Plants-A. thaliana seedlings were germinated on MS medium containing 2% sucrose and 0.6% Phytagel and maintained in a temperature- and light-controlled growth chamber. Arabidopsis seedlings were grown for 14 days before being transferred to fresh MS plates or to fresh MS plates supplemented with FB1.
For the DNA fragmentation assay, 10 µg of genomic DNA was separated by electrophoresis on a 0.8% agarose, 0.6% MetaPhor agarose gel and then transferred to a Hybond membrane. As a probe, 50 ng of total genomic Arabidopsis DNA was labeled using a commercially available random labeling kit. Following hybridization, the membrane was washed with 0.1× SSC, 0.1% SDS at 65°C for 2 h.
Bacteria-Bacterial strains were grown at 28°C on KB medium containing the appropriate antibiotics for selection. For assessing ion leakage and to score the HR phenotype, plants were infiltrated with 10^7 cfu/ml (A600 = 0.2) of Pseudomonas syringae pv. phaseolicola (Pph) strain NPS3121 using a needleless 1-ml syringe (see Table 1 and Fig. 6). Pph strain NPS3121 harboring AvrRpt2 was used for the ion leakage and cell death assays. For ion leakage measurements, eight leaf discs (8 mm in diameter) were removed immediately following infiltration (t = 0) and allowed to float in 40 ml of water. After 30 min, the wash water was replaced with 10 ml of fresh water, and then conductance over time was measured using a Fisher brand conductivity meter.
For growth experiments using P. syringae pv. maculicola (Pma) strain M6CΔE (64) harboring empty vector (pVSP61) or its derivative encoding AvrRpt2 (Fig. 7), the leaves of 5-week-old plants were inoculated with bacterial suspensions in 10 mM MgCl2 using a needleless 1-ml syringe. After the indicated periods of time, three leaf discs for each sample were ground in 10 mM MgCl2 and then serially diluted and plated to determine bacterial number.
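The bacterial enumeration above follows the usual serial-dilution back-calculation; a minimal worked sketch with illustrative numbers (the dilution factor and volumes are assumptions, not the study's values):

```python
def cfu_per_sample(colonies: int, dilution_factor: float,
                   plated_ul: float, homogenate_ul: float) -> float:
    """Back-calculate total cfu in the original leaf-disc homogenate."""
    cfu_per_ul_plated = colonies / plated_ul          # cfu per uL plated
    return cfu_per_ul_plated * dilution_factor * homogenate_ul

# e.g. 87 colonies from 10 uL of a 1e4-fold dilution of a 200 uL homogenate
print(f"{cfu_per_sample(87, 1e4, 10.0, 200.0):.2e} cfu")  # -> 1.74e+07
```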
Subcellular Localization of AtILP Fusion Protein-PCR was used to generate a cDNA fragment encoding full-length AtILP. The cDNA fragment was digested with XbaI and BamHI and then ligated in-frame with soluble modified green fluorescent protein (smGFP) to create AtILP::smGFP. The AtILP::smGFP fusion construct was introduced into Arabidopsis protoplasts using polyethylene glycol-mediated transformation. The expression of red fluorescent protein fused to a nuclear localization signal (RFP::NLS) was used as a positive control for nuclear localization. Transformed protoplasts were incubated at 22°C in the dark. Expression of fusion protein was observed 2 days after transformation by fluorescence microscopy (Olympus AX70) using standard FITC and rhodamine filters.
Identification of an Apoptosis Inhibitor in Arabidopsis and Demonstration of Anti-apoptotic Activity in Animal Cells-Some aspects of the signaling mechanisms that control apoptosis, including IAP family members, are functionally conserved across wide evolutionary distances. HIAP1 and HIAP2 are functional anti-apoptotic proteins in Homo sapiens (65-67). To determine whether higher plants carry HIAP-like proteins, homology searches against the Arabidopsis genome sequence database were performed using the sequences of HIAP1 and -2 as the queries. The searches yielded one gene, At4g19700, encoding a putative protein with significant similarity to other IAPs. In particular, the protein contained a RING domain in its C terminus. This protein was named AtILP, for A. thaliana IAP-like protein. The full-length AtILP cDNA was isolated from an Arabidopsis cDNA library. It consisted of 915 nucleotides encoding a putative open reading frame of 305 amino acids (Fig. 1A). Amino acid sequence alignment of the RING domain of AtILP with human HIAP1, HIAP2, XIAP, and KIAP showed that AtILP encodes a perfect C-terminal C3HC4 signature (Fig. 1B). Aside from the highly conserved RING domain, AtILP did not appear to encode any other known conserved domains. IAP proteins are characterized by the presence of one or more BIR domains, a structurally distinct, zinc finger fold domain composed of approximately 70 amino acid residues. It is widely acknowledged that the BIR domain is essential for the anti-apoptotic properties of the IAP proteins in animal systems. To determine whether AtILP possessed anti-apoptotic activity, despite not having a BIR domain, HeLa cells were transfected with expression vectors for AtILP or GPx or empty vector (pcDNA) as a control using Lipofectamine (63), and the response to TNF-α/ActD-induced cell death was analyzed. GPx was used as a positive control for apoptosis inhibition (68,69). As shown in Fig. 2, TNF-α/ActD-induced cell death was considerably reduced in cells expressing AtILP, even more so than in GPx-expressing cells. The viability of AtILP-expressing cells exceeded 85%, whereas that of GPx-expressing cells was approximately 55%. These results indicated that AtILP is a RING finger protein with structural and possibly functional homology to human IAPs and that a gene involved in apoptosis inhibition in plants functions in a similar manner in an animal system.
The N-terminal Domain of AtILP Blocks TNF-α/ActD-induced Caspase Activation-To define the molecular determinants of AtILP anti-apoptotic activity, four different AtILP protein fragments were constructed (Fig. 3B). HeLa cells were transfected with expression vectors for full-length AtILP or one of the AtILP fragments (fragment a, b, c, or d), and then anti-apoptotic activity was measured. Because AtILP did not have a BIR domain, and computer-based sequence homology searches revealed no other similarities with other IAP proteins, we initially expected that the functional domain would map to the C-terminal RING domain. As shown in Fig. 3A, cells transfected with empty vector or fragments c and d underwent cell death in response to TNF-α/ActD. In contrast, transfection with expression vectors for full-length AtILP, fragment a, or fragment b significantly reduced TNF-α/ActD-induced apoptosis. Fragments a and b retained approximately 75% of the inhibitory activity of the full-length protein, whereas the anti-apoptotic activity of fragments c and d, which contained the C-terminal RING domain, was comparable with control conditions (Fig. 3B). The various AtILP fragments are depicted schematically in Fig. 3B. Full-length AtILP and the AtILP fragments were all stably expressed in HeLa cells (Fig. 3C). These results indicated that fragment b, which contained the N-terminal 150 amino acid residues of AtILP, contains the main determinant(s) of anti-apoptotic activity.
Because caspases are critical mediators of apoptosis, we next examined whether caspase inactivation played a role in the anti-apoptotic activity of AtILP. DEVDase activity was evaluated by measuring the proteolytic cleavage of a chromogenic substrate, Ac-DEVD-pNA, which serves as a substrate of caspase-3-like proteases. As seen in Fig. 4, the inhibitory effects of full-length AtILP and each of the AtILP fragments on DEVDase inactivation correlated with the results of the cell viability assay. These data clearly suggested that the activity of AtILP in inhibiting cell death is mediated by the N-terminal domain through the suppression of caspase activation (Fig. 4).
The N-terminal Domain of AtILP Confers Resistance to FB1-induced Apoptosis in Arabidopsis-To evaluate the role of AtILP in apoptosis inhibition in plants, transgenic Arabidopsis lines that constitutively expressed full-length AtILP or the N-terminal (amino acids 1-150) or C-terminal (amino acids 151-304) domain of AtILP under the control of the cauliflower mosaic virus (CaMV) 35S promoter were generated. Several transgenic plants that exhibited high levels of expression of full-length, N-terminal, or C-terminal AtILP were selected for further analysis (supplemental Fig. 1). The N-terminal and C-terminal domains consisted of 150 and 154 amino acids, respectively. A striking example of plant apoptosis, HR is a cell death program triggered in host cells at or around the site of pathogen infection, resulting in cellular collapse and the formation of necrotic lesions (70). Because it is well known that the fungal toxin FB1 induces HR in plants (56,71,72), we examined the effect of the overexpression of full-length AtILP or the N- or C-terminal domain on FB1-induced HR in Arabidopsis. Wild-type Arabidopsis ecotype Col-0 and transgenic Arabidopsis plants were grown for 2 weeks on MS agar medium, transferred to MS medium containing 3 µM FB1, and then observed for morphological changes 4 days after transfer. As shown in Fig. 5A, the leaves of wild-type and transgenic plants harboring the C-terminal fragment of AtILP were completely macerated, and death lesions were readily apparent. In contrast, transgenic plants expressing full-length AtILP or the N-terminal domain exhibited some lesions in the upper leaves but overall were highly resistant to FB1-induced cell death compared with wild-type and C-terminal domain transgenic plants.
Caspase-like activity and a role for caspase-like proteases in HR have been reported in plants, and HR can be prevented through the inhibition of caspase-like proteases (59,73). To determine whether the anti-apoptotic activity of AtILP in plants exposed to FB1 was mediated by caspase-like protease inactivation, as was seen in HeLa cells (Fig. 4), protein extracts from wild-type and transgenic plants were prepared. As seen in Fig. 5B, caspase inactivation correlated with the ability of full-length AtILP and the N- and C-terminal domains to suppress FB1-induced apoptosis. Treatment with FB1 induced the activation of caspase-like proteases in wild-type and C-terminal domain transgenic plants. In contrast, the overexpression of full-length AtILP or the N-terminal domain effectively suppressed caspase-like protease activation (Fig. 5B). These results suggested that the isolated N-terminal domain of AtILP can prevent plant cell death by suppressing caspase-like protease activation.
Effect of AtILP on the Interaction between Arabidopsis and the Bacterial Pathogen P. syringae-It was next examined whether the expression of AtILP altered effector protein-induced HR and the associated cell death. Gram-negative plant pathogenic bacteria secrete a complex set of effectors, making it difficult to detect changes in HR induced by a single effector protein. To overcome this limitation, a strain of Gram-negative phytopathogenic bacteria, Pph strain NPS3121, was used that expressed the avirulence gene AvrRpt2. Using this strain, it was possible to measure HR and electrolyte leakage in response to an avirulent bacterial pathogen. The leaves of 6-week-old Arabidopsis plants (wild-type and AtILP transgenic lines) were infiltrated with Pph strain NPS3121 expressing AvrRpt2 (Pph (AvrRpt2)) at a dose of 10⁷ cfu/ml (see "Experimental Procedures"). Within 16 h, most wild-type plants and transgenic plants overexpressing the C-terminal domain of AtILP exhibited confluent tissue collapse at the site of pathogen infiltration, a characteristic feature of HR-associated cell death. However, most of the leaves of the transgenic plants overexpressing full-length AtILP or the N-terminal domain did not show serious signs of HR, although a small percentage developed a weak HR at 16 h. This weak HR in full-length AtILP and N-terminal domain transgenic plants was restricted to a small area surrounding the point of infiltration and was not confluent. Confluent tissue collapse was observed in most of the inoculated leaves of these transgenic plants by 20 h postinoculation (Table 1).
Electrolyte leakage due to membrane damage as a result of the plant-pathogen interaction is a characteristic and quantitative feature of HR-associated cell death (74). To determine whether the attenuation of HR was related to membrane damage, electrolyte leakage in wild-type and AtILP transgenic plants was measured after Pph (AvrRpt2) infiltration (10⁷ cfu/ml). Leaves from wild-type plants and from transgenic plants overexpressing the C-terminal domain of AtILP reached close to maximal conductivity 12-16 h postinoculation. Transgenic plants overexpressing full-length AtILP or the N-terminal domain exhibited a similar pattern, but the magnitude of the response was significantly lower, and maximal conductivity was reached later compared with wild-type or C-terminal domain transgenic plants (Fig. 6). However, no difference in conductivity was observed when plants were treated with the virulent bacterial pathogen Pph (data not shown). These results indicated that the overexpression of the N-terminal domain of AtILP significantly impairs HR-associated cell death elicited by an avirulent bacterial pathogen.
To further explore the role of AtILP in the plant defense response, the effect of attenuated cell death on bacterial growth was assessed using the bacterial pathogen Pma strain M6CΔE. Disease phenotype was assessed following the inoculation of this virulent strain of P. syringae into wild-type and AtILP transgenic lines. All of the plants (transgenic and wild type) exhibited visible chlorosis 3-4 days after inoculation, which progressed over time on the infected leaves. The plant lines were indistinguishable in terms of the severity of chlorosis (data not shown). In addition, there were no differences in bacterial titer among the wild-type and AtILP transgenic lines (Fig. 7A). These results indicated that the overexpression of AtILP does not alter the defense response to infection with the virulent Pma strain M6CΔE.
To determine whether AtILP-mediated HR attenuation affected the growth of an avirulent strain of Pma, strain M6CΔE carrying the avirulence gene AvrRpt2 was used as the inoculum. In this case, the overexpression of full-length AtILP or the N-terminal domain resulted in a 30- to 40-fold increase in bacterial growth, indicating that the overexpression of AtILP decreases resistance to the avirulent Pma strain M6CΔE (Fig. 7B).
AtILP Localizes to the Nucleus and Blocks DNA Fragmentation-Genomic DNA fragmentation during the process of PCD occurs as a result of the activation of cell death-specific endonucleases that cleave nuclear DNA into oligonucleosomal units. Genomic DNA was extracted from wild-type and transgenic plants treated with or without 3 μM FB1. As shown in Fig. 8A, FB1-induced DNA fragmentation was inhibited in transgenic plants harboring full-length AtILP or the N-terminal domain. Given that DNA fragmentation is a hallmark of apoptosis, these results confirmed that AtILP blocks apoptosis in plants and that the N-terminal domain of AtILP is important for this anti-apoptotic activity.
To confirm that AtILP is present in the nucleus, where it could act on genomic DNA fragmentation, the subcellular localization of AtILP in vivo was analyzed in Arabidopsis. A C-terminal smGFP fusion protein of AtILP was generated and expressed in Arabidopsis protoplasts. Fluorescence microscopy revealed that AtILP localized to the nucleus (Fig. 8B), where it overlapped with the nuclear control protein, RFP::NLS. These results indicated that AtILP is targeted exclusively to the nucleus in plant cells.
DISCUSSION
A number of genes that regulate PCD, both positively and negatively, have been identified; however, the mechanisms that control PCD in plants remain largely unknown. In the current study, a novel Arabidopsis RING finger protein, AtILP, was identified and shown to be a negative regulator of PCD in Arabidopsis. Overexpression of AtILP suppressed effector protein- and FB1-induced cell death. In addition, AtILP blocked TNF-α/ActD-induced cell death via the suppression of caspase activation in HeLa cells, suggesting that the cell death-inhibiting function of AtILP is preserved across species.
To determine the structural basis for the inhibition of apoptosis by AtILP, the effects of various fragments of AtILP on caspase activity in vitro and on apoptosis suppression in HeLa cells were analyzed. The RING domain of AtILP failed to inhibit the activity of caspase-3, whereas an N-terminal fragment that had no homology to any known BIR domain potently inhibited the activity of caspase-3 in vitro and blocked TNF-α/ActD-induced apoptosis (Figs. 3 and 4). Amino acid sequence alignment with other IAP proteins indicated that AtILP lacks homology to known BIR domains. The secondary structure of AtILP was investigated, and it was found that AtILP and human IAPs share a common motif consisting of three consecutive β strands, an α helix, a β strand, and an α helix (supplemental Fig. 2). One possibility is that these common structural motifs determine the caspase-inhibitory activity of AtILP. Based on amino acid sequence analysis, AtILP belongs to the family of RING proteins, members of which have diverse biological functions in plants (75). AtILP contains a well-conserved RING domain at its C terminus. Overexpression of AtILP in plants resulted in the reduction of cell death in response to an avirulent bacterial pathogen and to low doses of FB1. Most RING finger proteins have enzymatic activities that catalyze reactions within the ubiquitination/26S proteasome protein degradation system (75, 76). Many IAP proteins exhibit E3 ubiquitin ligase activity, and the RING domain is critical for biological activity and regulation of PCD (77-80). In fact, Arabidopsis RING1, which demonstrates E3 ubiquitin ligase activity in vitro, has been implicated in cell death (76). The biochemical activity and putative function of another RING domain protein in Arabidopsis, AtHAL1, remain to be elucidated.
Transgenic Arabidopsis lines that overexpressed AtILP demonstrated anti-apoptotic activity when challenged with the fungal toxin FB1. This suppression of cell death was accompanied by the inhibition of caspase activation and DNA fragmentation. The anti-apoptotic activity of AtILP mapped to the N-terminal domain and correlated with the results of similar experiments in HeLa cells. To investigate the role of AtILP in cell death inhibition, T-DNA insertion mutagenesis was carried out, and several AtILP knock-out plant lines were identified and characterized. Mutation of AtILP did not result in any phenotypic differences in terms of germination, flowering, and growth rate as compared with wild-type plants. In addition, plant responses to FB1 and P. syringae pv. tomato DC3000 expressing AvrRpt2 were indistinguishable from wild-type Arabidopsis, indicating that there may be other as yet unidentified genes in Arabidopsis that can compensate for the loss of the cell death inhibition activity of AtILP (data not shown).
Gram-negative plant pathogenic bacteria secrete a complex set of type III effectors directly into host cells via the type III secretion system. For example, the wild-type Pto strain delivers at least 33, and perhaps as many as 50, type III effectors (81, 82). Thus, HR in response to bacterial strain Pto is a cumulative effect of multiple effector proteins, making it almost impossible to detect HR induced by a single effector protein. In the current study, the Gram-negative phytopathogenic bacterium Pph strain NPS3121 expressing AvrRpt2 was used. Pph is a model pathogen that causes halo blight in bean but not in Arabidopsis (83), and on its own it does not trigger HR in Arabidopsis that could obscure the response to a single effector. Thus, the use of this pathogen enabled us to measure the effect of AvrRpt2 on HR and electrolyte leakage.
In many cases, PCD and disease resistance are intricately linked in higher plants (84). During incompatible interactions between plants and bacterial pathogens, HR-associated cell death often triggers the development of plant disease resistance, halting pathogen growth in plant tissues. Cell death, however, can be uncoupled from the resistance response. For example, the Arabidopsis mutant dnd1 (defense no death) is resistant to Pst without HR-associated cell death (85). In the current study, overexpression of AtILP caused a decrease in the stress response to an avirulent strain of Pph that resulted in reduced HR cell death. In addition, transgenic Arabidopsis lines overexpressing AtILP supported higher levels of bacterial growth compared with wild-type plants after inoculation with Pma M6CΔE harboring AvrRpt2. These results indicate that AtILP has distinct functions in regulating PCD and disease resistance (i.e., a negative role in AvrRpt2-induced PCD and a positive role in RPS2-mediated resistance). Furthermore, neither the overexpression (Fig. 7A) nor mutation of AtILP (data not shown) affected the response of plants to a virulent strain (Pma M6CΔE). Therefore, reduced PCD in AtILP plants is likely to be unrelated to the defense response to virulent pathogens.

[Fig. 8 legend: A, genomic DNA from wild-type and transgenic plants grown on MS plates with or without 3 μM FB1, separated by electrophoresis and stained with ethidium bromide. B, 10-day-old Arabidopsis protoplasts cotransformed with 10 μg of AtILP::smGFP and RFP::NLS expression constructs; RFP::NLS served as a control for nuclear localization; co-localization of GFP and RFP (Merge) appears yellow; scale bar, 10 μm.]
"Biology"
] |
Evolutionary and Biogeographic Insights on the Macaronesian Beta-Patellifolia Species (Amaranthaceae) from a Time-Scaled Molecular Phylogeny
The Western Mediterranean Region and the Macaronesian Islands are among the top biodiversity hotspots of Europe, containing significant native genetic diversity of global value among the Crop Wild Relatives (CWR). Sugar beet is the primary crop of the genus Beta (subfamily Betoideae, Amaranthaceae), and despite the great economic importance of this genus and of the closely related Patellifolia species, a reconstruction of their evolutionary history is still lacking. We analyzed nrDNA (ITS) and cpDNA (matK, trnH-psbA, trnL intron, rbcL) sequences to: (i) investigate the phylogenetic relationships within the Betoideae subfamily, and (ii) elucidate the historical biogeography of wild beet species in the Western Mediterranean Region, including the Macaronesian Islands. The results support the Betoideae as a monophyletic group (excluding the genus Acroglochin) and provide a detailed inference of relationships within this subfamily, revealing: (i) a deep genetic differentiation between Beta and Patellifolia species, which may have occurred in the Late Oligocene; and (ii) the occurrence of a West-East genetic divergence within Beta, indicating that the Mediterranean species probably differentiated by the end of the Miocene. This was interpreted as a signature of species radiation induced by dramatic habitat changes during the Messinian Salinity Crisis (MSC, 5.96–5.33 Mya). Moreover, colonization events during the Pleistocene also played a role in shaping the current diversity patterns among and within the Macaronesian Islands. The origin and number of these events could not be determined due to insufficient phylogenetic resolution, suggesting that the diversification was quite recent in these archipelagos and pointing to potentially complex biogeographic patterns in which hybridization and gene flow played an important role. Finally, three evolutionary lineages were identified, corresponding to major gene pools of sugar beet wild relatives, which provide useful information for establishing in situ and ex situ conservation priorities in the hotspot area of the Macaronesian Islands.
Introduction
[Table: native geographical distribution, ecology, and IUCN conservation status of taxa from subfamily Betoideae; taxonomy according to Kadereit et al. [16].]

Patellifolia differs from Beta by having short tepals that do not overtop the fruit vs. long tepals that overtop the fruit [12]. Previous studies based on morphological features (e.g., [10,13,14]) failed to recognize Patellifolia as a separate genus, treating it instead as part of the Beta section Procumbentes. Recent molecular phylogenetic studies (e.g., [15,16]) modified the subfamily classification previously proposed. It was also suggested that Acroglochin should be excluded from this subfamily and that the other five genera (i.e., Beta, Aphanisma, Hablitzia, Oreobliton, and Patellifolia) fall into two clades: Beteae, comprising Beta only, and Hablitzieae, with the remaining four genera. These studies have been hampered by the undersampling of species from the Western Mediterranean Region, including the hotspot area of the Macaronesian Islands, where some endemic species are found (i.e., B. patula in the Madeira archipelago, P. webbiana in the Canary Islands, and P. procumbens in all the Macaronesian archipelagos except the Azores). Two of these Macaronesian endemics (B. patula and P. webbiana) were recently classified as Critically Endangered in the European Red List of Vascular Plants [17]. Though the importance of conserving these wild taxa has been widely recognized [18], it is also important to understand the relationships within the Beta s.l. gene pools, which will offer an effective approach to the utilization of wild-beet germplasm. For instance, Patellifolia species can transmit traits providing resistance to the most serious diseases of sugar beet worldwide, such as the sugar beet cyst nematode (Heterodera schachtii Schmidt), leaf spot disease caused by Cercospora beticola Sacc., curly top virus, rhizomania, and powdery mildew (Erysiphe polygoni DC.) [19–22].
Despite the great economic importance of the Beta and Patellifolia species [18], a reconstruction of the evolutionary history with a dated molecular phylogeny for the subfamily Betoideae is still lacking. The aims of this study are to: (1) present a hypothesis of the phylogenetic relationships within the subfamily Betoideae, and (2) gain a better understanding of the spatiotemporal history of the wild beet species that occur in the hotspot area of the Western Mediterranean Region, including for the first time the endemic species from the Macaronesian Islands and samples from all five archipelagos (i.e., the Azores, Canaries, Cape Verde, Madeira including the Desertas, and the Savage Islands).

[Table footnotes: taxa marked with * were included in the phylogenetic analyses; those marked with ** were collected for this study. Distribution data from fieldwork and bibliography [23–30]. Permissions to collect protected species from protected areas were issued by the Portuguese authorities.]

GenBank accession numbers are provided in S1 Table for all the studied samples. Additionally, data on the sampling sites of the samples collected in this study, including their geographical coordinates and details about vouchers and their respective herbaria, are also included in S1 Table.

Molecular data

DNA was extracted using the DNeasy Plant Mini Kit (QIAGEN, Valencia, California, USA) and purified using QIAquick columns (QIAGEN, Valencia, California, USA) or the Silica Bead DNA Gel Extraction kit (Fermentas), according to the manufacturers' protocols. Polymerase chain reaction (PCR) amplifications using 20–30 ng of genomic DNA were performed to amplify the complete internal transcribed spacer (ITS) region, using the primers ITS4 and ITS5 [31]. Two coding regions of the chloroplast genome were amplified using the primer pairs for matK [32] and rbcL [33], plus two non-coding regions using the trnL intron [34] and trnH-psbA [35] primers.
Sixty-eight samples were sequenced for the ITS region (S1 Table). A preliminary population-level study of the four chloroplast regions showed that all individuals of a given Beta or Patellifolia species from a given sampling location exhibited the same cpDNA sequences. Therefore, cpDNA sequences were generated for a subset of 1-5 individuals per island, resulting in 27 representative accessions sequenced. This sub-sample was sequenced for matK, trnH-psbA, the trnL intron, and rbcL (S2 Table), but three specimens for the rbcL gene and two specimens for the trnL intron and trnH-psbA spacer could not be sequenced due to PCR amplification problems. Amplified products were purified with Sureclean Plus (Bioline, London, UK) and sent to STAB Vida, Lda (Monte da Caparica, Portugal) for Sanger sequencing. For all markers, amplicons were sequenced in both directions on an ABI 3730 XL DNA Analyzer (Applied Biosystems). Raw sequences were edited and cleaned by hand in SEQUENCHER v4.0.5 (Gene Codes Corporation).
Phylogenetic analyses
Multiple sequence alignments were built for each locus dataset in MAFFT v6.717b [36], using the L-INS-i method, as recommended in the manual for difficult alignments. The datasets were concatenated into a combined matrix using ElConcatenero [37]. Maximum Likelihood (ML) and Bayesian Inference (BI) methods were used to reconstruct phylogenies from the separate (i.e., ITS, matK, trnH-psbA, trnL intron, and rbcL) and combined datasets. ML searches were performed in RAxML v8.0.9 under the GTRGAMMA model with 1000 bootstrap replicates. The best-fit model for each locus dataset was selected under the AIC, as implemented in MRMODELTEST v2.3 [38], and used in the Bayesian analysis performed in MRBAYES v3.1.2 [39]. Each locus was allowed to have partition-specific substitution parameters. Analyses were run for 3×10⁷ generations, sampled every 3000th generation, using the default chain heating temperature. The analysis was run three times with one cold and three incrementally heated Metropolis-coupled Markov chain Monte Carlo chains, starting from random trees. Output files were analyzed, and the convergence and mixing of the independent runs were assessed for all parameters using TRACER v1.4 [40]. Trees from the different runs and their associated posterior probabilities (PP) were then combined and summarized in a 50% consensus tree. All computational analyses were performed using the CIPRES Gateway cloud servers [41]. A clade with a PP value > 0.95 or a BS value > 85% was considered well supported. Additionally, for the subfamily Betoideae and using the concatenation of all loci, the NeighborNet algorithm [42], as implemented in SplitsTree v4.0 [43], was used with default settings to visualize possible incongruences in the dataset. This method relaxes the assumption that evolution follows a strictly bifurcating path and allows the identification of reticulate evolution or incomplete lineage sorting in the dataset.
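As a hedged illustration of the concatenation step (in the spirit of ElConcatenero, not its actual code; the file names and the gap-filling policy for taxa missing from a locus are our assumptions), a minimal Python sketch could look as follows:

```python
# Minimal sketch: concatenate per-locus alignments into one supermatrix,
# gap-filling taxa that are missing from a locus. File names are hypothetical.
from collections import defaultdict

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

loci = ["ITS.fasta", "matK.fasta", "trnH-psbA.fasta", "trnL.fasta", "rbcL.fasta"]
alignments = [read_fasta(p) for p in loci]
taxa = sorted(set().union(*(a.keys() for a in alignments)))

concat = defaultdict(str)
for aln in alignments:
    locus_len = len(next(iter(aln.values())))      # alignment length of this locus
    for t in taxa:
        concat[t] += aln.get(t, "-" * locus_len)   # pad missing taxa with gaps

with open("combined.fasta", "w") as out:
    for t in taxa:
        out.write(f">{t}\n{concat[t]}\n")
```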
Statistics for the alignments and phylogenetic analyses, as well as the models of evolution for the datasets, are presented in S2 Table.

Divergence time analyses

Divergence times within the subfamily Betoideae were estimated using the Bayesian MCMC algorithm implemented in BEAST v1.7.2 [44]. For this analysis, we used the combination of the ITS and two cpDNA markers (matK and rbcL), for which outgroup sequences could be obtained. The GTR model of sequence substitution was used for all partitions, except for rbcL, for which the GTR+G model was used. The fossil of Chenopodipollis multiplex, from a pollen record found in the United States and dated to the early Paleocene, was used to calibrate the root of our phylogenetic tree, which was previously suggested as the best constraint location for this fossil [15]. Accordingly, a normal prior with a mean of 60.5 Mya and a standard deviation of 2 Mya was applied to the root of the phylogenetic tree in order to accommodate the uncertainty of the fossil age. A relaxed lognormal molecular clock was used for all partitions, the Yule process with a constant speciation rate per lineage was implemented as the tree prior, and a random tree was used as the starting tree. The Bayesian MCMC was run for 5×10⁷ generations, sampling parameters every 5000 generations. This analysis was conducted three independent times. Tracer v1.4 [40] was used to assess convergence and correct mixing of all parameters by visually inspecting the log traces and estimating the Effective Sample Size (ESS) of each parameter. Results from the three runs were combined with LogCombiner v1.7.2 [44], after discarding the first 10% of each analysis as burn-in. The remaining trees were summarized in a Maximum Clade Credibility target tree in TreeAnnotator v1.7.2 [44], together with the Bayesian posterior probability (PP), the median/mean height, and the 95% highest posterior density interval (95% HPD) of each node. All computational analyses were performed on the CIPRES Gateway cloud servers [41].
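For concreteness, the root calibration described above corresponds to the following prior; the 95% interval is a standard property of the normal distribution, not an additional result of the analyses:

```latex
t_{\mathrm{root}} \sim \mathcal{N}(\mu = 60.5\ \mathrm{Mya},\ \sigma = 2\ \mathrm{Mya}),
\qquad
\mu \pm 1.96\,\sigma \approx [56.6,\ 64.4]\ \mathrm{Mya}.
```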
Phylogenetic analyses of Betoideae
Here, a new phylogeny of the Betoideae is presented based on nuclear (ITS) and cpDNA markers (matK, trnH-psbA, trnL intron, rbcL), covering a widespread sampling within this subfamily and outgroup information from other plants of the Amaranthaceae family (S2 and S3 Figs). The results provide support for: (i) the monophyly of this subfamily, with the exclusion of the genus Acroglochin; (ii) a deep genetic differentiation between Beta and Patellifolia, which are monophyletic groups; and (iii) the identification of three monophyletic lineages corresponding to the major gene pools of sugar beet CWR (i.e., GP1, GP2, and GP3).
Maximum Likelihood (ML) and Bayesian Inference (BI) were used to test phylogenetic hypotheses within the Betoideae subfamily. Topology of the ML tree (S2 Fig) obtained using the concatenated ITS and cpDNA markers was similar to that obtained from Bayesian analysis (tree not shown). Both ML and BI found essentially identical tree topologies, revealing the same major clades. The monophyly of the Betoideae is well-supported by the data (BS = 100%; PP = 1), but the monotypic genus Acroglochin is excluded from this subfamily. Instead, Acroglochin persicarioides constitutes a robust clade (BS = 90%; PP = 1) with Corispermum chinganicum (Corispermoideae subfamily), and both are closely related to Atriplex prostrata (Chenopodioideae subfamily).
Relationships within the Betoideae subfamily remain somewhat uncertain, since the most basal branches are poorly supported. Therefore, the basal relationships shown among the five genera (i.e. Aphanisma, Beta, Hablitzia, Oreobliton, and Patellifolia) could be interpreted as polytomic, according to our results. Nevertheless, the close relationship between Oreobliton and Aphanisma is well-supported (BS = 100%; PP = 1).
Both Beta and Patellifolia appear as well-supported monophyletic groups: clade I includes all samples of the genus Beta (BS = 100%; PP = 1), and clade II (BS = 100%; PP = 1) gathers all the Patellifolia representatives. Within clade I, the Beta species found in the coastal areas of the Western Mediterranean Region and in the Macaronesian Islands (i.e., B. vulgaris subsp. maritima and subsp. vulgaris, B. macrocarpa, and B. patula) form a well-supported monophyletic group (BS = 99%; PP = 1), which is sister to the remaining members of Beta (i.e., B. corolliflora, B. nana, and B. trigyna) from the Eastern Mediterranean Region (BS = 80%; PP = 1). Moreover, the Macaronesian endemic species B. patula (from Madeira) and P. webbiana (from the Canary Islands) were placed within the Beta clade (clade I, together with the other GP1 species) and the Patellifolia clade (clade II, together with the other GP3 species), respectively. However, our analyses failed to resolve with confidence the relationships between these endemics and the remaining species, resulting in a polytomy (S1 and S2 Figs).
Applying the NeighborNet algorithm to the concatenated dataset reveals a substantial degree of conflicting phylogenetic signal at the divergences of Beta, Patellifolia, Hablitzia, Aphanisma, and Oreobliton, as evidenced by the substantial number of loops at these points of the phylogenetic network (S3 Fig). Further loops are found within the Beta and Patellifolia genera, albeit in smaller numbers.
Divergence time analyses
Date estimates for nodes within the subfamily Betoideae are presented in Fig 1 (see C1 to C6). Our analysis indicates that the Betoideae must have diverged around 32.5 million years ago (Mya), representing the split between Hablitzia and the other four Betoideae genera. Within this group, the split between Beta and Patellifolia (C2) was estimated to have occurred around 25 Mya (Table 2).
Phylogenetic relationships
Our study provides new insights into the phylogenetic relationships within the Amaranthaceae family, and our major findings should help in further refining the taxonomy of the subfamily Betoideae. The monophyly of Betoideae was resolved with confidence in our results, but the monotypic genus Acroglochin was excluded from this subfamily, supporting earlier phylogenetic studies [15,16]. This genus occurs in remote areas of the Himalayas and forms a strongly supported clade with Corispermum chinganicum (subfamily Corispermoideae), which is also distributed in Asian regions. Our molecular data provide evidence for the inclusion of this monotypic genus within the subfamily Corispermoideae, consistent with previous works (e.g., [45]). Nevertheless, we consider that further investigation is necessary to effectively test this hypothesis, since limited taxonomic information is currently available for these two genera [29].
The five extant genera of the Betoideae subfamily seem to have a relatively old origin, but their sister relationships within this clade remain unknown. The difficulty in determining the phylogenetic relationships among members of the Betoideae is evidenced by the low support for the basal nodes of this group. Indeed, our phylogenetic network reveals a rather large number of loops at the base of the divergence between the five Betoideae genera (see S3 Fig). Several processes may cause such an unresolved phylogenetic pattern, for example a rapid radiation of most genera, which leaves little time for the accumulation of mutations and creates a substantial signal of incomplete lineage sorting [46]. Alternatively, ancient hybridization events may have occurred, creating a mosaic pattern of sequence variation. This remains to be tested when more detailed phylogenetic and population data become available, but the data presented in this study are more consistent with Kühn's classification [14], which placed the five genera in one tribe. Our results contradict those of previous works [15,16], which suggested the inclusion of the Beta species within the tribe Beteae, while the Patellifolia species were included in the tribe Hablitzieae together with three other monotypic genera (i.e., Aphanisma, Hablitzia, and Oreobliton).
Even though our results cannot confidently place the Beta and Patellifolia genera relative to each other, their ancient divergence reinforces their recognition as different genera, which is supported by former morphological studies (e.g., [47]). Therefore, based on our results and previous morphological studies (M.C. Duarte et al., unpublished data), the Patellifolia species formerly included in Beta section Procumbentes (i.e., B. patellaris, B. webbiana, and B. procumbens) should be regarded as a separate genus. Together with previous molecular phylogenetic studies, our results represent a starting point for a thorough taxonomic revision of the subfamily Betoideae.
Spatio-temporal history of Betoideae
[Fig 1. Bayesian tree from the BEAST analysis of the concatenated ITS and cpDNA (matK, rbcL) dataset, showing the estimated divergence ages at selected calibrated nodes (C1 to C6, as described in Table 2). Posterior probabilities (PP) are given above each branch; the geographic origin of each specimen is indicated (right side) with a color code for continental areas and the Macaronesian archipelagos, and grey bars differentiate the three gene pools previously described (for further details see Frese [10]) and concordant with the present phylogenetic analysis.]

The results of the dated molecular phylogeny suggest a relatively old origin for Betoideae, which may have taken place during the Early Oligocene Glacial Maximum (EOGM). The transition from the Eocene to the Oligocene was characterized by major climatic changes, which triggered extinctions in plants and animals [48]. Although the relationships among the five genera remain weakly resolved, their early diversification (ca. 32 Mya) tends to support a model of allopatric speciation within this subfamily. This could be the result of past range contractions of the most recent common ancestor of Betoideae, suggested for instance by the long branches of each genus, and reflecting possible speciation by isolation and/or extinction events during the EOGM. Consistent with such a scenario are the narrow distributions, in distant geographic regions, of four of the five genera, linked to their different ecologies. Specifically, Aphanisma occurs in coastal habitats of California; Hablitzia is native to the deciduous forests of the Caucasus Region; Oreobliton is distributed on the chalk rocks of the Atlas Mountains in North Africa; and Patellifolia is found in coastal vegetation, on maritime rocks, sea-cliffs, and seashore habitats in South-Western Europe, with its center of diversity in the Macaronesian archipelagos. Conversely, Beta is the only genus of the subfamily Betoideae with greater species diversity and a broader distribution, mainly throughout the circum-Mediterranean Region [8], and the large range observed in this genus seems to be the result of more recent climatic and geological events. Moreover, the deep genetic differentiation between Beta and Patellifolia species, which may have occurred in the Late Oligocene, correlates with the major gene pools, reflecting an ancient divergence between Beta (GP1, GP2) and Patellifolia (GP3) species. Between the divergence of these two genera and their own diversification, there were around 15-20 My of uncertain evolution, considering their respective long branches and comparing the stem and crown ages of these two clades (see Fig 1). Although our molecular data did not allow us to propose a more detailed evolutionary scenario, Beta and Patellifolia species occur under very constraining living conditions (e.g., aridity, high salinity), supporting the idea that both lineages may have been better able to survive the dramatic aridity events that occurred within the Mediterranean Region than other, more vulnerable plant lineages.
The second biogeographical pattern revealed was the occurrence of two well-differentiated clades on either side of the Mediterranean, in the western coastal areas (GP1: B. vulgaris subsp. maritima and subsp. vulgaris, B. macrocarpa, and B. patula) and in the easternmost part of the species' distribution (GP2: B. corolliflora, B. nana, and B. trigyna). The Mediterranean Beta species probably began to differentiate around seven million years ago, which matches the Messinian Age of the Late Miocene. This coincides with the Messinian Salinity Crisis (MSC, 5.96-5.33 Mya) [49,50], a period when the connection between the Mediterranean Sea and the Atlantic Ocean closed, causing the Mediterranean Sea to desiccate and probably generating widespread salt marshes or coastal and halophytic habitats along the Mediterranean coast [50]. Such dramatic changes would have promoted the differentiation between GP1 and GP2. These two groups currently occur in different geographical areas and quite different habitat types: GP1 occurs in coastal cliffs, salt marshes, and ruderal places of the Western Mediterranean Region and the Macaronesian Islands, while GP2 is mainly present in the continental mountainous zones of the Eastern Mediterranean.
This West-East disjunction pattern has also been found in other plants currently occupying the Mediterranean Basin, both in tree genera such as Laurus L. [51] and Juniperus L. [52], with representatives in Macaronesia, and in herbaceous genera such as Erophaca Boiss. [53]. In these studies, the current patterns are explained by the contraction of favorable areas, mainly due to an increase in aridity (see [53]), or by the distribution of tectonic microplates and the appearance of water barriers during the Neogene (see [51,52]). In the cases of both Juniperus and Erophaca, the authors suggest a western-to-eastern speciation sequence, while in Laurus [51] the opposite is hypothesized, with westward expansion of a single haplotype that colonized the Western Mediterranean and reached the Macaronesian Islands.
The subsequent end of the MSC could additionally have promoted further differentiation by vicariance, with Western Mediterranean Beta populations, previously adapted to the prevailing salt conditions (e.g., salt marsh habitats), becoming isolated by the loss and fragmentation of this habitat under post-MSC conditions. Later influential events occurred during the Plio-Pleistocene, with sea level and climate oscillations [54] leading to repeated isolation and reconnection of taxa, and possibly to subsequent speciation within the Western Mediterranean Beta. Some of the western wild beets would later have expanded and colonized the Macaronesian Islands. The diaspore adaptations of Beta and Patellifolia species towards sea dispersal (thalassochory) would have promoted their long-distance dispersal and been clearly advantageous in the colonization of these archipelagos (see [55]). A key role of marine currents in dispersal was also suggested in a recent population study of B. macrocarpa and B. vulgaris subsp. maritima covering the shoreline from France to Morocco [56]. This study suggested that B. vulgaris subsp. maritima went through a postglacial recolonization from the Mediterranean-Atlantic region, with southern Iberia and Morocco, including the Strait of Gibraltar, acting as a long-term refuge.
Altogether, our results support the hypothesis that the Messinian Salinity Crisis and subsequent climatic changes in the Mediterranean Region during the Plio-Pleistocene were probably the major drivers of diversification in the genus Beta, thus explaining the current geographical ranges.
Diversification of wild beets on Macaronesia
The estimation of divergence times provides information on the genetic distances among wild beet species and facilitates understanding of the process and timing of evolution within the Beta and Patellifolia species, revealing that the diversification was quite recent, during the Pleistocene. Although this pattern has also been reported for other Macaronesian native plant lineages (e.g., [57,58]), the available data lack sufficient phylogenetic resolution to confirm it, and consequently we could not discard the existence of a soft polytomy, meaning that some of the Beta and Patellifolia species may have diverged at different times.
Within the Patellifolia clade there are unresolved polytomies, and our study could not infer the monophyly of each species or the relationships among P. procumbens (from Madeira, the Canary Islands, and Cape Verde), P. patellaris (from the mainland, Madeira, the Canary Islands, and Cape Verde), and P. webbiana (from the Canary Islands) (see S2 Fig). This pattern could be the consequence of recent island colonization and differentiation, recurrent gene flow with the ancestral mainland populations or congeneric species, or even incomplete lineage sorting. Regarding the latter, the DNA regions sequenced in our study cannot provide a clear resolution of such a shallow evolutionary event. Likewise, within the Beta clade, encompassing B. vulgaris subsp. maritima, the cultivated forms, and all the Macaronesian species, the phylogenetic relationships also remain unresolved. The clustering of sequences of B. vulgaris subsp. maritima from Madeira Island with those of B. macrocarpa from the Canary Islands could be explained by introgression or hybridization processes, in accordance with the loops observed in our network analyses (see S3 Fig). A recent flow cytometry analysis revealed the existence of mixed-ploidy populations of B. vulgaris subsp. maritima and B. macrocarpa in the south of Portugal [59]. Consequently, our results suggest that these clustered sequences could reflect an ancient hybridization between the diploids B. vulgaris subsp. maritima and B. macrocarpa, as previously suggested by Villain [60]. This author suggested ranking the tetraploid B. macrocarpa from the Canary Islands as a separate taxon, and it was proposed that these tetraploid populations result from at least two independent colonization/hybridization events in that archipelago [61]. The increased number of polyploids among these island species can be attributed to the higher adaptive potential of polyploids [62], which might have been particularly successful in periods of ecological upheaval, when new ecological niches were occupied by vigorous polyploids and less competitive diploids were outcompeted [63].
Within the Beta and Patellifolia genera, potential hybridization and the risk of demographic or genetic assimilation of rare endemics (i.e., B. patula in Madeira and P. webbiana in the Canary Islands) by other native congeners may occur. One possible reason is that the weakness of genetic barriers to hybridization in many island groups is a by-product of the small genetic differentiation of recently radiated species [64]. As B. patula is classified as Critically Endangered (CR) and is one of the closest wild relatives (GP1) of the domestic B. vulgaris subsp. vulgaris, it should be afforded higher conservation priority over the more distantly related species [2]. Thus, prioritizing threatened species and conserving the entire extent of their natural ranges was recently recognized as a crucial step towards a better strategy to conserve the endemic flora of the Macaronesian archipelagos [65]. Beyond the actual or potential socio-economic value of these wild relatives as a genetic resource for crop improvement, their extinction would entail the loss of genetic resources that could help such plants overcome future climatic shifts [66].
Conclusions
This study uncovered the phylogenetic relationships between sugar beet (Beta vulgaris subsp. vulgaris) and the wild species, with particular emphasis on the Beta and Patellifolia species that are commonly found in the coastal areas of the Western Mediterranean Region and the Macaronesian Islands. The phylogeny recovered on a time-calibrated Bayesian tree revealed a deep genetic differentiation between Beta and Patellifolia species, which may have occurred in the Late Oligocene. Furthermore, we hypothesized that the ecological divergence of Beta in the Mediterranean Basin may have occurred during the Messinian Salinity Crisis (MSC, 5.96-5.33 Mya). Western and Eastern Beta species inhabit very contrasting ecological areas, from salt marshes to mountainous zones, respectively. The MSC, with its deep and extensive habitat modifications, could have provided an extraordinary period for the adaptation of Western Mediterranean Beta to these extreme ecological conditions. The subsequent end of the MSC could additionally have promoted further differentiation by vicariance due to fragmentation and isolation of a previously extended habitat. Some of the western wild beets later expanded and colonized the Macaronesian Islands during the Pleistocene. Moreover, the two endemic taxa (i.e., B. patula and P. webbiana), classified as threatened according to IUCN criteria, are associated with short phylogenetic branches and polytomic groups, revealing that the diversification was quite recent in these archipelagos and unraveling a potentially complex biogeographic pattern with hybridization and gene flow playing an important role. Finally, our phylogenetic analysis of the Betoideae sheds light on the genetic differentiation among the major gene pools of sugar beet wild relatives, which are of high evolutionary, ecological, and economic relevance, providing useful data for establishing conservation priorities in the hotspot area of the Macaronesian Islands. We consider that only the conservation of populations in their natural habitats ensures the renewal of gene pools and the continued supply of novel genetic material potentially critical for future crop improvement, which is recognized as an asset in maintaining global food security.
"Biology"
] |
Formation of Iron Oxide Nanoparticles in the Internal Cavity of Ferritin-Like Dps Protein: Studies by Anomalous X-Ray Scattering
DNA-binding protein from starved cells (Dps) takes a special place among dodecamer mini-ferritins. Its most important function is protection of the bacterial genome from various types of destructive external factors via in cellulo Dps–DNA co-crystallization. This protective response results in the emergence of bacterial resistance to antibiotics and other drugs. The protective properties of Dps have attracted significant attention from researchers. However, Dps has another equally important functional role. Being a ferritin-like protein, Dps acts as an iron depot and protects bacterial cells from the oxidative damage initiated by an excess of iron. Here, we investigated the formation of iron oxide nanoparticles in the internal cavity of the Dps dodecamer. We used anomalous small-angle X-ray scattering as the main research technique, which makes it possible to examine the structure of metal-containing biological macromolecules and to analyze the size distribution of metal nanoparticles formed in them. The contributions of the protein and metal components to the total scattering were distinguished by varying the energy of the incident X-ray radiation near the edge of the metal atom absorption band (the K-band for iron). We examined Dps specimens containing 50, 500, and 2000 iron atoms per protein dodecamer. Analysis of the particle size distribution showed that, depending on the iron content in the solution, the size of the nanoparticles formed inside the protein molecule was 2 to 4 nm, and the growth of the metal nanoparticles was limited by the size of the protein inner cavity. We also found some amount of iron ions in the Dps surface layer. This layer is very important for the protein to perform its protective functions, since the surface-located N-terminal domains determine the nature of the interactions between Dps and DNA. In general, the results obtained in this work can be useful for the next step in studying the Dps phenomenon, as well as in creating biocompatible and solution-stabilized metal nanoparticles.
INTRODUCTION
Approximately one-third of the well-characterized proteins and almost half of the enzymes contain from one to several metal ions [1,2]. Among these, one of the most vital and abundant chemical elements is iron. This metal is commonly found in heme or iron-sulfur prosthetic groups in proteins [2,3]. The biological significance of iron is determined by its ability to undergo reversible redox reactions. Divalent and trivalent iron are essential for vital processes in all eukaryotes and most prokaryotes.
Iron participates in various metabolic processes, such as oxygen transport, DNA synthesis, and electron transport for energy production. It is a universal microelement that ensures normal functioning of all body systems at the cellular level.
On the other hand, an excess of iron promotes damage to DNA, proteins, and lipids, leading to the disruption of cellular homeostasis. The evolutionarily developed mechanisms of detoxification and iron removal from the cytosol include oxidation of excess divalent iron to trivalent iron in the Fenton reaction [4]:

Fe²⁺ + H₂O₂ → Fe³⁺ + •OH + OH⁻

In living organisms, trivalent iron is accumulated in ferritin-like proteins. The ferritin protein superfamily has evolved to provide iron sequestration in a soluble, non-toxic, and bioavailable form [5]. These proteins function as iron depots and store iron in a non-reactive form, thus limiting its involvement in cellular processes and protecting the cell from the oxidative damage initiated by this metal [6,7]. Ferritin is a protein complex that plays the role of the major intracellular iron depot in humans and animals. It is found in virtually all organs and tissues. Mammalian apoferritin (the protein lacking iron ions) is a 24-mer protein with a molecular mass (MM) of ~450 kDa; each of its polypeptide subunits has an MM of ~20 kDa. Ferritin is a globular protein with an inner cavity that can store approximately 4500 iron ions as hydrated trivalent iron oxide (Fe₂O₃·H₂O) together with a variable number of phosphate groups. The external diameter of the protein is 12-13 nm; the diameter of its inner cavity is 7-8 nm [7,8]. The protein component of the ferritin molecule has numerous pores through which iron is transported [9].
There are at least three types of ferritin-like prokaryotic proteins - bacterial ferritin, bacterioferritin, and dodecamer ferritin (mini-ferritin) - that are related to eukaryotic ferritins. Similar to the mammalian ferritins, bacterial ferritin and bacterioferritin consist of 24 subunits and have a central cavity that can accommodate ~2500 iron atoms in bacterial ferritin [10] and ~1800 iron atoms in bacterioferritin [10,11]. The presence of heme (iron protoporphyrin IX) located between each pair of protein subunits and linked to a methionine in each of these subunits is a characteristic feature of bacterioferritin that distinguishes it from the other ferritin forms. As a result, the bacterioferritin molecule contains 12 heme groups, the role of which is yet to be elucidated [12-14].
Dps (DNA-binding protein from starved cells) has a special place among the dodecamer mini-ferritins. Its most important function is protection of the genome from harmful factors such as starvation, high temperature, UV- and γ-radiation, toxins, chemical shock, and oxidative stress. Dps binds to DNA, forming a stable Dps-DNA complex that protects DNA from damage [15-17].
One of the important consequences of this protective response is the emergence of bacterial resistance to antibiotics and other drugs. That is why the formation of protective Dps-DNA complexes in stress-induced bacterial cells has attracted significant attention from many research groups over the past two decades. The best-known works in this area are the studies by Minsky and colleagues, who were the first to prove this phenomenon experimentally [18-20].
In bacterial cells, Dps is present in minor amounts, and its synthesis is induced during the stationary phase of bacterial growth, during starvation, or under oxidative stress [21]. The Dps dodecamer lacks structural modules for the recognition of specific nucleotide sequences. It is assumed that its binding to the negatively charged sugar-phosphate DNA backbone occurs via electrostatic interactions with the lysine-enriched N-terminal domains of the Dps monomers [15,22-24]. However, the exact mechanism of Dps binding to DNA is still unknown [25].
The Dps monomer is a polypeptide consisting of 167 amino acid residues (MM 18.7 kDa) that contains a highly conserved sequence of four α-helices [26]. The Dps dodecamer has an MM of 224.4 kDa and displays 23 point-group symmetry, with an external diameter of 8-9 nm and an inner cavity diameter of 4-5 nm. As Dps is a ferritin-like protein, each monomer is capable of binding up to 40 iron atoms; hence, the dodecamer can accommodate approximately 500 Fe³⁺ ions (12 × 40 = 480) in its inner cavity [27].
Therefore, Dps simultaneously performs two vitally important functions in bacterial cells: (i) it protects the bacterial genome from unfavorable external factors by forming a crystalline complex with DNA; and (ii) it serves as an iron depot and protects cells from the oxidative damage initiated by excess iron. These two functions are interrelated. In particular, Zhao et al. [27], who studied DNA damage in vitro, demonstrated that Dps prevents the cleavage of Escherichia coli DNA simultaneously exposed to FeSO₄ and H₂O₂. This implies that, by forming the protein-DNA complex and neutralizing hydrogen peroxide during its interaction with iron, Dps preserves the E. coli genome under stress conditions [28].
Despite the fact that E. coli Dps does not contain any canonical ferroxidase sites of the kind present in E. coli bacterioferritin or bacterial ferritin, Dps oxidizes Fe²⁺ at certain sites in the protein macromolecule, followed by its accumulation as Fe³⁺ in the protein central cavity [27]. It has been assumed that the ferroxidase sites of Dps are formed by iron-binding amino acids, such as aspartate, glutamate, and histidine [28,29]. Thus, the N-terminal domain of the expressed Deinococcus radiodurans Dps (DrDps1) contains a region (residues 30-55) that includes the metal-binding site (Asp36-x2-His39-x10-His50-x4-Glu55 motif) and is located at the external surface of the dodecamer. Disruption of this site affects protein self-assembly and reduces the DNA-binding capacity of the protein, i.e., decreases its protective capacity [30]. Unlike Dps, not all bacterial dodecamer mini-ferritins can simultaneously protect an organism from oxidative stress by forming a crystalline complex with DNA and serve as a source of iron in the case of its deficit [31]. That is one of the reasons why the unique features of Dps attract so much attention. Moreover, there is another aspect that makes investigating this protein important. Accumulation of a large number of metal ions in the inner cavity of the Dps protein implies the existence of magnetic properties; such protein molecules could be considered natural biosensors of electromagnetic radiation. Dps is capable of transmitting the received signal to DNA, which holds promise for the development of a new generation of logic elements. That is why investigating the formation of iron nanoparticles inside the stabilizing Dps protein shell has not only scientific but also practical significance.
In order to use small-angle X-ray scattering (SAXS) for elucidating the structure of metal-containing specimens and the size distribution of metal nanoparticles in a sample, the contributions of these components to the total scattering should be separated. Traditional SAXS techniques are used for studying the structure of the original samples prior to the formation of metal particles (i.e., the structure of the matrix) [32]. The standard approach based on subtracting the matrix scattering from the total scattering of the metal-containing sample is possible only when the structure of the matrix does not change during formation of the metal particles in it. When structural changes in the original sample are expected, the commonly used method is anomalous SAXS (ASAXS) [33-36]. In this method, experimental SAXS curves for the metal-containing samples are recorded at different energies of the X-ray radiation: (i) close to the edge of the absorption band of the metal (when only matrix scattering is recorded) and (ii) away from the band edge (when total scattering is recorded), so that the difference between these two curves can be attributed only to the scattering of nanoparticles containing the metal atoms. Analysis of the contribution of each component makes it possible to calculate the size distribution of the metal nanoparticles and to follow the structure of the metal-containing sample during its interaction with the metal.
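To make the subtraction concrete, a minimal Python sketch of the difference-curve computation is given below (variable names are ours; real ASAXS processing additionally requires careful scaling, background, and fluorescence corrections, as noted under "Materials and Methods"):

```python
import numpy as np

def anomalous_difference(s0, I0, sk, Ik):
    """Difference curve Delta_k(s) = I(s, E0) - I(s, Ek).

    I0: intensity measured far from the Fe K-edge (matrix + metal scattering).
    Ik: intensity measured near the edge (resonant metal contribution suppressed).
    Both curves must be on a common scale; Ik is re-interpolated onto the
    s-grid of the off-edge measurement before subtraction.
    """
    Ik_on_s0 = np.interp(s0, sk, Ik)
    return np.asarray(I0, dtype=float) - Ik_on_s0
```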
In this study, we investigated accumulation of iron oxide in the inner cavity of the ferritin-like Dps protein using ASAXS.
MATERIALS AND METHODS
Preparation of iron-containing Dps samples. Dps was expressed and purified as described earlier [37,38]. The purified protein was concentrated with an Amicon® centrifugal concentrator (10-kDa cutoff; Merck Millipore, USA) to a concentration of 3 mg/ml, followed by dialysis against a buffer containing 50 mM NaCl, 0.5 mM EDTA, 50 mM Tris-HCl (pH 8.0). The purity of the obtained protein sample was evaluated by electrophoresis in a 15% polyacrylamide gel; protein concentration was determined from the absorbance at 280 nm (A₂₈₀) using the molar absorption coefficient from [39].
A freshly prepared solution of FeSO₄·7H₂O was added to the purified Dps (3 mg/ml) in a buffer containing 50 mM NaCl, 0.5 mM EDTA, and 50 mM Tris-HCl (pH 8.0) in amounts corresponding to 50, 500, and 2000 iron atoms per dodecamer, and the mixtures were incubated for 30 min at room temperature. The resulting samples are designated in the text as Dps-Fe50, Dps-Fe500, and Dps-Fe2000, respectively.
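For orientation, the stoichiometry behind these loading ratios can be sketched as follows (an illustrative calculation from the protein concentration and the dodecamer mass quoted in the text, not a laboratory protocol):

```python
MM_DPS = 224_400.0       # g/mol, Dps dodecamer (from the text)
MM_FESO4_7H2O = 278.01   # g/mol, FeSO4·7H2O

def feso4_mg_per_ml(protein_mg_per_ml, fe_per_dodecamer):
    """Mass of FeSO4·7H2O (mg per ml of solution) for a target Fe:dodecamer ratio."""
    dps_mol_per_l = protein_mg_per_ml / MM_DPS        # mg/ml equals g/L
    fe_mol_per_l = dps_mol_per_l * fe_per_dodecamer
    return fe_mol_per_l * MM_FESO4_7H2O               # g/L, numerically mg/ml

for n in (50, 500, 2000):
    print(n, round(feso4_mg_per_ml(3.0, n), 2), "mg/ml")
# prints roughly: 50 -> 0.19, 500 -> 1.86, 2000 -> 7.43 mg/ml
```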
SAXS and analysis of the obtained data. Traditional SAXS. The Dps structure was investigated by traditional SAXS at the Petra III synchrotron (DESY, Germany), beamline P12 [40]. The P12 beamline was equipped with an automatic sample changer and a two-dimensional Pilatus 2M detector (DECTRIS, Switzerland). The scattering intensity, I(s), was measured in the range of momentum transfer values 0.08 < s < 3 nm⁻¹, where s = (4π sin θ)/λ is the scattering vector, 2θ is the scattering angle, and λ is the radiation wavelength (0.124 nm). For each sample, 50 frames were recorded to evaluate possible radiation damage; no radiation damage was observed.
Primary processing of the scattering curves was performed with the PRIMUS program [41]. Analysis of the obtained data and structural modeling were performed using the ATSAS software package [42].
The radius of gyration (R_g) of the scattering particles was determined from the initial part of the scattering curve, in the region of the smallest s values, using the Guinier approximation (1):

I(s) = I(0) exp(−s²R_g²/3),   (1)

which is valid in the region sR_g < 1.3. The scattering intensity at zero angle, I(0), which is proportional to the molecular mass of the scattering object, was determined from the intercept of the linear part of the Guinier plot (ln I(s) versus s²), while R_g was obtained from its slope.
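A minimal Guinier-fit sketch follows (assuming s in nm⁻¹ and a user-chosen low-angle window; the ATSAS tools select the fitting window automatically, which this sketch does not attempt):

```python
import numpy as np

def guinier_fit(s, I, s_max=0.4):
    """Estimate Rg and I(0) from ln I(s) = ln I(0) - (Rg^2/3) s^2.

    Only points with s < s_max (nm^-1) and I > 0 are used; one should verify
    afterwards that s*Rg < 1.3 holds over the fitted window.
    """
    s = np.asarray(s, dtype=float)
    I = np.asarray(I, dtype=float)
    m = (s < s_max) & (I > 0)
    slope, intercept = np.polyfit(s[m] ** 2, np.log(I[m]), 1)
    rg = np.sqrt(-3.0 * slope)        # requires slope < 0
    return rg, np.exp(intercept)      # Rg in nm, I(0) in the units of I
```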
Molecular masses were calculated from the SAXS data by two different methods: (i) using the Bayesian approach (MM_Bayesian) [43] and (ii) based on the excluded volume V_p (Porod volume) inaccessible to the solvent [44], using the empirical ratio between V_p and MM (1.65 for proteins) [45].
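As a rough illustration of the Porod-volume route (our numerical example, using the dodecamer mass quoted earlier rather than a reported SAXS value):

```latex
\mathrm{MM}\,[\mathrm{kDa}] \approx \frac{V_p\,[\mathrm{nm}^3]}{1.65}
\qquad\Longrightarrow\qquad
V_p \approx 1.65 \times 224.4 \approx 370\ \mathrm{nm}^3
\ \text{expected for the Dps dodecamer.}
```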
The GNOM program [46] was used to calculate the distance distribution function p(r), which was required for the reconstruction of the Dps protein shape in solution based on the SAXS data. The distance distribution function p(r) was determined as an indirect Fourier transform of the scattering intensity in accordance with equation (2):

p(r) = (r/2π²) ∫₀^∞ s I(s) sin(sr) ds,   (2)

where I(s) is the scattering intensity. The maximum particle size (D_max) was determined from the condition p(r) = 0 at r > D_max.
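Equation (2) can be evaluated directly by quadrature, as in the sketch below; note that GNOM instead solves the regularized inverse problem, which is far more robust to noise and to the finite measured s-range, so the direct sum is shown only to make the formula concrete:

```python
import numpy as np

def p_of_r(s, I, r_grid):
    """Naive numerical evaluation of p(r) = (r / 2 pi^2) * Int s I(s) sin(sr) ds.

    Truncation of the measured s-range introduces ripples; illustrative only.
    """
    s = np.asarray(s, dtype=float)
    I = np.asarray(I, dtype=float)
    pr = np.empty_like(r_grid, dtype=float)
    for i, r in enumerate(r_grid):
        pr[i] = r / (2.0 * np.pi ** 2) * np.trapz(s * I * np.sin(s * r), s)
    return pr

r_grid = np.linspace(0.0, 12.0, 121)   # nm; Dmax is where p(r) decays to zero
```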
The ab initio method for the reconstruction of the low-resolution shape of Dps was based on the simulated annealing algorithm and was implemented using the DAMMIN program [47], which constructs structural models by minimizing the residual χ² between the experimental scattering and the scattering calculated for the models (3):

χ² = [1/(N−1)] Σ_j {[I_exp(s_j) − c I_calc(s_j)] / σ(s_j)}²,   (3)

where N is the number of experimental points, I_exp(s_j) and σ(s_j) are the experimental intensities and their errors, I_calc(s_j) is the intensity calculated for the model, and c is the scaling coefficient.
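The scaling coefficient c in equation (3) has a closed-form optimum, which a model-fitting code can use directly; a minimal sketch (our formulation of the standard weighted least-squares result, not DAMMIN's internals):

```python
import numpy as np

def chi2_reduced(I_exp, sigma, I_calc):
    """Reduced chi^2 of equation (3) with the optimal scale factor c.

    The c minimizing sum w*(I_exp - c*I_calc)^2, with w = 1/sigma^2, is
    c = sum(w*I_exp*I_calc) / sum(w*I_calc^2).
    """
    I_exp = np.asarray(I_exp, dtype=float)
    I_calc = np.asarray(I_calc, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    c = np.sum(w * I_exp * I_calc) / np.sum(w * I_calc ** 2)
    return np.sum(w * (I_exp - c * I_calc) ** 2) / (len(I_exp) - 1)
```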
The structure of the iron-containing protein was calculated using multiphase ab initio modeling, which made it possible not only to obtain the low-resolution structure of the protein component of the complex, but also to determine the location of iron atoms in the protein matrix. The two-component (two-phase) model was constructed with the MONSA program [47]. This program takes into account the differences between the electron densities of the protein and metal components of the complex, as well as the ratio between their volumes. SAXS curves from the original protein and from the protein containing metal atoms were used to produce the two-phase model of the protein-metal complex. The theoretical scattering intensity for the constructed models was calculated using the CRYSOL program [48].
ASAXS experiments were also carried out at the Petra III synchrotron, beamline P12, and involved recording X-ray scattering curves at different wavelengths (λ), i.e., at different energies of the incident beam (E). The measurements were conducted for the original Dps protein and for the Dps samples containing 50 (Dps-Fe50), 500 (Dps-Fe500), and 2000 (Dps-Fe2000) iron atoms per dodecamer. The scattering data were recorded at several different photon energies Ek, with the energy E0 (10 keV, λ = 0.124 nm) being sufficiently far from the edge of the Fe absorption band and, hence, selected for investigating the structure of the original Dps molecule (traditional SAXS technique). The obtained SAXS and ASAXS data were corrected for the background scattering and fluorescence and were processed with the ATSAS software package [42] using the recently developed strategy for ASAXS data acquisition and processing [49].
The atomic scattering factor was determined from the following equation (4):

f(s, E) = f0(s) + f′(E) + if″(E),   (4)

where the dispersion correction factors f′(E) and f″(E) become significant in the vicinity of the edge of the resonance atom absorption band. In our case, the measurements were conducted close to the K-band (absorption band of the Fe atom), i.e., at the photon energy E = 7.125 keV (λ = 0.174 nm). The changes in the dispersion correction factors f′(E) and f″(E) with the photon energy E used in this study are shown in Fig. 1.
For each sample, the scattering curves I(s, Ek) were recorded at 7 different energies of the incident radiation E1-7: 7.100, 7.110, 7.118, 7.125, 7.128, 7.130, and 7.133 keV in the region of the K-band edge (E = 7.125 keV). The dispersion correction factors f′(E) and f″(E) for E = 7.125 keV were 8.13 and 0.48, respectively. The difference between the scattering curves produced at different energies, Δk(s) = I(s, E0) − I(s, Ek), is proportional to the scattering by the resonance atoms [33-36]. These difference curves were used for calculating the volume size distribution functions DV(R). The integral equation (5) was solved for DV(R) using the GNOM program [46] assuming a spherical shape of the formed nanoparticles:

Δk(s) = (Δρ)² ∫[Rmin, Rmax] DV(R) m(R) i0(sR) dR.   (5)
In this equation, R is the sphere radius; Rmin and Rmax are the minimal and maximal sizes, respectively; i0(x) = {[sin(x) − x·cos(x)]/x³}² and m(R) = (4π/3)R³ are the sphere form factor and the sphere volume, respectively. The scattering length density for the anomalous atoms is defined as Δρ = (N0² − Nk²)e/vat, where N0 and Nk are the numbers of electrons contributing to scattering far from the resonance and at E = Ek, respectively; e is the electron charge; and vat is the atomic volume.
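Discretizing equation (5) on a grid of radii turns it into a linear system that can be solved, for illustration, by non-negative least squares; GNOM's regularized indirect-transform solver is the production tool, so the sketch below (using the CRAN `nnls` package and hypothetical input names) is only a conceptual stand-in.

```r
# Discretized equation (5): Delta(s_i) ~ sum_j Dv(R_j) * m(R_j) * i0(s_i * R_j) * dR,
# solved by non-negative least squares as an illustration. `s_anom` and `delta_I`
# (the difference curve I(s, E1) - I(s, E5)) are hypothetical inputs; the
# contrast factor (delta rho)^2 is absorbed into the arbitrary units.
library(nnls)

i0 <- function(x) ((sin(x) - x * cos(x)) / x^3)^2  # sphere form factor
m  <- function(R) (4 * pi / 3) * R^3               # sphere volume

R_grid <- seq(0.5, 20, by = 0.25)                  # candidate particle radii, nm
K <- outer(s_anom, R_grid, function(s, R) m(R) * i0(s * R))
fit <- nnls(K, delta_I)
plot(R_grid, fit$x, type = "h", xlab = "R, nm", ylab = "Dv(R), arb. units")
```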
We also used an alternative approach to analyze the size distribution of the Fe nanoparticles formed in the Dps protein, using the MIXTURE program [41]. In this approach, the scattering intensity I(s) of a mixture of K different components with different sizes is represented as a linear combination (6):

I(s) = Σk νk Ik(s),   (6)

where νk is the volume fraction of component k, Ik is the scattering intensity of this component, and K is the number of components. The MIXTURE program models the scattering of mixtures containing K scattering objects of varying shape and size using theoretical scattering from simple bodies (spheres, hollow spheres, ellipsoids, cylinders, etc.). Each object is characterized by its own volume fraction, average size, polydispersity distribution width, contrast, and, optionally, by the possibility of interparticle interactions. The experimental scattering pattern is approximated by a weighted combination of the calculated individual (partial) scattering curves from the components so as to minimize the residual χ² between the experimental and model data.
RESULTS AND DISCUSSION
SAXS studies of the Dps structure. We used the traditional SAXS technique to investigate the Dps structure in solution and to determine the SAXS invariants, such as the radius of gyration (Rg), Porod volume (Vp), distance distribution function p(r), maximum size (Dmax), and molecular mass (MM). Preliminary experiments showed the absence of concentration dependence and aggregate formation in solution at Dps concentrations ranging from 1.0 to 3.0 mg/ml. Based on this observation, we selected the sample with the concentration of 3.0 mg/ml for further SAXS experiments and data processing, as it produced a sufficiently informative SAXS curve with low noise in the scattering vector range 0.25 < s < 3.0 nm⁻¹ and a clearly pronounced shape factor (Fig. 2, curve 1).
The SAXS curve presented in Fig. 2 is typical for a solution of a monodisperse spherical protein. The distance distribution curve p(r) (Fig. 2, inset), which describes the shape of a scattering object [32] and was used in the DAMMIN program for the reconstruction of the low-resolution structure from the SAXS data [47], also indicates a spherical shape of the protein. From the p(r) profile it can be concluded that the spherical object is hollow, since the distance distribution function is asymmetric, with its maximum shifted to the right. Moreover, considering that the amplitude of the p(r) function is proportional to the electron density of individual parts of a scattering object, it can be suggested that the protein has a less dense surface layer, because there is a weakly scattering "tail" in the p(r) profile in the size range of ~8-9.6 nm. Based on the published data and our previous research, such scattering corresponds to the flexible N-terminal domains of the protein [24,50,51]. The low-resolution Dps structure (bead model) was reconstructed from the SAXS data using the DAMMIN program [47]. The shape reconstruction was based on the annealing algorithm within a sphere whose diameter equals the maximum size of the protein molecule, Dmax, which, in turn, was determined from the distance distribution function to be 9.6 nm. The results of the reconstruction are presented in Fig. 2b with grey beads. The residual χ² for the experimental data was 1.9, which indicates good agreement between the experimental SAXS curve and the scattering curve produced by the obtained low-resolution shape (Fig. 2a, curve 2). The bead model of the Dps structure has an inner cavity and corresponds to the known crystal structure of the protein (PDB ID: 1DPS) (Fig. 2b), although the bead model is slightly larger due to the scattering by the N-terminal domains, which are absent in the 1DPS atomic-resolution structure because their flexibility makes them impossible to crystallize [26].
The main structural characteristics of Dps determined directly from the SAXS curve without modeling (SAXS invariants) are presented in Table 1.
The data in Table 1, together with the reconstructed low-resolution shape of the protein, indicate its native state, correspond to the characteristics of this protein known from the literature [26,50,51], and make it possible to further use the protein for investigating the accumulation of iron atoms in the inner cavity of the Dps dodecamer.
ASAXS studies of iron nanoparticle formation in Dps. The experimental ASAXS curves recorded at different energies are presented in Fig. 3.
Analysis of the scattering curves for the samples with different iron content, measured at different radiation energies E1-7, revealed the following features of the Dps interaction with iron ions.
1. The scattering curves recorded at the energy E0, far from the iron absorption band (10 keV, λ = 0.124 nm), and at the energies E1,2 (7.100 and 7.110 keV) were virtually identical for all Fe-containing samples. (Hence, the SAXS curves recorded at E0 are omitted in Fig. 3 to avoid overloading the plot.)

2. The low iron content (50 atoms per Dps molecule) did not affect the general structural characteristics of the protein at the energies relatively far from the iron absorption band (K-band). It is important to emphasize once more that the preliminary structural analysis of the original Dps protein at the energy E0 revealed that the protein was in a state suitable for its use as a matrix for the formation of iron nanoparticles.
3. In contrast to the specimens with the low iron content (Dps-Fe50), Dps-Fe500 and Dps-Fe2000 demonstrated a significant increase in the scattering intensity at very small angles, i.e., exhibited high polydispersity due to the ability of iron, as a transition metal, to form stable complexes with, for example, protein amino groups, thus bringing closer together the protein chains of neighboring macromolecules. The protein matrix thus changed significantly upon interaction with the iron-containing compounds, which strictly required the use of ASAXS in this case.

4. In the samples with the high iron content (Dps-Fe500 and Dps-Fe2000), the presence of large metal nanoparticles was expected due to the formation of metal-containing protein aggregates.
5. The scattering curves recorded at different photon energies E1-7 demonstrated a certain dependence on the energy of the incident beam in the range 0.5 < s < 1.3 nm⁻¹, with a minimum at s = 0.95 nm⁻¹ (Fig. 3). Although this dependence was most pronounced for Dps-Fe2000, for all iron-containing samples it was possible to calculate the difference between the scattering curves obtained at different energies, Δk(s) = I(s, E1) − I(s, Ek), which was proportional to the scattering by the resonance Fe atoms and could be used for the analysis of the size distribution of the metal nanoparticles formed in the protein.
6. The maximum of the anomalous signal for all Fe-containing samples was observed at the photon energy E5 = 7.128 keV, which is close to the K-band of Fe absorption (Fig. 1). Accordingly, the scattering curve at this energy was used to evaluate the resonance signal Δ5(s) = I(s, E1) − I(s, E5) originating only from the iron nanoparticles formed in the Dps inner cavity. The criterion that the difference signal is indeed due to Fe particles is the absence of the signal from the protein matrix, i.e., from the Dps form factor, in the obtained anomalous scattering curves. This criterion was met for all Fe-containing samples (Fig. 4a).
The obtained anomalous-signal curves were then used to calculate the volume size distribution functions DV(R) of the iron oxide nanoparticles formed in the internal cavities of the Dps dodecamer (Fig. 4b). It is important to note that the ASAXS curves were well pronounced for Dps-Fe500 and Dps-Fe2000, but not for Dps-Fe50 (weak signal, high noise); hence, the size distribution function calculated from the latter curve should be considered only as an estimate.
Analysis of the volume size distribution functions (Fig. 4b) revealed the following regularities. Predominantly small nanoparticles, ~2 nm in size, were formed at low iron ion concentrations. The increase in the iron content to 500 atoms per protein molecule resulted in the additional emergence of 4- to 5-nm particles (shoulder in the DV(R) function), as well as minor amounts of larger structures (up to 20 nm) associated with iron oxide nanoparticles in protein aggregates. At the iron content of 2000 atoms per dodecamer, mostly 4- to 5-nm particles were formed, together with large Fe-containing structures, while the 2-nm particles were absent. This implies that at the high iron concentration, the entire inner cavity of the protein is filled with metal nanoparticles. Their size is limited by the size of the Dps dodecamer inner cavity and determined by the iron concentration in the solution, which is important when using this protein matrix for the formation of solution-stabilized metal nanoparticles.
Similar results were obtained using an alternative approach for determining the fractional composition of iron nanoparticles in the protein dodecamer with the MIXTURE program, under the assumption that the formed particles are spherical. For each specimen, three fractions with a broad size range (Dmin-Dmax) were specified. For each fraction, the average particle size and the volume fraction were calculated using the MIXTURE program (Table 2).
Similar to the determination of the volume size distribution DV(R) with the interactive GNOM program, the use of the MIXTURE program revealed a predominance of 2-4-nm nanoparticles in all samples, together with a small amount of larger formations. Since both methods determine the volume fractions of the nanoparticles (Table 2), in quantitative terms the number of large particles was low even for the high iron concentration in the initial solution, but these large particles contributed significantly to SAXS due to their size (see Fig. 3, b and c). The SAXS curves calculated from the fractional composition of each sample presented in Table 2 are in good agreement with the obtained anomalous signals (Fig. 5).
Multiphase ab initio modeling based on the SAXS data. According to the published data, the iron-binding sites are located not only in the inner cavity of the Dps protein, but also at the surface of the Dps dodecamer [28-30,52]. Although nanoparticles do not form on the dodecamer surface, each monomer contains up to four amino acid residues capable of metal binding in the region of the N-terminal domains, so the bound iron atoms contribute to the scattering of the entire macromolecule and to the size distribution, in particular, due to their high electron density. Ab initio multiphase modeling (the MONSA program) is a SAXS technique that uses the difference between the electron densities of different parts of the scattering object to localize the structural features associated with this difference [47]. The calculations are based on the differences between the electron densities of the protein and metal components of the complex, as well as on the ratio between the volumes of these components. In order to obtain a two-phase protein-metal model, we used the original SAXS curves of the iron-free and of the iron-containing protein.
We selected the Dps-Fe500 specimen for the modeling, as it provided reliable detection of the nanoparticles formed in the protein inner cavity while still allowing localization of the significantly smaller metal-containing groups on the dodecamer surface against the background of strong scattering by the large particles. The results of the modeling are presented in Fig. 6.
The multiphase modeling allowed us, for the first time, to visualize the location of iron atoms in the protein surface layer and, at the same time, to confirm the presence of most metal ions in the central cavity of the Dps dodecamer. We observed good agreement between the experimental data and the curves calculated for the bead models, with χ² = 2.1. This generalized model provides the most comprehensive picture of the formation of iron oxide nanoparticles in this ferritin-like protein that also protects the bacterial genome.
CONCLUSIONS
Not all mini-ferritins protect bacterial genomes during oxidative stress, and not all nucleoid-associated proteins are capable of detoxification and accumulation of iron atoms. Hence, the multifunctionality of the ferritin-like protein Dps is unique, and one of its consequences is the stable resistance of bacteria to drugs and antibiotics. Undoubtedly, the main feature of Dps that attracts considerable interest of researchers is its ability for in cellulo biocrystallization with DNA. The persistent microbial cells formed via this mechanism are resistant to numerous adverse environmental factors; they can retain their viability for a long time and give rise to a new population with preserved pathogenic properties under favorable conditions [53]. That is why DNA "archiving" by in cellulo co-crystallization with Dps requires attention and detailed investigation. However, it is equally important to study the ability of Dps for detoxification, i.e., the transformation of toxic Fe2+ into non-toxic Fe3+ ions, followed by the accumulation of the trivalent iron in the protein inner cavity, which is protected from external exposure. These two functions of Dps are closely interrelated [30]. In our previous studies, we investigated the formation of Dps-DNA co-crystals, in particular, those formed with the involvement of divalent metal ions [37,38,50]. In this work, we studied the second functional property of Dps, namely, the accumulation of iron ions in the inner cavity of this mini-ferritin. Using the ASAXS technique, we were able to demonstrate that 2- to 4-nm nanoparticles were formed mostly within the protein molecule, indicating that the growth of the metal nanoparticles was limited by the spatial characteristics of the protein inner cavity. However, a certain number of iron ions was found in the Dps surface layer. This layer is very important for the protective functions of the protein, because the flexible, lysine-enriched N-terminal domains located in this layer define the type of interactions of Dps with DNA in solution, being responsible for the interaction with the negatively charged sugar-phosphate backbone of deoxyribonucleic acid [15,22-24]. Therefore, the N-terminal domains must be accessible to bind DNA [50,51,53]. However, iron atoms could bind to the negatively charged amino acids (Asp and Glu) in the same N-terminal regions of Dps. Iron is a transition metal and can form stable complexes, for example, with the amino group nitrogen in amino acids. Therefore, divalent iron ions could form bonds with the neighboring protein chains, pulling them towards each other and stabilizing the surface regions of the dodecamer. In this case, the N-terminal domains are pressed to the protein surface and the Dps-DNA crystal complexes are not formed [50,54]. However, in the presence of a chelating agent (e.g., EDTA), the bonds between the iron ions and the amino acids on the dodecamer surface are disrupted, which can restore the Dps-DNA interactions without affecting the protein inner regions where the nanoparticles have formed. Hence, the charge of the iron ions in the composition of Dps (on the surface and in its core) is of great importance and requires detailed investigation.
SAXS structural studies do not allow determination of the valence of the iron atoms. By definition, the function of ferritin-like proteins, including Dps, is the oxidation of Fe2+ to the trivalent state, followed by accumulation of this non-toxic form in the protein central cavity. However, it was demonstrated that the mechanism of removal of the divalent iron from the solution and its oxidation at the respective protein sites is more complicated. In particular, the use of Mössbauer spectroscopy for the determination of the iron ion charge in Dps demonstrated that this charge was non-uniform: the protein contained iron in the form of magnetite FeO·Fe2O3, i.e., a mixture of di- and trivalent iron ions [55]. The authors concluded that this composition of the Dps inorganic core supports its dodecamer structure.
In general, the formation of nanoparticles containing atoms of various metals (not only iron but also, for example, cobalt and possibly others) in the Dps inner cavity is of certain practical interest. Hence, the studies carried out in this work can be useful not only for understanding the unique properties of Dps, but can also contribute to the development of solution-stabilized biocompatible nanocapsules with magnetic properties. The plasticity and structural stability of the Dps protein matrix provide this possibility [56].

Acknowledgments. The authors thank A. Gruzinov, European Molecular Biology Laboratory (EMBL), for conducting the SAXS experiments and for valuable discussion.
Funding. This work was supported by the Russian Science Foundation (project no. 18-74-10071).
Ethics declarations. The authors declare no conflicts of interest in financial or any other sphere. This article does not contain description of studies with human participants or animals performed by any of the authors.
Open access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
"Chemistry"
] |
Exceptional points in composite structures consisting of two dielectric diffraction gratings with Lorentzian line shape
Using the scattering matrix formalism, we derive analytical expressions for the eigenmodes of a composite structure consisting of two dielectric diffraction gratings with a Lorentzian profile in reflection. Analyzing these expressions, we prove the formation of two distinct pairs of exceptional points, provide analytical approximations for their coordinates, and by rigorous simulation demonstrate eigenmode interchange as a result of encircling said exceptional points.
Introduction
Exceptional points (EPs) are degeneracies in non-Hermitian systems, which appear when several eigenmodes coalesce [1]. The properties of such systems change dramatically in the vicinity of EPs and lead to such phenomena as enhanced optical sensing [2], loss-induced transparency [3], unidirectional transmission or reflection [4], and lasers with reversed pump dependence [5] or single-mode operation [6]. One promising feature of an EP is that adiabatically encircling it can result in an exchange of the eigenstates. Such behavior is expected to have applications in asymmetric mode switching [7], on-chip non-reciprocal transmission [8], and light stopping [9].
As a rule, exceptional points are studied by analyzing the eigenvalues and eigenvectors of the proper Hamiltonian [10] or the dispersion relation of the eigenmodes [11]. In this work we demonstrate the formation of EPs using an ω-kx resonant approximation of the Lorentzian line shape. By obtaining analytical expressions for the eigenmodes of a composite structure consisting of two dielectric diffraction gratings (DGs) with a Lorentzian line shape, we show the formation of two distinct pairs of EPs, which can be reached by varying the distance l between said stacked gratings. This theoretical conclusion is supported by rigorous calculation results that show eigenmode swapping as a result of encircling said EPs in the l-kx parameter space.
ω-kx Lorentzian line shape in composite structures
A scattering matrix S relates the complex amplitudes of the plane waves incident on the diffraction structure from the superstrate (uI) and the substrate (dI) regions to the amplitudes of the transmitted (T) and reflected (R) diffraction orders [12]. For a horizontally symmetric subwavelength DG, which allows only the 0th reflected and transmitted diffraction orders to propagate, the S matrix takes the form (1):

S1 = | T1(ω, kx)  R1(ω, kx) |
     | R1(ω, kx)  T1(ω, kx) |,   (1)

where R1(ω, kx) and T1(ω, kx) are the complex reflection and transmission coefficients of the DG for a unit-amplitude incident wave. It is worth noting that the scattering matrix (1) does not describe the near-field effects associated with the evanescent diffraction orders of the DG.
In this paper we consider the elements of the scattering matrix (1) to be functions of the angular frequency ω and the in-plane wave vector component kx of the incident light. Let the DG have a Lorentzian line shape profile; in this case the corresponding reflection and transmission coefficients can be approximated as in (2) [13]. As the DG under consideration we propose the dielectric structure shown in figure 1a. The agreement between its reflection spectra calculated using rigorous coupled-wave analysis and those calculated using the approximations (2) confirms that said DG has a Lorentzian reflection profile (figure 1b); see the caption of figure 1 for the parameters of the DG as well as the parameters of the approximation. Combining the scattering matrices of two such DGs separated by a distance l (3), we obtain the scattering matrix S2 of the composite DG with the reflection and transmission coefficients given by (4) and (5), where kx,env(ω) = (ω/c) n sin θ, θ being the angle of incidence.
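For two gratings with only the 0th orders propagating, the S-matrix composition reduces to the familiar Airy (Fabry-Perot) formulas, which is a convenient way to reproduce the composite coefficients numerically. The sketch below assumes scalar coefficients and a round-trip phase ψ = kz·l; the Lorentzian parameterization is an illustrative stand-in, not the fitted parameters of figure 1.

```r
# Sketch: composing two identical DG scattering matrices separated by distance l,
# assuming only the 0th orders propagate (Airy / Fabry-Perot composition).
# The Lorentzian coefficients are illustrative stand-ins for approximation (2).
r_dg <- function(w, wp, a, r0) r0 + a / (w - wp)  # resonant reflection (hypothetical form)
t_dg <- function(w, wp, b, t0) t0 + b / (w - wp)  # resonant transmission (hypothetical form)

composite_RT <- function(r1, t1, r2, t2, psi) {
  ph <- exp(2i * psi)                     # round-trip propagation phase, psi = kz * l
  denom <- 1 - r1 * r2 * ph               # multiple-reflection (Airy) denominator
  list(R = r1 + t1^2 * r2 * ph / denom,   # composite reflection coefficient
       T = t1 * t2 * exp(1i * psi) / denom)  # composite transmission coefficient
}
```

The composite eigenmodes discussed below are the complex poles of these coefficients, i.e., the roots of the denominator 1 − r1·r2·exp(2iψ).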
Exceptional points
According to (5), the eigenmodes of the composite structure (the complex poles of the reflection and transmission coefficients) take the form given by (6). Let us choose a contour in the l-kx parameter space centered at the analytically estimated EP coordinates (figures 2b and 2d). After encircling the EPs counterclockwise along said contours, one can notice that the two complex poles, rigorously calculated using RCWA and corresponding to the same square root in (6), swap places (figures 2c and 2e). This interchange of eigenmodes is an intrinsic feature of EPs and proves their existence. A more accurate estimate of the EP locations can be obtained by solving (7) while accounting for ψ being a function of ω and θ.
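The swap is a consequence of the square-root branch point in (6). A minimal toy model, not the grating equations themselves, makes this explicit: the two branches ±√z of a non-Hermitian two-level system coalesce at z = 0, and following one branch continuously around a loop lands on the other.

```r
# Toy illustration of eigenmode exchange around an EP: the eigenvalue branches
# lambda(z) = +/- sqrt(z) coalesce at z = 0 (the EP). Tracking one branch
# continuously along the loop z = 0.5 * exp(i * phi) ends on the other branch,
# mirroring the pole swap seen in figures 2c and 2e. Generic model, not eq. (6).
phi <- seq(0, 2 * pi, length.out = 400)
lam <- sqrt(0.5) * exp(1i * phi / 2)   # continuous branch of sqrt(0.5 * exp(i*phi))
cat("start:", format(lam[1]), "  end:", format(lam[length(lam)]), "\n")
# start = +sqrt(0.5), end = -sqrt(0.5): after one loop the eigenmodes have interchanged.
```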
Conclusion
By means of the scattering matrix formalism we derived analytical expressions for the eigenmodes of a composite structure consisting of two dielectric diffraction gratings with a Lorentzian profile in reflection. Using said approximations we formulated a criterion for the grating eigenmodes that, if satisfied, allows the formation of two distinct pairs of exceptional points. Rigorous calculation results show eigenmode interchange upon encircling said EPs.
"Physics"
] |
NEXCADE: Perturbation Analysis for Complex Networks
Recent advances in network theory have led to considerable progress in our understanding of complex real-world systems and their behavior in response to external threats or fluctuations. Much of this research has been invigorated by demonstration of the 'robust, yet fragile' nature of cellular and large-scale systems transcending biology, sociology, and ecology, through application of network theory to diverse interactions observed in nature, such as plant-pollinator, seed-dispersal agent and host-parasite relationships. In this work, we report the development of NEXCADE, an automated and interactive program for inducing disturbances into complex systems defined by networks, focusing on the changes in global network topology and connectivity as a function of the perturbation. NEXCADE uses a graph-theoretical approach to simulate perturbations in a user-defined manner, singly, in clusters, or sequentially. To demonstrate the promise it holds for broader adoption by the research community, we provide pre-simulated examples from diverse real-world networks, including eukaryotic protein-protein interaction networks, fungal biochemical networks, a variety of ecological food webs in nature, as well as social networks. NEXCADE not only enables network visualization at every step of the targeted attacks, but also allows risk assessment, i.e., identification of nodes critical for the robustness of the system of interest, in order to devise and implement context-based strategies for restructuring a network, or to achieve resilience against link or node failures. The source code and license for the software, designed to work on a Linux-based operating system (OS), can be downloaded at http://www.nipgr.res.in/nexcade_download.html. In addition, we have developed NEXCADE as an OS-independent online web server freely available to the scientific community without any login requirement at http://www.nipgr.res.in/nexcade.html.
Introduction
Complex dynamical systems govern the patterns and processes observed across all domains of life, ranging from molecular frameworks within our cells to large-scale ecological communities, even globally interlinked social associations, transportation networks and internet communication [1,2,3]. Such systems are increasingly being conceptualized as interconnected networks using graph theory as a unifying language for exploration of a given entity in the context of its structural or functional neighborhood [4,5,6]. This is an interdisciplinary approach that combines high-throughput experimental techniques with computational mathematical analysis. In recent years, it has been successfully employed in almost all kinds of system-wide data exploration efforts for quantitatively defining the principles governing organizational complexities [7,8]. Well-documented applications of the network paradigm to systems as diverse as inter-atomic chemical bonding networks [9,10], viral infectome or human diseasome networks [11,12,13], co-authorship networks [14], and many others, highlight the success and efficacy of this method in providing insights towards a more complete understanding of the system. Systems biology (or network science) is now witnessing a tremendous interest in the 'robust, yet fragile' nature of complex systems, arising from the recognition that they are not immune to attack or failure [15,16,17,18]. Cellular malfunctions and diseases that often arise from perturbations in the intermolecular communication channels between bio-molecules [19,20], or terrorist attacks that can instantly impair international air traffic and communication [21], have revealed the necessity and importance of predicting the behavior of a system in response to different kinds of disturbances. It has been observed that catastrophic changes in the overall state of a system can ultimately derive from its organization, or from linkages that may often be latent and unrecognized. Herein lies the strength of computational systems biology and graph-based mathematical tools, which can enable prediction of global structural reorganizations upon perturbation.
Although perturbation analyses have now become routine exercises in both experimental and bioinformatics data interpretation, there is currently no automated mechanism for simulating the technique. Induced perturbations may be small, large, local, global, single, grouped, or sequential; they may be loss-based or modifications of existing functionalities, as in the outage of an interface in a power-grid network. For example, analysis of the yeast proteome network has shown that the likelihood of lethality upon node loss (or the phenotypic consequence of a single gene deletion) is affected to a large extent by the topological position of its protein product in the interaction network [22]. Similarly, loss of an edge, as in the case of disruption of hydrogen bonds by strong electrostatic repulsion, is sufficient to destroy the stability of the cross-beta network in amyloid fibrils [9]. Analysis of the E. coli metabolic network has shown that a non-hub node can also be vital to the stability of the network if it connects one or more key structural or functional modules [23]. The effects of paired perturbations can be equally as informative as those of single perturbations, such as in the case of synthetic lethal interactions, where loss of both nodes in a genetic network can be fatal to the cell [24,25]. Extending the same concept, insights from the analyses of grouped perturbations can help in understanding the roles played by the nodes in that group, arising from modular functional units within the graph structure. In contrast to these real-world perturbation scenarios, sequential perturbations are studied more as 'simulations' to understand the possible effects of cascading disturbances on complex systems. Simulation of sequential perturbations is a standard technique employed in ecological network analyses, where the global biodiversity crisis and rapid population declines have galvanized investigations into the possible cascading effects of species extinctions and the quantitative estimation of species loss [26,27,28]. This approach involves targeted removal of each entity from a given network in a sequential manner based on a specific attribute of the targeted node, most commonly its degree, or the number of links [29,30,31]. In all such analyses, the respective networks may show robust (perturbation-independent) or non-robust (perturbation-dependent) behavior in response to different perturbations. The outcomes of such studies provide insights into 'network resilience', i.e., the ability of a system to achieve fault tolerance against failures of its components [32,33].
These examples illustrate the need for the development of appropriate tools for the analysis and modeling of perturbations in real-world networks, since a large number of potential users do not have the requisite computational skills or mathematical background to carry out such analyses for their data. Accordingly, a broad range of academic and commercial platforms and tools are available for generic analyses, comparisons and visualization of networks and their properties. However, one of the major lacunae in this field is the assessment of network resilience or susceptibility upon perturbation [34,35]. Such a functional limitation becomes very important in view of the fact that this area is fast becoming one of the most prominent areas of network science, as is also evident from the increasing number of publications dealing with perturbations and their effects [NCBI PubMed Jan 2012 data]. However, in most cases, the perturbation analyses involve physics, mathematics and synthetic data, whereas it is equally important to focus on empirical real-world data, since the architecture of complex biological, social and economic networks shows topologies differing radically from random networks [2].
Based on our insights from an extensive analysis of the architecture of more than a hundred large, publicly available real-world networks and their responses under attack, we have developed NEXCADE, a program for the simulation and analysis of perturbations in a complex system, and for monitoring the altered system attributes at every step, in order to determine how associated perturbations are either generated or propagated from the previous event. Apart from an existing Cytoscape plugin that assesses the effects of protein abundance changes on protein-protein interaction (PPI) networks [35] from within Cytoscape [36], NEXCADE, to our knowledge, is the only software available to date that enables diverse kinds of perturbation analyses on all types of networks. We provide NEXCADE in two modes: an online web server for quick testing of the program's capabilities and a downloadable standalone unix package. The NEXCADE software is designed to automate the analysis of the vulnerability of networks based on the quantitative assessment of the impact of small- or large-scale, static or dynamic perturbations. Despite the seeming differences between different types of real-world networks, we find that perturbations can affect these systems in very similar ways, since real-world networks share several architectural properties, especially scale-free topology, high clustering coefficients, short average path lengths and greater-than-expected diameters [4]. NEXCADE would benefit users transcending varied disciplines; from a plant physiologist comparing gene regulatory networks across different species, or a biochemist searching for drug targets, to a restoration ecologist, or even a banker interested in identifying critical risk areas in a financial network.
Network Concepts and Indicators
A graph is defined as a non-empty set of nodes, a set of edges or links, and a map that assigns two nodes to each link [37]. We denote a network as a binary undirected graph G = (V,E), where V is the set of nodes (vertices) while E is the set of undirected edges (links) between two nodes if they are functionally linked to one another. Nodes of the network may represent genes, proteins, species or any entity of interest. In functional terms, an edge signifies relationships or ties or functional interaction between two nodes. Edges between a vertex and itself are not included. In graphical terms, each element of the set E is a pair of elements of set V. Although in many situations, links can be assigned a direction and a positive or negative weight to designate the strength of interaction, NEXCADE simplifies such graphs and uses only binary pairwise connections for analyses. For a given network G, Network Size is denoted as S{G} and calculated as the total number of nodes in G. The Degree k of each node i is calculated as the total number of vertices adjacent to node i, and k(i) = |N(i)|, where N(i) is the neighborhood of node i, or the set of vertices adjacent to i. The density of the graph measures how many edges are in set E compared to the maximum possible number of edges between vertices in set V. For an undirected network that has no loops and can have at most |V|*(|V| − 1)/2 edges, the density is measured as 2*|E|/(|V|*(|V| − 1)). The average degree of the network is K{G} = sum(k(i))/S{G}, where k(i) is the degree k of each node i as explained above, and S{G} is the size of the network. The distance d(i,j) between two vertices i and j is the length of the shortest path from i to j, considering all possible paths in G from i to j. The distance between any node and itself is 0. If there is no path from i to j, then d(i,j) is infinity.
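These definitions translate directly into igraph, the same R library NEXCADE uses at its backend. A minimal sketch, with a random placeholder graph standing in for real data:

```r
# Basic indicators defined above, computed with igraph (placeholder graph).
library(igraph)

g <- sample_gnm(n = 50, m = 120)       # toy random network standing in for real data
S <- vcount(g)                         # network size S{G}
K <- mean(degree(g))                   # average degree K{G}
dens <- 2 * ecount(g) / (S * (S - 1))  # density; identical to edge_density(g)
d12 <- distances(g, v = 1, to = 2)     # shortest-path distance d(1,2); Inf if disconnected
```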
Input Format
For input, users can select between different kinds of undirected and un-weighted datasets for analysis, such as protein-protein interaction data, co-expression data, bipartite ecological webs of interactions between organisms, and social network data. In this manner, users are prompted at the outset to classify their data in order to delineate the terms used thereafter, throughout the analyses. For example, a node may be a gene, protein or a species, depending upon the type of network being studied. Similarly, an edge may be a relationship between two individuals in a social network or an interaction between two ORFs in a PPI network. Data is entered into NEXCADE in a simple and user-friendly format, as a list of interactions, one per line, separated by a whitespace, which is then converted into graph format, such that each line of input defines an edge for the network that connects the node listed in the first column with the node listed in the second column. In this manner, information about network components and their interactions is read in as undirected and un-weighted. Loops, if any, are removed, and each output line denotes two nodes that are connected to each other by an edge.
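A sketch of this input step in igraph terms (the file name is hypothetical; NEXCADE performs the equivalent internally):

```r
# Read a whitespace-separated edge list (one interaction per line) and build
# an undirected, un-weighted graph; loops and duplicate edges are removed.
library(igraph)

edges <- read.table("interactions.txt", col.names = c("from", "to"),
                    colClasses = "character")
g <- graph_from_data_frame(edges, directed = FALSE)
g <- simplify(g, remove.multiple = TRUE, remove.loops = TRUE)
```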
Network Preprocessing & Visualization
In this step, each input graph is scanned for basic topological statistics, including structural properties at the vertex, edge and network levels, using in-house Fortran programs and custom-made shell scripts that incorporate the graphical capabilities of IGRAPH [38] within R CRAN (http://www.r-project.org/) for complex network research. Verification of whether a graph is connected is an essential preprocessing step. A graph that is fully connected has exactly one connected component, consisting of the whole graph. In disconnected graphs, each connected component of an undirected graph is a sub-graph in which symmetric and transitive paths connect any two vertices to each other, and which is connected to no additional vertices. The number of connected components is an important topological invariant of a graph that plays a key role in the definition of graph toughness or robustness, and we use this attribute to color the graph during visualization. The connected components of a graph are computed using breadth-first search, beginning at some vertex v and finding the entire connected component containing v (and no more) before completing. To find all the connected components of a graph, loops are run through its vertices, starting a new search whenever the loop reaches a vertex that has not already been included in a previously found connected component. Finally, the network nodes are assigned colors based upon the connected component they belong to, and visualization of optimal component distribution is enabled using the Fruchterman-Reingold vertex layout algorithm [39].
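The component-colouring and layout step can be reproduced along these lines (a sketch continuing from the graph g loaded above):

```r
# Colour nodes by connected component and plot with the Fruchterman-Reingold
# layout, mirroring the preprocessing described above (uses g from the
# previous sketch).
library(igraph)

comp <- components(g)                            # connected components (BFS-based)
V(g)$color <- rainbow(comp$no)[comp$membership]  # one colour per component
plot(g, layout = layout_with_fr(g), vertex.size = 5, vertex.label = NA)
cat("connected components:", comp$no, "\n")
```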
For every vertex or node in the network, four topological centrality measures are calculated: the degree centrality k, betweenness centrality, closeness centrality and eigenvector centrality [40]. The vertex betweenness can roughly be defined as the number of geodesics (shortest paths) going through a vertex v and is measured as sum(g_ivj/g_ij, i ≠ j, i ≠ v, j ≠ v), where G is the graph, v is the vertex in question, g_ij is the number of shortest paths from i to j, and g_ivj is the number of those paths passing through v. The closeness centrality roughly measures the number of steps required to access every other vertex from a given vertex; for a given vertex, it is defined as the inverse of the average length of the shortest paths to/from all the other vertices in the graph. If there is no (directed) path between vertex v and i, then the total number of vertices is used in the formula instead of the path length. Eigenvector centrality corresponds to the values of the first eigenvector of the graph adjacency matrix; it may, in turn, be interpreted as arising from a reciprocal process in which the centrality of each vertex is proportional to the sum of the centralities of the vertices directly connected to it. In general, vertices with high eigenvector centralities are those that are connected to many other vertices which are, in turn, connected to many others, and so on [41].
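All four measures are one-liners in igraph; a sketch producing the sorted per-node table described above:

```r
# The four per-node centrality measures reported by NEXCADE (sketch; uses g
# from the earlier sketches). Note: closeness() warns on disconnected graphs,
# for which NEXCADE substitutes the vertex count for missing path lengths.
library(igraph)

cent <- data.frame(
  degree      = degree(g),
  betweenness = betweenness(g),
  closeness   = closeness(g),
  eigenvector = eigen_centrality(g)$vector
)
head(cent[order(-cent$degree), ])   # sorted list, highest-degree nodes first
```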
Perturbations of Network Components
In graphical terms, we define a perturbation as a random or targeted loss of one or more nodes or edges from a given network. Loss of a node indicates the deletion of an entity, while the loss of an edge implies the destabilization of a function between two existing entities. The assumption is that each node in the network can function only if it has at least a single support link connecting it to another node in the network. NEXCADE employs the IGRAPH library at its backend to carry out each perturbation event, while using a variety of shell scripts and R functions to compute and present topological effects and to plot graphs. Each perturbation removes one or more nodes or links from the network, and we find the fraction of nodes that remain functional at the end of the process. For example, if perturbation of an entity X causes another entity Y to lose its entire support link to the remaining sub-network, it (Y) is considered to become non-functional, or to have undergone 'secondary extinction' in association with entity X. In this manner, the behavior of the network after each successive perturbation is monitored to measure robustness or susceptibility, in terms of the "cost" associated with each vertex removal [17,26], which in turn may signify a change in any of the local or global network properties, or additional perturbations generated by propagation of the previous event(s), such as secondary extinctions or associated co-extinctions as explained above.

An additional and interesting dimension has been added to NEXCADE for assessing how a given network reacts to the random removal of any one node at a time. In this approach, all nodes are taken out and put back into the network, one at a time, and topological properties are calculated and plotted across the removal of all the individual nodes, while the network size remains constant as the complete network minus one. These curves can then be compared across networks to assess how different networks behave upon random single perturbations.

The cascading or 'targeted' perturbation approach involves simulations of random or ordered primary extinctions based on a given node property, such as the number of links or 'degree'. In summary, the nodes of the input network are sorted and ranked by degree. Each of these nodes is then systematically removed in the sorted order, either from the highest-connected node to the least-connected node or vice versa. For random cascades, all nodes are shuffled and then removed one after the other in a random sequence. Randomization can be repeated as many times as desired for comparative purposes. After every single node removal, the network is analyzed in terms of the various properties described above. The reduced network is then used for carrying out removal of the next node in the list, followed by complete analysis, and so on. Finally, the structural integrity of the network is predicted for each loss sequence based on the threshold period for complete collapse, and the changes in critical global topological attributes during the entire cascade are plotted as a function of the percentage of nodes perturbed. This method has been well established over the last decade, and the response of the network in terms of the resulting secondary extinctions or other network properties can be used to infer the significance of the node attribute being studied [29,30,31]. The sub-network remaining after each subsequent perturbation to the original input network can be visualized as described above.
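A degree-targeted cascade of this kind can be sketched in a few lines of igraph code. The version below is an illustration, not NEXCADE's actual implementation: it assumes the named, undirected graph g built earlier, ranks nodes once by their initial degree, and counts nodes left without any support link as secondary extinctions.

```r
# Degree-targeted extinction cascade (sketch): remove nodes from most to least
# connected, treating nodes left with no support link as secondary extinctions.
library(igraph)

cascade <- function(g) {
  stopifnot(!is.null(V(g)$name))                        # requires named vertices
  targets <- names(sort(degree(g), decreasing = TRUE))  # rank once by initial degree
  res <- data.frame(step = integer(), secondary = integer(), components = integer())
  for (v in targets) {
    if (!(v %in% V(g)$name)) next          # already lost as a secondary extinction
    g <- delete_vertices(g, v)             # primary extinction
    orphans <- V(g)$name[degree(g) == 0]   # no remaining support link -> co-extinct
    if (length(orphans) > 0) g <- delete_vertices(g, orphans)
    res <- rbind(res, data.frame(step = nrow(res) + 1,
                                 secondary = length(orphans),
                                 components = if (vcount(g) > 0) components(g)$no else 0))
    if (vcount(g) == 0) break              # complete network collapse
  }
  res
}

curve <- cascade(g)                        # co-extinction curve data
plot(100 * curve$step / vcount(g), cumsum(curve$secondary), type = "s",
     xlab = "% primary extinctions", ylab = "cumulative secondary extinctions")
```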
NEXCADE can also plot multiple perturbation cascade curves together, enabling comparative analysis of one extinction sequence with the others.
Program Automation & Testing
In-house Fortran programs and shell scripts were used to streamline and automate the entire analytical process, from input data scanning, network preprocessing for topological and statistical properties, and visualization using R source scripts, to simulations of single, grouped and sequential perturbations and the comparison and/or plotting of network attributes after simulation. Figure 1 depicts a flowchart of the complete pipeline organization of NEXCADE. The program was converted into a web server by incorporating R functions and libraries into CGI on Apache (Linux), with additional code built in to enable multiple independent instances of the program, so that up to 99 users may access the program simultaneously. Figure 2 depicts a schematic overview of the NEXCADE query submission protocol. The source code of the software (Data S1) is being released under the GNU General Public License (v2, 1991) (Data S2), as a standalone unix package along with the online web server. It only requires pre-installation of the freely available IGRAPH R CRAN package. Detailed instructions for set-up and usage are provided within the package (also in Data S3).
Example Datasets
An example dataset is provided with the distributed source code for users to test the command-line version of the program, along with detailed usage instructions and a description of the output. In addition, NEXCADE runs were simulated on five publicly available example networks and one original unpublished dataset. These six networks were selected to represent various kinds of biological and social interactions, and to depict the efficacy of NEXCADE in the analysis of diverse webs. Each network is given a four-letter reference code (depicted in square brackets below), which is used throughout the text. These six networks include the largest connected component of the Rattus rattus protein-protein interaction network [PRAT], the Arabidopsis thaliana genetic interaction network [ATHG], and the yeast RNA-protein interaction datasets [YPRN], downloaded from BioGrid (release 2.0.33) [42]. We also include the well-studied dolphin community social network [43] [DOLF]. In addition, NEXCADE was applied to two ecological networks, including a seed dispersal network [GNIC] from the tropical rainforests of Great Nicobar Island, India (SB Ph.D. thesis) and a pollinator network [MEMM] that represents the structure of a plant-pollinator food web [44]. The outcome of NEXCADE implementation on these networks, along with their references, is provided on the respective web pages of each network in the Browse Webs section of NEXCADE. NEXCADE has two other parts, comprising 'Browsing' of example datasets and a 'Tutorials' section that illustrates the methodology, ease of operation and the range of situations and outcomes available, by steering users stepwise through the various options. Features of the four individual sections of analysis are described below using case studies from the pre-simulated networks.

Visualization

Figure 3 depicts a screenshot of the visualization page for a given network. Each input network can be visualized as an image containing filled circles connected by lines, the circles representing the nodes of the input network, which may be genes/proteins/species, or any interactor of interest. The lines connecting a pair of nodes represent an interaction between the two nodes. At a single glance, users can have an immediate perception of whether the network is completely connected or fragmented into multiple disconnected clusters, based on node color. As can be seen from this figure, nodes are colored by compartments, such that all nodes that lie in a single connected component of the network have the same color. Nodes in different colors belong to individual disconnected compartments, members of which do not have any interaction with one another in the dataset. Users have the choice to label nodes if required. The visualization section further allows an examination of the basic topological indicators of the network and its components. For example, GNIC is a completely connected network constituted by 812 interactions between 219 species of trees, birds and mammals. It is a highly cohesive network with an average degree of 7.4 and an average path length of 3. For each node in the network, NEXCADE measures and displays a sorted list of degree centrality, betweenness centrality, closeness centrality and eigenvector centrality values. A high-quality network image can also be downloaded in vector format for obtaining resolution-independent figures.
Apart from providing information about these key aspects of network topology, the main purpose of this section of NEXCADE is to assist users in the selection of optimal nodes for perturbation, i.e., whether one would like to maximize the preferential perturbation or minimize it, based on the observation that the effect of preferential or targeted perturbations is strongly influenced by topological dependencies such as vertex degree. While making rational decisions about what kinds of perturbations to simulate, users can select one or more entities in a single, grouped, random or sequential manner. Perturbations can be simulated on both interactors and interactions in the network, as described below with examples.
Single Perturbations
Depending upon the input dataset, the removal of a single node or edge may represent mutation in a protein, knock-out of a gene, extinction of a species, or even the elimination of a relationship, such as correlated expression between two genes in the dataset. In this section of NEXCADE analysis, users can select any node of their choice, selectively remove an edge or interaction of the node under consideration, and visualize the resulting network. The consequence of such a perturbation can be assessed in terms of changes in the overall node- and network-level attributes, as shown in Figure 4, as well as in terms of additional perturbations that are either generated or propagated from the initial event. For example, single perturbation analysis shows that removal of the species Daucus carota, which is pollinated by several dipteran and hymenopteran insects, would have disastrous consequences for the plant-pollinator network MEMM, resulting in at least ten associated secondary extinctions. Extinction of Daucus carota drastically affects network size and density, leading to imminent co-extinctions of many other species in the network. As can be seen in Figure 4, the reduced network has a much smaller size, and higher values of average path length, average degree and density, although it retains its single connected character, visible in the common color of all nodes in the sub-network. It may be noted that the targeted species has the highest degree centralities in the network, and removal of such a key node is expected to have disastrous consequences. However, it may not be correct to undermine the relevance of a node just because it has few connections. Sparsely connected nodes are sometimes connectors of critical network modules or functional clusters, and their removal can adversely affect the system. Such an effect has been previously observed in the E. coli metabolic network, where the role of N-carbamoyl-L-aspartate is vital even though it participates in only three reactions, because it connects pyrimidine metabolism to the core metabolism through alanine and aspartate metabolism [23]. A similar effect is observed through NEXCADE upon targeted removal of NTRK1, a tyrosine kinase receptor, from the rat protein-protein interaction network PRAT. This protein has only three reported interactions with proteins that have several links with other proteins, and its removal is not expected to result in any far-reaching effects. However, NTRK1 connects three structural modules within the network and therefore, its removal results in disruption of inter-module communication, fragmenting the network into three distinct subunits.
Clustered Perturbations
Pairs of genes or proteins often have parallel roles in the cellular milieu, and the removal of such coupled entities can affect the system negatively [24,25]. An example of this can be seen in NEXCADE, wherein removal of the protein SLC6A3 alone from the PPI network PRAT does not damage the network drastically, but when SLC6A3 is removed along with another protein, ARRB2, the paired perturbation causes the largest connected component of the network to fragment into disconnected clusters. It is clear from the first section of the PRAT analysis (Visualization and Attributes) that neither of these proteins is highly connected, but they are both independent connectors between two important modules of the dataset and thus have high structural relevance for the network. Although removal of either of these is not sufficient to sever inter-module connectivity, it renders the inter-module topology of the network highly susceptible to the next perturbation.
The effect of perturbing larger clusters of nodes or edges from a network, rather than pairs, can also be analyzed in this section of NEXCADE. In the online version, users can specify and target up to nine vertices (or their edges) for inducing clustered perturbations, while the distributed version of the program (Data S1) has no upper limit on the size of the group to be perturbed. For example, Figure 5 shows the effect of removal of three specific genes in the example network ATHG, resulting in fragmentation of the network into several disconnected sub-networks. These genes were selected using NEXCADE by scanning the initial unperturbed network for nodes that have a high betweenness centrality, but not very high degree centralities, demonstrating the ability of the program to assist users in identifying nodes or sets of nodes that may be critical for network sustenance.
Sequential Perturbations
For cascading or sequential perturbations on the input networks, NEXCADE uses degree centrality as a ranking property to carry out serially ordered perturbations, each involving successive vertex removal. The effect of the perturbation can be analyzed and visualized at each step of the serial extinction cascade, as described already. In addition, the overall change in a specific network attribute can be monitored as a function of the percentage of removed nodes throughout the simulation cascade. These curves, called co-extinction curves, are usually curvilinear for real-world systems. For example, Figure 6 shows the effect of simulating sequential perturbations on GNIC, in terms of the number of secondary extinctions as the primary extinctions are carried out. As can be seen from this figure, when targeted extinctions are carried out from the most connected to the least connected species, secondary extinctions begin with the deletion of the first node itself. The network quickly disintegrates into several disconnected fragments (within 10% node removals) and undergoes complete collapse within 52 primary extinctions, i.e., 22% node loss. In contrast, if the preferential extinctions are simulated from the least linked (specialist) to the most linked (generalist) species, the network size decreases slowly and secondary extinctions do not occur until almost 90% of primary extinctions have been carried out. It may be noted that the network does not undergo fragmentation at all and is able to retain its single connected character for more than 90% species removals, revealing the robustness of the network under attack, in terms of its ability to remain stable for much longer lengths of time when perturbed. Figure 6 also shows the corresponding status of the reduced sub-networks after 7% node removals in the two opposing cascades, emphasizing the contrasting network response in terms of robustness. As can be seen from this figure, the network is able to withstand the targeted removal of specialists, whereas it is highly susceptible to the removal of generalists. Such a contrasting effect on a system under specific extinction sequences has often been observed in ecological networks and is considered a measure of network robustness. Furthermore, NEXCADE also enables simulation of random co-extinction curves and comparisons of different perturbation series with each other in terms of their effects on network topological indicators, as shown in Figure 6. Users have the option to simulate multiple random extinction series on a given network, if necessary. NEXCADE, to our knowledge, is the only program that automates the entire targeted extinction cascade approach, thereby enabling users to evaluate and compare network stability.
Availability, Processing Time & Limitations
NEXCADE is available in two forms. The first is an online interactive webserver with a very simple, user-friendly interface and help pages, freely available to the scientific community without any login requirement at http://nipgr.res.in/nexcade.html; the site describes the scope of the program and provides a tutorial, a feedback form, and a comprehensive mechanism for testing the program with several different sets of pre-simulated data. The second is the distributed command-line version of NEXCADE, a Unix tarball containing the source code of the program along with detailed installation and usage instructions and an example dataset (Data S1 and S3). The latest version can also be downloaded at http://nipgr.res.in/nexcade_download.html.
A network of about 1000 edges takes approximately four seconds to load on a 3.2 GHz processor with 8 GB RAM. Although NEXCADE can handle networks of any size, processing time may increase when multiple parallel sessions of the online program are running. It may be noted that the webserver is not designed to store datasets for long periods of time; however, if the connection to our servers is lost during a run, or if processing becomes extremely time-consuming, the results can be accessed after a short period via a five-letter code assigned to the user upon data input and visible in the address bar. Further, for large networks, the simulations, particularly the compute-intensive co-extinction curves, can be extremely time-consuming. Therefore, we recommend the command-line version of NEXCADE for large datasets, both to take the load off our servers and to let users store NEXCADE simulation results for as long as desired. The online version is optimally suited to datasets of up to 300 nodes and 1000 interactions and is mainly designed to enable an overall assessment of the software and its abilities.
It may be emphasized that one of the implicit assumptions of deletion-based perturbation analyses is that the input dataset is sufficiently exhaustive and inclusive. However, this may not always be the case, and unknown dependencies may exist between network components that are not included in the input dataset. Further, limitations of the input data, combined with the method (NEXCADE simplifies and reads all input graphs as undirected), may moderate the impact of the analysis and limit the true assessment of disturbances. However, we justify NEXCADE and its applicability to complex-system research based on the widely accepted usage of this method of simulating perturbations and the fact that it is a first attempt to automate different kinds of disturbances and the prediction of their impact on complex systems.

Figure 5. Results of clustered perturbation simulated on the example network shown in Figure 3. Clicking section III in Figure 3A enables users to: (A) select a group of target nodes based on degree and (B) analyze the effect of the perturbation. Clicking the first option on this page returns panel (C), the selection form for simulating clustered edge perturbation on the edges of pre-selected nodes; the resulting sub-network can be analyzed comparatively as shown in (D). As can be seen in panel (E), the reduced sub-network after clustered node perturbation is much more fragmented and smaller, in contrast with the unperturbed network shown in Figure 3B. doi:10.1371/journal.pone.0041827.g005
Conclusions
In this work, we have given an overview of the rationale, design, and implementation of the program NEXCADE, which can assist in the analysis of perturbations and the assessment of their consequences for complex systems defined by networks that can be expressed as interconnected matrices of interactions. It enables users to assess the outcome of seemingly minor events, such as a random gene mutation or metabolic fluctuation, which, once set in motion, may become explosive and, in extreme cases, lead to irreversible collapse through a cascade of detrimental effects. Although such analyses are now used routinely in diverse areas of scientific research, a large number of potential users are unable to apply these methods to their own datasets for lack of mathematical and/or computational skills. NEXCADE bridges this gap in a simple, user-friendly way. To demonstrate its generality and use in a variety of different scenarios, we have applied NEXCADE to several reported social, ecological, and biochemical networks, providing a glimpse of the applications NEXCADE can be used for. We anticipate that it can have wide-ranging benefits for the scientific community and will facilitate risk assessment and threat-based management studies in complex network analysis.
In future versions, we hope to incorporate network loops (e.g., self-interactions) and edge weights (e.g., abundance and expression values) so as to enable users to analyze the effects of perturbing interaction strengths, thereby emulating 'knock-downs' in addition to 'knock-outs'. We are also currently developing methods to add perturbation-specific scores for networks, as well as gain-of-function perturbations that add new components with user-definable attributes to an existing network, through an approach similar to the one presented in this paper. Such an extension of NEXCADE would, for example, help to gain insight into biological invasions, and it would also contribute to the development of effective algorithms for more diverse kinds of perturbation analysis yet to be explored.
Supporting Information
Data S1 Compressed/ZIP File Archive. Contains Unix Tarball of the distributed version of NEXCADE.
Improving Medium Access Efficiency With Intelligent Spectrum Learning
Through machine learning, this paper changes a fundamental assumption of traditional medium access control (MAC) layer design: it obtains the capability of retrieving information even when packets collide, by training a deep neural network offline with historical radio frequency (RF) traces and inferring the STAs involved in collisions online in near-real-time. Specifically, we propose a MAC protocol based on intelligent spectrum learning for future wireless local area networks (WLANs), called SL-MAC. In the proposed MAC, an access point (AP) is equipped with a pre-trained convolutional neural network (CNN) model to identify the stations (STAs) involved in collisions. In contrast to conventional contention-based random medium access methods, e.g., the IEEE 802.11 distributed coordination function (DCF), the proposed SL-MAC protocol seeks to schedule data transmissions from the STAs suffering collisions. To achieve this goal, we develop a two-step offline training algorithm that enables the AP to sense the spectrum with the aid of the CNN. In particular, on receiving the overlapped signal(s), the AP first predicts the number of STAs involved in collisions and then identifies the STAs' IDs. Furthermore, we analyze the upper bound of the throughput gain brought by the CNN predictor and investigate the impact of the inference error on the achieved throughput. Extensive simulations show the superiority of the proposed SL-MAC and allow us to gain insights into the trade-off between the performance gain and the inference accuracy.
I. INTRODUCTION
In the past few years, IEEE 802.11-based wireless local area networks (WLANs), commonly known as WiFi networks, have experienced noticeable growth and keep pace with an ever-increasing number of low-cost mobile devices [1]. In such a setting, how mobile devices can efficiently achieve channel coordination to improve spectrum efficiency in densely deployed WLAN scenarios (e.g., ultra-dense 5G networks [2]) is attracting significant attention from both industry and academia [3]. Conventionally, the IEEE 802.11 distributed coordination function (DCF) uses a binary exponential backoff (BEB) scheme as a carrier sense multiple access with collision avoidance (CSMA/CA) mechanism to decrease collisions. However, this scheme severely degrades network performance when a large number of devices contend for the channel [4].
Recently, the field of machine learning (ML), especially deep learning (DL), has emerged as a promising technique to enable intelligent spectrum sensing and management capabilities in future wireless communications [5]. By training deep neural networks, features of signals can be extracted that go beyond simple waveform characteristics, e.g., amplitude and phase. Furthermore, the newly achieved popularity of DL for wireless communications stems from its potential to drive machine intelligence into relevant applications of wireless communications. This in turn poses many challenges for DL in the WLAN physical (PHY) layer [6] (e.g., wireless channel modeling [7], [8], and modulation scheme recognition [9]) as well as in MAC layer design [10]. The spectrum in the unlicensed ISM frequency bands bears more and more traffic from heterogeneous networks. Although interest at the intersection of wireless networks and machine learning has been quite visible in recent years, how to integrate deep learning with spectrum sensing to achieve intelligent medium access on the unlicensed ISM band is still ongoing research. In this context, artificial intelligence (AI) operating at the MAC layer will be a general trend, and how to improve channel access efficiency with the aid of DL while maintaining backward compatibility with the conventional IEEE 802.11 DCF is the focus of this work. The above reasons motivate this work; it is therefore desirable to design a MAC protocol powered by DL to improve the network capacity of WLANs.
This paper aims to improve the channel access efficiency of IEEE 802.11 DCF on the unlicensed ISM band. Instead of traditional spectrum sensing methods, which can only determine whether the channel is occupied or not, we seek a way to extract more channel usage information, e.g., how many devices are sharing the spectrum and who they are. In this case, an additional, deeper level of information is required to make the MAC protocol design more intelligent and efficient. Therefore, by integrating MAC design with deep learning, better channel coordination and spectrum sharing can be achieved based on the inference results from the convolutional neural network (CNN) model. Compared with the conventional CSMA/CA-based 802.11 DCF scheme, the unique advantage of the proposed solution is that it is ''intelligent'' yet ''undemanding'': different from the conventional IEEE 802.11 DCF, the proposed SL-MAC protocol becomes more intelligent with the aid of deep learning and more efficient without requiring additional information. To the best of the authors' knowledge, this is the first attempt to integrate MAC layer design with a deep neural network predictor in a conventional IEEE 802.11 DCF setup.
The main contributions of this paper are summarized as follows.
• Spectrum learning-based MAC framework. We introduce a comprehensive MAC framework integrated with spectrum learning (SL) to improve MAC efficiency and bandwidth utilization for future WLANs. We formulate the users' identification as a multi-class classification problem, which is resolved with pre-trained CNN models.
• Design and implementation of the CNN structure.
We first present the design of the CNN structure, including the master-CNN model and the slave-CNN models. Then we detail the implementation of the CNN models, including data collection, two-step offline training, and online inference. Based on the pre-trained CNN models, the access point (AP) can identify the users from the overlapped signals and dynamically schedule the conflicting users' data transmissions.
• Performance analysis of the proposed MAC framework. The upper bound of the throughput gain of the proposed SL-MAC protocol is analyzed, and the impact of the inference error on the achieved throughput is also investigated. Extensive simulations demonstrate the superiority of the proposed SL-MAC protocol.

The remainder of this paper is organized as follows. In Section II, we introduce related works. We present the proposed SL-MAC protocol in detail in Section III. In Section IV, the upper bound of the throughput gain brought by deep learning is first analyzed, and then the impact of the inference error on the achieved throughput is investigated. The CNN framework, including data collection, offline training, and online inference, is presented in Section V. In Section VI, simulations are conducted to demonstrate the superiority of the proposed SL-MAC protocol. Conclusions and future work are discussed in Section VII.
II. RELATED WORK
As mentioned above, ML techniques are playing an increasingly critical part in MAC layer design for WLANs. In general, ML can be roughly categorized into supervised learning and unsupervised learning. In this section, we elaborate on the role of these two categories of learning techniques in MAC scheme design for wireless communication networks.
A. SUPERVISED LEARNING-BASED MAC DESIGN
In supervised learning, the learning agent learns from a labeled training dataset under supervision. The objective is to find the mapping from the input feature space to the labels so that reliable predictions can be made for new input data. Owing to these characteristics, supervised learning-based MAC design is not suitable for scenarios where the device must learn the environment without the help of a supervisor and a labeled training dataset. Specifically, a deep CNN model was used to perform classification directly from spectrograms to identify PU behavioral patterns in a cognitive radio context [11]. Ruan et al. [12] proposed an ML-based predictive dynamic bandwidth allocation algorithm to address the uplink bandwidth contention and latency bottleneck of such networks. Rajendran et al. [13] achieved automatic modulation classification based on an LSTM model, which learns from the time-domain amplitude and phase information of the modulation schemes present in the training data for a distributed wireless spectrum sensing network. In [14], Liu et al. used a deep neural network (DNN) to explore the data-driven test statistic intelligently and proposed a covariance matrix-aware CNN-based spectrum sensing algorithm to further improve the detection performance. In [15], machine learning algorithms were demonstrated to appreciably outperform classical signal detection methods in the 3.5-GHz band. Furthermore, Gao et al. [16] proposed a deep learning-based signal detector that exploits the underlying structural information of the modulated signals. Peng et al. [17] explored transfer learning to improve the robustness of DL-based spectrum sensing.
In particular, to achieve intelligent spectrum sensing, our previous works [18], [19] proposed a distributed MAC framework assisted by deep learning, where a DNN model was trained offline to help coordinate channel access by exploring the features of the overlapped signals. Kim et al. [20] proposed a deep learning-aided sparse code multiple access (SCMA) scheme. They used the autoencoder structure of a DNN to learn the codebook and decoding strategy for SCMA to minimize the bit error rate (BER). In [21], medium access control protocol identification was investigated for applications in cognitive radio networks, where the secondary cognitive radio (CR) users detect the MAC schemes used by the primary users with the help of an SVM. Thereby, the CR users can be aware of the time and frequency of the spectrum holes.
B. UNSUPERVISED LEARNING-BASED MAC DESIGN
In unsupervised learning, the learner is provided only with unlabeled data, and learning is performed by finding an efficient representation of the data samples without any labeling information. As such, unsupervised learning-based MAC design is suitable for practical wireless network scenarios where no prior knowledge about the outcomes exists. Specifically, recent years have witnessed wide study of deep reinforcement learning (DRL) for dynamic spectrum access problems in wireless networks [22]-[26]. In particular, Nguyen et al. [22] proposed using the deep Q-learning method to learn a state-action value function that determines an access policy from the observed states of all channels. In [23], the authors applied DRL so that secondary users (SUs) could learn appropriate spectrum access strategies in a distributed fashion, assuming no knowledge of the underlying system statistics. In [24], a multi-agent deep reinforcement learning method was adopted by secondary users to learn a sensing strategy from the sensing results of selected spectra, so as to avoid interference to the primary users and to coordinate with other secondary users in cognitive radio networks. Furthermore, Yu et al. [25] investigated a DRL-based MAC protocol to learn an optimal channel access strategy that achieves a pre-specified global objective for heterogeneous wireless networking. Cao et al. [26] proposed a DRL-based MAC protocol to assist backscatter communications in Internet-of-Things (IoT) networks, where DRL was introduced to learn the reserved information and make decisions accordingly.
Nevertheless, most of the ML-based MAC works listed above introduce significant overhead for existing wireless communication systems. They are not applicable to conventional IEEE 802.11 WLANs, where the stations (STAs) usually have low complexity and find it difficult to ''learn and predict'' the dynamic changes in the network. Our proposed MAC framework falls within the first category of ML approaches (i.e., supervised learning-based MAC), aiming to enable the AP to figure out the STAs involved in collisions during channel contention and then schedule them to re-transmit without collisions.
III. THE PROPOSED MAC FRAMEWORK
A. PRELIMINARY AND BASIC IDEA

This paper considers a typical WLAN scenario in which a total of N STAs are associated with the AP and try to transmit data packets to it. In the context of typical IEEE 802.11 WLAN application scenarios (e.g., an office building where the associated WiFi users are, in general, company employees), it is reasonable to assume that the user base remains stable over a future period. All the STAs contend for channel access following the BEB scheme introduced by the conventional IEEE 802.11 DCF. Generally, the conventional IEEE 802.11 DCF scheme (e.g., the four-way handshake) is used in WLANs to coordinate channel access among users. Once more than one user transmits request-to-send (RTS) packets at the same time, collisions occur, and these users need to increase their backoff counters and contend to re-access the channel. According to the analysis in [29], the collision probability is proportional to the number of STAs. In this case, channel utilization is usually low due to severe collisions, especially in dense deployment scenarios. In this paper, through machine learning, we change a fundamental assumption of traditional MAC layer design: we obtain the capability of retrieving information even when packets collide, by training a CNN model offline with historical RF traces and inferring the STAs involved in collisions online in near-real-time.
Without loss of generality, we assume that each STA contends for the channel using the traditional four-way handshake (i.e., RTS-CTS-DATA-ACK), and some of the STAs may choose the same backoff counter during their backoff procedures. As a result, collisions usually occur at the AP side due to the reception of multiple RTS packets at the same time, e.g., from Alice and Bob in Fig. 1(a). In this paper, a novel MAC protocol, called spectrum learning-powered MAC (SL-MAC), is proposed for future dense WLANs, as highlighted in Fig. 1(b). In the proposed SL-MAC, a pre-trained CNN model is deployed at the AP in advance, which enables the AP to identify the STAs from the overlapped RTS signal(s). On receiving the overlapped signals, the AP can detect the number of users involved in collisions and identify who they are with the aid of the pre-trained CNN model. This process can be considered a multi-class classification problem. According to the inference results, the AP replies with a CTS packet that includes the scheduling information for the users' data transmissions. The other users set their network allocation vector (NAV) accordingly and keep silent within this period. After the NAV expires, all the users can contend to re-access the channel following the conventional IEEE 802.11 DCF.
B. PROPOSED MAC PROTOCOL
In the proposed SL-MAC protocol, the pre-trained CNN model files are implemented at the AP. On receiving the RTS signal(s), the AP can not only detect the number of STAs (denoted as n, with n ≤ N, where N is the total number of STAs) but also identify who they are. An example of the proposed protocol with three STAs contending for channel access is illustrated in Fig. 2. The proposed SL-MAC protocol includes three operation steps: the channel contention step, the collision detection and identification step, and the scheduled transmission step, as detailed in the following three subsections.
1) CHANNEL CONTENTION
In this step, all the STAs with data traffic accumulated in the MAC queue first contend for channel access based on the BEB scheme. Following the conventional IEEE 802.11 DCF scheme, denote the minimum and maximum contention window (CW) sizes as CW_min and CW_max, respectively. In the beginning, all the STAs (e.g., STA_A, STA_B, and STA_C in Fig. 2) randomly choose their backoff counter values as B_i ∈ [0, CW_min]; then the STAs start to perform backoff via the BEB scheme. Note that the backoff counter is frozen once the channel becomes busy. When the backoff counter reaches zero, i.e., B_i = 0, the STA transmits an RTS packet to the AP. For example, STA_A and STA_B finish their backoff at the same time and transmit their RTS packets simultaneously. In this case, a collision occurs at the AP. Otherwise, the RTS packet can be received successfully by the AP.
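As an illustration of this contention step, the following minimal Python sketch implements the standard BEB procedure described above; the CW values and function names are illustrative assumptions, not taken from the paper.

```python
import random

CW_MIN, CW_MAX = 15, 1023  # typical 802.11-style values, assumed here

def next_backoff(backoff_stage):
    """Draw a fresh backoff counter B_i in [0, CW] for the given stage;
    CW doubles after every collision until it reaches CW_MAX."""
    cw = min((CW_MIN + 1) * (2 ** backoff_stage) - 1, CW_MAX)
    return random.randint(0, cw)

def countdown(counter, channel_busy):
    """Freeze the counter while the channel is busy, else decrement;
    an RTS is transmitted when the counter reaches zero."""
    if channel_busy:
        return counter, False
    counter -= 1
    return counter, counter == 0
```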
2) COLLISION DETECTION AND IDENTIFICATION
In this step, on receiving the RTS packet(s), the AP replies with a CTS packet based on the inference results given by the pre-trained CNN model. Suppose that the pre-trained CNN model files have already been implemented at the AP. On receiving the RTS signal(s), the AP obtains the inference results by performing the feedforward calculation. Denote the time cost of the inference as θ; the AP then replies with the CTS packet after a time t_SIFS + θ, where t_SIFS denotes the duration of the short inter-frame space (SIFS), as illustrated in Fig. 2. To realize collision avoidance and maintain compatibility with legacy IEEE 802.11, a ''Scheduling Info'' field including the conflicting STAs' IDs and their time instants for transmitting data packets is added to the traditional IEEE 802.11 CTS packet, as illustrated in Fig. 3. Based on the inferred number of STAs involved in collisions (denoted as n), the AP sets the ''Duration'' field of the CTS packet as NAV_CTS = t_SIFS + t_TXOP and broadcasts the CTS packet to all the STAs, where t_TXOP = n(t_DATA + t_SIFS + t_ACK) denotes the scheduled period of the transmission opportunity (TXOP). On receiving the CTS packet, all the STAs decode the ''Scheduling Info'' field. If an STA is not scheduled to transmit data packets, it sets its network allocation vector (NAV) according to the ''Duration'' field of the CTS (i.e., NAV_CTS) and keeps silent during this period. Otherwise, the STA transmits data packets at the scheduled time instants.
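The Duration computation in the extended CTS follows directly from the formulas above; the timing constants in this sketch are placeholder values (in microseconds), not standardized IEEE 802.11 figures.

```python
T_SIFS, T_DATA, T_ACK = 16, 1500, 44  # placeholder durations in microseconds

def txop_duration(n):
    """t_TXOP = n * (t_DATA + t_SIFS + t_ACK) for n scheduled STAs."""
    return n * (T_DATA + T_SIFS + T_ACK)

def nav_cts(n):
    """NAV_CTS = t_SIFS + t_TXOP, carried in the Duration field of the CTS."""
    return T_SIFS + txop_duration(n)

print(nav_cts(3))  # NAV for three STAs involved in the RTS collision
```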
The reason why the number of STAs is predictable and the STAs are identifiable is that spectrum usage identification can be considered a multi-class classification problem, which can be solved well by training a deep neural network [27], [28]. In this paper, the AP only needs to identify the number and IDs of the STAs involved in collisions. As a result, the scalability of the proposed CNN-based MAC protocol mainly depends on the average number of STAs suffering collisions. It is known that in practical WLAN scenarios, the data traffic of the STAs usually follows a Poisson distribution, which indicates that the MAC buffers of the STAs are not always full and the number of STAs transmitting at the same time is far less than N. Therefore, high classification accuracy can still be achieved by the well-trained CNN model.
To better understand this, an intuitive example is presented with two STAs, denoted as S_1 and S_2. In this particular example, since we have a controlled environment in which various transmission scenarios can be set up, we can collect RF traces covering all combinations of STAs in the network and label the data using the ground truth. In this example, a dataset including 4 different coexisting transmission scenarios (i.e., 'Idle channel', 'STA 1 waveform only', 'STA 2 waveform only', and 'Combined waveforms') is collected from the testbed. The RF traces collected on our USRP2 testbed consist of in-phase (I) and quadrature (Q) signals in a matrix form, which contains sophisticated features of the wireless signals. Similar to the structure of images consisting of pixels in a matrix form, deep neural networks (especially CNNs) are generally the preferred methods for extracting and learning the higher-level information hidden in RF traces. Based on this, the CNN classifier is trained offline with the historical RF traces collected from the four scenarios above until it is able to learn the features from the RF traces and make reasonable inferences. This results in a 4-class classification problem, where the four classes are detailed below (a minimal classifier sketch follows the list).
• Class-1: 'Idle'. None of the STAs is transmitting, i.e., the collected RF dataset comes only from the noise floor measurements.
• Class-2: S_1. Only S_1 is transmitting, i.e., the collected RF dataset comes from the noise floor measurements plus the signal from S_1.
• Class-3: S_2. Only S_2 is transmitting, i.e., the collected RF dataset comes from the noise floor measurements plus the signal from S_2.
• Class-4: S_1 + S_2. Both STAs are transmitting simultaneously, i.e., the collected RF dataset comes from the noise floor measurements plus the overlapped signals from S_1 and S_2.
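A minimal Keras sketch of such a 4-class I/Q classifier is given below. The layer counts and kernel sizes are illustrative assumptions and do not reproduce the paper's exact master/slave CNN hyper-parameters; the input shape matches the (2, w, 1) tensor described later in Section V.

```python
from tensorflow.keras import layers, models

w = 128  # window size (assumed)
model = models.Sequential([
    layers.Input(shape=(2, w, 1)),              # I/Q rows, w time samples
    layers.Conv2D(32, (2, 8), activation="relu", padding="same"),
    layers.Conv2D(32, (1, 8), activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),      # Idle, S1, S2, S1+S2
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```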
3) SCHEDULED DATA TRANSMISSIONS
On receiving the CTS packet, the STAs that sent RTS packets first check the ''Scheduling Info'' field of the CTS packet and then identify the IDs of the scheduled STAs and the data transmission time instants. The other STAs simply set their NAV according to the Duration field of the CTS, as detailed in the previous subsection. Based on the scheduling information in the CTS packet, the scheduled STAs transmit their data packets at the scheduled time instants. When the scheduled TXOP period finishes, all the STAs start to contend to re-access the channel. Note that, to achieve backward compatibility with the conventional IEEE 802.11 DCF, the STAs involved in the RTS collisions also increase their backoff stages until reaching the maximum value.
IV. PERFORMANCE ANALYSIS
The SL-MAC protocol design presented in Section III-B can ensure a high overall throughput by identifying the STAs involved in collisions with the trained CNN model. However, the achieved throughput of SL-MAC tends to be degraded by the inference errors introduced by the trained CNN model, especially when the number of STAs involved in a collision is large. In this section, we first analyze the upper bound of the throughput gain for the case in which inference errors do not occur. Afterwards, we investigate the impact of inference errors on the achieved throughput of the SL-MAC protocol.
A. UPPER BOUND OF THROUGHPUT GAIN
Compared to the conventional IEEE 802.11 DCF scheme, the throughput performance gain introduced by deep learning is analyzed here, where the inference error of the pre-trained CNN is assumed to be negligible. In this case, the analyzed gain is an upper bound. According to Bianchi's Markov model [29], the saturation throughput of the conventional IEEE 802.11 DCF is

$$\phi_{DCF} = \frac{P_s P_{tr} E[P]}{(1-P_{tr})\,\sigma + P_{tr} P_s T_s + P_{tr}(1-P_s)\,T_c^{DCF}}, \tag{1}$$

where E[P] denotes the average size of the data packet payload and σ denotes the duration of an empty slot time. P_tr denotes the probability that there is at least one transmission in the considered slot time, and P_s represents the conditional probability that a transmission occurring on the channel is successful. Besides, T_s = t_DIFS + t_RTS + t_CTS + t_DATA + t_ACK + 3t_SIFS is the average time the channel is sensed busy due to a successful transmission, and T_c^DCF = t_DIFS + t_RTS is the average time the channel is sensed busy by each device during a collision under the IEEE 802.11 DCF scheme.
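To make (1) concrete, a direct numerical evaluation might look as follows; all inputs (probabilities and time constants) are supplied by the caller, and the example values are placeholders rather than parameters from the paper.

```python
def dcf_throughput(p_tr, p_s, e_p, sigma, t_s, t_c):
    """Bianchi saturation throughput, Eq. (1):
    phi = P_s*P_tr*E[P] / ((1-P_tr)*sigma + P_tr*P_s*T_s + P_tr*(1-P_s)*T_c)."""
    avg_slot = (1 - p_tr) * sigma + p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c
    return (p_s * p_tr * e_p) / avg_slot

# Placeholder numbers (payload in bits, times in microseconds).
print(dcf_throughput(p_tr=0.4, p_s=0.8, e_p=8192, sigma=9, t_s=400, t_c=120))
```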
Even though collisions occur in our proposed SL-MAC protocol, the channel can still be utilized by scheduling the STAs' data transmissions within a TXOP, as shown in Fig. 2. Therefore, the saturation throughput of SL-MAC (abbreviated as φ_DM) can be obtained as in (2), where n denotes the average number of STAs involved in RTS collisions and T_c^DM denotes the average time the channel is utilized by scheduling STAs' transmissions when a collision is detected under the proposed SL-MAC protocol.
According to Fig. 2, T_c^DM can be calculated as in (3), where θ denotes the inference delay of the pre-trained CNN and t_TXOP is the time period of the TXOP including the n scheduled data transmissions, i.e., t_TXOP = n(t_DATA + t_SIFS + t_ACK).
As a result, we can obtain (5). According to (1), (2), and (5), the upper bound of the gain brought by SL-MAC can be calculated as in (6).

Remark 1: Provided that the inference error of the pre-trained CNN is not considered, it can be seen from (6) that the gain brought by the pre-trained CNN is proportional to the number of STAs involved in RTS collisions (i.e., n). In practice, the inference error of the multi-class classification problem increases as n grows. Therefore, there exists a trade-off between the performance gain (η) brought by deep learning and the inference accuracy.
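The following hedged sketch illustrates how the gain might be evaluated numerically. Both the composition assumed for T_c^DM (handshake, inference delay θ, and scheduled TXOP) and the premise that n payloads are delivered during each resolved collision slot follow our reading of Fig. 2 and the surrounding text; Eqs. (2)-(6) are not reproduced verbatim, so treat this as an illustration only.

```python
def t_c_dm(t_difs, t_rts, t_cts, t_sifs, t_data, t_ack, theta, n):
    """Assumed busy time per resolved collision: handshake + theta + TXOP."""
    t_txop = n * (t_data + t_sifs + t_ack)
    return t_difs + t_rts + t_sifs + theta + t_cts + t_sifs + t_txop

def upper_bound_gain(p_tr, p_s, e_p, sigma, t_s, t_c_dcf, t_c_sl, n):
    """Ratio phi_DM / phi_DCF, assuming n payloads delivered per collision."""
    phi_dcf = (p_s * p_tr * e_p) / ((1 - p_tr) * sigma
               + p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c_dcf)
    phi_dm = (p_tr * p_s * e_p + p_tr * (1 - p_s) * n * e_p) / (
        (1 - p_tr) * sigma + p_tr * p_s * t_s + p_tr * (1 - p_s) * t_c_sl)
    return phi_dm / phi_dcf
```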
B. ACHIEVED THROUGHPUT WITH INFERENCE ERRORS
To characterize the impact of the inference error on the achieved throughput of SL-MAC, the definition of the inference error rate is introduced as follows.
Definition 1 (Inference Error Rate): The inference error rate introduced by the trained CNN model is defined as the ratio of the number of incorrect inferences to the total number of inferences, i.e., γ = N_incorrect / N_total, with γ ∈ [0, 1]. Based on Definition 1, the throughput of SL-MAC is calculated as in (7), where φ_DM^w/e denotes the throughput of SL-MAC without inference error, i.e., φ_DM in (2), and φ_DM^e denotes the throughput of SL-MAC under inference error rate γ.
To calculate φ_DM^e, two typical cases are considered: 1) over-estimation and 2) under-estimation. Specifically, if over-estimation happens, the inference results include not only the STAs involved in collisions but also other STAs that did not collide. Denoting the number of users inferred by the CNN as n̂, we have n̂ > n, and n̂ transmissions are scheduled, which wastes channel resources in this case. If under-estimation occurs, the inference results may miss one or more users involved in collisions, i.e., n̂ < n. As a result, only a portion of the users involved in collisions can be scheduled to transmit, which degrades the fairness of each device.
Denote the achieved throughput in the two cases as ψ_over and ψ_under, respectively. Then we have (8), where α ∈ [0, 1] is the coefficient representing the probability that over-estimation occurs. Similarly, 1 − α is the probability that under-estimation occurs. In the following, we analyze the achieved throughput under the two cases.
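In code, the mixing described by (7) and (8) might look as follows; the exact published forms of (9)-(16) are not reproduced here, so ψ_over and ψ_under are taken as given inputs, and the form of (7) is an assumption based on Definition 1.

```python
def throughput_with_errors(gamma, alpha, phi_no_err, psi_over, psi_under):
    """Mix error-free and erroneous throughput by gamma, and over-/under-
    estimation by alpha, following the structure suggested by (7)-(8)."""
    phi_err = alpha * psi_over + (1 - alpha) * psi_under   # Eq. (8)
    return (1 - gamma) * phi_no_err + gamma * phi_err      # assumed form of (7)
```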
1) CASE 1: OVER-ESTIMATION
In this case, the number of scheduled data transmission opportunities is larger than the true value, i.e., n_over > n, where n_over is the number of users suffering collisions as inferred by the CNN under over-estimation. According to (2), the achieved throughput in Case 1 can be obtained as in (9), where T_DM^over denotes the average length of a slot time when over-estimation happens, and T_c^(DM−over) is the average time the channel is utilized by scheduling STAs' transmissions when a collision is detected under the proposed SL-MAC protocol in Case 1.
In this case, T_c^(DM−over) is calculated as in (10), where t_TXOP^over is the time period of the TXOP including the n_over scheduled data transmissions, i.e., t_TXOP^over = n_over(t_DATA + t_SIFS + t_ACK).
Proposition 1: Compared to the perfect inference case, where inference errors are not considered, the performance loss (denoted as θ_over ∈ (0, 1)) introduced by over-estimation can be calculated as in (11).
2) CASE 2: UNDER-ESTIMATION
In this case, the number of scheduled data transmission opportunities is smaller than the true value, i.e., n_under < n, where n_under is the number of users suffering collisions as inferred by the CNN under under-estimation. According to (2), the achieved throughput in Case 2 can be obtained as in (12) and (13), where t_TXOP^under is the time period of the TXOP including the n_under scheduled data transmissions, i.e., t_TXOP^under = n_under(t_DATA + t_SIFS + t_ACK).
Proposition 2: Compared to the perfect inference case, the performance loss introduced by under-estimation can be calculated as in (14).

Remark 2: On the one hand, it is observed from (11) that the worse the inference results, the larger the performance loss when over-estimation happens; that is, a larger n_over leads to a smaller value of θ_over. Therefore, over-estimation deteriorates the system throughput of the SL-MAC protocol. On the other hand, when under-estimation occurs, we can infer from (14) that the impact of inference errors on the system throughput of SL-MAC is not decisive, i.e., the total throughput may decrease or remain the same. However, since only a portion of the STAs suffering collisions can be scheduled to transmit data within the TXOP, the remaining unscheduled STAs will double their contention window size following the traditional CSMA/CA scheme. In such a case, it becomes more difficult for them to access the channel, leading to a fairness problem.
Substituting (9) and (12) into (8), the achieved throughput with inference errors is calculated as in (15), shown at the bottom of this page. Then, by substituting (15) into (7), the system throughput is obtained as (16), shown at the bottom of this page, where γ denotes the inference error rate and α is the probability that over-estimation occurs.
V. CNN FRAMEWORK DESIGN

A. OVERVIEW OF THE CNN ARCHITECTURE
To identify the collisions in the proposed MAC protocol, a CNN framework is proposed to predict the number and IDs of the STAs involved in collisions by offline training on a large labeled dataset. As illustrated in Fig. 4, the CNN framework includes a master-CNN model inferring the number of STAs and N − 2 slave-CNN models identifying the IDs of the STAs accordingly. Specifically, considering that a CNN is typically suitable for processing grid-like data, e.g., 1-D grid time-series data and 2-D grid image data [30], the collected RF traces can be fed into the convolutional layer after data preprocessing. The collected I and Q samples are reshaped into a 4-dimensional tensor suitable for a Keras convolutional layer.
The proposed CNN structure is illustrated in Fig. 4, where each convolutional layer is followed by a rectified linear unit layer (denoted ''Conv + ReLU''). For feature extraction, the patterns of the master-CNN and slave-CNN models are based on [Conv + ReLU] × K_M^1 and [Conv + ReLU] × K_S^1, respectively. After this, a total of K_M^2 and K_S^2 fully-connected (FC) layers are used in the master-CNN model and slave-CNN models to process the flattened matrix and then classify the signals using a Softmax activation. (It is worth noting that several methods have been proposed to scale up deep neural network training across graphics processing unit (GPU) clusters [33], which helps to reduce the runtime of the offline training.) The slave-CNN model infers the IDs of the STAs involved in collisions, and its output is a C_N^n × 1 vector indicating the probability of each class. Finally, the Adam optimizer [31] is used to optimize the CNN models, and the CNN-based predictor is trained offline until it can learn the features from the RF traces and make reasonable inferences from the overlapped signals. The CNN framework in the proposed MAC protocol consists of three aspects: data collection, offline training, and online inference, as illustrated in Fig. 5. It is worth noting that the data collection and offline training are performed only once. After the CNN models are trained offline, inference can be performed online given an I/Q dataset.
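A sketch of the reshaping step described above: interleaved I/Q sample streams are windowed and stacked into the 4-D tensor (examples, 2, w, 1) that a Keras convolutional layer expects. The function and variable names are illustrative assumptions.

```python
import numpy as np

def to_tensor(i_samples, q_samples, w=128):
    """Cut I/Q streams into windows of w samples and stack into (n, 2, w, 1)."""
    n = len(i_samples) // w
    iq = np.stack([i_samples[: n * w].reshape(n, w),
                   q_samples[: n * w].reshape(n, w)], axis=1)  # (n, 2, w)
    return iq[..., np.newaxis].astype("float32")               # (n, 2, w, 1)

x = to_tensor(np.random.randn(10_000), np.random.randn(10_000))
print(x.shape)  # (78, 2, 128, 1)
```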
B. DATA COLLECTION
During data collection, we collect the RF traces at a constant SNR using our USRP2 testbed, which is wired to a host PC (e.g., a laptop) running GNU Radio [32], as shown in Fig. 6(a). Collecting data at a constant SNR is valid because closed-loop power control is generally used to obtain a constant received power at the AP. Specifically, on the device side, the laptop is mainly responsible for baseband processing, while the universal software radio peripheral (USRP2) handles up-conversion, digital-to-analog (D/A) conversion, and transmission over the wireless radio. On the AP side, the USRP2 module first receives signals from the radio, then performs A/D conversion and down-conversion. After that, the laptop receives the signals from the USRP2 via Ethernet and carries out the baseband processing. Finally, the I/Q sequences are stored as a file on the laptop. Training examples are formed from the I and Q samples after reshaping, each consisting of w (where w is the window size) time-series samples. Besides, N_channels = 1, similar to RGB values in imagery; Dimension 1 = 2, holding our I and Q channels; and Dimension 2 = w. In this paper, the window size (w) is set to 32, 128, and 512, respectively. With the collected data traces (i.e., I and Q samples), we train on 80% of the collected RF dataset (training set), which contains about 790 million I and Q samples, validate on 10% of the dataset (validation set), and test on the remaining 10% (testing set), each corresponding to about 100 million I and Q samples.
C. OFFLINE TRAINING AND ONLINE INFERENCE
The CNN-based predictor is trained offline based on the historical radio frequency (RF) traces. Then the pre-trained CNN predictor can be deployed at the AP and used to identify the STAs from the overlapped signals, as highlighted in Fig. 5. Specifically, we first collect the RF traces with our USRP2 testbed within a relatively large window size. Suppose that the window size is w; the dimension of each RF sample is 2 (i.e., in-phase (I) and quadrature (Q) signals), so the dimensionality of the input space equals 2w. Afterwards, the CNN predictor is trained and tested offline based on the collected historical RF traces. During the offline training, back-propagation is performed to train the CNN model. It is worth noting that we use a graphics processing unit (GPU) cluster (i.e., NVIDIA DGX-1) with TensorFlow installed to accelerate the offline training process [33], which only needs to be executed once, as highlighted in Fig. 6(b). Considering that the ground truth is known to us, the identification of STAs can be considered a multi-class classification problem, which belongs to supervised learning. After the offline training, the online RF traces can be fed into the pre-trained CNN predictor deployed at the AP to identify the STAs from the overlapped signals in near-real-time.
Specifically, for the multi-class classification problem, the probability of each class is predicted using the Softmax function. The predicted probability for the i-th class is given as

$$p_i = \frac{e^{z_i}}{\sum_{j=1}^{M} e^{z_j}},$$

where M is the total number of classes and z is the output of the last fully connected layer. Then, we conventionally set the loss function of the classification (denoted as L) to the cross-entropy [34], which is given as

$$L = -\sum_{i} y_i \log f(x_i),$$

where x_i is the i-th input data sample, y_i denotes the corresponding ground truth, and f(x_i) is the actual output of the neurons.
The two-step offline training and online inference are illustrated with an example with a total of four STAs in the network, as shown in Fig. 7. During the first training step, the master-CNN model detecting the total number of STAs involved in collisions is trained offline on the whole set of RF traces, and the inference result falls into one of three classes. Note that class-3 has a fixed number of STAs (i.e., all the STAs are in class-3). Therefore, we only need to identify the STAs' IDs in the remaining two classes, i.e., class-1 and class-2. In the second training step, three slave-CNN models are trained separately offline on different RF traces. Compared to conventional CNN training that includes 16 classes, the accuracy of the proposed two-step CNN training can be improved due to the decrease in the number of classes.
D. IMPLEMENTATIONS
The implementation of the proposed CNN predictor, which includes one master-CNN model file and two slave-CNN model files, is illustrated in Fig. 8. In the CNN predictor, the AP first infers the number of STAs involved in collisions from the new RF traces via the pre-trained master-CNN model. After this, based on the previous inference result (i.e., the number of STAs), the AP selects a slave-CNN model accordingly and then performs the inference to identify the STAs' IDs. For the implementation of the proposed CNN-based MAC, the complexity and generalizability are discussed below.
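The two-step online inference can be sketched as follows; the model file names and the mapping from the slave-CNN output class to an STA combination are hypothetical placeholders, not artifacts shipped with the paper.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model files produced by the offline training stage.
master = tf.keras.models.load_model("master_cnn.h5")
slaves = {n: tf.keras.models.load_model(f"slave_cnn_{n}.h5") for n in (1, 2)}

def infer_collision(rf_tensor):
    """Master CNN predicts how many STAs collided; the matching slave CNN
    then predicts which combination of STA IDs produced the overlap."""
    n = int(np.argmax(master.predict(rf_tensor, verbose=0), axis=-1)[0])
    if n not in slaves:          # e.g., all N STAs collided: IDs are known
        return n, None
    combo = int(np.argmax(slaves[n].predict(rf_tensor, verbose=0), axis=-1)[0])
    return n, combo              # combo indexes one of the C(N, n) ID sets
```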
1) TIME AND SPACE COMPLEXITY a: TIME COMPLEXITY
The CNN model selected to perform the inference in our proposed MAC has quadratic time complexity, i.e., O(M²L), where L is the number of layers and M is the number of neurons in a hidden layer, which indicates the scale of the neural network model. Specifically, when the window size (w) is set to 32, 128, and 512, the corresponding training time is 685 min, 414 min, and 243 min, respectively. Besides, we only need to train the CNN model once, which can be performed offline on machines with strong computing and storage capabilities, e.g., GPU clusters.
b: SPACE COMPLEXITY
After the offline training, the total size of the pre-trained CNN models is less than 5 MB, which is far less than the storage (and even the memory) of the AP. This indicates that the AP has enough space to save, and even cache in memory, all the pre-trained CNN model files to perform the online inference more efficiently.
2) GENERALIZABILITY
The generalizability of the proposed SL-MAC protocol is precisely one of the focuses of this paper. Considering that the wireless channel environment is complex and time-varying, the signals received over the wireless channel usually have different SNR values. To learn the inherent significant features of the wireless channel, we collected a large amount of RF traces over a real wireless channel across a wide SNR range (from 0 to 20 dB), aiming to cover many different scenarios.
In this paper, the proposed CNN-based MAC protocol can work well in environments where the wireless channel is complex and time-varying (e.g., when the STAs and pedestrians move) because the CNN model is trained using a wide range of parameter settings. Although the input parameters do not include every possible combination, they do cover a wide range of settings, and the well-known generalization property of machine learning models enables the trained CNN to produce accurate inferences even for parameter settings not included in the training samples. As shown by probably approximately correct (PAC) learning theory, generalization can be achieved by designing a proper CNN model with enough training data [35].
By training the deep learning models on our collected RF traces, the deep neural networks can extract the inherent features from overlapped signals with different SNR values (i.e., different channel conditions). This indicates that the nonstationary nature of the wireless environment has been considered in our experiment, and the proposed method can generalize well to different scenarios where the received signals have different SNR values. Furthermore, we use the hold-out validation method to avoid over-fitting by dividing the dataset into training, validation, and testing data. Therefore, the proposed SL-MAC with the pre-trained CNN model can generalize well to a new environment or to different positions, because the nonstationary nature of the wireless environment has been learned well.
A. TESTING RESULTS OF CNN
The hyper-parameters of the pre-trained CNN model files are summarized as follows. The master-CNN model detecting the number of STAs and the slave-CNN model-1 each contain three convolutional layers and one fully connected (FC) layer, i.e., K_M^1 = K_S^1 = 3 and K_M^2 = K_S^2 = 1. The slave-CNN model-2 contains four convolutional layers and one fully connected (FC) layer, i.e., K_S^1 = 4, K_S^2 = 1. The average inference accuracy is presented in Table 1, where four STAs are taken as an example and the window size is set to w = 128. It can be seen from Table 1 that all of the pre-trained CNN models achieve a relatively high inference accuracy (i.e., ≥ 90%). Therefore, the inference error of the pre-trained CNN is reasonably neglected in the following simulations.
B. PERFORMANCE EVALUATION

1) SIMULATION SETTINGS
The simulations are carried out using the ns-2 simulator [36] with the PHY and MAC layer parameters presented in Table 2. To integrate the experimental deep learning results with the ns-2 network simulations, we consider the inference error rate (γ) as an input to the ns-2 simulations. Specifically, we collect the RF data using our USRP2 testbeds and perform the testing via the pre-trained deep learning model. Since the deep learning experiments were done in TensorFlow, we use a Log-Sigmoid function, γ = D / (1 + A e^(B−CN)), to portray the relationship between γ and N according to the inference results given by the CNN [19]. Therefore, if the total number of devices N is given, γ can be obtained by this mapping and then fed into the ns-2 simulations. We consider a general star-topology network scenario with a total of N STAs uniformly distributed within the AP's coverage radius of 50 m. We assume that the inference time cost is negligible (i.e., θ = 0), and each STA is under saturation traffic with the same data payload size. To demonstrate the MAC efficiency improvement brought by deep learning, we implemented and compared the performance of SL-MAC against the conventional IEEE 802.11 DCF protocol using the four-way handshake, i.e., RTS-CTS-DATA-ACK. The simulation results are averaged over 100 runs.
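The Log-Sigmoid coupling between the CNN testing results and the ns-2 simulation can be written directly; the constants A, B, C, and D below are placeholder values, not the fitted parameters from [19].

```python
import math

def inference_error_rate(N, A=1.0, B=2.0, C=0.1, D=0.1):
    """Log-Sigmoid mapping from device count N to the inference error rate:
    gamma = D / (1 + A * exp(B - C*N)). Constants are placeholders."""
    return D / (1 + A * math.exp(B - C * N))

print(inference_error_rate(25))  # example query for N = 25 devices
```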
2) PERFORMANCE EVALUATION
In Figs. 9-10, we compare the normalized throughput of the proposed SL-MAC protocol with that of IEEE 802.11 DCF to evaluate the upper bound of the performance gain brought by deep learning, where the inference errors of SL-MAC are ignored. Fig. 9 presents the normalized throughput against the number of devices (N). First, it is observed from Fig. 9 that the analysis of SL-MAC matches the simulation well. The throughput of the proposed SL-MAC protocol increases and then remains stable as N increases, while the throughput of IEEE 802.11 DCF decreases severely. This is because severe collisions occur under CSMA/CA, but these collisions can be resolved by the proposed SL-MAC based on deep learning. Assuming the inference error is neglected, it can be seen from Fig. 9 that a larger N leads to a more significant normalized throughput for SL-MAC. This demonstrates that the proposed SL-MAC protocol is suitable for dense WLAN scenarios.
Moreover, it is observed from Fig. 9 that when the data payload size is relatively large (e.g., 1024 bytes in Fig. 9(b)), the RTS/CTS handshake scheme helps reduce the time spent during a collision with respect to the basic access mechanism. However, when the data payload size is relatively small (e.g., 32 bytes in Fig. 9(a)), the RTS/CTS handshake degrades throughput performance due to the control overhead. Therefore, Fig. 9 demonstrates that the normalized throughput gain brought by deep learning decreases as the data payload size increases. This phenomenon is also verified by Fig. 10. The main reason is that when the data payload size is relatively large, the time fraction occupied by the control handshake is reduced, and thus the total time with RTS collisions resolved by deep learning is decreased. Therefore, the advantage of SL-MAC declines for larger data payload sizes. Besides, for a given data payload size, we find that the more devices exist in the network, the more improvement is achieved by the proposed SL-MAC, which verifies Remark 1. This is because when the number of devices increases, more collisions occur, which is exactly what the proposed SL-MAC exploits: only when a collision occurs does the AP get the opportunity to schedule the STAs involved in the collision to transmit their data packets within the TXOP.

To better understand the benefit of SL-MAC even in the absence of inference error, we define the user density as the ratio between the total number of STAs and the AP's coverage area, i.e., ρ = N / χ, where χ = πr² and r denotes the radius of the AP's coverage area; ρ thus denotes the number of STAs deployed per m². Taking a radius of 50 m as an example, we evaluate the probability of RTS collision, as shown in Table 3. It is observed that the RTS collision probability increases significantly with the user density. In particular, when 25 STAs are deployed, i.e., the user density is only 3.18 × 10⁻³, the RTS collision probability already exceeds 50%. Furthermore, in dense deployment scenarios, e.g., a stadium, train station, or conference room, the user density may reach 1 STA per m², i.e., at least one STA deployed within each 1 m² [37]. In this case, the RTS collision probability reaches almost 100%, which leads to the failure of the traditional 802.11 DCF scheme. Therefore, the proposed SL-MAC can exploit the promising advantages of deep learning in wireless networks, especially in dense deployment scenarios.
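For reference, the density figure quoted above can be reproduced with a one-liner; the RTS collision probabilities themselves come from simulation and are not recomputed here.

```python
import math

def user_density(N, r=50.0):
    """rho = N / chi with chi = pi * r^2 (STAs per square meter)."""
    return N / (math.pi * r ** 2)

print(f"{user_density(25):.2e}")  # ~3.18e-03 STA per m^2 for 25 STAs, r = 50 m
```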
Figs. 11-12 evaluate the impact of the inference error rate (γ) of the trained CNN model on the normalized throughput of the proposed SL-MAC protocol, where the over-estimation probability (α) is set to 0.5 and 0.1, respectively. It is observed from Fig. 11 and Fig. 12 that the normalized throughput of the SL-MAC protocol decreases as γ increases. This is because when γ becomes larger, more inference errors occur, which degrades the achieved throughput. Moreover, we can observe that the decline in normalized throughput with α = 0.5 is more significant than with α = 0.1. The reason for this outcome is apparent: a larger value of α yields a higher probability of over-estimation, which leads to throughput degradation, as presented in Remark 2.
Figs. 13-14 evaluate the impact of the over-estimation probability (α) on the normalized throughput of the proposed SL-MAC protocol, where the inference error rate (γ) is set to 0.5 and 0.1, respectively. It can be observed that for a given γ, the normalized throughput of the SL-MAC protocol decreases as α increases because of the higher probability of over-estimation. Furthermore, compared to Fig. 13 with γ = 0.5, it is observed from Fig. 14 that the normalized throughput remains almost unchanged as α increases when γ is set to 0.1. This is because a larger value of γ means a larger inference error rate; in that case, as α increases, more over-estimations occur when γ = 0.5, which can seriously degrade the throughput performance of SL-MAC. Moreover, we can see from Figs. 11-14 that, as the data payload size increases, the throughput gain (i.e., the normalized throughput gap) decreases, because the inference errors inflict more damage on SL-MAC with larger payload sizes.
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we propose a novel MAC protocol for future WLANs. Because of the severe collisions that occur under the traditional CSMA/CA scheme, the spectrum learning-powered MAC protocol (SL-MAC) is proposed to schedule the data transmissions of the STAs involved in RTS collisions. An essential feature of the proposed SL-MAC protocol is backward compatibility with the conventional IEEE 802.11 DCF mechanism. Both the benefits and the drawbacks brought by the CNN predictor are analyzed, which demonstrates the necessity of studying the potential applications of deep learning in MAC design. Extensive simulations demonstrate the advantages of the proposed SL-MAC protocol. This paper is the first attempt to integrate fundamental MAC layer design with deep neural networks in a conventional IEEE 802.11 DCF setup without introducing additional hardware overhead, and it aims to lay a foundation for further related research.
Our potential future works are listed as follows. How to generate new RF training datasets in practical scenarios and retrain the CNN models more efficiently is regarded as one of our future works, known as the ''scalability'' issue. For example, when new STAs join the WLAN, this affects the inference results of the slave-CNN models (i.e., the IDs of the STAs involved in collisions). It would be difficult to collect enough labeled RF datasets to retrain the CNN models, since too many combinations exist in real environments.
To mitigate this issue, we can take advantage of historical labeled data and combine it with some unlabeled data. In this context, to achieve high inference accuracy of the CNN models, fine-tuning the slave-CNN models with the combined RF dataset (where some of the RF data is unlabeled) becomes promising. Since semi-supervised learning (SSL) makes use of unlabeled RF traces to facilitate the learning process and transfer learning (TL) can learn generalizable representations in a source domain [38], SSL and TL could be combined in our future work to yield significant inference improvements for spectrum learning.
ACKNOWLEDGMENT
The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) or the U.S. Government.
Special Issue “Laser Technologies in Metal-Based Materials”
The first publication analyzing the prospects for the use of laser radiation was published under the authorship of the American physicist Arthur Schawlow in November 1960 (Schawlow, A. L. Bell Lab. Rec., November, 403 (1960)), immediately after the creation of the first laser by Theodore Maiman on 16 May 1960. Arthur Schawlow later received the Nobel Prize. Subsequently, many brilliant scientists (A. Zewail, V.S. Letokhov, N.V. Karlov, and many others) joined the topic of laser-induced processes, which ensured rapid progress in this area [1][2][3][4][5][6][7]. As a result, new directions in chemistry and physics have been formed: laser chemistry and laser physics, which continue to be dynamically developing sciences. These laser-related directions address the fundamental issues of the synthesis and transformation of substances and the problems of high-precision, highly controlled laser technologies. Insightful publications of the late 20th century reporting original ideas on the use of laser irradiation for various processes of materials transformation and fabrication [8][9][10] have grown into extensive areas of laser technology since the beginning of the 21st century.
This Special Issue aims to bring the fields of laser technologies and metal nanostructures together for their mutual benefit. Here we consider different aspects of laser technologies for the fabrication of metal-based functional nanomaterials, as numerous modern instruments and devices are based on processes related to metal nanostructures. It should be noted that the laser effect on a material can initiate physical phenomena (heating, phase transitions, etc.) and/or chemical phenomena (oxidation, reduction, chemical transformations). Thus, the articles in the current Special Issue harmoniously combine physical and chemical phenomena and offer advanced laser technologies to modern society.
Regarding publications on laser-induced physical processes, one can find the article by A. V. Agapovichev et al. on selective laser melting to produce a Ni-Cr-Al-Ti-based superalloy [11]. Another article presents the sintering of aerosol agglomerates of Pt, Au, and Ag NPs by pulsed nanosecond laser [12]. An interesting combination of laser-induced surface texturing with simultaneous laser-induced anchoring of silver NPs from colloidal solution is discussed by Jakub Siegel et al. in [13]; such textured polymer surfaces decorated with Ag NPs are prospective antimicrobial coatings. Another example of a laser-induced physical phenomenon is laser shock peening, which significantly improves the fretting fatigue life of TC11 titanium alloy [14]. In the article by Piotr Kupracz et al. [15], laser re-solidification was demonstrated as an approach for modulating the morphology and structure of metal-decorated TiO2 nanotubes to achieve visible-light harvesting.
Interesting advanced approaches for creating nanostructured metal materials with various functionalities were also presented for laser-induced chemical processes. Thus, laser ablation of monocrystalline silicon in isopropanol containing AgNO3 allowed the single-step formation of Ag-decorated Si microspheres with SERS performance [16]; here, the physical process of laser ablation is accompanied by the chemical process of Ag NP formation on the ablated Si species. Femtosecond laser reductive sintering allowed high-purity Cu patterns to be obtained from CuO NP inks [17]. At the same time, a variant of selective laser reductive sintering created copper and nickel microsensors for non-enzymatic glucose detection [18]. Highly controllable decoration of substrates with plasmonic Ag and Pt NPs, with uniform or periodic NP distributions, was demonstrated by laser-induced deposition [19]. This laser-induced process is based on the photodecomposition of metal-containing precursors and subsequent redox processes on the substrate surface. Interestingly, a similar process can be realized as a laser-induced thermal process, resulting in composite materials based on iridium, gold, and platinum [20].
Conflicts of Interest:
The authors declare no conflict of interest.
"Materials Science",
"Physics"
] |
Comparative Material and Mechanical Properties among Cicada Mouthparts: Cuticle Enhanced with Inorganic Elements Facilitates Piercing through Woody Stems for Feeding
Simple Summary
Cicadas are one of the most popular insects. Their loud mating songs, newsworthy mass emergences and prolonged lifespan underground (17 years in some species) make cicadas a model organism for building bridges between scientific studies and the public. A key aspect of cicada biology is that the adults use their tube-like mouthparts to pierce through the hard wood of trees to feed on fluids, an ability that suggests that their mouthparts might have adaptations for piercing wood, such as increased hardness and stiffness. Here, we aimed to determine if the cuticle that comprises cicada mouthparts is enhanced with metals and other inorganic elements that could increase cuticular hardness and stiffness. We used scanning electron microscopy and energy dispersive X-ray spectroscopy to study mouthpart morphology and to determine which elements are found in the mouthpart cuticle. We found metals and other inorganic elements in the cicada mouthparts. Additionally, nanoindentation was used to determine mouthpart mechanical properties. Metals were mostly located at the tip of the mouthparts (the part that pierces wood), which was harder than other regions. These findings are not only valuable to the fields of material sciences, coevolution, and ecology, but provide another interesting aspect of cicada biology.
Abstract
Adult cicadas pierce woody stems with their mouthparts to feed on xylem, suggesting the presence of cuticular adaptations that could increase hardness and elastic modulus. We tested the following hypotheses: (a) the mouthpart cuticle includes inorganic elements, which augment the mechanical properties; (b) these elements are abundant in specific mouthpart structures and regions responsible for piercing wood; (c) there are correlations among elements, which could provide insights into patterns of element colocalization. We used scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS) to investigate mouthpart morphology and quantify the elemental composition of the cuticle among four cicada species, including periodical cicadas (Magicicada sp.). Nanoindentation was used to quantify the hardness and elastic modulus of the mandibles. We found 12 inorganic elements, including colocalized manganese and zinc in the distal regions of the mandible, the structure most responsible for piercing through wood; nanoindentation determined that these regions were also significantly harder and had a higher elastic modulus than other regions. Manganese and zinc abundance relates to increased hardness and stiffness, as in the cuticle of other invertebrates; however, this is one of the first reports of cuticular metals among insects with piercing-sucking mouthparts (>100,000 described species). The present investigation provides insight into the feeding mechanism of cicadas, an important but understudied component of their life traits.
Introduction
The expansive diversity of insect feeding habits is considered an important contributor to their massive ecological and evolutionary successes [1][2][3]. Although several insect lineages have retained the ancestral structural ground plan of chewing mouthparts [4], many insect groups have evolved an array of mouthpart shapes, chemistries and structural organizations that facilitate access to new food sources and feeding habits [5][6][7]. In addition, some insects have mouthparts augmented with inorganic elements, including transition metals, which influence the mechanical properties of the cuticle by hardening structures, increasing resistance to wear and affecting elastic modulus (i.e., Young's modulus) [8][9][10].
Several hemipteran species are important pests that pierce and feed on fruits and other crops [25][26][27], which could require metal-augmented mouthparts for piercing, thus warranting further study in this insect group. A noteworthy group of hemipterans that might have mouthparts modified with inorganic elements are cicadas (Cicadidae, 3000+ described species). Cicadas produce the loudest sounds among insects (over 150 dB in Cyclochila australasiae) [28] and the 13- and 17-year periodical cicadas (Magicicada sp.) are famous for their mass emergences in eastern North America, contributing to their popularity among scientists and the public [29,30]. In addition, trees of various species acquire damage during mass emergences of periodical cicadas due to the females using their ovipositors to pierce through woody stems to lay their eggs (i.e., flagging) [31][32][33]. For this reason, we previously investigated the material properties of the ovipositor cuticle and found a variety of inorganic elements at the distal ovipositor tip, the region responsible for piercing through wood [34].
Many insects pierce wood for oviposition (e.g., some Hymenoptera) [35,36]. However, cicadas are unique in that they also use their mouthparts to pierce wood for xylem feeding, i.e., it is uncommon for an organism to have two separate cuticular tools needed for piercing through wood for separate purposes. The immature cicadas feed primarily on the roots of grasses and small lateral tree roots [37], which might be softer due to the moisture content in the soil [38], but adults pierce through the harder stems and branches [39]. The piercing-sucking mouthparts of cicadas consist of a stylet comprised of two medial maxillae, which form a salivary duct and a food canal, and two lateral mandibles that in some cicada species have bumps at the distal region [39]. The stylet is enclosed by a sheath-like labium that allows the stylet to project distally into wood when feeding.
Here, we explore the material composition and mechanical properties of cicada mouthparts and hypothesize that the cuticle has inorganic elements concentrated in mouthpart regions responsible for piercing wood. In addition, studies have shown some correlative traits between inorganic elements present in the cuticle [40,41], which might work synergistically for specific mechanical adaptations. Here, we tested the following predictions:
1. The presence and accumulation of inorganic elements will vary based on the mouthpart structure, the location on the structure and by cicada species.
2. Mouthpart regions with transition metals will be harder and have higher elastic modulus.
3. Correlations between elements might exist, showing patterns of colocalization.
Species
Pinned specimens of four cicada species were obtained from the insect collection at . Only females were used for each species to remove the possibility of sexual dimorphisms that could impact our analysis.
Mouthpart Morphology
Cicada mouthparts were imaged with scanning electron microscopy (SEM). The labial sheath was removed from the head by sliding it distally with forceps so that the stylet (mandibles and maxillae) remained attached to the head. The heads, with attached stylets, and the detached sheaths were then placed through an ethanol dehydration series (15 min each in 70%, 80%, 90% and 100% EtOH) followed by at least 24 h in hexamethyldisilazane. The mandibles and maxillae were removed from the head and individually secured to an aluminum stub using carbon graphite tape so that the dorsal side was exposed. The labial sheath was positioned similarly. The mouthparts were sputter-coated with 10 nm of platinum using an EMS 150TS sputter coater and imaged at 75× magnification for the sheaths and 300× magnification for the mandibles and maxillae using a JEOL 6010LV SEM.
Serial images were combined into single composite images in Microsoft PowerPoint and measurements of structures were acquired using ImageJ software [42]. Mandible and maxilla lengths were measured from the base to the distal tip for each individual. Only the distal segment of the labial sheath had its length measured because this section remained intact during the removal process. Each mandible, maxilla and distal segment of the sheath had its width measured in three locations along the length: a distal (Location 1), middle (Location 2) and proximal location (Location 3) (Figure 1). The locations for width measurements were determined by first drawing a line along the length for each structure on the serial image in Microsoft PowerPoint, then dividing that line into three equal-sized parts to represent different regions. The middle of each region was used as one of the three width measurements per structure. The distal region of a mandible was further examined to determine bump number, bump length and bump width (Figure 1). Bump length was measured from the base to the distal tip and bump width was measured at the base. For individuals with multiple bumps, only the middle bump was measured.
Elemental Composition with Energy-Dispersive X-ray Spectroscopy
Energy-dispersive X-ray spectroscopy (EDS) (X-Max50, Oxford Instruments) was used to quantify the elemental composition of the mouthpart cuticle. The mandible, maxilla and sheath were analyzed (20 kV, spot size 60-65, magnifications >200×) for three minutes at four specified locations: a proximal location at 25% of the mouthpart length from the base, a middle location at 50%, a distal location at 75% and a distal tip location (Figure 1). The locations were identified using SEM and the magnification was increased until the entire field of view consisted only of the location of interest (i.e., no background or debris visible). EDS data were reported in Aztec software (Oxford) as percentage weight for each detected element. EDS is capable of identifying and measuring the elemental composition for most elements that have an atomic number higher than that of neon (atomic number 10).
Hardness and Elastic Modulus Measurements
Mandibles from individuals of N. linnei, M. cassinii and M. septendecula (n = 3 individuals for each species) were removed with a razorblade and placed into a droplet of dH2O on a glass slide. We did not study the hardness and elastic modulus of M. septendecim due to a lack of available specimens. Forceps were used to manipulate the mandibles while a paintbrush wetted with dH2O was used to remove debris. The mouthparts were then positioned on a dry glass slide so that the lateral side was exposed and secured with clear tape. A Bruker Hysitron TI Premier nanoindenter with a Ti-0045 cono-spherical probe (90° cone angle, 5 µm tip radius, diamond coated) was used with Triboscan v9.6 software to acquire hardness and elastic modulus measurements. It was determined prior to experimentation that the actual shape of the indenter probe agreed with the default area function of a spherical probe, A = 2πR·h − π·h², where A is the cross-sectional area, h is the indentation depth and R is the probe tip radius. Three measurements were acquired on the lateral side of each mandible at the proximal and distal locations, each with load-controlled quasi-static indentation tests using a standard trapezoidal loading function (5 s loading, then 2 s dwell, followed by 5 s unloading) and a maximum load set to 1000 µN. Force-displacement curves were analyzed by the TriboScan software, which is based on the Oliver-Pharr method [43], to obtain the reduced modulus and hardness values. The reduced modulus was converted to elastic modulus using the assumption that the Poisson's ratio of the cicada mouthparts is 0.3 [44,45].
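For reference, the reduced-to-elastic modulus conversion mentioned above follows the standard relation 1/E_r = (1 − ν_s²)/E_s + (1 − ν_i²)/E_i. The sketch below applies it; the diamond-indenter constants (E_i ≈ 1140 GPa, ν_i ≈ 0.07) are typical handbook values and an assumption on our part, since the text only states ν_s = 0.3.

def elastic_modulus_from_reduced(E_r, nu_s=0.3, E_i=1140.0, nu_i=0.07):
    # Convert reduced modulus E_r (GPa) from nanoindentation to the
    # sample elastic modulus E_s (GPa):
    #   1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i
    return (1 - nu_s**2) / (1 / E_r - (1 - nu_i**2) / E_i)

# Example: a reduced modulus of 2.4 GPa measured on a mandible
print(elastic_modulus_from_reduced(2.4))  # ~2.19 GPa for nu_s = 0.3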
Statistics
Assumptions were tested prior to statistical analysis. Necessary transformations were performed on variables that violated assumptions of independence, normality, homoscedasticity or multicollinearity. An analysis of variance (ANOVA) was used to determine if there were significant differences (p < 0.05) in mouthpart morphology among and within species with JMP v16 statistical software. Significant differences in means were ranked using a post-hoc Tukey HSD test.
All statistical analyses for the EDS measurements were performed in R software [46]. To increase statistical power and minimize a Type I error associated with multiple dependent variables, a MANOVA was used to understand the relationship between the abundance of elements (organic and inorganic) in the mouthparts (mandible, maxilla and sheath), locations on each mouthpart (tip, distal, mid and proximal) and species (M. cassinii, M. septendecim, M. septendecula and N. linnei). Using the dplyr package in R [47], the model included organic and inorganic elements as the response variables, mouthpart structure and location within the mouthpart as fixed effects and species were included as random effects. Variables that were statistically significant (p < 0.05) were further investigated using ANOVAs and post-hoc Tukey HSD tests to analyze differences between means. To investigate patterns within individual species, additional ANOVA models were created for each species with mouthpart and location within the mouthpart as fixed effects and elements as the response variables. Carbon and oxygen were excluded from species models because of their prominence in all species.
Pearson's correlations were run between elements to determine correlative effects in the following groups: mouthpart structure and cicada species. Correlations between elements were used to determine if elements colocalized with other elements within specific mouthparts and mouthpart structures. We also performed principal component analyses (PCAs) and created plots using the ggfortify package in R [48] to determine patterns of inorganic elements abundance in mouthpart structures and locations within structures. The PCA models used a correlation matrix that standardizes each variable to better explain structure and variable relationships [49]. Carbon and oxygen were excluded from the analyses.
A linear discriminant analysis was used to determine if mouthpart morphology or EDS data (inorganic elements only) can be used as an accurate species classification system. A hierarchical clustering analysis (Ward's method with standardized data) was used to simultaneously evaluate the morphology and EDS inorganic-element data to produce a dendrogram in order to determine if the four cicada species show phylogenetic grouping patterns.
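The original analyses were run in JMP and R; purely as an illustration of the same pipeline, the Python sketch below standardizes a hypothetical per-location EDS table (rows are measurement locations, columns are element weight percents plus a species label; the file name and column names are invented) and reproduces the PCA-on-correlation-matrix and LDA classification steps.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical EDS table: inorganic-element weight percents per
# measurement location (C and O excluded, as in the text).
eds = pd.read_csv("eds_measurements.csv")
X = eds.drop(columns=["species"]).values
y = eds["species"].values

# PCA on a correlation matrix is PCA on standardized variables.
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
print("explained variance ratios:", pca.explained_variance_ratio_[:3])

# LDA as a species classification system, scored by leave-one-out CV.
acc = cross_val_score(LinearDiscriminantAnalysis(), X_std, y,
                      cv=LeaveOneOut()).mean()
print(f"LDA classification accuracy: {acc:.0%}")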
Differences in Lengths and Widths of Mouthpart Structures among Species
There were significant differences in the lengths of each mouthpart structure among species (Supplementary Tables S1 and S2). The mandible and maxilla were significantly longer in N. linnei than the Magicicada sp. (p < 0.0001 for both structures). There also were differences in the sheath length among species with N. linnei having the longest sheath and significantly shorter sheaths for M. septendecim and M. cassinii (p = 0.0002) (Supplementary Tables S1 and S2).
The sheath at the distal region (Location 1) was significantly wider for N. linnei than M. septendecula and M. cassinii (p = 0.0016) and sheath width for M. septendecim was the largest for the periodical cicadas. The sheath width at Location 2 and Location 3 had similar patterns where it was widest in N. linnei (p = 0.0001, p = 0.0002, respectively). The maxilla width had a similar pattern among species and was widest in N. linnei (Location 1, p < 0.0001; Location 2, p = 0.0017; Location 3, p < 0.0001). All locations along the mandible were generally wider in N. linnei than the periodical cicada species (p = 0.0511) and Locations 2 and 3 were significantly wider in N. linnei (p = 0.0009, p = 0.0246, respectively) (Supplementary Tables S1 and S2). The extent of tapering of each mouthpart was determined by comparing the widths at each location within species (Supplementary Tables S1 and S3). The mandible, maxilla and sheath widths were similar along the length of each structure in M. cassinii (p = 0.6640, p = 0.2612, p = 0.2204, respectively), indicating a lack of extensive tapering. For M. septendecim, the maxilla and sheath widths were consistent along its length (p = 0.1100, p = 0.1004, respectively), but the mandible width significantly tapered along its length (p = 0.0338). For M. septendecula, the mandible and sheath widths were similar along their lengths (p = 0.4763, p = 0.8065, respectively), but the maxilla significantly tapered distally (p = 0.0046). The mandible width of N. linnei was consistent along its length (p = 0.4801), but there was significant tapering in the maxilla and the sheath (p = 0.0076, p = 0.0094, respectively) (Supplementary Tables S1 and S3).
Tip Morphology of Mandibles and Maxillae
All observed maxillae consisted of two sections that together created a salivary duct and a food canal. In addition, the maxillae appeared to have the ability to perform a sliding mechanism where a maxilla can move posteriorly, exposing the food canal. Linking structures were observed at the proximal locations, which likely keep the two parts together while performing the sliding mechanism (Figure 2). Bumps were located near the tip of the mandible (Figure 2).
Inorganic Elements by Mouthpart Structure, Location and Cicada Species
The mouthpart structures (sheath, mandible and maxilla), locations on the structures, species and the interactions of these variables significantly affected the abundance of inorganic elements (p < 0.05, Supplementary Table S7). When mouthpart locations were compared per structure, there was a trend of higher concentrations of Cl near the distal regions, particularly for the mandible (Figure 3). When individual species were analyzed per structure, Magicicada sp. had less variation in Cl abundance than N. linnei (Table 1). Only M. cassinii had consistent changes of K and Cl (Table 1); K abundance decreased from the tip to the proximal base of each mouthpart structure (Supplementary Figure S1) and the mandibles showed a similar trend for Cl (Table 1). Sodium (Na) differed significantly among mouthpart structures, locations within mouthpart structures, species and their interactions (mouthpart structure, p = 0.001; location within structure, p = 0.009; species, p < 0.001, Supplementary Figure S2; for interactions see Supplementary Table S7). There were greater amounts of Na in M. septendecula, particularly in the sheath, compared to other Magicicada sp. (all p < 0.05). In the mandibles of M. cassinii and M. septendecim, Na was more abundant at the tip and distal regions compared to other regions (all p < 0.05, Supplementary Figure S2). Despite having overall more Na than other species, there were no differences in Na abundance across mouthpart structures and locations within mouthparts among M. septendecula specimens (Supplementary Figure S2, Table 1). Sulfur (S) and phosphorus (P) remained consistent across mouthpart structures and locations on mouthparts (all p > 0.05, Table 1, Supplementary Table S7).
Alkaline Earth Metals: Calcium, Magnesium
With species as a random effect, calcium (Ca) remained constant across mouthpart structures and locations on structures (all p > 0.05, Supplementary Table S7). Magnesium (Mg) varied significantly among species (p = 0.016) and, with species as a random factor, Mg was detected in greater amounts in the sheath compared to other structures (p = 0.008, p = 0.028, respectively) and had significantly higher abundances in the distal regions (all p < 0.05). Only N. linnei and M. septendecim showed differences in Mg abundance (Table 1). Interactions between mouthpart structures and locations showed a significantly greater concentration of Mg in the distal region of the sheath compared to other structures and locations (p = 0.032, Supplementary Figure S3). Aluminum (Al) abundance differed among species (Table 1); in fact, Al accumulation was only statistically significant in M. septendecim (Supplementary Figure S4, Table 1) and was most abundant in the distal and tip regions (p < 0.001). There were no significant differences in iron (Fe) or silicon (Si) abundance across mouthpart structures, locations on structures, or species (all p > 0.05, Table 1). Fe was not detected in M. septendecim or M. septendecula specimens (Table 1). There were significant differences in manganese (Mn) and zinc (Zn) abundances across mouthpart structures and locations within the mouthparts (manganese, p < 0.001, Figure 4; zinc, p < 0.001, Figure 5).
Correlations between Organic and Inorganic Elements by Mouthpart Structure
O was negatively associated with several inorganic elements (Table 2); however, the inorganic elements varied with mouthpart structure. Na and Zn, for example, were negatively associated with O in mandibles ( Table 2). In the maxillae, O was negatively associated with S, Cl, K and Ca (Table 2) and in the sheath O was negatively associated with Na, S, Cl and K ( Table 2). Cl was positively associated with K, Na and Ca in all mouthpart structures ( Table 2).
Correlations between Organic and Inorganic Elements by Species
Individuals of M. cassinii had strong negative correlations between Si and organic elements, except for P (Supplementary Table S8). Individuals of M. cassinii and M. septendecim both showed strong positive relationships between Cl and several inorganic elements, including Na, Mn and Zn (Supplementary Tables S8 and S9), but M. septendecula did not (Supplementary Table S10). C and O in M. septendecim showed strong negative relationships with several inorganic elements including S, Cl and K. In M. septendecim, S had strong positive correlations with Cl, K, Mn and Ca (Supplementary Table S9). In M. septendecula, there were strong negative correlations between C and O and alkaline metals (Cl, K and Mn, see Supplementary Table S10). Similar to M. septendecim, N. linnei demonstrated negative correlations between O and Cl, K and S (Supplementary Table S11). Aside from these correlations, N. linnei showed different patterns of associations between elements from the Magicicada species; for example, N. linnei was the only species that did not show associations (positive or negative) between carbon and other elements. Individuals of N. linnei also showed several positive relationships between Al and Ca, Cl and S (Supplementary Table S11), which were lacking in the Magicicada sp.
Generalized Patterns of Metal Bioaccumulation in Cicada Mouthparts and Locations on the Mouthparts
For individual mouthpart structures, PC1 explained 43% of the variation in the mandibles, 44% in the maxillae and 32% in the sheath (Supplementary Figure S5 and Table S12). PC2 explained 16% in the mandibles, 17% in the maxillae and 23% in the sheath (Supplementary Table S12). K and Cl explained much of the variation in PC1 for all three mouthpart structures and Ca was also important for understanding PC1 for the maxillae and sheath (Supplementary Figure S5 and Table S13). As previously stated, Al was not present in the mandibles or the sheath but explained much of the variation in PC1 of the maxillae. For locations on the mouthpart structures, PC1 explained 34% of the variation in the tip location, 36% in the distal location, 19% in the mid location and 42% in the proximal location (Figure 6, Supplementary Table S14). In the mandibles, Si explained most of the variation in PC2 for the distal location and Mg explained most of the variation in the distal region of the sheath (Figure 6, Supplementary Table S15).
Table 2. Pearson correlations (r) between elements found in the mouthpart structures of cicadas. * Aluminum was not detected in the mandibles and maxillae. Significant correlations (p < 0.05) are recorded in bold.
Patterns in Cicada Grouping by Morphological Measurements and EDS Results
A linear discriminant analysis revealed an accurate classification system when only morphological measurements were used (100% accurate classification for each species) (Table 3). However, when only EDS measurements were used, the classification system inaccurately grouped several individuals. For example, only 60% of individuals of M. cassinii were correctly classified, with 40% inaccurately classified as M. septendecim or N. linnei. For M. septendecim and N. linnei, only 60% were accurately classified within each species group. In contrast, 100% of M. septendecula were correctly classified (Table 3).
Hardness and Elastic Modulus
The mechanical properties of cicada mandibles were determined by quantifying the elastic modulus and hardness at the distal and proximal locations using nanoindentation. There were significant differences between the proximal and distal locations for both measurements for all species. For M. cassinii, elastic modulus and hardness were greater in the distal region (both p < 0.0001) (Figure 8). Similar patterns were observed in M. septendecula (elastic modulus, p = 0.0077; hardness, p = 0.0091) and N. linnei (elastic modulus, p = 0.0141; hardness, p = 0.0065) (Supplementary Table S16). Comparisons of each location among species revealed a pattern where there were significant differences in proximal locations, but not the distal locations (elastic modulus, p = 0.2706; hardness, p = 0.1252) (Supplementary Table S17). The elastic modulus in the proximal location was significantly higher for N. linnei than the Magicicada sp. (p = 0.0005) and N. linnei and M. septendecula had significantly harder proximal regions than M. cassinii (p = 0.0007) (Figure 8).
Discussion
The present study is the first to reveal the elemental composition of the mouthpart cuticle of cicadas and to our knowledge, the first to find a wide array of inorganic elements in piercing-sucking mouthparts of insects (100,000+ species). Insects exhibit a range of mouthpart types but piercing-sucking mouthparts are found in the true bugs (Hemiptera), thrips (Thysanoptera), lice (Psocodea), some flies (Diptera), fleas (Siphonaptera) and some moths (Lepidoptera) [6]. The composition of the mouthpart cuticle of these groups remains relatively unstudied. Here, we found that cicada mouthparts contain transition metals (Fe, Mn and Zn), a post-transitional metal (Al), alkaline Earth metals (Ca and Mg), alkali metals (K and Na), a metalloid (Si), non-metals (P and S) and a halogen (Cl).
Transition metals (Fe, Mn, Zn and copper (Cu)) are arguably the most studied inorganic elements found in insect cuticle [9,15,24,50,51]. Given their adaptive role in the cuticle, such as increased hardness and elastic modulus, transition metals are localized or colocalized in regions of cuticular "tools" responsible for cutting or piercing through hard substrates [20,51]. Cu was not found in the cuticle of cicada mouthparts, but the transition metals Fe, Mn and Zn were present. However, Fe was only found in small abundances in the mouthparts of M. cassinii and N. linnei.
Mn and Zn were colocalized at the distal regions of the mouthparts, particularly where the mandibular bumps were found (Figures 2, 4 and 5; Supplementary Figure S5), a region subjected to high friction forces and wear during the piercing mechanism. Similar colocalization patterns of Mn and Zn have been found in the cuticular tools of several other distantly related groups, including wasps [36,50,51], beetles [13], spiders [19,20,52] and polychaetes [53,54], that also are subjected to wear or breakage. As shown in this study, Zn and Mn often colocalized with the halogen Cl (Figure 6, Supplementary Figure S5). Zn can form cross-links with nitrogen on histidine amino acids, possibly as Zn(His)4 or as Zn(His)3Cl [51,53,55], and these additional chemical bonds augment the mechanical properties of the cuticle. The role of Mn in increasing cuticular hardness is more contentious [9,50]. Recent evidence suggests that Mn not only has the capacity to perform similarly to Zn [51,54], but at concentrations lower than what is required of Zn. Mn can create a range of bonds with protein ligands (up to six), whereas Zn can only create three bonds [18,54,56]. The colocalization of Mn and Zn with Cl suggests that manganese chloride (MnCl2) and zinc chloride (ZnCl2) might be present, but Cl could be present in other compounds too, such as chlorotyrosines, which are often found in regions of cuticle with extensive sclerotization [57].
The proposed role of Zn and Mn was supported in this study, as the distal regions of the mandibles, where Zn and Mn were primarily located, were harder and had greater elastic modulus than the proximal regions ( Figure 8). In addition, N. linnei and M. septendecula had harder mouthparts and relatively more Zn and Mn, further supporting their adaptive role (Figures 4 and 5). Hardness is defined as a material's resistance to permanent deformation when a particular force is applied and elastic modulus is the ratio of stress to strain during the deformation of a material [58,59]. These mechanical properties have been measured on the cuticle of a wide variety of invertebrates, including bed bugs [60], beetles [61,62], grasshoppers [44], flies [11], among others. Here, the distal region of the cicada mandibles had an average elastic modulus of 2.16 GPa and hardness of 155.05 MPa (Figure 8). These values are similar to those reported for other insect species, such as the elytra on the dung beetle, Geotrupes stercorarius [63,64] and the pre-stomal teeth of the yellow dung fly, Scathophaga stercoraria [11] and are comparable to the polymer polycarbonate [65]. The reported values here, however, might differ from those of living cicadas, because the mechanical properties of the cuticle are largely affected by the extent of hydration [66].
The deposition of Mn and Zn into the cuticle occurs chronologically where Mn is incorporated before Zn and both take place after cuticle has formed and sclerotized [8,55,67]. Given that metal incorporation occurs after the cross-linked matrix of cuticle has already formed, the mode of transportation of metal ions into the cuticle requires further study. At this point in time, the leading hypothesis for metal incorporation relates to the discovery of channels in spider fangs, up to 50 nm in diameter, that might be used for transporting Zn and Cl to specific locations [68].
The maxillae likely had lower hardness and elastic modulus values because Zn was almost entirely absent and Mn was present in lower quantities than in the mandibles. The sheath displayed a different pattern of elemental composition, mostly by lacking transition metals and instead having larger amounts of Mg and K (Supplementary Figures S1 and S3). The contribution of Mg and K to the mechanical properties of insect cuticle, however, is not clearly understood. The hard material high-magnesium calcite (CaMg(CO3)2) was previously reported in the exoskeleton of the leaf-cutter ant, Acromyrmex echinatior [69]. The lack of a correlation between Ca and Mg in cicada mouthparts indicates the absence of high-magnesium calcite. Given that the sheath does not pierce wood, the large amount of Mg suggests an adaptive role other than increased hardness or greater elastic modulus, perhaps contributing to decreased susceptibility to fracturing.
We previously reported the elemental composition of cicada ovipositors [34], using the same individuals used in this study, thus providing an opportunity to compare two piercing structures from the same group of individuals. In the present study, the abundance of Zn was in relatively high concentrations in the mandibles where it averaged approximately 0.55%wt (1.1%wt at distal and tip regions) but was nearly absent in the ovipositors (0.02%wt). The lack of Zn in the mouthpart structures not responsible for piercing (maxillae had 0.01%wt and the sheath had 0%wt) was expected, but the lack of Zn in the ovipositors, which do pierce, indicates a potential mechanism whereby particular elements, including transition metals, are differentially allocated to specific regions on specific structures. This proposed hypothesis is further supported by examining the allocation of Mn, where the cicada ovipositors had an average of 0.2%wt of Mn at an abundance slightly higher than those reported here for the mandibles (0.09%wt for the entire mandible, 0.17%wt at the distal and tip regions). The periodical cicada, M. cassinii, displayed the greatest differential in Mn abundance with an average of 0.31%wt in the ovipositor compared to 0.06%wt in the distal and tip regions of the mandibles [34]. For M. septendecim, the pattern was the opposite where there were high levels in the tip and distal regions of the mandible (average 0.17%wt) but only small quantities in ovipositors (0.02%wt). The mouthparts of N. linnei had less Mn than the ovipositors and interestingly, M. septendecula completely lacked Mn in its ovipositor but had high amounts of it in the mandibles (0.30%wt).
Although Mn and Zn both likely contribute to hardness and elastic modulus properties, the complex nature of their chemistry and bond-formation potential with other elements that can create a variety of molecules suggests that they might be able to contribute to other mechanical properties. For example, Mn might play an important role in preventing fracturing, which would include the formation of other chemical bonds. Unfortunately, nanoindentation was not used in the ovipositor study to assess mechanical properties, which could have provided an opportunity to examine how Mn contributes to hardness or elastic modulus in a general absence of Zn.
The life history of cicadas makes them difficult to study. The periodical cicadas, for example, spend up to 17 years underground, so key aspects of their biology, such as growth rates and feeding preferences remain relatively unknown. Here, we consider element presence and abundance and how these inorganic elements are distributed to various structures as a method for determining cicada life history traits. The presence and abundance of inorganic elements in cicada cuticle likely comes from ions that are ingested while feeding on the xylem from trees. Trees host a variety of inorganic elements in their xylem [70,71] and cicadas are likely to begin acquiring these elements as immature nymphs; however, it is unknown if inorganic element acquisition begins at early stages of cicada development or closer to the adult stage. If tree species differ in the presence and abundance of elements, this could be reflected by the cicada cuticle. In addition, tree species differ in their hardness and we could expect that cicadas with harder mouthparts and ovipositors are adapted for harder trees, similar to what has been found regarding feeding behaviors of termites [72] and oviposition preferences of damselflies [73]. However, in this study, different cicada species overlapped in element abundance and presence, hence these characters were not useful for species delineation (Figure 7). In addition, although the cicada species studied here differ in their mouthpart morphology, the similarities among their structures make it difficult to use morphology as a tool to assess specific feeding preferences.
It is unclear if the absence of particular inorganic elements in structures, such as a lack of Zn in the ovipositor [34], is due to prioritizing Zn distribution to the distal regions of the mandibles or if there is a lack of a mechanism to allocate this element to the ovipositor. It is also unclear as to why natural selection has apparently favored Mn distribution in the ovipositor, but not Zn, which could be adaptive in facilitating ovipositor piercing. These questions represent some of the most compelling questions in this field: how are inorganic elements distributed to specific locations in the cuticle and what mechanism of selection is in place to ensure particular elements reach specific structures?
It is now known that adult cicadas feed, but this was not always the case. The idea that cicadas do not feed dates to Plato in ancient Greece, who wrote that cicadas were originally men who were enchanted by the Muses to sing for so long that they did not eat, and died. The Muses, as a reward, turned the men into cicadas, so they could sing all day without the need to eat. The view that adult cicadas did not eat persisted until Paul Dudley corrected it in 1733, writing, "some have inclined to think (cicadas) eat nothing . . . but at length by a careful observation, it has been found that they are nourished by the juices of the tender twigs, especially of young apple trees, which they draw out by piercing them with the proboscis" [74].
Conclusions
The present study provides additional information about how cicadas are likely to feed. The sheath lacks significant amounts of transition metals, suggesting that its main function is to house the stylets, possibly keeping them clean from debris and injury. Once a suitable tree host is located, the stylet exits the sheath and the mandibles begin antiparallel piercing movements to reach xylem, which is facilitated by having larger abundances of transition metals in the distal regions. After the wood is pierced, the maxilla enters the vascular bundle to feed on xylem by a sucking mechanism that requires the sucking pump in the cicada's head to induce a pressure differential. Although we now know that adult cicadas feed, several questions remain regarding the mechanism for the allocation of inorganic elements and additional studies are needed to determine how elements other than transition metals, such as K, Na, P and Si, augment the insect cuticle.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology12020207/s1, Figure S1: Violin plots showing differences in potassium (K) abundance in mouthparts and locations within the mouthparts of Magicicada cassinii; Figure S2: Violin plots demonstrating differences in sodium (Na) abundance at different locations of cicada mouthpart structures; Figure S3: Violin plots showing differences in the abundance of magnesium (Mg) in mouthpart structures and locations within the structures of Neotibicen linnei; Figure S4: Violin plots showing differences in aluminum (Al) abundance in mouthparts and locations within the mouthparts of Magicicada septendecim; Figure S5: Principal component analysis (PCA) of all four cicada species; Table S1: Morphological measurements (mean ± S.E.) of cicada mouthpart structures; Table S2: ANOVA results of measurements of mouthpart morphology among cicada species; Table S3: ANOVA results of width measurements of mouthpart structures within cicada species; Table S4: Measurements (mean ± S.E.) of bump number and sizes among different cicada species; Table S5: ANOVA results of measurements of mandibular bumps among cicada species; Table S6: MANOVA results of differences among inorganic element abundance in mouthpart structure, location on each structure, cicada species and the interactions of these variables; Table S7: ANOVA results analyzing differences of inorganic element abundance in mouthparts (M), locations on mouthparts (L), cicada species (S) and the interactions of these variables; Table S8: Pearson's correlations (r) between elements in M. cassinii; Table S9: Pearson's correlations (r) between elements in M. septendecim; Table S10: Pearson's correlations (r) between elements in M. septendecula; Table S11: Pearson's correlations (r) between elements in N. linnei; Table S12: Explained variance of each of the principal components analyzed for mouthparts; Table S13: Table of loadings of all variables for each of the first three principal components studied for cicada mouthparts; Table S14: Explained variance of each of the principal components analyzed for locations within mouthparts; Table S15: Table of loadings of all variables for each of the first three principal components studied for location on cicada mouthparts; Table S16: ANOVA results comparing hardness (H) and elastic modulus (EM) between proximal and distal locations on the mandibles within cicada species; Table S17: ANOVA results comparing hardness (H) and elastic modulus (EM) between proximal and distal locations on the mandibles among cicada species. Data Availability Statement: The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
"Materials Science",
"Environmental Science",
"Biology"
] |
Branching process approach for epidemics in dynamic partnership network
We study the spread of sexually transmitted infections (STIs) and other infectious diseases on a dynamic network by using a branching process approach. The nodes in the network represent the sexually active individuals, while connections represent sexual partnerships. This network is dynamic as partnerships are formed and broken over time and individuals enter and leave the sexually active population due to demography. We assume that individuals enter the sexually active network with a random number of partners, chosen according to a suitable distribution, and that the maximal number of partners that an individual can have at a time is finite. We discuss two different branching process approximations for the initial stages of an outbreak of the STI. In the first approximation we ignore some dependencies between infected individuals. We compute the offspring mean of this approximating branching process and discuss its relation to the basic reproduction number R 0. The second branching process approximation is asymptotically exact, but only defined if individuals can have at most one partner at a time. For this model we compute the probability of a minor outbreak of the epidemic starting with one or few initial cases. We illustrate complications caused by dependencies in the epidemic model by showing that if individuals have at most one partner at a time, the probabilities of extinction of the two approximating branching processes are different. This implies that ignoring dependencies in the epidemic model leads to a wrong prediction of the probability of a large outbreak. Finally, we analyse the first branching process approximation if the number of partners an individual can have at a given time is unbounded. In this model we show that the branching process approximation is asymptotically exact as the population size goes to infinity.
Introduction
Sexually transmitted infections (STIs) are among the world's most common diseases and remain a major global threat. In addition to accounting for millions of deaths so far, over a million STIs are acquired every day worldwide, and STI pandemics continue to cause a major socio-economic burden on many developing countries (see, e.g. WHO 2015).
Over the past decades, several authors have used mathematical models to assess the impact of the structure of partnerships on the spread of HIV (Eaton et al. 2011; Heesterbeek et al. 2015). See also the Introduction of the Ph.D. thesis of Leung (2016) for an excellent discussion. In order to study the disease dynamics of HIV and other infectious diseases, much attention has been devoted to static networks (see e.g. Newman 2002; Diekmann et al. 2013; Ball et al. 2010, and references therein). The underlying assumption of that work is that once a connection is formed between two individuals it will remain unaltered and that during an epidemic outbreak no new partnerships are formed. However, social interactions do often vary over time: new connections are formed and others are dissolved, providing short-term opportunities for disease transmission. To incorporate these dynamics, Leung et al. (2012) (see also Leung 2016; Leung and Diekmann 2017) developed and analysed a deterministic model for the spread of an S I epidemic on a dynamic network. Here S stands for susceptible and I stands for infective. Their network model incorporates demographic turnover through individuals entering the population and dying and allows for individuals to have multiple partners at the same time, with the number of partners varying over time. This network model can be seen as an extension of pair formation models (Kretzschmar and Dietz 1998) to situations where individuals are allowed more than one partner at a time. Leung et al. (2012, 2015) extended the traditional pair formation models by incorporating the assumption that individuals have at most n partners at a given time.
A key parameter in epidemic modelling is the basic reproduction number, R 0. In epidemics on networks, it is usually defined as the expected number of secondary infections caused by a typical case, but not the initial case, in the early stages of the epidemic in a predominantly susceptible population. This concept is used both in deterministic and stochastic models for infection spread (Diekmann et al. 2013). It is well known that for a susceptible-infectious-recovered (SIR) epidemic in a homogeneously mixing population the process describing the number of infectious individuals during the early stages of the epidemic is well approximated by a suitable branching process (Ball and Donnelly 1995). In those branching process approximations, giving birth corresponds to infecting someone and death corresponds to actual death or recovery, while R 0 corresponds to the offspring mean in the branching process. In particular, if R 0 ≤ 1, then no epidemic is possible, while if R 0 > 1 the probability of a large outbreak is strictly larger than 0, but often strictly less than 1. There has been a lot of research on analysing the epidemic threshold, i.e. R 0 = 1, by rigorous branching approximation for the stochastic epidemic models involving networks (see, e.g. Britton 2010, and references therein). In fact, the technique of Ball and Donnelly (1995) can be used to approximate the initial phase of an epidemic on a contact network of large size by a suitable branching process (see, e.g. Ball et al. 2009, 2014). The present study is an extension of the work of Leung et al. (2015) and Leung (2016). Leung and co-authors use deterministic models to study different epidemic models on the dynamic graphs introduced in their work (which we briefly discuss in the following paragraph). In this deterministic approach, one implicit assumption is that the initial fraction of the population which is infectious might be very small, but always positive, which implies that the number of initially infectious individuals is large, because it is effectively assumed that the total population size is infinite. In the present study, we consider the epidemic and population dynamics as stochastic processes, where the expected population size is large but finite.
The network model of Leung et al. (2015) and Leung (2016) can be described as follows (for a detailed description see Sect. 2). Individuals enter the population at rate μN and die at rate μ per individual. This implies that the population size converges to N , which is assumed to be very large (and in the deterministic models effectively chosen to be infinite). Individuals enter the population without partners. An individual has at most n partners at a time, where n is a strictly positive integer (and can be chosen to be ∞). The possible partnerships are represented by so-called binding sites. At time t, let (1 − F(t))n be the average number of partners per individual in the population, i.e. F(t) is the fraction of binding sites that are "free" at time t. If an individual has k partners at time t it acquires a new partner at rate (n − k)ρ F(t), where ρ is a constant (the rate at which each free binding site tries to connect with another binding site; and that site is chosen uniformly from all binding sites and is thus free with probability F(t)) and partners separate at rate σ per partnership. In the S I epidemic framework, a susceptible individual becomes infectious at a rate β times the number of his or her infectious partners. Infectious individuals cannot recover, but of course they stop spreading when they die. A key ingredient in the models of Leung et al. (2015) is the mean-field at distance one assumption, which is a (non-exact) approximation of the distribution of the number of partners of partners of a newly-infected individual.
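For readability, the event rates of this model, as stated in the preceding paragraph, can be collected in one place (k denotes the current number of partners of the focal individual):

entry into the population: total rate μN
death: rate μ per individual
partner acquisition for an individual with k partners: rate (n − k) ρ F(t)
separation: rate σ per partnership
infection of a susceptible: rate β × (number of infectious partners)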
We approach the models by Leung and co-authors from a stochastic perspective. To do this, we make some further assumptions, which make computations easier and the communication of our main message clearer. In contrast to the deterministic models mentioned before, we do not assume that a new individual in the population starts as single. Instead, we assume that individuals upon entering the population immediately form a (random) number of partnerships with individuals already in the population. The distribution of this random number is chosen in such a way that the distribution of the number of partners of an individual does not change over time. That is to say, incoming individuals have a stationary distribution of the number of partners (usually referred to as the degree distribution). The advantage of this assumption is that dependencies between the number of partners of an individual and the infection status of the individual become more tractable and the mean-field at distance one approximation of Leung et al. (2015) and Leung (2016) is no longer needed. We follow Leung et al. (2015) in ignoring the difference between males and females, and in this way effectively consider a homosexual or asexual population. Although this might be unrealistic, we think our main message is conveyed more clearly by this omission.
The main purpose of this paper is to analyse possible approximations of the early stages of a stochastic epidemic in the described network by suitable branching processes. Our analysis focuses on the early stage of an epidemic outbreak where only a small number of individuals is initially infected. Note that this assumption does not fit within the deterministic framework, where the number of initial infectives is either exactly 0 or large, because in those models the initial fraction of the (effectively infinite) population infected has to be either 0 or strictly positive. In particular, we are concerned with deriving explicit formulas for the threshold parameter R 0 and the probability of extinction. For this, we use two approximations for the model.
In the first approximation we consider a general maximal number of partners, n, but because of certain dependencies to be described in detail afterwards, it is not possible to do more than computing R 0 , which is here purely interpreted as the expected number of other individuals infected by one infectious individual during the early stages of the epidemic. We note that we are not able to prove that this R 0 has the desired threshold property which it has for epidemics in homogeneously mixing populations.
The second approximation is only valid for n = 1, which corresponds to the pair formation model of Kretzschmar and Dietz (1998). What makes this approach different from the first is that here we can describe the dynamics of the disease through an asymptotically exact approximating branching process. From this we can easily obtain the extinction probability as well as a threshold parameter, denoted by R̂ 0. This reproduction number R̂ 0 differs from R 0 and cannot be interpreted as the expected number of individuals infected by a typical infected individual. The interpretation of R̂ 0 is discussed in Sect. 3.2. Unfortunately, we did not find a way to generalize this approach to n > 1. For further reflections on R 0, we refer the reader to Cushing and Diekmann (2016).
Finally, in order to avoid undesirable dependencies that appear and complicate the two branching process approximations, we also study the case in which there is no maximal number of partners, i.e. when n = ∞ (cf. Altmann 1995). For this model, we can compute the reproduction number as well as an implicit expression for the extinction probability.
The main contributions of the current work are:
- to present a branching process approach for analysing the early stages of an outbreak of a sexually transmitted infection, or any other infectious disease, spreading along the dynamic network. In doing this, we show why an appealing, straightforward branching process approximation of the epidemic process is not correct, because it ignores some subtle dependencies;
- to characterize the basic reproduction number and the probability of extinction for the dynamic network by using a branching process approach.
The paper is structured as follows. Section 2 is devoted to the model definition and assumptions. In Sect. 3, we present two stochastic approximations of the model. In the first, we use a naive (appealing but wrong) branching process approximation to analyse the early phase of an epidemic spreading through a dynamic sexual network. We use the second (less intuitive) approximation of the model with n = 1 to compute a threshold parameter $\hat R_0$ and the correct probability of extinction during the initial phase of the epidemic. Here we also provide a discussion of the influence of the dependencies. In Sect. 4, the first approximation of the model is used to study the epidemic on the dynamic network when the partnership capacity is infinite, i.e. when n = ∞. In this particular case the dependencies fall away and we may use branching processes to analyse the early phase of an SI epidemic spreading through a dynamic sexual network. In particular, we compute the reproduction number $\bar R_0$ and the offspring distribution, and we compare the reproduction numbers as n → ∞. Finally, we discuss our analytical findings and give an outlook on future work in Sect. 5.
Model definition and assumptions
In our model we assume that individuals enter the population at rate μN (i.e. according to a Poisson process with intensity μN) and that individuals have independent exponentially distributed "lifetimes" (or times they stay in the active population) with expectation 1/μ, i.e. individuals leave the active population at rate μ times the number of individuals in this population. This implies that the distribution of the population size, say $N^*(t)$, converges as t → ∞ to a Poisson distribution with mean N, i.e. the stationary and limiting distribution of the population size is Poisson with expectation N (Resnick 2013, Ch. 5). We assume that N is very large.
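As a quick numerical illustration of this claim (the code is ours, μ is set to 1 and the remaining values are arbitrary), the following Python sketch simulates the immigration-death dynamics and compares the empirical mean and variance of the population size with the Poisson prediction (for a Poisson law both equal N).

```python
import random

def simulate_population(N, T, seed=0):
    """Immigration-death process: arrivals at rate mu*N, deaths at rate mu
    per individual (here mu = 1); the stationary law is Poisson(N)."""
    rng = random.Random(seed)
    mu, n, t = 1.0, N, 0.0
    while t < T:
        total = mu * N + mu * n          # total event rate
        t += rng.expovariate(total)      # time to the next event
        if rng.random() < mu * N / total:
            n += 1                       # arrival
        else:
            n -= 1                       # death
    return n

samples = [simulate_population(100, 50.0, seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)
print(mean, var)  # both close to N = 100, as Poisson(N) predicts
```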
When an individual enters the population, he or she immediately forms partnerships with a random number of partners. This random number is independent for different individuals and binomially distributed with parameters n and $p_{in}$, where n is a positive integer representing the maximal number of partners an individual can have at any given time (the partnership capacity) and $p_{in}$ is a constant between 0 and 1, to be specified later. So, the probability that an entering individual has $\ell$ partners is $\binom{n}{\ell}(p_{in})^{\ell}(1-p_{in})^{n-\ell}$. The probability that the incoming individual forms a partnership with an individual that already has k partners at that moment is proportional to n − k. A given individual with $\ell$ partners acquires new partners among the individuals already in the population at rate $(n-\ell)\rho F(t)$, where F(t) is the fraction of binding sites free at time t. Again, the probability that a partnership is formed with an individual that at that moment already has k partners is proportional to n − k. Note that we can interpret this construction as follows: a given individual with $\ell$ partners and a given individual with k partners form a partnership at rate $(n-\ell)(n-k)\rho/(nN)$. Partnerships have independent exponential durations with expectation 1/σ, i.e. partners separate at rate σ per partnership (if the partnership has not already ended by death of one of the partners). If an individual leaves the active population, then all of its partnerships break. From a modelling perspective, individuals can be seen as collections of n "binding sites", where binding sites can either be free or occupied (by a partner). As long as individuals are alive, their binding sites behave independently where partnership formation and separation are concerned. Let F(t) be the fraction of binding sites in the population which are free at time t. We want this fraction to converge (with high probability) to a constant F, which we use in the formulation of the branching process. Observe that, because the number of partnerships of an individual just after entering the population is binomially distributed with parameters n and $p_{in}$, we can consider the binding sites of such an individual to be independent and free with probability $1-p_{in}$. We choose $p_{in}$ such that F is equal to $1-p_{in}$, because that is a necessary condition for the distribution of the number of partners of an individual to be stationary. Note that if a binding site is occupied, it becomes empty at rate σ + μ, where the σ term is caused by separation and the μ term by death of the partner. A binding site that is already in the population acquires new partners already present in the population at rate ρF(t). The rate at which occupied binding sites enter the population is $\mu N n p_{in}$, and the number of free binding sites in the population is $F(t)N^*(t)n$. Therefore, per free binding site, the rate of acquiring newly arrived partners is $\mu N n p_{in}/(F(t)N^*(t)n)$, so an empty binding site acquires a new partner at rate
$$\rho F(t) + \frac{\mu N n p_{in}}{n N^*(t) F(t)}.$$
If F(t) indeed converges to $F = 1 - p_{in}$ (and using the fact that $N^*(t)/N$ converges in probability to 1 as N → ∞), then the rate of acquiring a new partner at a free binding site is well approximated by
$$\rho F + \frac{\mu(1-F)}{F}.$$
Putting the above together with the theory of Markov on-off processes (Resnick 2013, p. 405), the long-run fraction of binding sites which are free is given by
$$\frac{\sigma+\mu}{\sigma+\mu+\rho F+\mu(1-F)/F}.$$
This fraction should be equal to F. As a result,
$$\rho F^2 + \sigma F - \sigma = 0, \qquad (1)$$
so that
$$F = \frac{-\sigma + \sqrt{\sigma^2 + 4\rho\sigma}}{2\rho}. \qquad (2)$$
(Clearly the other solution of (1) is negative.)
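The constant F is easy to evaluate numerically. The short Python sketch below (the function name and parameter values are ours) solves the quadratic (1) for its positive root (2).

```python
import math

def free_fraction(rho: float, sigma: float) -> float:
    """Positive root of rho*F^2 + sigma*F - sigma = 0  [Eqs. (1)-(2)]."""
    return (-sigma + math.sqrt(sigma**2 + 4.0 * rho * sigma)) / (2.0 * rho)

rho, sigma = 0.5, 0.2            # illustrative values
F = free_fraction(rho, sigma)    # ~0.46: about 46% of binding sites are free
p_in = 1 - F                     # entering binding sites are occupied w.p. p_in
print(F, p_in)
```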
The parameters of our model are summarized in Table 1.
Because the probability that a binding site is free is F(t), and whether or not a binding site is free is independent of other binding sites, the number of partners of a living individual is binomially distributed with parameters n and 1 − F(t), i.e. the number of partners of a living individual is k with probability $\binom{n}{k}(1-F(t))^k(F(t))^{n-k}$ (for k = 0, 1, . . . , n). Furthermore, for a given individual, assuming that the individual does not die, the number of partners increases from k to k + 1 at rate $(n-k)\big(\rho F(t) + \mu(1-F(t))/F(t)\big)$ and decreases from k to k − 1 at rate $k(\sigma+\mu)$. In the following lemma (the proof is presented in "Appendix A") we show that F(t) indeed converges (in a suitable sense) to F as time t and the population size parameter N tend to infinity.
Lemma 1 As N → ∞, the fraction of free binding sites F(t) satisfies (on every bounded interval, with probability tending to 1) the differential equation
$$F'(t) = \sigma\big(1-F(t)\big) - \rho F(t)^2 + 2\mu\big(1 - p_{in} - F(t)\big). \qquad (3)$$
It is not hard to see that the asymptotically stable equilibrium solution of the differential equation (3) is F [see Eq. (2)], and after some trivial computation we have that F(t) → F for t → ∞.

In the following analysis we assume that the population has already reached equilibrium, so that F(t) can be replaced by the constant F. Next, we consider an SI epidemic spreading on the dynamic network described above. In this SI model, pairs of individuals make contacts according to independent Poisson processes with per-partnership intensity β, as long as the pair is in a partnership. If a susceptible individual contacts an infectious one, it becomes infectious immediately and stays so until it leaves the population. We assume that the infection is introduced into the population by a single infectious individual, at a moment when the distribution of the configuration of the network is stationary. All other individuals are at that moment susceptible. With some abuse of terminology, we say that a binding site is susceptible (respectively infectious) if the partner it is connected to (if any) is susceptible (respectively infectious). In the next section we approximate the spread of an SI epidemic on the dynamic network by a branching process. For this approximation we need some further assumptions and notation. In this branching process approach, we keep track of properties of the infectious individuals and their binding sites. We implicitly assume that the number of susceptible individuals that are not connected to infectious individuals is very large and that their properties, such as the distribution of the number of other susceptible partners, do not change as long as the branching process approximation is valid, i.e. we study the initial phase of the epidemic.
The possible states of a binding site of an infectious individual are: free (denoted by φ), occupied by a susceptible individual (denoted by −), or occupied by an infectious individual (denoted by +). The binding sites of an individual move among the possible states according to a Markov process. The disease is transmitted from an infectious partner to a susceptible partner at rate β; such a transmission causes a transition of the state of the binding site from − to +. Other possible transitions are from − or + to φ, which both happen at rate σ + μ (end of partnership or death of partner), and from φ to − at rate $\rho F + \mu(1-F)/F$ (formation of a new partnership, the new partner being susceptible with high probability). Finally, the dynamics of this particular Markov process stops at the death of the infectious individual under consideration, which happens at rate μ. The states and the transitions of this Markov process are shown schematically in Fig. 1.
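The following Python sketch (ours) simulates the embedded jump chain of this binding-site process until the death of the focal individual and counts the − to + passages; only the jump probabilities matter for this count, so holding times are not sampled. It assumes the partner-acquisition rate ρF + μ(1−F)/F derived above.

```python
import random

def simulate_binding_site(rho, sigma, mu, beta, F, start="phi", rng=random):
    """Simulate one binding site of an infectious individual until that
    individual dies; return the number of - to + passages (transmissions)."""
    r = rho * F + mu * (1 - F) / F   # phi -> - rate (new, susceptible partner)
    state, children = start, 0
    while state != "dead":
        if state == "phi":
            rates = {"-": r, "dead": mu}
        elif state == "-":
            rates = {"+": beta, "phi": sigma + mu, "dead": mu}
        else:  # state == "+"
            rates = {"phi": sigma + mu, "dead": mu}
        u, acc = rng.random() * sum(rates.values()), 0.0
        for nxt, rate in rates.items():
            acc += rate
            if u < acc:
                if state == "-" and nxt == "+":
                    children += 1        # a "child" is born
                state = nxt
                break
    return children

# Monte Carlo estimate of the mean number of children per binding site
# started free (illustrative parameter values):
runs = 100_000
est = sum(simulate_binding_site(0.5, 0.2, 1 / 30, 0.3, 0.463)
          for _ in range(runs)) / runs
print(est)
```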
A first naive approach
In this subsection, we study the spread of an STI (or other infectious disease) on the partnership network at the beginning of an epidemic by employing an appealing but wrong branching process approximation. In this approximating branching process, the dependencies which are present in the spread of an epidemic on a network are ignored. We still present this approximation because it is so appealing. Furthermore, the dependencies that are ignored in the branching process approximation are also ignored in deterministic descriptions of the epidemic (see Leung et al. 2015; Leung and Diekmann 2017). We highlight the dependencies that are present and show how ignoring them leads to wrong predictions of the probability of a large outbreak.
Here, we assume that every individual has n binding sites (i.e. an individual has at most n partners at the same time). In the present approach, the dynamic network model can be seen as a discrete-space, continuous-time Markov chain, so we can describe the dynamics of the process in terms of rates (depending on the current state of the population), where the times between events are exponentially distributed.
Recall that an infectious individual can produce new infectious binding sites through contacts at his or her susceptible binding sites. The number of new infections caused by one infectious individual is the same as the sum of the number of times there is a transition from the − state to the + state, where the sum is taken over all binding sites of the infectious individual. Thus, in our bookkeeping a child is born (i.e. an infectious binding site is created) whenever there is a passage from the − state to the + state. In the terminology of Galton-Watson branching processes (Jagers 1975), infectious binding sites generated by an infectious individual are considered as his or her offspring. However, we stress again, as we show later, that this branching process does not approximate the epidemic process in the sense of being asymptotically exact.
In "Appendix B" we derive an offspring distribution for the branching process, which then leads to an expression for the offspring mean and the probability of extinction of the branching process. Which, if the branching process approximation is asymptotically exact, correspond respectively to the basic reproduction number R 0 and the probability of a minor outbreak. Let R 0 be the offspring mean of the branching process, we deduce that (4) If n = 1, the probability of extinction of the branching process can be easily deduced as well (see "Appendix B"). Indeed, the probability of extinction of the branching process, when n = 1 and the ancestor is one infectious individual without a partner, is given by Note that in computing R 0 we do not need independence of the number of children at different binding sites (which indeed are not independent). Therefore, this offspring mean is a good approximation for the expected number of new infections caused by an infected individual during the early stages of the epidemic, in a mostly susceptible population. However, we do not know whether there is a branching process approximation of the spread of the epidemic which is asymptotically exact and has the same offspring mean. So we do not know whether R 0 = 1 is a threshold for a large outbreak of an epidemic which starts with only a few infectious individuals. The number of partners in our model can have a great effect on R 0 . To see this effect, we assume that the average number of partners of an individual is a constant C i.e. n(1 − F) = C. Using this in (4) and treating n as a positive continuous variable, straightforward computation gives Since C is always less than n, the derivative ∂ R 0 ∂n > 0, i.e. the basic reproduction number R 0 increases as a function of the number of partners.
The independence of the numbers of children of different individuals can be viewed as the defining property of branching processes, and we have already emphasized that although the stochastic process leading to (4) and (5) is a branching process, it does not approximate the epidemic process well, since the epidemic process violates the required independence of reproducing individuals, even in the simplest case n = 1. Indeed, information about the state of the partners of one individual provides some information about the state of the partners of other individuals. To understand this, consider what happens if an individual in state + dies: we know with certainty that his or her partner gets a free binding site. If instead two infected partners separate, then we know for sure that both infected individuals that were in the partnership get a free binding site at the same time. We further clarify the dependencies that violate the independence of reproducing individuals for n = 1 through the following example. In this example we use the following probabilities.
$\pi_\phi$: the probability that a φ binding site becomes + before it disappears, i.e. before the individual under consideration dies.
$\pi_-$: the probability that a − binding site becomes + before it disappears.
$\pi_+$: the probability that a + binding site becomes + again, after having been − or φ, before it disappears.
It is straightforward to deduce (see "Appendix B") that, with $r = \rho F + \mu(1-F)/F$ denoting the rate at which a free binding site of a living infectious individual becomes occupied,
$$\pi_- = \frac{\beta}{\beta+\sigma+2\mu}\Big(1 - \frac{r}{r+\mu}\cdot\frac{\sigma+\mu}{\beta+\sigma+2\mu}\Big)^{-1}, \qquad \pi_\phi = \frac{r}{r+\mu}\,\pi_-, \qquad \pi_+ = \frac{\sigma+\mu}{\sigma+2\mu}\,\pi_\phi.$$
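Numerically, the π probabilities and the resulting offspring mean are straightforward to evaluate. The sketch below encodes our reconstruction of the first-step equations of "Appendix B" and of Eq. (4); the displayed formulas were partly lost in extraction, so the function bodies should be read as our reading of the text, not as the authors' verbatim expressions.

```python
import math

def free_fraction(rho, sigma):
    """Positive root of rho*F^2 + sigma*F - sigma = 0 [Eqs. (1)-(2)]."""
    return (-sigma + math.sqrt(sigma**2 + 4 * rho * sigma)) / (2 * rho)

def pi_probs(rho, sigma, mu, beta, F):
    """First-step probabilities (pi_phi, pi_minus, pi_plus) for the
    binding-site chain of Fig. 1 (our reconstruction of Eq. (32))."""
    r = rho * F + mu * (1 - F) / F
    a = r / (r + mu)                             # phi reaches - before death
    b = beta / (beta + sigma + 2 * mu)           # - reaches + first
    c = (sigma + mu) / (beta + sigma + 2 * mu)   # - returns to phi first
    pi_minus = b / (1 - a * c)
    pi_phi = a * pi_minus
    pi_plus = (sigma + mu) / (sigma + 2 * mu) * pi_phi
    return pi_phi, pi_minus, pi_plus

def R0_naive(n, rho, sigma, mu, beta, F):
    """Our reconstruction of Eq. (4): the + site of a newly infected
    individual contributes pi_plus/(1-pi_plus) expected transmissions;
    each of the other n-1 sites starts occupied (-) w.p. 1-F or free (phi)
    w.p. F, and contributes pi_state/(1-pi_plus)."""
    pi_phi, pi_minus, pi_plus = pi_probs(rho, sigma, mu, beta, F)
    per_site = (1 - F) * pi_minus + F * pi_phi
    return (pi_plus + (n - 1) * per_site) / (1 - pi_plus)

rho, sigma, mu, beta = 0.5, 0.2, 1 / 30, 0.3     # illustrative values
F = free_fraction(rho, sigma)
print(pi_probs(rho, sigma, mu, beta, F))         # ~(0.74, 0.84, 0.65)
print(R0_naive(1, rho, sigma, mu, beta, F))      # ~1.87
```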
Example 1 Consider the case where an infector has exactly one "child", the infectee, and consider what happens from the moment of the first infection on. Then
$$\begin{aligned} P(\text{infectee has 0 children} \mid \text{infector has 1 child}) ={}& (1-\pi_\phi)\,P(\text{first event after infection is separation} \mid \text{infector has 1 child})\\ &+ (1-\pi_\phi)\,P(\text{first event after infection is death of infector} \mid \text{infector has 1 child})\\ &+ P(\text{first event after infection is death of infectee} \mid \text{infector has 1 child}). \end{aligned}$$
(After a separation or the death of the infector, the infectee reproduces through a free binding site and thus has no children with probability $1-\pi_\phi$; if the infectee dies first, it certainly has no children.) This shows that there is dependence between the offspring of the two individuals, even in the simplest case n = 1.
When n > 1, the dependencies become even clearer, since when an individual dies, all of its partners obtain a free binding site at the same time. Furthermore, whether or not a partner of a partner of an infectious individual is infectious depends on how long the individual under consideration has been infectious itself, which creates dependencies between individuals that are not even partners of each other. This observation also means that considering the spread of the epidemic at the level of binding sites (see e.g. Leung and Diekmann 2017) or typing individuals by their own infection status and the number of susceptible and infectious partners they have (cf. Ball and House 2017) is not enough to obtain an asymptotically exact (multi-type) branching process approximation.
Asymptotically exact branching process approximation
As stated earlier, we cannot expect the branching process defined above to approximate the epidemic well. Still, this branching process was used to compute the extinction probability in Eq. (5), so this probability is not necessarily the extinction probability of the epidemic process. That motivates us to define a branching process which correctly approximates the epidemic process, so that we can obtain the true extinction probability for the model when n = 1. Unfortunately, we do not know how to extend this approach to n > 1.
For this branching process, we base our bookkeeping on the empty binding sites. Assume for the moment that we start the epidemic with one infectious individual whose binding site is in state φ. Now the individual can either die (in which case no new empty binding sites are created), which occurs at rate μ, or form a partnership with a susceptible individual (recall that we are in the early stages of an epidemic), which occurs at rate $\rho F + \mu(1-F)/F$. In the case of a partnership between an infectious individual and a susceptible individual, four things can happen: (i) a separation, in which case there is one infectious individual with an empty binding site; this occurs at rate σ; (ii) the susceptible individual dies, in which case there is also one infectious individual with an empty binding site; this occurs at rate μ; (iii) the infectious individual dies, in which case there is no infectious individual with an empty binding site; this occurs at rate μ; or (iv) the infectious individual infects the susceptible one (rate β), in which case there is a partnership between two infectious individuals. In creating the branching process approximation below, we consider the resulting infectious individual with an empty binding site in case (i) and case (ii) as a new individual.
In the case of a partnership between two infectious individuals, two things can happen: (i) a separation, in which case there are two infectious individuals, each with an empty binding site; this occurs at rate σ; (ii) one of the individuals dies (rate 2μ), in which case there is one infectious individual with an empty binding site. Again, in creating the branching process approximation below, we consider the resulting infectious individuals with an empty binding site as new individuals. The possible transitions and their rates are schematically depicted in Fig. 2.
Observe that an empty binding site can generate, after possibly going through some stages in which the binding site is occupied, zero, one or two "new" empty binding sites. Here one of the "new" binding sites might actually be the old binding site, which for modelling purposes is considered to be new. To clarify the idea behind our branching process approximation, consider as an example a separation of two infectious individuals. This can be seen as the death of an originally free binding site, which paired with and then infected a free susceptible binding site, leading to the birth of two free binding sites. The "newborn" free binding sites are independent copies of the initial free binding site, which is why this description leads to a branching process approximation of the epidemic spread which is asymptotically exact.
So, each free binding site generates a random number Y, Y ∈ {0, 1, 2}, of free binding sites in the next generation, independently of other free binding sites, and the distribution of Y is determined by the transition scheme just described. This simple interpretation of the branching process is no longer valid if the number of binding sites of an individual exceeds 1, because the death of an individual may cause several pairs of infectious individuals to break at the same moment and in that way cause dependencies, which violate the defining properties of branching processes. For the branching process with offspring distribution given through the random variable Y, we can compute the offspring mean (which corresponds to the expected total number of new free binding sites generated by one free binding site). We denote this offspring mean by $\hat R_0$. Note that $\hat R_0$ is not a basic reproduction number in the biological sense of the word, but, as written above, it is a threshold parameter. For the branching process with offspring distribution Y, we can also calculate the probability of extinction, which we denote by $\hat q_\phi$; this probability is the minimal solution of $s = E(s^Y)$ (see Jagers 1975). Using Eq. (4), and noting that for n = 1 Eq. (4) gives $R_0 = \pi_+/(1-\pi_+)$, we obtain an expression for $\hat q_\phi$ in terms of $R_0$; see (9). The $R_0$ in this expression is the basic reproduction number obtained through the naive branching process approximation and does not correspond to the offspring mean of the branching process used to derive $\hat q_\phi$; still, it is useful to use this $R_0$ to simplify the expression for $\hat q_\phi$. By writing $\hat q_\phi$ as a function of $R_0$ instead of as a function of β, the explicit dependence of $\hat q_\phi$ on F disappears and we have freedom to choose F. However, the denominator in (9) has to be positive and therefore we cannot always choose F arbitrarily close to 1. Note that the probabilities of extinction for the two approximating branching processes are not the same; in fact, they differ whenever $R_0 > 1$. As written earlier, the reason for this is that the first branching process approximation is not a good approximation of the epidemic process, because of the dependence between "siblings" and between "parents and their children". In Fig. 3 we compare the two extinction probabilities $q_\phi$ and $\hat q_\phi$ as functions of σ, where $\sigma \ge \mu(R_0-1)$, while keeping $R_0 = 3$ and $\mu = 1/30$ fixed. We observe that for the given parameter values, and for σ only slightly above $\mu(R_0-1)$, the difference between the two extinction probabilities is considerable. Note that if $R_0$ is given and σ is only slightly larger than $\mu(R_0-1)$, then F is necessarily close to 0 in order for the denominator in (9) to be positive, and β is necessarily large.
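For n = 1 the distribution of Y and the two extinction probabilities can be evaluated explicitly. The sketch below is our reconstruction from the transition scheme of Fig. 2 (the displayed probabilities were lost in extraction); it reuses free_fraction and pi_probs from the earlier sketches, and the parameter values are arbitrary.

```python
def y_distribution(rho, sigma, mu, beta, F):
    """Offspring law of a free binding site for n = 1 (our reconstruction
    from the transition scheme of Fig. 2)."""
    r = rho * F + mu * (1 - F) / F            # partner-acquisition rate
    form = r / (r + mu)                       # partnership before own death
    infect = beta / (beta + sigma + 2 * mu)   # transmission wins in S-I pair
    sep2 = sigma / (sigma + 2 * mu)           # separation wins in I-I pair
    p0 = (1 - form) + form * mu / (beta + sigma + 2 * mu)
    p2 = form * infect * sep2
    return p0, 1 - p0 - p2, p2

rho, sigma, mu, beta = 0.5, 0.2, 1 / 30, 0.3
F = free_fraction(rho, sigma)
p0, p1, p2 = y_distribution(rho, sigma, mu, beta, F)
hat_R0 = p1 + 2 * p2                 # offspring mean, the threshold parameter
# s = p0 + p1*s + p2*s^2 has roots 1 and p0/p2; take the smallest in [0, 1].
q_hat = min(1.0, p0 / p2)
pi_phi, _, pi_plus = pi_probs(rho, sigma, mu, beta, F)
R0 = pi_plus / (1 - pi_plus)         # naive R_0 for n = 1
q_naive = 1 - pi_phi + pi_phi / R0**2 if R0 > 1 else 1.0
print(hat_R0, q_hat, q_naive)        # ~1.19, ~0.46, ~0.47
```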
Remark Of particular interest is the critical infection rate, denoted by $\beta_c$, for which $\hat R_0 = 1$, i.e. the minimal β which is necessary for an epidemic to be possible. We want to know whether $R_0$ is also equal to 1 for this value of β. It can easily be checked that for n = 1 we indeed have both $R_0 = 1$ and $\hat R_0 = 1$ at $\beta = \beta_c$. This observation, in addition to the deterministic-model reproduction number interpretation of $R_0$ given by Leung and co-authors in e.g. Leung et al. (2015), makes us believe that also for n > 1 a single infected individual can cause a major outbreak with positive probability if and only if $R_0 > 1$, but we did not find a proof of this.
Model without maximum partnership capacity
Fig. 3 The two extinction probabilities $q_\phi$ and $\hat q_\phi$ for the two branching process approximations of the epidemic process. The solid line is obtained using the naive branching process approximation of a minor outbreak ($q_\phi$), while the dashed line gives the correct probability of a minor outbreak ($\hat q_\phi$). The plots are for $\sigma \ge \mu(R_0-1)$, where $R_0 = 3$ and $\mu = 1/30$.

To circumvent the difficulty of dependencies that arises in the branching process approximation for the epidemic process with n > 1 in the previous section, we consider the model with n = ∞, i.e. the model in which there is no maximal number of partners of an individual. In this model, it is again assumed that new individuals enter the population with a random number of partners, where those random numbers are assumed to be independent and identically distributed and chosen in such a way that the number of partners of an individual is stationary during the whole "lifetime". In order to avoid a situation where individuals accumulate new partners at infinite speed in the n = ∞ model, we set, for n < ∞, the rate at which an individual enters a new partnership per free binding site to $\rho = \hat\rho/n$, where $\hat\rho$ is a constant. Note that for n < ∞, an individual with k partners enters a new partnership at rate $(n-k)\rho F$, which would go to infinity if ρ > 0 were fixed and n → ∞. Note also that for $\rho = \hat\rho/n$ and n → ∞, every individual forms new partnerships with individuals already in the population at total rate $\hat\rho$. This rate is independent of the number of partners the individual (or his or her new partner) already has, which removes the source of dependence between individuals present in the model with a bounded number of partners per individual.
We assume that an individual enters the population with an expected number $\mu_{in}$ of partners, and consider the stationary distribution of an individual's number of partners. Individuals acquire new partners at rate $\hat\rho + \mu\mu_{in}$ (note that this is independent of the number of partners the individual already has; the $\mu\mu_{in}$ term accounts for newly arriving individuals partnering with the given individual) and lose each partner at rate σ + μ. If the stationary number of partners is distributed as D, then for k = 1, 2, . . . , the probabilities $d_k = P(D = k)$ need to satisfy the balance equation
$$(\hat\rho + \mu\mu_{in})\,d_{k-1} = k(\sigma+\mu)\,d_k. \qquad (10)$$
Noting that $\sum_{k=0}^\infty d_k = 1$, it follows from (10) that D is Poisson distributed with expectation $(\hat\rho+\mu\mu_{in})/(\sigma+\mu)$. In order to let the degree of entering individuals be stationary from the start, we want $\mu_{in}$ to satisfy
$$\mu_{in} = \frac{\hat\rho + \mu\mu_{in}}{\sigma+\mu},$$
which implies $\mu_{in} = \hat\rho/\sigma$. So, D is Poisson distributed with expectation $\hat\rho/\sigma$. Furthermore, newly arriving individuals also have this degree distribution.
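The balance equation can be checked numerically: solving the birth-death recursion (our reconstruction of (10)) and normalizing reproduces the Poisson($\hat\rho/\sigma$) distribution. A minimal sketch, with arbitrary parameter values:

```python
import math

def stationary_degree(rho_hat, sigma, mu, kmax=60):
    """Solve (rho_hat + mu*mu_in) d_{k-1} = k (sigma + mu) d_k with
    mu_in = rho_hat/sigma, then normalize (truncated at kmax)."""
    gain = rho_hat + mu * rho_hat / sigma
    loss = sigma + mu
    d = [1.0]
    for k in range(1, kmax + 1):
        d.append(d[-1] * gain / (k * loss))
    total = sum(d)
    return [x / total for x in d]

rho_hat, sigma, mu = 0.4, 0.2, 1 / 30
d = stationary_degree(rho_hat, sigma, mu)
lam = rho_hat / sigma
poisson = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(len(d))]
print(max(abs(a - b) for a, b in zip(d, poisson)))  # ~0 up to truncation error
```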
Threshold parameter
Having determined the degree distribution, we can now compute the expected number of partners infected by one infectious individual. We denote this expected number by $\bar R_0$; it corresponds to $R_0$ as defined for the finite-n case. First we compute the probability that the infectious individual (say $v_1$) infects a given other individual, say $v_2$, who was already a partner of $v_1$ at the moment $v_1$ got infected. This probability of infecting a susceptible partner is given by
$$\int_0^\infty \mu e^{-\mu t}\int_0^t \beta e^{-(\beta+\sigma+\mu)u}\,du\,dt = \frac{\beta}{\beta+\sigma+2\mu}. \qquad (12)$$
Here, 0 can be seen as the time when $v_1$ got infected, t is the time when $v_1$ dies and u is the time when $v_1$ infects $v_2$. Since the expected number of susceptible partners of an individual at the time of infection is $\hat\rho/\sigma$, the expected number of partners $v_1$ infects, among those individuals who were already partners at the time $v_1$ was infected, is
$$\frac{\hat\rho}{\sigma}\cdot\frac{\beta}{\beta+\sigma+2\mu}. \qquad (13)$$
Similarly, we can compute the probability that an individual $v_1$, who dies at time t after being infected, infects a partner $v_2$ whom it first contacts at time s after $v_1$ got infected (s < t). This probability is given by
$$\int_s^t \beta e^{-(\beta+\sigma+\mu)(u-s)}\,du = \frac{\beta}{\beta+\sigma+\mu}\Big(1-e^{-(\beta+\sigma+\mu)(t-s)}\Big).$$
So, using the fact that the total rate at which an individual acquires new partners is $\hat\rho + \mu\hat\rho/\sigma = \hat\rho(\sigma+\mu)/\sigma$, the expected number of individuals $v_1$ infects, among those individuals who were not yet partners of $v_1$ at the time $v_1$ got infected, is given by
$$\frac{\hat\rho(\sigma+\mu)}{\sigma}\cdot\frac{\beta}{\mu(\beta+\sigma+2\mu)}. \qquad (14)$$
Combining the two observations (13) and (14), we arrive at the following expression for the basic reproduction number:
$$\bar R_0 = \frac{\hat\rho\,\beta}{\sigma(\beta+\sigma+2\mu)} + \frac{\sigma+\mu}{\sigma}\cdot\frac{\hat\rho\,\beta}{\mu(\beta+\sigma+2\mu)} = \frac{\hat\rho\,\beta(\sigma+2\mu)}{\mu\sigma(\beta+\sigma+2\mu)}. \qquad (15)$$
Remark Altmann (1995) considers a model very similar to ours, but not exactly the same: he considers an SIR epidemic in a population in which individuals do not die but recover (and acquire eternal immunity) and no new individuals enter the population. It is easy to check that $\bar R_0$ in (15) is in agreement with equation (1) of Altmann (1995) after setting the death rate of partners to 0, which amounts to replacing $\beta+\sigma+2\mu$ by $\beta+\sigma+\mu$ in the denominators of both terms in the middle expression of (15) and dropping the factor $(\sigma+\mu)/\sigma$ in the second term of the middle expression of (15).
Outbreak probability
In order to find the probability of a minor outbreak, we need the distribution of the number of new infectious binding sites generated by each infected individual; this defines the offspring distribution of our branching process. Assume that individual $v_1$ is infectious for t time units. We have already computed the probability of infecting a given other individual who was already a partner of $v_1$ at the moment $v_1$ got infected [see (12)]. Conditioned on t, this probability is
$$p(t) = \frac{\beta}{\alpha}\big(1-e^{-\alpha t}\big), \qquad (16)$$
where $\alpha = \beta+\mu+\sigma$. Furthermore, conditioned on t, whether a given partner of $v_1$ at the time $v_1$ got infected (say time 0) will itself be infected by $v_1$ is independent of which other individuals $v_1$ infects. Since the number of partners at time 0 is Poisson distributed with mean $\hat\rho/\sigma$, the probability generating function of $Z_1(t)$, the number of partners of $v_1$ at time 0 who are ultimately infected by $v_1$, is, for s ∈ [0, 1],
$$E\big(s^{Z_1(t)}\big) = \exp\Big(\frac{\hat\rho}{\sigma}\,p(t)\,(s-1)\Big).$$
Still assuming that $v_1$ lives until time t since infection, $v_1$ can also infect individuals that were not yet partners of $v_1$ at time 0. As described above, an individual acquires new partners according to a homogeneous Poisson process with intensity $\hat\rho(\sigma+\mu)/\sigma$; up to time t, the number of acquired partners is therefore Poisson distributed with expectation $\hat\rho(\sigma+\mu)t/\sigma$. If we condition on $v_1$ acquiring m partners in the time interval (0, t), then, by standard properties of the Poisson process (Resnick 2013, Section 4.5), those m time points are distributed as m independent uniformly distributed random variables on (0, t). Let $Z_2(t, m)$ be the random number of individuals $v_1$ infects that were not partners yet at time 0, conditioned on $v_1$ dying at time t and acquiring m partners in (0, t). This argument shows that
$$E\big(s^{Z_2(t,m)}\big) = \Big(\frac{1}{t}\int_0^t \big(1 + (s-1)\,p(t-u)\big)\,du\Big)^m. \qquad (17)$$
Further, let $Z_2(t)$ be the random number of individuals $v_1$ infects which were not partners yet at time 0, conditioned on $v_1$ dying at time t but not on the number of partners acquired in (0, t). We obtain:
$$E\big(s^{Z_2(t)}\big) = \exp\Big(\frac{\hat\rho(\sigma+\mu)}{\sigma}\,(s-1)\int_0^t p(t-u)\,du\Big). \qquad (18)$$
Note that, because we assume n = ∞, conditioned on t, $Z_1(t)$ and $Z_2(t)$ are independent of each other, which implies that $E(s^{Z_1(t)+Z_2(t)}) = E(s^{Z_1(t)})\,E(s^{Z_2(t)})$. Let Z be the random variable describing the total number of individuals infected by $v_1$. By integrating over time, we obtain from (16) and (18) that
$$E\big(s^{Z}\big) = \int_0^\infty \mu e^{-\mu t}\,E\big(s^{Z_1(t)}\big)\,E\big(s^{Z_2(t)}\big)\,dt, \qquad (19)$$
where $E(s^Z)$ is the probability generating function of Z. Equation (19) can be simplified further, as follows.
To simplify notation, we recall that $\alpha = \beta+\mu+\sigma$ and abbreviate by c the constant prefactor (a combination of $\hat\rho$, β, σ and μ) that appears after carrying out the inner integrals. After rearranging the terms in (20), a little algebra yields an expression for the probability generating function of the number of offspring generated by an infectious individual, involving an infinite series with infinite radius of convergence. Note that the probability P(Z = k) for a specific k can be determined from the probability generating function through
$$P(Z=k) = \frac{1}{k!}\,\frac{d^k}{ds^k}\,E\big(s^{Z}\big)\Big|_{s=0},$$
but explicit expressions for these probabilities are long and hardly insightful. Furthermore, we can find the probability of extinction of the branching process as the smallest positive root of $s = E(s^Z)$. Again, there is no nice closed-form expression for this root, but it can be approximated numerically.
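Such a numerical approximation is straightforward: evaluate the pgf (19) by quadrature and iterate $s \mapsto E(s^Z)$ from s = 0, which converges monotonically to the smallest root. The sketch below is ours and relies on our reconstruction of (16)-(19); the parameter values are arbitrary.

```python
import math
from scipy.integrate import quad

def pgf_Z(s, rho_hat, sigma, mu, beta):
    """E(s^Z) for n = infinity, assembled from our reading of (16)-(19)."""
    alpha = beta + mu + sigma
    lam0 = rho_hat / sigma                 # mean number of partners at infection
    acq = rho_hat * (sigma + mu) / sigma   # partner-acquisition intensity
    def integrand(t):
        p_t = (beta / alpha) * (1 - math.exp(-alpha * t))
        # integral_0^t p(t-u) du = (beta/alpha) * (t - (1 - e^{-alpha t})/alpha)
        int_p = (beta / alpha) * (t - (1 - math.exp(-alpha * t)) / alpha)
        return mu * math.exp(-mu * t) * math.exp((s - 1) * (lam0 * p_t + acq * int_p))
    value, _ = quad(integrand, 0, math.inf)
    return value

def extinction_probability(rho_hat, sigma, mu, beta, tol=1e-10):
    """Smallest root of s = E(s^Z), by monotone fixed-point iteration from 0."""
    s = 0.0
    while True:
        s_next = pgf_Z(s, rho_hat, sigma, mu, beta)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next

print(extinction_probability(0.4, 0.2, 1 / 30, 0.3))
```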
Comparison of $\bar R_0$ and $R_0$
Finally, we compare $\bar R_0$ in the model with infinite partnership capacity to the basic reproduction number $R_0$ of Sect. 3.1 in the limit of large partnership capacity. In the limit n → ∞ (with $\rho = \hat\rho/n$), the asymptotic fraction of free binding sites described in Eq. (2) tends to 1. Using this in (4), a little algebra confirms that $R_0 \to \bar R_0$ as n → ∞, which agrees with Eq. (15).
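This limit can be checked numerically, reusing free_fraction, pi_probs and R0_naive from the earlier sketches together with our reconstruction of (15); parameter values are arbitrary.

```python
def R0_bar(rho_hat, sigma, mu, beta):
    """Our reconstruction of Eq. (15)."""
    denom = beta + sigma + 2 * mu
    return (rho_hat * beta / (sigma * denom)
            + (sigma + mu) / sigma * rho_hat * beta / (mu * denom))

rho_hat, sigma, mu, beta = 0.4, 0.2, 1 / 30, 0.3
for n in (1, 2, 5, 10, 100, 1000):
    rho = rho_hat / n                        # the scaling rho = rho_hat / n
    F = free_fraction(rho, sigma)
    print(n, R0_naive(n, rho, sigma, mu, beta, F))  # increases towards the limit
print("n = infinity:", R0_bar(rho_hat, sigma, mu, beta))
```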
Conclusion
The reproduction number and the probability of extinction are among the most fundamental concepts in the mathematical modelling of the spread of infectious diseases, and they matter to health officials when planning and allocating funds for the control of those diseases. We explored different strategies to derive explicit expressions for these two quantities for an SI epidemic on a dynamic sexual network using branching processes. Although it is difficult to derive analytical expressions for threshold conditions and the probability of extinction for a disease spreading on a dynamic network, the branching process approach provides insights for determining analytical expressions for both the threshold quantity and the probability of extinction. To derive these quantities, we proposed two approaches.
In the first approach, we considered the case in which every individual has n binding sites. This approach suffers from some undesired dependencies; as a result, we ended up with an approximating branching process that is not an asymptotically exact approximation of the original epidemic process. The dependencies were demonstrated in detail, and an example was provided to clarify the dependencies that violate the (for branching processes) crucial independence of reproducing individuals. The obtained insights are a warning about dependencies that are easily overlooked.
Within the simple modelling framework of this first approach, it is only possible to derive the value of the basic reproduction number $R_0$. However, the probability of extinction of this approximating branching process was also computed, in order to compare it with the true probability of extinction for the special case in which an individual has at most one partner at a time. Interestingly, starting from one infectious individual, the derivation of $R_0$ does not rely on independence of the numbers of children at different binding sites. This suggests that the explicitly derived value of $R_0$ is exact. However, $R_0 > 1$ does not guarantee that a major outbreak occurs with positive probability; this is in contrast to classical epidemic models, where a major outbreak has strictly positive probability if and only if $R_0 > 1$.
In the second approach, we analysed a simple version of the model in which every individual can have at most one partner at a time. For this model, we managed to establish an asymptotically exact branching process approximation and derived the offspring distribution of this branching process, which allowed us to easily compute the probability of extinction of the branching process (and thus of the epidemic). The expectation of the offspring distribution is a threshold parameter. Finally, for n = 1, we verified that the epidemic threshold parameters obtained by the two different schemes coincide.
In deriving our models and sticking to branching process approximations as the tool of analysis, we found that dependence has a subtle influence on approximations of the epidemic process by branching processes. This dependence disappears if n = ∞. In that case we can compute the basic reproduction number $\bar R_0$ and the degree distribution of the number of partners of an individual. The probability generating function of the distribution of the number of offspring produced by an infectious individual, which involves a convergent infinite series, is also calculated; this yields an implicit expression for the extinction probability. Moreover, we showed that our computations are consistent in the sense that $R_0 \to \bar R_0$ as n → ∞.
The current study is only a first step in studying the spread of a disease on a dynamic network using a branching process approach. In future work, we hope to further investigate the disease dynamics by dropping the assumption that the number of partners at debut follows the stationary distribution.
Appendix A: Proof of Lemma 1

Here and throughout this appendix we assume that $F_N(0) \to F(0)$ as N → ∞. The proof we provide is inspired by the proof of (Ethier and Kurtz 2009, Thm 11.2.1, p. 456). The key distinction between our proof and the proof presented in Ethier and Kurtz (2009) is that $F_N(t)$ is the number of free binding sites divided by the total number of binding sites at time t, where the latter is proportional to the population size at time t and not to the expected population size N. Because of this minor discrepancy, our model cannot exactly be written in terms of equations (2.1)-(2.3) of (Ethier and Kurtz 2009, p. 455), and all steps of the proof have to be checked for our model.
Let $N^*_N(t)$ be the actual population size at time t, and recall that the stationary distribution of $N^*_N(t)$ is Poisson with expectation N. Throughout, we assume that (23) holds. From (Ethier and Kurtz 2009, Thm 11.2.3) we then deduce the law of large numbers (24) for $N^*_N(t)/N$, valid for all t > 0; note that (24) implies (25). Let $M_N(t)$ be the number of partnerships in the population at time t. Because each partnership involves two individuals, the sum of the numbers of partners over all individuals is $2M_N(t)$, so that $F_N(t) = 1 - 2M_N(t)/(nN^*_N(t))$. There are four events which change the number of partnerships in the population.
(i) A new individual entering the population, which leads to an expected increase of $np_{in}$ in the number of partnerships and occurs at intensity μN. (ii) The death of an individual, which leads to an expected decrease of $2M_N(t)/N^*_N(t)$ in the number of partnerships and occurs at (time-dependent) intensity $\mu N^*_N(t)$. (iii) A separation, which decreases the number of partnerships by 1 and occurs at intensity $\sigma M_N(t)$. (iv) The formation of a new pair, which increases the number of partnerships by 1 and occurs at the intensity given in (27). Denote the times at which one of the above events occurs by $t_1 < t_2 < \cdots$ and set $t_0 = 0$. Let $\iota(t)$ be the number of events up to time t, and define $\lambda(t)$ to be the rate at which the first event after time t occurs; by (24), $\lambda(t)$ is controlled w.h.p. Let $\mathcal F_i$ be the σ-algebra generated by the whole dynamic random graph process up to time $t_i$. Because $J_i = 1$ with probability bounded from below, we obtain with high probability [here we used (27) and (24)] a control on the increments; using (22), this in turn provides us with an upper bound on $\sup_{0\le u\le t}|F_N(u)-F(u)|$ consisting of six terms on the right-hand side, which we now analyse separately.
1. We may bound the first term as follows. For all 0 < u ≤ t, its first part is w.h.p. bounded by $5N^{-1/3}$ [using (25)], while its second part converges to 0 by (24).
2. By Kolmogorov's inequality [see (Durrett 2010, Thm 2.5.2)], the corresponding martingale term converges to 0 in probability.
3. Recall that $\iota(t)$ is the number of events up to time t. Since $N^*_N(u) < 2N$ for all u ∈ [0, t] w.h.p., and by (24) and (27), $\iota(t)$ is w.h.p. bounded above by a Poisson distributed random variable with expectation $nNt(5\mu/n + \sigma + \rho)$. This bound is distributed as the sum of N i.i.d. Poisson random variables with mean $nt(5\mu/n + \sigma + \rho)$, so by the (weak) law of large numbers $\iota(t)/N$ is bounded w.h.p. Combining the above, we obtain that there exists a positive constant $\hat C$ such that $\iota(t) \le \hat C N$ w.h.p. Because $u - t_{\iota(u)}$ is exponentially distributed with rate at least $\mu N/2$ for all u in the interval [0, t] by (29), $\sup_{0\le u\le t}(u - t_{\iota(u)})$ is stochastically bounded above by the maximum of $\iota(t)$ independent exponentially distributed random variables with rate at least $\mu N/2$. Denote this maximum by X. For a function c(N) of N we have $P(X > c(N)) \le \iota(t)\,e^{-\mu N c(N)/2}$, which converges to 0 as $e^{\mu N c(N)/2}/\iota(t) \to \infty$; by (30), this holds w.h.p. for $c(N) = N^{-1/2}$.
4. By (25) and by $t_{\iota(u)} \le t$ for u ∈ [0, t], the fourth term converges to 0 in probability.
5. Again by (25) and by $t_{\iota(u)} \le u \le t$, the same holds for the fifth term.
6. Using (25) and $t_{\iota(u)} \le u \le t$ a third time, we see that the sixth term also vanishes in the limit.
Combining the above inequalities, we obtain a bound of the form $\varepsilon(N)$ plus an integral term, where ε(N) → 0 as N → ∞. It follows now by Gronwall's inequality [see (Ethier and Kurtz 2009, Appendix)] that for all t > 0, $\sup_{0\le u\le t}|F_N(u)-F(u)| \to 0$ in probability as N → ∞. In particular, there exists $s_0$ such that for all ε > 0, all t > 0 and all $s > s_0$, $|F_N(s+t) - F| < \varepsilon$ w.h.p. Here we have used that F(t) → F as t → ∞.
Appendix B: Branching process approximation ignoring dependencies
In this appendix we derive some of the results for the model presented in Sect. 3.1.
Recall that in that subsection we approximate the epidemic by a branching process in which we ignore dependencies between the offspring sizes of different individuals. We do incorporate the dependence between binding sites of the same individual, which occurs through the death of the individual, but that is the only dependence which appears in the branching process. We refer back to Fig. 1 for a flow chart of the Markov process governing the dynamics of the states of a binding site. Recall that
$\pi_\phi$: the probability that a φ binding site becomes + before it disappears, i.e. before the individual under consideration dies;
$\pi_-$: the probability that a − binding site becomes + before it disappears;
$\pi_+$: the probability that a + binding site becomes + again, after having been − or φ, before it disappears.
Using the transition rates represented in Fig. 1, a first-step analysis gives the following set of equations, with $r = \rho F + \mu(1-F)/F$:
$$\pi_\phi = \frac{r}{r+\mu}\,\pi_-, \qquad \pi_- = \frac{\beta}{\beta+\sigma+2\mu} + \frac{\sigma+\mu}{\beta+\sigma+2\mu}\,\pi_\phi, \qquad \pi_+ = \frac{\sigma+\mu}{\sigma+2\mu}\,\pi_\phi. \qquad (32)$$
As stated in Sect. 3.1, the branching process approximation of the epidemic presented in this appendix is not asymptotically exact. In order to show that ignoring dependencies in the epidemic process really has a substantial effect, we compare in Sect. 3.2 the probability of extinction of this branching process with the correct probability of a minor outbreak of the epidemic as N → ∞. In order to compute this correct probability of a minor outbreak using the methods of Sect. 3.2, we have to assume that n = 1. What is left to do is to compute the probability of extinction of the branching process described in Sect. 3.1 for n = 1.
Standard results from the theory of branching processes (Jagers 1975) give that the extinction probability q of a branching process originating from one case, with his or her offspring distributed as $X_+$, is the smallest non-negative fixed point of the offspring generating function $G(s) = \sum_{i=0}^\infty p_i s^i$, where $p_i = P(X_+ = i)$ and 0 ≤ s ≤ 1. Recall that at the moment an individual gets infected, he or she has one infectious partner (namely his or her parent). From (33), we know that an infected individual has $\ell$ ($\ell = 0, 1, 2, \ldots$) children with probability $P(X_+ = \ell) = \pi_+^\ell(1-\pi_+)$. So, the extinction probability, denoted here by $q_+$, is given by the smallest root of
$$q_+ = \sum_{\ell=0}^\infty \pi_+^\ell(1-\pi_+)\,(q_+)^\ell = \frac{1-\pi_+}{1-\pi_+ q_+}.$$
Using this value of β in Eq. (32), we can write $\pi_\phi$ and $\pi_+$ in terms of $R_0$ as follows:
$$\pi_\phi = \frac{(\sigma+2\mu)R_0}{(\sigma+\mu)(R_0+1)} \quad \text{and} \quad \pi_+ = \frac{R_0}{R_0+1}, \qquad (36)$$
where the first equation of (36) implies that we should take $\sigma \ge \mu(R_0-1)$ in order for $\pi_\phi \le 1$. So, if $R_0 = \pi_+/(1-\pi_+) > 1$, then $q_+ = 1/R_0$. If we assume that the branching process starts with an individual with an empty binding site, we can still compute the probability of extinction of the branching process by making use of the following observation: if the initial individual has k children, then the offspring of this initial individual only goes extinct if the offspring of all k children goes extinct. Those k children all correspond to infectious individuals with an infectious binding site at the moment of infection. Recall that $P(X_\phi = 0) = 1-\pi_\phi$ and $P(X_\phi = k) = \pi_\phi\,\pi_+^{k-1}(1-\pi_+)$ for k ≥ 1. So, if we denote the probability that the offspring of an individual with an empty binding site goes extinct by $q_\phi$, then $q_\phi$ satisfies
$$q_\phi = \sum_{k=0}^\infty P(X_\phi = k)\,q_+^k = 1-\pi_\phi + \frac{\pi_\phi(1-\pi_+)\,q_+}{1-\pi_+ q_+} = 1-\pi_\phi + \pi_\phi\,q_+^2 = 1-\pi_\phi + \frac{\pi_\phi}{R_0^2},$$
where we have used (36) and $q_+ = 1/R_0$ in the last equality. It is not hard to see that if $R_0$ is greater than 1, then the extinction probabilities calculated above are less than 1, which is a minimal requirement for consistency.
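As a final numerical consistency check (reusing free_fraction and pi_probs from the earlier sketches; the parameter values are arbitrary):

```python
rho, sigma, mu, beta = 0.5, 0.2, 1 / 30, 0.3
F = free_fraction(rho, sigma)
pi_phi, _, pi_plus = pi_probs(rho, sigma, mu, beta, F)
R0 = pi_plus / (1 - pi_plus)
if R0 > 1:
    q_plus = 1 / R0
    q_phi = 1 - pi_phi + pi_phi * q_plus**2
    assert q_plus < 1 and q_phi < 1   # both extinction probabilities below 1
    print(R0, q_plus, q_phi)          # ~1.87, ~0.54, ~0.47
```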
"Mathematics"
] |
Plant diversity has contrasting effects on herbivore and parasitoid abundance in Centaurea jacea flower heads
Abstract High biodiversity is known to increase many ecosystem functions, but studies investigating biodiversity effects have more rarely looked at multi‐trophic interactions. We studied a tri‐trophic system composed of Centaurea jacea (brown knapweed), its flower head‐infesting tephritid fruit flies and their hymenopteran parasitoids, in a grassland biodiversity experiment. We aimed to disentangle the importance of direct effects of plant diversity (through changes in apparency and resource availability) from indirect effects (mediated by host plant quality and performance). To do this, we compared insect communities in C. jacea transplants, whose growth was influenced by the surrounding plant communities (and where direct and indirect effects can occur), with potted C. jacea plants, which do not compete with the surrounding plant community (and where only direct effects are possible). Tephritid infestation rate and insect load, mainly of the dominant species Chaetorellia jaceae, decreased with increasing plant species and functional group richness. These effects were not seen in the potted plants and are therefore likely to be mediated by changes in host plant performance and quality. Parasitism rates, mainly of the abundant chalcid wasps Eurytoma compressa and Pteromalus albipennis, increased with plant species or functional group richness in both transplants and potted plants, suggesting that direct effects of plant diversity are most important. The differential effects in transplants and potted plants emphasize the importance of plant‐mediated direct and indirect effects for trophic interactions at the community level. The findings also show how plant–plant interactions critically affect results obtained using transplants. More generally, our results indicate that plant biodiversity affects the abundance of higher trophic levels through a variety of different mechanisms.
| INTRODUCTION
Several studies have shown that plant diversity affects the diversity and abundance of other trophic levels (e.g., Scherber et al., 2010).
However, the mechanisms driving these effects remain unclear because impacts of plant diversity on trophic interactions, such as parasitism or predation, are only rarely investigated (Ebeling, Klein, Weisser, & Tscharntke, 2012). Pest control by natural enemies has been studied in agro-ecological research (e.g., Bianchi, Booij, & Tscharntke, 2006; Menalled, Marino, Gage, & Landis, 1999; Thies & Tscharntke, 1999), where higher parasitoid efficiency, that is, higher parasitism rates of insect herbivores, is often found in more structurally complex or species-rich systems (Andow, 1991; Langellotto & Denno, 2004; Price et al., 1980). This is often referred to as the Enemies Hypothesis (Root, 1973). However, most of these studies addressed isolated trophic levels or did not consider effects of plant diversity. The few studies that did examine plant diversity effects on parasitism used plant communities with very few species, that is, a maximum diversity of three species. More recently, multi-trophic interactions have been studied in experimentally manipulated plant communities with longer plant diversity gradients, where plant diversity effects were shown to cascade up the food chain (e.g., Ebeling et al., 2012, 2014; Petermann, Müller, Weigelt, Weisser, & Schmid, 2010). However, these studies were not able to look in detail at the mechanisms driving these cascading diversity effects.
Bottom-up effects of biodiversity on trophic interactions may be caused by both direct effects of the plant community and indirect effects mediated by changes to host plant performance. Plant diversity can directly affect higher trophic levels by increasing the complexity of the olfactory, optical, and structural environment. This may mask the odors or visual cues that herbivores use to find their host plants (Coll & Bottrell, 1994;Finch & Collier, 2000;Randlkofer, Obermaier, Hilker, & Meiners, 2010) making the hosts less apparent to the herbivore. Particular plant species may also cause this effect through associational resistance (Barbosa et al., 2009): for example, similarly colored neighboring plants may attract insects away from the host plant, or volatiles emitted by neighboring plants may repel herbivores or reduce their ability to find the host plant. Diversity effects may be driven by a number of such neighboring species and therefore by a general increase in complexity in species-rich communities. A lack of alternative hosts in the plant community, as may occur in (diverse) communities of taxonomically distant plant species, may also reduce food supply for herbivores and therefore herbivory rates on particular target plants (Jactel & Brockerhoff, 2007;Root, 1973).
As well as reducing herbivore abundance, high plant diversity could also reduce the efficiency of parasitoids in finding their insect hosts by increasing structural complexity and producing odor blends that mask the focal plant (Andow & Prokrym, 1990;Bukovinszky, Gols, Hemerik, van Lenteren, & Vet, 2007;Gols et al., 2005;Randlkofer, Obermaier, & Meiners, 2007). Alternatively, predators and parasitoids could benefit from plant diversity if diverse plant communities provide a greater range of food sources, such as more floral resources (nectar and pollen) for parasitoids (Araj, Wratten, Lister, & Buckley, 2008;Lavandero, Wratten, Didham, & Gurr, 2006) or a greater diversity of prey for generalist predators (Root, 1973). Parasitoids could also operate more efficiently when herbivore abundance is low (Ebeling et al., 2012), as would be expected in diverse plant communities. However, lower herbivore abundance may also decrease parasitism rates (e.g., White & Andow, 2005) if patch tenure time is longer in high-density patches, as predicted by several patch time allocation models (Van Alphen & Bernstein, 2008). Analyses controlling for herbivore abundance are necessary to test these ideas.
In addition to these direct effects of the plant community on higher trophic levels, plant diversity can also indirectly affect herbivore and predator communities. Indirect effects arise when plant diversity influences the growth and nutrient levels of host plants (e.g., Nitschke et al., 2010;Roscher, Kutsch, & Schulze, 2011), which in turn affects insect herbivores (Awmack & Leather, 2002;Mattson, 1980). We consider these effects to be more indirect than effects of plant diversity mediated by structural, odor, or resource diversity because they are caused by an effect of the plant community on individual plants, mediated by changes in competition, which in turn affects higher trophic levels. In contrast, changes in structural, odor, or resource diversity occur as a direct consequence of changed plant community diversity.
A reduction in individual plant performance with increasing diversity is likely to reduce the availability of resources for herbivores in general, such as by reducing the number of flower heads for flower feeding herbivores. The performance of individual plants might be reduced if they suffer more competition: for instance, plant tissue nutrient levels may be reduced in diverse communities due to more efficient nutrient use in species-rich assemblages (van Ruijven & Berendse, 2005) and/ or due to increased light competition, which causes plants to invest more in structural, carbon-rich tissues (Hirose & Werger, 1995). The lower plant quality could reduce the abundance or performance of herbivores in diverse plant communities. Complex effects may also occur; for example, Kigathi, Weisser, Veit, Gershenzon, and Unsicker (2013) showed that plants may change their emission of volatile compounds when growing in competition with other plants, which has effects on the attraction of herbivores and their natural enemies. Overall, effects on the third trophic level are likely to be weaker as they are even more indirect (Kagata, Nakamura, & Ohgushi, 2005;Scherber et al., 2010).
To separate these direct and indirect effects, we used transplanted and potted host plants (the common knapweed, Centaurea jacea) placed into experimental plant communities differing in species richness. We then analyzed the responses of its flower head-infesting tephritids and their parasitoids. If apparency drives the effects, then a measure of apparency, such as the height of host plants relative to the rest of the community (which indicates how easy it is for insects to find their host using visual cues), should significantly increase herbivory. Structural heterogeneity might reduce herbivory, which means that herbivory should be lower when LAI, which can serve as a proxy for structural complexity, is high. To test for indirect effects mediated by host plant performance, we can include a measure of resource density per host plant. For parasitoids, we hypothesize indirect effects to be of minor importance and expect positive direct effects of plant diversity; we should therefore see similar effects of plant diversity on parasitoids in both transplants and potted plants. Parasitoids are likely to respond strongly to structural complexity (strong effects of LAI would be expected in this case) and to resource density (the quantity of tephritid hosts). We test for these effects using a large grassland diversity experiment, the Jena Experiment (Roscher et al., 2004). The Jena Experiment contains experimental plant communities on 20 × 20 m plots, which differ in plant species richness and the number of plant functional groups. We analyzed data from experimental plant communities containing 1-8 species and 1-4 plant functional groups (i.e., grasses, legumes, small herbs, tall herbs).
| The study system
Centaurea jacea L. s. l. (brown knapweed; Asteraceae) is native to Eurasia and common throughout Germany. The long-lived perennial hemicryptophyte (a plant with overwintering buds at soil level) reemerges in spring (Press & Gibbons, 1993), producing vegetative side rosettes, flowers, and fruits between June and October (Jongejans, de Kroon, & Berendse, 2006). C. jacea flower heads are widely attacked by Tephritidae, an abundant family of Diptera that mainly inhabit fruits or other seed-bearing organs of flowering plants (White, 1988). Six species of Tephritidae, with flight periods between May and September, are associated with C. jacea in Germany. Four of them have a narrow host range, i.e., they are either monophagous on C. jacea or use a few Centaurea species only (Merz, 1994; White, 1988): Acinia corniculata (Zetterstedt), Urophora quadrifasciata (Meigen), Urophora jaceana (Hering), and Chaetorellia jaceae (Robineau-Desvoidy); the latter two are likely to be monophagous on C. jacea in the study area (HZ, pers. obs.). Two species are associated with more than 15 composite host plant species of different genera (Merz, 1994): Acanthiophilus helianthi (Rossi) and Chaetostomella cylindrica (Robineau-Desvoidy). These Tephritidae have different foraging behaviors, ranging from destructive feeding on the flower head (C. jaceae) to inducing complex woody galls in the capitulum (U. jaceana); however, detailed information is not available for all potentially occurring species. Parasitoids of the families Eurytomidae, Pteromalidae, Eulophidae (all Chalcidoidea), Braconidae, and Ichneumonidae attack these flower head phytophages in great numbers (Dempster, Atkinson, & Cheesman, 1995; Varley, 1947; Zwölfer, 1988). For tephritid hosts, chalcid wasps are the major parasitoids, and these have a broad host range (see Figure S1 for a potential interaction web based on Tephritidae attacking C. jacea). Since the study site is mown twice a year, Tephritidae and their parasitoids, which commonly overwinter in flower heads of the host plant, recolonize the experimental field site every year from source populations in the surrounding meadows.
| Experimental design
In order to investigate responses of the second (Tephritidae) and third trophic levels (parasitoids) to plant diversity, the study was carried out in the Jena Experiment. There, 60 plant species were assigned to four plant functional groups (legumes, grasses, tall herbs, and small herbs), and mixtures of 1, 2, 4, 8, and 16 species were created by randomly selecting species from the pool of 60.
Each plant species richness level was replicated on 16 plots, except for the 16-species communities (14 plots); additionally, four plots were sown with all 60 species (for details, see Roscher et al., 2004; The Jena Experiment and Table S1). The design also manipulated functional group richness to be as orthogonal as possible to plant species richness: that is, there are 8- or 16-species plots with only one or two functional groups present. Experimental plots were arranged in four blocks, mostly to account for the change in soil conditions with increasing distance to the river (Roscher et al., 2004). In 2007, the flower heads of a given plant were stored together at room temperature until insects emerged; at this point, all emerging insects were identified. We then dissected 10% of the flowering flower heads per plant (i.e., those at the most advanced phenological stage), but we always dissected at least five flower heads, which could be more than 10% on plants with few flower heads. Both unemerged insects and the empty pupae of emerged insects are detected by dissection and give a clear measure of the number of infestations that occurred per flower head (insect load). Moreover, dissection of the single flower heads allowed a precise determination of the proportion of flower heads that were infested (tephritid infestation rate). These two tephritid responses are not easily derived from pure emergence data at the plant level. However, the parasitoid community is well represented by the emerged insects, and as we assume that there are no differences in emergence success between hosts and parasitoids, we defined the parasitism rate to be the proportion of emerged insects that were parasitoids.
In 2008, we streamlined and standardized our data collection by only collecting 10% (and at least five) of the flower heads of each transplant (those at the most advanced phenological stage) and by storing flower heads individually at room temperature until insects emerged and could be identified. We then dissected all flower heads and additionally recorded the insects which did not emerge. In addition to the transplants, potted C. jacea plants that had been raised outside the experimental communities were placed into the plots (see Table S2). Potted plants had therefore grown unaffected by the communities into which they were placed. Potted plants remained in the experimental plant communities for 7 weeks, before being harvested.
Aboveground interactions (e.g., shading by neighboring plants) over the course of one growing season are expected to be of minor importance for potted plant performance, compared to the impact these interactions had on the transplants over a number of years (Nitschke et al., 2010). At harvest, we recorded the total number of flower heads (flowering and bud) per pot, and their maximum height in the field.
All flower heads of a pot (up to a maximum of 20) were collected and stored individually at room temperature until insects emerged. We then dissected the flower heads to identify tephritids and parasitoids that had not emerged.
We used data from separately stored flower heads to derive trophic relationships between the different species of Tephritidae and parasitoids (for details on identification and assignment of parasitoids to host species, see supplement, p.1).
| Response variables for the higher trophic levels
We calculated a series of variables from our dissection and emergence data. We calculated tephritid infestation rate as the proportion of dissected flower heads that were infested, and insect load as the number of infestations per dissected flower head. The total number of infestations was the sum of tephritid and parasitoid individuals (because each parasitoid must have emerged from a tephritid) plus pupae.
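A minimal base-R sketch of these calculations follows; the data frame, the column names, and the counts are hypothetical stand-ins for the dissection records.

# One row per dissected flower head (hypothetical example data)
heads <- data.frame(
  plant       = c(1, 1, 1, 2, 2),
  tephritids  = c(0, 2, 1, 0, 0),   # unemerged tephritids found by dissection
  parasitoids = c(0, 1, 0, 0, 1),   # parasitoids; each implies a tephritid host
  pupae       = c(1, 0, 0, 0, 0)    # empty pupae of already-emerged insects
)
heads$infestations <- heads$tephritids + heads$parasitoids + heads$pupae

infestation_rate <- mean(heads$infestations > 0)   # proportion of infested heads
insect_load      <- mean(heads$infestations)       # infestations per dissected head

# 2008-style parasitism rate: proportion of hosts found to be parasitized
parasitism_rate <- sum(heads$parasitoids) /
  (sum(heads$tephritids) + sum(heads$parasitoids))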
In 2007, parasitism rate was defined as the proportion of all emerged insects that were parasitoids, while in 2008, parasitism rate was defined as the proportion of hosts that were found to be parasitized following flower head dissection. Except for three cases, parasitoids were all solitary and could be unequivocally related to tephritid hosts.
In order to compare the potted plants with the transplants, we excluded the potted plant data from the 16-species plots and analyzed all responses across the 1-8 plant species gradient for all datasets (transplants 2007 and 2008, potted plants 2008).
| Variables mediating diversity and plant functional group effects on higher trophic levels
In order to identify potential mediators of diversity effects, we used a series of variables as covariates in our statistical models. LAI (Leaf Area Index) is a measure of aboveground space use and light penetration (Welles & Norman, 1991) and commonly increases with plant species richness in experimental plant communities (Spehn, Joshi, Schmid, Diemer, & Korner, 2000). LAI is derived from measurements of "all light blocking objects" in a community and reflects structural complexity (Rutten, Ensslin, Hemp, & Fischer, 2015); therefore, we used this parameter as a proxy for habitat complexity. Since host plant apparency (the probability that a host plant is found by its herbivore; Endara & Coley, 2011) can affect insect herbivory (e.g., Castagneyrol, Giffard, Pere, & Jactel, 2013), we included the variable relative height of the host plant as an apparency measure. Relative height indicates how easy it is for insects to find their host plant using visual cues, as a plant which is much taller than its neighbors will be easy to find, whereas one that is shorter will be hard to locate. There is some evidence that Tephritidae, especially those with a narrow host range, use visual cues (especially shapes, but also size and to a lesser extent color) to find their hosts. In tests, female tephritids tended to prefer oviposition site models substantially larger than natural sites (e.g., fruits). Specifically, females of one species of Urophora and Chaetorellia (two of the genera in our samples) were shown to be most attracted toward sophisticated visual mimics of floral buds of their host plant (Díaz-Fleischer, Papaj, Prokopy, Norrbom, & Aluja, 1999). Here, we focus on the visual apparency of the host plants as we were able to measure this. Other types of apparency, such as chemical odor apparency, could also play a role but could not be quantified here. Plant community LAI and height were measured on all plots twice a year during peak standing biomass.
| Statistical analysis
All analyses were conducted using the R statistical software (R Development Core Team 2010, Version 2.12.1). We tested for effects of plant diversity and plant functional composition on (i) the second (tephritids) and (ii) the third trophic level (parasitoids) using mixed effects models fitted with the "lme4" package (Bates & Sarkar, 2007).
Response variables were (i) tephritid infestation rate and insect load and (ii) parasitism rate (see Table S3 for the number of plots in each dataset).
For each of the response variables and datasets, we fitted a full model with plot as a random factor (to account for the nested design of our study, with several measures from each plot). We then carried out a two-step analysis. In the first step, we tested for bottom-up effects of plant diversity and functional composition and included fixed effects for plant species richness (log transformed), functional group richness, and the presence of particular functional groups (legumes, grasses, and small and tall herbs). As it was not possible to fit the presence of all four functional groups, together with the number of functional groups, we determined which variables should be included for each response.
For each response, we fitted five models, each containing one of the five functional group variables, and excluded the variable whose model had the largest AIC value (indicated in Table 1). To account for spatial variation in tephritid and parasitoid communities, we also included block as a fixed effect (we treated it as fixed because it has only four levels, and estimating a variance for variables with few levels is unreliable). The full model in R syntax is as follows (shown here for the case where models with grass presence had the highest AIC):

y ~ block + species richness + number of functional groups + legumes + tall herbs + small herbs + (1|plot ID)

In a second step, we included several covariates to determine whether these mediated the effect of any of the design variables. As potential mediators of diversity effects (species richness or functional diversity), we included community LAI (a proxy for structural complexity), the relative maximum height of transplants/potted plants (a simplified measure of C. jacea apparency, i.e., the likelihood of a host plant being found by its herbivore), and the number of flower heads per host plant in the analysis of tephritid infestation, and tephritid infestation rate in the analysis of parasitism rate (as a measure of resource density).
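One plausible reading of this step-1 variable selection, sketched in R with the lme4 package cited above; the data frame d and all column names are hypothetical stand-ins for the design variables.

library(lme4)

fg_vars <- c("legumes", "grasses", "tall_herbs", "small_herbs", "n_fun_groups")

# One candidate model per functional-group variable; ML fits so AICs are comparable
fits <- lapply(fg_vars, function(v) {
  f <- as.formula(paste("insect_load ~ block + log(sown_richness) +", v, "+ (1 | plot_id)"))
  lmer(f, data = d, REML = FALSE)
})
aics <- setNames(sapply(fits, AIC), fg_vars)

# The variable whose model shows the largest AIC is excluded from the full model
excluded <- names(which.max(aics))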
As mediators of functional group presence (tall and small herbs), we included the presence of Asteraceae in the community to account for potential spillover from related plant species. Both the tephritid species and their associated parasitoids can also use other host plants within the Asteraceae (with almost no associations occurring with plants of other families). The full model from the second step is as follows (again assuming grass presence was dropped):

y ~ block + species richness + number of functional groups + legumes + tall herbs + small herbs + LAI + relative height + number of flower heads + Asteraceae presence + (1|plot ID)

(Table 1: Effects of community and Centaurea jacea characteristics on tephritid and parasitoid responses.)

In all cases, full models were simplified by progressively removing non-significant terms and comparing models with likelihood ratio tests to produce a minimal adequate model (final model). Significance of terms in the final model was assessed by separately removing terms from that model; however, block was always retained in the models.
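The step-2 fit and the likelihood-ratio simplification could look roughly as follows, again with lme4 and hypothetical names; binary responses would use glmer() with a binomial family instead.

library(lme4)

full <- lmer(insect_load ~ block + log(sown_richness) + n_fun_groups +
               legumes + tall_herbs + small_herbs +
               LAI + rel_height + n_heads + asteraceae + (1 | plot_id),
             data = d, REML = FALSE)

# Likelihood-ratio test for a single term; anova() compares the nested ML fits
lrt_drop <- function(model, term) {
  reduced <- update(model, as.formula(paste(". ~ . -", term)))
  anova(reduced, model)
}

# Backward simplification: repeatedly drop the least significant term (block is
# always retained) until all remaining terms are significant -> final model
lrt_drop(full, "rel_height")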
For infestation rate and parasitism rate (binary responses), we used a generalized linear mixed model with binomial error distributions.
Other response variables were transformed if necessary in order to meet the assumptions of the models (indicated in Table 1).
Figures were created in R using the packages "effects" (Fox, 2003), "plotrix" (Lemon, 2006), and "gplots" (Warnes, 2010). Data for figures were derived from the statistical model using the package "effects" (version 3.1-2). Values for responses that were significantly affected by the predictor to be shown in the figure came from the final step-2 model and were therefore corrected for the random effects and any other significant fixed effects. Where the response was not significantly affected by the predictor to be shown in the figure, we could not use values from a minimal adequate model and therefore took them instead from a simplified model of the type: response ~ block + predictor + (1|plot ID). In these cases, values were only corrected for block and random effects. Graphical illustration of the trophic relationships was produced in R (package "bipartite"; Dormann, Gruber, & Fründ, 2008). For an analysis of species co-occurrences, see the supplement. Significant results are reported as mean ± SE derived from the final step-2 model.
| Description of the insect community in flower heads
We found 855 tephritid individuals in total and identified four tephritid species attacking C. jacea. The stenophagous Chaetorellia jaceae was the most abundant species in our samples.
| Responses of the second trophic level (Tephritidae) to plant diversity
Only plant functional group presence affected the herbivorous Tephritidae in the analysis of potted plants. Insect load (i.e., the number of infestations per dissected flower head) was lower in the presence of grasses (1.5 ± 0.1 individuals) than in their absence (1.7 ± 0.1 individuals) (χ2 = 3.83, p = .050, Figure 4), both in the first and second step of the analysis (i.e., without and with covariates, Table 1). In the transplants, by contrast, plant diversity effects on the tephritids were detected (Figure S2). These results suggest that plant species and functional group richness effects are stronger when both direct and indirect effects are operating.

| Response of the third trophic level (parasitoids) to plant diversity

In the potted plants, parasitism rate increased with plant species richness (Figure S3). We therefore find evidence for direct effects of plant species richness on parasitoids but not on herbivores. In the transplants, parasitism rates were higher in more diverse mixtures (χ2 = 8.63, p = .003, Table 1, Figure 6). The presence of tall herbs also increased parasitism rates by ca. 24% (χ2 = 4.62, p = .032, Figure 6). When adding covariates to the models in the second step of the analyses, an additional LAI effect was seen in transplants of both years (Table 1, Figure 7). Increasing LAI values in the plant communities reduced parasitism rates (2007: χ2 = 4.25, p = .039; 2008: χ2 = 5.88, p = .015). Including LAI in the model did not remove the other effects of the plant community (Table 1), suggesting that these are not mediated by changes in structural complexity.
The lack of a significant LAI effect in potted plants (though pointing in the same direction, see Figure 7) may result from the generally slightly elevated position of the potted plants, which caused a proportion of them to be higher than the surrounding plant community. The covariates Asteraceae presence, relative height, and tephritid infestation rate never significantly affected parasitoids. Plant diversity effects on parasitoids were therefore found both when direct effects alone and when both direct and indirect effects could operate. However, we were not able to explain the mechanisms driving these effects with our covariates.
| Herbivores and parasitoids show opposing responses to plant diversity
Plant diversity had opposing effects on the herbivore and parasitoid communities of Centaurea jacea in a large grassland diversity experiment. Herbivore abundance (infestation rate and insect load) declined with increasing plant diversity, which agrees with the results of several previous studies (Balvanera et al., 2006; Haddad et al., 2009; Unsicker et al., 2006). However, these effects were variable and did not occur for all measures of the herbivore community or in all years. Herbivore populations are highly temporally variable (Solbreck & Sillén-Tullberg, 1986; Walker, Hartley, & Jones, 2008) and plant diversity effects may only be detected in particular years. Plant diversity mostly tended to have negative (or in one case neutral) effects, so although there is variation in the strength and significance of effects, the direction is largely consistent. However, the variation in strength of the effect indicates that, although diversity may act in a similar way in different years, other drivers of herbivore abundance (such as climate or dispersal) may frequently mask the effects of plant diversity. In contrast, and in line with the Enemies Hypothesis (Root, 1973) and a range of more recent studies (Albrecht, Duelli, Schmid, & Muller, 2007; Bianchi et al., 2006; Haddad et al., 2009; Vanbergen, Hails, Watt, & Jones, 2006), parasitoid abundance increased with plant diversity. We found that both plant species richness and functional group richness were important [...] (Nitschke et al., 2010), further supporting this idea.
Figure 7: Influence of community leaf area index (LAI) on parasitism rate in the three datasets. Significance in final step-2 models is abbreviated: * p ≤ .05, n.s. p > .05. Effects (grey line) and 95% confidence intervals (CI, dotted lines) derived from final step-2 or simplified models (see Methods).
| Plant diversity indirectly affects the herbivorous Tephritidae
Plant diversity effects on Tephritidae were few and were only detected in the transplants where both direct and indirect effects of plant diversity can occur. The fact that no such effect was observed in potted plants (direct effects only) implies that these effects were largely mediated by changes in plant quality or performance traits along the plant diversity gradient. Accordingly, we found little evidence that apparency or the availability of alternative hosts in the plant community affected the herbivores. This may be because the herbivores (dominated by the monophagous Chaetorellia jaceae) are very efficient at finding their hosts and can locate them regardless of their surroundings. This is supported by the lack of an effect of structural complexity. We hypothesized that increasing structural complexity in the communities would impair the tephritids' host finding abilities, but our proxy for complexity (LAI) was never retained in the final herbivore models. We did find that grass presence reduced tephritid abundance, which might be explained by associational resistance effects (Barbosa et al., 2009); however, there is no evidence that this can explain the effect of plant diversity on herbivore abundance.
Instead, variation in host plant performance and quality seem more likely to explain the plant diversity effects. Transplant performance (i.e., biomass, the number of flower heads) declines with increasing plant diversity (Nitschke et al., 2010) and this is likely to result in a decrease in food quality for the herbivores. In agreement with this idea, an increase in the number of flower heads did increase infestation rates in the transplants. This indicates that the Tephritidae are resource limited, which agrees with findings by Dempster et al. (1995).
However, contrary to our expectations, resource density per host plant did not explain the effect of plant functional group richness on the herbivore community (i.e., functional richness remained significant when fitted alongside the number of flower heads). This suggests that other host plant characteristics must account for the negative effect of functional diversity.
In addition to host plant size, nutritional quality is also likely to affect herbivore communities. Nutritional quality is expected to decline with increasing plant diversity as a result of increased light competition and/or increased nutrient use efficiency in diverse plant communities, and thus to negatively affect herbivore abundance (Abbas et al., 2013; but see Ebeling et al., 2014). The opposite pattern could occur if shading reduces plant chemical defenses and increases specific leaf area and hence palatability (Crone & Jones, 1999;Guerra, Becerra, & Gianoli, 2010;Mraja, Unsicker, Reichelt, Gershenzon, & Roscher, 2011). However, the negative response of tephritid species to diversity suggests a decline in plant quality in this case. A decline in plant quality could affect oviposition, as many tephritid species can assess host plant quality and adjust clutch size in response (Burkhardt & Zwölfer, 2002;Freese & Zwölfer, 1996;Pittara & Katsoyannos, 1992;Rieder, Evans, & Durham, 2001). For instance, Burkhardt and Zwölfer (2002) found that ovipositing females of the gall forming U. jaceana preferred high-quality plants and flower heads which resulted in increased larval growth and fecundity. The production of a gall is costly and time-consuming, which means that there is likely to be a strong advantage for U. jaceana in assessing host plant quality and not investing in poor quality plants. Other monophagous species like Chaetorellia jaceae, the most abundant species in our study, may also have evolved mechanisms to assess host plant quality. We might expect very different patterns for more generalist herbivores that could benefit from diet mixing in diverse plant communities (Pfisterer, Diemer, & Schmid, 2003). These negative indirect effects of plant diversity on herbivore communities, mediated by changes in host plant quality, have been largely overlooked but our results suggest that they may be important, particularly for monophagous species.
| Plant diversity directly affects parasitoid communities
In accordance with predictions from the Enemies Hypothesis (Root, 1973), we found that parasitism rates increased with increasing plant functional group or species richness. This agrees with an observational study, where plant diversity was affected by grazing intensity, which found that the parasitism rate by Pteromalus elevatus on the tephritid Tephritis conura increased with plant species richness (Vanbergen et al., 2006). Our study allows us to look at the mechanisms driving these effects in more detail. As the diversity effect in potted plants cannot be attributed to changes in host plant performance, it must be caused by changes in the plant community, which suggests that direct effects of diversity are the most important. One of the main direct effects of plant diversity may be to increase the availability of floral resources, which is expected to benefit parasitoids through increased nectar provision (e.g., Araj et al., 2008; Lavandero et al., 2006). In addition, some plant species in our communities bear extrafloral nectaries (EFNs), according to the world list of EFN-bearing plants (Weber, Porturas, & Keeler, 2015). However, the list is not fully comprehensive and only contains those species currently reported to have EFNs.
Further measures of EFNs on the field site would be needed to test for such an effect. In addition to these potential effects of floral resource availability, parasitism rate may be affected by host resource availability. Tephritid infestation rate and insect load both decreased with increasing plant diversity (functional group or species richness), which means the increase in parasitism could also be driven by decreasing host density. This response was reported in a study on trap nesting bees on the same field site (Ebeling et al., 2012). However, such an effect seems less likely as an explanation of our results because we found that tephritid infestation rate did not affect parasitism rate. This is in accordance with results from a study by Walker et al. (2008), which implies that host density was either not a main driver of parasitism rates or that parasitoids cannot easily determine host density. The latter notion is supported by a reduction in parasitism rates when the number of flower heads was high in potted plants. This might indicate decreasing parasitoid efficiency with an increasing number of potential host locations, that is, a "dilution effect." The parasitoids that we found can attack herbivores associated with some other Asteraceae species found on our field site, with almost no other associations recorded (Noyes, 2016). However, the variable Asteraceae presence did not affect parasitoid parameters, which implies that parasitoids were not using alternative hosts within the high diversity communities. We detected another direct community effect on parasitoid abundance: our proxy for structural complexity (LAI) significantly reduced parasitism rate, suggesting that host finding was impaired by increasing complexity of the vegetation. Although LAI is positively related to plant diversity (Spehn et al., 2000), it has opposing effects on parasitoid abundance. This implies that, although an increase in structural complexity might reduce the ability of parasitoids to find their hosts in diverse plant communities, other benefits of plant diversity are sufficient to override these effects.
Our results suggest that diverse plant communities harbor a more efficient parasitoid community, likely because of a greater provision of floral resources.
| General conclusion
Our study in a model tri-trophic system (Centaurea jacea, Tephritidae, parasitoids) suggests opposing responses of herbivores and parasitoids to plant diversity, with clearer effects seen in parasitoids than in herbivores. The negative herbivore response seems to be mostly driven by changes in host plant quality. This suggests that some of the negative effects of plant diversity on herbivore abundance found in previous studies could be explained by these more indirect effects on plant quality. Future studies should therefore consider controlling for changes in quality, and potted plants placed into communities are a useful way to do this. In contrast to the herbivores, parasitoid abundance increased with diversity. This seems to be partly driven by increased resource availability (i.e., nectar and pollen), although direct measures of the resources available to parasitoids would be needed to confirm this. The increase in parasitism rate (and decrease in herbivory) also argues for the value of diverse plant communities in providing more efficient pest control. Our results show that plant diversity is a key driver of the abundance of higher trophic levels and that a wide variety of mechanisms can operate to explain these effects.
ACKNOWLEDGMENTS
We thank a number of enthusiastic students and student helpers for
On the Application of Wavelet Transform in Jet Aeroacoustics
Wavelet transform has become a common tool for processing non-stationary signals in many different fields. The present paper reports a review of some applications of wavelet in aeroacoustics with a special emphasis on the analysis of experimental data taken in compressible jets. The focus is on three classes of wavelet-based signal processing procedures: (i) conditional statistics; (ii) acoustic and hydrodynamic pressure separation; (iii) stochastic modeling. The three approaches are applied to an experimental database consisting of pressure time series measured in the near field of a turbulent jet. Future developments and possible generalization to other applications, e.g., airframe or propeller noise, are also discussed.
Introduction
Wavelet transform is widely used as an efficient tool to extract localized features from random signals, whatever the nature of the analyzed quantity. Indeed, since their introduction to the scientific community, the fields of application of wavelets have multiplied and include, for example, fluid dynamics, finance, medicine, meteorology, and electronic engineering, to cite just a few. Comprehensive reviews of the wavelet theory and its applications can be found in many reference papers and books, and we refer to the literature for the details (e.g., [1,2]).
During recent decades, wavelet transform has been extensively applied in turbulence to process both experimental and numerical data. The objectives were, for example, the identification of localized intermittent events and their separation from the background (see, e.g., [3,4]), the eduction of coherent structures [5][6][7], the computation of time-frequency correlations [8,9], and the statistical characterization of relevant energetic events (see, among many, [10,11]).
In the context of jet aeroacoustics, many studies on jet noise carried out over the last 50 years have demonstrated that large-scale vortices formed in the shear layer close to the jet exit contribute to the generation of the sound radiated to the far field, in particular at low emission angles (see, among many, the early paper by Mollo-Christensen [12] and the review given in [13]). The correlation between the intermittent and localized nature of those flow structures and the noise production mechanism has been verified by several authors [14][15][16][17], and it is nowadays recognized that a correct prediction of the jet far-field noise can only be accomplished if such intermittent dynamics are taken into account (e.g., [18]).
Within this context, due to its temporally localized nature, wavelet analysis has been applied successfully to extract intermittent sound sources in jet flows. These investigations contributed to the understanding of the physical mechanisms underlying the generation of noise and to the development of reliable predictive models [19][20][21]. Several wavelet-based procedures have been proposed in the past, and the scope of the present paper is to review some of them, focusing on those methods introduced recently by the Fluid Dynamics research group of the University Roma Tre of Rome. The main features of the selected methodologies are briefly worked out in the next section, along with a review of the relevant literature. To better exploit their properties, examples of applications will be given by considering an experimental database consisting of pressure data taken in the near field of a single-stream compressible sub-sonic jet. The experimental set-up is briefly described in Section 3, and explanatory results are presented in Section 4. Conclusions and final remarks are eventually given in Section 5.
The Post-Processing Procedures
The wavelet decomposition allows for the simultaneous representation of a temporal signal in terms of a time shift (t) and a resolution time scale (s), whose inverse corresponds to the frequency (f).
Formally, the wavelet transform w(s, t) of a signal p(t) at the resolution time scale s is given by the following expression:

$$w(s,t) = C_{\Psi}^{-1/2}\, s^{-1/2} \int_{-\infty}^{+\infty} p(\tau)\, \Psi^{*}\!\left(\frac{\tau - t}{s}\right) \mathrm{d}\tau$$

where C_Ψ denotes a coefficient that accounts for the mean value of Ψ(t), the so-called Mother Wavelet. The integral represents a convolution between p(t) and the dilated and translated complex conjugate counterpart of Ψ(t). The wavelet transform thus amounts to an ensemble of coefficients resulting from the projection of the original signal onto a basis composed of functions which are translated and stretched versions of the Mother Wavelet. Conceptually, the procedure is analogous to the Fourier transform, whose basis is represented by trigonometric functions. A common feature between the wavelet and Fourier transforms can be found in the so-called wavelet scalogram, given by the square of the wavelet coefficients. It provides a decomposition of the energy, or Fourier modes, onto the (s, t) plane and, as reported in [4], represents a localized counterpart of the standard Fourier spectrum that can be recovered by a simple integration in time.
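To make the definition concrete, the following is a minimal base-R sketch of a Morlet continuous wavelet transform evaluated in Fourier space, in the spirit of standard CWT implementations; the helper name morlet_cwt, the synthetic signal, and all parameter choices are illustrative assumptions, not the authors' code.

n  <- 4096                      # number of samples
dt <- 1 / 500e3                 # sampling interval (500 kHz, as in Section 3)
p  <- sin(2 * pi * 5e3 * (1:n) * dt) + rnorm(n)   # synthetic "pressure" record

morlet_cwt <- function(x, scales, dt, omega0 = 6) {
  n     <- length(x)
  xf    <- fft(x)
  k     <- 0:(n - 1)
  omega <- ifelse(k <= n / 2, 2 * pi * k / (n * dt), -2 * pi * (n - k) / (n * dt))
  w <- matrix(0 + 0i, nrow = length(scales), ncol = n)
  for (j in seq_along(scales)) {
    s       <- scales[j]
    # Fourier image of the dilated Morlet wavelet (analytic: positive frequencies only)
    psi_hat <- sqrt(2 * pi * s / dt) * pi^(-0.25) *
               exp(-((s * omega - omega0)^2) / 2) * (omega > 0)
    w[j, ]  <- fft(xf * Conj(psi_hat), inverse = TRUE) / n   # one row per scale s
  }
  w                             # complex coefficients w(s, t)
}

Each row of the returned matrix contains the coefficients at one scale; the prefactor plays the role of the scale-dependent normalization in the expression above.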
A normalized version of the scalogram is given by the so-called Local Intermittency Measure (LIM) (see, e.g., [4,10]), which is defined as follows:

$$\mathrm{LIM}(s,t) = \frac{w^{2}(s,t)}{\left\langle w^{2}(s,t) \right\rangle_{t}}$$

where the symbol ⟨•⟩ denotes a time average. Peaks of the LIM higher than 1 represent events with energy larger than the mean. Therefore, the LIM amplitude at a selected scale s can be thresholded in order to select events associated with large energy. Several wavelet-based approaches based on the computation of the scalogram and the LIM have been introduced in the literature with the scope of analyzing pressure data obtained in jet flows. Among the different techniques, we shall focus on three classes of processing procedures: (i) conditional sampling; (ii) acoustic-hydrodynamic pressure separation; (iii) stochastic modeling. Their main features are briefly worked out in the following, along with the relevant literature, where more details about the methodologies and their validation can be found.
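Continuing the sketch above (and reusing its hypothetical morlet_cwt() helper and signal p), the scalogram, the LIM, and the selection of super-threshold events take only a few more lines.

scales <- 2^seq(2, 8, by = 0.25) * dt        # resolution time scales s
w      <- morlet_cwt(p, scales, dt)
scal   <- abs(w)^2                           # wavelet scalogram |w(s, t)|^2
lim    <- scal / rowMeans(scal)              # LIM(s, t): local energy over its time mean
events <- which(lim > 1, arr.ind = TRUE)     # (scale, time) spots above the mean energy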
It should be pointed out that the results achieved by these methods do not depend on the choice of the wavelet type. The techniques are based on the selection of events according to energetic criteria determined from the computation of the scalogram or of the LIM. Therefore, no correlation exists between the choice of the wavelet type (and thus of its shape) and the events to be identified.
Conditional Sampling
Camussi and Guj [10] introduced a coherent-structure identification procedure based on the idea that the passage of a flow structure of a characteristic size r_i at the instant t_k should induce a burst in the LIM at the corresponding time-scale location. The LIM can be thresholded by fixing a proper trigger level T, and the relative maxima which satisfy the condition LIM(r_i, t_k) > T can be selected. The selected time instants t_k corresponding to the occurrence of the energetic events can be used to perform a conditional averaging of the original signal used to compute the LIM. For a generic signal a(t), the ensemble average can be formalized as follows:

$$\langle a(t) \rangle_{e} = \frac{1}{N_{e}} \sum_{k=1}^{N_{e}} a\!\left(t_{k}^{*} + t\right), \qquad t \in \left[-\Delta t/2,\, +\Delta t/2\right]$$

where N_e is the number of events corresponding to the condition LIM(r_i, t_k) > T, ∆t is a proper time window dependent on the estimated persistence of the effect of the detected event, and t*_k is the time instant on which the ensemble average of the signal segments is centered. The order of magnitude of ∆t should be selected to be greater than the integral time scale of the signal.
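A base-R sketch of this conditional averaging follows; the trigger level, the window size, and the synthetic stand-ins for the signal and for the LIM row at the scale of interest are illustrative assumptions.

set.seed(1)
n     <- 4096
p     <- rnorm(n)                # stand-in for the original signal a(t)
lim_s <- rexp(n)                 # stand-in for LIM(r_i, t) at the selected scale

T_trig <- 3                      # trigger level T on the LIM
half   <- 64                     # half-window; Delta_t should exceed the integral scale

# Relative maxima of the LIM exceeding the trigger level
is_peak <- c(FALSE, diff(sign(diff(lim_s))) == -2, FALSE)
t_k     <- which(is_peak & lim_s > T_trig)
t_k     <- t_k[t_k > half & t_k <= n - half]    # keep events with a full window

# Ensemble average of the segments centered on the selected instants t*_k
segs     <- sapply(t_k, function(k) p[(k - half):(k + half)])
cond_avg <- rowMeans(segs)       # averaged signature over 2*half + 1 samples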
If the signal analyzed is the pressure, the ensemble-averaged signature can be associated with noise-generation events. In the present work, the conditioning procedure is applied to near-field pressure signals, showing how strongly the shape of the averaged pressure signatures depends upon the distance from the jet exit, as an effect of the different flow physics related to the jet flow evolution. Indeed, close to the jet exit, the flow physics is dominated by the Kelvin-Helmholtz instability mode, inducing quasi-periodic pressure oscillations, whereas in the region far downstream, a fully turbulent state is reached.
Acoustic-Hydrodynamic Pressure Separation
The procedure applied herein was proposed for the first time by Grizzi and Camussi [21] and then further developed by Mancinelli et al. [22]. They assumed that the hydrodynamic contribution related to localized eddy structures compresses well onto a wavelet basis, so that it can be described by a few wavelet coefficients of large amplitude. Thus, the so-called pseudo-sound (i.e., the hydrodynamic component of the pressure fluctuations) can be extracted by selecting the wavelet coefficients exceeding a proper threshold. The acoustic counterpart, associated with more homogeneous and low-energy fluctuations, is represented by those coefficients having an amplitude lower than the threshold. A crucial role in the procedure is thus played by the choice of the threshold, which has to be identified on the basis of proper physical assumptions. The methods proposed so far actually differ in the way the threshold is selected.
The two wavelet sets, once selected, are inverse-transformed, allowing for the reconstruction of acoustic and hydrodynamic pressure time series. This is the main advantage of using a wavelet-based procedure rather than more standard Fourier-based separation approaches.
Grizzi and Camussi [21] proposed to select the threshold level based on the computation of a cross-correlation and the estimation of a phase. The signals to be correlated are taken from two microphones positioned in the near field of the jet and aligned with the flow direction. An iterative process is applied to determine a proper threshold that provides the correct phase between the acoustic and the hydrodynamic pressure components. The advantage of this method with respect to previous approaches was mainly in the simplicity of the required set-up. Indeed, only two microphone signals (or pressure time series from numerical simulations) are needed in the near field, acquired (or computed) simultaneously in two positions sufficiently close to each other.
If the near-field pressure is measured by a single probe and/or the far-field pressure is acquired simultaneously, other wavelet-based separation methods can be applied, as detailed in [22].
The first of those procedures (denoted as WT1) accomplishes the separation through the estimation of the cross-correlation between near- and far-field pressures measured (or computed) simultaneously. It is expected that the near-field acoustic pressure correlates well with the far-field noise, whereas the hydrodynamic counterpart does not. This technique is potentially quite robust because no assumptions are made on the statistics of the acoustic field. However, it requires the use of a transducer in the far field, which is not always possible (as is the case, for example, in underwater applications).
In the second method (WT2), the near-field acoustic pressure is extracted through an iterative process based on the degree of similarity between the probability density function of the far-field pressure fluctuations and a Gaussian distribution. The application of such a procedure requires only one microphone in the near field, but it is based on the a priori assumption of the Gaussianity of the acoustic pressure, which has not yet been fully demonstrated.
The third method (denoted as WT3) also requires only one pressure signal in the near field. In this approach, the hydrodynamic pressure is filtered through the application of the technique proposed by [23] for the extraction of coherent structures in a vorticity field. The separation process is again performed by selecting wavelet coefficients overcoming a threshold that is selected according to statistical conjectures usually adopted in de-noising procedures. More specifically, the threshold level, starting from an initial guess, is iteratively evaluated according to the following formula:

$$T_{i+1} = \sqrt{2\, \sigma_{i}^{2} \ln N}$$

where σ²_i is the variance of the (presumed) acoustic pressure at iteration i and N is the number of samples. The iterative process stops when the number of selected acoustic wavelet coefficients remains constant from one iteration to the next.
This approach requires only one microphone in the near field, and its robustness has been assessed in the literature (e.g., [23]). This is the reason why, in the following, we will consider only the WT3 method, and examples of its application will be provided in Section 4.
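The sketch below illustrates an iterative thresholding of this kind on a generic vector of wavelet coefficients; the synthetic coefficients are an assumption, and the inverse transforms that would turn the two coefficient sets back into time series are left implicit.

set.seed(2)
wc <- c(rnorm(4000), 20 * rnorm(60))      # mostly low-amplitude + a few strong events
N  <- length(wc)

thr    <- sqrt(2 * var(wc) * log(N))      # initial guess from the whole coefficient set
n_prev <- -1
repeat {
  acoustic <- wc[abs(wc) <= thr]          # low-amplitude coefficients -> acoustic part
  if (length(acoustic) == n_prev) break   # stop when the selection no longer changes
  n_prev <- length(acoustic)
  thr    <- sqrt(2 * var(acoustic) * log(N))   # T = sqrt(2 * sigma_i^2 * ln N)
}
hydro_coeff    <- ifelse(abs(wc) >  thr, wc, 0)   # inverse transform -> pseudo-sound
acoustic_coeff <- ifelse(abs(wc) <= thr, wc, 0)   # inverse transform -> acoustic pressure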
Stochastic Modeling
The results reported by Kearney-Fischer et al. [24,25] support the idea that intermittent events are the dominant feature of jet noise. They applied a method to extract the events relevant for jet noise and developed stochastic models to reproduce their statistics in both the physical and Fourier domains.
A similar approach was adopted by Camussi et al. [26,27], who used the wavelet transform to select intermittent events from experimental data and proposed stochastic models to reproduce their relevant statistics. These wavelet-based procedures are based on the identification of traces of highly energetic events that appear intermittently in time and have variable strength. These approaches are included in the investigations presented herein.
The main ingredient is the computation of the LIM at a reference frequency or scale. In jets, in the region close to the nozzle exit, a bump in the pressure Fourier spectra can be clearly identified, especially for laminar exit conditions. It is the trace of the Kelvin-Helmholtz instability mode, and the corresponding frequency is hereinafter denoted as f_KH. In the proposed procedure, the LIM is computed at the wavelet scale corresponding to f_KH, and the condition LIM > 1 is used to identify intermittent events having a local energy greater than the average. From the set of selected events, two relevant indicators are computed: the so-called intermittent time, ∆t, representing the waiting time between successive events, and the amplitude A, representing the event energy and retrieved from the square of the corresponding wavelet coefficient.
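In code, the two indicators can be extracted as below; the stand-in coefficient series at the Kelvin-Helmholtz scale is an assumption, and, for simplicity, each super-threshold sample is treated as a separate event.

set.seed(3)
n    <- 4096
dt   <- 1 / 500e3
w_kh <- rnorm(n)                      # stand-in for wavelet coefficients at f_KH
lim_kh <- w_kh^2 / mean(w_kh^2)       # LIM restricted to the K-H scale

idx       <- which(lim_kh > 1)        # events with local energy above the mean
delta_t   <- diff(idx) * dt           # intermittent time: waiting time between events
amplitude <- w_kh[idx]^2              # event energy from the squared coefficients

dt_star <- delta_t / sd(delta_t)      # normalized variables used by the models
A_star  <- amplitude / sd(amplitude)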
The papers by Camussi et al. [26,27] report a detailed characterization of the statistics of these two indicators and propose stochastic models to reproduce their Probability Distribution Functions (PDFs). The models provide analytical approximations of the PDFs using a hyperbolic secant for the intermittent time and a pure decaying exponential function for the amplitude. The adopted functional forms are detailed below:

$$P(\Delta t^{*}) = a\, \operatorname{sech}\!\left(b\, \Delta t^{*}\right), \qquad P(A^{*}) = a\, e^{-b A^{*}}$$

where A* and ∆t* are the random variables normalized with respect to their standard deviations. The amplitudes of the coefficients a and b are given in [27].
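Written out, the two model densities are one-liners; the coefficient values used below are placeholders for illustration (the fitted values of a and b are given in [27]).

pdf_dt <- function(x, a, b) a / cosh(b * x)    # hyperbolic secant, intermittent time
pdf_A  <- function(x, a, b) a * exp(-b * x)    # decaying exponential, event amplitude

x <- seq(0, 5, by = 0.01)
plot(x, pdf_dt(x, a = 1, b = 1), type = "l")   # to be compared with the empirical PDFs
lines(x, pdf_A(x, a = 1, b = 1), lty = 2)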
The models have been demonstrated to also apply to numerical data and to predict the statistics correctly in different flow conditions [28]. In the present paper, we present further examples of the application of this procedure.
Experimental Set-Up
The experimental data analyzed herein have been acquired in the near field of a compressible jet installed in the semi-anechoic chamber available at the Laboratory of Fluid Dynamics "G. Guj" of the Department of Engineering of the University Roma Tre of Rome. Details about the facility and the instrumentation can be found in [21,26]; here, we briefly report the main features.
The single-stream jet has an exit diameter D of 0.012 m, and the Mach number M is varied from 0.5 to 0.9. Near-field pressure measurements have been carried out using one 1/4" 4135 Bruel and Kjaer microphone installed close to the jet axis and connected to a B&K Nexus 2690 signal conditioner. Signals were acquired through a Yokogawa Digital Scope DL708E, setting the sampling frequency to 500 kHz for a number of samples (per each position) of 4 × 10^6. The microphone was positioned at several locations in the near field of the jet flow. The axial distance (x) from the jet exit was varied from 0 to 20D with a step of 1D, and the radial distance (r) from 1D to 3D with a step of 0.5D. Only a few of those positions will be analyzed herein, with the scope of demonstrating the validity of the wavelet-based procedures presented above and addressing physical outcomes related to the noise sources.
Results

Figure 1 shows an example of the LIM distribution computed from a portion of a pressure signal recorded in the vicinity of the turbulent jet. It is evident that the energy is unevenly distributed, and spots corresponding to very large LIM amplitudes can be readily identified at different frequencies, corresponding to different scales.

The Fourier spectrum obtained by the standard PWelch procedure and the one reconstructed through the integration of the wavelet scalogram are compared in Figure 2. The symbol St denotes the Strouhal number representing the non-dimensional frequency (obtained using D, the jet diameter, and U, the mean jet exit velocity, as reference length and velocity scales, respectively). The agreement is very good and, according to [4], the wavelet spectrum shows an even better statistical convergence at high frequencies, whereas at low frequencies, due to the compactness of the wavelet basis, the Fourier transform performs better.
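As a rough illustration of this comparison, the sketch below contrasts a smoothed periodogram (standing in for the PWelch estimate) with the time-integrated scalogram; it reuses the hypothetical morlet_cwt() helper from the sketch in Section 2, and the scale-to-frequency conversion omits the wavelet-dependent constant.

n  <- 4096; dt <- 1 / 500e3
p  <- sin(2 * pi * 5e3 * (1:n) * dt) + rnorm(n)

pw <- spec.pgram(ts(p, deltat = dt), spans = c(11, 11), plot = FALSE)
# pw$freq and pw$spec give the Fourier estimate for comparison

scales <- 2^seq(2, 8, by = 0.25) * dt
w      <- morlet_cwt(p, scales, dt)   # helper defined in the earlier sketch
E_s    <- rowMeans(abs(w)^2)          # wavelet spectrum: scalogram integrated in time
f_s    <- 1 / scales                  # approximate frequency associated with each scale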
The conditional sampling procedure outlined in Section 2.1 has been applied to the pressure data, and examples are reported in Figure 3. The plots refer to M = 0.5, r/D = 1, and different x/D. It can be observed that for small x/D the averaged pressure signature exhibits an oscillatory trend, a trace of the Kelvin-Helmholtz instability that dominates the flow behavior in the vicinity of the jet exit. To this extent, it has been checked that the period of the oscillations corresponds to the inverse of f_KH. For increasing x/D, the oscillation amplitude decreases and, in the transitional region, the averaged pressure has a positive peak that can be ascribed to the effect of vortex pairing and braid formation, which may induce a velocity defect and thus a pressure increase (see [29,30]). At x/D larger than about 10, a fully developed turbulent state is reached, and a clear averaged pressure signature is no longer observed.

The results reported in Figure 4 replicate those of Figure 3 but at higher M (M = 0.9). It is shown that the overall evolution changes significantly. At low x/D, the averaged pressure does not show any oscillations but rather a concentrated spike. This result seems to suggest that, in terms of pressure energy, at high M, the Kelvin-Helmholtz mode does not significantly influence the near-field pressure, even in the region very close to the jet exit. For increasing x/D, the peak intensity decreases, but a positive bump is present even in the fully turbulent region.

The physical reasons behind the relevant differences observed at low and high M are not apparent, and further investigations are surely needed to clarify this interesting point.

As pointed out in [21,31], in the region close to the jet exit, the energy associated with the hydrodynamic pressure fluctuations, the so-called pseudo-sound, dominates the low-frequency range of the pressure Fourier spectra, whereas the acoustic counterpart is relevant only at high frequencies. The wavelet-based techniques described in Section 2.2 can separate the acoustic pressure from the hydrodynamic one even though its energy is very low and concentrated at high frequencies.
Examples are reported in Figure 5 for M = 0.5, r/D = 1, and two different x/D. The results are obtained by applying the method WT3, which provides an efficient separation of the sound and pseudo-sound contributions considering only one signal. It can be observed that, according to [22], the energy hump of the hydrodynamic contribution moves to low frequencies as the axial distance from the nozzle exhaust increases, such behavior being ascribed to the development of larger and larger turbulent structures in the jet plume. On the other hand, the energy level of the acoustic component, concentrated at high frequencies, decreases for increasing x/D according to the results reported in [21,22].

The effect of the Mach number is investigated in Figure 6, where two cases at M = 0.9 are reported. It is confirmed that the separation technique applies well also at high subsonic M, and a weaker dependence of both the hydrodynamic and acoustic spectral shapes upon the axial position is observed. By comparing Figure 6 with Figure 5, it is also observed that the increase of the Mach number does not considerably affect the amplitude of the pseudo-sound spectra, whereas the amplitude of the acoustic spectra increases significantly, demonstrating that the acoustic pressure is much more sensitive to Mach number variations than the hydrodynamic one.
The spectra reported above clearly show that at low x/D, a bump in the power spectra is present at a frequency that corresponds to a low St (around 0.1). This is the trace of the Kelvin-Helmholtz instability, and, as pointed out in Section 2.3, the corresponding wavelet scale is selected for the computation of the LIM. From the set of selected events, the intermittent time ∆t and the amplitude A are extracted, and their PDFs are predicted by the stochastic models proposed in [26]. Examples of the predictive capability of the models are reported in Figures 7 and 8 for the intermittency time and the amplitude, respectively. The analytical forms of the PDFs (the continuous red lines) are the ones proposed in [26], and the agreement with the experimental data is very good. It can be concluded that the proposed stochastic models can accurately reproduce the dynamics of the flow structures responsible for the bump in the Fourier pressure spectra at the Kelvin-Helmholtz frequency, those structures being directly correlated with the noise-generation mechanism.
Conclusions
This paper presents examples of the successful application of wavelet transform in the study of jet noise. Among the different approaches proposed in the literature, the present investigation is focused on procedures proposed recently by Camussi and co-workers (the main reference papers are [10,21,26]). The effectiveness of the methods is demonstrated by their application to an experimental database consisting of pressure time series acquired in the vicinity of a single-stream compressible jet.
The first technique discussed herein consists of a conditional sampling procedure that identifies the averaged signatures of the events responsible for the generation of the most energetic pressure fluctuations. The averaged shape of those highly energetic events is shown to depend greatly upon the distance from the jet exit and on the Mach number, as an effect of the different flow physics occurring in the jet flow during its evolution from the jet exit to the fully turbulent conditions at large x/D.
The second method provides a separation between the hydrodynamic and acoustic pressure from signals taken close to the jet flow. In these cases, the two components, often denoted as sound and pseudo-sound, are mixed and cannot be extracted using standard Fourier-based procedures. It is shown that the separation technique also successfully applies in regions where the hydrodynamic contribution is dominant.
The selection of events based on energetic criteria is also at the core of the third technique, which provides stochastic modeling of the intermittent time and of the events' amplitude. The analytical models proposed in the literature are shown to apply well and to provide reliable predictions of the PDFs of these quantities.
Some of the techniques reported above have been applied to configurations outside jet aeroacoustics. Worth mentioning are, for instance, the analysis of the pressure field in the vicinity of a cylinder [32], the investigation of thermo-acoustic instabilities in burners [33], and the extraction of intermittent features in flames [34]. Also of interest is the application of the methods in the field of airframe noise, where the techniques presented in [22] are applied in [35] for characterizing the noise emitted by landing gears. In this framework, the methods denoted as WT1 and WT3 are demonstrated to be very efficient.
Future applications may include the study of propeller noise, where wavelets can be very useful for separating the tonal components, identified at the Blade Passing Frequencies (BPF), from the broadband component associated, for example, with turbulence or local flow separations. The use of wavelets for these purposes is currently underway by the authors and, hopefully, will be the subject of future publications.
"Engineering",
"Physics"
] |
Phase analysis and corrosion behavior of brazing Cu/Al dissimilar metal joint with BAl88Si filler metal
To meet the requirements of automatic production, a new type of green BAl88Si cored solder was developed. Lap brazing experiments were carried out with copper and aluminum as the brazing substrates. The microstructure, phase composition, and corrosion behavior of the solder joint interface were studied by field emission scanning electron microscopy, energy dispersive spectroscopy, transmission electron microscopy, electron backscatter diffraction, a tensile testing machine, and an electrochemical workstation. The results show that the Cu/BAl88Si/Al brazed joint is metallurgically bonded and is composed of Cu9Al4, CuAl2, α-Al, and a (CuAl2 + α-Al + Si) ternary eutectic. In addition, the grains in the brazed joint show no obvious preferred orientation, and S texture {123}<634>, Copper texture {112}<111>, and Brass texture {110}<112> are present. The Cu9Al4/CuAl2 interface is a non-coherent crystal plane and does not have good lattice matching. The average particle size of CuAl2 is 11.95 µm and that of Al is 28.3 µm. The kernel average misorientation (KAM) value at the aluminum side of the brazed joint interface is obviously higher than that at the copper side, so the defect density at the aluminum side of the interface is higher than that at the copper side. At the same time, due to the poor corrosion resistance of the aluminum side of the brazed joint, serious corrosion spots and corrosion cracks occur, which leads to the shear performance of the brazed joint decreasing by about 75% after a 240-h salt spray test.
Introduction
Copper is widely used in the manufacture of thermal components in refrigeration fields such as air conditioners and refrigerators, as well as in the aerospace and power industries, because of its excellent thermal and electrical conductivity. However, copper prices have long been rising as resources become increasingly scarce [1][2][3]. In addition, copper has a high density, which increases the weight of copper components; in particular, the use of copper parts in automobiles and airplanes increases overall weight, which contradicts the goals of energy saving and emission reduction [4,5]. Copper-aluminum composite metal joints are widely used in the aerospace, air conditioning, and household appliance refrigeration industries, among others, and have high application value [6,7]. Aluminum and its alloys have low density and good thermal and electrical conductivity [8]. Therefore, in some parts, aluminum can partially or completely replace copper products, thereby greatly reducing production costs and realizing the complementary advantages of the two metals [9,10].
The connection of Cu-Al dissimilar metals is the key technique. At present, Cu-Al dissimilar metals are mainly joined by mechanical connection, pressure welding, and brazing [11][12][13]. In mechanical connections, the chemically active aluminum forms a dense protective oxide film in air. The film has a very high resistivity and is very stable under normal conditions, which seriously degrades the electrical and thermal properties of the copper-aluminum connection point and results in its premature failure [14]. Pressure welding can produce Cu-Al dissimilar joints with good performance. However, pressure welding suffers from a series of problems, such as high welding cost, complex process, poor adaptability to complex weldments, and long production cycles, which restrict its application to dissimilar copper-aluminum welding [15,16]. In contrast, brazing has a series of advantages such as low cost, simple equipment, and suitability for large-scale production. It has gradually become a research focus in the industry and has good development prospects. Therefore, brazing is widely used to join the two dissimilar metals copper and aluminum, producing joints with high strength and good airtightness [17,18]. Copper and aluminum brazing mainly uses Zn-Al, Sn-Zn, and Al-Si filler metals [19][20][21]. Al-Si brazing materials have good plasticity and are easy to process and form. Compared with Zn-Al filler metal, Al-Si filler metal has better corrosion resistance, and the strength of its brazed joints is much higher than that of Sn-Zn filler metal, so Al-Si filler metal is more suitable for joining dissimilar copper-aluminum metals [22,23]. However, there are few reports on the corrosion behavior of Cu-Al joints brazed with Al-Si-based filler metals. Huang, Ye et al. mainly studied the effects of Zn-Al and Sn-Zn filler metals on the corrosion microstructure and shear strength as functions of alloy element content and salt spray test time [24,25].
At the same time, in the traditional brazing process, the filler metal and flux are usually applied together by placing the filler metal on the parent metal in advance or by sticking the flux onto the solid filler metal. This approach greatly increases the pre-welding steps and operation time and adds a variable to the brazing process, which affects the consistency and quality stability of the weld. In addition, to ensure brazing quality, excess flux is often added, which pollutes the air, harms the health of operators, and wastes flux [26]. Therefore, a new type of BAl88Si solder was prepared in this work, which meets the requirements of green manufacturing and is suitable for automatic and intelligent welding technology [27]. In addition, intermetallic compounds are the inevitable mesophase in copper-aluminum brazed joints. These hard and brittle intermetallic compounds with high resistivity greatly reduce the performance of brazed joints and affect their integrity and stability [28,29]. However, there are few reports on the interfacial microstructure and lattice mismatch of intermetallic compounds in BAl88Si-brazed Cu/Al joints.
Therefore, the lattice mismatch, texture orientation preference, and corrosion behavior of the interfacial phases in Cu-Al joints brazed with BAl88Si were analyzed. The innovative combination of the lattice mismatch and KAM value at the brazing interface with the corrosion behavior at the interface is helpful for studying the corrosion protection of copper-aluminum brazed joints in outdoor wet environments.
Test material
The substrate materials used in the induction brazing experiment are copper plates and aluminum plates, with sizes of 70 mm × 20 mm × 2 mm and 70 mm × 30 mm × 3 mm, respectively. The chemical compositions of the base materials and the new BAl88Si solder are shown in Table 1.
Experimental method
Pretreatment of the experimental substrates before induction brazing: the substrate surfaces were polished with 400#, 600#, and 800# sandpaper in sequence, and the polished plates were cleaned ultrasonically in alcohol. At the same time, a scale mark was made on the filler metal every 20 mm to quantify the filler metal consumed during welding. Induction brazing: the brazed joint is a lap joint, as shown in Figure 1. The lap length of the joint is 15 mm, and the substrates were fixed on a special fixture according to the lap length. The induction brazing machine was then started for heating with the current set to 240 A. When the temperature reached 580℃, it was held for 10 s. After cooling in air to room temperature, the residual brazing flux and the oxide film on the substrate surface were removed by mechanical cleaning. 10 mm × 10 mm joints were cut from the brazed lap joint by wire cutting, and the wire-cut joints were mounted and polished.
After induction brazing, electrochemical tests were first carried out on the Cu-Al joint. The scanning speed was set to 1 mV/s, and the scanning range was set to ±250 mV. Throughout the electrochemical experiment, an electrolyte of 3.5% sodium chloride in deionized water was used at room temperature. Second, the mounted samples and the welded joints were put into a salt spray test box, and sodium chloride solution with a concentration of 3.5% was poured into the saltwater tank. During the test, the temperature in the cabinet was held at 35℃, and welding samples were taken out at 0, 24, 96, 168, 240, and 480 h, respectively. Three samples were taken out at a time, rinsed with water, dried, and stored.
The morphology of the brazing joint was observed by scanning electron microscope (SEM). The composition and texture of each phase of the brazing joint were measured by energy dispersive spectrometer, electron backscatter diffraction (EBSD), and field emission scanning electron microscope. Transmission samples at the Cu/BAl88Si/Al interface were prepared by FIB technology for the first time, and the interface and phase boundary structure of the brazed joint were characterized and analyzed.
Results and discussion

Microstructure of the interface region of the brazed joint

Figure 2 shows scanning electron microscope images of the microstructure of the Cu/BAl88Si/Al brazed joint. Figure 2(a) shows the microstructure of the brazed joint near copper, Figure 2(b) the middle region of the brazed joint, and Figure 2(c) the microstructure near aluminum. It can be seen from Figure 2(a) and (c) that the solder alloy achieved good metallurgical bonding with the base metals without forming cracks or voids. Two dense intermetallic compound layers formed at the brazing interface near Cu, with a thickness of 7-15 µm. The intermetallic compounds away from the copper interface tend to grow into the brazed joint. In Figure 2(a)-(c), there is no obvious dense intermetallic compound at the brazing interface far from the copper side; that region is composed of a gray-white skeleton phase, a dendritic gray-black phase, and an obvious needle-like eutectic phase. Therefore, each phase in Figure 2 was subjected to energy spectrum point and surface analysis. The experimental results are shown in Table 1 and Figures 3-5. The surface scanning results show that Cu has fully diffused between the base metal and the brazed joint: Cu is mainly concentrated in the dense intermetallic compounds near the Cu-side interface, as well as in the gray-white skeleton phase and the needle-like eutectic phase of the brazed joint, while its content in the dendrites is relatively low. The surface scans also show that the total aluminum content is highest in the dendritic gray-black phase and decreases progressively in the needle-like eutectic phase, the gray-white skeleton phase, and the intermetallic compounds near the copper side. Si is obviously agglomerated in the dendritic gray-black phase and uniformly distributed in the other phases of the brazing seam. The point analysis results are shown in Table 2. According to the results in Table 2 and the related literature, it can be inferred that the intermetallic compound formed at the copper-side interface in Figure 2(a) is Cu9Al4 and the intermetallic compound beside Cu9Al4 is CuAl2; the gray-white skeleton phase in the brazed joint is presumed to be CuAl2, the gray-black dendrite phase is presumed to be α-Al, and the needle-like eutectic is the ternary eutectic α-Al + CuAl2 + Si [30]. At the brazing temperature, due to the high affinity between copper and aluminum, the molten Al-Si brazing alloy wets the surface of the copper substrate, and copper atoms diffuse into the Al-Si brazing alloy. During cooling, the Cu9Al4 phase with high copper content forms at the copper/brazing interface. As the temperature decreases further, the concentration of copper atoms decreases and, based on the Gibbs free energy, a second IMC, the CuAl2 phase, forms on the surface of the Cu9Al4 phase. In the brazing zone, copper atoms diffusing into the Al-Si solder form α-Al(Cu) solid solutions. With decreasing temperature, the copper atoms diffused into the brazing area react with the filler metal to form the ternary eutectic α-Al + CuAl2 + Si [31]. Figure 6 shows transmission electron microscope (TEM) images of the copper and brazed joint area.
Figure 6(a) shows a low-magnification TEM image, and Figure 6(b)-(e) shows electron diffraction patterns of different microscopic regions in Figure 6(a). There are two reaction products between the copper base metal and the BAl88Si solder at the brazing interface near the copper side. According to the SAED calibration in Figure 6(c) and (d), these two intermetallic compounds are CuAl2 and Cu9Al4, which is consistent with the energy spectrum analysis above. In Figure 6, the crystal planes of Cu9Al4 are marked by the calibrated diffraction points. The interplanar distances are d(CuAl2(310)) = 1.919 nm, d(Cu9Al4(660)) = 1.025 nm, and d(Cu9Al4(100)) = 0.7944 nm. Therefore, the lattice mismatch of the CuAl2(310)/Cu9Al4(660) crystal planes is 0.466, and that of the CuAl2(310)/Cu9Al4(100) interface is 0.586, so there is no good semi-coherent interface [32]. The crystallographic information of the interface region of the brazed joint was obtained by EBSD, further confirming the microstructure of the interface region, as shown in Figure 7. It can be seen from Figure 7(a) that the grains in the brazed joint have no obvious preferred orientation [33]. At the same time, the grain size at the copper interface of the brazed joint is significantly larger than that at the aluminum interface. Therefore, the grain size of the brazed joint was also analyzed, and the results are shown in Figure 7: among the three phases, the average grain size of CuAl2 is the smallest at 11.95 µm, while that of Al is the largest at 28.3 µm. This is mainly because the Al grains in the molten filler metal grow continuously during brazing, and the heat transfer capacity of Cu is higher than that of Al, so the thermal energy of the fixture is continuously transferred to the Cu-side interface filler metal after brazing. To further analyze the grain orientation characteristics of Cu, Al, and CuAl2, the pole figure of each phase was obtained by processing the EBSD data, as shown in Figure 8. Compared with the standard textures, S texture {123}<634>, Copper texture {112}<111>, and Brass texture {110}<112> are present in the brazed joint [34]. Figure 9(a) shows the KAM map corresponding to the inverse pole figure map of Figure 9(d), from which the distortion density and its distribution at the interface of the brazed joint can be assessed. The KAM value at the aluminum interface of the brazed joint is obviously higher than that at the copper interface, which means that the defect density at the aluminum interface is higher than that at the copper interface [35]. Figure 9(b) shows the distribution of grain boundaries in the brazed joint. There are high-angle grain boundaries at the Al interface of the brazed joint, which means that the grains at the Al interface are smaller and more uniform than those at the Cu interface. Figure 9(c) is the recrystallization map of the brazed joint. Since the KAM value at the Cu-side interface is lower than that at the aluminum-side interface, the grain boundary migration rate at the aluminum-side interface is higher than that at the Cu-side interface during recrystallization [36].
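To make the lattice-mismatch arithmetic reported above explicit, the following is a minimal Python sketch; the (d_large − d_small)/d_large convention is inferred from the reported ratios and is an assumption, since the paper does not state its formula.

```python
# Minimal sketch of the lattice-mismatch arithmetic. The convention
# (d_large - d_small) / d_large is inferred from the reported ratios and is
# an assumption, since the paper does not state its formula explicitly.

def lattice_mismatch(d1_nm: float, d2_nm: float) -> float:
    """Relative mismatch between two interplanar spacings."""
    d_large, d_small = max(d1_nm, d2_nm), min(d1_nm, d2_nm)
    return (d_large - d_small) / d_large

d_CuAl2_310 = 1.919    # nm, as reported in the text
d_Cu9Al4_660 = 1.025   # nm
d_Cu9Al4_100 = 0.7944  # nm

print(round(lattice_mismatch(d_CuAl2_310, d_Cu9Al4_660), 3))  # 0.466
print(round(lattice_mismatch(d_CuAl2_310, d_Cu9Al4_100), 3))  # 0.586
```

A common rule of thumb classifies mismatches above roughly 0.25 as incoherent, which is consistent with the non-coherent interface concluded here.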
Figure 10 shows the electrochemical test results for the brazing seam, copper, and aluminum, where Figure 10(a) is the Tafel curve, Figure 10(b) shows the impedance curves, and Figure 10(c) shows the electrochemical samples. From Figure 10(a) and (b), it can be found that copper has the highest corrosion potential, −0.096 V, and the smallest impedance radius. The corrosion potentials of Al and the brazing seam are both lower than that of Cu, at −0.831 and −0.757 V, respectively, and their impedance radii are larger. The corrosion current density of the brazed joint is 8.456 × 10^5. Therefore, when the copper-aluminum joint is placed in the salt spray test chamber, the corrosion resistance of the copper-side interface of the brazed joint is significantly higher than that of the aluminum-side interface [37].
Corrosion behaviors of brazed joints
After the neutral salt spray test for 480 h, the microscopic corrosion morphology and shear performance of the brazed joint in each period are shown in Figure 11. Figure 11(a) shows the microscopic corrosion morphology of the brazed joint at 480 h; pitting pits with a depth of 140−170 μm have clearly formed at the aluminum interface of the brazed joint, which is consistent with the conclusion that the corrosion resistance of the aluminum interface is low. Figure 11(b) shows the corrosion cracks of the brazed joint. It can be clearly observed that cracks have formed at the interface between the aluminum side and the brazed joint, and the corrosion cracks continue to spread along the α-Al. During the shear test, the stress corrosion crack tip in the diffusion layer becomes a stress concentration point, which severely reduces the mechanical properties of the joint [38]. This is because the radius of Cl− in the salt spray test is very small, so it can be adsorbed on the surface of the sample and combine with metal cations on the sample surface to form soluble chlorides, which then become pitting nuclei [39]. These pitting nuclei corrode the substrate surface and continue to grow, leading to anodic dissolution of the metal and the gradual formation of pitting pits; this is how the pits observed in Figure 11(a) are generated. Neutral salt spray corrosion is electrochemical corrosion, in which the metal is the anode and the other areas are the cathode, thus forming a corrosion cell.
The reduction reaction takes place at the cathode:

O2 + 2H2O + 4e− → 4OH−

Aluminum is oxidized at the anode:

Al → Al3+ + 3e−

As the redox reaction progresses, the anions generated at the cathode attract cations toward the cathode, while the cations generated at the anode attract Cl− toward the anode, and the corrosion continues. With the decrease of the pH value in local corrosion pits, the pitting expands and deepens, and the number and area of small pitting pits increase, gradually spreading to the surrounding area and developing into general corrosion [40].
Figure 11(d) shows the shear strength of the joints. The shear strength of the joint before the salt spray test was 64 MPa. With increasing salt spray corrosion time, the overall shear strength of the samples decreased significantly: it dropped by nearly 50% at 96 h and by nearly 75% at 240 h. The pronounced decrease in the first 240 h was mainly due to the corrosion cracks at the Al-side interface of the brazed joint, which most severely degraded the mechanical properties of the joint, consistent with the microstructure analysis above.
Conclusion
The experimental results show that the copper-aluminum joint brazed with the new green BAl88Si filler metal achieves good metallurgical bonding overall, with no cracks or voids. The interface on the brazed Cu side is composed of two dense intermetallic compound layers, Cu9Al4 and CuAl2, with a thickness of 7−15 µm. The lattice mismatch of the CuAl2(310)/Cu9Al4(660) crystal planes and of the CuAl2(310)/Cu9Al4(100) interface is 0.466 and 0.586, respectively; therefore, there is no good semi-coherent interface between these two intermetallic compounds. The region away from the brazed copper interface is mainly composed of the gray skeleton phase CuAl2, the dendritic gray-black phase α-Al, and the obvious needle-like ternary eutectic α-Al + CuAl2 + Si. The mean grain size of CuAl2 in the brazed joint is 11.95 µm, while that of Al is 28.3 µm. The KAM value at the aluminum interface of the brazed joint is obviously higher than that at the copper interface, so the defect density at the aluminum interface is higher than that at the copper interface. At the same time, due to the poor corrosion resistance of the aluminum interface, serious pitting corrosion and corrosion cracks appeared, and the shear performance of the brazed joint decreased by nearly 75% in the first 240 h. | 4,865 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
A Gram-Negative Bacterial Secreted Protein Types Prediction Method Based on PSI-BLAST Profile
Prediction of secreted protein types based solely on sequence data remains a challenging problem. In this study, we extract long-range correlation information and linear correlation information from the position-specific score matrix (PSSM). A total of 6800 features are extracted at 17 different gaps; then, 309 features are selected by a filter feature selection method based on the training set. To verify the performance of our method, jackknife and independent dataset tests are performed on the test set, with reported overall accuracies of 93.60% and 100%, respectively. Comparison of our results with the existing method shows that our method provides favorable performance for secreted protein type prediction.
Introduction
Protein secretion is a universal and important biological process that occurs in both eukaryotes and prokaryotes. In recent years, several secreted proteins have been identified as markers for disease typing and staging [1,2] or for drug development [3]. Most bacteria are able to secrete proteins, such as toxins and hydrolytic enzymes, into the extracellular environment. In this process, Gram-negative bacterial proteins have to be transported across two lipid bilayers, the cytoplasmic membrane (CM) and the outer membrane (OM) [4]. Proteins, including virulence factors involved in invasion, colonization, and survival within a host organism, are produced in pathogenic Gram-negative bacteria and are secreted to the cell exterior [5]. They play different roles in invaded eukaryotic cells and cause various diseases [4], so studying them is important for understanding the pathogenesis of diseases and for drug development.
Secretion systems are capable of specifically recognizing their substrates and facilitating secretion without disturbing the barrier function of the cell envelope. However, they differ tremendously with respect to their functional mechanism and complexity. So far, eight secretion systems have been found in Gram-negative bacteria and named from the type I (T1SS) to the type VIII secretion system (T8SS) according to the OM secretion mechanisms [4]. Correspondingly, proteins released via the T1SS are called type I secreted proteins (T1SPs), and other types of proteins are named by analogy with this.
In fact, the prediction of protein datasets, such as protein structural class prediction and subcellular localization prediction, is a typical and traditional pattern recognition problem. Generally, it is performed in three main steps: feature extraction, feature selection, and model selection for classification. Among the three, feature extraction is the most critical and challenging step. Amino acid composition (AAC) [6][7][8][9], pseudo-amino acid composition (PseAAC) [10][11][12], polypeptide composition [13], functional domain composition [14], the PSI-BLAST profile [15,16], and so on are all widely used feature extraction methods. In order to reduce the computational complexity and pick out the more informative features, a feature selection step is necessary. Principal component analysis (PCA) [17], SVM-RFE [18], and correlation-based feature selection (CFS) [19] have performed well in feature selection. Finally, choosing a powerful classification tool is also very important. Neural networks [8], support vector machines (SVM) [9,20], fuzzy clustering [21], and rough sets [22] are commonly used.

Table 1: Numbers of proteins of each type.

Type  Training set  Test set
T1SP  112           25
T2SP  99            29
T3SP  182           28
T4SP  62            22
T5SP  164           35
T7SP  48            33

In 2013, Yu et al. constructed a dataset of Gram-negative bacterial secreted proteins which contains 839 secreted proteins [23]. The proteins were collected from three data sources, namely, SwissProt, TrEMBL [24], and RefSeq [25]. They used an improved PseAAC consisting of amino acid composition (AAC) and autocovariance (AC) to extract information from the PSI-BLAST profile. A support vector machine (SVM) was used to distinguish the different types of secreted proteins in their paper, and the reported highest overall accuracy of their method is 90.12%.
Recently, some researchers have tried to improve the prediction accuracy on protein datasets by combining the dipeptide composition and the PSI-BLAST profile [15,16,26-28]. These methods mainly focused on single-column information extraction based on the hypothesis that two neighboring amino acids are independent, which may cause the neighboring correlation information to be lost.
In this study, we also extract evolutionary information from the PSI-BLAST profile based on a correlation method to predict Gram-negative bacterial secreted protein types. A feature set consisting of 309 features is selected by the correlation-based feature selection (CFS) method based on the training set. With the selected 309 features, the jackknife test and independent test are performed on the test set by SVM. The results show that our method is reliable for secreted protein type prediction.
Materials
Yu et al. constructed a dataset of Gram-negative bacterial secreted proteins which contains 839 secreted proteins with 25% sequence similarity. The dataset is divided into a training set and a test set: 667 secreted proteins belong to the training set and the other 172 belong to the test set. The protein numbers of each type are listed in Table 1. In fact, 16 T6SPs and 24 T8SPs were also collected from several data sources, as shown in the paper of Yu et al.; however, owing to their small numbers and high sequence similarity, they are only suitable for phylogenetic analysis of the evolutionary history [23]. Hence, only six types of Gram-negative bacterial secreted proteins are considered. The datasets can be downloaded from http://web.xidian.edu.cn/slzhang/paper.html.
Feature Extraction
The PSI-BLAST profile is usually represented by a position-specific score matrix (PSSM), which includes abundant evolutionary information. The PSSM is calculated by applying PSI-BLAST [29] with three iterations and a cutoff value of 10^−6 against the SwissProt dataset. Given a protein sequence, the PSSM gives the substitution probability of the amino acids along the sequence, position by position, with respect to all 20 amino acids. The PSSM is a log-odds matrix of size L × 20, where L is the length of the query amino acid sequence and 20 corresponds to the 20 amino acids. The (i, j)th entry of the matrix represents the score of the amino acid in the i-th position of the query sequence being mutated to amino acid type j during the evolution process.
In this study, the PSSM elements are scaled to the range 0 to 1 using the sigmoid function

f(x) = 1 / (1 + e^(−x)),

where x is the original PSSM value. For convenience, we denote P = (p_1, p_2, ..., p_L)^T as the PSSM of the query sequence with length L, where T is the transpose operator and p_{i,j} (i = 1, 2, ..., L; j = 1, 2, ..., 20) denotes the score of the amino acid in the i-th position of the sequence being mutated to the j-th amino acid during the evolution process.
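As a quick illustration of the scaling step, here is a small Python sketch; the toy PSSM values are placeholders, not real PSI-BLAST output.

```python
import numpy as np

def scale_pssm(pssm: np.ndarray) -> np.ndarray:
    """Scale raw PSSM log-odds scores to (0, 1) with the sigmoid 1/(1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-pssm))

# Toy PSSM for a 5-residue sequence (L x 20); real PSI-BLAST scores are small integers.
rng = np.random.default_rng(0)
raw = rng.integers(-8, 9, size=(5, 20)).astype(float)
scaled = scale_pssm(raw)
assert scaled.shape == (5, 20) and (scaled > 0).all() and (scaled < 1).all()
```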
The dimension of the feature vector is 400 × (G + 1), where G is the maximum residue gap considered. However, there may be irrelevant and redundant information among the extracted features, which can lead to poor prediction. Hence, a feature selection method is used.
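Since the excerpt does not spell out the correlation formula, the sketch below shows one plausible construction of the 400 × (G + 1) features: for each gap g, the 20 × 20 products of PSSM columns at residue pairs (i, i + g) are averaged along the sequence. The pairing scheme is an assumption for illustration only; the paper's exact formula may differ.

```python
import numpy as np

def gapped_features(pssm: np.ndarray, G: int) -> np.ndarray:
    """400 * (G + 1) features: for each gap g = 0..G, the mean product of
    PSSM columns j and k over residue pairs (i, i + g) along the sequence.
    The pairing scheme is an assumption; the paper's exact formula may differ."""
    L, _ = pssm.shape                         # L residues x 20 amino acids
    feats = []
    for g in range(G + 1):
        corr = pssm[: L - g].T @ pssm[g:]     # (20, 20) correlation at gap g
        feats.append((corr / (L - g)).ravel())
    return np.concatenate(feats)

pssm = np.random.rand(120, 20)                # a scaled PSSM stand-in
x = gapped_features(pssm, G=10)
assert x.shape == (400 * 11,)
```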
Feature Selection and the Selection of G
Feature selection reduces the dimensionality of the data and may allow learning algorithms to operate faster and more effectively. Wrapper and filter methods are the two main directions developed for feature selection. In order to determine the value of G, the CFS method [19] is applied to the (G + 1) × 400 features to filter out poorly informative ones, with G varying from 0 to 16. As shown in Hall's paper, as a filter method, CFS in many cases gives results comparable to the wrapper and, in general, outperforms the wrapper on small datasets [19].
Then, the jackknife test is performed on the training set based on the selected features. The overall accuracies on the training set at different values of G are shown in Figure 1, from which we find that the highest overall accuracy is achieved at G = 10. Hence, in this paper, G is set to 10. The numbers of selected features at each gap g when G = 10 are listed in Table 2. From Table 2, it is found that the most features (45) are selected at g = 2, while at g = 5 and g = 10 only 18 features are selected. When the gap is larger than 10, the long-range correlation of residues becomes weaker and weaker as the gap increases. This is consistent with the phenomenon shown in Figure 1 that the overall accuracy becomes stable when G is larger than 10.
Classification Algorithm Construction
SVM can often achieve superior classification performance in comparison with other classification algorithms. In this study, the support vector machine (SVM) classifier is employed as the classification algorithm. The radial basis function (RBF) is selected as the kernel function, defined as

K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²),

where γ is a kernel parameter and x_i and x_j are the feature vectors of the i-th and j-th proteins, respectively. The regularization parameter C (used to control the trade-off between allowing training errors and forcing rigid margins) and the kernel parameter γ are optimized by tenfold cross-validation on the training set. C is allowed to take a value of 2^−5, 2^−4, ..., 2^0, 2^1, ..., 2^15 and γ a value of 2^−15, 2^−14, ..., 2^0, 2^1, ..., 2^5. Various pairs of (C, γ) values are tried and the one with the best cross-validation accuracy is picked. The final classifier uses C = 4096 and γ = 0.5.
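The grid search described above can be reproduced with scikit-learn along the following lines; the random features and labels are stand-ins for the selected 309-dimensional vectors.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Random stand-ins for the selected 309-dimensional feature vectors and labels.
X = np.random.rand(200, 309)
y = np.random.randint(0, 6, size=200)            # six secreted-protein types

param_grid = {
    "C": [2.0 ** k for k in range(-5, 16)],      # 2^-5 .. 2^15
    "gamma": [2.0 ** k for k in range(-15, 6)],  # 2^-15 .. 2^5
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)  # tenfold CV; the full grid is slow
search.fit(X, y)
print(search.best_params_)  # the paper reports C = 4096 (2^12) and gamma = 0.5 (2^-1)
```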
Prediction Assessment
Independent dataset test, subsampling test, and jackknife test are usually used to examine the effectiveness of a predictor. In this study, the prediction quality is measured by the accuracy and the Matthews correlation coefficient (MCC):

Acc = (TP + TN) / (TP + FP + TN + FN),

MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)),

where TP is the number of true positives, FP is the number of false positives, TN is the number of true negatives, and FN is the number of false negatives, respectively.
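A small Python helper for these metrics follows; the FP/TN counts in the example are made up for illustration.

```python
import math

def accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    return (tp + tn) / (tp + fp + tn + fn)

def mcc(tp: int, fp: int, tn: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example with made-up counts for one type in a 172-protein test set.
print(accuracy(22, 2, 145, 3), mcc(22, 2, 145, 3))
```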
Results
To evaluate the performance of our method, the jackknife test was performed on the training set and the test set, respectively. The detailed prediction results are listed in Table 3. The overall accuracies are both higher than 85%. Comparing the six types, the prediction accuracies of the T1SP and T5SP types are both higher than 90% for the training set, whereas the accuracy of T4SP is only 67.74%, which may be due to the imbalance of this dataset. For the test set, the accuracies of the four types other than T1SP and T4SP are all higher than 90%. Excluding the T4SP type, the MCC values of the other five types are all higher than 0.9, which shows that our method is effective for Gram-negative bacterial secreted protein type prediction. In addition, the independent dataset test was performed on the test set: the model was trained by SVM on the training set and then used to predict the test set. An excellent result was obtained, with all types predicted correctly, as shown in Table 4. An overall accuracy of 100% is obtained by our method on the test data. Compared with the result of Yu et al. [23] obtained by the "one-to-one" algorithm, the overall accuracy obtained by our method is 9.88% higher. Compared with the "one-to-the-rest" algorithm result of Yu's method (2013), the overall accuracy of our method is 13.95% higher.
These results show that the extracted information, especially the information extracted from different columns of the PSSM, plays an important role in improving the prediction accuracy. In addition, combining information extracted at different gaps provides more useful cues for the prediction.
Conclusions
In recent years, more and more secreted proteins have been discovered in a variety of Gram-negative bacteria. Hence, determining the type of a newly discovered Gram-negative bacterial secreted protein is becoming an urgent research task. A dataset containing six types of Gram-negative bacterial secreted proteins was constructed by Yu et al. in 2013. In this paper, long-range correlation information and linear correlation information are extracted from the position-specific score matrix (PSSM). The optimal residue distance is determined based on the training set. Results of the jackknife test and the independent dataset test on the test set show that our method is effective in predicting Gram-negative bacterial secreted protein types. | 2,819.6 | 2016-08-02T00:00:00.000 | [
"Computer Science"
] |
A Convolutional Neural Network for SSVEP Identification by Using a Few-Channel EEG
The application of wearable electroencephalogram (EEG) devices is growing in brain–computer interfaces (BCI) owing to their good wearability and portability. Compared with conventional devices, wearable devices typically support fewer EEG channels. Devices with few-channel EEGs have been proven viable for steady-state visual evoked potential (SSVEP)-based BCIs. However, fewer EEG channels can cause the BCI performance to decrease. To address this issue, an attention-based complex spectrum–convolutional neural network (atten-CCNN) is proposed in this study, which combines a CNN with a squeeze-and-excitation block and uses the spectrum of the EEG signal as its input. The proposed model was assessed on a wearable 40-class dataset and a public 12-class dataset under subject-independent and subject-dependent conditions. The results show that whether using a three-channel EEG or a single-channel EEG for SSVEP identification, atten-CCNN outperformed the baseline models, indicating that the new model can effectively enhance the performance of SSVEP-BCIs with few-channel EEGs. Therefore, this SSVEP identification algorithm based on a few-channel EEG is particularly suitable for wearable EEG devices.
Introduction
As a brain-computer interface (BCI) allows the human brain to interact directly with the external environment and devices, it has shown great application potential in many fields, such as rehabilitation, sport and entertainment. By decoding electroencephalogram (EEG) signals detected on the scalp, a BCI can transfer human intentions into communication or control demands. Most EEG measurement devices are designed for medical or scientific research purposes. They are generally large, heavy and require complex operating procedures, making them unsuitable for daily use in real life. With the advancement of electronic technology, many wearable EEG devices have been designed and produced. Owing to their compact structure, light weight and good wearability, wearable EEG devices have gradually been used in BCI applications, such as robot control [1], remote monitoring [2] and emotion recognition [3]. Compared with conventional EEG devices, wearable EEGs typically support a lower number of channels. Multi-channel data generally achieve better BCI performance, as they contain more information. However, more EEG channels mean a longer preparation time and reduced comfort, which is the opposite of the intention of wearable BCIs. Moreover, reducing the number of electrodes effectively lowers the hardware cost of wearable BCIs. Therefore, few-channel EEGs are an attractive option for wearable BCIs. On the other hand, classification performance is a key factor in BCI systems because it is related to the usability and usefulness of the BCI. For wearable BCIs, it is necessary to use few-channel EEGs to achieve comparable performance to enhance system practicality.
Steady-state visual evoked potential (SSVEP) is a classic BCI paradigm that has garnered substantial attention and has been widely used, as it supports multiple instructions, achieves high identification accuracy and requires little or no training [4][5][6]. In SSVEP-BCIs, the use of six or more channels of EEG signals from the occipital lobe has been proven to be sufficient to achieve excellent decoding performance [7]. Therefore, wearable EEG devices are considered suitable for the SSVEP paradigm, and they are used in many SSVEP-BCI studies. Zhu et al. used an eight-channel wearable SSVEP-BCI system to collect a dataset for developing decoding algorithms [8]. Na et al. designed an eight-channel wearable low-power EEG acquisition device for four-target SSVEP recognition [9]. On the other hand, some researchers paid attention to using few-channel EEGs for SSVEP decoding. Ge et al. designed a dual-frequency biased coding method and used a three-occipital-channel EEG to decode 48 targets with an accuracy of 76% in a 2 s time window [10], proving the availability of the SSVEP-BCI with few EEG channels. Moreover, several studies have shown that a single-channel EEG signal is feasible for few-target SSVEP detection [11][12][13][14][15]. It is clear that SSVEP identification based on few-channel EEGs is feasible in wearable BCIs.
As far as SSVEP identification is concerned, numerous algorithms have been developed [16,17]. Among them, canonical correlation analysis (CCA) is one of the mainstream fundamental approaches, which is free of training and determines the SSVEP target based on the correlation between a reference and the EEG signal [18]. On the basis of CCA, many variant algorithms have been developed, such as extended CCA (eCCA) [19] and filter bank CCA (FBCCA) [20]. Although traditional methods achieve good classification performance, the features extracted in these methods are relatively simple, which may not comprehensively represent EEG signals. In the last few years, deep learning technology has been rapidly employed in SSVEP decoding due to its capability to integrate feature extraction and classification. In particular, the convolutional neural network (CNN) is the most utilized neural network [21][22][23], as it offers advantages over other standard deep neural networks. A CNN was first applied to SSVEP identification by Cecotti et al. [24]. Nguyen et al. employed the fast Fourier transform (FFT) to extract features from a single-channel EEG and then used a one-dimensional CNN to detect the SSVEP frequency [13]. Ravi et al. developed CNN models with the spectrum features derived from EEG signals as the input and found that the CNN based on complex spectrum features performed better than that based on magnitude spectrum features in the SSVEP-BCI [25]. Xing et al. constructed frequency domain templates based on prior knowledge of SSVEP and used a CNN for signal classification [26]. Zhao et al. fused the filter bank technique with a CNN to develop a filter bank CNN (FBCNN) based on frequency domain SSVEP data [27]. Similarly, the combination of the filter bank technique and a CNN can be used for the analysis of time-domain SSVEP data [28]. Guney et al. proposed a deep neural network architecture consisting of 4 convolutional layers for processing time-domain EEG signals to predict SSVEP targets [29]. With the time-frequency sequences transformed from EEG signals, Li et al. developed a dilated shuffle CNN (DSCNN) to realize SSVEP classification [30]. In general, CNN-based methods tend to surpass the traditional methods. The convolutional layers in CNNs are considered to exploit the local spatial coherence inherent in SSVEP signals, making the models suitable for SSVEP analysis [31]. However, deep-learning-based methods typically require a lot of data for training and fine-tuning to achieve good results. In wearable BCIs, the reduction in data caused by the decreased EEG channels probably leads to a poor performance of deep-learning-based methods. Therefore, SSVEP identification based on few-channel EEGs remains challenging.
In order to implement SSVEP identification using a few-channel EEG, a CNN-based decoding model is proposed in this study. The CNN model follows a lightweight design to reduce the training data requirements associated with model complexity. In addition, considering the limited spatial information obtained from a few-channel signal, an attention mechanism is introduced to enhance the model's ability to represent spatial information. Two SSVEP datasets were used for the method evaluation, including a 40-class dataset collected by a wearable EEG device and a public 12-class dataset collected by a conventional apparatus. Three-channel and single-channel data were used for comparison to evaluate the effectiveness of the proposed model on a few-channel EEG.
Dataset 1
In this study, a wearable ESPW308 EEG device (BlueBCI Ltd., Beijing, China) was used to collect an SSVEP dataset from six healthy subjects (three females and three males; average age: 25.33 ± 0.82 years) with normal or corrected-to-normal vision. This lightweight EEG apparatus is capable of acquiring eight-channel data from the occipital area (PO3, PO4, PO5, PO6, POz, O1, O2 and Oz). All subjects had no experience with SSVEP-BCIs before this experiment. The experiment was approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster.
A 40-target speller was used to induce the SSVEP, where the visual stimulation interface was a 4 × 10 flicker matrix displayed on a 24.5-inch LCD monitor with a full 1080 p resolution. Each flicker was presented as a 165 × 165-pixel square marked with a character. The flickers were encoded by the joint frequency and phase modulation (JFPM) method [32], in which the frequency in Hz and the phase in π of each stimulus are determined by its position (i, j), where (i, j) represents the flicker located in the i-th row and j-th column. Each subject completed 10 blocks of the experiment, with each block encompassing 40 trials that corresponded to all 40 targets. A trial contained a 1 s cue, a 3 s visual stimulation and a 1 s rest. It is noted that a strategy of simplifying the system setup was adopted in this experiment to reduce the experimental preparation time and enhance the practicality of the BCI in real-life applications [33]. As a result, the signal quality of this dataset might be degraded.
EEG signals from three occipital channels, O1, O2 and Oz, in Dataset 1 were selected as the three-channel EEG, and Oz was selected for the single-channel EEG.
Dataset 2
To validate the effectiveness of the decoding model on EEG signals collected by a conventional device, a public SSVEP dataset was also used in this study. This dataset, which was presented by Nakanishi et al. [34], was collected with a BioSemi ActiveTwo EEG system (Biosemi Inc., Amsterdam, Netherlands) from ten healthy subjects (one female and nine males; average age: 28 years). With this acquisition system, eight-channel EEGs were recorded from the occipital area. During the experiment, a 12-target speller in the form of a 4 × 3 matrix was used. Each flicker was depicted as a 6 × 6-cm square on a 27-inch LCD monitor. The stimulus frequency in Hz and phase in π of each flicker were defined by the stimulus position, with the frequency given by f(i, j) = 0.5i + 2j + 6.75, where (i, j) represents the flicker in the i-th row and j-th column. Each subject performed a total of 15 experimental blocks, with each block consisting of 12 trials. A trial comprised a 1 s cue and a 4 s visual stimulation.
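The 12 stimulus frequencies follow directly from the formula; the short sketch below generates them (the phase formula is not given in this excerpt, so only frequencies are computed).

```python
# Reproduce the 12 stimulus frequencies of Dataset 2 from
# f(i, j) = 0.5 * i + 2 * j + 6.75 for the 4-row x 3-column speller
# (1-based indices). The phase formula is not given in this excerpt.
freqs = {(i, j): 0.5 * i + 2 * j + 6.75 for i in range(1, 5) for j in range(1, 4)}
for (i, j), f in sorted(freqs.items()):
    print(f"target ({i},{j}): {f:.2f} Hz")   # spans 9.25-14.75 Hz
```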
In Dataset 2, Oz and the two adjacent electrodes were selected for the three-channel EEG, and Oz was selected for the single-channel EEG.
SSVEP Identification

Data Processing
For the original EEG signals, a fourth-order Butterworth filter is applied to remove noise and artifacts as much as possible. Due to the distinct features of SSVEP signals in the frequency domain, along with the automated feature extraction capabilities of neural networks, transforming time-domain signals into the frequency domain can enhance SSVEP identification [27]. Furthermore, deep learning models utilizing frequency-domain inputs generally have a relatively simple structure [31]. Therefore, the EEG time series is transformed into its frequency-domain counterpart through an FFT. Specifically, the time-domain signal is decomposed after the FFT as

X = FFT(x) = Re(X) + j · Im(X),

where x is the input EEG data in the time domain and Re(X) and Im(X) are the real and imaginary parts of its spectrum.
Since FFT data have real and imaginary parts, a magnitude spectrum and a complex spectrum can be obtained depending on the combination of the real and imaginary parts. The magnitude spectrum retains the amplitude information of the Fourier spectrum and removes the phase information, while the complex spectrum retains both types of information. Previous studies have shown that CNNs using complex spectrum features as inputs outperform those based on magnitude spectrum features, as they can extract more discriminative information [25,27]. Therefore, the complex spectrum is used as the input in this study. Specifically, the real and imaginary parts of each channel are separated to form two vectors, which are then concatenated into a feature vector as the input for the neural network. Taking the three-channel EEG signal as an example, the complex FFT data are reconstructed as

X_c = [Re(FFT(x_c)), Im(FFT(x_c))], c = 1, 2, 3,

where, for each channel c, the real part is placed as the first half and the imaginary part as the second half of the feature vector. This input is consistent in form with previous studies [25,27,35,36].
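A NumPy sketch of this input construction follows; the 3-35 Hz band and the 250 Hz sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def complex_spectrum_input(eeg: np.ndarray, fs: float,
                           f_lo: float = 3.0, f_hi: float = 35.0) -> np.ndarray:
    """Per channel, concatenate [Re(FFT), Im(FFT)] over a frequency band.
    The 3-35 Hz band and the sampling rate below are illustrative assumptions."""
    spec = np.fft.rfft(eeg, axis=1)                    # (channels, n_bins)
    f = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=1)

segment = np.random.randn(3, 250)                      # 3 channels, 1 s at 250 Hz
print(complex_spectrum_input(segment, fs=250.0).shape) # (3, 2 * n_selected_bins)
```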
Network Structure
In this study, a CNN architecture called attention-based complex spectrum-CNN (atten-CCNN) is proposed, which integrates an attention mechanism with a CNN for SSVEP classification using a few-channel EEG. The architecture of atten-CCNN is depicted in Figure 1; it was inspired by the complex spectrum-CNN (CCNN) model [25] and incorporates the attention mechanism from the squeeze-and-excitation (SE) network [37]. The atten-CCNN model consists of two stacked convolution-attention blocks for feature extraction, followed by a fully connected layer that performs a non-linear transformation on the features and a dense layer with a softmax operation that is employed for classification. In each convolution-attention block, a convolutional layer is followed by an activation operation, a batch normalization operation and a dropout operation. Then, a filter-wise attention layer is connected to the convolutional layer and a dropout operation. Additionally, an adjusted connection scheme is designed between the convolutional layer and the attention layer.
Regarding the network hyperparameters, the input shape for atten-CCNN is denoted as N_ch × N_sp, where N_ch and N_sp are the dimensions of the complex FFT data. The first convolutional layer, Conv1, with a kernel size of [32 × 1], calculates the contribution weights among the selected EEG channels. The second convolutional layer, Conv2, performs spectral-level representation and has a fixed kernel size of [1 × 20]. It is worth noting that Conv1 uses the "valid" padding mode while Conv2 uses the "same" padding mode to help reduce the model complexity while preserving the learned convolutional features. Both convolutional layers have 32 filters, providing sufficient power for feature extraction while keeping the number of network parameters relatively low. The first dense layer consists of 144 neurons with the "ReLU" activation function. The bottom dense layer applies a linear transformation to the features, and a softmax operation is used with an output shape that corresponds to the number of targets in the SSVEP dataset.
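A minimal PyTorch sketch of the convolution/dense backbone follows. One assumption to flag: the text reports a [32 × 1] kernel for Conv1, but, following the CCNN convention that Conv1 combines the EEG channels, the sketch uses a kernel spanning all N_ch input channels. The SE attention layers are sketched separately in the next subsection.

```python
import torch
import torch.nn as nn

class CCNNBackboneSketch(nn.Module):
    """Backbone per the stated hyperparameters: 32 filters per conv layer,
    a "valid" channel-combining kernel, a [1 x 20] "same" spectral kernel,
    and a 144-unit dense layer. SE attention is sketched in the next subsection."""
    def __init__(self, n_ch: int, n_sp: int, n_classes: int, drop: float = 0.25):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=(n_ch, 1))                 # channel combination
        self.conv2 = nn.Conv2d(32, 32, kernel_size=(1, 20), padding="same")  # spectral features
        self.bn1, self.bn2 = nn.BatchNorm2d(32), nn.BatchNorm2d(32)
        self.drop = nn.Dropout(drop)
        self.fc1 = nn.Linear(32 * n_sp, 144)
        self.out = nn.Linear(144, n_classes)   # softmax applied at loss/inference time

    def forward(self, x):                      # x: (batch, 1, n_ch, n_sp)
        x = self.drop(self.bn1(torch.relu(self.conv1(x))))
        x = self.drop(self.bn2(torch.relu(self.conv2(x))))
        x = torch.relu(self.fc1(x.flatten(1)))
        return self.out(x)

model = CCNNBackboneSketch(n_ch=3, n_sp=66, n_classes=40)
print(model(torch.randn(8, 1, 3, 66)).shape)   # torch.Size([8, 40])
```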
Filter-Wise Attention Mechanism
As the key component of atten-CCNN, the attention mechanism is employed to increase the network's representation space by reweighting the contribution of different feature maps (filters) in each convolutional layer. Specifically, the SE block is used in this study. Owing to its lightweight structure, the SE block only slightly increases the computational load and complexity of the original models. Moreover, the SE block has high compatibility and can be integrated into existing network architectures without major modifications. In terms of the connection between the SE attention and the convolutional layer, as shown in Figure 2, an adjusted connection scheme was designed rather than simply using the SE block as a plug-and-play module. In this design, the key vector used for calculating the attention vector is derived from the dropout-passed feature maps of the last convolutional layer, while the feature maps that do not undergo dropout are used to compute the final attention-enhanced data. This structural design retains the information from the original feature maps while mitigating overfitting during attention weight calculation.
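The following PyTorch sketch shows an SE-style filter-wise attention layer with the adjusted wiring approximated by a separate `key` input; it is an illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEAttentionSketch(nn.Module):
    """SE-style filter-wise attention: global-average-pool each of the C feature
    maps, pass through a bottleneck MLP with a sigmoid, and rescale the maps.
    The adjusted wiring is approximated by a separate dropout-passed `key`."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, key: torch.Tensor = None) -> torch.Tensor:
        # Weights are computed from the dropout-passed `key`; the un-dropped
        # feature maps `x` are what get rescaled, as described in the text.
        squeezed = (key if key is not None else x).mean(dim=(2, 3))  # (B, C)
        w = self.fc(squeezed)                                        # (B, C)
        return x * w[:, :, None, None]

feat = torch.randn(8, 32, 1, 66)                   # conv feature maps
att = SEAttentionSketch(32)
print(att(feat, key=F.dropout(feat, 0.25)).shape)  # torch.Size([8, 32, 1, 66])
```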
Training Hyperparameters
In this study, both subject-independent and subject-dependent strategies were tested on the two datasets. For the subject-independent strategy, leave-one-person-out cross-validation was used. If a dataset contained n subjects, the model underwent training utilizing the data of n − 1 subjects and was subsequently evaluated using the data from the remaining subject. In order to implement the model training and testing, all the EEG data were split into non-overlapping segments according to the data length being tested. The final parameters for the model were as follows: learning rate (0.001), dropout ratio (0.25), L2 regularization (0.001), number of epochs (120) and batch size (256).
For the subject-dependent strategy, 10-fold cross-validation was performed on each subject's data. The EEG data of a subject were first segmented into non-overlapping segments, and then the segments were divided into 10 sets randomly. Training data were formed by taking nine sets and leaving one set out for testing. Except for the batch size (16), the other parameters were the same as those in the subject-independent strategy. Additionally, SGD was selected as the optimizer for all the models in all the training strategy modes.
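The two evaluation schemes can be set up with scikit-learn splitters roughly as follows; the array shapes and subject counts are placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut

# Placeholders: 600 EEG segments from 6 subjects, 100 segments each.
X = np.random.randn(600, 3, 66)
subjects = np.repeat(np.arange(6), 100)

# Subject-independent: leave-one-person-out cross-validation.
for train_idx, test_idx in LeaveOneGroupOut().split(X, groups=subjects):
    pass  # train on 5 subjects, test on the held-out subject

# Subject-dependent: 10-fold cross-validation within one subject's segments.
one_subject = X[subjects == 0]
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(one_subject):
    pass  # 9 folds for training, 1 fold for testing
```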
Performance Evaluation

Baseline Methods
To verify the effectiveness of the proposed atten-CCNN model, CCNN [25], EEGNet [38] and SSVEPformer [36] were used as the baseline models for comparison in this study.
CCNN is mainly composed of two convolutional layers in series, using the complex spectrum of the EEG signal as the input. The first convolutional layer is used for spatial filtering and operates on the channels of the input, while the second convolutional layer is for temporal filtering and extracts features along the frequency components. Both convolutional layers are followed by a batch normalization layer, a ReLU activation layer and a dropout layer. Finally, a fully connected layer is employed in CCNN for classification.
EEGNet is a popular CNN-based architecture for EEG decoding, and it takes time-domain data as the input. EEGNet adopts a four-layer compact structure. The first layer is a convolutional layer, which simulates band-pass filtering on each channel. Next is a spatial filtering layer that weights the data through depth-wise convolution. The third layer is a separable convolutional layer that extracts category information. The last layer is a fully connected layer for classification.
SSVEPformer is one of the state-of-the-art models for SSVEP identification, which also takes the complex spectrum as the input and consists of three core components: channel combination, the SSVEPformer encoder and a multilayer perceptron (MLP) head. Firstly, the channel combination block performs weighted combinations of the input through convolutional layers. Then, the SSVEPformer encoder utilizes two sequential sub-encoders to extract features, each of which includes a CNN and a channel MLP. At last, the MLP head block uses two fully connected layers to implement classification.
Metrics
Two metrics were employed to assess the effectiveness of the models: classification accuracy and information transfer rate (ITR). Accuracy is defined as the proportion of trials in which the model makes a correct identification. ITR is a frequently used parameter for evaluating BCI performance, which is estimated as follows:

ITR = (60 / T) × [log2 K + P log2 P + (1 − P) log2((1 − P) / (K − 1))],

where K is the number of targets, P is the classification accuracy and T is the target selection time in seconds. In addition to the data length of the SSVEP signal, the target selection time included a gaze shift time in this study to imitate the actual use of the BCI. The gaze shift time was set to 0.55 s according to Chen et al.'s study [20].
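A small Python helper for this ITR formula, with the gaze-shift time folded into T:

```python
import math

def itr_bits_per_min(K: int, P: float, T: float) -> float:
    """Wolpaw ITR in bits/min for K targets, accuracy P and selection time T (s).
    T should include the gaze-shift time (0.55 s here) plus the data length."""
    if P <= 1.0 / K:
        return 0.0
    bits = math.log2(K) + P * math.log2(P)
    if P < 1.0:
        bits += (1.0 - P) * math.log2((1.0 - P) / (K - 1))
    return bits * 60.0 / T

# Example: 12 targets, 85% accuracy, 1 s data length + 0.55 s gaze shift.
print(round(itr_bits_per_min(12, 0.85, 1.55), 2))  # ~95 bits/min
```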
Dataset 1
Figure 3 illustrates the classification accuracy and ITR on the 40-class wearable SSVEP dataset with a three-channel EEG achieved by the four decoding models. Furthermore, the paired t test was employed to ascertain the difference in the metrics between atten-CCNN and CCNN to verify the improvement of the proposed method. The results show that under both the subject-independent and subject-dependent conditions, atten-CCNN performed better than CCNN, with a significant difference at all data lengths. Moreover, the advantage of atten-CCNN over CCNN basically expanded as the data length of the EEG signal increased. At a data length of 1 s, the subject-independent accuracy of atten-CCNN was 37.38%, which was 4.86% ahead of CCNN, and its subject-dependent accuracy was 43.21%, leading CCNN by 5.17%. Compared with subject-independent strategies, deep learning models generally achieve better performance under subject-dependent strategies [25,27,39,40]. This finding was consistent in this study. As for EEGNet, it performed well on the short-time EEG signals in this dataset, although it was found to be not as good as CCNN in a previous study [41]. Indeed, EEGNet outperformed atten-CCNN and CCNN at 0.2 s. But when the data length exceeded 0.4 s, atten-CCNN surpassed EEGNet in both strategies, and their gap enlarged with the data length. For SSVEPformer, it seems unsuitable for this dataset, as it performed the worst among the four models regardless of the training strategy.
Dataset 1
Figure 3 illustrates the classification accuracy and ITR achieved by the four decoding models on the 40-class wearable SSVEP dataset with a three-channel EEG. Furthermore, a paired t-test was employed to ascertain the difference in the metrics between atten-CCNN and CCNN to verify the improvement brought by the proposed method. The results show that under both the subject-independent and subject-dependent conditions, atten-CCNN performed better than CCNN, with a significant difference at all data lengths. Moreover, the advantage of atten-CCNN over CCNN generally expanded as the data length of the EEG signal increased. At a data length of 1 s, the subject-independent accuracy of atten-CCNN was 37.38%, which was 4.86% ahead of CCNN, and its subject-dependent accuracy was 43.21%, leading CCNN by 5.17%. Deep learning models generally achieve better performance under subject-dependent strategies than under subject-independent strategies [25,27,39,40], and this finding was consistent in this study. As for EEGNet, it performed well on the short-time EEG signals in this dataset, although it was found to be not as good as CCNN in a previous study [41]. Indeed, EEGNet outperformed atten-CCNN and CCNN at 0.2 s, but when the data length exceeded 0.4 s, atten-CCNN surpassed EEGNet under both strategies, and their gap enlarged with the data length. SSVEPformer seems unsuitable for this dataset, as it performed the worst among the four models regardless of the training strategy.
The performance of the models on Dataset 1 when using a single-channel EEG for decoding is shown in Figure 4. The relationship between atten-CCNN and CCNN with the single-channel EEG was similar to that with the three-channel EEG. For the long-time EEG signals in this study, the advantages of atten-CCNN tended to be more obvious. However, there was no significant difference between atten-CCNN and CCNN at 1 s under the subject-independent conditions, which seems to indicate that the large difference at this time point was mainly caused by individual subjects. On the other hand, the results show that EEGNet performed well in single-channel EEG decoding. In the subject-independent situation, EEGNet and atten-CCNN performed similarly at various data lengths, outperforming CCNN. In the subject-dependent scenario, EEGNet performed the best among the four models on the short-time signals, and atten-CCNN achieved the same performance as EEGNet at 1 s. SSVEPformer, in contrast, did not demonstrate superiority in single-channel EEG decoding.
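For reference, the ITR values reported in this study follow the standard Wolpaw formula used throughout the SSVEP-BCI literature. The minimal Python sketch below shows the calculation; the 0.5 s gaze-shift interval added to the stimulus time is an illustrative assumption and may differ from the exact selection time used in this study, so the resulting numbers are not expected to reproduce the reported values exactly.

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, stim_time_s: float,
                     gaze_shift_s: float = 0.5) -> float:
    """Wolpaw ITR: bits per selection, scaled to selections per minute.

    n_targets: number of stimulus classes (e.g., 40 for Dataset 1)
    accuracy: classification accuracy P in (0, 1]
    stim_time_s: EEG data length used for one selection
    gaze_shift_s: assumed inter-selection gaze-shift interval
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    selection_time = stim_time_s + gaze_shift_s
    return bits * 60.0 / selection_time

# Example: 40 classes at 43.21% accuracy with a 1 s data length.
print(f"{itr_bits_per_min(40, 0.4321, 1.0):.2f} bits/min")
```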
Dataset 2
Since CCNN, EEGNet and SSVEPformer have been tested and compared on Dataset 2 in previous studies [36,42,43], only atten-CCNN and CCNN were compared in this part, to emphasize the changes in the performance of the proposed model relative to its original model. Figure 5 illustrates the performance of atten-CCNN and CCNN on Dataset 2 with a three-channel EEG. The performance difference between atten-CCNN and CCNN showed different trends under the two conditions. Under the subject-independent conditions, the advantage of atten-CCNN expanded as the data length increased, and the gap between them was significant at all data lengths. However, in the subject-dependent case, atten-CCNN only had an advantage on the short-time data; when the time window was greater than 0.6 s, the improvement of atten-CCNN was not significant. In terms of the best decoding performance with the three-channel EEG, atten-CCNN achieved a subject-independent accuracy of 64.17% at 1 s, which was 2.35% higher than CCNN, and the highest ITR was 61.55 bits/min at 1 s. Under the subject-dependent conditions, by contrast, the decoding models achieved the maximum ITR at a data length of 0.8 s, and the result of atten-CCNN was 91.20 bits/min.
Figure 6 illustrates the accuracy and ITR of the two models with a single-channel EEG on Dataset 2 under the two conditions. Clearly, the testing strategy had a great impact on the performance of the models in single-channel EEG decoding. Under the subject-independent conditions, atten-CCNN outperformed CCNN at all data lengths; the largest accuracy difference between the two models, 2.24%, occurred at 0.4 s. In contrast, in the subject-dependent situation, the new model only significantly outperformed CCNN at 0.4 s, with an accuracy improvement of 4.20%.
Comparing the decoding performance of the models on the two datasets, it was observed that the subject-independent improvement brought by atten-CCNN broadened with the increase in the data length, and this improvement was numerically greater on the wearable dataset. Under the subject-dependent conditions, however, the two datasets presented different results: atten-CCNN showed a greater improvement over time on Dataset 1, while its effective improvement on Dataset 2 only occurred in a short time window, such as 0.4 s.
Effect of Number of EEG Channels
It can be seen from Figures 3-6 that, for the short-time EEG signals, the improvement of atten-CCNN relative to CCNN was significant on both datasets, whether under the subject-independent or subject-dependent strategy. In order to further explore the relationship between the improvement of the proposed model and the number of EEG channels, the decoding results on few-channel EEGs were compared with those on eight-channel data, because eight channels are considered sufficient for multi-target SSVEP identification. Figure 7 shows the accuracy difference between atten-CCNN and CCNN when the data length was 0.4 s. Whether a single-channel, three-channel or eight-channel EEG was used for decoding, the accuracy improvement of atten-CCNN compared to CCNN was significant. According to Figure 7, the magnitude of the improvement of atten-CCNN did not appear to be strongly related to the number of EEG channels. There is no doubt that the algorithms were generally more effective as the number of channels increased. Nevertheless, the proposed model maintained, and even enlarged, its advantage as the number of channels increased.
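As a minimal sketch of the significance test used above, the model comparison can be run as a paired t-test over per-subject accuracies at a given data length and channel setup. The arrays below are hypothetical placeholders, not values from this study.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject accuracies (%) for one data length and channel setup.
acc_atten_ccnn = np.array([44.1, 39.8, 47.2, 41.5, 45.0, 41.7])
acc_ccnn       = np.array([39.0, 35.2, 42.8, 36.9, 40.1, 37.4])

t_stat, p_value = ttest_rel(acc_atten_ccnn, acc_ccnn)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```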
Discussion
A CNN-based model named atten-CCNN was developed by fusing an SE block into a complex-spectrum CNN. Plenty of studies have shown that excellent SSVEP identification can be achieved by decoding many EEG channels. However, wearable BCIs equipped with a large number of electrodes are neither appropriate nor feasible in real-life applications. Therefore, we evaluated the performance of the classification methods in this study with few-channel EEGs. Similarly, although long signal segments substantially elevate classification accuracy, the ITR generally decreases; we therefore selected EEG segments within 1 s for evaluation. The results show that atten-CCNN outperformed the baseline methods on both the wearable SSVEP dataset and the conventional dataset, in both subject-independent and subject-dependent scenarios.
CCNN was chosen as the backbone of the model developed in this study because of its simplicity and scalability, which give the flexibility to modify the network structure. We increased the number of filters in the convolutional layers to extract various types of information from the EEG data, which also suits the filter-level attention mechanism. Additionally, we added a fully connected layer after the feature flattening to further increase the learning capacity of the model. As the key difference between the new model and the original CCNN, we added an SE block after each convolutional layer. SE blocks are commonly used attention modules that are combined with existing models to improve performance by concentrating on essential features while restraining non-essential ones. Since their emergence, SE blocks have been used in EEG analysis for different tasks [44-47]. In a conventional convolutional layer with multiple spatial filters, each spatial filter uses a local receptive field, so the output is not affected by contextual information outside that region. In order to leverage the information beyond the local receptive field, the squeeze part of the SE block applies global average pooling to produce channel-wise statistics. The excitation part then applies a gate mechanism consisting of two fully connected layers to learn the nonlinear relationships between channels, thereby exploiting the information obtained in the squeeze operation and fully capturing channel-wise dependencies. Finally, the SE block takes the weights output by the excitation operation as the importance of the channels and recalibrates the original features by reweighting the features of each channel, thus boosting feature discriminability and improving the network's performance. The experiments in this study demonstrate that the SE block could improve the performance of the CNN in SSVEP identification with few-channel EEGs. However, SSVEPformer, which also involves an attention mechanism, performed poorly in this study. A potential factor contributing to this result may be its inherently complex network architecture, which may not be suitable for the analysis of few-channel data. Although SSVEPformer has a channel combination block designed to process multi-channel EEGs, this block may not have the expected effect when dealing with few-channel data. Similarly, a few-channel signal may limit the feature extraction ability of the networks. Based on the experimental results, this study verified the hypothesis in Chen et al.'s study that a limited amount of data is a major challenge for SSVEPformer to maintain good performance [36]. In contrast, the results show that the compact EEGNet performed well on small-dimensional data, especially short-time, single-channel EEGs, demonstrating its potential in single-channel decoding. On the other hand, the improvement of atten-CCNN relative to the original model was more noticeable on Dataset 1 than on Dataset 2. This may be due to the difference in signal quality between the two datasets: as Dataset 1 was collected by a wearable EEG device under a simplified system setup, its signal quality is lower than that of Dataset 2.
The noise mixed in the EEG signal interferes with the feature extraction ability of the convolutional layer, but the SE block has the function of strengthening important features and weakening noise or unimportant features. Therefore, we believe that the atten-CCNN model can perform well even for EEG signals with a low signal-to-noise ratio.
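The recalibration described above can be made concrete with a short PyTorch sketch of an SE block inserted after a convolutional layer. This is a minimal illustration of the squeeze (global average pooling) and excitation (two fully connected layers with a sigmoid gate) operations; the filter count and reduction ratio are assumptions, not the exact atten-CCNN configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation over the filter (channel) dimension."""
    def __init__(self, n_filters: int, reduction: int = 4):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.excite = nn.Sequential(            # gate: FC -> ReLU -> FC -> sigmoid
            nn.Linear(n_filters, n_filters // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_filters // reduction, n_filters),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # reweight (recalibrate) each feature map

# Usage after a convolutional layer, e.g. 16 filters on a complex-spectrum input.
feat = torch.randn(8, 16, 3, 220)  # (batch, filters, EEG channels, spectral bins)
recalibrated = SEBlock(16)(feat)
```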
The decoding performance on a three-channel EEG and a single-channel EEG was compared in this study. There is no doubt that the three-channel results were better than the single-channel results, but the gap between them was much larger on the 40-class dataset than on the 12-class dataset. In the case of a large number of targets, a single-channel EEG seems to be insufficient for SSVEP identification, especially at short time windows, whereas a three-channel EEG with a reasonable data length can cope with the recognition of a large number of targets. For fewer targets, such as 12, a single-channel EEG is an attractive option, as the gap between three-channel and single-channel decoding is not particularly large. Owing to the improved decoding capabilities brought by deep learning, SSVEP-BCIs based on few-channel EEGs are becoming feasible and practical for daily-life applications. In addition, although the proposed atten-CCNN was designed for few-channel EEG decoding, its improvement over the baseline model was more considerable when the number of EEG channels increased, as shown in Figure 7, indicating the adaptability of this model to EEG decoding with different channel numbers.
There are several limitations in this study. First, two SSVEP datasets were used to verify the performance of the models, but the wearable dataset only involved six subjects. Although the statistical analysis demonstrated the effectiveness of the proposed model, a small sample size may lead to great uncertainty that can affect the credibility of the results; we therefore plan to collect more data to further validate the model. Secondly, for the input of the classification model, the most common form of the complex spectrum was used in this study. Indeed, there are other ways of composing the real and imaginary parts, such as placing them on different rows of the input [48]. In the next step, we will compare different forms of inputs to determine a suitable one. On the other hand, the decoding model was enhanced by combining multiple modules in this study. In SSVEP recognition, the filter bank technique has proven to be a simple and effective strategy for enhancing decoding methods, both traditional [20,49] and deep learning based [27,28,43,48,50], because it takes advantage of the harmonic characteristics of SSVEPs. It is believed that the filter bank technique will have similar effects on atten-CCNN, so we plan to apply it to atten-CCNN to further improve its performance. Overall, although the new model improved on the baseline models, there is still room for improvement.
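As context for the input-form discussion above, a minimal sketch of the common complex-spectrum representation is shown below: the FFT of each EEG channel is taken, and the real and imaginary coefficients within a pass band are concatenated into one feature vector. The sampling rate and frequency band are assumed example values, not necessarily those used in this study.

```python
import numpy as np

def complex_spectrum(eeg: np.ndarray, fs: float = 250.0,
                     f_lo: float = 3.0, f_hi: float = 35.0) -> np.ndarray:
    """eeg: (n_channels, n_samples) -> (n_channels, 2 * n_bins).

    Real and imaginary FFT coefficients in [f_lo, f_hi] are concatenated
    along the frequency axis (the 'most common form' referred to above).
    """
    spec = np.fft.rfft(eeg, axis=-1)
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=-1)

# Example: 1 s of 3-channel EEG at an assumed 250 Hz sampling rate.
features = complex_spectrum(np.random.randn(3, 250))
print(features.shape)  # (3, 2 * number of retained frequency bins)
```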
Conclusions
This study introduces an atten-CCNN model for SSVEP identification with few-channel EEGs, which takes the complex spectrum of the EEG signal as the input and integrates an SE block with a CNN. The proposed method was evaluated on a wearable SSVEP dataset and a public dataset under subject-independent and subject-dependent conditions. The results show that, whether for a three-channel or single-channel EEG, the new model performed better than the baseline models. The improvement in BCI performance demonstrates the efficacy of incorporating attention mechanisms to bolster the decoding ability of CNNs on few-channel EEGs. An SSVEP identification algorithm based on a few-channel EEG is particularly suitable for wearable BCIs, as it achieves good performance with limited information. We believe that this decoding method, combined with the natural advantages of wearable BCIs, can promote the application of BCIs in real life.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 2 .
Figure 2. The connection between the convolutional layer and SE attention.
Author Contributions:
Conceptualization, X.L. and Y.H.; methodology, S.Y. and X.L.; validation, N.F. and J.W.; formal analysis, N.F.; investigation, J.W.; data curation, X.L. and Y.H.; writing-original draft preparation, X.L. and S.Y.; writing-review and editing, W.H. and Y.H.; visualization, X.L. and S.Y.; supervision, Y.H.; project administration, Y.H.; funding acquisition, W.H. and Y.H. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Shenzhen Science and Technology Program, grant number GJHZ20220913143408015; Zhanjiang Competitive Allocation of Special Funds for Scientific and Technological Development, grant number 2022A703-3; Shenzhen Science and Technology Program, grant number JCYJ20230807113007015; Sanming Project of Medicine in Shenzhen, grant number SZSM202211004; Shenzhen Key Medical Discipline Construction Fund, grant number SZXK2020084; and Health Commission of Guangdong Province, grant number B2024036.
Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of Hong Kong/Hospital Authority Hong Kong West Cluster (UW 20-221, approved 7 April 2020).
"Computer Science",
"Engineering"
] |
Precambrian geology of the northern Nagssugtoqidian orogen, West Greenland: mapping in the Kangaatsiaq area
The Nagssugtoqidian orogen and its transition into the Rinkian orogen to the north were the main focus of the field activities of the Geological Survey of Denmark and Greenland (GEUS) in West Greenland in the summer of 2001. This work was carried out within the framework of the Survey's three-year programme of bedrock mapping and mineral resource evaluation to enhance the understanding of the Archaean and Palaeoproterozoic crustal evolution in the transition zone between the Nagssugtoqidian and Rinkian orogens (Fig. 1). The work in the field season of 2001 comprised geological mapping of the 1:100 000 Kangaatsiaq map sheet described in this paper (Fig. 2), an investigation of the supracrustal rocks at Naternaq / Lersletten (Østergaard et al. 2002, this volume), a geochronological reconnaissance of the southern Rinkian orogen in the northern Disko Bugt region (Garde et al. 2002, this volume), a resource evaluation of the Nagssugtoqidian orogen (Stendal et al. 2002, this volume), a synthesis and interpretation of geophysical data of the central part of the Nagssugtoqidian orogen (Nielsen et al. 2002, this volume) and a report on investigations of the kimberlites and related intrusive rocks in the southern Nagssugtoqidian orogen and its foreland (Jensen et al. 2002, this volume).
The present investigations build on recent previous activities in the region. The Disko Bugt project of the former Geological Survey of Greenland investigated the geology and evaluated the resource potential of the southern part of the Rinkian orogen between Nuussuaq and Jakobshavn Isfjord from 1988 to 1992 (Fig. 1; Kalsbeek 1999). The Danish Lithosphere Centre (DLC) led a research project from 1994-1999 into the tectonic evolution of the Nagssugtoqidian orogen concentrating on the southern and central segments of the orogen between Sukkertoppen Iskappe and Nordre Strømfjord (Marker et al. 1995; van Gool et al. 1996, in press; Mengel et al. 1998; Connelly et al. 2000). Previous activity in the area between Nordre Strømfjord and Jakobshavn Isfjord (Fig. 1) included reconnaissance mapping by Noe-Nygaard & Ramberg (1961), 1:250 000 scale mapping by Henderson (1969), and visits to key localities during the DLC project (Marker et al. 1995; Mengel et al. 1998) from which a few reconnaissance age determinations are known (Kalsbeek & Nutman 1996). Most of this area was known from coastal exposures, while map information for large parts of the inland areas was based only on photogeological interpretation. The mineralised parts of the Naternaq supracrustal belt were investigated in detail by Kryolitselskabet Øresund A/S from 1962-1964 (Keto 1962; Vaasjoki 1965). Immediately south of latitude 68°N the 1:100 000 scale Agto (Attu) map sheet was published by Olesen (1984), and the adjacent Ussuit map sheet to the east is in preparation (Fig. 1). Mapping in 2001 concentrated on the Kangaatsiaq map sheet area and the Naternaq area (Østergaard et al. 2002, this volume), while mapping activity for 2002 is planned between Naternaq and Jakobshavn Isfjord (Fig. 1).
The field work in 2001 was supported by M/S Søkongen as a floating base from which field camps were established. The shoreline exposures are excellent, and the many islands and extensive fjord systems in the map area provide easy access. Limited helicopter support was available for the establishment of a few inland camps and for reconnaissance in areas far from the coast.
The Nagssugtoqidian orogen
The Nagssugtoqidian orogen is a 300 km wide belt of predominantly Archaean gneisses which were reworked during Palaeoproterozoic orogenesis. It is characterised by E-W-trending kilometre-scale folds and ENE-WSW-trending linear belts. It is divided into three tectonic segments: the southern, central and northern Nagssugtoqidian orogen (SNO, CNO and NNO, Fig. 1; Marker et al. 1995). These segments are interpreted by van Gool et al. (2002) as, respectively, a southern parautochthonous foreland zone, a central collisional core of the orogen and a northern transition zone to the Rinkian orogen. Archaean granulite-facies gneisses of the North Atlantic Craton, which forms the southern foreland, were reworked in the SNO at amphibolite facies during south-directed thrusting and folding. The CNO comprises, besides Archaean gneisses, two main bodies of Palaeoproterozoic calc-alkaline intrusive rocks: the Sisimiut charnockite suite in the south-west and the Arfersiorfik intrusive suite in the north-east (Kalsbeek & Nutman 1996; Whitehouse et al. 1998), which are interpreted as remnants of magmatic arcs associated with subduction (Kalsbeek et al. 1987). Palaeoproterozoic metasedimentary rocks are known from narrow belts in the CNO and in the northern part of the SNO. In the northern part of the CNO they are intruded by quartz diorite and tonalite of the Arfersiorfik intrusive suite (Kalsbeek & Nutman 1996; van Gool et al. 1999). This association of Palaeoproterozoic intrusive and supracrustal rocks was interleaved with Archaean gneisses by NW-directed thrust stacking during early stages of collision (van Gool et al. 1999, 2002; Connelly et al. 2000). Thrust stacks and associated fabrics were subsequently folded in several generations of folds, the latest forming shallowly east-plunging upright folds on the scale of tens of kilometres. The CNO is largely at granulite facies, with the exception of its north-eastern corner, which is at amphibolite facies. Its northern boundary is formed by the Nordre Strømfjord shear zone (Fig. 1; Marker et al. 1995; Hanmer et al. 1997).
The NNO is the least known part of the orogen. Tonalitic orthogneisses of Archaean age are interleaved with supracrustal rocks of both volcanic and sedimentary origin, most of which form belts up to 500 m wide (Mengel et al. 1998). Supracrustal rocks are less common than in the CNO, but the up to 2 km wide Naternaq supracrustal belt in the north-east is one of the largest coherent supracrustal belts in the orogen (Fig. 1). The main deformational features are a regional foliation, large-scale ENE-WSW-trending folds and several ductile high-strain zones, both steeply and shallowly dipping. The metamorphic grade is predominantly amphibolite facies, but increases southwards to granulite facies around Attu (Mengel et al. 1998; Connelly et al. 2000).
40Ar/39Ar age determinations on hornblende from the NNO indicate that Nagssugtoqidian metamorphic temperatures of at least 500°C prevailed as far north as Ilulissat (Willigers et al. 2002). Nagssugtoqidian deformation in the Nordre Strømfjord shear zone at the southern boundary of the NNO resulted in a penetrative gneissic high-grade fabric, large-scale upright folds and localised shear zones, as seen in the deformation of Palaeoproterozoic intrusive and sedimentary rocks (Hanmer et al. 1997; Mengel et al. 1998; van Gool et al. 2002). It is not clear to what extent the structures and lithologies in the NNO can be correlated with those in the Nordre Strømfjord shear zone or further south.
Geology of the Kangaatsiaq area
The Kangaatsiaq map sheet covers a large part of the western half of the NNO (Figs 1, 2). Supracrustal rocks were previously recognised in a zone trending from the north-eastern to the south-western quadrant of the map, where they outline major fold structures. The south-central and south-eastern parts were indicated as homogeneous orthogneiss due to lack of observations (Escher 1971). A quartz-diorite body was distinguished in the south-eastern part of the map area by Henderson (1969). A few minor occurrences of granite were known, of which that at Naternaq is the largest (Figs 1, 2).
During field work in 2001, twelve lithological units were distinguished, of which several were previously unknown. Ten of these rock types are represented on the map in Fig. 2, while occurrences of the others are too small for the scale of the map. Relative age relationships were established for most of the rock types, but absolute ages are still largely unknown. The few available geochronological data are discussed in a separate section below. The Naternaq supracrustal sequence is described by Østergaard et al. (2002, this volume). The other lithological units are described below from oldest to youngest.
Mafic intrusive complexes
Dismembered, layered mafic to ultramafic intrusive complexes are dominated by medium- to coarse-grained, massive to moderately foliated, homogeneous amphibolite, but locally igneous layering is preserved (Fig. 3). The rocks contain hornblende and plagioclase, with or without clinopyroxene, orthopyroxene, biotite, quartz or garnet. The protolith rock types include gabbro, gabbro-norite and ultramafic rocks (mostly pyroxenite and hornblendite), and rarely thin anorthosite sheets occur. This association occurs within the dominant tonalitic orthogneisses, mainly as lenses up to tens of metres in diameter, but also forms larger bodies up to 2 km across. The rocks are cut by tonalitic and granitic intrusive sheets and veins and are often strongly agmatised. The mafic lenses contain remnants of a foliation and subsequent folding, which predate the intrusion of the regional orthogneisses. The mafic intrusive complexes are most abundant in the southern part of the map area.
Mafic supracrustal sequences
Thinly layered mafic to intermediate sequences with thin felsic intercalations are interpreted as supracrustal, predominantly meta-volcanic sequences (Fig. 4). They are layered on a millimetre to centimetre scale and contain variable amounts of hornblende and plagioclase, with or without clinopyroxene, biotite, garnet and quartz. Isolated, thin quartzo-feldspathic layers, c. 5 to 20 cm thick, are interpreted as psammitic incursions, in which presumed granule- and pebble-sized detrital grains were observed north-east of Kangaatsiaq. These rocks are intruded by the dominant tonalitic gneiss and occur both as up to 500 m thick, laterally extensive sequences and as smaller xenoliths. In several cases the boundary between the tonalitic gneiss and the supracrustal sequence is tectonically reworked. The age relationship between the mafic supracrustal and mafic intrusive rocks could not be established. The mafic supracrustal rocks are common in a c. 20 km wide belt that extends from the south-western to the north-eastern corner of the map area (Fig. 2). The mafic supracrustal sequences contain rare, up to 5 m thick layers of medium-grained, forsterite-humite marble or medium- to coarse-grained, diopside-rich calc-silicate rocks.
Mica schist
Sequences of mica-rich rocks vary from biotite-rich semi-pelitic schists to biotite-, garnet- and sillimanite-bearing schists and gneisses, which are intercalated with thin quartzo-feldspathic layers and some quartzite. In the northern part of the area the gneisses locally contain muscovite, kyanite or cordierite. The schists are generally associated with mafic supracrustal rocks, and rarely form isolated occurrences. They are especially abundant in a belt in the central part of the map area and in the Naternaq area (Østergaard et al. 2002, this volume).
Quartzo-feldspathic paragneisses
Quartzo-feldspathic gneisses form 2-3 km thick sequences in the south-eastern part of the map area, where they are interpreted as metapsammitic rocks. These grey, medium-grained paragneisses are rather homogeneous, often quartz-rich and poor in biotite, and may contain abundant small (1-2 mm) garnets.
Local rounded quartz and feldspar grains up to 1 cm across are interpreted as pebbles. The quartzo-feldspathic paragneisses are interlayered with 5-100 cm wide amphibolite layers, which are probably of volcanic origin. Slightly discordant, deformed mafic dykes (see below) have also been observed. Rare, biotite-rich micaceous layers locally contain garnet and sillimanite. Contact relationships with the surrounding grey orthogneisses and their relative ages are uncertain, and locally these two lithological units can be difficult to distinguish in the field.
Dioritic to quartz-dioritic gneiss
This unit consists of medium-grained, uniform, dark-grey migmatitic or agmatitic orthogneisses, containing hornblende, plagioclase, quartz and minor biotite. It occurs mainly as small lenses in the tonalitic orthogneiss unit and only seldom forms larger, mappable bodies in the south. The largest bodies and layers of quartz-diorite are up to 50 m wide and occur in the Arfersiorfik area (Fig. 2). Contact relationships with the tonalitic gneisses are not clear everywhere, but a few dioritic bodies occur as xenoliths. None of the quartz-diorite bodies have so far been correlated with the Palaeoproterozoic Arfersiorfik quartz diorite (Kalsbeek et al. 1987) that occurs in the eastern end of Arfersiorfik and Nordre Strømfjord (Fig. 1). However, this correlation cannot be ruled out for at least some of the occurrences, and geochemical analyses and possibly geochronology will be used to test this. The large body of quartz-dioritic gneiss north and south of the fjord Tarajomitsoq in the eastern part of the map area, indicated by Henderson (1969), could not be confirmed.
Tonalitic and associated quartzo-feldspathic orthogneiss
The predominant orthogneiss unit comprises a wide range of lithologies, which in most cases lack sharp mutual contacts and cannot be mapped out as separate units. Grey, fine- to medium-grained biotite-bearing tonalitic gneiss predominates (Fig. 5). Tonalitic gneiss with abundant medium-grained hornblende occurs commonly in the proximity of mafic inclusions, and a plagioclase-porphyric, hornblende-bearing tonalitic gneiss, characterised by up to 2 cm large clusters of hornblende, occurs mainly in the north-western part of the map area. In places the orthogneiss is migmatitic, containing up to 30% coarse-grained, K-feldspar-rich melt veins up to 5 cm thick (Fig. 5). Another, less common melt phase intruding all varieties of the grey orthogneiss consists of leucocratic, white, medium- to coarse-grained granodiorite to granite and occurs predominantly in the south. It forms veins and larger coherent bodies up to one metre wide and can locally form up to 30% of the rock volume.
High-grade, mafic dyke relics
These metadolerite dyke relics are homogeneous, fine- to medium-grained, and consist of hornblende, plagioclase and clinopyroxene, with or without orthopyroxene, biotite and quartz. Garnet is rarely seen at the margins. Commonly the dykes are intensely deformed, foliated and lineated, boudinaged, or transformed to mafic schlieren which can be difficult to identify as dykes (Fig. 5). The less deformed dykes are commonly about 20 cm thick, but can reach 50 cm. Discordant relationships can be preserved in areas of low strain, but angles of discordance are always small. The dykes are widespread and locally form up to 25% of the rock volume, but they do not form a map unit that can be depicted at the scale of Fig. 2. They were commonly observed in the southern part of the map area, where they form dense swarms in the grey orthogneisses (Fig. 6).
Granite and granitic gneiss
Numerous small and large intrusive bodies of granite with a wide range of lithological appearances and different states of deformation were mapped. Coarse-grained, homogeneous pink granite predominates and may grade into megacrystic granite, sometimes with rapakivi textures, pink microgranite, or pegmatite. White, leucocratic granite is also observed. Based on their deformational state and contact relationships, the granite bodies fall into two main categories (not distinguished on the map): foliated granites with gradational boundaries to their host rocks, and relatively undeformed granites with obvious intrusive contacts. The contact zones between tonalitic orthogneiss and the granites can be tens to hundreds of metres wide, beginning with a few thin granitic veinlets in the orthogneiss, grading into granite or granitic gneiss with abundant orthogneiss inclusions, and ending with almost inclusion-free granite. The gneissic fabric in the inclusions is commonly cut by the granites, which may nevertheless themselves be strongly foliated.
Pegmatite
Several generations of pegmatite have been observed, often cross-cutting and in different stages of deformation. They are commonly coarse-grained, rich in pink K-feldspar, and contain quartz and plagioclase with or without biotite. In general, two main types can be distinguished. The older pegmatites are slightly discordant, commonly irregular in shape, and can be folded and strongly sheared, resulting in porphyroclastic, mylonitic textures. They appear to be associated with the granitic gneisses described above. Some of these pegmatites can be shown to be syn-kinematic with the latest fold phase (see below). The second, younger group consists of conjugate sets of late, straight-walled pegmatites. They are undeformed and commonly associated with steep brittle faults that have offsets consistent with north-south compression. These pegmatites may be younger than the metadolerite dykes described below.
Metadolerite dykes
Massive, 1-20 m wide metadolerite dykes occur mainly in the southern part of the map area. They cut the regional gneissic fabric, and most have E-W trends. Foliation is only well developed at the dyke margins, although a weak linear fabric can be observed locally in the unfoliated cores. The dykes have metamorphic mineral assemblages of fine- to medium-grained hornblende, plagioclase and clinopyroxene, with or without orthopyroxene, garnet and, rarely, biotite. In contrast to the older foliated dyke remnants, they always occur as isolated bodies.
Globule dyke
A single, N-S-trending, 20-50 m wide composite dolerite dyke with unusual globular structures was described by Ellitsgaard-Rasmussen (1951). The name for the dyke was based on the local presence of spheres with igneous textures that comprise plagioclase and pyroxene phenocrysts in the core, surrounded by glassy mantles. Several locations were revisited and showed the dyke to be undeformed and to consist of a c. 10 m thick central dyke with thinner multiple intrusions on both sides which have glassy, chilled margins. The dyke is exposed in a few outcrops along a 60 km long stretch from the entrance of the fjord Arfersiorfik northwards to the coast east of Aasiaat. On the aeromagnetic map of the NNO (Thorning 1993) this trace is clearly visible, with several right-lateral steps, as depicted in Fig. 2.
Geochronology
U-Pb zircon age determinations have been carried out on six samples from the map area (Kalsbeek & Nutman 1996). Archaean ages in the range 2.7-2.8 Ga were derived from four biotite orthogneiss samples. A porphyric granite yielded zircons of c. 2.7 Ga, indistinguishable in age from a gneiss which forms the host rock at the same location. It is uncertain whether these two lithologies are indeed of approximately the same age, or whether the granite contains locally derived inherited zircons. One of two samples from the Naternaq supracrustal sequence contained Proterozoic detrital zircons, suggesting that at least part of the sequence is of Proterozoic age (Østergaard et al. 2002, this volume). Kalsbeek et al. (1984) derived an Archaean Pb-Pb isochron age of 2653 ± 110 Ma for a granite that is intrusive into the regional gneisses just south of the map area. An undeformed granite sampled near Aasiaat just north of the map area yielded an intrusive age of 2778 +7/-3 Ma (TIMS U-Pb on zircon, Connelly & Mengel 2000). Preliminary LAM-ICPMS Pb-Pb reconnaissance analyses on detrital zircons from a felsic layer of a dominantly mafic supracrustal sequence north-east of Kangaatsiaq have yielded Archaean ages.
The available isotope data establish that the regionally dominant tonalitic gneisses have Archaean protolith ages, and that at least some granites in the NNO are also Archaean. The ages of the younger pegmatites and of the metadolerite dykes are at present uncertain. A regional dating programme of rocks in the northern Nagssugtoqidian orogen and southern Rinkian orogen is underway to establish the ages of the main lithologies and tectonic events (Garde et al. 2002, this volume).
Metamorphism
The map area is dominated by upper amphibolite facies, medium-pressure mineral assemblages, but has been affected by granulite facies metamorphism south of the fjords Arfersiorfik and Alanngorsuup Imaa (Fig. 2). Mineral assemblages in metapelites include garnet-biotite-sillimanite in most of the area, with minor kyanite or cordierite observed locally north-east of Kangaatsiaq. Muscovite is stable in the northernmost part of the map area. Partial melt veins occur in most of the region, giving the gneisses a migmatitic texture. Relic granulite facies rocks occur as patches in the south within areas of amphibolite facies. The granulite facies grade is reflected in the weathered appearance of the rocks, but orthopyroxene is seldom visible in hand specimen. It does, however, appear as relics in thin section. The age of the granulite facies metamorphism is uncertain, but Palaeoproterozoic rocks in the nearby Nordre Strømfjord shear zone (Fig. 1) are also at granulite facies, and were retrogressed to amphibolite facies in high-strain zones during a late phase of Nagssugtoqidian orogenesis.
Structure
Detailed field observations combined with the map pattern show that at least four generations of regionally penetrative structures are recorded in the dominant orthogneisses, while an even older penetrative planar fabric and isoclinal folds are preserved in mafic inclusions. The regional gneissosity dips to the NNW or SSE at steep to moderate angles, and carries a subhorizontal, ENE-WSW-trending mineral grain lineation or aggregate lineation. It is a high-temperature fabric, and commonly migmatitic veins are developed parallel to it. Locally the gneissosity is axial planar to isoclinal, often rootless, folds. The main gneissosity is a composite fabric, heterogeneously developed either progressively over an extended period of time or in several phases before and after intrusion of the mafic dyke swarm in the south.
At least two phases of folding affected the area. The early isoclinal folds have no consistent orientation and may represent several generations of folds, as reported from the Attu area by Sørensen (1970) and Skjernaa (1973). Map-scale isoclinal folds are most obvious in the north-eastern and south-western map quadrants, outlined by the supracrustal sequences. At several locations the isoclinal folding resulted in interleaving of ortho- and paragneisses. It is also possible that some interleaving occurred by thrust repetition, as reported from the Attu area (Skjernaa 1973) and from south of the Nordre Strømfjord shear zone (van Gool et al. 1999), but so far no unambiguous evidence for thrust repetition has been found in the map area. Shear zones are uncommon and of local extent, mainly associated with the reworking of intrusive contacts between supracrustal rocks and orthogneisses, and lack consistent kinematic indicators. Their relative age with respect to the fold phases is uncertain.
Parasitic folds associated with the youngest, major phase of upright folds are sub-horizontal to moderately plunging, with predominantly WSW-plunging axes. Near the hinges of kilometre-scale folds the parasitic folds are commonly steeply inclined, plunging to the south. Mineral lineations are commonly parallel to sub-parallel with the axes of parasitic folds. Sets of late, steeply dipping conjugate fractures trend NE-SW and NW-SE, and some of these are filled with a pegmatitic melt phase.
Summary and conclusions
The Kangaatsiaq region in the northern Nagssugtoqidian orogen predominantly consists of Archaean orthogneisses. It includes a major ENE-WSW-trending belt with abundant supracrustal rocks, which runs from south of Kangaatsiaq to the southern part of Naternaq. A second, previously unknown but extensive belt of quartzo-feldspathic paragneisses, presumably of Archaean age, occupies part of the south-eastern corner of the map area.
The main events in the geological evolution of the area comprise:
1. intrusion of mafic igneous complexes and deposition of mafic and associated supracrustal rocks;
2. formation of a foliation and isoclinal folds;
3. intrusion of the main tonalitic and associated granitoid rocks;
4. formation of the regional gneissic fabric;
5. intrusion of a mafic dyke swarm in the south;
6. further deformation, probably associated with isoclinal folding and intensification of the regional gneissosity;
7. intrusion of granite and pegmatite;
8. formation of a foliation and gneissosity in the granites, in part during their intrusion and associated with upright folding;
9. intrusion of the E-W-trending, isolated metadolerite dykes;
10. formation of upright brittle fractures during intrusion of pegmatite.
At present, an evaluation of the tectonic evolution of the Kangaatsiaq area in a regional perspective is difficult, since the absolute ages of several lithological units and deformational events are still unknown. It is uncertain to what extent the Palaeoproterozoic tectonic events known from south of the Nordre Strømfjord shear zone can be correlated with those of the Kangaatsiaq area. The map area lacks the abundance of Proterozoic supracrustal sequences intruded by quartz-diorite, and the shear zones associated with their emplacement, that are typical of the central Nagssugtoqidian orogen (van Gool et al. 1999). The most likely candidate for a Palaeoproterozoic supracrustal sequence in the map area is the Naternaq supracrustal belt. Furthermore, the shear zones in the Kangaatsiaq area are not of regional extent. The lack of consistent kinematic indicators in the shear zones suggests that deformation may have been dominated by pure shear. The coincidence in orientation and style of the youngest upright, E-W-trending folds in the Kangaatsiaq area with similar structures of Palaeoproterozoic age to the south (Sørensen 1970; Skjernaa 1973; van Gool et al. 2002) and in the Disko Bugt area to the north (several papers in Kalsbeek 1999) was suggested by Mengel et al. (1998) as a possible indication of direct correlation.
Fig. 1 .
Fig. 1. Geological map of southern and central West Greenland, showing the divisions of the Nagssugtoqidian orogen and the boundaries with the North Atlantic craton to the south and the Rinkian orogen to the north. Outlined areas indicated A, B and C are, respectively, the Kangaatsiaq, Agto (Attu) and Ussuit 1:100 000 map sheets. ITZ: Ikertôq thrust zone. NSSZ: Nordre Strømfjord shear zone. SNO, CNO and NNO are, respectively, the southern, central and northern Nagssugtoqidian orogen. Modified from Escher & Pulvertaft (1995) and Mengel et al. (1998).
Fig. 3 .
Fig. 3. Well-preserved metamorphosed layered gabbro in an outcrop of the mafic intrusive complex on the island of Ikerasak in the south-western corner of the map area.
Fig. 6 .
Fig. 6. Cliff exposing orthogneisses invaded by a dyke swarm which is boudinaged and folded. Vertical dark streaks are caused by water flowing over the cliff. Height of cliff is about 50 m. The outcrop is located at the southern boundary of the map area, 6 km east of the fjord Ataneq.
"Geology"
] |
Halotolerant bacteria in the São Paulo Zoo composting process and their hydrolases and bioproducts
Halophilic microorganisms are able to grow in the presence of salt and are also an excellent source of enzymes and biotechnological products, such as exopolysaccharides (EPSs) and polyhydroxyalkanoates (PHAs). Salt-tolerant bacteria were screened in the Organic Composting Production Unit (OCPU) of the São Paulo Zoological Park Foundation, which processes 4 tons/day of organic residues including plant matter from the Atlantic Rain Forest, animal manure and carcasses, and mud from water treatment. Among the screened microorganisms, eight halotolerant bacteria grew at NaCl concentrations up to 4 M. These cultures were classified, based on phylogenetic characteristics and comparative partial 16S rRNA gene sequence analysis, as belonging to the genera Staphylococcus, Bacillus and Brevibacterium. The results of this study describe the ability of these halotolerant bacteria to produce several classes of hydrolases, namely lipases, proteases, amylases and cellulases, as well as biopolymers. The strain characterized as Brevibacterium avium presented cellulase and amylase activities at up to 4 M NaCl and also produced EPSs and PHAs. These results indicate the biotechnological potential of certain microorganisms recovered from the composting process, including halotolerant species able to produce enzymes and biopolymers, offering new perspectives for environmental and industrial applications.
Introduction
The biocatalysts required in several industrial processes exhibit optimal activities at high ranges of salt concentration, pH and temperature. Halophiles are excellent sources of such enzymes and are found in nearly all major microbial clades, including prokaryotic (Bacteria and Archaea) and eukaryotic forms; two categories have been defined: halotolerant microorganisms, which are adapted to live at high salinity, and halophiles, which require salinity for growth. Halotolerant species tend to live in areas of salinity, such as hypersaline lakes, coastal dunes, saline deserts and salt seas (Ventosa and Nieto, 1995).
Halophilic enzymes perform the same enzyme function as their non-halophilic counterparts but require 1-4 M salt concentrations for their full activity and stability. In addition, these enzymes typically demonstrate a large excess of acidic amino acids compared to basic residues (Enache and Kamekura, 2010).
Proteases are widely employed in industrial processes (Gupta et al., 2002), and the moderately halophilic aerobic bacteria of the genera Bacillus, Pseudomonas, Halomonas and Serratia are important sources of proteases (Ventosa et al., 1998). Amylases are extensively studied due to their potential application in the food, detergent, paper and pharmaceutical industries, representing approximately 25% of the total enzymes in the industrial market. The extracellular production of β-amylase by halophilic Halobacillus sp. LY9 and of two α-amylases from Chromohalobacter sp. has been reported (Li and Yu, 2011; Prakash et al., 2009). Cellulases also have industrial applications, including the generation of bioethanol and uses in the textile industry, and a halotolerant cellulase was characterized in a soil metagenome analysis (Voget et al., 2006). Lipolytic enzymes are of particular industrial interest, and their identification in halophilic bacteria has been reported and recently reviewed (Gomez et al., 2012). Exopolysaccharides (EPSs) and polyhydroxyalkanoates (PHAs) are biotechnological products that have been identified and produced from halophilic/halotolerant microorganisms (Legat et al., 2010; Litchfield, 2012).
In this sense, the Organic Composting Production Unit (OCPU) of SPZPF is a potential source of microorganisms, as demonstrated by an OCPU metagenomic analysis, which revealed a diversity of biomass degradation functions and organisms (Martins et al., 2013). The composting process is predominantly aerobic, with organic residues being degraded by microorganisms, generating a humus-like material. In recent years, composting has attracted attention as a viable and environmentally adequate alternative for the treatment of organic waste. The initial phase of composting is thought to be the most dynamic part of the process and is characterized by a rapid increase in temperature, a large change in pH, and the degradation of simple organic compounds (Schloss et al., 2003). A detailed comparison of the bacterial diversity from different composting plants revealed large differences at both the species and strain levels (Partanen et al., 2010). This paper reports the screening of the OCPU composting process at SPZPF for bacteria growing at a range of NaCl concentrations, and the evaluation of their potential for the production of hydrolases and biopolymers. To date, the microbial diversity of this ecosystem has not been explored, particularly with regard to the screening of halotolerant microorganisms.
Material and Methods
Bacterial strains and isolation of DNA
Composting process
The composting process was conducted in the SPZPF OCPU in 2.5 × 2.0 × 1.6 m (length × width × height) cells, as shown in Figure 1. The piles were formed by organic residues including food, droppings and excreta, the beds of native and exotic wild animals, carcasses and wood chips from gardening. The pile has decomposition phases that were considered active degradation (before aeration) and mature compost (after aeration). Pile aeration was achieved by the mechanical turning of the material after 50 to 60 days of composting. The temperature of the pile was monitored at five different points (four sides and one center). The average temperature of the pile at the time of collection was 50°C.
Screening of secreted extracellular hydrolytic activities
Enzymatic agar plate assays were performed to detect the presence of extracellular hydrolases. All media were adjusted to pH 7.3, and NaCl was added to obtain a salt concentration in the range of 0-4 M. The composition of the media used is described below.
Determination of extracellular amylase activity
Amylolytic activity on plates was determined qualitatively using a previously described method (Pascon et al., 2011), which was modified for halophilic microorganisms by adding NaCl in the medium. After incubation at 37°C for 5 days, the plates were exposed to iodine crystals for 5 min to reveal the starch degradation zone that indicates amylolytic activity.
Determination of extracellular protease activity
The cultures were screened in JCM nº 377 medium and YPC medium supplemented with 1% skim milk for the determination of protein hydrolytic activity. Clear zones around the colonies after 7 days were taken as evidence of proteolytic activity.
Determination of extracellular lipase activity
Lipase production by the isolated microorganisms was evaluated in nutrient agar tributyrin medium (NAT), which consisted of 1.3% nutrient broth, 1% tributyrin and 2% agar (Ben-Gigirey et al., 2000). After incubation at 37°C for 7 days, the hydrolytic zones around the bacterial colonies were considered an indication of lipase production.
Determination of extracellular cellulase activity
Cellulase activity was screened on a solid medium containing carboxymethyl cellulose (CMC) (Rohban et al., 2009). After incubation at 37°C for 7 days, the plates were flooded with 0.1% Congo red solution. The clear zone around colonies indicated cellulolytic activity.
Screening of polyhydroxyalkanoates and exopolysaccharides
Detection of polyhydroxyalkanoate (PHA)-producing microorganisms
The isolates were evaluated in mineral medium (Schlegel et al., 1970) with 2.5 M NaCl and containing glucose, xylose or octanoic acid as the carbon source. Glucose is known to be a carbon source for the production of short-chain-length PHAs, whereas octanoic acid yields medium-chain-length PHAs. Sugarcane bagasse contains xylose, and this surplus material is a promising substrate for producing by-products such as second-generation bioethanol and PHAs (Lopes et al., 2009). After 24 h of incubation (30°C), the isolated strains were evaluated for their ability to grow on these carbon sources; the isolates were stained with Sudan Black B after 72 h to verify their potential to produce PHAs.
Detection of exopolysaccharide (EPS) producers
The isolates were cultivated in Bushnell Haas Salt Medium (50 mL) containing 2.5 M NaCl, with glycerol as the sole carbon source for microbial growth. After incubation for 5 days at 30 °C in a rotary shaker (150 rpm), the cultures were centrifuged at 8,200 × g for 15 min (4 °C). The emulsification index (E24) of the supernatant was evaluated according to the method described by Fleck et al. (2000) using hexadecane as a hydrophobic model compound. To determine the chemical composition of the EPSs, they were precipitated from the supernatant with ethanol (to 70%), dialyzed against pure water, and the carbohydrates, proteins and uronic acids in the retained high-molecular-weight fraction were quantified as reported (Tanasupawat et al., 2010).
Bacterial identification
Mass spectrometry
The isolated microorganisms were treated with ethanol/formic acid for content extraction, following a previously described protocol (Pascon et al., 2011). Measurements were conducted with a Microflex LT mass spectrometer (Bruker Daltonics) using FlexControl software (version 3.0, Bruker Daltonics) in the positive linear mode (laser frequency, 20 Hz; ion source 1 voltage, 20 kV; ion source 2 voltage, 18.6 kV; lens voltage, 7.5 kV; mass range, 2000 to 20 000 Da). For each spectrum, 240 shots in 50-shot steps from different positions of the target spot (automatic mode) were collected and analyzed. The spectra were internally calibrated using Escherichia coli ribosomal proteins. The raw spectra were imported into the BioTyper software (version 2.0, Bruker Daltonics) and processed by standard pattern matching with default settings; the results were reported in a ranking table.
Amplification and sequencing of 16S rRNA gene fragment
DNA (30-50 ng) from each strain was incubated in a 50-µL reaction mixture containing 2 mM MgCl2, 200 µM dNTPs, 0.3 µM universal primer 27f (5′-AGAGTTGATCCTGGCTCAG-3′), 0.3 µM primer 1525r (5′-AAGGAGGTGWTCCARCC-3′) and 2 U Taq DNA polymerase (Invitrogen) in the recommended buffer. Amplification was performed in a Veriti 96-well Thermal Cycler (Applied Biosystems) with initial denaturation at 94 °C for 2 min, followed by 30 cycles of 94 °C for 1 min, 55 °C for 1 min and 72 °C for 3 min. A final extension at 72 °C for 10 min was included. The PCR products were purified with a GFX PCR DNA and gel band purification kit (GE Healthcare), and the sequence analysis was performed using a 3500 Genetic Analyzer Sequencer (Applied Biosystems). Subsequently, 5.0 µL of purified PCR product was mixed with 4.0 µL of BigDye v. 3.1 (Applied Biosystems) and 1.0 µL of sequencing primer (0.5 µmol). The primers used in the sequencing reactions were 27f (Dojka et al., 1998), 782r (5′-ACCAGGGTATCTAATCCTGT-3′) (Chun and Goodfellow, 1995) and 1401r (5′-CGGTGTGTACAAGACCC-3′) (Nübel et al., 1996). The sequencing program consisted of 25 cycles of 95 °C for 20 s, 50 °C for 15 s and 60 °C for 60 s. The 16S rRNA gene sequences of all the analyzed strains were compared to bacterial sequences deposited in GenBank. Similar sequences were retrieved, and the consensus sequences were aligned using CLUSTALW with MEGA 5.05. EzTaxon tools (http://147.47.212.35:8080/) were further employed to confirm the similarities, and phylogenetic trees were constructed based on neighbor-joining, maximum-likelihood and maximum-parsimony methods. The resulting tree topologies were evaluated by a bootstrap analysis based on 1000 replicates.
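For record-keeping, the cycling program above can be captured in a small script; the following is a minimal sketch in which the data structure and the run-time estimate are illustrative conveniences, not part of the original protocol:

```python
# The amplification program above, written as a simple data structure
# for a lab-notebook script; temperatures in degrees C, times in s.
pcr_program = {
    "initial_denaturation": (94, 120),
    "cycles": 30,
    "per_cycle": [("denature", 94, 60), ("anneal", 55, 60), ("extend", 72, 180)],
    "final_extension": (72, 600),
}

total_s = (pcr_program["initial_denaturation"][1]
           + pcr_program["cycles"] * sum(t for _, _, t in pcr_program["per_cycle"])
           + pcr_program["final_extension"][1])
print(f"approximate run time: {total_s / 60:.0f} min (excluding ramping)")  # ~162 min
```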
Identification of halophilic strains
The isolated bacteria were obtained from the composting process during the turning stage (60th day). Eight of the eleven halophilic isolates obtained in 2.5 M NaCl from the composting process were subjected to MALDI-TOF mass spectrometry analysis, which indicated that all isolates belonged to Gram-positive genera; this was confirmed by Gram staining. These procedures also confirmed the purity of the isolates. The 16S rRNA gene sequences of the eight strains (>1300 bp) were compared with those previously deposited in GenBank. The neighbor-joining and maximum-likelihood trees showed the taxonomic position of these strains, which were affiliated with the genera Bacillus, Staphylococcus and Brevibacterium (Figure 2). Strain SR5-7 showed high 16S rRNA gene sequence similarity to Bacillus when compared with the 184 different species of this genus. However, based on the 16S rRNA gene similarity matrix, this isolate did not show 100% similarity with any species reported so far. The species of Bacillus described as halophilic to date are as follows: B. hemicentroti; B. humanensis; B. xianensis (Sanchez-Porro et al., 2003; Schlegel et al., 1970); B. alkaliphilic (Zhang et al., 2012); B. halochares (Pappa et al., 2010); B. chungangensis (Cho et al., 2010) and B. subtilis (Takenaka et al., 2011). Thus, the possibility that a new species of halophilic Bacillus was isolated from a composting process is noteworthy.
The YPC-11 strain was identified as Brevibacterium avium (100% similarity). An EzTaxon analysis confirmed that this strain shared 100% 16S rRNA gene sequence similarity with B. avium and 99.97% with Brevibacterium epidermidis, the only halotolerant bacterium (Nagata and Wang, 2005) described in the genus Brevibacterium.
All the selected isolates were deposited at the São Paulo Zoo Park Culture Collection (SPZSP-CCol).
Salt tolerance and growth of halophilic isolates
All the bacteria isolated in 2.5 M NaCl were tested for their ability to grow at different salt concentrations. Bacterial growth slowed in the presence of high salt concentrations, as indicated by the time (in days) required to detect bacteria in the culture medium (Table 1). Staphylococcus strains SR5-12, YPC-6, SR5-6 and YPC-8 showed similar growth behavior from 0 to 4.0 M NaCl. Strain YPC-13 had the slowest growth at high salinity, and strain YPC-15 grew preferentially at 2.5 M NaCl or higher. Strains SR5-7 (affiliated with Bacillus) and YPC-11 (affiliated with B. avium) exhibited a preference for growing in culture medium containing 0.5 to 2.0 M NaCl but failed to grow in 4.0 M NaCl. It is important to note that although all of these bacteria tolerated high salinities (2.5 M NaCl or higher), they are not strictly halophilic. According to Kushner (1978), bacteria that are able to grow in the absence of salt as well as in the presence of relatively high salt concentrations are designated halotolerant, or extremely halotolerant if growth extends above 2.5 M. Based on this classification, seven of the eight microorganisms isolated from the composting process were halotolerant or extremely halotolerant. It should be noted that the salt requirement and tolerance of many species vary according to the growth conditions, such as temperature and medium composition.
Several bacteria of the genera Bacillus, Halobacillus and Staphylococcus have been found in saline environments, such as the Salt Plains National Wildlife Refuge in the Great Salt Plains of Oklahoma, a Bolivian hypersaline lake, deep-sea sediments and tropical marine sediments (Ventosa et al., 1998). Some Bacillus species are salt tolerant and are important degraders of organic pollutants. Examples include Bacillus cereus, which degrades 1,3-dichlorobenzene derivatives from town-gas industrial influent (Wang et al., 2003), and Bacillus subtilis, which degrades p-aminobenzene from textile industry wastewater (Zissi et al., 1997).
Only YPC-11 (affiliated with B. avium) presented amylase and cellulase hydrolytic activities from 0 to 4 M NaCl. Members of the genus Bacillus are well-known enzyme producers, and many industrial processes utilize species belonging to this genus for the commercial production of enzymes (Vasconcellos et al., 2011). The strain SR5-7 (affiliated with Bacillus) produced lipase and protease in 2.0 M and 0.5 M NaCl, respectively. It is interesting to note that the lipase producers reported thus far are limited to representatives of the genera Salinivibrio, Halomonas and Bacillus-Salibacillus (Sanchez-Porro et al., 2003).
Polyhydroxyalkanoate (PHA) producers
All the isolates were evaluated using a medium with nitrogen limitation and different carbon sources (Table 2). The isolates grew better with glucose as the sole carbon source than with xylose or octanoic acid. The isolates YPC-13 (affiliated with Bacillus sp.), SR5-7 (affiliated with Bacillus sp.) and YPC-15 (affiliated with Staphylococcus sp.) accumulated PHAs in the presence of octanoic acid, xylose and glucose, respectively. The genus Bacillus is a known producer of PHAs (Lopes et al., 2009), and Staphylococcus epidermidis, isolated from sesame oil, has been shown to produce poly-3-hydroxybutyrate (Wong et al., 2000). The strain YPC-11 (affiliated with B. avium) was identified as a potential producer of biopolymers using octanoic acid and xylose. This result is in accordance with the previous observation that Brevibacterium casei (strain SRKP2) could produce PHAs in a medium containing dairy industrial waste, yeast extract and sea water (Pandian et al., 2009). Halotolerant microbes are important for the biotechnology industry due to their advantages for use in sterilization processes and the control of contaminants; PHA-producing halophilic microorganisms have recently been reviewed (Poli et al., 2011). The production of PHAs from xylose is an alternative strategy to produce economically competitive PHAs using agro-industrial products such as sugarcane molasses and bagasse (Gomez et al., 2012).
EPS production and emulsification potential
Microbial exopolymers (EPSs) are compounds produced by microorganisms to solubilize essential nutrients for their survival or to promote their adherence onto surfaces (Ron and Rosenberg, 2002). The use of glycerol as the sole carbon source with 2.5 M NaCl resulted in emulsification index (E24) values of up to 60% with hexadecane (Table 2). A colorimetric analysis showed that the biosurfactant produced by the evaluated halotolerant strains was mainly composed of carbohydrates (95%) but also contained proteins (0.5%) and uronic acids (4.5%). A similar EPS composition was also reported for halophilic Archaea strains (Poli et al., 2011).
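The emulsification index used here is conventionally the height of the emulsified layer relative to the total liquid height, read 24 h after vortexing with the hydrophobic phase; a minimal sketch of the calculation (the heights and the helper name are illustrative, not taken from the paper):

```python
# E24 (%) = 100 * height of the emulsified layer / total liquid height,
# measured 24 h after mixing the culture supernatant with hexadecane.
def emulsification_index(emulsion_height_mm: float,
                         total_height_mm: float) -> float:
    return 100.0 * emulsion_height_mm / total_height_mm

print(emulsification_index(12.0, 20.0))   # -> 60.0, the reported maximum E24
```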
Conclusion
Screens for halotolerant or halophilic microorganisms in non-saline environments are scarce, as is the detection of their extracellular enzymes. This study found eight isolates from an organic residue composting process that were able to tolerate a wide range of salinities. Some of these strains presented combined hydrolytic abilities in the presence of NaCl. The ability of these microorganisms, particularly YPC-11 (affiliated with B. avium), to produce EPSs and PHAs in the presence of 2.5 M NaCl offers new biotechnological and bioremediation perspectives for the treatment of oilfield wastes as well as for MEOR (microbial-enhanced oil recovery) processes. The performance of the halotolerant isolates in the present work was not compared to that of other already known, classic halophilic microorganisms; this should be performed in future work.
"Biology",
"Engineering"
] |
Structural interplay between DNA-shape protein recognition and supercoiling: The case of IHF
Introduction
The recognition of specific DNA sequences by proteins is not always driven by the complementary pattern of hydrogen bonds between bases and amino acids (so-called base or direct readout); it can also be driven by sequence-dependent deformability or local DNA structural features (indirect or shape readout) [1]. In the second mechanism, DNA is distorted into conformations that deviate significantly from the ideal B-form double helix in order to optimize the protein-DNA interface [2,3]. Prominent examples are nucleosomes in eukaryotes and nucleoid-associated proteins (NAPs) in prokaryotes, which, by bending and wrapping DNA, induce looping and other complex long-range 3D arrangements [4-6]. These DNA-bending proteins have crucial roles in organizing and packaging genomes as well as in facilitating basic DNA transactions such as transcription and replication [7,8].
IHF is a key and representative NAP in Gram-negative bacteria such as Escherichia coli that induces one of the sharpest known DNA bends, with a measured angle of around 160° [9]. The crystal structure reveals that IHF is formed by a core of α-helices with a pair of extended β-ribbon arms, the tip of each containing a conserved proline that intercalates between two base pairs [9]. These two intercalations stabilize strong bends 9 bp apart and facilitate the wrapping of two DNA 'arms' around the protein body, tightened by electrostatic interactions between the phosphate backbone and cationic amino acids, resulting in a binding site 35-40 bp in length [9,10] (Fig. 1).
IHF binds preferentially to the DNA consensus sequence WATCARNNNNTTR (W is A or T, R is A or G, N is any nucleotide), which is located on the right side of the binding region and is small compared to the total length of the wrapped DNA [11] (Fig. 1A). However, most of the strongest IHF binding sites include an A-tract on the left-hand side (upstream of the specific sequence) that increases the affinity, the degree of bending and the length of the attached DNA site [12] (Fig. 1A). IHF thus constitutes a clear example of recognition arising through indirect readout [13-15]. The bends induced by this protein result in higher-order structures comprising nucleoprotein complexes that are essential to a large repertoire of biological functions, including gene regulation [16], the opening of the origin of replication [17], the CRISPR-Cas system [18], and the integration and excision of phage λ DNA [19]. Through previous studies combining atomistic molecular dynamics (MD) simulations and atomic force microscopy (AFM), we have shown that the IHF-DNA complex is far more dynamic than previously thought [10]. Building on previous work [20], we demonstrated the existence of multiple conformations and provided structural detail of two intermediate meta-stable binding states, which are also characteristic of nonspecific DNA recognition [10]. These include a half-wrapped state, in which only the upstream A-tract binds to the protein, and an associated state consisting of only partial binding on each side (see Fig. 1). The fully-wrapped state, which is the one described by crystallography, is only observed in the presence of the consensus sequence, where binding on the right-hand side can only occur after the binding of the A-tract on the left-hand side (Fig. 1) due to an allosteric change in the protein [10]. The indirect readout is thus facilitated via cooperativity between the two flanks, defining a mechanical switch on the DNA [10].
We furthermore observed the formation of large DNA-IHF aggregates in AFM images and the bridging of two DNA duplexes by a single IHF protein in MD simulations (see Fig. 1) [10]. This condensing behavior is of particular importance to bacterial biofilms, because IHF is located at crossing points in the extracellular DNA lattice [21] and is crucial to biofilm stability [22].
In parallel, in vivo DNA is organized into topologically constrained domains under torsional stress [23], to which DNA responds by supercoiling. This stress changes the total number of DNA turns (or linking number, Lk), which is partitioned into twist (Tw) and writhe (Wr) as Lk = Tw + Wr. Structures with non-zero writhe correspond to large-scale changes in the DNA, with the helix axis twisting and bending to cross over itself, typically forming plectonemes. In the cell, DNA is maintained negatively supercoiled, with a superhelical density σ = ΔLk/Lk0 ≈ -0.06 [24,25], where Lk0 is the linking number of the torsionally relaxed molecule.
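The bookkeeping implied by these definitions is easy to sanity-check numerically; a minimal sketch, where the Lk0 and twist/writhe values are illustrative rather than taken from this study:

```python
# Topological bookkeeping for a closed DNA circle.

def superhelical_density(delta_lk: float, lk0: float) -> float:
    """sigma = dLk / Lk0, with dLk = Lk - Lk0."""
    return delta_lk / lk0

def check_partition(lk: float, twist: float, writhe: float, tol=1e-6) -> bool:
    """White's theorem: Lk = Tw + Wr must hold for a closed circle."""
    return abs(lk - (twist + writhe)) < tol

lk0 = 31.0                                    # e.g. ~326 bp at ~10.5 bp/turn
print(superhelical_density(-2.0, lk0))        # dLk = -2  ->  sigma ~ -0.065
print(check_partition(29.0, 28.4, 0.6))       # True: 29 = 28.4 + 0.6
```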
Due to the inherent difficulties in obtaining high-resolution experimental structures of supercoiled DNA, computational approaches have become very useful tools [26-28], often giving excellent agreement with microscopy imaging [29,30,25]. In addition, computational studies have started to investigate the rich interplay between DNA topology and proteins, explaining, for instance, how the presence of proteins can shape topological domains [31-33,5,6]. Other studies, including all-atom MD simulations of supercoiled circular DNA, have found the emergence of additional secondary recognition sites between proteins and distal DNA that resulted in the formation of closed loops [34,35]. However, to the best of our knowledge, no structural detail has been provided on the influence of torsional stress on DNA-protein interactions.
DNA supercoiling promotes the formation of the DNA-IHF complex [36]: experiments have shown that the protein presents greater affinity for supercoiled DNA than for linear DNA [11,37], and the disruption of the fully-wrapped state caused by mutations at the lateral positions can be rescued by supercoiled DNA [38]. Of particular note is that many of the higher-order structures governed by IHF, such as integrative recombination, transcriptional regulation, and the CRISPR-Cas system, are known to be facilitated by DNA supercoiling [39-41]. Conversely, IHF has an influence on the long-range organization of DNA: the polymer is easier to circularize in the presence of the protein [37], and its knockout causes a re-organization of DNA supercoiling at the chromosome level [42].
Here, we provide atomic insight into the structural crosstalk between DNA supercoiling and protein indirect readout, using IHF as a model case study. This protein is a remarkable example, as it induces one of the sharpest bends known on DNA. By simulating the dynamics of DNA minicircles bound to IHF, we identify the importance of supercoiling to the protein's binding mode when relying on indirect readout. We observe that the enhancement of DNA flexibility and curvature by supercoiling increases the number of DNA-binding modes, with a tendency towards greater wrapping around the protein. We also explore the entropic reduction of the conformational landscape of supercoiled DNA by IHF, as well as its capacity to constrain superhelical stress. We finally provide further insight into the formation of closed DNA loops bridged by IHF and demonstrate the formation of independent topological domains.
Construction of DNA minicircles
A linear 336 bp DNA fragment was built using the NAB module implemented in Amber16 [43], with a sequence based on the minicircle generated by intramolecular λ-integrase recombination [44,30]. This sequence, containing a single IHF binding site, is given in Section 1 of the supplementary material.
(Fig. 1: according to a model deduced from simulations and AFM [10], linear DNA wraps around the protein through two meta-stable states, half-wrapped and associated, before arriving at the fully wrapped state if the specific sequence is present; a bridged state, in which a single copy of IHF binds two molecules of DNA, was also observed [10]. The IHF α subunit is shown in mauve, the β subunit in turquoise and DNA in black, with the consensus positions highlighted in blue and the A-tract in red. The 'near' and 'far' left sites are constituted by the α and β subunits, respectively, while the 'near' and 'far' right sites are the other way round. In the half-wrapped state, the A-tract to the left binds fully while the consensus bases to the right do not interact with the protein; in the associated state, DNA binds only to the 'near' sites; in the fully wrapped state, observed by crystallography, the DNA arms bind to all sites. The A-tract is always placed on the left side and the consensus positions on the right side.)
Six perfectly planar DNA minicircles containing between 29 and 34 turns were then constructed using an in-house program, as previously performed [25]. Afterwards, the structure of the IHF-DNA phage λ excision complex (Protein Data Bank (PDB): 5J0N [19]) was inserted at the matching IHF-binding H2 site contained in the attR region of the minicircle. Only the central 11 bp of the H2 site, which enclose the two intercalation sites, were replaced by the crystallographic structure; the junctions between DNA fragments were then minimized until a canonical structure was achieved, following previous studies [35]. Hence, the resulting structures used to start the simulations consisted of DNA minicircles bound to IHF in an 'open state' without lateral interactions (see Fig. 1).
Molecular dynamics simulations
All simulations were set up with the AMBER 16 suite of programs and performed using the CUDA implementation of AMBER's pmemd program [43]. The constructs were solvated using an implicit generalized Born model at a sodium chloride salt concentration of 0.2 M with GBneck2 corrections, the mbondi3 Born radii set and no cut-off, for a better reproduction of molecular surfaces, salt bridges and solvation forces [45-47]. Langevin dynamics was employed for temperature regulation at 300 K with a collision frequency of 0.01 ps⁻¹ in order to reduce the effective solvent viscosity and thus accelerate the exploration of conformational space [48,10]. The protein and DNA were represented by the ff14SB [49] and BSC1 [50] force fields, respectively. Prolines were kept intercalated by restraining the distances between key atoms in the proline side chain and neighboring bases [10]. Following our protocols for minimization and equilibration [10], three replica simulations of 30 ns each were performed for each topoisomer with IHF bound, and three more for the same systems with the protein removed. The first 20 ns were obtained with distance restraints on the canonical Watson-Crick H-bonds to avoid a premature disruption of the double helix [35], so only the last 10 ns of each simulation were considered for analysis.
Analysis of simulations
Topological DNA twist and writhe were calculated using WrLINE, which outputs global twist and writhe values alongside the molecular contour [51]. Because global and local definitions of twist are not directly compatible [52], the accumulated twist at the DNA binding site was calculated according to the 3DNA definition at the dinucleotide level [53] using SerraNA [54]. Simulations in implicit solvent are known to systematically overestimate DNA twist [55]. To correct for this, a linear fit of the average writhe of the bare minicircles was performed, so we could determine the value of Lk for which Wr = 0 (Figure S1); this was found to be Lk0 = 31.08. Then, σ for each topoisomer was calculated relative to this value.
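A minimal sketch of this correction step; the (Lk, mean writhe) pairs below are illustrative placeholders, not the values behind Figure S1:

```python
# Fit average writhe of bare minicircles against Lk, then solve for
# the Lk at which Wr = 0; that intercept defines Lk0 and hence sigma.
import numpy as np

lk = np.array([29, 30, 31, 32, 33, 34], dtype=float)
mean_writhe = np.array([-1.6, -0.9, -0.1, 0.7, 1.4, 2.2])   # hypothetical

slope, intercept = np.polyfit(lk, mean_writhe, 1)
lk0 = -intercept / slope            # Lk where the fitted line crosses Wr = 0
sigma = (lk - lk0) / lk0            # superhelical density per topoisomer
print(f"Lk0 = {lk0:.2f}", np.round(sigma, 3))
```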
Hydrogen bonds were determined using cpptraj [56] with a distance cut-off of 3.5 Å and an angle cut-off of 120°. The number of hydrogen bonds involving each protein residue and DNA was capped at one, so time-averages along trajectories indicate the proportion of frames presenting this interaction. This was compared with the hydrogen bonds present in the original crystallographic structure, PDB entry 1IHF [9]. It should be noted that PDB 5J0N was obtained via cryo-EM and subsequent fitting based on 1IHF. The secondary structure of IHF was evaluated using the DSSP algorithm [57] as implemented in AMBER, and groove widths were calculated with Curves+ [58].
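For readers without cpptraj at hand, the stated geometric criterion can be reproduced in a few lines of NumPy; a sketch under the assumption that donor, hydrogen and acceptor coordinates have already been extracted per frame (the array layouts and function name are hypothetical):

```python
# Geometric H-bond counter: 3.5 A donor-acceptor distance cut-off,
# 120 degree D-H...A angle cut-off, at most one bond per residue/frame.
import numpy as np

def hbond_count_per_residue(donors, hydrogens, acceptors, residue_ids,
                            dist_cut=3.5, angle_cut=120.0):
    """donors/hydrogens: (n, 3) coords of donor heavy atoms and their H;
    acceptors: (m, 3) coords; residue_ids: length-n residue label per donor."""
    counted = {}
    for d, h, res in zip(donors, hydrogens, residue_ids):
        for a in acceptors:
            if np.linalg.norm(d - a) > dist_cut:
                continue
            v1, v2 = d - h, a - h                     # angle at the hydrogen
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if angle >= angle_cut:                    # near-linear geometry
                counted[res] = 1                      # cap at one per residue
                break
    return counted
```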
All simulation frames were classified via hierarchical agglomerative clustering based on the average-linkage algorithm, using the root-mean-squared deviation (RMSd) between frames as the distance metric [56]. Only the backbone atoms of IHF and of a 61 bp region of DNA centered on the binding site were considered for the RMSd. The number of clusters was chosen such that each had a distinct interaction pattern of hydrogen bonds between the protein and DNA.
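In SciPy terms, this classification amounts to average-linkage clustering on a pairwise distance matrix; a self-contained sketch with stand-in coordinates (the toy matrix here is a plain Euclidean distance, not trajectory RMSd):

```python
# Average-linkage agglomerative clustering on a precomputed pairwise
# distance matrix, cut into a fixed number of clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
x = rng.random((50, 3))                                   # stand-in "frames"
dist_matrix = np.linalg.norm(x[:, None] - x[None, :], axis=-1)

condensed = squareform(dist_matrix, checks=False)         # condensed form
tree = linkage(condensed, method="average")               # average linkage
labels = fcluster(tree, t=5, criterion="maxclust")        # e.g. five clusters
print(np.bincount(labels))                                # cluster populations
```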
The minicircles were initially attached to IHF via only its protruding β-ribbon arms, to simulate how DNA spontaneously wraps around the protein following an initial bound state that resembles an encounter complex formed at the beginning of the recognition process (Fig. 1) [59,15,10].
Three independent MD simulation replicas were performed for each topoisomer, with and without IHF, in implicit solvent to allow sufficient conformational sampling over feasible timescales (see Supplementary Movies 1-12). A continuum representation of the solvent reduces the computational cost of the simulations compared with a solvation box of discrete water molecules and ions, and accelerates global structural rearrangements by at least an order of magnitude due to the neglect of solvent viscosity [30]. Although hydration and ion effects are not so accurately described, our implicitly solvated simulations reproduce well the crystallographic IHF-DNA interactions (Fig. 2), the protein secondary structure (Figure S2) and the bp step parameters at the binding site (Figure S3). In our previous study, we also observed that this type of simulation was able to correctly capture the different IHF-DNA binding modes observed by AFM and by explicitly solvated simulations of linear DNA (Fig. 1) [10]. Here, we explore how these different complex states are influenced by the supercoiling of DNA.
DNA conformation has an active role in indirect-readout recognition
To identify the principal DNA-binding modes, all frames from all trajectories were merged and classified into five distinct binding modes (Fig. 2A), each presenting a characteristic DNA-protein interaction pattern (Fig. 2B) (see Methods).
As described previously [10], interactions between IHF and the lateral DNA arms can be divided into four regions based on their position relative to the center of the binding site and the protein subunit to which the involved amino acid belongs. On the left-hand side (containing the A-tract), the α subunit is closer to the center and thus constitutes the 'near left' site, while the β subunit is farther away and composes the 'far left'. On the right-hand side (containing the consensus sequence), the α and β subunits are inversely arranged, delimiting the 'far right' and 'near right' sites, respectively (see Fig. 1A).
As expected, the fully wrapped state is observed, presenting protein-DNA contacts very similar to those of the crystal structure [9] (Fig. 2). The half-wrapped and associated states previously observed for linear DNA (Fig. 1) do not appear, probably due to the inherent curvature of circular DNA (around 64° over a region the length of the IHF-interacting site), which can be expected to bias the system towards more tightly wrapped states. Instead, a 'three-quarters' state emerges, in which the A-tract on the left binds fully to the protein while the right DNA arm binds only to the near right site. Two further new states appear, both involving the binding of the left DNA arm to the "bottom" of the protein, while the right arm remains either unbound ('half-wrapped + bottom') or bound to only the near site ('three-quarters + bottom') (see Fig. 2). Lysine 20 and arginine 21 of the α subunit, at the far right site, are the amino acids mainly responsible for wrapping the left DNA arm around the "bottom" of the protein (Fig. 2). We also observed a state comprising an IHF-mediated DNA bridge similar to those previously demonstrated [10], where the DNA remains relatively unbent and the two far sites or the "bottom" of the protein interact with a second DNA double helix (Fig. 2).
We barely observe transitions between states over time within individual replica simulations (see Supplementary Movies 1-12). As Table 1 shows, only one simulation is observed to sample several conformations: replica 1 of the most relaxed topoisomer switches from the three-quarters to the fully wrapped state (Supplementary Movie 5). This suggests that all of these observed binding modes are stable states corresponding to free-energy minima in which the simulations are trapped, rather than temporary transition structures en route to a global minimum, in agreement with what we found for linear DNA [10].
Our simulations reveal that the intrinsic structure and dynamics of DNA have an important role in the interaction with IHF [14], determining the extent of protein-DNA interactions and, as such, the final configuration of the complex. Hence, our study is a direct observation that DNA is not just a passive polymer to be manipulated but has an active role in driving the IHF recognition process [36]. Nonetheless, we still observe the same asymmetric cooperativity between sides as in linear DNA [10] (where the A-tract on the left binds around the protein before the specific sequence on the right does), because this allosteric switch depends on the protein and not on the DNA [10].
Supercoiling affects DNA recognition by IHF
We find that the populations of these states vary with the superhelical density of the DNA (Table 1 and Fig. 3). While relaxed minicircles can present the fully wrapped state, they show a preference for more open states such as three-quarters and half-wrapped + bottom (Supplementary Movies 5-7). These binding modes occur in approximately equal proportions in our simulations, in rough agreement with the complex variability that we found for linear DNA [10]. The propensity for the fully wrapped state is strongly enhanced at moderate levels of positive and negative supercoiling, as this binding mode is presented exclusively by topoisomers ΔLk = -1, +1 and +2 (Supplementary Movies 4, 8 and 9). Hence, our simulations reveal that an increase in the underlying DNA curvature induced by supercoiling significantly facilitates DNA-shape readout by IHF, promoting greater wrapping around the protein compared with relaxed DNA.
We find that readout variability increases at higher superhelical densities (Fig. 3): the most negatively supercoiled topoisomer (ΔLk = -2) presents a different binding mode in each replica (see Supplementary Movies 1-3), and the most positively supercoiled topoisomer behaves similarly. In addition, high torsional stress promotes local defects such as denaturation bubbles [60,25]. These defects are associated with a wider ensemble of possible structures, because they occur stochastically at multiple sites [61,60] and act as flexible hinges, allowing stress release and significant structural readjustments [35]. We observe the emergence of denaturation bubbles in all replica simulations of topoisomer ΔLk = -2 (see Fig. 3), which presents a superhelical density close to that steadily maintained in most live bacteria (σ = -0.067) [24,25]. Because the extent of supercoiling differs widely between chromosomal regions [62], we anticipate that the observed variability is present in vivo. In fact, the dependence of the DNA-IHF configuration on supercoiling seems to be exploited by several biological processes, such as replication initiation [63], phage Mu transcription [64] and Tn transposition [65], as the role IHF plays in them is conditioned by the level of supercoiling. For example, IHF is transformed from an activator to an inhibitor of the Mu operator when the DNA is altered from relaxed to negatively supercoiled [64]. We argue that the modulation of IHF-DNA binding modes by supercoiling revealed in our simulations could cause such a change in the protein's role through an alteration of the resultant DNA architecture.
The effect of IHF on minicircle compactness and twist-writhe partition
Our simulations show that IHF globally compacts relaxed DNA loops (see Fig. 4A), in agreement with previous gel electrophoresis on minicircles, where mobility was accelerated in the presence of IHF, indicating a reduction in the hydrodynamic radius [37]. We observe that this effect is proportional to the level of wrapping around the protein: the first replica of topoisomer ΔLk = 0, where the DNA is fully wrapped, presents the strongest reduction in the radius of gyration, compared with the second replica, where the DNA is three-quarters wrapped, and the third, where the DNA is only half wrapped (Fig. 4A). As the degree of supercoiling increases in either direction, this compaction effect becomes superfluous, as the DNA naturally becomes rod-like (see Figs. 3 and 4). An exception to this is the ΔLk = +1 topoisomer, which remains predominantly open in the absence of IHF and becomes substantially compacted upon protein binding (Fig. 4A).
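The compactness measure used for these comparisons is the standard mass-weighted radius of gyration; a minimal helper, with the array layout assumed rather than tied to any particular analysis package:

```python
# Radius of gyration: RMS distance of atoms from the center of mass,
# optionally mass-weighted; coords is an (n_atoms, 3) array.
import numpy as np

def radius_of_gyration(coords, masses=None):
    m = np.ones(len(coords)) if masses is None else np.asarray(masses)
    com = (m[:, None] * coords).sum(axis=0) / m.sum()   # center of mass
    sq = ((coords - com) ** 2).sum(axis=1)              # squared distances
    return np.sqrt((m * sq).sum() / m.sum())
```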
IHF also brings a significant change to the twist-writhe partition of this topoisomer, which has the effect of correcting the asymmetry between positively and negatively supercoiled DNA (see Fig. 4B and C). In naked DNA, negative supercoiling is associated with more writhed structures than equivalent amounts of positive supercoiling (Fig. 3) [30]. However, IHF appears to correct this asymmetry by shifting the writhe of the ΔLk = +1 topoisomer in the positive direction. Because twist at the binding site (Fig. 4D) cannot explain the altered twist-writhe balance, we hypothesize that this effect is due to IHF-mediated bends, which stimulate writhed apex-like structures (Fig. 3), enabling twist relaxation. Finally, we relate the twist-writhe variability observed for topoisomer ΔLk = -2 (Fig. 4B and C) to the presence of DNA defects (Fig. 3). Replica 2 presents a bigger denaturation bubble than the other two replicas (Fig. 3), which causes extremely low twist values and, as a result, a considerable moderation in writhe (Fig. 4).
In summary, our simulations reveal that IHF compacts DNA loops almost as much as supercoiling does, its action being especially significant on relaxed and moderately overtwisted DNA (when bare DNA is mainly in an open conformation) and becoming redundant as torsional stress increases. Hence, our results fit with the idea of IHF being a 'supercoiling relief' factor [66], whereby DNA supercoiling can be functionally replaced by IHF binding. This effect has been described in phage Mu transcription [66] and Tn transposition [65], along with supercoiling becoming a backup for IHF in recombination [39] and CRISPR-Cas processes [41].
IHF restrains under- or overtwisted DNA depending on supercoiling direction
In the presence of IHF, our simulations reveal that the binding site presents lower or higher values of twist (by between 0.5 and 1 helical turn) compared to relaxed DNA, depending on whether the complex is formed under negatively or positively supercoiled DNA, respectively (Fig. 4D). The more extreme values of twist in topoisomers bound to IHF versus unbound are due to the fact that the DNA wraps around the protein at the beginning of our simulations, while the minicircles are still writhing, so most of the torsional stress is still in the form of molecular twist. In this respect, our simulations illustrate the situation of DNA being actively supercoiled and simultaneously recognized by proteins, which is physiologically relevant, as chromosomes are constantly transcribed and manipulated in vivo [67].
To understand the origin of this effect, we examined the structures in detail and observed a considerable amount of heterogeneity in how DNA wraps around the protein under different levels of supercoiling (Figs. 5 and S4). These conformational adjustments, which mainly consist of changes in molecular twist and groove dimensions (Figure S5), induce the protein to interact with different nucleotides, pinning the double helix in distinct orientations and thus constraining supercoiled DNA (Figs. 4D, 5 and S4).
(Fig. 4: radius of gyration (A), writhe (B), twist of the whole circle (C) and twist at the IHF binding site (D) of DNA minicircles with different levels of supercoiling, with IHF (black, red and blue for replicas 1, 2 and 3, respectively) and without IHF (white); replica simulations are ordered left to right as in Fig. 3. The extremely low radius of gyration of the 2nd replica of ΔLk = +3 is due to the formation of a highly compact trefoil structure (see Fig. 3).)
We also find that, on occasion, DNA supercoiling reduces the number of contact points between a DNA arm and its side of IHF from three (encompassing two major and one minor grooves) to two (a major and a minor groove) (see the two bottom structures of Fig. 5). We do not observe this conformational alteration in relaxed DNA, probably due to its natural propensity to wrap IHF optimally. Hence, our simulations reveal that the DNA conformational variability induced by supercoiling influences not only the binding modes of the complex but also its fine structural details.
Previous experiments have given an unclear picture of whether IHF constrains supercoiled DNA: while in vivo experiments found that IHF was not able to change the overall supercoiling balance in the chromosome [68,62], in vitro experiments showed that IHF did indeed have the capacity to constrain supercoiled DNA in smaller plasmids [37]. Our simulations provide an explanation for these apparently contradictory results: IHF can restrain twist at the binding site, although it cannot modify the global state, because it under- or overwinds DNA depending on the supercoiling direction. In fact, our results suggest that IHF could act as a kind of 'supercoiling buffer' through the release of stored torsional stress, by means of DNA breathing or dissociation, as the surrounding superhelical density changes.
This view is in agreement with the concept of a 'topological homeostat' associated with other NAPs such as Fis, which has been found to rescue promoters from inactivation via the formation of writhed loops when these deviate from the optimal superhelical density [69]. Our simulations suggest that IHF-induced loops could also serve this purpose of protecting promoters from supercoiling variation, apart from the more established function of facilitating their basic assembly [70]. Interestingly, this 'torsional buffer' effect has also been observed in eukaryotes through the reorganization of nucleosome fibers as a function of DNA twist [71]. We thus point towards a general need across species for cushioning mechanisms that can protect against the supercoiling imbalance generated by crucial cellular activities, such as transcription and replication [72], as well as by external factors like growth stage or environmental stress [73].
IHF reduces the entropy of the DNA supercoiling conformational landscape
In the presence of IHF, plectonemes are mostly observed to form with the protein at their apices (see Fig. 3 and 6). This has the effect of significantly reducing the entropy of the minicircle conformational landscape, relative to the case in which no protein is bound (Fig. 6). We observe that the conformational distribution of the DNA minicircles is significantly broader in naked DNA, as the apex of the plectoneme can be located in multiple positions. In the presence of the protein, the ensemble of conformational states is shifted towards a unique folded state, positioning the IHF at the apex.
We can quantitatively estimate the cost of the entropic reduction by using S = kB ln(W), where kB is the Boltzmann constant and W is the number of possible states. If we assume IHF folds the DNA into one state, compared with the 168 possible in naked DNA (an apex of the plectoneme can be pinned at each bp along half of the minicircle), then the entropic reduction is approximately 5.1 kBT, or 3 kcal/mol at 300 K. If we consider that not all plectoneme positions are equally probable along the naked minicircle (some conformations are more favorable than others, see Figs. 3 and 6), we should instead reduce the number of states to 50% or 25% of this value. This gives entropic penalties of around 4.4 kBT (2.6 kcal/mol) and 3.7 kBT (2.2 kcal/mol), respectively, which are still large enough to be overcome by thermal fluctuations of bare DNA. This entropic simplification could be even larger, as IHF could have the capacity to organize longer DNA loops containing higher levels of inherent conformational variability.
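These estimates are easy to reproduce; a quick check of the arithmetic (the kcal/mol conversion factor is the standard value of kBT at 300 K, not taken from the paper):

```python
# Entropic cost of pinning the plectoneme: dS = kB * ln(W), in units
# of kBT, with a conversion to kcal/mol at 300 K.
import math

KCAL_PER_KBT_300K = 0.596           # ~kB*T at 300 K in kcal/mol

for states in (168, 84, 42):        # all, 50% and 25% of plectoneme positions
    dS = math.log(states)           # entropy change in units of kB
    print(f"W = {states:3d}: {dS:.1f} kBT ~ {dS * KCAL_PER_KBT_300K:.1f} kcal/mol")
# -> 5.1, 4.4 and 3.7 kBT (3.0, 2.6 and 2.2 kcal/mol), matching the text
```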
Overall, our simulations support the view that the function of IHF consists basically of organizing DNA into unique conformations in order to facilitate the types of genetic transactions in which the protein is involved. Interestingly, a similar plectoneme-pinning effect has also been detected in damaged DNA [74,75], showing that local changes in DNA curvature and flexibility are key to regulating the folding of supercoiled loops. This, together with the fact that IHF can be functionally replaced by other DNA-bending proteins [70], suggests that the positioning of plectonemes might be a general principle for this type of architectural protein. However, it remains an open question for future studies whether other proteins can reduce DNA conformational variability to the same degree, as hardly any induces such a strong bend on DNA.
IHF-mediated bridging divides DNA into topological domains
A DNA-IHF-DNA bridge involving additional contacts between distal DNA and the "bottom" of the protein was observed to form spontaneously in replica 3 of the most positively supercoiled minicircle (ΔLk = +3) (see Supplementary Movie 12). This bridge results from nonspecific interactions between basic amino acids and the negatively charged DNA backbone (see Figs. 2 and 3). This supports our previous findings indicating that such bridges are both possible and energetically favorable, and that specific recognition can be simply modulated or extended via additional electrostatically driven interactions between the protein and the DNA [10].
(Fig. 5: representative structures of IHF-bound supercoiled DNA; see Figure S5 for all replicas. The complete DNA sequence is included, with the consensus binding site underlined and the most conserved positions in bold; the few CG bp, highlighted in red, serve as rulers to compare DNA orientation relative to the IHF sides. The two bottom structures reveal variability in the supercoiled DNA fully wrapped around the protein, with sizable changes in groove dimensions (right) and a reduction in contact points (left). Color scheme and orientation as in Fig. 1: the α subunit in mauve, the β subunit in turquoise, the A-tract always placed to the left and the consensus positions to the right.)
The observation of this bridge in the most supercoiled minicircle suggests some relationship between bridge formation and supercoiling, which we explain as the result of the proximity of distal DNA sites that are far apart in torsionally relaxed DNA [40,76,77]. In this regard, DNA bridges involving secondary nonspecific recognition sites have also been identified in supercoiled DNA for other bacterial proteins such as topoisomerase IB [34] and ParB [78]. We think IHF needs especially high supercoiling levels to form DNA bridges (|σ| ≥ 0.095 or |ΔLk| ≥ 3, Table 1) because it naturally bends DNA. Under extreme supercoiling conditions, DNA can stochastically bend and melt at a variety of points [25], providing the opportunity to avoid protein wrapping and thus to establish a bridge.
The formation of an IHF-mediated DNA bridge in a minicircle results in two closed loops. Measuring the writhe in both of these loops over time (Fig. 7) reveals no evidence of writhe passing between the loops, consistent with the formation of two isolated topological domains. Furthermore, the writhe is not evenly distributed: while the larger loop accounts for 76% of the minicircle's contour length (255 bp), it holds 90% of the total writhe. That this asymmetry was not corrected by the diffusion of writhe into the smaller loop is further evidence for the separation of topological domains.
This effect can be quantified by calculating the correlation coefficients between each pair of time series: if writhe regularly passed between the two loops, one would expect the two datasets to be negatively correlated, with R² close to 1. In fact, the calculated value is R² = 0.0041, indicating that no correlation exists between the two and that IHF is therefore demonstrably dividing the DNA minicircle into two separate topological domains. For comparison, the R² values for the correlation of the overall writhe with the large and small loops are 0.75 and 0.14, respectively, indicating, as expected, that the larger loop has a greater influence on the total writhe and that changes within both loops collectively explain almost all of the change in the minicircle's overall writhe.
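A sketch of this test on synthetic writhe traces follows; the series below are random stand-ins, but with real per-loop writhe output the same two calls yield the reported R² values:

```python
# Topological-domain test: correlate the writhe time series of the two
# loops. Strong coupling would give R^2 near 1 (with negative R);
# independent domains give R^2 near 0.
import numpy as np

rng = np.random.default_rng(1)
wr_large = rng.normal(-2.0, 0.15, 1000)        # hypothetical large-loop writhe
wr_small = rng.normal(-0.2, 0.10, 1000)        # hypothetical small-loop writhe

r = np.corrcoef(wr_large, wr_small)[0, 1]
print(f"loop vs loop: R^2 = {r**2:.4f}")       # ~0 for independent loops

r_total = np.corrcoef(wr_large + wr_small, wr_large)[0, 1]
print(f"large loop vs total: R^2 = {r_total**2:.2f}")
```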
Finzi and coworkers have already shown that protein-mediated DNA bridges have the capacity to establish independent topological domains that constrain variable amounts of supercoiling [79,80]. This was observed with specialized loop-mediating proteins such as the CI [79] and lac [80] repressors, where each DNA molecule is attached to the bridging protein by means of specific interactions. Here, our simulations provide atomic insight into this effect and reveal that a single bridge is sufficient to create a topological boundary, even when it is locked via nonspecific interactions. However, as this type of binding is weaker than specific recognition, it will probably present shorter lifetimes and, as a consequence, less capacity to define topological domains.
Conclusions
By performing all-atom simulations, we have provided, for the first time, atom-level insight into the interplay between DNA supercoiling and DNA-shape protein recognition (see Fig. 8). We observe that changes in the intrinsic curvature of circular DNA facilitate its bending around IHF and result in the appearance of new binding modes not observed in relaxed linear DNA [10]. We also show that these effects are further enhanced by supercoiling. We anticipate that the 'active role' of DNA [36] detected here in driving protein interaction (Fig. 8A) will be applicable to other systems relying on indirect recognition, in which DNA is heavily deformed, including other NAPs and eukaryotic chromatin-binding proteins.
As well as quantifying the influence of supercoiling on IHF binding, we also demonstrate the effect of IHF binding on the topological organization of DNA by showing that IHF strongly and reliably controls the position of plectonemes (Fig. 8B). The protein also acts as a 'supercoiling relief' factor [66,65] by inducing global compaction of relaxed DNA almost to the same extent as supercoiling does. We anticipate that this capacity for compacting DNA and pinning plectonemes might be general to other DNA-bending proteins, although the effect is probably weaker for them, as barely any other protein produces a U-turn bend like IHF does.
Due to the influence of DNA conformation on indirect recognition, IHF restrains under- or overtwisted DNA, depending on whether the complex is formed under negatively or positively supercoiled DNA. This effect suggests that the protein could act as a 'supercoiling buffer' by increasing or decreasing the amount of constrained supercoiled DNA as the neighboring superhelical density changes (Fig. 8C). We hypothesize that IHF-induced loops could shield a supercoiling steady state at promoters to protect their expression, as has been demonstrated for other NAPs such as Fis [69]. Because eukaryotic chromatin fibers also present the capacity to homeostatically regulate DNA torsion [71], we propose that supercoiling-buffering mechanisms might be essential across species to protect genome functionality from imbalances in superhelical stress.
Additional evidence [10] is also provided for DNA bridging by IHF via a secondary nonspecific interaction driven by positively charged amino acids at the "bottom" of the protein (Fig. 8D). This is only detected at extreme levels of supercoiling, because bending and melting occur stochastically at different points on the DNA, avoiding the folding of the DNA arms around the protein and thus leaving the key amino acids free. By combining the current results with our previous publication [10], we hypothesize that IHF-mediated bridges are feasible when DNA strands are nearby (i.e., at high DNA supercoiling levels or high DNA and counterion concentrations), as well as at weak IHF binding sites where the open DNA state is significantly populated. This is probably of significance to a number of biofilms and to nucleoid compaction at the growth stage when IHF is most abundant. We finally demonstrate that this bridging, even though it is based on nonspecific interactions, has the capacity to divide the DNA into two distinct topological domains.
In essence, the present study points to a collection of observations derived from the influence that DNA structure and dynamics exert on protein recognition based on indirect readout. This effect becomes more evident when DNA is under superhelical stress, as this significantly changes the DNA configurational energy landscape. Because this study examines DNA supercoiling within ranges observed in vivo, we expect our findings to be relevant in the living cell. The combination of these effects provides a biological mechanism to control DNA compaction, plectoneme positions, supercoiling and chromosome boundaries, making IHF a valuable tool for the regulation of genes in complex pathways, as has been detected at the whole-genome level [42]. We anticipate that this multifaceted mode of action might not be exclusive to IHF but could constitute a common principle of architectural proteins responsible for the organization of chromosomes, in both prokaryotes and eukaryotes, and, more generally, of proteins that recognize DNA through alterations in its shape.
Data availability
All relevant data is included in the main manuscript, the supplementary material and the University of York Data Repository (DOI 10.15124/dfc206ca-f6e1-43af-b677-8dd316d3dcf0).
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 8,317.4 | 2022-03-31T00:00:00.000 | [
"Biology"
] |
Multi-nuclear NMR of axially chiral biaryls in polypeptide orienting solvents: spectral discriminations and enantiorecognition mechanisms
Due to the importance of axially chiral biaryl derivatives as chiral auxiliaries and/or ligands for asymmetric synthesis, as well as their structural role in bioactive natural products, continuous efforts have been undertaken to propose efficient methods for their atropo-selective synthesis. As a consequence, it becomes crucial to propose robust and reliable analytical tools able to discriminate the spectral signals of atropisomeric enantiomers in order to evaluate the enantiomeric excesses of mixtures. In this work, we show how several multi-nuclear 1D/2D-NMR techniques using homopolypeptide chiral liquid crystals as aligning solvents can provide a panel of analytical possibilities (through differences of chemical shift anisotropies, dipolar and quadrupolar residual couplings) to spectrally discriminate the enantiomers of a large collection of trisubstituted axially chiral biphenyls. Approaches involving ³¹P, ¹³C and ¹H 1D- or 2D-NMR experiments at natural abundance levels are explored. Among noteworthy results, the first examples of spectral enantioseparations using ³¹P nuclei as the nuclear probe are reported. Finally, the roles of electronic factors and shape anisotropy in the efficiency of the chiral discrimination mechanisms are examined and discussed. Molecular modeling calculations were carried out to establish the electronic profiles of these analytes in order to understand and rationalize the ¹³C-{¹H} NMR results.
Introduction
Axially chiral biaryl derivatives possess a peculiar stereochemical motif able to generate a couple of stereoisomers. This stereogenic structural motif is present in various potentially bioactive natural compounds and exhibits a wide range of biological properties.1 For instance, one can mention the well-known vancomycin (a clinically used antibiotic glycopeptide)2 or steganacin (a cytotoxic tubulin-binding dibenzocyclooctadiene lignan).3 In fact, the biaryl scaffold is a privileged structure for pharmaceutical research, as its incorporation frequently assures high entry rates.4 In addition, the stereogenic axes provide rigid molecular frameworks for highly efficient tools in asymmetric synthesis.5 Concomitantly, atropisomeric C1-symmetric biaryls play an important and effective role as chiral auxiliaries and/or ligands for asymmetric synthesis. Consequently, continuous efforts have been undertaken by organic chemists to develop efficient methods for the atropo-selective synthesis of ligands based on the biphenyl, binaphthyl, or other biaryl backbones.6 The conformational stability of bridged biaryls can be strongly increased by the incorporation of ortho-substituents, the associated rotational energy barrier depending primarily on their number and their bulkiness.7 Generally, ortho-trisubstituted biphenyls show no stereo-labile properties, and hence no rapid enantiomerization at room temperature is expected (see Fig. 1a).8 So far, both chiral supercritical fluid chromatography and chiral gas chromatography have been the main methods used to separate the enantiomers of such peculiar chiral compounds.8,9 Although adequate in numerous cases, chromatographic approaches present some well-known specific drawbacks for systematic implementation (the price of chiral columns, for instance). Furthermore, the determination of experimental conditions leading to enantiomeric resolution is sometimes highly time-consuming. As a consequence, proposing (simple) analytical alternatives involving other techniques, such as NMR spectroscopy, to chemists is a valuable task.
In the past, liquid-state NMR methods involving mainly chiral derivatizing agents (MTPA, Mosher's acid) in combination with or without lanthanide shift reagents have been proposed to discriminate the enantiomers of (bridged or unbridged) chiral biaryl atropisomers.10 Although successful, these approaches require the presence of accessible reactive groups (-COOH, -OH, -NH2, ...) to generate diastereoisomers (in situ or otherwise).
This prerequisite can be overcome by using NMR in lyotropic chiral liquid crystals (CLC).12-14 Compared to classical NMR approaches using chiral derivatizing or solvating agents, NMR in CLC requires no specific functional groups in the analyte,15 while all magnetically active nuclei (even at very low natural abundance levels) can provide effective probes.
A doubling of the spectral information/patterns for a given nuclear site therefore indicates that the enantiorecognition phenomenon occurs; it is revealed on the NMR spectrum by a difference of residual chemical shift anisotropies (CSA), dipolar couplings (D) or quadrupolar couplings (ΔνQ) (for spins I > 1/2) (see Fig. S1, ESI†).16,17 As a first example, deuterium NMR in CLC was used to analyze the intramolecular dynamic processes of chiral and prochiral deuterated ortho-disubstituted biaryls (derivatives of 1-(4-methylphenyl)naphthalene).14 In this work, we show how multi-nuclear 1D/2D-NMR using chiral anisotropic solvents (homopolypeptide CLC) can provide various analytical possibilities to spectrally separate the enantiomers of a large collection of ortho-trisubstituted axially chiral biphenyls (see Fig. 1). From an analytical viewpoint, the anisotropic NMR results will be discussed in terms of spectral enantiodiscrimination efficiency, and an attempted rationalization of the results is proposed. For this purpose, the seventeen analytes investigated have been classified into four series of structurally related molecules (I to IV) depending on the similarity of their substitution patterns, as displayed in Fig. 1b.
Synthesis
The synthesis of these atropisomeric biaryls was recently reported using an original, modular approach (see Fig. S2, ESI†).8 Briefly, it is based on (a) the preparation of ortho,ortho′-dibromobiphenyls bearing an additional substituent in the 6-position via a transition-metal-free aryl-aryl coupling (the 'ARYNE coupling'),18 (b) the regioselective introduction of an enantiopure p-tolylsulfinyl group as a traceless chiral auxiliary allowing the separation of atropo-diastereoisomers by simple crystallization, (c) the chemoselective functionalization of this auxiliary and (d) the subsequent regioselective functionalization of the remaining bromine atoms. During all these chemical transformations, the configuration of the biaryl axis is maintained, and hence no racemization was found to occur.8 The diphosphine 16 was obtained by means of catalytic C-P coupling.19
Material for oriented NMR samples
In this study, homopolypeptide CLC samples were composed of poly-γ-benzyl-L-glutamate (PBLG), purchased from Sigma and dissolved in chloroform.11,20 The degree of polymerization of the PBLG is 743 (MW = 162,900 g mol⁻¹). The mass of solute in the samples varies from 19 to 100 mg, while the molar amounts range from 2.1 × 10⁻⁶ (sample 14) to 1.8 × 10⁻⁴ mol per enantiomer (sample 17). Table S1 (ESI†) lists the exact composition of samples 1 to 17 and whether the chloroform was protonated or deuterated. The preparation of (sealed) anisotropic NMR tubes and other practical aspects have been reported in previous papers (also see ESI†).10,12,13
NMR spectroscopy
¹³C, ¹³C-{¹H} and ³¹P-{¹H} 1D/2D-NMR spectra were recorded on routine 9.4 T Bruker (Avance I) NMR spectrometers equipped with either a 5 mm BBO, TBI or QXO probe. Unless otherwise specified, the sample temperature was set to 298 K. NAD-{¹H} 2D-NMR spectra were recorded on a 14.1 T Bruker (Avance II) spectrometer equipped with a 5 mm ²H-selective cryoprobe,17,21 and the WALTZ-16 CPD sequence was used to decouple protons (0.5 W).11 Specific experimental details are given in the figure captions.
Molecular modeling and DFT calculations
Geometry optimizations and electronic structure determinations were carried out with the Gaussian 09 program running on the "IDA" cluster of the University of Paris-Sud.22 Density functional theory (DFT) with Tomasi's self-consistent reaction field (SCRF) polarized continuum model (PCM) of solvation23 was used in all calculations to describe the solvent (chloroform) implicitly, both for the energy minimizations and for the description of the orbitals. All computations were performed with the hybrid B3LYP method, in which exchange and electronic correlation are described by the Becke24 and Lee-Yang-Parr25 functionals, respectively. Relativistic effective core potentials (ECP) with the valence double-ζ quality basis set LANL2DZ were used to describe the electrons of the heavy atoms (Br and Cl).26 The standard 6-311G(d,p) basis sets were used for the orbitals of the remaining H, C, O and P atoms. The local-minimum character of each optimized structure was confirmed by calculating its harmonic vibrational frequencies; none of the predicted vibrational spectra has any imaginary frequency (data not shown), implying that the optimized geometry of each molecule under study lies at a local minimum on the potential energy surface. Electronic properties such as the molecular electrostatic potential (MEP), the frontier HOMO-LUMO orbital energies and the Mulliken atomic charges were obtained at the same level of theory.
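The paper does not reproduce its input decks, but a minimal sketch of what a Gaussian 09 input consistent with the stated protocol (B3LYP, PCM chloroform, LANL2DZ ECP/basis on Br and Cl, 6-311G(d,p) elsewhere) might look like is given below; the generator, file names and the two-atom coordinate block are hypothetical placeholders, not taken from the original work.

```python
# Sketch of a Gaussian 09 input generator matching the stated level of theory.
# All names and coordinates below are illustrative placeholders.

ROUTE = "#P B3LYP/GenECP Opt Freq SCRF=(PCM,Solvent=Chloroform)"

def gaussian_input(name, charge, mult, atoms):
    """Build an input deck; atoms is a list of (symbol, x, y, z) tuples."""
    coords = "\n".join(f"{s:2s} {x:10.6f} {y:10.6f} {z:10.6f}"
                       for s, x, y, z in atoms)
    light = "C H O P 0\n6-311G(d,p)\n****"    # all-electron basis
    heavy = "Br Cl 0\nLANL2DZ\n****"          # double-zeta valence basis
    ecp = "Br Cl 0\nLANL2DZ"                  # matching effective core potential
    return (f"%chk={name}.chk\n{ROUTE}\n\n{name}\n\n{charge} {mult}\n"
            f"{coords}\n\n{light}\n{heavy}\n\n{ecp}\n\n")

# Placeholder fragment only; a real biaryl geometry must be supplied.
print(gaussian_input("biaryl_sketch", 0, 1,
                     [("C", 0.0, 0.0, 0.0), ("Br", 0.0, 0.0, 1.9)]))
```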
Results and discussion
For a global view of the results, Table 1 (see also Table S2, ESI†) summarizes the essential data for the analytes and the sets of experimental results. As all results were obtained under very similar experimental conditions (T ≈ 298 K, w/w of PBLG of 14%), we then follow with their interpretation in terms of chiral discrimination mechanisms (denoted CDMs for short). In particular, for the 13C NMR results, attempts to correlate the number of discriminated 13C sites (denoted NDS(13C)) with the possible solute-PBLG electrostatic interactions, in combination with molecular shape recognition effects, will be proposed and discussed.
1H 1D-NMR spectroscopy
Even for small molecules, the number and the magnitude of the (short- and long-range) 1H-1H residual dipolar couplings significantly increase the linewidths, leading to rather low-resolution 1H spectra from which no fine structures clearly emerge. Some exceptions can be found for molecules possessing isolated methyl groups; compounds 4 to 6, 10, 11 and 17 are typical examples. Contrary to isotropic 1H NMR, an uncoupled methyl group exhibits a triplet structure in a LC (instead of a single resonance) due to the intramethyl 1H-1H dipolar couplings (see Fig. S3, ESI†). In a CLC, two triplets with different splittings (|3D_HH(S)| ≠ |3D_HH(R)|) centered on very close 1H chemical shifts (due to a small difference of 1H CSA) are generally detected.
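As a numerical illustration of this pattern (not part of the original analysis): an A3 methyl group in a weakly ordering phase gives a 1:2:1 triplet whose line spacing is 3|D_HH|, so the two enantiomer subspectra can be sketched as stick spectra. The chemical-shift offsets and splittings below are invented for the example.

```python
# Stick-spectrum sketch of a methyl 1H triplet in a liquid crystal:
# an A3 spin system gives three lines (1:2:1) spaced by 3*|D_HH|.

def methyl_triplet(center_hz, d_hh_hz):
    """Return (position, relative intensity) sticks for one enantiomer."""
    s = 3 * abs(d_hh_hz)
    return [(center_hz - s, 1), (center_hz, 2), (center_hz + s, 1)]

# Two enantiomers: slightly different shifts (small 1H CSA difference)
# and different |3D_HH| splittings (illustrative values, in Hz).
for tag, nu0, three_d in [("R", 1000.0, 60.0), ("S", 1001.5, 52.0)]:
    print(tag, methyl_triplet(nu0, three_d / 3))
```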
For analytes 6 and 11, a single triplet (with |3D_HH| = 58 and 52 Hz, respectively) is observed, thus revealing no resolved discrimination through a difference of D_HH. Two reasons may explain this absence of enantiodiscrimination: (i) the rather low sensitivity of D(1H-1H) to a difference of orientational ordering (compared to the 2H quadrupolar interaction, for instance); and (ii) the complex conformational dynamics of ligand Y (here, up to three rotors), which generally averages down the order parameters of each internuclear vector along the chain and subsequently reduces the spectral enantiodifferences. For solutes 5, 10 and 17, a symmetric 1H spectral pattern of six lines is observed for the methyl group bound to the ring (see Fig. S3, ESI†). This structure can be analyzed as either: (i) a doubled triplet, if the methyl group is dipolarly coupled with one of the aromatic protons; or (ii) two triplets centered on two 1H chemical shifts due to a surprisingly large 1H CSA. Two approaches involving the components of the mixture can be proposed to assess the origin of the spectral pattern: recording the spectrum of the mixture in an achiral mesophase,27 or using the enantiopure compound (when available) in the CLC. Irrespective of the method used, it appears here that the doubling of triplets originates from the dipolar coupling with the ortho-position aromatic proton, as exemplified in the case of 10 (see Fig. S1, ESI†). Spin-manipulation-based alternatives for simplifying the coupling pattern of the 1H signals should be possible, but they are beyond the scope of this paper and have not been explored.

Table 1 Compilation of various solute parameters and the associated number of discriminated sites (racemic series) using proton-decoupled 2H, 13C and 31P NMR results in PBLG

31P-{1H} 1D-NMR spectroscopy

Proton-coupled 31P spectra exhibit numerous 1H-31P RDCs, which obscure the spectra. When protons are decoupled, the spectra are significantly simplified. Contrary to 12, 13 and 15, the presence of two 31P resonances for solute 14 indicates that the enantiomers are discriminated on the basis of 31P CSA differences (|Δσ| = 14 Hz, Δν1/2 = 3.5 Hz) (see Fig. 2a).
Surprisingly, the monophosphine oxide biaryls 13 and 15 are not spectrally discriminated, despite the expected increase of the electronic shielding anisotropy of the phosphorus nucleus due to the presence of the oxygen atom, which produces a larger 31P CSA that should (a priori) lead to larger enantiodiscriminations. Variable-temperature 31P NMR experiments (over a range of 30 K) did not yield enantiodiscrimination for either 13 or 15. This strongly suggests that the interconversion barrier between enantiomers is substantially increased because free rotation of the aryls is sterically hindered by the presence of the oxide, and thus only one enantiomer is favored.
Finally, the case of the chiral biaryl-based diphosphane 16 is rather peculiar. Indeed, in the liquid state this molecule possesses two anisochronous 31P atoms (at room temperature) resonating at two distinct chemical shifts (δ(31P_A) = −12.0 ppm and δ(31P_B) = −14.1 ppm) and mutually coupled, as first evidenced in 2011.28 This spin-spin coupling originates from a "through-space" scalar coupling (noted J(31P_A-31P_B) = 22.7 Hz) and not from the intramolecular five-bond connectivity, which would lead to a small scalar 5J(31P_A-31P_B) coupling. The spectral assignments of the P_A and P_B atoms derive from the analysis of the 1H-31P 2D HMBC, 31P 2D J-resolved and 1H-1H 2D COSY experiments shown in the ESI† (Section SIII). While two doublets (an AX spin system) are observed in the isotropic 31P-{1H} spectrum, four resonances (Δν1/2 < 0.8 Hz) are detected for each 31P site in the racemic series (see Fig. S2a and S4, ESI†). This doubling of lines indicates enantiodiscrimination. Three spectral situations, corresponding to a difference of 31P CSA, of 31P-31P RDC, or of both contributions, can explain the presence of two pairs of doublets for each 31P site (see Fig. S5, ESI†). The assignment of the 31P resonances was assessed by comparing the 1D spectrum of 16 with one recorded on an enantioenriched mixture (enriched in the R isomer, ee = 51.3%) under similar experimental conditions (sample composition and temperature). As seen in Fig. 2b, the peak intensity difference between the enantiomers allows their associated signals to be assigned unambiguously. Various homonuclear 2D experiments qualitatively confirm the presence of each enantiomer, regardless of the absolute configuration of the NMR signals (vide infra and ESI†). According to the enantio-assignment made, the analysis of the 31P-{1H} 1D spectrum of 16 indicates that the total couplings between the two 31P nuclei for each isomer, |T_A(31P-31P)| and |T_B(31P-31P)| with T = J + 2D, are very close, 18.7 and 19.1 Hz respectively, while each doublet is shifted by 3.9 Hz (31P_A) and 4.5 Hz (31P_B). Assuming a negative value for T_A or T_B, the magnitude of D(31P-31P) is equal to −20.3 and −20.5 Hz for R and S, respectively. Conversely, if T(31P-31P) is positive, D(31P-31P) becomes equal to −2 or −1.8 Hz, respectively. Of the two anisotropic contributions (D and CSA), only the 31P CSA (ΔΔσ = 3.9 Hz and 4.5 Hz) is the relevant NMR interaction here that can ultimately be exploited efficiently to evaluate the ee. As the CSA is directly proportional to the magnetic field strength, operating with higher-field spectrometers should guarantee larger discriminations. Interestingly, the results obtained here are, so far, the first two examples of enantiodifferentiation using 31P-{1H} NMR; previous studies using 31P-{1H} or 31P NMR as the analytical technique had failed to discriminate enantiomers of phosphorus compounds.29
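The sign ambiguity just described follows from T = J + 2D and can be checked in a couple of lines; J is taken as the isotropic value +22.7 Hz, and both sign branches of T are evaluated (a small numerical sketch, not an analysis tool).

```python
# D(31P-31P) from total couplings T = J + 2D, for both sign choices of T.
J_PP = 22.7  # isotropic through-space J(31PA-31PB), Hz

def dipolar_coupling(total_T, j=J_PP):
    return (total_T - j) / 2.0

for isomer, t_abs in [("R", 18.7), ("S", 19.1)]:
    print(isomer,
          "T > 0:", round(dipolar_coupling(+t_abs), 2), "Hz;",
          "T < 0:", round(dipolar_coupling(-t_abs), 2), "Hz")
# T > 0 reproduces the quoted D = -2.0 / -1.8 Hz; T < 0 gives values
# near -20.7 / -20.9 Hz, close to the -20.3 / -20.5 Hz quoted above.
```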
1H-decoupled 13C 1D-NMR

Although less sensitive than 1H or 31P NMR, anisotropic 13C-{1H} NMR at natural abundance is an excellent and competitive method to discriminate enantiomers, in particular when sp2-hybridized carbon atoms are present in the analyte. The gain in sensitivity of commercially available cryogenic probes with respect to conventional ones (up to a factor of 4 to 5) allows the acquisition times of qualitative 13C-{1H} NMR experiments to be reduced to less than 1-2 hours,30 even when working with small amounts of solute or with analytes of high MW. Here, with a sufficient amount of solute available (10-30 mg), all 13C-{1H} 1D spectra were recorded at a moderate magnetic field (9.4 T) using acquisition times in the range of 3-10 hours, depending on the enantiomeric mole number (from 2.27 × 10−5 to 18.20 × 10−5 mol) and the desired signal-to-noise ratio (SNR).
All experimental data related to the number of discriminated 13C sites in PBLG are presented in Table 1, while Table S3 (ESI†) lists the values of all δ(13C) and the CSA differences (ΔΔσ) for each of them.
For the derivatives of series I and II, the 1D analysis of the 13C-{1H} NMR spectra in CLC can be easily performed by counting the number of 13C lines and comparing with the isotropic 13C-{1H} NMR spectra, where no discrimination occurs (see Fig. S6, ESI†). For all of them, several spectral enantiodiscriminations occur at both the CH and the quaternary aromatic carbon atoms, but the number of differentiated 13C sites varies from one (solute 1) to twelve (solute 2), with spectral differences from 1 Hz (the limit of discrimination) to 14 Hz. Note that carbon atoms belonging to the Y substituent (see Fig. 1) can provide further potential 13C discrimination sites, as in the cases of 4, 9 and 10, for instance (see Table S3, ESI†).
Two illustrative examples of 13C-{1H} 1D spectra (solutes 2 and 8) are given in Fig. 3. The variations of δ(13C) between 2 and 8 (and between each solute) result from the well-known electronic effects (+I, −I and +M, −M) of the various substituents on the rings (see also Fig. S4, ESI†). For 2 and 8, about 90% of the 13C sites are discriminated, affording multiple choices for measuring the enantiomeric excess (ee) of a mixture. From a quantitative viewpoint, however, the choice of the best sites is clearly governed by three parameters: (i) the spectral frequency differences between the enantiomer signals; (ii) the SNR; and (iii) the overlap of the analyte signals with solvent signals. Typically, for 2 and 8, the C-4 atom (SNR ≈ 80-100) provides the best site for quantitative purposes when the PBLG/CHCl3 chiral system is used. Carbon atom C-9 could also provide a discrimination site, but its overlap with the very broad resonances of the aromatic PBLG signals complicates the evaluation of the enantiomeric purity. Finally, the quaternary carbons C-6 and C-8 (for 2) or C-6 (for 8) also provide large separations (≈11 Hz), but their low SNR (≈30-35) excludes them from an accurate determination of large ee's. The overall inter-spectral analysis of all analytes indicates that biaryls with an aldehyde (3 and 9), ether (4) or ester (6 and 11) substituent possess numerous enantiodiscriminated sites (60 to 80%), but these sites show smaller spectral differences (1 to 3.5 Hz).
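Once a well-resolved site such as C-4 has been selected, the ee follows directly from the two integrated peak areas; the helper below is a generic sketch with placeholder areas, not values from the paper.

```python
# Enantiomeric excess from the integrated areas of the two enantiomer
# components of a single discriminated 13C site (placeholder areas).

def ee_percent(area_major, area_minor):
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

print(ee_percent(97.5, 2.5))   # -> 95.0, i.e. ee = 95%
print(ee_percent(50.0, 50.0))  # -> 0.0 for a racemic mixture
```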
For the methylated biaryls (5 and 10), the ratio of enantiodiscriminated sites does not exceed 50%, while the lowest number of sites (<25%) is obtained for the iodo derivatives (1 and 7). The 13C-{1H} 1D-NMR spectra of 7 and 9 are given in the ESI.† Similar experimental conditions (co-solvent, w/w of polypeptide and temperature) for all solutes allow 13C spectral comparisons. Furthermore, the 13C CSA is only weakly sensitive to small variations of T or of the sample concentration.
Due to the diversity of factors/effects contributing to the CDMs (molecular shape, electronic properties and/or conformational dynamics), establishing qualitative correlations between the enantiodiscrimination efficiency and molecular properties is far from trivial. Nevertheless, it is important to investigate them in order to rank their roles and evaluate their respective contributions to the CDMs. This is a prerequisite step toward a global insight into the phenomenon, and subsequently toward the possibility of predicting spectral results for any given analyte.
Considering the rather high degree of structural homology (ortho-trisubstituted biaryl) of the analytes in series I and II, we first attempted to simply correlate (and explain) the 13C NDS and the range of spectral separations, ΔΔσ(13C) (weak (1-3 Hz), medium (4-8 Hz) or large (>9 Hz)), with the magnitude of the global dipole moment of the molecule, μ_mol, calculated by molecular modeling. For this purpose, the dipole moments (in CHCl3) of all biaryls in their lowest-energy conformation were calculated using solvent-dependent density functional theory (SCRF-DFT) with Mulliken charge distribution analysis (see the Experimental section for details). The scalar values are listed in Table 1, while the three-axis components (x, y, z) and their vectorial representations are reported in Table S4 (ESI†). A first inspection of the molecular modelling results shows that: (i) the variation of μ_mol does not exceed 7% when Br is replaced by the chlorine atom; and (ii) the nature of the substituent Y modifies μ_mol significantly, from 2.1 D (1) to 4.2 D (4).
The comparison of the NMR results between series I and II shows rather similar outcomes (NDS(13C) and ΔΔσ(13C)), indicating that the replacement of Br by Cl in position 6 (ring A) does not strongly change the global molecular properties (dipole moment and shape) of the solute with respect to the efficiency of the CDMs. In contrast, within each series, the difference in the properties of ligand Y has a larger impact on both criteria. Except for solutes 1 and 7, which exhibit both the smallest values of μ_mol and the smallest NDS (with small ΔΔσ), the analysis of the results for the other analytes indicates that there is no simple dependency (i.e. a monotonic variation) between the two spectral criteria and the magnitude of the global dipole moment. Clearly, the largest NDS does not occur for the biggest μ_mol. This absence of direct correlation suggests that the nature of the substituent Y and its specific electronic properties (the presence of labile hydrogens able to form intermolecular hydrogen bonds (HB), or the presence of an electronegative oxygen in a carbon-oxygen double bond, for instance) play a crucial role in the efficiency of the CDM for this series of biaryls, independently of the global molecular dipole moment. Thus, the best results being obtained for the acid derivatives 2 and 8, and not for the methyl ester analogues (6 and 11), points out that the possibility of forming HB between the substituent Y and the oxygen of the carboxylate group of the PBLG side chain is of primary importance in the CDM. Schematically, the role of HB can be understood as follows. Contrary to the ordering mechanisms (mainly due to the coupling between the solute quadrupole moment and the electric field gradient of the solvent), the CDMs involve short-range intermolecular interactions that derive from the repulsive forces correlated with the size and shape (and the shape anisotropy) of the solute.31 Hence, the CDM efficiency is strongly dependent on the average distance between the solute and the PBLG chiral helix. In this context, irrespective of the magnitude of the global dipole moment, HB can be seen as a crucial local electronic interaction capable of bringing the solute nearer to the fibers (at small distances), in turn promoting better enantiodiscriminations; this effect is particularly strong when the labile hydrogen is topologically highly accessible, as in the cases of 2 and 8.
When no HB are possible, other attractive specific electronic interactions can play important roles in the CDM, in particular by reducing the average solute-PBLG distance. Although probably less efficient than HB, these (secondary) interactions then become key parameters governing the efficiency of discrimination. Clearly, the presence of an electronegative oxygen atom with accessible lone pairs (aldehyde or ester groups, for instance) appears as an important electronic parameter favoring fiber-solute electrostatic interactions (of van der Waals type). In contrast (vide infra), the CDMs are a priori expected to be much less efficient for biaryls devoid of any groups capable of promoting attractive intermolecular interactions (cases 5 and 10). This simply explains why better results are obtained for the methoxy, carbonyl or ester groups (NDS varying from 8/13 to 10/13), whereas the situation is much less favorable for the methyl group (NDS = 6/13). From the spectral enantiodiscrimination viewpoint, the case of the methoxy or aldehyde derivatives (3, 4, 9, 17) could be qualified as an "intermediate" situation, for which only enantiodiscriminations with moderate spectral differences are expected and observed.
As an illustrative example of the experimental quantification of the ee of enantioenriched chiral biaryl mixtures, Fig. 4 compares the signals of four 13C sites of 10 in racemic (top) and enantioenriched (R) mixtures (bottom). In the enantioenriched series, a single resonance is observed at the quaternary carbons (C-1 and C-7), for which the SNR is smaller (137 and 114) than at the para methine sites (C-4 and C-10) (SNR = 317 and 375). For the latter, in contrast, a very weak signal can be found at the foot of the most intense 13C signal, indicating that the mixture is not enantiopure but only enantioenriched (see Fig. 4b, right panel). Evaluation of the peak areas by deconvolution indicates an ee of about 95% (in good agreement with the chromatographic results),8 whereas the absence of signals for the minor enantiomer at the quaternary carbon atoms (C-1 and C-7) could wrongly suggest an ee of 100%. This example points out the importance of the sites selected for quantitative measurement, in particular when a reliable evaluation of the ee is necessary.

The analysis of the 13C-{1H} spectra of the molecules of series IV is analytically much more challenging, for two reasons: (i) the large number of sp2 carbon atoms (up to 24 sites) in the aromatic region ranging from 123 to 145 ppm (12 and 13); and (ii) the narrow distribution of the chemical shifts of the sp3 carbon atoms of the cyclohexyl rings in the aliphatic region (from 25 to 34 ppm) (14 and 15). For all phosphorus analytes, the determination of the δ(13C) values listed here was achieved from the analysis and intercomparison of the 13C-{1H} and 13C-{1H, 31P} 1D-NMR spectra and of the 13C-{1H} and 13C-{1H, 31P} J-modulated 1D-NMR spectra. All δ(13C) values measured at 9.4 T (and the associated |ΔΔσ|) are reported in Table S3 (ESI†). Globally, the results on the 13C enantiodiscrimination side are rather disappointing, because only very few carbon sites (with tiny spectral differences) show useful separation in either the phenyl or the cyclohexyl rings. Two reasons may explain the absence of numerous and large differentiations. The first is related to the presence of sp3 carbon atoms in the cyclohexyl groups (14 and 15), which are less susceptible to discrimination on the basis of the 13C chemical shift anisotropy (quasi-spherical electronic screening). The second could originate from the global shape of the molecule. Indeed, the presence of a biphenylphosphine (oxide or not) moiety significantly increases the size of the structure (four rings) and concomitantly leads to a more globular molecular topology (regardless of the conformational dynamics). In a simple (static) schematic view, increasing the number of (aromatic or aliphatic) rings reduces the geometrical shape anisotropy (compared to the molecules of series I and II), and in turn the efficiency of the shape recognition mechanisms, which are another key parameter in the CDM.17b To illustrate this idea, Fig. 5 shows the electronic topologies of 3 and 12, displayed in terms of the DFT-computed electrostatic potential contour plots associated with the optimized electronic structures. As seen, the surfaces of these two examples differ significantly from each other. Solute 3 shows a rather cylindrical topology (with a diameter (D) of about 8 Å and a length (L) of about 11 Å, leading to a D/L ratio of 0.72), whereas 12 has a roughly spherical topology (with D ≈ 15 Å). The former is representative of the (rod-like) topology adopted by the biaryls of series I and II (including also 17 of series IV), whose D/L ratios differ according to the nature of the X and Z substituents. In contrast, the latter (a rather spherical shape) is representative of series III (including also analyte 16 of series IV), whose D varies between 15 and 18 Å (see Table S4, ESI†). Reasonably, the shape recognition mechanisms, in which the topological anisotropy plays a substantial role, are expected to be less efficient in the second case.
1H-coupled 13C 1D-NMR

At first glance, and compared to 13C-{1H} NMR in CLC, 13C NMR might seem of limited practical interest for two reasons: (i) the complexity of the spectral pattern due to the presence of both short-range (1D_CH) and long-range (nD_CH, n = 2, 3) RDCs; and (ii) the distribution of the 13C signals over numerous lines, thus reducing the SNR, and hence the accuracy of the ee. However, this approach must be kept in the panel of tools because it can also reveal chiral discriminations. In particular, it can be a useful alternative to 13C-{1H} NMR when molecules possess few or no sp or sp2 carbon atoms. Besides, using simple heteronuclear 2D-NMR experiments (see below), it becomes possible to simplify the spectral analysis of the fine 13C-1H structures, while the measurement of peak volumes on the 2D maps allows the determination of the ee once the signal of each enantiomer has been identified.
Another potential source of useful information can be found in the analysis of the 13C signals of the substituent Y (see Fig. 1), in particular when small (isolated) groups such as methyl or methoxy are present. Indeed, such moieties can lead to very simple spectral patterns, since these are primarily governed by the direct 1D_CH coupling, which gives rise to quadruplet structures (of relative intensities 1:3:3:1). Such a spectral situation exists for 4, for which two slightly shifted quadruplets (ΔΔσ = 2 Hz) with two different total couplings (|1T_CH(A)| = 156 Hz and |1T_CH(B)| = 149 Hz) appear in the 13C spectrum. Assuming that T is positive, we obtain 1D_CH(A) = −3.5 Hz and 1D_CH(B) = −7 Hz (1J_CH = +163 Hz). A further doubling is due to one remote dipolar coupling, whose magnitude differs for each enantiomer. The differences in the fine structures of the four components of the quadruplet originate from the various combinations of lines, which also depend on the 13C CSA difference (2 Hz) and on the short- and long-range 13C-1H RDCs. The assignment of the 13C resonances shown in Fig. 6b was also supported by the analysis of the associated 13C-1H heteronuclear T-resolved 2D map. In this example, the shielded component of the quadruplet provides the best site for a quantitative measurement of the ee (see Fig. 6).
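For readers who want to picture the pattern, the line positions of such a 13CH3 quadruplet (1:3:3:1, spacing |1T_CH|) can be generated as below; the absolute 13C offsets are invented, while the 2 Hz shift difference and the 156/149 Hz couplings are those quoted for 4.

```python
# 1:3:3:1 quadruplet sticks of a 13C coupled to three equivalent protons;
# adjacent lines are separated by the total coupling |1T_CH|.

def ch3_quartet(nu0_hz, t_ch_hz):
    """Return (position, intensity) sticks for one enantiomer."""
    return [(nu0_hz + k * abs(t_ch_hz), w)
            for k, w in zip((-1.5, -0.5, 0.5, 1.5), (1, 3, 3, 1))]

# Enantiomers A/B of 4: shifts differ by ~2 Hz, |1T_CH| = 156 / 149 Hz.
for tag, nu0, t in [("A", 5500.0, 156.0), ("B", 5502.0, 149.0)]:
    print(tag, ch3_quartet(nu0, t))
```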
Heteronuclear correlation 2D-NMR approaches
Except in particular cases (as discussed previously), the analysis of proton-coupled 13C 1D-NMR spectra in CLC cannot be performed simply, and hence heteronuclear correlation 2D-NMR experiments are needed to extract the useful spectral information for quantitative purposes. During the last decade, a large panel of heteronuclear 2D experiments (involving the HSQC scheme) has been explored to extract one-bond 1H-13C dipolar couplings.32 However, the control of the 1H-13C polarization transfer efficiency (for the quantitative measurement of ee's in CLC) can be subtle and time-consuming for routine NMR users. In the framework of this study, we focused only on 2D experiments based on the well-known heteronuclear "J-resolved" schemes.33 As the T couplings replace the J couplings in CLC, the experiments were renamed "T-resolved" 2D experiments, but the pulse scheme remains identical. As expected, the 13C chemical shifts are refocused during t1, while the T(13C-1H) couplings are removed during acquisition by 1H decoupling. As the 13C-1H T couplings evolve only during half of the t1 evolution period (gated-decoupling method), the T values are scaled down by a factor of 2. Modified sequences of the basic "T-resolved" experiment might be proposed. For instance, with a view to simplifying the coupling structures in F1, a BIRD cluster can be incorporated to differentiate long-range from direct couplings.34 Additionally, the sensitivity could be improved by incorporating INEPT or DEPT pulse trains as an initial transfer step.35 Nevertheless, one can then be faced with either distorted lines or significant differences in transfer efficiency, leading to less accurate ee measurements.

Fig. 7 displays the region of the T-resolved 2D spectrum where the 1H-13C coupling patterns and the 13C chemical shifts associated with the C-10 and C-11 atoms of 2 appear in the F1 and F2 dimensions, respectively. The analysis of the map allows the relevant information for each carbon site and each enantiomer (noted A and B) to be separated. For each of them, the spectral pattern is dominated by the direct 1T_CH coupling, which is different for each enantiomer (|1T_CH(A/B)(C-10)| = 459/528 Hz and |1T_CH(A/B)(C-11)| = 106/82 Hz), as seen on the map. Interestingly, we can measure a large difference of 1T_CH (Δ1T_CH) of about 70 Hz at site C-10. As 1T_CH = 1J_CH + 2·1D_CH, the magnitude of 1T_CH(C-11) suggests that the sign of 1D_CH is negative (compared to 1J_CH, which is always positive, ranging from 150-160 Hz for aromatic carbons).36 Note that a similar spectral situation was also observed for the C-11 site of 3 (see Fig. S5, ESI†). Indeed, here again the 1T_CH(C-11) value (66/74 Hz) is smaller than 1J_CH(C-11), indicating that 1D_CH < 0. The further splittings (at C-10) and triplets (at C-11) observed on the map originate from the long-range nT_CH couplings. For both sites, the separation of the coupling patterns in F1 is facilitated by the 13C chemical shift difference between the enantiomers. The presence of two triplets (one for each enantiomer) is quite unusual but can be explained if C-11 is coupled identically to two inequivalent aromatic protons in its vicinity (62 and 64 Hz for A and B, respectively).
The large magnitudes of 1D_CH at C-10 (148 and 170 Hz), whereas 1J_CH = +164 Hz, indicate that the C10-H direction is strongly ordered. Actually, the analysis of the other carbon sites of 2 (and of 8 as well) confirms a rather strong degree of molecular alignment compared to the other solutes, leading to large (and unusual) 1D_CH values. This result suggests the involvement of HB in the orientation mechanisms of 2 (and 8), leading to a decrease of the average distance between the analyte and the fiber, and hence to an increase of the average degree of alignment of the solute. Locally, the ordering of each C-H vector (S_CH), and in turn the associated RDC value, is expected to increase.
In the framework of a crude two-site interaction model, we can simply write the S_ij order parameter as a weighted sum over two situations, corresponding to the solute being either close to the polypeptide fibers (bonded) or at a remote location (free):

S_ij = P_bonded × S_ij(bonded) + P_free × S_ij(free),   (1)

where P_bonded and P_free are the normalized population fractions of the solute (P_bonded + P_free = 1). From the NMR viewpoint, we can subsequently write that:

Obs_i = P_bonded × Obs_i(bonded) + P_free × Obs_i(free),   (2)

where Obs_i stands for the NMR observable at site i (Δσ_i, D_ij or Δν_Q(i)) associated with the detected NMR nuclei. In this very simple approach, the solute is strongly oriented when in close vicinity to the helix, and not oriented (or very weakly oriented) when distant from the helix; the respective populations and associated splittings directly depend on the strength of the interactions between the solute and PBLG.
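A minimal numerical sketch of eqns (1) and (2), with arbitrary example values: increasing the bonded population scales any averaged observable toward its strongly oriented limit, which is the behaviour invoked below for the hydrogen-bonded acids.

```python
# Two-site model of eqns (1)/(2): the measured observable is the
# population-weighted average of its "bonded" and "free" values.

def observed(obs_bonded, obs_free, p_bonded):
    p_free = 1.0 - p_bonded           # normalization: P_bonded + P_free = 1
    return p_bonded * obs_bonded + p_free * obs_free

# E.g. a quadrupolar splitting of 2000 Hz near the helix, ~0 Hz when free:
for p in (0.05, 0.25, 0.75):
    print(f"P_bonded = {p:.2f} -> {observed(2000.0, 0.0, p):7.1f} Hz")
```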
31P-31P correlation 2D-NMR experiments
As explained previously, the comparison of the 31P-{1H} 1D-NMR spectra of the diphosphino biphenyl 16 recorded in the racemic and enantioenriched series leads to a rapid assignment of the absolute configuration of the lines in the spectrum of the racemic mixture (vide supra). However, this is only possible when an enantioenriched mixture or an enantiopure compound is available and the absolute configuration of the major isomer is known. When only the racemic mixture is available, the assignment of the various 31P peaks of (±)-16 is obviously not straightforward. Indeed, the positions of the 31P resonances can be explained by a difference of 31P CSA, of 31P RDC, or of both (see Fig. 2). To clear up this ambiguity, various homonuclear 2D-NMR approaches were tested to correlate the 31P resonances with each enantiomer of the mixture: the 31P-31P COSY, T-resolved and INADEQUATE 2D experiments,11,37 no knowledge of the 31P-31P RDC being required for the first two. The experimental maps and comments on the experimental results are given in the ESI.†
The NAD 2D-NMR approach
The main limitation of NAD NMR is its low sensitivity, due to the very weak natural abundance of deuterium nuclei (1.5 × 10−2%), namely 100-fold less than 13C nuclei. Moreover, in CLC the intensity of the NAD signals for a given 2H site is reduced by a factor of four when spectral discrimination occurs. Indeed, the single 2H peak observed in achiral liquids (see Fig. S2, ESI†) is now split into four resonances (two quadrupolar doublets), thus reducing the SNR and consequently increasing the error in the ee value, in particular when the ee is large. Technically, this situation can be partly overcome using high-field NMR spectrometers equipped, when possible, with cryogenic probes. However, the efficiency/interest of this tool depends primarily on the available amount of analyte, together with its MW. In this study, the MW of the solutes ranges from 282 to 557 g mol−1, while the available amounts vary from 20 to 100 mg (half of these masses for each enantiomer). Under these conditions, NAD 2D-NMR experiments in CLC were only recorded for solutes 2 to 4, 6, 9, 11 and 17, for which a sufficient mass of solute was available (see Table 1); this corresponds to mole numbers varying from 7.65 to 18.2 × 10−5 mol, namely mole numbers of monodeuterated isotopomers [2H] varying from 11.8 to 28.2 × 10−9 mol. The number of discriminated 2H sites for each analyte is reported in Table 1 (see also Table S2, ESI†).
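These isotopomer mole numbers can be checked against the natural 2H abundance of about 1.5 × 10−4 per hydrogen site; the sketch below reproduces the quoted figures to within the rounding of the abundance value.

```python
# Moles of each site-specific monodeuterated isotopomer at natural abundance.
NAT_2H = 1.5e-4  # fraction of hydrogens that are 2H (~0.015%)

def mono_2h_moles(n_solute_mol):
    return n_solute_mol * NAT_2H

print(mono_2h_moles(7.65e-5))  # ~1.1e-08 mol (text: 11.8e-9 mol)
print(mono_2h_moles(18.2e-5))  # ~2.7e-08 mol (text: 28.2e-9 mol)
```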
The analytical interest of NAD 2D-NMR lies in the possibility of separating the useful information over two spectral dimensions (see Fig. S12 to S14, ESI†). Thus, on the tilted NAD Q-COSY Fz map of 17 (Fig. S12, ESI†), we can easily see that seven 2H sites (out of nine) show spectral discrimination (77%). The presence of three quadrupolar doublets (QD) (instead of 4) associated with sites 10 and 12, which resonate at the same δ(2H), allows two possible interpretations from the discrimination viewpoint: (i) two QDs for site 12 and one for site 10; or (ii) two QDs for each of sites 10 and 12, considering that the two inner doublets possess the same splitting. Actually, the second interpretation is less probable, for three reasons: (i) the differences of the RQCs for each aromatic site are quite similar (ranging from 210 to 355 Hz for the outer doublet and from 167 to 278 Hz for the inner one); (ii) in the structure, the C-2H(10) and C-2H(4) bonds are collinear (para position), and hence the order parameters (and the associated Δν_Q values) are expected to be similar for both C-2H vectors (124 and 140 Hz, respectively); and (iii) the absence of discrimination at both sites. The analysis of the aliphatic region indicates that both the methyl and methoxy groups are discriminated (down to the baseline), with RQC differences of 24.0 and 23.4 Hz, respectively, namely 12 Hz between the R and S components of the doublets. Due to the free rotation of the methyl deuterons around the C-C bond (1 rotor) or the C-O-C bonds (2 rotors), the RQC values are averaged down compared to the RQCs of the aromatic 2H sites (3-10 fold smaller). Interestingly, the contribution of three deuterons to the NAD signals increases the SNR (162 and 146) compared to the aromatic sites, thus providing the two best sites to determine the enantiomeric excesses accurately.
Except for analytes 2 and 8 (see below), the distribution in magnitude of the RQCs of the aromatic 2H QDs observed on the NAD 2D maps is globally quite similar for the various analytes (see Fig. S12 and S13, ESI†, for instance). In contrast, the RQCs measured for the 2H sites of the flexible Y substituent show large variations in magnitude. To illustrate this, Fig. 8 compares the NAD 1D signals of the aldehyde groups of 3 and 9 and the methyl groups of the methoxy (4 and 17) and ester (6 and 11) substituents (see Fig. S15, ESI†, for the full 2D maps of 6 and 11). In each case, spectral enantiodiscrimination on Y occurs, with RQC differences |ΔΔν_Q| varying from 5 Hz (11) to 35 Hz (4), while the average of the corresponding RQCs varies from 129 Hz down to 28.5 Hz. The large variation of the SNR between the methine and methyl groups originates from the number of 2H nuclei contributing to the signals (1 to 3), but also from the mass (60 mg to 100 mg) and the MW (277 to 370 g mol−1) of each sample, and possibly from some variations of the isotopic ratio from one site to another (but not between enantiomers).
Various comments can be made on this series of results, in particular on the variation of the RQCs and the magnitude of the enantiodiscriminations for the different 2H sites. The comparison of the results indicates that there is no simple correlation between the magnitude of the RQCs, |Δν_Q|, the RQC difference between enantiomers, |ΔΔν_Q|, and the number of rotors in the ligand Y (one for 3 and 9, two for 4 and 17, and three for 6 and 11). Actually, various (sometimes contradictory) effects, such as the difference in electronic properties of each ligand (aldehyde, ether and ester groups) and the position (number of rotors) of the 2H site in the ligand, can be invoked to explain the results. Thus, the large magnitude of the RQCs for 3 and 9, associated with a strong degree of alignment of the C-H direction, could suggest a strong site-specific interaction (charge-transfer interaction) between the polar carbonyl group and the PBLG fiber, despite the free rotation around the C-C bond. The large differences of |Δν_Q| (from 34 to 150 Hz) and |ΔΔν_Q| (from 35 to 5 Hz) observed for the methyl groups in 4 (17) and 6 (11) are much more subtle to explain. They involve significant differences in the averaged orientation of the C-D directions, the number of rotors (2 and 3) between the ring and the 2H site, and the electronic properties of the ligand. On the basis of the rotor number, it could be expected that the |Δν_Q| values for 6 and 11 would be larger than those for 4 and 17. This trend is not observed experimentally, whereas larger |ΔΔν_Q| values are measured for the latter compared to 6 and 11. This illustrates the difference in electronic interaction between the ether or carboxylate groups and the PBLG side chain (which can promote a more or less strong alignment of the Y ligand), but also the ability of the 2H site to sense the chirality of the biaryl skeleton as a function of its distance and of the number of rotors in the flexible part. The results obtained for the methyl group (6 and 11) suggest that a physical interaction between PBLG and the carboxylate group might generate a higher degree of alignment for the COOMe moiety, and hence for the terminal methyl group, but in this case with a weaker enantiodiscrimination efficiency.
Finally, it must be noted that the NAD 2D-NMR spectra of the carboxylic acids (2 and 8) were not analytically exploitable, whereas 13C-{1H} NMR had provided well-resolved spectra with excellent results in terms of spectral quality and enantiodiscrimination (see Table 1 and Table S3, ESI†). Indeed, the 2D maps of 2 and 8 consist of weakly resolved NAD QDs of low intensity, not emerging distinctly from the noise (even with strong exponential apodisation), and showing splittings ranging from 1500 to 2000 Hz (see Fig. S14a/b, ESI†). These RQC magnitudes are unusually large for small solutes oriented in weakly aligning media such as those prepared with the PBLG polymer, whereas the symmetrical shape, the linewidth (3 Hz) and the splitting (around 500 Hz) of the chloroform signal indicate a homogeneous and uniform mesophase that complies with the standards expected when w/w(PBLG) = 14%. No significant enhancement was obtained by several rehomogenizations of the sample (new cycles of centrifugation) or by varying the sample temperature.
Although a priori unexpected, the low quality of the NAD NMR spectra of 2 and 8 (due to unusually large RQCs) can be explained by the presence of HB, and again understood in the frame of the simple model proposed above. Derived from eqn (2), we can write that:

Δν_Q(i) ≈ P_bonded × Δν_Q(i, bonded).   (3)

Thus, the existence of strong hydrogen bonds may lead to an "aggregation effect" of the solute onto the PBLG fibers, considerably increasing the solute alignment (and the associated Δν_Q at each deuterium site), and finally amplifying the RQCs excessively. In the case of NAD NMR, this aggregative effect can be spectrally unfavorable (as seen for 2 and 8), because the larger the 2H splittings, the larger the linewidths of each component of the QDs (an effect due to the "disorder" of the orientational order), and hence the smaller the SNR. In the case of 2, we can note that the range of magnitudes of the RQCs of the QDs observed on the NAD map is rather coherent with the range of RDCs measured on the 13C-1H NMR spectra (see the 13C-1H T-resolved 2D map in Fig. 7), in line with the fact that the ratio RQC/RDC is equal to 12-14 when sp2-hybridized carbon atoms are involved (this ratio is equal to 11-12 for sp3 ones). This relationship derives from the fact that the 13C-1H and 13C-2H directions (in the associated isotopomers) are oriented similarly in the mesophase relative to the magnetic field axis, B0.38 Finally, the presence of HB between 2 or 8 and PBLG is simply evidenced by comparing their NAD spectra with those of the ester analogues, 6 and 11, which cannot form HB. While the sample compositions are similar to those of the acid derivatives, we obtain exploitable NAD 2D spectra of 11, where the ranges of RQCs are standard (see Fig. S14, ESI†). The analysis of both esters indicates that 85% (6/7) of the aromatic 2H sites are discriminated (|ΔΔν_Q| = 20 to 153 Hz), while a small spectral enantiodifference (about 3 Hz) is observed for the terminal methyl group. All these results and arguments openly point out the important role of HB in the ordering mechanisms of the solute (in particular the degree of molecular orientation) and the related analytical consequences according to the NMR properties of the observed nucleus. Thus, the presence of HB involving the COOH groups was a major advantage in terms of 13C enantiodiscrimination (on the basis of the 13C CSAs), since 85% of the 13C sites showed discrimination, but it can lead to undesirable effects visible on the NAD spectra (on the basis of the 2H RQCs). This difference results from a more complex and "diluted" dependence of the 13C CSA, through the electronic screening tensor (for each 13C site), on the molecular ordering of the solute, compared to Δν_Q. This rather contradictory situation for 13C and 2H NMR nicely illustrates some versatile aspects of anisotropic NMR, as well as the subtle balance between orientation and chiral discrimination.
Importance and role of factors involved in the CDM
In the frame of the understanding and phenomenological description of the CDMs in polypeptide CLC (and in particular in PBLG), the analysis of the various factors (and their respective roles) governing or contributing to the efficiency of the CDM is a necessary step before proposing models describing the phenomenon, and a starting point for any computational modeling of the system. A priori, it is difficult to dissociate the topological properties (shape anisotropy) from the electronic profile of a solute, due to their strong inherent entanglement. However, for a qualitative description of the contributing factors, we may propose this artificial separation.17b,39
According to the degree of shape anisotropy of the solutes (for instance, spheroid, cylinder, spiral, ...), the efficiency of the global shape recognition mechanisms (which are closely related to steric exclusion effects) is fundamentally different, the best situation being met with a spiral topology rather than a spheroidal one.
As the shape recognition mechanisms are short-range interactions that are most active when the solute is in the closest vicinity of the polypeptide, they depend strongly on the possible local electronic (electrostatic) interactions between the solute and the chiral fiber, and in particular its flexible side chains. Hence, the local electronic properties related to the nature of the substituents (presence of HB, strength of the C-O dipole moment, steric hindrance), in combination with the global properties of the analyte (the global dipole moment), play a key role in the solute-PBLG interactions and in their capability to "maintain" or not the solute close to the chiral fiber, which is where the CDMs are most efficient.
The role of intermolecular HB. In this series of results, we have evidenced the importance and role of HB between the substituent and the PBLG fibers in the mechanisms, in particular when the labile hydrogen is easily accessible. As can be observed in Table 1, solutes 2 and 8 present the highest degree of discrimination (13C NMR) of all the solutes discussed in the present study. Both solutes possess a COOH moiety with a labile proton that can engage in HB with PBLG. Once 2 and 8 are esterified to form 6 and 11, respectively, the enantiodiscrimination efficiency is reduced, underlining the importance of the carboxyl proton for the interaction with PBLG. Moreover, when analyzing the cylindrical topologies of series I and II (Table S4, ESI†), it is observed that, independently of the distorted cylindrical ESP topology of each solute, the COOH groups of 2 and 8 present a well-defined cavity above the cylindrical ESP surface (around 4 Å in diameter) that can promote the access of the basic PBLG moiety, thus "orienting" the HB interaction, regardless of the known conformational dynamics of PBLG.
The strength of the C-O dipole moment. When HB is not possible, other electronic factors related to the nature of the Y substituent can play a role, and must be taken into account to understand the NMR results and explain the mechanisms. In particular, the presence of a "C-O" dipole (with an electronegative oxygen atom with accessible lone pairs) can be seen as an important factor enhancing the PBLG-solute interaction. To examine this point, the local dipole moments of the "C-O" and "C=O" bonds in 2, 3, 4 and 6 were computed by the DFT method, and an attempt was made to correlate them with the NDS (and their magnitude). The correlation curve (μ_C-O versus % of discriminated 13C sites) is plotted in Fig. 9. Regardless of the electron-donating or electron-withdrawing character of each substituent within the biaryls, the analysis of the μ_C-O values is informative. Thus, we can notice that the variation of μ_C-O is rather linear for 2, 3 and 4 of series I, whereas their electron-donating (ED)/electron-withdrawing (EA) properties towards the biaryl electronic density differ from each other (EA for 2 and 3, ED for 4). The divergence observed for 6 (compared to the linearity observed for 2, 3 and 4) is rather surprising, because μ_C-O(6) is very similar to μ_C-O(2). Indeed, for an isovalue of μ_C-O, we might expect a similar NDS. This situation suggests that HB is the primary driving force of the interaction of 2 with PBLG. When no labile proton is available, solute 6 only has the C-O dipole as the driving force, like 3 and 4, to interact with PBLG by means of a positive dipole within the mobile arm. This perfectly illustrates the multivariable dependence of the NDS.
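In the simplest point-charge picture, such a local bond dipole is |μ| = |q|·r over the two atoms; the sketch below uses an illustrative Mulliken-type charge and a generic C-O bond length, not the paper's computed values.

```python
# Point-charge estimate of a local C-O bond dipole from Mulliken charges.
# The charge and bond length below are illustrative, not from the paper.

E_ANGSTROM_TO_DEBYE = 4.803  # 1 e*Angstrom ~= 4.803 D

def bond_dipole_debye(q_e, r_angstrom):
    """|mu| for point charges +q/-q separated by r, in debye."""
    return abs(q_e) * r_angstrom * E_ANGSTROM_TO_DEBYE

print(round(bond_dipole_debye(0.35, 1.36), 2), "D")  # e.g. an ether C-O bond
```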
The role of π-stacking. Considering the aromatic character of the biaryls and the terminal benzyl group of the PBLG side chain, the existence of non-covalent π-π stacking interactions between these rings is a priori possible. Similarly to the other interactions already discussed, the latter can also play a role in the CDM (and subsequently in the NDS) by helping to bring the solute closer to the chiral fiber.31 Theoretically, the strength of this interaction depends primarily on the energy of the highest occupied molecular orbital (HOMO) of the biaryl (located either on ring A or B), which in turn depends on the activating/deactivating effects of the Y substituents on the rings (series I and II).
From the chemical reactivity viewpoint, it is known that Lewis bases (HOMOs with rich electronic density) can be related to Lewis acids (LUMOs with deficient electronic density).40 In our case, the π-stacking interaction (which can be thought of as a specific kind of dipole-dipole interaction)41 can also be explained in terms of the frontier molecular orbital approach, and hence regarded as a Lewis base (HOMO) interacting with an acidic moiety (LUMO). In this context, dipole-dipole π-stacking interactions between the biaryls (HOMO) and the PBLG benzylic rings (LUMO) can reasonably be proposed.
To support this proposal, we determined the location and the electronic charge density of the HOMOs for six model compounds of series I (1, 2, 6) and II (7, 8, 11), and then correlated both pieces of information with the NDS revealed by 13C NMR; Fig. 10 shows such a graphical correlation. As seen in the figure, the net effect of the various activating/deactivating contributions of the three substituents leads to the HOMOs of the biaryls being located either on ring A (1, 7) or on ring B (2, 6, 8, 11). Note, however, that the replacement of the Br by the Cl atom on ring A (series I versus II) only slightly modifies the electronic charge density, but not the position of the HOMOs on the rings.
Setting aside the electronic effects discussed previously and limiting the discussion to the π-stacking interaction, a correlation between the NDS(13C), the location of the HOMOs, their electronic charge density, and also the steric hindrance on each ring (number and size of substituents) can be proposed. Thus, the biaryls (1, 7) whose Y substituent (an I atom) localizes the electronic density of the HOMO on the di-substituted ring A present the lowest number of discriminated 13C sites. In contrast, once the Y group relocates the HOMO electronic charge density onto ring B (mono-substituted), the NDS is considerably enhanced. Besides, Y groups like COOH not only relocate the electronic charge density onto ring B, but also increase the negative charge density of the HOMOs (observed as negative red HOMO lobes for the COOH-dibromide biaryl solute 2) due to the deactivating nature of COOH on ring A. This can be conceived as COOH increasing the Lewis basicity of the biaryl, and thus its "reactivity" towards a Lewis acid (the PBLG benzyl ring). Finally, the lower steric hindrance on ring B (a single monoatomic ortho substituent) with respect to ring A (two ortho substituents, with complex dynamics for Y) reinforces the idea that HOMOs on ring B are more prone to establishing π-stacking interactions with the LUMOs of the PBLG rings (compared to HOMOs on ring A). In other words, ring B is sterically freer to form dipole-dipole π-stacking interactions with PBLG.
Actually, for these six solutes, the variation of the NDS can be globally understood as follows. For solutes 2 and 8, three highly favorable electronic factors enhance the efficiency of the CDM (thus leading to a maximal NDS): the presence of HB, the strength of the C-O dipole, and the location of the HOMOs on ring B. For solutes 6 and 11, HB is impossible, and only two favorable electronic factors remain (the strength of the C-O dipole and the location of the HOMOs on ring B), thus reducing the efficiency of the CDM (and the NDS). For solutes 1 and 7, the absence of the key elements (HB and C-O dipole) on the Y substituent leads to the least favorable situation with respect to the CDM, and thus to the smallest NDS in the series.
The global dipole moment. In a simple interaction model, it seemed reasonable to correlate the experimental NDS(13C) with the magnitude of the overall molecular dipole moment (μ_mol) of the minimum-energy structure of the solute within a particular solvation medium. Experimentally, this single-parameter correlation does not fully explain the observed results, mainly for two reasons: (i) the specific contribution of the local electronic factors (such as those associated with the Y substituents) discussed above; and (ii) the oversimplification of using a single μ_mol value associated with the most stable conformer instead of describing the dipole moment distribution (within a dynamic system) as a function of the conformational freedom, e.g. as dependent on the inter-ring angle φ. To illustrate this idea, the theoretical variation of μ_mol upon scanning the electronic energy barrier (which can be related to the conformational population) along the inter-ring dihedral angle was obtained for two model biaryls, 2 and 5 (see Fig. S16, ESI†). In the first case, μ_mol remains roughly constant (the variation of μ_mol versus φ is below 10%) regardless of the energetic profile, while in the second a large variation of the overall μ_mol versus φ (up to two-fold) exists. This heterogeneity illustrates the complexity of the contribution of the nature of the Y substituent to the μ_mol value. It also points out that a μ_mol value taking the conformational distribution into account should be a more reliable parameter in our attempts to explain the NDS(13C) on the exclusive basis of the molecular dipole moment.

Fig. 10 Representation of the highest occupied molecular orbital electronic densities of some biaryls of series I (red zone) and series II (grey zone). The HOMOs were plotted with the same contour levels and their relative charges were normalized for all species under the same conditions.
Conclusions
Newly emerging enantiopure structures require robust analytical tools to determine the enantiopurity of a given class of chiral compounds, in particular when routine methods fail or require specific instrumental accessories.
The aims of this work were: (i) to propose a panel of simple and robust 1D/2D-NMR experiments in CLCs to investigate chiral biaryls, without any need for NMR expertise; (ii) to present the relevant analytical subtleties needed to understand all NMR observables in CLCs and their consequences on the spectra; and (iii) to analyze and correlate all the data to provide new insights into the factors playing a role in the CDMs of polypeptide CLCs, thus leading to a better understanding of the interaction mechanisms and enantiorecognition phenomena.
Among the appealing results of the present study, the analysis of the NDS and of the magnitude of the enantiodiscriminations in the 13C NMR spectra revealed the importance of the global molecular shape anisotropy of the analytes, the role of the (local) electronic properties of the substituents (compared to the global properties) in maintaining the solute in the vicinity of the PBLG fibers and, last but not least, the subtle balance between the electronic effects favoring the interaction with PBLG and the steric repulsion associated with the size of the substituents.
From the results reported in this work, it appears that NMR in polypeptide CLCs should be considered a valuable tool for analyzing the enantiopurity of molecular structures belonging to the fascinating family of (bridged and non-bridged) biaryl atropisomers.
Fig. 1
Fig. 1 (a) General structure of the enantiomeric couples of ortho-trisubstituted biphenyls investigated here, along with the systematic atom numbering used. Aromatic ring B bears the Z substituent; quaternary carbon atoms are displayed in green. (b) Structures of the chiral compounds 1 to 17, arranged by series: (I) dibromo derivatives (1 to 6); (II) chloro-bromo derivatives (7 to 11); (III) monophosphorus derivatives (12 to 15); (IV) miscellaneous derivatives (16 and 17). The stereodescriptors reported in the four series correspond to the structures drawn. As a short notation for the atropisomers, we use the stereodescriptors (S/R) instead of (aS/aR).
Fig. 2
Fig. 2 161.9 MHz 31P-{1H} 1D-NMR spectra of (a) (R/S)-14 in PBLG/CHCl3 at 335 K and (b) 16 in the racemic series (bottom) and the enantioenriched series (R isomer) (top) in PBLG/CDCl3 at 298 K (ref. δ(80% H3PO4) = 0 ppm). 1000 scans (a) and 5000 scans (b) were added, and a soft line-sharpening Gaussian filter was applied. Note the 31P CSA differences of 3.9 Hz and 4.5 Hz for P_A and P_B of 16. The assignment of the P_A and P_B atoms is given in the ESI† (see Fig. S14 to S18 in Sections SI-SIII). The italicized notations "A/B" stand for the stereodescriptors of enantiomers A and B.
Fig. 3
Fig. 3 100.4 MHz 13C-{1H} 1D spectra (BBO probe) of (a) (R/S)-2 and (b) (R/S)-8 recorded in PBLG/CHCl3. Only the region from 123 to 136 ppm is displayed (carbon atoms C-1/C-7 and the carboxyl carbon not shown). Note the doubling of numerous 13C signals (compared to the isotropic spectra) associated with the spectral discrimination of the R/S isomers. The very broad resonances observed around 132-134 ppm originate from the ortho/meta aromatic 13C signals of the benzyl group of the PBLG side chain.
Fig. 5
Fig. 5 Two examples of electrostatic potential (ESP) surfaces (plotting only their positive contribution and restraining the contour level to isovalues of 0.005 a.u. in both cases) associated with (a) 3 and (b) 12.
Fig. 6
Fig. 6 Comparison of the 13C signals of the methyl group of (R/S)-4 dissolved in PBLG/CHCl3 recorded with (a) and without (b) 1H decoupling (4k scans added). Inset (c): zoom on the resonances. Inset (d): zoom on the deshielded components of the two quadruplets. Integration of the lines (or groups of lines) associated with each enantiomer shows that ee = 0%.
Fig. 7
Region of the 13C-1H T-resolved 2D spectrum of (R/S)-2 showing the coupling patterns and 13C chemical shifts of the C-10 and C-11 sites in the F1 and F2 dimensions (see text).
Fig. 8
Fig. 8 92.1 MHz proton-decoupled NAD 1D-NMR signals of (a and b) the methine of the aldehyde group of (R/S)-3 and (R/S)-9, (c and d) the methyl of the methoxy group of (R/S)-4 and (R/S)-17, and (e and f) the methyl of the ester group of (R/S)-6 and (R/S)-11, all dissolved in PBLG/CHCl3 at 295 K. All patterns are extracted from the corresponding tilted NAD Q-COSY Fz maps. Except for e and f (see Fig. S10, ESI†), an exponential filter (LB = 2 Hz) was applied in both dimensions.
Fig. 9
Fig. 9 Variation of the predicted C-O dipole moment (obtained by Mulliken charge distribution analysis; see details in the Experimental section) as a function of the percentage of NDS(13C) for the biaryl solutes (2, 3, 4 and 6) of series I (see Table 1).
Super Heavy Elements-experimental developments
With his theoretical work, Walter Greiner, our mentor, pioneered super heavy element research and motivated us young scientists. He actively shaped the profile of GSI. We are happy that we could confirm some of his predictions during his lifetime: fusion with magic nuclei paved the way to super heavy elements, and the predicted nuclear species existing only by shell stabilization, the super heavy elements, were proven to exist. With the discovery of oganesson, Z=118, the heaviest element known today, we have come to the end of this era. New experimental developments will be discussed.
Introduction
Super-heavy element research was pioneered and strongly supported by Walter Greiner. In his early calculations he worked on the prediction of super heavy elements, atomic nuclei existing only by shell stabilisation in the sea of liquid-drop instability [1], and on heavy-element production in reactions with the doubly magic nuclei 208Pb and 48Ca [2,3]. At that time, when we started our experiments, it was not at all clear which reaction could be used to create heavy elements beyond element 106. As a member of the "Kernphysikalische Arbeitsgemeinschaft Hessen, KAH", Greiner shaped the research program of GSI, the Gesellschaft für Schwerionenforschung mbH in Darmstadt, at that time called "SILAB" (Fig. 1).
Main issues were the specifications for the accelerator to accelerate ions of all elements of the periodic table and the production of new chemical elements and isotopes, including super heavy elements (SHE): the uranium beam as a rich source of new isotopes at the GSI fragment separator, and the new super heavy elements at SHIP. Both were the basis for the success of GSI. Fig. 1 Outline of the concept for the GSI accelerator and the research program, taken from the SILAB proposal, Darmstadt 1969.
Super heavy elements: the presence
In March this year two milestones in SHE research were celebrated: the official announcement of the names of the heaviest known chemical elements by Natalya Tarasova, the president of the International Union of Pure and Applied Chemistry (IUPAC) [4]. Japan celebrated the new element 113, nihonium, in the Japanese Academy of Science with a speech by Crown Prince Naruhito. Nihon is the original name of Japan. With nihonium, the end of cold fusion with lead or bismuth targets to create super heavy elements had been reached. Earlier in March, an International Colloquium dedicated to the naming of elements 115 moscovium, 117 tennessine, and 118 oganesson was held in the Russian Academy of Sciences, Moscow. The elements carry the names of the Moscow region, where Dubna, the place of discovery, is located; the US state of Tennessee, where Oak Ridge, the place of manufacturing the actinide targets, is located; and Yuri Oganessian, "for his pioneering contributions to transactinoid elements research". With element 118, the era of synthesizing super heavy elements with actinide targets and 48Ca beams, that is, with doubly magic nuclei as theoretically substantiated by Walter Greiner, has come to its end. Now all chemical elements discovered so far are officially named. A complete overview of SHE research is given in a special issue of Nucl. Phys. A [5].
Discoveries
At SHIP we experimentally proved cold heavy-ion fusion for SHE production and the idea of shell-stabilized nuclei. First experiments to produce heavy elements in cold fusion reactions were performed by Oganessian, who irradiated 208Pb and 209Bi targets with 50Ti and 54Cr beams to produce elements 104, 105, 106, and 107. He measured spontaneous fission to identify these new elements. The results were heavily criticised by the Berkeley group: firstly, spontaneous fission is not a safe identification; secondly, the extra-push model by Swiatecki forbids cold fusion of massive nuclear systems with large proton numbers. In 1980 we irradiated 208Pb with 50Ti and created element 104 [6], observing the one-neutron evaporation channel (Fig. 2). Cold fusion was discovered. The way to heavy elements was open. With the synthesis of element 106 and the discovery of element 108 at SHIP we proved the concept of super heavy nuclei. According to the liquid-drop model, nuclei become unstable against fission at and beyond element 104, as shown in Fig. 2. Here the liquid-drop fission barrier drops below 1 MeV. Spherical super-heavy elements were predicted for Z=114. Consequently, a "sea of instability" beyond element 104 separated the "super heavy island" from the transuranium elements. For element 108, fission half-lives of 1 μs were predicted.
With element 108 as an α-emitter with a half-life of the order of milliseconds, we discovered a new region of stability. The analysis of our experimental data shows an increase of shell stabilization for elements 106 and 108 to values of −6 MeV. From these data we could construct experimental fission barriers as high as 6 MeV, due to shell stabilization. We had discovered a region of shell-stabilized nuclei bridging the transuranium region and the spherical super heavy island [7]. Calculations by Möller and Sobiczewski show that this region is centred at Z=108 and N=162 and that the origin of the enhanced stability is a hexadecapole deformation. This is the confirmation of the idea of super heavy nuclei pushed forward by Walter Greiner. At a Nobel Symposium in 1974, Aage Bohr commented on a talk by Adam Sobiczewski: "What about the possibility of super heavy nuclei in other shapes which are stabilized by shell structure?" This shell is the basis for the existence of the elements rutherfordium, Z=104, and beyond. These are now commonly called "super heavy elements". Cold fusion cross sections drop towards the heaviest elements to a level of 22 fb for nihonium, Z=113 (Fig. 3). The reason is the fusion hindrance due to the fast increasing entrance fissility or, in other words, the fast increasing Coulomb repulsion between target and projectile. In hot fusion (in red) the entrance fissility does not change so fast. For heavy targets the relative change in the nuclear charge, and consequently in the entrance fissility, is small. The cross section is dominated by structure effects. The small bump is an indication of the Z=114 shell. The compound nuclei are close to the island of spherical SHE. Enhanced stability around Z=114 may show up in the fusion cross sections but is not observed in the decay data. Cross sections are of the order of 10 pb. At oganesson, element 118, the heaviest target available in sufficient amounts, 249Cf, has been reached. Going beyond needs 50Ti or 54Cr projectiles. These are not magic nuclei: the prediction of fusion cross sections is a problem, and they will be much smaller. First attempts to create element 120 have been made by Sigurd Hofmann (see these conference proceedings).
The future -SHE factories
New prospects for SHE research will be opened up with the next generation SHE factories. Table 1 shows the beam intensities and annual doses available at present and with the future SHE factories under construction. Already the present accelerators such as the UNILAC at GSI, the RIKEN RILAC, and the Dubna U400 cyclotron deliver beams of 6×10^12 ions/s on average. New in the field are Rare-Isotope (RI) facilities: SPIRAL2, FRIB, HIE-ISOLDE, and FAIR-NUSTAR at GSI. The RI beam intensities close to stability are up to 10^9/s on average. They drop to 10^6/s five to ten isotopes away from stability. It turns out that the RI intensities for all schemes are of the same order of magnitude. FAIR intensities are higher for light beams such as neon and heavier. Because of the low beam intensity, the use of RI beams for SHE research is rather limited at present. With an optimistic value of 10^9 projectiles/s for isotopes close to stability, the sensitivity is only 4 atoms/nb per year. This is about six orders of magnitude less than achievable with the SHE factories with stable beams. Here it must be taken into account that SHE factories are dedicated to SHE research, whereas RI facilities have a broad and competitive research program. The first generation of experiments will be reaction studies and isotope synthesis in the region up to rutherfordium (Fig. 3). Table 1. Intensities and sensitivities for fusion reactions with stable and radioactive beams for SHE production [9].
Experiments and instrumentation
The discoveries of new elements beyond seaborgium have been made with the velocity filter SHIP at GSI, the gas-filled separator DGFRS at JINR Dubna, and GARIS at RIKEN. While SHIP is a kinematic separator that separates atomic nuclei produced by heavy-ion fusion, gas-filled separators separate heavy from light ions with poor resolution. They cannot separate fusion products from nuclei produced by incomplete fusion. The identification of SHEs is based on the measurement of correlated α-decay chains of nuclei implanted into position-sensitive surface barrier detectors. Decay chains ending in known nuclides allow the unambiguous assignment of individual, implanted nuclei.
Next generation SHE factories will open new possibilities for SHE research, including atomic and more detailed nuclear studies as well as the discovery of new elements and isotopes. Certainly chemistry will play a major role in future research programs and will also rely on in-flight pre-separators to achieve the highest sensitivity and clean conditions. A central goal is the exploration of the region of super heavies from Z=112 to Z=120 and the approach to the magic neutron shell N=184. The α-α correlation technique successfully applied to the discoveries of the heaviest elements will fail here, as already became evident with the new trans-nihonium elements: their α-chains are not connected to the transuranium region. We should in addition be able to identify β-decaying nuclei and those decaying by spontaneous fission. A next generation in-flight separator must include the capability of direct A and Z identification.
The next generation of in-flight separators includes gas-filled separators and velocity filters with optimized transmission and separation quality, for example GARIS-II and the gas-filled separator under construction at Dubna. SHELS at Dubna is a new velocity filter, and within the Giessen-GSI-Manipal collaboration, calculations for a compact and optimized velocity filter based on the experience with SHIP are under way (Fig. 4). The new generation of in-flight separators will be equipped with gas-filled stopping cells combined with high resolution experiments including Multiple-Reflection Time-of-Flight Mass Spectrometers (MR-TOF-MS) or Penning Trap systems. A first step in this direction is the SHIP-SHIPTRAP combination. At RIKEN an MR-TOF-MS system has been coupled to GARIS-II. With this system the A,Z identification of astatine, polonium, and bismuth isotopes has been achieved by isobaric mass analysis at a mass resolving power of more than 100 000 on the basis of about 10 atomic nuclei per isotope at minimum [10]. First measurements of trans-uranium nuclei with A,Z identification have been performed by the same group. At GSI such a system has been operated successfully at the FRS with a resolution of 500 000 [11]. Isobaric A,Z identification and separation was achieved, and the α-decay of 211Po was measured at a detector behind the MR-TOF-MS. With the MR-TOF-MS method, in contrast to the presently used identification by decay spectroscopy, SHE are directly identified "still alive".
These systems can be operated in a separator mode and coupled to detector systems such as Si surface barrier detectors, germanium arrays, or beta spectrometers. They can also be used to identify transfer products in large-scale survey experiments.
Reaction studies
Besides complete heavy-ion fusion, new reactions such as nuclear transfer will be investigated. Fig. 5 shows a prediction for transfer cross sections for the reaction 238U + 248Cm by Zagrebaev and Greiner [12] compared to data from Schädel, who observed mendelevium as the heaviest element. Predicted cross sections drop fast, by about one order of magnitude per element. The power of nuclear transfer for the creation of new transuranium isotopes is shown in Fig. 5, right. At SHIP, in irradiations of 248Cm with 48Ca, four new transuranium isotopes were found [13]. The identified isotopes are shaded. By far not all isotopes could be identified with our detection methods; with MR-TOF-MS a complete survey will be possible. Fig. 5 Left panel: experimental and predicted cross sections for heavy-element production by nuclear transfer [12]. Right panel: new isotopes, marked by dots, created in transfer reactions irradiating 248Cm with 48Ca [13].
Large-scale investigations of nuclear transfer reactions are under way, e.g., with VAMOS at GANIL. The problem of in-beam methods is the sensitivity. As Fig. 5, left panel, shows, our main interest is in the most exotic transfer products, created with small cross sections. Even very rare processes can be observed with MR-TOF-MS, as it includes separation. The high resolution of MR-TOF-MS allows isomers to be separated, indicating the angular momenta of the final nuclei, which is crucial for the survival of SHE. In addition, we aim to measure nuclear transfer at zero degrees to create transfer products with low angular momenta to suppress prompt disintegration by fission. A test setup for first transfer experiments is displayed in Fig. 6. It will be placed behind the FRS. Projectile fragments will be decelerated by energy degraders [14] placed inside the stopping cell, where the reaction products are thermalized, extracted, and guided to the MR-TOF-MS. A quadrupole mass filter will select the mass region of interest. For MR-TOF-MS operated in the separator mode, the heavy nuclei can be directed to specialized detector systems for detailed spectroscopy.
Conclusion
With the discoveries of nihonium and oganesson, the era of fusion with the magic nuclei 208Pb and 48Ca, theoretically substantiated by Walter Greiner, has come to its end. With the discovery of elements existing only by shell stabilization, rutherfordium and beyond, the prediction of super heavy nuclei existing by shell stabilization in the sea of liquid-drop instability has been proven.
To proceed, higher sensitivity is required and new reactions need to be explored. Prospects for SHE research are opened by the new SHE factories, dedicated to SHE research, with beam intensities increased by one to two orders of magnitude. The search for trans-oganesson elements will be continued. More detailed structure investigations and SHE chemistry will play a major role in the future. The large facilities will be backed by laboratories with specialized research programs such as in-beam spectroscopy and reaction studies. Studies of incomplete fusion and deep inelastic collisions, including forward angles, are under way and need to be explored down to small cross sections to see rare processes occurring with low probability. As a new development, ion-catcher/ion-trap systems and MR-TOF-MS will play a major role in SHE research. They will allow direct identification of new isotopes and elements and in addition open new perspectives for atomic physics, including laser spectroscopy of super heavy atomic nuclei.
Fig. 2
Fig. 2 Left panel: excitation function for the production of element 104 in irradiations of 208Pb with 50Ti [6]. Right panel, upper part: shell correction energies for the doubly even N−Z=48 isotopes: dots and solid line, experiment, compared to calculations from Cwiok (dashed-dotted) and Möller (dotted and dashed lines). Lower part: experimental fission barriers; the dashed line shows the liquid-drop part of the barrier [7].
Fig. 3 displays the production cross sections for the transfermium elements in cold and hot fusion [8]. Cold fusion cross sections (in blue) drop fast towards the heaviest elements, to a level of 22 fb for nihonium, Z=113.
Fig. 3
Fig. 3 Production cross sections for the transfermium nuclei. Left panel: cold fusion. Right panel: hot fusion; the sensitivity limits for SHE factory and RIB are given per year of beam time. Hot fusion cross sections from [8].
With the beam time of 100 d to 300 d available for SHE research at JINR Dubna and RIKEN, the sensitivity is 1 atom/10 fb per year. SHE factories, including JINR Dubna with the new DC-280 cyclotron, and RIKEN and SPIRAL2 at GANIL with new powerful linear accelerators and ion sources, will have a factor of 10 to 100 more beam intensity. At GSI a new, compact superconducting LINAC dedicated to SHE research is under development. Taking into account the available beam time per year, the sensitivity is increased by a factor of about 50, reaching a sensitivity of 5 atoms/fb per year. Other accelerator labs working on SHE research or planning such experiments (Argonne National Lab, Canberra, FRIB at Michigan State University, IMP Lanzhou, LBNL Berkeley, Tokai, and Jyväskylä) will work on special topics including reaction studies, nuclear structure, chemistry, and atomic physics.
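For orientation, the quoted sensitivities follow from simple rate arithmetic: yield = beam intensity × target areal density × cross section × time. The sketch below is an order-of-magnitude illustration with assumed, representative values for intensity and target thickness; it is not a calculation from the paper.

```python
# Order-of-magnitude SHE yield estimate. Target areal density and detection
# efficiency are illustrative assumptions, not measured parameters.
BARN = 1e-24  # cm^2

def atoms_per_year(intensity_pps, target_atoms_per_cm2, sigma_barn, beam_days):
    """Yield = I * n_target * sigma * t (separator/detector efficiency ignored)."""
    seconds = beam_days * 86400
    return intensity_pps * target_atoms_per_cm2 * sigma_barn * BARN * seconds

# 6e12 ions/s, ~5e17 atoms/cm^2 target (~0.5 mg/cm^2 actinide), 1 pb, 200 beam days
print(atoms_per_year(6e12, 5e17, 1e-12, 200))   # ~50 atoms/year at 1 pb,
# i.e. ~1 atom/year near ~20 fb, the order of the quoted 1 atom/10 fb sensitivity.
```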
Fig. 4.
Fig. 4. A next generation compact velocity filter including a stopping cell and MR-TOF-MS or ion trap, which can be used in combination with the commonly used implantation detector.
Fig. 6
Fig. 6 Test setup for the investigation of transfer reactions at the FRS. The target, installed inside the stopping cell (see inset, left), can be irradiated with stable and radioactive beams at energies near the Coulomb barrier [14]. | 3,725.4 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
IDENTIFYING THE VILLAGE FUND’S EFFECTIVENESS IN STRENGTHENING SUSTAINABLE TOURISM VILLAGES
This study identified factors that are crucial in sustaining the performance of tourism villages through the Village Fund. The Village Fund is one of the fiscal development instruments aimed at accelerating the distribution of welfare to villages, especially tourism villages. Data used in this study include the realization of the Village Fund in 2018 and 2019 from the Fiscal Policy Agency (BKF), Ministry of Finance. Data on the realization of the Village Fund are categorised into output codes in the form of numbers. For achieving the research objectives, the analytical tool used is Data Envelopment Analysis (DEA) to measure the
INTRODUCTION
The amount of the Indonesia Village Fund allocated to develop and strengthen tourist villages continues to escalate in amount and in the quality of its use. As a fiscal instrument for development from the periphery, the Village Fund is intended to accelerate equity and the quality of life of rural communities in every village. The instrument, derived from the General Allocation Fund (DAU) and the Special Allocation Fund (DAK), is allocated based on the concept of the fiscal gap to help finance special activities within regional affairs and in accordance with national priorities. The purpose of DAK is to fully assist the region, in this case the village, in managing the Village Fund in the context of accelerating the development of tourist villages. DAK has a particular characteristic: it can only be used in conformity with the menu of activities determined by the Technical Department related to the area of DAK allocation (David, 2018).
In 2015, the allocated Village Fund amounted to IDR 20.7 trillion, and each village received
Tourism village is one of the themes for tourist destinations whose appeal relies on the form of village life values (Marjuka, 2017; Susilo, 2020; Aguzman, Manurung, Pradipto, & Sanny, 2020). As usual, a destination must have the attributes of amenity, attraction, accessibility, ancillary, and available packages (Middleton, Fyall, Morgan, & Ranchhod, 2009). Sitinjak (2020) argues that each element of a tourist village has its own weight of importance; for example, the attraction has the highest weight of 25.5%, followed by accessibility, 24.60%; ancillary, 18.60%; accommodation, 18.20%; and amenities, 13.00%. Attractions are the main priority, so they are strengthened by increasing the thematic variation of tourism, namely cultural events, improving the quality of souvenirs, and structuring the area to become a photo spot. The five attributes of strengthening the tourist village are carried out following the institutional capacity formed by the local community. Community empowerment in tourism is an expectation from the government, so local communities must also be able to draw positive benefits from tourism development efforts (Herawati, Purwaningsih, & Pudianti, 2014).
Communities who are the main actors must be actively involved in tourism development with other relevant stakeholders, both from the government and the private sector, to achieve economic benefits that can improve the welfare of the people.
In achieving the Sustainable Development Goals (SDGs) in 2030, the villages have been documented and are listed in Presidential Decree No. 59/2017. Therefore, the allocated Village Fund is expected to be utilized under the SDGs principles, with no exception for tourist villages. Sustainability, manifested by the SDGs, is a goal for future generations that involves reconciling economic interests with natural resources and local culture so that they continue to provide benefits to stakeholders for an indefinite period. There has been much restructuring of the agricultural sector into modern sectors in rural areas. The tourism sector is an alternative for maintaining socio-economic wealth in rural areas. Much of Indonesia's countryside is rich in natural landscapes, culture, and traditions that need to be preserved.
Rural tourism can promote sustainability and positively impact the community's economy (Marzo-Navarro, Pedraja-Iglesias, & Vinzon, 2015). Stakeholder strength is a requirement of sustainability, together with environmental support, local policies, and the existence of tourist destinations.
The development of tourist destinations is based on a model of local community empowerment concerning the principles of nature conservation, economy, and socio-culture.
Community empowerment is fundamental as it is the core of sustainable tourism. The community's perspective as a stakeholder has a significant role in developing a tourist destination. Some researchers argue that sustainable and community-based tourism preserves nature, has high control over tourism activities, and benefits the community as the primary host (Scheyvens, 1999). Community empowerment in tourism also reflects expectations from the government so that local communities must also be able to draw positive benefits from tourism development efforts (Marjuka, 2018;Herawati, Purwaningsih, Pudianti, & Surya, 2014).
The basic concept of a tourism village includes territory, rural heritage, rural life, and rural activities. Combined with the community's involvement in solving issues on a small to a global scale, these attributes are the basis of sustainability (Fons, Fierro & Patiño, 2011). Tourist villages throughout Indonesia are an inseparable part of global tourism economic activities and are committed to achieving the SDGs. Utilization of the Village Fund to strengthen sustainable tourism villages will promote the village economy. As the epicentre of economic growth, sustainable tourism villages will create economic multipliers through networks and chains of tourism village products. However, research on the effectiveness of using the Village Fund in efforts to develop sustainable tourism villages is still limited. This study is expected to contribute to identifying the Village Fund's effectiveness in strengthening sustainable tourism villages by improving the 4A functions: attractions, accessibility, amenities, and ancillary, which reflect the organizational/institutional capacity. This study consists of four parts. The first part introduces the concept of a tourism village, the Village Fund in Indonesia, and sustainability. The second part explains the methodology used in the study. The third part discusses the results and analysis. The last part provides the conclusion and suggestions.
Analytical framework
Technically, the effectiveness of using Village Fund can be seen from: a) Changes in output indicators before/after Village Fund is allocated; b) Changes in the efficiency of the use of inputs in the creation of outputs, accompanied by information on the size of outputs or changes in productivity. Extensions in this context refer to the additional input capacity in terms of volume, timeliness, and allocation of use. At the same time, improvements refer to positive progress towards achieving the SDGs targets, both indicated by the SDGs indicators and proxies of tourism output. This research model assumes that tourist village destinations (attractions or tourism programmes) are strengthened by using the Village Fund allocations to create new attractions/spots (and their multiplier effects). In addition, the existence of the Village Fund is expected to improve accessibility, amenities, and ancillary, which are the essential components of tourism villages. Village Fund spending in tourist villages is mapped according to SDGs number 8, 12, 14, and 17. It can help achieve sustainability within the scope of the four SDGs that have been mentioned. A study evaluating the effectiveness of using the Village Fund to develop a tourist village must at least meet the following criteria: analytic, systematic, reliable, reproducible, and easy to use.
Data collection
Data used in this study include the realization of the Village Fund in 2018 and 2019 from the Fiscal Policy Agency (BKF), Ministry of Finance. Data on the realization of the Village Fund are categorised into output codes in the form of numbers. For example, code 230101 reflects the expenditure and completion of village road maintenance. The output codes are then categorised into the 4A functions: attractions, accessibility, amenities, and ancillary (institutional). Village Fund data/output codes related to the use and purchase of goods and services for tourist villages are mapped against SDGs 8, 12, 14, and 17. The selected villages are located in six provinces: West Java, Central Java, East Java, DI Yogyakarta, Bali, and West Nusa Tenggara. The selection of the six provinces was based on the highest number of tourist villages spread across provinces in Indonesia. Furthermore, the data are processed using Data Envelopment Analysis (DEA) to find efficiency figures for the village funds. In order to achieve the research objectives, the DEA efficiency of each decision-making unit (DMU) b is obtained by maximizing the ratio of weighted outputs to weighted inputs,

$$h_b = \max_{u,v} \frac{\sum_{r} u_r\, y_{rb}}{\sum_{i} v_i\, x_{ib}},$$

where $y_{rb}$ are the outputs and $x_{ib}$ are the inputs of each DMU; $u_r$ is the weight assigned to output $r$ on the basis of unit $b$, and $v_i$ is the weight assigned to input $i$.
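As a concrete illustration, a minimal sketch of the input-oriented CCR DEA multiplier model follows, solved as a linear program with SciPy. The input/output arrays and village values are hypothetical placeholders, not the paper's BKF dataset fields.

```python
# Minimal sketch of the CCR DEA multiplier model: each DMU's efficiency is the
# maximum weighted-output sum subject to a unit weighted-input normalization and
# the requirement that no DMU exceed efficiency 1 under the same weights.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, b):
    n, m = X.shape                                     # n DMUs, m inputs
    _, s = Y.shape                                     # s outputs
    # Decision variables: [u (s output weights), v (m input weights)]
    c = np.concatenate([-Y[b], np.zeros(m)])           # maximize u.Y[b] (linprog minimizes)
    A_ub = np.hstack([Y, -X])                          # u.Y_j - v.X_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[b]])[None]   # v.X_b = 1 (normalization)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                    # efficiency score in (0, 1]

X = np.array([[120.0], [200.0], [150.0]])  # e.g. Village Fund spent (illustrative)
Y = np.array([[60.0], [80.0], [90.0]])     # e.g. 4A output realizations (illustrative)
print([round(ccr_efficiency(X, Y, b), 3) for b in range(len(X))])  # [0.833, 0.667, 1.0]
```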
RESULTS
Rural tourism is one of the most labour-intensive industries and has a high potential to contribute to the creation of new workplaces and the economic development of rural areas (Hall, Kirkpatrick, & Mitchell, 2005). The financing aspect, including the Village Fund, is one of the main enabling factors in achieving the development goals of tourism villages. Hall and Daneshmend (2003) stated that financing for tourist villages is often limited due to low tourist traffic and short vacation periods. In Indonesia, the government provides support for the development of tourist villages, one channel of which is the Village Fund. The primary distribution of village funds is prioritised to finance local-scale programmes and activities to develop villages and empower communities. Village Funds follow these priorities to ensure that output achievement can be maximised. Additionally, the amount of the Village Fund, which increases yearly, is expected to be used efficiently, as reflected in the output. The tourist villages that are the unit of analysis for this research are located in West Java, Central Java, East Java, DI Yogyakarta, Bali, and West Nusa Tenggara. This study categorizes the results of Village Fund efficiency based on the output code, which is then categorised into the tourism output code and the 4A functions (accessibility, amenities, attractions, and ancillary). Table 6 shows that in 2019 there were not many changes, both in the number of villages that
With the improvement of village infrastructure, it is hoped that the welfare of the people in the village will improve. Attraction is one of the essential components of tourism. An object that has an attraction in an area and is continuously developed will undoubtedly become a source of capital for that area.
Attractions are divided into three, namely: natural resource wealth, cultural tourism wealth, and artificial wealth. Therefore, village funds are used to develop tourist villages, and research on the effectiveness of using village funds for attractions is essential. Tables 9 and 10 show that the number of villages that have effectively used village funds, at the efficiency level of 0.8-1, is small: fewer than seven villages in both 2018 and 2019.
This indicates that the utilization of village funds is still not optimal. Tourism villages that are not yet optimal, with an efficiency level of 0-0.79, can try to implement the activities or policies of the villages that have used the funds effectively.
In addition to accessibility, amenities, and attractions, the ancillary component also needs to be considered in the development of tourist villages. The local government can provide ancillary as one of the services for tourists. Ancillary supports tourism through management agencies, tourist information, travel agents, and stakeholders. Table 11 presents these results. Sandjojo (2018) admits that village funding still has problems, especially in how village officials allocate these funds. In addition, some village leaders or heads of villages still lack the capabilities and knowledge to manage the reporting system for Village Funds (Anderesta, Maretta, & Arsyillah, 2018). These issues need to be addressed accordingly to promote optimal allocation of the Village Fund.
For a planning strategy to be successful, it must focus on organizational efficiency (Baum, 2009). In addition, it must be noted that not all rural areas are equally attractive to tourists, and it is the planners who must discover the special qualities and local attractiveness and plan for the development of these special features (Gunn & Var, 2002).
b. The problem of orderly institutional administration
The reporting system carried out by each village cannot be appropriately implemented due to several factors. Based on the results of the 2018 BPS survey, the obstacles in establishing village financial reports were primarily due to the limited capability of human resources (41.30%), the absence of guidance (24.64%), and other factors (6.52%).
c. Realization of Village Fund that is not well targeted
The allocation of the Village Fund increases gradually to improve village quality.
b. There needs to be data synchronization so that the data used between villages is the same.
c. Policy innovation is needed to help coordination and break the silos among government institutions. The aim is to have higher capacity and capability of regulators and policymakers. There is a need for stronger synergy between the central and regional governments to minimize deficiencies and overcome these obstacles. Some empowerment activities can be promoted, such as intensifying training activities for village officials, including administrative processes, and mapping the strategic village problems and goals that need to be addressed urgently. Nonetheless, this study has answered the research objectives, namely the use of the Village Fund in improving sustainable tourism villages. Based on the results, the number of villages that have been effective in using village funds can be seen through the results of the DEA: if the DEA score is one, the village has used the funds efficiently. Hopefully, this research can be used as a basis for policy formulation in improving sustainable tourism villages. However, we realize there are several shortcomings. These will serve as inputs to enrich further research, including:
CONCLUSION
1. It is necessary to use data over a more extended period to show the impact of the Village Fund.
2. It is necessary to use formal parametric methods to better demonstrate the impact of Village Fund allocations on the development of tourist villages and show effectiveness | 3,190.6 | 2022-12-23T00:00:00.000 | [
"Economics",
"Environmental Science",
"Business"
] |
Virtual Sensoring of Motion Using Pontryagin’s Treatment of Hamiltonian Systems
To aid the development of future unmanned naval vessels, this manuscript investigates algorithm options for combining physical (noisy) sensors and computational models to provide additional information about system states, inputs, and parameters, emphasizing deterministic options rather than stochastic ones. The computational model is formulated using Pontryagin's treatment of Hamiltonian systems, yielding optimal or near-optimal results depending on the algorithm option chosen. Feedback is proposed to re-initialize the initial values of a reformulated two-point boundary value problem rather than using state feedback to form errors that are corrected by tuned estimators. Four algorithm options are proposed with two optional branches, and all of these are compared to three manifestations of classical estimation methods, including linear-quadratic optimal. Over ten thousand simulations were run to evaluate each proposed method's vulnerability to variations in plant parameters amidst typically noisy state and rate sensors. The proposed methods achieved 69-72% improved state estimation and 29-33% improved rate estimation, while simultaneously achieving mathematically minimal costs of utilization in guidance, navigation, and control decision criteria. The next stage of research is indicated throughout the manuscript: investigation of the proposed methods' efficacy amidst unknown wave disturbances.
Introduction
Inertial measurement units provide continuous and accurate estimates of motion states in between sensor measurements. Future unmanned naval vessels, depicted in Figure 1a, require very accurate motion measurement units, including active sensor systems and inertial algorithms for when active sensor data is unavailable. State observers are duals of state controllers used for establishing decision criteria to declare accurate positions and rates; several instantiations are studied here when fused with noisy sensors, where theoretical analysis of the variance resulting from noise power is presented and validated in over ten thousand Monte Carlo simulations.
The combination of physical sensors and computational models to provide additional information about system states, inputs, and/or parameters is known as virtual sensoring. Virtual sensoring is becoming more and more popular in many sectors, such as the automotive, aeronautics, aerospace, railway, machinery, robotics, and human biomechanics sectors. Challenges include the selection of the fusion algorithm and its parameters, the coupling or independence between the fusion algorithm and the multibody formulation, the magnitudes to be estimated, the stability and accuracy of the adopted solution, optimization of the computational cost, real-time issues, and implementation on embedded hardware [1].
The proposed methods stem from Pontryagin's treatment of Hamiltonian systems, rather than utilization of classical or modern optimal estimation and control concepts applied to future naval vessels as depicted in (Figure 1) [2][3][4]. Figure 1. (a) Future naval vessels; (b) measurement bases used by [4], whose graphic is from the cited reference, modified by the author; (c) photo of Lev Pontryagin from the archive of the Steklov Mathematical Institute [2], used with permission (30 June 2021).
Typical motion reference units conveniently have accuracies on the order of 0.05 (in meters and degrees for translation and rotation, respectively, as depicted in Figure 1b for the representative naval vessels depicted in Figure 1a). These figures of merit are aspirational for the virtual sensor, which must provide accurate estimates whether or not active measurements are available to augment the algorithm. Lacking active measurements, the algorithm is merely an inertial navigation unit, while with active measurements, the algorithm becomes an augmented virtual sensor. This manuscript investigates virtual sensoring by evaluating several options for algorithms, the resulting estimated magnitudes, the accuracy of each solution, optimization of the resulting costs of motion, and sensitivity to variations like noise and parameter uncertainty of the translational and rotational motion models investigated (both simplified and high-fidelity). Algorithms are compared using various decision criteria to compare approaches for consideration of usage as motion reference units potentially aided by global navigation systems.
Noting the small size of motion measurement units, simple algorithms are preferred to minimize computational burdens that can increase unit size. Motion estimation and control algorithms to be augmented by sensor measurements are based on well-known mathematical models of translation and rotation from physics, both presented in equations. In 1834, the Royal Society of London published two celebrated papers by William R. Hamilton on dynamics in the Philosophical Transactions [5]. The notions were slowly adopted, and not presented relative to other thoughts of the age for nearly seventy years [6], but quickly afterwards, the now-accepted axioms of translational and rotational motion were self-evidently accepted by the turn of the twentieth century [7][8][9][10] as ubiquitous concepts. Half a century later [11,12], standard university textbooks elaborated on the notions to the broad scientific community. Unfortunately, the notions arose in an environment already replete with notions of motion estimation and control based on classical proportional, rate, and integral feedback, so the fuller utilization of the first principles languished until exploitation by the Russian mathematician Pontryagin [13]. Pontryagin proposed to utilize the first principles as the basis for treating motion estimation and control just as the classical mathematical feedback notions were solidifying in the scientific community. Decades later, the first-principle utilization proposed by Pontryagin is rising in prominence as an improvement to classical methods [14]. After establishing performance benchmarks [15] for motion estimation and control of unmanned underwater vehicles, the burgeoning field of deterministic artificial intelligence [16,17] articulates the assertion of the first principles as "self-awareness statements" with adaption [18,19] or optimal learning [20] used to achieve motion estimation and control commands. The key difference between the usage of first principles presented here follows. Classical methods impose the form of the estimation and control (typically negative feedback with gains), and they have very recently been applied to railway vehicles [21], biomechanical applications [22], remotely operated undersea vehicles [23], electrical vehicles [24], and even residential heating energy consumption [25] and multiple access channel usage by wireless sensor networks [26]. Deterministic artificial intelligence uses first principles and optimization for all quantities but asserts a desired trajectory. Meanwhile, the methods proposed in this manuscript leave the trajectory "free", calculate an optimal state and rate trajectory for fusion with sensor data, and calculate optimal decision criteria for estimation and control in the same formulation.
This manuscript seeks to use the same notion, assertion of the first principles (via Pontryagin's formulation of Hamiltonian systems), in the context of inertial motion estimation fused with sensor measurements (which are presumed to be noisy). Noise in sensors is a serious issue, elaborated by Oliveiera et al. [27] for background noise of acoustic sensors and by Zhang et al. [28] for accuracy of pulse ranging measurement in underwater multi-path environments. Barker et al. [29] evaluated impacts on Doppler radar measurements beneath moving ice. Thomas et al. [30] propose a unified guidance and control framework for Autonomous Underwater Vehicles (AUVs) based on the task priority control approach, incorporating various behaviors such as path following, terrain following, obstacle avoidance, as well as homing and docking to stationary and moving stations. Zhao et al. [31] very recently pursued optimality via stochastic artificial intelligence using a particle swarm optimization genetic algorithm, while Anderlini et al. [32] used real-time reinforcement learning. Sensing the ocean environment parallels the current emphasis in motion sensing, e.g., Davidson et al.'s [33] parametric resonance technique for wave sensing and Sirigu et al.'s [34] wave optimization via a stochastic genetic algorithm. Motion control similarly mimics the efforts of motion sensing and ocean environment sensing, e.g., Veremey's [35] marine vessel tracking control, Volkova et al.'s [36] trajectory prediction using neural networks, and the new guidance algorithm for surface ship path following proposed by Zhang et al. [37]. Virtual sensoring will be utilized in this manuscript, where noisy state and rate sensors are combined to provide smooth, non-noisy, accurate estimates of state, rate, and acceleration, while no acceleration sensors are utilized. A quadratic cost was formulated for acceleration, since accelerations are directly tied to forces and torques and therefore fuels.
"... [the] condition of the physical world can either be 'directly' observed (by a physical sensor) or indirectly derived by fusing data from one or more physical sensors, i.e., applying virtual sensors" [38]. Thus, the broad context of the field is deeply immersed in a provenance of classical feedback driving a current emphasis on optimization by stochastic methods. Meanwhile, this study will iterate options utilizing analytic optimization, including evaluation of the impacts of variations and random noise in establishing the efficacy of each proposed approach. Analytical predictions are made of the impacts of applied noise power, and Monte Carlo analysis agrees with the analytical predictions. Developments presented in this manuscript follow the comparative prescription presented in [39], comparing many (eleven) optional approaches, permitting readers to discern their own preferred approach to fusion of sensor data with inertial motion estimation:
1. Validation of simple yet optimal inertial motion algorithms for both translation and rotation derived from Pontryagin's treatment of Hamiltonian systems when fused with sensor data that is assumed to be noisy.
2. Validation of high-fidelity optimal (nonlinear, coupled) inertial motion algorithms for rotation, with translation asserted by logical extension, derived from Pontryagin's treatment of Hamiltonian systems when fused with sensor data that is assumed to be noisy.
3. Validation of three approaches for sensor data fused with the proposed motion estimation algorithm (not using classical feedback in a typical control topology): pinv, backslash, and LU inverses derived from Pontryagin's treatment of Hamiltonian systems when fused with sensor data that is assumed to be noisy.
4. Comparison of each proposed fused implementation algorithm to three varieties of classical feedback motion architectures, including linear-quadratic optimal tracking regulators, classical proportional plus velocity feedback tuned to performance specifications, and manually tuned proportional plus integral plus derivative feedback topologies, where these classical methods are utilized as benchmarks for performance comparisons when fused with sensor data that is assumed to be noisy.
5. Comparisons are made based on motion state and velocity errors, algorithm parameter estimation errors, and quadratic cost functions, which map to fuel used to create translational and rotational motion.
6. Vulnerability to variation is evaluated using ten thousand Monte Carlo simulations varying state and rate sensor noise power and algorithm plant model variations, where noise power is tailored to the simulation discretization, permitting analytic prediction of the impacts of variations to be compared to the simulations provided.
7. Sinusoidal wave action is programmed in the same simulation code to permit future research, and inclusion of such is indicated throughout the manuscript.
Appendix A, Table A1 contains a consolidated list of variables and acronyms in the manuscript.
Materials and Methods
Inertial navigation algorithms use physics-based mathematics to make predictions of motion states (position, rate, acceleration, and sometimes jerk). The approach taken here is to utilize the mathematical relationships from physics in a feedforward sense to produce optimal, nonlinear estimates of states that when compared to noisy sensor measurements yield corrected real-time optimal, smooth, and accurate estimates of state, rate, and acceleration. Sensors are modeled as ideal with added Gaussian noise and the smooth estimates will be seen to exhibit none of the noise. The optimization of the estimates will be derived using Pontryagin's optimization.
Motion control algorithms to be augmented by sensor measurements are based on well-known mathematical models of translation and rotation from physics, both presented in Equation (1), where both high-fidelity motion models are often simplified to identical double-integrator models in which nonlinear coupling cross-products of motion are simplified, linearized, or omitted by assumption. The topologies are provided in Figure 2. Centrifugal acceleration is represented in Equation (1) by −mω × (ω × r). Coriolis acceleration is represented in Equation (1) by −2mω × v. Euler acceleration is represented in Equation (1) by mω̇ × r. In this section, double-integrator models are optimized by Pontryagin's treatment of Hamiltonian systems, where the complete (not simplified, linearized, or omitted) nonlinear cross-products of motion are accounted for using feedback decoupling. The efficacy of feedback decoupling of the full equations of motion is validated by disengaging this feature in a single simulation run to reveal the deleterious effects of the coupled motion when not counteracted by the decoupling approach.
where
F, τ: external force and torque, respectively
m, I: mass and mass moment of inertia, respectively
ω, ω̇: angular velocity and angular acceleration, respectively
r, v, a: position, velocity, and acceleration relative to the rotating reference
τ = Iω̇ and F = ma: double-integrator plants
ω × Iω: cross-product rotational motion due to the rotating reference frame
mω̇ × r: cross-product translational motion due to the rotating reference frame
−2mω × v: cross-product translational motion due to the rotating reference frame
−mω × (ω × r): cross-product translational motion due to the rotating reference frame.
Figure 2. SIMULINK simulation program topologies used to generate the results in Section 3: (a) overall system topology used to simultaneously produce state and rate estimates integrated with noisy sensors and additionally optimal control calculations; (b) rotational plant topology: Euler's moment from Equation (1), elaborated in [5-12], describing rotational motion (notice the nonlinear coupled motion).
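A minimal sketch of these rotating-frame terms follows, with signs taken as quoted for Equation (1); all numerical values are illustrative placeholders, not vessel parameters from the paper.

```python
# Rotating-frame force terms as quoted for Equation (1).
import numpy as np

def apparent_force(m, omega, omega_dot, r, v):
    """Sum of Euler, Coriolis, and centrifugal terms for a rotating reference."""
    euler       = m * np.cross(omega_dot, r)                  # m * omega_dot x r
    coriolis    = -2.0 * m * np.cross(omega, v)               # -2m * omega x v
    centrifugal = -m * np.cross(omega, np.cross(omega, r))    # -m * omega x (omega x r)
    return euler + coriolis + centrifugal

# Illustrative numbers only:
print(apparent_force(1.0,
                     np.array([0.0, 0.0, 0.10]),   # omega (rad/s)
                     np.array([0.0, 0.0, 0.01]),   # omega_dot (rad/s^2)
                     np.array([1.0, 0.0, 0.0]),    # r (m)
                     np.array([0.0, 0.5, 0.0])))   # v (m/s) -> [0.11, 0.01, 0.]
```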
Problem Scaling and Balancing
Consider problems whose solution must simultaneously perform mathematical operations on very large numbers and very small numbers. Such problems are referred to as poorly conditioned. Scaling and balancing are one potential mitigation, where equations may be transformed to operate with similarly ordered numbers by scaling the variables to nominally reside between zero and unity. Scaling problems by common, well-known values permits single developments to be broadly applied to a wide range of state spaces not initially intended. Consider problems simultaneously involving very large and very small values of time (t), mass (m)/mass moments of inertia (I), and/or length (r). Normalizing by a known value permits variable transformation such that the newly defined variables are of similar order, e.g., t̄ ≡ t/t_f, Ī ≡ I/I_system = J̄ ≡ J/J_system, m̄ ≡ m/m_system, r̄ ≡ r/r_ref, where r indicates generic displacement units like x, y, or angle. Such scaling permits problem solution with a transformed variable mass and inertia of unity value, initial time of zero and final time of unity, and state and rate variables that range from zero to unity, making the developments here broadly applicable to any system of particular parameterization.
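A minimal sketch of this nondimensionalization follows; the reference values (t_f, m_system, r_ref) are assumptions chosen so the scaled variables lie near [0, 1], not parameters from the paper.

```python
# Scale raw time, mass, and displacement onto similarly ordered variables.
def nondimensionalize(t, m, r, t_f=10.0, m_system=500.0, r_ref=2.0):
    """Return (t_bar, m_bar, r_bar), each nominally in [0, 1]."""
    return t / t_f, m / m_system, r / r_ref

print(nondimensionalize(5.0, 500.0, 1.0))  # (0.5, 1.0, 0.5)
```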
Scaled Problem Formulation
The problem is formulated in the standard form described in Equations (2)-(8), where x(·), u(·) are the decision variables. The endpoint cost E(x(t_f)) is also referred to as the Mayer cost. The running cost F(x(t), u(t)) is also referred to as the Lagrange cost (usually with the integral). The standard cost function J[x(·), u(·)] is also referred to as the Bolza cost, the sum of the Mayer cost and the Lagrange cost. Endpoint constraints e(x(t_f)) are equations that are selected to be zero when the endpoint is unity. The task is to minimize

$$J[x(\cdot),u(\cdot)] = E\big(x(t_f)\big) + \int_{t_0}^{t_f} F\big(x(t),u(t)\big)\,dt$$

subject to the constraining dynamics and endpoint constraints, where
u: decision vector
H: Hamiltonian operator corresponding to system total energy
λ^T: adjoint operators, also called co-states (corresponding to each state)
ν^T: endpoint costates
e(x(t_f)): endpoint constraints.
Hamiltonian System: Minimization
The Hamiltonian in Equation (8) is a function of the state, co-state, and decision criteria (or control) and allows linkage of the running costs F(x, u) with a linear measure of the behavior of the system dynamics f (x, u). Equation (9) articulates the Hamiltonian of the problem formulation described in Equations (2)- (5). Minimizing the Hamiltonian with respect to the decision criteria vector per Equation (10) leads to conditions that must be true if the cost function is minimized while simultaneously satisfying the constraining dynamics. Equation (11) reveals the optimal decision u will be known if the rate adjoint can be discerned.
Hamiltonian System: Adjoint Gradient Equations
The change of the Hamiltonian with respect to the adjoint λ maps to the time-evolution of the corresponding state in accordance with Equations (12) and (13).
The rate adjoint was discovered to reveal the optimal decision criteria, and the adjoint equations reveal that the rate adjoint is time-parameterized with two unknown constants still to be sought. Together, Equations (11)-(13) form a system of differential equations to be solved with boundary conditions (often referred to in mathematics as a two-point boundary value problem).
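As a quick cross-check of the boundary value problem just posed, the following sketch solves it symbolically for the scaled double integrator. The linear-in-time form u(s) = as + b and the quiescent/unity boundary conditions are taken from the surrounding text; the SymPy usage itself is merely illustrative, not the authors' code.

```python
# Symbolic solution of the minimum-effort double-integrator TPBVP:
# dynamics x' = v, v' = u, with u linear in time per the adjoint equations.
import sympy as sp

t, s, a, b = sp.symbols("t s a b")
u = a * s + b                                   # optimal decision, linear in time
v = sp.integrate(u, (s, 0, t))                  # v(t) with quiescent v(0) = 0
x = sp.integrate(v.subs(t, s), (s, 0, t))       # x(t) with x(0) = 0
sol = sp.solve([sp.Eq(v.subs(t, 1), 0),         # terminal rest:      v(1) = 0
                sp.Eq(x.subs(t, 1), 1)],        # unity scaled state: x(1) = 1
               [a, b])
print(sol)                                      # {a: -12, b: 6}
```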
Terminal Transversality of the Endpoint Lagrangian
The endpoint Lagrangian E in Equation (14) adjoins the endpoint cost E(x(t_f)) and the endpoint constraint functions e(x(t_f)) of Equation (8) and provides a linear measure for the endpoint conditions in Equation (7). The endpoint Lagrangian E exists at the terminal (final) time alone. The transversality condition in Equation (15) specifies that the adjoint at the final time is perpendicular to the cost at the endpoint. In this problem, the endpoint cost E(x(t_f)) = 0. Equations (16) and (17) are often useful when seeking a sufficient number of equations to solve the system.
New Two-Point Boundary Value Problem
For the two-state system, four equations are required with four known conditions to evaluate the equations. In this instance, two equations, (3)-(10), have been formulated for the state dynamics; two more, Equations (18) and (19), for the adjoints; and two more, Equations (20) and (21), for the adjoint endpoint conditions. Four known conditions, Equations (22)-(25), have also been formulated. Combining Equations (11) and (13) produces Equation (26).
Solving the system of two Equations (29) and (30) produces a = −12 and b = 6. Substituting Equation (26) into Equation (11) with a and b produces Equation (31), and substitution of a and b into Equations (27) and (28), respectively, produces Equations (32) and (33), the solution of the trajectory optimization problem.
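For reference, substituting a = −12 and b = 6 into the linear-in-time decision and integrating from quiescent initial conditions gives closed forms that satisfy the boundary conditions. The explicit expressions below are a reconstructed verification sketch consistent with Equations (31)-(33) as described, not quoted from the paper:

```latex
\begin{align}
  u(t) &= 6 - 12t, \\
  v(t) &= \textstyle\int_0^t u(s)\,ds = 6t - 6t^2, & v(1) &= 0, \\
  x(t) &= \textstyle\int_0^t v(s)\,ds = 3t^2 - 2t^3, & x(1) &= 1.
\end{align}
```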
Equations (31)-(33) constitute the optimal solution for quiescent initial conditions and the state final conditions (zero velocity and unity scaled position). To implement a form of feedback (not classical feedback), consider leaving the initial conditions non-specific in variable-form as described next.
Real-Time Feedback Update of Boundary Value Problem Optimum Solutions
Classical methods utilize feedback of the asserted form u = −Kx for state variable x, where the decision criteria (for control or state estimation/observation) and gains K are solved to achieve some stated performance criteria. Such methods are used in Section 3 and their results are established as benchmarks for comparison. So-called modern methods utilize an optimization problem formulation to eliminate classical gain tuning, substituting optimal gain selection but retaining the asserted form of the decision criteria. Such methods are often referred to as "linear-quadratic optimal" estimators or controllers. These estimators are also presented as benchmarks for comparison, where the optimization problem equally weights state errors and estimation accuracy.
Alternative use of feedback is proposed here (whose simulation is depicted in Figure 3b). Rather than the classical feedback topology's assertion of u = −Kx, i.e., utilization of state feedback in formulating the estimator's or controller's decision criteria, this section proposes relabeling the current state feedback as the new initial conditions of the two-point boundary value problem used to solve for optimal state estimates or control decision criteria in Equations (22) and (23). The solution of Equations (26)-(28) using the initial values of Equations (22) and (23) manifests in the values of the integration constants: a = −12 and b = 6. As done in real-time optimal control, the values of the integration constants are left "free" in variable form, and their values are newly established for each discrete instance of state feedback (re-labeled as new initial conditions). This notion is proposed in Proposition 1, whose proof expresses the form of the online-calculated integration constants that solve the new optimization problem. The two constants â and b̂ are utilized in the same decision Equation (31), where the estimates replace the formerly solved values of the boundary value problem, resulting in Equation (40).
Proposition 1.
Feedback may be utilized, not in the classical closed form, to solve the constrained optimization problem in real time.
Proof of Proposition 1. Implementing Equations (34)-(37) in matrix form, as revealed in Equation (38), permits solution for the unknown constants as a function of time, as displayed in Equation (39); subsequent use of the unknown constants forms the new optimal solution from the current position and velocity per Equation (40).
In Section 3, estimation of â and b̂ becomes singular, due to the inversion in Equation (39), as the terminal endpoint is approached and the matrix in Equation (39) becomes rank deficient; there, switching to Equations (31)-(33) is implemented, as depicted in Figure 4d, to avoid the deleterious effects of singularity when applying Proposition 1. The cases with switching at singular conditions are suffixed with "with switching" in the respective labels.
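The following sketch illustrates Proposition 1 together with the singular switching just described. The [T] matrix and right-hand side are a hypothetical reconstruction of the Equation (39) system for the scaled double integrator (the paper's Equations (34)-(39) are not reproduced here), and the determinant tolerance is an assumed placeholder:

```python
import numpy as np

A_OPEN, B_OPEN = -12.0, 6.0  # open loop constants (Equations (29)-(31))

def resolve_constants(t0, x0, v0, det_tol=1e-6):
    """Re-solve the BVP treating the fed-back state (x0, v0) at time t0 as
    new initial conditions (Proposition 1).  Hypothetical reconstruction of
    the [T] system in Equation (39) for the scaled double integrator."""
    T = np.array([[(1 - t0**2) / 2, 1 - t0],
                  [(1 - t0**2) / 2 - (1 - t0**3) / 3, (1 - t0)**2 / 2]])
    rhs = np.array([-v0, 1 - x0 - v0 * (1 - t0)])
    if abs(np.linalg.det(T)) < det_tol:      # "singular switching":
        return A_OPEN, B_OPEN                # fall back to the open loop optimum
    return tuple(np.linalg.pinv(T) @ rhs)    # Moore-Penrose pseudoinverse

print(resolve_constants(0.0, 0.0, 0.0))    # (-12.0, 6.0): matches open loop
print(resolve_constants(0.999, 1.0, 0.0))  # near t_f: switches to open loop
```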
Feedback Decoupling of Nonlinear, Coupled Motion Due to Cross Products
The real-time feedback update of boundary value problem optimum solutions is often used in the field of real-time optimal control, but a key unaddressed complication remains: the nonlinear, coupling cross-products of motion due to rotating reference frames. Here, a feedback decoupling scheme is introduced, allowing the full nonlinear problem to be addressed by the identical scaled problem solution presented, and such is done without simplification, linearization, or reduction by assumption. In Proposition 2, feedback decoupling is proposed to augment the optimal solution already derived. The resulting modified decision criteria in Equation (42) are utilized in the simulations presented in Section 3 of this manuscript, but a single case omitting Proposition 2 is presented to highlight the efficacy of the approach. Proposition 2. The real-time optimal guidance estimation and/or control solution may be extended from the double-integrator to the nonlinear, coupled kinetics by feedback decoupling as implemented in Equation (41).
Proof of Proposition 2. For nonlinear dynamics of translation or rotation as defined in Equation (1), where the double-integrator is augmented by cross-coupled motion due to rotating reference frames, the same augmentation may be added to the decision criteria in Equation (40) using feedback of the current motion states in accordance with Equation (42). The claim is numerically validated with simulations of "cross-product decoupling" that are nearly indistinguishable from the open loop optimal solution, and a single case "without cross-product decoupling" is provided for comparison.
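Equations (41) and (42) are likewise not reproduced here, so the sketch below only illustrates the general idea of Proposition 2 in the familiar form of feedback linearization of Euler's rotational equations: the measured cross-product term is fed back so that the commanded motion reduces to the double integrator the BVP solution assumes. The inertia matrix and rate values are placeholders.

```python
import numpy as np

def decoupled_command(u_optimal, omega, J):
    """Sketch of feedback decoupling (Proposition 2, Equation (42)-style).
    Euler's equations give J*omega_dot + omega x (J*omega) = tau, so
    commanding tau = J*u_optimal + omega x (J*omega) leaves
    omega_dot = u_optimal, recovering double-integrator kinetics."""
    return J @ u_optimal + np.cross(omega, J @ omega)

J = np.diag([1.0, 2.0, 3.0])              # placeholder inertia matrix
omega = np.array([0.1, -0.2, 0.05])       # measured body rates
u_opt = np.array([6.0, 0.0, 0.0])         # e.g., u(0) = b = 6 on one axis
print(decoupled_command(u_opt, omega, J))
```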
Analytical Prediction of Impacts of Variations
Assuming the Euler discretization (used in the validating simulations) for output y, index i, and integration solver timestep h, Equation (43) would seem to indicate a linear noise-output relationship. Equation (44) indicates the relationship for quiescent initial conditions, indicating the result of a single draw. In a Monte Carlo sense (to be simulated), over a very large number n of draws, Equation (45) indicates the expectation from theory, and Equation (46) scales the noise entering the simulation so that the discretized computer simulation correctly reflects the noise power of the noisy sensors. Equation (46) was used to properly enter the sensor noise in the simulation (Figures 2a and 3a).
Assuming this implementation of noise power for a given Euler (ode1) discretization in SIMULINK, the 1-σ error ellipse may be calculated as Equation (47) for the system in canonical form, in accordance with [40]; this was implemented in Figure 3a and depicted in the "scatter plots" in Section 3's presentation of the results of over ten-thousand Monte Carlo simulations.
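Equations (43)-(47) are not reproduced above, so the sketch below only illustrates the two ingredients described narratively: drawing one Gaussian sensor-noise sample per fixed Euler step and recovering a 1-σ error ellipse from the resulting scatter via the sample covariance. The noise level matches the 0.01 standard deviation used in Section 3.5; the band-limited noise-power conversion is an assumption about the usual SIMULINK convention.

```python
import numpy as np

rng = np.random.default_rng(0)
h, sigma, n = 0.01, 0.01, 100_000      # Euler step, noise std, MC draws
P = sigma**2 * h                       # assumed SIMULINK band-limited
                                       # white-noise power convention

# One Gaussian draw per discrete step/run; a large-n Monte Carlo recovers
# the commanded noise level (an Equations (45)-(46)-style check).
noise = rng.normal(0.0, sigma, size=(n, 2))
print(noise.mean(axis=0), noise.std(axis=0))   # ~[0, 0] and ~[sigma, sigma]

# 1-sigma error ellipse from the sample covariance (Equation (47)-style):
# semi-axes are the square roots of the eigenvalues, orientation from the
# eigenvector of the largest eigenvalue.
cov = np.cov(noise.T)
eigvals, eigvecs = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigvals)
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
print(semi_axes, angle)
```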
Numerical Simulation in MATLAB/SIMULINK
Validating simulations were performed in MATLAB/SIMULINK Release R2021a with the Euler integration solver (ode1) and a fixed time step of 0.01 s, whose results are presented in Section 3, while this subsection displays the SIMULINK models permitting the reader to duplicate the results presented here. Sensor noise was added per Section 2.8. The classical feedback subsystem is displayed in Figure 4a. The optimal open loop subsystem implements Equation (31) and is elaborated in Figure 4b,c. The real-time optimal subsystem implements Equations (31) and (42), i.e., the optimal decision augmented by feedback decoupling. The "switch to open loop" subsystem switches when the matrix inverted in Equation (39) is singular, indicated by a zero-valued determinant, and is elaborated in Figure 4d. The quadratic cost calculation computes Equation (3) and is elaborated in Figure 4b, while the cross-product motion feedback implements the cross product of Equation (42). The P + V subsystem and PD/PI/PID subsystems depicted in Figure 4a implement classical methods not re-derived here, but whose computer code is presented in Appendix B, Algorithms A1 and A2. Figure 5 displays the SIMULINK subsystems used to implement the three instantiations of real-time optimization (labeled RTOC from their provenance in optimal control), with the switching displayed in Figure 3b. Section 2.10 thus presented the SIMULINK subsystems used to implement the equations derived in this section, and Table 1 displays the software configuration used to simulate the equations, leading to the results presented immediately afterwards in Section 3. Table 1. Software configuration for simulations reported in Section 3.
Software | Version | Integration Solver | Step-Size
MATLAB/SIMULINK | R2021a | Euler (ode1) | 0.01 s (fixed)
Results
Section 2 derived several options for estimating state, rate, and control simultaneously as outputs of Pontryagin's treatment of the problem formulated as a Hamiltonian system. Section 2.9 described the implementation of sensor noise narratively, while Figure 3 illustrated the topological elaboration in SIMULINK, including state and rate sensors with added Gaussian noise whose noise power was set in accordance with Section 2.9. SIMULINK subsystems were presented to aid repeatability (with callback codes in Appendix B). Those subsystems were used to run more than ten-thousand simulations: a nominal simulation run for each technique, with the remainder utilized to evaluate vulnerability to variations as described in Section 2.9. In Section 3.1, benchmarks of performance are established using classical methods for state and rate errors, with the optimum cost calculated in Section 2. Sections 3.2-3.4 describe real-time optimal utilization of feedback to establish online estimates of the solution of the modified boundary value problem described in Section 2; each respectively evaluates one of the three methods compared: backslash\inverse, pinv inverse, and LU inverse.
General lessons from the results include:
1. Classical feedback estimation methods are very effective at achieving very low estimation errors, but at higher costs when utilizing the estimates in the decision criteria (guidance or control).
2. Backslash\inverse is relatively inferior to all other inverse methods (see the comparison sketch after this list).
3. Singular switching generally improves state and rate estimation and costs.
4. LU inverse and pinv inverse methods perform alike, with disparate strengths and weaknesses relative to each other.
5. Choosing the pinv inverse method as the recommendation, Monte Carlo analysis reveals the residual sensitivity to parameter variation is indistinguishable from the inherent sensitivity of the optimal solution when using the singular switching technique. Meanwhile, substantial vulnerability to parameter variation is revealed when singular switching is not used.
6. Lastly, omitting the complicating cross-products in the problem results in an order of magnitude higher estimation errors and several orders of magnitude higher parameter estimation error. Therefore, cross-product motion decoupling is strongly recommended for all instantiations of state and rate estimation.
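The inverse options named in lesson 2 can be contrasted numerically. The sketch below (illustrative Python, not the MATLAB of Appendix B; the [T] system is the same hypothetical double-integrator reconstruction used in the Proposition 1 sketch) shows the three methods agreeing while [T] is well conditioned, with the determinant collapsing toward zero near the final time, which is what motivates singular switching:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def T_system(t0, x0=0.0, v0=0.0):
    # Hypothetical reconstruction of the Equation (39) system for the
    # scaled double integrator (see the Proposition 1 sketch).
    T = np.array([[(1 - t0**2) / 2, 1 - t0],
                  [(1 - t0**2) / 2 - (1 - t0**3) / 3, (1 - t0)**2 / 2]])
    rhs = np.array([-v0, 1 - x0 - v0 * (1 - t0)])
    return T, rhs

for t0 in (0.0, 0.5, 0.99):
    T, rhs = T_system(t0)
    ab_backslash = np.linalg.solve(T, rhs)   # analogue of MATLAB's "\"
    ab_pinv = np.linalg.pinv(T) @ rhs        # Moore-Penrose pseudoinverse
    ab_lu = lu_solve(lu_factor(T), rhs)      # explicit LU factorization
    print(t0, np.linalg.det(T), ab_backslash, ab_pinv, ab_lu)
# All three agree while [T] is well conditioned; det([T]) -> 0 toward the
# final time, which is why the singular-switching fallback is needed.
```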
Benchmark Classical Methods
Classical methods as presented in [41,42], with nonlinear decoupling loops as proposed in Equation (42) and depicted in Figure 3b, were implemented in SIMULINK according to Figure 4a. Computer code implementing these classical methods is presented in Appendix B, Algorithm A1. Estimation was executed by feedback of proportional plus integral plus derivative (PID), proportional plus derivative (PD), and also proportional plus velocity, and the results are displayed in Figure 6, establishing the benchmark for state and rate tracking. Table 2 displays quantitative data corresponding to Figure 6's qualitative displays. Notice the optimal estimation of state, rate, and decision criteria is also included in Figure 6 and Table 2, since the optimal cost benchmark is established by Pontryagin's treatment in Equation (31).
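For orientation, a minimal stand-in for one classical benchmark (a sketch, not the tuned code of Algorithm A1; the gains are placeholder manual tunings, and the quadratic cost convention J = ∫u²/2 dt, whose open loop optimum is the value of 6 quoted in the later tables, is an assumption) drives the scaled double integrator with P + V feedback and tallies the cost of doing so:

```python
import numpy as np

# Proportional plus velocity (P + V) benchmark on the double integrator:
# u = Kp*(x_ref - x) - Kv*v, placeholder manually tuned gains.
Kp, Kv = 50.0, 15.0
h, steps = 0.01, 100
x = v = 0.0
effort = 0.0
for i in range(steps):                 # Euler (ode1), fixed step h
    u = Kp * (1.0 - x) - Kv * v
    effort += 0.5 * u**2 * h           # quadratic cost, Equation (3)-style
    x += h * v
    v += h * u
print(x, v, effort)  # near (1, 0), but at far higher cost than the optimum of 6
```

This mirrors the qualitative lesson of the benchmarks: tight tracking is easy to obtain classically, but the decision effort greatly exceeds the open loop optimum.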
Real-Time Optimal Methods with Backslash
This section displays the results of real-time optimal estimation using the backslash\inverse depicted in Figure 5b, with and without the singular switching displayed in Figure 4d, to invert the [T] in Equation (39). The results are compared to the open loop optimal results per Equation (31) displayed in Figure 4c. Figure 7 reveals real-time optimal state estimation performs relatively poorly using the MATLAB backslash\inverse, but performance is restored to near-optimal when augmented with singular switching. State and rate errors are restored to essentially optimal values, while cost is restored to very near the optimal case, as evidenced by the quantitative results displayed in Table 3. Table 4 reveals the estimation performance of the constants of integration solving the modified two-point boundary value problem (BVP) using state and rate feedback to reset the initial conditions of the BVP. Oddly, despite relatively superior performance estimating the states and rates when using the singular switching augmentation, parameter estimation is far inferior.
Table 4. Comparison of real-time optimal decision methods using backslash matrix inversion.

Decision Method | Mean â Error | Mean b̂ Error
Open loop optimal | −12 | 6
Real-time optimal using backslash | 21.2 | −26.6
Real-time optimal using backslash with singular switching | 4142 | −4121

Section 3.1 presented the results of classical and optimal methods as benchmarks for performance. Meanwhile, Section 3.2 presented the results of implementing real-time optimal estimation with backslash\inverse, with and without singular switching, compared to the optimal benchmark. Next, Section 3.3 presents results using pinv inverse with and without singular switching.
Real-Time Optimal Methods with Pinv
Inversion of the [T] matrix in Equation (39) was also accomplished by the Moore-Penrose pseudoinverse: [T]⁺ ≡ ([T]ᵀ[T])⁻¹[T]ᵀ. All other facets of the problem are left identical, while only the method of matrix inversion is modified, resulting in state and rate estimates and a comparison of control in Figure 9 with corresponding quantitative results in Table 5. Parameter estimation accuracy is displayed in Figure 10 and Table 6.

Table 5. Comparison of real-time optimal decision methods using pinv matrix inversion.

Decision Method | Final State Error | Final Rate Error | Decision Criteria/Control Effort
Real-time optimal using pinv | 0 | −0.1088 | 6.6914
Real-time optimal using pinv with singular switching | 0.0296 | 0.0600 | 6.0012
− without cross-product decoupling | 0.3381 | −0.5936 | 6.0012
Table 6. Comparison of real-time optimal decision methods using p-inv matrix inversion.

Decision Method | Mean â Error | Mean b̂ Error
Open loop optimal | −12 | 6
Real-time optimal using p-inv | 21 | −26
Real-time optimal using p-inv with singular switching | 4101 | −4080
Real-Time Optimal Methods with LU-Inverse
Inversion of the [T] matrix in Equation (39) was also accomplished by LU factorization. All other facets of the problem are left identical, while only the method of matrix inversion is modified, resulting in state and rate estimates and a comparison of control in Figure 11 with corresponding quantitative results in Table 7. Parameter estimation accuracy is displayed in Figure 12 and Table 8. Table 7. Comparison of real-time optimal decision methods using LU-inverse matrix inversion.
Decision Method | Final State Error | Final Rate Error | Decision Criteria/Control Effort
Open loop optimal | 0.0296 | 0.060 | 6
Real-time optimal using LU-inverse | 0.0030 | −0.087 | 6.371
Real-time optimal using LU-inverse with singular switching | 0.0284 | 0.1188 | 5.8283

Table 8. Comparison of real-time optimal decision methods using LU-inverse matrix inversion.

Decision Method | Mean â Error | Mean b̂ Error
Open loop optimal | −12 | 6
Real-time optimal using LU-inverse | 21 | 21
Real-time optimal using LU-inverse with singular switching | 4142 | 4142
Monte Carlo Analysis Using Pinv (with Singular Switching) and Open Loop Optimal with Cross-Product Decoupling
Over ten-thousand simulation runs were performed with 10% uniformly random variations in plant parameters (mass and mass moment of inertia). Noise was added to the state and rate sensors with zero mean and standard deviation 0.01, and the results are displayed in the "scatter" plots in Figure 13 with corresponding quantitative results displayed in Table 9. Feedback implemented by resetting the initial condition of the reformulated boundary value problem (when implemented with singular switching) yielded optimal results when augmented with cross-product decoupling.
Figure 12. Comparison of real-time optimal methods with scaled time on the abscissae and respective ordinates titled in the subplot captions: dashed line is real-time optimal using backslash, dotted line is real-time optimal using LU-inverse with singular switching. (a) Estimates of â, (b) estimates of b̂. The ranges of the zoomed view in the inset are indicated by the respective scales.
Figure 13. Comparison of the impacts of system variations on real-time optimal using (a) open loop optimal, (b) pinv without switching, (c) pinv with switching. Scaled state on the abscissae and scaled rate on the ordinates.

Table 9. Impact of variations with real-time optimal decision methods using pinv inversion.
Decision Method | Mean Final State Error | Mean Final Rate Error
Open loop optimal | 0.0264 | 0.0573
Real-time optimal using pinv | 0.0041 | −4.960
Real-time optimal using pinv with singular switching | 0.0264 | 0.0573
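As a compact stand-in for the ten-thousand-run SIMULINK campaign (a sketch under the assumed scaled double-integrator dynamics, not a reproduction of the study), the loop below propagates the open loop optimal command through a plant whose effective mass is perturbed by ±10% uniform variation, adds N(0, 0.01) noise to the recorded state and rate, and reports the scatter of the final errors:

```python
import numpy as np

rng = np.random.default_rng(1)
h, steps, runs = 0.01, 100, 10_000
finals = np.empty((runs, 2))

for k in range(runs):
    m = rng.uniform(0.9, 1.1)                # +/-10% plant parameter variation
    x = v = 0.0
    for i in range(steps):                   # Euler (ode1), fixed step h
        u = -12.0 * ((i + 0.5) * h) + 6.0    # open loop optimal, Equation (31),
                                             # sampled mid-step to reduce Euler bias
        x += h * v
        v += h * u / m                       # perturbed effective mass
    # noisy state and rate sensors (zero mean, standard deviation 0.01)
    finals[k] = [x + rng.normal(0, 0.01), v + rng.normal(0, 0.01)]

errors = finals - np.array([1.0, 0.0])       # target: unit position, zero rate
print(errors.mean(axis=0), errors.std(axis=0))  # scatter akin to Figure 13
```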
Comparison of Results
Section 3.1 presented the benchmark results produced by classical methods and open loop optimal mathematical solutions. Section 3.2 presented results utilizing MATLAB's backslash\inversion, while Section 3.3 included results using the Moore-Penrose pseudoinverse, pinv. Section 3.4 presented results using the LU-inverse. Section 3.5 revealed robustness to variations in plant parameters with both state and rate sensor noise. This section consolidates the results into a single table of raw data depicted in Table 10. These data will be used to produce percent performance improvements as figures of merit in the Discussion (Section 4).
Discussion
State and rate estimation algorithms fused with noisy sensor measurements using several of the proposed methodologies achieve state-of-the-art accuracies with optimality that is analytic and deterministic rather than stochastic, and therefore use very simple equations with necessarily low computational burdens. Simple relationships with small numbers of multiplications and additions, comparable in simplicity to classical methods, produce optimum results that exceed the modern notion of linear-quadratic optimal estimation. Implementation of non-standard feedback achieves robustness at the additional computational cost of a matrix inverse, and therefore three optional inversion methods were compared. General lessons follow, taking manually tuned PID as the benchmark for state and rate estimation errors, while open loop optimal cost is the benchmark for the cost of utilizing state estimates for guidance and control:
1. Classical feedback estimation methods (tuned per the computer code presented in Appendix B, Algorithm A1) are very effective at achieving very low estimation errors, but at higher costs when utilizing the estimates in the decision criteria (guidance or control).
   a. Linear-quadratic optimal estimation achieved 87% better state estimates, but over 400% poorer rate estimates compared to classical PID, with costs over 2000% of open-loop optimal costs.
   b. Classical position plus velocity estimation achieved 90% improved state estimation with over 30% better rate estimation, but the cost of implementation remains high (over 400% higher than the optimal benchmark).
2. Open loop optimal estimation established the mathematical benchmark for cost, and achieved 72% improved state estimation and 33% improved rate estimation errors.
3. Backslash\inverse is relatively inferior to all other inverse methods, producing 200% poorer state estimation and over 2000% poorer rate estimates with 52% reduced costs compared to the optimal benchmark.
4. Singular switching generally improves state and rate estimation and costs:
   a. Singular switching with backslash\inverse produced 69% improvement in state estimation and 29% improvement in rate estimation with roughly optimal costs.
   b. Singular switching with LU-inverse produced 72% improvement in state estimation and 33% improvement in rate estimation with roughly optimal costs (3% better than optimal . . . a numerical curiosity).
   c. Singular switching with pinv inverse produced 69% improvement in state estimation and 29% improvement in rate estimation with roughly optimal costs (approximately identical improvement percentages to LU-inverse with singular switching).
5. LU inverse and pinv inverse methods perform alike, with disparate strengths and weaknesses relative to each other.
6. Choosing the pinv inverse method as the recommendation, Monte Carlo analysis reveals that the residual sensitivity to parameter variation is indistinguishable from the inherent sensitivity of the optimal solution when using the singular switching technique. Meanwhile, substantial vulnerability to parameter variation is revealed when singular switching is not used.
7. Lastly, omitting the complicating cross-products in the problem results in an order of magnitude higher estimation errors and several orders of magnitude higher parameter estimation error. Therefore, cross-product motion decoupling is strongly recommended for all instantiations of state and rate estimation.
Notes on Percentages of Performance Improvements
The choice of benchmarks for establishing percentage performance improvements leads to seemingly exaggerated numbers. The current selection of benchmarks emphasizes the strengths of the respective methods: classical feedback estimation methods are designable to achieve high accuracy but suffer from high effort by the decision criteria associated with their use. Optimal methods as instantiated here emphasize minimization of decision effort, so the benchmark for control effort is selected as optimal open loop rather than classical feedback (e.g., manually tuned PID). Compared to the optimal value (of six) as a benchmark, percent degradations of over thirty thousand percent result. If the calculation had instead used the manually tuned classical PID as a benchmark, the optimal effort would exhibit an improvement of over ninety-nine percent.
The final line in Table 11 illustrates the extreme penalty of not using feedback decoupling of the vector cross-products in Equation (1) representing translation due to the rotating reference frame. The penalty embodies the deleterious effects of neglecting treatment of the nonlinear, coupled, full six-degree-of-freedom system of equations. Table 11. Percent performance improvement comparison of real-time optimal decision methods.
Decision Method | Mean Final Position Error Percent Improvement | Mean Final Rate Error Percent Improvement
Future Research
Notice in Figure 3c that the sinusoidal wave input is coded using the identical time-index as the rest of the simulation. The next stages of future research will utilize this identical simulation to investigate the efficacy of the proposed virtual sensoring amidst unknown wave actions. Secondly, hardware validation of key facets of this research is a logical next step.
Conclusions
Using variations of mathematical optimization to provide state, rate, and decision/control provides virtual sensing information useful as sensor replacements. In this instance, arbitrary position and rate sensors were modeled as ideal sensors plus Gaussian random noise, and algorithms were presented and compared that provide very smooth (not noisy) signals for position, rate, and acceleration (manifest in the decision/control). There was no acceleration sensor, so the notion of sensor replacement is manifest for acceleration, while the position and rate information was provided by the selected algorithm acting as a virtual sensor. Real-time optimal (nonlinear) state estimation using the Moore-Penrose pseudoinverse (implemented in MATLAB using the pinv command) was revealed to be the most advisable approach, with very highly accurate estimates and essentially mathematically optimal (low) costs of utilization. The real-time optimal inverse calculation becomes poorly conditioned as the end-state is approached due to rank deficiency in the matrix inversion, so switching to the open loop optimal at the very end was implemented when the determinant of the matrix became nearly zero.
Funding: This research received no external funding. The APC was funded by the author.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: While the simulation codes used to produce these results are presented in the manuscript and the appendix, data supporting reported results can be obtained by contacting the corresponding author.
Conflicts of Interest:
The author declares no conflict of interest. | 11,459 | 2021-07-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Bone Connectivity and the Evolution of Ichthyosaur Fins
After the end-Triassic extinction, parvipelvian ichthyosaurs diversified and became dominant elements of marine ecosystems worldwide. By the Early Jurassic, they achieved a thunniform body plan that persisted for the last 100 m.y. of their evolution. Diversification and extinctions of thunniform ichthyosaurs, and their swimming performance, have been studied from different perspectives. The transformation of limbs into hydrofoil-like structures for better control and stability during swimming predates thunniform locomotion. Despite their importance as control surfaces, fin evolution among thunnosaurs remains poorly understood. We explore ichthyosaur fin diversity using anatomical networks. Our results indicate that, under a common hydrofoil controller fin, the bone arrangement diversity of the ichthyosaur fin was greater than traditionally assumed. Changes in the connectivity pattern occurred stepwise throughout the Mesozoic. Coupled with other lines of evidence, such as the presence of a ball-and-socket joint at the leading edge of some derived Platypterygiinae, we hypothesize that fin network disparity also mirrored functional disparity, likely associated with different capabilities of refined maneuvering. The ball-and-socket articulation indicates that this local point could be acting like a multiaxial intrafin joint, changing the angle of attack and thus affecting maneuverability, similar to the alula of flying birds. Further studies on large samples and quantitative experimental approaches would be worthwhile to test this hypothesis.
Introduction
Ichthyosauromorphs diversified in the aftermath of the Permo-Triassic mass extinction [1,2]. The macromorphological evolutionary changes in their body plan provide canonical examples of convergence among tetrapods secondarily adapted to the marine environment (SECAD from hereon) [3]. As early as the Anisian (Middle Triassic), some ichthyosauromorphs evolved fusiform bodies with dorsal and well-developed caudal fins [4]. Since then, and throughout the Jurassic and much of the Cretaceous, Ichthyosauria (ichthyosaurs from hereon) have been dominant elements in marine ecosystems worldwide. Within this clade, thunnosaurian ichthyosaurs are easily recognizable by their streamlined body, deepest at the pectoral region and tapering posteriorly to the peduncle of the lunate caudal fin [5,6] (Figure 1). Alongside Neoceti cetaceans, ichthyosaurs were the only tetrapods to evolve a thunniform body plan suitable for long-distance cruising [7][8][9] and the first vertebrates to achieve thunniform bodies [10].
As required, throughout the wide arc of SECAD lineages, the shift from continental to marine lifestyle was coupled with the transformation of the columnar and weight-bearing limbs of continental forms into paddles or fins, both for propulsion and/or steering during swimming [11][12][13]. Both functional categories of modified limbs (paddle-shaped or hydrofoil-shaped) imply the enclosing of limb bones into soft-tissue envelopes and the lengthening of the distal region by the addition of bones [14]. As a result, all SECAD have better-integrated limbs in comparison with their terrestrial ancestors. However, among them, the evolutionary strategy and adaptation path followed by ichthyosaurs were unique. Network analysis by Fernández et al. [15] showed that the most widespread evolutionary strategy among SECAD was the enclosing of limb bones in soft-tissue "envelopes" (like "baby mittens"), without drastically impacting the underlying connectivity pattern of the bones. In contrast, the strategy depicted by ichthyosaurs involved "zipping up" their fingers so that digital bones (transformed into carpal-like elements) were connected not only proximodistally with the surrounding bones but also laterally. This strategy resulted in highly integrated and homogeneous forefins in ichthyosaurs, allowing them to explore new regions of the morphospace [15].
In the last decades, the knowledge of the speed and mode of ichthyosaur evolution and extinction increased significantly. Integrative analyses of disparity and evolutionary rates indicate that the evolution of the lineage was characterized by a Triassic early burst followed by an evolutionary bottleneck leading to a long-term reduction of evolutionary rates and disparity throughout the Jurassic and Cretaceous [2]. On the other hand, disparity and diversity data of Cretaceous forms show that the extinction of ichthyosaurs was characterized by a two-phase pathway: an early Cenomanian extinction that radically reduced their ecological diversity, and a final extinction event at the end of the Cenomanian [16]. However, within this general framework, two key episodes of ichthyosaur evolution are particularly significant due to their impact on the diversity and morphological innovation of the group, and both had ophthalmosaurian parvipelvians as their main protagonists: the Early/Middle Jurassic and the Jurassic/Cretaceous transitions. This clade of parvipelvians accounts for more than half of the entire evolutionary history of ichthyosaurs and is known for drastic transformations of their forefins, including the emergence of pre-radial and post-ulnar zeugopodial elements and numerous accessory digits. The Early/Middle Jurassic transition, although poorly documented [17][18][19], witnessed the emergence of the ophthalmosaurians. In contrast, the Jurassic/Cretaceous transition marks a profound drop in the diversity (and probably disparity) of the clade [16,20].
Understanding the evolutionary transformation of ichthyosaur fins is crucial for taking the first steps in comprehending the role of forefins during swimming in these marine reptiles, particularly as they evolved into efficient thunnosaurian cruisers. Here we analyze the morphological disparity of ichthyosaurs by exploring how the underlying connectivity pattern of fins transformed during ichthyosaurs' evolutionary history. We increased the taxon sample of anatomical networks of fins from 3 [15] to 16, including forefins of Mixosaurus cornalianus and 14 parvipelvians. Finally, framed against the phylogeny, we track the changes in the connectivity pattern of ichthyosaur forefins over 147 million years (from the Anisian up to the Albian), comprising most of the evolutionary history of the ichthyosauromorphs.
The results of the analyses of the fin networks highlighted that, within a clear trend towards better integrated and modular forefins, ichthyosaurs depicted a broad array of connectivity patterns. The overall similarity of fin morphology (i.e., hydrofoil design) hides a striking underlying disparity of bone arrangements. We also found that major evolutionary changes in fin networks occurred stepwise. Given the significance of forefins as control surfaces during swimming, we propose that the forefin disparity mirrored functional disparity as well, likely associated with disparity in refined maneuverability, principally among derived thunniform swimmers.
Materials and Methods
We built undirected and unweighted anatomical network models of the forefin for a total of 16 ichthyosaurian taxa (Supplementary Material, Table S1), in addition to the SECAD dataset of [15]. For the selection of taxa and specimens, we chose complete fins in their anatomical position without any deformation. In cases where this was not possible, we reconstructed the missing parts using all available information, ensuring that at least the minimum number of fin elements were positioned in their most conservative configuration. Anatomical network analysis seeks to describe and analyze the underlying connectivity pattern of the bone elements and their connections, being sutures, contacts, and articulations. This kind of analysis adapts concepts of network analysis to anatomy, where network metrics are interpreted as metrics of anatomical complexity, integration, heterogeneity, and modularity (following [22] and references therein). Each element of the forefin is represented as a node, and contacts among them are depicted as links connecting the nodes. Osteological information is based on personal examination (MF, LC, AM) and published specimens. Network models were created in the open-source software Gephi v.0.10.0 [23], which was implemented for calculation of the network's descriptors, including those developed specifically for anatomical networks (heterogeneity and parcellation, based on [22]). These metrics are anatomically interpreted as measures of the complexity of connections (density: number of connections divided by the maximum possible number of connections), anatomical integration both locally (average clustering coefficient: number of connections between the neighbors of a node divided by the maximum possible number of connections in the neighborhood, on average) and along the entire length of the structure (average path length: average of the path length between any pair of nodes), the variability of connections (heterogeneity: standard deviation of connections divided by the mean number of connections), and anatomical modularity (parcellation: based on the number of modules and the number of nodes in each module). For a detailed description of the network metrics and how they are calculated, see [22] and references therein. Data from ichthyosaur limbs were subjected to two PCA analyses: one with the complete SECAD dataset from [15] adding the new network models obtained herein (Figure 2), and a second considering solely the ichthyosaur information to gain detailed observations (Figure 3). A major change compared to the [15] analysis is that we now include the average path length metric as well, under normalized variance-covariance correlation, because the average path length is measured in different units compared to the other metrics. Finally, based on the phylogenetic hypothesis presented in [24], a reconstruction of the ancestral states was made in TNT v. 1.6 [25] by mapping the network metrics as continuous characters using the built-in optimization.
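For readers who wish to reproduce the flavor of these descriptors outside Gephi, the sketch below computes them with NetworkX on a toy five-bone "fin" (the graph and the module-detection choice are illustrative assumptions, not the authors' dataset or workflow; the parcellation formula follows one common anatomical-network formulation, and [22] holds the definition the authors follow):

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

# Toy undirected, unweighted anatomical network: bones as nodes,
# articulations/contacts as links (placeholder elements, not real data).
G = nx.Graph([("humerus", "radius"), ("humerus", "ulna"), ("radius", "ulna"),
              ("radius", "d1"), ("ulna", "d2"), ("d1", "d2")])

degrees = np.array([d for _, d in G.degree()])
density = nx.density(G)                          # realized / possible links
clustering = nx.average_clustering(G)            # local integration
path_len = nx.average_shortest_path_length(G)    # whole-fin integration
heterogeneity = degrees.std() / degrees.mean()   # variability of connections

# Parcellation: one common formulation, P = 1 - sum((module size / N)^2),
# with modules from a community-detection pass (an assumption here).
modules = greedy_modularity_communities(G)
N = G.number_of_nodes()
parcellation = 1.0 - sum((len(m) / N) ** 2 for m in modules)
print(density, clustering, path_len, heterogeneity, parcellation)
```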
Figure 3. Ichthyosauria forefin morphospace plotted separately to aid comparison of morphospace occupancy through time. This is a second PCA using only data from ichthyosaurs.

If we focus on the Ichthyosauria forefin morphospace occupancy over time derived from the second PCA (Figure 3), from the Middle Triassic represented by Mixosaurus up to the Albian (Early Cretaceous) represented by Platypterygius hercynicus, there were no major shifts in fin morphospace occupation but an overall trend toward better integrated and more modular fins. Thus, this long-term tendency spanned approximately 137 million years, comprising most of the evolutionary history of the Ichthyosauriomorpha.
Morphospace Analyses
The increased taxon sampling and the inclusion of another descriptor in the analyses (average path length) complemented previous results. As in the former analysis including only three ichthyosaurs, the increased sample shows that the limb-to-fin transition of ichthyosaurs followed a unique strategy among SECAD. After the initial shift between the pattern of the basal ichthyosauromorphs (e.g., Nanchangosaurus and Hupehsuchus) and ichthyosaur fins, ichthyosaurs explored new regions of the morphospace. As depicted in Figure 2, the morphospace occupied by ichthyosaurs does not overlap with that of any other SECAD, a difference that is also confirmed statistically with a PERMANOVA analysis (Supplementary Material File S1). Nonetheless, this finding should be interpreted with caution until more taxa from other lineages of marine reptiles can be incorporated into the study. Within a general path to homogeneous reintegration (sensu [15]), the pattern of connectivity changes depicted by their networks indicates that the disparity among ichthyosaur fins was greater than previously assumed. Thus, the morphospace is expanded in all directions.
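The normalized PCA described in the Methods can be sketched as follows (illustrative only: the metric values are made-up placeholders rather than the taxa of Table S2); standardizing each descriptor before extracting components is what makes the analysis scale-free, which matters because average path length is measured in different units from the bounded descriptors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = taxa (placeholders), columns = network descriptors:
# density, avg clustering, avg path length, heterogeneity, parcellation.
X = np.array([[0.60, 0.55, 1.90, 0.45, 0.50],
              [0.72, 0.63, 1.60, 0.30, 0.55],
              [0.80, 0.70, 1.45, 0.22, 0.58],
              [0.85, 0.74, 1.35, 0.15, 0.60]])

Xs = StandardScaler().fit_transform(X)   # normalized variance-covariance
pca = PCA(n_components=2).fit(Xs)
coords = pca.transform(Xs)               # morphospace coordinates (PC1, PC2)
print(pca.explained_variance_ratio_)
print(coords)
```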
After the Triassic/Jurassic crisis, ichthyosaurs occupied a large morphospace (in blue in Figure 3), spreading along positive values on PC1 and PC2, except for the outlying Temnodontosaurus with low negative values on PC1. Within the common path to complex reintegration of their fore appendages, Jurassic forms spread across the empty morphospace. Some of them, like Chacaicosaurus, Hauffiopteryx, and Ophthalmosaurus, have the proximal elements better connected than the phalanges, resulting in relatively more heterogeneous networks. On the other hand, the connections of Ichthyosaurus and Caypullisaurus fins are distributed almost evenly across the networks, resulting in relatively more homogeneous fins. Temnodontosaurus trigonodon is the only parvipelvian with a diverging pattern (less homogeneous connectivity across the fin). The disparate location of this taxon is not surprising, as this taxon reduced the number of primary digits to three. The Cretaceous Myobradypterygius hauthali and Platypterygius hercynicus are clustered together and separated from the Jurassic thunniforms; this is due to their distinctive fin morphology characterized by an increased number of tightly packed phalanges, resulting in extremely homogeneous and better-integrated fins. However, the low sampling of Late Cretaceous taxa could underestimate the morphospace occupation during the last episodes of the evolutionary history of the lineage.
Connectivity Changes in the Forefins across Phylogeny
The analysis of the anatomical networks of SECAD fins [15] indicated that as early as the Middle Triassic, the evolutionary strategy in ichthyosaurs of "zipping up" their fingers was established and that, through the Jurassic, thunniform ichthyosaurs followed an adaptation path to homogeneous reintegration of their forefins. The analysis of an expanded sample (Table S2), mapped across the phylogeny under maximum parsimony, indicates four points where major changes happened and that these evolutionary changes occurred stepwise (Figures 4 and S1). The first step, noted at the Ichthyosauria node, denoted the early and drastic changes in the underlying connectivity pattern of limb elements, promoted by the "re-integration" of the fingers, which clearly impacted the network parameters. Whole-fin integration increases but without losing much of its modularity. While nodes, edges, and the average clustering coefficient increase, heterogeneity and parcellation decrease. At the Parvipelvian node, no major changes occurred except for the ongoing trend toward more homogeneous fins expressed by a decrease in heterogeneity (H) values. The second step likely occurred in the Early Jurassic. At this point, the fins became larger, but slightly less integrated and modular. After relatively long stability, two successive steps took place during the Middle and Late Jurassic. The last step registers the most abrupt change in the values of the network descriptors, marking a notable increase toward even more integrated and homogeneous networks.
Morphospace Occupation
Analyses of ichthyosaur disparity based on phylogenetic data sets [2,26] identified clear differences in morphospace occupation between Triassic and post-Triassic forms. These contributions proposed that ichthyosaurs passed through an evolutionary bottleneck close to the Triassic-Jurassic boundary and that, after this key period, ichthyosaur evolution showed a long-term reduction in evolutionary rates and disparity. Other approaches integrating ecomorphological metrics and functional disparity for ecospace modeling [27][28][29] agreed with these general results. Particularly, [28] found that, after the Triassic-Jurassic crisis, ichthyosaurs again achieved relatively high diversity in the Early Jurassic, but throughout the Middle and Late Jurassic the proportional disparity of ichthyosaurs became increasingly diminished. However, these general outcomes do not match the disparity of the connectivity pattern of the forefins found here (Figure 3). The analysis of the forefin networks showed no evidence of disparity retraction after the Early Jurassic, as depicted by the morphospace occupation of the Middle Jurassic and younger thunnosaurs. Notably, within a general tendency towards more integrated and modular fins, the thunnosaurian morphospace is expanded in all directions. Similar results have been obtained through the analysis of humerus and zeugopodium morphology among ophthalmosaurids [20]. Other lines of evidence, like those provided by bone microanatomy, e.g., [30,31], also suggest that thunnosaurs, and particularly ophthalmosaurids, were ecologically diverse throughout the Jurassic.
Fin Connectivity and Functional Disparity
The exploration of functional disparity focuses on morphological diversity (and its innovations) with a recognized impact on the way of life of animals [28,32]. In the particular case of the Mesozoic SECAD, since the pioneering contributions of Massare [5,33], most ecomorphological approaches have focused on the feeding apparatus [34][35][36] and paleohistology [37,38]. However, swimming performance is a key factor for SECAD, not only for dispersal during steady swimming but also for foraging. Thus, the skeletal thunniform body plan has been linked to the ecological abilities for the capture of fast pelagic prey such as fast-swimming belemnite cephalopods [10]. The evolution of the thunniform body plan of ichthyosaurs has also been explored in terms of energetic performance. Assuming that all post-Triassic ichthyosaurs were thunniform swimmers, it has been proposed that body size was a key factor in the evolution of swimming [39]. These contributions deal mainly with the steady locomotion of ichthyosaurs; however, different maneuverability performances are crucial for survival: escaping from predators and/or capturing elusive prey. Although belemnites were important items of thunnosaur diets, the gut content of Cretaceous Platypterygiinae, as well as tooth and skull morphology [16,40], indicate that they probably fed on a wide range of prey, including other vertebrates.
The role of the pectoral appendages of vertebrate swimmers as control surfaces is well known. Changes in the orientation of the control surfaces with respect to the body axis, as well as small changes in orientation at the leading and trailing edges, have an impact on stability and maneuverability. This is true for the flexible pectoral fins of fishes [41] but also, although to a lesser extent, for relatively stiff flippers like those of sharks and odontocetes. Among odontocetes, the lack of maneuverability is compensated by trading the small turn radii of flexible forms for higher turning rates, and they depict different turning performances [42]. In sharks, the majority of the pectoral fin area is internally supported by collagenous ceratotrichia, which cannot be actively moved [43]. Most of the stabilization relies on changes in the angle of attack or asynchronous pectoral fin movement [44,45]. Despite ichthyosauromorphs being axial swimmers throughout their evolution and having paired fins that must have acted on stability and maneuverability, the disparity of hydrofoils across thunnosaur clades has not been explored other than as an eventual source of phylogenetic or taxonomic information [24,46]. Given the functional relevance of fins as control surfaces, features such as the density, clustering, or path length of their bone arrangements could be considered not only expressions of morphological disparity but also of functional disparity among thunnosaurs, thus suggesting different ecological niches.
In addition to the observed disparity of connectivity patterns of the forefin of ichthyosaurs, an eloquent feature that has remained undescribed must be addressed: the presence of a ball-and-socket joint between the distal end of the humerus and extra-zeugopodial accessory elements on the leading edge of the forefin of some Late Jurassic-Cretaceous ichthyosaurs. Thus, in Platypterygius australis QM F3348 [47] and in the Late Jurassic Platypterygiine MLP 85-I-15-1 [48] (Figure S2), the proximal surface of the extra-zeugopodial element anterior to the radius is short (antero-posteriorly) and notably convex, and articulates with a strongly concave and small distal articular surface of the humerus. A similar condition occurs in the forefin of the Late Jurassic Platypterygiine Sumpalla [20], although in this taxon the articular facet on the distal humerus is not as well demarcated. This peculiar ball-and-socket joint between the humerus and the pre-radial accessory fin elements indicates that this local point could be acting like a multiaxial joint. If so, then subtle intrafin movements at this point would produce considerable changes on the leading edge; that is, it could act as a vortex generator that increases the lift force and enhances maneuverability during locomotion, analogous to the function of the alula in flying birds [49]. Notably, the forefin of Platypterygius americanus (UW 2421, Figure S2) [50] shows another very interesting condition: a ball-and-socket joint occurs on the trailing edge between the humerus and a pisiform. This condition suggests that the diversity of maneuvering abilities among derived ichthyosaurs may have been even greater. Quantitative experimental approaches would be worthwhile to test this hypothesis.
Unfortunately, the fins of QM F3348, MLP 85-I-15-1, and UW 2421, which are eloquent examples of ball-and-socket joints, could not be modeled for this study because they are very incomplete. It is expected that the exploration of deposits such as those of the Cretaceous Zapata Formation in Southern Chile [51,52] may provide more complete specimens in the near future.
Stepwise Evolution of Ichthyosaur Hydrofoils
Along the phylogeny there is a clear trend, expressed across the succession of major steps of connectivity changes, towards better integrated, more modular, and more homogeneous fins in ichthyosaurs (Figure 4). These major changes could be interpreted as steps of a stepwise evolutionary pattern of limb-to-controller-hydrofoil transition within ichthyosaurs. It is known that, on very broad scales, morphological iteration (and convergence) occurs frequently [53,54]. Whether this stepwise pattern denotes, at the lowest scale, a morphological iteration in the evolution of more efficient controller flipper-hydrofoils is a question worth testing empirically in the future.
The results of network analysis framed against the phylogeny show that the underlying connectivity patterns changed as ichthyosaurs evolved the thunniform body plan very early in the phylogeny. Thus, the first step of the connectivity changes coincides with the emergence of Ichthyosauria soon after the emergence of Ichthyosauromorphs in the Olenekian [1]. Some noteworthy modifications of the forefin, such as the lack of centralia [55], pre-dated these changes. The ongoing fin evolution throughout the Early Jurassic indicates that morphological changes that accompanied the emergence of parvipelvians and thunnosaurs, such as mesopodialization and the development of the thunniform body plan, respectively, predate the next steps of important changes in bone connectivity. The paucity of Aalenian-Bathonian records [56] obscures understanding of the fin transition between the Early and Middle Jurassic and the sudden appearance of ophthalmosaurid ichthyosaurs. Unfortunately, the most complete specimens of early ophthalmosaurids (i.e., Mollesaurus and Argovisaurus) lack their fins [19,57]. However, the comparison between Chacaicosaurus and Ophthalmosaurus icenicus, as well as the ancestral reconstruction using parsimony analysis of the network parameters (Figure 4), suggest that the complexity of the propodial-epipodial joint (as was the morphological innovation of the appearance of the pae) did not produce drastic changes in the connectivity pattern. In the same way, the rise of Platypterygiinae by the Late Jurassic is not mirrored by changes in the fin networks. It is likely that, along the evolution, the morphological innovations of the forefins (associated with the emergence of major clades) provided the structural framework that allowed the subsequent diversification of bone connectivity that ultimately triggered ecological diversity (e.g., diversity of refined maneuverability among thunniform swimmers).
Noteworthy, as also indicated by other ecological and diversity parameters [16,20], the Jurassic-Cretaceous transition seems to have reduced the disparity of the forefin. The only surviving lineage shows the most extreme pattern of homogeneous integration, but also a restricted occupation of the morphospace.
Conclusions and Future Directions
The generalized hydrofoil design of ichthyosaur fins hides a great diversity of bone arrangements. The occupation of the morphospace through time shows a clear evolutionary trend towards better integrated and more modular forefins. Within this common path, the disparity of thunnosaurs (as mirrored by the large occupation of morphospace areas) persisted throughout the Jurassic. A key period occurred at the Jurassic-Cretaceous boundary. Late Cretaceous derived platypterygiines explored a restricted, previously vacant area of the available space.
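The morphospace construction referenced here (and in Figure 2) can be illustrated with a brief sketch: a principal component analysis of a taxa-by-network-properties matrix. The property values below are synthetic placeholders, not the Table S2 data, so the resulting scores and explained variance are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: taxa; columns: network properties (nodes, density, clustering,
# path length). Values are hypothetical stand-ins for Table S2.
taxa = ["Basal taxon", "Thunnosaur A", "Thunnosaur B", "Platypterygiine"]
X = np.array([
    [28, 0.10, 0.05, 3.8],
    [60, 0.18, 0.35, 2.6],
    [55, 0.20, 0.40, 2.4],
    [70, 0.30, 0.55, 2.0],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_)
for t, (pc1, pc2) in zip(taxa, scores):
    print(f"{t:16s} PC1={pc1:+.2f} PC2={pc2:+.2f}")
```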
The connectivity pattern diversity (i.e., variations in density, clustering, path length, and node and edge values) may also represent functional diversity. Based on the role of the forefin as the control surface of swimming, we argue that the morphospace occupation can be interpreted in ecological-functional terms. The controller hydrofoils of ichthyosaurs are assumed to be relatively stiff and to have restricted mobility [37]. However, the number of nodes, density, clustering, and path length of their bony arrangements indicate that not all fins should have had the same performance in terms of partial surface deformation and/or relative stiffness. Notably, some derived platypterygiines had a ball-and-socket joint on the leading edge of their fins that could have facilitated localized bending of the leading edge, substantially affecting the angle of attack during swimming. Based on the integration of the outcomes of the network analysis and the gross anatomy of the leading edge, we propose diverse maneuverability capacities among members of the large clade Platypterygiinae. Further studies on large samples and quantitative experimental approaches would be worthwhile to test this hypothesis. The mapping of the bone arrangements of the forefin onto the phylogeny shows that evolutionary changes occurred stepwise along the Mesozoic.
Figure 2. Principal component analysis (PCA) scatter diagram showing morphospace occupation defined by the first two principal components, explaining 77.288% of the variation. Red dashed lines represent the convex hull of the morphospace occupied by the three ichthyosaurs previously analyzed [15]. See Table S2 for details on the network properties of the analyzed taxa.
Figure 4. Fin evolution of Ichthyosauria. Changes in the connectivity pattern through phylogeny. On the cladogram, in gray, non-Ichthyosauria ichthyosauromorphs are added for comparison; in green, 1-3 mark major evolutionary events related to swimming. Bottom: stepwise pattern of connectivity changes; each network property is illustrated separately to aid visualization. Abbreviations of network properties as in Table 1.
Reshaped three-body interactions and the observation of an Efimov state in the continuum
Efimov trimers are exotic three-body quantum states that emerge from the different types of three-body continua in the vicinity of two-atom Feshbach resonances. In particular, as the strength of the interaction is decreased to a critical point, an Efimov state merges into the atom-dimer threshold and eventually dissociates into an unbound atom-dimer pair. Here we explore the Efimov state in the vicinity of this critical point using coherent few-body spectroscopy in 7Li atoms near a narrow two-body Feshbach resonance. Contrary to expectation, we find that the 7Li Efimov trimer does not immediately dissociate when passing the threshold, and survives as a metastable state embedded in the atom-dimer continuum. We identify this behavior with a universal phenomenon related to the emergence of a repulsive interaction in the atom-dimer channel, which reshapes the three-body interactions in any system characterized by a narrow Feshbach resonance. Specifically, our results shed light on the nature of 7Li Efimov states and provide a path to understanding various puzzling phenomena associated with them.
The unique ability to fine-tune the interaction between ultracold atoms has led to the realization of a number of quantum phenomena [1], among which the Efimov effect has become a quantum workhorse that allows for the exploration of some of the deepest issues of universal few-body physics [2-5]. Near a magnetic-field-dependent Feshbach resonance, the strength of the interatomic interaction is characterized by the s-wave scattering length a, which can assume arbitrarily large values compared to the characteristic range of the interactions, i.e., the van der Waals length r_vdW = (mC₆/ℏ²)^(1/4)/2, where m is the atomic mass and C₆ is the dispersion coefficient. However, not all Feshbach resonances are the same. The intricate nature of the hyperfine interactions in alkali-metal atoms allows for different couplings between the open channel and the corresponding closed channel carrying the Feshbach state. As such, a resonance is said to be broad (narrow) in the case of strong (weak) coupling and is characterized by the dimensionless strength parameter s_res ≫ 1 (s_res ≪ 1) [1].
Regardless of the strength of the Feshbach resonance, the Efimov effect occurs at |a| → ∞ due to the formation of an induced long-range three-body interaction of the form −1/R², where R is the hyperradius, which provides the overall size of the system [5]. This interaction gives rise to a log-periodic series of bound Efimov states whose absolute position is determined by the short-range three-body physics (Fig. 1) [2-5]. In the case of a broad resonance, the three-body potential supporting Efimov states features a universal repulsive wall near R ≈ 2r_vdW, thus preventing the atoms from probing small hyperradii. In fact, this repulsive wall is the hallmark characterizing van der Waals (vdW) universality, according to which the ground Efimov state dissociates into the three-atom continuum at a_-^(0) ≈ −9.73 r_vdW [6,7]. This was observed across several different Feshbach resonances in 133Cs and 85Rb [8,9]. For narrow resonances, however, this result is expected to be modified, as yet another length scale emerges, namely r⋆ = 2ā/s_res ≈ 1.912 r_vdW/s_res. Since now r⋆ > r_vdW, three-body observables are expected to depend on r⋆ (or, equivalently, s_res) rather than on r_vdW alone [10-16]. Indeed, for intermediate resonances (s_res ≳ 1), deviations from Efimov-van der Waals universality were already confirmed in recent precision measurements and calculations [6,7,17,20-22]. For 7Li atoms, although the Feshbach resonances are narrower than those above, experimental observations of a_- are consistent with vdW universality, thus challenging our understanding of universality.
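The log-periodic structure referred to above is fixed by the universal exponent s₀; the short sketch below evaluates the resulting geometric scaling factors and, for orientation, the vdW-universal ground-state resonance position using r_vdW = 32 a₀ for 7Li, as quoted later in the text. Extending the ladder to excited resonances is the standard universal relation, not a result specific to this paper.

```python
import math

s0 = 1.00624                        # universal Efimov exponent (see text)
lam = math.exp(math.pi / s0)        # discrete scaling factor in length
print(f"length scaling e^(pi/s0)   = {lam:.1f}")      # ~22.7
print(f"energy scaling e^(2*pi/s0) = {lam**2:.0f}")   # ~515

r_vdw = 32.0                        # r_vdW for 7Li in units of a0 (text)
a0_minus = -9.73 * r_vdw            # vdW-universal ground-state resonance
for n in range(3):                  # universal geometric ladder
    print(f"a_-^({n}) ~ {a0_minus * lam**n:9.0f} a0")
```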
Here, we show that as the resonance becomes narrower, the three-body interaction is reshaped with respect to that of a broad resonance (in any atomic species). While the universal repulsive wall near R ≈ 2r_vdW disappears, the system also develops an additional potential barrier ranging from R ≈ 4r_vdW to a distance proportional to r⋆, leading to a double-well structure absent for broad resonances. Specifically, we experimentally explore the energy spectrum using coherent spectroscopy in the vicinity of the atom-dimer threshold for 7Li atoms polarized in the |F = 1, m_F = 0⟩ state, which features a Feshbach resonance at 894 G with s_res ≈ 0.493, and observe an Efimov state above the atom-dimer threshold. This provides strong evidence of the reshaping of the three-body interactions for narrow resonances and further elucidates some of the mechanisms behind other puzzling observations with 7Li atoms [8,23,24,26].
RESULTS AND DISCUSSION
The DITRIS Interferometer
In contrast to traditional cold-atom few-body experiments, which utilize inelastic losses to uncover Efimov features [27-30], we perform high-resolution coherent spectroscopy of the Efimov state on the a > 0 side of the Feshbach resonance. Following the proof-of-principle demonstration of Ref. [1], we generate a DImer-TRImer Superposition (DITRIS) state by rf association and let it evolve in time. The accumulated relative phase between its constituents is then measured in an interferometer-like sequence. The method works best in the region around a⋆^(1), the value of a at which the Efimov state merges with the atom-dimer threshold, where there is a clear separation of energy scales (Fig. 1). The energy difference between the trimer and dimer bound states must be smaller than their depth below the three-atom continuum on the one hand, but larger than the temperature of the latter on the other. In this energy regime, the straightforward measurement combining rf association and loss spectroscopy fails due to rf power broadening [30]. Our procedure thus goes beyond the existing methods. As a second condition, the rf pulse must be short enough in time that it cannot resolve whether the associated state is a dimer (while one atom remains free) or a trimer. This effectively creates a superposition of the two chemically different bound states.
The double pulse sequence is illustrated in Fig. 2. The first pulse generates DITRIS states from a fraction of a gas of free atoms. Then, following the accumulation of a relative phase according to their binding energies, the second pulse attempts to dissociate them. The dimer and trimer pathways interfere, and one observes oscillations in the number of free atoms as a function of the free evolution time. The frequency of the oscillations is proportional to |E_T − E_D|. The DITRIS method is thus a measurement of the trimer binding energy with respect to the atom-dimer continuum. The two requirements (separation of energies and short, Fourier-broadened pulses) set the lower and upper limits of detectable |E_T − E_D|: it lies between the temperature of the free-atom continuum and the pulse bandwidth, respectively (see Methods). In this regime, the conversion efficiency is limited by the pulse duration, i.e., it is not saturated by the phase-space density argument [32], and therefore remains low, such that |C_A| ≫ |C_D|, |C_T| (see Fig. 2). As a result, the oscillations appear as a small signal on top of a large background. However, making the pulses longer would decrease the upper detection limit and is therefore not favourable. To faithfully extract the main frequency contribution we use a Fourier-transform-inspired three-parameter fit (for details see Supplementary Notes 2 and 3 as well as Ref. [1]). As is typical for frequency measurements, the accuracy increases for longer measurements.
The free evolution time is thus varied over a wide range of values (up to ∼100 µs), limited only by the coherence time of the superposition state (see Methods).
Trimer spectroscopy
Having established a reliable tool for measuring |E_T − E_D|, we apply the double pulse sequence for various values of the magnetic field (scattering length) with the goal of finding the point at which E_T → E_D. In Fig. 3(a), measurements from the DITRIS interferometer (filled circles) are presented together with the data from the previous incoherent rf association spectroscopy (open circles) [30]. At large scattering lengths, the Efimov state is relatively deeply bound, (E_T − E_D)/h ≲ −100 kHz, and our new measurements agree with those obtained from incoherent spectroscopy [30]. However, as the scattering length decreases and the Efimov state becomes more weakly bound, instead of the expected gradual approach towards the atom-dimer continuum [2-5], a sharp turn in the energy is observed. Subsequently, the experimental signal disappears for energies below the lower detection limit [see arrows in the shaded region in Fig. 3(a)]. The latter is set by the temperature via the loss of coherence amplitude [33] (see also Supplementary Note 2). Most surprisingly, however, meaningful frequencies reemerge when the scattering length is further decreased [gray circles in Fig. 3(a)]. The Efimov-state binding energy quickly moves away from the threshold again and becomes undetectable above the upper frequency detection limit set by the pulse bandwidth [see the upper gray dashed line in Fig. 3(a)], leading to measurements with no dominant frequency contribution (gray arrow) (see Supplemental Material). Figure 3(a) also shows our theoretical results for the energies of the 7Li Efimov state (squares). These results, along with the physical interpretation of the phenomena controlling the observations, are discussed later in the text.
Experimentally, we are only sensitive to the absolute value of the energy difference, which leads to two equally plausible scenarios: the trimer either crosses into or bounces off the atom-dimer continuum. Although the latter scenario has been indicated in the literature to occur for broad resonances [34,35], our numerical simulations for 7Li instead show that the Efimov trimer crosses the atom-dimer continuum threshold due to a reshaping of the three-body interaction potential associated with the narrow character of its Feshbach resonance (see discussion below).
We emphasize that the trimer remains a metastable state well inside the atom-dimer continuum. This is demonstrated in Fig. 3(b),(c), where signals from two double pulse sequences of the DITRIS interferometer are compared: in Fig. 3(b) we show a signal obtained below the threshold and in Fig. 3(c) one from above it. Both signals are similar, and no observable decay is detected within the first 100 µs, covering up to 10 full oscillations. Although a thorough investigation of the trimer lifetime with the DITRIS interferometer is beyond the scope of this work, it is clear from these signals that the coherence time exceeds the expected lifetime of the Efimov trimer. (Our numerical simulations estimate the lifetime of the trimer state within the experimental range to be around 10-20 µs.) Interestingly, a recent theoretical study (performed for broad resonances) has provided a possible interpretation of such unusually large coherence times [33], with coherence still being observed for times as long as twice the lifetime of the Efimov state. Although this result does not fully explain the experimentally observed coherence times, our analysis below demonstrates fundamental differences between the three-body physics for broad and narrow resonances such as in 7Li, which can potentially lead to substantial modifications of the coherence times. Finally, we argue the implausibility of attributing the nonzero signal from the DITRIS interferometer above threshold to any molecular state other than the Efimov state. Although one cannot completely rule out that a non-universal (non-Efimovian) trimer state exists by accident in the same energy region in which our observations are performed, this coincidence is very unlikely. In particular, the region in phase space (above the threshold) that we explore experimentally is extremely narrow, covering only a few a₀ in scattering length and only a few tens of kHz in energy. Moreover, for DITRIS interferometry to provide a detectable signal, it is necessary for all states involved in the problem to be extraordinarily large. Near the region where we observe the crossing, the dimer state itself should be ∼160 a₀, and the trimer should be comparable to or even greater than that. On the other hand, a non-Efimovian accidental state could only originate from short-range physics and would be ≲ r_vdW = 32 a₀ for 7Li. For such small states, the coupling between them and the initial atomic state (with a size comparable to the average interatomic distance, i.e., ∼10⁴ a₀ in our case) would be extraordinarily small due to the poor Franck-Condon factor, and the DITRIS interferometer would be inefficient. As a result, since we know that the only weakly bound dimer state is the Feshbach dimer, it is reasonable to accept that the only trimer that can overlap well with both the dimer and the initial atomic state is an Efimov state.
Theory and numerical simulations
In order to better understand the nature of the 7Li Efimov trimer near the atom-dimer threshold, we performed numerical calculations using the adiabatic hyperspherical representation (see Methods). In the following, we first present a two-channel interaction model with variable s_res. This model gives insight into the crucial difference between broad and narrow resonances in the context of three-body Efimov interactions. Building upon the physical picture that emerges from the two-channel model, we then develop a multichannel theory using realistic 7Li two-body potentials. This latter model qualitatively reproduces the trimer's crossing of the atom-dimer threshold, thus verifying the experimental observations. We note that, while necessary approximations in our theoretical model hinder quantitative agreement with the experiment, our findings clearly identify the physical mechanism controlling the experimental observations.
Three-body interactions near narrow resonances
The two-channel model we use for the interatomic interaction contains the proper van der Waals physics and a set of parameters chosen to produce a Feshbach resonance with the 7Li background scattering length, a_bg ≈ −25 a₀ [4], but variable values of s_res (see Supplementary Note 4). In the adiabatic hyperspherical representation, a great deal of physical insight can be obtained from the hyperspherical effective potentials W_ν(R), which are solutions of the adiabatic Hamiltonian at fixed values of the hyperradius R. In Fig. 4 (see also Supplementary Fig. 5) we show the effective potentials relevant for Efimov physics at a = ±∞ and various values of s_res between 0.41 and 246, thus covering the broad, intermediate, and narrow resonance regimes. Asymptotically, all potentials approach the universal form −(s₀² + 1/4)ℏ²/2μR², with s₀ ≈ 1.00624, which supports infinitely many Efimov states. However, at shorter distances the potentials are drastically reshaped as the resonance strength enters the narrow-resonance regime. (We note that similar results have been found in a recent publication [16].) For our broadest resonance (s_res ≈ 246), representing atomic species like 85Rb and 133Cs, the effective potential displays the expected universal repulsive wall near R ≈ 2r_vdW [thick black curve in Fig. 4(a)], which prevents atoms from probing the small-R region and represents the hallmark of vdW universality [6,7]. As s_res is tuned towards the intermediate (s_res ≈ 2.56, similar to 39K) and narrow (s_res ≈ 0.41, similar to 7Li) resonance regimes, the effective potentials are reshaped and the universal repulsive wall eventually disappears. Remarkably, the three-body potentials develop a repulsive barrier for R ≳ 4r_vdW [Fig. 4(b)] which extends up to R ≈ 3r⋆ [Fig. 4(c)] as a result of the strong mixing between the open and closed hyperspherical channels (see Supplementary Note 4). Therefore, in the s_res < 1 regime, the effective potentials display a double-well structure, where interactions within the inner well (R ≲ 4r_vdW) are dominated by vdW interactions while interactions in the outer well (R ≳ 3r⋆) are dominated by Efimov physics. For the s_res ≈ 0.41 case, the closest to 7Li (s_res ≈ 0.493), the potential barrier height is found to be about 10 MHz (0.02 E_vdW) at a = ±∞, i.e., much larger than the range of binding energies found experimentally. Importantly, and relevant to our present experiment, this barrier also persists for finite values of a > 0 (see Supplementary Note 4).
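The geometry of the reshaped potentials can be summarized numerically from the relation r⋆ ≈ 1.912 r_vdW/s_res quoted above. The sketch below simply tabulates r⋆ and the approximate outer edge of the barrier (≈ 3r⋆) for the s_res values discussed in the text; recall that the barrier itself only develops in the intermediate-to-narrow regime, so the first row is geometry only.

```python
# Tabulate r* = 1.912 r_vdW / s_res and the approximate radial extent of
# the repulsive barrier (~4 r_vdW out to ~3 r*); lengths in units r_vdW.
for s_res in (246.0, 2.56, 0.493, 0.41):
    r_star = 1.912 / s_res
    print(f"s_res = {s_res:7.3f}: r* = {r_star:7.3f} r_vdW, "
          f"barrier ~ 4 r_vdW to ~ {3.0 * r_star:6.2f} r_vdW")
```

For s_res ≈ 0.41 this gives an outer barrier edge near 14 r_vdW, consistent with the double-well picture described above.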
Multichannel calculations for 7Li
To provide a more quantitative analysis of the effect of the repulsive barrier in the parameter regime of the experiment, we performed additional numerical calculations that characterize the energy of the 7Li Efimov state using a more realistic interaction model based on the methodology developed in Refs. [6,7]. We note that modifications to this model were made (see Supplementary Note 4) in order to compensate for strong (short-ranged) electronic exchange interactions [9,10]. Yet, our model displays the correct physics at distances R ≳ r_vdW, thus preserving the major features relevant for the central physical question we explore here, i.e., whether the repulsive barrier allows for Efimov states to exist above the atom-dimer threshold.
While our results for (E_T − E_D)/h < 0 in Fig. 3(a) were obtained using a methodology that provides a direct characterization of the energy of the Efimov state [39,40], for (E_T − E_D)/h > 0 the analysis of the near-resonant energy regime in the atom-dimer continuum is much more subtle. However, a convenient way to characterize the existence of the 7Li Efimov state above the atom-dimer threshold is to compare the energy dependence of the 7Li atom-dimer elastic cross-section, σ_AD, with that of a system without the barrier, i.e., a system controlled by a broad (s_res = ∞) Feshbach resonance, σ^∞_AD [35]. It is crucial, however, that this comparison is performed when the two physical systems have the same value of the atom-dimer scattering length, a_AD = a^∞_AD, such that both cross-sections converge to the same value, 4π|a_AD|², as the collision energy vanishes. In this case, if a_AD < 0 and an Efimov state exists above the threshold, the cross-section difference should display an enhancement whenever the collision energy coincides with that of the 7Li Efimov state. In practice, the above procedure is most meaningful in the case of weak inelastic transitions, which has led us to suppress the short-range decay mechanisms in our model. Also, since in our case the values of a_AD we obtained are only approximately the same (differing by no more than 2%), we define the weighted cross-section difference, Δσ^w_AD [Eq. (1)], which ensures that the cross-section difference vanishes as the collision energy E → 0. In this expression, S_AD is the diagonal S-matrix element associated with the atom-dimer channel and k²_AD = (2m/3)E/ℏ², with m being the atomic mass. The results for Δσ^w_AD for various a_AD in Fig. 5 clearly show the expected enhanced scattering of 7Li with respect to the broad-resonance case, thus demonstrating the existence of a 7Li Efimov state above the threshold as a direct consequence of the repulsive barrier. The energy of the Efimov state is associated with the maximum value of Δσ^w_AD, occurring at smaller values of E as |a_AD| increases, i.e., when the state approaches the atom-dimer threshold from above, and is displayed in Fig. 3(a). We note that simplified, asymptotic models have failed to explain our experimental observations, indicating the importance of van der Waals interactions in order to properly describe the reshaping of the three-body interactions [41,42]. Note also that, as shown in Fig. 3(a), when (E_T − E_D)/h < 0 the theory predicts deeper energies close to the Feshbach resonance, which dive faster towards the atom-dimer continuum. We attribute such discrepancies to the simplifications adopted for the otherwise nearly intractable, truly multichannel interactions in lithium. Even the most advanced attempts to model these interactions [10] have not reached fully converged results, leading to a significant discrepancy between theory and experiment [8,24] for the spin state considered here. Most importantly, however, both our experimental data and theoretical simulations refute the conventional expectation that an Efimov state simply merges with the atom-dimer continuum in the case of narrow resonances.
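Since Eq. (1) itself is not reproduced in this extracted text, the toy sketch below assumes the natural weighting Δσ^w_AD = σ_AD − (a_AD/a^∞_AD)² σ^∞_AD, which vanishes as E → 0 as described. It models the barrier channel with a single s-wave shape resonance obeying the threshold law Γ ∝ k; this illustrates the diagnostic only, and is not the multichannel calculation used in the paper (all quantities are in arbitrary toy units).

```python
import numpy as np

m = 1.0                                    # toy units, hbar = 1
E = np.linspace(1e-6, 2.0, 4000)
k = np.sqrt(2.0 * (2.0 * m / 3.0) * E)     # atom-dimer relative wavenumber

def sigma(delta, k):
    """s-wave elastic cross-section, (4*pi/k^2) sin^2(delta)."""
    return (4.0 * np.pi / k**2) * np.sin(delta)**2

# Barrier channel: background phase plus a shape resonance whose width
# obeys the threshold law Gamma = g*k, mimicking a metastable trimer.
a_bg, E_r, g = -0.7, 0.5, 0.3
delta_res = -np.arctan(k * a_bg) + np.arctan2(g * k / 2.0, E_r - E)

# Barrier-free reference tuned to the same threshold scattering length,
# a_AD = a_bg - g/(2*E_r) = -1.0 here.
a_AD = a_bg - g / (2.0 * E_r)
a_inf = -1.0
delta_inf = -np.arctan(k * a_inf)

dsig_w = sigma(delta_res, k) - (a_AD / a_inf) ** 2 * sigma(delta_inf, k)
print(f"weighted difference at E -> 0 : {dsig_w[0]:+.3e}")   # ~ 0
# The enhancement peaks in the vicinity of the resonance energy E_r:
print(f"peak of weighted difference   : E ~ {E[np.argmax(dsig_w)]:.2f}")
```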
In summary, our experimental and theoretical observations of the existence of an Efimov state above the atom-dimer continuum provide strong evidence of a fundamental reshaping of the three-body interactions for narrow resonances. Although our theoretical analysis allows us to point out that this phenomenon is universally valid for narrow Feshbach resonances, much is still needed to fully characterize the 7Li Efimov states, in particular with respect to their lifetime. The coherence times observed with the DITRIS interferometer are clearly much longer than the estimations of the trimer lifetime obtained from our numerical simulations without coherence. Since we show here that 7Li atom-dimer collisions are enhanced, this raises the intriguing question of whether the character of the DITRIS superposition state, along with the form of the three-body interaction, can conspire to form more long-lived superposition states such that coherence can still be observed at long times [33]. Although some open questions remain, our current observations provide evidence that Efimov physics at a narrow Feshbach resonance deviates from the expectations of vdW universality, where the Efimov state simply disappears at the atom-dimer threshold.
The successful application of the DITRIS interferometer to coherent spectroscopy of the Efimov energy level, together with a notable demonstration of coherent manipulation of 4He halo dimers by ultrashort laser pulses [43], reveals the great potential of the coherent approach to few-body physics phenomena. Future investigation of the superposition-state lifetime, and the extension of the unique capabilities of the DITRIS interferometer to other atomic species and mixtures, are expected to greatly advance our understanding of Efimov physics in ultracold atoms.
Experimental details
Standard laser cooling and evaporation techniques are used to produce a gas of 3 × 10⁴ bosonic lithium atoms at 1.5 µK and an average density of 1.25 × 10¹² cm⁻³ in a crossed optical dipole trap. The temperature corresponds to 30 kHz and sets our lower detection limit. To image the atoms we use absorption imaging, which is sensitive to free atoms only.
At the core of the experiment lies the 10 µs pulse, which is Fourier broadened to address both the dimer and the trimer simultaneously. The duration of 10 µs refers to the full width at half maximum (FWHM). There is also a (measured) turn-on/turn-off time of τ₀ = 4 µs, which means that the pulse is at its maximal value during τ_c = 6 µs. The experimental rf pulse envelope χ(t) is modelled as in Eq. (2) and plotted in Fig. 6. The Fourier transform of χ(t) is also shown in Fig. 6 in the low-frequency domain. It closely resembles a sinc, the transform of an ideal rectangular pulse, but is slightly broadened. Its FWHM is 117 kHz, which is the value we use as our upper detection limit.
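Because the functional form of Eq. (2) is not reproduced in this text, the sketch below uses a hypothetical trapezoidal stand-in for χ(t) with the quoted 4 µs ramps and 6 µs flat top (FWHM = 10 µs) and estimates the FWHM of its Fourier spectrum numerically; the result is comparable to, though not necessarily identical with, the quoted 117 kHz.

```python
import numpy as np

dt = 1e-8                                   # 10 ns time grid
t = np.arange(0.0, 30e-6, dt)
tau0, tauc, t0 = 4e-6, 6e-6, 5e-6           # ramp, flat-top, start time

# Hypothetical trapezoidal stand-in for chi(t): 4 us linear ramps around
# a 6 us flat top, giving a full width at half maximum of 10 us.
chi = np.interp(t, [t0, t0 + tau0, t0 + tau0 + tauc, t0 + 2 * tau0 + tauc],
                [0.0, 1.0, 1.0, 0.0], left=0.0, right=0.0)

nfft = 1 << 18                              # zero-pad for fine resolution
spec = np.abs(np.fft.rfft(chi, n=nfft))
freq = np.fft.rfftfreq(nfft, dt)
idx = np.argmax(spec < spec[0] / 2.0)       # first bin below half maximum
# Two-sided spectral width, comparable to the quoted 117 kHz:
print(f"spectral FWHM ~ {2.0 * freq[idx] / 1e3:.0f} kHz")
```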
The method was refined with respect to the proof-of-principle demonstration in Ref. [1], mainly by reducing the atom-number fluctuations. This was achieved by stabilizing the magnetic field to a relative stability of 5 × 10⁻⁵ and by improved statistics (Supplementary Note 2).
Scattering length calibration
In Fig. 3(a), the values of (E_T − E_D)/h are shown as a function of the inverse scattering length. In the experiment we vary the magnetic-field bias to tune the scattering length. The calibration is performed via the dimer binding energy, which is measured frequently during the DITRIS interferometer data acquisition. The measurement protocol can be found elsewhere [44].

FIG. 6. Pulse shape and spectrum. The pulse shape of Eq. (2) is compared to a pure square pulse with the same FWHM; the FFT of the former is slightly broader.

Finally, the dimer binding energy is related to the scattering length via coupled-channel calculations [5]. Given the high accuracy of the dimer-binding-energy measurement and of the coupled-channel calculations, the scattering-length uncertainty is below a₀ in the region explored in the experiment.
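For orientation, the universal-limit relation E_D = ℏ²/(ma²) gives the order of magnitude of the dimer binding energies entering this calibration; the sketch below is only a rough stand-in for the coupled-channel calculation actually used, evaluated at scattering lengths quoted in Fig. 3.

```python
import math

hbar = 1.054571817e-34      # J s
a0   = 5.29177210903e-11    # Bohr radius, m
u    = 1.66053906660e-27    # atomic mass unit, kg
m7   = 7.0160034 * u        # 7Li atomic mass

def dimer_binding_kHz(a_in_a0: float) -> float:
    """Universal-limit dimer binding energy E_D = hbar^2/(m a^2), as
    E_D/h in kHz; a crude stand-in for coupled-channel results."""
    a = a_in_a0 * a0
    return hbar**2 / (m7 * a**2) / (2.0 * math.pi * hbar) / 1e3

for a in (156.0, 177.0, 265.0):
    print(f"a = {a:5.0f} a0  ->  E_D/h ~ {dimer_binding_kHz(a):8.0f} kHz")
```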
DITRIS coherence time
The longest feasible free evolution time is set by the decoherence of the superposition state [1]. The possibly relevant parameters are the elastic collision rate and the trimer's intrinsic lifetime. The low signal-to-noise ratio does not permit a precise measurement of the decoherence time, but empirically we do not observe signs of decay for < 150 µs. In practice, the three-parameter fit allows neglecting the decay in the data analysis by keeping the range of the evolution time < 150 µs.
Theory
The adiabatic hyperspherical representation provides a simple and conceptually clear description of the three-body system in terms of the hyperradius R, characterizing the overall size of the system, and the set of hyperangles, Ω [3]. Bound and scattering properties [47] of the system are determined from solutions of the hyperradial Schrödinger equation,

[−(ℏ²/2μ) d²/dR² + W_ν(R) − E] F_ν(R) + Σ_{ν′≠ν} W_{νν′}(R) F_{ν′}(R) = 0,

where μ is the three-body reduced mass, E is the total energy, W_ν are the hyperspherical effective potentials governing the radial motion, and W_{νν′} are the nonadiabatic couplings driving transitions between different channels, characterized by the collective index ν. The hyperspherical effective potentials are defined in terms of U_ν(R), the hyperspherical potentials, and Φ_ν(R; Ω), the channel functions, which are solutions of the hyperangular adiabatic equation obtained at fixed values of R. The adiabatic Hamiltonian contains the hyperangular kinetic energy as well as all the atomic and interatomic interactions in the system. We explicitly define the terms of the adiabatic Hamiltonian used in our studies in Supplementary Note 4.
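As an illustration of this machinery, the minimal single-channel sketch below discretizes the hyperradial equation with the universal long-range potential −(s₀² + 1/4)ℏ²/2μR², a hard wall at R₀ standing in for the short-range three-body physics, and all couplings W_{νν′} dropped. This is a textbook-style toy, not the multichannel calculation used in this work; its bound states should nevertheless display the geometric Efimov spacing e^{2π/s₀} ≈ 515 in energy.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Single-channel hyperradial equation in units hbar = 2*mu = 1:
#   -u''(R) - (s0^2 + 1/4)/R^2 u(R) = E u(R),
# with a hard wall at R0 (toy stand-in for short-range physics) and a
# large box at Rmax. All nonadiabatic couplings are dropped.
s0, R0, Rmax, n = 1.00624, 1.0, 2000.0, 20000
R = np.linspace(R0, Rmax, n + 2)[1:-1]            # interior grid points
dR = R[1] - R[0]

diag = 2.0 / dR**2 - (s0**2 + 0.25) / R**2        # kinetic + potential
off = -np.ones(n - 1) / dR**2
E = eigh_tridiagonal(diag, off, eigvals_only=True,
                     select="v", select_range=(-1e6, -1e-12))

print("lowest bound-state energies:", E[:3])
# Successive |E_n| should be spaced by roughly e^(2*pi/s0) ~ 515:
if len(E) >= 2:
    print(f"|E0/E1| ~ {E[0] / E[1]:.0f}")
```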
DATA AVAILABILITY
Source data are provided with this paper. Two-dimensional raw atomic-cloud pictures from all experimental runs are available upon request to Y.Y. or L.K.
A. Detailed description of the three-parameter analysis

In order to illustrate our data analysis, it is instructive to apply it to a simulated data sequence. Consider a finite-length sinusoidal signal similar to the one shown in the left column of Supplementary Fig. 8(a), for which a 100 µs long pure sine with ω/2π = 87.5 kHz was generated. As in our experiment, a discrete "measurement" value is taken every 2 µs, corresponding to a sampling rate of 500 kHz. In order to determine the frequency we guess a pure oscillatory fitting function,

f(t) = A sin(ωt + φ).    (S6)

The three fitting parameters are the amplitude A, the frequency ω, and the phase φ. Since the frequency is not known a priori, we instruct the least-squares algorithm to start its search for a minimum in parameter space at (A, ω, φ) = (1, ω₀, 0), where ω₀ ∈ 2π × [20, 200] kHz. For each initial value of ω₀ the algorithm converges to some value of the three parameters (A, ω, φ) in the vicinity of the initial parameters (possibly a local minimum, not necessarily the global minimum), and we record the converged A(ω); see the right column of Supplementary Fig. 8(a). The value of ω at which A is maximal (we denote these values ω⋆ and A⋆) is the dominant frequency contribution and the global minimum in parameter space. As expected for this pure sine, ω⋆/2π = 87.5 kHz is obtained. The trustworthiness of the spectrum is quantified by a signal-to-noise ratio, SNR = A⋆/Ā, where Ā is the mean of all points excluding A⋆. For the pure sine, SNR = 11.2. Due to the finite length of the signal, Ā ≠ 0, and hence the SNR does not diverge as it would for an infinitely long noiseless sine. We now add white Gaussian noise (WGN) with a standard deviation of 0.5 (half the amplitude) to the pure sine and repeat the procedure in Supplementary Fig. 8(b). Despite the WGN, the 3PA is able to determine the dominant frequency contribution.

[…] a manageable number (about 100 instead of 1000s) for our three-body calculations. By adjusting the values of λ to produce the correct values of the singlet and triplet scattering lengths [5], this model accurately describes the relevant Feshbach resonances for atoms in the |F = 1, m_F = 0⟩ hyperfine state. In Supplementary Fig. 12 we show the hyperspherical effective potentials for 7Li, demonstrating the existence of the repulsive barrier both for a = ±∞ (dashed lines) and for a ≈ 5.7 r_vdW (solid lines).
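A compact re-implementation of this three-parameter analysis on synthetic data is sketched below; scipy's curve_fit plays the role of the least-squares algorithm, and the SNR is taken as A⋆/Ā, which follows the verbal description above but is an assumption insofar as the original formula is not reproduced in this text.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.arange(0.0, 100e-6, 2e-6)          # 2 us sampling over 100 us
y = np.sin(2 * np.pi * 87.5e3 * t) + rng.normal(0.0, 0.5, t.size)

def model(t, A, w, phi):
    return A * np.sin(w * t + phi)

# Scan the initial frequency guess over 20-200 kHz; each start converges
# to a nearby (possibly local) minimum, whose amplitude A(w) we record.
fits = []
for w0 in 2 * np.pi * np.linspace(20e3, 200e3, 181):
    try:
        p, _ = curve_fit(model, t, y, p0=[1.0, w0, 0.0])
        fits.append((abs(p[0]), abs(p[1])))
    except RuntimeError:
        continue                           # skip non-converged starts

A, w = np.array(fits).T
i = np.argmax(A)                           # dominant frequency A*, w*
snr = A[i] / np.mean(np.delete(A, i))      # assumed form: A* / mean(rest)
print(f"dominant frequency: {w[i] / 2 / np.pi / 1e3:.1f} kHz, SNR = {snr:.1f}")
```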
The three-body spin function used in our calculations follows the spectator-atom approximation, where two atoms are allowed to interact via spin states satisfying m_F1 + m_F2 = 0 while the third atom remains in the |F₃ = 1, m_F3 = 0⟩ state. Although this approximation has been shown to be sufficient to describe the experimental results for 39K [6,7], this is not the case for 7Li when it comes to reproducing the position of the Efimov resonance in recombination experiments [8]. This result is most likely due to the presence of strong electronic exchange for 7Li atoms [10], which would require a larger spin basis to describe the 7Li interactions. Here, in order to set our model to produce results compatible with these observations, we introduce a fictitious three-body interaction, for which we set λ = 5 and β = 0.2 r_vdW and tune A_ex to fit the position of the a < 0 Efimov resonance of Ref. [8]. While this approach leads to an atom-dimer Efimov resonance for a > 0 with energies comparable to those observed here for 7Li, the calculated lifetimes are on the order of tens of µs. For our simulations shown in Fig. 5 of the main text, we have turned off the non-adiabatic coupling to deeply bound molecular states in order to set a lifetime comparable
FIG. 1. Efimov spectrum and energy scales. Schematic illustration of the Efimov scenario (universal theory) in the vicinity of a Feshbach resonance. The horizontal axis is the inverse scattering length a⁻¹, and the dashed vertical line corresponds to the position of the Feshbach resonance (|a| = ∞). The vertical axis indicates the wavenumber corresponding to the three-atom continuum (grey region) and the discrete spectrum of Efimov trimers (solid curved lines). The straight solid line originating at the Feshbach resonance position corresponds to the universal dimer state. The extreme points of the trimers' spectrum are labelled to indicate Efimov resonances, and the first excited Efimov state is highlighted. The energy scales relevant to this work are indicated and are specific to the 894 G resonance in 7Li.
FIG. 2. Experimental double pulse sequence. (i) Initial state of the three-atom continuum. (ii) A first rf pulse transfers a fraction of the initial state to dimer and trimer bound states, creating a superposition state. (iii) As the wave function evolves, each constituent gains a phase proportional to its binding energy. (iv) A second rf pulse mixes the states. For simplicity, only the free-atom part is depicted. (v) Using absorption imaging, the number of free atoms is measured. (vi) An example of a measured signal as a function of free evolution time and its three-parameter fit. The signal has a large background term (|C_A|²), fast oscillating terms (not shown), and a term that oscillates at ω = |E_T − E_D|/h. The latter is extracted via a three-parameter fit, where A indicates the amplitude.
FIG. 3. Trimer energy from experiment and theory. (a) The values of (E_T − E_D)/h obtained from the double pulse sequence (filled circles) are shown together with data from rf association followed by loss (open circles) [30] as a function of the inverse scattering length multiplied by 1000. For the former, the error bars (1σ fitting error) are smaller than the point size, and so are all scattering-length error bars (see Methods). The horizontal shaded region and dashed lines show the respective lower and upper detection limits of the DITRIS interferometer. The numerical results (filled squares) for (E_T − E_D)/h < 0 were obtained from the methodology used in Refs. [39,40], while the results for (E_T − E_D)/h > 0 are those extracted from Fig. 5. Being treated differently, the two regions are connected by a dashed line. (b),(c) Left panels: examples of the experimental signal (number of atoms as a function of the time between pulses). Each point is the average of 10-20 measurements, and the error bars show the standard deviation. Right panels: results of the three-parameter fit applied to the corresponding signals, which clearly indicate the presence of dominant frequencies. The horizontal (smaller than point size) and vertical error bars are 1σ fitting errors. (For further details see Supplementary Note 2 and Ref. [1].) (b) Data below the threshold, at a = 265 a0; (c) data above the threshold, at a = 156 a0. The experimental signal and three-parameter fit for the remaining points are shown in Supplementary Fig. 3. Note that the experimental data place the crossing of the threshold somewhere between 177 a0 and 160 a0.
FIG. 4. Reshape of three-body interactions for narrow resonances. (a) Effective potentials, W_ν(R), for the relevant channel supporting an infinity of Efimov states, for different values of s_res, in units of E_vdW = ℏ²/mr²_vdW. As s_res evolves from the regime of broad (s_res ≫ 1) to narrow (s_res ≪ 1) resonances, a repulsive interaction emerges for R ≳ 4r_vdW and extends up to R ≈ 3r⋆, where r⋆ ≈ 1.912 r_vdW/s_res. The double-well structure of the three-body interaction for narrow resonances allows trimer states to exist above the atom-dimer continuum for finite values of a > 0 as shape resonances. (b) 100-fold zoom of the dot-dashed box in (a). Likewise, (c) shows a 10-fold zoom of the dot-dashed box in (b). For more values of s_res see Supplementary Fig. 5.
FIG. 5. Evidence of resonant scattering above the atom-dimer threshold. Weighted elastic cross-section difference [Eq. (1)] between 7Li atom-dimer collisions and those of a system controlled by a broad Feshbach resonance, for approximately the same value of |a_AD| and Re[a_AD] < 0. The existence of the repulsive barrier in the entrance atom-dimer channel for 7Li leads to an enhancement of elastic collisions just above the threshold compared to the case without the barrier, indicating the presence of the Efimov state above the atom-dimer threshold, with its energy indicated by the closed circles. Note that at higher energies Δσ^w_AD < 0, most likely because |a_AD| and |a^∞_AD| are only approximately the same, but also due to other multichannel effects modifying the scattering at such energies.
Supplementary Fig. 8. Three-parameter fit. The 3PA is applied to four signals to illustrate its working principle. The open circle shows the frequency and amplitude of the pure sine. (a) Pure sine. (b) Noisy sine. (c) Sine with decay. (d) Noisy sine with decay.
Supplementary Fig. 9. Measured number of atoms and spectral analysis. The left column shows the number-of-atoms signal in the double pulse sequence. Each point is the average of 10-20 measurements, and the error bars show the standard deviation. The remaining columns are, from left to right, the 3PA, 2PA, and FFT of the signals. For the 3PA and 2PA, the error bars and shaded region, respectively, show the 1σ fitting errors. From top to bottom, the signals were recorded for scattering lengths of a/a0 = 283, 265, 248, 197, 185, 176, 164, 160, 157, 156, and 151.

Supplementary Fig. 11. Reshape of three-body interactions for narrow resonances. (a) Effective potentials, W(R), for the relevant channel supporting an infinity of Efimov states, for different values of s_res, in units of E_vdW = ℏ²/mr²_vdW. (b),(c) Enhanced views of W(R) illustrating their properties for different values of s_res: as s_res evolves from the regime of broad (s_res ≫ 1) to narrow (s_res ≪ 1) resonances, a repulsive interaction emerges for R ≳ 4r_vdW, extending up to R ≈ 3r⋆, where r⋆ ≈ 1.912 r_vdW/s_res. The double-well structure of the three-body interaction for narrow resonances allows trimer states to exist above the atom-dimer continuum for finite values of a > 0 as shape resonances.
Life-Cycle Assessment in the LEED-CI v4 Categories of Location and Transportation (LT) and Energy and Atmosphere (EA) in California: A Case Study of Two Strategies for LEED Projects
This study aimed to identify different certification strategies for Leadership in Energy and Environmental Design Commercial Interior version 4 (LEED-CI v4) gold-certified office projects in California's cities and to explore these certification strategies using life-cycle assessments (LCAs). The LEED-CI v4 data were divided into two groups: high- and low-achievement groups in the Location and Transportation (LT) category. The author identified two strategies for achieving the same level of certification across LEED-CI v4 projects: (1) high achievement in LT (LT High) and low achievement in the Energy and Atmosphere (EA) category (EA Low), and (2) low achievement in LT (LT Low) and high achievement in EA (EA High). The author adopted the LT High-EA Low and LT Low-EA High achievements as functional units for the LCA. The three alternatives were LT High: typical bus, EA Low: gas; LT Low: typical car, EA High: gas; and LT Low: eco-friendly car, EA High: gas, where the typical bus used diesel, the typical car used natural gas, the eco-friendly car used EURO 5 diesel, and natural gas was used as the building's operational energy. The ReCiPe2016 results showed that the LT High: typical bus, EA Low: gas strategy was preferable from a short-term perspective, and the LT Low: eco-friendly car, EA High: gas strategy was preferable from long-term and infinite-time perspectives, while the LT Low: typical car, EA High: gas strategy remained the most environmentally damaging certification strategy for all time horizons of the existing pollutants. Thus, it can be concluded that, if there are alternative strategies for LEED certification, an analysis of their LCAs can be useful to refine the most sustainable strategy.
Problem Statement
Leadership in Energy and Environmental Design (LEED) is one of the most popular US-based building rating systems and is also widely used internationally as a sustainability tool. LEED contains the credits' requirements, organized in eight environmental categories that deal with transport, sites, water, energy, materials, the indoor environment, innovation, and regional issues. The credits have different weightings, reflecting their environmental importance. Such a weighting set is designed by a stakeholder group of environmental specialists and building practitioners (the "stakeholder approach"). The group decides on a country-specific list of the environmental categories and the total number of points awarded. Then, the points are divided among the categories according to their importance. Eventually, the total number of category points is redistributed among the credits of each category, according to the category/credit importance decided by the stakeholder group.
Over the decades, LEED has been criticized for its subjective approach to dividing the awarded points among the categories and credits [1] and for delinking LEED performance from life-cycle assessment (LCA) outcomes [2]. LCA is a methodology, standardized in ISO 14040 [3], for evaluating the environmental impacts and damage resulting from the whole life cycle of a project or service. Therefore, it is important to use LCA in deciding the importance of LEED credits.
With respect to linking LEED performance to LCA outcomes, LEED v4 for Building Design and Construction (BD + C) has already included a building life-cycle impact reduction credit in the materials and resources (MR) category [4]. This is a good starting point. However, as seen in the literature, linking the LEED system to LCA outcomes has not been completed yet. This study aimed to continue exploring this problem by linking LEED certification strategies to LCA outcomes. To identify the particular gaps in this research topic, LEED certification studies and studies linking LEED certification to LCA outcomes are discussed in Sections 1.2 and 1.3, respectively, and the goals of this study are presented in Section 1.4.
LEED Certification
LEED-certified projects have different certification strategies depending on the country, certification level, project type, and project size, and, therefore, much research has been published on this matter. For example, Wu et al. [5] collected a total of 3416 LEED for New Construction (LEED-NC) v3 2009 projects from the USA (2770 projects), China (126 projects), Turkey (53 projects), Brazil (40 projects), Chile (34 projects), and Germany (30 projects). The authors pooled all the projects together in one set and sorted them into four certification levels: 655 projects at the certified level, 1310 at the silver level, 1201 at the gold level, and 244 at the platinum level. Wu et al. [5] (p. 375) used the mean ± standard deviation (SD) and the coefficient of variation (CV), where CV = SD/mean. For the energy and atmosphere (EA) category (35 points), the CV was 0.53 at the certified level, 0.44 at the silver level, 0.36 at the gold level, and 0.17 at the platinum level; that is, the CV decreased monotonically from the certified level through the silver and gold levels to the platinum level. Consequently, a decreasing degree of variation in the points achieved in the EA category can be associated with a decreasing degree of variation in the other LEED categories: sustainable sites, SS (26 points); indoor environmental quality, EQ (15 points); materials and resources, MR (14 points); and water efficiency, WE (10 points), and vice versa. The implementation of these two possible tendencies can be converted into different LEED strategies for the achievement of LEED certification levels. According to the data collected by Wu et al. [5], the revealed dependence reflects LEED projects from the US rather than from other countries. However, the US LEED data are not homogeneous, as green building policies such as ASHRAE 90.1 (the Energy Standard for Buildings Except Low-Rise Residential Buildings) are determined on a state-by-state basis [6].
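The descriptive statistics used in these comparisons are straightforward to reproduce; the sketch below computes the mean, SD, CV, median, and IQR/median ratio for hypothetical point tallies (placeholder values, not the data of Wu et al. [5]).

```python
import numpy as np

# Hypothetical EA-category point tallies at two certification levels
# (placeholder values, not the data of Wu et al. [5]).
ea_points = {
    "certified": np.array([8, 3, 12, 20, 5, 9, 15, 4]),
    "platinum":  np.array([30, 27, 33, 29, 31, 28, 32, 30]),
}

for level, pts in ea_points.items():
    cv = pts.std(ddof=1) / pts.mean()          # CV = SD / mean
    q75, q25 = np.percentile(pts, [75, 25])
    iqr_med = (q75 - q25) / np.median(pts)     # IQR / median ratio
    print(f"{level:9s}: mean={pts.mean():5.1f} SD={pts.std(ddof=1):4.1f} "
          f"CV={cv:.2f} median={np.median(pts):5.1f} IQR/median={iqr_med:.2f}")
```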
Pushkar and Verbitsky [7] analyzed LEED-NC v3 2009 gold projects certified in 2016 in several US states: California (CA, 58 projects), Illinois (IL, 19 projects), Florida (FL, 11 projects), Washington (WA, 11 projects), Ohio (OH, 8 projects), and Massachusetts (MA, 14 projects). In the EA category, the following two subgroups were revealed, reported as median ± interquartile range (IQR), with the IQR/median ratio in parentheses: (1) high values of the IQR/median ratio, comprising CA, 22.0 ± 13.0 (0.59); IL, 17.0 ± 14.5 (0.85); and FL, 15.0 ± 11.0 (0.73); and (2) low values of the IQR/median ratio, comprising WA, 16.0 ± 4.3 (0.27); OH, 13.5 ± 4.0 (0.30); and MA, 14.0 ± 5.0 (0.36). In this context, under the same gold certification, at least two facts are notable: (1) there were states with divergent LEED strategies (high values of the IQR/median ratio), and (2) there were states with a consistent LEED strategy (low values of the IQR/median ratio). Pushkar and Verbitsky [8] (p. 98) evaluated the IQR/median ratio for LEED-NC v3 2009 gold projects certified in California in 2012-2017. These authors showed that, in the EA category, the minimum IQR/median ratio was 0.31 in 2012 and the maximum was 0.81 in 2017. In this context, in 2012, LEED strategies had low variance, while in 2017, LEED strategies had high variance, even though the LEED-NC v3 2009 projects had the same gold certification.
Pushkar [9] studied the difference between Shanghai and California in terms of LEED for Commercial Interiors (LEED-CI) v4 gold-certified office-space projects. The author [9] (p. 34) showed that there was a significant difference in the IQR/median ratios in the two highest-scoring categories (location and transportation (LT), 18 points, and EA, 35 points). For Shanghai, the median (and the IQR/median ratio) for LT and EA were 17.0 (0.06) and 15.0 (0.27), respectively, while for California they were 15.0 (0.88) and 24.0 (0.55), respectively. In this context, it can be assumed that Shanghai's projects used LT-EA certification strategies with low variation, while California's projects used LT-EA certification strategies with high variation. It should be noted that, in both Shanghai and California, the same gold certification was analyzed.
Thus, the three articles listed above show that, for one level of certification, there can be different strategies used to achieve this.
Linking LEED Certification to LCA Outcomes
There have been some attempts in the literature to integrate LCA into LEED. Scheuer and Keoleian [10] evaluated the solid-waste generation and life-cycle energy consumption in a six-story University of Michigan campus building resulting from a simulated application of LCA to LEED-NC v2 credits. Three MR and three EA credits, out of a total of sixty-four credits, were analyzed. The studied MR credits were MRc2, construction waste management; MRc4, recycled materials; and MRc5, local/regional materials; the evaluated EA credits were EAc1, optimizing energy performance; EAc2, renewable energy; and EAc6, green power. The authors reported a variety of discrepancies between the LEED-NC v2 rating system, in which all of these credits were awarded one point, and the completely different LCA results obtained for them.
Humbert et al. [11] directly evaluated the LCA outcomes of a simulated application of LEED-NC v2.2, with 45 quantifiable credits out of a total of 69, to an actual Californian office building. The evaluated credits belonged to the SS, WE, EA, and MR categories. The authors concluded that most LEED-NC v2.2 credits brought about environmental benefits. However, several credits, such as SSc4.3, alternative transportation: low-emission and fuel-efficient vehicles, and SSc7.1, heat island effect: non-roof, caused environmental damage. Moreover, Humbert et al. [11] pointed out significant discrepancies between (i) the low number of points awarded in the rating system and the high LCA benefit of certain credits, such as EAc6, green power, and (ii) the high number of points awarded in the rating system and the low LCA benefit of certain credits, such as WEc1.1, water-efficient landscaping, reduce by 50%.
Other studies have also criticized the delinking of LCA from LEED certification. For example, Suh et al. [12] studied the application of 38 quantifiable LEED-NC v3 credits belonging to the SS, WE, EA, and MR categories to a prototypical small office building consuming 6 terajoules (TJ) of primary energy and releasing about 18,000 tons CO2-eq. of greenhouse gases (GHGs) over its life cycle, based on national average values. It was concluded that the environmental impact reduction potentials of the LEED building simulation were unevenly distributed across the measured impacts. The largest reductions were noted for acidification (25%), human respiratory health (24%), and global warming (22%), while no reductions were observed for ozone layer depletion and land use.
Al-Ghamdi and Bilec [13] studied the building energy use and associated life-cycle impacts of typical office buildings located in 400 cities worldwide with regard to their satisfaction of the LEED-NC v3 operational energy criteria based on the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) 90.1 energy code. The authors reported wide variations in CO2 emissions, from 394 to 911 tons CO2-eq. They concluded that there is a need to consider the LCA of local operational energy results in order to gain a better understanding of the possible environmental impacts in the context of the energy requirements of green building rating systems.
Thus, LEED BD + C v4 included the building life-cycle impact reduction credit in the MR category [4]. This credit offers the option of receiving three points for decreasing three of six environmental impacts (global warming potential, depletion of the stratospheric ozone layer, acidification of land and water sources, eutrophication, formation of tropospheric ozone, and depletion of nonrenewable energy resources). The proposed design should reduce each impact by 10 percent compared to a baseline design built according to ASHRAE 90.1-2010. In this way, LEED v4 BD + C linked LCA to LEED certification. However, the credit's intent is to promote the reuse and optimization of building construction materials. This is a good starting point for LCA penetration into LEED systems. Nevertheless, LCA is still not considered across the other quantifiable LEED categories, taking into account that some EQ credits (daylighting, thermal comfort, or quality views) cannot be analyzed with the current LCA methodology.
In this respect, Greer et al. [2] evaluated the LCAs of the application of three LEED BD + C v4 credits (optimizing energy performance in the EA category, and indoor and outdoor water use reduction in the WE category) in different cities of California. The authors revealed great variability in the avoided carbon dioxide, with 0.1-0.9 and 0.1-0.2 kg CO2/m²/y per EA and WE point awarded, respectively, in different cities of California. The CO2 variability was due to the different building types, electricity fuel sources for buildings' operational heating and cooling energy, and water infrastructure in this state.
Goals of the Study
The state of the art shows that there is still a lack of studies that link LEED certification strategies to LCA outcomes. In particular, LCAs of different LEED certification strategies have not yet been performed. The author of the present study decided to focus on LEED-CI v4 gold-certified office projects in cities in California, for which two hypotheses were elaborated: (1) based on Pushkar's study [9], it was supposed that there are different LT-EA certification strategies in California, and (2) based on Greer et al.'s study [2], it was supposed that the different LT-EA certification strategies can lead to different LCA outcomes. Thus, the first aim of this study was to reveal existing certification strategies of LEED-CI v4 gold-certified office projects in cities in California, and the second aim was to evaluate the different certification strategies via LCA outcomes.
The results of this study provide the first LCA evidence from different certification strategies that were applied by LEED building practitioners toward achieving the same certification level. In this way, the outcome of the study may help LEED experts to make further improvements to the LEED system for the additional mitigation of the environmental impacts caused by the construction sector.
Design of the Study
To reduce the impact of unknown factors, the author collected LEED-CI v4 office projects from California only because, in the USA, green building policies are regulated differently in each state [6]. The author selected California as a case study for the following reasons. First, California has the largest number of LEED-CI v4-certified office projects compared to the other US states and so is suitable for statistical analysis [6]. Second, California's cities have very different percentages of people using public transportation, which allowed the impact of transportation on the LEED strategy to be assessed; for example, the percentage of people using public transportation in San Francisco is 34.7, but in Sunnyvale it is 7.6 [14]. Figure 1 shows a flowchart of the methodology used in the present work. The following steps were performed (a simplified numerical illustration of the functional-unit bookkeeping in steps 4-5 follows this list):
(1) Filtering LEED-CI v4-certified, silver, gold, and platinum projects by sample size and sorting these by LT points resulted in the selection of the most appropriate gold project groups with high and low achievements in the LT category (i.e., LT High and LT Low) (Section 2.2.1);
(2) Distribution of the LT High and LT Low gold projects by cities in California and comparing them to the percentage of people using public transportation in these cities (Section 2.2.2);
(3) Comparing the LEED certification achievements of the LT High projects and the LT Low projects at the category (IP, LT, WE, EA, MR, EQ, IN, and RP) and credit levels resulted in two different LT-EA certification strategies: LT High-EA Low and LT Low-EA High (Section 3.1.1);
(4) Adopting LT High-EA Low and LT Low-EA High achievements as a functional unit (FU) for LCA evaluations by converting the LT High-EA Low LEED points into bus (typical bus) transportation distance (km) and building operational energy (OE) for heating and cooling (kWh), and converting LT Low-EA High LEED points into car (typical car or eco-friendly car) transportation distance (km) and building OE for heating and cooling (kWh) (Section 2.3.1);
(5) Evaluating the midpoint impact and endpoint single-score damage results of the LT High-EA Low (LT High: typical bus, EA Low: gas) and LT Low-EA High (LT Low: typical car, EA High: gas and LT Low: eco-friendly car, EA High: gas) certification strategies using the ReCiPe2016 life-cycle impact assessment methodology (Section 3.2).
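As referenced in the list above, the functional-unit bookkeeping of steps (4)-(5) can be illustrated with a deliberately simplified sketch: each strategy is reduced to an annual commuting distance plus an operational-energy demand, multiplied by per-unit global-warming factors. All numbers below (distances, energy use, and emission factors) are placeholder assumptions, not the study's inventory data or ReCiPe2016 characterization results.

```python
# Illustrative functional-unit comparison for the three alternatives.
GWP = {                      # kg CO2-eq per unit (hypothetical values)
    "bus_diesel_km": 0.08,
    "car_gas_km": 0.20,
    "car_eco_km": 0.12,
    "gas_kwh": 0.23,
}

strategies = {               # mode, commute km/yr, building OE kWh/yr
    "LT High: typical bus, EA Low: gas":      ("bus_diesel_km", 4000, 1500),
    "LT Low: typical car, EA High: gas":      ("car_gas_km",    4000,  900),
    "LT Low: eco-friendly car, EA High: gas": ("car_eco_km",    4000,  900),
}

for name, (mode, km, kwh) in strategies.items():
    total = GWP[mode] * km + GWP["gas_kwh"] * kwh
    print(f"{name:42s}: {total:6.0f} kg CO2-eq per occupant-year")
```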
Filtering by Sample Size and Sorting by Location and Transportation (LT) Points
A total of 101 LEED-CI v4-certified, silver, gold, and platinum office projects in California (20 certified, 36 silver, 40 gold, and 5 platinum), certified between March 2015 and February 2022, were retrieved from two databases: the USGBC [15] and the Green Building Information Gateway (GBIG) [16]. The USGBC database was used to collect the credit achievements of the LEED-CI v4 projects, and the GBIG database was used to identify LEED-CI office projects only [16].
As can be seen from Table 1, for the certified, silver, and platinum levels of certification, the author selected three groups: high, low, and intermediate performance in the LT category. For the gold level of certification, the author selected four groups: high performance, low performance, and two groups with intermediate performance in the LT category.
Table 1. Distribution of the LEED-CI v4 office projects according to the location and transportation (LT) category performance and levels of certification in California.
Table 1 lists the four certification levels (certified, silver, gold, and platinum), the three LT performance levels (low, medium, and high), and the number of LEED projects for each combination of certification level and LT performance level. It should be noted that the LEED data contain three types of data: binary, ordinal, and discrete interval variables with relatively few values. In this context, to compare the differences between two groups, the minimum number of LEED projects, or sample size (n), in each group must be n ≥ 12 [17]. According to the LEED-CI v4 office project numbers in each group, this study focused on comparing two strategies used to achieve the gold level of certification in LEED-CI v4 office projects, namely, high and low performance in the LT category (i.e., LT High and LT Low).
Table 2. Distribution of LEED-CI v4 gold-certified office projects in cities of California according to LT High and LT Low via the percentage of people using public transportation.
Table 2 provides more details on the LT Low and LT High groups from Table 1 by California city for the LEED gold-certified projects. The second column gives the percentage of residents who use public transport, and the last row contains the total number of LEED projects in each LT performance group. Table 2 shows that the LT High group included 13 projects from San Francisco and 1 project from Los Angeles; the LT High group occurred in cities with relatively high usage of public transportation, while the LT Low group occurred in cities with relatively low usage of public transportation.
Functional Unit
Following the statistical evaluation of the LEED achievements of the LT High and the LT Low projects, two different certification strategies, LT High-EA Low and LT Low-EA High, were revealed. LTc3 (access to quality transit) and EAc6 (optimize energy performance) were determined to be the representative credits of LT High or LT Low and EA High or EA Low for the LCA evaluation (see the detailed explanation in Section 3.1.1).
For the comparison of the LCAs of the LT High-EA Low and LT Low-EA High certification strategies, both transportation (LTc3) and operational energy (EAc6) need to be considered within a single FU. Therefore, the FU was designated as follows: one passenger's transportation from home to office and back + 8 h of OE service for one employee. In particular, this FU included the transport of one employee over 30 km of distance (LTc3) + OE per one employee per 20 m² of office space (EAc6) per one day of office work.
The FU uses 30 km of distance per employee, based on the reported average traveling distance from home to work in California [18], and 20 m² of office space per employee, a common maximum operational-energy design criterion for an office-type building [19]. An office employee traveling from home to work was assumed to travel by bus for LT High and by car for LT Low. In the case of traveling by car, the author considered two options: the current situation, in which California's cars are conventionally fueled, and a hypothetical future situation, in which California could become a leader in the development of new, more environmentally friendly transport [20].
EA Low and EA High refer to the quantity of OE, in kWh, used by each of the LEED-CI v4 gold-certified office projects for their heating and cooling needs. As a starting point, this study adopted 80 kWh/m² per year of OE as the base case for California's office buildings [21]. It was assumed that the OE was produced from natural gas, the most common electricity generation fuel in California [22]. Then, the author performed a four-step evaluation procedure in which, for each of the analyzed projects, the LEED points were converted into kWh that were used as input data for the LCAs of EA High and EA Low. A description of this procedure is presented in Appendix A.
Life-Cycle Inventory
The LCIs of the LT High-EA Low and LT Low-EA High certification strategies were modeled on the SimaPro platform [23]. The Ecoinvent database provides comprehensive transportation data. Table 3 shows the Ecoinvent v3.2 database sources adopted for transportation (used in LT High and LT Low) and OE (used in EA High and EA Low). For OE, the author used the original US database. However, due to the absence of an original US database for transportation, it was necessary to adopt the Switzerland database, which was considered appropriate due to the comparative nature of the evaluation in the present study.
According to the Ecoinvent v3.2 database [23], transportation by a typical bus refers to the entire transport life cycle and includes bus production, operation, maintenance, and disposal, as well as the construction, renewal, and disposal of roads. A vehicle lifetime performance of 23,900 person-km/vehicle was assumed. The data for vehicle operation and road infrastructure reflect Switzerland's conditions, while the data for vehicle manufacturing and maintenance represent generic European data.
The Ecoinvent v3.2 database [23] states that transportation by a typical car includes data on Euro3 vehicle operation and on roads composed of bitumen and concrete. The inventory refers to the entire transport life cycle: vehicle manufacturing reflecting current modern technologies, as well as the construction, renewal, disposal, and operation of the road infrastructure. A vehicle lifetime performance of 23,900 person-km/vehicle, with an average utilization of 1.59 passengers/car, was assumed. The data for vehicle manufacturing and maintenance represent generic European data, whereas the data for vehicle operation and road infrastructure reflect Switzerland's conditions.
According to the Ecoinvent v3.2 database [23], transportation by an eco-friendly car (diesel car, lightweight concept, 2 L/100 km, EURO5) takes into account an average load factor of 1.6 persons. The inventory refers to the entire transport life cycle: car production, operation, and maintenance, as well as the operation and disposal of the road infrastructure. A vehicle lifetime performance of 150,000 km/vehicle was assumed. The data for the vehicle life cycle and road infrastructure reflect Switzerland's conditions. The Ecoinvent v3.2 database [23] states that the production of electricity in the natural gas power plants in the USA has an average net efficiency of 100%; the technology reflects electricity production by natural gas steam generation.
Thus, because Switzerland's inventory data were used for the typical bus, the typical car, and the eco-friendly car, the LCAs of the LT High-EA Low (LT High: typical bus, EA Low: gas), LT Low-EA High (LT Low: typical car, EA High: gas), and LT Low-EA High (LT Low: eco-friendly car, EA High: gas) certification strategies are fully comparable with each other.
Life-Cycle Impact Assessment
The author of the present study used the ReCiPe2016 life-cycle impact assessment (LCIA) method. This method is based on individualist (I), hierarchical (H), and egalitarian (E) views regarding environmental problems. The individualist view accounts for a short lifetime (20 years), the hierarchical view accounts for a long lifetime (100 years), and the egalitarian view accounts for an infinite lifetime (1000 years) of pollutants [24,25].
To analyze the LT High-EA Low and LT Low-EA High certification strategies, this study used both midpoint (H) and endpoint single-score (individualist/average, I/A; hierarchical/average, H/A; and egalitarian/average, E/A) evaluations. On the midpoint scale, the author evaluated global warming, human carcinogenic toxicity, human noncarcinogenic toxicity, and terrestrial ecotoxicity impacts. These impacts were selected as they are the most influenced by transportation and operational energy processes [23]. Table 4 shows these impacts for 1 km (transportation) and 1 kWh (OE).
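To make the scaling from per-unit impacts to the FU concrete, the following is a minimal Python sketch of how a Table 4-style factor (per km and per kWh) combines with the FU quantities (30 km of transport and the 6.4 kWh/day base-case OE). The characterization factors below are placeholders, not the values reported in Table 4, and only one midpoint category (global warming) is shown.

```python
# Minimal sketch: scaling per-unit ReCiPe2016 midpoint factors to the study's FU.
# The characterization factors below are placeholders, NOT the values in Table 4.

FU_DISTANCE_KM = 30.0   # one employee's daily home-office round trip (LTc3)
FU_OE_KWH = 6.4         # base-case daily OE per 20 m^2 of office space (EAc6)

# Hypothetical per-unit global warming factors, kg CO2-eq per km / per kWh.
cf_per_km = {"typical bus": 0.05, "typical car": 0.25, "eco-friendly car": 0.12}
cf_per_kwh_gas = 0.5

def fu_midpoint_impact(mode: str, oe_kwh: float = FU_OE_KWH) -> float:
    """Midpoint impact of one FU: transport share plus OE share."""
    return FU_DISTANCE_KM * cf_per_km[mode] + oe_kwh * cf_per_kwh_gas

for mode in cf_per_km:
    total = fu_midpoint_impact(mode)
    transport_share = FU_DISTANCE_KM * cf_per_km[mode] / total
    print(f"{mode}: {total:.2f} kg CO2-eq/FU, transport share {transport_share:.0%}")
```

With the real Table 4 factors substituted, the same computation yields the per-strategy impact totals and the transport/OE shares discussed in the results sections.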
Choice of Statistical Procedures
LEED data are expressed as ordinal or discrete interval variables with relatively few values, or as binary data. For descriptive statistics, this paper used the median and the 25th and 75th percentiles, and for inferential statistics, nonparametric tests were used because the normality assumption may not hold [26].
For ordinal or discrete data, to estimate the p-value, this paper used the exact Wilcoxon-Mann-Whitney (WMW) nonparametric test [17], and to estimate the effect size, a nonparametric Cliff's δ test was used [27].
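To make the effect-size computation concrete, here is a minimal sketch of Cliff's δ for two independent groups; the input vectors are illustrative placeholders, not the study's LEED point data.

```python
# Minimal sketch of Cliff's delta for two independent groups of ordinal data.
# The sample values are illustrative only, not the study's LEED point data.

def cliffs_delta(group1, group2):
    """delta = P(x > y) - P(x < y) over all pairs (x in group1, y in group2)."""
    greater = sum(1 for x in group1 for y in group2 if x > y)
    less = sum(1 for x in group1 for y in group2 if x < y)
    return (greater - less) / (len(group1) * len(group2))

lt_high = [3, 4, 5, 5, 5]   # hypothetical credit points, LT High group
lt_low = [1, 2, 2, 3, 3]    # hypothetical credit points, LT Low group
print(f"Cliff's delta = {cliffs_delta(lt_high, lt_low):.2f}")  # 0.92; +1 = complete separation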
For LEED binary data, to estimate the p-value, this paper used Fisher's exact 2 × 2 test with Lancaster's mid-p-value [28]. To estimate the effect size, (1) the author computed odds ratios using a two-by-two frequency table, but added 0.5 to each frequency observed if any of them were 0 [29], and (2) the author used the natural logarithm of the odds ratio (ln θ) [30].
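The odds-ratio computation with the 0.5 correction can likewise be sketched in a few lines; the 2 × 2 counts below are hypothetical, not the study's project counts.

```python
import math

# Minimal sketch of the odds ratio and its natural log for a 2x2 LEED binary table,
# with 0.5 added to every cell (Haldane-Anscombe correction) if any cell is zero.
# The counts are illustrative, not the study's project counts.

def log_odds_ratio(a, b, c, d):
    """a,b = credit achieved / not achieved in Group 1; c,d = same in Group 2."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return math.log((a * d) / (b * c))

ln_theta = log_odds_ratio(a=12, b=2, c=4, d=8)  # hypothetical counts
print(f"ln(theta) = {ln_theta:.2f}")  # 2.48: "large" by the 0.51/1.24/1.90 thresholds
```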
LCA-LEED data are expressed as discrete data. However, as these data were being analyzed for the first time, the author performed a Shapiro-Wilk test to check the assumption of normality. For the LT High: typical bus data, the Shapiro-Wilk test showed that the assumption of normality was not met in each perspective (i.e., I/A, H/A, and E/A) (p = 0.0008), while for the LT Low: typical car and LT Low: eco-friendly car data, the assumption of normality was met in each perspective (p = 0.0598, p = 0.1264, and p = 0.2222, respectively). In this context, if one of the two groups does not meet the normality assumption, the nonparametric exact WMW test and Cliff's δ effect size are used to estimate the statistical difference between the two groups.
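This decision rule (test each group for normality, fall back to the exact WMW test if either fails) can be sketched with SciPy (≥ 1.7 for the exact method); the arrays are illustrative placeholders, not the study's single-score data.

```python
from scipy import stats

# Minimal sketch of the decision rule: test each group for normality with
# Shapiro-Wilk and fall back to the exact Mann-Whitney U test if either fails.
# The arrays are illustrative placeholders, not the study's single-score data.

group_bus = [4.1, 3.9, 4.5, 4.0, 4.8, 4.2, 9.5]   # hypothetical LT High scores
group_car = [5.0, 5.2, 4.9, 5.4, 5.1, 5.3, 5.0]   # hypothetical LT Low scores

alpha = 0.05
normal = all(stats.shapiro(g).pvalue > alpha for g in (group_bus, group_car))

if normal:
    result = stats.ttest_ind(group_bus, group_car)
else:
    # method="exact" computes the exact WMW p-value, suited to small samples
    result = stats.mannwhitneyu(group_bus, group_car, method="exact")
print(type(result).__name__, result.pvalue)
```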
The value of ln θ ranges from negative to positive infinity [29]. A positive value indicates that Group 1 (i.e., LT High) was larger than Group 2 (i.e., LT Low); a value of 0 indicates no difference between the two groups; and a negative value indicates that Group 2 (i.e., LT Low) was larger than Group 1 (i.e., LT High). The effect-size thresholds of the absolute ln θ (|ln θ|) were 0.51 (small), 1.24 (medium), and 1.90 (large) and were adapted from the study by Chen et al. [32].
According to Altomonte et al. [33], the Cliff's δ coefficient provides an intuitive interpretation of the practical significance (i.e., effect size) in green building studies. This is likely due to the small number of studies in this area that have used effect-size coefficients. Vargha and Delaney [34] noted that more empirical evidence is needed to evaluate the real effect size for nonparametric group comparisons.
p-Value Interpretation
According to Hurlbert and Lombardi [35], exact p-values are evaluated according to a three-valued logic: seems to be positive (i.e., there seems to be a difference between Group 1 and Group 2), seems to be negative (i.e., there does not seem to be a difference between the groups), or judgment is suspended regarding the difference between Groups 1 and 2.
Recently, the author of [26] described the interpretation of the p-value in more detail.
LEED Certification Achievements of the LT High and the LT Low Projects
Table 5 gives descriptive and inferential statistics for the categories of the LEED-CI v4 gold certification. According to the LEED total, both the LT Low and LT High certification strategies led to similar total median points, 62.5 and 63.0, respectively. However, the categories split into similarly achieved ones, such as IP, WE, MR, and RP, and differently achieved ones, such as LT, EA, EQ, and IN. Notes: p-values were evaluated according to three-valued logic; bold font indicates that the value seems to be positive; Roman font indicates that the value seems to be negative.
Among the differently achieved categories, LT (which emphasizes the preferability of public transportation) performed better in the LT High group of projects than in the LT Low group. Such results were expected because, in the LT High group, 13 of the 14 LEED-CI v4 gold projects were certified in San Francisco (Table 2), the densest Californian city, with a highly developed public transportation system [20]. The LT Low group of projects was certified in other Californian cities, such as Brisbane, Fremont, Menlo Park, Mountain View, Rancho Cordova, Roseville, San Diego, and Sunnyvale (Table 2). In this respect, Turrentine [20] noted that, unless Californians live in San Francisco, "they also are likely to have never carpooled with their neighbors, despite the presence of High Occupancy Vehicle lanes on many California freeways, and they have probably never used mass transit". Thus, to compensate for low LT achievement, the LT Low group of projects was forced to earn more points in other categories. This phenomenon of interdependence between the achievements of LEED categories was described early on by Ismaeel [36], who developed a dynamic model for sustainable site selection according to LEED-NC v4. Ismaeel revealed that as the achievements of the site selection categories (LT and SS) decreased, the achievements of the WE, EA, MR, and EQ categories increased, and concluded that the highest influence of the site selection categories was on the EA credits, followed by EQ, MR, and WE. This was explained by the fact that when the site selection categories face local constraints (e.g., a lack of public transportation), LEED practitioners are forced to aim for higher performance in other categories.
In the present study, to compensate for its low LT achievement, the LT Low group of projects invested in improving the EA, EQ, and IN categories. As a result, in these three categories, the LT Low group of projects had better achievements than the LT High group (Table 5). However, it is not clear how LT is related to some of the EQ credits (e.g., daylighting, thermal comfort, or quality views) or to the IN credits (which can be completely different in different projects). Thus, the EQ and IN categories were outside the scope of this study.
In this respect, Table 6 gives descriptive and inferential statistics only for the LT and EA credits of the LEED-CI v4 gold certification. For the three LT credits, LTc2 (surrounding density and diverse uses), LTc3 (access to quality transit), and LTc5 (reduced parking footprint), the LT High group of projects received the maximum possible points, significantly outperforming the LT Low group. However, for three EA credits, EAc2 (advanced energy metering), EAc4 (enhanced refrigerant management), and EAc6 (optimize energy performance), the LT Low group of projects outperformed the LT High group.
Table 6. LEED-CI v4 gold-certified projects in California: location and transportation (LT) and energy and atmosphere (EA) credits.
Notes to Table 6: p-values were evaluated according to three-valued logic; bold font indicates that the value seems to be positive; Roman font indicates that the value seems to be negative; italic font indicates that judgment is suspended. a Exact WMW test and Cliff's δ were used. b Fisher's exact 2 × 2 test and ln θ were used.
LTc2 (surrounding density and diverse uses) deals with the presence of city infrastructure near a building site; LTc3 (access to quality transit) considers the presence of public transportation in the vicinity of a building site; and LTc5 (reduced parking footprint) recommends reducing private car parking places [37]. All of these LT credits can help decrease the main fuel combustion emissions, such as nitrogen dioxide (NO2) and carbon monoxide (CO), which are associated with human carcinogenic and terrestrial ecotoxicity impacts [23].
EAc2 (advanced energy metering) aims to control energy savings, and EAc6 (optimize energy performance) encourages insulating building envelopes and installing energy-efficient systems [37]. Using fossil fuels releases emissions such as sulfur dioxide (SO2) and nitrogen oxides (NOx), thereby increasing acidification and human toxicity impacts, respectively [23]. EAc4 (enhanced refrigerant management) requires a decrease in chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), which can contribute to ozone depletion [37].
As can be observed, the LT and EA credits involve different areas of human health and environmental protection and, as a consequence, can decrease the different impacts associated with each of them. Thus, to obtain LEED-CI v4 gold certification, the LT High group of projects chose to decrease the LT-related impacts and accept higher EA-related impacts, whereas the LT Low group chose to decrease the EA-related impacts and accept higher LT-related impacts. The LCA of these two certification strategies was therefore developed further, exploring them in terms of the environmental impact and damage level.
Two LT credits, LTc2 (surrounding density and diverse uses) and LTc3 (access to quality transit), are the most important, accounting for 15 out of the 18 points (Table 6). However, LTc2 concerns walking, whereas LTc3 concerns driving; thus, LTc3 influences the environment in a much more straightforward manner than LTc2. EAc6 (optimize energy performance) is the most influential credit in the EA category, accounting for the greatest number of points: 25 out of 32 (Table 6). Moreover, EAc6 can be easily accounted for in the LCA framework. The author therefore performed LCAs for LTc3 (access to quality transit) and EAc6 (optimize energy performance), the most representative credits of the LT and EA categories, respectively, selected for their large influence on these categories and for the possibility of translating their requirements into quantitative LCA inputs. Thus, the LCAs of two different strategies, LT High-EA Low and LT Low-EA High, were evaluated, and the results are presented below.
LCAs of LT High -EA Low and LT Low -EA High
Following the evaluation procedure described in Appendix A, LEED points were converted into kWh for all the projects analyzed. Tables 7 and 8 give the LTc3 and EAc6 information for the LCAs of the LT High-EA Low and LT Low-EA High certification strategies, respectively.
Table 7. LEED-CI v4 gold-certified office-type projects: LTc3 and EAc6 information for LCAs of the LT High-EA Low project group.
Figure 2 shows the ReCiPe2016 midpoint impact results of the LT High-EA Low (typical bus) certification strategy, denoted as LT High: typical bus, EA Low: gas, and of the LT Low-EA High (typical and eco-friendly cars) certification strategies, denoted as LT Low: typical car, EA High: gas and LT Low: eco-friendly car, EA High: gas, respectively. It can be noted that EA High and EA Low (OE production) had a high contribution to global warming potential and human noncarcinogenic toxicity, whereas LT High and LT Low (bus and car transportation) had a high contribution to human carcinogenic toxicity and terrestrial ecotoxicity. Thus, the impact results at the midpoint do not allow us to conclude which of the two processes (OE or transportation) was more influential.
Midpoint Impact Results
Comparing the impacts of the LT High-EA Low and LT Low-EA High certification strategies, the following was noted. For global warming potential, human carcinogenic toxicity, human noncarcinogenic toxicity, and terrestrial ecotoxicity, the LT Low: typical car, EA High: gas certification strategy was the most environmentally harmful. However, for global warming potential and terrestrial ecotoxicity, the impact of the LT Low: eco-friendly car, EA High: gas certification strategy was significantly lower than that of the LT High: typical bus, EA Low: gas certification strategy, whereas, for human carcinogenic toxicity and human noncarcinogenic toxicity, the impact of the LT High: typical bus, EA Low: gas certification strategy was much lower than that of the LT Low: eco-friendly car, EA High: gas certification strategy. Thus, based on the midpoint results, it is difficult to determine one preferable certification strategy.
Endpoint Single-Score Damage Results
Figure 3 shows the ReCiPe2016 endpoint single-score results of the LT High: typical bus, EA Low: gas; LT Low: typical car, EA High: gas; and LT Low: eco-friendly car, EA High: gas strategies. As can be seen, for these certification strategies, transport caused greater damage to the environment than the OE production process. The shares of transport and OE changed under the short, long, and infinite time horizons of pollutants, with transport's influence decreasing and OE's influence increasing. In particular, in a 20-year period (the I/A option), the transport and OE shares were 91-99% and 1-9%, respectively; in a 100-year period (the H/A option), they were 61-84% and 16-39%, respectively; and in an infinite (1000-year) period (the E/A option), they were 53-76% and 24-47%, respectively.
Comparing the damage involved in the LT High-EA Low and LT Low-EA High certification strategies, in terms of all three time horizons of pollutants, the LT Low: typical car, EA High: gas certification strategy that used a typical car was the most environmentally harmful. However, the LT High: typical bus, EA Low: gas certification strategy that used a typical bus was better than the LT Low: eco-friendly car, EA High: gas certification strategy, which used an eco-friendly car, in the short time horizon (the I/A option), whereas the LT Low: eco-friendly car, EA High: gas certification strategy was better than the LT High: typical bus, EA Low: gas certification strategy in the long (the H/A option) and infinite (the E/A option) time horizons.
Table 9 shows that there was a significant difference between the LT High: typical bus, EA Low: gas and LT Low: typical car, EA High: gas certification strategies, as well as between the LT High: typical bus, EA Low: gas and LT Low: eco-friendly car, EA High: gas certification strategies. Thus, considering the results presented in Figure 3 and their statistical comparisons presented in Table 9, it can be concluded that the preferability of one strategy over another depended on the time horizon of the pollutants considered. The LT High: typical bus, EA Low: gas certification strategy was revealed to be environmentally preferable in the short term, whereas the LT Low: eco-friendly car, EA High: gas certification strategy was found to be the most environmentally appropriate solution from the long-term or infinite perspectives.
Limitations
In the present study, Spearman's rho (ρ) rank-correlation coefficient (an effect size) could not be used between the LT Low and LT High groups because these groups had different sample sizes (n = 12 and n = 14, respectively). Spearman's correlation coefficient can be used to estimate the strength of the monotonic relationship between two LEED credits/categories within one group [38], i.e., within LT Low or LT High. The nonparametric Cliff's δ was applied instead to measure the magnitude of the difference between the two distributions (i.e., the effect size); Cliff's δ can be used for two independent groups with equal or unequal sample sizes.
Future Research
Recently, Altomonte et al. [33] used a seven-point Likert scale to assess occupant satisfaction with the indoor environmental quality in LEED-certified buildings (post-occupancy analysis). A two-tailed nonparametric Wilcoxon rank-sum test, Spearman's rho (ρ) rank-correlation coefficient, and Cliff's δ coefficient were used to calculate significant differences (p-values) and substantive significances (effect sizes) between two independent groups. In the current study, LEED-certified buildings were evaluated using the LEED scorecard (pre-occupancy analysis). In a future study, the author plans to compare post-occupancy results with pre-occupancy results using the above statistical tests.
Conclusions
This study evaluated the LCAs of two different LEED-CI v4 gold certification strategies for office projects located in cities in California. The two strategies were revealed by sorting the projects according to the LT category of LEED-CI v4 into high and low LT achievements. It was revealed that projects with a high number of LT points performed poorly in the EA category (LT High-EA Low), whereas projects with a low number of LT points performed well in the EA category (LT Low-EA High). These two different LEED certification strategies resulted in nearly the same median LEED total score (62.5 and 63.0 points), in both cases sufficient for gold certification.
However, from the LCA point of view, the two strategies for obtaining the same LEED certification were quite different. According to the ReCiPe2016 midpoint impact evaluation, the LT Low: typical car, EA High: gas strategy was the most environmentally harmful certification strategy; in terms of global warming potential and terrestrial ecotoxicity, the LT Low: eco-friendly car, EA High: gas strategy was preferable, whereas in terms of human carcinogenic toxicity and human noncarcinogenic toxicity, the LT High: typical bus, EA Low: gas strategy was the better choice. Thus, at this level of the evaluation, it was impossible to decide on the most environmentally beneficial certification strategy.
According to the ReCiPe2016 endpoint single-score results, the LT Low: typical car, EA High: gas strategy remained the most environmentally damaging certification strategy across all time horizons of pollutants. However, the LT High: typical bus, EA Low: gas strategy was clearly preferable in the short term, whereas the LT Low: eco-friendly car, EA High: gas strategy was preferable from the long-term and infinite perspectives.
The novelty of this study lies in the environmental assessment of the choice of LEED certification strategy. The author has shown that choosing one certification strategy or the other (LT High-EA Low or LT Low-EA High), although both result in the same level of LEED-CI v4 (gold) certification, leads to significantly different environmental impacts and damage. Based on the LCA results, it is recommended that LEED certification decisions be accompanied by the relevant LCA-based environmental assessments, thereby increasing the sustainability of buildings.
Funding: This research received no external funding.
Data Availability Statement:
Publicly available datasets were analyzed in this study. The data can be found here: https://www.usgbc.org/projects (USGBC Projects Site) (accessed on 10 April 2022) and http://www.gbig.org (GBIG Green Building Data) (accessed on 10 April 2022).
Acknowledgments:
The author is grateful to Architect David Knafo for a fruitful discussion of the idea presented in this study.
Conflicts of Interest:
The author declares no conflict of interest.
Appendix A
This study used a four-step evaluation procedure in which, for each of the analyzed projects, LEED points were converted into kWh and used as input data for the LCAs of EA High and EA Low. The procedure included: (1) the conversion of operational-energy improvement points to a percentage improvement according to EAc6 [37]; (2) the conversion of 80 kWh/(m²·y) to the FU base case, which was 6.4 kWh per day per 20 m²; (3) the conversion of the percentage improvement of the FU base case to operational energy saved; and (4) the calculation of the difference between the FU base case and the operational energy saved. For example, for a 6% improvement:
EAc6 saved operational energy = 6.4 kWh·day⁻¹·(20 m²)⁻¹ × 0.06 ≈ 0.4 kWh·day⁻¹·(20 m²)⁻¹ (A3)
EA Low = 6.4 − 0.4 = 6.0 kWh·day⁻¹·(20 m²)⁻¹ (A4)
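To make the four-step conversion concrete, the following is a minimal Python sketch under stated assumptions: the points-to-percentage mapping is a hypothetical stand-in for the actual EAc6 table [37], and 250 working days per year is assumed in order to reproduce the 6.4 kWh/day base case from 80 kWh/(m²·y) over 20 m².

```python
# Minimal sketch of the Appendix A conversion from EAc6 LEED points to kWh input
# for the LCA. The points-to-percentage mapping below is a hypothetical stand-in
# for the actual EAc6 table [37], and 250 working days/year is an assumption
# used to recover the 6.4 kWh/day base case from 80 kWh/(m2*y) over 20 m2.

BASE_OE_KWH_M2_YEAR = 80.0
OFFICE_AREA_M2 = 20.0
WORKING_DAYS_PER_YEAR = 250

# Hypothetical mapping of EAc6 points to % operational-energy improvement.
EAC6_IMPROVEMENT = {2: 0.03, 4: 0.06, 6: 0.10, 10: 0.18}

def ea_oe_per_fu(eac6_points: int) -> float:
    """Steps 1-4: points -> % improvement -> daily base case -> saved OE -> net OE."""
    improvement = EAC6_IMPROVEMENT[eac6_points]                               # step 1
    base_case = BASE_OE_KWH_M2_YEAR * OFFICE_AREA_M2 / WORKING_DAYS_PER_YEAR  # step 2: 6.4
    saved = base_case * improvement                                           # step 3
    return base_case - saved                                                  # step 4

print(f"EA per FU at 4 points: {ea_oe_per_fu(4):.1f} kWh/day per 20 m^2")  # ~6.0
```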
"Business",
"Engineering",
"Environmental Science"
] |
Enhanced Wear Resistance of 316 L Stainless Steel with a Nanostructured Surface Layer Prepared by Ultrasonic Surface Rolling
The low hardness and poor wear resistance of AISI 316 L austenitic stainless steel degrade its appearance and shorten its service life when it is subjected to sliding. In this paper, the single-pass ultrasonic surface rolling (USR) process was used to modify the surface of 316 L austenitic stainless steel. A nanostructured surface layer with a depth of about 15 μm was fabricated. Dry wear tests of USR samples were performed on a ring-on-block tester at room temperature, and the results were compared with those for the as-received sample. The USR sample showed a significant reduction in wear mass loss and an improved hardness, as well as a decreased surface roughness. The detailed wear mechanism was also investigated by SEM observations of the worn surfaces. It was indicated that oxidation and abrasive wear, accompanied by mild adhesion, dominated the wear of USR 316 L stainless steel at both low and high speeds. The superior wear performance of USR 316 L was attributed to its nanostructured surface layer, which was characterized by a high hardness and thereby suppressed severe abrasive wear. The results provide an alternative approach to modifying the surface of 316 L stainless steel without changing its surface chemical components.
Introduction
As one of the typical stainless steels, 316 L stainless steel has a medium strength, excellent toughness, good plasticity and formability, and good corrosion and oxidation resistance [1,2]. Thus, it has been widely used in marine, petrochemical, energy development, and other fields. However, 316 L stainless steel possesses a low hardness. As a result, it suffers from poor wear resistance, which degrades its appearance and shortens its service life when it is subjected to sliding [3,4]. It is thus of great significance to improve the wear resistance of 316 L stainless steel in order to broaden its application.
In the last two decades, surface nanocrystallization (SNC) has become an attractive surface modification method. This method forms a nanostructured layer on the metal surface using severe plastic deformation technology, without changing the surface chemical components [5-8]. So far, several methods have been developed to produce SNC on metal surfaces, such as surface mechanical attrition treatment (SMAT) [9,10], surface mechanical grinding treatment (SMGT) [11,12], surface mechanical rolling treatment (SMRT) [13], ultrasonic shot peening (USP) [14,15], and supersonic fine particles bombarding (SFPB) [16]. These approaches have also been used to improve the surface and bulk properties of AISI 316 L stainless steel. Lu et al. found that a gradient nanostructured surface (GNS) layer was formed on 316 L stainless steel by means of SMRT, which facilitated a significant enhancement in fatigue strength [17]. Bagherifard et al. reported that the nanolayer in AISI 316 L, formed by severe shot peening, could increase the surface roughness and surface wettability, which is highly important for biomedical applications [18]. An enhancement of oxidation resistance at high temperatures was also found in SMAT-treated AISI 316 L [19].
Ultrasonic surface rolling (USR) is a novel and effective surface strengthening technique, which combines a static load with ultrasonic energy to impact the material surface. Compared with conventional techniques, USR holds several advantages, such as ease of operation, low cost, and high efficiency. It has been used to reduce stress concentration, improve hardness, enhance wear performance, and augment the fatigue characteristics of several metals [20-22]. Zhao et al. found that USR TC11 titanium alloy experienced a 46.6% increase in the micro-hardness of the topmost surface and a 19.3% improvement in fatigue strength, compared to the original sample [23]. A significant improvement (52.6%) in the micro-hardness of USR 40Cr, accompanied by a decrease in friction coefficient and wear rate, was also obtained by Wang et al. [24]. Moreover, Wang et al. reported that USR, assisted by electropulsing, can further improve the surface microhardness and rotating bending fatigue strength of AISI 304 stainless steel [25,26]. While SNC has shown its advantage in improving the surface properties and bulk mechanical properties of various metals, limited attempts have been made to modify 316 L by the USR method. Moreover, multi-pass USR is generally required to obtain satisfactory surface properties, which unfavorably increases the complexity and decreases the efficiency of processing.
In this paper, single-pass USR was adopted to modify the surface of AISI 316 L stainless steel. Micro-analytical techniques (OM and TEM) were used to characterize the microstructures of the surface modification layer. The wear behaviors under dry friction conditions were discussed, and the detailed wear mechanisms were investigated by SEM observations of the worn surfaces.
Sample Preparation
In this study, a commercial hot-extruded AISI 316 L austenitic stainless-steel bar, with nominal chemical compositions of 0.03 wt % C, 10.7 wt % Ni, 16.72 wt % Cr, 2.12 wt % Mo, 1.01 wt % Mn, 0.03 wt % Si, 0.0165 wt % S, and 0.042 wt % P, was employed. The initial material was machined into a round bar with a diameter of 30 mm and a length of 500 mm.
Ultrasonic Surface Rolling Process
A schematic illustration of the ultrasonic surface rolling (USR) setup, in which an ultrasonic apparatus was installed in a CNC lathe, is shown in Figure 1. During the USR processing, a carbide alloy rolling ball, driven by ultrasonic waves, was applied to achieve a high-frequency impact and was rolled on the samples under a static force. In the present work, the following USR parameters were used: a rolling ball diameter of 14 mm, an ultrasonic vibration frequency of 28 kHz, a vibration amplitude of 5 µm, and an applied static force of 300 N. A single-pass USR process was used to enhance the wear resistance of AISI 316 L stainless steel; while a multi-pass USR process is generally employed to obtain a gradient nanocrystalline structure, it is energy-intensive and less efficient.
Wear Test, Microstructure Characterization and Hardness Test
The wear tests were performed on an M-2000 tribometer (Shunmao, Jinan, China) in a ring-on-block mode under dry conditions at room temperature, as illustrated in Figure 2. The rings were made of AISI 52100 steel, with a hardness of 63 HRC and a diameter of 30 mm. The applied load was 150 N, and the wear duration was 15 min at sliding speeds of 200 and 400 rpm, respectively. Before and after the wear test, the samples were ultrasonically cleaned in acetone and weighed to obtain the mass loss. In order to guarantee the stability of the wear data, each wear test was repeated three times under the same testing conditions. Microstructure observations were performed using optical microscopy (OM, Olympus BX51M, Tokyo, Japan) and a transmission electron microscope (TEM, JEM 2100, JEOL Ltd., Tokyo, Japan). Samples for OM analysis were prepared via grinding, polishing, and etching in an electrolyte of 90% alcohol and 10% perchloric acid. For TEM characterization, a cross-sectional foil containing the treated surface was mechanically cut, carefully ground, and further prepared via twin-jet polishing with a solution of 2% perchloric acid in ethanol at 253 K. The wear scars were analyzed using a desktop scanning electron microscope (SEM, Phenom, Thermo Fisher Scientific, Waltham, MA, USA). The micro-hardness was measured on the sample surface using an HXD-1000TC microhardness-testing instrument (Shanghai Taiming Optical Instrument Co., Ltd., Shanghai, China). The load was 0.98 N, and the holding duration was 10 s. The average micro-hardness value of ten test points was used.
Microstructures of the USR Surface Layer
Figure 3 shows typical cross-sectional OM images of the as-received sample and the USR sample. The microstructure of the as-received sample was mainly composed of equiaxed austenitic grains, with a mean size of 19 µm, embedded with a few annealing twins, as shown in Figure 3a. Notably, plastic deformation and grain refinement were observed in the top-surface layer of the USR sample, as shown in Figure 3b. The depth of the surface grain-refined layer extended to around 15 µm. The original austenitic grains in the subsurface were compressed and elongated in the direction parallel to the specimen surface, indicating a plastic deformation zone due to ultrasonic striking during USR. In the plastic deformation zone, deformation twins within grains were clearly observed.
Typical cross-sectional TEM micrographs, shown in Figure 4a-c, characterize the microstructures of the USR sample's surface. It was apparent that equiaxed nanograins were formed in the top-surface layer of the USR sample, indicating surface nanocrystallization by USR [27,28]. The corresponding SAED patterns confirmed the nanograin microstructure of the USR sample's surface and indicated that the nanostructured surface layer is composed of martensite and austenite grains with random crystallographic orientations. The grain size ranged from approximately 6 to 15 nm, which was comparable with that generated by SMAT [29], while being much smaller than that generated by SMRT [17]. The equiaxed nanograins were generated through dynamic recrystallization under the severe plastic deformation induced by USR. The detailed formation of the nanograins in coarse-grained polycrystals by USR involved various dislocation activities and the development of grain boundaries, similar to the results obtained by SMAT [27]. The high density of dislocations and dislocation tangles within the austenitic grains in the plastic deformation zone provided evidence for the above inference (Figure 4b). Moreover, increased deformation twinning was visible in the plastic deformation zone (Figures 3b and 4c), which indicated that mechanical twinning played an important role in the formation of the nanograins. Because 316 L stainless steel exhibits a low stacking fault energy, mechanical twinning prevails in severe plastic deformation. The high plastic strain imposed by USR played a crucial role in the formation of the nanostructure, as it facilitated dislocation generation and suppressed dislocation annihilation [30]. Surface nanocrystallization induced by USR has been found in several metals, such as 40Cr [24], 304 stainless steel [31], and Ti6Al4V [32]. The present nanostructured surface layer was relatively thin because of the limited plastic deformation imposed by the single-pass USR; a thicker nanostructured surface layer could be obtained by increasing the number of USR passes.
Mechanical and Wear Behaviors
Figure 5a shows the surface Vickers hardness of the as-received and USR samples. The USR sample experienced a 61% increase in hardness (from 177 HV for the as-received specimen to around 290 HV). The improved hardness was ascribed to the grain refinement and the high dislocation density of the nanostructured surface layer induced by USR. Herein, the strengthening or hardening from grain refinement made the major contribution, which can be expressed by the well-known Hall-Petch equation. Figure 5b shows the wear mass loss of the USR sample, compared to the as-received sample, at two different wear speeds. It was apparent that the USR-treated sample suffered a lower wear mass loss, indicating a higher wear resistance. The advantage of the USR sample was especially prominent at the high wear speed, at which its wear mass loss was reduced by ~20 times compared to the as-received sample. Moreover, the wear loss of the as-received sample increased significantly with an increase in wear speed, whereas the USR sample showed a less significant speed-dependent wear behavior.
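For reference, since the text invokes the Hall-Petch equation without stating it, its standard hardness form is (the paper gives no 316 L-specific constants, so the coefficients below are left symbolic):

```latex
H = H_0 + k_H \, d^{-1/2}
```

where H is the hardness, d is the mean grain size, H_0 is the intrinsic lattice-friction hardness, and k_H is the Hall-Petch coefficient; refining d from the micrometer to the nanometer scale raises H accordingly.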
SEM observations and EDS analysis of the worn surfaces were conducted to understand the material removal and to dissect the wear mechanisms. Figures 6 and 7 show SEM micrographs of the typical worn surfaces of the USR sample and the as-received sample at two different wear speeds, respectively. At the low speed (200 rpm), both the as-received and USR samples displayed abrasive wear and oxidation, marked by obvious furrow traces and a large oxidation zone, as shown in Figures 6a,b and 7a,b. The obviously increased oxygen content in the black zone on the wear track implied the occurrence of oxidation induced by friction heat, as shown in Figures 6e,f and 7e,f. A trivial plastic flow was also discernible, implying the occurrence of mild adhesive wear. At the high speed (400 rpm), the relatively smooth wear track and furrow traces demonstrated slight adhesive and abrasive wear for the USR sample, as illustrated in Figure 6c,d. In contrast, severe plastic flow and adhesive wear occurred on the wear tracks of the as-received sample, as shown in Figure 7c,d. The severe plastic flow and adhesion phenomena were related to the relatively low microhardness of the as-received sample compared with its friction couple. The prevailing oxidation wear at the low wear speed was generally related to the initial oxidation films on the metal surfaces: both the raw AISI 316 L stainless steel and its AISI 52100 steel friction couple were covered by an oxidation film. Therefore, wear at the low speed indeed occurred between the two oxidation films, and thus oxidation prevailed, but the wear mass loss was low. Increasing the wear velocity led to direct metal-to-metal contact; thus, abrasive wear became dominant, accompanied by a high wear mass loss. These findings were consistent with those of a previous report on the wear of steel [33].
Due to its low hardness, 316 L stainless steel generally shows a low wear resistance, especially when abrasive wear is dominant. After the USR processing, a significantly improved hardness was obtained because of the resultant nanostructured surface layer. This hard, nanostructured surface layer can withstand severe adhesive wear and plastic flow and thus played a critical role in enhancing the wear resistance of 316 L stainless steel. Furthermore, an additional roughness test was performed: the average surface roughness was Ra 0.04 µm and Ra 0.8 µm for the USR sample and the as-received sample, respectively. This demonstrated that USR reduced the surface roughness, which was consistent with previous reports on other metals [24]. Generally, a roughened surface tended to reduce the wear mass loss; this reinforced the critical role of the nanostructured surface layer in enhancing the wear resistance of 316 L stainless steel. However, the initial roughness was completely changed after the wear test (Figure 6) because of the severe oxidation and abrasive wear. Therefore, we considered that the roughness had a less significant influence on the wear of 316 L stainless steel under the present conditions. Moreover, different subsurface damage and cracking may be related to the enhanced wear resistance of the USR sample, which will be further addressed in future works.
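The inverse relationship between hardness and wear loss that underlies this argument can be made explicit with Archard's wear law, which the authors do not invoke but which is the standard model for dry sliding wear:

```latex
V = K \, \frac{F_N \, s}{H}
```

where V is the worn volume, K a dimensionless wear coefficient, F_N the normal load, s the sliding distance, and H the hardness of the softer contacting surface. All else being equal, raising H from 177 HV to 290 HV would cut the predicted wear volume by roughly 40%, consistent in direction (though not in magnitude) with the measured reduction in mass loss.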
Conclusions
In this paper, the single-pass ultrasonic surface rolling process was used to modify the surface of AISI 316 L stainless steel to improve its surface properties. The main conclusions that were drawn are as follows:
• A nanostructured surface layer, with a depth span of 15 µm, was fabricated on AISI 316 L stainless steel through USR processing.
• USR 316 L stainless steel exhibited a significantly improved hardness and enhanced wear resistance, as well as a decreased surface roughness, compared with the as-received sample.
• Oxidation and abrasive wear, accompanied by mild adhesive wear, dominated the wear of USR 316 L stainless steel at both low and high wear speeds.
• The superior wear-resistant performance of USR 316 L stainless steel was attributed to the nanostructured surface layer, which had a high hardness and thereby withstood the severe abrasive wear.
Figure 1. Schematic illustration of the ultrasonic surface rolling (USR) setup.
Figure 2. Schematic illustration of the ring-on-block wear test principles.
Figure 3. Typical cross-sectional optical microscopy (OM) of (a) the as-received sample and (b) the USR sample in the top-surface layer.
Figure 4. Bright-field TEM image of the USR sample (a) in the top-surface layer and (b,c) in the deformation zone. The inset in (a) shows the corresponding SAED patterns.
Figure 5. The hardness (a) and wear mass loss (b) of the USR sample, compared to the as-received sample.
Figure 6. SEM images of the typical worn surfaces of the USR sample at 200 rpm (a,b) and 400 rpm (c,d). The right-hand graph is an enlargement of the left-hand graph. The EDS analysis results (e,f) were obtained from the regions marked 1 and 2 in (b).
Figure 7. SEM images of the typical worn surfaces of the as-received sample at 200 rpm (a,b) and 400 rpm (c,d). The right-hand graph is an enlargement of the left-hand graph. The EDS analysis results (e,f) were obtained from the regions marked 1 and 2 in (b).
"Materials Science"
] |
Not-so-adiabatic quantum computation for the shortest vector problem
Since quantum computers are known to break the vast majority of currently used cryptographic protocols, a variety of new protocols are being developed that are conjectured, but not proven, to be safe against quantum attacks. Among the most promising is lattice-based cryptography, where security relies upon problems like the shortest vector problem. We analyse the potential of adiabatic quantum computation for attacks on lattice-based cryptography, and give numerical evidence that even outside the adiabatic regime such methods can facilitate the solution of the shortest vector problem and similar problems.
INTRODUCTION
The advent of quantum computers heralds an age of new computational possibilities. Two paradigms of quantum computing are the gate model and adiabatic quantum computation (AQC): the gate model closely resembles current computing architecture, replacing bits with qubits and retaining control over the smallest building blocks of the system, while in AQC the solution of the problem to be solved is encoded into the ground state of a Hamiltonian [1,2]. Typically one cannot prepare this ground state directly; otherwise the problem would be straightforward to solve. One therefore begins with a physical system with a Hamiltonian whose ground state one knows how to prepare. The adiabatic theorem then guarantees that a sufficiently slow change from this initial Hamiltonian to the problem Hamiltonian lets the system evolve into the ground state of the latter.
Both paradigms have been demonstrated to be equivalent [3], though there is no general way of mapping from one paradigm to the other. The most impactful quantum algorithm discovered thus far is that of Shor for Integer Factorisation and Discrete Logarithm [4]. Quantum computing is expected to have far-reaching consequences, influencing materials science [5], the development of medicines [6], and many other disciplines. Crucially for information security, though, large-scale quantum computers, through application of Shor's algorithm, will make obsolete most currently operational cryptosystems by solving the underlying mathematical problems that are intractable on classical hardware.
A. Cryptography
When two parties (Alice and Bob) want to communicate securely over an insecure channel they must use public key cryptography. In this case Alice has a public/private key pair. Anyone can encrypt messages using the public key, but only Alice can decrypt these messages as only she knows the private key. This means Bob can communicate securely without having to already share a secret with Alice. Generally speaking, the public key is derived from the secret key in a manner which is not easily reversible (this is called a trapdoor). Public key cryptography is not efficient, and so is mostly used for exchanging an initial secret securely, from which point onwards Alice and Bob can use more efficient private key cryptography, which is not relevant to this paper.
Some of today's most prevalent public key cryptosystems are RSA, Diffie-Hellman key exchange, and ElGamal, the security of which relies on the hardness of Integer Factorisation and Discrete Logarithm [7][8][9]. Reverse engineering the secret key from only the public key and other public information amounts to cracking the cryptosystem, and this is what Shor's algorithm allows us to do for the schemes listed above.
These developments have necessitated the creation of entire new families of cryptosystems, and the corresponding field of post-quantum cryptography [9]. The security of each family is based on the hardness of one of a handful of 'contender problems'. One of these families is Lattice-Based Cryptography (or LBC). LBC is the most promising area, accounting for over half of the remaining candidate systems in the NIST Post-Quantum Cryptography Standardization process. Lattice-based constructions derive their security from the Shortest Vector Problem (more in Section II) and other closely related problems [10]. At present these problems are only conjectured hard, i.e. there is no proof that quantum computers cannot solve them in polynomial time (BQP); there is only an absence of algorithms that can do so either provably or heuristically. It is therefore essential to analyse the security of post-quantum cryptosystems, so as to either verify or disprove their resilience against attacks that may be aided by quantum logical elements.
Existing quantum approaches to lattice problems mostly add quantum subroutines to preexisting algorithms [11,12], which predominantly fall under either 'sieving' or 'enumeration'. Sieving takes a large basket of vectors and iteratively combines them to obtain smaller and smaller vectors, whereas enumeration evaluates all vectors in a ball around the origin. Central to these gate model algorithms is the quantum Fourier transform (QFT). The most popular approach has been the use of Grover search to quadratically speed up the search of unsorted lists in these algorithms. In 2015, however, it was observed that Grover search could not be applied to enumeration [11], but recently a quantum tree algorithm [13] was utilised to achieve a square-root speed-up of lattice enumeration with discrete pruning [14]. The QFT is also a key component of quantum hidden subgroup algorithms, which have been applied to the shortest vector problem (SVP) on ideal lattices (structured lattices embedded in algebraic fields) [15,16].
Solving this lattice problem is in essence reverse engineering a private key from the public key and other public information (i.e. reversing the trapdoor process previously mentioned), hence compromising any cryptosystems based on the hardness of this problem (and other related problems). The Learning with Errors cryptosystem [17], and many LBC trapdoor functions [18], for example, can be shown to be at least as hard to break as various lattice problems. The concept behind Learning with Errors is the addition of Gaussian noise to a lattice equation which is otherwise easy to solve via systems of linear equations. An important innovation was the introduction of the Smoothing Parameter [19], which describes how much noise can be added before the structure of the lattice is lost and the problem becomes meaningless. This technique of using noise to obfuscate solutions could make LBC a fruitful field in which to apply optimisation algorithms such as those enabled by AQC. At present the lattice community is still a long way from breaking these cryptosystems, as they tend to use lattices in hundreds of dimensions, and the best algorithms at the time of writing scale exponentially in the dimension parameter.
The appeal of focusing on gate-model quantum algorithms is the rigorous complexity analyses that can be performed to give theoretical scaling. So far none of these gate-model algorithms threaten LBC. There has not yet been any work done on adiabatic quantum algorithms for LBC. Even though time complexity is generally difficult to estimate for this class of algorithms, they seem particularly suitable for attacks on LBC for two reasons: firstly, because lattice problems can be formulated as optimisation problems [20], as we will demonstrate; secondly, while a major drawback of AQC is the prohibitive time cost of achieving adiabaticity, this may not be a problem here as, up to a threshold, approximate solutions are also admissible. This is significant as it means it is not necessary to achieve adiabaticity, thereby potentially avoiding the major time constraints associated with AQC. In this paper, we therefore employ AQC-style algorithms, but with sub-adiabatic time parameters.
In this work we demonstrate a mapping from the Euclidean norm of a vector to the energy of an ultra-cooled bosonic gas in a potential trap. To do so we use a generalised Bose-Hubbard Hamiltonian to describe the energy of the quantum system. We then present an AQC algorithm for solving one of the central lattice problems and analyse its performance on several instances of low dimensional lattices.
C. Structure
Section II introduces lattices and explains the shortest vector problem for which the algorithm is designed. It then covers the necessities regarding Adiabatic Quantum Computing (AQC). Section III outlines the Hamiltonian we will use; we then build the mapping from lattice vector norms to the system Hamiltonian, ultimately combining this into one SVP algorithm. In Section IV we analyse both analytical scaling and simulation results.
II. PRELIMINARIES
The length of a vector is defined in terms of a norm, the $l_p$ norm of $x \in \mathbb{R}^N$ being $\|x\|_p = \left(\sum_{i=1}^{N} |x_i|^p\right)^{1/p}$. Any value of $p \geq 1$ can be taken, but common choices are $p = 2$ (the Euclidean norm) and the infinity norm $l_\infty$, with $\|x\|_\infty = \lim_{p\to\infty} \|x\|_p = \max_i |x_i|$. For any choice of $p$ there are two shortest vectors (as lattices are symmetric about the origin), but as these are the same up to sign, we refer to 'the', and not 'a', shortest vector, the length of which is denoted $\lambda_1(L)$.
When talking about approximation factors, we say $\gamma = \mathrm{poly}(N)$ if $\gamma$ grows asymptotically as $O(N^k)$ for some constant $k$, and $\gamma = \exp(N)$ if $\gamma$ grows asymptotically as $O(k^N)$ for some constant $k$. Similarly, we say an algorithm takes polynomial time if it requires $\mathrm{poly}(N)$ operations to complete, and exponential time if it requires $\exp(N)$ polynomial-time operations to complete.
The dot product of two $N$-dimensional vectors is the canonical inner product on Euclidean space, given by $x \cdot y = x_1 y_1 + \dots + x_N y_N$.
A. Lattices
Lattices, simply put, are a repeating pattern of points in space. In two dimensions this looks similar to Fig 1, which shows that the same lattice can be described by multiple different bases (the red arrows and the green arrows are just two different bases; in fact there are infinitely many different bases for any given lattice). All bases must have the same volume, but some contain much longer vectors than others, as can be seen by comparing the length of the red arrows with the length of the green arrows.

Definition 1 Lattice: given a basis $B = \{b_1, ..., b_k\}$, the lattice generated by $B$ is the set of all integer linear combinations of the basis vectors, $L(B) = \{\sum_{i=1}^{k} x_i b_i : x_i \in \mathbb{Z}\}$, where the $b_i$ are linearly independent and the lattice is embedded in the ambient space $\mathbb{R}^N$ for some $N \geq k$. The lattice is said to be full rank if $N = k$. Cryptographically, full rank lattices are the most relevant, and also the hardest for a particular dimension of ambient space. Because of this, for the rest of this paper we will deal only with full rank integer lattices, i.e. those lattices for which the basis vectors have integer coordinates $b_i \in \mathbb{Z}^N$. Throughout the rest of the paper we will treat $B$ as a row basis; there is no standardised convention in the cryptographic community (column bases vs row bases) and this choice is generally down to the author's preference. These lattices are in fact (a subset of) Euclidean lattices. The central problem that we set out to address is the shortest vector problem (SVP), which is simply the task of finding the shortest non-zero lattice vector.
Definition 2 Shortest Vector Problem: let $\lambda_1(L)$ denote the length of the shortest nonzero vector in a lattice $L$. Given a basis $B = \{b_1, ..., b_N\}$ describing $L$, find a nonzero vector $v \in L$ such that $\|v\| = \lambda_1(L)$.

Given a lattice $L$ determined by a basis $B$, every vector $v$ in the lattice can be described as a linear combination of the basis vectors, $v = x \cdot B$, as in Def 1. We will call this linear combination $x$ the coefficient vector, and denote the coefficient vector that achieves the shortest vector $x_{min}$, so that $\|x_{min} \cdot B\| = \lambda_1(L)$. In the following we will denote the coordinates of a coefficient vector $x$ as $x_i$ and the coordinates of $x_{min}$ as $x_i^{min}$ to avoid confusion of subscripts. A variant of SVP is γ-approximate SVP.
Definition 3 SVP$_\gamma$: given a basis $B = \{b_1, ..., b_N\}$ describing a lattice $L$, find a nonzero $v \in L$ such that $\|v\| \leq \gamma \cdot \lambda_1(L)$. Cracking this problem is also conjectured hard, and solving it would be considered fatal for LBC. It is this problem that this work targets, as the quantum algorithm from Section III outputs a distribution over short vectors, as discussed in Section IV. As such, even if a quantum algorithm for finding the shortest vector is not feasible, finding somewhat short vectors may scale significantly better.
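To make Definitions 2 and 3 concrete, the following minimal sketch brute-forces SVP for a small row basis by enumerating coefficient vectors in a box; the basis B and the box size are hypothetical toy choices, not taken from this paper, and the approach is only feasible in very low dimensions.

```python
import itertools
import numpy as np

def shortest_vector(B, box=3):
    """Brute-force SVP for a small row basis B: enumerate coefficient
    vectors x in [-box, box]^N and return the shortest nonzero x @ B."""
    N = B.shape[0]
    best, best_norm = None, np.inf
    for x in itertools.product(range(-box, box + 1), repeat=N):
        if any(x):                       # skip the zero vector
            v = np.array(x) @ B
            n = np.linalg.norm(v)
            if n < best_norm:
                best, best_norm = v, n
    return best, best_norm

B = np.array([[3, 5], [4, -1]])          # toy 2D row basis
v, lam1 = shortest_vector(B)
print(v, lam1)                           # lambda_1(L) for this toy lattice
```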
The format of the bases we work with to tackle the Shortest Vector Problem has an important bearing on the speed with which we can accomplish the task. With that in mind, we will outline the forms of basis that we utilise in this work.
A note on lattice bases: in LBC there is much talk of 'good' bases and 'bad' bases. It is important to distinguish the two and discuss their significance in solving the central problems and compromising lattice-based cryptosystems. A 'good' basis is comprised of short vectors which are approximately orthogonal to each other. The conditions of shortness and orthogonality are essentially the same, but they mean that good bases already contain short vectors. In a lattice-based cryptosystem one would generate a good basis as a private key (for example, in NTRU [21] and GGH [22], but not in LWE [17]), and scramble it (making it 'worse' and the vectors less orthogonal) to create a bad basis, which would serve as a public key. The instances that are of interest to us are those of bad bases, from which we hope to derive short vectors. Two types of bases that are relevant to us are Hermite Normal Form bases, which are useful in that they are upper triangular and allow us to perform some useful manipulation later in the paper, and the LLL-reduced bases which are used as a benchmark and a starting point for many lattice algorithms.
Definition 4 Hermite Normal Form (HNF): for any integer lattice row-basis $B$ of rank $N$ there exists a unique upper triangular basis $H$ of the same lattice which satisfies the following conditions: $H_{ij} = 0$ for $j < i$ (upper triangular form); the pivots (the first nonzero entry of each row from the left) $H_{ii}$ are positive; and the entries in the column above each pivot are reduced modulo the pivot, $0 \leq H_{ji} < H_{ii}$ for $j < i$.

HNF bases form a good starting point for some of the work in Appendix B. They are generally quite bad bases but have some nice properties which we will use. LLL-reduced bases [23] are better. The LLL algorithm runs in polynomial time, reducing bad bases to better ones; it outputs vectors which are exponentially larger than those which would be considered solutions to SVP$_\gamma$, but it is used as a benchmark in LBC cryptanalysis.
Definition 5 LLL-reduced basis: let $b_i^*$ denote the Gram-Schmidt orthogonalisation of the basis $B$, with coefficients $\mu_{ij} = (b_i \cdot b_j^*)/(b_j^* \cdot b_j^*)$. Then the basis $B$ is LLL-reduced if $|\mu_{ij}| \leq 1/2$ for all $j < i$, and there exists $\delta \in (0.25, 1]$ such that $\delta \|b_i^*\|^2 \leq \|b_{i+1}^* + \mu_{i+1,i} b_i^*\|^2$ for all $i$ (the Lovász condition). LLL-reduced bases are not unique; there are potentially many different bases satisfying these conditions for any given lattice. In this respect they are different from HNF bases, which are unique for each lattice.
Definition 6 Gram matrix: the Gram matrix $G$ of a row basis $B = \{b_1, ..., b_N\}$ is given by $G = B B^T$, i.e. $G_{ij} = b_i \cdot b_j$. The Gram matrix will be enough to define $H_P$ entirely, since $G_{ij}$ is the dot product $b_i \cdot b_j$, and so it will be used regularly in the following work.
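A quick numerical check of Definition 6 and the norm identity it encodes; the basis and coefficient vector below are arbitrary toy values.

```python
import numpy as np

B = np.array([[3, 5], [4, -1]])      # toy row basis
G = B @ B.T                          # Gram matrix, G[i, j] = b_i . b_j

x = np.array([2, -1])                # an integer coefficient vector
v = x @ B                            # the corresponding lattice vector
assert np.isclose(v @ v, x @ G @ x)  # ||x B||^2 = x G x^T
```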
B. Hamiltonian Evolution
The model system that we will use in the following for AQC algorithms is based on the Bose-Hubbard Hamiltonian describing bosonic particles in potential landscapes with sufficiently well pronounced minima that can be identified as sites [24][25][26][27]; in practice these sites often form a periodic structure, as depicted in Fig 2. The explicit Hamiltonian is comprised of a tunnelling term $H_0$, an interaction term $H_I$, and an onsite-energy term $H_C$. The tunnelling term, defined in terms of the annihilation and creation operators $a_i$ and $a_i^\dagger$ of a particle at site $i$, is typically restricted to tunnelling processes between neighbouring lattice sites, but tunnelling between other pairs of sites would also be fine for present purposes.
The interaction term, with $\hat{n}_i = a_i^\dagger a_i$, and the onsite term together define the problem Hamiltonian, and the interaction constants $v_{ij}$ and onsite energies $\mu_i$ will be determined by the underlying Euclidean lattice. In particular, it will be essential to consider not only onsite interactions $\sum_i v_{ii} \hat{n}_i(\hat{n}_i - 1)$ and interactions between neighbouring sites, but also long-range interactions.
With the choice of f (0) = 1 and g(0) = 0, the system Hamiltonian contains initially only the tunnelling term. It has comparatively simple eigenstates, and the system can thus be initialised in its ground state. As soon as the values of f (t), g(t) differ from their initial values, the system state will start to evolve in time, but the system will remain in the instantaneous ground state of its current Hamiltonian if the values of f (t), g(t) change sufficiently slowly [28]. Given the validity of such adiabatic dynamics, the system will thus end up in the ground state of the problem Hamiltonian H I + H C at the final point in time with f (T ) = 0 and g(T ) = 1.
Since, besides the requirement of sufficiently slow changes, there are no further restrictions on f(t) and g(t), there is a continuum of possible sweeps. Knowledge of the spectrum of the underlying Hamiltonian could be used to find functions that make the adiabatic approximation particularly good. Since, however, the AQC should be applicable to the case in which finding this spectrum is beyond computational capabilities, we will not assume any suitably chosen functions, but simply use a linear sweep throughout the rest of this paper.
III. QUANTUM ALGORITHM
In this section we formulate the quantum SVP algorithm and detail the mapping from vector norms to the Hamiltonian of Eq (3).
A. Problem Hamiltonian to the $l_2$ norm

The interaction term has an explicit distinction between the interaction $v_{ii}\hat{n}_i(\hat{n}_i - 1)$ of particles at the same site and the interaction $v_{ij}\hat{n}_i\hat{n}_j$ between particles at different sites. This distinction is necessary because $\hat{n}_i$ particles interact only with the remaining $\hat{n}_i - 1$ particles at the same site $i$, whereas $\hat{n}_i$ particles at site $i$ interact with all $\hat{n}_j$ particles at site $j$. On the other hand, there is the onsite interaction term, and the onsite energies can always be chosen such that they compensate for the difference between the onsite and offsite interactions, i.e. such that $\mu_i = v_{ii}$, giving the problem Hamiltonian

$$H_P = \sum_{i,j} v_{ij} \, \hat{n}_i \hat{n}_j,$$

which maps to the $l_2$ norm of a vector in a natural fashion.
A vector $v \in L \setminus \{0\}$ can be written as a unique integer combination of the basis vectors, $v = x \cdot B = \sum_i x_i b_i$, remembering that $B$ is a row basis for $L$ whose rows are the $b_i$. Expanding the square of the Euclidean norm of this vector term by term, we have

$$\|v\|^2 = \Big\|\sum_i x_i b_i\Big\|^2 = \sum_{i,j} x_i x_j \, (b_i \cdot b_j).$$

Referring to (12), this form neatly fits that of the problem Hamiltonian $H_P$ with the identification of $v_{ij}$ with the scalar product $b_i \cdot b_j$ of two basis vectors, and of $\hat{n}_i$ with the integer expansion coefficients $x_i$. Generally, an experimentally observable expectation value of the particle number at any given site does not need to be an integer, but at the end of the algorithm, where the tunnelling term is vanishing, any local particle number is indeed well defined without quantum fluctuations, so that the identification of $\hat{n}_i$ with $x_i$ is justified.
In terms of Def 6, the problem Hamiltonian for the AQC thus reads

$$H_P = \sum_{i,j} G_{ij} \, \hat{n}_i \hat{n}_j,$$

where all the interaction constants $G_{ij}$ are defined by the basis $B$, and any positive number of particles can be found at any site, subject to availability of particles. Running this algorithm with $K$ particles, they could theoretically occupy the sites in any non-negative combination summing to $K$, resulting in a Hilbert space of $D$ Fock states [29], where

$$D = \binom{K + N - 1}{K},$$

and the Fock state with the lowest energy of $H_P$ has a configuration of particles that, when interpreted as the coefficient vector $x$, gives the shortest possible vector norm $\|v\|$ under the constraint that $\sum_i x_i = K$. See Appendix C for a worked-through example demonstrating the theory outlined up to this point.
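A minimal classical sketch of the counting and the minimisation just described, assuming a toy basis: it enumerates the D Fock states for K particles on N sites and evaluates the diagonal H_P energies x G x^T.

```python
import itertools
import numpy as np

def fock_states(N, K):
    # all non-negative occupation vectors of N sites summing to K
    return [n for n in itertools.product(range(K + 1), repeat=N) if sum(n) == K]

B = np.array([[3, 5], [4, -1]])    # toy row basis (an assumption, not from the paper)
G = B @ B.T
N, K = 2, 3

states = fock_states(N, K)
print(len(states))                 # D = C(K + N - 1, K) = C(4, 3) = 4

# H_P is diagonal in the Fock basis, with energies x G x^T
energies = {n: np.array(n) @ G @ np.array(n) for n in states}
n_min = min(energies, key=energies.get)
print(n_min, np.array(n_min) @ B)  # minimal-norm lattice vector with sum(x) = K
```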
B. Adaptation to Negative Coefficients
The mapping so far transforms the Euclidean norm squared of a general lattice point $v = x \cdot B$ into the problem Hamiltonian energy, where each coefficient $x_i$ is the number of particles $\hat{n}_i$ at site $i$. The dilemma that this presents is that it only permits non-negative values for each of the $x_i$. This is not a problem, however, as we show next how to modify the physical system such that $H_P$ generalises so as to return solutions that relate to negative $x_i$ values.
The solution that we propose is to add $Nm$ extra particles to the system ($m$ particles for each site), and then by a change of variables use these particles as an offset, thereby permitting negative coefficients $x_i$. The coefficients $x_i$ can now take values as low as $-m$, which occurs if the particle number at site $i$ is zero. If there are $m$ particles at site $i$ then $x_i = 0$, and so on. The new particle number can be written $\hat{n}_i' = \hat{n}_i + m$. Denote the new problem Hamiltonian $H_P'$. Upon substituting this change of variables into Eq (12), the new problem Hamiltonian becomes

$$H_P' = \sum_{i,j} G_{ij} \, \hat{n}_i' \hat{n}_j' - 2m \sum_i \Big(\sum_j G_{ij}\Big) \hat{n}_i' + m^2 \sum_{i,j} G_{ij}.$$

To obtain the desired minimisation of the problem Hamiltonian $H_P$ from $H_P'$, the chemical energy at each site thus needs to be reduced by a function of the column sum of the interaction matrix. The final term, being constant, can be corrected at a later stage so as to return the correct short lattice vectors, but it does not affect the energy spectrum of the Hamiltonian (other than by a constant shift) or, consequently, which configuration of particles minimises the system energy.
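The change of variables can be checked numerically. The sketch below verifies, for an arbitrary toy basis and occupation vector, that the shifted energy (linear and constant terms included) reproduces ||x B||^2 with x = n - m.

```python
import numpy as np

B = np.array([[3, 5], [4, -1]])   # toy row basis
G = B @ B.T
m = 2                             # offset particles per site
n = np.array([1, 3])              # occupation numbers; recovered x = n - m = (-1, 1)

shifted = n @ G @ n - 2 * m * (G.sum(axis=1) @ n) + m**2 * G.sum()
x = n - m
assert np.isclose(shifted, x @ G @ x)   # equals ||x B||^2
```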
To guarantee that the shortest vector lies in the solution set, the offset $m$ must be larger than the infinity norm of the coefficient vector $x_{min}$. That is, $m \geq \|x_{min}\|_\infty$, where $\|x_{min} \cdot B\| = \lambda_1(L)$.
C. Multi-Run Quantum SVP
Above we have defined a mapping from the Euclidean length of a vector to the energy of an ultra-cooled bosonic gas trapped in a potential landscape. But choosing the parameters for the total particle number $K$ and the length of the time evolution $T$, then performing the quantum algorithm, is not enough on its own to obtain the shortest vector. One run of the algorithm described above contains $K$ particles, but the Fock states in the solution space may not correspond to the required linear combination $x_{min}$.
Take for example a 2D lattice basis for which the shortest vector is determined by $x_{min} = (3, 0)$; then $\lambda_1(L) = \|3 b_1\|$ is only found in a run with $K = 3$ particles. To obtain the shortest vector using the algorithm detailed above (assuming no prior knowledge about $x_{min}$, and for simplicity setting the offset $m = 0$), one would first run with particle number $K = 1$, then with $K = 2$, $K = 3$, and then possibly repeat a few more times to be sure the shortest vector has indeed been found. In this way, a search for the shortest vector consists of running the algorithm many times, each time incrementing $K$ by 1, until confident that there are no shorter vectors to be found. The output of this algorithm, if performed adiabatically, will be a collection of coefficient vectors $x$ and the resulting lattice vectors $v = x \cdot B$, each of which is the shortest lattice vector possible for a particular choice of $K$, and among these samples will be the sought-after shortest vector of length $\lambda_1(L)$.
The Multi-Run algorithm ensures that, with well chosen $m$, $K_{max}$, and sweep times, $\lambda_1(L)$ will definitely be correctly identified. This is after performing the sweeps for particle numbers $K_i = Nm + i$ up to $K_c = K_{max} = Nm + c$. The drawback of this method is that it is difficult to analyse rigorously, due to the approximations required for the number of runs needed to ensure the presence of $x_{min}$ in the solution sets.
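A classical stand-in for Multi-Run, with exhaustive minimisation replacing each quantum sweep; the basis and parameters are assumed toy values.

```python
import itertools
import numpy as np

B = np.array([[3, 5], [4, -1]])   # toy row basis
N, m, K_max = 2, 2, 8
best, best_sq = None, np.inf

for K in range(1, K_max + 1):                      # one "run" per particle number
    for n in itertools.product(range(K + 1), repeat=N):
        if sum(n) != K:
            continue
        x = np.array(n) - m                        # offset-shifted coefficients
        if not x.any():
            continue                               # ignore the zero vector
        v = x @ B
        if v @ v < best_sq:
            best, best_sq = v, v @ v

print(best, np.sqrt(best_sq))                      # candidate for lambda_1(L)
```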
In the next part we present a more elegant all-in-one algorithm in which the many different runs of this algorithm are combined into one larger run. It offers an O(n) improvement in space, but at the cost of $\lambda_1(L)$ no longer corresponding to the ground state, but instead to the first excited state.
Appendix C illustrates what one of the runs would look like for a 2D lattice with no offset (m = 0).
D. Single-Run Quantum SVP
The aim of this algorithm is to generalise Multi-Run into one overarching algorithm (Single-Run) that encompasses all of the repetitions executed during the Multi-Run algorithm. Whereas before the coefficient vectors (in 2D) (1, 0) and (1, 1) could be obtained only from separate runs, now the aim is to include all possible coefficient vectors in one solution space. Instead of repeating sweeps with a different particle number K many times, only one sweep is performed, the solution space of which includes all possible solutions from the Multi-Run version.
What we propose is to introduce an extra site in the potential landscape, corresponding to the zero vector. Label this site $N + 1$. This new site should act as a 'particle reservoir' and NOT influence the energy of the system directly. For the Euclidean lattice, one appends the zero vector to the basis $B$, as defined in Eq (17). Denote the particle number for the Single-Run version $K_S$. If the process is run with $K_S \geq K_{max}$ total particles, then the set of configurations where no particles are in site $N + 1$ corresponds to one run of the previous algorithm with $K_S$ particles; the set of configurations with one particle in site $N + 1$ corresponds to a run of the previous algorithm with $K_S - 1$ particles, and so on. It is important to remember that the ground state no longer corresponds to the shortest vector but to the zero vector ($m$ particles in each of the first $N$ sites and all the remaining particles in site $N + 1$ returns $0$, which has the lowest energy), and so an adiabatic evolution is no longer desirable. We analyse the implications of this in the following sections.
Accordingly, the Single-Run problem Hamiltonian $\tilde{H}_P$ takes the same form as $H_P'$, but with the site indices running over all $N + 1$ sites,

$$\tilde{H}_P = \sum_{i,j=1}^{N+1} G_{ij} \, (\hat{n}_i - m)(\hat{n}_j - m), \qquad G = \tilde{B}\tilde{B}^T,$$

for the extended row basis $\tilde{B} = \{b_1, ..., b_N, 0\}$. The problem Hamiltonian in Eq (16) will be the one used for the rest of the paper, unless otherwise stated, including in all numerical simulations. The size of the Hilbert space $D_S$ for Single-Run Quantum SVP (letting $K_S = K_{max}$ to make the two modes of computation directly comparable) is, either by use of the hockey-stick identity or by direct application of Eq (13) with $N + 1$ sites,

$$D_S = \binom{K_{max} + N}{K_{max}} = \sum_{K=0}^{K_{max}} \binom{K + N - 1}{K}.$$

Having added an extra 'particle reservoir' site and accordingly appended $0$ to the basis, the lowest energy state of the problem Hamiltonian is the unwanted $0$ state, but the energy of the first excited state will correspond to $\lambda_1(L)$.
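The hockey-stick identity invoked for D_S can be verified directly with exact integer arithmetic (N and K_max below are arbitrary test values):

```python
from math import comb

N, K_max = 4, 12
# N + 1 sites holding K_S = K_max particles, versus the union of Multi-Run spaces
D_single = comb(K_max + N, K_max)
D_multi = sum(comb(K + N - 1, K) for K in range(K_max + 1))
assert D_single == D_multi
print(D_single)
```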
IV. RESULTS
Little is known analytically about the time scaling of adiabatic quantum algorithms beyond a worst-case energy-gap dependence of $1/\Delta^3$ [30], whereas for quantum gate algorithms neat closed-form scalings are known for a handful of algorithms, for example Shor's exponential speed-up for Integer Factorisation and Discrete Logarithm [4] and Grover's quadratic speed-up for searching unsorted lists [31]. For adiabatic quantum optimisation, it is not yet even known whether these algorithms run faster than classical optimisation [32]. Due to the dependence of the run time on the minimum energy gap between $E_0$ and $E_1$, it is usually found that the run time grows exponentially with the problem size, with constants $\alpha$, $\beta$ characterising the exponent, though this leaves open the possibility of drastically reducing the run time subject to achieving lower values of $\alpha$, $\beta$ than their classical analogues. While it is difficult to estimate the time scaling for this algorithm, or even for which parameter regimes this scaling would be optimal (near-adiabatic versus much faster sweeps, for example), we can calculate the qubit space requirements (though we do not directly use qubit-based architecture).
A. Qubit Requirements
Let us estimate the required system size. This is a function of the number of sites and the number of particles $K_S$. The former is predetermined (it is $N + 1$) but one can choose the latter. The aim is to make the system just large enough (i.e. to pick $K_S$) so that $x_{min}$ is one of the possible configurations of particles in $N + 1$ sites with high probability.
Firstly $m$, the offset, must be chosen. We derive this by taking an estimate for the infinity norm of $x_{min}$. This can be seen in Fig 3 for HNF bases (chosen to err on the side of safety, as HNF bases have long vectors and will give larger values). One can see that the average of the infinity norms grows linearly for HNF bases (see the red best-fit line), so we approximate $m$ to be some linear function of $N$. This means that $K_S$ is already up to $m(N + 1)$ particles.
The only other consideration is the sum of the coefficient terms $k = \sum_{i=1}^{N} x_i^{min}$. Heuristically this grows linearly, as shown by the black best-fit line in Fig 3, and is less than $m$, so it can be ignored from this point on: in the instance that all $m(N + 1)$ particles reside in the first $N$ lattice sites (and none in the particle reservoir corresponding to the zero vector), the net coefficient sum would be $m > k$. This is because $Nm$ particles are acting as offset particles, leaving the remaining $m$ as the coefficient sum.
By considering the size of the solution space, we have analytically deduced that the qubit requirements scale as O(N log N ) as shown in Appendix A, which space-wise appears acceptable.
B. Empirical Results
The required particle number $K$ is determined by $\sum_{i=1}^{N} x_i^{min}$, which must be estimated. This reflects the fact that on average some coordinates of $x$ will be positive and some negative, cancelling out, but they will rarely cancel out entirely. The growth in the mean of $\sum_{i=1}^{N} x_i^{min}$ is linear, as demonstrated numerically in Fig 3, and stays below the estimated offset number $m$. The significance of this is that $m(N + 1)$ particles is a generous estimate for $K_S$. This means that taking $K_S = m(N + 1)$ gives a good chance of finding $x_{min}$ in the solution set.
To understand this, consider the system with $mN$ particles in the first $N$ sites and $m$ particles in site $N + 1$. The potential solutions are the same as those one would get from performing the Multi-Run version with $mN$ total particles. Now, with one more particle in the first $N$ sites and one fewer in site $N + 1$, the solutions are the same as those from Multi-Run with $mN + 1$ total particles. To ensure there are enough particles in the system, one must be confident of achieving up to $mN + k$ particles in the first $N$ sites.

The growth in $\|x_{min}\|_\infty$ for Hermite Normal Form bases appears heuristically linear, as can be seen from the red dashed best-fit line in Fig 3. Another group of cryptographically relevant bases are LLL-reduced bases. These are much better than HNF bases and are easy to obtain; they are often used as a first step in classical SVP routines [12,33]. LLL reduction in low dimensions (< 30 or so) was so efficient at finding the shortest vector that there were not enough data points to draw sound conclusions from, as can be seen in the lower blue scatter plot in Fig 3. We can, however, assert that the LLL-reduced case is upper bounded by the Hermite Normal Form case, and so we can still approximate $m$ to be a term linear in $N$. This is a sound assertion because any lattice basis can be transformed into an HNF basis in polynomial time.
C. Numerical Analysis
Ideally simulating this quantum SVP algorithm on lattices in many dimensions would give an empirical idea of scaling. Regrettably, simulating quantum systems is computationally very intensive due to the factorial growth of the Hilbert space and so these simulations were only possible for low dimensional lattices. Using the QuSpin python library we were able to simulate problems with Hilbert space sizes of up to ten thousand eigenstates (20 particles in 5 sites).
Nevertheless, it is insightful to consider the distribution over eigenstates (grouped where degenerate) for runs of different time lengths. We simulated the quantum SVP algorithm on 200, 150 and 100 lattices in two, three, and four dimensions respectively. Using standard 'bad' bases from the literature does not work well in small dimensions (both HNF and LLL reduction tend to return maximally reduced bases), so we generated our own as follows. For each lattice we generated a basis (call this the 'good' basis) and then scrambled it by some randomly generated unimodular matrix to obtain a worse basis. The average increase in basis vector length under the unimodular transformations was a factor of 12.06, 10.08 and 10.07 in dimensions two, three and four respectively. Basis vectors could not be increased by too much, otherwise the problems would have become intractable on our hardware. Note also that after generating the Hamiltonians in the QuSpin package we scaled the Hamiltonians so that they all occupied roughly the same spectrum of eigenvalues. The reason for this is that expanding the energy spectrum significantly increases success probabilities for quantum adiabatic algorithms, due to the dependence on the minimum energy gap. Scaling the Hamiltonians has a similar effect to altering the sweep times, which is what we want to analyse, and it is reasonable to expect that implementers of this algorithm would have access to the same energy spectrum regardless of the problem size. As such, we in a sense 'fixed' the spectrum and varied the sweep times to isolate the effect of the parameter $T$.
Mean Distribution over Eigenstates: Fig 4 shows the averaged results for the Single-Run quantum SVP algorithm of Section III. Each subplot represents a different choice of parameters, and shows the mean probability of observing the system in an eigenstate corresponding to the zero vector (index 0), the shortest vector λ 1 (L) (index 1, in red), the second, third etc shortest vectors and so on up to the twentieth shortest vector. What is clear to see is a high likelihood of the system being found in low-energy states. For slower sweeps (higher T values) this distribution becomes more concentrated around the lowest-energy states.
Paying particular attention to the red bars (representing the preferred solutions corresponding to λ 1 (L)) one can see that maximising the height of the red bar requires some nuance: sweep too slow and there is too high a chance of attaining the zero vector at an unacceptable time cost; too fast and the system will become excited to much higher energy levels with unacceptably low probabilities of observing the system in very low energy states.
The top row of Fig 4 displays the final results for parameter choices $N = 2$, $m = 3$, $T = 1, 10, 100$. This maps to a system of nine particles in three sites. Applying Eq (13), the total Hilbert space has 55 eigenstates. The bottom row of Figure 4 shows the results for parameter choices $N = 3$, $m = 4$, $T = 1, 10, 100$. This is a system of sixteen particles in four sites. The Hilbert space now has 969 eigenstates. For both $N = 2$ and $N = 3$, slower sweeps result in higher probabilities of the system terminating in the lowest eigenstates, and for $T = 100$ there is a very low likelihood of finding the system in anything but the ground state. It should also be observed that as the system size increases from $N = 2$ to $N = 3$, keeping $T$ constant, the probability of recording the system in any given low-energy state decreases.
Time Sweep Optimisation
While reliable time bounds for finding ground states of quantum adiabatic algorithms with good enough probability are much sought-after, and the general rule of thumb is 'longer is better', thought is shifting. There are instances, for example in the MAX 2-SAT problem, in which slow sweeps perform much worse than fast ones [34]. Here we instead consider faster sweeps and how they relate to algorithms where the exact ground state is not necessarily required.
The problem of observing the ground state of a system after a quantum annealing algorithm has been the subject of much research. In the pursuit of 'somewhat' low energy states much less is known. The algorithm outlined in Section III targets the first excited state, and furthermore, the ground state (corresponding to the zero vector) is of even less use than eigenstates of energy just above λ 1 (L), as at least these return a short non-zero contender. Bearing this in mind along with the observations from Fig 4 we thought to examine how targeting low-but-not-ground states versus ground state might differ.
The solid lines in Fig 5 represent the mean probability of returning $0$, $\lambda_1(L)$ and $\lambda_2(L)$, in blue, red and green respectively. The dashed lines reflect 90% confidence.
Looking first at the probability curves, there are a few interesting observations to be made. In two and three dimensions, exponentially slower sweeps result in higher probabilities of achieving a final ground state, as might be expected. These blue curves will continue to level off past the right of the axes, since the maximum probability of any state is one. But remarkably, targeting the first excited state appears to incur almost no success penalty for performing faster sweeps. In fact, the non-monotonicity of the red dashed line indicates that there is some 'Goldilocks' zone where evolutions are slow enough to achieve a good distribution over low energy states, but not so slow that $P(E_0)$ dominates the distribution. In Fig 5 this zone appears around $T = 2$ for 2D lattices and $T = 4$ for 3D lattices.
The case for 4D lattices looks quite different. The solid blue line is overtaken by the probabilities for $\lambda_1(L)$ and even $\lambda_2(L)$. To understand better what is happening, let us look at the probability distributions at a few different points in the four dimensional graph in Fig 5. To this end, Fig 6 presents the same information as that in Fig 4, but for more samples, and should be looked at closely in conjunction with the right-hand plot of Fig 5. Again, probabilities corresponding to $\lambda_1(L)$ are highlighted in red. While it is apparent there is some locally different behaviour corresponding to the ground state, Fig 6 shows that as sweeps become slower, probability density continues to accumulate around the lowest energy states, and if slow enough (though beyond our computational capabilities), would concentrate entirely on the lowest eigenstate. This behaviour is particularly promising for the quantum SVP algorithm in higher dimensions, as again one can see (the $T = 128$ subplot provides the best example) a significant concentration of probability around the ten lowest eigenstates (out of more than 10,000) without this distribution necessarily being dominated by the zero vector. Furthermore, the probability of achieving $\lambda_1(L)$ is considerably high relative to the surrounding eigenstates for slower sweeps, as demonstrated by the series of red bars in Fig 6. In order to explain some of these favourable characteristics, it serves to look at specific instances of the evolution of the system.
The quantum SVP algorithm targets the first excited state rather than the lowest energy eigenstate. This does not fit the traditional AQC framework, but is advantageous in that it permits faster sweeps. Scaling to larger systems, this could help to circumvent the prohibitive time cost of AQC algorithms. In sub-adiabatic regimes it is foreseeable that shorter sweep times could be employed at the cost of larger $\gamma$ approximations for SVP$_\gamma$, and vice versa.
V. DISCUSSION
We have introduced a quantum optimisation framework to the area of computationally hard lattice problems that may underpin tomorrow's cryptosystems. By examining some interesting properties of an AQC-style algorithm when targeting low-but-not-lowest energy states, we have identified the existence of a 'Goldilocks' zone for time sweep optimisation. This is particularly exciting for cryptanalysis of lattice-based cryptosystems, as the underlying problems often come in approximate, and not exact, form, as with the SVP$_\gamma$ analysed in this work. Among cryptographers it is thought that the approximate nature of lattice problems strengthens their post-quantum credentials, as the lack of determinism means quantum hidden subgroup algorithms cannot be applied. This 'proximity' property, however, may allow sub-adiabatic algorithms for such problems to overcome the costly time requirements of AQC, while still outputting acceptable solutions. Outside of cryptography, it should be observed that for many real-world problems an approximate solution is fine where exact solutions are intractable, and the 'Goldilocks' zone highlighted in this paper indicates that this may be where AQC-style algorithms most outperform classical alternatives.
The numerical analysis presented in Section IV offers an encouraging insight into how Hamiltonian simulation on higher dimensional instances may perform. The notion of mapping Euclidean distances into Hamiltonian energies is one that has many foreseeable applications in tackling lattice problems: there are similarities, for example, in the formulation of SVP γ and the approximate closest vector problem [10]. Many lattice problems are so closely related that a successful attack on one of them could be fatal for a number of LBC schemes, and so there are several areas one could apply the ideas laid out in this work.
Looking forward there are many interesting challenges to surmount. A major one is the issue of achieving better theoretical bounds on scaling complexity. One advantage of AQC is that time dependence relies on only one factor (minimum energy gap ∆) meaning the source of time cost is easy to understand. In a sub-adiabatic regime, however, modelling eigenstate transitions probabilistically could be a natural progression for theoretical analysis of AQC-style algorithms. The development of quantum hardware that can realise this generalised Bose-Hubbard Hamiltonian and assume particle-particle offsite interactions is a target for experimental physicists, and generalising AQC-style algorithms to run on different hardware -such as coherent Ising machines [35,36] -will become increasingly investigated as progress continues towards a post-quantum world.
ACKNOWLEDGMENTS
The authors would like to thank Adam Callison for helpful discussion. Alexandros Ghionis was supported through a studentship in the Quantum Systems Engineering Skills and Training Hub at Imperial College London, funded by EPSRC (EP/P510257/1).
Appendix A: Qubit scaling
With these heuristic scaling assumptions we can derive the following analysis for the Single-Run algorithm (the results for Multi-Run are similar). Using $m = cN$ for a linear constant $c$, there are $m(N + 1)$ particles in the system. Therefore the total particle number for Single-Run (denoted $K_S$) is $K_S = cN^2 + cN$.
The Hilbert space size with $P$ particles distributed among $Q$ sites is

$$D = \binom{P + Q - 1}{P}.$$

The qubit scaling equivalent is obtained by simply taking the base-two logarithm of this expression, with the values $P = cN^2 + cN$ and $Q = N + 1$ substituted in:

$$\log_2 D = \log_2 \binom{cN^2 + cN + N}{cN^2 + cN}.$$

By Stirling's approximation, and bounding the product term, the binomial coefficient is much less than $\sqrt{2\pi N}\,\big(e(cN + c + 1)\big)^N$, leaving a system size in qubit terms of $\log_2 D$ bounded above by $O(N \log N)$. There are no analytical time bounds; this is an active research area in the community. What we can do is provide some analysis for a Grover search algorithm over the same solution space, giving us some post-quantum context for the complexity to be expected. Given that the solution space scales as $N^N = 2^{N \log N}$, search using Grover's algorithm scales as $2^{\frac{1}{2} N \log N}$.
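The O(N log N) qubit estimate can be checked numerically; c = 1 below is an assumed value of the linear constant.

```python
from math import comb, log2

c = 1                                   # assumed linear constant, m = c N
for N in (4, 8, 16, 32, 64):
    P = c * N**2 + c * N                # total particle number K_S
    Q = N + 1                           # number of sites
    qubits = log2(comb(P + Q - 1, P))   # log2 of the Hilbert space size
    print(N, round(qubits, 1), round(N * log2(N), 1))   # compare with N log2 N
```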
Appendix B: Basis Band-diagonalisation
In order to guarantee that the offsite interaction terms $\gamma_{ij}$ in the problem Hamiltonian are nonzero only for small $|i - j|$, the basis must be altered to take a banded structure, using only operations that preserve the lattice. To demonstrate, consider the row span of a banded (tridiagonal) basis matrix: while there exist nearest-neighbour interaction terms ($\gamma_{12}$, $\gamma_{23}$), there is no $\gamma_{13}$ term, and so one can see how the banded structure eliminates offsite particle-particle interactions between far-away sites.
Our solution is to iteratively eliminate elements far away from the leading diagonal using an argument that relies on taking the greatest common divisor (gcd) of as many elements as is needed to help us eliminate elements using Bezout's lemma.
Take a simple example to give a taste of what this algorithm is tapping into. Consider a prime determinant integer lattice, taking the determinant to be some large prime $p$. Every prime determinant integer lattice can be represented by a basis of a very similar form [37]: the row basis is the identity matrix except for a final column with entries $x_1, ..., x_{N-1}$ above the pivot $p$ (the pivot $p$ is not necessarily in the bottom right, but this does not affect the reduction). For this simple case, assume that the gcd of $x_{i+1}$, $x_{i+2}$ divides $x_i$. Then by Bezout's lemma there exist integers $u$, $v$ and $\delta = \gcd(x_{i+1}, x_{i+2})$ such that $u\,x_{i+1} + v\,x_{i+2} = \delta$. Now perform the lattice-preserving row operation $b_i \to b_i - (x_i/\delta)(u\, b_{i+1} + v\, b_{i+2})$, which sets the dense-column entry of row $i$ to zero at the cost of introducing entries in columns $i+1$ and $i+2$. This should be performed iteratively from $i = 1$ to $N - 2$. After the first such iteration, the dense-column entry of the first row has been eliminated and the first row has acquired band entries.
After the first such iteration the above matrix looks as follows: If x i+1 , x i+2 do not divide x i , then extend consideration to x i+3 , and so on until finding a group of numbers with gcd dividing x i . Each extra number means adding an extra band to the matrix so it is ideal to find a set of coprime (or with gcd dividing x i ) entries x i+1 , ...x i+j for some small j. Fortunately, for k randomly selected numbers the probability of them being coprime (a stronger condition than is needed) is 1/ζ(k) which fast approaches one as k increases. This means that even for high dimensional lattices one can be confident of tight bandings.
There are circumstances where $x_i$ is odd and several of the entries below it are even. These are the only problematic cases where sticking strictly to the algorithm yields poor results. In these cases it is optimal instead to multiply $b_i$ by 2 so that $x_i$ can be eliminated efficiently. This is not ideal, as it does not preserve the lattice but instead increases the volume by a factor of 2 (or of $p$ if this is extended to some other small primes). What results is a basis for a sublattice. Fortunately, these cases are rare enough that the mean volume increase on performing this algorithm over many lattices is very small.
To generalise this to any given HNF basis, the procedure simply needs to be repeated for all dense columns, which appear only above pivots that are not equal to 1. While technically HNF bases can be dense in the upper triangle, this is not typical. Furthermore, HNF bases are particularly relevant cryptographically [38], as they afford a way of representing a bad basis that can be communicated with $O(N)$ key size (if dense this would be $O(N^2)$). Thus the property that makes this basis a good candidate for band-diagonalisation also makes it a good choice in terms of cryptographic efficiency. In the resulting banded bases, the size of the coefficients reduces farther away from the leading diagonal: in our plots the dark blue squares represent zeros, and it is clear to see both the upper-triangular form of the row bases and how the magnitude of the coordinates fades quickly to zero above the leading diagonal. Moreover, increasing the dimension does not adversely affect the ability of the algorithm to produce a tightly banded lattice basis. The mean volume increase in thirty dimensions was 2.98 and in sixty dimensions was 7.99, meaning that while the algorithm tends not to preserve the lattice exactly, the volume of the basis is increased only by a small factor. This is acceptable for solving SVP$_\gamma$. The effect of this band-diagonalisation algorithm is that, in realising the quantum SVP algorithms described in this paper, it is not necessary to consider particle-particle interaction terms for particles at sites which are far away from each other.
Appendix C: Example run
Consider a very simple example to aid intuition in following the algorithm from lattice basis to final result, looking at all the steps in between. The system comprises two particles in two lattice sites, with no offset ($m = 0$). The Hilbert space is three dimensional, spanned by the Fock states $|2, 0\rangle$, $|1, 1\rangle$ and $|0, 2\rangle$. The system Hamiltonian is assembled from the sweep functions of Eq (7), the tunnelling term $H_0$ of Eq (3), and the problem Hamiltonian $H_P$ of Eq (12). The system is initialised in the ground state of the $t = 0$ Hamiltonian (the tunnelling term alone) and allowed to evolve, and the probabilities of measuring the system in each of the Fock states can be tracked throughout the evolution. At $t = T = 2$ the ground state of the final Hamiltonian is easy to identify, and $(1, 0)$ is the shortest vector in the lattice, as can be seen by looking at the Gram matrix $G$.
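An evolution of this kind can be reproduced with a short dense-matrix simulation. The basis B below is an assumed toy choice (the paper's explicit matrices did not survive extraction), and the sweep is the linear f(t), g(t) of Section II B; for this K = 2 run, the final measurement should concentrate on the Fock state minimising x G x^T.

```python
import numpy as np
from scipy.linalg import expm

# Fock basis for K = 2 bosons on N = 2 sites
states = [(2, 0), (1, 1), (0, 2)]

B = np.array([[1, 0], [0, 2]])    # hypothetical toy row basis
G = B @ B.T

# Problem Hamiltonian: diagonal with energies x G x^T (m = 0, so x = n)
H_P = np.diag([float(np.array(n) @ G @ np.array(n)) for n in states])

# Tunnelling term H_0 = -(a1^dag a2 + a2^dag a1) in this basis
s2 = np.sqrt(2.0)
H_0 = -np.array([[0.0, s2, 0.0],
                 [s2, 0.0, s2],
                 [0.0, s2, 0.0]])

T, steps = 20.0, 2000
psi = np.linalg.eigh(H_0)[1][:, 0]          # ground state of the initial Hamiltonian
dt = T / steps
for k in range(steps):
    t = (k + 0.5) * dt
    H = (1 - t / T) * H_0 + (t / T) * H_P   # linear sweep f(t), g(t)
    psi = expm(-1j * H * dt) @ psi

probs = np.abs(psi) ** 2
print(dict(zip(states, np.round(probs, 3))))  # should peak on argmin x G x^T, here (2, 0)
```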
"Computer Science",
"Mathematics",
"Physics"
] |
Open and Unoriented Strings from Topological Membrane - I. Prolegomena
We study open and unoriented strings in a Topological Membrane (TM) theory through orbifolds of the bulk 3D space. This is achieved by gauging discrete symmetries of the theory. Open and unoriented strings can be obtained from all possible realizations of $C$, $P$ and $T$ symmetries. The important role of $C$ symmetry in distinguishing between Dirichlet and Neumann boundary conditions is discussed in detail.
Introduction
Although originally (and historically) open string theories were considered as theories by themselves, it soon became evident that, whenever they are present, they come along with closed (non-chiral) strings. Moreover, open string theories are obtained from closed string theories by gauging certain symmetries of the closed theory (see [1] and references therein for a discussion of this topic). The way to get open strings from closed strings is by gauging the world-sheet parity [1][2][3], $\Omega : z \to \bar{z}$. That is, we impose the identification $\sigma^2 \cong -\sigma^2$, where $z = \sigma^1 + i\sigma^2$ (and $\bar{z} = \sigma^1 - i\sigma^2$) is the complex structure of the world-sheet manifold. The spaces obtained in this way can be of two types: closed unoriented, and open oriented (and unoriented as well). These last ones are generally called orbifolds, and the singular points of the construction become boundaries. The states (operators and fields of the theory in general) of the open/unoriented theory are obtained from the closed oriented theory by projecting out the ones which have negative eigenvalues of the parity operator. This is obtained by building a suitable projection operator $(1 + \Omega)/2$ such that only the states with positive eigenvalues are kept in the theory. Namely, the identification $X^I(z, \bar{z}) \cong X^I(\bar{z}, z)$, or $X^I_L(z) \cong X^I_R(\bar{z})$ (in terms of the holomorphic and antiholomorphic parts of $X = X_L + X_R$), holds.
Another construction in string theory is orbifolding the target space of the theory under an involution of some symmetry of that space. In this work we are going to consider only a $Z_2$ involution, imposing the identification $X^I \cong -X^I$, where the $X^I$ are the target space coordinates. When combining both constructions, world-sheet and target space orbifolding, we obtain open/unoriented theories in orbifolds [4][5][6][7], or orientifolds ($X^I(z, \bar{z}) = -X^I(\bar{z}, z)$), implying the existence of twisted sectors in the open/unoriented theories.
Further to the previous discussion, both sectors (twisted and untwisted) need to be present for each surface in order to ensure modular invariance of the full partition function [1,8,18]. One point we want to stress is that twisting in open strings can, for the case of a $Z_2$ target space orbifold, be simply interpreted as the choice of boundary conditions: Neumann or Dirichlet.
Toroidal compactification is an important construction in string theories and in the web of target space dualities. Early works considered open string constructions in these toroidal backgrounds as well [8,9]. In these cases we have some compactified target space coordinates, say $X^J(z + 2\pi i, \bar{z} - 2\pi i) \cong X^J(z, \bar{z}) + 2\pi R$ ($R$ is the radius of compactification of $X^J$); the twisted states in the theory are the ones corresponding to the points identified under $X^J(z + 2\pi i, \bar{z} - 2\pi i) \cong -X^J(z, \bar{z}) + 2\pi R$, or in terms of the holomorphic and antiholomorphic parts of $X$ this simply reads $X^J_L(z) \cong -X^J_R(\bar{z})$. An important result coming from these constructions is that the gauge group of the open theory, i.e. the Chan-Paton degrees of freedom carried by the target space photon Wilson lines (only present in open theories), is constrained, both due to dualities of open string theory [8] and due to modular invariance of open and unoriented theories [8][9][10][11]. This results in the choice of the correct gauge group that cancels the anomalies in the theory.
One fundamental ingredient of string theory is modular invariance. Although for bosonic string theory the constraints coming from genus 1 amplitudes are enough to ensure modular invariance at generic genus $g$, it becomes clear that once the fermionic sector of superstring theory is considered it is necessary to consider genus 2 amplitude constraints. For closed strings (types II and 0) the modular group at genus $g$ is $Sp(2g, Z)$, and the constraints imposed by modular invariance at $g = 2$ induce several possible projections in the state space of the theory [12][13][14][15][16] such that the resulting string theories are consistent. Among them are the well known GSO projections [17] that ensure the correct spin-statistics connection, project out the tachyon, and ensure a supersymmetric effective theory in the 10D target space.
Once we consider an open superstring theory (type I), created by gauging world-sheet parity, for each open (and/or unoriented) surface a Relative Modular Group still survives the orbifold at each genus g [18]. Again, in a similar way to the closed theory, modular invariance under these groups results in generalized GSO projections [18][19][20][21].
For a more recent overview of the previous topics see [22,23] (see also [24] for an extensive explanation of them).
Closed string theories are obtained as the effective boundary theory: their world-sheet is the closed boundary ∂M. Obtaining open string theory raises a problem, since we need an open world-sheet to define it. But the boundary of a boundary is zero, ∂∂M = 0, so naively it seems that TM cannot describe open strings, since world-sheets are already boundaries of a 3D manifold. The way out is to consider orbifolding of the bulk theory. In this way the fixed points of the orbifold play the role of the boundary of the 2D boundary of the 3D membrane. This proposal was first introduced by Horava [47] in the context of pure Chern-Simons theories. We are going to extend his results to TMGT and reinterpret the orbifold group as symmetries of the full gauge theory.
Other works have developed Horava's idea. For a recent study of WZNW orbifold constructions see [48] (and references therein). For an extensive study, although more formal than our work, of generic Rational Conformal Field Theories (RCFT) with boundaries from pure 3D Chern-Simons theory see [49] (and references therein). However, monopole processes were not studied previously. These are crucial for describing the winding modes and T-duality in compact RCFT from the TM point of view and, therefore, in compactified string theories.
We consider an orbifold of TM(GT) such that one new boundary is created at the orbifold fixed point. To do this we gauge the discrete symmetries of the 3D theory, namely P T and P CT. Several P's are going to be defined as generalized parity operations; C and T are the usual 3D QFT charge conjugation and time inversion operations (see [50] for a review). The orbifolding of the string target space corresponds, in pure Chern-Simons membrane theory, to the quotient of the gauge group by a Z₂ symmetry [45]. As will be shown, in the full TM(GT) the discrete symmetry crucial to this construction is charge conjugation C. Besides selecting between twisted and untwisted sectors in closed unoriented string theory, it is also responsible for setting Neumann and Dirichlet boundary conditions in open string theory. In this work we are not going to consider more generic orbifold groups.
There are two main new ideas introduced in this work. The first is the use of all possible realizations of P, C and T combinations, which constitute discrete symmetries of the theory, as the orbifold group. Although the mechanism is similar to the one previously studied by Horava for pure Chern-Simons theory, the presence of the Maxwell term constrains the possible symmetries to the P T and P CT types only. Also new is the interpretation of the orbifold group as the discrete symmetries of the quantum theory, as well as the interpretation of charge conjugation C as selecting between Neumann and Dirichlet boundary conditions. This symmetry explains the T-duality of open strings in the TM framework. It is a symmetry of the 3D bulk which exchanges trivial topological configurations (without monopoles) with non-trivial topological configurations (with monopoles). In terms of the effective boundary CFT (string theory) this means exchanging Kaluza-Klein modes (no monopole effects in the bulk) with winding modes (monopole effects in the bulk).
In section 2 we start by introducing the genus 0 (sphere) and genus 1 (torus) Riemann surfaces and their possible orbifolds under discrete symmetries, which we identify with generalized parities P. Section 3 gives an account of Neumann and Dirichlet boundary conditions in usual CFT using the Cardy method [51] of relating n-point full correlation functions in boundary Conformal Field Theory to 2n-point chiral correlation functions in the theory without boundaries.
Then, in section 4, we give a brief overview of the discrete symmetries of 3D QFT and use it to orbifold TM(GT). We enumerate the 3D configurations compatible with the several orbifolds, both at the level of the field configurations and of the particular charge spectra of the resulting theories. It emerges naturally from the 3D membrane that the configurations compatible with P CT correspond to Neumann boundary conditions (for open strings) and to untwisted sectors (for closed unoriented strings), while the configurations compatible with P T correspond to Dirichlet boundary conditions (for open strings) and twisted sectors (for closed unoriented strings). The genus 2 constraints are discussed here, although a more detailed treatment is postponed to future work. Further, it is shown that Neumann (untwisted) corresponds to the absence of monopole induced processes, while for Dirichlet (twisted) these processes play a fundamental role. A short discussion of T-duality shows that it has the same bulk meaning as modular invariance: they both exchange P T ↔ P CT.
Riemann Surfaces: from Closed Oriented to Open and Unoriented
Any open or unoriented manifold Σ_u can, in general, be obtained from some closed orientable manifold Σ under identification by a Z₂ involution (or at most two Z₂ involutions), Σ_u = Σ/Z₂ with x ≅ −x, such that each point in Σ_u has exactly two corresponding points in Σ conjugate in relation to the Z₂ involution(s). The pair (x, −x) is symbolic: the second element stands for the action of the group Z₂, z₂(x) = −x, on the manifold. Usually this operation is closely related with parity, as will be explained below. Although in this work our perspective is that we start from a full closed oriented theory and orbifold it, there is the reverse way of explaining things: any theory defined on an open/unoriented manifold is equivalently defined on the closed oriented manifold which doubles it (consisting of two copies of the original open/unoriented manifold). Let us summarize how to obtain the disk D² (open orientable) and the projective plane RP² (closed unorientable) out of the sphere S², and the annulus C² (open orientable), the Möbius strip (open unorientable) and the Klein bottle K² (closed unorientable) out of the torus T².
The Projective Plane and the Disk obtained from the Sphere
For simplicity we choose to work in complex stereographic coordinates (z = x¹ + ix², z̄ = x¹ − ix²) such that the sphere is identified with the full complex plane. The sphere has no moduli and the Conformal Killing Group (CKG) is PSL(2, C). A generic element of this group is (a, b, c, d), with the restriction ad − bc = 1, acting on a point as

  z → (az + b)/(cz + d) .  (2.2)

It has then six real parameters, that is, six generators; that is to say, the sphere has six Conformal Killing Vectors (CKV's). It is necessary to use two coordinate charts to cover the full sphere, one including the north pole and the other including the south pole. Usually it is enough to analyze the theory defined on the sphere in only one of the patches, but it is necessary to check that the transformation between the two charts is well defined. In stereographic complex coordinates the map between the two charts (with coordinates z, z̄ and u, ū) is given by z → 1/u and z̄ → 1/ū. The disk D² can be obtained from the sphere under the identification

  z ≅ z̄ .  (2.3)

This result is graphically pictured in figure 1 and consists of the involution of the manifold S² by the group Z₂^{P₁}, D² = S²/Z₂^{P₁}. There is one boundary, corresponding to the real line in the complex plane, and the disk is identified with the upper half complex plane. It is straightforward to see that the non-trivial element of Z₂^{P₁} is nothing else than the usual 2D parity transformation

  P₁ : z → z̄ , z̄ → z ,  (2.4)

i.e. (x¹, x²) → (x¹, −x²). The CKG of the disk is the subgroup of PSL(2, C) which maintains constraint (2.3), that is, PSL(2, R). From the point of view of the fields defined on the sphere this corresponds to the usual 2D parity transformation. In order for the theory to be well defined on the orbifolded sphere we have to demand the fields of the theory to be compatible with the construction,

  φ(x) = φ(P₁(x)) , A(x) = P₁ A(P₁(x)) ,

where the first equation applies to scalar fields and the second to vectorial ones. For tensors of generic dimension d (e.g. the metric or the antisymmetric tensor) the transformation is easily generalized to T(x) = P₁^d T(P₁(x)). In order to orbifold the theory defined on the sphere we can introduce the projection operator (1 + P₁)/2, which projects out every operator with odd parity eigenvalue and keeps in the theory only field configurations compatible with the Z₂ involution. To obtain the projective plane RP² we need to make the identification

  z ≅ −1/z̄ .

This result is graphically pictured in figure 2 and again is an involution of the sphere, RP² = S²/Z₂^{P₂}. The resulting space has no boundary and no singular points, but it is now an unoriented manifold.
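A one-line check that PSL(2, R) is indeed the residual CKG of the disk: for real a, b, c, d with ad − bc = 1,

  Im z′ = Im[(az + b)/(cz + d)] = Im z / |cz + d|² ,

so the real line is mapped to itself and the upper half plane to itself, preserving the identification z ≅ z̄.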
This identification can be thought of as two operations: the action of the element α = (0, −1, 1, 0) ∈ Z₂^α ⊂ SL(2, C), followed by the operation of parity as given by (2.4). Note that α(z) = −1/z but P₁α(z) = −1/z̄, as desired. In this case we can define a new parity operation P₂ ∈ Z₂^{P₂} = Z₂^{P₁} × Z₂^α as

  P₂ : z → −1/z̄ , z̄ → −1/z .  (2.8)

From the point of view of the fields defined on the sphere we could use the usual parity transformation, since any theory defined on the sphere should already be invariant under transformation (2.2), PSL(2, C) being a symmetry of the theory. But in order to have a more transparent picture we use the definition (2.8) of P₂ and demand that

  φ(x) = φ(P₂(x)) , A(x) = P₂ A(P₂(x)) ,

where the first equation applies to scalar fields and the second to vectorial ones. For tensors of generic dimension d (such as the metric or the antisymmetric tensor) the transformation is again easily generalized to T(x) = P₂^d T(P₂(x)). The CKG is now SO(3), the usual rotation group: it is the subgroup of PSL(2, C)/Z₂^α that maintains the constraint z ≅ −1/z̄.
The Annulus, Möbius Strip and Klein Bottle obtained from the Torus
Let us proceed to the genus one closed orientable manifold, the torus. It is obtained from the complex plane under the identifications

  z ≅ z + 2π ≅ z + 2πτ ,

where τ = τ₁ + iτ₂ is the modular parameter (two real moduli); there are also two CKV's. The action of the CKG, the translation group in the complex plane, is

  z′ = z + a + ib ,

with a and b real. The metric is simply |dx¹ + τ dx²|, and the identifications on the complex plane are invariant under the two operations

  T : τ → τ + 1 , S : τ → −1/τ .

These operations generate the modular group PSL(2, Z), that is,

  τ → (aτ + b)/(cτ + d) ,

with a, b, c, d ∈ Z and ad − bc = 1.
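As a quick consistency check, these transformations do preserve the torus: for integer a, b, c, d with ad − bc = 1,

  Im τ′ = Im τ / |cτ + d|² > 0 ,

so τ′ stays in the upper half plane, and the lattice generated by (2π, 2πτ′) coincides with the one generated by (2π, 2πτ) up to a change of basis and an overall rescaling of z.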
The annulus C² (or, topologically equivalent, the cylinder) is obtained from the torus with τ = iτ₂ under the identification

  z ≅ −z̄ .

This result is symbolically pictured in figure 3.
There is now one modular parameter τ₂ and no modular group. There is only one CKV, the CKG action being z′ = z + ib, translation in the imaginary direction. In terms of the fields defined on the torus this corresponds to the projection under the parity operation

  Ω : z → −z̄ , z̄ → −z .  (2.15)

The Möbius strip M² can be obtained from the annulus (obtained from the torus with τ = 2iτ₂) by the identification under the element ã [24] of the translation group,

  ã : z → z + π + 2πiτ₂ .

Note that ã belongs to the translation group of the torus, not of the disk, and that ã² = 1 on the torus. This construction corresponds to two involutions, so the orbifold group is constituted by two Z₂'s, Z₂^Ω ⊂× Z₂^ã, where ⊂× stands for the semidirect product of groups. Thus the ratio of areas between the Möbius strip and the original torus is 1/4, contrary to the 1/2 of the remaining open/unoriented surfaces obtained from the torus, due to the extra projection operator (1 + ã)/2 taking us from the annulus to the strip.
In terms of the fields living on the torus we can think of this identification as the projection under a new discrete symmetry, which we also call a parity,

  Ω̃ : z → −z̄ + π + 2πiτ₂ , z̄ → −z + π − 2πiτ₂ .  (2.17)

Although this operation does not seem to be a conventional parity operation, note that applying it twice to some point we retrieve the same point, Ω̃² = 1. It is in this sense a generalized parity operation. The previous construction is presented, for example, in Polchinski's book [24]. Let us note however that one can build the Möbius strip directly from a torus [1] with modulus τ = 1/2 + iτ₂ under the involution by Ω as given in (2.15). In this case the ratio of areas between the original torus and the involuted surface is 1/2, as for the other involutions studied in this section. As we will show later, both constructions correspond to the same region of the complex plane: the first results from two involutions of a torus (τ = 2iτ₂) with double the area of the torus of the second construction (τ = 1/2 + iτ₂). In this sense both constructions are equivalent. The Möbius strip orbifolding is pictured in figure 4.
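Given the form of Ω̃ above (whose explicit shift is inferred from the requirements ã² = 1 and the stated area ratios), a direct check that it squares to the identity on the torus with τ = 2iτ₂:

  Ω̃²(z) = −(−z + π − 2πiτ₂) + π + 2πiτ₂ = z + 4πiτ₂ ≅ z ,

using the torus identification z ≅ z + 2πτ = z + 4πiτ₂. Note that Ω̃ itself acts freely, (x, y) → (π − x, y + 2πτ₂); the boundary of the strip comes from the fixed lines of Ω.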
Again there is one modular parameter τ 2 and no modular group. The only CKV is again the translation in the imaginary direction.
The Klein bottle K² is obtained from the torus with τ = 2iτ₂ under the identification

  z ≅ −z̄ + 2πiτ₂ .

This result is pictured in figure 5. [Table 1 caption: the surfaces are obtained from the torus with τ = iτ₂ for Ω and τ = 2iτ₂ for Ω̃ and Ω′; note that M² can also be obtained from the torus with τ = 1/2 + iτ₂ considering the parity Ω. In the labels of the last line the first letter stands for Open or Closed surface, while the second letter stands for Oriented or Unoriented.]
The Klein bottle is the involution of the torus, K² = T²/Z₂^{Ω′}; it has a one-parameter CKG with one CKV, translations in the imaginary direction. There is one modulus τ₂ and no modular group. The resulting manifold has no boundary and no singular points, but it is unoriented.
Again we can define a new parity transformation,

  Ω′ : z → −z̄ + 2πiτ₂ , z̄ → −z − 2πiτ₂ .

We summarize in table 1 all the parity operations we have just studied, together with the resulting involutions (or orbifolds).
Conformal Field Theory -Correlation Functions and Boundary Conditions
To study string theory we need to know the world-sheet CFT. In a closed string theory this is a CFT on a closed Riemann surface, the simplest of which is the sphere, or equivalently the complex plane. To study open strings we need to study CFT on open surfaces. As was shown by Cardy [51], n-point correlation functions on a surface with a boundary are in one-to-one correspondence with chiral 2n-point correlation functions on the double surface (for more details and references see [52]). We will study the disk and the annulus, so we double the number of charges (vertex operators) by inserting charges ±q (vertex operators with ∆ = 2q²/k) at the parity-conjugate points. Note that the sign of the inserted charges depends on the type of boundary conditions that we want to impose, but the conformal dimension of the corresponding vertex operator is the same.
We summarize the 2-, 3- and 4-point holomorphic correlation functions of vertex operators for the free boson in (3.1); in all cases Σᵢ qᵢ = 0, otherwise they vanish.
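The correlators referred to here are the standard free-boson (Coulomb gas) ones; as a sketch, with the exponent normalization chosen to match ∆ = 2q²/k (the paper's own overall normalization may differ by a factor),

  ⟨ V_{q₁}(z₁) ⋯ V_{qₙ}(zₙ) ⟩ = ∏_{i<j} (zᵢ − zⱼ)^{4 qᵢ qⱼ / k} , Σᵢ qᵢ = 0 ,

so that, e.g., the 2-point function of conjugate charges, ⟨V_q(z₁) V_{−q}(z₂)⟩ = z₁₂^{−4q²/k} = z₁₂^{−2∆}, reproduces the quoted conformal weight.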
Disk
We will take the disk to be the upper half complex plane. As explained before, it is obtained from the sphere (the full complex plane) by identifying each point in the lower half complex plane with its conjugate in the upper half complex plane. In terms of correlation functions this amounts to replacing z = x + iy in the first equation of (3.1), where y is the distance to the real axis while x is taken to be the horizontal distance (parallel to the real axis) between vertex insertions.
Dirichlet Boundary Conditions
As is going to be shown, when the mirror charge has the opposite sign the boundary conditions are Dirichlet. The 2-point correlation function restricted to the upper half plane is simply the expectation value (3.3) of a single bulk field. Insertion of vertex operators (other than the identity) in the boundary is not compatible with the boundary conditions, since the only charge that can exist there is q = 0 (q = −q = 0 on the boundary). Taking the limit y → 0 the expectation value (3.3) blows up, but this should not worry us: near the boundary the two charges annihilate each other. This phenomenon is nothing else than the physical counterpart of the operator fusion rule φ_q × φ_{−q} → 1. 3-point correlation functions cannot be used for the same reason: one of the insertions would need to lie in the boundary, which would mean q₃ = 0, while the other two charges would have to be inserted symmetrically in relation to the real axis, implying q₁ = −q₂. This reduces the 3-point correlator to a 2-point one in the full plane.
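Concretely, by the doubling trick the one-point function on the disk is computed as a two-point function on the plane with the image charge −q at z̄; with the normalization assumed above,

  ⟨φ_q(z)⟩_{D²} = ⟨φ_q(z) φ_{−q}(z̄)⟩ = (z − z̄)^{−4q²/k} ∝ (2y)^{−2∆} ,

which indeed diverges as the insertion approaches the boundary, y → 0.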
For 4-point vertex insertions consider q₁ and q₃ in the upper half plane, with q₂ (inserted symmetrically to q₁) and q₄ (inserted symmetrically to q₃) in the lower half plane. As pictured in figure 6 the most generic configuration is q₁ = −q₂ = q and q₃ = −q₄ = q′. Making z₂ = z̄₁ = −iy and z₄ = z̄₃ = x − iy′ we obtain the corresponding 2-point correlators in the upper half plane. Again, note that we cannot insert boundary operators without changing the boundary conditions. In the limit x → ∞ both correlators factorize into the product of the individual expectation values; when we approach the boundary the correlators go to infinity, independently of the value of x. This fact is explained by the kind of boundary conditions we are considering: they are such that when the fields approach the boundary they become infinitely correlated, independently of how far apart they are from each other. Therefore these must be Dirichlet boundary conditions: the fields are fixed along the boundary, with the expectation value stated before. It doesn't matter how far apart they are, they are always correlated on the boundary. The vanishing of the tangential derivative of the expectation value along the boundary, ⟨∂_x φ⟩|_{∂D²} = 0, also agrees with Dirichlet boundary conditions.
Neumann Boundary Conditions
For the case of the mirror charge having the same sign as the original one, the boundary conditions will be Neumann. The expectation value of the fields in the bulk vanishes, since the 2-point function ⟨φ_q(z₁)φ_q(z₂)⟩ = 0 in the full plane. Nevertheless we can evaluate directly the non-zero 2-point correlation function on the boundary. Note that, contrary to the previous discussion concerning Dirichlet boundary conditions, in this case q ≠ 0 on the boundary, since the mirror charges have the same sign; and the correlation function vanishes in the limit x → ∞, indicating that the boundary fields become uncorrelated. The 3-point correlation function in the full plane must be considered with one charge −2q in the boundary and two other charges q inserted symmetrically in relation to the real axis (see figure 6). In the upper half plane this corresponds to one charge insertion in the boundary and one in the bulk. Note that in the limit y → 0 the fusion rules apply and we obtain (3.6) with ∆ replaced by 4∆.
For the 2-point function in the upper half plane we have to consider the 4-point correlation function in the full plane with q₁ = q₂ = −q₃ = −q₄ = q, where q₂ is inserted symmetrically to q₁ in relation to the real axis, and q₄ symmetrically to q₃. We obtain the corresponding bulk correlator, which again vanishes in the limit x → ∞. This corresponds to Neumann boundary conditions. The normal derivative of (3.8) vanishes on the boundary, ∂_y⟨φ(0)φ(x)⟩|_{∂D²} = 0.
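For concreteness, with insertions z₁ = iy (image z₂ = −iy) and z₃ = x + iy′ (image z₄ = x − iy′), and charges q₁ = q₂ = −q₃ = −q₄ = q, the doubled correlator gives, up to phases and the exponent normalization assumed earlier,

  ⟨φ_q(iy) φ_{−q}(x + iy′)⟩_{D²} ∝ (4yy′)^{2∆} [ (x² + (y − y′)²)(x² + (y + y′)²) ]^{−2∆} ,

which indeed vanishes as x → ∞, in agreement with the uncorrelated (Neumann) behavior far along the boundary.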
For the case of one compactified free boson the procedure follows in quite a similar way. The main difference resides in the fact that the right and left spectrum charges are different. Taking a charge q = m + kn/4, its image charge is now ±q̄, where q̄ = m − kn/4. In this way we have to truncate the spectrum, holding q = −q̄ = kn/4 for Dirichlet boundary conditions and q = q̄ = m for Neumann boundary conditions, in quite a similar way to what happens in the Topological Membrane. We summarize in figure 6 the results derived here.
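Spelled out, the two truncations are simple linear conditions on (m, n):

  Neumann: q = q̄ ⟹ m + kn/4 = m − kn/4 ⟹ n = 0 , q = m (Kaluza-Klein modes only);
  Dirichlet: q = −q̄ ⟹ m + kn/4 = −m + kn/4 ⟹ m = 0 , q = kn/4 (winding modes only).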
Annulus
We consider the annulus to be half of a torus. For simplicity we take the torus to be the region [−π, π] × [0, 2πτ₂] of the complex plane (and the annulus the region [0, π] × [0, 2πτ₂]). We use z = x + iy with x ∈ [−π, π] and y ∈ [0, 2πτ₂]. Here y is the vertical distance (parallel to the imaginary axis) between vertex insertions, while x is taken to be the distance to the imaginary axis.
Dirichlet Boundary Conditions
Considering mirror charges with opposite sign, 2-point correlations on the torus correspond to the bulk expectation value in the annulus. As in the case of the disk, it blows up at the boundary. But on the boundary this correlation function is not valid, since the two charges annihilate each other; therefore the only possible charge insertion in the boundary is q = 0, that is, the identity operator. Again, 3-point correlation functions cannot be used in this case. For 4-point vertex insertions consider q₁ and q₃ inserted to the right of the imaginary axis, and q₂ and q₄ their mirror charges. The most generic configuration is q₁ = −q₂ = q and q₃ = −q₄ = q′, and we obtain the corresponding 2-point correlation function in the annulus. Again the same arguments used for the disk apply: there cannot exist boundary insertions other than the identity, and the tangential derivative along the boundary vanishes, ⟨∂_y φ⟩|_{∂C²} = 0.
Neumann Boundary Conditions
Considering now mirror charges with the same sign, again the fields in the bulk have zero expectation value, but the 2-point boundary correlation function is non-vanishing; we take one insertion in each boundary. In the case that the insertions are in the same boundary the factor of π^{2∆} is absent. The 3-point function on the torus corresponds either to a 2-point function in the annulus (taking only one insertion in the boundary) or to a 3-point function (taking all insertions in the boundaries). Taking one insertion in the bulk, φ_q(x, 0) (with mirror image φ_q(−x, 0)), and another in the boundary, φ_{−2q}(π, y), we obtain the corresponding mixed correlator. If the insertion is in the boundary x = 0, the factor of π² is absent.
As an example of two insertions in the boundaries, take them both to be in the boundary x = 0; we obtain a correlator of the form

  [ (x² + (y + y′)²) / (y²(x² + y′²)) ]^{2qq′/k} .  (3.14)

We can stop here: for our purposes it is not necessary to exhaustively enumerate all the possible cases. As expected, the normal derivative of these correlation functions (∂_x ...) vanishes at the boundary. These results are summarized in figure 7.
For the case of one compactified free boson the procedure follows as explained before: the spectrum must be truncated, holding q = −q̄ = kn/4 for Dirichlet boundary conditions and q = q̄ = m for Neumann boundary conditions.
TM(GT)
It is now time to turn to the 3D TM(GT). In this section we present results derived directly from the bulk theory and its properties. The derivations presented here are in agreement with the CFT arguments of the last section.
Take for the moment a single compact U(1) TMGT, corresponding to a c = 1 CFT, with the Maxwell-Chern-Simons action, where M = Σ × [0, 1] has two boundaries Σ₀ and Σ₁. Σ is taken to be a compact manifold, t lies in the interval [0, 1], and (z, z̄) stand for complex coordinates on Σ; from now on we will use them by default.
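For orientation, a minimal sketch of such an action (the overall normalizations, in particular the γ and k/8π coefficients, are assumptions on conventions, which vary in the literature):

  S = ∫_M d³x [ −(1/(4γ)) F_{μν} F^{μν} + (k/(8π)) ε^{μνλ} A_μ ∂_ν A_λ ] ,

with the Maxwell term controlled by γ and the topological Chern-Simons term by the coupling k.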
As is widely known, this theory induces new degrees of freedom on the boundaries; these are fields belonging to 2D chiral CFT's living on Σ₀ and Σ₁.
The electric and magnetic fields are defined as Eⁱ = F⁰ⁱ and B = F₁₂, and the Gauss law is simply

  (1/γ) ∂ᵢEⁱ + (k/4π) B = ρ .

Upon quantization the charge spectrum is

  Q = m + kn/4 ,

for some integers m and n. Furthermore it has been proven in [32,39] that, for compact gauge groups and under the correct relative boundary conditions, one insertion of Q on one boundary (corresponding to a vertex operator insertion in the boundary CFT) will necessarily demand an insertion of the charge Q̄ = m − kn/4 on the other boundary. We assume this fact throughout the rest of this paper. Our aim is to orbifold TM theory in a similar way to Horava [47], who obtained open boundary world-sheets through this construction. We are going to take a path integral approach and reinterpret it in terms of discrete P T and P CT symmetries of the bulk 3D TM(GT).
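A consistency check tying this spectrum to the monopole processes invoked below (assuming the standard flux quantization of the compact gauge group, ∫_Σ B = 2πn, quoted later in the text):

  Q − Q̄ = (m + kn/4) − (m − kn/4) = kn/2 ,

so the mismatch between the two boundary charges is purely magnetic, supported by the flux 2πn, and is precisely the amount, kn/2, by which monopole-induced processes are later said to change the charge.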
Horava Approach to Open World-Sheets
Obtaining open string theories out of 3D (topological) gauge theories means building a theory on a manifold which has boundaries (the 2D open string world-sheet) that are already a boundary (of the 3D manifold). This construction raises a problem, since the boundary of a boundary is necessarily a null space. One interesting way out of this dilemma is to orbifold the 3D theory; its singular points then work as the boundary of the 2D boundary. Horava [47] introduced an orbifold group G that combines the world-sheet parity symmetry group Z₂^{WS} (2D), with two elements {1, Ω}, together with a target symmetry group G̃ of the 3D theory fields, G = G̃ × Z₂^{WS}. With this construction we can get three different kinds of constructions. Elements of the kind h = h̃ × 1 induce twists in the target space (not acting on the world-sheet at all); for elements ω = 1 × Ω we orbifold the world-sheet manifold (getting an open world-sheet) without touching the target space; and for elements g₁ = g̃₁ × Ω we obtain exotic world-sheet orbifolds. In this last case it is further necessary to have an element corresponding to the twist in the opposite direction, g₂ = g̃₂ × Ω. To specify these twists on some world-sheet it is necessary to define the monodromies of the fields on it. Taking the open string C_o = C/Z₂ as the orbifold of the closed string C, the first homotopy group of the open string is

  π₁(C_o) = D ≡ Z₂ * Z₂ ≅ Z₂ ⊂× Z ,  (4.7)

* being the free product and ⊂× the semidirect product of groups; D is the infinite dihedral group. So the monodromies of fields in C_o correspond to a representation of this group in the orbifold group, Z₂ * Z₂ → G, such that the commutative triangle is complete. The partition function contains the sum over all possible monodromies, where τ is the modulus of the manifold. The monodromies g₁, g₂ and h are elements of G as previously defined, satisfying gᵢ² = 1 and [gᵢ, h] = 1. It will be shown that P CT plays the role of one such symmetry, with g₁ = g₂; it is in this sense one of the simplest cases of exotic world-sheet orbifolds.
The string amplitudes can be computed in two different pictures. The loop-channel corresponds to loops of length τ of closed and open strings, and the amplitudes are computed as traces over the Hilbert space. The tree-channel corresponds to a cylinder of length τ̃ created from, and annihilated to, the vacuum through boundary (|B⟩) and/or crosscap (|C⟩) states. Comparing both ways of computing the same amplitudes we obtain the loop-tree channel relations, where I acts on t as the time inversion t → 1 − t. This construction is presented in figure 8.
In terms of the action and fields of the theory, Horava used the same approach of extending them to the doubled manifold. Since there is a one-to-one correspondence between the quantum states of the gauge theory on M and the blocks of the WZNW model, we may write the closed string partition function as a bilinear combination of the Ψᵢ, where the Ψᵢ stand for a basis of the Hilbert space H_Σ. The open string counterpart in the orbifolded theory is

  Z_{Σ_o} = Σᵢ aᵢ Ψᵢ ∈ H_Σ ,  (4.14)

which also agrees with the fact that in open CFT's the partition function is a sum of characters (instead of a sum of squares), due to the holomorphic and antiholomorphic sectors not being independent.
Discrete Symmetries and Orbifold of TM(GT)
Following the discussion of section 2 and section 4.1, it becomes obvious that the parity operation plays a fundamental role in obtaining open and/or non-orientable manifolds out of closed orientable ones, and hence in obtaining open/unorientable theories out of closed orientable theories.
Generally there are several ways of defining parity, and the ones we are interested in have already been presented here. For the usual ones, P₁ and Ω defined in (2.4) and (2.15), the fields of our 3D theory transform accordingly, with Λ, the gauge parameter entering the U(1) gauge transformations, transforming as a scalar. Under these two transformations the Maxwell term is invariant while the Chern-Simons term changes sign,

  S(γ, k) → S(γ, −k) ,  (4.16)

so the theory is clearly not parity invariant. Let us then look for further discrete symmetries which we may combine with parity in order to make the action (theory) invariant. Introduce time inversion, T : t → 1 − t, implemented in this non-standard way due to the compactness of time. Note that t = 1/2 is a fixed point of this operation; upon identification of the boundaries as described in [39], the boundary becomes a fixed point as well. It remains to define how the fields of the theory change under this symmetry. There are two possible transformations compatible with gauge transformations, A_Λ(t, z, z̄) = A(t, z, z̄) + ∂Λ(t, z, z̄): they are given in (4.17) and (4.18), where in the latter we used C, charge conjugation, defined as A_μ → −A_μ. This symmetry inverts the sign of the charge, Q → −Q, as usual. These discrete symmetries, together with parity P or Ω, are the common ones used in 3D Quantum Field Theory. When referring to parity in generic terms we will use the letter P. Under any of the T and CT symmetries the action changes in the same fashion as it does under parity P, as given by (4.16). In this way any of the combinations P T and P CT are symmetries of the action, S → S. Gauging them is a promising approach to defining the TM(GT) orbifolding. It is now clear why we need extra symmetries besides parity: only certain combinations leave the theory (action) invariant. In general, whatever parity definition we use, these results imply that P T and P CT are indeed symmetries of the theory.
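The bookkeeping behind these statements is simple (a sketch, using only the facts that the Maxwell term is quadratic in F and the Chern-Simons term carries one ε tensor):

  P : S_Maxwell → S_Maxwell , S_CS → −S_CS (ε^{μνλ} is parity-odd);
  T : S_Maxwell → S_Maxwell , S_CS → −S_CS ;
  C (A_μ → −A_μ) : both terms invariant (each is quadratic in A).

Hence P T and P CT each flip the Chern-Simons sign twice and leave S invariant. C alone is also a symmetry, but it does not act on the geometry, so it cannot by itself create a boundary; the combinations that both act geometrically and preserve S are exactly the P T and P CT types quoted above.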
We can conclude straight away that either of the two previous symmetries physically exchanges the two boundaries, working as a mirror transformation with fixed point (t = 1/2, z = z̄ = x) (actually a line), as pictured in figure 9. We are considering that, whenever there is a charge insertion q = m + kn/4 on one boundary, there will exist an insertion of q̄ = m − kn/4 on the other boundary [30,39].
Under the symmetries P T and P CT as given by (4.15), (4.17) and (4.18), the boundaries will be exchanged as presented in figure 9. In the case of P CT the charges will simply be swapped, but in the case of P T their sign will be changed, q → −q. Note that Σ_{1/2} = Σ(t = 1/2) only feels P or CP. As will be shown in detail, there are important differences between the two symmetries CT and T: they will effectively gauge field configurations corresponding to untwisted/twisted sectors of closed strings and Neumann/Dirichlet boundary conditions of open strings.
Not forgetting that our final aim is to orbifold/quotient our theory by gauging the discrete symmetries, let us proceed to check compatibility with the desired symmetries in detail. It is important to stress that field configurations satisfying any of the P T or P CT combinations of the previous symmetries (each isomorphic to Z₂) exist, in principle, from the start in the theory. We can either impose by hand that the physical fields obey one of them (as is usual in QFT), or we can assume that we have a wide theory with all of these field configurations and obtain (self-consistent) subtheories by building suitable projection operators that select some type of configurations. It is precisely this last construction that we have in mind when building several different theories out of one. In other words, we are going to build different new theories by gauging discrete symmetries of the types P CT and P T.
It is important to stress what the orbifold means in terms of the boundaries and bulk from the point of view of TM(GT). It splits the manifold M into two pieces, creating one new boundary at t = 1/2. This boundary is going to feel only CP or P symmetries, since it is located at the temporal fixed point of the orbifold. Figure 10 shows this procedure. In this way this new boundary constrains the new theory such that the boundary theories correspond to open and unoriented versions of the original full theory.
Tree Level Amplitudes for Open and Closed Unoriented Strings
We start by considering the tree level approximation to string amplitudes, i.e. Riemann surfaces of genus 0. These surfaces are the sphere (closed oriented strings) and its orbifolds: the disk (open oriented) and the projective plane (closed unoriented), as discussed in section 2. From the point of view of TM(GT), orbifolding means that we split the manifold M into two pieces that are identified. As a result, at t = 1/2, the fixed point of the orbifold, a new boundary is created. For different orbifolds we shall have different admissible field configurations. In the following discussion we study which configurations are compatible with P T and P CT for the several parity operations already introduced.
Disk
Let us start from the simplest case: the disk, obtained by the involution of the sphere under P₁ as given by (2.4). So consider the identifications under P₁CT and P₁T. For the first one the fields are related by (4.19), identifying the fields at (t, z, z̄) with their P₁CT transforms at (1 − t, z̄, z); the orientations of Σ and Σ̄ are opposite. Under these relations the Wilson lines satisfy (4.20). This means that for the configurations obeying the relations (4.19) we lose the notion of time direction. Under the involution of our 3D manifold, using the above relations as geometrical identifications, the boundaries become t = 0 and t = 1/2. For the moment let us check the compatibility of the observables with the proposed orbifold constructions given by the previous relations. In a very naive and straightforward way, when we use P₁CT as given by (4.19) the charges should maintain their sign (q(t) ≅ q(1 − t)). Then, by exchanging boundaries, we need to truncate the spectrum and set q ≅ q̄ = m in order for the identification to make sense. Let us check what happens at the singular point of our orbifolded theory, t = 1/2. The fields are identified according to the previous rules, but the manifold Σ(t = 1/2) = S² is only affected by P₁.
Take two Wilson lines that pierce the manifold at two distinct points, z and z′. Under the previous involution P₁CT, z is identified with z̄ at t = 1/2. Then, geometrically, we must have z′ = z̄ in order to have spatial identification of the piercings. The problem is that when we have only two Wilson lines, TM(GT) demands that they carry opposite charges; in order to implement the desired identification we are left with q = 0 as the only possibility. For the case where the Wilson lines pierce the manifold on the real axis, z = x and z′ = x′, the involution is possible as pictured in figure 11, since we identify x ≅ x and x′ ≅ x′.
In the presence of three Wilson lines, following the same line of argument, we will necessarily have one insertion in the boundary and two in the bulk, as pictured in figure 12. Only in the presence of four Wilson lines, as pictured in figure 13, can we avoid any insertion in the boundary. Note that the identification B(z, z̄) ≅ −B(z̄, z) on the real axis necessarily implies B(x, x) = 0. Remember that 2πn = ∫B (see [32,39] for details). We could as well have an insertion in the boundary and one in the bulk. This fact is simply the statement that by imposing P₁CT we are actually imposing Neumann boundary conditions. The charges of the theory become q = m; this means that the string spectrum has only Kaluza-Klein momenta. Furthermore the monopole induced processes are suppressed: recall that they change the charge by an amount kn/2, which would take the charges out of the spectrum allowed in these configurations.
Following our journey, consider next P₁T. The fields are now related by (4.21), and the Wilson line has the same property (4.20) as in the previous case. Now the charges change sign under a P₁T symmetry. As before, identifying the charges on opposite boundaries truncates the spectrum, q(t) ≅ −q(1 − t), so we must have q ≅ −q̄ = nk/4.
We can, in this case, identify two piercings in the bulk, since the charge identification q ≅ −q̄ is now compatible with TM(GT). But we cannot insert any operator other than the identity φ₀ on the real axis, since the corresponding charge must be zero, q(x) = −q(x) = 0. Therefore this kind of orbifolding is only possible when we have an even number of Wilson lines propagating in the bulk. The result for two Wilson lines is pictured in figure 14 and for four in figure 15.
Projective Plane
We now consider the parity operation as the antipodal identification given in (2.8).
We thus obtain the projective plane as the new 2D boundary of TM(GT).
The transformation is given by the discrete symmetry t′ ≅ 1 − t, z′ ≅ −1/z̄ and z̄′ ≅ −1/z. We obtain the field relations (4.23) for P₂CT. Note that the relation between the integrals follows from taking into account the second and third equalities of (4.23), and the relations dz = dz̄′/z̄′², dz̄ = dz′/z′², and consequently dz ∧ dz̄ = −(1/(z′² z̄′²)) dz′ ∧ dz̄′. Σ and Σ̄ again have opposite orientations and are mapped into each other by the referred involution. Under these relations, and in a similar way to (4.24), the action transforms under P₂ as given in (4.16), and either of the combinations P₂CT or P₂T keeps it invariant. Also the Wilson lines have the same property given by (4.20).
In the derivation of the previous identifications (4.23) we had to demand analyticity of the fields on the full sphere. This translates into demanding that the transformation between the two charts covering the sphere be well defined. Since ∂_u Λ = −z² ∂_z Λ and ∂_ū Λ = −z̄² ∂_z̄ Λ, the fields must fall off suitably at infinity and at zero. If naively we didn't care about these limits, the relations would be plagued with Dirac deltas coming from the identity 2πδ²(z, z̄) = ∂_z(1/z̄) = ∂_z̄(1/z). Once the previous behaviors are taken into account, all these terms vanish upon integration. Another way to interpret these results is to note that the points at infinity are not part of the chart (not physically meaningful); to check the physical behavior at those points we have to compute it at zero in the other chart.
This time the charges compatible with P₂CT are q = m, since q ≅ q̄. Since there are no boundaries, it is not possible to have configurations with two Wilson lines which allow this kind of orbifold. In this way the lowest number of lines is four, as pictured in figure 16; furthermore the number of Wilson lines must be even.
This configuration corresponds to untwisted closed unoriented string theories. Note that Λ, which is identified with the string theory target space, is not orbifolded by P₂CT. The charges allowed are q = m, the KK momenta of string theory; once again the monopole processes are suppressed. For P₂T the fields are related by (4.26). In this case q = kn/4, since q ≅ −q̄, and furthermore configurations with two Wilson lines are compatible with the orbifold, as pictured in figure 17. In this case we have twisted unoriented closed strings. Note that the orbifold identifies Λ ≅ −Λ, such that the target space of the string theory is orbifolded; the full construction, including the world-sheet parity, is from the point of view of string theory called an orientifold. The allowed charges q = kn/4 correspond to the winding number of string theory. The monopole processes are again crucial, since they allow, in the new boundary, the gluing of Wilson lines carrying opposite charges. We will return to this discussion.
One Loop Amplitudes for Open and Closed Unoriented Strings
Annulus
We start with the already studied parity transformation Ω, as given by (2.15). There is nothing new to add to the field relations (4.19) for P CT and (4.21) for P T, this time under the identifications t′ = 1 − t, z′ = −z̄ and z̄′ = −z. The resulting geometry is the annulus C², which has two boundaries. For ΩCT the allowed charges are q = m, due to the identification q ≅ q̄ and B(x) = 0 at the boundaries. We can have two insertions in the boundaries of the 2D CFT but not in the bulk, due to the identification of charges; basically the argument is the same as used for the disk. As for the disk, we cannot have one single bulk insertion, due to the total charge being necessarily zero in the full plane. Up to configurations with four Wilson lines we can have: two insertions in the boundary; one insertion in the bulk and one in the boundary, corresponding to three Wilson lines; three insertions in the boundaries (with q = 0); one insertion in the bulk and two in the boundary, corresponding to four Wilson lines; and two insertions in the bulk, corresponding to four Wilson lines, as pictured in figure 18. This construction corresponds to open oriented strings with Neumann boundary conditions. The charge spectrum is q = m, corresponding to KK momenta in string theory, and the monopole induced processes are suppressed. It is Neumann because the gauged symmetry is of P CT type. We note that the precise definition of parity is not important: even for genus 1 surfaces the results hold similarly to the previous cases of P₁ and P₂ used at genus 0. What is important is the inclusion of the discrete symmetry C! For ΩT the allowed charges are q = kn/4, due to the identification q ≅ −q̄. There are no insertions in the boundary: one insertion in the bulk corresponds to two Wilson lines, and two to four Wilson lines, as presented in figure 19.
Möbius Strip
Let us proceed to the parity Ω̃ as given by (2.17). The results are pictured in figure 20 and are fairly similar. Note that it corresponds to two involutions of the torus with τ = 2iτ₂: one given by Ω, resulting in the annulus, and ã, which maps the annulus into the Möbius strip. Then, for each insertion in the strip there must exist four in the torus. Once more we have for Ω̃CT that B = −B = 0 on the boundaries and q is identified with q̄, demanding the charges to be q = m, which correspond to the KK momenta of string theory. Due to this fact the monopole processes are suppressed in the configurations allowing this kind of orbifolding. This corresponds to Neumann boundary conditions.
For the Ω̃T case we have the identification of q with −q̄, demanding the charges to be q = kn/4, the winding number of string theory. This time the monopole processes do play an important role: the charges are purely magnetic. This corresponds to Dirichlet boundary conditions. As discussed in subsection 2.2, we can also consider the involution of the torus with modulus τ = 1/2 + iτ₂ under ΩT or ΩCT. In this case four insertions in the torus correspond to two insertions in the strip, as presented in figure 21 for the ΩT case. As previously explained, both constructions result in the same region of the complex plane. Note that the resulting area in both cases is 2π²τ₂, and that in both cases the region [0, π] × i[0, 2πτ₂] is identified with the region [π, 2π] × i[0, 2πτ₂].
Again, for Ω′CT, we obtain q = m because q ≅ q̄. The minimum number of insertions is two, corresponding to four Wilson lines in the bulk. This construction corresponds to untwisted unoriented closed strings with only KK momenta in the spectrum; the monopole processes are suppressed.
For the Ω′T case we have q = kn/4, due to q ≅ −q̄. We can have one single insertion in the bulk, corresponding to two Wilson lines, or two, corresponding to four Wilson lines. This construction corresponds to twisted unoriented closed strings with only winding number; the monopole processes are present and are crucial in the construction.
Note on Modular Invariance and the Relative Modular Group
Modular invariance is a fundamental ingredient in string theory which makes closed string theories UV finite. What about the orbifolded theories? There the matter is much more tricky: if we actually want to ensure modular invariance, we need to build a projection operator which ensures it. A good choice would be

  O = (2 + P T + P CT)/4 ,  (4.27)

such that the exchange of orbifolds doesn't change it. This fact is well known in string theory (see [24] for details).
In the case when we are dealing with orbifolds which result in open surfaces, the modular transformation τ → −1/τ, according to the previous discussion, exchanges the boundary conditions (Neumann/Dirichlet). Note that orbifolding the target space in string theory (or equivalently the gauge group in TMGT) effectively creates an orientifold plane where the boundary conditions must be Dirichlet (as for a D-brane); this is the equivalent of twisting for open strings. In terms of the bulk, the modular transformation exchanges the projections P CT ↔ P T.
Let us put this in more exact terms. Consider some discrete group H of symmetries of the target space (or equivalently of the gauge group of TMGT). So far we have concentrated on one loop amplitudes only, i.e. genus 1 world-sheet orbifolds. For the pure bosonic case this is sufficient, but once we introduce fermions and supersymmetry, new constraints emerge at two loop amplitudes. Specifically, the modular group of closed Riemann surfaces at genus g is Sp(2g, Z); upon orbifolding, a residual conformal group survives, the so-called Relative Modular Group [18] (see also [19][20][21]). For genus 1 this group is trivial, but for higher genus it basically mixes neighboring tori, which means it mixes holes and crosscaps (note that any surface of higher genus can be obtained by sewing genus 1 surfaces). Furthermore, the string amplitudes defined on these genus 2 open/unoriented surfaces must factorize into products of genus 1 amplitudes. For instance, a genus 2 amplitude can be thought of as two genus 1 amplitudes connected through an open string. For a discussion of the same kind of constraints for closed string amplitudes see [12][13][14][15][16].
The factorization and modular invariance of open/unoriented superstring theory amplitudes will induce generalized GSO projections, ensuring the consistency of the resulting string theories.
The correct Neveu-Schwarz (NS, antiperiodic conditions, target spacetime bosons) and Ramond (R, periodic conditions, target spacetime fermions) sectors were built from TMGT in [36]. There the minimal model given by the coset M_k = SU(2)_{k+2} × SO(2)₂ / U(1)_{k+2}, with the CS action (4.29), was considered. It induces, on the boundary, an N = 2 Super Conformal Field Theory (see also [33] for N = 1 SCFT). The boundary states of the 3D theory corresponding to the NS and R sectors are obtained as quantum superpositions of the 4 possible ground states of the gauge field B (wave functions corresponding to the first Landau level; the ground state is degenerate), that is to say, we need to choose the correct basis of states. The GSO projections emerge in this way as particular superpositions of those 4 states at each boundary (for further details see [36]). It still remains to be seen how these constraints emerge from genus 2 amplitudes in TM and its orbifolds; we will discuss these topics in detail on another occasion.
Neumann and Dirichlet World-Sheet Boundary Conditions, Monopole Processes and Charge Conjugation
It is clear by now that the operation of charge conjugation C selects important properties of the new gauged theory, and here we are referring to the properties of the 2D boundary string theory. Gauging P CT results in an open CFT with Neumann boundary conditions, while gauging P T results in Dirichlet boundary conditions. So C effectively selects the kind of boundary conditions! In the case that P CT gives a closed unoriented manifold we obtain an untwisted theory, while P T gives a twisted theory (orientifold, X ≅ −X). Again C effectively selects whether the theory is twisted or not. These results are summarized in table 2.
[Table 2: for every surface the twisted (P T) spectrum is q = kn/4, while the untwisted (P CT) spectrum is q = m.] Although these facts are closely related with string T-duality, the C operation does not give us the dual spectrum: upon gauging the full theory it only selects the Kaluza-Klein momenta or the winding number as the spectrum of the configurations being gauged.
From the point of view of the bulk theory, the gauged configurations corresponding to Neumann boundary conditions consist of two Wilson lines with one end attached to the 1D boundary of the new 2D boundary of the membrane at t = 1/2 and the other end attached to the 2D boundary at t = 0. For Dirichlet boundary conditions there is one single Wilson line with both ends in the 2D membrane boundary at t = 0 and a monopole insertion in the bulk of the 2D boundary at t = 1/2. Note that the Wilson lines no longer have a well defined direction in time, since we have gauged time inversion. These results are presented in figure 25.
For the case where we get unoriented manifolds the picture is quite similar. There is always an even number of bulk insertions. In the case of P CT the Wilson lines which are identified have the same charge, therefore there are no monopole processes involved. The two Wilson lines are glued at t = 1/2, becoming in the orbifolded theory one single line which has both ends attached to Σ₀ and one point in the middle belonging to Σ_{1/2}. In the boundary CFT we see two vertex insertions with opposite momenta. This construction corresponds to untwisted string theories, since the target space coordinates (corresponding to the gauge parameter Λ in TM(GT)) are not orbifolded.
In the case of P T the identification is made between charges of opposite signs. Then two Wilson lines become one single line with both ends attached to Σ₀, carrying charge q at one end and −q at the other. In Σ_{1/2} there is a monopole insertion which exchanges the sign of the charge. This construction corresponds to twisted string theories, since the target space coordinates are orbifolded (Λ ≅ −Λ).
As a final consistency check: in P CT the charges are always restricted to q = m, due to compatibility with the orbifold construction. By restricting the spectrum to this form we are actually eliminating the monopole processes for these particular configurations!
T-Duality and Several U(1)'s
The well known target space duality, or T-duality (for a review see [53]), of string theory is a combined symmetry of the background and of the spectrum of momentum and winding modes: it interchanges winding modes with Kaluza-Klein modes. From the point of view of the orbifolded TM(GT) corresponding to open and unoriented string theories, the projections P T truncate the charge spectrum to q = kn/4 (due to demanding q = −q̄), which in string theory is the winding number. The projections P CT truncate the charge spectrum to q = m (due to demanding q = q̄), which corresponds in string theory to the KK momenta. Note that P CT excludes all the monopole induced processes, while P T singles out only monopole induced processes [32,37,39].
T-duality is, from the point of view of the 3D theory, effectively the exchange of the two kinds of projections,

  T-duality : P T ↔ P CT , q = −q̄ ↔ q = q̄ .  (4.30)

This is precisely what it must do. The nature of this duality in 3D terms was discussed in some detail in [35], where it was shown that it exchanges topologically non-trivial matter field configurations with topologically non-trivial gauge field configurations. Although charge conjugation was not discussed there (only parity and time inversion), this mechanism can be thought of as a charge conjugation operation. Note that C² = 1.
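On the string side this is the familiar exchange of momentum and winding. A sketch of the dictionary (the radius map R → α′/R is the standard string statement and is an assumption on conventions here, since the text does not fix units):

  P CT sector: q = q̄ = m   ↔ Kaluza-Klein momenta m/R (no bulk monopoles);
  P T sector: q = −q̄ = kn/4 ↔ winding modes nR/α′ (only monopole processes);

T-duality, m ↔ n together with R ↔ α′/R, then maps one truncation into the other, exactly as (4.30) exchanges P T ↔ P CT in the bulk.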
It is also rather interesting that, from the point of view of the membrane, T-duality and modular transformations play the same role; in some sense both phenomena are linked by the 3D bulk theory.
So far we have considered only a single compact U(1) gauge group, but new phenomena emerge in the more general case. The extra gauge sectors are in any case necessary [39].
Take then the general action with gauge group U(1)^d × U(1)^D, with d U(1)'s noncompact and the remaining D compact. Because the charges are not quantized and there are no monopole-induced processes in the noncompact gauge sector, the mechanism is slightly different (see section 3); but the projection operator can act over the noncompact sector as well. For the case of open manifolds M/P T, I′ runs over the indices for which we want to impose Neumann boundary conditions (on Λ^{I′}) and I″ over the indices corresponding to Dirichlet boundary conditions. For the case of closed manifolds M/P T the picture is similar, but I′ runs over the indices for which we want Λ^{I′} to be orbifolded (obtaining an orientifold or twisted sector).
In the case of several U(1)'s, more general symmetries (and therefore orbifold groups) can be considered (for instance Z_N). Those symmetries are encoded in the Chern-Simons coefficient K_{IJ}.
Conclusion and Discussion
In this paper we have shown how one can get open and closed unoriented string theories from the Topological Membrane. There were two major ingredients: one is Horava's idea about orbifolding; the second is that the orbifold symmetry is a discrete symmetry of TMGT. From the point of view of the membrane, the orbifold works as a projection onto field configurations obeying either P T or P CT symmetries (the only two kinds of discrete symmetries compatible with TMGT). For P CT type projections we obtained Neumann boundary conditions for open strings and untwisted sectors for closed unoriented strings. For P T type projections we obtained Dirichlet boundary conditions for open strings and twisted sectors for closed unoriented strings. For P CT, q = q̄ = m, so only the string Kaluza-Klein modes survive; in this case the monopole induced processes are completely suppressed. For P T, q = −q̄ = kn/4, so only the string winding modes survive; in this case only monopole induced processes are present, the charges being purely magnetic. Charge conjugation C plays an important role in all these processes, acting as a Z₂ symmetry of the string theory target space. These results can be generalized to symmetries of the target space encoded in the tensor K_{IJ} and are closely connected both with modular transformations and with T-duality, which exchange P T ↔ P CT.
This work is the first part of our study of open and unoriented string theories. In the second part [54] we shall derive the partition functions of the boundary CFT from the bulk TMGT [55][56][57][58][59].
Another important issue to address in future work will be to generalize the constructions presented here to non-trivial boundary CFT's [54], for example WZNW models and the various coset models which can be obtained from TM with non-Abelian TMGT.
As a final remark let us note that the string photon Wilson line has been left out. TM(GT) can take account of it as well: for any closed Σ there is a symmetry of the gauge group coupling tensor, K_{IJ} → K_{IJ} + δ_I χ_J − δ_J χ_I, where each χ_I = χ_I[A] is taken to be some function of the A^I's. This transformation affects only B_{IJ}, and the induced terms vanish upon integration by parts. Once we consider the orbifold of the theory, the new orbifolded Σ_o has a boundary and the induced terms no longer vanish; instead they induce a new action on the boundary ∂Σ_o, which is precisely the gauge photon action of open string theories. As is well known, the choice of the gauge group of string theory, i.e. the Chan-Paton factor structure carried by this photon Wilson line, will be determined by the cancellation of the open string theory gauge anomalies (see [24] and references therein). We postpone the proper treatment of this issue from the point of view of TM to another occasion [54].

Acknowledgments. Supported by the PRAXIS XXI/BD/11461/97 grant from FCT (Portugal). The work of IK is supported by PPARC Grant PPA/G/0/1998/00567 and EUROGRID EU HPRN-CT-1999-00161.
"Physics"
] |
Computer-Aided Designing and Manufacturing of Lingual Fixed Orthodontic Appliance Using 2D/3D Registration Software and Rapid Prototyping
The availability of 3D dental model scanning technology, combined with the ability to register CBCT data with digital models, has enabled the fabrication of CAD/CAM-designed orthognathic surgical splints, customized brackets, and indirect bonding systems. In this study, custom lingual orthodontic appliances were virtually designed by merging 3D model images with lateral and posteroanterior cephalograms. By exporting the design information to 3D CAD software, we produced a stereolithographic prototype and converted it into a cobalt-chrome alloy appliance, combining rapid prototyping with traditional prosthetic investment and casting techniques. Besides allowing the bonding of the appliance to be reinforced, CAD technology simplified the fabrication process by eliminating the soldering phase. This report describes CAD/CAM fabrication of a complex anteroposterior lingual bonded retraction appliance for intrusive retraction of the maxillary anterior dentition. Furthermore, the CAD/CAM method eliminates the extra step of determining the lever arm on the lateral cephalogram and the subsequent design modifications on the study model.
Introduction
Advances in digital imaging systems, computer-aided design, and computer-aided manufacturing (CAD/CAM) technology are providing new possibilities in orthodontics. The application of CAD/CAM for establishing a virtual setup and fabricating transfer tray/jigs [1][2][3] has greatly improved the indirect bonding process. CAD/CAM has also enabled 3D virtual diagnosis, treatment planning, wafer fabrication, and customized bracket design [4][5][6][7]. Its use in orthognathic surgery has shown multiple advantages including reducing laboratory time for making surgical splints and improving accuracy for repositioning of the maxilla and mandible.
Although the lingual orthodontic appliance provides distinctive esthetic advantages, its use has been limited due to increased chair time and more difficult mechanical control. Application of lingual orthodontic appliances is becoming easier with new technologies such as virtual positioning of the brackets and indirect bonding systems which utilize virtual setup models.
Accurate surface imaging is required to digitally manufacture orthodontic appliances. Even when CBCT scans are used for the diagnosis or design of an appliance, separate surface imaging of the dentition is required to compensate for the poor surface rendering of CBCT. Surface images of the dentition are typically obtained from a 3D optical scanner and registered with a CBCT scan. However, taking a CBCT solely for the fabrication of an orthodontic appliance is impractical considering the expense and radiation dose. Recently, 3D dental CAD/CAM solution software utilizing 2D lateral and posteroanterior (PA) cephalograms and 3D virtual dental models (3Txer version 2.5, Orapix, Seoul, Korea) has been introduced. Choi et al. evaluated the accuracy of orthognathic surgical wafers fabricated using this software and concluded that the new method using cephalograms and a surface scan can be regarded as an effective alternative to conventional 3D surface scan and CBCT methods [7]. The lateral cephalogram is important in designing orthodontic appliances for en-masse retraction of the maxillary anterior dentition. The lever arm length of the appliance is determined by the location of the center of resistance of the maxillary anterior teeth on the lateral cephalogram. The appliance design is then drawn on the study model. However, there is room for error when transferring design information from the lateral cephalogram to the actual study model.
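At its core, the registration step pairs landmarks digitized on the projected model image with the same landmarks on the cephalogram and solves for the best-fitting similarity transform. The sketch below is a generic least-squares (Umeyama/Procrustes) alignment, offered only as an illustration of the principle; it is not the algorithm implemented in the 3Txer software, and the landmark coordinates are invented.

```python
import numpy as np

def rigid_register_2d(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping 2D source landmarks onto destination landmarks.
    src, dst: (N, 2) arrays of corresponding point coordinates."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d                 # centered point sets
    U, sig, Vt = np.linalg.svd(D.T @ S)           # SVD of cross-covariance
    R = U @ Vt                                    # optimal rotation
    if np.linalg.det(R) < 0:                      # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    scale = sig.sum() / (S ** 2).sum()            # optimal isotropic scale
    t = mu_d - scale * R @ mu_s                   # optimal translation
    return scale, R, t

# Example: three landmarks digitized on both images, shifted by (2, 1).
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0]])
dst = np.array([[2.0, 1.0], [12.0, 1.0], [2.0, 16.0]])
s, R, t = rigid_register_2d(src, dst)
print(s, R, t)   # scale ~1, identity rotation, translation (2, 1)
```

With more than a handful of well-spread landmarks, the residual of this fit also gives a direct check on how well the two image spaces agree before the appliance is designed.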
In contrast, the CAD/CAM method can precisely transfer the design information from the lateral cephalogram to the final design of the appliance. To minimize these errors, this study utilizes merged three-dimensional (3D) model images and cephalograms to virtually design custom lingual appliances. In addition to improving design accuracy, CAD/CAM technology simplifies fabrication by eliminating the soldering process. It also provides a mesh-type base on the lingual pads to increase the bonding strength of the appliance. Additionally, rapid-prototyping technology makes it possible to incorporate undercuts on the lingual pad base, which is not possible with conventional fabrication methods. This study introduces a technique for CAD/CAM fabrication of lingual orthodontic appliances and compares the final position of the cemented appliance with the planned position on the lateral cephalogram.
Materials and Methods
This new custom lingual appliance is named kinematics of lingual bar on nonparalleling technique (KILBON). The torque on the maxillary anterior segment is determined by the center of resistance (Cres) and the corresponding retraction force vector. In the sagittal plane, the retraction vector is determined by the vertical position of a palatal temporary skeletal anchorage device (TSAD) and the location of the lever arm [8][9][10]. When anterior teeth are retracted with palatal TSADs, the lever arm can be located closer to the center of resistance of the maxillary anterior teeth when compared to retraction with buccal TSADs.
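The role of the lever arm height can be illustrated with a toy 2D statics check: if the line of action of the coil-spring force passes through Cres, the moment about Cres vanishes and the anterior segment translates bodily rather than tipping. The coordinates and the 2 N force magnitude below are illustrative values only, not clinical data from this study.

```python
import numpy as np

def moment_about_cres(hook, tsad, cres, f_mag=2.0):
    """z-moment (N*mm) about Cres of a spring force pulling the hook toward the TSAD."""
    u = (tsad - hook) / np.linalg.norm(tsad - hook)  # unit force direction
    f = f_mag * u                                    # retraction force vector (N)
    r = hook - cres                                  # lever from Cres to load point (mm)
    return r[0] * f[1] - r[1] * f[0]                 # 2D cross product r x f

cres = np.array([0.0, 0.0])                 # Cres of the anterior segment (sagittal plane)
tsad = np.array([18.0, 0.0])                # palatal TSAD, level with Cres
print(moment_about_cres(np.array([-2.0, -8.0]), tsad, cres))  # short hook: nonzero -> tipping
print(moment_about_cres(np.array([-2.0,  0.0]), tsad, cres))  # hook at Cres level: ~0 -> translation
```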
The KILBON system consists of the following components: palatal TSADs, anterior lingual pads connected by an archwire, and posterior segments (Figure 1). The anterior segment is made of a 0.036-inch wire connected to lingual pads splinting the six anterior teeth into a single unit. Two lever arms are attached to the anterior segment and connected to the TSADs with NiTi closed-coil springs for direct retraction. This provides translation of the anterior segment. Each posterior segment is also splinted as one unit, and a short tube extends from the maxillary first molar. This tube functions as a sliding yoke and vertical hook for intrusion of the posterior teeth. A 0.036-inch guide wire is connected to the retraction hooks and extends distally through the tube. The posterior extension wire gives vertical stabilization to the anterior group of teeth, preventing unwanted extrusion or intrusion.
The KILBON appliance was designed with dental CAD/CAM solution software (3Txer version 2.5, Orapix, Seoul, Korea) and commercial 3D CAD software (Rhinoceros 3D v5.0, McNeel & Associates, USA). The 3D image of the study model was produced using a laser scanner (KOD-300 3D, Orapix, Seoul, Korea; accuracy ±20 μm). The model image was registered with the lateral and frontal cephalograms using the 3Txer software, as described by Choi and colleagues (Figure 2) [11].
On the lateral cephalogram, the center of resistance (Cres) was marked using the measurement function within the software. The placement location of the TSADs and the lever arm length were determined based on the desired orientation of the retraction vector, and the preliminary construction of the appliance was designed using this information (Figure 3). The remaining components of the appliance were then designed on the virtual model; the lingual archwire connected to the anterior pads is illustrated in Figure 4. Before producing a stereolithographic prototype, any defects or voids were examined with reverse engineering software (Rapidform 2006, 3D Systems, Seoul, Korea). A prototype of the KILBON appliance was manufactured using a rapid-prototyping machine (Projet MD3000 Plus, 3D Systems, Rock Hill, SC, USA). The actual appliance was then manufactured from this stereolithographic prototype using conventional dental casting. The lingual arch component and the right and left posterior tube segments were invested using phosphate-bonded investment material and cast in cobalt-chrome alloy. After final finishing and polishing, a transfer jig was fabricated for indirect bonding of the appliance.
Prior to try-in of the appliance, the tooth surfaces were first etched with 37% phosphoric acid gel. To optimize the design with the least distortion during the fabrication process and to achieve movement as close as possible to en-masse anterior retraction, various lever arm designs were applied in five patients. After placing the KILBON appliance, occlusal photographs and lateral cephalograms were taken. The positional accuracy and rigidity of each design were evaluated by comparing the planned design on the 3D model with the new occlusal photograph and by superimposing the new lateral cephalogram on the initial cephalogram containing the design information (Figure 6).
Results and Discussion
The rigidity and stability of the appliance during retraction varied depending on the lever arm design. When 0.8 mm wire was used for the lever arm (case 1, 17-year-old female), the lever arm bent slightly during en-masse retraction (Figures 6(a) to 6(c)). In cases 2 and 3 (23-year-old females), the wire diameter was increased to 0.9 mm to withstand the retraction force. In these cases, the final position of the appliance deviated slightly from the planned position due to deformation of the anterior lingual wire during post-casting polishing (Figures 6(d) to 6(i)). To overcome this in case 4, an auxiliary wire was added between the extension arm and the lever arm to prevent positional change of the lever arm and distortion during casting (22-year-old female, Figures 6(j) to 6(l)). Stability and positional accuracy were improved with this addition. In case 5 (26-year-old female), multiple auxiliary wires were applied to prevent distortion during casting and en-masse retraction, resulting in the best outcome in terms of stability and positional accuracy (Figures 6(m) to 6(o)). In this case, the cemented KILBON appliance maintained the desired position, as planned in the software.

CAD/CAM technology shows a range of promising possibilities in the fabrication of orthodontic appliances. When a 3D model and CBCT scans or lateral cephalograms are combined, the lever arm vector can be virtually designed in the software, and this design information can be saved and exported to other 3D CAD software. Furthermore, this framework design can be reused for other patients after minor adjustments. The appliance design can also be converted to fabricate customized brackets following the retraction of the anterior segment. When used with virtual articulation software, premature contacts on the appliance can be eliminated and chairside adjustment time is reduced. The treatment result is easily evaluated by comparing registered pre- and post-treatment lateral cephalograms.
Another advantage of the CAD design method is improved bonding of the lingual bracket base. One of the most important factors in the bonding of orthodontic brackets is the type of bracket base [12]. The most commonly used bracket bases are perforated bases, foil mesh bases, photo-etched bases, and integrated cast-type bases. The highest resolution of commercially available stereolithographic printers is approximately 0.3 mm [13], which is sufficient for providing the retention feature on the base of a stereolithographic prototype. The base of a metal bonded attachment must be manufactured so that a mechanical interlock between the bonding material and the attachment surface can be achieved [14]. For steel brackets, the bonding material attaches mechanically to the bracket base by penetrating the undercuts, usually provided by a fine mesh welded or brazed onto the back of the metal bracket. In another study on a CAD/CAM-fabricated lingual bracket [15], the smooth surface of the bracket base was sandblasted with aluminum oxide (Rocatec-Pre/Rocatec-Plus, 3M ESPE, USA) to enhance the retention of the gold alloy bracket. In this study, sandblasting was unnecessary because of the built-in retention features designed into the bracket base. 3D scanning of the models with a high-resolution scanner enabled individualization of the brackets using a precise image of the lingual surface. This is necessary since the lingual surfaces of teeth vary much more widely than labial surfaces [16][17][18]. This method also minimizes bracket thickness [19].
The vertical height of the retraction hooks controls the resulting movement of the anterior teeth: tipping, bodily movement, or lingual root movement during retraction. The double J retractor introduced two lever arm hooks for space closure [20]; its long anterior lever arm hooks were designed to pass the line of action of the force through the center of resistance. Unlike traditional lingual brackets and archwires, the one-body structure of the lingual pads and lingual wire eliminates wire play in the brackets and prevents loss of torque control during retraction. Furthermore, the single-body design reduces the high laboratory fees for lingual brackets. The KILBON appliance was fabricated by casting a stereolithographic prototype. During casting, the fragile parts of the appliance are subject to distortion and require reinforcement. In most cases, conventional dental casting utilizes a wax pattern, and distortion of the casting can be attributed to distortion of the wax pattern. The stereolithographic prototype is much more rigid, and distortion is therefore reduced in comparison to the traditional lost-wax technique. However, some distortion can be caused by hardening of the investment around the prototype, whereby setting and hygroscopic expansion could lead to uneven deformation of the walls of the prototype, depending on its thickness and configuration. The addition of auxiliary wire and selection of the appropriate wire diameter result in less distortion of the appliance.
In this study, the KILBON appliance was applied on five patients. A greater sample size is required for a more thorough evaluation. Further studies are required to optimize the angulation of the lever arm and resulting retraction vectors of the anterior and posterior segments.
Conclusions
CAD technology, using merged 3D model images and cephalograms or CBCT scans, improves the accuracy of orthodontic appliance design. Using computer-assisted design and manufacturing of the KILBON appliance, the following results were obtained: (1) the use of auxiliary wires reduced the distortion of the appliance during casting; (2) the wire diameter should be larger than 0.9 mm to withstand the retraction force.
Disclosure
No author of this paper will benefit from the production or sale of the 3D KILBON. | 3,017 | 2014-05-11T00:00:00.000 | [
"Materials Science"
] |
Remote Sensing Monitoring of Vegetation Dynamic Changes after Fire in the Greater Hinggan Mountain Area: The Algorithm and Application for Eliminating Phenological Impacts
Fires are frequent in boreal forests, affecting large forest areas. The detection of forest disturbances and the monitoring of forest restoration are critical for forest management. Vegetation phenology information in remote sensing images may interfere with the monitoring of vegetation restoration, but little research has been done on this issue. Remote sensing and the geographic information system (GIS) have emerged as important tools in providing valuable information about vegetation phenology. Based on MODIS and Landsat time-series images acquired from 2000 to 2018, this study uses a spatio-temporal data fusion method to construct reflectance images of vegetation with a relatively consistent growth period to study the vegetation restoration after the Greater Hinggan Mountain forest fire of 1987. The influence of phenology on vegetation monitoring was analyzed through three aspects: band characteristics, the normalized difference vegetation index (NDVI), and disturbance index (DI) values. The comparison of the band characteristics shows that in the blue and red bands, the average reflectance values of the study area after eliminating the phenological influence are lower than those without eliminating the phenological influence in each year. In the near-infrared band, the average reflectance value after eliminating the influence of phenology is greater than the value with phenological influence in almost every year. In the second shortwave infrared band, the average reflectance value without phenological influence is lower than that with phenological influence in almost every year. The analysis of the NDVI and DI values in each year shows that without eliminating the phenological influence the NDVI and DI curves vary considerably and show no obvious trend. After eliminating the phenological influence, the changing trends of the NDVI and DI values are more stable, showing that the forest in the region was impacted by other factors in some years as well as the recovery trend. The results show that the spatio-temporal data fusion approach used in this study can effectively eliminate phenological effects, and that eliminating the phenological impact provides more reliable information about changes in vegetation regions affected by forest fires. The results will be useful as a reference for future monitoring and management of forest resources.
Introduction
Forests play an irreplaceable role in maintaining the ecological balance of the terrestrial biosphere due to their wide coverage, complex distribution, species diversity [1,2], and multifunctional, multi-value characteristics [3].
Fires are among the most serious disturbances globally and are particularly prevalent in boreal forests [4]. Forest fires promote dynamic changes in ecosystem structure and function, have both positive and negative impacts on ecosystems, and have a profound impact on human life and regional development [5][6][7][8]. On the one hand, fire poses a severe health hazard to people living in the surroundings [9], impacts forest ecosystems, and burns an average of about 350 million ha of forest land per year, which is one of the primary causes of the decline in global forest stocks. The burning of forests also causes severe local economic losses, and forest fires have long-term environmental and climate impacts. On the other hand, a certain frequency and intensity of fire can maintain the balance of forest ecology and play an essential role in preserving biodiversity. For example, fires can help regulate fuel accumulation, restore vegetation by removing fungi and microorganisms, control diseases and insects, and allow ecosystems to gain more energy through exposure to solar radiation, mineral soil, and nutrient release [10]. With climate change and global warming, the frequency of forest fires is increasing, and fire receives increasing attention as an integral part of global environmental change studies [11,12].
Global fires emit about 2.1 PgC (1 PgC = 10¹⁵ g of carbon) of carbon per year, which is equivalent to 50%-200% of annual terrestrial carbon sinks; about 35% of these emissions are related to forests [13]. The post-fire forest regeneration process is therefore extremely important. While carbon emitted by forest fires is injected into the atmosphere, post-fire vegetation regeneration and carbon sequestration by woody vegetation may help to reduce the carbon in the atmosphere [14]. Forest disturbance and restoration can therefore affect the energy flow and biogeochemical cycles and are considered the primary mechanism for carbon transfer between the surface and the atmosphere, playing an important role in regional and global carbon cycles [15][16][17]. The detection of forest disturbances and the monitoring of post-fire forest restoration are essential for both ecological research and forest management. Understanding the dynamics of forest regeneration after a fire can help to assess forest resilience and adequately guide forest management after the disturbance. Therefore, information about the spatial patterns and temporal trends of the forest helps in restoration after a fire.
Considering the small spatial coverage, limited sample points, low site accessibility, and high labor costs, site sampling is not suitable for monitoring large-scale vegetation dynamics after a fire, whereas satellite remote sensing provides an economical and effective tool for monitoring large-scale forest changes [18,19]. Fires cause profound changes in ecosystems: vegetation is consumed, leaf chlorophyll is destroyed, soil is further exposed, and carbonization and moisture changes in vegetation roots cause large spectral changes, which can be detected from satellite data [20,21]. Optical remote sensing data, such as the widely used Landsat images, have proven very suitable for forest disturbance detection and forest change monitoring because they offer the necessary spatial resolution (30 m, consistent with the scale of most local vegetation changes [22]) and spectral coverage (visible, near-infrared, shortwave infrared, and thermal infrared bands) to capture most forest disturbance events caused by natural processes or management [23]. At the same time, time-series remote sensing data spanning some 40 years of observations (e.g., Landsat) provide excellent potential for trajectory monitoring of forest dynamics after fire [24,25], as long-term monitoring of forest recovery after a fire is often required [26,27].
Numerous studies have been carried out on different aspects of post-fire forest restoration [23,[28][29][30][31]], and many scholars have studied vegetation phenology. The experimental results of Frison et al. [32] show that radar data are more accurate for phenological estimation than optical data. Flavio et al. [33] tested a phenology-based vegetation mapping method and proved it effective. Some studies have calculated the phenological characteristics of mangroves to derive the environmental driving factors that affect their growth [34,35]. Vegetation phenology can provide information about vegetation dynamics and response after forest fires [36]. However, there is little research analyzing how the phenological signal contained in remote sensing data influences the monitoring of dynamic vegetation changes. Because vegetation phenology differs significantly between growth stages, and in order to avoid the "pseudo-variation" of time-series vegetation indices caused by interannual phenological differences, some studies have chosen to use images acquired near the vegetation growth peak, together with long time-series of vegetation indices, to monitor post-fire forest recovery [37]. However, due to current technical limitations, it is challenging to obtain remote sensing data with both high spatial resolution and high temporal resolution. Coarse resolution (e.g., MODIS, 250 m/500 m/1000 m) obscures the details of surface features and affects the observation results, while the long revisit period (16 days) of satellites such as Landsat, frequent cloud contamination, and other atmospheric conditions limit their application to long time-series detection of surface objects without phenological interference. Therefore, long-term observations relying only on images near the vegetation growth peak in cloudy areas may result in gaps in the study years.
Taking the forest restoration in the Greater Hinggan Mountain area after the "5.6 fire" in 1987 as an example, this study aims at demonstrating the effect of a spatiotemporal fusion algorithm in eliminating the phenological impact when monitoring vegetation restoration using remote sensing images. We used Landsat and MODIS time-series images to study the vegetation during 2000-2018, applying the spatiotemporal fusion algorithm to eliminate the influence of phenological factors in the Landsat images. We compared the band characteristics and the NDVI and DI indices prior to and after the elimination of the phenology effect, and further explored the impact of phenology on dynamic forest monitoring. The results of this study prove that the spatiotemporal fusion algorithm can effectively eliminate phenological factors in remote sensing images, and that the elimination of phenological effects provides more reliable information on vegetation restoration. Thus, the present study provides a scientific reference for post-fire forest reconstruction and ecological restoration.
Study Area
The Greater Hinggan Mountain area is located in Heilongjiang Province and the northern part of the Inner Mongolia Autonomous Region and forms the watershed between the Mongolian Plateau and the Songliao Plain, bounded by latitude 50°10′N to 53°33′N and longitude 121°12′E to 127°00′E (Figure 1). The area is more than 1200 km long and 200-300 km wide, with an average altitude of 1200-1300 m above mean sea level. The Greater Hinggan Mountain area has a typical cold temperate continental monsoon climate with warm summers and cold winters. The annual average temperature of the area is −2.8 °C, and the lowest temperature is −52.3 °C. The precipitation, which peaks in summer, is 420 mm annually and is unevenly distributed throughout the year, i.e., more than 60% occurs between June and August [38]. The Greater Hinggan Mountain is the largest modern state-owned forest area, with a total area of 8.46 × 10⁴ km² and forest coverage of 6.46 × 10⁴ km²; the forest coverage rate is thus about 76.4%, and the total growing stock is about 5.01 × 10⁸ m³, accounting for 7.8% of the national total [39]. The Greater Hinggan Mountain holds large forest resources, serving as an important state-owned forest area and a vital timber production area in China. At the same time, this forest area has experienced one of the most severe forest fires in China. On 6 May 1987, a severe forest fire occurred in the northern part of the Greater Hinggan Mountain. The burned area was 1.133 × 10⁷ km² and the area of over-fired forest land was 1.114 × 10⁷ km², of which the affected area was 8.17 × 10⁵ km². The fire seriously affected the social, economic, and ecological benefits of the forest area, causing unprecedented heavy losses to the country. Since the catastrophic forest fire in the Greater Hinggan Mountains in 1987, this place has been one of the key areas for research on fire prevention and post-fire forest management [38,40,41].
The burned area of the "5.6 Fire" was extracted in a previous study [42]. The entire burned forest area spanned two Landsat scenes (Path 121/122, Row 23), but it is difficult to acquire the two scenes simultaneously in each year. Considering that around 90% of the burned forest area is within the scene of path 122 row 23, we extracted a sample area ( Figure 1) from Landsat path 122 row 23 as the study area for the recovery monitoring [39].
Data Used and Preprocessing
A total of 16 Landsat surface reflectance images from Path 122, Row 23 with cloud cover at or below 10% during the vegetation growth period from 2000 to 2018 were considered. The data were downloaded from the United States Geological Survey (USGS, https://earthexplorer.usgs.gov/); details of the data are given in Table 1. We selected the image with the least cloud cover during the vegetation growth period of each year. Although the cloud cover of the 2001 image is 10%, the study area occupies only part of the scene and most of the clouds are located outside it, so the 2001 image was still used. The Landsat surface reflectance data were corrected at the sub-pixel level by topographic and atmospheric correction [43,44]. The FMASK algorithm was used to detect cloud cover and cloud shadow and to generate a mask [45,46]. The MODIS 16-day synthetic vegetation index product MOD13Q1 for 2000-2018 was downloaded from the National Aeronautics and Space Administration (NASA) and pre-processed. The nadir BRDF-adjusted reflectance product MCD43A4 V006, with a spatial resolution of 500 m, was also obtained from NASA; it provides daily reflectance data for spatial and temporal fusion with Landsat data to generate the surface reflectance on the target date. The MODIS data were converted from the Sinusoidal projection to the UTM projection with WGS84 zone 51N coordinates. After this processing, the river, road, and building areas in each image were masked based on the 10 m global resolution land cover data [47], supplemented by visual interpretation, and the boundary of the study area was extracted.
Vegetation Phenological Information
Two processes were used to extract phenological information: smooth reconstruction of the temporal vegetation index and extraction of phenological parameters. Much previous research has addressed the smooth reconstruction of NDVI time-series data; methods include least-squares approaches (e.g., Savitzky-Golay filtering, asymmetric Gaussian function fitting, and logistic function fitting) as well as spectrum-analysis techniques such as Fourier fitting, Fourier correction algorithms, harmonic analysis, and wavelet analysis. There has also been a large number of studies on the extraction of phenological parameters [48][49][50]. Based on a comparison of these methods, we used an adaptive Savitzky-Golay filter to reconstruct the MOD13Q1 NDVI time-series and a dynamic threshold method to extract the vegetation phenological indices of each year. For the data processing we used the TIMESAT software [51].
The date of the peak NDVI value carries a large degree of uncertainty, whereas the beginning and end dates of the growth period are relatively easy to determine; the midpoint of the growth period, derived from them, is therefore more reliable and typically falls within the peak season of vegetation growth. The vegetation at the midpoint of the growth period in the study area has a relatively consistent growth state from year to year [52], so comparing remote sensing indices at this time in each year can effectively eliminate the phenological influence. Therefore, the midpoint of the vegetation growth period in each year was selected as the date of the image to be synthesized.
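The workflow just described (Savitzky-Golay smoothing of an annual NDVI series, then a dynamic threshold to locate season start, end, and midpoint) can be sketched as follows. The window length, polynomial order, and 50% amplitude threshold below are illustrative choices, not the TIMESAT settings used in the study, and the NDVI series is synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter

def growing_season_midpoint(ndvi, doy, frac=0.5):
    """Smooth a one-year NDVI series, mark season start/end where the curve
    crosses a dynamic threshold (fraction `frac` of the seasonal amplitude),
    and return start, end, and midpoint as day-of-year values."""
    smooth = savgol_filter(ndvi, window_length=7, polyorder=3)
    thr = smooth.min() + frac * (smooth.max() - smooth.min())
    above = np.where(smooth >= thr)[0]
    start, end = doy[above[0]], doy[above[-1]]
    return start, end, (start + end) / 2.0

# 23 MOD13Q1-style 16-day composites for one year, with synthetic noise.
doy = np.arange(1, 366, 16)
ndvi = 0.25 + 0.5 * np.exp(-0.5 * ((doy - 200) / 45.0) ** 2)
ndvi += np.random.default_rng(0).normal(0, 0.02, doy.size)
print(growing_season_midpoint(ndvi, doy))   # midpoint near DOY 200
```

On this synthetic series the midpoint lands near day 200, consistent with the roughly day-200 midpoints the study reports for the real data.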
Synthesis of Target Image Based on STARFM Fusion Algorithm
Traditional image fusion methods, such as intensity-hue-saturation (IHS) transformation [53], principal component substitution (PCS) [54], and wavelet decomposition [55], focus on combining the spectral properties of low-resolution data with the high spatial resolution of panchromatic images to generate high-resolution multispectral images. These methods are useful for exploiting the different spectral and spatial characteristics of multi-sensor data, but they cannot enhance both the spatial resolution and the temporal coverage. In the present study, we quantitatively captured the changes in radiometric measurements (surface reflectance) associated with phenology and studied their effects on the monitoring of post-fire vegetation restoration. The STARFM (spatial and temporal adaptive reflectance fusion model) algorithm is used to predict the reflectance on the target date. The algorithm considers the influence of spatial distance on the predicted pixel as well as the spectral and temporal differences between pixels. A homogeneous pixel in the MODIS data is related to the corresponding Landsat pixel by

$L(x, y, t_k) = M(x, y, t_k) + \varepsilon_k,$

where $(x, y)$ is the spatial position of the homogeneous pixel, $t_k$ is the image acquisition time, $L(x, y, t_k)$ is the reflectance of the Landsat pixel, $M(x, y, t_k)$ is the reflectance of the MODIS pixel, and $\varepsilon_k$ is the difference in reflectance between the two kinds of data. At the base date $t_0$, the MODIS and Landsat reflectance of the same pixel are related in the same way:

$L(x, y, t_0) = M(x, y, t_0) + \varepsilon_0.$

When the ground cover type and the systematic error between the two types of data remain unchanged, $\varepsilon_k = \varepsilon_0$, and the two relations combine to

$L(x, y, t_k) = M(x, y, t_k) + L(x, y, t_0) - M(x, y, t_0).$

MODIS pixels are, however, mostly non-homogeneous, and the solar bidirectional reflectance and the surface coverage type change with time, which makes the ideal conditions above challenging to meet. The key point of the method is therefore to find similar pixels among the neighbours of the target pixel and let them stand in for homogeneous pixels. We used a window-based threshold method to search for similar pixels: a pixel $(x_i, y_j)$ in the moving window is considered a similar pixel of the target pixel if it satisfies

$\left| L(x_i, y_j, t_0) - L(x_{w/2}, y_{w/2}, t_0) \right| \le \sigma \cdot 2/m,$

where $w$ is the size of the moving window, $(x_{w/2}, y_{w/2})$ is the position of the predicted (central) pixel, $\sigma$ is the standard deviation of the Landsat surface reflectance, and $m$ is the number of ground-object classes in the moving window. The reflectance value of the predicted pixel can then be expressed as

$L(x_{w/2}, y_{w/2}, t_k) = \sum_{i=1}^{N} W_i \left[ M(x_i, y_j, t_k) + L(x_i, y_j, t_0) - M(x_i, y_j, t_0) \right],$

where $N$ is the number of similar pixels in the window and $W_i$ is the contribution weighting coefficient of neighbouring pixel $i$ to the target pixel. The weighting coefficient is calculated from three factors: the spectral, temporal, and spatial distances between the neighbouring pixel and the central pixel. The spectral distance is the spectral difference between simultaneous MODIS and Landsat data at the same location; since the MODIS pixel reflectance can be considered a mixture of the reflectances of multiple Landsat pixels in the same region, the smaller the spectral distance, the more similar the Landsat pixel is to the target pixel and the larger the weight assigned. The temporal distance is the difference between the MODIS pixel values at the two dates, representing the change of the surface coverage over this period; the smaller the temporal distance, the smaller the land cover change, the larger the contribution of the pixel to the central pixel value, and the larger the weight assigned. The spatial distance is the distance between the neighbouring pixel and the target pixel; the smaller the spatial distance, the larger the weighting coefficient. The three distances are calculated as

$S_i = \left| L(x_i, y_j, t_0) - M(x_i, y_j, t_0) \right|, \quad T_i = \left| M(x_i, y_j, t_k) - M(x_i, y_j, t_0) \right|, \quad d_i = 1 + \sqrt{(x_{w/2} - x_i)^2 + (y_{w/2} - y_j)^2} / A,$

where $S_i$ is the spectral distance, $T_i$ is the temporal distance, $d_i$ is the spatial distance, $(x_i, y_j)$ is the spatial position of the similar pixel, and $A$ is the weight adjustment coefficient, a constant. The combined distance $C_i$ and the normalized weight coefficient $W_i$ are given as

$C_i = S_i \cdot T_i \cdot d_i, \quad W_i = \frac{1/C_i}{\sum_{i=1}^{N} (1/C_i)}.$

After selecting similar pixels, we filtered out poor-quality pixels. If the spectral and temporal distances of a similar pixel are smaller than those of the target pixel at the center of the moving window, the pixel provides better spectral and temporal information than the target pixel; otherwise, it is an unqualified similar pixel. When the uncertainty factors $\sigma_{lm}$ and $\sigma_{mm}$ of the Landsat and MODIS surface reflectance are introduced into the similar-pixel screening, a qualified similar pixel must satisfy the inequalities

$S_i < \left| L(x_{w/2}, y_{w/2}, t_0) - M(x_{w/2}, y_{w/2}, t_0) \right| + \sigma_{lm}, \quad T_i < \left| M(x_{w/2}, y_{w/2}, t_k) - M(x_{w/2}, y_{w/2}, t_0) \right| + \sigma_{mm},$

where $\sigma_{lm}$ is the uncertainty factor between the MODIS and the Landsat reflectance values and $\sigma_{mm}$ is the uncertainty factor of the MODIS reflectance at different dates. When all observed pixel reflectance values are independent of each other, $\sigma_{lm}$ and $\sigma_{mm}$ are expressed as

$\sigma_{lm} = \sqrt{\sigma_l^2 + \sigma_m^2}, \quad \sigma_{mm} = \sqrt{2\sigma_m^2},$

with $\sigma_l$ and $\sigma_m$ the uncertainties of the Landsat and MODIS reflectance, respectively. After extracting the phenological indices of the vegetation, the date of the mid-growth period of each year is taken as the date of the image to be synthesized, and the Landsat and MCD43A4 V006 data are fused to construct reflectance data of vegetation with a relatively consistent growth period in different years.
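A compact single-band, single-pixel sketch of this prediction scheme is given below. It follows the structure of the equations above, but the window size, σ, class count, and adjustment constant A are illustrative values, and real STARFM implementations add refinements (multiple input pairs, quality flags) that are omitted here.

```python
import numpy as np

def starfm_pixel(L0, M0, Mk, i, j, win=31, sigma=0.01, n_class=4):
    """Predict Landsat-scale reflectance at date t_k for target pixel (i, j)
    from a Landsat/MODIS pair at t_0 (L0, M0) and MODIS at t_k (Mk),
    all resampled onto the same grid (single band)."""
    h = win // 2
    r0, r1 = max(0, i - h), min(L0.shape[0], i + h + 1)
    c0, c1 = max(0, j - h), min(L0.shape[1], j + h + 1)
    L, M, Mt = L0[r0:r1, c0:c1], M0[r0:r1, c0:c1], Mk[r0:r1, c0:c1]

    similar = np.abs(L - L0[i, j]) <= 2.0 * sigma / n_class   # similar-pixel test
    S = np.abs(L - M) + 1e-6                                  # spectral distance
    T = np.abs(Mt - M) + 1e-6                                 # temporal distance
    yy, xx = np.mgrid[r0:r1, c0:c1]
    d = 1.0 + np.hypot(yy - i, xx - j) / (win / 2.0)          # spatial distance (A = win/2)
    C = np.where(similar, S * T * d, np.inf)                  # combined distance
    W = (1.0 / C) / np.sum(1.0 / C)                           # normalized weights
    return np.sum(W * (Mt + L - M))                           # predicted L(i, j, t_k)

rng = np.random.default_rng(1)
L0 = 0.3 + 0.02 * rng.standard_normal((64, 64))
M0 = L0 + 0.01                    # constant Landsat/MODIS bias at t_0
Mk = M0 + 0.05                    # uniform greening by t_k
print(starfm_pixel(L0, M0, Mk, 32, 32))   # ~0.35, i.e. L0 plus the MODIS change
```

In this degenerate toy case, where the two sensors differ only by a constant offset, the prediction collapses to the base Landsat value plus the MODIS temporal change, which the example reproduces.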
Vegetation Indices
We used NDVI and DI to characterize post-fire vegetation restoration status.
NDVI
As one of the best-known vegetation indices, NDVI correlates well with vegetation regeneration and with the fraction of photosynthetically active radiation absorbed by the plant canopy, leaf area, and biomass, so it is widely used to study vegetation response to wildfire disturbance [23,26,[56][57][58][59]]. NDVI is calculated using Equation (15):

$\mathrm{NDVI} = \frac{\rho_{NIR} - \rho_{red}}{\rho_{NIR} + \rho_{red}},$

where $\rho_{NIR}$ and $\rho_{red}$ are the reflectance in the near-infrared and red wavelengths, respectively.
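As a minimal numerical check of Equation (15), a sketch assuming reflectance arrays already scaled to [0, 1]:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from near-infrared and red reflectance arrays (Equation (15))."""
    return (nir - red) / (nir + red + 1e-10)   # epsilon guards against 0/0

print(ndvi(np.array([0.45]), np.array([0.05])))   # dense vegetation -> ~0.8
```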
DI
The calculation of DI is based on the Tasseled Cap transformation [60,61], a spectral transformation that converts the original highly covariant bands into three uncorrelated indices known as brightness (B), greenness (G), and wetness (W). DI builds on the observation that disturbed forests usually have higher brightness and lower greenness and wetness values than undisturbed forest areas [61]. DI is a linear combination of the three Tasseled Cap indices after a spectral normalization step in which intra-image statistics are used to normalize radiometric variation:

$B_r = \frac{B - \bar{B}}{\sigma_B}, \quad G_r = \frac{G - \bar{G}}{\sigma_G}, \quad W_r = \frac{W - \bar{W}}{\sigma_W},$

where $\bar{B}$, $\bar{G}$, and $\bar{W}$ are the mean Tasseled Cap brightness, greenness, and wetness of the forest in a particular scene; $\sigma_B$, $\sigma_G$, and $\sigma_W$ are the corresponding standard deviations; and $B_r$, $G_r$, and $W_r$ are the normalized brightness, greenness, and wetness, respectively. After normalization, the three components are linearly combined to obtain DI:

$DI = B_r - (G_r + W_r).$

A disturbed forest area usually has a high positive $B_r$ and low negative $G_r$ and $W_r$, and thus shows a high DI value; in contrast, an undisturbed forest area shows a low DI value.
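A sketch of the DI computation as reconstructed above, on synthetic data; the forest mask, scene statistics, and the burned patch are all invented for illustration:

```python
import numpy as np

def disturbance_index(B, G, W, forest_mask):
    """DI from Tasseled Cap brightness, greenness, and wetness images,
    normalizing each component by statistics of the 'forest' pixels."""
    def norm(X):
        f = X[forest_mask]
        return (X - f.mean()) / f.std()
    Br, Gr, Wr = norm(B), norm(G), norm(W)
    return Br - (Gr + Wr)

rng = np.random.default_rng(2)
B = rng.normal(0.25, 0.02, (50, 50))
G = rng.normal(0.30, 0.02, (50, 50))
W = rng.normal(-0.10, 0.02, (50, 50))
mask = np.ones((50, 50), bool)            # simplification: whole scene as "forest"
B[:5, :5] += 0.15; G[:5, :5] -= 0.15      # a bright, less green burned patch
print(disturbance_index(B, G, W, mask)[:5, :5].mean())  # clearly positive DI
```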
Yearly Composite Image
The date corresponding to the midpoint of the vegetation growth period in each year was obtained from the vegetation index, and reflectance data of vegetation with relatively consistent growth periods in different years were constructed by fusing Landsat and MODIS data. The image acquisition date of each year, the midpoint of the vegetation growth period, and the number of days between them are given in Table 2 (expressed as the day of year). Since the MCD43A4 V006 data has gaps on some dates, in order to make the fusion image as complete as possible, the MCD43A4 V006 image nearest to the original date with the least data gaps was used; the adjusted date is marked in brackets after the original date. (Table 2, only partially recoverable here: for 2015-2018 the acquisition day / midpoint day / difference were 242/196/46, 260/213/47, 264/202/62, and 123/202/79.) The midpoint of vegetation growth in the study area in each year was almost always around the 200th day of the year (Table 2).
The Characteristics of Reflectance Prior to and after Eliminating Phenological Influence
The average values of all bands in the study area prior to (original image) and after (fusion image) the elimination of the phenological influence in each year are given in Table 3 and shown in Figure 2. In the blue and red bands (Figure 2a,c), the average reflectance value of the study area after eliminating the phenological influence is lower than that with the phenological influence in each year. Figure 2d shows that in the near-infrared band the average reflectance value after eliminating the phenological influence was greater than that without elimination in almost every year, except 2006 and 2011, indicating that the vegetation reflects near-infrared radiation more strongly during the mid-growth period. At the same time, after eliminating the influence of phenology, the inter-annual curve of near-infrared reflectance becomes more gradual: the inter-annual variance of the near-infrared reflectance is 0.0026 with the phenological influence and drops to 0.0002 once it is eliminated. Figure 2e,f show that the inter-annual change of reflectance in the two shortwave infrared bands is also more gradual after eliminating the influence of phenology. There is a water absorption band near both shortwave infrared bands, and the one near the second shortwave infrared band (1.9 μm) absorbs water more strongly than the one near the first (1.4 μm). Figure 2f shows that in the second shortwave infrared band, the average reflectance of the study area after eliminating the phenological influence is lower than that prior to elimination in almost every year.
NDVI Characteristics Prior to and after Phenological Influence Elimination
The mean values of NDVI in the study area prior to and after the elimination of the phenological influence in each year are given in Table 4 and shown in Figure 3. The changing trend of the NDVI values is more stable after the elimination of the phenological influence. The NDVI without elimination of phenological effects shows a significant decrease from the previous year in 2002, 2009, and 2016, while the NDVI curve free of phenological effects shows a slight increase in these years. At the same time, Table 4 shows that the difference in NDVI prior to and after eliminating the phenological influence reached 0.4 or more, indicating that the impact of phenology on vegetation monitoring cannot be ignored.
Characteristics of DI Changes Prior to and after Phenological Influence Elimination
The mean values of DI in the study area prior to and after the elimination of the phenological influence in each year were calculated and are shown in Table 5 and Figure 4. Together they show that the changing trend of the DI values in each year is gentler after eliminating the influence of phenology. The DI variations (Figure 4) without eliminating the phenological influence show a marked upward jump in 2002, 2016, and 2018 compared with the previous year, while the DI variations after eliminating phenological effects are relatively flat in these years, with no obvious increase or decrease. Table 5 shows that the difference in DI prior to and after eliminating the phenological influence reached its maximum of 4.067 in 2018, when the image was acquired 79 days before the midpoint of the vegetation growth period. Overall, the DI values after eliminating the impact of phenology show a slight downward trend over the study period.
Discussion
The comparison of the band characteristics shows that in the blue and red bands, the average reflectance values of the study area after eliminating the phenological influence were lower than those without elimination in each year, indicating stronger absorption by the vegetation in these bands at the mid-growth stage. In the near-infrared band, the average reflectance value after eliminating the influence of phenology was greater than the value with the phenological influence in almost every year, and in the second shortwave infrared band it was lower in almost every year. Since this study used the date corresponding to the midpoint of the vegetation growth period of each year as the target date of image fusion, and the mid-growth period is usually the period of peak vegetation growth while most of the actual acquisition dates do not fall within this peak, the vegetation at the midpoint of the growing season tends to show better growth than on the acquisition date.
At the same time, due to the influence of chlorophyll, plant structure, and water absorption, the vegetation at the midpoint of the growing season absorbs more strongly in the blue, red, and shortwave infrared bands and reflects more strongly in the near-infrared band compared with the image acquisition date. Meanwhile, in the fused images the reflectance values of the bands most closely related to the state of vegetation growth (the red, near-infrared, and shortwave infrared bands) become stable, indicating that the method effectively eliminates the disturbance caused by phenology when studying the interannual growth and change of vegetation.
When determining the fusion methodology, we considered a number of models. Finally, STARFM, developed by Gao et al. [62] to combine Landsat and MODIS data and predict daily surface reflectance at Landsat spatial resolution and MODIS temporal frequency, was chosen. This method was tested in a conifer-dominated region in central British Columbia, Canada, and proved able to generate daily surface reflectance at the spatial resolution of Landsat data, in good agreement with actual Landsat reflectance. Zurita-Milla et al. [63] developed a downscaling algorithm based on a linear mixing model to produce images with medium resolution imaging spectrometer (MERIS) spectral characteristics and Landsat-like temporal resolution; however, this algorithm requires high-resolution land-use data for pixel unmixing and may not be suitable for many applications, whereas STARFM does not require any auxiliary data. Zhu et al. [64] developed an enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) based on the STARFM algorithm and tested it on simulated and actual satellite data. The results show that ESTARFM improves the accuracy of reflectance prediction, especially for heterogeneous landscapes. Taking the NIR band as an example, the ESTARFM prediction for a uniform region is slightly better than that of STARFM (average absolute difference (AAD) 0.0106 vs. 0.0129 reflectance units); for complex heterogeneous environments, the prediction accuracy of ESTARFM improves further over STARFM (AAD 0.0135 vs. 0.0194). Although the prediction accuracy of ESTARFM is slightly higher, it requires input image pairs both before and after the prediction date, while STARFM needs only a single image pair near the prediction date. Considering that ESTARFM's data requirements cannot be fully met in all study years, and that most of the study area is covered by vegetation with relatively homogeneous ground conditions, STARFM was found more applicable for the objectives of this study. In the future, ESTARFM can be tried when enough input data are available.
Although annual 30 m reflectance data of the study area could not be obtained to verify the accuracy of the fusion images directly, two observations support their reliability. On the one hand, Gao et al. [62] tested the methodology in central British Columbia, Canada, and found that the daily surface reflectance generated by this method agrees well with actual Landsat data. On the other hand, the 2011 image acquisition date in this study is very close to the target date of image fusion, with only eight days' difference, and the reflectance values of each band on the acquisition date are also very close to those on the target date (the mean absolute difference of each band between the two dates in 2011 is 0.001, 0.001, 0.001, 0.004, 0.004, and 0.001, respectively), indicating that the images obtained by the spatio-temporal fusion algorithm are reasonably reliable.
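The 2011 consistency check is simply a per-band mean absolute difference between two co-registered images; a minimal sketch on synthetic data:

```python
import numpy as np

def band_mean_abs_diff(img_a, img_b):
    """Per-band mean absolute difference between two co-registered
    reflectance stacks of shape (bands, rows, cols)."""
    return np.abs(img_a - img_b).reshape(img_a.shape[0], -1).mean(axis=1)

a = np.random.default_rng(3).uniform(0.0, 0.5, (6, 100, 100))
b = a + 0.001                      # nearly identical acquisition
print(band_mean_abs_diff(a, b))    # ~0.001 per band, as in the 2011 check
```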
The analysis results of NDVI and DI values in the study area of each year show that the NDVI and DI curves vary considerably without eliminating the phenological influence, and there is no obvious change in trend. After eliminating the phenological influence, the changing trend of NDVI and DI values in each year is more stable, and on the whole, NDVI shows a slight upward trend, while DI shows a slight downward trend. Therefore, the elimination of phenological influence plays an important role in monitoring vegetation changes. At the same time, after removing the impact of phenology, the NDVI and DI trend curves of the study areas in each year reflect relatively consistent vegetation changes, further illustrating the reliability of the phenology elimination method and the credibility of vegetation monitoring results.
In the quantitative analysis of remote sensing, the relationship between surface property measurements at different spatial resolutions often causes concern [65]. Since vegetation cover can be highly heterogeneous spatially, subpixel variability is likely to introduce uncertainties in the vegetation indices at different resolutions [66]. Several studies have investigated the impact of spatial resolution on NDVI, but with conflicting results. Aman et al. [67] concluded that NDVI derived from the coarse spatial resolution sensor data can be used in lieu of NDVI integrated from fine spatial resolution without introducing significant errors. On the other hand, Price [68] noted that for a region consisting of a mixture of totally vegetated area and non-vegetated area, prominent discrepancies occur between NDVI derived from high-resolution measurements and NDVI derived from low resolution measurements, with the relative difference approaching 30%. This study used Landsat data with 30 m spatial resolution. In future studies, remote sensing data with different resolutions can be used to further explore the impact of eliminating phenological influences on post-fire vegetation restoration monitoring.
Conclusions
Taking the forest restoration in the Greater Hinggan Mountain area after the "5.6 fire" in 1987 as an example, and based on the MODIS and Landsat time-series images acquired from 2000 to 2018, this study took the midpoint of the vegetation growth period of each year as the target date and used the STARFM fusion algorithm to construct reflectance images of vegetation with relatively consistent growth periods. The influence of phenology on vegetation monitoring was analyzed using three aspects: band characteristics, NDVI, and DI values.
Based on the detailed analysis of the remote sensing data, it can be concluded that eliminating phenological influences reflects the changes of vegetation within the region more accurately, which implies that phenological factors in remote sensing images may affect the observation of vegetation changes, and that observing vegetation changes using remote sensing images from different periods of vegetation growth may cause large errors. The spatio-temporal data fusion method used in this study effectively eliminated the influence of phenological factors in the annual observation of vegetation by establishing vegetation reflectance images with relatively consistent growth periods. At the same time, this method improves the utilization of remote sensing data: researchers do not need to find remote sensing images with consistent vegetation growth conditions for monitoring, but can use images from different vegetation growth stages and transform them to more consistent conditions through spatio-temporal fusion, thereby improving the temporal resolution of vegetation monitoring. After eliminating the influence of phenology, the remote sensing indices of the study area showed that although the forest in this region was affected by disturbances in some years, its overall growth trend is positive. The conclusions drawn in the present analysis provide a reference for future forest monitoring research and local forest management.
"Environmental Science",
"Mathematics"
] |
Influence of Degree of Dispersion of Noncovalent Functionalized Graphene Nanoplatelets on Rheological Behaviour of Aqueous Drilling Fluids
Application of carbon nanomaterials in oil well drilling fluid has been previously studied and was found to enhance its filtration properties. There is a general consensus that the addition of colloids to a suspension will alter its rheology. The carbon nanomaterials used in this research work, graphene nanoplatelets, are hydrophobic materials which require functionalisation to improve their dispersion in aqueous solution. However, different degrees of dispersion may change the rheological behaviour of the drilling fluid. The objective of this study was to characterize the colloidal dispersion of graphene nanoplatelets (GNP) in aqueous solution and its impact on the rheological behaviour of water-based drilling fluid. Dispersion of graphene nanoplatelets was achieved through noncovalent functionalisation by means of surfactant attachment. UV-visible spectroscopy was employed to analyze the dispersion of GNP in aqueous solution. The rheological test was carried out using a simple direct-indicating viscometer at six different speeds. Results revealed that the degree of dispersion of GNP using Triton X-100 was generally higher than with either SDS or DTAB. Comparison of the rheological behaviour of drilling fluids with GNP dispersed using the different surfactants shows little to no difference at low shear rates. At high shear rates, however, fluids with well-dispersed GNP showed stronger shear-thinning behaviour, while fluids with poorly dispersed GNP exhibited linear to shear-thickening behaviour.
Introduction
Oil well drilling fluids, commonly known as drilling muds, are complex colloidal systems that are a vital part of an oil well drilling operation. Apart from its primary function in conventional drilling operations, where the drilling fluid acts as a safety barrier preventing formation fluid from flowing into the well, the drilling fluid also needs to carry drilled cuttings to the surface.
The ability of a fluid to suspend and lift cuttings to the surface is related to its rheological properties. Improper design of the rheological properties of a drilling fluid may impair its ability to remove cuttings from the wellbore, which in turn may cause a stuck pipe and a damaged bottom hole assembly.
Graphene, the world's first two-dimensional (2D) crystalline material, has a theoretical thickness of 0.34 nm [1,2] and is composed of a single layer of carbon atoms in the sp² hybridisation state, with each atom covalently bonded to three other carbon atoms in a hexagonal lattice with a carbon-to-carbon distance of 0.142 nm [3,4]. Graphene is the building block of other carbon materials. For example, the sheets can be curved into 0D fullerenes, rolled into 1D carbon nanotubes (CNTs), or stacked into 3D graphite [3]. Although theoretical prediction of the unique electronic structure and the linear dispersion relation of graphene started almost 70 years ago (1947) with P. R. Wallace, the first physical isolation of graphene happened only recently. The discovery is attributed to Andre K. Geim and Konstantin Novoselov, researchers at Manchester University, back in 2004, which earned them the Nobel Prize in Physics in 2010 [3].
The unique properties of graphene are a direct consequence of its structure. It has been reported that the strength of graphene exceeds 1.5 TPa due to its extremely strong in-plane σ-bonds (670 kJ/mol) [4], while the remaining π-orbitals perpendicular to the plane constitute a delocalized network of electrons, which makes the structure highly conductive [3].
The thermal conductivity of graphene has been reported to be 5000 W m⁻¹ K⁻¹, better than that of many metals [4], again because the strong sp² bonds help dissipate heat via lattice vibration, or phonon scattering [3]. In addition, graphene has been proven able to form a membrane impermeable even to the hardest gas to filter, helium [5,6]. In recent years, applications of graphene and its predecessor, CNTs, have been getting a lot of attention in oil and gas research, especially drilling fluids, and have been proven to improve conventional drilling fluid systems [7][8][9].
A key requirement for carbon research and applications is the ability to identify and characterize all the members of the carbon family. Over the years, several studies have characterised carbon allotropes using Raman spectroscopy, which is fast and nondestructive and offers high resolution with maximal structural and electronic information [10][11][12][13][14][15][16][17][18]. In addition, Raman spectroscopy is highly sensitive to symmetric covalent bonds with little to no natural dipole moment [19].
The main features of carbon allotropes in Raman spectra are the so-called G and G′ peaks, which lie at around ∼1580 cm⁻¹ and ∼2700 cm⁻¹, respectively. Graphene can be distinguished from graphite by thoroughly examining the G′ peak of the Raman spectrum, as studied by Ferrari et al. [20]. Single-layer graphene has a single G′ peak roughly four times more intense than its G peak, while the G′ peak of bilayer graphene consists of four components and that of bulk graphite consists of two components, roughly 1/4 and 1/2 the height of the G peak. This result is in agreement with Gupta et al. [21] and Dresselhaus et al. [22], who found that single-layer graphene exhibits sharp G and G′ peaks at ∼1580 cm⁻¹ and ∼2700 cm⁻¹, respectively, while multilayer graphene and graphite exhibit broad G′ peaks from which the number of carbon layers can be differentiated.
In addition, Dresselhaus et al. [22] differentiated the Raman spectra of graphene, graphite, and carbon nanotubes. While a single G peak is observed for a 2D graphene sheet, curvature effects, such as those in carbon nanotubes, give rise to multiple G peaks. Another significant difference is the existence of the radial breathing mode (RBM) in the Raman spectra of carbon nanotubes, in the range of ∼120 cm⁻¹ to ∼350 cm⁻¹, which is related to the diameter of the nanotube. Graf et al. [23] also used Raman spectroscopy to differentiate graphite flakes of different thicknesses at a 532 nm excitation wavelength. Although Graf et al. [23] used a different excitation wavelength, their results are broadly consistent with those of Ferrari et al. [20], Gupta et al. [21], and Dresselhaus et al. [22].
The Raman signature and specific peak positions of any material are related to the material's unique structure and are independent of the excitation wavelength, so the molecular fingerprint will be the same regardless of the excitation laser wavelength. However, different excitation wavelengths come with specific strengths and weaknesses and may impose 'noise' on the resulting spectra. Ferrari et al. [20] compared the G′ peak of graphene and graphite measured at two different excitation wavelengths. Although the shape of the spectra remains the same, even at similar shift, the efficiency and resolution drop quite significantly at 633 nm compared to 514 nm. Luo et al. [24], in addition, studied the Raman spectra of hydrogenated graphene at five different excitation wavelengths: 457 nm, 488 nm, 532 nm, 633 nm, and 785 nm. Among the five laser-excitation wavelengths, their results show that 532 nm produces the best Raman spectra with minimal noise for hydrogenated graphene.
The number of graphene layers can be calculated from the position of the G band. As explained by Wall et al. [25], the theory is that as the layer thickness increases, the band position shifts to lower energy, representing a slight softening of the bonds.
Thus, the band position can be empirically correlated to the number of atomic layers present using the following equation [25]:

ω_G = 1581.6 + 11/(1 + n^1.6), (1)

where ω_G is the position of the G peak in wavenumbers and n is the number of layers present in the sample. Despite all the great properties of graphene, individual graphene sheets approaching each other may establish π-π bonds as in graphite. This interaction can accumulate and bundle graphene together, which makes dispersing it in aqueous solution difficult. Functionalising graphene is therefore essential, as it allows graphene to be dispersed in solutions, to have desired functions grafted onto its surface, or to be coupled with other materials [26].
Generally, there are two methods to functionalise graphene to achieve dispersion in aqueous solutions: covalent functionalisation and noncovalent functionalisation [4,20,27]. The covalent functionalisation method involves attaching various chemical moieties to improve solubility in solvents. This method, however, can be considered aggressive, as it occurs at high temperature and involves dangerous chemicals such as neat acids [28]. In addition, as a result of attaching different functional groups, the structure of graphene is altered, consequently changing its properties as well. Noncovalent functionalisation, on the other hand, is particularly attractive because of the possibility of attaching various groups to the surface of graphene without disturbing its structure and properties [26].
Surface-active agents, widely known as surfactants, have been extensively used to disperse carbon nanomaterials via the noncovalent method.
The structure of a surfactant consists of two parts: a hydrophobic tail, usually a hydrocarbon chain, and a polar hydrophilic head, which may be cationic, anionic, or nonionic in nature [29]. Surfactants help achieve homogeneous dispersion of carbon nanomaterials in aqueous solutions by wrapping each individual nanomaterial or forming micelles around it [28]. To date, a wide variety of surfactants and concentrations have been investigated for dispersing carbon nanomaterials in aqueous solution, such as sodium dodecyl benzene sulfonate (SDBS) [30], dodecyl trimethyl ammonium bromide (DTAB) [10], hexadecyl trimethyl ammonium bromide (CTAB) [11], octyl phenol ethoxylate (Triton X-100) [12,13], and sodium dodecyl sulfate (SDS) [7,9,19,22,25,26].
Our previous research shared results on the filtration properties of water-based drilling fluid with the inclusion of graphene nanoplatelets, without emphasis on the degree of dispersion [31]. In addition, Aftab et al. [7] studied the use of graphene nanoplatelets in water-based drilling fluid, including filtration, rheology, and shale inhibition. Functionalisation of the graphene nanoplatelets was achieved through noncovalent functionalisation using sodium dodecyl sulfate (SDS), an anionic surfactant. However, the effect of factors such as the degree of dispersion was not reported. In 2011, Pu et al. [32] found that the homogeneity of graphene dispersion in aqueous media differs with different types of surfactants.
Thus, the research question is whether different degrees of dispersion of graphene affect its performance as an additive for drilling fluid applications or, more specifically in this study, its effect on rheology.
In this work, the dispersion of graphene nanoplatelets in aqueous solutions using three different types of surfactants, SDS, DTAB, and Triton X-100, was studied. Graphene nanoplatelets were dispersed in aqueous solutions of different surfactant concentrations by means of an ultrasonic bath at 30 °C (86 °F). The colloidal dispersion of graphene nanoplatelets in the aqueous surfactant solutions was studied by means of UV-Vis spectroscopy.
The impact of different degrees of dispersion on the rheological behaviour of water-based drilling fluid was then studied using a simple direct-indicating Fann viscometer.
Materials.
Graphene nanoplatelets with a diameter of 40-70 nm, a length of 2-5 nm, and a purity of >95 wt.% were procured from Sigma-Aldrich. SDS (98% purity from Aldrich), an anionic surfactant, along with two other surfactants of different types, namely DTAB (99% purity from Aldrich), a cationic surfactant, and Triton X-100 (97% purity from Aldrich), a nonionic surfactant, were employed to disperse graphene nanoplatelets in the water-based drilling fluid. Figure 1 shows the chemical structures of (a) DTAB, (b) SDS, and (c) Triton X-100.
These surfactants were selected as representatives of the nonionic and ionic surfactants used in the literature. As its name suggests, DTAB is composed of a hydrophobic carbon tail attached to an ammonium head with three methyl groups. SDS, on the other hand, is composed of a hydrophobic carbon tail attached to a sulfate group, while Triton X-100 is composed of a hydrophilic polyethylene oxide chain and an aromatic hydrocarbon hydrophobic group.
Graphene Nanoplatelets Characterisation.
The graphene nanoplatelets characterisation methods employed in this research work include Raman spectroscopy, FESEM, and TEM. Raman spectroscopy provides high-resolution structural and electronic information, while field emission scanning electron microscopy (FESEM) and high-resolution transmission electron microscopy (HRTEM) provide microscopic imaging for visual purposes. Raman spectroscopy is a spectroscopic technique used to observe vibrational, rotational, and other low-frequency modes in a system. In this research work, the structural fingerprint of the graphene nanoplatelets was analysed at a 514 nm laser excitation wavelength using a Horiba Jobin Yvon HR800 at the Centralized Analytical Laboratory (CAL) in Universiti Teknologi PETRONAS (UTP). FESEM was carried out using a variable pressure field emission scanning electron microscope (VP-FESEM), a Zeiss Supra55, at the same laboratory. Uncoated powder samples of the materials were mounted on a grid holder and coated with gold to enhance electron conductivity. An accelerating voltage of 5 kV was used to view the nanomaterials. HRTEM was carried out using an FEI TECNAI-G2-20-200kV-S-TWIN high-resolution transmission electron microscope at the Science and Engineering Research Centre (SERC) in Universiti Sains Malaysia (USM). A small amount of sample was dispersed in a solvent in a test tube and ultrasonicated for one hour.
The samples were then left standing for another 30 minutes before being dropped onto a copper grid. The sample was then left to dry overnight prior to HRTEM analysis.
Preparation of Dispersion of Graphene Nanoplatelets in Surfactant Solutions.
All surfactant solutions were prepared by mixing the calculated amount of each surfactant into 50 mL of distilled water in an Erlenmeyer flask. Five concentrations were prepared for each surfactant to properly reflect the effect of concentration on dispersion stability: 100 ppm, 200 ppm, 300 ppm, 400 ppm, and 500 ppm, resulting in 15 solutions in total. A fixed amount of 1 mg of graphene nanoplatelets per 1 mL of surfactant solution was then dispersed in each solution. The resultant solutions were ultrasonicated at 100 kHz for 30 minutes at 30 °C (86 °F) in an ultrasonic bath in order to obtain surfactant-coated graphene.
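As a minimal sketch of the weighing arithmetic implied by this procedure (not part of the paper), assuming the usual dilute-solution approximation that 1 ppm corresponds to about 1 mg per litre of water, the per-flask masses work out as follows; all names here are illustrative.

# Surfactant mass per flask for a target ppm concentration in 50 mL,
# plus the fixed graphene nanoplatelet (GNP) loading of 1 mg/mL.
VOLUME_ML = 50.0            # distilled water per Erlenmeyer flask
GNP_MG_PER_ML = 1.0         # fixed graphene nanoplatelet loading

def surfactant_mass_mg(ppm, volume_ml=VOLUME_ML):
    # 1 ppm in dilute aqueous solution is ~1 mg/L (density ~1 g/mL assumed)
    return ppm * volume_ml / 1000.0

for surfactant in ("SDS", "DTAB", "Triton X-100"):
    for ppm in (100, 200, 300, 400, 500):
        mass = surfactant_mass_mg(ppm)
        gnp = GNP_MG_PER_ML * VOLUME_ML
        print(f"{surfactant} {ppm} ppm: {mass:.1f} mg surfactant, "
              f"{gnp:.0f} mg GNP in {VOLUME_ML:.0f} mL")

For example, the 100 ppm solutions require 5 mg of surfactant in 50 mL, and each flask receives 50 mg of graphene nanoplatelets.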
Colloidal Dispersion Analysis.
To evaluate the colloidal dispersion of graphene nanoplatelets in the aqueous surfactant solutions, UV-Vis absorption spectra between 200 nm and 700 nm were measured (PerkinElmer UV/Vis Spectrometer Lambda 25) with a 1.00 nm slit width. Yu et al. [16] discovered that individual carbon nanotubes (CNTs) are active in the UV-Vis region and exhibit characteristic bands whose intensity increases with additional CNTs in suspension, due to 1D van Hove singularities [33]. In addition, flocculated CNTs are inactive in the wavelength region between 200 and 1200 nm. Thus, it is possible to establish a relationship between the dispersion stability of CNTs and the intensity of the corresponding absorption spectrum [34]. Similar to CNTs, the dispersion of graphene can also be evaluated by UV-Vis spectroscopy. For each surfactant solution with graphene nanoplatelets, the respective surfactant solution without graphene nanoplatelets was used as a calibration reference.
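A hypothetical sketch of how this absorbance-as-dispersion-proxy idea could be applied to measured spectra follows. The file format, the column layout, and the choice of 270 nm as the comparison wavelength are assumptions for illustration, not details taken from the paper.

import csv

def absorbance_at(spectrum_csv, target_nm=270.0):
    # Return the absorbance closest to target_nm from a two-column CSV
    # of (wavelength_nm, absorbance) pairs. Higher absorbance at the
    # graphene band is read as better colloidal dispersion.
    best = None
    with open(spectrum_csv, newline="") as f:
        for wl, a in csv.reader(f):
            wl, a = float(wl), float(a)
            if best is None or abs(wl - target_nm) < abs(best[0] - target_nm):
                best = (wl, a)
    return best[1]

# Hypothetical spectrum files, one per surfactant/concentration:
spectra = {"SDS_400ppm": "sds_400.csv", "TritonX100_100ppm": "tx100_100.csv"}
ranked = sorted(spectra, key=lambda k: absorbance_at(spectra[k]), reverse=True)
print("dispersion ranking (highest absorbance first):", ranked)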
Drilling Fluid Formulations and Preparations.
350 mL of water-based drilling fluid was prepared by mixing distilled water, potassium chloride, xanthan gum, polyanionic cellulose, caustic soda, barite, and graphene nanoplatelets that had been dispersed at a fixed concentration of 1 mg/1 mL in surfactant solutions. All drilling fluid formulations contained a fixed concentration of 0.1 g/350 mL of graphene nanoplatelets. Table 1 shows the approximate drilling fluid formulations and mixing procedure.
While most works in the literature describe drilling fluid rheological properties by simply reporting the plastic viscosity and yield point, Sharma et al. [36] suggest that fluids containing nanomaterials mostly exhibit nonlinear behaviour. With such a profile, the Bingham plastic model may not be the most suitable mathematical model for explaining the rheological behaviour of fluids with multiple colloid suspensions, especially with the inclusion of nanomaterials. A more accurate method is to plot a rheogram of shear rate and shear stress, as used by Ho et al. [37] and Srivatsa et al. [38], despite the fact that Ho et al. [37] used a more advanced rheometer instead of a FANN® viscometer. Although some calculations are needed, the rheogram of shear stress vs shear rate can be plotted by converting the FANN® viscometer readings into shear stress and shear rate, as explained by Lam and Jefferis [39]. The rheological behaviour can then be explained by interpreting the curve in the rheogram.
The shear viscosity of a fluid at a given shear rate is given as follows:

µ = 1000 τ/γ̇, (2)

where µ is the viscosity in mPa·s, τ is the shear stress at the given shear rate in Pa, and γ̇ is the shear rate in s−1.
Based on the design of the FANN® Instrument 35SA viscometer, the viscosity in mPa·s can be calculated using the following equation:

µ = k f θ/ω, (3)

where k = 300 for the standard rotor-bob combination, f = 1 for the standard torsion spring F1, θ is the dial reading, and ω is the rotor speed in rpm. Equation (3) then becomes

µ = 300 θ/ω. (4)

With the shear rate for the standard rotor-bob combination obtained from the rotor speed as γ̇ = 1.7023 ω, solving Equations (2) and (4) for shear stress with unit conversion yields shear stress as a function of shear rate and viscosity:

τ = µ γ̇/1000. (5)
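A minimal sketch of this dial-reading conversion is given below. It follows the standard Fann 35 relations (k = 300, f = 1) and the commonly used factor of 1.7023 s−1 per rpm for the standard rotor-bob geometry; the example dial readings are made up for illustration.

K, F = 300.0, 1.0          # standard rotor-bob combination and spring F1

def rheogram_point(dial_reading, rpm):
    viscosity = K * F * dial_reading / rpm         # mPa·s, Equation (4)
    shear_rate = 1.7023 * rpm                      # s^-1, standard geometry
    shear_stress = viscosity * shear_rate / 1000   # Pa, Equation (5)
    return shear_rate, shear_stress

# hypothetical dial readings at the six standard Fann 35 speeds
for rpm, theta in [(600, 52), (300, 34), (200, 27), (100, 19), (6, 6), (3, 5)]:
    rate, stress = rheogram_point(theta, rpm)
    print(f"{rpm:>4} rpm: shear rate {rate:7.1f} 1/s, shear stress {stress:5.2f} Pa")

Note that 600 rpm maps to a shear rate of about 1022 s−1, which is the value referenced in the rheogram discussion below.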
Results and Discussion
3.1. Graphene Nanoplatelets Characterisation. Figure 2 presents the Raman spectrum of the procured graphene nanoplatelets between 200 cm−1 and 3200 cm−1 at a laser excitation wavelength of 514 nm. The G and G′ peaks of the graphene nanoplatelets are located at 1582.36 cm−1 and 2731.66 cm−1, respectively. No peak exists in the range between 200 cm−1 and 350 cm−1, which means there is no curvature at the sample edges as in carbon nanotubes. The height ratio of the G to G′ peaks is around 2.9, indicating that the procured graphene nanoplatelets structurally resemble highly oriented pyrolytic graphite rather than pristine graphene. This is due to the existence of multiple layers of carbon atoms in the samples. Using Equation (1), the number of layers was calculated to be five. The number of layers of the graphene nanoplatelets was then confirmed using HRTEM.
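A small sketch of this calculation, inverting the empirical G-band relation reconstructed above as Equation (1), shows how the layer count follows from the measured G-peak position:

def layers_from_g_peak(omega_g):
    # Invert omega_G = 1581.6 + 11 / (1 + n^1.6) for n.
    # Valid only for omega_g slightly above the single-layer limit 1581.6.
    return ((11.0 / (omega_g - 1581.6)) - 1.0) ** (1.0 / 1.6)

n = layers_from_g_peak(1582.36)                  # G peak measured in this work
print(f"estimated layers: {n:.2f} -> {round(n)}")  # ~5.1 -> 5

The measured G-peak position of 1582.36 cm−1 indeed gives n ≈ 5, consistent with the HRTEM result below.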
Figure 3 shows the morphology of the graphene nanoplatelets as viewed using FESEM at 1k, 10k, 80k, and 100k times magnification. At 1k times magnification, it can be seen that the edges of the graphene nanoplatelet sheets folded under ambient air conditions, creating irregular bundles of graphene nanoplatelets. At higher magnifications, the FESEM images further characterise the shape of the graphene nanoplatelets: the morphology is that of flaky graphene sheets with an average thickness of 20 nm. Figure 4, on the other hand, shows an image of a graphene nanoplatelet edge observed using HRTEM at 1,000k times magnification. The average thickness of the graphene nanoplatelets is 1.869 nm, with an average distance between graphene layers of 0.405 nm, giving five layers. The number of layers of graphene sheets from HRTEM is in agreement with the analysis acquired by Raman spectrometry.
Colloidal Dispersion.
The absorption spectra of graphene nanoplatelets in DTAB, SDS, and Triton X-100 are illustrated in Figures 5-7, respectively. Pure DTAB and SDS show no absorption at 250-300 nm in the UV-Vis spectra. These surfactants carry hydrophilic head groups, -SO4− for SDS and -N+(CH3)3 for DTAB, at the outer end of the long chain that adsorbs onto graphene. Note that, unlike the other surfactants, Triton X-100 without graphene, shown in Figure 7(f), exhibits multiple peaks, most noticeably around 280 nm. This attribute is due to the aromatic hydrocarbon lipophilic group in its structure. A similar peak pattern was also observed in other nonionic surfactant solutions [32].
The absorption spectra of DTAB (Figure 5) and SDS (Figure 6) show a similar trend: the dispersion of graphene nanoplatelets in water increases with increasing surfactant concentration, with the exception of 400 ppm and 500 ppm of SDS, which exhibited equal absorption values. The similar values suggest two different interpretations: either the optimum concentration of SDS for dispersing graphene nanoplatelets in water is 400 ppm, or there are not enough graphene nanoplatelets to be dispersed in the 500 ppm SDS solution.
The second interpretation, however, is less attractive given that the absorption spectra of graphene nanoplatelets in Triton X-100 (Figure 7) exhibited more than twice the absorption values of SDS. The absorption trend of graphene nanoplatelets with Triton X-100 (Figure 7) displays a rather different trend than the other two surfactants: the 100 ppm Triton X-100 concentration shows the best dispersion stability, followed by 200 ppm, while increasing from 300 ppm to 500 ppm results in little to no difference in dispersion stability. This phenomenon might be due to reaggregation of graphene nanoplatelets caused by the excess surfactant used, which was also observed for other nonionic surfactant solutions [32].
Comparing the UV-Vis spectra of the three surfactants at similar concentrations, the nonionic surfactant performed best at dispersing graphene nanoplatelets in water. This result can be attributed to the lower critical micelle concentration of nonionic surfactants compared to ionic surfactants [40].
The difference in dispersion capability between surfactants can also be explained on the basis of their chemical structure. Surfactants contribute to achieving homogeneous dispersion of graphene nanoplatelets in aqueous solutions by wrapping each individual graphene sheet or forming a micelle around it. In order to do so, the surfactants orient themselves so that the hydrophobic tail is directed towards the graphene while the hydrophilic head is directed towards the aqueous solution [28]. Therefore, the dispersing power of a surfactant depends on the length of the hydrophobic tail and on how firmly it adsorbs onto the graphene surface, producing energy barriers of sufficient height to overcome the van der Waals forces among neighbouring graphene particles. Thus, longer tails mean higher steric hindrance, providing greater repulsive forces between individual graphene sheets, as can be seen from the difference in structure and performance between DTAB and SDS. In addition, molecules with aromatic ring structures theoretically have stronger adsorption to graphitic surfaces due to π-π stacking interactions [30]. This explains why the nonionic surfactant Triton X-100, with its aromatic ring, produces a higher degree of colloidal dispersion.
Rheological Behaviour.
The rheological behaviour of water-based drilling fluid with 0.1 g of graphene nanoplatelets dispersed using DTAB, SDS, and Triton X-100 is illustrated in Figures 8-10. From the rheograms, it can be seen that, at low shear rates, the rheological behaviour of the drilling fluid does not appear to be affected by the degree of dispersion of the graphene nanoplatelets for any of the three surfactants. At high shear rates, however, the degree of dispersion of the graphene nanoplatelets does have an impact on the rheological behaviour of the drilling fluid. This is evident from the difference in shear stress between the three surfactants at 1022 s−1. The 100 ppm Triton X-100 fluid, which exhibited the highest degree of dispersion, has a lower shear stress at a shear rate of 1022 s−1 compared to the other two surfactant solutions.
This, in turn, provides greater shear-thinning properties, which is a more desirable behaviour in a drilling fluid. It can also be observed that, at a lower degree of dispersion, such as with 100 ppm of SDS and 100 ppm of DTAB, the rheological behaviour at high shear rates is almost linear and lies above that of the fluid sample with graphene nanoplatelets dispersed in water. This might be due to agglomeration of graphene nanoplatelets, as suggested by Chai et al. [37], although a different type of base fluid was used there, or due to the properties of the surfactants used.
It can also be argued that the change in rheological behaviour is due to the addition of the surfactants used to disperse the graphene nanoplatelets in the drilling fluid. However, Yunita et al. [41,42] investigated the use of nonionic- and anionic-type surfactants in water-based drilling fluid and found that the drilling fluid exhibited higher viscosity with the introduction of surfactants into the mix. Meanwhile, in this research work, with the introduction of both graphene nanoplatelets and surfactants, drilling fluid formulated using graphene nanoplatelets with the anionic and cationic surfactants exhibited a higher viscosity at a lower concentration, which may be due to the surfactants used or may indicate a lower degree of dispersion. Drilling fluid formulated using graphene nanoplatelets with the nonionic surfactant in this research exhibited a lower viscosity. This indicates that the lower viscosity of the drilling fluid is due to the introduction of graphene nanoplatelets and that the surfactants used have minimal effect on the rheological behaviour compared to the graphene nanoplatelets.
Conclusions
The colloidal dispersion of graphene nanoplatelets was achieved through noncovalent functionalisation by encapsulating the graphene with surfactant micelles via π-π stacking. Among the three surfactants used in this research work, Triton X-100, a nonionic surfactant, produced the best dispersion stability, followed by SDS, an anionic surfactant, and DTAB, a cationic surfactant. Based on the work carried out in this research, it can be concluded that the degree of dispersion of graphene nanoplatelets plays a role in the rheological behaviour of the water-based drilling fluid. With a greater degree of dispersion, the drilling fluid exhibited greater shear-thinning properties, while drilling fluid with agglomerated graphene nanoplatelets showed more linear, if not shear-thickening, behaviour.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Table 1:
Drilling fluid formulations and mixing procedure. | 5,817 | 2019-02-26T00:00:00.000 | [
"Materials Science"
] |
Semi-Supervised Morphosyntactic Classification of Old Icelandic
We present IceMorph, a semi-supervised morphosyntactic analyzer of Old Icelandic. In addition to machine-read corpora and dictionaries, it applies a small set of declension prototypes to map corpus words to dictionary entries. A web-based GUI allows expert users to modify and augment data through an online process. A machine learning module incorporates prototype data, edit-distance metrics, and expert feedback to continuously update part-of-speech and morphosyntactic classification. An advantage of the analyzer is its ability to achieve competitive classification accuracy with minimum training data.
Introduction
IceMorph [1] is a semi-supervised part-of-speech (POS) and morphosyntactic (MS) tagger for Old Icelandic. Old Icelandic is a difficult language to tag for morphosyntactic features given its inflectional and morphonological complexity. IceMorph is designed to achieve competitive classification accuracy using a minimum of cleanly tagged training data, and to allow for continuous online retraining.
The IceMorph system consists of a number of interacting modules, including a Perl machine parser for Old Icelandic dictionaries, a prototype-based inflection generator coded in Haskell based on similar tools used in Functional Morphology [11,12,22], an edit distance classifier, a website to collect feedback from human experts, and a context-based machine learning algorithm for grammatical disambiguation. We hypothesize that this multi-pronged approach can offer better outcomes than any one of the approaches alone to the vexing problem of morphological analysis in Old Icelandic. Although this may seem to be an obvious solution for the problem of POS and MS tagging in a language that not only has a complex morphology but also for which there is a paucity of clean training data and a noisy target corpus, we have not encountered similar multi-pronged approaches to this problem for Old Icelandic.
For the machine learning component, we rely on a Hidden Markov Model (HMM) classifier that makes use of the restricted Viterbi algorithm, and retrain from expert input as opposed to cotraining [28]. Although recent work on sequential tagging has returned excellent results with Conditional Random Fields (CRF) [27], because of problems associated with Old Icelandic's inflectional complexity and the very limited scope of our training data, the CRF we implemented returned sub-optimal results. Instead, our results show that the multi-pronged approach we describe, despite a very small and noisy training set, can achieve competitive classification (96.84% on the POS task, and 84.21% on the MS task).
We took inspiration for IceMorph from a number of sources. Several tools exist for morphosyntactic tagging of Modern Icelandic; for instance, [21] achieves 91.18% accuracy by applying a TnT tagger trained on an extensive corpus of Old Icelandic texts orthographically and grammatically normalized to Modern Icelandic. Another approach is IceTagger [23], a rule-based POS tagger for Modern Icelandic that achieves a 91.54% accuracy rate on a POS classification task. There are also a large number of semi-supervised Bayesian POS taggers such as [24,25], with [24] reporting an accuracy of 79.7% on an MS classification task and [25] reporting 93.4% accuracy on a POS task. However, all of the existing approaches require either a set of manually crafted rules or fairly extensive training sets. Importantly, the approaches for Icelandic described elsewhere [21,23,29] are all tuned for Modern Icelandic, a space in which relatively large, clean training data exist. A philosophical underpinning of IceMorph is to provide competitive tagging performance for Old Icelandic utilizing available resources while requiring a minimum of clean input data. For example, our training sets are an order of magnitude smaller than those used in [21]. Consequently, we feel that IceMorph is closely related to projects such as [5,6,29], which make use of language tools to reduce the man-hours required to tag a corpus. [5] reports an accuracy of 93.1% on a Spanish POS task, [6] reports an accuracy of 90.7% on an English POS task, and [29] reports an accuracy of 93.84% on a POS task in Modern Icelandic (Table 1).
System architecture
IceMorph consists of a collection of modules designed to streamline the creation, maintenance, and analysis of input data as well as the prediction of POS and morphosyntactic (MS) classes for previously unseen words. It can be conceptualized as consisting of two separate systems. The first system produces an initial set of tags for each corpus instance, providing broad coverage (>98%) with sub-optimal accuracy. The second system refines the initial set of tags by continuously directing novel expert feedback into a machine learning algorithm. Figures 1 and 2 depict the general layout of IceMorph. In the following paragraphs, each module is described in more detail.
Dictionaries
IceMorph currently uses two standard dictionaries of Old Icelandic for basic lexical and grammatical information: Cleasby-Vigfusson [3] (including the Lexicon Poeticum) and Zoëga [4]. The dictionaries were gathered from online sources [7], [8], [9] or transformed into electronic text using optical character recognition. Each dictionary entry was machine parsed and, where necessary, normalized into standard Old Icelandic orthography using the widely accepted Íslenzk fornrit orthographical conventions [10].
Each of the two dictionaries features approximately 27,000 entries with 42% overlap in headwords. We considered Fritzner [2] as an additional resource because it contains considerably more unique lemmata compared to Cleasby-Vigfusson or Zoëga. However, its lack of morphosyntactic detail in its entries led us to disregard it for the purposes of this study.
We encountered a number of issues during this initial data preparation phase that can be classified into three problem areas as follows: (1) OCR errors and other inconsistencies in underlying data: Although OCR errors are to be expected, we have uncovered both errors and inconsistencies in each of the underlying dictionaries. We corrected a number of those errors to reduce their influence on other modules of the IceMorph system.
For instance, while Zoëga differentiates between ø & ö and æ & œ, and uses -st for the mediopassive forms, Cleasby-Vigfusson only uses æ, ö, and -sk. Related characters (e.g. i and í) were often interpreted incorrectly by our OCR software.
(2) Disagreement between sources: not all sources agree on the classification of individual lemmata. For instance, Cleasby-Vigfusson defines báðir as a dual adjectival pronoun (adj. pron. dual), while Zoëga lists it simply as an adjective but considers its dual form bæði a conjunction. We relied on [41] to mediate these differences.
(3) Inconsistencies in the use of morphosyntactic information: we relied heavily on morphosyntactic clues present in the dictionaries to determine the class of a given verb or noun. However, the same morphosyntactic syntax was often used within the same dictionary to describe lemmata belonging to different classes.
[Table 1 caption: For comparison, the accuracy of the IceMorph HMM-rV tagger is presented in the first row. Our measures of accuracy reflect the use of two distinct sets of tagged data. The first set (EXPERT) contains longer sequences of training data and thus reflects more accurately IceMorph's performance when trained with a rich data set; it is also more comparable to the training data used in the comparison studies.]
[Figure 1 caption: Creation of a base tagged corpus within IceMorph using various data sources. Dictionaries and corpora are machine parsed and inserted into a relational database. Declension prototypes are created by an expert via a functional programming language using readily available Old Icelandic grammars. Each dictionary lemma is mapped to corresponding declension prototypes to yield multiple declension paradigms. Finally, each corpus instance is compared to the list of inflected lemmata to produce the base tagged corpus.]
On the other hand, morphosyntactic elements of irregular forms often had unique patterns that also affected classification negatively. For instance: faðir (gen., dat. and acc. föður, pl. feðr), m. father; feðr, m. father, = faðir. The pattern [LEMMA] + ", m." + [TRANSLATION] usually signals masculine a-class nouns in Zoëga, so our machine parser defined a lemma feðr. The same dictionary contains an additional entry for faðir with a unique morphosyntactic structure; in this case, the machine parser was unable to categorize the lemma.
In a final step, we performed alignment on our various dictionary sources to produce a single uniform multi-dictionary relational database structure. Ambiguous or overlapping entries were discovered using simple SQL queries, and the limited number of problematic entries that we discovered were subsequently corrected by hand. Our current merged dictionary contains 48,973 lemmata. While this dictionary covers most words found in the Old Icelandic prose corpus, it has less comprehensive coverage for compounds, names, and archaic words. Each lemma is associated with at least one source entry in the dictionaries. Table 2 shows a sample source entry for lemma afdrykkja.
Corpora
IceMorph uses the Icelandic Legendary Sagas [13] as its target corpus. The corpus spans a total of 357,604 non-unique words and 22,815 unique words. Figure 3 illustrates the distribution of unique word frequencies in the corpus. Its logarithmic shape confirms Zipf's law [26]: few words occur with very high frequency. We take advantage of this common property by having human experts correct the paradigms of high-frequency words. We also take advantage of the fact that many of these high-frequency words are conjunctions and other words that do not inflect. The effect is a sizeable reduction in the noise related to POS and morphosyntactic information.
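A minimal sketch of this frequency-driven triage (the corpus file name is illustrative, not from the paper): count unique word frequencies and surface the most frequent forms for expert correction first.

from collections import Counter

def expert_queue(corpus_path, top_n=100):
    # Rank unique word forms by corpus frequency; correcting the top of
    # this list removes the most tagging noise per expert-hour.
    with open(corpus_path, encoding="utf-8") as f:
        counts = Counter(f.read().lower().split())
    return counts.most_common(top_n)

for word, freq in expert_queue("fornaldarsogur.txt", top_n=10):
    print(f"{word}\t{freq}")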
Declension prototyping
IceMorph performs morphosyntactic classification in two steps. First we create declension prototypes for the most common nouns, verbs, and adjectives with the objective of creating prototypes that can generate declension paradigms for words whose inflections contain no or few irregularities. In keeping with the inherent methodology of IceMorph, we used readily available Old Icelandic grammars [4,14] to produce those paradigms.
We integrated the declension paradigms into the system using the Functional Morphology (FM) approach [11,12,22], which represents an intuitive method for implementing natural language morphology in the functional language Haskell [15].
The coding of Old Icelandic inflectional rules in FM/Haskell is accessible and easily understood by non-programmers, a necessary development criterion given the general lack of programming expertise among Old Icelandic language specialists. Such coding allowed us to take advantage of a panel of three Old Icelandic language experts who could then check for inaccuracies in the declension prototypes, which would have been impossible if we had used a different method of coding the inflection module. For instance, Figure 4 illustrates the implementation of Old Icelandic masculine i-stem nouns using FM. While using the Old Norse "staðr" as its sample noun, this paradigm produces correct or near-correct declension paradigms for most masculine i-stem nouns in Old Icelandic.
IceMorph has a total of 96 prototypes: 40 noun prototypes covering nine strong and three weak declensions, 55 verb prototypes describing seven strong as well as four weak classes, and one adjective prototype. Each prototype in turn populates declension tables of varying sizes. For instance, noun declension tables consist of eight entries, while verb declension tables contain 55 inflectional forms. Using these declension prototypes, we created inflection paradigms for each lemma in our composite dictionary. Depending on the properties of a lexicon entry, we performed one of the following mappings. Case 1, known morphosyntactic classification: if the lemma is associated with POS and class information, we generate paradigms for each prototype matching this information. For instance, the lemma af-runr was classified as a masculine i-stem by the dictionary parser; there are two prototypes for masculine i-stem nouns, so two inflectional paradigms with a total of sixteen entries were created for this lemma.
Case 2, unknown class: if, for a given lemma, the dictionary parser was only able to determine POS but not class, then inflectional paradigms were generated using every prototype of the given POS. In all cases, we were able to determine the gender of nouns and whether a verb was weak or strong. For a strong verb such as antigna, we generated 20 inflectional paradigms with a total of 1,100 entries.
Case 3, unknown classification: for the purely hypothetical case in which neither POS nor class is known, declensions for all prototypes would be generated.
At the end of this process, IceMorph produced approximately one million declension paradigms to which we added closed-class words taken directly from our composite dictionary.
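A schematic sketch of the three mapping cases follows, written in Python rather than the project's Haskell and with invented prototype data: paradigms are generated from every prototype compatible with what the dictionary parser could determine about a lemma.

# Toy prototype table: (POS, class) -> list of suffix rules.
PROTOTYPES = {
    ("noun", "m_i_stem"): [lambda s: s + "r", lambda s: s + "ar"],
    ("noun", "m_a_stem"): [lambda s: s + "r", lambda s: s + "s"],
    ("verb", "strong"):   [lambda s: s + "a", lambda s: s + "it"],
}

def generate_paradigms(stem, pos=None, cls=None):
    if pos and cls:                       # Case 1: POS and class known
        keys = [(pos, cls)]
    elif pos:                             # Case 2: POS known, class unknown
        keys = [k for k in PROTOTYPES if k[0] == pos]
    else:                                 # Case 3: nothing known (hypothetical)
        keys = list(PROTOTYPES)
    return {k: [rule(stem) for rule in PROTOTYPES[k]] for k in keys}

print(generate_paradigms("stað", pos="noun", cls="m_i_stem"))
print(generate_paradigms("antign", pos="verb"))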
Given the Old Icelandic target corpus and the generated list of inflectional paradigms, we were able to classify each word in the corpus using the Wagner-Fischer edit distance algorithm [16]. Each unique word in the corpus was compared to the set of declensions and classified as the declension with the smallest edit distance. To reduce computational overhead, we made the following three assumptions: (1) compound prefixes do not undergo transformations: if a corpus word does not begin with the prefix of a compound word in the dictionary, the pair is skipped; (2) certain Old Icelandic characters must be present in the corpus word if they are present in the lemma, and vice versa; (3) the edit distance cost of transforming a declension instance into a corpus word may not exceed a value of 2. Furthermore, we used a modified cost schema tailored to the characteristics of Old Icelandic sound changes. For instance, the Old Icelandic character "a" might transform into an "ö" through a process called u-mutation, so we reduced the transformation cost for those characters to a value of 0.2 (see Table 3 for more examples). On the other hand, "e" rarely changes to "ö" in Old Icelandic, so its cost remains fixed at 1. The purpose of the adjusted costs is to make IceMorph less susceptible to errors, such as those generated by optical character recognition, that occur in upstream system components.
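A sketch of the Wagner-Fischer distance with these adjustments is shown below: selected substitutions (e.g. a/ö under u-mutation) cost 0.2 instead of 1, and pairs whose distance exceeds the cutoff of 2 are rejected. The cost table here carries only one illustrative entry; the full schema is in Table 3.

# Reduced substitution costs for Old Icelandic sound changes (one example).
REDUCED_COST = {frozenset(("a", "ö")): 0.2}

def sub_cost(a, b):
    if a == b:
        return 0.0
    return REDUCED_COST.get(frozenset((a, b)), 1.0)

def edit_distance(word, declension, cutoff=2.0):
    # Standard Wagner-Fischer DP over two rows, with weighted substitution.
    m, n = len(word), len(declension)
    prev = [float(j) for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [float(i)] + [0.0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1.0,                    # deletion
                         cur[j - 1] + 1.0,                 # insertion
                         prev[j - 1] + sub_cost(word[i - 1], declension[j - 1]))
        prev = cur
    return prev[n] if prev[n] <= cutoff else None          # None = rejected pair

print(edit_distance("höfn", "hafn"))   # 0.2 via the reduced a/ö cost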
At the end of this process, over 98% of the corpus was tagged for both POS and morphosyntactic class. Although this approach provided broad coverage, we anticipated considerable noise in these tags, mainly due to the creation of imperfect declension paradigms. One of the key features of the IceMorph design is to allow expert users to manually correct data. To that end, we developed an online tool [17] that enables expert users (currently a committee of three Old Icelandic language experts) to edit and correct any data point. At the time this article was written, our experts had tagged 490 (≈0.14%) corpus words involving 289 (≈0.59%) dictionary entries.
Language-specific phenomena such as homonymy also lead to ambiguity in classification. Homonymy is common in Old Icelandic. For instance, the corpus instance menn ("men") could be the Nominative or Accusative Plural of the lemma maðr. In order to provide correct MS classification for an observed word, we needed to consider its context in the corpus. For example, a classifier is more likely to classify menn as Accusative Plural if it is preceded by an Accusative Plural pronoun such as sína. This type of context-sensitive tagging is well described in the literature [27,30,31].
The second portion of the IceMorph system is designed to address issues related to context-based morphosyntactic (MS) tagging.
Semi-supervised morphosyntactic (MS) classifiers
IceMorph now has two very different sources of information for POS/MS tagging. On the one hand, there are prototype-generated inflectional paradigms that operate in conjunction with the edit-distance-based mapping between corpus words and declension entries. Their coverage is expansive yet very noisy. On the other hand, we have a small set of declensions contributed by our experts.
As Table 4 shows, expert feedback is considered to be correct by default. On the other end of the spectrum, prototype mappings using edit distance are expected to contain a considerable degree of noise. The two intermediate knowledge sources result from homonyms and multiple occurrences of a word in a given inflection paradigm. The table also reveals an inverse relation between the usefulness of a knowledge source and its coverage of corpus words. We refer to the first three types of feedback as "expert-related". Combined, they provide considerable corpus coverage (≈67.6%) with relatively low noise levels.
Our classification module attempts to improve overall tagging accuracy based on this data. Our strategy was to classify MS tags directly and then infer the corresponding POS tags via simple lookup (for instance, the MS tag nom_sg uniquely maps to the POS tag noun). We considered three types of classifiers for this classification task: a dynamic Bayesian network classifier, a Hidden Markov Model (HMM) classifier with maximum likelihood estimation (MLE) using both a default and restricted Viterbi algorithm, and a linear chain Conditional Random Field (CRF) classifier.
For a given event, the dynamic Bayesian network classifier [20] considers its prior likelihood, as well as its likelihood in the presence of other (presumably independent) features, to determine the likelihood of the event itself. The classifier picks the tag yielding maximum likelihood; with independent features this takes the familiar form

c* = argmax_c P(c) · ∏_i P(f_i | c),

where c ranges over candidate morphosyntactic tags and the f_i are the context features. In the context of IceMorph, the prior likelihood is the distribution of morphosyntactic tags based on expert feedback as well as unique and non-unique matches. The features chosen are the morphosyntactic tags preceding and following a given corpus word. We then calculate the likelihood of a given morphosyntactic element being associated with that word (Table 5). We restrict the knowledge sources for these features by prioritizing them from most to least strict. For instance, if a preceding word is the unique match of a given expert form, then only that morphosyntactic tag is used when calculating likelihood. If, on the other hand, it does not match any expert-based tags, then all available edit-distance tags are used.
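A toy sketch of this classifier as described, with invented probability tables for illustration: pick the morphosyntactic tag maximizing prior times the product of context-feature likelihoods, with feature tag sets drawn from the strictest available knowledge source.

PRIOR = {"nom_pl": 0.6, "acc_pl": 0.4}
P_FEATURE_GIVEN_TAG = {            # P(neighbouring tag | candidate tag)
    ("acc_pl_pronoun", "acc_pl"): 0.7,
    ("acc_pl_pronoun", "nom_pl"): 0.1,
}

def classify(candidate_tags, context_features):
    def score(tag):
        p = PRIOR.get(tag, 1e-9)
        for feat in context_features:
            p *= P_FEATURE_GIVEN_TAG.get((feat, tag), 1e-3)  # smoothing floor
        return p
    return max(candidate_tags, key=score)

# 'menn' preceded by the accusative plural pronoun 'sína'
print(classify({"nom_pl", "acc_pl"}, ["acc_pl_pronoun"]))   # -> acc_pl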
Previous studies have shown that dynamic Bayesian network classifiers are associated with a number of attractive features, such as computational efficiency [18] as well as robustness in the presence of noisy input [19] and missing data [33,34] due to their integration over the complete feature space. It has also been shown that these classifiers perform well even if the feature independence requirement has been violated [35].
Hidden Markov Models [36] are widely used for the task of sequence tagging. The HMM defines the problem space in terms of:
- S hidden states (in IceMorph, these are morphosyntactic tags);
- O observations (in IceMorph, these are corpus words);
- transition probabilities T_ij between two states i and j, for i, j = 1..S;
- emission probabilities E_i, for i = 1..S, capturing the probability of an outcome for state i.
We use a standard trigram HMM. In order to find the most likely sequence of hidden states given the observations, we implement the Viterbi algorithm [37]: for observations o_1, ..., o_n, the most likely state sequence is found by solving the recurrence

V_t(s) = E_s(o_t) · max_{s'} [ V_{t−1}(s') · T_{s',s} ]

for each element of the sequence. Similar to the process applied when creating the dynamic Bayesian network classifier, we only used expert-related data from our corpus when creating the HMM. In addition, we created two versions of the Viterbi algorithm, a default and a restricted version. The default Viterbi (dV) uses all the transition probabilities offered by the HMM. In contrast, the restricted Viterbi (rV) [38] uses the expert-related subset of transition probabilities whenever they are available.
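A compact sketch of this decoding follows, as a bigram simplification of the trigram HMM, with the restricted variant approximated as: whenever a position has expert-related tags, only those states are allowed there. The model tables are placeholders, not the real system's probabilities.

def viterbi(obs, states, start_p, trans_p, emit_p, allowed=None):
    # allowed[i] is an optional set of states permitted at position i
    # (expert-related tags); None or an empty set means "unrestricted".
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    path = {s: [s] for s in states}
    for i in range(1, len(obs)):
        cand = allowed[i] if allowed and allowed[i] else states
        V.append({})
        new_path = {}
        for s in cand:
            prob, best = max(
                (V[i - 1][p] * trans_p[p].get(s, 1e-9) * emit_p[s].get(obs[i], 1e-9), p)
                for p in V[i - 1])
            V[i][s] = prob
            new_path[s] = path[best] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

With allowed=None this behaves as the default Viterbi (dV); supplying expert-derived tag sets per position yields the restricted behaviour (rV).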
Conditional Random Fields [27,32] are an undirected graphical model often used for tagging sequential data. A CRF assigns probabilities to output nodes based on the values of input nodes. In contrast to the HMM, it includes sequential knowledge and allows for the inclusion of feature functions describing the feature space. A linear-chain CRF takes into account features from the current and previous positions in a given sequence and provides a score of the form

score(y, x) = Σ_i Σ_j λ_j · f_j(y_{i−1}, y_i, x, i)

for a given position i in a sequence of words, where f_j denotes a feature function and λ_j represents its corresponding weight. Its feature space may include a variety of data, such as corpus instances, POS, morphosyntactic tags, position in a given sequence, etc. This makes CRFs quite powerful, but at a higher computational cost. Our experiments were conducted using the open source CRF++ tool [39].
[Table 3 caption: The transformations are specific to Old Icelandic. Their purpose is to improve classification performance by making the classifier more robust with respect to errors introduced earlier in the IceMorph system, such as OCR errors or differences in spelling convention between words in the corpus and dictionary sources.]
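A toy rendering of the linear-chain score reconstructed above, with two invented binary feature functions for illustration (these are not the features used in the actual experiments):

def f_prev_tag(y_prev, y, x, i):          # transition-style feature
    return 1.0 if (y_prev, y) == ("acc_pl", "acc_pl") else 0.0

def f_word_tag(y_prev, y, x, i):          # emission-style feature
    return 1.0 if (x[i] == "menn" and y == "acc_pl") else 0.0

FEATURES = [(0.8, f_prev_tag), (1.5, f_word_tag)]   # (lambda_j, f_j) pairs

def crf_score(tags, words):
    # score(y, x) = sum over positions and features of lambda_j * f_j(...)
    total = 0.0
    for i in range(1, len(words)):
        for weight, f in FEATURES:
            total += weight * f(tags[i - 1], tags[i], words, i)
    return total

print(crf_score(["acc_pl", "acc_pl"], ["sína", "menn"]))   # 0.8 + 1.5 = 2.3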
Tagged corpora
When we started work on IceMorph we manually tagged a subset of 462 words. They were randomly chosen but reflect the relative frequency distribution of POS in Old Icelandic. We refer to this tagged set as the GOLD corpus.
In addition to the creation of GOLD, we asked our language experts to check and, if necessary, correct declension paradigms created by our prototype classifier via our online tool. At the time of writing, 488 corpus words had been processed by our experts; we refer to this tagged set as the EXPERT corpus. Figure 5 provides details of the two subsets we used for testing and evaluation. The two test corpora differ in nature. Since GOLD instances were chosen randomly, they are distributed evenly throughout the corpus. In addition, words representing high-frequency POS (as measured by occurrence in a dictionary), such as nouns (192 GOLD instances) and adjectives (153 GOLD instances), occur in GOLD relatively more often than words belonging to less frequent POS.
EXPERT instances, on the other hand, tend to cluster at the beginning of the corpus because our language experts focused on that section. Moreover, EXPERT contains many instances of words occurring frequently in the corpus even though the relative frequency of their associated POS in the dictionary may be lower (for instance, verbs with 160 instances or about 33%, and pronouns with 74 instances or about 15%). Table 6 shows the distribution of POS in EXPERT, GOLD, and in our concatenated dictionary.
When testing classifiers we distinguish between results obtained using EXPERT and GOLD, respectively. EXPERT is our closest analogy to a properly tagged test environment because it contains long sequences of tagged words. GOLD, on the other hand, allows us to study the robustness of a given classifier since most of its instances occur in a highly noisy environment (i.e. preceding and following words tend to not be tagged).
The data used for this project are available through the California Digital Library's "Merritt" data repository. We have deposited three sets of data in the repository, which can be used in conjunction with our code, available from GitHub. The three datasets are collected as a single data package on Merritt with the following DOI: 10.5068/D1WC7K. The contents of this package are as follows: (a) the concatenated dictionary file, stored as JSON (dictionary_20140605.json); (b) the untagged and tagged Fornaldarsögur corpus (allvol.zip and icemorph_corpus-2014-06-01.zip); (c) the EXPERT and GOLD training/testing corpora (tagged_corpora_20140605.json).
Classification results
As a baseline measure, we ran all classifiers on an in-sample data set (i.e., the same data was used for training and testing) for both the EXPERT and GOLD tagged sets. As expected, all classifiers performed well. We then split our test data into 80% training and 20% testing. In future work, the selection of corpus instances will be driven by "Query by Uncertainty", an active learning algorithm that [40] has shown provides increased accuracy for corpora with minimal training sets. From the EXPERT corpus we used the first 20% for testing, because forms tagged by experts tend to be clustered around the beginning of our corpus. Since the GOLD forms are spread more evenly throughout the corpus, we chose the last 20% as test data. When applying our classifiers to the split data set, the HMM classifier clearly outperformed the other two, its accuracy not suffering relative to its baseline (indeed, it scored higher). The restricted Viterbi consistently outperformed the default Viterbi. This is pronounced in the performance of HMM-rV on the GOLD corpus, which contains a higher degree of uncertainty. With respect to results from the EXPERT corpus on the POS tagging task, our HMM classifier yields results similar to state-of-the-art POS taggers trained on noise-free data. Table 7 contains the results of our classification tests.
The relatively poor performance of the CRF classifier deserves special explanation. Due to its higher demand for computing resources, we initially restricted its training set to sequences in which each word was associated with no more than one morphosyntactic form. As features, we chose the surface forms and MS tags of the preceding and following corpus words. Test CRF-1-80/20 performed below its in-sample baseline, but the decline was considerably smaller than that of the dynamic Bayesian network classifier. We assumed that by increasing the number of allowed morphosyntactic forms associated with a given word from one to two, we could improve CRF performance. But as test CRF-2-80/20 shows, the opposite was true: performance declined somewhat for EXPERT words. Our interpretation of these results is that while the CRF performs very well when trained with noise-free input, it is less capable of handling uncertainty in its training set than our HMM classifier with restricted Viterbi.
Conclusion and Outlook
The IceMorph POS and MS tagger attempts to maximize classification performance using a minimum of cleanly tagged training data. It is a hybrid system combining readily available resources for Old Icelandic (such as dictionaries, grammars, and corpora) and human expert feedback with machine learning algorithms for continuous automated classification. Given a small set of tagged words, IceMorph achieves corpus-wide POS classification accuracy of over 96% and MS classification accuracy of over 84%.
None of the resources used by IceMorph is noise-free. Dictionaries and corpora contain errors introduced during OCR or inherent in the sources themselves. Furthermore, the context-based classifier learns its probability matrix from highly noisy data. IceMorph is designed to maximize performance in this noisy environment. It does so by taking cues from human experts and by exploiting the logarithmic distribution of unique words in corpora, essentially reducing the classification task to a process of disambiguating homographs.
The key to improved performance will be to further reduce noise throughout the IceMorph system, most easily accomplished by expanding expert feedback. We are exploring additional ways to improve accuracy by refining our machine learning algorithms. We are also investigating how to optimize the selection of corpus words to have maximum impact on classification performance by implementing appropriate active learning algorithms. Finally, we are looking at ways to incorporate phenomena specific to Old Icelandic, such as enclitics (suffixed determiners), so as to reduce classification failures.
[Table 6 caption: The tagged corpus GOLD more closely resembles the distribution of the dictionary, while the tagged corpus EXPERT owes its pattern of distribution to frequencies in the saga corpus.]
Software and Data
Software for this project can be found on GitHub (search for IceMorph). Data are available at the University of California / California Digital Library repository Merritt, with the following DOI: 10.5068/D1WC7K | 6,171.2 | 2014-07-16T00:00:00.000 | [
"Computer Science"
] |