Systems Behavior: Of Male Courtship, the Nervous System and Beyond in Drosophila

Male courtship in fruit flies is regulated by the same major regulatory genes that also determine the general sexual differentiation of the animal. Elaborate genetic analysis has given us insight into the roles of these master genes, and the findings suggested two separate and independent pathways for the regulation of sexual behavior and of other aspects of sexual differentiation. Only recently have molecular studies begun to examine the downstream effector genes and how they might control sex-specific behavior. These studies have confirmed the essential role of the previously identified male-specific products of the fruitless gene in the neuronal circuits in which it is expressed. But there is increasing evidence that a number of non-neuronal tissues and pathways play a pivotal role in modulating this circuit and assuring efficient courtship.

INTRODUCTION

One of the fascinating fields of neurogenetics is the study of complex behaviors and the genes that control them. Courtship behavior in fruit flies (Drosophila melanogaster) is particularly well suited for such studies. The behavior consists of a series of consecutive steps that the courting male performs and that can easily be observed and quantified (reviewed in [1,2]): the male orients himself toward the female, taps the female with his forelegs, extends and vibrates one wing to "sing" a courtship song, licks the female's genitalia, attempts copulation, and copulates. Thanks to the many genetic and molecular tools that exist for this model organism, it has been possible to gain significant insight into the genes and processes that regulate the behavior. In Drosophila melanogaster development, sex is determined cell-autonomously or by signals between adjacent tissues, not by hormones (reviewed in [3,4]). Sexual behavior in flies is regulated by the same master regulators that control general somatic sexual development and that are part of a cascade of alternative splicing events. The primary signal lies in the ratio of X chromosomes to autosomes, which determines whether a functional form of the "master regulator" protein Sex-lethal (Sxl) is produced (in females) or not (in males) (Fig. 1). In females, functional Sxl protein acts as a splicing regulator to direct female-specific expression of the Transformer (TraF) protein, itself a splicing regulator. TraF interacts with Tra-2, another splicing regulator, and together they control the female-specific splicing of doublesex (dsx) and fruitless (fru) pre-mRNAs. This results in the production of the female-specific DSX protein (DSX-F); no female-specific FRU protein is formed, because of translational control [5,6]. In males, the absence of TraF leads to default splicing of both dsx and fru RNAs and to the production of the male-specific DSX-M and FRU-M proteins. The central role of tra in the control of sexual differentiation and sex-specific behavior is demonstrated by the fact that chromosomal females with a mutation in the tra gene are transformed into normal males with male courtship behavior [7]. Since tra controls both dsx and fru, further studies have asked which of the two controls mating behavior by examining the courtship of dsx and fru mutant males. A mutation in dsx was found to reduce overall male courtship and to impair courtship song, but it did not abolish courtship [8,9]. Females that expressed the male form DSX-M acquired male morphology, but did not court [8].
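The regulatory logic of the cascade described above can be condensed into a toy model. The sketch below is our own schematic abstraction (the gene names are real, but the boolean simplification and the function name are illustrative, not a quantitative model of splicing):

def sex_determination(x_chromosomes, autosome_sets):
    """Toy boolean model of the Drosophila somatic sex-determination cascade."""
    female = x_chromosomes / autosome_sets >= 1.0  # X:A ratio is the primary signal
    sxl = female            # functional Sxl protein is made only in females
    tra_f = sxl             # Sxl directs female-specific splicing yielding TraF
    # TraF (with Tra-2) directs female-specific splicing of dsx and fru pre-mRNAs;
    # without TraF, both default to the male splice forms.
    dsx = "DSX-F" if tra_f else "DSX-M"
    fru_m = not tra_f       # the female fru splice form yields no protein [5,6]
    return {"Sxl": sxl, "TraF": tra_f, "DSX": dsx, "FRU-M": fru_m}

print(sex_determination(2, 2))  # XX female: DSX-F, no FRU-M
print(sex_determination(1, 2))  # XY male: DSX-M plus FRU-M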
In contrast, males with strong mutant alleles of fru barely courted, demonstrating that fru is essential for male courtship. Weaker fru mutations lowered courtship and caused males to court females and males indiscriminately [10-14]. Based on these and similar experiments it was proposed that there are two independent branches downstream of tra: one through dsx that controls somatic sexual differentiation outside the nervous system, and another through fru in the nervous system that controls male courtship behavior [15]. In recent years it has become increasingly evident, however, that the two pathways both contribute significantly, and interact, to regulate male courtship through the CNS as well as other tissues. This article will review the role of fru in regulating courtship and discuss recent evidence that there is a close interplay between dsx- and fru-regulated pathways and genes in the regulation of courtship.

FRU IS A MASTER REGULATOR OF MALE-SPECIFIC BEHAVIOR

Recent excellent reviews have described the complexities of the fru gene and its functions in detail [16-18]. This article will summarize some of this information and focus on more recent findings on how FRU-M may control an amazing array of behaviors. A central role for fru in male courtship has recently been confirmed by findings that FRU-M and the neuronal network defined by FRU-expressing neurons are sufficient to specify the early steps of male mating behavior [19,20]. Expression of FRU-M in otherwise completely normal females leads to male courtship behavior towards other females, although at lower levels than in control males and with impaired courtship song, indicating that factors other than FRU-M are also required. In addition, females that express FRU-M show male-specific aggression, another sex-specific behavior that is regulated by fru [21-23]. The fruitless gene is large (150 kb) and encodes numerous transcripts with non-sex-specific and sex-specific functions that are transcribed from several promoters [6,11-13,24]. The most distal promoter, P1, gives rise to the sex-specific transcripts. They contain TraF binding sequences in their second exon. Binding of TraF, which is only present in females, leads to the choice of an alternative 5' splice site and inclusion of sequences with numerous translational stop codons; this female-specific transcript appears to be unable to produce any protein [6]. In males, in the absence of TraF binding, the stop-codon-containing part of the transcript is spliced out, allowing a long uninterrupted reading frame that gives rise to the male-specific FRU-M protein [6,11-13,25]. FRU proteins belong to the BTB-Zn-finger protein family, suggesting that they act as transcription factors, although no direct molecular targets have been identified yet. However, genome-wide searches for genes that are controlled by fru have identified numerous target genes (see below). The male-specific FRU-M protein contains a unique 101 amino acid N-terminal region. These sequences are highly conserved among Drosophila species, and their male-specific function is still under investigation. A recent report has suggested that these residues are essential to allow FRU-M to function when it is ectopically expressed in otherwise normal females, but that they may be less important for FRU-M function in its normal male context [26].
It has already been demonstrated that FRU-M isoforms that contain one of several alternative putative DNA-binding domains affect male neuronal differentiation and behavior differently [27]. The FRU-M protein is expressed in about 2000 neurons of the brain and ventral ganglia, as well as in the peripheral nervous system [5,11-13,19,20,28,29]. Are these neurons unique to males, and is this how FRU exerts its functions? That this is not the case was recently shown by the generation of transgenic flies carrying a manipulated fru gene that splices the sex-specific transcript exclusively in the male mode, even in females. To visualize the protein made from this transcript, FRU coding sequences were replaced with sequences coding for the yeast transcription factor Gal4, whose expression can be visualized, thus marking cells that normally express the male-specific splice form. When this transgene was expressed in females, the expression pattern was essentially indistinguishable from the pattern normally seen in males, indicating that the neuronal circuits that express FRU-M in males are also present in females [19,20,28]. Therefore, there are no gross anatomical differences caused by FRU-M expression that can account for fru-dependent male behaviors. However, on a smaller scale, neuronal dimorphism may be part of fru regulation. Several FRU-M-expressing clusters differ between males and females in cell number and other characteristics, and there is increasing evidence for specific roles for these and other subsets of FRU-M-expressing cells. A cluster of FRU-M-expressing neurons that is part of the median bundle, a structure that receives sensory input, is involved in controlling the sequential order of the different courtship steps, perhaps by coordinating different sensory stimuli [30]. Two glomeruli in the antennal lobe which receive olfactory input (DA1 and VA1v) differ in size between males and females, and those two glomeruli, plus an additional one (VL2A), were found to be the only olfactory glomeruli innervated by fru-Gal4-positive neurons [20,31]. Olfactory neurons that project to the DA1 glomerulus express the Or67d olfactory receptor, which responds to 11-cis-vaccenyl acetate (cVA), a male-derived pheromone. Activation of the receptor by cVA has different functions in males and in females: in males it inhibits courtship towards other males, in females it stimulates receptivity towards males [32,33]. Recent experiments have shown how the same pheromone perceived by the same receptor might lead to different behaviors in males and females. The projections from the DA1 glomerulus to the protocerebrum, a higher-order brain center, were found to be sexually dimorphic, and the male-specific projection pattern depends on the expression of FRU-M in these neurons and other FRU-M-positive cells [34]. In yet another cluster in the brain, named fru-mAL, neuron number and morphology differ between males and females. These differences depend on FRU-M and its regulation of differential programmed cell death between males and females [29]. Intriguingly, this cluster of neurons has recently been implicated in the control of male-specific aggressive behavior [35]. These data demonstrate that fru-expressing clusters can have distinct male-specific functions. It is not known how this functional specificity is brought about; part of it might be due to the fact that these clusters belong to different and dedicated neuronal circuits.
Since FRU-M has the characteristics of a transcription factor, it is likely to bestow male-specific molecular characteristics on the neurons that express it. This could occur during the development of these neurons and/or by setting differential physiological states of individual neurons in the adult animal. Whether the same set of FRU-M-dependent transcripts is induced in all fru-expressing neurons, or whether subsets of fru clusters express specific signatures of fru-regulated genes, remains to be seen.

BOTH FRU-M AND DSX ARE REQUIRED IN THE CNS FOR MALE-SPECIFIC FUNCTIONS

The male courtship song is an important part of male courtship behavior that has been shown to map to certain regions of the brain and the ventral thoracic ganglia [36]. fru mutant males have impaired courtship song, indicating a role for fru in regulating the behavior [13,14]. FRU-M, however, does not appear to be sufficient for specifying normal courtship song, since females expressing FRU-M do not exhibit normal courtship song [37]. Since a mutation in dsx also causes impaired courtship song in males, Rideout et al. and others tested the possibility that both fru and dsx are required to specify normal male courtship song [8,37]. Indeed, expression of both FRU-M and the male form of DSX, DSX-M, was required for normal courtship song. Co-expression of FRU-M and DSX-M was observed in neurons of the mesothoracic ganglia, in a neuronal cluster that shows a sexually dimorphic number of FRU-M-expressing neurons [37]. Intriguingly, expression of DSX-M was required to obtain the full set of male FRU-M-expressing neurons. This is reminiscent of previous findings that both DSX-M and FRU-M are required in the abdominal ganglia for the differentiation of male-specific serotonergic neurons [27], and that DSX-M is required for the increased number of neurons in the abdominal ganglia of males [38]. A central role for FRU-M-expressing abdominal neurons in the production and performance of courtship song was shown recently by Clyne et al. [39]. The authors expressed a light-activated ion channel in all fru-expressing neurons, which allowed them to activate these neurons specifically with light. When the cells in the abdomen of decapitated flies were activated, both males and females extended a wing and performed courtship song, although the characteristics of the song were different in females. When the females also expressed FRU-M, the displayed song was very male-like and was recognized by control females as valid courtship song. The authors concluded that the potential to display the behavior is largely present in both sexes, but that whether it is initiated, and the quality of the song, depend on stimuli and/or coordination mediated by FRU-M. In contrast to the results obtained in decapitated flies, light activation elicited the behavior at very low frequency in intact flies. Since control of courtship song requires not only the male abdominal ganglia but also male posterior regions of the brain, it is possible that the light-activated response was suppressed in intact flies: the sensory stimuli that usually trigger the behavior were absent, and higher-order control neurons were therefore inhibiting its display. That females may possess some intrinsic neural pathways for courtship had previously been suggested by findings that females which lack FRU-M but are mutant for the gene retained (retn) show some male courtship [40].
retn codes for an ARID-box transcription factor that is expressed in a small subset of neurons in both males and females that does not overlap with fru-expressing neurons. Furthermore, the effect of retn is influenced by whether DSX-M or DSX-F is present in these flies, and the authors showed that fru and dsx can act together in the context of developmental genes such as retn. DSX-M was also found to control the expression of a male-specific gustatory receptor, Gr68a. It is expressed in taste sensilla on the male foreleg and may play a role in the pheromonal perception of females; removal of Gr68a by RNAi affects courtship [41].

THE FAT BODY, A NON-NEURONAL TISSUE, AND GENES EXPRESSED OUTSIDE THE FRU CIRCUITS ARE REQUIRED FOR NORMAL COURTSHIP

Both FRU-M and DSX are transcription factors, but very little is known about the sex-specific genes they regulate and what role those genes might play in courtship. Their identification is crucial for our understanding of courtship regulation. Several groups have performed molecular screens to identify sex-specific transcripts and transcripts that change in fru and dsx mutants [41-45]. However, the biological role of only a few of these transcripts has been examined so far. The takeout (to) gene was identified in a subtractive screen and was shown to be preferentially expressed in male heads [44]. A mutation in takeout affects male courtship behavior and interacts genetically with fru, indicating that the two genes act in the same overall pathway regulating mating behavior. takeout mutant males showed an overall reduction in courtship; although they were able to perform all steps of courtship, they initiated and maintained the behavior at a significantly lower rate. Given that the mutant affects mating behavior, it was surprising to find that takeout transcripts were not present in the nervous system; instead, the gene was male-specifically expressed in the fat body that surrounds the brain (there is also some non-sex-specific expression in the antennae, the olfactory organs of the fly) [44]. The insect fat body consists of large, lipid-filled cells and is often compared to the mammalian liver (Fig. 3). Its crucial role in fat storage, energy metabolism and immunity is well documented [46-48], but it had not previously been implicated in the control of sex-specific behaviors; its only known sex-specific role was the production of yolk proteins in females [49]. To test whether the fat body has a general sex-specific role in male courtship behavior, genetic means were used to feminize the fat body in otherwise normal males and to ask whether this affected courtship. To do so, the female-specific TraF protein was targeted only to fat body cells. Changing sex in only a defined subset of cells is feasible in flies because, as mentioned earlier, sex is determined cell-autonomously and is not regulated by circulating hormones. Courtship was reduced drastically in males with a feminized fat body, indicating that the sexual identity of the fat body is indeed crucial for normal courtship [50]. Interestingly, courtship in these males was considerably lower than in the takeout mutants, suggesting that the feminization did not just reduce the amount of takeout, but probably also that of other fat body transcripts which normally play a role in courtship regulation. These other transcripts remain to be identified.
The lower courtship scores observed in males with feminized fat body are reminiscent of the reduced scores observed in females that express FRU-M. What if courtship in FRU-M females was lower than normal because they still had a female fat body? In a genetic experiment that was the converse of the one just described in males, the fat body was masculinized in females that also expressed FRU-M. These females now courted as well as normal males, underscoring the importance of the fat body and its interaction with the CNS [50]. There is increasing evidence for a sex-specific role of fat body factors. In another screen for sex-specifically expressed transcripts in the head, Fujii et al. [41] identified four genes with preferential sex-specific expression in the fat body: tsx, sxe1 and sxe2 were male-specifically expressed, and fit was female-specifically expressed. In addition, recent genomic screens have identified a number of sex-specifically expressed genes that appear to be expressed in the fat body ([45], see below). How can a tissue like the fat body regulate courtship behavior? As discussed earlier, expression of FRU-M in the CNS is required to establish the competence for courtship behavior; fat body factors must therefore interact with the nervous system to regulate its function. Since the fat body is a major secretory tissue, one possibility is that it does so by secreting factors into the hemolymph, the circulating fluid of flies, and that these factors somehow interact with the brain. Consistent with this hypothesis, the Takeout protein is present in the hemolymph [50]. This suggests that soluble, circulating factors may play a significant role in the control of Drosophila sexual behavior, reminiscent of the hormonal control of behavior in vertebrates (Fig. 3). How such proteins cross, or signal through, the blood-brain barrier and interact with fru circuits is unknown. Two lines of evidence suggest that the sex-specific role of fat body factors is physiological, in the adult and behaving fly, rather than developmental: only feminization of the fat body in adult flies, but not at larval stages, leads to the described reduction in courtship [50]; and, in experiments that looked at transcriptional changes in adult males that were allowed to court females for 5 minutes, at least three of eleven upregulated genes were genes that are controlled by the sex determination pathway and are expressed in the fat body [51]. The takeout gene codes for a 27 kDa protein with characteristics of soluble carrier proteins that is most similar in sequence to secreted Juvenile Hormone binding proteins of other insects [44,52,53]. Interestingly, expression of the takeout gene is regulated by both DSX-M and FRU: mutations in either gene reduce the amount of takeout present in males, indicating that both fru and dsx are required for full takeout activation [44] (Fig. 2). These findings support the notion that DSX-M acts as an activator in males, as had been previously suggested [54-56]. The action of DSX proteins is best characterized in the case of the female-specific yolk protein (yp2) promoter. Both DSX-F and DSX-M bind the yp2 promoter; bound DSX-F activates transcription, whereas bound DSX-M represses its activation [56-58]. Thus, the described effects of DSX proteins on yp2 are opposite to those observed for takeout regulation; moreover, in contrast to yp2, both DSX-M and FRU-M are required for normal takeout expression. This combinatorial logic is summarized schematically below.
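The following sketch is our own schematic reading of the genetic results in [44] and Fig. 2 (a qualitative truth table, not a quantitative expression model):

def takeout_level(dsx, fru_m):
    """Schematic takeout expression as a function of the DSX isoform and FRU-M."""
    if dsx == "DSX-M" and fru_m:
        return "high (wild-type male)"
    if dsx == "DSX-M" or fru_m:
        return "reduced"  # either male-specific factor alone is insufficient
    return "low (wild-type female)"  # DSX-F present, no FRU-M

for dsx in ("DSX-M", "DSX-F"):
    for fru_m in (True, False):
        print(dsx, fru_m, "->", takeout_level(dsx, fru_m))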
Consistent with this, expression of FRU-M in females alone is not sufficient for male levels of takeout, most likely due to the absence of DSX-M and an inhibitory effect of the presence of DSX-F (Fig. 2). Only in females that express both FRU-M and DSX-M are wild-type levels of takeout expression observed. It is not yet known whether DSX and FRU act by directly binding the takeout promoter, or through other transcription factors. Potential DSX consensus binding sites [59] have been observed within 1 kb upstream of the takeout transcription start site; FRU recognition and binding sequences have not been described yet. Recent microarray-based genomic studies that examined the expression of genes regulated by the sex determination hierarchy in the heads of flies have identified new modes of DSX-regulated gene expression [45]. These studies suggest that the model of regulation seen for the yolk protein genes and takeout, namely that one form activates and the other represses, holds only for a subset of dsx-regulated transcripts. For others, expression was lower in both sexes when dsx was mutated, indicating that they are usually activated by both DSX-M and DSX-F. Another class was higher in both mutants, indicating that both DSX forms usually repress these transcripts. The reason why these genes were expressed at different levels in the two sexes in the first place is that DSX-F appeared to both activate and repress to a greater extent. This may be because DSX-F interacts with another protein, encoded by intersex [60], which could make it a more potent activator and repressor. In addition, there was a class of transcripts for which DSX was required in only one sex. The same study also identified genes that were regulated by FRU-M. When whole heads and dissected brains were compared, it was discovered that the majority of the identified DSX and FRU-M targets were expressed outside of the nervous system. These genes are most likely expressed in the fat body, or perhaps in glial cells. These findings indicate that there may be a fairly large number of sex-specific transcripts in the fat body, supporting earlier findings about its sex-specific function. Further studies will be required to determine the role of individual genes and whether and how they contribute to sex-specific behaviors. Since FRU-M expression has so far not been observed in the fat body [5,6,11], the finding that a significant number of its transcripts are regulated by fru poses the question of how this regulation occurs. Unless fru levels in the fat body were below the detection threshold, FRU-M probably acts indirectly, perhaps by influencing the generation of a circulating signal, or via other effects mediated by the neuronal activity of FRU-M-expressing cells. Very few FRU-M targets were identified in the nervous system, possibly because they are expressed only in small subsets of FRU-M-expressing cells and therefore may not have been detected under the stringent criteria of the screen [45]. One of the identified FRU-M targets, dpr (defective proboscis extension response), was found to affect courtship: mutant males showed reduced courtship latency and reduced time to copulation. Interestingly, dpr is expressed in ascending median bundle neurons that express FRU-M and that in earlier studies had been shown to regulate the timing of courtship [30]. Not only is there mounting evidence for the crucial role of the fat body; in addition, a recent study by Grosjean et al.
[61] has shown a contribution of glial cells in the brain. A mutation in the gene genderblind, which is expressed in CNS glial cells, causes males to become non-discriminatory and to court females and males alike. This is most likely due to an overreaction to, and improper processing of, chemosensory cues, since these mutants do not court desat1 mutant males, which produce very small amounts of sex-specific pheromones. However, they do court desat1 males that have been "painted" with 7-tricosene, a pheromone that is thought to normally prevent male-male courtship. genderblind codes for a transporter that regulates extracellular glutamate, an indication that glutamatergic neurons are involved in the processing of pheromone detection. Taken together, our current knowledge of male courtship behavior reveals an intricate network of neuronal circuits that are set up under the control of both the fruitless and doublesex genes and that together confer the neuronal competence for the behavior (also discussed in [62]). However, perhaps surprisingly, efficient and normal courtship depends on additional input from non-neuronal tissues, such as the fat body and glial cells. Diffusible sex-specific factors secreted from the fat body may play an important role in this regulation, suggesting that sexually dimorphic characters in Drosophila result from the interaction of sex-determining genes and endocrine factors.
Knocking on New Physics' door with a Scalar Resonance

We speculate about the origin of the recent excess at ~750 GeV in diphoton resonance searches observed by the ATLAS and CMS experiments using the first 13 TeV data. Its interpretation as a new scalar resonance produced in gluon fusion and decaying to photons is consistent with all relevant exclusion bounds from the 8 TeV LHC run. We provide a simple phenomenological framework to parametrize the properties of the new resonance and show in a model-independent way that, if the scalar is produced in gluon fusion, additional new colored and charged particles are required. Finally, we discuss some interpretations in various concrete setups, such as a singlet (pseudo-)scalar, composite Higgs, and the MSSM.

Introduction

Very recently, the ATLAS and CMS collaborations presented first results from 13 TeV proton-proton collisions at LHC Run-II [1,2]. Intriguingly, both experiments found a resonance-like excess in the diphoton invariant mass spectrum around 750 GeV. The CMS collaboration reported a 95% CL upper limit of 13.7 fb on the cross section times branching ratio of a narrow spin-2 resonance decaying into two photons, compared with an expected exclusion of 6.3 fb (Fig. 6 of Ref. [1]), which corresponds to an excess with a local significance of 2.6 σ [1]. When interpreting the excess in terms of the signal strength for a narrow scalar (or pseudo-scalar) resonance, based on the expected and observed exclusion limits, the CMS measurement in the Gaussian approximation reads

µ^CMS_13TeV = σ(pp → S)_13TeV × B(S → γγ) = (5.6 ± 2.4) fb . (1)

Moreover, the ATLAS collaboration reported an observed exclusion limit in the fiducial region for a narrow-width scalar resonance of µ^ATLAS_13TeV,fid < 11.5 fb, compared with an expected exclusion of 2.6 fb (Fig. 3 of Ref. [2]), showing an excess of 3.6 σ significance [2]. Using Monte Carlo simulation we estimate the acceptance of the fiducial region for scalar production via gluon fusion to be ∼ 60%. In this case, the Gaussian approximation cannot be used to estimate the signal strength, as is clear also from the very large value of the observed limit compared to the expected one. We therefore parameterize the likelihood with a Poissonian function, requiring the correct observed exclusion limit and local significance to be reproduced, resulting in

µ^ATLAS_13TeV = σ(pp → S)_13TeV × B(S → γγ) = 10^{+4}_{−3} fb . (2)

The CMS search for a diphoton scalar resonance [3], performed during the Run-I phase at a proton-proton collision energy of 8 TeV, sets a 95% CL observed upper limit of σ(pp → S)_8TeV × B(S → γγ) < 1.32 fb, with an expected limit of 0.69 fb (Fig. 10 of Ref. [3]), implying that a ∼ 2σ excess was observed by CMS already at Run-I. The analogous ATLAS search [4] reported an observed upper limit on the RS graviton production cross section times branching ratio of < 2.8 fb at 95% CL, with an expected one of 2.2 fb (Fig. 4 of Ref. [4]). We estimate that the limit improves only by a factor ∼ 1.3 for the scalar resonance case. Based on the expected and observed exclusion limits, the diphoton signals from the 8 TeV searches in the Gaussian approximation, for a narrow-width scalar resonance, are

µ^CMS_8TeV = σ(pp → S)_8TeV × B(S → γγ) = (0.63 ± 0.35) fb ,
µ^ATLAS_8TeV = σ(pp → S)_8TeV × B(S → γγ) = (0.46 ± 0.85) fb . (3)
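The Gaussian reconstruction used above can be made explicit: for an expected 95% CL limit E and an observed limit O, the implied central value is µ̂ = O − E with uncertainty σ = E/1.96. A minimal sketch (our own illustration, reproducing the 8 TeV numbers of Eq. (3)):

def mu_from_limits(observed_fb, expected_fb):
    """Gaussian signal strength implied by 95% CL expected/observed upper limits:
    observed = mu + 1.96*sigma, expected = 1.96*sigma (background-only)."""
    sigma = expected_fb / 1.96
    return observed_fb - expected_fb, sigma

# CMS 8 TeV scalar search [3]: observed 1.32 fb, expected 0.69 fb
print(mu_from_limits(1.32, 0.69))            # ~ (0.63, 0.35) fb, as in Eq. (3)
# ATLAS 8 TeV [4]: graviton limits 2.8 / 2.2 fb, rescaled by ~1.3 for a scalar
print(mu_from_limits(2.8 / 1.3, 2.2 / 1.3))  # ~ (0.46, 0.86) fb, close to Eq. (3)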
Before presenting any further discussion, we note that there is a simple (and yet general) way to test the compatibility of the 8 and 13 TeV measurements, by fitting a single parameter:

µ_13TeV = R_pp × µ_8TeV , (4)

where R_pp depends on the production mechanism of the scalar and, to a good approximation, is given by the ratio of the parton luminosities of the relevant initial states at the two collision energies. For instance, this ratio is 4.7, 2.5 and 2.7 for a 750 GeV scalar produced via gluon fusion (gg), vector boson fusion (qq̄ + qq) and associated production (qq̄), respectively [5]. Among these, the most likely possibility to reconcile the measurements at the two energies is that the scalar is produced dominantly via gluon fusion. Finally, when interpreting the resonance as a narrow-width (pseudo-)scalar particle produced via gluon fusion, our combination of the 8 and 13 TeV measurements leads to

µ_13TeV = (4.6 ± 1.2) fb , (5)

still showing a significant excess over the SM background. Shown in Fig. 1 are the individual likelihoods at 8 (13) TeV in dashed (dotted-dashed) lines, both for CMS (red) and ATLAS (blue), while the combination is shown in solid black.

Figure 1: Reconstructed likelihoods (Δχ²) as a function of µ_13TeV of the diphoton resonance searches at 8 TeV (dashed) and 13 TeV (dotted-dashed) by CMS (red) and ATLAS (blue). The combination (solid black) is obtained when interpreting the resonance as a narrow-width (pseudo-)scalar particle produced via gluon fusion.
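As a rough cross-check of this combination (a sketch only: we rescale the 8 TeV inputs by R_pp = 4.7 for gluon fusion and treat all four measurements, including the Poissonian ATLAS 13 TeV one, as Gaussian):

R_pp = 4.7  # gluon-fusion parton luminosity ratio, 13 TeV / 8 TeV
measurements = [                 # (mu, sigma) in fb, at or rescaled to 13 TeV
    (0.63 * R_pp, 0.35 * R_pp),  # CMS 8 TeV, Eq. (3)
    (0.46 * R_pp, 0.85 * R_pp),  # ATLAS 8 TeV, Eq. (3)
    (5.6, 2.4),                  # CMS 13 TeV, Eq. (1)
    (10.0, 3.5),                 # ATLAS 13 TeV, Eq. (2), errors symmetrized
]
weights = [1.0 / s**2 for _, s in measurements]
mu = sum(w * m for w, (m, _) in zip(weights, measurements)) / sum(weights)
sigma = sum(weights) ** -0.5
print(round(mu, 1), round(sigma, 1))  # ~ 4.4 +- 1.2 fb, close to Eq. (5)

The small difference from Eq. (5) comes from the Gaussian treatment of the ATLAS 13 TeV likelihood.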
Another important experimental input, besides the overall signal yield, is the decay width of the resonance (Γ_S). The typical diphoton invariant mass resolution of the detector at 750 GeV is approximately ∼ 10 GeV [4]. On the one hand, if the width is much smaller than the resolution, as for the SM Higgs, it remains experimentally unmeasurable. On the other hand, for a sizable width a direct measurement from the lineshape is possible. Based on the results presented in [1,2], the observed width of the excess is easily compatible with Γ_S ≲ 40 GeV. Indeed, the best fit in the ATLAS analysis [2] is obtained for Γ_S/m_S ∼ 6% with 3.9 σ local significance (compared with 3.6 σ in the narrow-width approximation). It is clear that the present data is not yet conclusive, and a precise measurement of the width would require further analysis. Until then, we study the phenomenological implications of both cases: negligible and sizable width (compared to the resolution). In the rest of the paper we entertain the possibility that the excess is due to a new scalar (or pseudo-scalar) resonance. In Sec. 2 we introduce a simplified effective framework to parametrize its couplings to SM particles, while in Sec. 3 we explore all the relevant experimental constraints from other searches performed at Run-I. In Sec. 4 we discuss two phenomenologically different scenarios based on the size of the total decay width. Finally, in Sec. 5 we speculate on possible interpretations of the excess in some concrete new physics models, after which we conclude.

Simplified characterization framework

Let us add to the Standard Model (SM) a neutral scalar resonance S with mass m_S. The relevant effective operators for gg → S → γγ (Eq. (6)) couple S to G^a_µν G^{a,µν} and F_µν F^{µν}, with coefficients proportional to c_G α_s and c_γγ α, where G^a_µν and F_µν are the SU(3)_C and U(1)_QED field strength tensors, respectively, while α_s and α are the strong and electromagnetic coupling constants. Analogously, if the resonance is a pseudo-scalar, the effective Lagrangian describing its interactions with SM particles remains the same as in Eq. (6) after the substitution c_{G,γγ} → c̃_{G,γγ} and X_µν X^{µν} → ½ ε^{µνρσ} X_µν X_ρσ, where X = G, F (Eq. (7)). We assume that the dominant production mechanism of the scalar S at a hadron collider is gluon fusion (gg → S), induced by c_G. As anticipated in the literature, the leading-order cross section is expected to receive a large higher-order QCD correction factor [6]. We benefit from the precise computations of SM Higgs boson production at the LHC; in fact, the QCD correction factor in the infinite top mass limit (which matches our calculation) is a good approximation of the full top mass dependence even for a relatively heavy Higgs boson [6]. In the numerical analysis below, we fix m_S = 750 GeV and estimate the total cross section using the available NNLO QCD predictions for the SM Higgs [6,7], rescaled to the heavy-m_top limit (since we assume heavy mediators in the loops generating Eq. (6)); the resulting production cross sections for a 750 GeV scalar are given in Eq. (8). For the pseudo-scalar case, we note that the N³LO QCD corrections, computed recently in Ref. [8] for 13 TeV, provide a K-factor very similar to that of the scalar [6], well within the scale uncertainty of the NNLO computation. The decay widths of S into two photons and two gluons following from Eq. (6) are given in Eqs. (9)-(10), where we include the NLO QCD correction factor K_F = 1 + 67α_s/(4π) from Ref. [6], and the running coupling constant at the appropriate scale, α_s ≡ α_s(m_S). Note that the leading-order partial decay widths for the pseudo-scalar case are the same, with the substitution c_X → c̃_X.
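For orientation, the size of K_F at µ = m_S can be estimated with one-loop running of α_s from M_Z (a back-of-the-envelope sketch of our own, not the NNLO treatment used in the fit):

import math

def alpha_s_one_loop(mu_gev, alpha_mz=0.118, mz=91.19, nf=5):
    """One-loop running of the strong coupling from M_Z up to the scale mu."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + alpha_mz * b0 / (2.0 * math.pi) * math.log(mu_gev**2 / mz**2))

a_s = alpha_s_one_loop(750.0)             # alpha_s(m_S) ~ 0.091
K_F = 1.0 + 67.0 * a_s / (4.0 * math.pi)  # NLO correction factor from [6]
print(round(a_s, 3), round(K_F, 2))       # -> 0.091 1.48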
Since the c_γγ and c_G effective couplings are expected to be generated at loop level (c_{γγ,G} ∼ O(1)), any tree-level coupling of S to lighter states is typically expected to substantially increase the total decay width to ∼ O(GeV), thus strongly reducing the branching fraction into two photons; in that case larger values of both c_G and c_γγ are needed to fit the excess. In Sec. 4.2 we quantify this statement. In order to assess the allowed parameter space for a generic model with a 750 GeV neutral scalar, we parametrize its tree-level couplings to SM particles as in Eq. (11). The partial decay widths which follow from these couplings are given in Eq. (12), where β_X = 4m_X²/m_S². Here m̄_f is the MS̄ mass, which should be evaluated at the scale m_S, and ∆_QCD are the higher-order QCD corrections [6]. In the WW and ZZ decay formulas given in Eq. (12) we neglected the contributions from loop-induced couplings, since these are expected to be subleading whenever the tree-level couplings are present. The tree-level couplings to the top and the W induce at one loop the effective Sγγ and Sgg interactions; their contribution to the decay widths and production cross section can be obtained with the substitution in Eq. (13). Let us analyze the question of whether the excess can be accommodated by the top coupling only, without invoking further contributions to c_G and c_γγ. Using the expressions derived in this section, we find that the required coupling is c_t ≳ 50. This, however, would imply an unphysical partial decay width Γ(S → tt̄) ∼ 8 TeV. We also note that the situation cannot be ameliorated by considering non-zero values of c_V, nor by introducing a large coupling c_b to the bottom quark as well. We therefore exclude this possibility and conclude that new colored and charged particles inside the loop are necessary in order to explain the excess. As we will see later on, these particles are expected to be light in order to accommodate the excess, potentially within the reach of the LHC. In addition to the decay channels listed in Eq. (12), we also consider a possible invisible decay width Γ_inv. This could be due to decays into dark matter particles, or into particles which escape detection, or for which no experimental bound is present.

Experimental constraints

The framework introduced above can be employed to analyze other potentially relevant experimental constraints in a model-independent way. In Table 1 we summarize the LHC Run-I limits on σ × B for a given decay channel for a narrow 750 GeV neutral scalar (pseudo-scalar) resonance (Γ_S/m_S < few %). An extended discussion is given below. The CMS search for a dijet resonance [9] at 8 TeV and 18.8 fb⁻¹, optimized for the mass window between 500-800 GeV, imposes a 95% CL upper limit on the production of an RS graviton decaying to gg (Eq. (14)), where A is the acceptance; we conservatively assume A = 1. The ATLAS [11,13] and CMS [10] searches for a scalar resonance decaying to ZZ and WW with the full data set, combining all the relevant Z and W decay channels, impose the 95% CL upper limits of Eq. (15). In addition, the ATLAS collaboration [12] has performed a search for a resonance decaying to γ and Z(→ ℓ⁺ℓ⁻). From Monte Carlo simulations we estimate that ∼ 70% of all events fall into the fiducial region, and we interpret the search as a 95% CL upper limit on the inclusive σ × B (Eq. (16)). The decay into a Higgs boson pair is also subject to constraints. Several searches have been performed during LHC Run-I by ATLAS [15] and CMS [14]; for a resonance mass of 750 GeV, the most stringent constraints come from the 4b channel, where each of the 125 GeV Higgs bosons decays into a bb̄ pair (Eq. (17)). This constrains B(S → hh) at a level similar to the branching ratio into vector bosons, i.e. a few 10⁻², for a production cross section of the order of a few picobarn. Decays into quarks are experimentally less constrained. In particular, ATLAS [17] and CMS [16] searches for a resonance decaying into a pair of top quarks result in the constraints of Eq. (18). Other fermionic channels, such as bb̄, τ⁺τ⁻, or light quarks and leptons, are less relevant if one assumes Yukawa-like couplings to the new resonance as in Eq. (11). An interesting possibility is that S predominantly decays to invisible particles that might constitute (part of) the observed dark matter in the universe. This scenario, on the other hand, might lead to sizable mono-jet (missing energy plus jet) signatures at the LHC. The present ATLAS search [18] sets a bound on σ(pp → H) × B(H → inv) for a heavy Higgs-like particle, but only up to a mass of 300 GeV. We performed a Monte Carlo simulation of the signal for 750 GeV, applying the same cuts as in the analysis, and find that the acceptance times efficiency improves by a factor of ∼ 3 with respect to the 300 GeV case. We use this to estimate the bound on a 750 GeV scalar (Eq. (19)). Negative results from the 8 TeV resonance searches, together with the positive signal in the γγ channel from Eq. (5), imply model-independent upper limits on B(S → XX)/B(S → γγ) for a given channel XX. These are illustrated in Fig. 2 for XX = Zγ, ZZ, WW, hh, tt̄, gg and invisible; the logic is sketched below.
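The translation behind Fig. 2 is simple: the combined fit fixes the 8 TeV diphoton signal at σ × B(γγ) ≈ µ_13TeV/R_pp ≈ 1 fb, so each 8 TeV limit on σ × B(XX) divides out into a limit on B(XX)/B(γγ). A sketch (the 12 fb input is a hypothetical placeholder, not a limit quoted in the text):

mu_13, R_pp = 4.6, 4.7         # fb; Eq. (5) and the gluon-fusion luminosity ratio
sigma_b_gaga_8 = mu_13 / R_pp  # ~1 fb of sigma x B(gamma gamma) implied at 8 TeV

def branching_ratio_limit(limit_xx_fb):
    """Upper limit on B(S->XX)/B(S->gamma gamma) from an 8 TeV limit on sigma x B(XX)."""
    return limit_xx_fb / sigma_b_gaga_8

print(round(branching_ratio_limit(12.0), 1))  # a 12 fb limit gives B(XX)/B(gaga) < ~12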
Phenomenological scenarios

With these experimental results in mind, we face two different classes of scenarios, depending on whether or not the resonance has extra tree-level decay channels that would lead to a sizable decay width. Since this is an open issue in the present analyses [1,2], we study both cases separately.

• Only loop-induced decays: The dominant decay channels of S are into the SM gauge bosons (gg, γγ, Zγ, ZZ and W⁺W⁻) through loops of heavy states of the new sector, described effectively by the Lagrangian of Eq. (21), written in an SU(2)_L × U(1)_Y invariant way. In terms of the mass eigenstates, the relevant part of the above Lagrangian reads as in Eq. (22), with the coefficients c_γγ and c_Zγ related to c_B and c_W as in Eq. (23). Analogous operators can be written in the pseudo-scalar case using the substitutions of Eq. (7) (Eq. (24)). In this scenario, the relevant phenomenology depends entirely on c_G, c_W and c_B (or c̃_G, c̃_W and c̃_B).

• Sizable extra decay channels: S is allowed to couple at tree level to other, lighter particles (either SM or BSM ones) and, therefore, Γ_S is expected to be dominated by extra channels, rendering it largely independent of c_G and c_γγ. In this case, the phenomenological parameters relevant to the observed excess are c_G, c_γγ, and Γ_S.

Phenomenology of the "loop-only" scenario

The loop-induced decay channels generated by the effective couplings in Eq. (21) are dominant only if any tree-level couplings to lighter states are strongly suppressed. Starting from Eq. (21), we compute the partial decay widths for S → VV, where VV = γγ, ZZ, WW, Zγ and gg, as a function of c_G, c_B, and c_W. The total width is then simply given by the sum of the partial widths, for m_S = 750 GeV. The branching ratio into two photons, B(S → γγ), is also a function of the same three coefficients only. In Fig. 3, we show the 68 and 95% CL regions preferred by the combined diphoton signal of Eq. (5); the dijet bound constrains the effective Sgg coupling, |c_G| ≲ 12. On the other hand, this scenario also predicts a correlated signal in the ZZ, Zγ, and W⁺W⁻ channels, governed by the single parameter R_WB = c_W/c_B. The best present limit comes from the Zγ channel (see Fig. 2), giving a lower limit R_WB ≳ −1.7 and only a loose upper limit. The correlation induced via the single parameter R_WB is a striking prediction for future searches at LHC Run-II. In Sec. 5 we discuss the interpretation of the excess in some concrete models which fall into this class.

Phenomenology of the "extra-width" scenario

In this section we entertain the possibility that the decay width of the new scalar is dominated by decay channels other than the loop-generated ones described previously, in which case the total decay width Γ_S becomes an independent free parameter. This implies that the branching fraction into two photons, B(S → γγ), is a function of c_γγ and Γ_S only. Within this class of scenarios, two possibilities could be realized:

• either the width is bigger than the experimental resolution (Γ_S ≳ 10 GeV) and could then be measured directly,

• or it is still smaller than the resolution, Γ_S ≲ 10 GeV, but larger than the "loop-only" contribution, Γ_S^loop < Γ_S.

Figure 4: Preferred region at 68% (green) and 95% (yellow) CL from the diphoton excess of Eq. (5) and constraints from dijet production, Eq. (14), in the (c_G, c_γγ) plane for Γ_S = 1 GeV (upper plot) and Γ_S = 20 GeV (lower plot). Shown in black-dotted is the gluon fusion production cross section at 8 TeV. The blue line is the prediction for a composite vector-like fermion bidoublet (2,2)_{2/3}. See text for details.

As benchmark points, we assume the total decay width to be either Γ_S = 20 GeV or Γ_S = 1 GeV, and repeat the survey of the relevant phenomenological constraints. In this case, the diphoton signal strength at 13 TeV is given by Eq. (26), where one notices the simple scaling µ_13TeV ∝ c_G² c_γγ²/Γ_S; this is illustrated below.
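The consequence of this scaling can be seen in a couple of lines (schematic: the overall normalization kappa is left unspecified, since Eq. (26) is not reproduced here):

def mu_13(c_g, c_gaga, gamma_s_gev, kappa=1.0):
    """Schematic mu_13TeV ~ kappa * cG^2 * c_gaga^2 / Gamma_S in the extra-width case."""
    return kappa * c_g**2 * c_gaga**2 / gamma_s_gev

# Keeping mu_13 fixed while raising Gamma_S from 1 to 20 GeV requires the product
# |cG * c_gaga| to grow by sqrt(20) ~ 4.5, as seen across the two panels of Fig. 4:
print(mu_13(5.0, 5.0, 1.0))                         # reference point, ~625
print(mu_13(5.0 * 20**0.25, 5.0 * 20**0.25, 20.0))  # ~625, equal by construction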
We show in Fig. 4 in green (yellow) the 68% (95%) CL regions preferred by the combined fit to the diphoton resonance searches from Eq. (5). The main difference with respect to the previous case is that the c_{G,γγ} couplings have to be large in order to fit the diphoton excess. This should also be confronted with the dijet bound from Eq. (14), which corresponds to a constraint |c_G| ≲ 13 if Γ_S = 1 GeV, as shown in Fig. 4 (top), and |c_G| ≲ 28 if Γ_S = 20 GeV, as shown in Fig. 4 (bottom). According to Eq. (8), the large values of c_G enhance the production cross section; it is therefore important and non-trivial to determine into which final states S is allowed to decay, and to compare with the experimental constraints discussed in Sec. 3. The isolines of the total production cross section σ(pp → S) at 8 TeV are shown with black-dashed vertical lines in Fig. 4. The 95% CL constraints on tree-level decay channels from the 8 TeV resonance searches are those summarized in Table 1.

Explicit models and insights

In this section we provide interpretations of the generic scenarios described above in the context of some more concrete models.

Scalar singlet with loop decays only

In this class of models the scalar resonance is an SU(2)_L × U(1)_Y singlet, which decays predominantly into SM gauge bosons via the couplings in Eq. (21). This feature can be naturally achieved by assuming that S is the lightest state of a new heavy sector, coupled to the SM only via the SM gauge interactions, and that it does not participate at tree level in electroweak symmetry breaking, i.e. there is no mixing with the SM Higgs boson. In this case the operators in Eq. (21) are generated via loops of (colored and charged) heavy states of the new sector. Given that the main phenomenological constraints concern only the gg → S → γγ process, the relevant properties of the new particles inside the loop are the spin and the quantum numbers under SU(3)_C and U(1)_QED. As a simple benchmark scenario, we assume S is coupled to a set of vector-like heavy fermions Ψ_i, triplets (or singlets) of SU(3)_C with electric charge Q_i and mass M_i, via marginal (Yukawa-type) operators (Eq. (29)). At one loop they generate the necessary couplings of S to gluon and photon pairs. In the limit of heavy fermions (τ_{Ψ_i} = m_S²/4M_i² ≪ 1), we can match the c_G, c_γγ and c_Zγ coefficients to the model parameters (Eq. (30)), where N_c^i = 3 (1) for color triplets (singlets) and T_3^i is the SU(2)_L isospin quantum number. For completeness, c_Zγ parameterizes the effective scalar coupling to Zγ. Using the relations in Eq. (23), the matching in Eq. (30) can easily be translated to c_B and c_W. Based on the analysis in Sec. 4.1, the "loop-only" scenario points to relatively small effective couplings c_G, c_B and c_W, which can in principle be due to a single particle (or a few particles) in the loop. For example, consider a single vector-like quark representation which is a singlet under SU(2)_L and has electric charge Q_f = Y_f. In Fig. 3 (top), we show in solid blue the predictions for electric charges 5/3, 2/3 and 1/3. The first two, in particular, can nicely explain the observed excess for reasonable values of g* ∼ O(1) and M_f ∼ O(TeV). If this is correct, one can expect vector-like quark signatures to show up at the LHC; the charge dependence is illustrated below.
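Under the heavy-fermion matching of Eq. (30), the photon coupling generated by a color triplet scales with N_c Q², while the gluon coupling is charge-blind; the relative weights for the three charges are (normalization omitted, since Eq. (30) is not reproduced here):

from fractions import Fraction

def photon_weight(charge, n_c=3):
    """Relative photon-loop weight N_c * Q^2 of a heavy vector-like fermion."""
    return n_c * charge**2

for q in (Fraction(5, 3), Fraction(2, 3), Fraction(1, 3)):
    print(q, photon_weight(q))
# 5/3 -> 25/3, 2/3 -> 4/3, 1/3 -> 1/3: the larger charges give the larger
# c_gaga/c_G ratio, which is why Q = 5/3 and 2/3 fit the excess most easily.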
Singlet mixed with the Higgs

The previous scenario can be generalized by allowing also for other decay channels of the singlet into SM particles, which can arise at the renormalizable level through mixing with the Higgs. Indeed, the cubic term S|H|² will in general be present in the scalar potential, if no particular assumption is made to suppress it, and it will give rise to a mass mixing between the two CP-even states after electroweak symmetry breaking. A situation of this kind can be found in the context of many natural extensions of the SM. The effective Lagrangian for the singlet is then given by Eq. (11), with the coefficients fixed by the singlet-Higgs mixing angle θ. For sufficiently high masses, the main decay widths of S are into W, Z, and Higgs bosons, in an approximate ratio dictated by the equivalence theorem. The exact value of Γ(S → hh), which is determined by c_hm, depends on the details of the scalar potential; see e.g. [19]. If no other relevant decay modes are present, the branching ratios into W⁺W⁻ and ZZ are close to 0.5 and 0.25, respectively. The total width in this case is Γ_S = sin²θ Γ_SM, where Γ_SM ≈ 250 GeV is the width of a SM-like Higgs of 750 GeV. One can then see from Fig. 2 that, due to the bound on S → ZZ, a branching ratio into γγ of at least about 2% is needed in order to reproduce the observed signal if B(S → ZZ) ∼ 25%. Independently of the total decay width, this can be recast as a bound on the mixing angle, sin θ ≲ 2 × 10⁻³ c_γγ. The strongest experimental constraint on the mixing angle from the LHC Run-I Higgs coupling analysis, sin²θ < 0.2 [20], is several orders of magnitude weaker for physically motivated values of c_γγ.

Pseudo-scalar singlet

Similar couplings to photons and gluons can be generated also for a pseudo-scalar singlet S, with a coupling to the heavy fermions of the type L_Yuk ⊃ −g*_i S Ψ̄_i iγ_5 Ψ_i. In this case, the matching to c̃_G and c̃_γγ is similar to Eq. (30) with g*_i → (3/2) g*_i. A nice feature of the pseudo-scalar scenario is that CP symmetry automatically forbids a mixing of this particle with the SM Higgs, thus also forbidding tree-level decays to SM gauge bosons. Axion-like particles fall into this class of models; an early analysis of collider bounds in this context can be found e.g. in Ref. [21]. In addition, if the pseudo-scalar singlet is one of the pseudo-Nambu-Goldstone bosons (pNGBs) of a non-minimal composite Higgs scenario with symmetry-breaking pattern G/H, one expects a Wess-Zumino-Witten term to be generated in the low-energy effective theory; see e.g. Refs. [22,23]. This effectively provides couplings of the pseudo-scalar singlet to the SM gauge bosons exactly as in Eq. (24), with matching coefficients proportional to n_{G,W,B} m_S/f, where f is the scale of the spontaneous symmetry breaking in the strong sector and n_{G,W,B} are O(1) coefficients which depend on the symmetries and fermion content of the underlying UV theory. Using Eq. (23) for the pseudo-scalar, one obtains c̃_γγ = (n_B + n_W) m_S/f. This contribution, together with those from loops of heavy fermions discussed above, could easily match the observed excess for O(1) values of the n_{B,W,G} parameters (see Fig. 3). In this context, measuring the n_{B,W,G} parameters could offer an insight into the UV structure of the strong sector [23], in the same way as the measurement of the π⁰ → γγ decay width offered insights into the structure of QCD. A simple model which provides a singlet pNGB, as well as a solution to the electroweak naturalness problem, can be found in the context of non-minimal composite Higgs models, for example those based on the spontaneous symmetry-breaking pattern SO(6)/SO(5) [23-26].
In particular, it has been shown [25,26] that a ∼ 750 GeV singlet pNGB can be accommodated in such models. Even though in this case the UV anomaly is such that n_G = 0 and n_B = −n_W, which gives vanishing c̃_γγ and c̃_G, the necessary non-zero contributions to these coefficients can be obtained from loops of SM fermions or of the vector-like top partners generically predicted in these setups.

Composite resonance

The "extra-width" scenario points to large effective couplings which, based on perturbativity arguments, require several fermion representations to contribute coherently to gg → S. Interestingly enough, we find this to be plausible in the context of composite Higgs models if S is a composite scalar singlet resonance of the strong sector. In this example we focus on the minimal composite Higgs model SO(5)/SO(4) [27], where such a resonance could play a role in the unitarization of WW scattering at high energy [28]. Its mass is naively expected to be near the strong coupling scale Λ ∼ (few) TeV, unless it is protected by a symmetry. Nevertheless, the resonance could still be light due to accidental cancellations or peculiarities of the non-perturbative dynamics. For example, in QCD the σ meson (or f_0(500)) is much lighter than the typical mass scale of the other resonances, even though it is not a pNGB like the pions. In order to generate sizable values of c_G consider, for example, a color-triplet vector-like fermion resonance that transforms as a bidoublet (2,2)_{2/3} under the SU(2)_L × SU(2)_R × U(1)_X global symmetry of the strong sector. The four mass eigenstates have electric charges (Q = T_{3L} + T_{3R} + X) of 5/3, 2/3, 2/3 and −1/3. The composite scalar S couples to the bidoublet via the strong-sector coupling g*, as described in Sec. 5.1. Using Eq. (30), we find the correlation in the (c_G, c_γγ) plane shown in solid blue in Fig. 4. The excess can easily be accounted for with reasonable values of the parameters, e.g. for Γ_S = 20 GeV, M_f ∼ 1 TeV, and g* ∼ 3; the charge counting behind this correlation is sketched below.
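The charge counting for the bidoublet goes as follows (a sketch of the counting only; the full matching is Eq. (30)):

from fractions import Fraction

# Mass eigenstates of the (2,2)_{2/3} bidoublet, Q = T3L + T3R + X:
charges = [Fraction(5, 3), Fraction(2, 3), Fraction(2, 3), Fraction(-1, 3)]

n_states = len(charges)                    # each color triplet adds equally to c_G
sum_ncq2 = sum(3 * q**2 for q in charges)  # photon loops weigh as N_c * Q^2
print(n_states, sum_ncq2)                  # 4 states, sum N_c Q^2 = 34/3 ~ 11.3

With roughly 2.8 units of photon weight per state, the bidoublet yields the sizable c_γγ/c_G correlation shown in solid blue in Fig. 4.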
Such a composite singlet resonance is expected to couple strongly to the pNGBs of the model, in particular the Higgs and the longitudinal polarizations of the W and Z bosons. In terms of the four pNGBs π^a, the coupling reads L_{S,CH} ⊃ a_S (∂_µ π^a)² S/f, where f is the scale of the spontaneous breaking of the global symmetry in the composite sector, f ∼ 1 TeV, and a_S is an O(1) parameter [28]. In particular, the derivative coupling to the Higgs is given by c_{h∂} = c_V = a_S m_S/f. With this matching, and using Eq. (12), one obtains the decay width into the pNGBs (h, W, and Z). The strongest constraint from LHC Run-I resonance searches comes from the ZZ decay channel, Eq. (15). For c_G = 5 (10), as suggested in Fig. 4 top (bottom) for the bidoublet representation, this corresponds to B(S → pNGB) ≲ 16 (4)%. This implies that, for reasonable values of the parameters (i.e. assuming no extremely large contributions to c_γγ), a sizable total width can only be obtained via decays to other channels, such as tt̄ or invisible states. Therefore, one should expect to observe a signal in these channels soon if this scenario is indeed realized in Nature. In other words, the upper limit on Γ(S → ZZ)/Γ(S → γγ) from Fig. 2 corresponds to a_S m_S/f < 0.6 × 10⁻² c_γγ. The fact that the coupling to the pNGBs has to be suppressed with respect to the naive expectation a_S ∼ 1 puts this scenario in some tension as a natural interpretation of the excess.

A second doublet: the MSSM and beyond

Extra Higgs bosons below the TeV scale are naturally predicted in supersymmetric models. A prime example is the second doublet of the Minimal Supersymmetric SM (MSSM), on which we now focus. The mass matrix of the CP-even Higgs system of the MSSM contains three free parameters: two masses and one mixing angle or, equivalently, m_A, tan β, and the well-known top-stop radiative correction ∆_t. Identifying the 750 GeV resonance with the CP-even component of the heavier doublet, the masses, mixing, and couplings of all the Higgs states are determined as functions of tan β alone, which remains the only free parameter of the model. The mixing angle between the two doublets, in the basis where one of the states takes all of the vacuum expectation value, is then fixed (see e.g. [29]) in terms of m_H = 750 GeV and m_h = 125 GeV, the masses of the two physical states, with the radiative correction ∆_t determined as a function of the masses and tan β. The mixing is largest, δ ≈ 0.3, for tan β = 1, which is close to the edge of the future sensitivity of the high-luminosity LHC to modified Higgs couplings [30]. Neglecting loop effects due to new (supersymmetric) particles coupled to the Higgs bosons, the production cross section and branching ratios of H are also determined. We have already shown in Sec. 2 that with modified couplings to SM particles alone it is not possible to reproduce the diphoton excess. In this simple case, one finds the highest values of the γγ signal strength, at the level of only ∼ 10⁻² fb, for very low values of tan β, where the production cross section is also largest. Contributions from additional (supersymmetric) particles are required in order to further enhance the γγ rate. For low tan β the width of the 750 GeV state reaches 10-20 GeV, and Fig. 4 (bottom) shows that very large effective couplings to gluons and photons are needed in this case. Furthermore, since the branching fraction into tt̄ is close to 1, direct constraints from tt̄ resonance searches are also relevant, and require c_G ≲ 7 and therefore c_γγ ≳ 40. On the other hand, the width of H reaches its minimum, of around a GeV, for tan β ≈ 6-8. In this case, as can be seen in Fig. 4 (top), smaller values of c_G and c_γγ are needed. However, loops of top squarks can increase c_γγ only marginally [31]. B(H → γγ) can get a more significant contribution from light charginos [31], but still at a level which is not enough to reproduce the observed signal strength. It therefore looks difficult to accommodate the observed excess in the MSSM. A generic two-Higgs-doublet model of type II, on the other hand, with the addition of new light charged and colored states coupled to the Higgs bosons, can easily accommodate a diphoton signal compatible with the excess. In this case one simply has to satisfy all the constraints described in Sec. 4.2, and the c_G and c_γγ coefficients can be estimated as in Eq. (30).

Conclusion

After analyzing the very first data at 13 TeV collision energy, both the ATLAS and CMS collaborations have recently reported a tantalizing excess in the diphoton invariant mass spectrum around ∼ 750 GeV. When interpreting the excess as a scalar (or pseudo-scalar) resonance produced dominantly via gluon-gluon fusion, no tension with the 8 TeV analyses is found. On the contrary, as shown in Fig. 1, a slight excess observed at 8 TeV by CMS dovetails nicely with the recent excess, contributing to the combined signal strength of Eq. (5).
a slight excess observed at 8 TeV by CMS nicely dovetails with the recent one, contributing to the combined signal strength. We introduced, in Sec. 2, an effective parameterization of scalar and pseudo-scalar interactions with the SM fields, and computed the relevant production cross sections and decay widths in terms of the effective couplings. Assuming the production cross section to be dominated by gluon fusion, we showed that the top quark and W contributions in the loop are not sufficient to explain the excess, requiring new colored and charged particles to exist. In Sec. 3, we surveyed all potentially relevant resonance searches for a neutral scalar at the LHC Run-I, summarizing the limits on σ × B in Table 1. These, in turn, imply upper limits on the size of potential decay modes, as shown in Fig. 2. Working in the effective framework, we identified, in general terms, two phenomenologically distinct scenarios based on assumptions about the total decay width, to which we refer as the "loop-only" and "extra-width" scenarios. In the "loop-only" scenario, the main assumption is that the new resonance couples to the SM gauge bosons mainly at loop level, and thus the total decay width is in the MeV range. The excess, in this case, can easily be explained for O(1) effective couplings c_G and c_γγ, while remaining in agreement with all other data. In Sec. 5, we give two specific examples that match this scenario, namely: a model with a single vector-like quark generating the effective ggS and γγS couplings, and a setup in which the pseudoscalar singlet, along with the Higgs, arises as a pseudo-Nambu-Goldstone boson of some spontaneous symmetry breaking, in which case the required couplings are generated by a combination of Wess-Zumino-Witten terms and loops of composite fermions. The "extra-width" scenario assumes that there are additional tree-level decay channels that dominate over the loop-induced ones. We investigated the phenomenological implications specifying Γ_S = 1 or 20 GeV. In the latter case, employing the limits from Table 1, we argue that the resonance cannot dominantly decay to the SM gauge bosons, and identify the tt or monojet signatures at the LHC as a possible way out. This scenario can be realized in composite Higgs models (see Sec. 5.4), where large c_G and c_γγ couplings are obtained via interactions with composite resonances in large representations (or with large multiplicity) of the global symmetry of the strong sector. In the intermediate regime (Γ_S = 1 GeV), somewhat smaller photon and gluon couplings are required to fit the excess. However, also in this case strong constraints on the other couplings apply, making tt and invisible again the channels in which a sizable branching ratio is allowed. An example of a scenario of this kind could arise in a type-II two Higgs doublet model, like the MSSM. Here, knowing the mass of the heavier doublet leaves only one free parameter that determines the phenomenology at tree level. The total width of the heavier doublet is in the few GeV range for moderate values of tan β, but larger widths can also be attained. Even in the optimal case of smallest total width, we estimate that loop corrections from supersymmetric particles are too small to explain the observed excess, calling for an interpretation beyond the MSSM.
Even though it is still too early to draw definite conclusions about the existence of a new resonance, our analysis shows that both experiments consistently point to a sizable excess at an invariant mass of around 750 GeV. Moreover, such a resonance could fit well into many reasonable scenarios beyond the SM. In all cases, other light particles are required, and interesting signatures are predicted to show up during the rest of Run-II at the LHC.
Natural Motion for Energy Saving in Robotic and Mechatronic Systems

Energy saving in robotic and mechatronic systems is becoming an ever more important topic in both industry and academia. One strategy to reduce the energy consumption, especially for cyclic tasks, is exploiting natural motion. We define natural motion as the system response caused by the conversion of potential elastic energy into kinetic energy. This motion can be either a forced response assisted by a motor or a free response. The application of the natural motion concepts allows for energy saving in tasks characterized by repetitive or cyclic motion. This review paper proposes a classification of several approaches to natural motion, starting from the compliant elements and the actuators needed for its implementation. Then several approaches to natural motion are discussed based on the trajectory followed by the system, providing useful information to researchers dealing with natural motion.

Introduction

Industrial robotic and mechatronic systems are required to have high energy efficiency, especially when high-speed continuous operations and high-volume production are needed [1]. Operating a robot or a mechatronic system at high speed produces significant losses, as well as energy surpluses in the deceleration phases. These losses have repercussions on the amount of electric energy that is needed to operate the manufacturing system. Furthermore, increasing energy prices and environmental awareness encourage manufacturers to reduce power consumption. This is highlighted by the policy applied by the European Union, which aims to reduce overall energy consumption by up to 30% by 2030 [2]. Moreover, in recent years, the demand for industrial robots has accelerated due to the ongoing trend toward automation [3]. Therefore, their energy efficiency is becoming crucial, since manufacturers are incentivized to install eco-friendly solutions for plants and production systems. For these reasons, engineers and researchers have been motivated to investigate and develop novel strategies to increase energy efficiency in industrial robots and mechatronic systems. Several energy-saving methods for robotic and automatic systems can be found in the literature. In Reference [4], G. Carabin et al. present a classification of these methods, drawing a distinction between hardware, software and mixed approaches. In particular, hardware solutions include the implementation of new kinds of actuating systems, regenerative drives [5] and the design of lightweight manipulators [6][7][8]. The software approach is focused on time minimization, operations scheduling and trajectory optimization [9][10][11][12][13]. Mixed approaches rely upon the concurrent improvement of both hardware and software components of the automatic system. Among those, a particular method for enhancing energy efficiency is based on the concept of natural motion. The main drawback of series elastic actuators (SEA) is that the stiffness is fixed and cannot be changed during motion, thus limiting the ability of the compliance to adapt to different tasks. To overcome this problem, a novel class of actuators was proposed: the variable stiffness actuators (VSA). These actuators consist of a motor connected to the output link by a spring in series, whose stiffness is variable. Therefore, the stiffness can be properly controlled to reduce energy requirements during the execution of repetitive tasks, as explained in Reference [40].
Reviews on variable stiffness actuators can be found in References [41][42][43][44], whereas a comparison of several designs of VSA based on spring pretension is illustrated in Reference [45]. VSA were first introduced to decrease contact shocks, to allow for soft collisions in human-robot interaction [46,47] and to efficiently actuate legged locomotion systems [48,49]. Furthermore, they were employed to decrease energy consumption in cyclic operations of robotic systems. An example is given by the actuator with adjustable stiffness proposed by A. Jafari et al. [50][51][52].

Parallel Compliance Elements

Compliant elements can be connected in parallel to the main actuators, as shown in Figure 1b. Although some parallel elastic actuators can be found in the literature [53][54][55], it is not necessary to replace the original actuators to install parallel springs. For example, M. Iwamura et al. equipped a serial robot with two linear springs placed at the joints, between neighboring links, by means of a special spring holder [56]. In this way, they overcome the difficulties in adjusting the stiffness value and the mounting positions of torsional springs. The research in this field mainly addresses the development of mechanisms to realize variable stiffness springs [57][58][59] and non-linear springs from off-the-shelf components [60][61][62][63]. Indeed, non-linear stiffness allows a larger energy saving in robots, since actuator torques and end-effector trajectories are always related by a non-linear behavior, as demonstrated in References [58,64,65]. A mechanism to change the spring stiffness is proposed by M. Uemura et al. in Reference [57]. It consists of a sliding screw with a self-lock function and a linear spring. The self-lock function guarantees that a constant stiffness value is kept when the motor of the variable elastic mechanism does not exert a torque. The linear spring is attached by one end to the actuated link of the system and by the other to a point on the lead screw mechanism. By changing the position along the lead screw, the moment arm of the elastic force exerted by the linear spring changes, and hence also the equivalent torsional stiffness. R. Nasiri et al. realize a parallel variable torsional spring by means of two linear springs and two worm-gear motors [58]. One end of each linear spring is connected to a point on the actuated link, whereas the other end is connected to a worm-gear motor. By independently controlling the strain-length of the springs with the two motors, an arbitrary compliance profile at the actuated joint can be obtained. A mechanism capable of realizing a non-variable, non-linear torsional spring is presented by N. Schmit and M. Okada [60,61]. The mechanism consists of a linear spring connected to a cable wound around a non-circular spool. A non-circular spool (or variable radius drum) is characterized by the variation of the spool radius along its profile [66]. In References [60,61], the spring mechanism is attached to each actuated joint, with respect to which the linear spring behaves as a non-linear rotational spring with a prescribed torque profile given by the shape of the spool (Figure 2). A similar design is also proposed by B. Kim and A.D. Deshpande [67], who additionally design an antagonistic spring configuration for bilateral torque generation.
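The torque-shaping role of the spool geometry can be made concrete with a short numerical sketch (our own illustration; the profile, stiffness and pretension values are hypothetical and not taken from References [60,61]): the cable payout is the integral of the spool radius over the joint angle, so the joint torque k(x + x0)·r(θ) inherits its non-linearity purely from the radius profile r(θ).

```python
import numpy as np

# Equivalent torsional torque of a linear spring whose cable winds on a
# non-circular spool of radius r(theta) attached to the joint.
k = 500.0      # linear spring stiffness [N/m] (hypothetical value)
x0 = 0.02      # spring pretension [m] (hypothetical value)

def spool_radius(theta):
    """Hypothetical non-circular spool profile [m]."""
    return 0.03 + 0.015 * np.cos(theta)

theta = np.linspace(0.0, np.pi, 200)              # joint angle [rad]
r = spool_radius(theta)
# Cable payout x(theta) by trapezoidal integration of the radius.
x = np.concatenate(([0.0], np.cumsum(0.5 * (r[1:] + r[:-1]) * np.diff(theta))))
tau = k * (x + x0) * r                            # joint torque [N*m]

# tau(theta) is non-linear even though the spring itself is linear:
# the spool shape alone sets the torque-angle profile.
print(tau[:5])
```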
H.J. Bidgoly et al. [63] realize a non-linear torsional spring with an arbitrary stiffness profile by combining a linear spring and a non-linear transmission mechanism. The latter consists of a non-circular cam connected to the actuated joint and a roller, which moves along the outer circumference of the cam. The stretched linear spring is hinged to the centers of the cam and the roller. The desired torque-angle profile is obtained by properly designing the shape of the cam. Another system to attach springs in parallel with the actuator is proposed by M. Plooij and M. Wisse [62]. A parallel spring mechanism converts the linear stiffness of a linear spring into an equivalent non-linear torsional stiffness. The mechanism consists of two pulleys of different size connected through a timing belt. The larger pulley is attached to the actuated link. The two ends of the spring are connected to two points placed on the outer circumferences of the smaller and of the larger pulley, respectively. In this way, the spring is non-linearly stretched with respect to the rotation of the link. One peculiarity of this mechanism is that it has two different configurations in which the elastic energy of the spring is null. This means that, if the mechanism is properly designed for a specific pick-and-place operation, the actuators do not have to counteract the spring during the task. It is worth noting that there are two mounting possibilities for parallel springs, that is, they can be connected to just one joint, or they can span two joints. In the first arrangement, which is the most widely adopted, springs are called mono-articular, whereas in the second they are called bi-articular (Figure 3). Bi-articular springs take inspiration from biological studies [68]: bi-articular muscles actuate human limbs as well as the limbs of birds, reptiles and insects. Non-linear bi-articular springs are designed, for example, by B. Kim and A.D. Deshpande [67] by means of pulley-cable mechanisms and antagonist linear springs. Bi-articular springs are mostly employed in the realization of walking robots [69][70][71]. The effectiveness of such springs in serial manipulators performing pick-and-place operations is investigated by G. Lu et al. [72] and H.J. Bidgoly et al. [73], with different conclusions. In the case of a variable linear bi-articular spring added on a SCARA robot, G. Lu et al. [72] conclude that such a spring alone cannot effectively save energy. Conversely, H.J. Bidgoly et al. [73] assert that bi-articular springs contribute more than mono-articular springs to the minimization of the actuation cost, in the case of a redundant 4-DOF serial manipulator with non-linear mono- and bi-articular springs.

Serial and Parallel Compliance Elements

In the context of natural motion, some actuators are developed combining both serial and parallel compliant elements. In these systems, the serial compliance is employed for impact mitigation, whereas the parallel one is adopted for efficient energy storage. In Reference [74], a mixed series-parallel approach is presented by G. Mathijssen et al. Multiple series elastic actuation branches are placed in parallel, each engaged depending on the torque requirements. This design leads to a significant torque effort reduction, as well as an increased output torque range, compared to traditional stiff or SEA configurations. This actuation design is adopted by N.G. Tsagarakis et al. [75] and by W. Roozing et al. [76]. In the latter works, a novel asymmetric actuation scheme is presented, consisting of two actuation branches that transfer their power to a single joint through two compliant elements with different stiffness and storage capacity properties.
The serial branch is used for impact absorption and is connected to the main actuator, whereas the parallel branch is adopted for its large potential energy storage capability in cyclic tasks.

Discussion of Elastic Elements Configurations

Both serial and parallel configurations can be implemented with springs of variable or non-variable stiffness, as well as with linear or non-linear behavior. Variable compliance allows the online tuning of the spring parameters. Therefore, control systems that act on the stiffness to perform varying operations and to change the system dynamics can be adopted, as in References [16,58,77,78]. As a drawback, variable compliance springs need a more complex actuator design and sensory feedback. On the contrary, configurations based on non-variable stiffness are easier to implement, and the springs can be designed and tuned offline, but they are suitable only for non-varying tasks and operations. The serial configuration (adopted, for example, in References [42,50,79]) provides the actuator with a compliance that can be employed to decrease contact shocks and to reduce force peaks due to impacts, for example in human-robot interaction [46]. On the other hand, the parallel configuration [65,[80][81][82] results in a simpler mechanism and mathematical formulation [78]. Parallel compliance does not enlarge the configuration space of the robotic system and has a well-posed quadratic cost function for energy consumption minimization with respect to the compliance coefficients [83]. Moreover, the serial arrangement of springs and motors limits the operational speed due to uncontrolled robot deflection when performing high-speed tasks. Comparisons between serial and parallel configurations can be found in References [84][85][86][87][88]. T. Verstraten et al., in Reference [87], provide a comparison between stiff actuators, parallel elastic actuators and serial elastic actuators in terms of power, as well as of mechanical and electrical energy consumption. As a test case, a sinusoidal motion is imposed on a pendulum load. By means of simulations they demonstrate that, if the stiffness of the elastic actuators is properly tuned, it is possible to reduce the energy consumption by up to 78% with serial elastic actuators and by up to 20% with parallel ones, compared to a rigid actuator. Although the results of this study demonstrate that a serial spring arrangement outperforms the parallel one from an energetic point of view, the majority of the works dealing with natural motion adopt parallel springs because of their ease of installation and control. Evidence of this is provided by Table 1, where the spring installation configurations adopted in the reviewed papers are highlighted.

Design of Desired Natural Dynamics

The concept of natural motion can be exploited to perform tasks that are compatible with the natural dynamics of a mechanical system, that is, cyclic tasks. For this reason, such a concept is mainly adopted for the locomotion of legged robots or, in the industrial field, for pick-and-place and palletizing tasks. A pick-and-place (or palletizing) task has strict requirements on the positions where objects are located and on the corresponding velocities. Specifications on the task time and on the path can often be relaxed. Conversely, walking robots do not need strict requirements for the gait.
In this case, if the robotic system is already equipped with elastic elements, the desired task (the robot gait) can be adapted to the system characteristics, that is, the task can be performed by adopting the trajectory naturally generated by the system. Such an approach can be referred to as natural dynamic exploitation [96]. Generally speaking, when there are requirements on the task to execute, it is always necessary to modify the system to fulfill them. The adaptation of the system to perform a given periodic task can be indicated as natural dynamic modification [73]. As discussed in Section 1, the modifications of a robotic system to exploit the natural motion rely on spring additions. Therefore, the problem can be reformulated in this way: how should the spring parameters be determined in order to fulfill the task requirements by exploiting the system dynamics? Such a question does not have a unique and straightforward answer. Requirements on the task are typically given in terms of positions, velocities and time. Dynamic models of robotic systems depend non-linearly on the first two. Additionally, finding an analytical solution of the motion equations is very difficult, if not impossible, and hence the time requirement cannot be set either, unless simplifications (such as linearization) are adopted or assumptions on the system response are made. Therefore, since the optimal spring depends on the trajectory and vice versa, we classify the methods concerning natural motion according to the trajectory followed by the system:
• Defined trajectory. A feasible trajectory for the desired task (typically harmonic) is imposed and the spring parameters are optimized to minimize a given objective function related to energy consumption;
• Optimized trajectory. The spring parameters and the system trajectories are concurrently optimized thanks to a parametric representation of both or by adopting optimal control theory;
• Free-vibration response. The trajectory is not imposed. The optimal spring parameters are identified so that the free response of the system fulfills the task requirements. Such a result can be obtained by means of linearized dynamic models or multibody simulators;
• Periodic trajectory learning. The robotic system is not modified. The forced response at resonance of the system is learned by means of proper tools and used as reference trajectory.
These four categories are analyzed and discussed in the following.

Defined Trajectory

Most of the works dealing with natural motion adopt a fixed pre-defined trajectory, which usually consists of a harmonic motion law. In the case of a defined trajectory, the correct spring parameters to be added to the system can be determined with different strategies: control-based methods, graphical approaches or optimization strategies. These three approaches are discussed in the following.
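Before examining them, the mechanism that all three strategies exploit can be seen in a minimal 1-DOF computation (our own sketch with hypothetical values, not taken from the cited works): for a harmonic reference on a linear joint, a parallel spring centered at the trajectory midpoint and tuned to the trajectory frequency makes the required actuator torque vanish.

```python
import numpy as np

# 1-DOF joint with inertia J and parallel spring k centered at q0, following
# the harmonic reference q(t) = qm + A*sin(w*t). The required torque is
#   tau(t) = J*qdd(t) + k*(q(t) - q0),
# which vanishes identically when q0 = qm and k = J*w**2 (resonant tuning).
J, A, w, qm = 0.5, 0.8, 4.0, 0.3   # hypothetical inertia, amplitude, frequency, midpoint
t = np.linspace(0.0, 2*np.pi/w, 500)
q = qm + A*np.sin(w*t)
qdd = -A*w**2*np.sin(w*t)

def rms_torque(k, q0):
    return np.sqrt(np.mean((J*qdd + k*(q - q0))**2))

print(rms_torque(0.0, qm))        # no spring: torque is purely inertial
print(rms_torque(J*w**2, qm))     # tuned spring: torque ~ 0
```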
The authors of Reference [77] employ a control method based on linear stiffness adjustment to reduce the actuator torque needed to follow a desired harmonic trajectory with a serial link system. The proposed adjustment law for the parallel spring stiffness depends on the angle and angular velocity tracking errors. Such a law is added to the feedback control system: when the tracking errors become smaller thanks to the stiffness adjustment, the feedback terms become smaller as well. In this manner, the actuator torques are reduced. The variability of the linear stiffness is exploited only to tune the system to different desired frequencies of the harmonic trajectory. In particular, the stiffness varies just at the beginning of the motion, during the tuning phase, before converging to a constant value. H. Goya et al., in Reference [16], experimentally validated the stiffness adaptation control on a 3-DOF SCARA robot equipped with two adjustable parallel springs on the two revolute joints. The desired start and end points are defined in task-oriented coordinates, and the corresponding points in joint coordinates are obtained through inverse kinematics. The start and end points in the joint space are connected through a harmonic trajectory with a given period and an amplitude equal to the distance between the start (end) point and the equilibrium point. The equilibrium position of the elastic element is set at the middle point between the start and the end point in joint coordinates. The springs are tuned to perform the given motion between two points, as discussed in Reference [77]. An alternative method to perform multi-point trajectories and concurrently exploit the adaptive stiffness control to minimize the actuator efforts is proposed by K. Matsusaka et al. in Reference [93]. In this work, it is assumed that the elasticity can be instantaneously changed when the spring is at its equilibrium. In this way, the amplitude of the oscillation can be modified without affecting the potential energy stored by the elastic element (a minimal simulation of this idea is sketched below). With respect to the strategy proposed by H. Goya et al. based on the feedback control method [16], this approach allows the number of pick-and-place points to be increased and reduces the energy consumption by 39%, as proved experimentally. Furthermore, M. Uemura et al. prove in Reference [98] the effectiveness of the stiffness adaptation controller [77] on a 1-DOF system. Multi-frequency harmonic trajectories are tracked, while minimizing the norm of the required torque. Variable elastic elements and the control proposed in Reference [77] are used by K. Matsusaka et al. in Reference [94] to improve the energy efficiency of a 2-DOF robot in a palletizing task. For such a task, an obstacle-avoidance trajectory is proposed, consisting in moving both joints with harmonic trajectories having an angular frequency ratio of 2:1. Additionally, to increase the energy savings, the authors suggest providing the system not only with variable stiffness springs but also with linear constant springs. In this way, the gravitational force can be counteracted. An improvement of the stiffness adaptation controller in terms of energy saving is obtained in Reference [97] by combining such a controller with a delayed feedback control. However, although the new approach allows the system to be moved almost without any actuation force, the delayed feedback control modifies the desired trajectory. Therefore, the resulting motion can be significantly different from the desired one. A solution to improve the energy savings of the adaptive stiffness control while achieving a good trajectory tracking is proposed in Reference [99] and experimentally validated in Reference [78]. The new control method combines stiffness adaptation and iterative learning control and can be applied to multi-joint robots. The proposed control optimizes the stiffness of elastic elements installed in parallel at each joint in order to save energy and to track a desired multi-frequency harmonic trajectory.
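The energy argument behind the stiffness switch of Reference [93] can be checked with a minimal 1-DOF simulation (our own sketch with hypothetical values, not the authors' implementation): at the spring equilibrium the elastic energy is zero, so replacing k1 by k2 there changes the oscillation amplitude without any energetic cost.

```python
import numpy as np
from scipy.integrate import solve_ivp

J, q0 = 0.5, 0.0                  # hypothetical inertia and spring equilibrium
k1, k2 = 8.0, 32.0                # stiffness before/after the switch

def free_response(k, y0, t_end):
    f = lambda t, y: [y[1], -k/J*(y[0] - q0)]
    return solve_ivp(f, (0.0, t_end), y0, max_step=1e-3)

# Oscillate with k1 for a quarter period, reaching the equilibrium at peak speed.
T1q = 0.25 * 2*np.pi/np.sqrt(k1/J)
s1 = free_response(k1, [0.5, 0.0], T1q)
y_switch = s1.y[:, -1]            # q ~ q0, maximum velocity: zero spring energy

# Switch to k2 exactly at the crossing; the new amplitude is v / sqrt(k2/J).
s2 = free_response(k2, list(y_switch), 2.0)
print("amplitude before:", 0.5)
print("amplitude after :", np.max(np.abs(s2.y[0])))
print("predicted       :", abs(y_switch[1])/np.sqrt(k2/J))
```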
An online adaptation method suitable for non-linear compliance acting in parallel with the actuators is presented in Reference [58]. The method aims at minimizing the actuation forces of multi-joint robots performing given cyclic tasks. Such a method adds an adaptation rule in parallel to the closed-loop control in order to minimize the squared actuation forces. The parameters used to minimize the force are the adaptable coefficients that define the compliance. The compliance of each actuated joint is defined as a multi-basis non-linear compliance, that is, as a sum of products between a coefficient and a smooth basis function defined over the joint position. In particular, the compliance structure is defined by the basis functions, which are decided a priori and fixed, whereas the coefficients are adaptable and are used to minimize the cost function. By choosing a proper set of basis functions (e.g., polynomials), the compliance force acts as a general function approximator. Hence, the elastic force has more flexibility to compensate the actuation torques, which are typically in a non-linear relationship with the joint angles. Like the other control-based methods discussed up to now, this method does not require any knowledge of the controlled system or of the dynamic equations of the robot. A recursive algorithm to adjust the configuration of a variable spring actuator for a given trajectory is proposed in Reference [40]. Unlike the previous methods to tune variable stiffness, such an algorithm does not optimize the mechanical output. It directly reduces the input electrical energy requirements of the system during the execution of repetitive tasks, without requiring a precise knowledge of the controlled system. It is based on the gradient descent optimization algorithm [107]. Basically, it expresses the objective function (the total electrical input energy of the actuator) as a convex function of the design parameters (i.e., the spring stiffness). An iterative process finds the values of the configuration parameters that minimize the objective function in a repetitive task performed by the actuator. The inputs of the algorithm are real-time measurements or estimations of the objective function, the physical limits of the design parameters and the periodicity of the task. All the control-based approaches discussed up to now do not require knowledge of the system dynamic model. Such an aspect not only simplifies their implementation but also makes these approaches more robust to typically unmodeled physical phenomena such as friction or noise, as experimentally demonstrated in References [40,99]. In Reference [99], the validity of the method is proved also in the presence of static friction, Coulomb friction and backlash. In Reference [40], the robustness of the algorithm to variations of the objective function due to changes in the operating conditions, perturbations and signal noise is demonstrated. On the other hand, although they exploit variable stiffness springs, all these methods are intended for repetitive tasks with a relatively low rate of change of the stiffness. Therefore, the convergence to the optimal stiffness value is not instantaneous but can last several cycles.
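A schematic version of such a measurement-driven tuning loop is sketched below (our own simplification in the spirit of Reference [40], not the algorithm itself; the energy model and gains are hypothetical): the per-cycle input energy is treated as a black box of the stiffness, its gradient is estimated by finite differences across consecutive cycles, and the stiffness is updated by gradient descent within its physical limits.

```python
import numpy as np

def measured_cycle_energy(k):
    """Stand-in for a real-time energy measurement (hypothetical convex shape)."""
    return 3.0 + 0.02*(k - 45.0)**2 + 0.05*np.random.randn()

k, k_min, k_max = 10.0, 5.0, 80.0   # initial stiffness and its physical limits
step, dk = 0.5, 0.5
for cycle in range(200):
    # Finite-difference gradient estimate from two "measured" cycles.
    grad = (measured_cycle_energy(k + dk) - measured_cycle_energy(k - dk)) / (2*dk)
    k = np.clip(k - step*grad, k_min, k_max)

print("converged stiffness ~", k)    # approaches the energy-optimal value (45)
```

As in the cited approach, convergence takes many cycles because each gradient estimate costs at least one executed repetition of the task.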
A method proposed by W. Schiehlen and N. Guse [64] takes advantage of the inverse dynamic model and considers constant linear springs mounted in parallel with the actuators. A control based on limit cycles is proposed to reduce the energy consumption of robotic systems performing periodic tasks. Given the period of the task and the desired trajectory of the system, the latter is then adapted to best match the limit cycle of the mechanical system. The definition of the desired trajectory also includes the identification of the boundary conditions for the state of the motion (i.e., position and velocity at the start and end points). By adopting a modified shooting method [108], the values of the stiffness and of the neutral position of the parallel spring are adjusted to meet the boundary conditions, so that the system features a limit cycle close to the desired trajectory. A low-energy control is then sufficient to force the system to follow the desired trajectory. With reference to a 2-DOF assembly robot performing a harmonic motion law, the limit cycle of the system can be correctly adjusted by properly choosing the coefficients of the linear springs. In the case of an arbitrary, non-harmonic motion law, such as a piece-wise constant acceleration trajectory, non-linear springs are needed to correctly adjust the limit cycle to the desired trajectory. In Reference [91], R.B. Hill et al. extend the method proposed by W. Schiehlen and N. Guse to multiple-point trajectories. In this case, variable stiffness springs are employed. The spring parameters can be tuned to force the limit cycle to converge to the desired pseudo-periodic trajectory between every two consecutive points of the multiple-point motion. G. Lu et al. propose a control method starting from a linearized model of a 2-DOF planar manipulator lying in the horizontal plane [72]. The purpose is to save energy while performing a given harmonic trajectory. The controller adaptively tunes the stiffness of the parallel springs, while compensating for viscosity effects. The adaptive control law for the stiffness is proportional to the errors between the actual and the desired joint velocities and to the difference between the actual joint position and its equilibrium one. The same authors propose an inertia-adaptive control for energy saving in Reference [92]. This work starts from a 1-DOF system already equipped with a constant linear spring and adds a movable mass to tune and adapt the eigenfrequency of the system (a numerical example is sketched below). In this way, the frequency of the desired harmonic trajectory can be matched. Another control-based approach that employs stiffness variability for energy efficiency during a task is proposed by A. Velasco et al. in Reference [79]. The method aims at determining the optimal stiffness profile of serial springs that minimizes the energy consumption of the mechanical system performing a given task. Since changing the actuator stiffness has an energetic cost, the authors add such a cost to the objective function (the squared mechanical torque). By exploiting the cost function and the motion equations of the actuated system, an analytical solution for the optimal stiffness as a function of the desired trajectory is found. In particular, the total time is subdivided into intervals and an optimal stiffness profile, which can be constant or variable, is determined for each sub-interval. Simulations with square-wave trajectories of different amplitudes and frequencies show that the optimal spring profile (i.e., constant or variable) depends on the reference trajectory. Constant stiffness is preferred for low-amplitude, low-frequency trajectories, whereas variable stiffness is suggested for high-frequency motion laws.
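The eigenfrequency-tuning idea of Reference [92] reduces to elementary algebra in the 1-DOF case, as the following worked sketch shows (all values are hypothetical): with a fixed spring k, a base inertia J0 and a movable mass m placed at radius r, the eigenfrequency sqrt(k/(J0 + m·r²)) can be matched to the desired trajectory frequency by solving for r.

```python
import numpy as np

k, J0, m = 50.0, 0.2, 1.5          # spring, base inertia, movable mass (hypothetical)
w_d = 9.0                           # desired harmonic trajectory frequency [rad/s]

# Solve k / (J0 + m*r**2) = w_d**2 for the mass position r.
J_required = k / w_d**2
r = np.sqrt((J_required - J0) / m)
print("required inertia:", J_required, "-> mass position r =", r)
print("check eigenfrequency:", np.sqrt(k / (J0 + m*r**2)))   # equals w_d
```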
Methods Based on Force-Displacement Graphs

W. Schiehlen and N. Guse propose in Reference [95] an alternative method to that in Reference [64] for determining the spring design parameters that minimize the work done by the actuators. The required control forces or torques are computed by means of inverse dynamics and the desired trajectory. Such forces are then expressed as functions of the positions of all the bodies and fitted by means of polynomials. If a linear function is used as curve fit, which represents a linear spring, the spring coefficient is the slope of this curve fit and the axis intercept corresponds to the spring fastening position. The use of curve fitting is equivalent to a minimization task with inequality constraints, since the stiffness coefficient must always be positive. The exploitation of the force-displacement graph is suggested also by M. Khoramshahi et al. [65]. In particular, they compute the absolute work along the force-displacement graph and derive the value of the parallel spring stiffness that minimizes it. Such a strategy results in a non-linear stiffness profile capable of improving the energy efficiency of the system. Indeed, with the addition of the parallel non-linear compliance, the resulting force-displacement graph is aligned around the horizontal axis (i.e., null force).
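In the linear-spring case, the curve-fitting step is a plain least-squares problem, as the following 1-DOF sketch shows (our own idealized numbers; gravity and friction are neglected, so the fit here is exact):

```python
import numpy as np

# Inverse-dynamics torque along a harmonic trajectory, then a linear fit of
# torque against joint position: the slope gives the parallel spring stiffness,
# the intercept fixes the spring equilibrium (fastening) position.
J, A, w = 0.4, 0.6, 5.0
t = np.linspace(0.0, 2*np.pi/w, 300)
q = 0.2 + A*np.sin(w*t)
tau = J * (-A*w**2*np.sin(w*t))           # required torque (no gravity term)

slope, intercept = np.polyfit(q, tau, 1)  # tau ~ slope*q + intercept
k = -slope                                # the spring supplies -k*(q - q0)
q0 = intercept / k
print("fitted stiffness k =", k, " equilibrium q0 =", q0)  # k = J*w**2, q0 = 0.2
```

In a real multibody case the fit is only approximate and, as noted above, the positivity of k turns the fit into a constrained minimization.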
Methods Based on Offline Optimization

An analytical method for the offline computation of the optimal parallel compliant elements and of the frequency of the reference trajectories for serial manipulators performing cyclic tasks is presented by M. Shushtari et al. [96]. A representation of the compliant elements similar to that used by R. Nasiri et al. [58] is adopted. In particular, they take into account a multi-basis representation consisting in the product between a compliance coefficient and a basis function dependent on the joint coordinates. Such a representation leads to a cost function (the squared mechanical torques multiplied by a weighting matrix) that is quadratic with respect to the compliance coefficients and quartic with respect to the task frequency. This function can be minimized analytically. In order to address the multi-task case, M. Shushtari et al. propose a weighted sum of the original cost functions defined over the tasks. P. Boscariol et al. [89] set up a constrained optimization problem to find the optimal stiffness value and placement of a linear spring to be added in parallel with the joint actuator of a 1-DOF system to reduce the peak torque requirement (Figure 4). The authors also show the effects of the joint trajectory (cycloidal motion trajectory and 5th-order polynomial with null initial and final acceleration) on the optimization results. In Reference [90], G. Carabin et al. propose a methodology to reduce the electrical energy consumption of a Delta-2 robot by concurrently exploiting energy recuperation drive axles and torsional springs mounted in parallel to the actuators. Figure 5 reports the kinematic diagram and the electrical schematic of the Delta-2 robot. An optimization-based design method determines the stiffness of the two torsional springs and their equilibrium positions. The energy consumption in cyclic pick-and-place operations with a predefined trajectory (double-S speed profile in the workspace) is minimized. A. Velasco et al., in Reference [86], determine the optimal stiffness value and spring preload such that a given cost functional is minimized. In particular, they consider the influence of different aspects on the optimal values: the spring placement (i.e., serial or parallel to the actuator); the parameters of a given harmonic trajectory (amplitude and frequency); the cost function (squared mechanical torque or squared mechanical power). From such a study, it results that, if parallel springs are employed, the optimal values for spring stiffness and preload can be found analytically regardless of the cost function employed. Conversely, for serial springs an analytical solution exists only if the cost function is the squared torque. As far as the effects of the trajectory parameters on the energy savings are concerned, the authors show that, if serial springs are employed, the savings also depend on the cost function adopted. On the other hand, parallel springs exhibit the same trends independently of the cost function. In particular, parallel springs turn out to be more convenient for small amplitudes at low frequency or large amplitudes at high frequency. The use of serial springs is more convenient for small amplitudes at high frequency or for large amplitudes at low frequency, if the mechanical torque is considered, or for high frequencies independently of the amplitude, if the mechanical power is considered.
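For the simplest 1-DOF linear case, the analytical solution for a parallel spring can be recovered with a short computation (our own illustration of the mechanism, not the multi-DOF derivation of Reference [86]):

```latex
% 1-DOF joint with parallel spring: J\ddot{q} = \tau - k\,(q - q_0),
% harmonic task q(t) = q_m + A\sin(\omega t) over one period T = 2\pi/\omega.
\tau(t) = J\ddot{q} + k\,(q - q_0)
        = (k - J\omega^{2})\,A\sin(\omega t) + k\,(q_m - q_0)
\qquad\Longrightarrow\qquad
\int_{0}^{T}\tau^{2}\,\mathrm{d}t
  = \frac{T}{2}\,(k - J\omega^{2})^{2}A^{2} + T\,k^{2}\,(q_m - q_0)^{2}.
```

Both terms are non-negative, so the squared-torque cost is minimized by q_0 = q_m and k = Jω², independently of the amplitude A: the preload sits at the trajectory midpoint and the spring is tuned to resonance.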
Optimized Trajectory

Another strategy to improve the energy efficiency of the system consists of the concurrent optimization of the trajectory and of the spring stiffness. The two most common methodologies to reach such a goal are methods based on optimal control theory, which find an optimal control law that minimizes a given cost function, and methods that parameterize the trajectory by means of basis functions.

Methods Based on Optimal Control Theory

W. Schiehlen and M. Iwamura [101] simultaneously optimize the constant linear spring stiffness (mounted in parallel with the actuator) and the joint trajectories with respect to the energy consumption, taking advantage of optimal control theory. Following their approach, the time to execute the task is not given but is optimized as well. Indeed, they derive a relationship between the consumed energy, the operating time and the optimal trajectory, finding a condition for the operating time to be optimal. Then, the optimal design of the springs is derived according to such a time. The aforementioned relationship, as well as all the optimal solutions, are derived starting from linearized equations of motion. An analytical solution is provided for the optimal trajectory (which results in harmonic motions in modal coordinates), the operating time and the spring equilibrium position (corresponding to the middle point between the initial and final desired points). Conversely, the optimal stiffness is found numerically by means of an iterative method. Although the authors verify the correctness of the proposed analytical solution by comparing it with the numerical one based on the non-linear dynamics, their method is valid as long as the linearization assumptions are respected, that is, as long as the centrifugal and Coriolis forces are negligible. This means that such a method is suitable if fairly strong springs or long operating times are adopted. A manipulator prototype, respecting the validity assumptions, is used by M. Iwamura et al. [100] for the experimental validation of this approach. The method proposed by W. Schiehlen and M. Iwamura [101] is extended by the same authors to systems working under gravity in [56]. Optimal control theory is exploited by C. Mirz et al. [82] to reduce the energy consumption of parallel kinematic manipulators by means of linear torsional springs mounted in parallel to the motors. The power consumption of the drives is selected as the cost function to calculate the optimal trajectory to travel between given initial and final positions in a fixed cycle time with minimum energy. By applying Pontryagin's minimum principle, the cost function is transformed into a two-point boundary value problem, solved numerically by means of the finite difference method. Although the characteristics of the elastic elements are determined for one specific initial and final position and a given cycle time, simulations showed satisfactory results in terms of energy savings also when performing pick-and-place operations between 200 different positions arranged in a square around the initial and final points.

Methods Based on Parametrization through Basis Functions

N. Schmit and M. Okada [102] minimize the actuator torques by simultaneously designing the robot trajectory and the torque profiles of the non-linear parallel springs located at each joint. The desired time interval to perform the task is divided into a certain number of equal sub-intervals, based on which a third-degree Hermite interpolation is used to parameterize both the joint trajectory and the spring torque profiles. By expressing the spring torques as functions of the trajectory, the positions and velocities of each joint at the nodes become the only design parameters. The optimal joint trajectory is found numerically by minimizing a cost function composed of three terms: a term evaluating the actuator torques, a term evaluating the improvement due to the contribution of the non-linear springs, and a term that weights the non-linearity of the profiles to guarantee their technical feasibility. Once the optimal joint trajectory is found, the optimal spring is designed as a function of the optimal joint trajectory with a closed-form solution. However, the resulting springs may exhibit negative stiffness. Such an issue is overcome in Reference [61], where constraints imposing positive stiffness at given joint coordinates are added to the optimization problem. The use of polynomials to parameterize the joint trajectory and the spring torque profiles is suggested by H.J. Bidgoly et al. in Reference [73]. In contrast to Reference [102], the authors adopt one polynomial (with a degree to be determined by the designer) for the whole time interval. Also in this case, the optimal springs are computed analytically, whereas the joint trajectories are optimized numerically: the spring torque profiles are polynomial functions of the joint trajectories, which, in turn, are polynomial functions of time. The optimal trajectories are found by solving a constrained optimization problem whose cost function takes into account the actuator torques and the realization complexity of the non-linear springs. The possibility of obtaining an optimal trajectory, in the sense that it is very close to the robot's natural behavior, is increased by the assumption that the system is redundant with respect to the task, so that infinitely many inverse kinematics solutions are possible. However, although the authors show that very good results can be obtained in terms of actuation minimization with their methodology, they do not provide evidence that kinematic redundancy has advantages over the non-redundant case.
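The parameterization step can be sketched in a few lines (our own illustration with arbitrary node values, not the optimized ones of Reference [102]); scipy's CubicHermiteSpline plays the role of the third-degree Hermite interpolation, so node positions and velocities are the only free design parameters an optimizer would adjust:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

T, n_nodes = 2.0, 5
t_nodes = np.linspace(0.0, T, n_nodes)
q_nodes = np.array([0.0, 0.35, 0.8, 0.95, 1.0])   # node positions (design variables)
v_nodes = np.array([0.0, 0.60, 0.7, 0.30, 0.0])   # node velocities (design variables)

traj = CubicHermiteSpline(t_nodes, q_nodes, v_nodes)
t = np.linspace(0.0, T, 400)
q, qd, qdd = traj(t), traj.derivative()(t), traj.derivative(2)(t)

J = 0.4                                            # hypothetical joint inertia
tau_inertial = J*qdd                               # torque term entering the cost
print(q[0], q[-1], qd[0], qd[-1])                  # boundary conditions hold by construction
```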
R.B. Hill et al., in References [80,91], propose a method to increase the energy efficiency of parallel robots by concurrently optimizing the joint trajectory and the control law of variable springs mounted in parallel with the system actuators. Figure 6 shows the adopted five-bar mechanism performing a pick-and-place operation and the power transmission system of the variable stiffness springs. The authors do not focus on the spring profiles but on the joint trajectories of the spring actuators, so that a classical position controller can be used without a feedback measurement of the stiffness. Similar to Reference [102], the total time to perform the task is subdivided into a certain number of equal sub-intervals, defined by the number of intermediate points (via points) chosen between the initial and final positions for both the robot joints and the spring motors. The via points are interconnected through an expression defined by means of four polynomials, each one a function of the position, velocity and acceleration of the considered link, and of time, respectively. The explicit form of each polynomial is obtained by means of a heuristic approach: different functions are experimentally tested until the best polynomial form in terms of energy consumed during the motion is obtained. The optimal trajectory is defined by looking for the positions, velocities and accelerations of the intermediate points of both the robot joints and the spring motors that minimize the energy losses of the entire actuation chain (robot joint motors and variable stiffness motors). The main advantage of the method proposed by R.B. Hill et al. in References [80,91] over the others discussed in this section is the possibility of planning and optimizing multiple-point trajectories.

Free-Vibration Response

In the context of natural motion, a few methods exploit the free-vibration response of the robotic system for performing a desired task. N. Kashiri et al. [104] propose a method to exploit the natural dynamics of serial robots driven by adjustable compliant actuators mounted in series. A feedback linearization control is used to linearize the system dynamics and to generate modally decoupled limit cycles. Having decoupled the system dynamics, it is possible to analytically write the response of the closed-loop system and to set its natural frequencies to the target values by properly tuning the compliant elements. The control scheme generates a smooth reference link position to excite an individual mode, as well as combinations of modes, allowing for the desired periodic motion with a minimal energy expenditure. The free-vibration response is also exploited in Reference [103] and, for a Delta robot (Figure 7), in Reference [81] to move the system between two given points for a pick-and-place operation in a prescribed time. To reach such a result, a set of springs with optimized parameters (stiffness and equilibrium positions) is added in parallel to the motors. The optimal spring parameters are found by solving the system direct dynamics with the aid of a multibody simulator. By imposing the initial desired position of the system (pick position) and taking a first guess for the spring parameters, the simulator computes the system positions and velocities after the task time, which are compared to the desired ones (place position and velocity). Iterative simulations are carried out until small enough errors are obtained. Ideally, the resulting system does not require any actuation torque to perform the motion. However, to stop the system at the pick and place positions, actuators counteracting the springs or mechanical brakes are needed.
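A minimal 1-DOF analogue of this iterative procedure can be written directly (our own reduction with hypothetical values; the cited works use full multibody models): a root-finder adjusts the spring stiffness and equilibrium so that the simulated free response starting from the pick state reaches the place state, at rest, after the task time.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

J, q_pick, q_place, T_task = 0.3, 0.0, 1.0, 0.5   # hypothetical data

def free_response_error(params):
    k, q0 = params
    f = lambda t, y: [y[1], -k/J*(y[0] - q0)]
    sol = solve_ivp(f, (0.0, T_task), [q_pick, 0.0], rtol=1e-9, atol=1e-9)
    qT, vT = sol.y[:, -1]
    return [qT - q_place, vT]          # arrive at the place point, at rest

k_opt, q0_opt = fsolve(free_response_error, x0=[12.0, 0.5])
print("k =", k_opt, " q0 =", q0_opt)
# Analytic check: half an oscillation about the midpoint gives q0 = 0.5 and
# k = J*(pi/T_task)**2.
print("expected k =", J*(np.pi/T_task)**2)
```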
Periodic Trajectory Learning

A mechanical system can move using, as a reference trajectory, the one that it naturally generates. Proper tools, such as adaptive oscillators, can be used to learn the periodic trajectories and adopt them as reference, especially for robot locomotion and dynamic walking [109][110][111]. In this manner, the reference trajectory is synchronized with the resonant frequency of the system, which results in energy saving. A tool capable of obtaining smooth and lag-free estimates of the frequency and phase of an external quasi-periodic signal is given by the adaptive frequency oscillators (AFO), introduced by L. Righetti et al. [112,113]. AFO are adopted in the literature for the estimation of cyclical movements, especially for rehabilitation purposes and in robots performing quasi-periodic motions. However, the convergence and optimality of these methods are in general not ensured [110,111]. Adaptive oscillators are used in central pattern generators to learn a specific rhythmic pattern, by synchronizing the reference trajectory with the resonant frequency of the robotic system and thus gaining energy efficiency. For example, in Reference [109], Buchli et al. present a 4-DOF spring-mass hopper with a controller based on adaptive frequency Hopf oscillators, which adapts to general, non-harmonic signals. The adaptive oscillator adapts to the properties of the mechanical system, in particular to its resonant frequency. In order to tune the frequency and shape of the cyclic natural motion for energy efficiency, novel oscillators, namely the adaptive natural oscillators, have been introduced. M.R.S. Noorani et al. present in Reference [106] an adaptive-frequency non-linear oscillator for energy efficiency that exploits the resonant mode of a leg-like mechanical system called the stretchable pendulum. The system is a simple oscillating mass-spring mechanism that interacts with the ground during its oscillation. A Hopf non-linear oscillator is placed in the feedback loop and its frequency tracks the resonance frequency of the mechanical system. The system not only gains energy efficiency but also has the ability to adapt to a changing environment. M. Khoramshahi et al. and R. Nasiri et al. present in References [83,105] a linear and a non-linear adaptive natural oscillator, ANO and NANO, respectively. These tools are capable of tuning the frequency and the shape of cyclic motions for energy efficiency and ensure optimality and convergence. Moreover, they are built upon the adaptive frequency oscillators but, in contrast to AFO, which adapt to the frequency of an external signal, ANO adapts the frequency of the reference trajectory to the natural dynamics of the system (Figure 8). In Reference [105], the efficiency of ANO is shown in simulations of a hopper leg and of a compliant robotic manipulator performing a cyclic task. Furthermore, experimental results on a 1-DOF joint with variable compliance (Figure 9) show the feasibility of the approach, exploiting the natural dynamics and reducing the consumed energy. In Reference [83], the non-linear adaptive natural oscillator (NANO), adopted to optimize the shape of reference trajectories, is presented. The oscillator ensures stability, convergence and optimality of the solution. Three robotic models are investigated in simulation: the pendulum, the adaptive toy and the hopper leg, showing the feasibility of the proposed approach to achieve energy efficiency.
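For concreteness, one common form of the adaptive frequency Hopf oscillator is sketched below (following the equations popularized by L. Righetti et al.; the gains and the input signal here are hypothetical): the oscillator state stays on a limit cycle while its frequency variable drifts until it locks onto the frequency of the external signal.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, gamma, eps = 1.0, 10.0, 0.9              # limit-cycle radius, gains (hypothetical)
w_teacher = 6.0
F = lambda t: np.sin(w_teacher * t)           # external quasi-periodic input

def afo(t, s):
    x, y, w = s
    r2 = x*x + y*y
    dx = gamma*(mu - r2)*x - w*y + eps*F(t)   # perturbed Hopf oscillator
    dy = gamma*(mu - r2)*y + w*x
    dw = -eps*F(t) * y/np.sqrt(r2)            # frequency adaptation rule
    return [dx, dy, dw]

sol = solve_ivp(afo, (0.0, 200.0), [1.0, 0.0, 3.0], max_step=0.01)
print("adapted frequency:", sol.y[2, -1])     # drifts towards w_teacher = 6
```

ANO/NANO build on this structure but, as noted above, close the loop on the system's own response instead of an external teacher signal.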
Figure 8. Scheme of the adaptive natural oscillator of Reference [105] used to exploit the natural dynamics: the oscillator is employed as a pattern generator and the applied force is used as feedback.

Figure 9. The 1-DOF mechanism with variable compliance presented by M. Khoramshahi et al. [105]: the linear spring acts as a parallel rotational spring at the joint and its stiffness can be tuned by controlling the ball screw mechanism.

Design Optimization with Natural Motion

In the previous sections, we have analyzed the mechanical design of both robotic and mechatronic systems for natural motion and the design of the desired natural dynamics to ensure energy efficiency. It should be noted that, in the majority of the works, the exploitation of natural motion is strictly related to the definition of an optimal design problem. Indeed, the spring parameters (with variable or non-variable compliance and linear or non-linear behavior) as well as the trajectory parameters are designed using an optimization strategy. In this section, we provide an overview of the optimization problems adopted for the design of the trajectory and spring parameters. Table 2 reports a summary of the optimal design problems, showing the design variables, the objective and constraint functions and the algorithms adopted in the reviewed works. Furthermore, the last column of the table reports the results obtained in terms of energy (or torque) efficiency. Before analyzing the optimization problems in detail, it is necessary to make a distinction between works in which the design formulation includes the spring parameters, works in which it considers the trajectory parameters, and works considering both. In problem formulations in which the spring parameters are to be determined, both the stiffness and the equilibrium position are typically design variables. In the second design formulation, the optimal values for the trajectory are found: the task time, the frequency or the coefficients of the motion profile are the design variables. The third formulation is the concurrent design optimization of the spring parameters and of the trajectory properties. This is the most complex optimization formulation to implement, since the properties of the elastic elements usually influence the trajectory and vice versa. Despite the complexity, this formulation allows for a high degree of design flexibility and, therefore, the potential to further reduce energy consumption. Several methods are used to solve the optimal design problem with natural motion. Due to their efficiency with convex problems, the majority of the works about natural motion utilize gradient-based optimization algorithms. Depending on their use of gradients only or also of Hessians, these algorithms are referred to as first- and second-order algorithms. J.P. Barreto et al. [81] adopt the gradient-based trust-region-dogleg algorithm on an unconstrained problem, whereas H.J. Bidgoly et al. [73], P. Boscariol et al. [89] and R.B. Hill et al. [80] use MATLAB's gradient-based fmincon function. R. Fabian et al. [40], as well as M. Khoramshahi et al. [65] and R. Nasiri et al. [83], adopt a gradient descent algorithm. The unconstrained gradient-based shooting method is used by R.B. Hill et al. in Reference [91]. N. Schmit et al. [61,102] adopt the sequential quadratic programming (SQP) algorithm. A gradient-free method is implemented in Reference [103], where a downhill simplex algorithm is considered.
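Although the cited formulations differ in model and cost, most share the same skeleton, which the following stand-in reproduces on a 1-DOF case (our own sketch; none of the reviewed papers uses exactly this model): the design variables are the spring stiffness and equilibrium, the objective is the peak torque along a given trajectory (in the spirit of Reference [89]), and a gradient-free downhill simplex search (as in Reference [103]) is used.

```python
import numpy as np
from scipy.optimize import minimize

J, A, w, qm = 0.5, 0.7, 5.0, 0.4                 # hypothetical model and trajectory
t = np.linspace(0.0, 2*np.pi/w, 400)
q = qm + A*np.sin(w*t)
qdd = -A*w**2*np.sin(w*t)

def peak_torque(x):
    k, q0 = x                                     # design variables
    return np.max(np.abs(J*qdd + k*(q - q0)))

res = minimize(peak_torque, x0=[1.0, 0.0],
               bounds=[(0.0, 100.0), (-1.0, 1.0)],  # physical spring limits
               method="Nelder-Mead")                # gradient-free simplex search
print(res.x)                                        # approaches k = J*w**2, q0 = qm
```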
In some cases, an analytical solution to the optimal design problem is found, avoiding the cost of setting up and solving numerical optimization problems. These papers include References [79,86,96,98,99]. An alternative approach is to tackle the problem with optimal control theory. This is used for the optimal design problems by M. Iwamura et al. [56,100] and by C. Mirz et al. [82], who, along with W. Schiehlen et al. [101], implement Pontryagin's minimum principle (see Section 3.2.1). Most of the reviewed works adopt objective functions based on the actuator torques, on the consumed energy or on the power. A few works use objective functions based on task time and positions [103], on the position and velocity of the end-effector [81] or on position and velocity errors [91]. The results shown in the last column of Table 2 indicate that all the reviewed approaches for the optimal design problems in natural motion consistently reduce energy consumption. The vast majority of papers utilize gradient-based optimization algorithms. Despite the great efficiency of these algorithms, the validation is typically carried out on simple test cases (as reported in Table 3) with a low number of design variables and convex objective functions. This review did not find any research employing genetic algorithms or evolutionary strategies in the field of natural motion. Although these algorithms are less efficient, they can often handle non-convex problems, discrete design variables and noisy system functions. As the goal becomes to consider complex mechanical systems and expanded design problems, the optimization formulation will require a higher number of design variables, including the parameters of the motion law. This may lead to non-convex optimization problems for which gradient-based algorithms are not suitable. Thus, the proper optimization formulation, the system equations and the corresponding choice of the optimization algorithm will play a critical role moving forward with optimal design with natural motion.

Discussion

In Section 2, we analyzed several approaches to perform a cyclic task with a robotic system by exploiting its natural motion, that is, a motion mainly due to the transformation of the potential energy stored by elastic elements into kinetic energy. The application of natural motion is demonstrated to be beneficial in terms of energy consumption on different mechanical systems (see Table 3) by means of simulations or experiments. From the table, it is evident that most of the methods are validated on very simple test cases, that is, planar systems with one or two revolute joints. In all the cases, systems with rigid links are considered, so there are no examples of robotic systems with flexible links or fully compliant mechanisms to which natural motion is applied. Considering the increasing interest in these kinds of systems, the effect of link flexibility on the exploitation of natural motion should be investigated, since link flexibility affects the system motion [114]. This could be taken into account in two ways: controlling the resulting unwanted vibrations or making them part of the natural motion trajectory. For the practical implementation of natural motion in industrial applications, it is important that the robot can perform multi-point trajectories. Indeed, pick-and-place operations are typically carried out between points belonging to two areas, that is, the pick and place positions are not always the same but can be any position within these areas.
The majority of the works discussed in Section 2 neglect such an aspect and only consider point-to-point trajectories. Multi-point trajectories are taken into account in References [16,80,82,91,93,96]. Among these, H. Goya et al. [16] and C. Mirz et al. [82] use spring parameters that are optimized for the centers of the areas of interest to move the system along a multi-point trajectory. M. Shushtari et al. [96] also consider one constant stiffness, whose value results from an optimization problem that takes into account the whole multi-point trajectory. Conversely, K. Matsusaka et al. [93] and R.B. Hill et al. [80,91] optimize the elastic elements for each segment of the trajectory by changing the stiffness values when the robot is stopped at a pick or place position. If small pick-and-place areas are considered, the energy savings using constant stiffness springs are advantageous with respect to the stiff case and even comparable to those obtained with varying stiffness (once the energy cost of changing the stiffness is considered) [93], despite being a simpler approach. However, when the points of the multi-point trajectory are far from the averaged values based on which the spring is optimized, such an approach does not work anymore. It becomes difficult to accurately follow the trajectory (because of the elastic forces to be counteracted) and the energy savings are minimal or even null. Therefore, variable stiffness springs become necessary. The main goal of the application of the natural motion approach is to save energy. All the reviewed methods allow the energy efficiency of a robotic system to be improved; however, it is almost impossible to assert which one outperforms the others in reaching such a goal. A benchmark case, with which to perform a comparative analysis between the different methods, is lacking. The benchmark is meant not only in terms of test case but, above all, in terms of performance evaluation. All the authors present a percentage of energy savings with respect to the stiff case (see the results column of Table 2), but some refer to electrical energy, others to mechanical energy. Most of the contributions neglect an estimate of the losses or of the energy consumed, for example, to change the stiffness or to activate the brakes. Additionally, the use of different objective functions (e.g., mechanical torque, mechanical energy, electrical energy) does not allow for a proper comparison, since different objective functions can lead to very different results, as demonstrated by A. Velasco et al. [86]. Generally speaking, the use of optimized trajectories and variable stiffness springs is the most promising approach, since it leads to a trajectory that better fits the system dynamics and allows for flexibility in task execution (point-to-point or multi-point trajectories, different task frequencies). On the other hand, methods based on fixed trajectories and constant springs are typically easier to set up and implement.

Conclusions

In this paper we presented a review, a classification and a discussion of several approaches that adopt the concept of natural motion to enhance the energetic performance of robotic and mechatronic systems. In the first part of the paper, we identified the physical requirements that a system has to fulfill to exploit the natural motion and we discussed the technical possibilities to modify it, if necessary.
To this end, the configurations in which compliant elements are installed at the joints of the mechanisms (i.e., serial and parallel, with variable and non-variable stiffness) were introduced and compared. Although a serial arrangement seems to be more convenient in terms of energy savings, most of the works deal with compliance mounted in parallel with the main actuators, because of the ease of installation and control. In the second part of the paper, we classified the approaches related to natural motion on the basis of the trajectory followed by the system: given trajectory, optimized trajectory, free-vibration response, and periodic trajectory learning. Moving from the first category toward the last, the resulting motion gets closer to the system dynamics. Indeed, in the last case, also known as natural dynamic exploitation, the periodic or cyclic task is designed to match the response naturally generated by a given system. In all the other cases, a modification of the system is typically required (natural dynamic modification) so that its dynamic behavior matches a desired task. Regardless of the trajectory followed, in general the added compliance parameters can be optimized offline (in the case of non-variable compliance) or tuned online (if the compliance is variable). To conclude, methods based on optimized trajectories and variable stiffness springs are the most promising. In fact, they approximate the system dynamics quite well (and hence reduce energy consumption) while preserving task flexibility. Furthermore, we presented an analysis of optimal design problems in natural motion, since the implementation of natural motion is often related to the definition of an optimization problem. Numerical and experimental results, expressed as the percentage reduction of required energy or torque, show that the adoption of natural motion techniques can greatly increase energy efficiency. Future developments in the field of natural motion for energy saving in robotic and mechatronic systems should move toward the creation of a benchmark case, to better understand the improvements and potential of the different strategies. Further investigations would be necessary with regard to applicability to more complex systems, for example, spatial mechanisms and robots with different kinds of joints. Furthermore, the concept of natural motion could also be extended to robots and mechanisms with flexible links or to fully compliant, joint-less systems. Finally, the natural motion approach is expected to be applied in more scenarios in both industrial and academic research, where novel challenges could be addressed to maximize its potential. Author Contributions: All authors discussed and commented on the manuscript at all stages. More specifically, L.S. and I.P. collected the related literature, conducted the analysis and completed the draft writing under the supervision of A.G. and R.V.; E.W., A.G. and R.V. contributed to the revision of the paper structure and the presentation style, as well as the proofreading of the paper.
Safe-update of bi-layered controller and its application to power systems In real-world social infrastructures such as power systems, proven controllers are already implemented and operated stably. To further improve the control performance reliably, a bi-layered control that inherits the existing controller is proposed: the existing controller generates the baseline control signal in the upper layer, while multiple sub-controllers individually coordinate the signal to actuate the infrastructure system in the lower layer. The sub-controllers are characterized by a few parameters that represent the degree of coordination. Then, an update strategy for the parameters that guarantees the safety of the overall system is proposed. The effectiveness of the bi-layered control is shown via a numerical experiment with a power system model. Introduction The scale of social infrastructures such as power systems and traffic systems has been growing in recent years. Social infrastructures must work reliably while overcoming the problems caused by their increasing scale. One of these problems can be seen in the operation and control of the power system. With the worldwide growing awareness of environmental preservation, a large number of renewable energy sources (RESs) are being integrated into the existing power system. The high penetration of RESs may increase the uncertainty of power supply and lead to deteriorating frequency stability [1]. To prevent this deterioration, the controller needs to be updated as the RES share increases. In other words, the main requirement for next-generation power systems is adaptability in the controller. Most of the existing controllers designed for social infrastructures, e.g. load-frequency control (LFC) for the power system, are implemented in a centralized fashion [2]: the controller aggregates the measured information from connected sub-plants and actuates them all at once. This controller is referred to as a global controller. Since practical social infrastructures are spatially distributed, the measurement and actuation for the global controller need to be of low rank [3]. For example, LFC measures the average frequency from generators and actuates them by broadcasting requests. Due to the severe limitations in measurement and actuation, the performance achievable by existing global controllers is not satisfactory. This motivates us to relax the limitation and to readdress the controller structure for social infrastructures. Some works address decentralized controller structures and/or apply data-based controller design. Although the works [4,5] show the effectiveness of decentralized control and data-based design, it is not always realistic to completely replace the existing reliable controller with a new one. As is pointed out in [6,7], the safety of the control system may be impaired during the update of the controller. In other words, the state may fluctuate largely beyond the allowable operation range, which causes hardware failures or puts users at risk. Also, the complete renewal of the controller is not acceptable to users and operators due to their psychological resistance. This article addresses a novel control framework, where the existing controller is not replaced but inherited by the updated control system. We consider the bi-layered structure in the controller as shown in Figure 1. In the bi-layered control, multiple local sub-controllers {K_i}_{i∈{1,...,n}} are dispersively installed below the global controller K_0.
A similar controller structure can be seen in the concept of glocal control [8], where independently defined global/local objectives are achieved simultaneously. In contrast, the bi-layered control in this article pursues a common objective through the cooperative design of K_0 and {K_i}. It is assumed that the upper-layered controller plays the role of the baseline controller, i.e. K_0 is preliminarily implemented and works well. Then, the design problem of the lower-layered controllers is addressed: {K_i} are designed to assist the operation of the upper-layered controller by using their local measurements of the sub-plants {P_i}_{i∈{1,...,n}}. Since the bi-layered controller utilizes the local measurements in addition to the existing global one, it has the potential to further improve the overall control performance. A variety of controller design methods are compatible with the presented general framework of bi-layered control. A promising one is the application of data-based updates, which brings adaptability to the control system. This article also addresses the update of the lower-layered controllers with a safety guarantee, which is the main concern of data-based control [6]. The rest of this article is organized as follows: in Section 2, the design problem of the bi-layered controller is formulated. Each lower-layered controller is characterized by designable parameters, and its design problem is formulated. Section 3 addresses the design method of the parameters based on operating plant data. In addition, some propositions are given, in which the safety of the overall system is guaranteed during the parameter update. In Section 4, a demonstration of the bi-layered control using a power system model is shown. Section 5 concludes the article and shows future works. Notation: The symbol e_i is a unit vector whose ith element is 1, and the symbol 1 denotes the all-ones column vector, i.e. 1 := [1 · · · 1]^T. For a vector v, the symbol diag(v) denotes the diagonal matrix composed of the elements of v. Given matrices X and Y, the lower and upper linear fractional transformations (LFTs), denoted F_l(X, Y) and F_u(X, Y), respectively, are defined in the standard way [9]. System description We consider the feedback control system illustrated in Figure 2. The system is composed of the following three parts: the plant system defined by the interconnection of the sub-plants {P_i} via the connection matrix L, the baseline controller K_0, and the set of additional sub-controllers {K_i} to be designed. Each sub-plant P_i is described by the discrete-time state-space model x_i(k+1) = A_pi x_i(k) + B_ui u_i(k) + B_wi w_i(k) + B_ξi ξ_i(k), with outputs y_i(k) = C_yi x_i(k), z_i(k) = C_zi x_i(k), and η_i(k) = C_ηi x_i(k), (4) where x_i ∈ R^{m_i}, u_i ∈ R, w_i ∈ R, y_i ∈ R, and z_i ∈ R denote the state, control input, disturbance input, measured output, and control output, respectively, and ξ_i ∈ R and η_i ∈ R represent the interaction signals between the sub-plants {P_i}. In addition, the symbols A_pi, B_ui, B_wi, B_ξi, C_yi, C_zi, and C_ηi denote constant matrices with appropriate dimensions. The set of sub-plants {P_i} is described in stacked form by (5), where x ∈ R^{m_1+···+m_n}, u ∈ R^n, w ∈ R^n, y ∈ R^n, z ∈ R^n, ξ ∈ R^n, and η ∈ R^n are the stacked vectors of the corresponding signals of (4). In addition, the coefficients in (5), denoted by A_p, B_u, B_w, B_ξ, C_y, C_z, and C_η, are the (block-)diagonal matrices composed of the corresponding coefficients of (4). For example, A_p := diag(A_p1, . . . , A_pn). We let P(s) ∈ C^{3n×3n} represent the transfer function matrix of {P_i}. As sketched in Figure 2, {P_i} are connected via the connection matrix L ∈ R^{n×n}.
This interconnection is modelled by equation (6). Here, we let z_0 be the aggregation of z, i.e. z_0 = 1^T z holds. Then, it follows from (5) and (6) that the overall plant system P_all is described by (7). The controllers K_0 and {K_i} constitute a bi-layered structure: the baseline controller K_0 is located on the upper layer, while the additional sub-controllers {K_i} are on the lower one. The baseline controller K_0, namely the upper-layered controller, is a SISO static system described by r_0 = k_0 y_0, (8) where k_0 is a constant, and r_0 ∈ R and y_0 ∈ R denote the baseline control signal and the aggregation of the measurements {y_i}, i.e. y_0 = 1^T y, respectively. The baseline signal r_0 is broadcast to the sub-controllers {K_i}. Then, letting r̄ = 1 r_0, the input-output response from the measured output y to this r̄ is described by r̄ = k_0 1 1^T y. (9) We see from (9) that the upper-layered controller K_0 plays the role of "global" control: the controller is driven by the averaged behaviour of the sub-plants {P_i} and broadcasts a baseline control signal to the sub-controllers {K_i} [8]. Each sub-controller K_i, namely a lower-layered controller, is described in the parametrized form u_i = K_i(α_i, r_0, y_i), (10) where α_i is the parameter of the function K_i. A simplistic class of K_i is the linear controller of the form u_i = α_ri r_0 + α_yi y_i, (11) which is characterized by the two design parameters α_ri and α_yi. The lower-layered controller K_i works as a "local" coordinator, i.e. it assists the upper-layered controller K_0 by individually coordinating the baseline signal r_0 based on each local measurement y_i. It follows from (9) and (11) that the overall bi-layered controller is described by u = k_0 diag(α_r) 1 1^T y + diag(α_y) y, (12) where α_r := [α_r1 · · · α_rn]^T and α_y := [α_y1 · · · α_yn]^T, respectively. Although the details are omitted to simplify the statements in this article, the following discussion is valid with slight modifications even for the case of a dynamic controller, in which k_0, α_r, and α_y in (12) are replaced by the transfer functions k_0(s), α_r(s), and α_y(s), respectively. Problem setting We consider that K_0 is already implemented in the control system and that it stably regulates the plant system P_all without {K_i}. In other words, no coordination of r̄ is performed by {K_i}, and (α_r, α_y) = (1, 0) holds in (12). Then, letting G_0 denote the existing baseline control system, we have the expression (13). An assumption, Assumption 2.1, is imposed on G_0. This article aims at designing {K_i} to further improve the control performance in the sense of the H_2 norm. In the design, it is assumed that K_0 is given and that Assumption 2.1 holds. For simplicity of discussion, we let α = [α_1 · · · α_n]^T and assume that (14) holds. Letting G_α denote the overall bi-layered control system, we have the expression (15). Then, the following design problem is addressed. Problem 2.1: find the parameter α that minimizes J(α) := ‖G_α‖²_{H_2}, (16) where ‖G_α‖_{H_2} represents the H_2 norm of G_α, i.e., letting z_0 be the unit impulse response of G_α, it holds that ‖G_α‖²_{H_2} = Σ_{k≥0} ‖z_0(k)‖². Method of parameter design It is known that structured controller design such as Problem 2.1 is intractable even in numerical computation. In particular, as summarized in e.g. [10,11], the design problem of output feedback controllers with static gain and/or decentralized structure is non-convex and cannot be solved efficiently. This article therefore adopts a solution algorithm that finds a local optimum. To this end, the gradient of the cost function in (16) is studied as follows. As a preliminary, suppose that the models of {P_i} and K_0 are available for the design of the parameters α.
Then, an analytical expression of the gradient of J(α) can be derived and is given in the following proposition. Proposition 3.1: The gradient of J(α) is given by (18), where W and Y are the solutions of the Lyapunov equations (19) and (20), respectively, and A_cl(α) is given by (21). The proof of the proposition is omitted in this article; it follows in the same way as its more general version given in Theorem 3 of [12]. Based on Proposition 3.1, we can update the parameters by the following algorithm. Algorithm 1 Step 0 Let c and λ be positive constants, and let p_max denote the maximum iteration limit. In addition, let p = 0 and determine the initial guess α^(0) = α_0 for some constant vector α_0. Step 1 Based on the current guess α^(p), solve the Lyapunov equations (19) and (20) to obtain W = W^(p) and Y = Y^(p). Step 2 Compute the gradient (18) from W^(p) and Y^(p), update the current guess along the descent direction, and increment p; return to Step 1 unless p = p_max. As implied in Proposition 3.1 and Algorithm 1, the models of {P_i} and K_0 are required for the parameter design. However, their accurate models are generally not available to controller designers, in particular for large-scale plant systems such as power systems. Another way to design the parameters is to combine the algorithm above with data-driven model reconstruction. In the following discussion, the method of model reconstruction based on operating plant data is stated. To this end, let the control input generated by (12) be modified by adding an identification signal, i.e. u ← u + u_ID, (23) where u_ID := [u_ID1, . . . , u_IDn]^T ∈ R^n denotes the measurable noise or the identification input to be injected by the controller designer. Recall the baseline control system G_0, where (α_r, α_y) = (1, 0) and {K_i} do not work. Then, we see that the state x of G_0 follows the state-space equation x(k+1) = A_cl(0) x(k) + B_u u_ID(k), (24) where A_cl(·) is given by (21). It is assumed that all of {x(k), u_ID(k)} are measurable. In other words, the input-state data of all the sub-plants {P_i} are available for data-driven model reconstruction. Then, the model reconstruction is reduced to estimating the coefficient matrices A_cl(0) and B_u in (24) based on the input-state data. Let Â_cl and B̂_u denote the estimates of A_cl(0) and B_u, respectively. By gathering (24) from k = 0 to k = N−1, the following equation holds: X(N) = [A_cl(0) B_u] F(N), (25) where X(N) := [x(1) · · · x(N)] and F(N) stacks [x(0) · · · x(N−1)] on [u_ID(0) · · · u_ID(N−1)]. (26) Assuming that F(N) is of full rank, the least-squares solution is given by [Â_cl B̂_u] = X(N)F(N)^T (F(N)F(N)^T)^{−1}. (27) With this estimate, the gradient (18) is approximately calculated and utilized for the controller update at Steps 1 and 2 of Algorithm 1. Finally, it should be noted that the dimension of the state space of the reconstructed model (24) is generally different from that of the actual plant. In many practical situations, some of the sub-plants are not accessible, and only a part of {x_i(k), u_IDi(k)}, instead of {x(k), u_ID(k)}, is available for data-driven model reconstruction. Then, the state behaviour x(k) cannot be perfectly reconstructed from the data. The resulting model (24) with the estimated Â_cl and B̂_u is then lower-dimensional than the plant system (5), and a modelling error inevitably exists. Algorithm 1 combined with the low-dimensional model is verified in the demonstration given in Section 4. Stability and safety analysis Updating the lower-layered controllers {K_i} can ultimately improve the overall control performance, evaluated by the H_2 norm. However, the safety of the system is not guaranteed in the process of the update: the state may fluctuate largely and go beyond an allowable range. This section addresses additional structure imposed on the lower-layered controllers {K_i} such that any safety requirement is satisfied.
The discussion in this section holds even for the general K_i stated in (10). To this end, a saturation function is additionally introduced to each K_i, described by (10), such that every u_i does not vary largely from r_0: the modified sub-controller K_si saturates the output of K_i to the interval [r_0 − γ_i, r_0 + γ_i]. (28) The behaviour of the saturation function is illustrated in Figure 3. The centre of the possible output is r_0, which is the baseline control signal generated by K_0, while the range is γ_i, which is to be designed. One can interpret the modified control input, generated by K_si, as u_i = r_0 + d_i (29) with a bounded pseudo-disturbance d_i. Due to the saturation function, it holds with this d_i that |d_i| ≤ γ_i. (30) This interpretation of (28) by (29) with the bounded disturbance is illustrated in Figure 4. The boundedness plays a central role in guaranteeing the stability and safety of the overall control system. This is seen as follows. Due to the linearity assumption on {P_i} and K_0, the influences of the disturbances {d_i} are independent of each other. Let G_safe denote the overall control system defined by the feedback connection of F_l(P(s), L) and {K_si}, and let z_0 be the output difference between G_safe and the baseline system G_0, described by (13). Then, letting d := [d_1 · · · d_n]^T, a straightforward calculation yields the inequality (31), where A_cl(·) is given by (21). We now have the following two propositions: one on stability and the other on safety. Proposition 3.2: Suppose that Assumption 2.1 holds. Then, the overall control system G_safe is BIBO stable. Proposition 3.3: Suppose that Assumption 2.1 holds. Then, the performance deterioration in G_safe by any design of {K_i(α_i, r_0, y_i)} is bounded; in other words, (31) holds. Proposition 3.3 states the worst-case fluctuation in z_0 caused by an undesirable coordination of the lower-layered controllers {K_i}. The fluctuation caused during the update is "designable" by the choice of γ_i. A small γ_i suppresses the worst-case fluctuation and satisfies severe safety requirements, while correspondingly limiting the performance achievable by the controller update. Conversely, a large γ_i sacrifices safety while improving the achievable performance. There is an inevitable trade-off between the safety guarantee and the performance limit. Numerical experiment In this section, we demonstrate the bi-layered control via a numerical experiment using a power system model. The model was originally developed by the Institute of Electrical Engineers of Japan Power and Energy [13] and consists of 107 buses, 191 branches, and 30 generators of eastern Japan. It is assumed here that renewable energy (RE) farms are connected to generators 1 and 6, denoted by P_1 and P_6, respectively. Then, we suppose that the active power output of an RE farm suddenly changes and behaves as a disturbance to the power system, causing a frequency deviation. We aim at suppressing the overall frequency deviation by designing the sub-controllers K_1 and K_6, utilizing the operation data on P_1 and P_6. To simplify the discussion, we let (α_r, α_y) = (1, 0). Then, the design of K_1 and K_6 is reduced to that of α_1 and α_6. In the experiment, the operation data is collected for the first 5 s to reconstruct the two-dimensional model (24). Note here that since the IEEJ simulator model, from which the operating data is generated, is 90-dimensional, the reconstructed model (24) is relatively low-dimensional and cannot perfectly express the simulator dynamics.
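As a rough illustration of the reconstruction step (25)-(27) used here, the following minimal sketch estimates A_cl(0) and B_u from synthetic input-state data; the system matrices, dimensions, and identification input are hypothetical and far simpler than the IEEJ model.

```python
import numpy as np

def reconstruct_model(x, u_id):
    """Least-squares estimate of A_cl(0) and B_u as in (25)-(27).

    x    : (m, N+1) array of states x(0), ..., x(N)
    u_id : (n, N)   array of identification inputs u_ID(0), ..., u_ID(N-1)
    """
    X = x[:, 1:]                              # X(N) = [x(1) ... x(N)]
    F = np.vstack([x[:, :-1], u_id])          # F(N) stacks states over inputs
    theta = X @ F.T @ np.linalg.inv(F @ F.T)  # eq. (27), F(N) of full rank
    m = x.shape[0]
    return theta[:, :m], theta[:, m:]         # estimates of A_cl(0) and B_u

# Synthetic closed-loop data from a hypothetical 2-state system
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])
N = 200
x = np.zeros((2, N + 1))
u_id = rng.normal(size=(1, N))                # injected identification input
for k in range(N):
    x[:, k + 1] = A_true @ x[:, k] + B_true[:, 0] * u_id[0, k]

A_hat, B_hat = reconstruct_model(x, u_id)
print(np.round(A_hat, 3))                     # close to A_true
print(np.round(B_hat, 3))                     # close to B_true
```

In the article's setting, the resulting estimates would then feed the gradient computation of Algorithm 1.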
Based on the reconstructed model, the parameters are updated gradually every 0.2 s until 8.0 s. The parameter update during the experiment is shown in Figure 5. In the figure, both α_1 and α_6 converge to the optimal solution. Figure 6 shows the result of the disturbance response. In Figure 6, the red line represents the overall frequency deviation controlled by only the existing controller K_0, while the blue line represents the deviation controlled by the proposed bi-layered controller. Until 5 s, both lines overlap because the lower-layered controllers are not operating. After the lower-layered controllers start operating, the fluctuation of the blue line is suppressed compared with the red one. This means that disturbance suppression is achieved by the bi-layered controller. Conclusion This article addressed a novel control framework for large-scale systems. The bi-layered control system was proposed, in which the existing reliable controller is inherited and only the lower-layered controllers are updated to improve the overall H_2 performance. An algorithm for the controller update was also presented, and the class of lower-layered controllers was extended such that any safety requirement is satisfied during the update. Finally, a numerical demonstration of the proposed control system was performed on a control problem for the power system. There are many future works on this bi-layered control. The class of the upper- and lower-layered controllers will be extended to nonlinear and time-varying ones. It is also important to apply the bi-layered control system to other social infrastructures such as traffic systems. Disclosure statement No potential conflict of interest was reported by the author(s). Notes on contributors Toshiki Homma received the B.E. degree in Applied Physics and Physicoinformatics from Keio University in 2020. His research interests include control theory for large-scale systems and applications to power systems control. Jun-ichi Imura received the M.E. degree in Applied Systems Science and the Ph.D. degree in Mechanical Engineering from Kyoto University, in 1990 and 1995, respectively. He served as a research associate at Kyoto University from 1992 to 1996, and as an associate professor at Hiroshima University from 1996 to 2001. Since 2001, he has been with Tokyo Institute of Technology, where he is currently a professor. His research interests include modelling, analysis, and synthesis of nonlinear systems, hybrid systems, and network systems. Dr. Imura is a member of IEEE, SICE, ISCIE, and The Robotics Society of Japan. Masaki Kengo Urata received the B.E. and M.E. degrees in Applied Physics and Physicoinformatics from Keio University in 2015 and 2017, respectively, and the Ph.D. degree in Systems and Control Engineering from Tokyo Institute of Technology. He is currently working at NTT Network Technology Laboratories. His research interests include control theory for dynamical systems and applications to power system control and communication network control.
An Apple Detection Method Based on Des-YOLO v4 Algorithm for Harvesting Robots in Complex Environment Real-time detection of apples in the natural environment is a necessary condition for robots to pick apples automatically, and it is also a key technique for orchard yield prediction and fine management. To make harvesting robots detect apples quickly and accurately in complex environments, a Des-YOLO v4 algorithm and an apple detection method are proposed. Compared with the current mainstream detection algorithms, YOLO v4 has better detection performance. However, the complex network structure of YOLO v4 reduces the picking efficiency of the robot. Therefore, a Des-YOLO structure is proposed, which reduces network parameters and improves the detection speed of the algorithm. In the training phase, the imbalance of positive and negative samples causes false detection of apples. To solve this problem, a class loss function based on AP-Loss (Average Precision Loss) is proposed to improve the accuracy of apple recognition. The traditional YOLO algorithm uses the NMS (Non-maximum Suppression) method to filter the prediction boxes, but NMS cannot detect adjacent apples when they overlap each other. Therefore, Soft-NMS is used instead of NMS to solve the problem of missed detection, so as to improve the generalization of the algorithm. The proposed algorithm is tested on a self-made apple image data set. The results show that the Des-YOLO v4 network has ideal performance, with a mAP (mean Average Precision) for apple detection of 97.13%, a recall rate of 90%, and a detection speed of 51 f/s. Compared with traditional network models such as YOLO v4 and Faster R-CNN, Des-YOLO v4 can meet the accuracy and speed requirements of apple detection at the same time. Finally, a self-designed apple-harvesting robot is used to carry out the harvesting experiment. The experiment shows that the harvesting time is 8.7 seconds and the successful harvesting rate of the robot is 92.9%. Therefore, the proposed apple detection method has the advantages of higher recognition accuracy and faster recognition speed. It can provide new solutions for apple-harvesting robots and new ideas for smart agriculture. Introduction The apple-harvesting robot is a comprehensive system that integrates environment perception, motion planning, and servo control. Among them, environmental perception is an important basis for harvesting robots to complete their picking tasks [1][2][3]. Robot systems usually use target detection technology to realize the function of environmental perception. Fast and accurate target detection enables the robot to work for long periods, reduces labor costs, and improves production efficiency [4][5][6]. Therefore, research on apple detection is of great significance for improving the picking efficiency and success rate of the harvesting robot. The recognition and positioning of fruits provide the target information for the robot control system. With the development of computer vision and artificial intelligence, there are more and more methods for target recognition and positioning [7][8][9]. Kelman et al. [10] realized the localization of overlapping apples by analyzing multiple intensity profiles of fruit images. The accuracy of this method reaches 94%, but the calculation process takes a long time. Nyarko et al. [11] proposed a detection method based on convex-polyhedron surface approximation. This method has the advantages of simple calculation and efficient execution when the fruit is occluded. Wei et al.
[12] proposed a fast segmentation method for color apple images. This method uses adaptive mean shift and decision theory to determine the number of clusters and realizes the clustering segmentation of apple images. In order to solve the problem that it is difficult to process apple images collected at night, Jai et al. [13] proposed a method combining differential images and color analysis to realize apple recognition at night. Song et al. [14] proposed an algorithm to detect and locate the fruiting branches of multiple litchi clusters in large environments. In this algorithm, DeepLabv3 is used to segment the RGB image, and then a nonparametric density-space clustering method is used to cluster the pixels in the three-dimensional space of the tree skeleton image. The experimental results show that the detection accuracy for litchi is 83.33% and the execution time for a single litchi is 0.464 s. Due to the poor robustness of traditional vision methods against complex backgrounds, it is difficult for them to meet the work requirements of harvesting robots. In recent years, the CNN (convolutional neural network) [15][16][17] has been continuously improved, and it has shown great advantages in the field of target detection. CNN-based detectors are mainly divided into two categories. The first type generates a series of target candidate boxes and then classifies the samples by a convolutional neural network. Representative algorithms are R-CNN [18], Fast R-CNN [19], and Faster R-CNN [20]. The other type directly transforms the problem of target border localization into a regression problem, so it does not need to generate candidate boxes. Typical algorithms include SSD (Single Shot MultiBox Detector) [21] and YOLO (You Only Look Once) [22,23]. Xu et al. [24] used machine learning methods to identify overlapping strawberries. Compared with traditional segmentation methods, this method can overcome the influence of lighting changes. However, it is difficult to achieve good recognition results when the similarity between fruit and background is high. Wang et al. [25] proposed a method for identifying fruits and vegetables in an unstructured environment. The method used an R-CNN model to identify fruits and vegetables and then completed target localization based on the principle of triangulation. Aiming at the problem that it is difficult to identify multi-cluster kiwi fruit in a complex field environment, Fu et al. [26] proposed a recognition method based on the LeNet convolutional neural network. The recognition rates of this method for occluded fruit, overlapped fruit, adjacent fruit, and independent fruit were 78.97%, 83.11%, 91.01%, and 94.78%, respectively. However, the recognition rate of this method for partially occluded and overlapped fruit needed to be improved. Xiong et al. [27] used the Faster R-CNN detection model to detect green citrus in the natural environment. The experimental results showed that the comprehensive recognition rate of this method reached 77.45%, but it still needed to be further improved. Xue et al. [28] improved YOLO v2 to identify immature mangoes. The experimental results showed that the method can detect mangoes at a speed of 83 f/s with an accuracy rate of 97.02%. However, from the perspective of recognition effect, the problem of missed recognition of fruits had yet to be solved. Inkyu et al. [29] used the ImageNet model to recognize sweet pepper, rock melon, apple, avocado, mango, and orange. The comprehensive recognition rate of this model reached 89.6%.
From the above analysis, it can be seen that it is difficult for conventional computer vision methods and existing deep learning methods to meet the technical requirements of harvesting robots. In order to make the harvesting robot recognize apples quickly and accurately in complex environments, the traditional YOLO v4 algorithm is improved. Firstly, by drawing lessons from DenseNet, the original structure of YOLO v4 is optimized to effectively reduce model parameters. This change can improve the ability of the neural network to extract apple image features. Secondly, in order to solve the problem that the positive and negative samples of the collected data are not balanced in the training process, AP-Loss is used to improve the class loss function of YOLO v4. It can improve the accuracy of apple recognition. Finally, Soft-NMS replaces NMS to solve the problem of missed prediction boxes. It can improve the detection accuracy of apples under overlapping conditions. In order to verify the effectiveness of the Des-YOLO v4 algorithm, a harvesting experiment is carried out with the self-designed apple-harvesting robot. Data Collection and Preprocessing. In this study, a variety of experimental materials in orchard and laboratory environments are collected for training and testing, so as to select the algorithm and parameters suitable for the apple-harvesting robot. The apple images were collected from the apple demonstration base in Dashahe Town, Jiangsu Province, China. The camera used in this study is a small OV2640 camera, whose resolution is 1632 × 1232 pixels at 30 frames per second. It has a small volume and low working voltage. Moreover, it can output sampling data in whole-frame, subsampling, windowed, and other modes. The camera is installed on the robot in eye-in-hand mode, so that the end effector and the camera's field of view do not interfere with each other in the process of fruit picking. In order to reduce the probability of overfitting of the network model, both long-range and close-range images are collected. The distances from the distant view and the close view to the fruit are 400-500 mm and 100-200 mm, respectively. In each case, images from the four directions of south, north, east, and west are collected, with two images from each direction, for a total of 1600 images. To ensure the complexity of the apple images, the image material includes different numbers of apples and different occlusions, as well as lighting conditions such as natural light and backlight. Figure 1 shows a set of apple images in a typical complex environment. In the end, 2,000 image materials were collected, including the captured images and 400 images of apples obtained by web crawlers, containing a total of 2,950 targets. The training of a YOLO neural network often requires a larger training set. More training data allows the neural network to learn the features of apple images sufficiently and improves the generalization ability of the network model. However, in reality, due to limited material collection capacity, it is difficult to obtain a large amount of training material. In addition, the growth postures of apples differ, and the overlap phenomenon is serious, so it is difficult to completely extract the shape characteristics of the fruit. Therefore, it is necessary to preprocess the apple images before YOLO training. In this study, Matlab is used to process the original data set to achieve the effect of data augmentation.
(1) The image is rotated horizontally, vertically, or by a fixed angle, and the aspect ratio of the image is changed to generate more training samples. (2) Data are augmented by adjusting saturation and hue, histogram equalization, median filtering, and other image processing techniques. (3) To improve the generalization ability of the model, four images are randomly cropped by the Mosaic data enhancement method and spliced into one image as training data. After the images are processed by the above methods, 10,100 pictures are finally generated for later neural network training. LabelImg is used to mark the apple targets in the above data set, and the marked information is saved in PASCAL VOC data set format. To ensure the uniform distribution of the data set, it is randomly divided into training set, validation set, and test set in the proportions of 70%, 10%, and 20% by using Matlab tools. There are 7,070 training samples, 1,010 validation samples, and 2,020 test samples. Apple Detection Based on YOLO v4. Apple detection is the information source of picking operations for harvesting robots, and it is also an important factor affecting the success rate of picking [30,31]. This study uses the YOLO v4 algorithm to realize the recognition and positioning of apple targets; it can locate the apples in a video and return their coordinates. YOLO v4 is one of the best detection algorithms at present. It has the advantages of fast recognition speed and high accuracy in apple detection. On the basis of the original YOLO v3 architecture, it introduces optimizations in data processing, the backbone network, network training, the activation function, the loss function, and other aspects. YOLO v4 achieves the best trade-off between detection speed and accuracy so far [32][33][34]. The backbone network of YOLO v4 is CSPDarknet53, which is used to extract target features. YOLO v4 draws on the experience of CSPNet (Cross Stage Partial Network) to maintain accuracy and reduce computing bottlenecks and memory costs, adding CSP connections to each large residual block of Darknet53 [35]. To reduce the amount of calculation and ensure accuracy, YOLO v4 divides the feature mapping of the base layer into two parts and then combines the hierarchical structures of different stages. The activation function of CSPDarknet53 is the Mish function, and the rest of the network continues to use the Leaky ReLU function. Different from using FPN for upsampling in the YOLO v3 algorithm, YOLO v4 borrows the idea of information flow in PANet (Path Aggregation Network). The semantic information of high-level features is propagated to the low-level network through upsampling, and then it is combined with the high-resolution information of low-level features to improve the detection of small targets. As shown in Figure 2, the program flow of the YOLO v4 algorithm is as follows: (1) The features of the input image are extracted through the backbone network, and then the input image is divided into S × S grids (S = 7). If the center of a target is in a grid, this grid is responsible for the detection of the target. (2) In order to complete the target detection, each grid needs to predict B bounding boxes and the category probability of each bounding box, and to output the confidence of whether the bounding box contains the target: confidence = Pr(Object) × IOU, (1) where IOU (Intersection over Union) is a standard performance measure between the predicted bounding box box(P) and the actual bounding box box(T), i.e., the area of their intersection divided by the area of their union.
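For concreteness, here is a minimal sketch of this IOU computation, assuming boxes given as [x1, y1, x2, y2] corner lists (a hypothetical format; the paper does not fix one):

```python
def iou(box_p, box_t):
    """Intersection over Union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_p[0], box_t[0]); y1 = max(box_p[1], box_t[1])
    x2 = min(box_p[2], box_t[2]); y2 = min(box_p[3], box_t[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # overlap area, 0 if disjoint
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_t = (box_t[2] - box_t[0]) * (box_t[3] - box_t[1])
    return inter / (area_p + area_t - inter)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```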
Pr(Object) is the probability that the current position contains an object. If there is a target in the grid, Pr(Object) = 1; otherwise Pr(Object) = 0. Each bounding box contains five predicted values: (x, y, w, h, confidence), where (x, y) are the center coordinates, (w, h) are the width and height of the bounding box, and confidence is the confidence information. (3) The conditional category probability C_i of each grid is calculated; then, the class-specific confidence score S_i of each bounding box is obtained by multiplying the class conditional probability by the confidence of each bounding box (equation (2)). Pr(Class_i) is the category probability of the i-th target. By setting a threshold and comparing it with S_i, boxes whose scores are lower than the threshold are filtered out. Then, NMS is performed on the remaining boxes. Finally, the detection box of the target is obtained to realize the recognition and localization of the apple. This study obtains the two-dimensional coordinates (x_1, y_1) of apples from the detection box. The laser ranging sensor VL53L0 is used to measure the distance z between the target and the robot. Then, the three-dimensional coordinates (x, y, z) of the target in the camera coordinate system can be obtained by the coordinate transformation formula (3). f is the focal length of the camera (f = 3.6 mm). Des-YOLO Network Structure Design. Because this study only detects apples in the image, the structure of the YOLO v4 network is optimized according to the DenseNet network. DenseNet enables the reuse of feature information through the connection layer by establishing dense connections between front layers and back layers, thus reducing the amount of calculation. In DenseNet, all previous layers are connected as input: x_l = H_l([x_1, x_2, . . . , x_{l−1}]), where [x_1, x_2, . . . , x_{l−1}] is the concatenation of all feature maps before layer l. The above formula is a nonlinear mapping relationship. Because each layer receives the feature maps from all the previous layers, the network can be thinner and more compact. Therefore, the number of channels can be reduced. Based on the analysis and understanding of the network structure of DenseNet, a Des-YOLO network structure is proposed. The SPP (spatial pyramid pooling) block of the original YOLO v4 structure is removed, and a dense block is added in its place. Dense blocks allow feature information to be better transmitted through the whole network, and overfitting can be alleviated to some extent. YOLO v4 has three different sizes of anchors, which are 19, 38, and 76. In order to improve the detection speed, only the 19 × 19 and 38 × 38 anchors are selected, because the larger the anchor is, the smaller the prediction box will be. If the prediction box is too small, apples with a small resolution will be detected. In the process of picking, the distance between an apple with too small a resolution and the manipulator is too large, so it is not a picking target at the current position. The structure of the Des-YOLO network is shown in Figure 3. The size of the input image is 416 × 416. Optimization of Loss Function. The proponents of YOLO v4 believe that the design of the loss function is one of the optimization techniques that can improve accuracy without increasing inference time. The prediction error of the bounding box coordinates, the confidence error of the bounding box, and the prediction error of the object category were already considered in the original loss function design. YOLO v4 is a one-stage detection method.
If the quantity gap between positive and negative samples is too large, it will reduce the accuracy of the network's recognition of apples. In order to solve the problem of imbalance between positive and negative samples, the category loss function is improved based on AP-Loss (Average Precision Loss). AP-Loss [36] transforms the classification task into a ranking task and minimizes the AP-Loss of the system based on the network error and its optimization algorithm. Firstly, the prediction boxes and scores are transformed into a ranking format, where K and M represent the k-th row and m-th column of an image, respectively; X_KM and Y_KM represent the difference of the overlap scores of two prediction boxes and the converted score, respectively; and α and β represent the ground-truth matching score and the original score of the anchor box, respectively. The network error is adjusted accordingly: F(x) is a step function that takes the value 1 only if x > 0 and 0 otherwise, and Λ and T are the sets of data groups marked with the values 1 and 0, respectively. The optimized loss function L_cla and its minimization objective are then defined, where Σ_{m∈Λ,k≠m} F(x_km) and Σ_{m∈T,k≠m} F(x_km) are the rankings of α_k among the positive samples and among all valid samples, respectively. L(x) and y are d-dimensional vectors composed of all L_KM and Y_KM, where d is the effective number of all prediction boxes and δ is the optimization parameter of the system. The backpropagation gradient of the network is obtained by differentiating with respect to the score α_k. Filtering Method of Prediction Box. In the test phase, a target detection algorithm outputs multiple prediction boxes; in particular, there are many high-confidence prediction boxes around each target. In order to delete these duplicate prediction boxes and give each target only one detection result, NMS (Non-maximum Suppression) is generally used to filter the prediction boxes. Traditional NMS assumes that there are clear boundaries between targets that do not produce too much overlap, so the algorithm can effectively remove false-positive samples and improve detection accuracy. However, in images containing multiple apples, adjacent apples overlap each other. With the traditional NMS algorithm, some real apples with too high an overlap are directly removed from the detection queue, resulting in missed detections. In order to solve this problem, Soft-NMS [37] is used instead of NMS to filter the prediction boxes. Soft-NMS re-evaluates the prediction boxes recursively according to the current score, instead of crudely deleting them. In this way, it avoids missed detections when multiple apples have a high overlap. At the same time, the algorithm does not need to retrain the model and does not increase the training cost.
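Before the formal flow below, here is a minimal sketch of this rescoring idea; the box format and the numeric thresholds are hypothetical, iou() is the helper from the earlier sketch, and the Gaussian penalty matches equation (11) given below.

```python
import numpy as np

def soft_nms(boxes, scores, n_t=0.3, n_d=0.001, sigma=0.5):
    """Greedy Soft-NMS with Gaussian rescoring.

    boxes  : list of [x1, y1, x2, y2] boxes (hypothetical format)
    scores : list of confidence scores
    n_t    : IOU threshold N_T above which a score is penalized
    n_d    : final score threshold N_d below which a box is deleted
    """
    boxes, scores = list(boxes), list(scores)
    kept_boxes, kept_scores = [], []
    idx = list(range(len(scores)))
    while idx:
        m = max(idx, key=lambda i: scores[i])        # box B_m with highest score
        idx.remove(m)
        kept_boxes.append(boxes[m]); kept_scores.append(scores[m])
        for i in idx:
            ov = iou(boxes[m], boxes[i])             # iou() from the earlier sketch
            if ov > n_t:
                scores[i] *= np.exp(-ov**2 / sigma)  # Gaussian penalty, not deletion
        idx = [i for i in idx if scores[i] >= n_d]   # drop rescored boxes below N_d
    return kept_boxes, kept_scores
```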
The algorithm flow is as follows: (1) B = {B_1, . . . , B_N} is the set of prediction boxes, and S = {S_1, . . . , S_N} is the set of confidence scores corresponding to the prediction boxes. (2) D = { } is the filtered prediction box set. (3) Select the box B_m with the highest score from set B, put it into set D, and assign the difference set of B and D to B. (4) If the IOU between a remaining box and B_m is greater than the set threshold N_T, its score is reduced according to equation (11). (5) Set the threshold N_d, and delete a remaining box when its new score is less than N_d. (6) Repeat steps (3), (4), and (5) until B is an empty set, and then return D and S. For a prediction box with IOU greater than the threshold, a penalty function in the form of a Gaussian function is constructed to reduce its score: S_i = S_i e^{−IOU(B_m, B_i)²/σ}, (11) where σ is the scale adjustment coefficient, set to 0.5 in this experiment. Soft-NMS changes the traditional method of directly removing prediction boxes with high IOU and replaces it with a method that reduces their scores. It lowers the probability of a correct prediction box being deleted by mistake and improves the average accuracy of detection. Model Training and Detection Effect. In this experiment, the core processor of the training computer is an AMD 3900X 3.8 GHz CPU, and the graphics card is an NVIDIA RTX 2080 Ti. The program is written in C++ and calls OpenCV, CUDA, and other libraries. For model training, the learning rate is set to 0.001; momentum and decay are set to 0.9 and 0.0005, respectively; and the learning rate becomes 0.1 times the original after 11,000 iterations. Over 12,000 iterations of training, the loss function of the model changes as shown in Figure 4. It can be seen from the figure that in the first 1,300 iterations, the loss value decreases rapidly. The model fits rapidly and then gradually stabilizes after 3,000 iterations. In the iterative process, the weights are output every 100 iterations, but more iterations are not always better: too many iterations are prone to overfitting, so it is necessary to evaluate the model comprehensively. The purpose of this study is to find suitable apples. Precision, Recall, mAP (mean Average Precision), and IOU are used to choose the appropriate threshold T (0 < T < 1) for the model. After the algorithm predicts the confidence of a target, T is compared with the confidence. The predicted targets with confidence higher than T are the apples that meet the harvesting requirements. Figure 5 shows the change of mAP with the number of iterations. Among the models obtained in this experiment, the models with higher mAP are selected, and then data experiments are carried out on these models. In this study, the precision, recall, and IOU of these models are compared by continuously changing the threshold T, so that the models can detect the apples in the current environment as needed. In the apple recognition system, apples that are too far away or hidden behind other apples can be ignored, because they will be recognized and located again before the next picking. Therefore, this study favors Precision over Recall. For the IOU, because the harvesting robot only needs to recognize the centers of apples, the requirements for the IOU are not high. To sum up, the priority of these parameters is Precision > Recall > IOU. The change of the threshold T changes the Precision, Recall, and IOU of the detected targets.
When the threshold T is 0.5, the Precision and Recall are 97% and 90%, respectively, and the IOU is 83.61%. The performance of the model is then at its best. The effect of the Des-YOLO v4 algorithm on the detection of apples in various environments in the test set is shown in Figure 6. Experimental Comparison and Analysis. In order to further verify the efficiency of the improved model, the detection efficiency of various detection algorithms is compared. This study mainly evaluates the detection effects of YOLO v4, Faster R-CNN, and Des-YOLO v4 under the above conditions. In this experiment, multi-target images with different numbers and sizes of apples are selected for the detection comparison, and the effect is shown in Figure 7. It can be seen that the Faster R-CNN detection efficiency is not high, and it easily misses targets. The conventional YOLO v4 algorithm has a faster detection speed and good detection accuracy, but there are many targets in its detection results that are too far away. It can be seen from Table 1 that the Des-YOLO v4 algorithm performs better than the other algorithms in detecting apples. In the case of few apples, the detection results of the algorithms are similar, but the detection speed of Des-YOLO v4 is faster and its mAP is relatively high. In the case of scattered apples, although Faster R-CNN can detect more apples, apple targets that are too far away cannot be picked in practical applications. In contrast, the Des-YOLO v4 algorithm has faster detection and higher detection accuracy. At the same time, Des-YOLO v4 is better than the official YOLO v4 algorithm when there are more apple targets, so it is more suitable for harvesting robots. From the overall effect, the Des-YOLO v4 algorithm has a faster speed and a higher accuracy. Robot Automatic Harvesting Experiment. The target detection and harvesting experiments are completed with a self-designed apple-harvesting robot, shown in Figure 8. The self-designed robot mainly includes three parts: a mobile carrier, a 5-DOF (five-degree-of-freedom) manipulator, and an end effector. The mobile carrier is a crawler chassis composed of a chassis cabin and a crawler walking mechanism. The chassis cabin carries the environment sensing system and the motion control unit of the harvesting robot. The crawler walking mechanism is composed of load-bearing wheels, driving wheels, tensioning auxiliary wheels, and belt supporting wheels. The 5-DOF manipulator adopts a joint structure and is fixed on the mobile carrier. The first degree of freedom is the lifting platform, the second is the waist rotation joint of the manipulator, the third is the swing axis of the back arm, the fourth is the swing axis of the forearm, and the fifth is the rotation axis of the end manipulator. The end effector adopts a claw structure. The claw opening and closing is controlled by a stepper motor through a lead screw. The inner side of the clamping claw is equipped with pressure sensors, which enable lossless grasping of the apple. In the harvesting experiment, the host computer of the robot first processes the apple images and detects the apple targets in the images through the Des-YOLO v4 algorithm. Then, the position of the target in the manipulator coordinate system is calculated. Finally, the manipulator is controlled to move toward the target by the visual servo control algorithm, so as to complete the apple-harvesting task.
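To make the position-calculation step of this pipeline concrete, the following minimal sketch back-projects a detected box centre with the laser-measured depth, in the spirit of formula (3); f = 3.6 mm is the focal length reported for the camera, while the pixel pitch and sign conventions are hypothetical.

```python
def pixel_to_camera(u, v, z, f=3.6e-3, pixel_pitch=1.75e-6):
    """Back-project a detection-box centre (u, v), given in pixels relative
    to the image centre, to camera coordinates using the laser depth z [m].
    pixel_pitch is a hypothetical sensor pixel size [m]; f = 3.6 mm is the
    focal length given in the paper."""
    x = u * pixel_pitch * z / f   # similar triangles: x / z = u_metric / f
    y = v * pixel_pitch * z / f
    return (x, y, z)

# Example: box centre 120 px right and 40 px above centre, apple 0.35 m away
print(pixel_to_camera(120, -40, 0.35))
```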
Figure 9 shows the complete process of the robot harvesting operation. In this experiment, fruit tree models are used to simulate the harvesting environment. A total of 70 harvesting experiments are carried out. The processing time of a single image is 0.4 seconds, the average single harvesting time is 8.7 seconds, and the comprehensive harvesting success rate is 92.9%. The Des-YOLO v4 algorithm can meet the real-time harvesting requirements of the harvesting robot. Conclusions This study proposed a Des-YOLO v4 algorithm and an apple detection method. The algorithm enables harvesting robots to detect apples in complex environments. In addition, it has the advantages of higher recognition accuracy and faster detection speed compared with other detection algorithms. The main conclusions are as follows: (1) To improve the detection speed of harvesting robots, the Des-YOLO network structure is proposed. By adding the DenseNet, the parameters of the YOLO v4 network are effectively reduced and the ability of the network to extract apple image features is improved. Therefore, the Des-YOLO network has better detection performance. (2) Aiming at the problem of imbalance between positive and negative samples in the collected data, a class loss function based on AP-Loss is proposed. The AP-Loss function uses a ranking task instead of a classification task. It can improve the detection performance of Des-YOLO v4 and improve the accuracy of apple recognition. (3) In the test phase, Soft-NMS is used to replace NMS to solve the problem of missed apple detection, which improves the detection accuracy of apples under overlapping conditions. (4) The Des-YOLO v4 algorithm is tested on the self-made apple data set. The test results show that the proposed algorithm has a mAP of 93.1% and a detection speed of 51 fps for apple images. Compared with Faster R-CNN and other network models, the proposed model can meet the accuracy and speed requirements of apple detection at the same time. (5) A harvesting robot is designed to carry out the apple-harvesting experiment. The experimental results show that the processing time of a single image is 0.4 seconds, the single harvesting time is 8.7 seconds, and the comprehensive harvesting success rate is 92.9%. However, the proposed algorithm still has some shortcomings. The network model in this study is still complex and needs a lot of computing time, which affects the overall picking efficiency. In low-illumination environments, the performance of the algorithm degrades severely, making the robot unable to work at night. Therefore, in further research, the network model will be simplified to reduce network parameters and improve the harvesting speed of the robot. Meanwhile, detection methods for night images will be studied, so that the harvesting robot can work in all illumination environments. Data Availability The Des-YOLO v4 model constructed in this study and the data sets for training and evaluating the model are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this study. Acknowledgments: This study was supported by the International Science and Technology Cooperation Project of Zhenjiang City (GJ2020009).
CONTROLLABILITY AND OBSERVABILITY FOR SOME FORWARD STOCHASTIC COMPLEX DEGENERATE/SINGULAR GINZBURG–LANDAU EQUATIONS. This paper is addressed to establishing controllability and observability for some forward linear stochastic complex degenerate/singular Ginzburg–Landau equations. It suffices to establish appropriate observability inequalities for the corresponding backward and forward equations. The key is to prove Carleman estimates for the forward and backward linear stochastic complex degenerate/singular Ginzburg–Landau operators. Compared with the existing deterministic results, it is necessary to overcome the difficulties caused by some complex coefficients and random terms. The results obtained cover those of the deterministic case and generalize those for stochastic degenerate parabolic equations. Moreover, the limit behavior of the coefficients in the equation is discussed. Introduction and main results The Ginzburg-Landau equation was first given in [16]; it is a typical nonlinear equation in the physics community. It may describe a variety of phenomena, including light propagation in nonlinear fibers and phenomena related to pulse formation and superconductivity, and it plays an important role in the theory of amplitude equations. Real-valued Ginzburg-Landau equations were first derived as long-wave amplitude equations in [22,26]. The complex Ginzburg-Landau equation was established as a standard 1-D model for some fluid flows (see [27]). The deterministic complex Ginzburg-Landau equation is one of the most frequently studied equations in physics and mathematics. For instance, the Cauchy problems, numerical methods to establish approximate solutions, and control problems for the deterministic Ginzburg-Landau equations have been extensively researched (see [1,7,10,23-25]). Moreover, uniformly parabolic equations without degeneracies or singularities have been developed in various directions. More recently, however, several situations where the equation is not uniformly parabolic have been investigated. Indeed, many problems coming from physics (see [19]), biology (see [4,8]) and mathematical finance (see [17]) are described by parabolic equations which admit some kind of degeneracy. Another inspiring situation is the case of parabolic equations with singular lower-order terms. The corresponding cases arise in quantum mechanics (see [2]) or in combustion problems (see [3]). In practice, due to the interference of random factors, stochastic processes give a natural replacement for deterministic functions in mathematical descriptions. Compared with the deterministic case, some substantial difficulties arise in the study of stochastic partial differential equations. For example, the solution to a stochastic partial differential equation is non-differentiable with respect to the noise variable, and the usual compact embedding result is not valid for solution spaces of stochastic evolution equations. Further, "time" in the stochastic setting is not reversible. Indeed, many tools and methods which are effective in the deterministic case do not work anymore in the stochastic setting. Recently, stochastic complex Ginzburg-Landau equations have received more and more attention; see for example [11,12,18]. In this paper, we will study some linear stochastic complex Ginzburg-Landau equations with both degeneracies and singularities.
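For orientation, the classical cubic complex Ginzburg–Landau equation referred to above is commonly written, in one standard normalization, as follows. This display is supplied for background only and is not taken from the paper; the parameter names ν₁, ν₂ are chosen to avoid clashing with the paper's a, b, c, d, β.

```latex
\begin{equation*}
  \partial_t u \;=\; u + (1 + i\nu_1)\,\partial_x^2 u - (1 + i\nu_2)\,|u|^2 u ,
\end{equation*}
% u = u(t,x) is complex-valued and \nu_1, \nu_2 are real parameters.
% The paper studies a linear, stochastic, degenerate/singular analogue
% of the linear part of this model.
```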
In fact, the linearized complex Ginzburg–Landau equation also models several other phenomena, such as the amplitude equation in pattern formation and the reaction–diffusion of two chemicals in one dimension [6]. It is noted that many properties of Ginzburg–Landau equations lie between those of parabolic equations and Schrödinger equations. Therefore, this paper is also devoted to the limit behavior, where degenerate Ginzburg–Landau equations, degenerate parabolic equations and degenerate Schrödinger equations are considered simultaneously. In the following, the problem of this paper is stated in detail. Let T > 0 and Q = (0, 1) × (0, T). Assume G₀ = (x₁, x₂) to be a given nonempty open subset of (0, 1), and denote by χ_{G₀} the characteristic function of the set G₀. Fix a complete filtered probability space (Ω, F, {F_t}_{t≥0}, P), on which a one-dimensional standard Brownian motion {B(t)}_{t≥0} is defined such that F = {F_t}_{t≥0} is the natural filtration generated by B(·), augmented by all the P-null sets in F. Let H be a Banach space, and let C([0, T]; H) denote the space of continuous H-valued processes. Moreover, denote by i the imaginary unit, and for any complex number c, denote by c̄, Re c and Im c its complex conjugate, real part and imaginary part, respectively. Consider the following forward linear stochastic complex degenerate/singular Ginzburg–Landau equation: dy − (a + ib)(x^α y_x)_x dt − (c + id)(μ/x^β) y dt = (c₁y + χ_{G₀}u) dt + (c₂y + v) dB(t) in Q, y(t, 1) = 0 on (0, T), y(0, x) = y₀(x) in (0, 1), (1.1) where the complex-valued coefficients satisfy c₁ ∈ L^∞_F(0, T; L^∞((0, 1); C)) and c₂ ∈ L^∞_F(0, T; W^{1,∞}((0, 1); C)). Also, α ∈ [0, 2) and a, b, c, d, μ, β ∈ R satisfy some conditions which will be given later. In (1.1), (u, v) is the control variable, y is the state variable, and y₀ ∈ L²((0, 1); C) is a given initial value. Unless otherwise stated, we assume that all functions mentioned in this paper are complex-valued. Next, we assume that the exponents α, β and the parameter μ satisfy one of the following conditions: sub-critical potentials: 0 < β < 2 − α, or β = 2 − α with μ < μ(α) (1.2); critical potential: β = 2 − α and μ = μ(α) (1.3). We separate out the case where both the exponent β and the parameter μ are critical. In the case of (1.3), the potential is called critical, and otherwise it is called sub-critical. As we shall show later, the case of a critical potential requires a specific functional setting and special care in the derivation of the Carleman estimates. The Carleman-type estimate was first introduced by Carleman to study uniqueness for elliptic equations in [5], and it has become an important tool in studying controllability for stochastic partial differential equations. The main purpose of this paper is to study the null controllability and observability of the forward linear stochastic complex degenerate/singular Ginzburg–Landau equation (1.1). The null controllability of (1.1) is formulated as follows. For any y₀ ∈ L²((0, 1); C), one can find a pair of controls (u, v) ∈ L²_F(0, T; L²(G₀; C)) × L²_F(0, T; L²((0, 1); C)) such that the solution y to (1.1) satisfies y(T) = 0 in (0, 1), P-a.s. On the other hand, the observability of (1.1) is stated as follows. If u = v = 0 in (1.1), find (if possible) a positive generic constant C₁ = C₁(a, b, c, d) such that for any y₀ ∈ L²((0, 1); C), the solution y to (1.1) satisfies the estimate (1.5) below. Observability is one of the most important properties in structural theory. The observability inequality (1.5) means that the terminal value of any solution to (1.1) can be dominated by its local information in G₀ × (0, T).
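Because the displayed formulas did not survive extraction, the following LaTeX sketch records only the generic shape of the null controllability condition and the observability estimate (1.5) described in the text; the precise constants and function spaces are as in the paper.

```latex
% Null controllability of (1.1): for every y_0 there exist controls (u, v) with
\[
  y(T,\cdot)=0 \ \text{in } (0,1), \qquad \mathbb{P}\text{-a.s.}
\]
% Generic shape of the observability estimate (1.5) (u = v = 0): the terminal
% state is dominated by the local observation on G_0 x (0,T).
\[
  \mathbb{E}\,\|y(T)\|^{2}_{L^{2}((0,1);\mathbb{C})}
  \;\le\; C_{1}(a,b,c,d)\,
  \mathbb{E}\int_{0}^{T}\!\!\int_{G_{0}}|y(t,x)|^{2}\,\mathrm{d}x\,\mathrm{d}t .
\]
```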
Such inequalities are closely related to control problems, unique continuation properties and inverse problems. In recent decades, the theory of controllability and observability for deterministic and stochastic uniformly parabolic equations has been developed extensively (see [11,13,14,28] and the references therein). More recently, several papers have been concerned with control problems for deterministic and stochastic degenerate equations (see [4,20,31]). In addition, parabolic equations with singular potentials have also been studied extensively. In this respect, we refer to [9,29,30] for deterministic systems, and to [15,32] for stochastic systems. There are also some known controllability and observability results for deterministic and stochastic complex Ginzburg–Landau equations (see [10–12, 23, 24] and the references therein). However, as far as we know, nothing is known about null controllability and observability for stochastic complex degenerate/singular Ginzburg–Landau equations. In this paper, we study the controllability and observability problems for general forward linear stochastic complex degenerate/singular Ginzburg–Landau equations, for the different critical cases of the exponents α, β and the parameter μ. The controllability result for the forward linear stochastic complex degenerate Ginzburg–Landau equation (1.1) can be stated as follows. The assumption bc = ad is technical; it shows up in some cross terms of the weighted identity, and at present we do not know how to drop it. Remark 1.4. The assumption a > 2c in (H₂) is not needed in the deterministic case. Detailed explanations are given in Remark 3.8. Remark 1.5. In the case of a deterministic degenerate/singular equation, i.e., a = c = 1, b = d = 0, c₂ = v = 0, and choosing all functions in (1.1) to be real-valued, one obtains the null controllability result for forward degenerate/singular parabolic equations from Theorem 1.2. This is the main result of [29]. The corresponding observability estimates for the backward linear stochastic complex degenerate/singular Ginzburg–Landau equation (1.6) are established. Proposition 1.6. Assume that (H₁) or (H₂) holds. Then, for any w_T ∈ L²(Ω, F_T, P; L²((0, 1); C)), the solution to (1.6) satisfies the estimate (1.7). Moreover, in the case of (H₁), C₂(a, b, c, d) is given by (1.8), and the specific form of C₂(a, b, c, d) under (H₂) is (1.9), where K₀ and C₀ will be defined later by (1.10) and (3.4). In the rest of the paper, unless otherwise stated, we denote by C a generic positive constant independent of a, b, c, d, which may change from line to line. Remark 1.7. Choosing α = μ = 0 in (1.1), one obtains the null controllability and observability of the one-dimensional linear stochastic Ginzburg–Landau equation, which is consistent with the results in [11]. Compared with [11], we choose different weight functions and use Hardy inequalities to deal with the difficulties caused by the degeneracy and the singularity. This leads to more complicated assumptions on the coefficients than in [11]. On the other hand, the observability estimates for the forward equation (1.1) are as follows: under the stated assumptions, the observability estimate (1.5) holds for any solution to equation (1.1). Furthermore, in the case of (H₁), C₁(a, b, c, d) is given by (1.11), and its specific form otherwise is (1.12), where K₀ is given by (1.10), and C₀ will be defined later by (3.4). Remark 1.9. Notice that Theorem 1.8 is valid only in the sub-critical case (i.e., (1.2)).
The reason why this result is not available in the critical case (i.e., (1.3)) is that the Carleman estimate we established in this case is based on the H*_α(0, 1)-norm of the solution instead of the H¹_α(0, 1)-norm. Remark 1.10. When system (1.1) reduces to a real-valued forward stochastic degenerate parabolic equation (a = 1, b = c = d = 0), the observability estimates for forward stochastic degenerate parabolic equations can be obtained from the above results. These forms are the same as the known ones given in [20]. From the observability estimate (1.5), the unique continuation property of general forward stochastic degenerate/singular parabolic equations can be obtained immediately. From the observability constants (1.11) and (1.12) in Theorem 1.8, we have the following limit behavior of the coefficients in (1.1). Corollary 1.12. Assume that (H₁) or (H₃) holds. If b, d → 0, then the observability estimate (1.5) also holds for stochastic parabolic equations with degeneracy and singularity. Remark 1.13. It is obvious that blow-up phenomena for the constant C₁(a, b, c, d) could occur when a → 0, which means that the internal observability estimate cannot be obtained by our method for stochastic degenerate Schrödinger equations with singularity. However, the corresponding boundary observability estimate can be derived by Theorem 3.2. The rest of this paper is organized as follows. In Section 2, we give a pointwise weighted identity for the linear stochastic complex degenerate/singular Ginzburg–Landau operator. In Section 3, the global Carleman estimates for the forward and backward linear stochastic degenerate/singular Ginzburg–Landau equations are established. Finally, in Section 4, we prove the main results. A weighted identity for the linear stochastic complex degenerate/singular Ginzburg–Landau operator In this section, we establish a pointwise weighted identity for the linear stochastic complex degenerate/singular Ginzburg–Landau operator (2.1), which will play a crucial role in the sequel. First, define two unbounded operators, where y ∈ H²_loc((0, 1]) means y_xx ∈ L²_loc((0, 1]), in the sub-critical case, i.e., (1.2), and in the critical case, i.e., (1.3), for α ∈ [0, 1) and for α ∈ [1, 2) separately. For a fixed weight function ℓ ∈ C³(Q; R) and auxiliary function Φ ∈ C¹(Q; C), we set θ = e^ℓ and z = θp. Then, by an elementary calculation, we get (2.4). By (2.2) and (2.3), it is easy to check the stated identities. We have the following pointwise weighted identity for the operator L in (2.1). Proof. In [11], by choosing n = 1, a₁₁ = x^α, a₀ = 1, and b₀ = 0, we get the result immediately. From this lemma, we give the proof of Theorem 2.1. Proof of Theorem 2.1. By (2.5), (2.6), and Re(i c̄) = Im c, it is easy to see (2.9). By (2.3), we obtain the next identity. Next, we compute the last two terms on the right-hand side of equality (2.9). By the definition of I₂ in (2.4) and a simple calculation, we obtain one of them. Similarly, by the definition of I₁ in (2.4) and a simple calculation, it holds that (2.14). Finally, combining (2.9)–(2.14) with Lemma 2.2, we get (2.7). (3.5) In the sequel, for any n ∈ N, we denote by O(sⁿ) a function of order sⁿ, for sufficiently large s. Then, we give the following Carleman estimates. (i) Assume that (H₁) holds. Then, there exist two positive constants s₀ = s₀(α, η, μ, a, b, c, d) and C, such that for all s ≥ s₀, every solution p to (3.1) satisfies the stated estimate. (ii) Assume that (H₃) holds.
Then, there exist two positive constants s₁ = s₁(α, η, a, b, c, d) and C, such that for all s ≥ s₁, every solution p to (3.1) satisfies the stated estimate. The condition a ≥ c is not needed in case (ii), but it is necessary in the observability inequality. To avoid confusion about the conditions, we relax the conditions here to be consistent with those for the observability estimates. Step 1. Let us estimate A₁ in (3.11). Recalling z(t, 1) = 0 on (0, T) and the definition of V in (2.8), we obtain the first identity. The justification of the computations may be delicate since we work in non-standard weighted spaces, especially for critical potentials. For this reason, we make computations that can be justified by the regularization process described in [29]. In order to understand the computations related to A₁, it helps to replace z by z_n := θp_n, where p_n is the solution to the regularized problem in which the potential μ/x^β has been replaced by μ/(x + 1/n)^β. Therefore, the quantity that we actually need to compute is the regularized one, where V_n is obtained by replacing z in V by z_n. Therefore, by the definitions of A in (2.4) and Φ in (3.10), we get the corresponding identity, and hence the estimate for A₁. Step 2. Let us estimate A₂ in (3.11). By (3.1) and a > 0, we know the first bound. From the definition of A in (2.4) and noting that |γ_t| ≤ Cγ^{1+1/k}, one can obtain (3.14). From z(0, x) = z(T, x) = 0 in (0, 1), one can see the next identity. We then estimate the term cE∫_Q (μ/x^β)|dz|² dxdt in two cases: the case of a sub-critical exponent 0 < β < 2 − α and the case of a critical exponent β = 2 − α. In either case, one gets the stated bound. Combining the above inequality with (3.13)–(3.16), we can get (3.18). Step 3. Let us estimate A₃ in (3.11). By the definitions of A and Φ, we get the first identity. From (3.10), it is easy to see the next one. By the definitions of E and F in (2.8), and noting that Φ_x = 0, we obtain the subsequent identity, and therefore the intermediate bound. By z(t, 1) = 0 on (0, T), one can get the boundary term; it is easy to check the remaining identity. Therefore, it holds that the stated estimate. By observing that |γ_t| ≤ Cγ^{1+1/k}, |γγ_t| ≤ Cγ³, and |γ_tt| ≤ Cγ^{1+2/k}, one can conclude (3.23). By (3.19)–(3.23), we obtain (3.24). Step 4. In this part, we compute the last term A₄ in (3.11). By the definition of B₁ in (2.8), we get the first identity. We estimate the term A₄ in two cases: the case of a sub-critical exponent 0 < β < 2 − α and the case of a critical exponent β = 2 − α. (ii) Assume that (H₂) holds. Then, there exist two positive constants s₄ = s₄(α, η, a, b, c, d) and C, such that for all s ≥ s₄, every solution (h, H) to (3.29) satisfies the stated estimate. Remark 3.8. It is worth noting that (i) only needs a > 0, but (ii) needs a > 2c. The reason is that, in order to remove the term containing |H_x|² in (i), we can use the improved Hardy–Poincaré inequality (see Lem. 3.2) to choose the desired coefficients. However, the coefficients of the singular terms in (ii) can only be μ(α). If the stochastic equation reduces to the deterministic case, the condition a > 2c is not necessary. Remark 3.9. Similarly to (ii) in Theorem 3.2, we can also get the corresponding estimates for μ < μ(α) and for μ = μ(α). Proof of Theorem 3.7. The proof is similar to the proof of Theorem 3.2. The main difference lies in Step 2, where a and c are replaced by −a and −c; we only prove Step 2 here. Step 2. Let us estimate Ã₂, where Ã₂ is obtained from A₂ in (3.11) by replacing a, c with −a, −c.
From (3.29) and a > 0, we know the first bound. From the definition of A in (2.4) and |γ_t| ≤ Cγ^{1+1/k}, it is easy to see the next estimate. Further, for any ε > 0, one can obtain the subsequent bound. On the other hand, similarly to (3.16), one can see (3.37). We then estimate the term −cE∫_Q (μ/x^β)|dz|² dxdt in two cases: the case of a sub-critical exponent 0 < β < 2 − α and the case of a critical exponent β = 2 − α. Proofs of the main results In this section, we give the proofs of the controllability and observability results for the forward linear stochastic complex degenerate/singular Ginzburg–Landau equation (1.1). First, by the standard duality technique ([28]) and the observability estimate (1.7), the null controllability result in Theorem 1.2 can be obtained immediately. Therefore, we only need to prove Proposition 1.6. Proof of Proposition 1.6. In the case of (H₁), choose a cut-off function ξ ∈ C^∞(R; [0, 1]); it is easy to see that G₁ ⊆ G₀. Similarly to the proof of (4.4), we can get the first estimate, and it follows that the next one holds. On the other hand, notice that d(|w|²) = w̄ dw + w dw̄ + |dw|². Hence, for any 0 ≤ t₁ ≤ t₂ ≤ T, the corresponding identity holds. By Gronwall's inequality, it follows that the stated bound holds. Combining the above with (4.8), we have the desired estimate. Notice that Ĉ₃e^{C(1+aK₀)} is C₂(a, b, c, d) in (1.8). Then, we have completed the proof for the case of (H₁). In the case of (H₂), similarly to (H₁), we know the analogous bound involving ∫ x^α |w_x|² dxdt. Combining the above with (4.14), we get the next estimate. Notice that C₁e^{C(1+aK₀)} is C₁(a, b, c, d) in (1.11). For the case of (H₁), we have completed the proof. Combining the above with (4.17), we have the final estimate. Notice that C₂ is C₁(a, b, c, d) in (1.12). The proof of Theorem 1.8 is completed.
4,751.6
2023-01-03T00:00:00.000
[ "Mathematics" ]
Machine learning building price prediction with green building determinant In the Malaysian real estate industry, GB is still in its infancy, and its valuation is not integrated into standard property valuation [7]. The valuation standard only provides for the valuation of the property and buildings, which may not have a sufficient definition to include GB development [8][9]. This creates difficulties for valuers in adapting the conventional method of valuation to indicate and predict the price of GB accurately [10]. A further problem concerns real estate transaction data. It has been stated that valuers often face difficulties in predicting property prices over time [11], especially given the limited transaction data evidence for GB valuation, because GB development is relatively new in Malaysia and comparatively new in the real estate industry [12][13]. The valuation of non-GBs often depends on leasing or sales transaction data from several properties provided by JPPH, and such data is abundant. It is important to realise that valuers face various challenges because of their heavy dependency on market data. Lack of data means lack of support for the valuable contribution of green attributes, which are supposed to be factors influencing the GB price. Indeed, the real estate market is exposed to many price fluctuations due to existing correlations with many variables, some of which are beyond our control or perhaps unknown [14]. In light of this situation, the machine learning (ML) model has emerged as a very promising approach to resolving the issue, and it has proven effective in different kinds of prediction and classification problems [15][16][17]. ML offers different kinds of algorithms and techniques that can be selected for developing a good predictor model. These are beneficial for resolving dataset problems such as imbalanced and insufficient data, like the limited sale transaction data evidence for GB valuation. However, the accuracy of the results produced by an ML prediction model is highly dependent on many factors, including the tuning of the algorithms' hyper-parameters and the different groups of features selected. Thus, this paper reports the design and implementation of a machine learning model based on automatic hyper-parameter tuning and different groups of feature selection. The contribution of this paper is two-fold. Firstly, it introduces the design and implementation of a machine learning model with auto hyper-parameter tuning. In the methodology part, this paper provides the technique of auto hyper-parameter tuning using the best-estimator function provided by the Python Scikit-Learn library. Secondly, it presents how the GB determinant affects machine learning performance in predicting building prices, based on a real dataset of the Kuala Lumpur district in Malaysia. The structure of this paper is as follows. Section II focuses on the background of the study related to ML in real estate prediction and ML algorithms. Section III describes the research methodology, followed by a discussion of the results in Section IV. The concluding remarks are given in the last section. BACKGROUND OF THE STUDY 2.1. Machine learning for real estate prediction Accurate evaluation of property prices is crucial for real estate, the stock market, the tax sector, the economy and the purchasing power of buyers [18]. The conventional method is limited to the scope of the data in current systems that needs to be taken into account.
Normally, predicting the price of a property is done through basic comparative market analysis, using similar real estate in the same area to provide an approximate price for a particular property [19]. But in the GB context, the other factors that can contribute a positive impact or added value to the GB price should also be considered, so as to produce an accurate price that reflects the current market value [20]. This will only happen if the valuer considers the historical factors in predicting the price of the GB. ML is seen to have the potential to account for those factors and problems [14]. The common ML modelling techniques that have already been implemented in real estate problems are Linear Regression [21][22][23], Decision Tree [24][25][26][27], Random Forest [21,[28][29], Ridge Regression [30] and Lasso Regression [24,31]. All these algorithms are used for prediction on real estate datasets, and the researchers test all of them in order to predict green building prices. Machine learning algorithms There are five (5) ML algorithms used in this study, namely the Linear Regression, Decision Tree, Random Forest, Ridge and Lasso algorithms. Linear Regression (LR) is one of the most well-understood and well-known algorithms in ML and statistics. It is a predictive model mainly concerned with minimising the error so as to make the most accurate prediction possible of the dataset. The representation of the LR algorithm is an equation describing the line which best fits the relationship between the output variable (y) and the input variable (x), by finding the exact weighting for the input variable, called the coefficient (B) [32]. Formula (1) represents the Linear Regression algorithm: Y = B₀ + B₁x. (1) In this formula, Y is the dependent variable (DV) for the given input x, which is the independent variable (IV). The main goal of the Linear Regression algorithm is to find the values of the coefficients B₀ and B₁ [21][22]25]. Due to the simplicity of the algorithm, Linear Regression has been commonly used in real estate prediction problems [13][14][15]. Decision Tree (DT) is another common model used to solve regression and classification problems [33]. The algorithm produces a tree structure that includes a root node and branches. Each internal node stands for a test on an attribute, each branch denotes the outcome of a test (a decision node), and each leaf node holds a class label (a terminal node). The topmost node in the tree is called the root node [33][34], as presented in Figure 1. Previous research indicates that the DT algorithm can provide higher accuracy on some datasets compared to other algorithms like Lasso [24]. DT has no problems approximating linear relationships between the independent and dependent variables [25][26], and the algorithm performs well in prediction tasks. Random Forest (RF) is an advanced tree-structured extension of the DT [35][36][37][38]. It is a type of ensemble ML model based on bootstrap aggregation (bagging). The bootstrap is a powerful statistical method for estimating a quantity from a data sample, such as the mean. An RF model takes many data samples, calculates the mean of each, then averages all of the mean values to give a better estimate of the true mean value [39].
Several studies have demonstrated that RF mostly outperforms many other algorithms in dealing with problems related to property prices [21,[28][29]. The Ridge algorithm is an ML model used for analysing multiple regression datasets that suffer from multicollinearity. Multicollinearity, also called collinearity, refers to a situation in which two or more informative variables in a multiple regression are highly related. Ridge Regression handles this problem by adding a degree of bias to the regression estimates. Ridge Regression is a model that forces the coefficients to be lower but does not force them to be zero; it will not get rid of irrelevant features but rather minimises their impact on the trained model [40]. To avoid overfitting, the Ridge Regression algorithm performs the L2 regularisation stated in the formula, whereas the Lasso algorithm uses L1 regularisation [41]. Equation (2) denotes the Ridge algorithm: Y = XB + ε. (2) In this formula, Y denotes the DV, X the IV, and B represents the regression coefficients to be predicted [40]; ε represents the residual errors. Some research shows that Ridge Regression can perform worse than Linear Regression, even though Ridge Regression is designed to handle multicollinearity in modelling house prices [30]. In another study on house price prediction, Lasso Regression outperformed the Ridge algorithm in handling multicollinearity. Furthermore, in real estate value prediction using multiple algorithms, models can overfit their dataset when using Ridge Regression [42]. The Lasso algorithm stands for Least Absolute Selection and Shrinkage Operator, and it can perform both feature selection and regularisation. The only difference of the Lasso algorithm from the Ridge Regression algorithm is that its regularisation term is in absolute value. It is set to constrain the sum of the absolute values of the model parameters, where the sum must be less than a fixed value [43][44]. Besides that, the Lasso algorithm applies a shrinking (regularisation) process in which it penalizes the coefficients of the regression variables, shrinking some of them to zero if they are not relevant. Indeed, this process is applied to minimise the prediction error. Research in [24] has demonstrated the potential of the Lasso algorithm to produce higher accuracy than Linear Regression and Decision Tree within the scope of that study. The algorithm was employed in predicting house prices in Ames, Iowa in the United States using real estate data from 2016 to 2020, and it was found that the Lasso algorithm outperformed the Ridge algorithm in this case [30]. The researchers also mentioned that Lasso is very useful for feature selection and for eliminating useless features.
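As a concrete illustration of the five algorithms and the R²/RMSE metrics used later in the paper, here is a minimal scikit-learn sketch. The synthetic data, feature count and default hyper-parameters are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: 18 features (as in the paper) and a synthetic target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))
y = X[:, 0] * 3 + rng.normal(size=500)

# 80:20 training/validation split, matching the paper's experiment setup.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Linear": LinearRegression(),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
    "Ridge": Ridge(),
    "Lasso": Lasso(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_va)
    rmse = mean_squared_error(y_va, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_va, pred):.3f} RMSE={rmse:.3f}")
```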
METHODOLOGY 3.1. Dataset The dataset is a collection of housing prices in 2018 with determinants that include GB. As this paper uses machine learning prediction, these variables are called features. Table 1 shows the set of features used to develop the machine learning prediction model. This study uses 18 features as independent variables (IV) for predicting the Transaction Price as the dependent variable (DV). Feature selection Figure 2 shows the Pearson correlation of all features with the DV, computed with Python code. All the IVs have a very weak correlation with the Transaction Price. The GB variable has the highest correlation among the features, but 0.25 is considered weak. However, even with a very weak correlation, it was anticipated in the study that to some degree the features still contribute useful information to the model. There are several approaches to selecting features for a machine learning model. They can be divided according to the features' correlation level or based on the features' types or purposes. In this study, features were divided into three groups, namely without GB, GB only, and GB with other features. Machine learning algorithms with auto hyper-parameter tuning The five algorithms explained in Section 2.2, namely the Random Forest Regressor, Decision Tree Regressor, Ridge, Lasso and Linear Regression, were used in this study. Prior to producing the prediction results, auto hyper-parameter tuning was implemented on the training dataset by calling the best-estimator method of the grid search in the Python Scikit-Learn library. The method uses grid-search optimization of hyper-parameter tuning on the given machine learning algorithm. This is the easiest and fastest way for an inexpert data scientist to get suggested parameter configurations for the algorithms. The steps of implementing the auto hyper-parameter tuning are as follows (a code sketch of these steps is given after this section): 1. Call the regressor algorithm. 2. Create a dictionary and define initial parameters for the algorithm with the corresponding set of values. 3. Call the grid search method, passing the created dictionary. 4. Do preliminary training of the algorithm with the grid search instance and get the parameter estimator. 5. Set the algorithm with the suggested parameters. 6. Perform another training run with the suggested parameters. 7. Validate the prediction values produced by the algorithm and get the score. Experiment configuration In this study, the training and validation datasets were divided in the ratio 80:20. The Python 3.6 Jupyter Notebook platform with an Intel i7 7th-generation processor and 16 GB RAM was used. Each model was run for five experiments and the average results of the metrics were calculated for comparison. The metrics used to present the performance of the machine learning algorithms are R squared (R²) and root mean squared error (RMSE). R² explains how well the selected features predict the dependent variable, while RMSE represents the sample standard deviation of the differences between the predicted and real values. The range of values for R² is between 0 and 1, with higher being better. Meanwhile, a lower RMSE value indicates smaller errors or differences in the prediction results. RESULT The results are presented in different tables according to the three groups of feature selection, namely without GB, GB only, and GB with other features. The average results from the five experiments with each machine learning model were calculated and recorded. The results of the model without GB features are presented in Table 2. Without the GB determinant, only the Random Forest Regressor could produce an acceptable result. The algorithm had the lowest RMSE (0.027) and the highest coefficient of determination as given by R² (0.69). The mean R² of the other algorithms was very weak, but the error distances of each algorithm are considered promising. Table 3 presents the mean R² and RMSE for the tested algorithms with the GB determinant only.
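The code sketch referred to in the tuning steps above: a minimal GridSearchCV illustration of steps 1-7 using the best_estimator_ attribute. The parameter grid and data are illustrative assumptions, not the grid used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for the Kuala Lumpur dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))
y = X[:, 0] * 3 + rng.normal(size=500)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

# Steps 1-2: call the regressor and define an initial parameter dictionary.
param_grid = {"n_estimators": [100, 200], "max_depth": [None, 5, 10]}
# Steps 3-4: grid search over the dictionary, with preliminary training.
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=5)
search.fit(X_tr, y_tr)
# Steps 5-6: set the algorithm with the suggested parameters and retrain.
best_model = search.best_estimator_
best_model.fit(X_tr, y_tr)
# Step 7: validate the prediction and get the score (R squared by default).
print(search.best_params_, round(best_model.score(X_va, y_va), 3))
```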
Similarly, the Random Forest regressor outperformed the other algorithms, but the values for RMSE and R² were not as good as those in Table 2. The performance of the Random Forest regressor dropped when it depended only on the GB determinant. However, not much difference could be seen in the other algorithms. Lastly, Table 4 lists the results of each algorithm when tested with all determinants, combining GB and others. Combining GB with other features in the models does not show a significant improvement for any of the tested algorithms. A slightly better performance can be seen in the Decision Tree regressor's R². CONCLUSION Within the scope of this study, it can be concluded that the GB determinant has not contributed much to the performance of the machine learning models, even though its correlation with the building price is higher than that of the other determinants. Moreover, the worst results of all algorithms were produced by the model with the single GB determinant. Among the five selected algorithms, only the Random Forest regressor shows a consistent performance across all the groups of feature selection. Therefore, the Random Forest regressor can be further enhanced in future research for the same case of building price prediction. Suraya received her bachelor's degree in Computer Science, majoring in software engineering, from UTM. She later pursued her master's degree in Computer Science at UPM. She first started her career in industry when she was employed as an Associate Network Engineer by Ramgate Systems Sdn. Bhd in June 1996. She started her career as a full-time lecturer with UTM after receiving her master's degree. In three years of service at UTM she managed to complete two research projects funded by the university. She was offered a position at Universiti Teknologi MARA (UiTM), Seri Iskandar, Perak, Malaysia, in 2004, which she gladly accepted, and she has been lecturing in Computer Science subjects there ever since. In 2015, she received her PhD in Information Technology and Quantitative Sciences. At UiTM, she has so far managed to complete more than fifteen research projects and is currently active with three research grants.
3,572.8
2020-09-01T00:00:00.000
[ "Computer Science" ]
Reliability Measure of a Relay Parallel System under Dependence Conditions In a relay system of dependent components, the failure-to-close reliability measure is given as a Girsanov transform of the failure-to-open reliability measure. Introduction As in Barlow and Proschan [1], a complex coherent reliability system is completely characterized by its structure function φ, assuming values in {0, 1}, where the stochastic process X_i(t), also assuming values in {0, 1}, represents the state of the i-th component: X_i(t) = 1 if component i is working at time t and X_i(t) = 0 if component i is in a failed state at time t. The system state φ(X(t)) has the same interpretation; φ is increasing in each coordinate and each component is relevant, that is, there is a time t and a configuration of X(t) in which the functioning of component i is fundamental for the functioning of the system. A relay system is subject to two kinds of failure: failure to close and failure to open. Similarly, circuits constructed from these relays are subject to the same kinds of failure. If X_i = 1 when the i-th relay responds correctly to a command to close (that is, closes), 0 otherwise, and if the circuit responds correctly to a command to close (that is, closes) if, and only if, at least one of its components responds correctly to a command to close, then φ is a parallel system. Next, let Y_i = 1 if the i-th relay responds correctly to a command to open (that is, opens), and Y_i = 0 otherwise, i = 1, 2; the dual structure is defined accordingly. The concept of a dual structure is useful in analyzing systems of components subject to two kinds of failure, such as relay systems and safety monitoring systems. It is interesting and useful to note that both failure to close and failure to open can be analyzed using the same structure function φ. In this paper, under dependence conditions, we analyze the dual structure probability measure of a parallel system, through a transform of the compensators of the failure counting processes in the original probability space. In Section 2 we analyse the problem for a parallel system of two components. In Section 3 we generalize the results, and in Section 4 we discuss some reliability preservation properties. Without loss of generality, we first consider a parallel system of two components. We observe two component lifetimes T and S, which are positive random variables defined on a complete probability space (Ω, F, {F_t}_{t≥0}, P) satisfying Dellacherie's conditions. In order to simplify the notation, in this paper we assume that relations such as those between random variables and measurable sets always hold with probability one. In what follows we assume that S and T are totally inaccessible F_t-stopping times and that P(S = T) = 0, that is, the lifetimes can be dependent but simultaneous failures are ruled out. The parallel operation of S and T is defined by the maximum of S and T. If we denote the survival functions of S and T accordingly, it follows from Arjas and Yashin [2] that, under some conditions, the F_t-compensator processes A(t) and B(t) of the failure counting processes are given explicitly. We assume such conditions, and since S and T are totally inaccessible F_t-stopping times, the compensator processes are continuous. Now we calculate the compensator which uniquely characterizes the lifetime of a series system and, therefore, the dual of such a parallel system. As this operation is symmetric in S and T, the idea is to combine compensator transformations in A(t) and B(t).
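For reference, the two-component parallel structure and its dual, as used throughout this passage, take the standard Barlow-Proschan form sketched below.

```latex
% Two-component parallel structure and its dual (a series structure):
\[
  \phi(x_1,x_2) \;=\; 1-(1-x_1)(1-x_2) \;=\; x_1 \vee x_2,
\]
\[
  \phi^{D}(x_1,x_2) \;=\; 1-\phi(1-x_1,\,1-x_2) \;=\; x_1\,x_2 \;=\; x_1 \wedge x_2 .
\]
% Accordingly the parallel lifetime is S \vee T = \max(S,T), while the dual
% (series) lifetime is S \wedge T = \min(S,T).
```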
Firstly, we consider a compensator transform of B(t). To prove the main theorem of this section we use the following lemma. Lemma 2.1. Under the above hypotheses, the transformed process is a martingale. Proof: we consider a localization sequence of F_t-stopping times; it is sufficient to prove the martingale property for the stopped processes, which follows since the relevant integrand is bounded. Secondly, we consider the corresponding compensator transform of A(t), and with the same argument used to prove Lemma 2.1 we can prove Lemma 2.2. Lemma 2.2. Under the above hypotheses, the analogous transformed process is a martingale. Observe that the same expression for the F_t-compensator of the series lifetime is obtained through either transformation. Then, we propose the compensator transforms needed to prove the main theorem. Theorem 2.3. Under the above hypotheses, the transformed compensator processes are continuous and the stated identity holds. Under certain conditions, it is possible to find Q. Indeed, assume that the relevant process is uniformly integrable. Then it follows from well-known results on point-process martingales (the Girsanov theorem; Brémaud [3]) that the desired measure Q is given by a Radon–Nikodym derivative. Remark: In the case where T and S are identically distributed, the compensator transform reduces to the one used in Bueno and Carmo [4] to define the active redundancy operation when the component and the spare are dependent but identically distributed. Parallel operations are very important in reliability theory: the performance of a parallel system is always at least as good as the performance of any coherent system with the same components, and parallel operations are used in replacement models to optimize system reliability through active redundancy. However, if the components are stochastically dependent, the reliability of a coherent system is a difficult and tedious calculation involving multivariate distributions. The calculation becomes more tractable under the assumption of a series system, in which case the reliability is the survival function of a multivariate positive random vector. It can also be done easily through the compensator processes. We can show this easily: Corollary 2.4. Under the hypotheses of Theorem 2.3, and under Q, the stated identity holds. Proof. As the compensators are deterministic before any failure, we can write the result directly.
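The displayed Radon-Nikodym formula did not survive extraction. As a hedged sketch, point-process Girsanov theorems of the type invoked here (Brémaud [3]) express the measure change through an exponential likelihood-ratio martingale of the following generic form, where λ and λ̃ denote the compensator intensities under P and Q and the T_n are the jump times; the paper's exact expression may differ.

```latex
\[
  \left.\frac{\mathrm{d}Q}{\mathrm{d}P}\right|_{\mathcal{F}_t}
  \;=\; L_t
  \;=\; \Bigg(\prod_{n\,:\,T_n \le t} \frac{\tilde\lambda(T_n)}{\lambda(T_n)}\Bigg)
  \exp\!\left(\int_0^t \big(\lambda(s)-\tilde\lambda(s)\big)\,\mathrm{d}s\right).
\]
```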
As an application we calculate the Barlow and Proschan [1] reliability importance of a component for the system. In the independent and absolutely continuous case, it is the probability that the component causes the system failure. For dependent components, this quantity is expressed through expectations of the compensator processes evaluated at the failure times. This expression is an extension of the Barlow and Proschan reliability importance given by Bueno and Menezes [5], where the importance of component S for the system is expressed through the compensator B. Now, we ask how we can use Corollary 2.4 to calculate the Barlow and Proschan reliability importance of component S for the system reliability. In a series representation we obtain the corresponding expression. We expected such a relation, since the reliability of the dual system is obtained by reversing the roles of working and failed states. To clarify this argument, suppose that we can define reverse times S* and T* such that the corresponding events are equivalent; then the duality becomes explicit. A General Parallel System The structural relationship between the lifetime of a parallel system and its component lifetimes is given by the maximum of the component lifetimes. We intend to define a compensator transform to characterize the parallel system as a series system. As in Section 2, the idea is to combine compensator transformations on the compensator processes of the lifetime failure counting processes of the components. Under the above hypotheses and notation, the main result, which follows from an adaptation of the Girsanov theorem, is Theorem 3.1: under the above hypotheses, the transformed process is a nonnegative martingale; moreover, if the components are dependent but identically distributed, the transform simplifies accordingly. As in Barlow and Proschan [1], we assume the series–parallel decomposition of a coherent system in terms of its minimal cut sets, a minimal cut set being a minimal set of components whose joint failure causes the system to fail. We can also define, for each minimal cut set, the minimal parallel cut structure. Therefore, using the compensator series transformation for each cut set, we obtain the series transformation for the system. Preservation Results In many reliability situations, we encounter structures of coherent systems where components share a load, so that the failure of one component results in an increased load on each of the remaining components. Furthermore, the components in a coherent system may be subjected to the same set of stresses. In such cases, the random variables of interest are not independent but rather associated. Therefore, it is very interesting to verify whether the association properties of the lifetimes under P are preserved under Q. We introduce the association definition. In particular, this definition, formulated by Esary, Proschan and Walkup [6] and Esary and Proschan [7], is useful for producing upper and lower bounds on system reliability. The measure Q preserves this property from the measure P. Theorem 4.2. If T is associated under P, then it is also associated under Q. Proof. We consider the relevant upper sets and the uniformly integrable martingales involved. It follows that there exists a unique predictable process such that the covariance process is well defined. Since a martingale has constant expectation, the result follows.
Classes of non-parametric distributions, such as increasing (decreasing) failure rate (IFR (DFR)) distributions, new better (worse) than used (NBU (NWU)) distributions and others, have been extensively investigated in reliability theory. They can be used to assess the benefit of a maintenance operation or to derive bounds on system reliability. Several extensions of these concepts have appeared in the literature, e.g. Harris [8], Barlow and Proschan [3], Marshall [9] and others. However, they all have in common that they do not order the lifetime vectors in the sense of stochastic order as the univariate concepts do; this motivated the multivariate notions of Arjas [10]. In order to introduce the concepts of Arjas [10], we define the residual lifetime at a given time; it is a measurable random variable and can be written as a suitable approximating step function. Using the dominated convergence theorem under P, the corresponding limit statements follow. Conclusions Relay systems are of great interest in reliability theory; however, modeling them is complicated under stochastic dependence conditions. The purpose of this paper is to provide a way to work with this situation using a point-process martingale approach. Some important preservation results for association and for non-parametric distribution classes useful in reliability theory are proved, and an application to component importance is analysed. Let Y = 1 if the circuit responds correctly to a command to open (that is, opens), and 0 otherwise; then the dual structure represents the correct system response to a command to open, and the dual of a parallel system is a series system. Generally, given a structure function φ, its dual is φ^D(x) = 1 − φ(1 − x). By Lemma 2.2 and the Stieltjes differentiation rule, the corresponding identities follow. A random vector is associated if, and only if, the defining covariance inequality holds for the class of upper sets, the components being observed continuously in time through a family of sub-σ-algebras. It was proved that the class of upper sets in the above definition can be restricted to the class of finite unions of open upper sets with corner points, with the equivalent definition of MIFR in terms of open upper sets with corner points as in the above remark.
2,658.6
2013-01-30T00:00:00.000
[ "Engineering" ]
Constructive General Bounded Integral Control This note proposes a systematic and more generic method to construct general bounded integral control. It is established by defining three new function sets and citing two existing function sets to construct three kinds of general bounded integral control actions and integrators, resorting to a universal strategy to transform ordinary control into general integral control, and adopting the Lyapunov method to analyze the stability of the closed-loop system. A universal theorem to ensure regionally as well as semi-globally asymptotic stability is provided in terms of some bounded information, and it does not even need exact knowledge of the Lyapunov function. One feature is that the indispensable element used to construct the general integrator can be taken as any integrable function which satisfies a Lipschitz condition and whose self-excited integral dynamic is asymptotically stable. Another feature is that the method to construct the general bounded integral control action is extended to a wider function set. Based on this method, control engineers not only can choose the most appropriate control law at hand but also have more freedom to construct the bounded integral control actions and integrators, and then a high-performance integral controller is more easily found. As a result, the generalization of bounded integral control is achieved. Introduction In 2009, the idea of general integral control, which uses all available state variables to design the integrator, was first proposed by [1], which presented some general integrators and controllers. However, their justification was not verified by mathematical analysis. In 2012, general integral control design based on linear system theory was presented by [2], where a linear combination of all the states of the dynamics was used as the integrator.
The results, however, were local. The regional as well as semi-global results were proposed in [3], where the sliding-mode manifold was used as the integrator, and general integral control design was achieved by using sliding-mode techniques and linear system theory. In 2013, a class of nonlinear integrators, shaped by a diffeomorphism, was proposed by [4], where feedback linearization was used to analyze the closed-loop stability. General concave integral control was proposed in [5], where a class of concave-function-gain integrators is presented and the partial derivative of the Lyapunov function is introduced into the integrator design. In consideration of the twinning of the concave and convex concepts, general convex integral control was proposed by [6], where the method to design the convex-function-gain integrator is presented; its highlight is that the integral control action appears to be unbounded but is in fact finite in the time domain. Although general concave and convex integral control are both bounded integral control, one major limitation of them is that the indispensable element of the integrator is limited to the partial derivative of the Lyapunov function; another is that the function sets used to design the general concave and convex integrators and integral control actions were limited to only two kinds of function sets. These two limitations become a serious obstruction to designing a high-performance integral controller. In addition, a generalization of the integrator and integral control action, achieved by defining two function sets, was proposed by [7]; one drawback is that its integral control action can tend to infinity. In consideration of the limitations of general concave and convex integral control, the aim of this paper is to propose a systematic and more generic method to construct general bounded integral control such that, for a particular application, control engineers not only can choose the most appropriate control law at hand but also have more freedom to construct the bounded integral control action and integrator. The main contributions are as follows: 1) three new function sets are defined; 2) three kinds of methods to construct general bounded integral control actions and integrators are proposed; 3) the indispensable element used to construct the integrator is not confined to the partial derivative of the Lyapunov function [5] [6] or the function set of [7], and can be taken as any integrable function which satisfies a Lipschitz condition and whose self-excited integral dynamic is asymptotically stable; 4) the function sets used to construct the bounded integral control action have a wider range of choice than the corresponding function sets proposed by [5]-[7]; 5) a class of positive definite bounded gain functions is introduced into the integrator, which provides the designer with additional degrees of freedom to improve the control performance; 6) exact knowledge of the Lyapunov function is not necessary; it only needs to satisfy some bounded information; 7) by using the Lyapunov method and LaSalle's invariance principle, a universal theorem to ensure regionally as well as semi-globally asymptotic stability is established. As a result, the generalization of bounded integral control is achieved.
Throughout this paper, we use the following notation: the norm of a vector x is denoted ‖x‖, that of a matrix A is the corresponding induced norm ‖A‖, and λ_m(A(x)) and λ_M(A(x)) denote the smallest and largest eigenvalues, respectively, of a symmetric positive definite bounded matrix A(x), for any x ∈ Rⁿ. The remainder of the paper is organized as follows. Section 2 describes the system under consideration, the assumptions, the definitions and the proofs of the lemmas. Section 3 addresses the method to construct general bounded integral control. Conclusions are presented in Section 4. Problem Formulation Consider the nonlinear system (1), where w denotes unknown constant parameters and disturbances. The functions f, g and h are continuous in (x, w, u) on the whole control domain. Let r be a vector of constant references, and set ϑ accordingly. We want to design a feedback control law u such that the output tracks the reference. Assumption 1: For each ϑ ∈ D_ϑ, there is a unique pair (x₀, u₀) that depends continuously on ϑ and satisfies the equilibrium equations, so that x₀ is the desired equilibrium point and u₀ is the steady-state control needed to maintain equilibrium at x₀, where y = r. Without loss of generality, we state all definitions, theorems and assumptions for the case when the equilibrium point is at the origin of Rⁿ, that is, x₀ = 0. Assumption 2: Without loss of generality, suppose that the function g(x, w) satisfies the stated inequalities, where l_g^x is a positive constant. Assumption 3: Suppose that there exists a control law u_x(x) such that x = 0 is an exponentially stable equilibrium point of the system (5) and the inequality (6) holds, and that there exists a Lyapunov function V_x(x) satisfying the stated inequalities for all x, where l_f, c₁, c₂, c₃ and c₄ are all positive constants. For the purpose of this paper, it is convenient to introduce the following definitions and lemmas. For convenient comparison with general concave and convex integral control, it is necessary to explain that the following Definitions 1 and 2 were proposed in [5] and [6], respectively. Definition 1: F_ϕ denotes the set of all continuous differential increasing bounded functions [5], where |·| stands for the absolute value. Figure 1 depicts the region allowed for one component of functions belonging to the function set F_ϕ. For instance, for all x ∈ R, the hyperbolic tangent function, the arc tangent function, the Amosin function [8] and so on all belong to the function set F_ϕ. Definition 2: F_φ denotes the set of all continuous differential increasing functions [6] such that, given any ε > 0, there exists a positive constant a_φ satisfying the stated bound, where |·| stands for the absolute value. Figure 2 describes an example curve and the region allowed for the derivative reciprocal of one component of functions belonging to the function set F_φ; for instance, for all x ∈ R, the listed functions all belong to F_φ. Definition 3: F_ψ denotes the set of continuous differential increasing functions subject to weaker limiting conditions; Figure 3 depicts example curves of functions belonging to F_ψ. Definition 4: F_β denotes the set of functions such that, given any ε > 0, there exists a_β satisfying the stated bound; Figure 4 depicts example curves and the region allowed for one component of functions belonging to F_β. Definition 5: F_v denotes the set of functions v such that x = 0 is an asymptotically stable equilibrium point of the self-excited integral dynamic, where l_v^x is a positive constant. For instance, for all x ∈ R, the functions x, tanh x, sinh x and so on all belong to the function set F_v. Lemma 1: Let the function belong to F_φ; then its integral function [6] is a positive definite bounded increasing function, where c_∞ is the limit of y(t) as t → ∞. For its proof, see [6].
Lemma 2: Let the functions belong to the sets above; then the function z(t) is bounded. Proof: by the definition of z(t) and Definition 5, and then using Lemma 1, we obtain that z(t) is bounded. Discussion 1: Comparing the two function sets F_ϕ and F_φ proposed by [5] [6] with the function set F_ψ, it is not hard to see that, although all of them consist of continuous differential increasing functions, the main difference is that the limiting conditions of the function set F_ψ are weaker than those of the function sets F_φ and F_ϕ. Thus, the function set F_ψ completely includes all functions belonging to the two function sets F_ϕ and F_φ. Discussion 2: Comparing the function set of [7], which was used to generalize the integral control action, with the function set F_ψ, the difference is the limiting condition on their derivatives: the former demands a stronger derivative bound, whereas the latter only requires a weaker growth condition on ψ(x). Thus, the function set F_ψ not only includes all functions belonging to the function set proposed by [7] but also offers a wider range of choice. Discussion 3: Comparing the function set of [7], which was used to generalize the integrator, with the function set F_v, the difference is that the former is defined by resorting to the Mean Value Theorem and therefore requires the function to be differentiable, whereas the latter is defined by designing a self-excited integral dynamic and only demands that its origin be asymptotically stable, so the differentiability condition is removed. Thus, the function set F_v not only includes all functions belonging to the function set proposed by [7] but also offers a wider range of choice. Discussion 4: It is obvious that the bound on the function z(t) obtained by Lemma 2 is too conservative and may even not be of interest. The situation, however, is not as bad as it might seem. As shown by Figure 2 and Figure 3, we can use a_β or a_φ as its approximate value in practice, for ε small enough.
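As a purely illustrative numerical aside (not part of the original analysis), the canonical members of the bounded increasing function sets named in Definitions 1 and 2, such as tanh and arctan, can be checked for boundedness and monotonicity as follows.

```python
import numpy as np

# Canonical members of the bounded increasing function set (e.g. F_phi):
candidates = {"tanh": np.tanh, "arctan": np.arctan}

x = np.linspace(-50.0, 50.0, 10_001)
for name, f in candidates.items():
    y = f(x)
    increasing = bool(np.all(np.diff(y) >= 0))  # monotonicity on the grid
    sup_abs = float(np.max(np.abs(y)))          # empirical bound
    print(f"{name}: increasing={increasing}, sup|f| ~ {sup_abs:.3f}, f(0)={f(0.0):.1f}")
```

The run shows tanh bounded by 1 and arctan by pi/2, both increasing and vanishing at the origin, which is exactly the behavior the bounded integral control action exploits.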
Constructive Method In general, an integral controller comprises three components: the stabilizing controller, the integral control action and the integrator. Thus, a general integral controller can be given as in (10), where u_x(x) is an ordinary control law; K_σ is a positive definite diagonal matrix; Φ is a continuous differential increasing function with Φ(0) = 0; and μ(σ) is a positive constant vector or positive definite vector function. Thus, substituting (10) into (1), we obtain (11). By Assumption 1, choosing K_σ to be nonsingular and large enough, and setting the derivatives to zero in (11), we obtain (12). Therefore, we ensure that there is a unique solution σ₀, and then (0, σ₀) is the unique equilibrium point of the closed-loop system (11) in the control domain of interest. At the equilibrium point, y = r, irrespective of the value of w. Now, the design task is to provide methods to construct the bounded integral control action and integrator in the control law (10) such that Φ is bounded and (0, σ₀) is an asymptotically stable equilibrium point of the closed-loop system (11) in the control domain of interest. To achieve this objective, the methods can be summarized as follows. Method 1: If we choose Φ(σ) ∈ F_ϕ, then by the definition of F_ϕ it is easy to see that the integral control action is bounded for all σ ∈ Rᵐ. Thus, μ(σ) can be taken as any positive definite bounded vector function or positive constant vector. As a result, the generalization of general concave integral control is achieved. Method 2: If we choose Φ(σ) ∈ F_φ, then, by Lemmas 1 and 2, it is easy to see that this kind of integral control action is bounded in the time domain. As a result, the generalization of general convex integral control is achieved. Method 3: If we choose Φ(σ) ∈ F_ψ, constructive general bounded integral control can be divided into two cases: 1) if Φ(σ) is bounded, then μ(σ) can be taken as any positive definite bounded vector function or positive constant vector; 2) otherwise, μ(σ) needs to be designed as in Method 2, and the condition on Φ(σ) is the same as in Method 2. It is obvious that this is a more generic method to construct general bounded integral control, because the function set used to construct the bounded integral control action has a wider range of choice than the corresponding function sets proposed by [5]-[7]. Moreover, it is worth noting that μ(σ) can be designed as in Method 2 whenever Φ(σ) is bounded. In addition, it is convenient to introduce the variable a_Φ, which is equal to a_ϕ, a_φ and a_ψ, respectively, corresponding to the above three kinds of choices of the function Φ. Based on the control law u_x(x) and the three kinds of integral control actions and integrators above, the following theorem can be established. Theorem 1: Under Assumptions 1-3, if there exists a positive definite diagonal matrix K_σ such that inequality (13) and inequality (20) hold, then (0, σ₀) is an exponentially stable equilibrium point of the closed-loop system (11). Moreover, if all assumptions hold globally, then it is globally exponentially stable.
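Since equation (10) itself was garbled in extraction, the following LaTeX sketch is a hedged reconstruction of a controller with the structure described above (stabilizing law, bounded integral action with gain K_σ, gain-modulated integrator); the exact signs and arguments in the paper may differ.

```latex
\[
  u \;=\; u_x(x)\;-\;K_\sigma\,\Phi(\sigma),
  \qquad
  \dot{\sigma} \;=\; \mu(\sigma)\,v(x).
\]
% u_x(x): ordinary (stabilizing) control law;
% K_sigma: positive definite diagonal gain matrix;
% Phi: continuous differential increasing function from F_phi, F_phibar or
%      F_psi, with Phi(0) = 0, so the integral action stays bounded;
% mu(sigma): positive constant vector or positive definite bounded gain;
% v(x): integrable function whose self-excited dynamic is asymptotically stable.
```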
Proof: To carry out the stability analysis, we consider the Lyapunov function candidate (14), with suitably chosen positive weights. Obviously, the Lyapunov function candidate (14) is positive definite. Therefore, our task is to show that its time derivative along the trajectories of the closed-loop system (11), given by (15), is negative definite. Substituting (12) into (11), we obtain (16). Substituting (16) into (15), and using (3), (4), (6), (8), (9), (17) and (18), we arrive at (19). The right-hand side of inequality (19) is a quadratic form, which is negative definite when (20) holds. Using the fact that the Lyapunov function (14) is positive definite and its time derivative is negative definite if the inequalities (13) and (20) hold, we conclude that the closed-loop system (11) is stable. In fact, V̇ = 0 implies x = 0 and σ = σ₀. By invoking LaSalle's invariance principle [9], it follows that the closed-loop system (11) is asymptotically stable. Discussion 5: Compared to general convex and concave integral control [5] [6], it is easy to see that this paper is not a simple extension of them but proposes a systematic and more generic method to construct general bounded integral control. The main advances are as follows: 1) the indispensable element v(x) used to construct the integrator can be taken as any function belonging to the function set F_v and is not confined to the partial derivative of the Lyapunov function, which is used to construct the integrator in [5] [6]; 2) a positive definite bounded gain function μ(σ) is introduced into the integrator, which can be used to improve the integral control performance; 3) a class of new function sets F_β is defined, and the method to construct the general bounded integral control action and integrator is thereby extended to the wider function set F_ψ; as a result, this is a brand new and more generic method to construct general bounded integral control actions and integrators; 4) we do not need exact knowledge of the Lyapunov function V_x(x) and only need it to satisfy some bounded information. Moreover, if the partial derivative of the Lyapunov function is attached to the function v(x), the stability conditions can be relaxed. All this means that control engineers have more freedom to design the integrator and bounded integral control action, and then a high-performance integral controller is more easily found.
Discussion 5: Compared to the general convex and concave integral control of [5] [6], this paper is not a simple extension but a systematic and more generic method to construct general bounded integral control. The main advances are as follows: 1) the indispensable element $v(x)$ used to construct the integrator can be taken as any function belonging to the function set $F_v$; it is not confined to the partial derivative of the Lyapunov function, which is used to construct the integrator in [5] [6]; 2) a positive definite bounded gain function $\mu(\sigma)$ is introduced into the integrator, which can be used to improve the integral control performance; 3) a new class of function set $F_\beta$ is defined, and the method to construct the general bounded integral control action and integrator is thereby extended to the wider function set $F_\psi$; as a result, this is a brand-new and more generic method to construct the general bounded integral control action and integrator; 4) exact knowledge of the Lyapunov function $V_x(x)$ is not needed; it only has to satisfy some boundedness conditions. Moreover, if the partial derivative of the Lyapunov function is attached to the function $v(x)$, the stability conditions can be relaxed. All of this means that control engineers have more freedom to design the integrator and the bounded integral control action, so that a high-performance integral controller is more easily found.

Discussion 6: Compared to the generalized integrator and integral control action proposed by [7], the main differences are as follows: 1) the integrators proposed by [7] are independent of the integral control action, whereas the integrators presented here are all obtained by differentiating the nonlinear function that produces the integral control action; 2) the integral control actions proposed by [7] can tend to infinity, whereas those proposed here are all bounded, which means this kind of integral control can concentrate on counteracting the unknown constant uncertainties while filtering out other actions, so that actuator saturation is easier to avoid in practice; 3) a positive definite bounded gain function $\mu(\sigma)$ is introduced into the integrator, which provides the designer with additional degrees of freedom to improve the integral control performance; 4) as mentioned in Discussions 2 and 3, the function sets $F_v$ and $F_\psi$ used to construct the integrator and the integral control action, respectively, both have a wider range of choices than the corresponding function sets proposed by [7].

Remark 1: From the statements above, it is obvious that: first, five function sets for constructing the general bounded integral control action are enumerated; second, three general methods to construct the bounded integral control action are proposed; finally, a universal theorem ensuring regional as well as semi-global asymptotic stability is established. Together, under this theorem, these results constitute a systematic and more generic method to construct general bounded integral control. Consequently, for a particular application, control engineers can not only choose the most appropriate control law at hand but also have more freedom to design the bounded integral control action and integrator, so that a high-performance integral controller is more easily found.

Conclusion

This paper is not a simple extension of general convex and concave integral control but proposes a systematic and more generic method to construct general bounded integral control. The main contributions are as follows: 1) three new function sets are defined; 2) three methods to construct the general bounded integral control action and integrator are proposed; 3) the indispensable element used to construct the integrator is not confined to the partial derivative of the Lyapunov function [5] [6] or the function set of [7]; it can be taken as any integrable function that satisfies a Lipschitz condition and for which the self-excited integral dynamics are asymptotically stable; 4) the function sets used to construct the bounded integral control action have a wider range of choices than the corresponding function sets proposed by [5]-[7]; 5) a class of positive definite bounded gain functions is introduced into the integrator, which provides the designer with additional degrees of freedom to improve the control performance; 6) exact knowledge of the Lyapunov function is not necessary; it only has to satisfy some boundedness conditions; 7) by the Lyapunov method and LaSalle's invariance principle, a universal theorem ensuring regional as well as semi-global asymptotic stability is established. As a result, the generalization of the bounded integral control is achieved.

Notation: $\lambda_m(A(x))$ and $\lambda_M(A(x))$ denote the smallest and largest eigenvalues, respectively, of a symmetric positive definite bounded matrix $A(x)$, for any $x \in R^n$; $\|x\|$ denotes the norm of the vector $x$.
Figure 1. The region allowed for one component of functions belonging to the function set $F_\varphi$.
Figure 2. Example curve and the region allowed for the derivative reciprocal of one component of functions belonging to the function set $F_\phi$.
Figure 3. Example curves of functions belonging to the function set $F_\psi$.
Figure 4. Example curves and the region allowed for one component of functions belonging to the function set $F_\beta$.
Yang-Baxter sigma models: Quantum aspects

We study the quantum properties at one loop of the Yang-Baxter σ-models introduced by Klimčík. The proof of the one-loop renormalizability is given, the one-loop renormalization flow is investigated and the quantum equivalence is studied.

Introduction

The Yang-Baxter σ-models were first introduced by Klimčík [1,2] as a special case, at the classical level, of a non-linear σ-model with Poisson-Lie symmetry [3,4]. Recall that Poisson-Lie symmetry appears to be the natural generalization of the so-called Abelian T-duality [5] and non-Abelian T-duality [6,7,8] of non-linear σ-models. In particular, two dynamically equivalent σ-models can be obtained at the classical level provided that the Poisson-Lie symmetry condition holds. That condition takes a very elegant formulation in the case where the target space is a compact semi-simple Lie group, which naturally leads to the concept of the Drinfeld double [9]. The Drinfeld double is the 2n-dimensional linear space where both dynamically equivalent theories live. For the Poisson-Lie σ-models, a proof of the one-loop renormalizability and quantum equivalence was given in [10,11,12,13].

We are interested in a special class of classical Poisson-Lie σ-models, the Yang-Baxter σ-models. These classical models exhibit the special feature of being both Poisson-Lie symmetric with respect to the right action of the group on itself and left invariant. Thus, using the right Poisson-Lie symmetry or the left group action leads to two different dual theories. These two dynamically equivalent dual pairs of models live in two non-isomorphic Drinfeld doubles: the cotangent bundle of the Lie group for the left action, and the complexification of the Lie group for the right Poisson-Lie symmetry.

Classical properties were investigated in the past, and it has been shown that Yang-Baxter σ-models are integrable [1]. More recently, based on the previous work of Refs. [14,15,16], the authors of Ref. [17] proved that they belong to a more general class of integrable σ-models. In particular, they showed that the ε-deformation parameter of the Poisson-Lie symmetry can be re-interpreted as a classical q-deformation of the Poisson-Hopf algebra.

While the classical properties are well investigated, very little is known about the quantum version of the Yang-Baxter σ-models. In the case where the Lie group is SU(2), the Yang-Baxter σ-model coincides with the anisotropic principal model, which is known to be one-loop renormalizable. This low-dimensional result gives hope for a generalization to arbitrary Yang-Baxter σ-models. However, contrary to the anisotropic principal model, the Yang-Baxter σ-models contain a non-vanishing torsion, which could potentially give rise to some difficulties. On the other hand, another generalization of the anisotropic chiral model, the squashed group models, is one-loop renormalizable for a special choice of torsion [22]. Furthermore, the one-loop renormalizability of the Poisson-Lie σ-model cannot provide any help here, since that proof was established for a theory containing $n^2$ parameters, whereas the Yang-Baxter σ-models contain only two: the deformation ε and the coupling constant t. At the quantum level, the Yang-Baxter σ-models are no longer a special case of the Poisson-Lie σ-models. The main result of this article consists in proving the one-loop renormalizability of the Yang-Baxter σ-models. The plan of the article is as follows.
In Section 2 we introduce the Yang-Baxter σ-models on a Lie group and all the algebraic tools needed. In Section 3, the counter-term of the Yang-Baxter σ-models, i.e. the Ricci tensor, is calculated. Section 4 is dedicated to the proof of the one-loop renormalizability, and the computation of the renormalization flow is done in Section 5. In Section 6, we study the quantum equivalence and we express the Yang-Baxter σ-action in terms of the usual one of the Poisson-Lie σ-models. Outlooks take place in Section 7.

The complexified double

We consider the case of the Yang-Baxter models studied in Ref. [1]. In that case the Drinfeld double D can be the complexification of a simple, compact and simply-connected Lie group G, i.e. $D = G^{C}$, or the cotangent bundle $T^{*}G$. Let us consider the case of the complexified Drinfeld double: it turns out that $D = G^{C}$ admits the so-called Iwasawa decomposition $G^{C} = G \cdot A \cdot N$. In particular, if D = SL(n, C), then the group AN can be identified with the group of upper triangular matrices of determinant 1 with positive numbers on the diagonal, and G = SU(n).

Furthermore, the Lie algebra $\mathcal{D}$ turns out to be the complex Lie algebra $\mathcal{G}^{C}$, which suggests using the root-space decomposition $\mathcal{G}^{C} = \mathcal{H}^{C} \oplus \bigoplus_{\alpha \in \Delta} \mathbb{C}E_{\alpha}$, where Δ is the space of all roots. Consider the Killing-Cartan form κ on $\mathcal{G}^{C}$, and let us take a basis $H_i$ of the r-dimensional Cartan sub-algebra $\mathcal{H}^{C}$, orthonormal with respect to κ, i.e. $\kappa(H_i, H_j) = \delta_{ij}$. This permits us to define a canonical bilinear form on $\mathcal{H}^{*}$ and, more specifically, endows the root space $\Delta \subset \mathcal{H}^{*}$ with a Euclidean metric $(\alpha, \beta)$. Moreover, the inner product on the root-space part of $\mathcal{G}^{C}$ is chosen suitably, and to fix the normalization we impose the non-linear condition $E_{\alpha} = E_{-\alpha}^{\dagger}$. With all these conventions, the generators of $\mathcal{G}^{C}$ satisfy the Cartan-Weyl commutation relations $[H_i, E_{\alpha}] = \alpha_i E_{\alpha}$, $[E_{\alpha}, E_{-\alpha}] = \alpha \cdot H$ and $[E_{\alpha}, E_{\beta}] = N_{\alpha,\beta} E_{\alpha+\beta}$. The structure constants $N_{\alpha,\beta}$ vanish if α + β is not a root. Since $\mathcal{G}^{C}$ is a Lie algebra, the structure constants satisfy the Jacobi identity, which leads to the relations (7) among the $N_{\alpha,\beta}$. In the non-vanishing case, the structure constants $N_{\alpha,\beta}$ can be calculated from the last relation with (n, m) ∈ N such that β + nα and β − mα are the last roots of the chain containing β (see Ref. [21] for more details). Since the $H_i$ form an orthonormal basis of $\mathcal{H}^{C}$, we obtain the corresponding orthogonality relations.

A basis of the compact real form $\mathcal{G}$ of $\mathcal{G}^{C}$ can be obtained by the standard transformations combining $iH_i$ with suitable combinations of $E_{\alpha}$ and $E_{-\alpha}$, for α ∈ Δ+ (positive roots). With our choice of normalization, the vectors of this basis satisfy the corresponding orthogonality relations under κ, all other pairings being zero.

Let us now define an R-linear operator $R : \mathcal{G} \to \mathcal{G}$ that annihilates the Cartan directions and rotates, for each positive root, the two compact generators into one another. This operator R is the so-called Yang-Baxter operator [2], which satisfies the modified Yang-Baxter equation $[RX, RY] - R([RX, Y] + [X, RY]) = [X, Y]$. Let us define the skew-symmetric bracket $[X, Y]_R = [RX, Y] + [X, RY]$, which fulfills the Jacobi identity and defines a new Lie algebra $(\mathcal{G}, [.,.]_R)$. It turns out that this new algebra is nothing but the Lie algebra of the AN group of the Iwasawa decomposition of $G^{C}$; it will be denoted $\mathcal{G}_R$, the dual algebra.
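As a concrete sanity check of this algebraic setup, the sketch below verifies the modified Yang-Baxter equation numerically on su(2), the lowest-dimensional compact case. It assumes the Drinfeld-Jimbo-type form of R described above (zero on the Cartan direction, a rotation in the root plane) and the sign convention written in the comments; conventions in the literature differ by signs.

```python
import numpy as np
from itertools import product

# Structure constants of su(2): [e_i, e_j] = eps_{ijk} e_k.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def bracket(x, y):
    """Lie bracket of vectors expanded on the basis e_1, e_2, e_3."""
    return np.einsum('i,j,ijk->k', x, y, eps)

# Yang-Baxter operator: rotates the root plane (e_1, e_2) and
# annihilates the Cartan direction e_3.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

def mcybe_defect(x, y):
    """[Rx,Ry] - R([Rx,y] + [x,Ry]) - [x,y]  (non-split convention)."""
    return (bracket(R @ x, R @ y)
            - R @ (bracket(R @ x, y) + bracket(x, R @ y))
            - bracket(x, y))

basis = np.eye(3)
assert all(np.allclose(mcybe_defect(x, y), 0.0)
           for x, y in product(basis, basis))
print("modified Yang-Baxter equation holds on su(2)")
```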
The Yang-Baxter action

We shall now consider the action of the Yang-Baxter σ-models [2] expressed on the Lie group G. It takes the expression (14), built from the currents $g^{-1}\partial_{\pm}g$ and the operator $(1 - \varepsilon R)^{-1}$, where t is the coupling constant and ε is the deformation parameter. We can immediately check that the Yang-Baxter models (14) are invariant under the left action of G on itself. Concerning the right Poisson-Lie symmetry, it is well known that such σ-models have to fulfill a zero-curvature condition to be Poisson-Lie invariant. Indeed, if we take the $\mathcal{G}^{*}$-valued Noether current 1-form J(g) of equation (15), we can easily verify that the field equations of (14) are equivalent to the zero-curvature condition (16). We remark that if the deformation ε vanishes then the action of the group G is an isometry, since the Noether currents are closed 1-forms on the worldsheet, and the action (14) coincides with that of the principal chiral σ-model.

The operator $(1 - \varepsilon R)^{-1}$ on $\mathcal{G}$ can be decomposed into a symmetric part, interpreted as a metric g on G, and a skew-symmetric part, interpreted as a torsion potential h on G. An attentive study of the action (14) gives the corresponding expressions for g and h. In order to prove the one-loop renormalizability, we need to calculate the Ricci tensor associated to the manifold (G, g, h).

Counter-term of the Yang-Baxter σ-models

In this paper, for the calculation of the counter-term, we choose the standard approach [18] based on the Ricci tensor. This choice provides a clear and elegant expression of the counter-term in terms of the roots of $\mathcal{G}^{C}$. However, the calculation could equally have been done using our formula of [12] for the counter-term.

Geometry with torsion on a Lie group G

Let us consider a pseudo-Riemannian manifold (G, g) as the base of its frame bundle, where G is a compact semi-simple Lie group and g a non-degenerate metric. Moreover, we choose the left Maurer-Cartan form $g^{-1}dg$, g ∈ G, as the basis of 1-forms on G; in that basis the metric coefficients $g_{ab}$ and the torsion components $T_{abc}$ are all constant. On that frame bundle we define a metric connection Ω with covariant derivative D such that Dg = 0. Furthermore, if we denote by $d_D$ the exterior covariant derivative, the torsion can be written $T = d_D(g^{-1}dg)$. From these definitions we will obtain the expression of the connection Ω.

Metric connection. Using the relation Dg = 0 we obtain $\Omega^{s}{}_{ac}\, g_{sb} + \Omega^{s}{}_{bc}\, g_{as} = 0$. With $g_{ab}$ constant, and denoting $\Omega_{abc} = g_{as}\Omega^{s}{}_{bc}$, the previous relation becomes $\Omega_{abc} + \Omega_{bac} = 0$. Thus the connection Ω is skew-symmetric in its first two indices.

The torsion. The torsion satisfies $T = d_D(g^{-1}dg)$, or in components $T^{a} = d(g^{-1}dg)^{a} + \Omega^{a}{}_{b} \wedge (g^{-1}dg)^{b}$. Since $g^{-1}dg$ is the left Maurer-Cartan form on G, its exterior derivative is fixed by the Maurer-Cartan equation; on the other hand, $T^{a}$ is the torsion 2-form and can be expanded on the basis of 2-forms $(g^{-1}dg)^{b} \wedge (g^{-1}dg)^{c}$. Consequently, the components of the torsion are related to the skew-symmetric part of the connection and to the structure constants $f^{a}{}_{bc}$ of the Lie algebra $\mathcal{G}$. Note that in the case of the non-linear σ-models the torsion is defined by T = dh, where h is the torsion potential 2-form; we will exploit this a little further on to express the connection for the Yang-Baxter σ-models.

The connection. From the relations (20)-(22) we can find the components of the connection, with the conventions $\Omega_{abc} = g_{as}\Omega^{s}{}_{bc}$, $T_{abc} = g_{as}T^{s}{}_{bc}$ and $f_{bca} = f^{s}{}_{bc}\, g_{sa}$. Let us introduce the Levi-Civita connection L, which is in fact the second term on the r.h.s. of Eq. (23), and rewrite the connection for a totally skew-symmetric torsion as $\Omega_{abc} = L_{abc} + \tfrac{1}{2}T_{abc}$.

The curvature and the Ricci. By definition the curvature 2-form F fulfills $F = d_D\Omega$, i.e. $F^{a}{}_{b} = d\Omega^{a}{}_{b} + \Omega^{a}{}_{s} \wedge \Omega^{s}{}_{b}$.
Moreover, since $\Omega^{a}{}_{b}$ is a 1-form on G, $\Omega^{a}{}_{b} = \Omega^{a}{}_{bc}\,(g^{-1}dg)^{c}$, we obtain the general expression (26) for the curvature. The Ricci tensor is defined by $\mathrm{Ric}_{ab} = F^{s}{}_{asb}$ and can be written as in (27). We are now able to decompose the symmetric and skew-symmetric parts of the Ricci tensor in terms of the torsion-less Ricci tensor $\mathrm{Ric}^{L}$ and the torsion T as

$$\mathrm{Ric}_{(ab)} = \mathrm{Ric}^{L}_{(ab)} + \tfrac{1}{4}\, T^{r}{}_{as}\, T^{s}{}_{br}. \qquad (28)$$

Application to the Yang-Baxter models

Ricci symmetric part: Recall that in the case of the Yang-Baxter σ-models, and with our normalization choice, the metric is given by the symmetric part of $(1 - \varepsilon R)^{-1}$. Let us introduce the bi-invariant connection Γ on the Lie group G; it corresponds to the Levi-Civita connection in the case of a vanishing deformation, i.e. Γ = L(ε = 0). From equation (23) we obtain the Levi-Civita coefficients, where we keep the convention for the indices i ∈ H and α ∈ Δ+; all other Levi-Civita coefficients are equal to those of the bi-invariant connection Γ. We can now express the torsion-less Ricci tensor $\mathrm{Ric}^{L}$ as a deformation of the usual Ricci tensor $\mathrm{Ric}^{\Gamma}$ of the bi-invariant connection on the Lie group. It is well known that for the Riemannian bi-invariant structure the Ricci tensor is proportional to the Killing form; the components of $\mathrm{Ric}^{L}$ then follow.

Concerning the contribution of the torsion to the symmetric part of the Ricci tensor, we have to express the torsion in terms of the structure constants of $\mathcal{G}$. For a non-linear σ-model the torsion 3-form is calculated from the torsion potential 2-form as T = dh. Moreover, since the torsion potential involves only root indices, the torsion components vanish for the Cartan sub-algebra indices ($T_{ibc} = 0$). We can now calculate the torsion contribution to the non-vanishing coefficients. In the calculation we used the fact that the Killing form κ can be expressed in terms of the roots α and the structure constants $N_{\alpha,\beta}$. Adding both contributions to the Ricci tensor and using our normalization, we obtain the final expression of the symmetric part. We observe that, in the case of the Yang-Baxter model, the torsion induced by the Poisson-Lie symmetry is precisely the one that removes the dependence of the Ricci tensor on the root length (α, α).

Ricci skew-symmetric part: Using the fact that $T_{iab} = 0$, the only non-vanishing non-diagonal components of the Ricci tensor can be written down. The first r.h.s. term can be expressed as a function of the structure constants $N_{\alpha,\beta}$. The two other terms are nothing but the contribution of the root space (see Eq. (41)) to the component $\kappa_{\bar\alpha\bar\alpha}$ of the Killing form, i.e.

$$-f^{\bar\alpha}{}_{\beta\gamma}\, T_{\alpha\beta\gamma}\, \kappa^{\gamma\gamma}\kappa^{\beta\beta} - f^{\bar\alpha}{}_{\bar\beta\bar\gamma}\, T_{\alpha\bar\beta\bar\gamma}\, \kappa^{\bar\gamma\bar\gamma}\kappa^{\bar\beta\bar\beta} = -\frac{\varepsilon}{2}\left(\kappa_{\bar\alpha\bar\alpha} + 2(\alpha,\alpha)\right).$$

By summing the Bianchi relations (7) over the positive roots, we obtain the identity involving $\rho = \tfrac{1}{2}\sum_{\alpha\in\Delta^{+}}\alpha$, the Weyl vector. Finally, the skew-symmetric part of the Ricci tensor follows.
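The statement that the Ricci tensor of the bi-invariant structure is proportional to the Killing form is easy to check numerically in the lowest-dimensional case. The sketch below does so for su(2); the normalization Ric = −κ/4, with κ_ab = f^c_{ad} f^d_{bc}, is one common convention and is assumed here, since signs and factors vary between references.

```python
import numpy as np

# Structure constants of su(2): f[a, b, c] = f^a_{bc},
# from [e_b, e_c] = f^a_{bc} e_a with [e_1, e_2] = e_3, etc.
f = np.zeros((3, 3, 3))
for a, b, c in [(2, 0, 1), (0, 1, 2), (1, 2, 0)]:
    f[a, b, c], f[a, c, b] = 1.0, -1.0

# Killing form kappa_ab = f^c_{ad} f^d_{bc}, then the assumed
# bi-invariant normalization Ric = -kappa / 4.
kappa = np.einsum('cad,dbc->ab', f, f)
ric = -kappa / 4.0
print("kappa =\n", kappa)   # -2 * identity for su(2)
print("Ric   =\n", ric)     # proportional to kappa, as stated above
```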
One-loop renormalizability

At one loop the counter-terms for a non-linear σ-model [18] on G are given by the Ricci tensor. We require, for renormalizability, that all divergences be absorbed by field-independent deformations of the parameters (t, ε) and a possible non-linear renormalization of the fields $(g^{-1}\partial_{\pm}g)^{a}$. Thus, if we suppose that all parameters are independent coupling constants of the theory, the Ricci tensor in our frame has to satisfy certain relations, where u is a vector that contributes to the field renormalization and $\chi_0$, $\chi_\varepsilon$ are coordinate-independent. Decomposing into symmetric and skew-symmetric parts, the previous relation for the Yang-Baxter σ-models becomes equations (51) and (52), from which we immediately extract $\chi_0$ and $\chi_\varepsilon$. Since $\chi_0$ and $\chi_\varepsilon$ are now fixed, they have to fulfill at the same time the relation (53), which gives the constraint (55). Furthermore, the covariant derivative of u is easily calculated. Let us define the vector $\varepsilon\bar{u} = u$; inserting (56) into the constraint (55), we obtain (57). Then, if we impose $\bar{u} = 4\rho$, the constraint is fulfilled for any root α, since (·,·) is the canonical scalar product on $R^{r}$. We can conclude that the Yang-Baxter σ-models are one-loop renormalizable. We note that it is quite elegant to find a field renormalization given by the Weyl vector.

Renormalization flow

Let us introduce the β-functions of the two parameters (t, ε); they satisfy flow equations in the variable $\lambda = \frac{1}{\pi}\ln\mu$, with μ the mass energy scale. We obtain a system of differential equations that can be solved exactly, the solutions taking a general form with three integration constants (A, B, C) ∈ R. We note that divergences occur for ε and t when the energy scale λ goes to $\pm\frac{3\pi}{4} + C$. On the other hand, for λ → ∞ the parameters ε and t vanish, leading to asymptotic freedom. We can illustrate the situation with the plot (Fig. 1) of λ as a function of ε, where we choose B = 1 and C = 0.

Now we express the Yang-Baxter σ-models in terms of the usual expression of the Poisson-Lie σ-models. Recall that general right-symmetric Poisson-Lie σ-models can be written as in (62), where $\Pi_R(g)$ is the so-called right Poisson-Lie bi-vector and M an $n \times n$ real matrix. Using the adjoint action of an element g ∈ G, we can rewrite the action (14) in the form (62), with M given by (63).

Let us focus on the dual models. As evoked earlier, there exist two non-isomorphic Drinfeld doubles for the action (62). Consequently, we have two different dual theories for one single initial theory on G, and all three are classically equivalent. We will consider each case and argue that they are all quantum-equivalent at one loop. We start by considering the Drinfeld double $D = G^{C}$; in that case we saw that the dual group is the factor AN in the Iwasawa decomposition. The corresponding algebra is the Lie algebra $\mathcal{G}_R$ generated by the R-linear operator (R − i) on $\mathcal{G}$, whose group is a non-compact real form of $G^{C}$ (see [2,20] for details). The dual action can be expressed accordingly. K. Sfetsos and K. Siampos proved in [10] that for right Poisson-Lie symmetric σ-models the quantum equivalence holds provided that the matrix M is invertible. In the Yang-Baxter σ-models this condition is always satisfied, and the inverse of M can be given explicitly.

When we consider the dual model associated to the left action of G, the Drinfeld double is the cotangent bundle $T^{*}G = G \ltimes \mathcal{G}^{*}$. Then the dual group is the dual linear space $\mathcal{G}^{*}$ of $\mathcal{G}$, which is an Abelian group with the addition of vectors as the group law. The corresponding action is that of the non-Abelian T-dual σ-models [6,7,8] and has the well-known expression (64). It has been shown in [19] that those models are one-loop renormalizable. Since the action (64) is left Poisson-Lie symmetric, the Sfetsos-Siampos condition [10] still holds (in its left formulation) and again implies the quantum equivalence at one loop.
Outlooks

Yang-Baxter σ-models are a case of non-trivial Poisson-Lie symmetric σ-models which preserve renormalizability and quantum equivalence at the one-loop level, and which are known to be classically integrable. These models appear to be a semi-classical q-deformation of a Poisson algebra, and can be a starting point in the quest for a fully renormalizable quantum q-deformation, thanks to the relative simplicity of these models, which contain only two parameters. Furthermore, for low-dimensional compact Lie groups G the geometry associated to the Yang-Baxter σ-models can be viewed as a torsionless Einstein-Weyl geometry. We plan in the future to study Weyl connections with torsion on Einstein manifolds, in the hope of learning more about the geometric aspects of the Poisson-Lie σ-models.

I thank G. Valent for discussions and C. Carbone for proofreading.
Novel nucleic acid analogs with a chimeric phosphinate / phosphate backbone; synthesis and biophysical properties

Novel analogs of acyclic nucleosides based on a bis-(hydroxymethyl)phosphinic acid (BHPA) backbone were incorporated into a thymidine-containing DNA strand by phosphoramidite methodology. The physicochemical properties of these constructs were evaluated. Melting temperature measurements demonstrate that chimeric oligomers with a phosphinate / phosphate backbone possess lower binding affinity towards complementary single-stranded templates and slightly higher binding affinity towards double-stranded DNA, as compared to non-modified reference oligomers. Polyanionic oligomers containing a BHPA abasic residue were also synthesized by the same methodology. These oligomers show low cytotoxic activity toward HUVEC and HeLa cell lines and, as expected, are resistant to nucleolytic degradation at the modification site.

Introduction

DNA analogs with improved stability to nucleolytic degradation and higher binding affinity toward complementary DNA or RNA strands are of interest due to their potential application as therapeutics in antisense 1 or antigene strategies. 2 Several acyclic analogs of DNA, including those derived from isosteric glyceronucleosides, have been synthesized for these purposes up to now. 3,4 Despite their improved flexibility, those oligomers in most cases form weaker duplexes with complementary DNA or RNA strands, except in the case when increasing the flexibility of the carbohydrate portion resulted in triplex stability. 5

The aim of the reported studies was the design of novel antisense / antigene therapeutics based on the use of a bis-(hydroxymethyl)phosphinic acid (BHPA) unit as a scaffold molecule. We assumed that bis-(hydroxymethyl)phosphinic acid could replace the 3'-, 4'- and 5'-carbons of the sugar moiety and provide a site for attachment of nucleobases via a one- or two-carbon linker. Esterified or amidated BHPA, if successfully incorporated into the oligonucleotide chain, was expected to possess an additional hydrogen bond acceptor site at the phosphinyl oxygen atom. The conformational flexibility and neutral nature of such a unit as in 2 could enhance cellular uptake, while the attached nucleobases were expected to interact via stacking and hydrogen bonds with complementary DNA or RNA. Moreover, it was anticipated that even an abasic bis-(hydroxymethyl)phosphinic unit, as in 1, could protect chimeric constructs against nucleolytic degradation. The use of bis-(hydroxymethyl)phosphinic acid for the synthesis of acyclic nucleoside analogs and their successful incorporation into short DNA oligomers has been disclosed. 6 Such acyclic nucleoside analogs were successfully used for the protection of deoxyribozymes directed towards the HIV-1 viral RNA sequence in in vitro HIV-1 infected cellular experiments, 7 as well as for deoxyribozymes designed to cleave bcr-abl mRNA fragments in in vitro experiments. 8 In this paper we report on the synthesis of longer oligomers with a chimeric phosphinate / phosphate backbone and the evaluation of the biophysical properties of such constructs, including their affinity toward RNA and DNA templates, their stability in the presence of 3'- and 5'-exonucleases, and their cytotoxic activity in selected cell lines.

Synthesis of oligomers with a chimeric phosphinate / phosphate backbone

Short oligomers

In our previous communication we described conditions for the incorporation of the modified BHPA units 9a-d into short DNA oligomers. 6
The phosphoramidite monomers 9 and conventional phosphoramidite monomers were used as substrates for the synthesis of our constructs by automated phosphoramidite methodology. 14 The synthesis was performed on an ABI 394 synthesizer (Applied Biosystems Inc., Foster City, CA) using succinyl- or oxalyl-linked 15 LCA-CPG solid supports. The only departure from the procedure recommended by the manufacturer was a prolonged coupling time for the modified monomers (up to 600 s). The coupling efficiency of 9a-d, as determined by DMT-cation assay, was in the range of 95-97 %. The 5'-terminal DMT group was usually removed before cleavage of the product from the solid support.

While the synthesis of the trimers dApYpdC 10 (where Y is a modified BHPA unit as in 2, B = thymin-1-yl or thymin-3-yl, n = 2), followed by standard ammonia deprotection and RP HPLC purification, led to the desired amidate derivatives 10c and 10d, this approach did not yield the esterified derivatives 10a and 10b (Table 2; a-d as in Scheme 1). Instead, as revealed by MALDI-TOF analysis, both isolated products in fact constituted the same compound 11, possessing a BHPA unit with a depleted alkyl-nucleobase moiety (Y ab, see Table 2). In this case the hydrolytic deprotection conditions (concentrated aqueous ammonia, 16 h, 55 °C) resulted in cleavage of the ester bond of the phosphinic acid derivatives 10a and 10b, giving rise to anionic DNA analogs with a pseudo-abasic site (as presented in structure 1). This instability of the BHPA ester bond of 10a and 10b was also observed under less harsh conditions, routinely used for the cleavage of oligomers from the solid support (28 % aq. NH4OH, 1 h, 20 °C). Despite the observed stability of the ester bond of 7a and 7b in concentrated aqueous ammonia (2 h at RT, data not shown), we considered that such compounds, as base-labile phosphotriester analogs, 16 might not survive the deprotection conditions during DNA synthesis. This result, albeit expected for such phosphotriester analogs, 17 convinced us that the routine phosphoramidite methodology could not be used for the synthesis of modified oligomers containing nucleotide units with the required protection of nucleobases. However, application of an oxalyl-LCA CPG solid support 15 and methyl-phosphoramidite methodology 14 allowed us to obtain the homo-thymidylates TpTpYpYpTpT (12a and 12b). 6 Removal of the phosphate-protecting methyl groups was achieved by treatment of the solid support-bound oligomers with a thiophenol / dioxane / triethylamine mixture (2:1:2, v/v) for 5 min at RT, followed by release of the oligomers from the solid support by treatment with 1 % aqueous triethylamine for 7-10 min at RT.

The expected oligomers were also obtained by using an oxalyl-LCA CPG solid support and protection of the phosphate function with a 2-cyanoethyl group. In this case, treatment of the solid support-bound oligomers with 1 % aqueous triethylamine for 10 min at room temperature resulted in simultaneous release of the oligomer and removal of the 2-cyanoethyl protecting groups. The basic conditions used were sufficient for release of the oligomer from the solid support without hydrolytic cleavage of the P-O bond. The chimeric oligomers 10 and 12 were synthesized as diastereomeric mixtures by virtue of the chiral center at the phosphorus atom of the BHPA moiety. In spite of our efforts, neither the trimers 10c and 10d nor the hexamers 12a and 12b could be separated by HPLC into P-chirally pure diastereomers.
Substrates for the synthesis of abasic DNA analogs

A much simpler synthetic route to the anionic DNA analogs 1 could be used, starting from bis-(hydroxymethyl)phosphinic acid methyl ester 4. Dimethoxytritylation of one of the OH groups in 4, followed by phosphitylation of the resulting 8e, led to the phosphoramidite monomer 9e (Scheme 2). Monomer 9e could be used for the synthesis of pseudo-abasic DNA analogs possessing a Y ab group either within the oligomer chain or at the 5'-end of the construct. For introduction of the abasic BHPA unit at the 3'-end of the oligomer, derivative 8e was acylated with succinic anhydride in pyridine and then coupled with the aminoalkyl linker of a controlled pore glass solid support (LCA CPG). This modified solid support 9f was loaded with the BHPA derivative up to 57.0 µmol/g, as determined by DMT-cation assay, and used for the synthesis of 3'-modified chimeric DNA analogs.

Synthesis of longer chimeric DNA analogs - compounds 13-21

Our preliminary experiments led us to the following conclusions:
• the routine phosphoramidite strategy proved successful for the synthesis of short chimeric oligomers containing BHPA amidate units;
• esterified BHPA analogs can be introduced into homo-thymidylate oligomers exclusively via the oxalyl-linked solid support approach;
• anionic abasic analogs of DNA can be obtained from an easily accessible monomer (the DMT-protected phosphoramidite of BHPA methyl ester) 4.

Therefore, this methodology was used for the synthesis of numerous longer oligomers with a phosphinate / phosphate backbone. To study the properties of the abasic DNA analogs 18 containing ionic bis-(hydroxymethyl)phosphinic acid residues as in 1, one or two BHPA units (Y ab) were introduced into the central domain of the longer homo-thymidylate chain, as in 13 and 14, respectively, or were used for 3'- and 5'-terminal protection of the DNA chain, as in 15 and 16. The polyanionic oligomer dCp(Y ab p)17dC (17) and its 5'-fluorescently labeled analog 18 were also synthesized by the same methodology. Compounds 13-17, due to their stability under acidic conditions, were synthesized as 5'-DMT-protected oligomers (DMT-ON) and separated from shorter by-products by reverse phase HPLC. 19 The purity of compounds 13-18 was checked by 20 % polyacrylamide / 7 M urea gel electrophoresis (PAGE). For compounds 13, 14 and 17, PAGE analysis was done on the 5'-32P-labeled oligomers. Representative PAGE gels of oligomers 13 and 14 as well as 15 and 16 are shown in Figures 2a and 2b, respectively. Nonadecadeoxyadenylate dA19 and nonadecathymidylate T19 were used as the reference oligomers. The electrophoretic mobility of oligomers 13 and 14 (Figure 2a) is slightly higher than that of the reference T19, due to the Y ab units increasing the total negative charge of the chimeric oligomers 13 and 14. The structure of the 5'-fluorescently labeled oligomer 18 was confirmed by MALDI-TOF analysis (Figure 3a), and its purity was checked by PAGE analysis. Band visualization was achieved by means of the Stains-all reagent (Figure 3b). For comparison, the mobility of the 5'-32P-labeled oligomer 17 was checked by analogous PAGE analysis (Figure 3c). On this gel the spot of the 5'-fluorescently labeled oligomer 18 could be seen under UV light and was marked on the autoradiogram film as shown. Oligomer 17 exhibits higher mobility on the polyacrylamide gel. This oligomer is contaminated with ca.
10 % of a product of lower PAGE mobility. This impurity could be neither separated from oligomer 17 by RP HPLC nor identified by MALDI-TOF mass spectrometry. Monomers 9a and 9c, mimicking acyclic DNA nucleosides (N-1-substituted thymine), were used for the synthesis of the nonadecathymidylate analogs 19, 20 and 21, containing one, two or three incorporated BHPA units, respectively.

Oligonucleotides 19a-21a, possessing esterified BHPA units, were obtained as fully deprotected oligomers due to their rather low stability under both basic and acidic conditions. The yields of these oligomers, after RP HPLC separation, were rather moderate (10-15 optical units from a 1 µmol DNA synthesis).

For the synthesis of oligomers 19c-21c containing BHPA amidate units we used a two-step deprotection / purification procedure. 19 After the synthesis was completed, oligomers 19c-21c were released from the solid support as the 5′-DMT-protected derivatives. Shorter by-product oligomers were separated chromatographically from the desired DMT-containing hydrophobic oligomers, which were then exposed to acidic conditions (3 % TFA in dichloromethane). These conditions are safe during the automated synthesis of oligomers and were supposed to be suitable for final deprotection of the terminal 5′-OH group of 19c-21c. Unfortunately, all our efforts to obtain the desired fully deprotected oligomers without cleavage of the P-N bond failed, even when the reaction time was only as long as that required for removal of the DMT group during solid-phase DNA synthesis. Thus, we concluded that removal of the 5'-terminal DMT protecting group should be carried out before release of the oligomer from the solid support, as we proved by the successful synthesis of the trimer dApYpdC 10c. 6 No other methods for removal of the DMT group, e.g. using ZnBr2, 20 were tested.

For the evaluation of biophysical properties, oligomers 19a-21a were used as fully deprotected species, and oligomers 19c-21c as 5′-DMT-protected constructs. The structures of all oligomers listed in Table 2 were confirmed by MALDI-TOF mass spectrometry, and their purity was confirmed by HPLC analysis. As determined by analytical RP HPLC, the purity of oligomers 19-21 was ca. 98-99 %. A representative analytical HPLC profile and MALDI-TOF mass spectrum of oligonucleotide 19a are shown in Figures 4a and 4b, respectively. In the mass spectrum, the molecular ion with m/z 5754 represents oligomer 19a (MW 5750), while a minute peak at m/z 5602 arises from the [M-CH2CH2Thy]+ fragmentation ion or the decomposition product (Tp)9Y ab(pT)9.

Oligomers containing a 5'-DMT protecting group and amidated BHPA units showed remarkable instability under MALDI-TOF experimental conditions, due to the presence of the acid-labile 5'-DMT protecting group and of the acid-labile amide P-N bond. A representative analytical HPLC profile and MALDI-TOF spectrum of oligonucleotide 19c are shown in Fig.
5a and 5b, respectively. The analytical RP HPLC profile shows exclusively one peak, representing oligomer 19c, and the MALDI-TOF mass spectrum confirms the presence of the expected product, giving a signal at m/z 6053 (calc. 6051). However, due to the instability of compound 19c under the conditions used for the MS analysis, the oligomer 5'-DMT-(Tp)9Y(pT)9, where Y is the BHPA-NH-CH2CH2Thy moiety, undergoes acidic hydrolysis, resulting in the appearance of three other products with the following MS signals: at m/z 5901 for the product 5'-DMT-(Tp)9Y ab(pT)9 (calc. 5900), at m/z 5751 for the product (Tp)9Y(pT)9 (calc. 5749) and at m/z 5599 for the product (Tp)9Y ab(pT)9 (calc. 5598). Similar MALDI-TOF mass spectra were obtained for the other 5'-DMT-protected amidated oligomers.

Anionic groups present at the ends of oligomers, as in the case of compounds 15 and 16, have little influence on duplex stability. In this case the differences in melting temperatures of the chimeric duplexes in comparison to the Tm of the non-modified duplex T19/dA19 are up to 3 °C (Table 3). The presence of an anionic abasic site in the third strand of the DNA triplex [as in the complex of 13 with the hairpin oligomer d(A21C4T21)] significantly decreased triplex stability, lowering the triplex-duplex transition Tm by about 15 °C per modification. No triplex formation was observed for complexes of the hairpin DNA with oligomer 14, possessing two Y ab residues. Thus, as expected, the lack of either Watson-Crick or Hoogsteen interactions of the Y ab units located within the central domain of the modified oligomers with their counterparts in the corresponding duplexes and triplexes has dramatic consequences for the stability of these complexes. However, abasic BHPA units positioned at the ends of oligomers exhibit minimal influence on duplex stability (as in the duplexes 15/dA19, 16/T19 and 15/16), due to the smaller participation of the terminal base pairs in duplex stability. 21

The hybridization affinity of the chimeric phosphinate / phosphate backbone oligomers 19-21, containing one, two or three BHPA units with an alkyl-nucleobase moiety, toward ssDNA and RNA templates is not improved in comparison to the hybridization affinity of the chimeric oligomers containing the abasic units Y ab. The differences in Tm between the non-modified complexes T19/dA19 and T19/A19 and the duplexes of oligomers 19a, 19c, 20a, 20c, 21a and 21c with their DNA and RNA templates are within 2 to 6 °C. The lower affinity of these novel DNA analogs towards their complementary strands probably results from their higher flexibility and the loss of entropy in comparison to the natural DNA molecule. 3,4 It has been proven that a more rigid DNA analog structure (e.g. LNA) provides better affinity to a complementary RNA strand. 22 On the other hand, peptide nucleic acid analogs (PNA), despite their structural flexibility, exhibit extremely high affinity toward their DNA and RNA complements. 23

An interesting hybridization feature was found for oligomers 19a,c-21a,c with respect to their ability to form stable triplexes with the hairpin DNA [d(A21C4T21)]. The melting temperatures of these triplexes (Table 3) are identical to or slightly higher than those of the reference complexes. Probably the flexibility of the phosphinate / phosphate backbone of the third strand has less influence on triplex stability than it does on duplex stability.
Enzymatic degradation of chimeric oligomers 13 and 14 with 3′- and 5′-exonucleases as analyzed by the MALDI-TOF technique

Chimeric oligomers 13 and 14, possessing one or two abasic units (Y ab) located in the central domain of the oligomer sequence (Table 2), were used to study their recognition by 3′- and 5′-exonucleases. The chimeric oligonucleotides were incubated with snake venom (svPDE, PDE I) and calf spleen (PDE II) phosphodiesterases, respectively, and the digestion products of each reaction were identified by MALDI-TOF mass spectrometry.

Figure 6a presents four consecutive MALDI-TOF mass spectra of the degradation products of oligonucleotide 13 with PDE I, taken after 10 min, 1, 2 and 3 hours of the cleavage reaction. A ladder of products differing by m/z 304 corresponds to the products of subsequent removal of pT nucleotides from the parent oligonucleotide 13 (m/z 5601). After 3 hours of the cleavage reaction, accumulation of the oligonucleotide (Tp)9Y ab of m/z 2863 is observed. No further degradation of this product occurs even after 6 hours. These results demonstrate that cleavage of the parent oligomer proceeds only up to the modification site. An analogous product, (Tp)8Y ab pY ab of m/z 2747, accumulates when oligomer 14 is treated with svPDE (Figure 6d). In contrast, both of these oligomers, 13 and 14, when treated with the PDE II nuclease, give accumulation of products containing one natural nucleotide upstream of the modification site. Thus, oligomer 13 is degraded to the product TpY ab(pT)9 of m/z 3167 (Figure 6b), while oligomer 14 gives the compound TpY ab pY ab(pT)9 of m/z 3354 (Figure 6d). Snake venom phosphodiesterase, which is a 3′-exonuclease, is active up to the modification site, being able to cleave the phosphodiester bond between the thymidine and BHPA units. In contrast, calf spleen phosphodiesterase, which is a 5′-exonuclease, is not able to cleave the phosphodiester bond between the 5'-thymidine and BHPA units; this nuclease removes all the 5′-terminal nucleotides but one upstream of the modification site. As expected, BHPA structural motifs present in the DNA chain are resistant toward 3′- and 5′-exonucleases; however, their recognition depends on the cleavage characteristics of the nucleases used. 24,25

Y ab units introduced at the ends of oligomers can serve as a novel class of protecting "clamps" against cellular exonucleases and can therefore be used for the protection of antisense, TFO, ribozyme and deoxyribozyme oligonucleotides tested in vivo or in cellular systems for their gene down-regulation activity. The added value of such protection is that BHPA abasic units located at the ends of the oligonucleotides demonstrate minimal influence on duplex stability, as shown by the thermal stability of the duplexes 15/dA19, 16/T19 and 15/16. We prepared several oligomers with 3′- and 5′-terminal Y ab protecting groups for screening as antisense agents (data not shown). Such phosphinic acid "clamps" were also successfully applied for the protection of deoxyribozymes directed towards the HIV-1 viral RNA sequence in in vitro HIV-1 infected cellular experiments, 7 as well as for deoxyribozymes designed to cleave bcr-abl mRNA fragments in in vitro experiments. 8
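The mass-ladder logic of the exonuclease digestions above can be reproduced in a few lines. The sketch uses the rounded masses quoted in the text (parent ion m/z 5601, loss of about 304 per pT), not exact monoisotopic values.

```python
# 3'-exonuclease digestion of (Tp)9 Yab (pT)9 (oligomer 13) removes pT
# units (~304 Da each) from the parent ion until the BHPA modification
# site is reached, producing the MALDI-TOF ladder described above.

PARENT_MZ = 5601.0   # oligomer 13, observed m/z
PT_MASS = 304.0      # mass difference per removed pT nucleotide
N_REMOVABLE = 9      # pT residues 3' of the modification site

ladder = [PARENT_MZ - n * PT_MASS for n in range(N_REMOVABLE + 1)]
for n, mz in enumerate(ladder):
    print(f"after removing {n} pT: m/z ~ {mz:.0f}")
# The final entry (~2865) matches the accumulated (Tp)9 Yab fragment
# observed at m/z 2863 within the accuracy of these rounded masses.
```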
Oligonucleotides 13 and 14 and the reference oligothymidylate T20 were also subjected to the cleavage reaction with the 3'-exonuclease of human plasma. 26 For this assay, the 5′-radiolabeled oligonucleotides were incubated with 50 % human plasma. The degradation products were analysed by PAGE under denaturing conditions (20 % polyacrylamide, 7 M urea). The results show that the reference oligomer incubated for 8 hours at 37 °C afforded a ladder of products ranging from T19 to T2 (Figure 7a). The chimeric oligomers were degraded only partially under these conditions. Degradation of oligonucleotide 13 proceeded from the 3'-terminus and was arrested by the presence of the abasic BHPA motif. As a consequence, accumulation of the product (Tp)9Y ab was observed (Figure 7b). An analogous ladder of products, with accumulation of the product (Tp)8Y ab pY ab, was observed for the degradation of oligonucleotide 14 (results not shown).

Cytotoxicity of chimeric oligomers 17 and 18 towards HUVEC and HeLa cell lines

For determination of the cytotoxicity of chimeric phosphinate / phosphate oligomers, we chose the two polyanionic oligomers 17 and 18. The cytotoxicity of the test oligomers was evaluated in the tumor HeLa and endothelial HUVEC cell lines with an MTT assay. 27 Endothelial cells (HUVEC) were isolated and cultured as described. 28 HeLa cells were cultured according to the standard method. As shown in Table 4, the cytotoxicity of both polyanionic oligomers toward the tested cell lines was rather low.

a Toxicity of the oligomers was determined as described 27 by the MTT method. The absorbance of a given sample was measured at 570 nm, with the reference wavelength 630 nm (Microplate Reader 450, BioRad). The percentage of living cells (P LC) was calculated from the equation P LC = (A S − A M)/(A C − A M) × 100 %, where A S is the absorbance of a given sample of cells treated with oligomers, A M is the absorbance of the cell medium, and A C is the absorbance of the control (untreated) cells.

Conclusions

A series of novel DNA analogs containing a chimeric phosphinate / phosphate backbone could be effectively synthesized by incorporation of acyclic nucleoside analogs derived from a bis-(hydroxymethyl)phosphinic acid (BHPA) residue into an oligonucleotide chain. Structural flexibility and the loss of entropy make these chimeric oligomers unable to form stable duplexes with their complementary RNA and DNA strands. However, these oligomers possess slightly higher binding affinity towards double-stranded DNA, as compared to a non-modified reference. Their low cytotoxic activity toward HUVEC and HeLa cell lines and, as expected, excellent stability toward 3'- and 5'-exonucleases do not preclude their use as therapeutic agents, either as 3'- and 5'-terminal protecting groups or as triplex-forming oligomers in the antigene approach.

Experimental Section

General procedure for the synthesis of compounds 7

MSNT procedure. MSNT (0.44 g, 1.5 mmol) was added to a solution of 6 (0.80 g, 1.0 mmol) and N-1- or N-3-(2-hydroxyethyl)thymine (0.17 g, 1 mmol) in anhydrous pyridine (8 mL) and the mixture was stirred at RT for 24 h. The solvent was evaporated and the crude product was isolated by column chromatography on silica gel with a gradient of 0-5 % MeOH in CHCl3 to give pure 7a or 7b in the form of a white foam.

Appel procedure. CCl4 (0.54 mL, 5.6 mmol) was added with stirring under argon at RT to a solution of 6 (0.90 g, 1.1 mmol) and PPh3 (0.88 g, 3.4 mmol) in anhydrous pyridine (8 mL).
General procedure for the synthesis of compounds 8a-d

To a solution of compound 7 (0.22 mmol) in MeOH (4 mL), a methanolic solution of toluene-4-sulfonic acid (0.13 mL, 0.02 M) was added with stirring. After 5 min the reaction was terminated by addition of pyridine (0.4 mL). The product and unreacted substrate were separated by means of preparative TLC on silica gel with CHCl3/MeOH (9:1, v/v). The recovered substrate was again reacted as above. After several-fold repetition of the procedure, pure product 8 was obtained as a white solid. Yields of the reactions, TLC Rf values, FAB MS and 31P NMR (CHCl3) spectral data are given in Table 1.

General procedure for phosphitylation of derivatives 8a-d

2-Cyanoethyl N,N,N',N'-tetraisopropylphosphorodiamidite (1.2 eq.) was added under argon to a solution of 8a-d (1.0 eq.) and 2-ethylthio-1H-tetrazole (2.4 eq.) in anhydrous acetonitrile. The reaction mixture was stirred for 1 h at room temperature and then loaded under argon onto a silica gel column. The products 9a-d were eluted with a gradient (0-5 %) of methanol in methylene chloride, concentrated in vacuo and stored under argon at -20 °C. Yields of these reactions (%), TLC chromatographic mobilities Rf and 31P NMR (CHCl3) spectral data of 9a-d are given in Table 1.

Loading of 8e on an LCA CPG solid support

A solution of 8e (300 mg, 0.7 mmol), succinic anhydride (100 mg, 1.0 mmol) and 4-dimethylaminopyridine (110 mg, 0.9 mmol) in anhydrous pyridine (10 mL) was stirred for 24 hours at room temperature. After this time an additional portion of succinic anhydride (21 mg, 0.05 mmol) was added and the reaction mixture was kept for an additional 4 hours. Then chloroform was added to the reaction mixture; the organic phase was washed twice with 2 % aqueous citric acid (2 x 20 mL) and once with water (20 mL). The organic layer was dried with anhydrous magnesium sulfate, filtered and then evaporated to dryness. The acyl derivative of 8e was loaded on a silica gel column, eluted with CHCl3/MeOH 9:1 (v/v) and dried in vacuo. Long chain aminoalkyl controlled pore glass (LCA CPG, CHEMGENES, Ashland Technology Center, Ashland, MA) (2 g) and the acyl derivative of 8e (340 mg, 0.65 mmol) were dried overnight in vacuo, then dissolved in anhydrous DMF (10 mL) and pyridine (1 mL). DCC (250 mg, 1.2 mmol) was added to this solution and the reaction mixture was shaken for 48 hours at room temperature. Then the glass was washed with an anhydrous methanol/pyridine solution (10 mL, 1:1, v/v) and with anhydrous acetonitrile (10 mL), and dried in vacuo. The loading efficiency was determined by DMT-cation assay. The modified solid support 9f was loaded with the derivative 8e up to 57.0 µmol/g.

Oligonucleotide synthesis

The 3'-O-phosphoramidite building units 9a-e were used for the synthesis of the chimeric oligomers 10-21 by automated solid-phase methodology. 14 The 1 µmol scale synthesis of oligomers was performed on an ABI 394 synthesizer (Applied Biosystems Inc., Foster City, CA) using succinyl- or oxalyl-linked LCAA-CPG solid supports. The only departure from the manufacturer's recommended protocol was a prolonged coupling time (up to 600 s). The coupling efficiency was determined by DMT-cation assay. Oligomers 10c, 10d, 11 and 18 were synthesized on succinyl-linked LCA CPG as DMT-OFF constructs, cleaved from the solid support by treatment with 28 % ammonium hydroxide (1 mL) for 1 h at room temperature and purified by reverse phase semi-preparative HPLC.
Oligomers 12a, 12b and 19a-21a were synthesized on oxalyl-linked LCAA-CPG solid support as DMT-OFF constructs. These oligomers were removed from the solid support by treatment with a thiophenol / dioxane / triethylamine mixture (2:1:2, v/v) for 5 min at room temperature, and then the solid support was washed with 1 % aqueous triethylamine for 7-10 min at room temperature. After solvent evaporation, the oligomers were purified by semi-preparative or analytical HPLC (as indicated in Table 2).

Oligomers 13-17 and 19c-21c were obtained as the 5'-O-DMT-protected constructs. These oligomers were purified by a standard RP HPLC method (DMT-ON step). Removal of the 5'-DMT group was achieved by treatment with 50 % acetic acid for 30 min at room temperature, followed by semi-preparative RP HPLC purification (DMT-OFF step) to produce the fully deprotected oligonucleotides in 50-80 % yield.

The structure and purity of the oligomers were confirmed by MALDI-TOF mass spectrometry, 20 % polyacrylamide / 7 M urea gel electrophoresis and RP HPLC analysis.

Melting temperature measurements

Samples for the melting temperature measurements (duplexes or triplexes) were prepared by hybridization of the modified oligomers with the complementary single-stranded DNA, RNA or double-stranded DNA, as listed in Table 3. Melting profiles were recorded after heating up to 70 ºC, followed by annealing to 5 ºC with a temperature gradient of 0.5 ºC/min. Oligonucleotides were kept at 5 ºC for 5 min and then heated at a temperature gradient of 0.2 ºC/min to 86 ºC. The melting temperatures were calculated using the first-order derivative method. Measurements were performed on a Cintra 40 instrument (GBC, Australia).

MALDI-TOF measurements

Samples (1 µL) withdrawn from the digestion reactions were put on a sample plate, mixed with the matrix solution [1 µL of an 8:1, v/v mixture of 2,4,6-trihydroxyacetophenone (10 µg/mL in ethanol) and diammonium citrate (50 µg/mL in water)] and left to crystallize. MALDI-TOF spectra were recorded on a Voyager-Elite instrument (PerSeptive Biosystems, CT) in the reflector mode, at a resolution of 2000. Negative-ion m/z peaks are shown in the spectra.
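The first-order derivative method mentioned in the melting temperature protocol above can be illustrated with a short sketch: Tm is taken as the temperature at which dA/dT of the melting profile is maximal. The sigmoidal curve below is synthetic demonstration data, not a measured profile.

```python
import numpy as np

# Tm via the first-order derivative method: the temperature at which
# dA/dT of the UV melting profile is maximal.

temps = np.linspace(5.0, 86.0, 400)                          # deg C, as in the protocol
true_tm = 52.0
absorbance = 1.0 / (1.0 + np.exp(-(temps - true_tm) / 2.5))  # synthetic profile

dA_dT = np.gradient(absorbance, temps)                       # first derivative
tm = temps[np.argmax(dA_dT)]
print(f"estimated Tm = {tm:.1f} deg C")                      # ~52.0 for the demo curve
```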
Cytotoxicity studies

The cytotoxicity of oligomers 17 and 18 was studied for the HeLa and HUVEC cell lines using an MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Sigma, St. Louis, MO] assay (activity of the mitochondrial respiratory chain). 27 Cells were trypsinized and diluted with the appropriate culture medium to a density of 5000 cells/200 µL. The cell suspensions were added to 96-well plates (200 µL per well). After 24 hours of cultivation, oligonucleotides were added to a final concentration of 2.5 µM. The cells were incubated for 72 hours in the presence of the tested oligonucleotides. As controls, cultured cells were grown in the absence of oligonucleotides. After 72 hours of incubation, 25 µL of MTT solution (5 mg/mL) in PBS was added to each well and incubated for an additional 2 hours at 37 °C. Finally, 95 µL of lysis buffer (20 % SDS, 50 % aqueous dimethylformamide, pH 4.5) was added to each well and incubated at 37 °C for an additional 24 hours. The absorbance of a given sample was measured at 570 nm, with the reference wavelength 630 nm (Microplate Reader 450, BioRad). The percentage of living cells (P LC) was calculated from the equation P LC = (A S − A M)/(A C − A M) × 100 %, where A S is the absorbance of a given sample of cells treated with oligomers, A M is the absorbance of the cell medium, and A C is the absorbance of the control (untreated) cells. HeLa cells were cultured according to the standard method. Data points represent means of at least three measurements.

Scheme 2. Synthesis of the phosphoramidite derivative of BHPA methyl ester 9e and the solid support-bound BHPA methyl ester.

Figure 1. Semi-preparative purification of oligomer Y ab p(Tp)17Y ab (15) by the two-step deprotection / purification procedure. 19 RP HPLC profiles are shown for purification of the 5'-DMT-Y ab p(Tp)17Y ab oligomer, DMT-ON step (a), and of the fully deprotected oligomer Y ab p(Tp)17Y ab, DMT-OFF step (b). Collected were the oligomers with retention times 19.54 min (a) and 9.19 min (b). HPLC details are described in the Experimental Section.

Figure 2. Analysis of the purity of oligomers 13-16 by 20 % polyacrylamide / 7 M urea gel electrophoresis. dA19 and T19 are used as the reference oligomers. (a) Oligomers 13 and 14 as well as the reference oligomers were analyzed as 5'-32P-labeled derivatives; (b) detection of the tested oligomers was done with the Stains-all reagent.

Figure 3. Spectral and electrophoretic analysis of the polyanionic oligomers 17 and 18. (a) MALDI-TOF spectrum of oligomer 18 (m/z 4244, calc. 4246); (b) 20 % polyacrylamide / 7 M urea gel electrophoresis with band visualization with the Stains-all reagent; (c) PAGE of the 5'-32P-labeled oligomer 17 and its 5'-fluorescently labeled analog 18. The spot of oligomer 18, visible under UV light on the gel, was marked on the autoradiogram film as shown.

Figure 4. Analytical RP HPLC profile (a) and MALDI-TOF spectrum (b) of oligonucleotide 19a. Chromatographic analysis conditions are given in the Experimental Section.

Figure 5. Analytical RP HPLC profile (a) and MALDI-TOF spectrum (b) of oligonucleotide 19c.

Figure 6. MALDI-TOF MS analysis of the degradation products of the chimeric phosphinate / phosphate oligomers 13 and 14 with snake venom and calf spleen phosphodiesterases (PDE I and PDE II, respectively). Oligomers were used at a concentration of 1 nM, with the amount of enzyme and the reaction time described in the Experimental Section. The MALDI-TOF mass spectra show analysis of the products of the following reactions: a, (Tp)9Y ab(pT)9 (13) / PDE I; b, (Tp)9Y ab(pT)9 (13) / PDE II.
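The percentage-of-living-cells formula from the MTT protocol above translates directly into a small helper; the absorbance values in the demo call are invented for illustration.

```python
# P_LC = (A_S - A_M) / (A_C - A_M) * 100 %, as defined in the MTT assay.

def percent_living_cells(a_sample, a_medium, a_control):
    """A570 (ref. 630 nm) readings: treated cells, medium blank, untreated."""
    return (a_sample - a_medium) / (a_control - a_medium) * 100.0

print(f"{percent_living_cells(0.82, 0.08, 0.90):.1f} % living cells")
# (0.82-0.08)/(0.90-0.08)*100 = 90.2 %, i.e. low cytotoxicity.
```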
Table 1. Spectral and chromatographic data for the bis-(hydroxymethyl)phosphinic acid derivatives 7-9. Yields of the transformation of the salt 6 to the amidated or esterified BHPA derivatives 7a-d (in %) are also given. a CHCl3 / MeOH, 9:1, v/v. b The multiple resonances (four or eight signals) may be due to the presence of threo-erythro diastereoisomerism and/or P-C-O-P spin couplings. c Methyl protection of the phosphoramidite residue. d 2-Cyanoethyl protection of the phosphoramidite residue.

Hybridization properties of chimeric phosphinate / phosphate backbone oligomers

The affinity of oligomers 13-16 and 19-21 toward complementary DNA and RNA strands as well as double-stranded DNA was determined by UV thermal melting measurements. The Tm values of the investigated duplexes and triplexes are given in Table 3. Introduction of anionic BHPA-derived abasic sites (Y ab) into the DNA chain, as in the chimeric oligomers 13 and 14, causes remarkable destabilization of the corresponding duplexes, with similar effects on Tm for both DNA and RNA complexes. One abasic site present in the center of the oligomer sequence causes the respective Tm to decrease by 6 and 3 °C for DNA and RNA templates, respectively. The presence of a second Y ab group further decreases the affinity of the chimeric DNA towards complementary strands of DNA or RNA by about 6-7 °C.

Table 3. Binding affinity of oligomers 13-16 and 19a,c-21a,c towards dA19, A19 and the double-stranded region of the DNA hairpin sequence d(A21C4T21). All Tm values a were determined by UV melting temperature measurements (details given in the Experimental Section). a Melting temperatures were measured in 10 mM Tris-HCl buffer pH 7.4, 10 mM MgCl2, 100 mM NaCl, with an error of ± 1.0 °C. For determination of the Tm's of complexes with oligomers 19a-21a, 10 mM Tris-HCl buffer pH 7.0, 10 mM MgCl2, 100 mM NaCl was used.

Table 4. Cytotoxicity of oligomers 17 and 18 toward HUVEC and HeLa cells, given as the percentage of living cells. a
Long Noncoding RNA CTC Inhibits Proliferation and Invasion by Targeting miR-146 to Regulate KIT in Papillary Thyroid Carcinoma Several lines of evidence have shown that long non-coding RNAs (lncRNAs) are dysregulated in many diseases. Nevertheless, the biological relevance of lncRNAs in papillary thyroid carcinoma (PTC) has not been fully explored. We demonstrated that CTC was a negative regulator of PTC cell migration and invasion in vitro and in vivo. We found that microRNA-146 (miR-146) is an inhibitory target of CTC. We then demonstrated that CTC functioned as a miR-146 decoy to de-repress the expression of KIT. Further study demonstrated that CTC modulated the progression and chemoresistance of PTC cells via miR-146 and KIT. The analysis of hundreds of clinical specimens revealed that CTC and KIT levels were downregulated, whereas miR-146 levels were higher, in PTC tissues than in normal thyroid, and their expression levels correlated with one another. In conclusion, CTC functions as a competing endogenous RNA to inhibit the progression and chemoresistance of PTC cells, identifying CTC as a potential therapeutic target for suppressing PTC progression. In this study, we identified a novel lncRNA, lncRNA-CTC (CTC), that regulates the proliferation and invasion of PTC cells. We also demonstrated that CTC directly binds miR-146 and functions as an miRNA decoy to regulate KIT expression. LncRNA CTC inhibits the proliferation and invasion of papillary thyroid cancer cells. Because miR-146 has been considered a potential molecular biomarker in PTC 25 , we suspected that some lncRNAs regulate PTC cells via miR-146. To verify this conjecture, DIANA TOOLS (a human lncRNA target prediction tool) was used to screen for lncRNAs with the potential to associate with miR-146. According to the results from DIANA TOOLS, ten high-scoring candidate lncRNAs were chosen for subsequent study. Next, we compared the expression levels of these candidate lncRNAs in PTC tissues and the corresponding adjacent normal tissues (ANT). As shown in Supporting Fig. 1, lncRNA-CTC (CTC) was expressed at lower levels in PTC tissues than in ANT, and CTC was therefore chosen for further study. CTC is located at chromosome 19: 27,793,496-27,799,403, with a transcript length of 5908 nt (ENST00000592404). ORF (Open Reading Frame) Finder software showed that the potential ORFs of CTC were all shorter than 480 bp (Supporting Fig. 2A). CPC (Coding Potential Calculator) software was used to assess the coding potential of CTC, and the results showed that no transcript of CTC is able to code for protein (Supporting Fig. 2B). In vitro translation experiments were performed to verify these software predictions. As shown in Supporting Fig. 2C, no protein bands for CTC were observed; the 180 kD protein band of DNMT1 served as a positive control. The full-length CTC was identified using RACE experiments (Supporting Fig. 2D). The localization of CTC was also evaluated: we found that CTC was localized in both the cytoplasm and the nucleus (Supporting Fig. 2E). We next examined the potential function of CTC in the proliferation and invasion of PTC cells. To do this, we constructed overexpression plasmids and two specific small interfering RNAs (siRNAs). qRT-PCR analysis indicated that siRNAs #1 and #2 inhibited the expression of CTC (Supporting Fig. 3). PTC cell proliferation rates were decreased by the CTC overexpression plasmid (Fig. 1A).
Conversely, knockdown of CTC increased the proliferation rates of PTC cells (Fig. 1B). In line with these data, CTC overexpression inhibited PTC cell migration and invasion, whereas knockdown of CTC promoted PTC cell migration and invasion (Fig. 1C-F). To determine whether CTC affects tumorigenicity in vivo, an additional experiment was performed. Nude mice were subcutaneously injected with CTC knockdown or control cells. We found that, compared with control mice, CTC knockdown mice exhibited higher tumor growth; however, there were no effects on body weight (Fig. 1G-I). Further experiments confirmed that CTC expression levels were higher in control mice than in CTC knockdown mice (Fig. 1J). (Figure 1 legend, in part: the relative lncRNA CTC levels in tumor tissues from mice were detected by qRT-PCR; in the qRT-PCR experiments, the vector control was designated as 1; bar graphs present means ± SD, n = 3; **P < 0.01, *P < 0.05.) These in vivo and in vitro data suggest that CTC suppresses the proliferation of PTC cells. CTC binds to miR-146 and represses its expression. Because miR-146 is a potential target of CTC, we investigated whether CTC binds to miR-146. To do this, wild-type (WT) CTC was cloned into firefly luciferase reporter plasmids. A mutation was generated from WT CTC at the predicted miR-146 target site (Fig. 2A). The luciferase activity assay showed that pre-miR-146 inhibited WT CTC luciferase activity, but not the luciferase activity of mutant CTC (Fig. 2B). Consistent with this result, anti-miR-146 increased WT CTC luciferase activity, but not that of mutant CTC (Fig. 2C). To determine whether miR-146 was indeed a target of CTC, we tested the expression efficacy of the CTC WT and mutant constructs (Fig. 2D). Pull-down assays were performed to investigate whether CTC bound miR-146. We found that WT CTC, but not mutant CTC, bound miR-146 (Fig. 2E). Anti-miR-146 abolished miR-146 precipitation (Fig. 2E). We next investigated the role of CTC in miR-146 expression. As shown in Fig. 2F, CTC WT decreased miR-146 expression, whereas mutant CTC had no effect. Conversely, CTC knockdown induced miR-146 expression (Fig. 2G). Interestingly, neither overexpression nor knockdown of miR-146 had an effect on CTC expression levels (Fig. 2H,I), suggesting that miR-146 is downstream of CTC. These results suggest that CTC associates with miR-146 and suppresses its expression. CTC suppresses the proliferation and invasion of PTC cells through miR-146. Because CTC regulates the proliferation and invasion of PTC cells and CTC binds to miR-146, we next examined the role of CTC binding to miR-146 in the proliferation of PTC cells. As shown in Fig. 3A, pre-miR-146 abolished the effect of CTC on the proliferation of PTC cells. Conversely, anti-miR-146 and the CTC overexpression plasmid synergistically inhibited PTC cell proliferation (Fig. 3B). The effect of the CTC/miR-146 signaling pathway on PTC cell proliferation was further evaluated using CTC siRNAs. Pre-miR-146 enhanced the effect of the CTC siRNA on the proliferation of PTC cells (Fig. 3C). Anti-miR-146 abolished the effect of the CTC siRNA on the proliferation of PTC cells (Fig. 3D). Similar results were also obtained in cell migration and invasion assays (Fig. 3E-H).
Together, these data demonstrate that CTC regulates the migration and invasion of PTC cells, and that miR-146 is a downstream molecule in the CTC-regulated pathway. CTC suppresses the proliferation and invasion of PTC cells through KIT. Because KIT is a target of miR-146, the involvement of KIT in the CTC-regulated migration and invasion of PTC cells was evaluated. First, the effect of CTC on KIT expression was investigated. We found that CTC induced KIT mRNA and protein expression in a time-dependent manner (Fig. 4A). Similarly, a dose-dependent increase in KIT expression was also observed in TPC-1 cells transfected with increasing amounts of the CTC overexpression plasmid (Fig. 4B). We next determined the effect of miR-146 on CTC-regulated KIT expression. Pre-miR-146 inhibited the CTC-induced KIT mRNA and protein expression levels, whereas anti-miR-146 and the CTC overexpression plasmid synergistically induced KIT expression (Fig. 4C,D). To test whether KIT affects CTC-regulated PTC cell proliferation, migration and invasion, we designed two siRNAs against KIT (KIT siRNA #1 and #2) and tested their efficiency (Fig. 4E). MTT assays suggested that overexpression of KIT enhanced the effect of the CTC overexpression plasmid on the proliferation of PTC cells (Fig. 4F). By contrast, KIT siRNA #1 abolished the proliferation effect of the CTC overexpression plasmid (Fig. 4G). Similarly, overexpression of KIT enhanced the anti-migratory and anti-invasive effects of the CTC overexpression plasmid, whereas knockdown of KIT removed these effects of the CTC overexpression plasmid in PTC cells (Fig. 4H,I). CTC siRNA #1 was used to determine whether CTC regulates the migration and invasion of PTC cells via KIT. Results from the migration and invasion assays demonstrated that KIT overexpression abolished the siRNA-CTC-induced migration and invasion of PTC cells, whereas KIT knockdown by KIT siRNAs #1 and #2 further enhanced the CTC siRNA #1-induced migration and invasion (Fig. 4J,K). Together, these data demonstrate that induction of KIT expression is an important event in CTC-controlled PTC cell invasion and migration. CTC/miR-146/KIT axis regulates PTC cell chemoresistance. Because noncoding RNAs have vital biological functions in the chemoresistance of cancer cells 29 , we next investigated whether the CTC/miR-146/KIT axis plays a role in regulating PTC cell chemoresistance. First, we investigated the effect of 5-fluorouracil (5-Fu) and doxorubicin (Dox) on the expression of CTC, miR-146 and KIT. As shown in Supporting Fig. 4, 5-Fu and Dox reduced miR-146 expression and induced CTC and KIT expression. As expected, 5-Fu and Dox led to cell growth inhibition in a dose-dependent manner. Overexpression of CTC increased the sensitivity of PTC cells to 5-Fu and Dox treatment; however, pre-miR-146 abrogated the effect of CTC on chemoresistance (Fig. 5A). Conversely, inhibition of miR-146 and CTC overexpression synergistically increased the sensitivity of PTC cells to 5-Fu and Dox treatment (Fig. 5B). The effect of the miR-146/CTC axis on chemoresistance was further evaluated by blocking CTC. Knockdown of CTC decreased PTC cell sensitivity to Dox and 5-Fu treatment. Overexpression of miR-146 enhanced the effect of the CTC siRNA on chemoresistance, whereas knockdown of miR-146 removed the effect of the CTC siRNA on chemoresistance to 5-Fu and Dox (Fig. 5C,D). Next, we evaluated whether CTC regulates cell chemoresistance via KIT.
Results showed that overexpression of KIT enhanced the CTC-mediated inhibition of chemoresistance; conversely, knockdown of KIT reduced the CTC-mediated inhibition of cell chemoresistance to 5-Fu and Dox (Fig. 5E,F). Similarly, overexpression of KIT abrogated the chemoresistance induced by CTC knockdown (Fig. 5G). Knockdown of both CTC and KIT synergistically induced chemoresistance (Fig. 5H). Collectively, these results suggest that CTC decreases the resistance of PTC cells to chemotherapy drugs via the miR-146/KIT axis. Expression of CTC, miR-146, and KIT in clinical samples. To elucidate whether the CTC/miR-146/KIT axis could be an important pathway in deciding the outcome of PTC, we first analyzed the expression of CTC, miR-146 and KIT in five normal thyroid (NT) tissues and seven human PTC cell lines (K1, K2, KTC-1, KAT-5, TPC-1, BHP5-16, and BCPAP). The expression levels of CTC and KIT were lower in the PTC cell lines than in NT tissues, whereas miR-146 expression was higher in the PTC cell lines than in NT tissues (Fig. 6A-C). Further studies explored the clinical relevance by comparing CTC, miR-146 and KIT expression in clinical samples using qRT-PCR. As shown in Fig. 6D-F, the expression of CTC and KIT was downregulated and the expression of miR-146 was upregulated in PTC tissues compared with the corresponding adjacent nontumorous tissues (ANT). Moreover, the ROC curves illustrated strong separation between the PTC tissues and the ANT (Supporting Fig. 5). Correlation analysis indicated that high CTC expression was positively correlated with high levels of KIT (Fig. 6G-I). By contrast, an inverse correlation was observed between upregulated miR-146 expression and downregulated CTC and KIT expression (Fig. 6G-I). Taken together, these results provide evidence that the CTC/miR-146/KIT axis might contribute to the progression of thyroid cancer. Discussion In the present study, for the first time, we described a novel signaling pathway mediating the migration and invasion of PTC cells. In this signaling pathway, CTC suppressed the migration and invasion of PTC cells. Mechanistic studies demonstrated that CTC interacts with miR-146, thereby disrupting miR-146 binding to the 3′-UTR of KIT. In addition, the CTC/miR-146/KIT axis regulated PTC chemoresistance. Human miR-146 genes have two members, miR-146a and miR-146b, located on chromosomes 5 and 10, respectively 30 . Although miR-146a and miR-146b are encoded on different chromosomes, there are only two nucleotides that differ between the 3′ ends of miR-146a and miR-146b 31 . Owing to these limited structural differences, miR-146a and miR-146b are predicted to bind the 3′-UTRs of the same genes. A previous study showed that miR-146a functions in inflammatory cytokine production and stem cell generation 32 . Importantly, several studies confirmed that miR-146a and miR-146b in PTC tissues could be useful biomarkers for the molecular diagnosis of PTC 25,26 . Increasing numbers of studies suggest that lncRNAs function as miRNA sponges to regulate cancer development 33 . In this study, in accordance with the bioinformatics predictions, we found that lncRNA-CTC is a potential target of miR-146. Further luciferase reporter and RNA pull-down assays confirmed that CTC sponges miR-146, thereby suppressing PTC cell proliferation and invasion.
Interestingly, CTC regulated miR-146 expression, but miR-146 did not in turn regulate CTC expression, suggesting that miR-146 is a downstream molecule in CTC-mediated signaling. In clinical samples, an inverse correlation was observed between CTC and miR-146 levels. These clinical data further support our conclusion that there is a target relationship between CTC and miR-146. Because all clinical samples were collected at one hospital, more PTC tissues should be investigated to compensate for this limitation and strengthen the robustness of this clinical finding. MiR-146 is a key modulator of the immune response 32 ; however, it remains unclear whether CTC participates in the immune response via miR-146. Illuminating the role of CTC in the immune system could help further clarify CTC function. KIT, a member of the type III receptor tyrosine kinase family, is a tyrosine kinase receptor 34 . Its downstream pathways include the RAS/MAPK and PI3K/AKT cascades. KIT functions as an oncogene in many cancers 35 . A previous study reported that miR-146 was upregulated in PTC tissues and that it regulated PTC progression by binding the 3′-UTR of KIT 25 . Altered KIT activity is a hallmark of PTC; however, its biological significance remains unknown. We found that CTC regulated KIT expression via miR-146. We also found that CTC inhibited PTC cell migration and invasion via directly binding miR-146. However, we believe that this mechanism is only the tip of the iceberg. One lncRNA can sponge several miRNAs simultaneously. Similarly, one miRNA can have several potential target genes. In other words, the CTC/miR-146/KIT axis may not be the only pathway targeted by CTC in PTC. Although further studies are needed to explore the other potential miRNA partners of CTC, we nevertheless believe that CTC might be a candidate biomarker for the progression of PTC. We also believe that interruption of CTC/miR-146/KIT signaling might point the way toward suppressing PTC progression. We propose a working model of the role of CTC in the proliferation and invasion of PTC cells (Fig. 7). In this model, miR-146 promotes PTC migration and invasion by targeting the KIT 3′-UTR (Fig. 7A). However, CTC regulates proliferation and chemoresistance in PTC cells by interacting with miR-146 (Fig. 7B). As a result, miR-146 cannot target the 3′-UTR of KIT or inhibit KIT expression (Fig. 7B). Subsequently, the de-repressed KIT inhibits proliferation and chemoresistance in PTC cells (Fig. 7B). Our findings may suggest efficient therapeutic approaches for PTC treatment. Samples and cases. PTC samples and adjacent nontumorous tissues (ANT) (located >3 cm away from the tumor) were collected from 125 patients who underwent surgery at Gannan Medical University and Nanjing University of Chinese Medicine from April 2010 to April 2019. Two certified pathologists independently examined all PTC tissues to confirm the histological subtype. ANT were confirmed to be free of contamination by cancer cells. Major inclusion criteria were as follows: (1) patients with pathologically confirmed PTC and without other malignancies; (2) patients who had not received any preoperative treatment; and (3) patients with complete clinical data, including age, sex, race, tumor size, and local invasion. Exclusion criteria were as follows: (1) patients with other kinds of malignant tumors; (2) patients who were diagnosed with malignant lymphoma or with poorly differentiated and anaplastic carcinomas; and (3) patients with stroke, organ transplantation, or heart failure. Tissue samples were divided into two parts.
One part was used for the histologic diagnosis by two independent expert pathologists. The other part was stored in liquid nitrogen for RNA extraction. Tumors were staged according to the American Joint Committee on Cancer (AJCC) pathologic tumor-node-metastasis (TNM) classification. The characteristics of the patients are described in Supporting Table 1. Five normal thyroid tissues, identified as normal tissues on histological examination, were collected from the contralateral thyroid lobe of patients with benign disease. Cells and reagents. All human papillary thyroid cancer cell lines, including KTC-1, K1, KAT-5, K2, TPC-1, BCPAP, and BHP5-16, were purchased from the Cell Bank of the Chinese Academy of Science (Wuhan, China). All cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco BRL, Grand Island, NY) supplemented with 10% fetal bovine serum (Gibco BRL, Grand Island, NY), penicillin (100 U/mL) and streptomycin sulfate (100 µg/mL). Cells were maintained at 37 °C in a humidified atmosphere of 5% CO2 and 95% air. Tumor formation in nude mice. Ten 6-week-old male BALB/c nude mice were used; approximately 2 × 10^6 shRNA-Ctrl or siRNA-CTC cells were injected into the scapular region of each mouse. The tumor volume was measured every 3 days by measuring tumor width and length. The formula was as follows: tumor volume = 1/2 × length × width^2. Migration and invasion assays. Migration and invasion assays were performed as described 36 . Briefly, cells were collected and plated in the upper chamber (Corning) coated with (invasion) or without (migration) a Matrigel membrane matrix (BD Biosciences). Medium with 10% FBS was added to the lower chamber. Twenty-four hours later, cells were fixed with 4% formaldehyde and stained with crystal violet (Merck, Darmstadt, Germany). Cells were counted under a microscope (Olympus, Tokyo, Japan) at 200× magnification. The number of cells was the average value from six representative fields. Transfection and luciferase reporter gene assays. Cells were grown to 80% confluence prior to transfection using Lipofectamine 3000 in accordance with the manufacturer's instructions (Invitrogen). Twenty-four hours later, cells were serum-starved for another 24 hours. Luciferase activity was measured using a dual-specific luciferase assay kit (Promega, Madison, WI), and Renilla luciferase activity was used as the internal control. (Figure 6 legend, in part: (D-F) expression in PTCs and their corresponding adjacent nontumorous tissues (ANT) (n = 125); (G-I) the relative lncRNA CTC and miR-146 levels (G), the relative lncRNA CTC and KIT levels (H) and the relative miR-146 and KIT levels (I) in the PTCs were subjected to Pearson's correlation analysis; box plots illustrate medians with 25th and 75th percentiles and error bars for the 5th and 95th percentiles; for (D-F) the lowest value was designated as 1, and the lncRNA CTC, miR-146 and KIT data are expressed as fold induction relative to the lowest value; **P < 0.01, *P < 0.05.) For cellular mRNAs, total RNA was extracted using TRIzol reagent according to the manufacturer's instructions (Invitrogen, Basel, Switzerland). One microgram of total RNA was used as a template for reverse transcription with random primers and M-MLV Reverse Transcriptase (Promega). We performed qRT-PCR using the SYBR Green system (Applied Biosystems), with GAPDH as the internal control. The procedure was performed on a LightCycler 480 (Roche), and sequencing was used for the verification of the qRT-PCR products. Primers used in this study are listed in Supporting Table 2. Western blot analysis.
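To make the two quantitative conventions in the methods above concrete, here is a minimal Python sketch. The caliper formula completes the truncated expression in the text with its standard form; the 2^-ΔΔCt calculation is an assumption on my part, since the paper states only that SYBR Green qRT-PCR with GAPDH as the internal control was used and that the vector control was designated as 1; and all numeric values are hypothetical.

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    # Standard caliper formula, completing the truncated expression in the
    # text: V = 1/2 * length * width^2
    return 0.5 * length_mm * width_mm ** 2

def fold_change_2ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    # Assumed 2^-ddCt relative quantification; the vector control comes out
    # as exactly 1 by construction, matching the figure-legend convention.
    ddct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2.0 ** -ddct

print(tumor_volume_mm3(10.0, 6.0))                # 180.0 mm^3
print(fold_change_2ddct(24.0, 18.0, 26.0, 18.0))  # 4.0-fold vs. the control
```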
Western blot analyses were performed as described 36 . Briefly, cells were harvested and lysed in radioimmunoprecipitation assay (RIPA) buffer (Cell Signaling Technology, 9800). Protein concentrations were quantified using BCA assays (Cell Signaling Technology, 7780). Cell lysates (40 μg) were electrophoresed on 12% SDS-PAGE gels and then transferred to nitrocellulose membranes (Bio-Rad), which were subsequently blocked with 5% (w/v) nonfat dried milk. One hour later, blots were incubated with primary antibodies, as indicated in the figures, overnight at 4 °C. Subsequently, immunocomplexes were incubated with horseradish peroxidase-linked secondary antibodies (Jackson ImmunoResearch), and the blots were then developed using an enhanced chemiluminescence system (GE Healthcare). Statistical analysis. GraphPad Prism 5 software (GraphPad Software, La Jolla, CA, USA) was used for statistical analyses. Parametric data were analyzed using a two-tailed t-test. Nonparametric data were analyzed using the Mann-Whitney U test. Means are illustrated using histograms, with error bars representing SD or SEM, and P < 0.05 was considered statistically significant. Data availability. All data generated or analyzed during this study are included in this article.
Reactivity of He with ionic compounds under high pressure Until very recently, helium had remained the last naturally occurring element that was known not to form stable solid compounds. Here we propose and demonstrate that there is a general driving force for helium to react with ionic compounds that contain an unequal number of cations and anions. The corresponding reaction products are stabilized not by local chemical bonds but by long-range Coulomb interactions that are significantly modified by the insertion of helium atoms, especially under high pressure. This mechanism also explains the recently discovered reactivity of He and Na under pressure. Our work reveals that helium has the propensity to react with a broad range of ionic compounds at pressures as low as 30 GPa. Since most of the Earth's minerals contain unequal numbers of positively and negatively charged atoms, our work suggests that large quantities of He might be stored in the Earth's lower mantle. Helium was long thought to be unable to form stable solid compounds, until a recent discovery that helium reacts with sodium at high pressure. Here, the authors demonstrate the driving force for helium reactivity, showing that it can form new compounds under pressure without forming any local chemical bonds. The noble gas (NG) elements, such as He, Ne, Ar, Kr, and Xe, were believed not to react with other elements for decades, due to their stable closed-shell electron configuration. Pauling 1 predicted that Kr and Xe may react with F and O, which was proved by Bartlett 2 , who found the first NG compound, the ionic Xe + [PtF 6 ] − . Since then, numerous NG compounds have been synthesized, both in molecular and solid form [3][4][5][6][7][8][9] . Electronic structure calculations have predicted many more [10][11][12][13][14][15][16][17][18] . Meanwhile, the modification of external conditions such as pressure has led to the successful formation of yet different classes of NG compounds [19][20][21][22][23][24][25] . In most of these compounds, NG elements are oxidized and form chemical bonds by sharing their closed-shell electrons. It is no coincidence that much of the recent progress on NG chemistry has been made in the area of high pressure, especially regarding unusual bonding features. This is due to the fact that high external pressure can drastically alter the chemical properties of elements [26][27][28] . Recent theoretical studies showed that Xe becomes easier to oxidize under high pressure; for example, Xe can form stable compounds with oxygen 18,29 . Even though these compounds have been found at ambient conditions, they are only metastable. Under pressures as high as those in the Earth's core, Xe can even be oxidized by Fe and form stable Fe-Xe compounds 30 . In contrast to the above studies, a recent investigation demonstrated that NG elements can also become oxidants and gain electrons while forming compounds with elements with low ionization energies such as alkali and alkaline earth metals 31,32 . In these compounds, NG atoms are negatively charged and play the role of the anions. It has also been revealed that high pressure promotes the formation of Xe-Xe covalent bonds in Xe 2 F compounds 33 . Furthermore, compounds formed between NG elements 19,34,35 and with other closed-shell systems have been reported: notably diatomic gases like Xe-H 2 36 and Xe-N 2 37 and closed-shell molecules like Xe-CH 4 38 .
Many NG elements are found or are predicted to form weakly interacting host-guest hydrates or clathrates [39][40][41][42] . In contrast to other compounds, these phases are bound by van der Waals forces. Under ambient conditions, only the heavier NG elements Xe and Kr and, to some extent, Ar, are found to be chemically reactive. Remarkably, Dong et al. 43 reported recently in a combined experimental and computational study that mixtures of sodium (as well as its oxide) with helium can be stabilized at high pressure. A detailed electronic structure analysis of the resulting compounds Na 2 He and Na 2 OHe showed that He neither loses electrons nor forms any chemical bonds. It is important to notice that the Na 2 He compound can be regarded as a high-pressure electride of the form (Na + ) 2 E 2− He, where E represents the interstitial sites (quasi-atoms) hosting a pair of electrons. Note that Sun et al. 44 proposed from calculations that He can react with many ionic alkali oxide or sulfide compounds under high pressure. A very recent work by Liu et al. 45 noticed the ability of He to form stable compounds with water molecules at high pressures. The origin of the stability of all the He-containing compounds above is not well understood 46 . Here we propose that helium has a general propensity to react with ionic compounds that contain an unequal number of cations and anions, e.g., A 2 B or AB 2 . Such compounds have large Coulomb repulsive interactions between the majority ions (cations or anions), which leads to two effects that favor reaction with helium. First, in the lower pressure range, these repulsive interactions prevent the formation of close-packed structures, thus leaving room for the insertion of helium atoms; this means that the reaction with helium can potentially be stabilized due to the large gain in the PV term (compression work). More importantly, with increased pressure, the Coulomb repulsion becomes even stronger. The presence of He can then, second, keep the majority ions farther apart and therefore lower the Madelung energy. We will examine a series of example systems and show that the combination of the two effects, namely the PV and the Madelung energies, favors reactions between helium and various ionic compounds, sometimes at quite moderate compression. For number-balanced ionic compounds (chemical formula AB), the above arguments do not apply, and we show that indeed helium does not react with several prototypical compounds. Through detailed energy analyses, we find that the eventual stabilities of the He (and Ne)-inserted ionic compounds depend on the balance of the above driving forces and the factors that counteract them. The reaction of He with a large number of ionic compounds shows very intriguing behavior, yet it can be explained within the framework of our theory. Our work reveals that chemically inert elements such as He can become reactive and form new compounds under pressure without the formation of any local chemical bonds. The reactivity of He with ionic compounds may have significance in geoscience. Earth has a finite supply of helium, and due to the light weight of these atoms, they tend to escape into space. It is therefore of significant interest whether mantle materials could store large quantities of helium. Previously, the miscibility of helium in the mantle has been considered very low due to the hitherto assumed inertness of the element.
However, as shown by our work, helium tends to insert into the lattices of ionic compounds with unequal cation and anion numbers at high pressure, which is a feature shared by most of the minerals in the Earth's mantle, indicating that they may store considerable amounts of helium. Of course, our calculations apply to the ground state, and the effect of elevated temperatures, inevitable inside the mantle, needs to be addressed. This is beyond the scope of the current work and will be investigated later. However, our results, which will be presented in a follow-up study, are in line with recent laboratory experiments that discovered significant uptake of He in SiO 2 glass as well as cristobalite [47][48][49], a high-pressure polymorph of quartz, in the pressure range 10-20 GPa. Results. Reactivity of helium with ionic compounds. In order to test our theory, we chose four ionic compounds, MgF 2 , MgO, Li 2 O, and LiF, and studied their reactivity with He under high pressure. These four compounds represent ionic compounds of AB 2 type, AB type with ±2 charge, A 2 B type, and AB type with ±1 charge, respectively. CaF 2 was also included in our study as it would reveal an important opposing mechanism caused by the occupation of the outer-shell d orbitals under pressure. For comparison, we also further investigated the reaction of Na with He, which can be viewed as the interaction of the ionic compound Na 2 E with He. We first searched for the most stable structures of these compounds with and without insertion of He atoms under pressures from 0 to 300 GPa. Then, the enthalpy change for the inclusion of He in these compounds was calculated in the same pressure range. The enthalpy differences for the reaction A-B + He → A-BHe were calculated as follows: ΔH r = H(A-B) + H(He) − H(A-BHe). Note the difference between ΔH r defined in this way and the usual reaction enthalpy: a positive ΔH r corresponds to an exothermal reaction, i.e., a thermodynamically stable A-BHe compound. The results of ΔH r as a function of pressure are shown in Fig. 1. Since the ionic compounds may undergo structural changes under increasing pressure, several ΔH r -P curves corresponding to different structures are shown. In contrast, the most stable structure of each A-BHe compound remains the same throughout the pressure range. Let us first compare the results of MgF 2 -He and MgO-He. The former compound has twice as many anions (F − ) as cations (Mg 2+ ), whereas in the latter, their numbers are equal. As shown in Fig. 1a, a 1:1 mixture of MgF 2 and He will become stabilized as a ternary compound, MgF 2 He, between 100 and 150 GPa (at an interpolated value of 107 GPa). At ambient pressure, MgF 2 He is 0.25 eV/atom higher in enthalpy than the constituents MgF 2 and He. However, at 300 GPa, MgF 2 He is about 0.05 eV/atom lower in enthalpy (Fig. 1a). We considered adding more He by calculating the stability of MgF 2 He 2 compounds as well (Supplementary Figure 1 and Supplementary Note 1). Although their enthalpy decreases by a small amount from 0 to 50 GPa, it then increases again at higher pressure, and ultimately no stable MgF 2 He 2 could be found. Therefore, MgF 2 -He compounds can only be stabilized within a limited composition range. In contrast to MgF 2 -He, MgO-He cannot form any stable compound at any composition ratio. For the 1:1 compound MgOHe, the enthalpy decreases by about 0.1 eV/atom from 0 to 50 GPa, but then increases with further increase of the pressure (Fig. 1b).
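Before turning to the reduced-He stoichiometries, here is a minimal Python sketch of how this ΔH r screening criterion can be evaluated, assuming per-formula-unit enthalpies taken from separate DFT runs; the numeric values below are hypothetical, not the paper's:

```python
def delta_h_r(h_ab: float, h_he: float, h_abhe: float, n_atoms: int) -> float:
    """dH_r = H(A-B) + H(He) - H(A-BHe), reported per atom of the product.
    Positive dH_r means the He-inserted compound is lower in enthalpy,
    i.e., thermodynamically stable against decomposition."""
    return (h_ab + h_he - h_abhe) / n_atoms

# Hypothetical enthalpies (eV per formula unit) for MgF2 + He -> MgF2He
# at some fixed pressure; MgF2He has 4 atoms per formula unit:
print(delta_h_r(h_ab=-20.00, h_he=-0.50, h_abhe=-20.70, n_atoms=4))  # +0.05 eV/atom
```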
Reducing the concentration of He to 50% (Supplementary Figure 1b), the enthalpy of MgOHe 0.5 does not decrease from the value at ambient pressure (+0.36 eV/atom) up to at least 300 GPa (+0.41 eV/atom). Now let us investigate the 2:1 binary ionic compounds. Li 2 O-He contains, in contrast to MgF 2 , twice as many cations as anions. However, the insertion of He has a very similar effect to that in MgF 2 . While the enthalpy of formation of the Li 2 OHe compound does not become negative with respect to Li 2 O and pure He at any pressure in the studied range, it does decrease from +0.25 eV/atom at 0 GPa to almost 0 eV at 300 GPa (Fig. 1c). Its ΔH is almost on the convex hull at all pressures above 100 GPa (see Supplementary Figure 1c), which agrees with the results of Sun et al. 44 . The reaction enthalpies of the stoichiometries Li 2 OHe 0.5 and Li 2 OHe 2 also decrease with increasing pressure, but both compounds remain unstable at all pressures studied. In contrast to Li 2 O-He, LiF-He compounds are not stable, and their reaction enthalpy increases with increasing pressure, i.e., pressure disincentivises the insertion of He in LiF lattices (Fig. 1d). We also tested the reactivity of He with CaF 2 , which has an anion:cation ratio of 2:1. The interesting feature of this compound is that it is the prototype of the fluorite structure; recall that the electride Na 2 E sublattice of Na 2 He can be interpreted as the antifluorite structure. For CaF 2 , a reaction with He does not cause a departure from the fluorite lattice, but results merely in the insertion of He in the octahedral interstitials of CaF 2 . The formation enthalpy of CaF 2 He with respect to CaF 2 + He shows an intriguing behavior (Fig. 1e): at ambient pressure it is unstable, but its formation enthalpy decreases and becomes negative (stable) at a pressure of about 30 GPa. At pressures higher than 50 GPa, the formation enthalpy increases again, becoming unstable at a pressure of about 110 GPa. The presence of He atoms helps stabilize the ionic compound, but only in the intermediate pressure range of 30-110 GPa. Lastly, we find in agreement with Dong et al. that Na 2 He becomes stable above 160 GPa and remains so up to the highest pressure studied (Fig. 1f). Structure changes and electronic properties. Now, let us analyze the trends in the structures of the compounds formed at high pressure. The most notable feature is that the A 2 BHe compounds were found to have the same stable structure with Fm3m symmetry at all pressures; see Fig. 2a for an example. This is the structure of full-Heusler compounds. It is also identical to the Na 2 He structure when the quasi-atoms (E) are considered to be the anions. The second lowest enthalpy structure usually had a symmetry group of Cmcm. Its enthalpy was about 0.6 eV/atom higher than the full-Heusler structure throughout the pressure range considered. As in the antifluorite structure, the B ions form a face-centered cubic (FCC) lattice, while the A ions occupy all the tetrahedral sites. This structure ensures that the first neighbor of any ion will be an ion of the opposite charge. The He atoms are inserted into the octahedral sites, thus also forming an FCC lattice. The A 2 B and AB 2 compounds also share similar structures at low pressure: Li 2 O and CaF 2 adopt the same CaF 2 -type structure at ambient pressure, and MgF 2 takes up the TiO 2 structure 50,51 .
However, these structures have large interstices, making for an inefficient packing, and they will not be thermodynamically favored under very high pressure. As pressure increases, all three A 2 B ionic compounds adopt more tightly packed structures where the distances between the closest A-B ions and A-A ions are nearly the same (see Supplementary Table 1 and Supplementary Figure 2). It is interesting that the A-BHe compounds also adopt the same high-symmetry structure throughout the pressure range. By studying the electronic structures of these compounds, we can quantitatively examine whether He forms any chemical bonds with the neighboring atoms and species in these inclusion compounds. First, we calculate the electron localization function (ELF), shown as cross sections in Fig. 3. ELF values close to 1 indicate a high probability of a fully occupied electronic state, such as a filled electronic shell or a covalent bond. As we can see in Fig. 3a, b for both MgF 2 He and MgOHe, the ELF has localized, distorted spherical shells around all atoms that are separated by regions of near-zero ELF. The lack of any local ELF maxima away from the atomic sites means that no covalent bonds form between He and the other atoms, nor between Mg and F in MgF 2 He, and Mg and O in MgOHe. The latter is expected, as the interactions between Mg 2+ and F − , and Mg 2+ and O 2− are dominantly ionic. A topological analysis of the charge distribution in both compounds 52 confirms this: at 300 GPa, the calculated Bader partial charges on Mg/F and Mg/O in MgF 2 He and MgOHe are +1.71/−0.83 and +1.64/−1.56, respectively; the He atoms in both compounds are essentially neutral (0.04 for MgF 2 He and 0.07 for MgOHe, respectively; Fig. 3). The major change to the chemical bonding upon insertion of He into the MgF 2 and MgO lattices is the change of ionic interactions, in other words Madelung energies, which will be discussed further below. The inertness of He in these He-salt compounds can also be demonstrated through the electronic projected density of states (PDOS). We calculate and compare three PDOSs for the MgF 2 and MgF 2 He compounds. First, we obtain the PDOS of Mg-s/p, F-s/p, and He-s states in MgF 2 He at 300 GPa. Second, we obtain the PDOS of Mg-s/p and F-s/p states in a contrived MgF 2 compound in which Mg and F atoms occupy the same positions as in MgF 2 He at the same pressure. We denote this compound as MgF 2 [He]. Third, we obtain the PDOS of Mg-s/p and F-s/p states in MgF 2 in its most stable structure (Pnma symmetry) at 300 GPa. The highest valence bands of all three compounds (Fig. 3c-e) are dominated by the F-2p states of approximately the same width (8-10 eV), and all exhibit very large bandgaps. The He-1s states are mostly located at −15 to −10 eV, but also to some degree around −3 eV, which could just be part of the F-2p states due to overlap of the atom-centered projection spheres. Most importantly, however, after removing the He atoms from MgF 2 He but keeping the structure unchanged (MgF 2 [He]; Fig. 3d), the F-2p states are almost unchanged. This implies that the interaction between He and other atoms in MgF 2 He is very small, and there is no hybridization and no chemical bond formation. More detailed discussions of the effects of He insertions on the electronic and atomic structures of ionic compounds can be found in the Supplementary Information (see Supplementary Notes 2 and 3 as well as Supplementary Figures 2 and 3). The driving force of He insertion.
Now we will focus on the mechanism of why stable compounds of He with ionic compounds form under pressure. The key issue is why He forms stable compounds with 1:2 (or 2:1) ionic compounds but not with 1:1 compounds. The reason for this can be more easily explained using an example in one spatial dimension. In Fig. 4, we present a very simple, one-dimensional (1D) representation of ionic crystals. The figure shows that in a 1D ionic compound with cation:anion ratio of 1:1 (AB type), the cations and anions are arranged in an alternating fashion; for fixed atomic separation (determined also by the repulsive interactions among atoms in real materials), this is the state with the lowest Madelung energy. If such a compound forms a mixture with NG atoms, the average distance between A and B must increase, increasing the Madelung energy. As a result, the products of AB-type compounds and NG elements will be less stable than the separated phases. On the other hand, for 1D ionic compounds with 2:1 ratio (A 2 B type), the ground state contains units of A-B-A (here, we take A as a +1 cation and B as a −2 anion) that repeat infinitely. At the interface of two A-B-A units we will have two A atoms repelling each other. Thus, when NG atoms are inserted in between two A ions, the distance between these two A ions increases, which lowers the Madelung energy, making the structure more stable. The 1D ionic chain model based purely on Coulomb interactions can be solved analytically (see Supplementary Note 4 and Supplementary Figure 4) and confirms that the insertion of He in A 2 B-type compounds will lower the Madelung energy, whereas the insertion in AB-type compounds will raise the Madelung energy. As revealed by the density functional theory calculations and the subsequent analysis of the electronic and structural properties of real He-inclusion materials, the essence of the mechanism of their stabilization appears to be a modification of electrostatic interactions, i.e., the change of the Madelung energy. This theory is revealed clearly by the simple 1D picture just introduced. However, when discussing the stability of real three-dimensional (3D) materials under pressure, many other factors need to be considered, which will somewhat obscure the above simple argument. Obviously, the effect of the insertion of helium is much smaller in 3D materials because the interstitial sites are naturally larger. Interestingly, both He-inserted AB 2 and AB types of ionic compounds show high-symmetry lines in their structures (Fig. 2c, f) with the same pattern as we show in Fig. 4. In order to study the effect of the insertion of He atoms in ionic compound lattices, we discuss separately the two enthalpy contributions of PV work and internal energy, i.e., H = E + PV. We then monitor the changes ΔE and Δ(PV) upon the insertion reaction, i.e. between the constituents and the product compound. We calculate and plot the two terms as functions of pressure in Fig. 5 for all compounds considered (see more compounds in Supplementary Note 5 and Supplementary Figure 6). It is obvious that Δ(PV) is zero at ambient conditions (P = 0). For reactions involving AB 2 or A 2 B compounds, Δ(PV) quickly drops to significantly negative values as a function of pressure. It becomes about −0.2 eV/formula unit for the Li 2 O and MgF 2 compounds and −0.5 eV/formula unit for Na 2 E beyond 50 GPa. CaF 2 is an exception, with Δ(PV) slightly lower than zero at 50 GPa and positive at higher pressure.
In contrast to AB 2 -type compounds, the value of Δ(PV) for AB compounds is mostly positive, except for a slightly negative value at low pressure (50 GPa). The different behaviors of Δ(PV) are caused by the different volume changes for A 2 B and AB compounds during the reaction with He. This volume change ΔV is summarized in Fig. 6. It shows that the insertion of He into the lattice of both A 2 B and AB types of compounds reduces the overall volume at low pressure, i.e., ΔV < 0; it is advantageous (purely from a PV work perspective) to store helium inside the compounds instead of as separate constituents. However, the volume reduction is much more significant for A 2 B-type compounds. At ambient pressure, ΔV/formula unit is −0.6 and −0.75 Å 3 for MgF 2 and Li 2 O reacting with He, respectively. In contrast, ΔV is only about −0.1 Å 3 for MgO and −0.03 Å 3 for LiF reacting with He. This distinct difference between A 2 B and AB types of compounds originates ultimately from the different balance of Coulomb interactions (Madelung energies) of the two types of compounds. As illustrated in the 1D model above, there is strong A-A repulsion in A 2 B compounds. As a result, A 2 B compounds assume larger volumes per atom at low pressure to minimize these repulsions, thereby leaving more room for the insertion of He in their lattices. However, the He-inclusion compounds all seem less compressible than the constituents: for both A 2 B and AB compounds, ΔV increases with increasing pressure and eventually, for MgO, Li 2 O, LiF, and CaF 2 , becomes positive at sufficiently high pressure. That means that the He-inserted lattice has a larger volume than the separate constituents, the ionic compound and He. In contrast to the Δ(PV) term, the insertion of He in the lattice of both AB- and A 2 B-type compounds causes large increases of the internal energies at ambient and low pressures, ΔE > 0 (Fig. 5). This is due to the disturbance of the electronic structure of the ionic compounds caused by insertion of the NG element. At lower pressure, the gain in Δ(PV) is not large enough to overcome the large increase of internal energy upon insertion of He. Therefore, at lower pressure, He cannot react with ionic compounds regardless of the cation:anion ratio. Under increasing pressure, the internal energy balance ΔE for He insertion decreases significantly. Although this is generally true for both AB and A 2 B ionic compounds, the decrease of ΔE is more remarkable for the latter (Fig. 5). For example, ΔE changes from 1.05 eV/formula unit at 0 GPa to −0.02 eV/formula unit at 300 GPa for MgF 2 , whereas it only changes from 1.08 eV/formula unit at 0 GPa to 0.75 eV/formula unit at 300 GPa for MgO. A similar trend can also be found in the Li-based compounds, except that ΔE actually increases in the pressure range from 0 to 50 GPa for LiF. Because Δ(PV) either changes only slightly or increases with increasing pressure, it is indeed the dramatic decrease of the internal energy change ΔE that eventually leads to the stabilization of A 2 BHe compounds at sufficiently high pressure. What causes this decrease of ΔE upon He insertion? One major factor is the change of the Madelung energy as explained in detail for the 1D model. That change of the Madelung energy ΔE M can be calculated by assigning effective charges to each atom in the crystals of both the pure ionic compound and the He-inclusion compound. The Bader charges are used as the effective charges for the ions.
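Before turning to those results, the 1D point-charge argument invoked here can be checked numerically with a short finite-chain calculation. The Python sketch below is not the authors' analytic Supplementary model: it assumes unit charges and unit ion spacing, treats the He atom purely as a neutral spacer that widens the inter-unit gap, and approximates the Madelung sum by the per-unit energy of a long but finite chain.

```python
import itertools

def coulomb_energy(sites):
    """Pairwise Coulomb energy of point charges on a line (units of e^2/d = 1)."""
    total = 0.0
    for (q1, x1), (q2, x2) in itertools.combinations(sites, 2):
        total += q1 * q2 / abs(x1 - x2)
    return total

def chain(pattern, n_units, gap):
    """Repeat a unit-cell charge pattern with spacing 1 between ions and an
    extra 'gap' between units, mimicking an inserted (neutral) He spacer."""
    sites, x = [], 0.0
    for _ in range(n_units):
        for q in pattern:
            sites.append((q, x))
            x += 1.0
        x += gap  # gap = 0: pristine chain; gap = 1: one He-sized spacer
    return sites

N = 300  # finite chain; per-unit energies converge as N grows
for label, pattern in (("AB ", (+1, -1)), ("A2B", (+1, -2, +1))):
    e_pristine = coulomb_energy(chain(pattern, N, gap=0.0)) / N
    e_spaced = coulomb_energy(chain(pattern, N, gap=1.0)) / N
    print(f"{label}: {e_pristine:.4f} per unit -> {e_spaced:.4f} with spacers")
# Expected trend: the spacer makes the AB chain energy less negative (He
# insertion is penalized) but makes the A2B chain energy more negative
# (He insertion relieves the boundary A-A repulsion), as argued in the text.
```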
The results for ΔE M , for all compounds and pressures, are also shown in Fig. 5. It is obvious that in general ΔE M behaves very similarly to ΔE under increasing pressure. The correlation between ΔE M and ΔE indicates that the drastic decrease of the latter under pressure is indeed caused by the change of the Madelung energy. The only major exception occurs in the low-pressure region of Na 2 He. This is not surprising because Na is not an electride at lower pressure (<200 GPa). Interestingly, Na can form a stable compound with He at 150 GPa, before Na itself becomes quasi-ionic. This can be explained by the theory based on the electrostatic interaction because there is a strong interplay between the electride state and the He insertion. As He is inserted into the Na lattice, it will increase the size of the interstitial sites. Therefore, the quantum orbital energy at the interstitial sites will be lowered, which will help the formation of an electride 53 . In turn, the large local charges in any electride phase will stabilize the insertion of He in the lattice. The correlation of ΔE M and ΔE is not perfect even for compounds consisting of very hard ions. There are several reasons for this. First, it is hard to truly determine the effective charge of an ion in the compounds. The nominal charges are integer numbers and they are usually much larger than the actual charges and the Bader charges. As a matter of fact, these charges may also change with pressure (see Supplementary Figure 5). However, reasonable variations of the charge values, e.g., by using different calculation methods, do not alter the conclusions drawn here. Second, a simple spherical charge distribution model may not work very well for ionic compounds, especially under pressure (notice the non-spherical ELF isosurfaces in Fig. 3a, b). Third, there might be large contributions to the internal energy beyond the Madelung energy. The insertion of He in the ionic compounds increases their lattice constants while at the same time also blocking the respective interstitial area for other electrons' wavefunctions. The overall effect may raise or lower the kinetic energy of the electrons of the filled anion shells and further influence the internal energy. Lastly, for ionic compounds consisting of heavier ions, such as CaF 2 , the change of the internal energy may have a turning point and again increase with pressure, opposite to the trend of the Madelung energy. This counteracting factor will be discussed in detail below. Opposing factors to He insertion. In this section, we will examine the question of why He-inserted AB 2 or A 2 B ionic compounds are sometimes not stable even though the reaction potential from the Madelung energy is already significant. For instance, as shown in the previous section, the reaction enthalpy of He + Li 2 O decreases with increasing pressure but never becomes negative. Although the Madelung energy and the internal energy both decrease with increasing pressure while He is inserted into the Li 2 O lattice, they never form stable compounds. Furthermore, CaF 2 forms a stable compound with He but only in a limited pressure range from 30 to 110 GPa. In this case, higher pressure destabilizes the He-inserted ionic compound. Such behavior is also shown in an earlier work of Sun et al. for a number of alkali chalcogenides. For example, we find K 2 S to form a stable compound with He in the pressure range from 1.5 to 6.1 GPa (1.3 to 5.8 GPa in the work of Sun et al. 44 ).
We will first investigate the unusual behavior of He insertion into the CaF 2 lattice. First of all, its volume change ΔV increases dramatically with increasing pressure (Fig. 6e). Therefore, although its Madelung energy evolution would stabilize the He insertion, the overall formation enthalpy starts to increase at pressures beyond 50 GPa, and the He insertion is no longer favored at pressures above about 110 GPa. This distinct behavior compared to the other ionic compounds is due to the fact that the energy of the Ca-3d orbital is lowered under high pressure, and it becomes partially occupied. This alters dramatically the simple picture of He insertion into this ionic compound. As shown in Fig. 7a, the occupation of the Ca-3d orbital increases from about 0.1 at ambient pressure to 0.3 or 0.4 at 100 GPa. Correspondingly, the charge transfer from Ca to F decreases. As a matter of fact, the Bader charge of Ca in both the CaF 2 and CaF 2 He compounds decreases from about 1.65 e at 0 GPa to about 1.45 e at 300 GPa. In comparison, the Bader charge of Mg (in MgO and MgOHe) changes much less, from about 1.76 e at 0 GPa to 1.73 e at 300 GPa. A significant charge transfer from F to Ca-3d orbitals leads to greatly reduced repulsive interactions among F − anions, which lowers the volume of CaF 2 under pressure. The overall effect is ΔV > 0 for the helium insertion reaction, which is thus disfavored under pressure. Furthermore, the occupation of the 3d orbitals can lower the kinetic energy under high pressure, because the 3d orbital can largely penetrate into the core region due to the lack of core states with the same angular momentum. This gain in kinetic energy is more significant for CaF 2 than CaF 2 He since the former compound is more closely packed. This explains why the internal energy difference ΔE increases slightly when the pressure is higher than 150 GPa, opposite to the decreasing trend of ΔE M . The different behavior of CaF 2 illustrates that the insertion of He into ionic compounds might be complicated by other factors if the constituent species are heavily polarized. Similar effects arising from the occupation of an orbital with higher angular momentum can also be seen in the Li 2 O compound. The Li atom has an electron configuration of 1s 2 2s 1 . However, under high pressure, some of the electrons will be transferred into the 2p orbital. As shown in Fig. 7b, the occupancy of the 2p orbital increases from about 0.2 or 0.3 at 0 GPa to 0.9 or 1.3 at 300 GPa. Due to the lack of any lower-shell p orbital, the 2p orbital has no radial node and can largely penetrate into the core region. This essentially reduces the size of the Li ions, which eventually leads to the positive ΔV in Fig. 6c. Furthermore, our proposed He insertion mechanism and the opposing factors are readily applied to many other A 2 B or AB 2 compounds as well as to Ne insertions in ionic compounds. Several examples are discussed in the Supplementary Information (see Supplementary Notes 5 and 6, as well as Supplementary Figures 6, 7, and 8). In summary, we propose that chemically inert elements such as He have a prevalent propensity to react with ionic compounds that have unequal numbers of cations and anions. The He atoms do not form any chemical bonds with the ions in the compounds. However, the insertion of He atoms will lower the otherwise strong repulsive Coulomb interactions between the majority ions with the same charge, and therefore lower the Madelung energy.
We also show that the recently discovered reactivity of He with Na originates from the same energetic driving force. Methods. Structure search. In order to test our hypothesis that the insertion of He atoms can lower the Madelung energy of certain types of ionic compounds, we selected a number of compounds with different cation:anion ratios: Li 2 O (2:1); LiF (1:1); MgF 2 (1:2); MgO (1:1); and CaF 2 (1:2), as the test compounds reacting with He. Extensive crystal structure searches were conducted by use of the particle swarm optimization algorithm implemented in CALYPSO (Crystal structure AnaLYsis by Particle Swarm Optimization) [54][55][56][57] . A series of efficiency-improving techniques available in the code were employed, including symmetry constraints, the bond characterization matrix, and the coordination characterization function. The effectiveness and the efficiency of this crystal search method have been proven by numerous earlier calculations. With the aid of this powerful method, we obtained the predicted stable structures of the above selected ionic compounds and the products of reactions between them and helium. We selected a pressure interval from 0 to 300 GPa and 100 GPa pressure steps for the structure predictions. Formation enthalpy and electronic structure calculation. The formation enthalpy and electronic properties of products were calculated by DFT as implemented in the VASP 58 package, in which the generalized gradient approximation within the framework of Perdew-Burke-Ernzerhof 59 describes the exchange-correlation functional and the projector augmented wave method 60,61 was used to describe electron-ion interactions. For Li (Na, Mg, Ca), the 1s (2s) states were included in the valence. The plane wave cutoff energy was set to 900 eV. k-point meshes with an interval smaller than 2π × 0.05/Å were used for the ab initio calculations, and the enthalpies are converged to within 1 meV/atom. Madelung energy calculation. The Madelung energy was calculated using a Fourier method that is implemented in the VESTA 62 program. There are two important parameters: the radius of the ion spheres and the Fourier coefficient cutoff frequency. The charge-density distribution ρ(r) of an ion is defined inside a sphere as ρ(r) = ρ0 [1 − 6(r/s)^2 + 8(r/s)^3 − 3(r/s)^4] for r < s, and ρ(r) = 0 otherwise, where s is the radius of the sphere. The sphere has to be smaller than half of the interatomic distances. It is determined by testing the convergence of the Madelung energy, a standard procedure as recommended by the VESTA program. The Fourier coefficient cutoff frequency for the long-range Coulomb potential is set as 2/Å for all the calculations. This is also a value recommended by the program. Bader charge calculation. The calculation of the electron population was performed using the Bader Charge Analysis code developed by the Henkelman group at the University of Texas at Austin 63 . While calculating Bader charges, we found that the charges on the He atoms, although very small, are not exactly zero. We would like to point out that this does not mean there is actual charge transfer during the insertion of He into ionic compounds. The He-1s orbital is fully occupied and the 2s orbital is much higher in energy. Therefore, there is no quantum orbital available for any electron transfer. However, when one calculates charges using the Bader analysis, the charge enclosures around He atoms are determined by the zero-flux surfaces of the total charge density.
Even though He atoms form no bonds with the surrounding atoms, the total charge density is the superposition of the He electrons and the electrons of neighboring ions. Therefore, the charge enclosed around a He atom may differ slightly from 2. The Bader charge of He in Na2He is even higher because the charge in the interstitial sites (quasiatoms) overlaps more strongly with the He atoms.
Social Investment, Economic Growth and Labor Market Performance: Case Study—Romania A few years have passed since the financial crisis began with the bankruptcy of the American bank Lehman Brothers, and few dare predict when we will overcome the crisis. Chaotic human resource policy in the Romanian economy and complex taxation have lowered our chances of overcoming it. Excessive income tax, massive layoffs not always dictated by real needs in the private sector, government hesitation regarding the reorganization of an oversized public sector, and low productivity are only some of the obstacles to overcoming the crisis. People are a very important factor in the production process and in the success of a company. It is essential that modern organizations rethink their strategies, make long-term investments, and invest in people. Success and survival on the market greatly depend on the understanding of these facts, and managers must be aware of their importance. Introduction Investing in people, training and preparing them to acquire information and knowledge is, according to many economists, the most profitable investment for any society, which can only acquire prosperity from people's activity [1], especially the work of highly trained people. If human capital means professional expertise, skills and health, which enhance individual creative capacities and the ability to produce economic and social goods, allowing future income generation, then investing in human capital translates into higher productivity for the individual who owns such capital. Moreover, through high productivity, educational capital increases the value of labor and entails higher wages. Thus, education is regarded as a means of accumulating human capital. Furthermore, a higher level of training will increase workforce flexibility and allow for its better adjustment to labor market conditions. While educational capital describes individual abilities acquired through training in educational facilities or outside them, and biological capital summarizes health, in the form of individual physical abilities, specialists in this field have identified interdependent links between the two. Thus, health conditions the acquisition of educational capital, in the same way that a lack of economic resources generates an individual's inability to maintain and develop his or her educational capital. A lower educational level may also translate into reduced concern for, and ability to maintain, optimum physical health, which would cause a decline in health, and hence a decline in labor capacity and in human capital. We can therefore speak of a vicious circle generating permanent poverty. This is why, following the avalanche of information and knowledge, scientific research has driven unprecedented development of information technologies, communication systems and communication technologies in the knowledge-based society, and lifelong learning has become the informing and formative paradigm of the new millennium. The general objective of the paper is to review the educational system, as an element of human capital, and its relationship with labor resources in Romania, by reviewing employment and workforce productivity, exploring it from both the perspective of individual education levels and that of social policies promoted as strategies of investing in people.
Also, in describing and analyzing the effects of education on economic life, the secondary goal of this paper is to offer an overview of the Romanian situation which, corroborated with the emergence of a new outlook on intellectual capital, might generate an integrated approach to the development of human resource strategies, allowing an organization to meet its future goals by improving the quality of labor relations between the organization and its employees, as well as its strategies, policies and practices in recruiting, training, developing, managing the performance of, and rewarding its employees and managing relations with them. Theoretical concepts such as intellectual capital, human capital, knowledge- and resource-based strategic management, as well as strategic human resource management, highlight and lend concreteness to the research, the more so as the paper ends with a conclusions and proposals section. A Brief Review of the Literature Investment in people through competitive and efficient educational policies is more and more frequently envisaged in the specialized literature as a sure "root" source (Schultz) of economic growth, while education is regarded as the "strong heart" (Blaug) of human capital theory [2]. Education enhances individuals' capacity for lifelong learning, generating an increase in future productive competencies and in human capacities as a whole. Some voices say that development is an extension of human liberties, and that economic growth is not a goal in itself but just a means to extend such liberties. Therefore, education increases an individual's freedom to live a healthier, worthwhile life. Rather than being perceived just as labor resources, people should be supported to develop, motivated to perform, and appreciated for their worth. Nations no longer assess their economic power just in terms of Gross Domestic Product or population; more and more frequently they refer to the production power and innovation capacity of human capital, which some authors see as the intellectual and human force of a nation. Considered the most important source of wealth in the New Economy, intellectual capital rose to attention when it was observed that there were significant differences between the market value of a company and its net accounting value. Some 30 years ago, John Kenneth Galbraith suggested that this concept involves more than mere knowledge or pure intellect; it means action [3].
Specialists in areas such as economics, management and accounting have tried to define the concept of intellectual capital as closely as possible, given that it is "intangible". Thus, defined as a way to create value and as a resource in the traditional sense, intellectual capital is approached along three directions converging into a complex meaning: an accounting one, which treats it as an intangible asset, with estimates showing that 60%-70% of a company's worth is now given by such assets; a measurement one, which seeks adequate metrics for developing efficient methods of assessing intellectual capital; and, finally, a direction given by strategic management. Before the 1980s, the specialist literature in strategic management held that, in order to understand competitive advantage, the external environment of the organization is key. This started from the assumption that resources were evenly distributed and easily accessible to organizations in the same industry. Thus, the management's task was to identify the most intelligent ways of combining products and markets, based on factors including the power of substitution products, entry barriers, and the negotiating power of suppliers and buyers. In 1986, Barney developed four criteria for establishing the resources that can generate sustainable competitive advantage. From an analysis of these criteria (value to the customer, rarity, uniqueness and cost of copying), it emerges that the only resources that meet them all are intangible assets; it is therefore recommended that intangible assets be recorded as efficiently as possible in the structure of the organization, so that they can be intelligently managed [3]. Defined by Hugh MacDonald as "existing knowledge in an organization that can be a competitive advantage" [4], or by Leif Edvinsson and Pat Sullivan as "knowledge that may be converted to value" [5], intellectual capital includes three elements: human capital, structural capital and relational capital (Figure 1). The concept of "human capital" has been around forever, but use of the term as such, both in academic circles and in the professional environment, has become common only in the past 50 years. Moreover, interest in and the importance assigned to the concept have become evident in the increasing number of scientific papers on this topic over about half a century. Nowadays, the theory of human capital holds a special place in the economic sciences, with its own system of ideas and principles, books and reference research, and authors awarded the Nobel Prize for their work, such as Gary S. Becker in 1992 and Theodore W. Schultz in 1979. The wealth of studies and research on human capital demonstrates that the countries that invest more in human capital (in education, research, health) are also those that register the highest economic performance. This logic underpins the economic boom in the second half of the past century in some southern and eastern Asian countries (South Korea, Hong Kong, Singapore, Taiwan), which invested heavily in education [7]. Therefore, long-term economic development can only be obtained with solid investment in human resources. The first author to mention human capital, prefiguring the first signs of a theory, was Milton Friedman, whose Ph.D.
paper [8] of 1946 dealt with the incomes of professionals. Moreover, the specialist literature states that the theory of human capital is the fruit of research conducted in the 1950s by economists at the University of Chicago and Columbia University on the demand for education, the workings of the labor market, wage differences and many other topics [8]. Gary S. Becker, the uncontested leader of the human capital school, developed the theory of investment in human capital and the concept of the return on investment in human capital [7]. Becker thus built a complex theory of the role of education in economic growth. He classified human capital in the same way as physical means of production: additional investments in human capital, through education, training and medical treatment, yield increased productivity as the ultimate return. As for the main point of the research, namely focusing efforts on education and training as means of developing knowledge and skills, and on employment and productivity as effects of investment in educational capital, in 2001 the Organization for Economic Cooperation and Development gave the closest definition of the term human capital, referring to the sum of knowledge, skills, competencies and attributes embodied in individuals that facilitate the creation of personal, social and economic welfare. According to specialists, identifying and defining the components of human capital are not easy tasks, as they raise issues of definition and operationalization, which is why most authors consider educational capital (skills developed through training in school or outside it) and biological capital (physical abilities reflected in the state of health) to be the main elements of human capital. Two outstanding figures of human capital theory, Jacob Mincer and Gary Becker, refer to human capital in their works [7], especially in the educational sections, emphasizing the costs of training and education. Moreover, Blaug added to their theories, stressing that the individuals in a country should attain a minimal education level in order to become intelligent consumers and to benefit from the positive effects of the technological progress of their time [2]. Thus, education may be considered both a consumer good, providing multiple benefits to the population, and a direct investment in business. The rationale of this definition generated many concerns among economists, who have not, however, reached an agreement on the hypothesis. One thing is certain, though: investment is and should be ongoing, either to expand human capital through education or to maintain the existing stock of human capital with regular medical examinations; the idea of human capital as an investment is gaining more and more ground nowadays. The Need for Continuous Information and Education of the Population The European Union undertook to create better and more numerous work places. This commitment requires a strong partnership between the Member States, the regional and local authorities, the social partners, civil society and, especially, the European citizens. There is still a lot to achieve in important fields such as research, innovation and the knowledge-based society in order to create better and more numerous work places [9] in a continuously changing world. It is very important that the EU and each Member State invest in their most valuable resource: their citizens.
Eurostat [10] estimates that 24.512 million women and men in the EU-28, of whom 18.347 million were in the Eurozone (EA-18), were unemployed in September 2014. Compared with August 2014, the number of unemployed people went down by 108,000 in the EU-28 and by 19,000 in the Eurozone. Compared with September 2013, unemployment dropped by 1,818,000 in the EU-28 and by 826,000 in the Eurozone. Thus, in September 2014, the Eurozone recorded an 11.5% unemployment rate, stable compared with August 2014 but in decline compared with the 12.0% of September 2013. In the EU-28, the unemployment rate was 10.1% in September 2014, again stable compared with August 2014 but lower than the 10.8% of September 2013. Among the EU Member States, the lowest unemployment rates were recorded in Germany (5.0%) and Austria (5.1%), and the highest in Greece (26.4% in July 2014) and Spain (24.0%). Compared with 2013, unemployment rates dropped in twenty-one Member States, rose in six and remained unchanged in Belgium. The most substantial drops were recorded in Hungary (from 10.0% to 7.6% in the one-year interval August 2013-August 2014), Spain (from 26.1% to 24.0%) and Portugal (from 15.7% to 13.6%), and the highest rises in Finland (from 8.2% to 8.7%) and France (from 10.3% to 10.5%) (Figure 2). Moreover, recent information published in Eurostat Report 168/2014 of 4 November 2014 [11] describes the risk of poverty or social exclusion in the EU-28, with one person in four experiencing it. Thus, in 2013, 122.6 million people, or 24.5% of the EU population, were at risk of poverty or social exclusion; these people were in at least one of the following situations: at risk of poverty after social transfers (income poverty), severely materially deprived, or living in households with very low work intensity. The percentage of people exposed to the risk of poverty or social exclusion in the EU-28 in 2013, i.e., 24.5%, declined only slightly from the 2012 figure (24.8%), but was higher than in 2008 (23.8%). According to the data published by the statistical office of the European Union, in 2013 more than a third of the population was on the brink of poverty or social exclusion in five EU Member States: Bulgaria (48.0%), Romania (40.4%), Greece (35.7%), Latvia (35.1%) and Hungary (33.5%). At the opposite end of the scale were countries such as the Czech Republic (14.6%), the Netherlands (15.9%), Finland (16.0%) and Sweden (16.4%), where the lowest rates of poverty or social exclusion risk were recorded (Table 1). As a consequence, reducing the number of people exposed to this risk has become one of the main objectives of the Europe 2020 strategy [12], which, following the Lisbon strategy, was adopted by the European Council on 17 June 2010 and is the common EU agenda for the next decade, stipulating the need for a new growth pact that may bring about a sustainable economy through enhanced competitiveness and productivity, the principles underpinning a sustainable social market economy. More than ever, in a society affected by a deep financial and economic crisis [13], where thousands of people remain unemployed, education and lifelong personnel training give them a real chance of becoming competitive on the labor market and of (re)integrating into society, in terms of socio-professional and geographic mobility.
Investment in People in Times of Crisis, between the Possible and the Probable In order to obtain more money for the state budget, the Romanian government used the simplest solution: tax increases. The measure has direct implications for the labor market, one of the effects being to discourage the unemployed from seeking another job and, implicitly, from paying contributions to the state. "70% of the additional income of a rehired unemployed person goes to taxes and the forfeit of social benefits. Therefore, there are not enough stimuli for job seeking" [14]. The decline in labor productivity in 2009, despite the strong rise in unemployment, demonstrates that the reorganization of the economy took place in the private sector, where it was easier to carry out in human resources, but that it was not efficient. Moreover, a cost-reduction strategy based on personnel cutbacks is counterproductive, both in the public and the private sectors, as the costs of rehiring at a later date will exceed the savings made today through dismissals [14]. The public sector in Romania is one of the most oversized in comparison with other EU Member States. For example, in the first semester of 2010, the share was 19% in Italy, 21% in Great Britain and 26.5% in Poland. In other EU countries, this percentage is even lower. One explanation for this fact is the rate of workforce participation, which is lower in Romania than the EU average. In general, though, the public sector is definitely oversized, both in relative terms, in comparison with other EU countries, and in relation to the economic conditions in Romania. There are two issues the public sector is facing: raising efficiency and reducing costs. According to our National Agency for Employment, the unemployment rate at the end of July 2010 was 7.43% (679,495 unemployed), 1.13 percentage points higher than in July of the previous year. The minor month-on-month improvement (unemployment 0.01% lower in June 2010 compared with May 2010) is not a consequence of active employment measures, but of fewer eligible social security beneficiaries. The most numerous dismissals are anticipated in the extractive sector, the construction industry, IT, and the TV and radio industries. If the official figures confirm this annual average unemployment rate [15], Romania will rank below the EU average for this indicator (9.6%), midway among the 27 Member States, topped by Spain (unemployment rate 20.3%), Slovakia (16%) and Ireland (13.6%). An important study, ELLI 2010 (European Lifelong Learning Indicators), conducted by the German Bertelsmann Foundation, analyzed society in 27 European countries, among them Romania, based on 36 indicators related to the education coordinates developed by UNESCO. These are learning in order to know, defining formal education; learning in order to do, defining training for a job; cohabitation learning, with a major contribution to the structure of social cohesion; and learning in order to organize one's own life, in a permanent effort to gain as much useful information for personal development as possible. The Bertelsmann specialists think that designing education along these fundamental directions is the path to the welfare of 21st-century society [16], as they are indispensable in assessing the development level of a society.
According to this study, Romania got the lowest score of the 24 European countries for which the researchers analyzed formal education data. In the classification based on "cohabitation learning" and "learning in order to organize one's own life", our country ranks at the bottom of the scale, only ahead of Hungary and Bulgaria. Romania ranks better (24th) in "learning in order to do", meaning training for a job, but still with a poor performance. The lack of active, coherent policies to support the population's need for information and lifelong education [17] turns continuing education into a restricted option for many citizens who wish to keep up to date with the latest developments in a certain field but do not have the financial resources to purchase books or learning materials or to apply for vocational training. The March 2009 EC record indicates a series of measures called "good practice measures", which can be adopted by the Member States to ensure the sustainability of their economic activity. All in all, the EU Member States strongly stressed the importance of maintaining the number of employees through policies aiming at: (1) Supporting economic activities that are viable but have difficulties in accessing funds, by facilitating access to capital. The priorities were industries that had been strongly affected, such as the automotive industry, which many governments helped by operating a subsidy plan for the purchase of new automobiles. Other measures included accelerated depreciation of invested capital (Czech Republic) and unlocking state funds for employers, so that the latter could cover a fraction of their personnel costs. (2) Retraining and training programs. Here the measures varied according to the proposed objectives. France, for example, mostly encouraged professional retraining, while in Lithuania employers were encouraged to keep their employees. (3) Measures meant to reduce companies' expenses before the effective dismissal of their employees, among them technical unemployment or cuts in social insurance payments. (4) Expansion of the unemployment aid period and the encouragement of part-time activities. Austria, for example, expanded the part-time period from one year to two, and Germany gave bonuses for reduced working hours. (5) Targeted measures aimed at supporting low incomes, such as subsidies for electricity bills. Generally, these measures tend to focus on the above-mentioned areas, but are applied differently, in accordance with the economic structure, the existing economic situation and the governments' ability to finance them. A crisis actually means an imbalance between the affected components of a system, causing anomalies in the system's operation. Crises are generally necessary, because they represent an unavoidable sanction for errors of management [18]. Only systemic crises need corrective measures, as they might lead to the collapse of the system. The recent economic crisis affected Romania mainly through a 15%-17% decrease in export demand.
Theory recommends that, in times of crisis, the decrease in internal and external demand for products and services be balanced with public investment in infrastructure, education, health care and culture. It would have been a chance to do, somewhat under duress, what we have not been able to achieve for years: modernization of the transport infrastructure, of villages, schools, universities, hospitals, and so on. Unfortunately, the inappropriately promoted policies did not lead to such achievements [19]; moreover, they amplified the negative effects of the external crisis by creating an internal crisis of their own, and the situation tended to get out of hand. If we examine the situation in the field of education, we can see a strategic error in the relevant public policy [20], because the share of GDP allocated to investment in education under normal conditions of economic development is an essential condition for a country's prosperity. There is a direct connection between a country's level of development and quality of life and its investment in education and research [21], because all other resources are limited, except people's creativity and innovation capacity, which start and develop through educational and research processes. Measuring Economic Development In order to assess the impact of investment in education on economic development, the relevant specialists recommend, as a first step, an efficient cost/benefit analysis of investments in education, given that investments depend on medium- and long-term governmental economic policies and on diverse random factors in individuals' lives. Knowing the benefits and costs of investing in human capital, an economically sound decision will apply the cost/benefit analysis. This will help identify the effects of investing in human capital at both the individual (private) and the social level. Since intangibility is its defining characteristic, as indicated in all its definitions, direct measurement of human capital is difficult, and it is therefore estimated indirectly. Thus, the literature distinguishes between investigating the human capital stock and investigating investment in human capital, at the macro and micro levels. Micro-level analyses take into consideration individual decisions and their effects, while macro-level ones stress the role and importance of human capital in economic growth. It is important to mention an aspect reflected in both theoretical works and empirical analyses: the level or magnitude of human capital at a given time (human capital seen as a stock variable) versus the investment in, i.e., the accumulation of, human capital over a certain interval of time (human capital seen as a flow-type indicator). In the first instance the typical measure is the average number of school years for a specific population group, while in the second it is typically the tuition rate. Human capital measures are classified into two broad groups: monetary and non-monetary. Monetary methods assign a money value to the human capital stock, at both the individual and the aggregate level, which allows comparing the human capital stock with the physical one [22]. The most widely used monetary methods include the prospective (income-based), retrospective (cost-based) and integrated (a combination of the two) approaches, with the literature citing the prospective measure as the most efficient and as providing the best results.
Prospective methods are based on estimated future incomes or, rather, the estimated present value of an individual's future income flows, with or without consideration of living costs. Retrospective methods are based on the costs of human capital "production"; in other words, they consider the sum of education and tuition expenses, or determine the costs of reproducing human capital. After applying the two methods, the relevant authors recognized their limitations, and some tried to measure and assess human capital by combining the prospective and retrospective methods in order to combine their respective strengths and offset their weaknesses. Unlike the first category of human capital measurement, non-monetary methods measure human capital in terms of investment in education, without assigning money values to it. The most widely used indicators for this method include tuition rates, the average number of years in education, literacy rates, and the share of the active population graduating from different forms of education. The rationale of the method is that these indicators depend strongly on investment in education, which is a key factor in the creation of human capital. Thus, the educational indicators used are proxies for human capital rather than direct estimates of education. It is therefore evident that there should be differences between countries, depending on the indicators used and on the results generated by the different human capital measurement methods. It is not just the amount of tuition (average years) that differs between countries, but also the quality of each year of schooling (the cognitive skills acquired during the school years). To adjust the human capital function for differences in quality, specialists have suggested the use of educational inputs, country-specific rates of return on educational investments, or the direct testing of cognitive skills. Some argue that there is a degree of interdependence between the different methods of measuring human capital: inputs to the human capital production process are the basis for the cost-based (retrospective) method, while the income-based (prospective) method and the educational approaches are based on the effects of the human capital generating process. In practice, given the multiple visions of and perspectives on human capital, it is no surprise that different studies in the literature come up with diverging results for the effects of human capital on economic growth and development. Moreover, such effects have not been empirically validated, the lack of consensus being mainly due to the theoretical bases of the estimation methods, i.e., to the deficiencies present in each approach. Shortcomings may be of two types: either the method does not adequately reflect the key elements of human capital, or the data are of poor quality. An important indicator, measuring the effort that society is willing to make so that its members, classified according to certain age criteria, may attend certain educational programs and acquire a certain intellectual capital in line with the society's possibilities at a given time, is the percentage of public expenses for education (CPIB) in GDP in a chosen financial year.
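Before turning to GDP-level indicators, the prospective (income-based) method described above can be made concrete with a short sketch that discounts a future income stream to its present value; the income figures, horizon and discount rate below are invented for illustration and are not drawn from any of the cited studies.

```python
# Minimal sketch of the prospective (income-based) human-capital measure:
# the present value of an individual's estimated future income flows,
# optionally net of living costs. All numbers are hypothetical.

def present_value(incomes, discount_rate, living_costs=None):
    """Discount a stream of yearly incomes (optionally net of living costs)."""
    costs = living_costs or [0.0] * len(incomes)
    return sum((inc - c) / (1 + discount_rate) ** t
               for t, (inc, c) in enumerate(zip(incomes, costs), start=1))

# 30 years of earnings: a hypothetical graduate premium vs. a baseline wage.
baseline = [20_000] * 30
graduate = [28_000] * 30
r = 0.04  # assumed discount rate
print(f"PV baseline: {present_value(baseline, r):,.0f}")
print(f"PV graduate: {present_value(graduate, r):,.0f}")
print(f"PV of the education premium: "
      f"{present_value(graduate, r) - present_value(baseline, r):,.0f}")
```

The discounted premium is the quantity that the prospective method compares against the cost of acquiring the education, which is what the retrospective method tallies instead.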
Economic development is also typically expressed in terms of Gross Domestic Product, an indicator that, when used in a regional context, allows for the measurement of overall macroeconomic activity and economic growth and creates a basis for comparative analysis of regions [23]. A number of international initiatives have focused on this issue and, in August 2009, led the European Commission to adopt a communication titled "GDP and beyond: measuring progress in a changing world" [24], which outlines a number of actions aimed at improving and complementing the GDP. In the seasonally adjusted series, Romania registered a 0.3% decline of its GDP in QII 2014 compared with QI 2014 and a 0.5% increase in QI 2014 compared with QIV 2013; for the third quarter (QIII) of 2014, we can therefore expect a 1.9% economic growth compared with the previous quarter, according to the data published by the National Statistics Institute (INS) in early November 2014. On the other hand, looking at the national GDP development trend over time and at the percentage allocated to education, we note that, over the whole 2000-2007 interval of the analysis (Figure 3), the percentage for education was less than 5%, although the law provided for 6%. By comparison, in the other EU Member States over the same interval, 2000-2007, the average investment was about 5.1% of GDP, with broad variations, however, from one country to another. For instance, in 2007, the percentage exceeded 6% in the northern countries and Cyprus, while the other countries allocated less than 5% of their GDP (Table 2). The National Education Law [26], approved in early 2011, provided that a minimum of 6% of the gross domestic product of each year should be allocated to the funding of national education, out of the state budget and the budgets of the local governments. It also stipulated that educational establishments and institutions may obtain and use their own revenues independently. The same law stated that a share of the gross domestic product of each year should be allocated from the state budget to scientific research. The enforcement deadline for this measure has been postponed, based on claims that this funding rate would require an additional total budget effort of more than 46 billion lei in 2012 and 2013; the fiscal-budget strategy for 2011 therefore stated that the deadline would be extended to 2014. This explains Romania's position, ranking second lowest among the EU states in terms of allocations from the gross domestic product (GDP) for education in 2011, at 4.1%, the same as Greece. Only Slovakia (4% of GDP) and Bulgaria (3.6%) allocated a smaller share of their GDP to education in 2011, a Eurostat study reported [10]. Of the 4.1% allocated to education, Romania allocated 1.3% to pre-school and primary schools, 1.6% to secondary schools, and 0.9% to tertiary or higher education. Bulgaria ranks lowest among the EU Member States in terms of GDP allocation for education in 2011, according to the data made public by Eurostat [10]. In practice, Bulgaria spent only 3.6% of its GDP on education in 2011, of which 1.8% on secondary education. According to the same Eurostat source [10], in 2011, total public expenditure in the EU-27 was 49.1% of GDP, of which the part allocated to education was 5.3% of GDP, worth 347 billion euro. As a share of GDP, the highest allocations for education were in Iceland, 7.9% of GDP, Denmark (7.8%) and Cyprus (7.2%).
Table 2 shows the official statistical data reflecting the percentage of expenses for education in various European countries, the United States of America and Japan. Figure 4 is based on those data, i.e., the trend (2011 compared with 2007) in the interval in which CPIB lies, between 3.5% and 5.5%. For 2011, attention should be paid to the amounts spent on education in Denmark (8.75%), Malta (7.96%), Cyprus (7.87%), Iceland (7.36%), Sweden (6.82%), Finland (6.76%) and Norway (6.66%). In 2007, Romania spent 4.25% (after a series of years below 3.5%), which was still not satisfactory and under the legally stipulated 6%. This fact truly shows the "importance" that the authorities give to investment in intellectual capital [27], the most crucial investment that a nation can make. Romania must invest in education considering that, although there are some areas of excellence, 15%-20% of the population is below the elementary level of education (middle school), the World Bank country director for Romania, Elisabetta Capannelli, stated at the Bucharest Forum 2014, "Unlocking the Potential of Eurasia. Strategic Decisions on the New Silk Road", organized by the Aspen Institute Romania [28]. "There are areas of excellence in Romania (in education, editor's note), as for example foreign language skills, appreciated by the investors, but there are also areas of weakness. If we look at the Programme for International Student Assessment (PISA) tests for mathematics or sciences, for example, the results show weaknesses of the educational system. [...] 15%-20% of the Romanian population does not have basic education, middle school level", the World Bank representative said [28]. With an allocated budget slightly above 3% of GDP for education, compared with Sweden's 6.7% of GDP invested in education, and ranking 74th in the world in terms of the ease of doing business, the World Bank Office stressed that Romania has a wealth of opportunities that could be developed through incentives for the private sector to start initiatives and through a commitment of the authorities to follow through with reforms, so that the citizen may be the final beneficiary of such efforts. Table 3 shows the CPIB indicator sheet, with details about the calculation formula, the definition of the indicator and its other defining elements.

Table 3. Public expense for education, % of GDP (the CPIB indicator sheet).
Definition: The percentage of public expenses for education in the GDP in a certain financial year.
Unit of measure: %
Purpose: Shows the share of the annual financial income that the Government spends on education development.
Symbol: CPIB
Calculation method: The amount of total public expenses for education divided by the GDP of a certain financial year and multiplied by 100.
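The calculation method in Table 3 amounts to a one-line computation; the sketch below spells it out with hypothetical placeholder figures rather than official statistics.

```python
# Minimal sketch of the CPIB calculation method from Table 3.
# The spending and GDP figures below are hypothetical placeholders.

def cpib(public_education_expenses, gdp):
    """Public expense for education as a percentage of GDP."""
    return public_education_expenses / gdp * 100

# Hypothetical example: 25 billion lei spent on education, 600 billion lei GDP.
print(f"CPIB = {cpib(25e9, 600e9):.2f}% of GDP")  # -> CPIB = 4.17% of GDP
```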
The way in which education investments are made per country and per form of investment (state, government-dependent private, independent private and totally private) is shown in Figure 5. While we can justify public investment in primary and secondary education, where citizens acquire a package of educational services in accordance with the values and standards of a society at a certain time, along with the share of GDP that the society decides to invest, tertiary education is quite controversial. This applies mainly to under-developed or developing countries where tertiary education is free, because the best graduates will migrate to developed countries where they are better paid; in other words, these countries become suppliers of free intellectual capital, an unfair and immoral thing [30]. A return on the investment made in those top students should occur: either they should have to work a certain number of years in the country of study [31], or they should send a percentage of their income to their country of study and origin. The issue is not easy at all; it requires discussion of whether such supply of intellectual capital should become the object of international debate, and an answer to one question: how to find fair compensation for the supplying countries. Conclusions and Recommendations The percentage of GDP spent on education in a country is an essential indicator [32], which reflects that country's policy in the field of education. The value of this indicator provides information on how that country will look in the future. It is recommended that investment in education [33] increase during a crisis rather than in regular times. Highly trained human resources will represent the main production factor [34], generating innovation and creativity. In Romania, with an under-funded educational system, an increase in public investment, especially in middle- and high-school teachers, becomes a necessity [35], together with increased efficiency in the spending of education money. Investment in people, i.e., in their education and training, is the most profitable investment for any nation. The percentage of GDP that goes to such investment shows the importance given by the authorities to education, learning and research. The prosperous nations of the world have always paid great attention to education and allocated large shares of their GDP to it. Within the knowledge-based society, intellectual capital is the most important investment that a society makes, as it is superior to the classical resources of labor, nature and capital [36]. A great inequity arises between the countries that invest in the education of their own citizens, mainly in tertiary studies, and the countries that receive the brightest graduates for employment [37].
As to how to increase or maintain funding for education, the EU Member States show an increased interest in finding solutions to improve efficiency and promote fairness [38], a more difficult challenge in the context of the financial and economic crisis and of rising levels of public debt in particular. Their concern is not only to reach the required level and find the funding source, but also to develop a set of proposals for reform of the education system [39], raising questions about the future development of workforce competence, for the benefit of individuals and of society in general. Romania's concerns, in relation to current demographic trends as well as migration and brain drain, make it compulsory to integrate all socio-demographic categories into the labor market [40]: the educational integration of all young people, irrespective of their social, economic or cultural background, as well as of elderly people. Thus, investment in training and/or re-training of their knowledge and skills, in order to integrate them into the economic and social realities of the 21st century, has become a major concern of the state social policy system. I think that the think-tank idea of the Lisbon Council for Economic Competitiveness and Social Renewal, i.e., appointing human capital managers at the local and/or regional level to coordinate and implement efficient policies for human capital enhancement [41], would be good practice in Romania, too. In the regions where this practice has been successfully implemented, e.g., the cities of Bratislava, Helsinki and Stockholm, the key tasks of human capital development were taken over by informal networks, official agencies, coordinating groups, working groups, local NGOs or even enthusiastic individuals. With the role of designing, developing and implementing a human capital strategy for the region or municipality, human capital managers help focus the available resources on the points of greatest leverage. Thus, our community and the local space where we all develop, build social relationships and spend time with those near and dear to us, building a future, are good reasons why the attention and interest of political decision makers should be channeled in this direction, as the stake is improving the standard of living of future generations through access to education, training and lifelong learning. To quote Peter Drucker, who said that he would never make predictions but simply look out the window to see what is visible but not yet seen, I find that the highest level of education will have a discernible impact on the transition from school to occupational life, championed by people who strive for knowledge and lifelong learning.

Figure 3. GDP percentage for education. Source: chart based on data provided by the National Institute of Statistics [25].
Figure 4. Percentage of expenses for education out of GDP in European countries, the USA and Japan, for 2011 compared with 2007. Source: [10] and Table 2.
Figure 5. The distribution of pupils/students (ISCED levels 1, 2 and 3) according to the type of educational structure they attended (state or private), in 2006. Source: [29].
Table 1. Persons at risk of poverty or social exclusion (persons falling under at least one of the three criteria), including persons at risk of poverty after social transfers (%). Notes: data on the at-risk-of-poverty rate after social transfers estimated from the Household Budget Survey; UK: change of provider of cross-sectional EU-SILC data (until 2012 data were collected by the ONS, from 2012 onwards by the Department for Work and Pensions); "-" = data not available.
Table 2. Percentage of expenditure on education in GDP for the European countries, the United States and Japan. Source: [10].
Hybrid MAC Protocol for Brain–Computer Interface Applications A brain–computer interface (BCI) can permit individuals to use their thoughts as the sole means to control objects such as smart homes and robots. While BCI is a promising interdisciplinary tool, researchers are confronting network lifetime as an obstacle to further development. Furthermore, the medium access control (MAC) protocol is the bottleneck of network reliability. There are many standard MAC protocols that can be utilized for productive and dependable transmission by altering their control parameters. Modifying these parameters is another source of concern due to the scarcity of knowledge about their effects. In addition, there is no instrument available to receive and apply these parameters on transmitters embedded inside the cerebrum. In this article, we provide the transmission mechanism for both ultrahigh-frequency (UHF) radio frequency identification (RFID) and ultra-wideband (UWB) signals for multiple transmitters, and for ultrasonic technology mimicking neural dusts, by modifying the superframe structure. A hybrid MAC protocol is proposed, and the results show that the traffic received can be increased by 700% for UHF-RFID and by more than 100% for UWB and ultrasonic technology. Comparative results for wireless-channel MAC protocols using these different transmission techniques are discussed in terms of network delay, data dropped, traffic sent, and traffic received. I. INTRODUCTION BRAIN-COMPUTER interface (BCI) is a system that enables transmitting human brain signals to an external device, thereby connecting the central nervous system of human beings with the external world [1]. One of the objectives of BCI is to sidestep a damaged nervous system in the spinal cord and develop an immediate connection between the brain and an embedded device that can receive neural signals to imitate muscle function and, in so doing, overcome paralysis [2]-[5]. For example, individuals who are tetraplegic have normal neural signaling but suffer paralysis due to downstream damage at the spinal cord. BCI technology enables a functional cerebrum to communicate directly with computer-assisted devices that serve in place of muscles to re-establish functional movement. Using BCI, people can be trained to rehearse mental tasks that induce neural signals which can be deciphered by a computer [5]. As medical technology develops exponentially, BCI stands at the forefront of personalized and predictive medicine [6], [7]. Specialists are attempting to use BCI to create prosthetic arms that give patients optimal control of motion and to use functional electrical stimulation devices to reanimate paralyzed arms [7], [8]. A BCI is a system comprising a number of sensors, a neural decoder or translator, and some form of actuator to carry out an action. The sensors' main task is to detect changes in neural activity related to the intent to influence or move an external device. Generally speaking, sensors can be placed inside or outside the skull, and each approach has advantages and disadvantages. In the noninvasive case, where sensors are placed outside the skull, the sensors are simply attached to the scalp and connected to the decoder using a simple set of wires. However, the signals detected by the sensors in this case suffer severe attenuation caused by the skull, scalp, and other layers that cover the brain.
Embedding the sensors inside the skull (the invasive approach) is more complicated because the sensors have to be surgically implanted on the surface or within the depth of the brain; nevertheless, it provides high-quality signals and allows fast data transfer. The sensors in this case must be equipped with a transmitter to send the signal out of the skull, to be acquired by a number of receivers attached to the scalp [9]. In such scenarios, enabling several embedded sensors to communicate with external receivers wirelessly requires establishing a wireless network to manage the communications between the transmitters and receivers. Many factors must be considered while designing BCI wireless networks because of the stringent constraints on the power consumption and size of the communicating tags. Therefore, a large portion of the research work on invasive BCI focuses on the physical layer, as reported in [10] and the references listed therein. However, the medium access control (MAC) protocol may have a significant impact on the power consumption of such networks, and hence, it is essential to design an efficient MAC that has high power efficiency while being capable of providing fast and reliable data transfer. In particular, when the number of sensors is large, optimizing the MAC design becomes crucial, and thus, MAC design for BCI applications is becoming a fertile research field [11]-[13].

A. Basic MAC Protocols

MAC protocols are generally needed to manage the communications of multiple users by allocating each user certain transmission resources such as time or frequency. Broadly speaking, MAC protocols can be classified into the following three categories.
1) Channelization protocols: a) frequency-division multiple access (FDMA); b) time-division multiple access (TDMA); c) code-division multiple access.
2) Random access protocols: a) ALOHA, either pure or slotted; b) carrier sense multiple access (CSMA), with collision detection or with collision avoidance (CSMA-CA).
3) Controlled access protocols: a) reservation; b) polling; c) token passing.
Each protocol has certain advantages and disadvantages in terms of spectral efficiency, delay, reliability, overhead, and complexity. For example, FDMA has low complexity and overhead because it does not require synchronization among the users. However, it has low spectral efficiency due to the frequency guard bands. Unlike TDMA, FDMA is immune to system timing issues, since the frequency band is reserved for the user for the entire duration of the transmission session. Therefore, timing adjustment is not critical, and a smaller number of bits is required for synchronization and framing [14]. In FDMA, it is rare for the receiver to get information from more than one transmission source. One of the key limitations of FDMA is its maximum data rate, which is low. FDMA can be attractive for BCI systems due to its simplicity; however, its power consumption should be reduced.

B. Related Work

BCI systems can generally be classified as wireless sensor networks (WSNs), wireless body area networks (WBANs), or wireless personal area networks (WPANs). Therefore, the MAC protocols for such networks can be adopted for BCI. Examples of these MACs are the IEEE 802.15.1 and IEEE 802.15.4.
However, the characteristics of the brain environment are different from those of such networks [15], [16], which leads to modest performance when WSN, WBAN, or WPAN protocols are adopted for BCI [11], [17]. Such local area networks (LANs) are typically based on the IEEE 802.15.1 and IEEE 802.15.4 standards. These protocols offer relatively low cost and low power consumption and do not require an underlying infrastructure. Nevertheless, they are designed to support low data rates. More specifically, the IEEE 802.15.4 can support very long battery life and has very low complexity. In spite of the advantages of 802.15.4, it may have poor performance in terms of power consumption, reliability, and delay if MAC parameters such as the back-off window size and the maximum number of retransmissions are not properly selected. In [18], the authors investigated this problem and proposed an adaptive MAC algorithm for minimizing the power consumption while guaranteeing reliability and delay constraints. The data traffic is considered unsaturated, which allows using sleep/wake-up modes to minimize the power consumption. Periodic listening, idle listening, additional control overhead, and collisions are the main drivers of power consumption. To manage these issues, the authors in [19] consider an out-of-band wake-up radio. Nodes switch into sleep mode when there is no information to transmit. When a tag has data, the wake-up radio transmits a control signal to the main circuitry to trigger wake-up and information transmission. Otherwise, the tag remains in sleep mode to save power. The authors do not provide any mechanism for emergency events. A review of various MAC protocols and the IEEE 802.15.4 for WBANs can be found in [10], where an analytical model based on delay and throughput is presented, which bears on low-power tuning and energy minimization. Moreover, a path loss analysis is given for in-body, on-body, and off-body communication. MAC parameters for different networks are summarized in Table I.

C. Motivation and Main Contributions

As can be noted from the literature survey, very little work has been devoted to designing an efficient MAC for BCI applications. In such applications, where the sensors are embedded inside the skull, constraints such as limited power and end-to-end delay are critical. The limited-power constraint is mostly due to the small size of the embedded sensors, and the time constraint is imposed by the maximum tolerated delay between the brain activity and the neuroprosthetic device response [17]. The throughput, received data rate, and delay are essential to extract sufficient information in order to translate the neural signals into a desired movement. Therefore, this article proposes a new protocol for BCI applications that mitigates such problems and enhances the network performance. The new protocol combines the benefits of FDMA, which allows using multiple channels without interference and with high throughput; TDMA, which can avoid collisions using a scheduling algorithm based on the breadth-first search (BFS) approach; and CSMA-CA, which improves the throughput and reduces the power consumption by reducing the idle listening introduced by TDMA [20], [21]. The proposed MAC is therefore hybrid. Although the data rates for certain BCI applications such as the P300-based BCI can be very low [22], other applications currently being investigated may require much higher data rates. For example, Kaplan et al.
[23] considered adapting the P300-BCI for gaming applications. Furthermore, it is shown in [24] that an electroencephalogram (EEG) signal might require about 85 kb/s in particular scenarios. Therefore, support for high data rates in BCI systems is a key enabler for future BCI applications. Moreover, to the best of the authors' knowledge, there is no research in the open literature that investigates different transmission technologies and mechanisms for BCI MAC protocols. Therefore, the goal of this article is to determine the most suitable technology for developing BCI systems among UHF-RFID, UWB, and ultrasonic, as well as to develop a MAC protocol that addresses the aforementioned concerns of network delay, data dropped, and traffic sent/received. The performance of the proposed system is evaluated in terms of throughput, data rate, and time delay. The hybrid system is evaluated using three different combinations, FDMA+TDMA, FDMA+CSMA, and FDMA+TDMA+CSMA, for three technologies: UHF-RFID, UWB, and ultrasonic. The obtained results show that the proposed hybrid MAC outperforms all the individual MACs and can satisfy the requirements of BCI applications. The results are presented for the cases of 12 and 100 tags. The results are obtained using OPNET [25], which is a highly reliable simulation tool used by several industrial giants such as Cisco and AT&T. Nevertheless, developing a test bed will be targeted in our future work to capture all practical aspects. The main obstacle to developing a test bed at present is the lack of reliable development kits that can support such a system.

D. Article Organization

This article is organized into five sections. Section I introduces the current scope of BCI and the purpose of this article. Section II presents the BCI system model. Section III presents the proposed hybrid protocol. Section IV presents the numerical and simulation results. Finally, Section V concludes this article.

II. BCI SYSTEM MODEL

This article considers a BCI system where N_T transceivers are placed on the brain surface to collect and transmit the EEG brain signals, and N_R transceivers are placed on the scalp to acquire the transmitted signals. The sensors used are semiactive, and they remain idle until they receive brain signals to transmit. The wireless network parameters for BCI are investigated using the MAC protocols of three different technologies: ultrahigh-frequency (UHF) radio frequency identification (RFID) [26], [27], ultra-wideband RFID (UWB-RFID) [28], [29], and ultrasonic technology mimicking neural dusts by modifying the superframe structure [15], [30], [31]. To select the appropriate frequency bands, FDMA is tested using four distinct frequency channels with RFID parameters, which mimics the realistic situation of multiple transmitters placed on the human brain transmitting the neural signals captured by the implanted electrodes to a receiver placed on the scalp. Since the frequency range of passive RFID is from 860 to 960 MHz and the center frequency is 915 MHz, the four frequency channels utilized are 915, 920, 930, and 940 MHz. The receiver analog-to-digital converter sampling rate is 200 kHz, and the channels are sampled at a rate of 50 kHz per channel. Initially, 10^3 samples are used to generate the modulating signal, which is taken to be a sinusoidal signal representing the brain reaction and the behavior of each signal in the various frequency slots.
The information signal is modulated using amplitude shift keying and passed through the channel. At the receiver, the received signal is passed through band-limited filters to reduce the interference. The lag of the signal from the transmitter to the receiver is computed, as shown in Table II, for the four different frequencies. Because the lag of the 940-MHz band is much larger than 2 ms, it is not considered suitable for BCI applications.

III. PROPOSED HYBRID PROTOCOL

The hybrid protocol is designed by dividing the N_T sensors into a number of clusters, three in this article, where FDMA is used to assign a frequency band to each cluster. Within each cluster, TDMA is used to multiplex the users and avoid interference between the sensors of that cluster. Finally, CSMA-CA is used to reduce the waiting time for the sensors in each cluster and, hence, reduce the delay and increase the throughput. This design allows the use of multiple channels to transmit and receive in full-duplex mode [32]. Each tag wakes up upon receiving a brain signal, communicates with its neighbors in the cluster, and goes to sleep until the next signal arrives. The communication between tags uses request-to-send (RTS) and clear-to-send (CTS) handshakes with acknowledgments (ACK). Fig. 1 shows the general design of the hybrid MAC protocol, in which the brain signals collected by biosensors are divided into three clusters and delivered to one receiver. The frequency and slot assignment is performed using the BFS algorithm described in Section III-A.

A. Scheduling Algorithm

In this article, it is assumed that one of the tags, called the main tag, has sufficient computational capability; it is used to construct the schedule for all the tags and to implement the network connectivity graph that maximizes the network data rate and reduces the delay. Fig. 2 shows an example of the scheduling algorithm outcome using 12 sensing tags. In the figure, the circles represent the sensing tags; the first and second numbers represent the time slot and frequency band, respectively. The starting point is represented by a circle labeled R. The BFS algorithm is used to assign a specific time slot and frequency band to each tag. Using BFS to build the tree, the main tag serves as the root as we traverse through the tags. Default time slots and frequencies are assigned to the tags of the first level, and then the interference probability within one and two hops is checked. If there is a conflict among the N_j neighbor tags of N_i, we check whether they are siblings; if they are, the algorithm assigns different time slots for N_i. Multiple channels are used to send data to the same root tag (parent) in the same time slot using different assigned frequencies [33]-[35]. In the beginning, the default time slot is increased by one for the initial levels. Then, the time slots are updated to ensure that the time slot of the children tags is less than that of their parents, i.e.,

T_New = T_Max - T_Current (1)

where T_New represents the updated time slot, T_Max represents the total number of slots, and T_Current represents the currently assigned time slot. The scheduling algorithm is described in Algorithm 1. The complexity of the scheduling algorithm is as described in [36]: let G(V, E) be a graph with |V| vertices and |E| edges; the BFS algorithm visits every vertex and every edge in the graph, giving a complexity of O(|V| + |E|) (2).
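The following simplified sketch (ours, not the authors' Algorithm 1) illustrates the BFS-based idea: traverse the connectivity graph from the main tag and give each tag a (time slot, frequency band) pair, letting siblings share a slot on different bands and bumping the slot only when all bands are taken. Interference checks beyond sibling conflicts are omitted, and the final remapping T_new = T_max - T_current follows our reading of (1).

```python
# Simplified sketch of BFS-based slot/band scheduling (hypothetical topology).
from collections import deque

BANDS = [915, 920, 925]  # MHz; the cluster bands used in the simulations

def bfs_schedule(graph, root, bands=BANDS):
    """graph: dict mapping each tag to its neighbor list.
    Returns {tag: (time_slot, frequency_band)}."""
    schedule = {root: (0, bands[0])}
    queue = deque([root])
    while queue:
        parent = queue.popleft()
        used = set()  # (slot, band) pairs already given to this parent's children
        for child in graph[parent]:
            if child in schedule:  # already scheduled via another parent
                continue
            slot, band_i = schedule[parent][0] + 1, 0
            while (slot, bands[band_i]) in used:  # sibling conflict:
                band_i += 1                       # try another band first,
                if band_i == len(bands):          # then fall back to a later slot
                    band_i, slot = 0, slot + 1
            used.add((slot, bands[band_i]))
            schedule[child] = (slot, bands[band_i])
            queue.append(child)
    # Remap slots as T_new = T_max - T_current so that children end up with
    # smaller slot numbers than their parents (data flows child -> parent).
    t_max = max(s for s, _ in schedule.values())
    return {tag: (t_max - s, b) for tag, (s, b) in schedule.items()}

# Hypothetical 7-tag topology rooted at the main tag "R".
g = {"R": ["a", "b", "c"], "a": ["d", "e"], "b": ["f"], "c": [],
     "d": [], "e": [], "f": []}
print(bfs_schedule(g, "R"))
```

In this toy run the three children of R share one slot on the three different bands, which is exactly the multichannel reuse the paper describes for tags reporting to the same parent.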
B. Sender and Receiver Behavior

With the three clusters identified, each cluster is assigned a different channel, and when the cluster tags want to transmit simultaneously, the channel is checked periodically by the sensors. If sensor N_i sends an RTS control message to utilize its own time slot S_i to transmit a packet to a predefined receiver, and the channel is declared idle, i.e., the RTS and CTS have been exchanged successfully between the tag and the receiver, then the tag sends the packet. However, if the CTS is not received by the sender, i.e., a collision has occurred, the transmission is inhibited and the backoff algorithm starts: the tag waits a random number of frames (the backoff delay) before the next attempt to retransmit an RTS in the same slot. Figs. 3 and 4, respectively, show the sender and receiver behavior for a tag in a specific cluster transmitting data in its individually scheduled time slot.

C. Delay and Throughput Analysis

1) FDMA: The FDMA delay is expressed in terms of T_Oh = N_Oh/R_B, T_ACK = N_ACK/R_B, and T_Fr = N_Fr/R_B, and the corresponding throughput can be computed accordingly, where the notations are given in Table III.

2) TDMA: TDMA has three types of delay: transmission delay, queuing delay, and propagation delay. The transmission delay for TDMA can be expressed in terms of the synchronization time T_Sync = N_Synch/R_B.

3) CSMA-CA: In CSMA-CA, a tag senses the medium and sends packets to the receiver when it finds that the medium is free. If the medium is occupied, the tag backs off for a random number of time slots, waiting for the channel to become free for transmission. With the improved CSMA-CA RTS/CTS exchange scheme, a tag that senses a free channel sends an RTS to the receiver and waits for the CTS message from the receiver before starting to transmit. The delay from the sender to the receiver is calculated as

D_CSMA = T_Bo + T_Fr + T_Ta + T_ACK + T_ITS + T_RTS + T_CTS (7)

where T_BP = N_Bo T_Bo. The relation between the transmission delay and the throughput involves κ, a metric used to capture the packet-loss correlation on different links [37].

4) Hybrid MAC Protocol: For the hybrid model, the tags are divided into three clusters, and FDMA is used to assign a particular frequency band to each cluster. Within each cluster, the tags' data are multiplexed using TDMA and CSMA-CA. The delay for each cluster tag is obtained by combining the TDMA and CSMA-CA delay components above. The throughput of the different considered MAC protocols is calculated while assuming that the information is moved from the sender to the receiver utilizing only one of the MAC protocol types. Owing to the matched sender and receiver configurations, no collision or packet loss due to buffer overflow occurs. Moreover, the channel is assumed error free. In such scenarios, the throughput can be expressed as a function of the total delay D (10) [37]. The throughput calculated using (10) confirms that the proposed MAC protocol is preferable over the other MAC protocols for BCI applications, as demonstrated by the presented numerical results.
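Since only Eq. (7) is reproduced in full above, the short sketch below evaluates the CSMA-CA delay term by term and then derives a throughput figure using the generic payload-over-delay relation of (10). The bit counts, bit rate, and timing constants are illustrative assumptions, not values from the article or from [37].

```python
# Numerical sanity check of the CSMA-CA delay (7) and the generic
# throughput relation (10).  All values below are assumptions.
R_B = 250e3                                 # bit rate, b/s (assumed)
bits = {"N_Fr": 1016, "N_ACK": 88, "N_RTS": 160, "N_CTS": 112}
T = {k: v / R_B for k, v in bits.items()}   # per-field durations, s
T_Bo, T_Ta, T_ITS = 1.2e-3, 0.5e-3, 0.2e-3  # backoff, turnaround,
                                            # inter-slot times (assumed)

# Eq. (7): sender-to-receiver CSMA-CA delay
D_csma = (T_Bo + T["N_Fr"] + T_Ta + T["N_ACK"]
          + T_ITS + T["N_RTS"] + T["N_CTS"])
S = bits["N_Fr"] / D_csma                   # throughput per (10), b/s
print(f"D_CSMA = {D_csma * 1e3:.2f} ms, throughput ~ {S / 1e3:.1f} kb/s")
```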
D. Model Efficiency

The model efficiency can be calculated by considering the transmission and propagation times for CSMA-CA. Let M(t) be the number of full frames transmitted up to time t, with T_f the frame time and X_i the duration of the ith frame, whose expected value is E(X_i), where E(·) denotes the expectation operator. As t → ∞, the frame rate M(t)/t converges to 1/E(X_i). On the other hand, the transmission efficiency can be expressed as

ζ = T_Tx / (T_Tx + 2 E(N_Fa) T_Pr) (15)

where ζ is the transmission efficiency, T_Tx is the transmission time, E(N_Fa) is the expected number of failed attempts, and T_Pr is the propagation time. To calculate the expected number of failed attempts, we consider that the first success happens at the nth transmission attempt with probability

P_n = A (1 − A)^(n−1) (16)

where A is the per-attempt success probability, which tends to e^(−1) for a large number of contending tags. Therefore, the expected number of failed attempts is given by

E(N_Fa) = (1 − A)/A = e − 1. (17)

By plugging (17) into (15), we obtain

ζ = T_Tx / (T_Tx + 2(e − 1) T_Pr) (18)

For T_Tx ≫ T_Pr, it can be noted from (18) that ζ → 1.
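The closed form in (16)-(18) can be checked numerically: with a per-attempt success probability of 1/e, the average number of failed attempts should approach e − 1, and the efficiency should approach 1 as the transmission time dominates the propagation time. The sketch below does both; the timing values are illustrative.

```python
# Monte Carlo check of E(N_Fa) = e - 1 and of the efficiency in (18).
import math
import random

A = 1 / math.e                    # per-attempt success probability
random.seed(1)
fails = []
for _ in range(200_000):
    n = 1
    while random.random() > A:    # geometric: retry until first success
        n += 1
    fails.append(n - 1)
print(sum(fails) / len(fails), "~", math.e - 1)   # both ~ 1.718

def zeta(t_tx, t_pr):
    """Transmission efficiency, Eq. (18)."""
    return t_tx / (t_tx + 2 * (math.e - 1) * t_pr)

for t_tx in (1e-4, 1e-3, 1e-2):   # T_Pr fixed at 1 us (assumed)
    print(f"T_Tx = {t_tx:g} s -> zeta = {zeta(t_tx, 1e-6):.4f}")
```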
IV. NUMERICAL AND SIMULATION RESULTS

The network model used in this article follows the model given in [38]-[41], using three different transmission technologies: UHF, UWB, and ultrasonic. For each technology, multiple tags are placed with parameters that mimic an implantable microelectrode array (MEA). For each technology, we evaluate the dropped data, network delay, and traffic sent/received. The considered protocols are based on the network time protocol concept [39], [40], which provides synchronization and is used with both LANs and wide area networks (WANs). The sensor properties are selected to match the network parameters, such as the channel capacity, channel frequency, transmit power, receiver sensitivity, buffer size, and data rate. The results for the delay and throughput are calculated after feeding the OPNET [25] simulation with the following settings: a constant interarrival time of 1 ms; a constant start time of 0; an exponential ON-state time with a mean of 10^-3 s and an OFF-state time of 4 × 10^-3 s; packet generation with a uniform (0.5, 1) packet size in bytes and no segmentation; a beacon interval and backoff time of 2 × 10^-2 s; a maximum receive lifetime of 0.5 s; and a buffer size of 5 kbyte. Furthermore, we use the constant network model, which is more suitable for BCI applications. Table IV shows the required parameters for each technology to simulate the network.

The considered and proposed protocols are evaluated using 12 and 100 tags, with three receivers in both cases. For the 12-tag scenario, the total number of tags is divided into three clusters, each of which has four tags and is assigned a unique frequency band, which can be 915, 920, or 925 MHz [42]. The same argument applies to the 100 tags, except that the three clusters contain 33, 33, and 34 tags. Then, three different protocols are applied as follows.

1) FDMA+TDMA: The tags send data over the three frequencies in different time slots to three receivers. All the tags in a given cluster are assigned one of the available time slots; thus, this protocol is denoted FDMA+TDMA.

2) FDMA+CSMA: The tags in each cluster use CSMA-CA, and thus two different tags in different clusters may transmit at the same time, but at different frequencies; hence, this protocol is called FDMA+CSMA. The starting time for the tags to transmit data is 10^-2 s.

3) Hybrid: In this scenario, the tags in each cluster employ TDMA for multiple access; however, CSMA is applied as well to allow tags that have data to transmit to utilize the time slots of idle tags. This protocol is therefore effectively composed of three protocols: FDMA+TDMA+CSMA.

Figs. 5-8 show the traffic sent, traffic received, dropped data, and network delay for the proposed and other considered protocols using UHF. The x-axis represents the runtime of the simulation. For the 12-tag scenario, it can be noted that the three considered protocols send roughly the same amount of data, but the received data for the hybrid protocol are significantly larger because it suffers less dropped data. For the network delay, the hybrid protocol offers the minimum, while the other two protocols perform essentially the same in all aspects. For the 100-tag scenario, it can be noted that the hybrid protocol transmits slightly more data, but the traffic received is significantly larger because the dropped data are much less. The traffic delay of the hybrid protocol is substantially less than that of the other two protocols. It can also be noted that the data sent increase over time as the buffer fills. Table V shows the numerical results for all the considered metrics. From these summarized results, it can be concluded that the hybrid protocol is the most compatible with brain function: as observed in [11], the end-to-end time for a neural signal to travel from the action potential to the arm is nearly 60-90 ms. The hybrid protocol aligns with this prior knowledge, as its delay is less than the BCI capturing time range, and it is therefore suitable for BCI.

UWB radio technology supports micro-electromechanical systems. Circuit fabrication studies [18], [43] show that for BCI applications, in order to capture the EEG signals by electrodes inside the brain and transmit them wirelessly by a transmitter placed on the scalp to a receiver or processor located outside the skull, a transmission frequency of 3.5 GHz is used. Based on our previous study, using 3.5 GHz gives better results in terms of received signal strength, signal-to-noise ratio, path loss, and channel capacity [10], [44]. In OPNET, we used UWB parameters to transmit the data. We implemented the different technologies in order to place the tags, and the values for each tag were changed using edit attributes so that the data packets are sent according to the desired application. Generally speaking, for the brain space, data transmission performs differently with UWB and RFID. Synchronization is performed by sending out beacon signals every 2 × 10^-2 s. The first scenario analyzes transmitting the data on different frequencies at the same time; this corresponds to combining frequency division with the CSMA-CA protocol. The frequencies used in this case are 3.5, 4, and 4.5 GHz. In this scenario, the tags begin to transmit data at 1 × 10^-2 s.

Figs. 9-11 present the network parameters for each of the three scenarios: FDMA+CSMA, FDMA+TDMA, and hybrid. As can be noted from Fig. 9, the buffering effect is less significant than for UHF, as the increase of traffic sent over time is very small. For the traffic received, it can be noted that the hybrid and FDMA+TDMA protocols significantly outperform FDMA+CSMA. For the network delay depicted in Fig. 11, the delay for the 12 tags is very small and comparable for the three protocols. However, for the 100-tag case, it can be noted that the hybrid protocol noticeably outperforms the other two protocols. It is worth noting that the dropped data are zero for all protocols, and thus the corresponding figure is not included. Table VI provides a comparison of the three scenarios. It can be noted that the hybrid protocol is the most compatible with natural brain function, as its delay is within this limit for the several cases of interest.
While ultrasound has been used in medicine for many decades, it has only recently taken a specialized form in bioelectronics, where promising new technology is developing. Ultrasonic transmission has the potential for widespread use due to its ability to deliver power efficiently [45]. The smaller size of an ultrasonic transducer is an added advantage for applications such as BCI. In OPNET, the ultrasonic technology parameters [29] were used to transmit data. The simulation is performed using the same procedure as described for UHF and UWB. In the first scenario, the FDMA+CSMA protocol combines both the FDMA and CSMA protocols: the tags transmit on different frequencies and sense the medium before transmitting data to avoid collisions. The three clusters transmit at 1.85, 1.95, and 2.05 MHz, respectively [30]. The tags start transmitting data at 0 s. In this scenario, collisions are introduced early in the network; however, CSMA-CA is used to effectively avoid them [46]. In the second scenario, the FDMA+TDMA protocol calls for the network to include three clusters that use the same frequencies as in FDMA+CSMA but different transmission times; for each cluster, one tag is designated to transmit at any given time. In the third scenario, the hybrid protocol assigns each cluster different frequencies and times, similar to the other two cases; however, each tag sends data only in a particular time slot. Three tags, one from each cluster, may send data in the same time slot. In order to prevent collisions, a tag may be forced to hold its transmission for a random number of slots. Synchronization of all tags in each protocol is achieved by configuring the access point to broadcast beacon signals.

In Figs. 12 and 13, the network parameters are presented for each protocol. As can be noted from Fig. 12, for the 12-tag scenario, the hybrid protocol's traffic sent/received is between 6.35 and 6.85 kb, the dropped data are approximately zero, and the network delay is about 4.2 × 10^-4 s. For the FDMA+TDMA protocol, the traffic sent and received is between 1.4 and 1.6 kb/s, and there are no dropped data; the network delay is between 6 × 10^-4 and 6.5 × 10^-4 s. For the FDMA+CSMA protocol, the traffic sent is approximately 1.50 kb/s, the traffic received is about 1.45 kb/s, and the network delay is about 1.5 × 10^-3 s. Consequently, the hybrid protocol performs very well for this technology as well. For the 100-tag case, it can be noted that the delay becomes more critical for the FDMA+CSMA scenario, while it remains below 10^-3 s for the hybrid MAC. Table VII summarizes the results for each protocol in terms of traffic sent, traffic received, dropped data, and delay for ultrasonic transmission using 12 tags. From the results, it is clear that the hybrid protocol performs better in terms of delay and data transmitted/received.

V. CONCLUSION

This article presented a new MAC protocol for BCI applications. The proposed MAC is based on combining three conventional MAC protocols to improve the transmission efficiency and reduce the delay. The proposed hybrid protocol divides the sensors into clusters, where the tags in a particular cluster are assigned a specific frequency band. The tags within each cluster use TDMA; however, the time slots can also be accessed by other tags to reduce the average waiting time and, thus, reduce the delay and increase the throughput. The proposed hybrid protocol was compared with two other hybrid protocols that use a combination of only two protocols.
The obtained results demonstrated that the proposed hybrid MAC has several advantages in terms of delay, throughput, and dropped data. Specifically, the delay was significantly smaller than that of the other considered protocols, which makes it attractive for time-sensitive applications such as BCI.
Formate hydrogen lyase mediates stationary-phase deacidification and increases survival during sugar fermentation in acetoin-producing enterobacteria

Two fermentation types exist in the Enterobacteriaceae family. Mixed-acid fermenters produce substantial amounts of lactate, formate, acetate, and succinate, resulting in lethal medium acidification. On the other hand, 2,3-butanediol fermenters switch to the production of the neutral compounds acetoin and 2,3-butanediol and even deacidify the environment after an initial acidification phase, thereby avoiding cell death. We equipped three mixed-acid fermenters (Salmonella Typhimurium, S. Enteritidis, and Shigella flexneri) with the acetoin pathway from Serratia plymuthica to investigate the mechanisms of deacidification. Acetoin production caused attenuated acidification during exponential growth in all three bacteria, but stationary-phase deacidification was observed only in Salmonella and in previously constructed acetoin-producing Escherichia coli, not in S. flexneri, suggesting that it was not due to the consumption of protons accompanying acetoin production. To identify the mechanism, 34 transposon mutants of acetoin-producing E. coli that no longer deacidified the culture medium were isolated. The mutations mapped to 16 genes, all involved in formate metabolism. Formate is an end product of mixed-acid fermentation that can be converted to H2 and CO2 by the formate hydrogen lyase (FHL) complex, a reaction that consumes protons and thus can explain medium deacidification. When hycE, encoding the large subunit of hydrogenase 3 that is part of the FHL complex, was deleted in acetoin-producing E. coli, deacidification capacity was lost. Metabolite analysis in E. coli showed that introduction of the acetoin pathway reduced lactate and acetate production, but increased glucose consumption and formate and ethanol production. Analysis of a hycE mutant in S. plymuthica confirmed that medium deacidification in this organism is also mediated by FHL. These findings improve our understanding of the physiology and function of fermentation pathways in Enterobacteriaceae.

INTRODUCTION

Within the Enterobacteriaceae family, a distinction is made between mixed-acid fermenters (e.g., Escherichia, Salmonella, and Shigella) and 2,3-butanediol fermenters (e.g., Klebsiella, Serratia, and Enterobacter) based on the fermentation end products produced during sugar fermentation. Mixed-acid fermenters ferment sugars to ethanol and a range of organic acids, including lactate, succinate, acetate, and formate. Formate can be further converted to H2 and CO2 by the formate hydrogen lyase (FHL) complex (White, 2000). Mixed-acid fermentation generally leads to rapid and strong medium acidification and even cell death. On the other hand, 2,3-butanediol fermenters use the mixed-acid pathway only during the early growth phase and switch in the late exponential phase to a different fermentation pathway, in which pyruvate is converted to the neutral end products acetoin or 2,3-butanediol, thereby preventing excessive acidification (Van Houdt et al., 2006; Xiao and Xu, 2007). Moreover, after the initial decline of medium pH, 2,3-butanediol fermenters typically deacidify the medium toward more neutral values during stationary phase (Johansen et al., 1975; Yoon and Mekalanos, 2006; Van Houdt et al., 2007; Moons et al., 2011).
This is in contrast to mixed-acid fermenters or 2,3-butanediol fermenters with an inactivated 2,3-butanediol pathway, where a sustained pH decrease is usually observed during sugar fermentation (Yoon and Mekalanos, 2006; Moons et al., 2011). Thus, 2,3-butanediol fermentation is apparently associated with stationary-phase deacidification. Synthesis of 2,3-butanediol from pyruvate requires three steps. First, the conversion of two molecules of pyruvate to α-acetolactate is catalyzed by the α-acetolactate synthase (α-ALS). Next, α-acetolactate is decarboxylated to acetoin by the α-acetolactate decarboxylase (α-ALD). In a last step, acetoin is reduced to 2,3-butanediol by the 2,3-butanediol dehydrogenase (BDH), which can also catalyze the reverse reaction. Each of these three reactions consumes an intracellular proton, and this potentially explains the observed stationary-phase deacidification. In Serratia plymuthica RVH1, a strain previously isolated from a food processing environment (Van Houdt et al., 2005), α-ALS and α-ALD are encoded by the budB and budA genes, respectively, which are located on the budAB operon (Moons et al., 2011). We previously showed that transfer of the S. plymuthica RVH1 budAB operon conveys to Escherichia coli the capacity to produce acetoin, to prevent lethal medium acidification, and to reverse acidification (Vivijs et al., 2014a). In the present study, we transferred the budAB operon to some additional mixed-acid fermenting enterobacteria, Salmonella Typhimurium, Salmonella Enteritidis, and Shigella flexneri, and show that these also acquire the capacity to produce acetoin. However, acetoin production was not associated with stationary-phase deacidification in S. flexneri. This observation is remarkable, since Shigella and E. coli are considered a single species based on DNA homology (Fukushima et al., 2002). Thus, our results suggested the involvement of a deacidification mechanism different from proton consumption during acetoin production. To identify this mechanism, we performed random transposon mutagenesis in budAB-containing E. coli, searching for mutants that lost their stationary-phase deacidification capacity but still produced acetoin. This led us to identify the FHL complex as the primary deacidification mechanism in 2,3-butanediol-fermenting Enterobacteriaceae.

SCREENING FOR MUTANTS THAT HAVE LOST STATIONARY-PHASE DEACIDIFICATION CAPACITY

A random knockout library of E. coli MG1655 containing pTrc99A-Ptrc-budAB was constructed using λNK1324, which carries a mini-Tn10 transposon with a Cm resistance gene, according to the protocol described by Kleckner et al. (1991). The mutants were subsequently grown in 300 μl LB medium with glucose, IPTG, Ap, and Cm in a 96-well plate. The plates were sealed with an oxygen-impermeable cover foil and incubated without shaking at 37 °C. After 24 h, medium acidification was analyzed by adding 5 μl of a 0.06% w/v methyl red solution in 60% v/v ethanol to 200 μl culture (MR test). For mutants that no longer deacidified the medium, the remaining 100 μl culture was subjected to the Voges-Proskauer (VP) test by adding 30 μl of 5% w/v α-naphthol and 10 μl of 40% w/v KOH to 100 μl of culture. To quantify acetoin production, the mixture was stirred vigorously after 1 h and the optical density at 550 nm (OD550) was measured. Acetoin concentrations were determined using a standard curve relating the OD550 to the acetoin concentration in LB medium.
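The standard-curve step can be made concrete with a short sketch: a linear OD550-versus-concentration calibration is fitted and inverted for sample readings. The calibration points below are illustrative placeholders, not the calibration data of this study.

```python
# Estimate acetoin concentration from VP-test OD550 readings using a
# linear standard curve.  The calibration values are invented examples.
import numpy as np

std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])   # acetoin, mM (assumed)
std_od = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # OD550 (assumed)

slope, intercept = np.polyfit(std_conc, std_od, 1)  # OD = a*conc + b

def acetoin_mm(od550):
    """Invert the standard curve for a sample OD550 reading."""
    return (od550 - intercept) / slope

print(f"OD550 = 0.33 -> {acetoin_mm(0.33):.1f} mM acetoin")
```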
From mutants that did not deacidify the culture medium and still produced acetoin, transposon insertion sites were determined using the method described by Kwon and Ricke (2000). Briefly, genomic DNA of the mutants was isolated, digested with NlaIII, and ligated with a Y-shaped linker composed of oligonucleotides linker 1 and linker 2. Next, a PCR amplification was carried out using a transposon-specific primer (NK_Cm_DWN) and a primer specific to the Y-shaped linker (Y linker primer). The PCR product was subsequently sequenced using the transposon-specific primer, and the insertion site was determined based on the known genome sequence of E. coli MG1655.

CONSTRUCTION OF hycE MUTANTS IN E. coli AND S. plymuthica

The deletion of hycE in E. coli MG1655 was achieved using the lambda red recombinase system described by Datsenko and Wanner (2000), followed by removal of the introduced antibiotic resistance cassette using the FRT/FLP recombination system. Briefly, 70-bp PCR primers were designed comprising a 50-bp 5′ part complementary to the region down- or upstream of hycE and a 20-bp 3′ part allowing amplification of the FRT-flanked Cm resistance cassette present in the plasmid pKD3. The purified PCR product was electrotransformed into E. coli MG1655 containing the pKD46 plasmid providing the lambda red recombinase. The resistance cassette was subsequently removed by expression of the flippase recombination enzyme (FLP) of the FRT/FLP recombination system on the temperature-sensitive pCP20 plasmid. To delete the hycE gene in S. plymuthica RVH1, a fragment encompassing 643 bp upstream and 559 bp downstream of the gene was PCR-amplified using primers SP_HycE_1(XbaI) and SP_HycE_2(XbaI), cut with XbaI, ligated into an XbaI-digested pUC18 vector, and transformed into E. coli DH5α. The resulting plasmid pUC18-hycE was used as a template for PCR using the outward-oriented primers SP_HycE_3(XhoI) and SP_HycE_4(XhoI). In a separate reaction, the loxP-flanked Gm resistance cassette from plasmid pUCGmlox was amplified using primers LoxP_Gm_1(XhoI) and LoxP_Gm_2(XhoI). Both PCR products were then cleaved with XhoI and ligated together, generating pUC18-ΔhycE::aacC1, which was transformed into E. coli DH5α. The ΔhycE::aacC1 insert from this plasmid was then amplified using primers SP_HycE_1(XbaI) and SP_HycE_2(XbaI), cut with XbaI, ligated into an XbaI-digested pSF100 vector, and transformed into E. coli S17-1 λpir. After conjugation of the resulting plasmid pSF100-ΔhycE::aacC1 into S. plymuthica RVH1 (which does not support replication of this suicide plasmid), transconjugants were selected on LB agar with Gm at 15 °C. This temperature allows good growth of S. plymuthica but prevents growth of E. coli S17-1 λpir. Loss of Km resistance (pSF100 marker) was assessed by replica plating on LB agar with Km. The Gm resistance cassette was then spliced out using the cre recombinase on plasmid pCM157, which catalyzes site-specific recombination between loxP sites. Restriction endonucleases and T4 DNA ligase were purchased from Thermo Scientific (St. Leon-Rot, Germany) and used according to the supplier's instructions.

CHARACTERIZATION OF FERMENTATIVE GROWTH AND FERMENTATION END PRODUCTS

Strains were first grown overnight at the appropriate incubation temperature in 4 ml LB. For strains containing pTrc99A or pTrc99A-Ptrc-budAB, Ap was added to ensure plasmid maintenance. Since S. plymuthica RVH1 is somewhat Ap resistant, Cb was used instead of Ap.
Next, the cultures were diluted 1:1000 in tubes containing 30 ml LB with glucose and, when appropriate, IPTG and Ap or Cb. Five ml of paraffin oil was layered on top of the cultures to create anaerobic conditions, and the tubes were incubated at the appropriate incubation temperature for 48 h. The cultures were sampled at regular time points to determine cell concentrations, medium pH, and acetoin concentration, and for analysis of fermentation end products. Plate counts were determined by spot-plating (5 μl) a decimal dilution series in potassium phosphate buffer (10 mM; pH 7.00) on LB agar. Gas production was evaluated qualitatively using Durham tubes. Fermentation end products were analyzed in 600 μl culture supernatants stored at −20 °C. Succinic, lactic, formic, and acetic acid, ethanol, and glucose were determined via high-performance liquid chromatography (HPLC; Agilent 1200 series) using an ion exclusion column (Aminex HPX-87H) maintained at 55 °C, with 5 mM H2SO4 as the mobile phase (0.6 ml/min). The system was equipped with a refractive index detector operating at 35 °C and a diode array detector set at 210 nm.

STATISTICAL ANALYSIS

All experiments were carried out in triplicate using independent cultures, and results are presented as mean values ± SD. Statistical significance between mean values was determined by Student's t-test analysis using the Microsoft Excel statistical package. Results were reported as significant when a p-value of <0.05 was obtained, based on a two-sided t-test with unequal variance.
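For readers who prefer a scripted version of this test, the sketch below runs a two-sided Student's t-test with unequal variance (Welch's test) on triplicate values with SciPy; the replicate numbers are invented for illustration only.

```python
# Two-sided t-test with unequal variance on triplicate measurements,
# mirroring the analysis described above.  Example values are invented.
from scipy import stats

wild_type = [6.62, 6.70, 6.58]   # e.g., final medium pH, three replicates
mutant = [4.70, 4.75, 4.68]

t_stat, p_value = stats.ttest_ind(wild_type, mutant, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")
```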
INTRODUCTION OF ACETOIN SYNTHESIS PATHWAY IN MIXED-ACID FERMENTERS

Previously, we introduced the budAB operon from S. plymuthica RVH1, encoding the α-ALS and α-ALD of the acetoin synthesis pathway, into E. coli MG1655 and observed that this attenuated lethal medium acidification during fermentative growth on glucose (Vivijs et al., 2014a). Here, we extended this experiment to S. Typhimurium, S. Enteritidis, and S. flexneri by introducing the pTrc99A-Ptrc-budAB plasmid into these organisms to see whether other mixed-acid fermenters would show similar behavior. Figure 1 shows the growth curves and medium pH during fermentative growth of these bacteria in glucose-containing LB medium with and without the budAB genes. As expected, E. coli, both Salmonella strains, and S. flexneri without budAB strongly acidified the medium (to pH 4.50-4.70 after 48 h), and this resulted in cell death during the stationary phase. Introduction of the budAB genes did not change growth of the bacteria until stationary phase was reached, but it changed the pH profile of the E. coli and Salmonella cultures in two respects. First, the acidification during the growth phase was less strong, reaching a minimum pH of about 5.60. Second, the pH increased again during stationary phase, up to 6.60-7.00 after 48 h. As a result, plate counts remained almost constant once they had reached their maximal stationary-phase level (10-48 h). Surprisingly, a different pattern was observed in S. flexneri. Introduction of the budAB genes also attenuated medium acidification during the growth phase (pH 5.60 after 10 h), but no deacidification occurred during stationary phase. As a result, this culture reached a final pH of 4.80 after 48 h, and the plate counts decreased to a similar extent as those of the strain without budAB genes. The strain with the budAB genes produced acetoin in amounts similar to the E. coli and Salmonella strains carrying these genes, so that poor expression of the acetoin pathway could be ruled out as an explanation for the different behavior of S. flexneri. Therefore, proton consumption in the acetoin production pathway cannot fully explain the deacidification during stationary phase in E. coli and Salmonella, and it can be concluded that other deacidification mechanisms must be involved.

SCREENING FOR LOSS OF DEACIDIFICATION CAPACITY IN E. coli CONTAINING A FUNCTIONAL ACETOIN PATHWAY

In order to identify additional mechanisms involved in stationary-phase deacidification, we performed random transposon mutagenesis in E. coli MG1655 containing the pTrc99A-Ptrc-budAB plasmid and searched for mutants that were unaffected in acetoin production (VP test), yet were no longer able to increase the pH of glucose-containing LB medium at 37 °C after 24 h (MR test), thus having an MR+/VP+ phenotype. Although in most Enterobacteriaceae a positive VP test is usually associated with a negative MR test (MR-/VP+, e.g., Enterobacter aerogenes) and vice versa (MR+/VP-, e.g., E. coli), there are also some species in this family (e.g., Enterobacter intermedius, Klebsiella planticola, or Serratia liquefaciens) reported to be positive for both tests (MR+/VP+; Holt et al., 1994). Out of 6,048 mutants screened, 34 MR+/VP+ mutants were identified, and their phenotype was confirmed after transferring the mutation to a native MG1655 strain by P1 transduction, followed by transformation of pTrc99A-Ptrc-budAB. Identification of the transposon insertion sites of these 34 mutants pointed to 16 different genes (Table 3). Interestingly, all genes were related to the metabolism of formate, one of the acids formed by mixed-acid fermentation. Formate is produced by the pyruvate formate lyase (PFL) enzyme, which catalyzes the CoA-dependent cleavage of pyruvate to formate and acetyl-CoA. An overview of the fermentation routes present in E. coli containing pTrc99A-Ptrc-budAB is shown in Figure 2. The formate that is produced and secreted can also be reimported into the cell through the FocA channel and disproportionated to CO2 and H2 by the membrane-associated FHL complex (Sawers, 2005; Lü et al., 2012; Beyer et al., 2013). This complex consists of the formate dehydrogenase H (FDH-H), a selenoprotein carrying a molybdenum cofactor, and hydrogenase 3, a nickel-containing protein complex (Bagramyan and Trchounian, 2003). FDH-H catalyzes the oxidation of formate (HCOO−), generating CO2 and H+. The electrons from this reaction are transferred via several subunits of the FHL complex to hydrogenase 3, where they combine with two cytoplasmic protons to form dihydrogen. This pathway is thus a net consumer of protons and is used by E. coli to counteract acidification (Leonhartsberger et al., 2002).
All gene products found in our screening could be linked to this particular pathway: FdhF (FDH-H), HycB, HycD, and HycE are part of the FHL complex (Bagramyan and Trchounian, 2003); HycI and HypE are both involved in maturation of the large subunit of hydrogenase 3 (Forzi and Sawers, 2007); SelA and SelD take part in the biosynthesis of selenocysteine, and mutants lacking these gene products fail to synthesize FDH-H (Leinfelder et al., 1988; Driscoll and Copeland, 2003); ModC is the ATP-binding subunit of the molybdate ABC transporter, and MoeA, MoeB, and Mog are other ancillary enzymes that participate in the biosynthesis of the molybdenum cofactor (Sawers, 1994; Grunden and Shanmugam, 1997; Leimkühler et al., 2001; Nichols and Rajagopalan, 2002); FdhD is an accessory protein functioning as a sulfurtransferase between IscS and FdhF and is required for FDH activity (Thomé et al., 2012); FocA and PflB are coexpressed from a single operon and form a bidirectional formate channel and the PFL enzyme, respectively (Lü et al., 2012); FhlA, finally, is a transcriptional activator of the FHL system (Leonhartsberger et al., 2002). In conclusion, the mutant screening approach provides a strong indication that the disproportionation of formate is responsible for the stationary-phase deacidification capacity of E. coli containing the budAB genes.

In addition to hydrogenase 3, E. coli also possesses three other hydrogenases catalyzing the reversible reaction 2H+ + 2e− ↔ H2. Hydrogenases 1 and 2 are H2-oxidizing enzymes that are maximally induced at low and alkaline pH, respectively (Trchounian et al., 2012). Hydrogenase 4 is not well characterized and its subunits have not been isolated and studied yet, but it may be part of a second FHL complex that may produce H2 at neutral and slightly alkaline pH (Self et al., 2004; Trchounian and Sawers, 2014). The contribution of hydrogenase 3 to acid resistance has been demonstrated previously, since anaerobic cultures of E. coli W3110 ΔhycE showed a 20-fold loss in survival of an extreme acid stress (2 h at pH 2.0) when compared to the wild-type strain (Noguchi et al., 2010). This finding suggested that the FHL complex supports survival of extreme acid challenge by counteracting intracellular acidification. Our results now show that the complex can also accomplish an increase of the environmental pH during growth under moderate acid stress, thereby preventing stationary-phase cell death during fermentative growth.

The observation that acetoin-producing S. flexneri showed reduced acidification in the exponential growth phase but did not deacidify the medium during the stationary phase (Figure 1) can also be linked to formate conversion. Although S. flexneri closely resembles E. coli at the genetic level, Shigella species (with the exception of a few strains) do not produce gas during carbohydrate fermentation (Brenner et al., 1982; Germani and Sansonetti, 2006). We confirmed that the S. flexneri strain used in this study did not produce gas from glucose, and the absence of this mechanism may thus explain our observation. The reason why Shigella species do not produce gas in the presence of glucose is unclear. The genes encoding FDH-H and hydrogenase 3 are present in the Shigella genome, but apparently no functional FHL complex is formed.

EFFECT OF HYDROGENASE 3 INACTIVATION ON FERMENTATIVE GROWTH OF ACETOIN-PRODUCING E. coli
To characterize in more detail the role of formate disproportionation in the capacity of E. coli (with or without budAB genes) to attenuate medium acidification during fermentative growth, we constructed a clean deletion of the hycE gene. Since this gene encodes the large subunit of hydrogenase 3 that contains the active site for proton reduction to dihydrogen (Trchounian et al., 2012), its deletion completely blocks the conversion of formate to CO2 and H2. Next, budAB-less and budAB-containing wild-type and ΔhycE strains of E. coli MG1655 were grown for 48 h in glucose-containing LB medium sealed from the air with a paraffin oil layer and with a Durham tube to observe gas production. Plate counts, medium pH, gas production, and acetoin concentrations were determined at regular time points (Figure 3; Table 4). Knockout of hycE did not have any effect on the pH profile during fermentative growth of budAB-less E. coli over the entire 48-h growth period. In the budAB-containing strains, the effect of the hycE deletion depended on the growth phase. During the exponential phase (first 6 h), hycE deletion had no effect on the acidification, although the acidification was slightly less than in the two budAB-less strains, in line with the earlier observations shown in Figure 1. However, from the onset of stationary phase, the pH profiles of the two acetoin-producing strains diverged strongly. While acidification by the budAB-containing wild-type strain slowed down and reversed into deacidification after 12 h of growth (as already shown in Figure 1), acidification by the budAB-containing ΔhycE mutant was sustained until 24 h, after which the pH remained stable at a low value (pH = 4.72). Since both strains produced similar amounts of acetoin, and acetoin production stopped after 10 h (Table 4), it can be concluded that the stationary-phase deacidification by the budAB-containing wild-type E. coli MG1655 is not a direct consequence of proton consumption during acetoin production. More likely, deacidification is triggered by proton consumption in the reaction carried out by the FHL complex, since deletion of hycE resulted in loss of deacidification. This explanation is also supported by the observed gas production. Since CO2 is very soluble in water, gas accumulation in a Durham tube can be mainly ascribed to H2 production and is thus indicative of the action of the FHL complex (White, 2000). Both strains with an intact FHL complex produced more or less the same amount of gas at 12 h, filling approximately half of the Durham tube (Figure 3). However, no additional gas production was seen in the case of wild-type E. coli after 12 h, while the Durham tubes in the case of acetoin-producing E. coli were completely filled with gas after 24 h, and additional gas bubbles were formed in the medium after 48 h. On the other hand, the ΔhycE mutant did not produce any gas, while only a small amount of gas was observed in the budAB-containing ΔhycE mutant, which might be the result of CO2 production during acetoin formation (Figure 3). The evolution of plate counts during stationary phase in this experiment was generally in line with the observed pH changes, with cell death taking place in the strongly acidified cultures. In particular, lethal acidification could not be prevented by acetoin fermentation in the budAB-containing ΔhycE mutant, since plate counts of this strain significantly decreased during the stationary phase, as was also the case for the two budAB-less strains performing a mixed-acid fermentation.
Cell death can be explained by the combination of the low-pH environment and the toxic accumulation of organic acids.

ANALYSIS OF METABOLITES PRODUCED DURING FERMENTATIVE GROWTH OF E. coli

To provide more direct evidence for the involvement of formate disproportionation in the deacidification capacity of budAB-containing E. coli, glucose consumption and the production of metabolites were determined by HPLC during fermentative growth in LB with glucose (Figure 4). Succinate concentrations (Figure 4E) remained low for all strains during the course of the experiment. On the other hand, the budAB genes caused a marked shift in the production of two of the major acids of the mixed-acid fermentation pathway, especially in the late exponential and stationary growth phases, with no more acetate and much less lactate being produced (Figures 4C,D, respectively). With regard to formate (Figure 4F), the highest formate accumulation was seen in the ΔhycE mutants, probably because these have lost their major route to convert formate to H2 and CO2. During the stationary growth phase (up to 48 h), the formate concentrations remained almost constant in the ΔhycE strains but strongly decreased in the hycE+ strains, indicating the reuptake and conversion of formate to CO2 and H2. Interestingly, a close look at the formate accumulation curves of the ΔhycE mutants reveals a transient decline in the late exponential growth phase (onset at 4 h of growth). Also in the hycE+ background, a decline (budAB-less strain) or a diminished accumulation (budAB-containing strain) of formate was observed in this phase. A possible explanation for this is the activity of FDH-N, which also catalyzes the oxidation of formate to CO2 (Sawers, 1994). However, FDH-N transfers the electrons to nitrate (via a nitrate reductase) instead of protons and has a much higher affinity for formate than FDH-H (Leonhartsberger et al., 2002), which could explain why it is active in an earlier growth stage. The activity of FDH-N is limited, however, because LB medium contains only a small amount of nitrate. The disproportionation of formate (Figure 4F) by the hycE+ strains lasted longer when the budAB genes were present (48 h) than when they were absent (24 h), probably because a higher amount of formate was produced. This was also reflected by an increased gas production in the presence of the budAB genes during this phase, as reported above (Figure 3). Finally, ethanol was produced in higher quantities by the budAB-containing strains (Figure 4B). As a final experiment to demonstrate that formate conversion causes medium deacidification during stationary phase, 5 or 10 mM formate from a 1 M solution (pH 5.50) was added to the medium after 10 h of fermentative growth of budAB-containing E. coli MG1655, and the pH was subsequently measured after 10, 24, and 48 h. As expected, the addition of formate to the medium resulted in a significantly stronger pH increase during stationary phase (Table 5). Taken together, the metabolite profiles lead us to propose the following model to explain the effect of the introduction of the budAB genes in E. coli (see Figure 2). The introduction of these genes diverts part of the pyruvate generated from glycolysis to acetoin production. At the same time, possibly because of a reduced cellular pyruvate pool, the balance between the mixed-acid fermentation routes is shifted, with lactate production being almost shut down.
Nevertheless, since the budAB-containing strain produced higher amounts of formate (see previous paragraph), it maintains a higher flux of pyruvate to acetyl-CoA, as also indicated by the higher glucose consumption. This can be explained by the reduced acid production and, consequently, the reduced metabolic inhibition. The fate of acetyl-CoA is also different in the budAB-containing strain. This is necessarily so, because the reduced production of lactic acid creates an excess of NADH that must be reoxidized by another route to maintain the cellular redox balance. As can be seen in Figure 2, this is only possible by increasing ethanol production at the expense of acetate production. This is indeed what happens, since the budAB-containing strain no longer produces acetate and has increased ethanol production. Since acetate production is coupled to the generation of an extra ATP, introduction of the acetoin pathway reduces the ATP yield per mole of glucose fermented. However, this does not result in a reduced growth rate (Figures 1 and 3), because it is compensated by a higher glucose turnover. Thus, although the total biomass production (maximal cell density reached in early stationary phase) is approximately the same for all the strains, the budAB-containing strains require much more glucose to achieve this (Figure 4A).

ROLE OF HYDROGENASE 3 IN FERMENTATIVE GROWTH OF S. plymuthica RVH1

Finally, we investigated whether the FHL complex also attenuates acid formation and drives deacidification during fermentative growth of a natural 2,3-butanediol fermenter, using S. plymuthica RVH1 as a model. To this end, we constructed a ΔhycE mutant in this strain. The evolution of the medium pH for S. plymuthica RVH1 wild-type shows three phases (Figure 5): a decrease during the first 8 h, followed by a rapid increase between 8 and 10 h, and then a slower increase until 48 h. The rapid pH increase between 8 and 10 h is probably due to the switch to 2,3-butanediol production in the late exponential phase, since it was lost upon knockout of the 2,3-butanediol pathway (ΔbudAB::cat) but not upon knockout of hydrogenase 3 (ΔhycE). In contrast, the deacidification during stationary phase required both an active 2,3-butanediol pathway and an active hydrogenase 3. Genetic complementation of the ΔbudAB mutant restored its pH profile to that of the wild-type strain. However, since this complemented strain produces acetoin under the control of the plasmid Ptrc promoter right from the start of the experiment, its acidification is more attenuated than in the wild-type strain. Cell numbers declined after 48 h in the mutant strains that had lost or reduced deacidification capacity and, as a result, there was a clear correlation between cell numbers and medium pH at the end of the experiment. Previously, the pH profile during glucose fermentation in the 2,3-butanediol fermenter Enterobacter aerogenes was divided into three phases (Johansen et al., 1975). The first phase was characterized by a rapid drop to about pH 5.8; in the second phase, the pH remained almost constant at pH 5.6; and in the third phase, the pH increased again to about 6.5. However, during the last phase, the total amount of acetoin and 2,3-butanediol remained constant and 2,3-butanediol was reoxidized to acetoin, indicating that the 2,3-butanediol pathway is not involved in this deacidification (Johansen et al., 1975).
Our results demonstrate that, at least in S. plymuthica, the FHL complex is responsible for stationary-phase deacidification, since the final pH was about 1.3 pH units lower in the S. plymuthica RVH1 ΔhycE mutant compared to the wild-type.

ACKNOWLEDGMENTS

This work was supported by a doctoral fellowship to BV from the Agency for Innovation by Science and Technology (IWT), by a research grant from the Brazilian program Science without Borders (process number 5511/10-0), and by research grants.
Highly efficient hole injection from Au electrode to fullerene-doped triphenylamine derivative layer

Triphenylamine derivatives are superior hole-transport materials. For their application to high-functional organic semiconductor devices, efficient hole injection at the electrode/triphenylamine derivative interface is required. Herein, we report the design and evaluation of a Au/fullerene-doped α-phenyl-4′-[(4-methoxyphenyl)phenylamino]stilbene (TPA) buffer layer/TPA/Au layered device. It exhibits rectification conductivity, indicating that hole injection occurs more easily at the Au/fullerene-doped TPA interface than at the Au/TPA interface. A Richardson-Schottky analysis of the device reveals that the hole injection barrier (ΦB) at the Au/fullerene-doped TPA interface decreases to 0.021 eV upon using C70 as a dopant, whereas ΦB of Au/TPA is as large as 0.37 eV. The reduced ΦB of 0.021 eV satisfies the condition for ohmic contact at room temperature (ΦB ≤ 0.025 eV). Notably, C70 doping has a higher barrier-reduction effect than C60 doping. Furthermore, a noteworthy hole-injection mechanism, in which the ion-dipole interaction between TPA and fullerenes plays an important role in reducing the barrier height, is proposed based on cyclic voltammetry. These results should facilitate the design of the electrode/organic semiconductor interface for realizing low-voltage-driven organic devices.

In this study, we report a Au/triphenylamine derivative layered device that has an ohmic contact for hole injection (ΦB = kT ≤ 0.025 eV, where ΦB is the hole injection barrier, k is the Boltzmann constant, and T is the temperature) at room temperature (298 K) and a novel hole-injection mechanism. We used an additional hole injection layer consisting of a mixture of fullerene and a triphenylamine derivative. Further, we used not only C60 but also C70 as the fullerene, and α-phenyl-4′-[(4-methoxyphenyl)phenylamino]stilbene (Fig. 1a) as the triphenylamine derivative, which we call TPA. Fullerenes are well-known electron-transport materials for OPVs and OLEDs [33-35], but no study has yet applied C70 to the hole injection layer. A Au/fullerene-doped TPA/TPA/Au layered device was prepared, and its rectification characteristics were evaluated. In particular, the hole injection property at the Au/fullerene-doped TPA interface was quantitatively evaluated. Further, we conducted cyclic voltammetry to clarify the hole-injection mechanism at this interface.

Results and discussion

Hole-injection property of fabricated device. Figure 1b shows a schematic of the fullerene-doped TPA dual-layer device fabricated in this study. Hole injection from the bottom and top Au electrodes was defined as the forward and reverse direction, respectively. Figure 2a shows the current density-electric field (J-E) properties of the Au/C60- and C70-doped TPA/TPA/Au layered devices. Symmetrical J-E characteristics were observed in the forward and reverse directions for the non-doped (Au/TPA/Au) device, indicating that the energy barriers for hole injection at both Au/TPA interfaces were equal.
By contrast, the threshold electric field of the C60-doped device was lowered only in the forward direction, and that of the C70-doped device was drastically suppressed in the forward direction. These results demonstrate that hole injection at the Au/fullerene-doped TPA interface occurs more easily than at the Au/TPA interface. It should be noted that the threshold electric fields of the non-doped and C60-doped devices in the reverse direction are almost the same. Therefore, we successfully developed a device with a rectifying property by inserting C60- and C70-doped TPA buffer layers. This observed rectification conductivity is very useful for organic semiconductor device applications such as RFID tags. The reason why a higher current was observed in the reverse direction for the C70-doped device than for the C60-doped device (Fig. 2a) remains unclear. Figure 2b shows the dependence of the J-E characteristics in the forward direction on the C70 doping amount (0, 0.5, and 1 mol%) in TPA. A higher current density was achieved at a lower electric field as the C70 doping concentration increased. Therefore, fullerene doping induced efficient hole injection.

To consider the influence of fullerene doping on hole injection in detail, the J-E property of the Au/1 mol% C70-doped TPA/TPA/Au layered device was plotted on a logarithmic scale, as shown in Fig. 3a. Generally, the J-E characteristics of organic semiconductors consist of three types of current: (1) Schottky current (J ∝ E^0.5), (2) ohmic current (J ∝ E), and (3) space-charge-limited current (J ∝ E^x, x ≥ 2). Significantly, Fig. 3a shows that the 1 mol% C70-doped device exhibited an ohmic-type current response immediately after an electric field was applied, indicating ohmic contact for hole injection at the Au/C70-doped TPA interface at room temperature. In other words, no Schottky current was observed in the device. Subsequently, hole transport became the rate-determining process from around 0.2 × 10^4 V cm^-1, and a space-charge-limited current was observed in Fig. 3a. It should be noted that J ∝ E^0.5 relationships were observed in the logJ-logE characteristics of the Au/1 mol% C70-doped TPA/TPA/Au device at measurement temperatures of 4.5 °C and −22.3 °C, as shown in Supplementary Fig. S1.

We quantitatively evaluated the energy barrier height for hole injection at the Au/fullerene-doped TPA interface through a Richardson-Schottky plot analysis 32,36. The hole-injection barrier height (ΦB) was obtained using the thermionic-emission diode relation

I = A A* T^2 exp(−qΦB/(kT)) [exp(qV/(nkT)) − 1] (1)

where T is the temperature; A, the area; A*, the Richardson constant; q, the electronic charge; n, the ideality factor; and k, the Boltzmann constant. After the J-E characteristics of the device at various temperatures were measured as shown in Supplementary Fig. S2, the data were plotted according to the relationship logJ vs. E^0.5. As a result, straight lines (i.e., Schottky lines) were obtained, as shown in Supplementary Fig. S3a. Then, the current densities in the absence of an electric field (J0) were determined by extrapolating the Schottky lines, as listed in Supplementary Table S1. Notably, the maximum J0 was two orders of magnitude greater than the minimum J0. Finally, the Richardson plot (ln(J0/T^2) vs. T^-1) was drawn, as shown in Supplementary Fig. S3b, and ΦB was calculated from the slope of the Richardson line.
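The slope-extraction step can be illustrated with a few lines of code: ln(J0/T^2) is fitted against 1/T, and the barrier follows as ΦB = −slope × (k/q). The (T, J0) pairs below are illustrative placeholders, not the data of Supplementary Table S1.

```python
# Extract the hole-injection barrier from a Richardson plot:
# ln(J0/T^2) = const - (q*Phi_B/k)*(1/T).  Example data are assumed.
import numpy as np

k_over_q = 8.617e-5                               # Boltzmann const., eV/K
T = np.array([253.0, 268.0, 283.0, 298.0])        # temperatures, K
J0 = np.array([6.6e-9, 1.9e-8, 5.0e-8, 1.2e-7])   # A cm^-2 (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(J0 / T**2), 1)
phi_b = -slope * k_over_q                         # barrier height, eV
print(f"Phi_B ~ {phi_b:.3f} eV")                  # ~0.37 eV for this data
```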
Figure 3b shows the ΦB values as a function of the fullerene-doping concentration. ΦB for the non-doped (Au/TPA/Au) device was calculated to be 0.37 eV; this was almost the same as the ΦB estimated from the work function of a Au electrode (~5.1 eV) 37 and the ionization potential of TPA (~5.5 eV) 28. Sigmoid-shaped behavior of ΦB was observed for both the C60- and C70-doped devices. However, C70 doping had a higher barrier-reduction effect than C60 doping. Remarkably, ΦB for the Au/1 mol% C70-doped TPA/TPA/Au device was determined to be 0.021 eV; this satisfies the condition for ohmic contact (≤0.025 eV at room temperature). It is considered that there are multiple barriers within the Au/fullerene-doped TPA/TPA stack. Although the details are unclear, the Richardson-Schottky plot analysis revealed that the barrier of the rate-determining step (probably hole injection at the Au/fullerene-doped TPA interface) is 0.021 eV in the Au/1 mol% C70-doped TPA/TPA/Au device. With doping of more than 1 mol% C60 or C70, the barrier height did not decrease further, probably because it is difficult to dissolve C60 and C70 in organic solvents at a concentration of more than 1 mol% relative to TPA. Therefore, a technique for doping fullerenes at higher concentrations must be developed in the future. Overall, we reduced the hole injection barrier by 0.324 and 0.349 eV by introducing the C60- and C70-doped TPA layers as hole injection layers, respectively, and successfully formed an ohmic contact at the Au/1 mol% C70-doped TPA interface.

Consideration of hole-injection mechanism. We demonstrated barrier height reduction for hole injection at the Au/fullerene-doped TPA interface. Next, we focused on the barrier reduction mechanism. Lee reported that the hole-injection barrier was lowered by the interaction between the fullerene (C60 only) and the Al electrode 38. In this study, we used fullerenes (both C60 and C70) as a dopant in the TPA layer. Therefore, the reduced hole-injection barrier was attributed to the intermolecular interaction between the fullerene and the TPA (and not the electrode). Indeed, the ultraviolet photoelectron spectroscopy (UPS) results for films of Au only and 1 mol% C70-coated Au, shown in Supplementary Fig. S4, revealed that there is a negligibly slight interaction between fullerene and the Au electrode, because both obtained work functions were equivalent (~5.2 eV). First, we conducted UV-vis spectrometry (V-650 spectrophotometer, JASCO Corp.) to observe the interaction, for example, the formation of a charge-transfer (CT) complex. However, no new absorption band appeared in the spectrum for a sample with a mixture of 1 mol% C70 and TPA, as shown in Supplementary Fig. S5. Considering the accuracy of the measuring instrument, it is suggested that at most a negligibly small amount of a TPA-fullerene CT complex might be formed. Next, photoelectron yield spectroscopy (PYS) was performed. As shown in Supplementary Fig. S6, the PYS spectra for films of TPA only and 1 mol% C60-doped TPA behaved similarly, and both obtained ionization potentials were equivalent. Therefore, fullerene doping would not form new energy levels associated with charge injection/extraction. This suggests that fullerenes contribute to barrier reduction as polarizable substances, not as donors or acceptors.
Supplementary Fig. S7 shows the J-E characteristics of a Au/evaporated C70/TPA/Au layered device along with those of the Au/C70-doped TPA/TPA/Au layered device. Based on Supplementary Fig. S7b, the evaporated-C70/TPA and 1 mol% C70-doped TPA/TPA devices exhibited Schottky-type (J ∝ E^0.5) and ohmic-type (J ∝ E) current responses, respectively, immediately after an electric field was applied, which indicates that the two devices have different hole injection mechanisms. As a consequence, using the fullerene-doped TPA as a buffer layer yields higher conductivity than using evaporated C70. This suggests that it is important for the fullerenes to penetrate the TPA layer morphologically.

Then, we performed electrochemical analysis. Cyclic voltammetry was conducted using the electrochemical cell shown in Fig. 4a. Figure 4b shows cyclic voltammograms of TPA in mixed acetonitrile:toluene solutions with weight ratios of 1:0, 1:1, and 1:3. The half-wave potential (E1/2) for the oxidation of TPA,

TPA ⇌ TPA•+ + e− (2)

was determined as the potential at which the current equals half of the diffusion-limited current (id) 39. As a result, E1/2 was obtained as 0.76 V, 0.81 V, and 0.86 V vs. Ag/AgCl for acetonitrile:toluene weight ratios of 1:0, 1:1, and 1:3, respectively. id differed depending on the acetonitrile:toluene weight ratio, in the order 1:3 < 1:1 < 1:0. These results indicate that TPA is more easily oxidized with a higher amount of acetonitrile. Because acetonitrile and toluene are polar and nonpolar solvents, respectively, their different ratios result in changes in the relative permittivity of the electrolytes. TPA was easily oxidized to TPA•+ when it was surrounded by a polar solvent having a higher relative permittivity.

The mechanism in the solid-state device without a solvent was considered in light of the above findings. Because there are no ions in the solid state without any carrier injection, E1/2 must be calculated under the condition of the absence of a supporting electrolyte as an ion source. Figure 5a shows the cyclic voltammetry results for supporting electrolyte concentrations of 2.0, 1.0, 0.5, and 0.05 mmol dm^-3 under the cell conditions shown in Fig. 4a. The oxidation potential of TPA decreased and id increased as the concentration of the supporting electrolyte increased. When E1/2 for various supporting electrolyte concentrations and various acetonitrile:toluene weight ratios was calculated and plotted as a function of the square root of the supporting electrolyte concentration 40, the linear function shown in Fig. 5b was obtained. Therefore, E1/2 without the supporting electrolyte can be calculated by extrapolating the straight line shown in Fig. 5b. Consequently, E1/2 for acetonitrile:toluene weight ratios of 1:0, 1:1, and 1:3 was 0.87 V, 0.89 V, and 0.94 V vs. Ag/AgCl, respectively. Even in the solid state, TPA is more likely to become TPA•+ in the presence of a substance with a higher relative permittivity. Therefore, the ion-dipole interaction between TPA and fullerenes is suggested to facilitate hole injection from the Au electrode to TPA, because fullerenes have a relatively high dipole moment 41. The relative permittivities of C60 and C70 were measured to be ~3 and ~4, respectively, using an 879B LCR meter (B&K Precision Corp.) at 1 kHz.
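The zero-electrolyte extrapolation is a simple linear fit in sqrt(c); a minimal sketch is shown below, with example (concentration, E1/2) pairs that are illustrative rather than the measured values of Fig. 5.

```python
# Extrapolate the half-wave potential to zero supporting-electrolyte
# concentration by fitting E_1/2 against sqrt(c), as in Fig. 5b.
import numpy as np

c = np.array([0.05, 0.5, 1.0, 2.0])          # TMAP, mmol dm^-3
e_half = np.array([0.86, 0.83, 0.81, 0.79])  # V vs. Ag/AgCl (assumed)

slope, intercept = np.polyfit(np.sqrt(c), e_half, 1)
print(f"E_1/2 at zero electrolyte ~ {intercept:.2f} V vs. Ag/AgCl")
```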
Because C 70 has a higher relative permittivity than C 60 , the hole injection barrier was reduced more efficiently and an ohmic contact was probably formed owing to a larger ion-dipole interaction effect. In the future, we plan to investigate the best doping material for reducing the barrier height from the viewpoint of relative permittivity. Finally, electrochemical impedance spectroscopy was carried out to examine the energetic characteristics of the interface; the result is shown in Supplementary Fig. S8. When the capacitances at the Au/TPA and Au/fullerene-doped TPA interfaces were determined by fitting the impedance curves, the latter was larger than the former. This indicates that the depletion layer (energy barrier) formed at the Au/TPA interface becomes smaller due to the presence of fullerenes, and it supports our proposed intermolecular ion-dipole interaction effect.
Figure 6 summarizes the barrier height reduction mechanism for hole injection. Owing to the ion-dipole interaction between TPA and fullerenes, TPA is easily oxidized and stabilized to TPA •+ . This indicates that the rate constant of the forward reaction (k f ) in Eq. (2) increased. Therefore, hole injection at the Au/fullerene-doped TPA interface became highly efficient, because the equilibrium constant (K), expressed as the ratio of k f to the rate constant of the reverse reaction (k b ) in Eq. (2) (K = k f /k b ), increased. Overall, the results suggest that the proposed hole injection technique has a novel mechanism. The UPS result (Fig. S4) revealed a negligible interaction between the fullerene and the Au electrode. The UV-vis result (Fig. S5) indicates that a TPA-fullerene CT complex is probably not formed. The PYS result (Fig. S6) shows that fullerene doping does not form new energy levels associated with hole injection. On the other hand, the contribution of the intermolecular ion-dipole interaction (solvation effect) to the reduction of the hole-injection barrier is strongly supported by the results of the electrochemical analyses (Figs. 4 and 5). The ion-dipole interaction is involved in the ionization process of TPA. According to Marcus theory 42,43 , the reorganization energy enters the activation energy and depends on the relative permittivity. In other words, the activation energy of TPA oxidation to TPA •+ becomes low when the relative permittivity around TPA is high, in both the solid and liquid states. Thus, the large ion-dipole interaction associated with a medium of high relative permittivity lowers the activation energy, which appears as the negative shift of E 1/2 for TPA oxidation observed in Figs. 4 and 5. The same decrease in activation energy applies to a solid medium of high relative permittivity, such as the fullerenes. Consequently, an ohmic contact at room temperature was successfully achieved at the Au/1 mol% C 70 -doped TPA interface (Fig. 3).
Methods
Materials. TPA, α-phenyl-4′-[(4-methoxyphenyl)phenylamino]stilbene, is shown in Fig. 1a. A thickness of 0.3 μm was strategically employed for the fullerene-doped TPA layer in order to laminate the layer completely by a cast-coat method. Possibly, the 0.3-μm thickness of the layer influences the hole-transport resistance in the device. However, the influence should be negligible in this study because this work focuses on the hole-injection property at the Au/fullerene-doped TPA interface.
Then, a 50 wt% TPA-containing tetrahydrofuran (THF) supersaturated solution was spin-coated at 3000 rpm for 30 s onto the fullerene-doped TPA layer to obtain a 5.5-μm-thick TPA layer. The thicknesses of the fullerene-doped TPA and TPA layers were determined using a Surfcom 130A contact-type thickness meter (Tokyo Seimitsu Co., Ltd.). The supersaturated TPA solution was used so that the lower layer was not dissolved. We also used THF as the solvent because fullerenes are almost insoluble in THF. As a result, intact TPA-containing double layers were obtained, as shown in Supplementary Fig. S9. A counter Au electrode was finally vacuum-deposited in the same manner to fabricate the stacked Au/fullerene-doped TPA/TPA/Au device shown in Fig. 1b. The J-E characteristics of the fabricated devices were measured at various temperatures at 1 × 10 −3 Pa under dark conditions in a vacuum chamber using a source meter (Keithley 2612A).
Cyclic voltammetry. A Pt disk with a diameter of 10 μm, a Pt plate, and a Ag/AgCl/saturated-KCl electrode were used as the working, counter, and reference electrodes, respectively. The ϕ10 μm Pt electrode was prepared using the procedure described in a previous study 31 . Two types of electrolyte were used. One consisted of 5.0 mmol dm −3 TPA; a supporting electrolyte of 50 mmol dm −3 TMAP; and an acetonitrile/toluene mixed solvent in weight ratios of 1:0, 1:1, and 1:3. The other consisted of 5.0 mmol dm −3 TPA; TMAP in concentrations of 2.0, 1.0, 0.5, and 0.05 mmol dm −3 ; and an acetonitrile/toluene mixed solvent in weight ratios of 1:0, 1:1, and 1:3. Cyclic voltammetry was conducted at a scan rate of 50 mV s −1 in the potential range of 0.3-1.1 V vs. Ag/AgCl using the electrochemical cell shown in Fig. 4a and a HA-150 potentiostat (Hokuto Denko Corp.). The solutions were degassed by N 2 bubbling before the measurements.
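For the J-E characteristics measured as described above, the ohmic (J ∝ E) and Schottky-type (J ∝ E 0.5) responses discussed earlier can be distinguished by the slope of a log-log fit. A minimal sketch, with a synthetic curve standing in for measured data:

```python
# Hedged sketch: classify a J-E response by fitting the exponent n in
# J = a * E^n on a log-log scale; n ~ 1 suggests ohmic, n ~ 0.5 Schottky-type.
import numpy as np

E = np.logspace(4, 6, 20)   # electric field (V/m), placeholder range
J = 3e-9 * E**1.0           # placeholder: an ohmic response

n, log_a = np.polyfit(np.log10(E), np.log10(J), 1)
regime = "ohmic-like" if abs(n - 1.0) < abs(n - 0.5) else "Schottky-like"
print(f"fitted exponent n = {n:.2f} -> {regime}")
```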
4,187.4
2022-05-04T00:00:00.000
[ "Materials Science", "Chemistry" ]
Global Asymptotic Stability of Pseudo Almost Periodic Solutions to a Lasota-Wazewska Model with Distributed Delays
In this paper, we study a class of Lasota-Wazewska models with distributed delays. New criteria for the existence and global asymptotic stability of positive pseudo almost periodic solutions are established by using the fixed point method and the properties of pseudo almost periodic functions, together with a suitable Lyapunov function. Finally, we present an example with simulations to support the theoretical results. The obtained results are essentially new and they extend previously known results.
Introduction. To describe the survival of red blood cells in an animal, Ważewska-Czyżewska and Lasota in [20] proposed the following autonomous nonlinear delay differential equation as their model: x′(t) = −a x(t) + b e^{−c x(t−τ)}, (1.1) where x(t) denotes the number of red blood cells at time t, a > 0 is the probability of death of a red blood cell, b and c are positive constants related to the production of red blood cells per unit time, and τ is the time required to produce a red blood cell. As a classical model of population dynamics, model (1.1) and its modifications have received great attention from both theoretical and mathematical biologists and have been well studied. In particular, qualitative properties such as periodicity, almost periodicity and stability of solutions of nonautonomous Lasota-Wazewska models have been studied extensively by many authors; we refer to [5,7,9,11,13,15-19] and the references therein.
Since nature is full of tiny perturbations, both the periodicity assumption and the almost periodicity assumption are only approximations of the natural perturbations [21,25]. A well-known extension of almost periodicity is pseudo almost periodicity, which was introduced by C. Zhang in [24,25] and has been widely applied in the theory of ODEs and PDEs; see [2-4,10,14] and the references therein. In addition, it is well known that time delays often occur in realistic biological systems; they can make the dynamic behavior of a biological model more complex, and may destabilize stable equilibria and admit almost periodic oscillations, pseudo almost periodic motion, bifurcation and chaos. Compared with discrete delays, distributed delays are more general and more difficult to handle. Therefore, it is important and interesting to study the almost periodic dynamic behavior of the Lasota-Wazewska model with distributed delays.
Motivated by the above discussions, in this paper we consider a Lasota-Wazewska model with pseudo almost periodic coefficients and distributed delays, denoted model (1.2), where K_j(s) is the probability kernel of the distributed delays and the other variables and parameters have the same biological meanings as in (1.1), with the difference that they are now time-dependent.
The main purpose of this paper is to employ the fixed point method and the properties of pseudo almost periodic functions, together with a suitable Lyapunov functional, to establish sufficient conditions for the existence and global asymptotic stability of a pseudo almost periodic solution of model (1.2). The results obtained in the present paper are completely new and they extend previously known results in the literature.
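For a rough numerical feel for model (1.1), the sketch below integrates the classical equation with constant coefficients by the forward Euler method. The parameter values are arbitrary and serve only to display a trajectory approaching a positive equilibrium; this is an illustration, not the simulation of Section 4.

```python
# Hedged sketch: forward-Euler integration of the classical Lasota-Wazewska
# delay equation x'(t) = -a*x(t) + b*exp(-c*x(t - tau)).
import numpy as np

a, b, c, tau = 0.5, 2.0, 1.0, 2.0   # arbitrary illustrative parameters
dt, t_end = 0.01, 60.0
n_hist = int(tau / dt)              # steps covering the delay interval
n_steps = int(t_end / dt)

x = np.empty(n_hist + n_steps + 1)
x[: n_hist + 1] = 1.5               # constant positive history on [-tau, 0]

for i in range(n_hist, n_hist + n_steps):
    x[i + 1] = x[i] + dt * (-a * x[i] + b * np.exp(-c * x[i - n_hist]))

print(f"x(t_end) ~ {x[-1]:.4f}")    # settles near the positive equilibrium
```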
The structure of this paper is as follows. In Section 2, we give some preliminaries related to our main results. In Section 3, we present the main results on the dynamic behavior of model (1.2). Section 4 gives an example with simulations to demonstrate the effectiveness of the theoretical results.
Notations. Let BC(R, R) denote the set of bounded continuous functions from R to R, and let ‖·‖ denote the supremum norm ‖g‖ := sup_{t∈R} |g(t)|; obviously, (BC(R, R), ‖·‖) is a Banach space. Finally, given a function g ∈ BC(R, R), let g+ and g− be defined as g+ := sup_{t∈R} g(t) and g− := inf_{t∈R} g(t).
Preliminaries. According to the biological interpretation of model (1.2), only positive solutions are meaningful and therefore admissible; positive initial conditions are imposed accordingly. Let us recall some definitions and notations about almost periodicity and pseudo almost periodicity. For more details, we refer the reader to [6,22].
Definition 2.1 (see [6]). Let f(t) ∈ BC(R, R). The function f(t) is said to be almost periodic on R if, for any ε > 0, the set T(f, ε) = {ς : |f(t + ς) − f(t)| < ε for all t ∈ R} is relatively dense; i.e., for any ε > 0 it is possible to find a real number l = l(ε) > 0 such that any interval of length l(ε) contains a number ς = ς(ε) with |f(t + ς) − f(t)| < ε for all t ∈ R. We denote by AP(R, R) the set of all such functions.
Definition 2.2 (see [22]). A function f(t) ∈ BC(R, R) is called pseudo almost periodic if it can be expressed as f = f1 + f2, where f1 ∈ AP(R, R) and f2 is ergodic, i.e., lim_{T→∞} (1/(2T)) ∫_{−T}^{T} |f2(t)| dt = 0. The collection of such functions will be denoted by PAP(R, R).
Remark 2.3. The functions f1 and f2 in Definition 2.2 are, respectively, called the almost periodic component and the ergodic perturbation of the pseudo almost periodic function f. Moreover, the decomposition given in Definition 2.2 is unique.
Definition 2.6 (see [6,23]). Let x ∈ R and Q(t) be a continuous function defined on R. The linear equation x′(t) = Q(t)x(t) (2.2) is said to admit an exponential dichotomy on R if there exist positive constants k_i, α_i, i = 1, 2, a projection P, and a fundamental solution X(t) of (2.2) satisfying ‖X(t)PX^{−1}(s)‖ ≤ k_1 e^{−α_1(t−s)} for t ≥ s and ‖X(t)(I − P)X^{−1}(s)‖ ≤ k_2 e^{−α_2(s−t)} for t ≤ s.
Lemma 2.7 (see [23]). Assume that Q(t) is an almost periodic function and g(t) ∈ PAP(R, R). If the linear equation (2.2) admits an exponential dichotomy, then the pseudo almost periodic equation x′(t) = Q(t)x(t) + g(t) has a unique pseudo almost periodic solution x(t).
Lemma 2.8 (see [6]). Let δ(t) be an almost periodic function on R with M[δ] := lim_{T→∞} (1/T) ∫_t^{t+T} δ(s) ds > 0. Then the linear equation x′(t) = −δ(t)x(t) admits an exponential dichotomy on R.
The following lemma is from [1] and will be employed in establishing the asymptotic stability of model (1.2).
Lemma 2.9. Let l be a real number and f be a non-negative function defined on [l, ∞) such that f is integrable and uniformly continuous on [l, ∞). Then lim_{t→∞} f(t) = 0.
Main results. In this section, the main results of this paper are stated. For convenience, we divide this part into two subsections.
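For later use, we record the explicit solution representation behind Lemma 2.7; this is the standard formula for the unique bounded solution under an exponential dichotomy (as in [6,23]), reproduced here since the display did not survive extraction:

```latex
x(t) = \int_{-\infty}^{t} X(t)\,P\,X^{-1}(s)\,g(s)\,\mathrm{d}s
     \;-\; \int_{t}^{+\infty} X(t)\,(I-P)\,X^{-1}(s)\,g(s)\,\mathrm{d}s .
```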
Existence of the pseudo almost periodic solution. For the sake of convenience, we denote x(t) = x(t; t0, φ). Let [t0, T) ⊆ [t0, η(φ)) be an interval such that x(t) > 0 for all t ∈ [t0, T). We first show that (3.2) holds on this interval. In fact, if (3.2) did not hold, there would exist t1 ∈ (t0, T) at which a direct estimate yields a contradiction; hence (3.2) holds. We next show that (3.4) holds. Otherwise, there would exist t2 ∈ (t0, η(φ)) violating it; in view of (1.2), (3.2) and (3.5), a direct calculation again produces a contradiction, and hence (3.4) holds. According to (3.2) and (3.4), one easily sees that (3.1) is true, which implies that x(t) is bounded. Therefore, we know from the continuation theorem in [8, Theorem 3.2 on page 46] that the existence interval of each solution of model (1.2) can be extended to [t0, ∞).
Lemma 3.2. Suppose assumptions (A1)-(A3) are satisfied. Define the nonlinear operator Γ by (Γφ)(t) := x_φ(t) for each φ ∈ PAP(R, R).
Proof. Firstly, we claim that F(s) ∈ PAP(R, R). In fact, let φ ∈ PAP(R, R); since e^{−v} is uniformly continuous for v ≥ 0, we know from Lemma 3.1 and [22, Corollary 5.4 on page 58] that e^{−c_j(v+s)φ(v)} ∈ PAP(R, R), and therefore g_φ can be decomposed as g_φ = g_φ^1 + g_φ^2, as in Definition 2.2. We know from the almost periodicity of g_φ^1(v) that for any ε > 0 there exists a number l(ε) such that in any interval [α, α + l(ε)] one can find a number ς with the ε-translation property (3.6). On the other hand, the ergodic part satisfies (3.7). Combining (3.6) and (3.7), we derive the claim. Then, by a standard argument as in Lemma 3.2 of [4], we can prove that Γ maps PAP(R, R) into itself.
Our first main result can be stated as follows.
Theorem 3.3. In addition to (A1)-(A3), suppose further that condition (3.8) holds. Then model (1.2) admits a unique pseudo almost periodic solution in the region B.
Proof. For any φ ∈ PAP(R, R), we introduce an auxiliary linear equation. Noticing that M[a] > 0, we know from Lemma 2.8 that the linear equation x′(t) = −a(t)x(t) admits an exponential dichotomy on R. Therefore, by Lemmas 2.5 and 2.7, the auxiliary equation has exactly one pseudo almost periodic solution x_φ(t), and one can observe from Lemma 3.2 that x_φ(t) ∈ PAP(R, R). Obviously, to show that model (1.2) has a unique pseudo almost periodic solution, it suffices to prove that Γ has a fixed point in B. Let us first prove that the operator Γ is a self-mapping from B to B. In fact, for any φ ∈ B, the upper estimate (3.11) holds; on the other hand, the corresponding lower estimate, together with (3.11), means that the mapping Γ is a self-mapping from B to B. Next, we show that the mapping Γ is a contraction on B. For any φ, φ* ∈ B, one has the estimates (3.12) and (3.13). It follows from (3.12) and (3.13) that (3.14) holds, which shows that Γ is a contraction mapping. Therefore, by virtue of the Banach fixed point theorem, Γ has a unique fixed point, which corresponds to the solution of model (1.2) in B ⊂ PAP(R, R). This completes the proof of Theorem 3.3.
Asymptotic stability of the pseudo almost periodic solution. In the following, we analyse the global asymptotic stability of model (1.2).
Theorem 3.4. If all the assumptions in Theorem 3.3 are satisfied, then all solutions of model (1.2) in the region B converge to its unique pseudo almost periodic solution.
Proof. Let x(t) be any solution of model (1.2) and x*(t) be a pseudo almost periodic solution of model (1.2), and consider a suitable Lyapunov function V(t). A direct calculation of the right derivative D+V(t) of V(t) along the solutions of model (1.2) produces (3.15). It follows from (3.8) and (3.15) that there exists a positive constant μ1 > 0 such that (3.16) holds. Integrating both sides of (3.16) from t0 to t shows that |x(t) − x*(t)| is integrable on [t0, ∞). From Lemma 3.1, we obtain that x(t), x*(t) and their derivatives remain bounded on [t0, ∞) (from the equation they satisfy). It then follows that |x(t) − x*(t)| is uniformly continuous on [t0, ∞). By Lemma 2.9, we conclude that lim_{t→∞} |x(t) − x*(t)| = 0. The proof of Theorem 3.4 is complete.
Remark 3.5. Very recently, J. Shao in [16] studied a Lasota-Wazewska model with an oscillating death rate, equation (3.17), where a : R → R is an almost periodic function controlled by a non-negative function and b_j, c_j, τ_j : R → [0, +∞) are pseudo almost periodic functions. Assuming that there exist a bounded continuous function a* : R → [0, ∞) and constants F_i, F_S, κ satisfying (3.18)-(3.19), the author proved that equation (3.17) has a pseudo almost periodic solution which is globally exponentially stable. One can see that the death-rate function a(t) in equation (3.17) is more general than that in equation (1.2). However, if K_j(s) = δ(s − τ_j), where δ(s) denotes the Dirac δ-function, then equation (1.2) becomes equation (3.17) with constant discrete delays. On the other hand, as pointed out by L. Duan et al. in [5] and Y. Kuang in [12], it is more reasonable and realistic to establish delay-dependent criteria for the dynamics of a system, because the delays have an important effect on the system; one can clearly see that the stability criteria established here are delay-dependent and that the method used here is different from that of [16]. This indicates that the two sets of results are complementary to each other. Therefore, our results are new and complement the existing ones. Moreover, it seems that condition (3.8) is easier to verify than (3.18)-(3.19). The obtained results also extend the corresponding ones in [5,7,9,13,15,17,19]; in particular, one can easily see that they extend the corresponding results in [18]. To the best of our knowledge, on the other hand, few authors have considered the existence and stability of pseudo almost periodic solutions of model (1.2). Therefore, the main results of the present paper are essentially new and they extend previously known results.
2,889.6
2015-01-01T00:00:00.000
[ "Mathematics" ]
SAXS Evaluation of Size Distribution for Nanoparticles
Size distribution is an important structural aspect for rationalizing the relationship between the structure and properties of materials utilizing polydisperse nanoparticles. The use of dynamic light scattering (DLS) may come to mind for the characterization of the size distribution of particles. However, only solution samples can be analyzed, and even for those the solution should be transparent or translucent because visible light is used. It is needless to say that solid samples are out of range. Furthermore, only size distributions in the range of several tens of nanometers can be characterized, so DLS is useless for particles in the range of several nanometers. Therefore, the small-angle X-ray scattering (SAXS) technique is much superior when considering the determination of the size distribution on the several-nanometer length scale for opaque solutions and for solid specimens. Furthermore, the SAXS technique is applicable not only to spherical particles but also to platelet (lamellar) and rod-like (cylindrical) particles. In this chapter, we focus on the form factor of a variety of nanostructures (spheres, prolates, core-shell spheres, core-shell cylinders and lamellae). Starting from a monodisperse nanostructure size, through unimodal distributions with a narrow standard deviation or widely spreading distributions, and finally to fully discrete distributions, the size distribution can be evaluated by computational parameter fitting to the experimentally obtained SAXS profile. In particular, this methodology is useful for systems forming complicated aggregations. Not only the size distribution of 'a bunch of grapes' but also the size distribution of all 'grains of grapes in the bunch' can be evaluated according to this methodology. This is very much in contrast to the DLS technique, by which only 'a bunch of grapes' is analyzed but 'grains of grapes in the bunch' cannot be, because the DLS technique in principle evaluates diffusion constants of particles and all of the grains in the same bunch diffuse as a whole. Thus, the methodology is important for highlighting versatility and diversity in real materials, especially in soft matter, both in the liquid and in the solid states.
Introduction. In recent years, the control of nanostructures has attracted increasing attention in the field of materials science, especially in relation to soft matter [1]. Versatile properties and functions can be obtained by designing nanostructures in solid-state materials, as well as nanomaterials dispersed in a liquid medium. Even contradictory properties, such as hardness and softness, may coexist in one material when so-called inclined (graded) nanostructures are fabricated (for instance, with the nanoparticle size gradually changing as a function of position in the material). This in turn indicates that the size distribution of the nanostructures should be rigorously evaluated for a better understanding of the effects of nanostructure on properties and functions. For biological systems or supramolecular organizations, the situation contrasts sharply with the ubiquitous materials described above, because they spontaneously form regular aggregates. Therefore, the size distribution is narrow and follows a simple mathematical function with a comparatively small standard deviation.
By contrast, for the ubiquitous materials, the discrete distribution of the size needs to be determined. Moreover, even for regular nanostructures, determination of the discrete distribution of the nanostructure size is needed to reveal a transient state upon transition from state 1 to state 2 triggered by a sudden change in temperature, pH, or other external parameters. It is well known that the size distribution of particles can be evaluated by the use of dynamic light scattering (DLS). However, only solution samples can be analyzed, and even for those the solution should be transparent or translucent because visible light is used. It is needless to say that solid samples are out of range. Furthermore, only size distributions in the range of several tens of nanometers can be characterized, so DLS is useless for particles in the range of several nanometers. Therefore, the small-angle X-ray scattering (SAXS) technique is much superior when considering the determination of the size distribution on the several-nanometer length scale for opaque solutions and for solid specimens [2]. Furthermore, the SAXS technique is applicable not only to spherical particles but also to platelet (lamellar) and rod-like (cylindrical) particles, and it enables us to determine the thickness distribution of lamellae or the cross-sectional radius distribution of cylinders. Namely, the type of particle shape poses no problem for the SAXS technique, even for hollow cylinders or hollow spheres [3]. The principle is simple. Scattering comprises not only a contribution from the regularity of the space-filling ordering of the particles (the lattice factor) but also one from the single particle (the form factor). The particle scattering can be mathematically formulated depending on the type of particle shape (lamella, cylinder or sphere). In block copolymer microdomain systems, a Gauss distribution of the particle size has usually been assumed. Only recently has direct determination of the discrete size distribution become available, by fitting the theoretical scattering function to the experimentally obtained SAXS profile [the plot of the scattering intensity as a function of the magnitude of the scattering vector, q = (4π/λ) sin(Θ/2), with Θ and λ being the scattering angle and the wavelength of the X-rays, respectively], where the abundance of particles having a given size is treated as a floating parameter with a step of 1 nm (the step can be made finer). In this chapter, starting with nanoparticles with a narrow size distribution, we will see the characteristic shapes of the form factors for protein self-assemblies, block copolymer microdomains and peptide amphiphile nanofibers. Then, we shift our target to the evaluation of the discrete distribution of the size of nanostructures by SAXS. The examples shown are the thickness distribution of the crystalline lamellae of polyethylene glycol in polymer blends and the thickness distribution of the hard segment domains of supramolecular elastomers (starblocks of soft polyisobutylene and hard oligo(β-alanine) segments). Other notable examples are sterically stabilized polypyrrole-palladium (PPy-Pd) nanocomposite particles, hybrid amphiphilic poly(N-isopropylacrylamide)/metal cyanide complexes and cobalt(II) terpyridine complexes with diblock copolypeptide amphiphiles. For such systems forming complicated aggregations, this methodology is useful. Not only the size distribution of 'a bunch of grapes' but also the size distribution of all 'grains of grapes in the bunch' can be evaluated according to this methodology.
This is very much in contrast to the DLS technique, by which only 'a bunch of grapes' is analyzed but 'grains of grapes in the bunch' cannot be, because the DLS technique in principle evaluates diffusion constants of particles and all of the grains in the same bunch diffuse as a whole. Thus, the methodology is important for highlighting versatility and diversity in real materials, especially in soft matter, both in the liquid and in the solid states.
Nanoparticles with a narrow size distribution. First of all, some typical examples of experimentally observed form factors are demonstrated. The samples are self-assemblies of proteins, block copolymer microdomains and peptide amphiphiles. Apoferritin is a protein with the ability to store iron atoms; it is referred to as ferritin when iron atoms are bound. Apoferritin forms a spherical shell as a self-assembled nanostructure with a very uniform size. As indicated in Figure 1 (pH dependence of SAXS profiles), its SAXS profiles (apoferritin, 24-mer) exhibit characteristic features with many peaks, due to its uniform shape, for pH ≥ 3.40 [4]. A dramatic change in the SAXS profile is detected between pH = 1.90 and 3.40. This means that apoferritin is disassembled under acidic conditions. Time-resolved SAXS measurements have been utilized to study the disassembling and reassembling processes upon changes in pH [4,5]. In Figure 1, the curve shows the result of SAXS modeling by the scattering program GNOM [6]. Since protein molecules produce typical form factors, they are frequently used to obtain commissioning data for newly launched SAXS beamlines or apparatuses [7-9].
[Figure 1 caption: The symbols indicate the experimental data, and the solid lines indicate the fits obtained using the GNOM program. The solid lines without symbols are the theoretical SAXS curves calculated from the crystal structure of apoferritin and its subunit crystal (PDB code 3F32). For clarity, each plot is shifted along the log I axis [4].]
It is known that block copolymers spontaneously form regular nanostructures with a narrow size distribution. Figure 2 shows examples of the SAXS profiles for a sphere-forming block copolymer (SEBS: polystyrene-block-poly(ethylene-co-butylene)-block-polystyrene triblock copolymer with Mn = 6.7 × 10^4, Mw/Mn = 1.04, and PS volume fraction = 0.084) [10], where Mn and Mw denote the number-average and weight-average molecular weights, respectively. In Figure 2, the solid curve is the result of a model calculation for spherical particles in which not only the form factor but also the lattice factor of the BCC (body-centered cubic) lattice is taken into account; the full equation is Eq. (1) [11-14]. In Eq. (1), <x> denotes the average of the quantity x, and f(q) and Z(q) are the particle and lattice factors, designating the scattering amplitude due to the intraparticle interference and the scattering intensity due to the interparticle interference, respectively. The form factor f(q) for a spherical particle with radius R can be given as f(q) = A_e ΔρV Φ(qR), with Φ(qR) = 3[sin(qR) − qR cos(qR)]/(qR)^3, where A_e is the scattering amplitude of the Thomson scattering, Δρ is the difference in electron density between the sphere and the matrix, and V is the volume of the sphere. Here, the Gauss distribution is used for R, with σ R being the standard deviation. On the other hand, the lattice factor Z(q) is given by Eq. (3),
with Eulerian angles θ and φ, which define the orientation of the unit cell of a given grain with respect to the experimental Cartesian coordinates, and g = Δd/<d>, which is the degree of lattice distortion (Δd denotes the standard deviation in d due to the paracrystalline distortion). Eq. (4) takes different forms for the bcc and the fcc lattice. In Eqs. (3) and (4), d denotes the Bragg spacing. The spacings of the {110} and {111} planes for the bcc and fcc lattices, respectively, give rise to the first-order peaks. For randomly oriented polygrains in actual samples, the scattering is isotropic. Therefore, Z(q, θ, φ) is averaged with respect to θ and φ to obtain the isotropic Z(q) = (1/4π) ∫∫ Z(q, θ, φ) sin θ dθ dφ (11).
As clearly observed in Figure 2, the broad peak around q = 0.71 nm−1 is due to the form factor. The model curve is the result of a calculation with <R> = 7.90 nm for the specimen annealed at 130°C and <R> = 8.10 nm for the specimen annealed at 150°C, with the standard deviation of the size distribution (σ R) being 1.09 and 1.10 nm, respectively. The thus-evaluated <R> is consistent with the result of transmission electron microscopic observation (as shown in Figure 3). Note also that the order-disorder transition temperature lies between 130 and 150°C, so that the bcc ordering is quite regular for the specimen annealed at 130°C, while it is poor for the specimen annealed at 150°C. The SAXS profile for the specimen annealed at 130°C displays clear lattice peaks at relative q values of 1:√2:√3, indicating high regularity of the bcc ordering. Sphere-forming block copolymers mostly exhibit bcc ordering due to an entropic advantage [14]; fcc ordering has been found only in particular cases. Comparison of the results shown in Figures 1 and 2 clearly indicates that the many peaks of a monodisperse particle easily collapse and become more featureless when a size distribution is incorporated, even a small one. Nevertheless, it is characteristic of block copolymer microdomains that one peak of the form factor remains discernible. Very recently, it has been found that the PS spherical microdomains are deformed upon uniaxial stretching of SEBS-8 film specimens [15]. Since an SEBS triblock copolymer with glassy PS spherical microdomains can be used as a thermoplastic elastomer (TPE), the film specimen can be stretched. In Figure 4, 2D-SAXS patterns are displayed to visualize the deformation of the round-shaped form factor upon uniaxial stretching. Figure 4a shows the 2D-SAXS pattern of the SEBS-8 film specimen, where the round-shaped form factor clearly appears at q = 0.77 nm−1. The round peak of the form factor is deformed into an ellipsoid (Figure 4b) upon uniaxial stretching of the film specimen up to a strain of 3.65 (stretching ratio of 4.65) at room temperature. The peak position in the q direction parallel to the stretching direction (q//SD) is lower than that in the q direction perpendicular to SD (q⊥SD). This means that the size of the particle in the q// direction is bigger than that in the q⊥ direction, which in turn implies deformation of the spherical particles. Therefore, a model calculation of the form factor, P(q), for a prolate was conducted using Eqs. (12)-(16), with Eq. (16) giving the volume of the prolate. Here, R maj and R min stand for the radius of the longer axis and the radius of the shorter axis of the prolate, respectively (Figure 5), and φ is the angle between the q direction and the long axis of the prolate.
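A small numerical sketch of the oriented-prolate form factor may help here. It assumes the standard ellipsoid-of-revolution amplitude V Φ(q r(φ)) with r(φ) = (R_maj² cos²φ + R_min² sin²φ)^(1/2), and it interprets ν as the axial ratio R_maj/R_min; since Eqs. (12)-(16) are not reproduced above, this is an assumed stand-in rather than the authors' exact expressions.

```python
# Hedged sketch: form factor of a perfectly oriented prolate (long axis
# along SD), probed at angle `angle` between q and the long axis.
import numpy as np

def phi_sphere(x):
    """Normalized sphere amplitude 3(sin x - x cos x)/x^3."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def p_prolate(q, angle, R_min=6.44, nu=1.20):
    R_maj = nu * R_min                      # assumed meaning of nu
    r = np.sqrt(R_maj**2 * np.cos(angle)**2 + R_min**2 * np.sin(angle)**2)
    V = (4.0 / 3.0) * np.pi * R_maj * R_min**2
    return (V * phi_sphere(q * r))**2       # arbitrary intensity units

q = np.linspace(0.05, 2.0, 400)             # nm^-1
I_par = p_prolate(q, angle=0.0)             # q parallel to SD
I_perp = p_prolate(q, angle=np.pi / 2)      # q perpendicular to SD
```

With <R min> = 6.44 nm and <ν> = 1.20, the computed peak indeed shifts to lower q for q // SD and to higher q for q ⊥ SD, consistent with the behavior described for Figure 6.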
To fit the SAXS profile with the model calculation, the distributions of R min, ν and φ are required. Note that the distributions of φ and μ define the orientational distribution functions Ψ φ(φ) and Ψ μ(μ), respectively. However, for this particular case, φ can be considered to be zero with no distribution, namely perfect orientation of the prolates with their long axes parallel to SD, because the uniaxial stretching elongates the spherical microdomains into prolates with their long axes parallel to SD; this in turn enables us to set Ψ μ(μ) = 1 regardless of μ. Therefore, with the number of parameters thus decreased, it became easier to evaluate the average values of R min and ν together with their distributions (Ξ(R min) and Ω(ν)). The results for the 1D-SAXS profiles in the q// and q⊥ directions are shown in Figure 6a and b, respectively. In both cases, the 1D-SAXS profile for the unstretched film specimen (before stretching) is shown together. It is clearly observed that the peak of the form factor moved toward the lower and higher q ranges upon stretching in the q// and q⊥ directions, respectively. Furthermore, both SAXS profiles can be fitted by the prolate model using Eqs. (1) and (12)-(16) with the bcc lattice factor. Here, <R min> = 6.44 nm and <ν> = 1.20 were used for the model calculation. Note that <R min> = 6.85 nm for the unstretched specimen. Moreover, the distributions of R min and ν (Ξ(R min) and Ω(ν)) used for the calculation are plotted in Figure 7a and b, respectively. Such mathematical functions for the size distribution are enough to explain the experimentally observed SAXS profile in the stretched state. However, it should be noted that both distributions were required; otherwise, the model SAXS curve did not fit the experimental results well in either the q// or the q⊥ direction. Figure 6a and b also include the SAXS profiles measured after complete removal of the stretching force. At first glance, the peak positions of the form factor in Figure 6a and b seem to recover their original positions for the unstretched specimen. However, this does not imply recovery of the original spherical shape upon removal of the load, because the deformation of the glassy PS microdomains is permanent. Then, why did the form factor recover its original peak position? It may be ascribed to randomization of the prolate orientation upon removal of the load. To check this speculation, we conducted SAXS modeling of the prolate form factor by setting Ψ(φ) = 1 irrespective of φ, while keeping the size distributions of R min and ν (Ξ(R min) and Ω(ν)) unchanged. The results of the modeling are shown with the red curves in Figure 6a and b, clearly indicating good agreement with the experimentally obtained SAXS profiles. This in turn confirms the speculation of randomization of the prolate orientation upon removal of the load. Core-shell sphere and cylinder models are significantly important for amphiphilic self-assembly. For the core-shell sphere [16,17], the form factor is formulated as Eq. (17) if homogeneous densities ρ c and ρ s can be assumed in the core and in the shell, respectively. Here, V c and V s designate the volumes of the core and the shell, respectively; R c and R s denote the radii of the core and the shell, respectively; and ρ 0 is the electron density of the matrix.
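Before generalizing to a graded shell, a minimal sketch of the homogeneous core-shell sphere form factor just described may be useful: the amplitude is the contrast-weighted sum of two sphere amplitudes, one at the core radius and one at the outer radius. The electron-density values below are illustrative, not fitted parameters.

```python
# Hedged sketch: homogeneous core-shell sphere form factor (Eq. (17)-style).
import numpy as np

def phi(q, R):
    qR = q * R
    return 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def p_core_shell(q, Rc, Rs, rho_c, rho_s, rho_0):
    Vc = (4.0 / 3.0) * np.pi * Rc**3
    Vs = (4.0 / 3.0) * np.pi * Rs**3
    amp = (rho_c - rho_s) * Vc * phi(q, Rc) + (rho_s - rho_0) * Vs * phi(q, Rs)
    return amp**2   # arbitrary intensity units

q = np.linspace(0.05, 3.0, 500)   # nm^-1
I = p_core_shell(q, Rc=4.0, Rs=6.0, rho_c=400.0, rho_s=350.0, rho_0=334.0)
```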
On the other hand, when the shell density changes as a function of r (with a homogeneous core density), as defined by ρ s(r), the form factor is formulated as in [18,19]. As for core-shell cylinders, the form factor is written in terms of the first-order Bessel function J 1(x), where θ is defined as the angle between the cylinder axis and q. R C and R S are the core and shell radii, respectively; H C and H S are the core and shell lengths, respectively; and V C and V S are the core and shell volumes, respectively (V x = πR x²H x; x = C, S, or solv; C: core, S: shell, solv: solvent). ρ x is the electron density of the core, shell, or solvent. Matson et al. [20] have reported SAXS modeling of the form factor of the core-shell cylinder for self-assembling peptide amphiphiles (PAs), as shown in Figure 8A and B. The molecules self-assembled into the core-shell cylinder are illustrated in Figure 8C. Such cylinders can be detected with cryogenic TEM, as shown in Figure 9. The SAXS profile is shown in Figure 10 with the model curve, where the size distribution of the core radius is modeled using a lognormal distribution with a polydispersity of around 27-30% (see Table 1 for the structural parameters determined by the SAXS modeling), while the radial shell thickness is assumed to be monodisperse.
Thus, the experimentally observed form factor can be a fingerprint and the size distribution may be evaluated as far as the shape of the nanostructure can be uniquely assumed. Figure 12c shows one of the typical results of the SAXS profiles for poly(oxyethylene) (PEG), which forms lamellar crystallites. The exact sample used for the result of Figure 12c was a polymer blend of PEG with poly (D, L-lactide) (PDLLA), which is a racemic copolymer and therefore amorphous. The compositions of PEG/PDLLA were 80/20 (DL20) by weight. Figure 12c shows the result of the SAXS measurement at 64.0°C in the heating process [22]. At the temperature of 64.0°C slightly below the melting temperature of PEG (64.5°C), the typical form factor of lamellar particle was observed first time for the crystalline polymer. It was expected that the thinner lamellar which has a lower melting temperature melted away in the heating process. The thickest lamellae can only survive at the highest temperature and therefore, the thickness distribution became sharp. This may be the reason of the observation of the typical form factor of lamellar particle. As a matter of fact, a very sharp distribution was evaluated as shown in Figure 12d by the method described below. PA Hereafter, the data analysis method for the direct determination of the thickness distribution of lamellar particle is described. The model particle scattering intensity, I(q), with a distribution of thicknesses can be given as: with P(q) defined by Eq. (20). In Eq. (21), k is a numerical constant and n(L) is the number fraction of lamella with a thickness of L, providing the thickness distribution of lamellae. A protocol was employed to directly determine n(L) by fitting the calculated I(q) from Eq. (21) to the experimentally observed 1D-SAXS profile where the following parameters were being floated as the fitting parameters: the numerical constant k and n(L = 1 nm), n(L = 2 nm), n(L = 3 nm),…, n(L = 40 nm) which are the abundance number of particles having thickness L in a step of 1 nm. By this protocol, the best fit was successfully performed, which is shown by the solid curve on the 1D-SAXS profile in Figure 12c. Although most of the features seem to be well described by the particle scattering, the first-order peak is not. For some SAXS profiles, the full calculation including the lattice factor Z(q) and the particle scattering can describe the SAXS profile well. The mathematical formulation of Z(q) is [23]: Thus, the thickness distribution as shown in Figure 12d was also evaluated. Although such a sharp distribution around L = 33.5 nm accounts for the particle scattering dominant SAXS profile, the presence of thinner lamellae is clearly suggested. Lamellar case Tien et al. have reported results of comprehensive studies of the higher-order crystalline structure of PEG in blends with PDLLA [22,24,25]. For several blend compositions, they have discussed the effects of blending PDLLA on the structural formation of PEG. It is remarkable that they found more regular higher-order structure for PEG 20 wt % composition (DL20) as compared to the PEG 100% sample in the as-cast blend sample (cast from a dichloromethane solution). More interestingly, they reported that the 1D-SAXS profile markedly changed from lattice peak dominant type to particle scattering dominant type when heating the as-cast sample, as shown in Figure 13. The compositions of PEG/PDLLA were 100/0, 95/5 (DL5), 90/ 10 (DL10) and 80/20 (DL20) by weight. 
Figure 13 shows the results of the SAXS measurements in the heating process. Based on these results, we evaluated the lamellar thickness distribution in the heating process from the as-cast state up to 64°C and succeeded in showing that the distribution became sharper while the average thickness became larger, as shown in Figure 14. That study is the first to show quantitative evidence for the well-known concept of 'lamellar thickening' when a crystalline polymer is thermally annealed just below its melting temperature. Tien et al. have also conducted the same evaluation under higher pressures (5 and 50 MPa) [26]. Jia et al. [27] have recently evaluated the thickness distribution of the hard-segment domains of supramolecular elastomers (starblocks of soft polyisobutylene and hard oligo(β-alanine) segments). The molecule is a novel type of supramolecule, as schematically shown in Figure 15, where the green chains are soft polyisobutylene. Due to the formation of lamellar crystallites of the oligo(β-alanine) segments, the specimen has rubber-like elasticity; that is, supramolecular self-assembly turns the specimen into a TPE. Since such lamellar crystallites can hardly be observed by TEM, SAXS measurements were conducted. The result is shown in Figure 16 together with the evaluated thickness distribution, which is shown in the inset. An almost monodisperse distribution was evaluated, with a peak at L = 2.0 nm (inset of Figure 16), in good agreement with the oligo(β-alanine) contour length. This case clearly demonstrates the significance of the SAXS technique.
Spherical case. In this subsection, the size distribution of nanoparticles is described. Fujii et al. [28,29] have synthesized novel sterically stabilized polypyrrole-palladium (PPy-Pd) nanocomposite particles. Such characteristic particles containing heavy elements have recently been attracting intense general interest in many fields under the name of element-blocks [30]. Figure 17 shows a TEM image of these particles with a schematic of the structure. The ordinary 1D-SAXS profiles of 1, 2 and 3% aqueous dispersions of the nanocomposite particles are shown together in Figure 18a as a plot of log[I(q)] versus log q. This plot clearly shows that the shapes of the profiles are similar. When the curves are vertically shifted, all of the data collapse onto a single curve (Figure 18b), suggesting that the nanocomposite particles are dispersed in the aqueous medium without ordering into a lattice, at least up to a particle concentration of 3%. Thus, the 1D-SAXS profile can be attributed directly to the particle scattering (the form factor). Although the TEM results revealed that the nanocomposite particles are not spherical, a mathematical equation describing the particle scattering is not available for such an unusual particle shape. Therefore, a spherical shape is assumed for simplicity. The model particle scattering intensity, I(q), with a distribution of radii can be given as Eq. (24), I(q) = k Σ_R n(R) P(q). The form factor, P(q), for spherical particles is given by Eq. (25), with Φ(q) = 3/(qR)³ [sin(qR) − qR cos(qR)] (26).
[Figure 16 caption: SAXS profile for the supramolecular elastomer schematically shown in Figure 15. Black dots are the experimentally obtained SAXS profile, and the red curve is the calculated SAXS profile. The inset shows the evaluated thickness distribution [27].]
In Eq. (24), k is a numerical constant and n(R) is the number fraction of spheres with a radius of R, providing the size distribution of the spheres.
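A minimal sketch of the discrete-distribution protocol of Eq. (24) follows: one column of the design matrix per candidate radius in 1 nm steps, with the abundances n(R) obtained by non-negative least squares against a (here synthetic) profile. The volume weighting used to express the abundance in vol% is included at the end; the target profile is a made-up stand-in for measured data.

```python
# Hedged sketch: directly determining n(R) by fitting Eq. (24) as a
# non-negative linear problem, one basis profile per candidate radius.
import numpy as np
from scipy.optimize import nnls

def p_sphere(q, R):
    qR = q * R
    phi = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3   # Eq. (26)
    V = (4.0 / 3.0) * np.pi * R**3
    return (V * phi)**2

q = np.linspace(0.05, 3.0, 300)                # nm^-1
radii = np.arange(1.0, 41.0)                   # candidate R in 1 nm steps
A = np.column_stack([p_sphere(q, R) for R in radii])

# Synthetic "observed" profile generated from a known distribution.
I_obs = A @ np.exp(-0.5 * ((radii - 11.5) / 1.5)**2)
n_R, residual = nnls(A, I_obs)                 # abundances n(R) >= 0

# Convert number fractions to vol% (sphere volume scales as R^3).
vol = n_R * radii**3
vol_percent = 100.0 * vol / vol.sum()
```

In practice, the constant k and a background term would also be floated, and the recovered n(R) inspected for spurious oscillations.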
Attempts to fit the theoretical function given by Eq. (24) to the measured 1D-SAXS profile assuming a Gauss or Schulz-Zimm-type distribution for n(R) were unsuccessful. We then employed a protocol in which n(R) was directly determined by fitting the I(q) calculated from Eq. (24) to the experimentally observed 1D-SAXS profile, by the same method as described above for the lamellar case. The best fit is shown by the dotted black curve on the 1D-SAXS profile in Figure 18a (3% aqueous dispersion); this profile was used because it is the most intense and therefore the most reliable. The thus-obtained particle size distribution is shown in Figure 19, where the abundance is given in units of vol%, calculated from the number fraction n(R) by weighting each radius by the particle volume, vol%(R) = 100 n(R)R³ / Σ_{R′} n(R′)R′³. A bimodal size distribution was clearly obtained, with two peaks at approximately R = 2.5 and 11.5 nm. It is recognized that the smaller particles (assuming n(R) with a single peak at approximately R = 2.5 nm) could explain the shape of the SAXS profile in the higher q range (see the broken curve in Figure 18a), whereas the larger ones (assuming n(R) with a single peak at approximately R = 11.5 nm) characterized the SAXS profile in the lower q range (see the dotted-and-broken curve). This result does not indicate that the distribution of the PPy-Pd nanocomposite particles themselves is bimodal; rather, the additional peak reflects the abundance of tiny Pd nanoparticles existing inside the nanocomposite particles. These speculations are confirmed by TEM observations (Figure 17), which indicate an average radius of approximately 16 nm with a unimodal distribution, and by close examination of the high-resolution TEM image (R = 2.7 nm; Figure 17). Thus, it was possible to evaluate not only the size of 'a bunch of grapes' but also the size of all 'grains of grapes in the bunch.' Kuroiwa et al. [31,32] have synthesized novel amphiphilic N-isopropylacrylamide (NIPPAm) oligomers with dodecyl groups and carboxyl groups, as shown in Figure 20a, by the RAFT polymerization of NIPPAm with S-1-dodecyl-S′-(α,α′-dimethyl-α″-acetic acid) trithiocarbonate (DTC). Interestingly, it was found that the DTC-NIPPAm oligomers form network aggregates upon the addition of Cu2+ ions in aqueous solution, as revealed by TEM (Figure 20b, for the specimen dried from an aqueous solution of DTC-NIPPAm35 with Cu2+ ions). The network diameter appears to be roughly 30 nm or above. Possibly, the constitutive unit of the network is a micelle, as schematically illustrated in Figure 20c. Since TEM observation can only be conducted on dried specimens, the resultant TEM image might be quite different from the real structure in the aqueous medium. In order to reveal the real structure in the aqueous medium, in situ SAXS measurements were performed at room temperature. Then, spherical-model fitting was applied to the resultant SAXS profile. Figure 21 shows the SAXS profile with the model form factor. The experimentally obtained SAXS curve (black) is available for q > 0.06 nm−1, and a characteristic dent and hump are observed around q = 0.15 and 0.50 nm−1, respectively. Assuming the spherical model, the calculated SAXS profile (red curve) fits the experimental one perfectly, as displayed in Figure 21. The thus-evaluated discrete distribution of the radius is shown in Figure 22. Here, the main population is found around 2-7 nm, corresponding to the cores of the micelles.
Because the core contains sulfur atoms, its contrast is considered to be the highest, and therefore the core can be the most intense scatterer. This is the reason why the majority of the distribution is observed at 2-7 nm. This in turn implies that the network aggregates comprise micelles, which can never be detected by TEM. Close examination of the resultant distribution revealed minor abundances around 17 and 21 nm. This agrees well with the smallest radius of the network aggregates in the TEM observation, as mentioned above. The same distribution is shown in the inset with a logarithmic axis for the abundance. It is then clear that not only the minor abundances around 17 and 21 nm but many other minor ones are discernible in the wide range from 17 to 58 nm. As a matter of fact, such big spheres can occasionally be seen in the TEM image (Figure 20b). Thus, once again, for this kind of complicated aggregation, the method of evaluating the discrete size distribution from the SAXS result proves to be quite effective [33].
[Figure 19 caption: Evaluated particle size distribution based on the result shown in Figure 18 [28].]
Concluding remarks. In this chapter, we focused on the form factor of a variety of nanostructures (spheres, prolates, core-shell spheres, core-shell cylinders and lamellae). Starting from a monodisperse nanostructure size, through unimodal distributions with a narrow standard deviation or widely spreading distributions, and finally to fully discrete distributions, the size distribution can be evaluated by computational parameter fitting to the experimentally obtained SAXS profile. In particular, this methodology is useful for systems forming complicated aggregations. Not only the size distribution of 'a bunch of grapes' but also the size distribution of all 'grains of grapes in the bunch' can be evaluated according to this methodology. This is very much in contrast to the DLS technique, by which only 'a bunch of grapes' is analyzed but 'grains of grapes in the bunch' cannot be, because the DLS technique in principle evaluates diffusion constants of particles and all of the grains in the same bunch diffuse as a whole. Thus, the methodology is important for highlighting versatility and diversity in real materials, especially in soft matter, both in the liquid and in the solid states. At present, however, the shape of the nanostructure is limited to spherical or lamellar. Extending the methodology to more complicated structures, such as cylinders, prolates, oblates, or core-shell types, involves tremendous difficulties. For cylinders, prolates, or oblates, differences in the degree of orientation of the particles spoil the methodology, such that the size distributions in the two principal directions (height and radius for the cylinder; long-axis radius and short-axis radius for the prolate and oblate) cannot be uniquely evaluated. As for core-shell-type particles, the inner and outer radii couple in the form factor, so that their size distributions cannot be uniquely evaluated either. As a matter of fact, for core-shell spheres the size distribution is introduced while keeping the ratio of the inner and outer radii constant [16]. Similarly, for the core-shell cylinders [20], the size distribution of the core radius is incorporated, while the radial shell thickness is assumed to be monodisperse.
For more detailed structure analyses, more experimental variations are required to gather information from different aspects, like the example shown in Figure 6a and b (parallel and perpendicular to SD). These difficulties should be overcome.
7,981.4
2017-01-25T00:00:00.000
[ "Materials Science" ]
Protein phosphatase 1 regulatory inhibitor subunit 14C promotes triple-negative breast cancer progression via sustaining inactive glycogen synthase kinase 3 beta
Abstract. Triple-negative breast cancer (TNBC) is fast-growing and highly metastatic, with the poorest prognosis among the breast cancer subtypes. Inactivation of glycogen synthase kinase 3 beta (GSK3β) plays a vital role in the aggressiveness of TNBC; however, the underlying mechanism for sustained GSK3β inhibition remains largely unknown. Here, we find that protein phosphatase 1 regulatory inhibitor subunit 14C (PPP1R14C) is upregulated in TNBC and is relevant to poor prognosis in patients. Overexpression of PPP1R14C facilitates cell proliferation and the aggressive phenotype of TNBC cells, whereas depletion of PPP1R14C elicits the opposite effects. Moreover, PPP1R14C is phosphorylated and activated by protein kinase C iota (PRKCI) at Thr73. p-PPP1R14C then represses Ser/Thr protein phosphatase type 1 (PP1) to retain GSK3β phosphorylation at high levels. Furthermore, p-PPP1R14C recruits the E3 ligase TRIM25 toward the ubiquitylation and degradation of non-phosphorylated GSK3β. Importantly, blockade of PPP1R14C phosphorylation inhibits xenograft tumorigenesis and lung metastasis of TNBC cells. These findings provide a novel mechanism for sustained GSK3β inactivation in TNBC and suggest that PPP1R14C might be a potential therapeutic target.
KEYWORDS: GSK3β, PPP1R14C, PRKCI, TNBC
BACKGROUND. Triple-negative breast cancer (TNBC) is an aggressive breast cancer subtype that lacks the expression of the estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor-2 (HER-2). 1 Given the limited benefit from endocrine therapy or anti-HER-2 therapy, traditional chemotherapy is identified as the optimal strategy for TNBC. 2,3 However, the response to available chemotherapies is often short-lived, as the tumour progresses and acquires resistance. 4 Hence, it is imperative to develop innovative and effective treatments for TNBC, especially treatments against potential targets.
Glycogen synthase kinase 3 beta (GSK3β), a highly evolutionarily conserved Ser/Thr kinase, is engaged in cancer development and progression through its selective phosphorylation of substrate proteins. 5-7 When active, GSK3β acts as a tumour suppressor by phosphorylating and destabilizing oncogenic transcription factors (TFs). 8 On the other hand, in malignancy, GSK3β is phosphorylated at residue Ser9 into an inactive state, resulting in the activation of downstream oncogenic signalling. 9-13 It has been reported that elevation of p-GSK3β-Ser9 leads to accumulation of the snail family transcriptional repressor 2 (SLUG) and snail family transcriptional repressor 1 (SNAIL) proteins and expedites epithelial-mesenchymal transition (EMT) and metastasis in non-small cell lung carcinoma cells. 14 Increased p-GSK3β-Ser9 promotes cell proliferation and tumour growth in TNBC. Ser/Thr protein phosphatase type 1 (PP1), one of the phosphatases, is identified as a major regulator of the homeostasis of the GSK3β status. 16 Notably, GSK3β is reported to stay inactive in TNBC and other malignancies, indicating that phosphatase function might be deregulated in cancers 17-19 ; however, the mechanisms remain unclear.
PPP1R14C, first found as a morphine-regulated brain gene, encodes protein phosphatase 1 regulatory inhibitor subunit 14C and is identified as a potent inhibitor of PP1. 20,21 Horvath and colleagues reported that PPP1R14C inhibited the release of neurotransmitters and promoted neuronal exocytosis by retaining synaptosome-associated protein 25 (SNAP25) in a phosphorylated state. 22 Moreover, overexpression of PPP1R14C increased the phosphorylation of RB transcriptional co-repressor 1 (RB1) to protect leukemic cells from chemotherapy. 23 These findings suggest that PPP1R14C might be involved in human cancer progression by regulating the phosphorylation state of particular proteins. However, the role of PPP1R14C in TNBC remains unknown.
In the present study, we find that PPP1R14C is robustly increased in TNBC and predicts poor prognosis in patients. PPP1R14C is phosphorylated and activated by protein kinase C iota (PRKCI) at Thr73. p-PPP1R14C then binds PP1 and inhibits its phosphatase activity to increase inactive p-GSK3β-Ser9. p-PPP1R14C also recruits the E3 ligase TRIM25 to promote the ubiquitylation and degradation of non-phosphorylated GSK3β. Thus, PPP1R14C promotes the aggressiveness, tumour growth, and metastasis of TNBC cells in vitro and in vivo by sustaining inactive GSK3β. Importantly, inhibition of PPP1R14C phosphorylation showed anti-cancer activity. These findings uncover an oncogenic role of PPP1R14C in TNBC and suggest that PPP1R14C might be a potential marker or target.
PPP1R14C is specifically upregulated in TNBC. Identifying genes specifically deregulated in TNBC might provide potentially targetable molecular vulnerabilities for this intractable breast cancer subtype. Here, we first analysed The Cancer Genome Atlas (TCGA) breast cancer dataset. We identified 48 genes that were significantly increased in TNBC, by at least twofold compared to normal and non-TNBC tissues (Figure 1A,B). Among the 48 genes, some, such as FOXC1, 24 BCL11A, 25 and PSAT1, 26 have already been reported to play important roles in TNBC (Table S1), suggesting that the analysis was reliable. Notably, PPP1R14C was previously identified as an inhibitor of PP1 and found to decrease the chemosensitivity of leukemic cells. 20,23 This prompted us to further explore the prospective role of PPP1R14C in TNBC progression. The specific upregulation of PPP1R14C could also be observed in the Gene Expression Omnibus (GEO) dataset, as in the TCGA dataset (Figure 1C). Meanwhile, PPP1R14C was most highly expressed in basal-like breast cancer (BLBC), followed by normal tissues and the other subtypes (Luminal A, Luminal B, and HER-2) (Figure 1D). Importantly, our real-time PCR and western blotting analyses showed that PPP1R14C was robustly upregulated in TNBC tumour tissues and cell lines (Figure 1E,F). These results indicated a specific upregulation of PPP1R14C in TNBC.
High PPP1R14C expression indicates a poor prognosis in patients with TNBC. The expression and clinical significance of PPP1R14C were then assessed by immunohistochemistry (IHC) and survival analysis in 150 breast cancer patient specimens, including 50 non-TNBCs and 100 TNBCs (Table S2). Consistently, the IHC analysis indicated strong staining of PPP1R14C in TNBC, whereas it was weakly expressed in normal breast tissues and non-TNBC tissues (Figure 2A). PPP1R14C staining was evaluated by a staining index (SI) according to intensity and density, and an SI ≥ 6 was defined as PPP1R14C-high.
High PPP1R14C expression was positively correlated with advanced T stage (P = 0.013) and relapse status (P = 0.002) in patients with TNBC (Figure 2B and Table S3). Importantly, TNBC patients with high PPP1R14C expression experienced poorer overall survival (OS) and relapse-free survival (RFS; Kaplan-Meier survival curves and log-rank test; P = 0.004, hazard ratio (HR) = 5.046, 95% confidence interval (CI) = 2.097-12.14; P < 0.001, HR = 6.164, 95% CI = 2.72-13.97, respectively; Figure 2C, Table S4). In addition, using the online database Kaplan-Meier plotter (http://kmplot.com/analysis), we found that patients with high PPP1R14C expression suffered shorter RFS among all patients with breast cancer and in the basal-like subgroup (Figure 2D). High PPP1R14C expression and advanced T stage were identified as independent prognostic factors for five-year OS and RFS in TNBC by multivariate regression analysis (Figure 2E and Table S4). These results suggested that upregulation of PPP1R14C might contribute to the malignant progression of TNBC.

PPP1R14C promotes TNBC progression in vitro
The roles of PPP1R14C in TNBC progression were then investigated by gain- and loss-of-function approaches. PPP1R14C was exogenously transduced or endogenously silenced in two human breast cancer cell lines (MDA-MB-231 and SUM159PT) (Figure 3A). Significantly, overexpression of PPP1R14C promoted cell proliferation, colony formation, anchorage-independent growth, invasion, migration, and cell-cycle transition in TNBC cells (Figure 3B-G and Figure S1A-D). In contrast, depletion of PPP1R14C had the opposite effects (Figure 3B-G and Figure S1A-D). Moreover, similar tumour-promoting effects were also observed in the mouse-derived TNBC cell line 4T1, suggesting that the role of PPP1R14C is conserved (Figure S1E-J). These results showed that PPP1R14C promoted TNBC progression in vitro.

PPP1R14C facilitates TNBC tumour growth and metastasis
The effects of PPP1R14C on TNBC progression were further assessed in vivo. A xenograft mouse model was generated through orthotopic injection of stable SUM159PT cell lines, and tumour burdens were measured regularly. Compared with the control groups, tumour growth was remarkably accelerated in the PPP1R14C-overexpressing group and suppressed in the PPP1R14C-silenced group (Figure 4A and Figure S2A). Tumours overexpressing PPP1R14C had a high level of Ki-67, whereas the PPP1R14C-silenced tumours showed decreased Ki-67 levels (Figure 4B and Figure S2B). Moreover, the impact of PPP1R14C on TNBC metastasis was determined in a lung colonization model. Luciferase-transduced MDA-MB-231 cell lines were injected into the tail veins of mice, and metastatic burdens were monitored by bioluminescence imaging (BLI) weekly. Upregulation of PPP1R14C increased the lung metastatic burden and worsened the survival of mice, while downregulation of PPP1R14C reduced the number of lung metastatic lesions and prolonged mouse survival (Figure 4C,D).
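For reference, caliper-based tumour volumes such as those tracked weekly above are commonly computed with an ellipsoid approximation; the paper does not state its formula, so the sketch below, including the toy readings, is an illustrative assumption only.

```python
# Hedged sketch of tumour-volume bookkeeping for xenograft growth curves.
# Assumption: the widely used ellipsoid approximation V = length * width^2 / 2;
# the weekly caliper readings below are invented for illustration.
import numpy as np

def tumour_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation commonly used for orthotopic tumours."""
    return length_mm * width_mm ** 2 / 2.0

# Toy weekly caliper readings (mm) for one mouse per group: (length, width)
weekly = {
    "vector":   [(4, 3), (6, 5), (9, 7), (12, 9)],
    "PPP1R14C": [(5, 4), (8, 7), (12, 10), (16, 13)],
}
for group, readings in weekly.items():
    volumes = [tumour_volume_mm3(l, w) for l, w in readings]
    print(group, np.round(volumes, 1))  # weekly volumes in mm^3
```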
Furthermore, the role of PPP1R14C in metastasis was explored in an orthotopic mouse model of spontaneous breast cancer metastasis using 4T1 cells. Luciferase-expressing 4T1 cells (2 × 10⁵) with altered PPP1R14C expression were orthotopically injected into the mammary fat pad. Tumour volumes were determined every week after surgery, and the spontaneous metastasis of 4T1 cells was evaluated using the Xenogen IVIS Spectrum Imaging System. Strikingly, BLI and the visible metastatic lesions revealed that the metastasis of 4T1 cells was promoted in the PPP1R14C-overexpression group but suppressed in the PPP1R14C-silenced group (Figure 4E,F). Moreover, upregulation of PPP1R14C markedly shortened the survival time of mice (Figure S2C). These findings indicated that PPP1R14C promoted the tumorigenesis and metastasis of TNBC.

Previous studies revealed that the RVXF motif and T73 phosphorylation are required for PPP1R14C to bind and to inhibit PP1, respectively 34-36 (Figure 5D). IP assays using antibodies against PPP1R14C demonstrated that p-PPP1R14C/PP1/p-GSK3β-Ser9 formed a complex in TNBC cells (Figure 5E). Next, we performed IP assays with two PPP1R14C mutants: one that could no longer bind PP1 (deletion of the RVXF motif) and one that could no longer inhibit PP1 (T73A). The results showed that neither of the PPP1R14C mutants could restore the p-GSK3β-Ser9 level in TNBC cells (Figure 5F). Moreover, cell lysates obtained from HeLa cells transduced with HA-GSK3β, Flag-PPP1R14C, and Myc-PP1 were analysed by IP assays under treatment with SHIP2-IN-1, a GSK3β inhibitor that deactivates GSK3β via phosphorylation at Ser9. 37 The assays showed that PPP1R14C and PP1 specifically interacted with p-GSK3β (Figure 5G). These results suggested that PPP1R14C inactivated GSK3β by sustaining its phosphorylation. We further investigated whether inhibition of GSK3β was essential for the tumour-promoting functions of PPP1R14C in TNBC. As expected, ectopic expression of GSK3β substantially impaired the colony formation, invasion, and migration capacities of TNBC cells with upregulated PPP1R14C (Figure 5H and Figure S3C). The above data indicated that GSK3β was indeed a key downstream effector of PPP1R14C.

PRKCI is an upstream regulator of PPP1R14C
Previous studies showed that phosphorylation of PPP1R14C is mainly dependent on protein kinase C (PKC). 20 To identify which PKC family member might regulate PPP1R14C in TNBC, we analysed PKC gene expression in the TCGA breast cancer dataset. Notably, we found that, among the PKC members, PRKCI was abundantly expressed and significantly increased in TNBC compared with normal tissues and non-TNBC (Figure S4A). Strikingly, our western blotting analysis showed that PRKCI was specifically upregulated in TNBC cell lines (Figure 6A), suggesting that PRKCI might be the relevant PKC for PPP1R14C activation. Indeed, PP1 activity was reduced in PRKCI-overexpressing cells, and this reduction could be rescued by silencing PPP1R14C; in contrast, silencing PRKCI significantly increased PP1 activity in TNBC cells (Figure 6B). Furthermore, depletion of PRKCI remarkably reduced the Thr73 phosphorylation of PPP1R14C and the Ser9 phosphorylation of GSK3β (Figure 6C). These findings indicated that PRKCI regulated the phosphorylation and activity of PPP1R14C in TNBC. Nevertheless, we speculate that other kinases might also contribute to the phosphorylation of PPP1R14C, which remains to be investigated. Notably, similar to PPP1R14C depletion, silencing PRKCI suppressed cell proliferation, colony formation, anchorage-independent growth, invasion, migration, and cell-cycle transition in human and mouse TNBC cells (Figure 6D-H and Figure S4B-E), suggesting that PRKCI, as an upstream activator of PPP1R14C, played a tumour-promoting role in TNBC progression.
p-PPP1R14C facilitates the degradation of non-phosphorylated GSK3β (S9A)
Interestingly, we found that overexpression of PPP1R14C, which leads to a high level of p-PPP1R14C (T73), increased the level of p-GSK3β (Ser9) but reduced the level of total GSK3β. Silencing PPP1R14C or transducing the PPP1R14C (T73A) mutant had the opposite effect (Figure 7A and Figure S5A). Similar results were also observed in mouse 4T1 cells (Figure S5B). Notably, PPP1R14C had no significant effect on GSK3β mRNA expression in TNBC cells (Figure S5C). It has been reported that the degradation of GSK3β is mostly dependent on the proteasome pathway in lung epithelial cells. 38 To determine whether p-PPP1R14C regulated GSK3β stability, we evaluated the protein's half-life by adding cycloheximide (CHX). The half-life of GSK3β was shortened in PPP1R14C-overexpressing SUM159PT and MDA-MB-231 cells, whereas the opposite effect was observed in the PPP1R14C (T73A) mutant cells (Figure 7B and Figure S5D).

To identify which E3 ubiquitin ligases are recruited by PPP1R14C to degrade non-phosphorylated GSK3β, mass spectrometry (MS) was applied in SUM159PT cells. Six E3 ubiquitin ligases, namely TRIM25 (tripartite motif-containing 25), TRIM21 (tripartite motif-containing 21), ZNF598 (zinc finger protein 598), RAD18 (RAD18 E3 ubiquitin protein ligase), SH3RF3 (SH3 domain containing ring finger 3), and UBR5 (ubiquitin protein ligase E3 component N-recognin 5), were found in the MS analysis (Figure S6A). Notably, IP assays validated that PPP1R14C interacted with ZNF598 and TRIM25 (Figure 7D). However, silencing of TRIM25, but not of the other five E3 ligases, increased GSK3β levels (Figure S6B,C). These observations indicated that TRIM25 might be responsible for the PPP1R14C-mediated degradation of GSK3β. Indeed, IP assays using an anti-Flag antibody revealed that PPP1R14C formed a complex with TRIM25 depending on the phosphorylation status of PPP1R14C-T73 (Figure 7E). In addition, an IP assay using an anti-HA antibody was performed in HeLa cells transfected with Flag-PPP1R14C, Myc-TRIM25, HA-GSK3β, or HA-GSK3β (S9A). The results confirmed that PPP1R14C mediated the interaction between TRIM25 and total/non-phosphorylated GSK3β (Figure 7F). To identify the functional domains of the GSK3β-TRIM25 interaction, a range of truncation mutants of GSK3β and TRIM25 were generated and applied in IP assays. The results revealed that TRIM25 bound only those GSK3β truncations containing the N1 domain, and the S9A version of the N1 fragment was also detected in the TRIM25-Myc immunoprecipitate, illustrating that the N-terminal domain was essential for the GSK3β-TRIM25 interaction (Figure S6D). Furthermore, four TRIM25 truncation mutants, containing the RING, B-box, coiled-coil (CC), and PRY/SPRY domains, were used in IP assays. The results demonstrated that the B-box domain of TRIM25 was necessary for its interaction with GSK3β (Figure S6D). As a ubiquitin E3 ligase, TRIM25 was likely to mediate the ubiquitination and degradation of GSK3β. To verify this hypothesis, co-IP with an anti-HA antibody was conducted in HeLa cells transfected with Flag-Ub, Myc-TRIM25, HA-GSK3β, or HA-GSK3β (S9A). As shown in Figure 7G, TRIM25 promoted the ubiquitination of total GSK3β and of non-phosphorylated GSK3β-S9A. To confirm the role of TRIM25 as an E3 ligase of GSK3β, a TRIM25 mutant with inactivated ubiquitination activity, TRIM25-2EA (Glu9 and Glu10 mutated to Ala), was constructed. 28,39
IP assays showed that exogenous expression of wild-type (WT) TRIM25, but not of TRIM25-2EA, enhanced the ubiquitination of total GSK3β and non-phosphorylated GSK3β-S9A (Figure 7H). In addition, we mutated the ubiquitination site of GSK3β (K183) and performed an exogenous co-IP assay. 38,40 The data showed that the GSK3β (K183R) mutation interrupted the linkage of ubiquitin to GSK3β and prevented the interaction between TRIM25 and GSK3β (Figure 7I). Furthermore, PPP1R14C overexpression substantially increased the interaction between total/non-phosphorylated GSK3β and TRIM25, while the PPP1R14C (T73A) mutant inhibited this interaction (Figure 7J). Therefore, these results showed that p-PPP1R14C enhanced the degradation of non-phosphorylated GSK3β via TRIM25-dependent ubiquitination.

Blockade of PPP1R14C phosphorylation inhibits TNBC progression in vitro and in vivo
We established expression of wild-type PPP1R14C (WT) and the PPP1R14C (T73A) mutant in human and mouse TNBC cells. As expected, transduction with the PPP1R14C mutant impaired the cell growth, invasion, G1-S transition, and anchorage-independent growth of TNBC cells, suggesting that p-PPP1R14C was essential for TNBC aggressiveness (Figure 8A-E and Figure S7A-E). Next, SUM159PT cells stably overexpressing PPP1R14C-WT or the PPP1R14C mutant, together with vector control cells, were orthotopically injected into the mammary fat pads of mice. Tumours initiated from cells expressing the PPP1R14C mutant were much smaller than those from the vector and WT cells (Figure 8F and Figure S7F). IHC of Ki-67 showed that blockade of PPP1R14C phosphorylation reduced the Ki-67 staining intensity (Figure 8G).

Clinical relevance of the PRKCI/p-PPP1R14C/p-GSK3β axis in TNBC
Finally, we assessed the PRKCI/p-PPP1R14C/p-GSK3β axis in clinical samples. IHC was performed to determine PRKCI and p-GSK3β expression in specimens from the same cohort of 100 TNBC patients. PPP1R14C expression correlated strongly with PRKCI and p-GSK3β (Ser9) levels, suggesting that this axis is clinically relevant (Figure 9A,B).

DISCUSSION
The high risk of recurrence and the limited therapeutic options lead to worse clinical outcomes in TNBC patients than in the other subtypes of breast cancer. Therefore, effective and specific targets are urgently needed for TNBC because of its aggressive behaviour and high metastatic potential. 41,42 Our study illustrated that PPP1R14C is overexpressed in TNBC samples and cell lines, and that patients with high PPP1R14C expression were more likely to suffer shorter 5-year OS and RFS. Furthermore, we revealed that p-PPP1R14C (T73), which is phosphorylated by PRKCI, inactivates PP1 to sustain GSK3β phosphorylation at Ser9 by forming a complex. In addition, p-PPP1R14C recruits TRIM25 to facilitate the ubiquitylation and degradation of non-phosphorylated GSK3β. Moreover, blockade of PPP1R14C phosphorylation inhibited the xenograft tumorigenesis and lung metastasis of TNBC cells. These data uncover an oncogenic role of PPP1R14C in TNBC and suggest that it might be a promising biomarker or target.

Lacking ER and HER-2, TNBC does not benefit from endocrine or HER-2-targeting therapies. Notably, recent studies indicated that TNBC shows higher immunogenicity, lymphocyte infiltration, and programmed cell death ligand 1 (PD-L1) expression, making it sensitive to immune checkpoint blockade therapy. 43,44
Importantly, the FDA approved the anti-PD-1 monoclonal antibody pembrolizumab for high-risk, early-stage TNBC patients, suggesting that immunotherapy might be a promising strategy against TNBC. 45,46 Notably, GSK3β was found to promote the phosphorylation and rapid degradation of the PD-L1 protein. 47 In this study, we showed that upregulation of PPP1R14C sustains the inactivation of GSK3β in TNBC. Thus, it might be postulated that PPP1R14C could promote the stability of PD-L1 by inhibiting GSK3β. This hypothesis remains to be investigated; if confirmed, it would provide a novel regulatory mechanism for PD-L1 expression and suggest PPP1R14C as a potential target for immunotherapy against TNBC.

GSK3β is implicated in many cellular processes, including the regulation of TFs, cell-cycle progression, cell survival, apoptosis, and migration. 48 Non-phosphorylated GSK3β is highly active under normal conditions and exerts an inhibitory effect on its downstream pathways. 49 Persistently inactive GSK3β has been found in various cancers, including TNBC, suggesting that p-GSK3β at Ser9 has an oncogenic role in neoplastic disease. 17,19 However, Cao et al. found that the active form of GSK3β might function as an oncogene, as it promoted cell proliferation by inducing S-phase entry in ovarian cancer cells. 50 These results contradict other reports, indicating that the function of GSK3β might differ among cell types and cellular contexts. 48 Our present data revealed that PPP1R14C, together with PRKCI and PP1, maintains the phosphorylation of GSK3β at Ser9, which accelerates tumour proliferation and metastasis in TNBC. Furthermore, PPP1R14C promotes the ubiquitin-dependent degradation of non-phosphorylated GSK3β, which synergistically amplifies and prolongs p-GSK3β signalling in TNBC. These results present a novel mechanism to explain how GSK3β remains inactive in TNBC and further support the oncogenic function of p-GSK3β.

Studies have indicated that PPP1R14C regulates protein phosphorylation through its inhibitory effect on PP1 and thus modulates the biological activities of numerous key proteins. 20,27,34 Dedinszki et al. reported that PPP1R14C overexpression upregulated the level of phosphorylated RB1 and decreased the sensitivity of leukaemia cells to chemotherapy. 23 However, studies on PPP1R14C remain limited. The findings reported here revealed that PPP1R14C is upregulated particularly in TNBC and that its overexpression strongly correlates with a worse prognosis in patients with TNBC. Furthermore, we found that upregulating PPP1R14C, leading to a high level of p-PPP1R14C (T73), enhanced the aggressive phenotypes of TNBC cells by sustaining the phosphorylation state of GSK3β and facilitating the degradation of non-phosphorylated GSK3β. The present data demonstrate that PPP1R14C might serve as a novel oncogenic biomarker and therapeutic target in TNBC. Although some studies reported that PPP1R14C is down-regulated in some breast cancer cells, we found that PPP1R14C was detectable in all breast cancer subtypes and was specifically overexpressed in TNBC, which was supported by both the GEO and TCGA datasets. 51 This evidence suggests that PPP1R14C plays a specific role in TNBC progression.

PPP1R14C, as a PP1 inhibitory protein, shares an N-terminal PP1-binding motif (residues 20-24; RVFFQ) with many PP1 regulatory proteins. 52 The interactions between PP1 and its various regulatory proteins occur in distinct subcellular compartments with specific substrates. 32,53
For example, the myosin-targeting subunit M110 directs PP1 to enhance its activity toward myosin P-light chains. 54 In TNBC, excessive PPP1R14C could efficiently compete for PP1 binding and disrupt PP1's interaction with its other regulatory partners. In this study, we found that the interaction between PPP1R14C and PP1 increased the specificity and affinity of PP1 for p-GSK3β (Ser9) while strongly inhibiting its dephosphorylation. In addition, phosphorylation by PRKCI, which among the PKC family members was specifically upregulated in TNBC, increased the inhibitory potency of PPP1R14C on PP1. These results prompt us to conclude that the PPP1R14C-induced persistence of p-GSK3β occurs specifically in TNBC.

In summary, we identified an oncogenic role of PPP1R14C, following its phosphorylation by PRKCI, in TNBC. p-PPP1R14C inactivates PP1 to maintain the phosphorylation of GSK3β at Ser9 and induces the ubiquitylation and degradation of non-phosphorylated GSK3β, contributing to the aggressive phenotype of TNBC. These findings highlight the regulatory mechanisms of GSK3β activity in TNBC and provide new clues for therapeutic strategies. Further investigation is warranted to determine whether blockade of PPP1R14C phosphorylation might be an effective approach to TNBC treatment.

Clinicopathological characteristics of patients
We retrospectively reviewed the electronic medical records of Sun Yat-sen University Cancer Center from 2004 to 2012. One hundred and fifty patients histopathologically diagnosed with breast cancer and with available surveillance data were included in our study. One hundred and fifty archived breast cancer samples and 10 tumour-adjacent tissues were further analysed. Detailed clinicopathological and survival data were collected (Table S2). The study was approved by the ethics committee of Sun Yat-sen University Cancer Center.

Quantitative real-time PCR
RNA was extracted using a TRIzol kit (Thermo Fisher Scientific). Two micrograms of RNA were treated with RNase-free DNase and then reverse transcribed into cDNA. One microliter of cDNA was used for qPCR with SYBR Premix Ex Taq (Takara) on a CFX96 Real-Time PCR Detection System (Bio-Rad). The Ct value of each target gene was compared with that of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) to define the relative expression level.

Western blotting
Cellular proteins were extracted with radioimmunoprecipitation assay lysis buffer (Thermo Fisher Scientific). The concentration of the prepared protein was determined with a Bio-Rad DC protein assay kit II. Equal amounts of protein lysate were loaded onto 8%-15% SDS-PAGE gels, separated by electrophoresis, and transferred onto polyvinylidene fluoride (PVDF) membranes (Thermo Fisher Scientific). The membranes were blocked with 5% non-fat milk, incubated with primary antibodies overnight at 4°C, and subsequently incubated with labelled secondary antibodies. The immunoreactive proteins were visualized with an ECL chemiluminescent substrate reagent kit (Thermo Fisher Scientific). Detailed information on the antibodies used is listed in Table S5.

Immunohistochemistry
IHC was conducted to determine PPP1R14C expression in 10 matched tumour-adjacent tissues and 150 breast cancer specimens using an anti-PPP1R14C antibody (#PA5-50996, Invitrogen). To calculate the SI, the intensity and the proportion of positively stained tumour cells were scored.
The intensity was catalogued into four levels: no staining was scored as 0, weak staining as 1, moderate staining as 2, and strong staining as 3. The positively stained proportion was classified into five scores: negative was scored as 0, lower than 10% as 1, 10% to 35% as 2, 35% to 75% as 3, and 75% to 100% as 4. The staining rank was determined by the SI, obtained by multiplying the intensity and proportion scores, which yields nine possible values: 0, 1, 2, 3, 4, 6, 8, 9, and 12. The cutoff value for PPP1R14C was SI = 6; an SI equal to or higher than 6 was defined as high expression, and an SI lower than 6 as low expression. The threshold was identified from log-rank tests of 5-year OS and RFS.

Generation of stably transfected cell lines
Transient plasmid transfection was accomplished using Lipofectamine 3000 (Thermo Fisher Scientific). To construct the stable PPP1R14C-knockdown cells, the human cancer cell lines MDA-MB-231 and SUM159PT and the mouse-derived TNBC cell line 4T1 were infected with retroviral constructs containing two different short hairpin RNAs targeting human or mouse PPP1R14C, respectively (GeneChem, Shanghai, China). To establish the PPP1R14C-overexpressing cell lines, the full-length PPP1R14C cDNA sequence was cloned to generate a Flag-PPP1R14C construct. The construct was packaged into lentiviruses, which were then used to infect MDA-MB-231, SUM159PT, and 4T1 cells to integrate the PPP1R14C gene into the host cell genome. Positive clones were selected with puromycin. The cloning primer sequences are listed in Table S6.

Xenograft tumour models
All animal experiments were approved by Sun Yat-sen University's Institutional Animal Care and Use Committee. In brief, five- to six-week-old female BALB/c-nu mice (18-20 g in weight) were provided by the Guangdong Medical Laboratory Animal Center. Mice were housed in the SPF-level barrier system of the Laboratory Animal Center of Sun Yat-sen University. To generate the orthotopic xenograft and spontaneous metastasis models, a small incision was made between the fourth nipple and the midline of the mouse to expose the mammary fat pad. Fifty microliters of cell suspension (1 × 10⁶ SUM159PT cells or 2 × 10⁵ 4T1 cells) was then injected into the mammary fat pad with an insulin syringe, and the incision was sutured. Tumour volumes were determined every week after the surgery. The spontaneous metastasis of 4T1 cells was evaluated using the Xenogen IVIS Spectrum Imaging System (Caliper Life Sciences). For lung colonization models, mice were randomly divided into groups (six mice per group) and injected intravenously with 2 × 10⁵ MDA-MB-231 cells. BLI with the Xenogen IVIS Spectrum Imaging System was employed periodically to monitor the lung metastatic lesions and the spontaneous metastatic lesions. 55 Twelve weeks later, the mice were euthanized, and the lungs were removed, fixed in formalin, and embedded in paraffin. Lung metastatic burdens were counted in five random fields under low magnification, and the data are presented as mean ± S.D.

PP1 activity assay using colorimetric methods
The activity of PP1 was measured according to the manufacturer's instructions (GENMED SCIENTIFICS INC., USA). A total of 5 × 10⁶ cells was prepared for each measurement. Samples were washed with GENMED clearing buffer and lysed with GENMED lysis buffer.
Five tubes of standardized buffer (with standard concentrations of phosphate) were prepared in the measurement system, and a standard curve was established. After measuring the sample background, we assayed each sample, obtained the corresponding phosphate concentration from the standard curve, and then calculated the enzyme activity. The enzyme activity was thus determined from the free phosphate released, as quantified by colour development; absorbance was read at 660 nm.

Statistical analysis
Statistical analysis was conducted using SPSS software (version 21.0, IBM). Normally distributed continuous variables were compared by a two-tailed Student's t-test. Qualitative variables and non-normally distributed continuous variables were analysed by the Mann-Whitney U-test or the Chi-square test. Kaplan-Meier analysis was used for univariate survival analysis, and the log-rank test was applied to compare survival curves. For the multivariate analysis, Cox regression was applied. A two-sided P-value lower than 0.05 was deemed statistically significant.

CONFLICT OF INTEREST
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
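As a minimal illustration of the survival workflow described in the Statistical analysis section above (Kaplan-Meier estimation, log-rank comparison, and multivariate Cox regression), the sketch below uses the third-party lifelines package; the toy DataFrame, the column names, and the penalizer setting are our assumptions, not the study's data or exact configuration.

```python
# Hedged sketch: Kaplan-Meier curves, log-rank test, and Cox regression
# with lifelines; all data below are invented for illustration.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":        [12, 34, 60, 8, 45, 60, 22, 60, 15, 50],  # follow-up time
    "event":         [1, 1, 0, 1, 1, 0, 1, 0, 0, 1],           # 1 = event observed
    "ppp1r14c_high": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],           # SI >= 6
    "t_stage_adv":   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})

high = df[df.ppp1r14c_high == 1]
low = df[df.ppp1r14c_high == 0]

kmf = KaplanMeierFitter()
kmf.fit(high["months"], high["event"], label="PPP1R14C-high")  # KM estimate

res = logrank_test(high["months"], low["months"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print("log-rank p =", res.p_value)

# Multivariate Cox model on both covariates; a small penalizer stabilizes
# the fit on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```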
6,634.6
2022-01-01T00:00:00.000
[ "Medicine", "Biology" ]
The $\sqrt{m\Lambda_{QCD}}$ scale in heavy quarkonium

We investigate the effects produced by the three-momentum scale $\sqrt{m\Lambda_{QCD}}$ in the strong-coupling regime of heavy quarkonium. We compute the leading non-vanishing contributions due to this scale to the masses and inclusive decay widths. We find that they may provide leading corrections to the S-wave decay widths but only subleading corrections to the masses.

Introduction
Heavy quarkonium is characterized by the small relative velocity v of the heavy quarks in their centre-of-mass frame. This small parameter produces a hierarchy of widely separated scales once multiplied by the mass m of the heavy particle: m (hard), mv (soft), mv² (ultrasoft), .... In general, we have E ∼ mv² ≪ p ∼ mv ≪ m, where E is the binding energy and p the relative three-momentum. It is usually believed that for most of the heavy quarkonium states a weak-coupling analysis is not reliable. However, one can still exploit the hierarchy of scales in the problem [1]. It was argued in [2,3] that in the particular case Λ_QCD ≫ mv², which we will be concerned with in this letter, it is possible to encode all the relevant information of QCD in an effective Schrödinger-like description of these systems. The problem then reduces to calculating the potentials from QCD. It has been shown in [3] how to systematically calculate the potentials within a 1/m expansion (see [4] for earlier calculations). Once the methodology to compute the potentials within a 1/m expansion has been developed, the next question appears naturally: to what extent can one compute the full potential within a 1/m expansion in the case Λ_QCD ≫ mv²? We tackle this issue in this paper. We will see that, indeed, new non-analytical terms arise due to the three-momentum scale √(mΛ_QCD). These terms can be incorporated into local potentials (δ³(r) and derivatives of it) and scale as half-integer powers of 1/m. Moreover, we show that it is possible to factorize these effects in a model-independent way and compute them within a systematic expansion in some small parameters. As mentioned before, these terms are due to the existence of degrees of freedom, namely the quark-antiquark pair, with relative three-momentum of order √(mΛ_QCD). The on-shell energy of these degrees of freedom is of O(Λ_QCD), i.e. the same energy scale that is integrated out when computing the standard 1/m potentials, which corresponds to integrating out (off-shell) quark-antiquark pairs of three-momentum of order Λ_QCD. Therefore, in principle, both degrees of freedom should be integrated out at the same time. In this letter, under the general condition Λ_QCD ≫ mv², we will perform the analysis in two possible cases: 1) in Sec. 2 we will consider the particular case mv ≫ Λ_QCD; 2) in Sec. 3 the general case Λ_QCD ≲ mv. Note that the scale √(mΛ_QCD) fulfils √(mΛ_QCD) ≫ mv and √(mΛ_QCD) ≫ Λ_QCD. From the last inequality it follows that at this scale we are always in the weak-coupling regime. (In fact, there is at least one example where powers of √m arise upon integrating out some non-relativistic degrees of freedom [5].)

Case mv ≫ Λ_QCD
In the case mv ≫ Λ_QCD, all quarks and gluons with energy much larger than Λ_QCD (in particular gluons with energy and momentum of order √(mΛ_QCD) and mv) may be integrated out from NRQCD using weak-coupling techniques. This leads to the EFT called pNRQCD′ in [6,7] (formerly called pNRQCD in [8,2]).
This EFT contains, as explicit degrees of freedom, gluons with energy and momentum smaller than mv and quarks with energy smaller than mv and momentum smaller than m. Quarks may be arranged in quark-antiquark singlet S = S 1_c/√N_c and octet O = O^a T^a/√T_F fields (T_F = 1/2). The Lagrangian of pNRQCD′ is then written in terms of these fields (R is the centre-of-mass coordinate and r the relative coordinate) [2], where L_g stands for the Lagrangian density of gluons and light quarks. The potentials V = {V_s, V_o} contain real and imaginary parts. The real part, which at leading order is the Coulomb potential V^(0), has been calculated by different authors over the past years [9]. The imaginary part has been calculated in [6,7]; it consists of local potentials (δ³(r) and derivatives of it). The imaginary coefficients come from the imaginary parts of the four-fermion matching coefficients of NRQCD [1].

The next energy scale to be integrated out is Λ_QCD. This means integrating out all quarks and gluons of energy or kinetic energy of order Λ_QCD. The contributions due to (off-shell) heavy quarks of energy ∼ Λ_QCD and three-momentum of order mv or smaller (i.e. of order Λ_QCD) are easily singled out by performing an expansion of the incoming and outgoing bound-state energies h_s and h_o over Λ_QCD in the matching calculation. This ensures that the quark kinetic energy is much smaller than Λ_QCD and, therefore, that the quark three-momenta are much smaller than √(mΛ_QCD). This expansion only produces terms that are analytical in 1/m [6,7]. The contributions due to heavy quarks of three-momentum of order √(mΛ_QCD) may be obtained as follows. We split the singlet and octet fields of the pNRQCD′ Lagrangian into two fields each: the semi-hard fields S_sh and O^a_sh are associated with three-momentum fluctuations of O(√(mΛ_QCD)), and the potential fields S_p and O^a_p with three-momentum fluctuations of O(mv). The potentials are labeled according to the relative momenta that they connect: V = V_{p,p} + V_{p,sh} + V_{sh,p} + V_{sh,sh}. The typical three-momentum transfer in V_{p,sh}, V_{sh,p} and V_{sh,sh} is √(mΛ_QCD) (≫ mv). The pNRQCD′ Lagrangian then splits accordingly. The expressions for L^{sh}_{pNRQCD′} and L^{p}_{pNRQCD′} are identical to the pNRQCD′ Lagrangian except for the replacement of the fields by their semi-hard and potential counterparts, respectively. Recall that the gluons left dynamical are of O(Λ_QCD) and that terms analytical in r do not mix semi-hard and potential fields. Therefore, the multipole expansion in (1) is an expansion with respect to either the scale r ∼ 1/√(mΛ_QCD) in L^{sh}_{pNRQCD′} or the scale r ∼ 1/mv in L^{p}_{pNRQCD′}. Throughout the paper we will also assume that √(mΛ_QCD) ≫ mα_s (condition (4)), which implies that the Coulomb potentials in V_{p,sh}, V_{sh,p} and V_{sh,sh} can be expanded about the kinetic energy and no Coulomb resummation is needed. This is not so for V_{p,p}.

The leading contribution to the real part of L_mixing comes from the mixing of S_sh with S_p and of O^a_sh with O^a_p due to the Coulomb potential. As an example, consider the real part of the singlet-mixing term due to the static Coulomb potential, given by Eq. (5). In order to avoid a cumbersome notation we have dropped the upper index p, sh from V^(0)_s(r). In fact, any potential between fields labeled by a, b = p, sh always carries upper indices a, b; hence, dropping the upper indices shall not lead to ambiguities. In the second line of Eq. (5), a Fourier transform of all the fields has been performed, and in the third one we have expanded the potential around p ∼ 0, since, by definition, p ∼ mv ≪ p′ ∼ √(mΛ_QCD).
Doing so in the loops that will appear in the matching computation guarantees that only the scale √(mΛ_QCD) is integrated out. Alternatively, one may consider S†_p(R, r) slowly varying in r and multipole expand it about r = 0, which brings us directly from the first to the last line of Eq. (5). At the order of interest it is enough to keep V^(0). Analogous results hold for the real part of the octet-mixing term due to the static Coulomb potential, where the trace is over the colour indices. The leading contribution to the imaginary part of L_mixing can be immediately read off from the imaginary delta-type potentials calculated in [7]. The matching coefficients f are the matching coefficients of the four-fermion operators in NRQCD and may be read off from Ref. [1].

Matching
The next step is to integrate out from pNRQCD′ all fluctuations that appear at the energy scale Λ_QCD. These are light quarks and gluons of energy or three-momentum of order Λ_QCD, and singlet and octet fields of energy of order Λ_QCD or three-momentum of order √(mΛ_QCD). We will be left with pNRQCD, where only a singlet field describing a quark-antiquark pair of energy mv² and relative three-momentum mv is dynamical. L^{1/m}_{pNRQCD} is defined as the part of the pNRQCD Lagrangian obtained by integrating out quarks and gluons of energy and three-momentum of order Λ_QCD in L^{p}_{pNRQCD′} only. It is analytical in 1/m and has been considered before in [2,7]. Here we will calculate the leading part of L^{1/√m}_{pNRQCD}, which is defined as the part of the pNRQCD Lagrangian obtained by integrating out quark-antiquark pairs of three-momentum √(mΛ_QCD) in L_{pNRQCD′}, in addition to the above degrees of freedom. In general, it is non-analytical in 1/m and, at leading order, it consists of a new local (delta-type) potential.

Figure 1: The four diagrams of pNRQCD′ contributing to δV^{1/√m} at leading (non-vanishing) order in the multipole expansion. Open and full circles indicate octet and singlet potential insertions coming from the mixing terms, respectively; these are treated according to Eq. (5). The superscripts P and SH on a propagator indicate that the propagating fields are of the potential and semi-hard type, respectively. The circle with a cross indicates the vertex S† r·E O (or its Hermitian conjugate), where the quark fields are both either potential or semi-hard. The gluon line stands for non-perturbative multi-gluon exchanges.

The matching condition for the full δV = δV^{1/m} + δV^{1/√m} at leading (non-vanishing) order in the multipole expansion is given by Eq. (13). This matching equation should be understood (even if written at the operator level) with incoming (outgoing) momenta and energy E of O(mv) and O(mv²), respectively. The typical size of the time variable in the integral is set by the vacuum expectation value of the chromoelectric correlator and hence t ∼ 1/Λ_QCD. The separation between potential and semi-hard relative three-momenta discussed above can easily be implemented in the rhs of Eq. (13) and in V^{p,sh}_{s,o}. The zeroth-order term in this expansion gives δV^{1/m} and has been calculated in [2,7]. The V_{p,p} potential cannot be expanded in the potential region. The size of the three-momenta in the semi-hard regions is of O(√(mΛ_QCD)), and several approximations apply. The leading contributions to δV^{1/√m} are depicted in Fig. 1, each diagram (Fig. 1a-1d) corresponding to one term of the matching calculation. The potential δV contains a real and an imaginary part.
The real part contributes to the heavy quarkonium spectrum, the imaginary one to the inclusive decay width.

Corrections to the spectrum
The four diagrams that give the leading contribution to Re δV^{1/√m} are obtained from those of Fig. 1, where V^(0)_{s,o} are the Coulomb singlet and octet potentials. In the resulting expression, the first equality uses the definition of E_n that may be found in Ref. [7], and the last equality writes the chromoelectric correlator in Euclidean space (traces as well as suitable Schwinger lines connecting the gluon fields are understood); this yields Eq. (18).

Corrections to the decay width
The four diagrams that give the leading contribution to Im δV^{1/√m} are shown in Fig. 2. These can be derived from the diagrams of Fig. 1 by replacing one of the potentials by a Coulomb potential and the second potential by the imaginary delta potential of Eq. (7). The graph with two potentials inside the gluonic loop, as well as graphs involving the octet delta potential (∼ K_o δ³(r)/m²), do not contribute to Im δV as a delta potential (although they do as derivatives of a delta potential, which are subleading). We obtain an expression in which, in the last equality, the chromoelectric correlator is again written in Euclidean space. A similar analysis can be done for the P-wave decays. The leading effect would in that case be at least O(mα_s/√(mΛ_QCD)) suppressed with respect to the leading contribution computed in [6].

Case Λ_QCD ≲ mv
Here we will follow the same procedure as in the previous section. In this case, however, the starting point is the NRQCD Lagrangian. We split the quark (antiquark) field into two: a semi-hard field for the three-momentum fluctuations of O(√(mΛ_QCD)), ψ_sh (χ_sh), and a potential field for the three-momentum fluctuations of O(mv), ψ_p (χ_p). The NRQCD Lagrangian then splits accordingly. The Lagrangians L^{sh}_{NRQCD} and L^{p}_{NRQCD} are identical to the NRQCD Lagrangian expressed in terms of semi-hard and potential fields, respectively. The quantity L_g is the QCD Lagrangian for gluons and light quarks. For L^{sh}_{NRQCD} we can use weak-coupling techniques. Therefore, we can construct a pNRQCD′ Lagrangian for it, once gluons and quarks of energy or three-momentum of O(√(mΛ_QCD)) have been integrated out and transformed into potentials. If we further project onto the quark-antiquark sector, the Lagrangian L^{sh}_{pNRQCD′} will formally be equal to Eq. (1). The multipole-expanded gluons in L^{sh}_{pNRQCD′} have (four-)momentum much smaller than √(mΛ_QCD). We note that we cannot do the same for L^{p}_{NRQCD}, since at scales of O(Λ_QCD) we can use neither weak-coupling techniques nor the multipole expansion. We consider now L_mixing. We will assume, as in Sec. 2, that condition (4) holds. This will allow us to treat the Coulomb potential as a perturbation at the semi-hard scale. The leading-order contribution to the real part of L_mixing comes from the one-Coulomb-exchange graph (see Fig. 3). For the potentials we also need to consider the next-to-leading term in the mv/√(mΛ_QCD) expansion. A practical way to obtain Re L^(1)_mixing is by expanding the Coulomb potential in Fig. 3 at higher order in p/p′ and promoting the conventional derivatives acting on the potential fields to covariant ones. A proper tree-level matching in coordinate space can be done using the field redefinitions of Ref. [8] for the semi-hard fields projected onto the two-particle sector and multipole expanding the potential fields.
The leading contribution to the imaginary part of L_mixing is analogous to the one given by Eq. (7). Note that the potential fields always appear as local currents in L_mixing. Finally, in the effective field theory that we obtain at the order of interest, L_g now contains gluons and light quarks of energy and momentum much smaller than √(mΛ_QCD).

Matching
As in Sec. 2.1, we now want to integrate out degrees of freedom of O(Λ_QCD). We will be left with an EFT, pNRQCD, where only a singlet field describing a quark-antiquark pair of energy mv² and relative three-momentum mv is dynamical. The quantity L^{1/m}_{pNRQCD} is obtained by integrating out quarks and gluons of energy and three-momentum of order Λ_QCD in L^{p}_{NRQCD}. It is analytical in 1/m and has been considered before in [3,6,7]. Here we will calculate the leading part of L^{1/√m}_{pNRQCD}, which, in general, is non-analytical in 1/m. It involves integrating out, from NRQCD, quark-antiquark pairs of three-momentum √(mΛ_QCD). The Lagrangian L^{1/√m}_{pNRQCD} will consist, at leading order, of a new local (delta-type) potential that we name δV^{1/√m}. The matching calculation for δV^{1/√m} is analogous to the computation of the previous section, supplemented with the technology developed in Refs. [3,6,7]. The leading contribution is given by the four diagrams shown in Fig. 4 (Figs. 4a-4d, in the notation of Ref. [3]). In the corresponding equations, H stands for the NRQCD Hamiltonian in the static limit and |0; r⟩^(0) is the gluonic piece of the ground state of NRQCD in the static limit. We refer to [3,7] for further details. By summing up all the contributions, we obtain the same result as in Sec. 2. This is not a coincidence. Note first that the diagram in Fig. 4d is identical to the one in Fig. 1d. The remaining diagrams in Fig. 4 also have a mapping to the corresponding ones of Fig. 1, if we substitute the square box in the former by a round box linked to an open circle through an octet propagator. This mapping can be made rigorous from an equality (where {|n⟩^(0)} is the gluonic part of a complete set of eigenstates of the static NRQCD Hamiltonian, and E^(0)_n are the corresponding eigenvalues [3,7]) in whose last line we have also made use of the fact that in the limit x₁ − x₂ → 0 we have |0⟩^(0) → 1_c |vac⟩/√N_c. The neglected terms, generically denoted by dots, do not give delta-type contributions to the potentials. From Eq. (37) it follows that the calculation of the diagrams of Figs. 4a, 4b and 4c reduces to that of the diagrams of Figs. 1a, 1b and 1c, respectively. Similarly, for the imaginary part of δV^{1/√m}, the relevant diagrams reduce to those calculated in Sec. 2.3 and shown in Fig. 2. Here, as well, an analysis of the P-wave decays could be done; we can easily estimate that the leading effects would be at least O(mα_s/√(mΛ_QCD)) suppressed with respect to the contributions computed in [6].

For heavy quarkonium systems in the strong-coupling regime (Λ_QCD ≫ mv²), the corrections to the static QCD potential in the Schrödinger equation have so far been calculated within a 1/m expansion. We have shown here in a quantitative manner that they are not the only contributions to the full potential and have computed the leading non-analytical corrections in 1/m.
Our findings can be summarized in the following corrections to the energy levels and to the S-wave matrix elements and decay widths (the symbols V and P stand for the vector and pseudoscalar S-wave heavy quarkonium, respectively; n is the principal quantum number), for instance

Γ(P_Q(nS) → γγ) = (C_A/π) (|R^P_{n0}(0)|²/m²) × Im f_γγ(¹S₀) × [1 + (4(2C_f + C_A)/(3Γ(7/2))) α_s 𝓔_{5/2}],

where O(1/m) stands for corrections (which may be of the same size) that can be computed within the 1/m expansion (see [7]) and for higher-order corrections. Let us comment on the size of the new corrections. For the spectrum they are always smaller than mv³ and are therefore subleading with respect to those calculated in [3]. For the S-wave decay widths, their relative size with respect to the corrections computed in [7] depends on the size of α_s(√(mΛ_QCD)). Under some circumstances, for instance α_s ∼ v, the contributions calculated here are the dominant ones. In any case, the above results fulfil the same factorization properties as those obtained in [7]. As a consequence, equations like those given in Sec. VII of Ref. [7] still hold. Let us also note that the same non-perturbative correlator appears in both the electromagnetic and the hadronic decays. In this paper we have assumed that the scale √(mΛ_QCD) is much larger than mα_s. Otherwise we are not allowed to treat the Coulomb potential as a perturbation at that scale. This may not be the case for the Υ system, where one seems to be in the situation Λ_QCD ∼ mα_s², which implies √(mΛ_QCD) ∼ mα_s. In this case, one should integrate out the three-momentum scale mα_s at the same time as the scale √(mΛ_QCD). The calculations presented here should then be modified by using the full Coulomb propagators instead of the free ones in the semi-hard regions. In addition, extra contributions may arise, which are only due to the three-momentum scale mα_s. We do not deal with this issue in this paper, which, however, deserves further study.
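As a rough numerical illustration of the hierarchy √(mΛ_QCD) ≫ mv, Λ_QCD invoked throughout, and of the closing remark that Λ_QCD ∼ mα_s² implies √(mΛ_QCD) ∼ mα_s, the following sketch uses round, bottomonium-like inputs; these numbers are our assumptions, not values taken from the paper.

```python
# Hedged numerical check of the scale hierarchy discussed in the text:
# Lambda_QCD >> m v^2 implies sqrt(m Lambda_QCD) >> m v and >> Lambda_QCD.
# All inputs below are illustrative round values, not numbers from the paper.
import math

m = 4.7    # heavy-quark mass [GeV], bottom-like
v = 0.2    # relative velocity (assumed)
lam = 0.3  # Lambda_QCD [GeV] (assumed)

soft = m * v                     # mv   ~ 0.94 GeV
ultrasoft = m * v ** 2           # mv^2 ~ 0.19 GeV (< Lambda_QCD, as assumed)
semi_hard = math.sqrt(m * lam)   # sqrt(m Lambda_QCD) ~ 1.19 GeV

print(f"mv = {soft:.2f} GeV, mv^2 = {ultrasoft:.2f} GeV, "
      f"sqrt(m*Lambda) = {semi_hard:.2f} GeV, Lambda = {lam:.2f} GeV")

# Closing-paragraph identity: if Lambda_QCD ~ m alpha_s^2, then
# sqrt(m Lambda_QCD) ~ m alpha_s, so the semi-hard and Coulombic
# scales coincide and must be integrated out together.
alpha_s = 0.3
print(math.sqrt(m * (m * alpha_s ** 2)), m * alpha_s)  # equal by algebra
```

Note that for these inputs the separation between √(mΛ_QCD) and mv is only mild, echoing the paper's own caveat about the Υ system.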
5,010.6
2003-07-11T00:00:00.000
[ "Physics" ]
Performance Analysis of Free Space Optical Link Under Various Attenuation Effects

Free Space Optics (FSO) is useful where a fiber-optic cable is impractical. It is similar to fiber-optic communications in that data is transmitted by modulated laser light; instead of containing the pulses of light within a glass fiber, however, they are transmitted in a narrow beam through the atmosphere. This article discusses the main architectural details of the FSO communication system. The major FSO parameters discussed are wavelength selection, the features of different wavelength windows, and the optical channel model. The article investigates the performance of a free-space optical link under various attenuation effects, such as rain and fog, using Optiwave.

Introduction
The block diagram of a typical terrestrial FSO link is shown in Figure 1. Like any other communication technology, FSO essentially comprises three parts: the transmitter, the channel, and the receiver. The primary duty of the transmitter is to modulate the source data onto an optical carrier; the output then passes through air, space, or vacuum and is received by the receiver. The source data is in binary form and is converted to optical pulses by the transmitter [7][8]. Modulation can be of many types, such as on-off keying (OOK), pulse position modulation (PPM), differential phase shift keying (DPSK), differential quadrature phase shift keying (DQPSK), and subcarrier intensity modulation (SIM) [1]. The modulator achieves high data rates by varying the phase, frequency, or amplitude of the carrier. The modulation is achieved either by varying the driving current of the optical source directly in sympathy with the data to be transmitted, or by means of a Mach-Zehnder interferometer [8]. The driver circuit varies the drive current in accordance with the input data, so that the binary signal can be converted to optical pulses. The 1550 nm band is attractive for a number of reasons: it provides longer range, high data rates, eye safety (about 50 times more power can be transmitted at 1550 nm than at 850 nm), and reduced solar background and scattering in light haze or fog. The transmit telescope collects the light, collimates it, and directs it towards the receiver telescope at the other end of the channel [8]. The atmospheric channel is a free-space link that can be 2-3 km long. As it is an open channel, a number of factors affect the link's data rate, long-range connectivity, and error rate. The main factors that must be considered while establishing a link are absorption, turbulence, scattering, and beam divergence. Another source of attenuation is sunlight; the link can go blank if the sun is positioned exactly behind the transmitter. Dust particles in the atmosphere, snow, fog, rain, and other precipitation can disturb the link and affect the bit error rate (BER). The receiver contains a telescope fitted with a lens that collects as much light as possible to provide maximum power to the photodetector; an optical filter is also used to reject unwanted wavelengths and noise added during reception of the signal. The photodetectors are avalanche photodiodes (APDs) or P-I-N diodes. The APDs used are highly sensitive and need a reverse bias of 100-200 V for their operation. They can detect visible and near-IR wavelengths if silicon material is used. PIN diodes are used where high-speed detection is needed; they have fast switching speeds, but their use is limited to shorter distances.
These are less expensive and are generally used for longer wavelengths. These diodes can detect different wavelengths; for example, an InGaAs PIN diode can detect 1550 nm, whereas Si can detect up to 1.1 µm. The post-detection processor carries out the necessary amplification and signal processing to generate an error-free signal.

Atmospheric Optical Channel [9-10]
The atmospheric channel consists of gases and aerosols (tiny particles suspended in the atmosphere). Also present in the atmosphere are rain, haze, fog, and other forms of precipitation. Another feature of interest is atmospheric turbulence. When radiation from the Sun strikes the Earth, some of it is absorbed by the Earth's surface, thereby heating the surface air mass. The resulting mass of warm, lighter air then rises and mixes turbulently with the surrounding cooler air mass to create atmospheric turbulence. With the size distribution of the atmospheric constituents ranging from sub-micrometers to centimeters, an optical field that traverses the atmosphere is scattered and/or absorbed [8].

Optical Attenuation by Fog
The combined effects of direct absorption and scattering of laser light can be described by a single path-dependent attenuation coefficient γ (dB/km), which is described by the Kim and Kruse models given in [4]. Let λ be the wavelength in nm, V the visibility in km, and q the size-distribution coefficient of the scattering particles. The Kruse model gives γ = (3.91/V)(λ/550)^(−q), with q = 1.6 for V > 50 km, q = 1.3 for 6 km < V < 50 km, and q = 0.585 V^(1/3) for V < 6 km (Equation (1)). Equation (1) implies that for any meteorological condition there will be less attenuation at higher wavelengths; the attenuation at 10 µm is expected to be less than the attenuation at shorter wavelengths. Kim rejected such wavelength-dependent attenuation for low visibility in dense fog. The q variable in Equation (1) for the Kim model is given by q = 1.6 for V > 50 km, 1.3 for 6 km < V < 50 km, 0.16V + 0.34 for 1 km < V < 6 km, V − 0.5 for 0.5 km < V < 1 km, and 0 for V < 0.5 km. The advection fog is generated when warm, moist air flows over a colder surface. The air in contact with the surface is cooled below its dew point, causing the condensation of water vapour. It appears more particularly in spring, when southern displacements of warm, moist air masses move over snow-covered regions. The radiation or convection fog is generated by the radiative cooling of an air mass during the night, when meteorological conditions are favourable (very low wind speeds, high humidity, clear sky). It forms when the surface releases the heat accumulated during the day and becomes colder: the air in contact with this surface is cooled below the dew point, causing the condensation of water vapour, which results in the formation of a ground-level cloud. This type of fog occurs more particularly in valleys.

Optical Attenuation by Rain
Rain is precipitation of liquid drops with diameters greater than 0.5 mm; when the drops are smaller, the precipitation is usually called drizzle. The optical signal is randomly attenuated by fog and rain when it passes through the atmosphere. The main attenuation factor for an optical wireless link is fog; however, rain also imposes a certain attenuation. When the water droplets of rain become large enough, they cause reflection and refraction, and as a result these droplets cause wavelength-independent scattering; the majority of raindrops belong to this category. Attenuation increases approximately linearly with the rainfall rate, and the mean raindrop size also increases with the rainfall rate, being of the order of a few mm. The prediction model recommended by ITU-R is given in Table 1, and other models that have been used for FSO rain attenuation prediction are given in Table 2.
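To make the fog model concrete, the sketch below implements the Kim visibility relation outlined above and feeds the resulting attenuation into a simple geometric link budget; the piecewise q values follow the standard Kim formulation, which we assume matches reference [4], and the aperture and divergence values are illustrative assumptions rather than the paper's Optiwave settings.

```python
# Hedged sketch: Kim-model fog attenuation (gamma in dB/km; V in km,
# wavelength in nm) plus a simple geometric link budget. Apertures and
# divergence below are illustrative, not the paper's simulator settings.
def kim_q(V_km: float) -> float:
    if V_km > 50:
        return 1.6
    if V_km > 6:
        return 1.3
    if V_km > 1:
        return 0.16 * V_km + 0.34
    if V_km > 0.5:
        return V_km - 0.5
    return 0.0  # dense fog: attenuation becomes wavelength independent

def fog_attenuation_db_km(V_km: float, wavelength_nm: float = 1550.0) -> float:
    return (3.91 / V_km) * (wavelength_nm / 550.0) ** (-kim_q(V_km))

def received_power_w(p_tx_w: float, range_km: float, atten_db_km: float,
                     tx_aperture_m: float = 0.05, rx_aperture_m: float = 0.10,
                     divergence_mrad: float = 2.0) -> float:
    beam_m = tx_aperture_m + divergence_mrad * 1e-3 * range_km * 1e3
    geometric = (rx_aperture_m / beam_m) ** 2           # fraction of beam captured
    atmospheric = 10 ** (-atten_db_km * range_km / 10)  # fog/rain loss
    return p_tx_w * geometric * atmospheric

for V in (0.5, 2.0, 10.0):
    a = fog_attenuation_db_km(V)
    p = received_power_w(2.461e-3, 1.0, a)
    print(f"V = {V:4.1f} km: {a:6.2f} dB/km -> P_rx ~ {p:.2e} W over 1 km")
```

With these assumed geometry values the received power comes out in the microwatt range over 1 km, the same order of magnitude as reported in the simulation section that follows.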
Simulation-Based FSO Link Design
The FSO link is designed and the results are evaluated at a wavelength of 1550 nm. The simulation layout of the FSO link is shown below (Figure 2). We set the wavelength to 1550 nm, which produces an invisible laser beam. Next is the FSO channel, for which the simulator provides the ability to change the free-space parameters, such as the link range, attenuation, and beam divergence angle. To analyse the optical power, the simulator provides power meter and spectrum analyzer tools; these are connected at the transmitter and the receiver to evaluate the performance of the link. The simulator approximates practical conditions, as it allows the adjustment of parameters such as the transmitted power, bit rate, noise bandwidth, range, geometric and additional losses, propagation delay, and the type of photodiode along with its responsivity. Figure 3 shows the transmitted optical power and its spectrum versus wavelength for the optimized link of length 1000 meters. The transmitted optical power is 2.461 × 10⁻³ W, as calculated by the power meter. The spectrum of the transmitted power in Figure 3 also shows a peak at the wavelength of 1550 nm. The received optical power is 1.209 × 10⁻⁶ W, as calculated by the power meter.

Conclusion
The simulation work analysed the FSO link performance at a 1550 nm wavelength and at a maximum distance of 1000 meters. The transmitted and received powers were analysed using the optical power analyzers. For FSO systems, the most commonly used modulation techniques are RZ (return-to-zero) and NRZ (non-return-to-zero); therefore, in this research work we prefer a Mach-Zehnder modulator with an NRZ modulation technique for FSO systems.

Biography
Gaurav Soni received his B-Tech degree in ECE from PTU, Kapurthala, in 2005 and his M-Tech degree in ECE from D.A.V.I.E.T, Jalandhar. He has more than ten years of teaching and research experience and has to his credit 91 research papers in refereed international journals such as JOC and in IEEE conference proceedings. He is currently working as an Associate Professor in the ECE Department, Amritsar College of Engineering and Technology, Amritsar. He has served as a reviewer for the IEEE Journal of Lightwave Technology and as a reviewer and editor for the Advances in Science, Technology and Engineering Systems Journal.
2,077.6
2018-03-27T00:00:00.000
[ "Physics" ]
Fractured reservoir distribution characterization using folding mechanism analysis and patterns recognition in the Tabnak hydrocarbon reservoir anticline

Naturally fractured reservoirs play a considerable part in the study, production, and development of hydrocarbon fields, because most hydrocarbon reservoirs in the Zagros Basin are naturally fractured. Production from those reservoirs is usually affected by the presence of a system of connected fractures. In this study, the Tabnak hydrocarbon field, on the fold-thrust belt of the Zagros zone in the Persian plate, has been analysed using facies models and folding mechanism analysis to identify fractured-reservoir patterns. The results show a flexural fold, with similarity in the folding mechanism, and some open-fracture potential associated with the limestone, shale, clay, and anhydrite in the study area's facies models. Consequently, the stress pattern and the type of fractures occurring on the fold's upper and lower layers will be similar. For the Tabnak anticline reservoir, fracture surface patterns were extended to depth as a block model using image-processing techniques in MATLAB R2019 software and kriging geostatistical methods. According to the model results, the distribution of fracture orientations at the adjacent wells 11, 14, and 15 is consistent. The results also show similarities with the facies models, the folding mechanism assessment, and the well-test and mud-loss data analyses. These results can inform the primary approach of development plans, through the drilling of horizontal and slanted wells, and hydrocarbon reservoir management strategies.

Introduction
The Zagros orogenic belt in Iran is part of the Alpine-Himalayan mountain range, extending about 2000 km from the Anatolian Fault in eastern Turkey in the northwest to the Oman Line in the southeast (Berberian and King 1981; Alavi 1994). In the Zagros fold-thrust belt, two different types of folds are observed: bending folds and fault-related folds. Most fault-related folds are composed of different members (Jamison 1987; Wallace and Homza 2004; Suppe et al. 2004; De Vera et al. 2009). Other folding mechanisms, such as salt-affected folds and compact folds, are seen in the Zagros but are not common. In the Zagros belt, important parameters include the thickness of calcareous units such as the Asmari and Sarvak Formations. In addition, the number of zones separated by intervening evaporitic rocks (anhydrite) and shales creates a volumetric mechanical anisotropy across several bent layers (Sepehr et al. 2006). On the other hand, there are several separation horizons in the cover rocks of the Zagros fold-thrust belt. However, these horizons are not uniform, and lateral changes in the physical components of the cover-rock sequence are not observed in all regions of the Zagros. As a result, different parts of the Zagros belt (the Lorestan, Dezful, and Fars regions) exhibit different structural styles. Severe anisotropy usually occurs in parts of a fold where the layers are relatively thin and weak and, during the folding process, the layering surfaces show little slip resistance. Conversely, the lowest mechanical anisotropy occurs when the layers are relatively thick and strong in terms of mechanical strength, and slipping along the layering surfaces does not occur easily. In the central parts of the Zagros, due to the presence of numerous harmonious and cohesive horizons, cover rocks with severe mechanical anisotropy are seen (Sepehr et al. 2006).
In the study area on the Tabnak hydrocarbon anticline, the formations exposed at the surface are lithologically similar to the hydrocarbon-producing formations, i.e., the Kangan and Dalan; the layers of the region are comparable in thickness and strength and are approximately horizontal. Anisotropy can therefore be considered minimal for this area and, consequently, the pattern of fractures due to bending of folds in the surface formations can be considered similar to that in the deeper formations. The Zagros Fold-Thrust Belt is separated from the Zagros Imbricate Zone by doubly plunging folds with a NW-SE extension. The anticlines in this zone are usually symmetrical and inverted, have a style parallel to the fixed layers' thickness, and are often formed by flexural-slip folding mechanisms (Everts et al. 1977; Alavi 1994, 2007). The folds are often fault-propagation folds (Suppe, 1983; Suppe and Medwedeff, 1990) or fault-bend folds (Suppe, 1983). Natural fractures are mechanical breaks in rocks, and their spatial distribution in geological formations can be a complex function of different geological factors (Al-Rubaye et al. 2021). They occur at different scales and are ordinarily highly heterogeneous. Rock fracturing is a complicated process, sensitive to changes in geological conditions under lithostatic and fluid pressures, tectonics, facies (Eahsanul Haque et al. 2018), thermal effects, and other geological stresses such as uplift, volcanism, folding, and salt intrusion. In general, fractures initiate and propagate when the stresses in the rock become equal to or greater than the rock's strength (Sarkheil et al. 2013). Usually, in exploration studies based on structural geology and reservoir fractures, the available data are seismic, porosity, permeability, lithology, bed thickness, state of stress, fault patterns, folding patterns, and production data. The outcome of their analysis is a network map of fracture index/intensity for each discrete block. The approach uses fuzzy logic (Sarkheil et al. 2020) to quantify and rank the importance of each geological parameter on fractures, and neural networks to account for complex, nonlinear relationships between these geological parameters and the fracture index (Sarkheil et al. 2013). Neural networks, which can extract relationships among several variables from observed data, are considered an excellent tool for estimating fracture density as a function of several geological parameters. A neural network is suitable for analyzing a complex nonlinear system because no prior knowledge of the functional structure among the variables is required. Quenes et al. (1998) used neural networks to analyze the ultimate distribution of recovery in a naturally fractured reservoir as a function of bed thickness, formation resistance, and formation curvature. It is commonly observed that fractures naturally develop in sets that differ in orientation, density, and geometric parameters (Quenes et al. 1998). Fractal behavior has been observed in natural fracture patterns (Barton and Larson 1985; Sarkheil et al. 2010, 2013). Fractal geometry provides a quantification of the size scaling, or scale dependency, of complex fracture systems. The significant aspect of a multi-fractal analysis is to detect the multiscaling chain.
In multi-fractal analysis, the box-counting grid technique has been used to gather information about the distribution of pixel values, which becomes the basis for a series of calculations that reveal and explore the multiple scaling rules of multi-fractals. Magnificent exposures of NW-SE-trending folds dominate the structural style and its relationship to the morphology of the Zagros fold-thrust belt. These folds differ in their geometry and specifications across this area. One of the fold-thrust belt regions is the Tabnak anticline, located in the eastern part of the Asaloyeh hydrocarbon field. High gas production from some formations of the Tabnak hydrocarbon field in Iran indicates a naturally fractured reservoir dominated by structural fractures and facies models, in which gas flow occurs along open fractures (Sarkheil et al. 2009a, b, 2010, 2013). In this study, to identify the pattern of natural fractures in the Tabnak hydrocarbon reservoir, the folding mechanism and facies models are used to investigate the effects of porosity, permeability, and fracture density on part of the back limb of the Tabnak fold. In addition, using image processing techniques in MATLAB R2019 software and kriging geostatistical methods, surface fracture patterns are developed to depth. Method and/or Theory The Tabnak hydrocarbon field is located on the fold-thrust belt of the Zagros zone in the Persian plate (Fig. 1). The formations' homogeneity in the study area, their occurrence at the surface, and their hydrocarbon production potential (especially the Kangan and Dalan Formations) are among its characteristic features. Given the layers' thickness and strength and their approximate horizontality, anisotropy can be considered minimal. Secondary fractures due to folding depend on the mechanical behavior of the stressed layers and on how the stress is distributed between them. If the fold is flexural-slip, the stress distribution pattern, and consequently the type of stress-induced fractures, will be similar on the surface layers of the fold and on the lower layers. If, however, the fold in the area is flexural-shear, i.e., there is friction between the layers instead of slipping, the pattern of stress distribution on the surface layers of the fold differs from that on the lower layers (located at depth), so the fractures at the surface differ from those in the deep layers. Hence, given the fold's bending style, the fracture pattern in the surface formations is similar to that in the deeper formations (Fig. 2a, b). As previously researched on the folding mechanism of the Zagros, the Zagros Fold-Thrust Belt is separated from the Zagros Imbricate Zone by doubly plunging folds along the NW-SE trend. The usually symmetrical anticlines in this zone have a similar style with fixed layer thickness and are mostly formed by flexural-slip folding mechanisms (Alavi 1994, 2007): fault-propagation folds (Suppe 1983; Suppe and Medwedeff 1990) or fault-bend folds (Suppe 1983). The Tabnak hydrocarbon anticline can be considered similar to a flexural-slip bending mechanism. By identifying the folding mechanism in the Tabnak anticline, the pattern of stress distribution, and consequently the fractures resulting from stress, on the fold's surface layers can be considered similar to those of the lower layers. Furthermore, a model can be estimated to generalize the earth's surface fractures to depth. In this study, various data that were widely available, reliable, and of high quality were used.
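Before turning to those data, here is a minimal sketch of the box-counting grid technique mentioned above (illustrative only; it does not reproduce the paper's MATLAB multi-fractal workflow, and the random test image stands in for a binary fracture map):

import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 0/1 image."""
    counts = []
    for s in sizes:
        # Trim the image so it tiles evenly into s-by-s boxes.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes that contain at least one fracture pixel.
        counts.append(int((boxes.max(axis=(1, 3)) > 0).sum()))
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
demo = (rng.random((256, 256)) > 0.9).astype(int)  # stand-in fracture map
print(f"Estimated box-counting dimension: {box_count_dimension(demo):.2f}")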
The scope of the study block on the Tabnak anticline was selected in such a way that it offers a variety of data with appropriate quality. The data used in this research are divided into the following groups: 1) Spatial data: These include fracture information from surface surveys performed in the Tabnak anticline range and the use of aerial and satellite imagery. 2) Mud loss data: This group of information has been obtained from the loss reports of the National Iranian Oil Company. 3) Porosity and permeability data: This information results from the analysis of drilling cores in wells in the study area. 4) Data related to shale volume and facies: This information is obtained from petrophysical studies available at the National Iranian Oil Company. 5) Fracture density data related to folding mechanisms: This information has been obtained from image logs (FMI), in collaboration with Schlumberger, with reinterpretation of the logs for wells in the Tabnak field study area to verify the information. 6) Mud loss reports. On the Tabnak anticline reservoir, using image processing techniques in MATLAB R2019 software and kriging geostatistical methods, surface fracture patterns were extended to depth as a block model. The results are also compared with the facies models, folding mechanism assessment, well test, and mud loss data analysis (Fig. 3). Facies model Since hydrocarbon reservoir facies information and shale or clay volume data in the formation are essential for analyzing the presence or absence of fractures in the formation and whether those fractures are open or closed, litho-facies can be essential in completing the identification of the fracture distribution. For this purpose, the results of petrophysical data on the Tabnak hydrocarbon field wells have been used for facies modeling. In the next step, a lithology diagram was prepared for each well to build a litho-facies model. Finally, in the continuation of the study, petrophysical data (shale, dolomite, limestone, and anhydrite volume) were used to identify the lithology of the formations at the desired depth (Fig. 4). In another part of the research, the facies model of the Tabnak hydrocarbon reservoir, the Kangan and Dalan Formations, has been studied. The lithology-type petrophysical results were first considered for each interval and assigned a numerical value according to Table 1; these values were then mapped onto the different formations of the two-dimensional facies model, which was plotted for the study area on the Tabnak hydrocarbon reservoir according to Figs. 4 and 5. The results show the two-dimensional model in this study area, especially for the Kangan Formation, which is considered at a depth of 2350 m. The study area's significant components consist of dolomite, limestone, and anhydrite lithology. According to Fig. 5, the results of the two-dimensional model for the Kangan Formation show an interval between wells 14 and 15 that is made of limestone with no shale, clay, or anhydrite, so that in this part of the region there is no fracture-filling material. It can therefore be a favorable area for open fractures and, consequently, have a high potential for hydrocarbon or water production. It should be noted, however, that this calcareous range does not extend much deeper. In the two-dimensional facies model of Fig. 6, which relates to the Dalan Formation, no comparable interval with the possibility of such open fractures is seen between wells 14 and 15.
The two-dimensional model of Fig. 6 shows the Dalan Formation, which is considered at a depth of 2750 m. The study area's significant components are lithologically composed of dolomite, carbonate anhydrite, and anhydrite. Modeling the surface fractures developed to the depth Within this study's scope, each of the eight wells is located on carbonate lithology and penetrates dolomite and limestone formations; the main hydrocarbon-producing formations, such as the Dalan Formation of the Dehram Group, are also carbonate, so a model of the surface fractures can be developed to depth for this area, and this pattern of fracture distribution can be modeled. The following steps are followed for this modeling: 1) Surface fracture distribution mapping (this map was prepared and compiled by combining fracture data from satellite imagery, facies models, and land surveys). 2) Digitizing the surface fracture image (using the image processing techniques in MATLAB R2019 software, the image was converted to 0/1 binary format: all pixels larger than 0.5 were converted to 1, and all pixels smaller than 0.5 were converted to zero). 3) Preparing the Excel sheet files that include the digitized fractures. 4) Estimating the fracture density around each well from the wellbore fracture data (the impact radius for each well was determined). 5) Converting the fracture density estimation results to binary format (0, 1): the fracture density values between 0.2 and 1.4 m²/m³ were converted to 0 and 1; specifically, fracture density model cells from 0.2 to 0.9 m²/m³ were converted to 0, and fracture density model cells from 0.9 to 1.4 m²/m³ were converted to 1. 6) Combining the three categories of information: (1) the surface fracture cells converted to binary (0 and 1), (2) the cells estimated around each well according to the density of wellbore fractures, and (3) the cells generalized to the wells' impact radii. For each cell in the model without fracture data, the fracture property is developed: the fracture information derived from the network estimation and generalization based on the impact radius is combined with the surface fracture data. 7) Converting the results to the interval between 0 and 1. 8) Finally, using the SGeMS v2.0 software, the surface-fracture model developed to depth is represented as a three-dimensional model (Fig. 7). As a sample, in Fig. 8a a two-dimensional fracturing model, cross-sectioned from the three-dimensional surface fracture model developed to depth, is shown for a depth of 2350 m. As shown in the given model, the fractures between the wells are resolved; these results show similarities with the fractal dimension model (Sarkheil et al. 2013) (Fig. 8b) and with the results of the mud loss and well test analyses. Conclusion This study performs a case study of a confined region in the back limb of the Tabnak hydrocarbon reservoir. According to the structural studies done on the central Zagros area, a flexural-slip (bending-sliding) folding mechanism can be assumed for most anticlines in this region. Accordingly, the stress distribution pattern, and the type of fractures arising from the stress, will be similar on the upper and lower layers of the fold. Furthermore, the results of the two-dimensional model for the Kangan Formation show an interval between wells 14 and 15 made of limestone with no shale, clay, or anhydrite, so that in this part of the region there are no fracture-filling materials. It can therefore be a favorable area for open fractures and, consequently, have a high potential for hydrocarbon or water production.
The two-dimensional model of the Dalan Formation shows the dolomite and carbonate anhydrite components at a depth of 2750 m. In addition, given the recognition of the lithology of the geological formations occurring at the earth's surface and of the deep structures capable of producing hydrocarbons, the surface fractures can be modeled to depth using this research's proposed algorithm, so referring to this pattern is more appropriate. Moreover, a cross-sectional view of the model of surface fractures developed to depth, for a depth of 2350 m, makes it possible to determine the distribution of fractures adjacent to wells 11, 14, and 15. These distributions were favorable in the wells' vicinity, affecting development plans and the reservoir management strategy. Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Funding The authors would like to appreciate the comprehensive support provided by ICOFC. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
4,188.8
2021-06-01T00:00:00.000
[ "Geology", "Engineering" ]
Dibromido(di-2-pyridyl disulfide-κ2 N,N′)zinc(II) The molecular structure of the title compound, [ZnBr2(C10H8N2S2)], contains a seven-membered chelate ring in which the zinc atom is coordinated by two bromide ions and by the two pyridyl N atoms of a single 2,2′-dipyridyl disulfide (dpds) ligand within a slightly distorted tetrahedron. As is usual for this type of complex, the disulfide group does not participate in zinc coordination. The chelate complexes are connected via weak intermolecular C—H⋯Br hydrogen bonding into chains, which extend in the [010] direction. Comment In our ongoing investigation of the synthesis, structures and properties of new coordination polymers based on zinc(II) halides and N-donor ligands (Bhosekar et al., 2007), we have started a systematic investigation of their thermal behavior, because we have demonstrated that new ligand-deficient coordination polymers can be conveniently prepared by thermal decomposition of suitable ligand-rich precursor compounds (Näther et al., 2003; Näther & Jess, 2006). In further investigations we have reacted zinc(II) bromide with 2,2′-dipyridyl disulfide (dpds). In this reaction the title chelate complex was formed by accident. The versatile coordination properties of dpds enable a series of different chelate complexes and coordination polymers. It can act in N,N′-bidentate (Kinoshita et al., 2003; Kadooka et al., 1976; Pickardt et al., 2005) or bridging (Kubo et al., 1998; Kinoshita et al., 2003) coordination modes toward many metals. When dpds is connected to the metal atom as a chelate ligand, a seven-membered ring is formed. The title compound is isotypic with the corresponding chloride compound reported by Pickardt et al. in 2005. In the crystal structure the coordination geometry about the Zn(II) ion is almost tetrahedral, with bonds being formed to two bromide ions and the two pyridyl nitrogen atoms of a single dpds ligand (Fig. 1). These latter interactions result in the formation of a seven-membered chelate ring. As usual for this type of complex, the disulfide group does not participate in zinc coordination; the angles around the zinc atom range from … (15) to 119.06 (4)°, the largest being Br-Zn-Br (Table 1). The structural parameters in the dpds molecule are quite regular. In particular, the C-S bonds, 1.784 (7) and 1.783 (6) Å, are in good agreement with those expected for C(sp2)-S bonds (1.77 Å). The S-S bond length, 2.050 (3) Å, is somewhat longer than that found in the structure of the free ligand, 2.016 (2) Å (Raghavan & Seff, 1977). Experimental ZnBr2 and dpds were obtained from Alfa Aesar and methanol was obtained from Fluka. 0.125 mmol (28.15 mg) of zinc(II) bromide, 0.125 mmol (27.5 mg) of dpds and 3 ml of methanol were transferred into a test-tube, which was closed and heated to 110 °C for four days. On cooling, colourless block-shaped single crystals of (I) were obtained. Refinement All H atoms were located in the difference map but were positioned with idealized geometry and refined isotropically with Uiso(H) = 1.2 Ueq(C) of the parent atom, using a riding model with C-H = 0.95 Å. Fig. 1: Molecular structure of the title compound with labelling and displacement ellipsoids drawn at the 50% probability level. Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement.
R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
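For reference, the conventional R-factor, the weighted R-factor and the goodness of fit mentioned above are conventionally defined as follows (standard crystallographic definitions, stated here for the reader's convenience rather than taken from this report):

$$R = \frac{\sum \bigl| |F_o| - |F_c| \bigr|}{\sum |F_o|}, \qquad wR = \left[ \frac{\sum w\,(F_o^2 - F_c^2)^2}{\sum w\,(F_o^2)^2} \right]^{1/2}, \qquad S = \left[ \frac{\sum w\,(F_o^2 - F_c^2)^2}{n - p} \right]^{1/2}$$

where $F_o$ and $F_c$ are the observed and calculated structure factors, $w$ is the weight, $n$ is the number of reflections and $p$ is the number of refined parameters.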
827.4
2007-12-06T00:00:00.000
[ "Chemistry" ]
Estimation of swelling potential of Enugu Shale using cost effective methods The behavior of swelling soils is mainly governed by their mineralogical composition as well as by environmental factors and stress history. Enugu Shale is one of the shales for which the assessment of soil swelling potential cannot be based on mineralogical composition alone. The identification of its clay mineral types is basic to understanding the roles of the other swelling factors in the soils. The results of particle size distribution indicated that Enugu Shale is dominated by fine grains, with average means of 69.65% fines, 23.68% sands and 6.67% gravels, while the Atterberg limit values are moderate to high, with liquid limit ranging from 22-66%, plastic limit 0-39% and plasticity index 0-39%. The abundance of major elemental oxides shows that SiO2 (50.4-88.1%), Al2O3 (6.29-28.23%) and Fe2O3 (0.98-12.25%) constitute over 90% of the bulk chemical composition of the studied area. The studied area is dominated by A-7 soils and low-plasticity clay soils according to the AASHTO and USCS classification systems. The results of the free swell ratio range from 1.02-1.45, which indicates that the studied area is dominated by a mixture of swelling and non-swelling clay minerals. The Van der Merwe charts show low to medium swelling potential. These results show that the study area is dominated by low to medium swelling soils, which need to be modified and upgraded before they can be used as subgrade material. INTRODUCTION The studies of expansive soils have in recent times attracted a great deal of attention from engineering construction practitioners. For example, Enugu Shale in southern Nigeria is mostly underlain by soft sediments which are prone to expansion in the presence of the abundant precipitation of the wet season, in addition to the clay mineralogy of the soils and other environmental factors prevalent in the history of the soils. The swelling and shrinkage phenomena associated with the soils of this region can be detrimental to engineering projects such as pavements, foundations, and slope stability. Shale exhibits a wide spectrum of geotechnical characteristics, especially as the moisture content increases, and has often been a cause for concern in environmental geotechnical issues (Aghamelu et al., 2011). Enugu Shale is one of the shales that show significant changes in volume on addition of moisture. Several studies have identified the characteristics of the Shale (Okagbue and Aghamelu, 2010; Ekeocha, 2015; Oyediran and Fadamoro, 2015; Tijani, 2012). The geology, climatic conditions, environmental factors and drainage conditions provide a natural setting for the occurrence of swelling/shrinkage phenomena. Structural damage caused by swelling phenomena is evident in the soils of Enugu Shale in the Enugu metropolis (Figure 1). Evaluation of the swelling characteristics of the soils using empirical estimation will be of much help to geotechnical engineers for an easy, quick and affordable understanding of the problematic soils of the metropolis. The method adopted in the work of Sridharan and Prakash (2000) was employed to characterize the mineralogical characteristics of the soil using the free swell ratio, in comparison with existing knowledge. Geology Enugu Shale overlies the Agbani Sandstone/Awgu Shale. It is a lateral equivalent of the Nkporo/Owelli Formation and one of the oldest deposits of the Anambra Basin (Nwajide, 1990).
Enugu Shale consists of fissile, grey shale with extra-formational clasts, capped on top by ironstone with the presence of pyrite. The shale is associated with extensive synsedimentary deformation structures (Nwajide and Reigers, 1999) and lies in the eastern part of the Anambra Basin (Figure 2). Enugu Shale is well exposed along the Enugu-Onitsha Express Way by the New-Market flyover and along the Enugu-Port Harcourt Express Way by the Ugwuaji flyover. The highly weathered Enugu Shale consists of dirty brown lateritic regolith that is porous and varies significantly up to a maximum depth of 20 m, depending on the topography and drainage conditions (Ekeocha, 2015). Climate, physiography and drainage The Enugu metropolis is bounded by latitudes 6°22'N and 6°30'N and longitudes 7°27'E and 7°47'E, and lies within the rainforest belt of Nigeria. The two main seasons that exist in Nigeria are the dry season, which runs through the months of October to March, and the rainy season, which begins in late March and ends in October (Nwankwor et al., 1988). The wet period is mostly characterized by moderate temperatures and high relative humidity, while the dry season has high temperatures and lower relative humidity. The geomorphic feature of the Enugu metropolis is a north-south trending escarpment. The scarp slope of the Enugu escarpment rises sharply to the western side and attains a maximum mean elevation of about 400 m above mean sea level. This elevation is continuous into the Udi Plateau. The Enugu metropolis is drained by the Ekulu, Iva, Ogbete and Nyaba rivers, which rise from near the base of the escarpment and flow towards the east into the Cross River Basin. The study area is well drained on the western side and poorly drained on the eastern side, owing to the geomorphological characteristics of the area. MATERIALS AND METHODS A total of thirty samples were collected from different places across the Enugu metropolis (Table 1). The samples showed various degrees of weathering, ranging from slightly weathered to moderately weathered. The samples were collected with the aid of a 6-inch hand auger, with sampling depths ranging from 0.5 m to 3.5 m. The sampling strictly followed the standard procedure for soil sampling as specified in British Standard Institution (BSI) 1377 (1990). The sampling and laboratory testing were conducted between June and July 2017. The samples were taken to the laboratory for various tests such as Atterberg limits (plastic and liquid limit), free swell test and particle size analysis. For the particle size analysis, 300 g of each sample was oven-dried at 105°C. The oven-dried samples were sieved through the set of BSI sieves; the sample retained on each sieve was weighed and the cumulative weight passing through each sieve was calculated as a percentage of the total sample weight. Atterberg limits were determined following BSI 1377 (1990) test 1A. Free Swelling Ratio (FSR) Sridharan and Prakash (2000) proposed a classification of clay mineral type based on the Free Swell Ratio (FSR) (Table 2). The free swell ratio gives realistic information about soil expansivity and clay mineralogy. The free swell ratio is calculated as follows: FSR = Vd/Vk, where Vd = volume of soil in distilled water and Vk = volume of soil in kerosene.
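A minimal sketch of this calculation and of the classification bands used later in the text (the bands below 1.5 come from the text; anything above 1.5 is only summarized loosely per Sridharan and Prakash (2000), and the input volumes are illustrative):

def free_swell_ratio(v_distilled_cm3, v_kerosene_cm3):
    """FSR = volume of soil in distilled water / volume of soil in kerosene."""
    return v_distilled_cm3 / v_kerosene_cm3

def classify_fsr(fsr):
    if fsr <= 1.0:
        return "non-swelling (kaolinitic) clay"
    if fsr <= 1.5:
        return "mixture of swelling and non-swelling clay"
    return "swelling clay (see Sridharan and Prakash, 2000, for sub-bands)"

fsr = free_swell_ratio(14.5, 10.0)  # illustrative volumes
print(f"FSR = {fsr:.2f}: {classify_fsr(fsr)}")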
Particle size distribution analysis The particle size distribution analysis showed that the crushed samples consist of 21.67-93.97% fines and 6-72.52% sands, with average mean values of 69.65, 23.68 and 6.67% for fines, sands and gravels respectively (Table 1). The particle size distribution curves are shown in Figure 3. The dominance of fines over sands and gravels indicates a non-uniform distribution of grain sizes, which implies poor grading. Atterberg limits The consistency tests showed liquid limits, plastic limits and plasticity indices ranging from 22-66, 0-39 and 0-39% respectively. The soils' plasticity ranges from low to high according to Bell (2007), with a descriptive classification of lean to fat. The plasticity chart (Figure 4) shows the plasticity characteristics of the tested samples in the studied area. Clay content and activity Based on the results of the index properties, using Skempton (1953) as modified by Savage (2007), clay content and activity were determined. Activity (A) values ranging from 0-2.02 and clay contents ranging from 0-52.63% were obtained, with average means of 0.46 and 30.49% for activity and clay content respectively. The activity of the soils ranged from inactive to active, with inactive dominating the soils of the studied area (Table 1). Free Swell Index (FSI) and Free Swell Ratio (FSR) The Free Swell Index and Free Swell Ratio values obtained by laboratory analysis and empirical evaluation showed ranges of 8-45% and 1.08-1.45 for FSI and FSR respectively (Table 1). All the tested samples have FSR above 1.0 (< 1.0 is regarded as non-swelling clay, otherwise known as kaolinitic clay) but within the range of 1.0-1.5, which is regarded as a mixture of swelling and non-swelling clay, dominated mostly by kaolinitic and montmorillonitic clay minerals according to Sridharan and Prakash (2000) (Table 2). The results agree with the reports of Ekeocha (2015), Oyediran and Fadamoro (2015) and Tijani (2012) on the studied area. Correlations of physical parameters The correlation is significant at P < 0.05 (Table 3). The correlation between the activity of the soils and the other physical parameters is significant only for plasticity index and liquid limit. Activity is positively correlated with plasticity index and liquid limit; the strongest correlation was obtained between activity and plasticity index (r = 0.827), while that between activity and liquid limit is moderate (r = 0.463). The correlation between plasticity index and the other parameters is significant for soil activity, liquid limit and fines; the correlation between plasticity index and fines is moderately positive at r = 0.43. The correlation between FSI and the other parameters is significant only for FSR and liquid limit, and both are positively correlated with FSI. The correlation between FSI and FSR is very strong at r = 0.998, while that between FSI and liquid limit is moderate at r = 0.429. Again, the correlation between FSR and liquid limit is moderately positive at r = 0.415. The correlation between the liquid limit and the other parameters is significant for the activity of the soils, plasticity index, FSI, FSR, plastic limit and clay fraction, all of which are positively correlated with liquid limit. The correlation between liquid limit and plasticity index is very strong at r = 0.858, that with plastic limit is strong at r = 0.705, and that with clay fraction is very strong at r = 0.755.
The correlation between plastic limit and the other parameters is significant only for liquid limit and clay fraction; the correlation between plastic limit and clay fraction is very strong at r = 0.960. Lastly, the correlation between percentage fines and the other parameters is significant only for percentage sands; this correlation is very strongly negative at r = -0.892. Activity, plasticity index, FSI, FSR, plastic limit and clay fraction correlated with liquid limit at the 0.05 significance level. These correlation coefficients have a considerable impact on predicting the swelling characteristics of the soils when related to the report of Bell (2007). Table 1 shows the classification of the soils of the studied area using the USCS and AASHTO classification systems. Figure 5a and b show the percentage distribution of soils by the AASHTO and USCS classification systems. Figure 5a shows the dominance of A-7-6 soils in the studied area, with 36.67% of the entire sample population. The results also show that the studied area is mostly dominated by A-7 soils in the eastern part, and this has contributed immensely to the state of the roads in the area (Figure 1). Figure 5b shows that the studied samples are dominated by low-plasticity clay, high-plasticity silt and high-plasticity clay, with percentage distributions of 40, 26.67 and 16.67% respectively. The results also indicate that the studied area is characterized by swelling soils and that caution should be applied before embarking on engineering construction in the studied area. Swelling potential The evaluation of the swelling potential of the studied soil samples was carried out based on the results of the Atterberg limits, the free swell test and empirical estimation. The work of Van der Merwe (1964) was applied to investigate the swelling potential of the studied soils. Figure 6 shows the k lines superimposed on the Van der Merwe swelling chart to determine the swelling potential of the soil samples in the studied area. The chart has defined zones of low, medium, high and very high swelling potential. The Van der Merwe chart is a plot of gross clay fraction (P0.002) versus gross plasticity index (Pg). There is a mathematical derivation of lines representing swelling potential by a factor k, which defines the swelling zones approximately. The studied samples were dominated by low to medium swelling soils. The free swell test results were used to calculate the free swell ratio. Subsequently, the free swell ratio (FSR) results were also used to identify the clay minerals present in the study area (Table 1), by comparison with the classification of Sridharan and Prakash (2000) (Table 2). The results of XRD showed that the studied samples consist of kaolinite, hematite and quartz (Figures 7 and 8). The results obtained agree with the work of Oyediran and Fadamoro (2015) and Ekeocha (2015) on the clay mineralogy of the studied area. Soil elemental oxides The chemical characteristics of shale are mainly a function of the chemistry of the main minerals, the cementing materials and the cation exchange capacity of the clay minerals. Table 4 shows the elemental oxides of the samples in the studied area. Figure 9 shows the relationship of the major chemical elements SiO2, TiO2, Al2O3, Fe2O3, CaO, Na2O, K2O, MgO and MnO to the liquid limit of the soils.
The relationship between elemental oxides and liquid limit shows that increases in the SiO2, TiO2, Na2O and MnO contents of the studied soil samples reduced the liquid limit, while Al2O3, Fe2O3, CaO, K2O and MgO significantly increased the liquid limit (Figure 9). This finding agrees with the report of Mitchell (1993), which stated that the swelling and other engineering properties of soils are controlled by the chemical composition of the soil materials and water. Dontsova and Norton (1999) reported that high magnesium ion concentrations from magnesium oxides and other sources can cause surface sealing of the soil. This occurrence leads to waterlogging of the soil and subsequently to soil swelling. It was observed that an increase in magnesium oxide affects the swelling characteristics of the studied soils. Conclusion Field observations and experimental analysis identified that changes in geotechnical characteristics are consistent with changes in elevation in the studied area. Several other deductions are made from the interpretation of the laboratory test results and field observations, as follows: 1) The swelling potential of Enugu Shale is essentially medium, but abundant precipitation and the prevailing climatic conditions keep continuously altering the soils of Enugu Shale towards high swelling potential, especially at low elevations where drainage conditions are quite poor. 2) The study revealed that strong correlations exist between the activity of the soils and the plasticity index, between FSI and FSR, between liquid limit and plasticity index, between liquid limit and clay fraction, and between plastic limit and clay fraction. 3) The dominance of A-7-6 soil, based on the AASHTO classification of the soils of the studied area, is an indication that such soil cannot be used as subgrade material. 4) The clay mineralogy of Enugu Shale is a mixture of kaolinite and montmorillonite. This was obtained using the Free Swell Ratio according to Sridharan and Prakash (2000). Although the XRD test did not confirm the presence of montmorillonite, field observation and the Van der Merwe chart showed the presence of low to medium swelling clay in the studied area.
3,500
2020-02-29T00:00:00.000
[ "Geology" ]
Cost analysis of IPv6 distributed mobility management protocols in comparison with TFMIPv6 The past decade has witnessed a significant evolution in the role of the Internet, transitioning from individual connectivity to an integral aspect of various domains. This shift has prompted a move in IP paradigms from hierarchical to distributed architectures characterized by decentralized structures. This transition empowers efficient data routing and management across diverse networks. However, traditional distributed mobility management (DMM) protocols, reliant on tunneling mechanisms, incur overheads, costs, and delays, exacerbating challenges in managing the exponential growth of mobile data traffic. This research proposes Tunnel-Free Mobility for IPv6 (TFMIPv6) as a solution to address the shortcomings of existing DMM protocols. TFMIPv6 eliminates the need for tunneling, simplifying routing processes and reducing latency. A comprehensive cost analysis and performance evaluation are conducted, comparing TFMIPv6 with traditional protocols such as MIPv6, PMIPv6, FMIPv6, and HMIPv6. The study reveals significant improvements with TFMIPv6. Signaling costs are reduced by 50%, packet delivery costs by 23%, and tunneling costs are completely eliminated (100%). Real-world network traffic datasets are used for simulation, providing statistical evidence of TFMIPv6's efficacy in supporting uninterrupted movement of IPv6 data across networks. Introduction In the past decade, we have seen a remarkable transformation in the Internet's role from being a tool for individual connectivity to becoming an integral part of everything around us. This evolution has spurred a shift in the IP paradigm from traditional hierarchical and centralized network architectures towards more flat and distributed structures [1]. As mobile data traffic grows exponentially, mobile operators face the daunting task of handling this surge. To mitigate these challenges, operators are increasingly turning to data offloading technologies within 3GPP networks, utilizing mechanisms such as Selected IP Traffic Offload (SIPTO) and Local IP Access (LIPA) [2-4]. Despite these advancements, the rapid increase in mobile traffic threatens to outpace the capabilities of current centralized mobility management (CMM) systems, which are plagued by scalability issues such as sub-optimal routing, single points of failure and inefficient use of mobility resources [5,6]. Distributed mobility management (DMM) has been proposed as a solution, endorsing a flatter network architecture with distributed entities to address these issues. DMM potentially resolves problems like sub-optimal routing and single-point failures [7,8]. However, despite its benefits in reducing handover delays, DMM faces challenges such as excessive control signalling and tunnelling overheads [9,10].
Several studies have investigated DMM solutions, focusing on the efficient management of mobile video traffic and other performance improvements [5,11]. However, a lack of comprehensive overviews fully addressing the key aspects required for effective DMM solution development remains. Existing literature often offers limited comparisons and narrow assessments [12,13]. Various studies have conducted cost analyses of mobility protocols under different scenarios, but these, too, are often narrowly focused and do not provide a holistic view of mobility patterns, protocols, and network topologies [14-23]. The authors in [24] provide an in-depth review of the challenges and potential solutions for mobility management in 5G and future network technologies. Their work discusses the complexities of maintaining seamless connectivity and efficient network resource allocation as mobile devices move across different network segments, and evaluates various strategies and advancements to address these challenges. The study in [25] presents a well-structured framework to assess the merits and limitations of distributed and centralized mobility management protocols. The focus lies on their efficiency, scalability, and impact on overall network performance. Through a balanced analysis, the paper contrasts the strengths and weaknesses of each approach, providing valuable insights into their suitability across diverse networking scenarios, especially against the backdrop of rapidly advancing mobile technologies. Meanwhile, the studies in [26,27] delve into the readiness of current mobility management solutions in the face of the advanced demands of 5G and beyond technologies. They evaluate existing protocols and infrastructures, scrutinize their capacity to handle the high-speed, low-latency, and high-density requirements of next-generation networks, and suggest areas for improvement or further development to ensure these systems can support the future of mobile connectivity effectively. In response to these gaps, our research introduces a framework for a tunnel-free protocol supporting DMM in mobile networks. This framework is designed to enhance communication and minimize delays by eliminating the need for tunnelling, thereby reducing registration delays. Our approach demonstrates significant improvements, including reductions in handover delay, blocking probability, and data packet loss [28]. This paper will analyze and compare IP mobility management protocols developed by the IETF with our proposed TFMIPv6 (tunnel-free mobile IPv6) protocol, focusing on a comprehensive cost analysis. This will provide insights into the strengths and weaknesses of each system, offering a more nuanced understanding of their respective advantages and limitations. Novelty of TFMIPv6 1. Tunnel-Free Approach: TFMIPv6 eliminates the need for tunneling, reducing registration delays and operational costs. This feature provides a significant advantage over existing distributed mobility management protocols like MIPv6, FMIPv6, and PMIPv6, which rely on tunneling. 2. Cost-Efficiency: Our research demonstrates that TFMIPv6 achieves reductions of 50% in signaling costs, 23% in packet delivery costs, 100% in tunneling costs, and 13% in total costs compared to traditional protocols. 3. Make-Before-Break (MBB) Methodology: TFMIPv6 employs an MBB methodology to minimize packet loss and ensure uninterrupted connectivity during handovers. Differences from existing schemes 1.
MIPv6 and FMIPv6: These rely on Binding Update (BU) messages to inform the Home Agent and Correspondent Node of changes, incurring high signaling and tunneling costs. TFMIPv6 avoids these costs with the Binding Mobility Anchor (BMA). 2. HMIPv6 and PMIPv6: These use local management to reduce signaling, but tunneling costs remain significant. TFMIPv6 manages mobility within a tunnel-free domain, providing a cost-effective solution. Table 1 compares TFMIPv6 with existing protocols across key performance metrics. Our analysis provides strong evidence that TFMIPv6's tunnel-free approach is a novel solution for distributed mobility management. The significant reductions in signaling, packet delivery, and total costs, combined with uninterrupted connectivity, position TFMIPv6 as a superior alternative. The following are the main contributions of this article: • The distributed nature of the protocol could lead to a reduction in signalling overheads, contributing to overall cost efficiency. • Streamlining data movement to reduce operational expenses is a key consideration for network providers. • The protocol presents a model for evaluating long-term cost benefits, emphasizing sustained operational savings over time. • The approach advocates for more efficient use of network resources, thereby reducing unnecessary expenditures. • The protocol's cost-effectiveness is highlighted through comparative analyses with traditional network models, illustrating its economic advantages. The remainder of the paper is structured as follows: the network model and message considerations are presented next. Subsequently, we develop a model for cost analysis, followed by a thorough investigation of the numerical findings and discussions. Finally, conclusions are drawn with future research directions. Network model and mobility messages Here, we present a network model that provides an impression of the domain responsible for administrative purposes, including access networks and the various entities within it. This model offers a general representation of the network structure and its components. Furthermore, we describe the messages used for mobility by IP mobility management protocols. These messages serve as the means of communication and coordination between network entities to manage and handle mobility-related operations effectively. By understanding the network model and the mobility messages, we can gain insight into the functioning and behaviour of IP mobility management protocols within the administrative domain. Network model Fig 1 illustrates the network model employed for cost modelling, which corresponds to the model utilized in our previous work [1], as this research is an extension of the same study. The terms utilized in Fig 1, which illustrate the particular paths connecting the interacting entities, are explained in Table 2. We then analyze each key cost factor, beginning with the signalling cost, aiming to comprehensively understand the implications and efficiency of the different mobility management protocols. This enables us to make informed evaluations and comparisons between the protocols, facilitating decision-making regarding their implementation and utilization. Signaling cost The signalling cost is the total signalling burden incurred during mobility-related operations. It encompasses the cumulative signalling burden associated with managing mobility in a network. If the signalling cost is represented by C_BU^(·), it can be found by multiplying the distance covered in each hop by the size of the mobility signalling message [28,29].
Signaling cost for MIPv6. When using MIPv6, the MN sends BU messages to both the CN and the HA when its point of attachment changes. The signaling cost for MIPv6, denoted as C_BU^(MIPv6), consists of two components: C_(BU-CN)^(MIPv6), which represents the signaling cost for CN registration, and C_(BU-HA)^(MIPv6), which represents the signaling cost for HA registration. The formula for C_BU^(MIPv6) is expressed in Eq 1, where C_(BU-HA)^(MIPv6) is expressed in Eq 2. The weighting factors α and β are employed to assess the stability of the link, with α representing the wired connection and β representing the wireless connection. These factors are used to emphasize the importance of link stability. Furthermore, the signalling cost for the registration of the CN, denoted as C_(BU-CN)^(MIPv6), can be calculated using the formula shown in Eq 3. Nevertheless, the signalling cost mentioned above does not incorporate the signalling overhead of the CN's BAck message, as the MIPv6 specification does not require it [28]. Signaling cost for FMIPv6. FMIPv6 has two modes: the predictive mode and the reactive mode. However, owing to the handoff readiness required of the MN, FMIPv6 incurs an additional signalling cost [31]. Specifically, in the case of predictive FMIPv6, there are three key aspects to consider, among them the signalling cost for handover preparation, C^(Pre-FMIPv6). If we denote the signaling cost [28,32] of Pre-FMIPv6 as C_BU^(Pre-FMIPv6), it can be given in Eq 4. Signaling cost for HMIPv6. When an MN is in motion, it is managed locally by HMIPv6 [33], much like PMIPv6. After configuring a new access network, the MN only needs to update the MAP with its new location, since the MAP acts as the MN's local HA. The mobility of the MN thus remains invisible to both the HA and the CN. In this case, the signaling cost of HMIPv6, denoted as C_BU^(HMIPv6) [20], can be given in Eq 6. Here, m_(LBU-MAP) represents the size of the local binding update (LBU) message transmitted from the MN to the MAP. The parameters α and β represent the consideration criteria for wired and wireless links, respectively. d_(MAP-AR) denotes the mean number of hops between the AR and the MAP, while d_(AR-MN) signifies the mean number of hops between the AR and the MN. Additionally, m_(LBAck-MAP) refers to the size of the local binding acknowledgement (LBAck) message transmitted from the MAP to the MN. Signaling cost for PMIPv6. Local management is carried out for a moving MN by PMIPv6 [34]. The MN relies on the mobility services provided by the network entities to update its point of attachment within the designated PMIPv6 area. In the PMIPv6 area, no mobility-related signalling messages are transmitted by the MN. Consequently, the signaling cost of PMIPv6, denoted as C_BU^(PMIPv6) [28,35], can be given in Eq 7. Here, m_(NBU-LMA) represents the size of the NBU message sent from the MAG to the LMA. The parameter α is the consideration criterion for the wired link. d_(LMA-AR) signifies the mean number of hops between the LMA and the AR. Additionally, m_(NBack-LMA) denotes the size of the NBack message transmitted from the LMA to the MAG.
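The hop-distance times message-size accounting described above can be sketched as follows (the exact Eqs 1-7 are not reproduced in this text, so the structure and the α/β weighting below are an illustrative assumption rather than the paper's formulas):

def link_cost(msg_size, wired_hops, wireless_hops, alpha=1.0, beta=1.5):
    """Weighted transmission cost of one signaling message.

    alpha weights wired hops, beta weights wireless hops (assumed values).
    """
    return msg_size * (alpha * wired_hops + beta * wireless_hops)

def mipv6_bu_cost(m_bu, d_ha, d_cn, d_ar_mn, alpha=1.0, beta=1.5):
    """BU to HA plus BU to CN; the MN reaches the wired network via the AR."""
    c_ha = link_cost(m_bu, d_ha, d_ar_mn, alpha, beta)  # HA registration
    c_cn = link_cost(m_bu, d_cn, d_ar_mn, alpha, beta)  # CN registration
    return c_ha + c_cn

# Illustrative values: a 56-byte BU, 6 wired hops to the HA, 8 to the CN,
# and one wireless hop between the AR and the MN.
print(mipv6_bu_cost(m_bu=56, d_ha=6, d_cn=8, d_ar_mn=1))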
Signaling cost for TFMIPv6. In the proposed TFMIPv6 method, whenever the MN changes its point of attachment, it sends BU messages to both the BMA and the CN. The signaling cost of TFMIPv6, denoted as C_BU^(TFMIPv6), consists of two components: C_(BU-CN)^(TFMIPv6), the signaling cost for the CN's registration, and C_(BU-BMA)^(TFMIPv6), the signaling cost for the BMA's registration. C_BU^(TFMIPv6) can be calculated as in Eq 8, where C_(BU-BMA)^(TFMIPv6) and C_(BU-CN)^(TFMIPv6) are given in Eqs 9 and 10, respectively. Packet delivery cost The packet delivery cost is the overall extra data traffic that arises on routing paths because of packet delivery. We represent the packet delivery cost as C_PD^(·), which is calculated by multiplying the hop distance by the size of the data packet [28]. Packet delivery cost for MIPv6. Data packets from the CN are sent directly to the MN's current location [36] when using route optimization (RO). The packet delivery cost for MIPv6, denoted as C_PD^(MIPv6) [20,28], is determined in Eq 11, where ω is the fraction of data packets taking the longer route via the HA while the CN's location update is ongoing, λ_s indicates how often new sessions are started by the MN, and E(S) is the average length of those sessions. P_I^(MIPv6), the cost associated with using the indirect path option in MIPv6, is given in Eq 12, where ϖ represents the additional overhead caused by MIPv6 tunnelling, d_(CN-HA) is the mean hop count between the CN and the HA, d_(HA-AR) is the mean hop count between the HA and the AR, and d_(AR-MN) is the mean hop count between the AR and the MN. Packet delivery cost for FMIPv6. The MN receives data packets from the CN either directly or indirectly through different paths. In FMIPv6, a buffering mechanism is employed to avoid data packet loss [31]. The cost of delivering packets for predictive FMIPv6, denoted as C_PD^(Pre-FMIPv6) [20,28,32], is given in Eq 13. The packet delivery cost of reactive FMIPv6 is similar to that of predictive FMIPv6, C_PD^(Pre-FMIPv6); thus, it can also be expressed as in Eq 14 [28]. Packet delivery cost for HMIPv6. The MN receives packets from the CN while in the MAP domain. Usually, data packets are sent directly through the HA for the MN; however, in certain cases the MAP tunnels the data packets for the MN [33]. The packet delivery cost of HMIPv6, C_PD^(HMIPv6), is given in Eq 15 [20,28]. The additional term in Eq 15 represents the extra data transmission overhead incurred when tunnelling through the direct path in HMIPv6; this cost can be expressed as in Eq 16, where d_(MAP-AR) represents the average number of hops between the MAP and the AR, and d_(AR-MN) indicates the average number of hops between the AR and the MN. Packet delivery cost for PMIPv6. The CN sends data packets to the MN; the MAG receives these packets through tunnelling from the LMA [34]. The packet delivery cost of PMIPv6, C_PD^(PMIPv6) [34], can be computed using Eq 17. Packet delivery cost for TFMIPv6. Here, λ_s represents the rate at which new sessions are initiated by the MN, and E(S) denotes the average session length in terms of packets. P^(TFMIPv6) corresponds to the path cost and is given in Eq 20, where d_(CN-BMA) represents the average number of hops between the CN and the BMA, and d_(BMA-MN) indicates the average hop count between the BMA and the MN. It is important to note that the proposed approach eliminates the tunnelling overhead ϖ.
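A rough sketch of this delivery-cost bookkeeping (again an illustrative approximation, since Eqs 11-20 are not reproduced here; ω splits traffic between the direct and the tunneled indirect path, and ϖ is the per-packet tunneling overhead):

def packet_delivery_cost(pkt_size, d_direct, d_indirect, omega, varpi,
                         session_rate, avg_session_len):
    """Expected delivery cost per unit time under a two-path split."""
    direct = pkt_size * d_direct
    indirect = (pkt_size + varpi) * d_indirect  # tunneled packets carry overhead
    per_packet = (1 - omega) * direct + omega * indirect
    return session_rate * avg_session_len * per_packet

# A tunnel-free scheme corresponds to varpi = 0 with a single direct path;
# all numbers below are placeholder values.
print(packet_delivery_cost(pkt_size=1024, d_direct=8, d_indirect=12,
                           omega=0.2, varpi=40, session_rate=1.0,
                           avg_session_len=10))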
Packet tunneling cost The packet tunnelling cost C_PT^(·) is quite similar to the packet delivery cost, but its primary purpose is to examine the extra load incurred during the tunnelling process. This cost is calculated by multiplying the distance covered in each hop by the size of the IPv6 tunnelling overhead. Packet tunneling cost for MIPv6. The packet tunneling cost of MIPv6, C_PT^(MIPv6) [28,37], can be computed using Eq 21, where ϖ represents the IPv6 overhead incurred during tunneling. The corresponding terms represent the extra load incurred when tunnelling through the direct path [28]; these costs are determined using Eqs 25 and 26. Packet tunneling cost for FMIPv6. In the packet tunneling cost of predictive FMIPv6 [28], C_PT^(Pre-FMIPv6), the term 2ϖαd_(AR-AR) represents the extra load incurred due to tunnelling between the previous access router (pAR) and the new access router (nAR). Reactive FMIPv6 uses a buffering approach similar to predictive FMIPv6: when an MN undergoes a handoff procedure, the data packets intended for the MN are buffered at the previous access router (pAR) and then sent through a tunnel to the new access router (nAR). As a result, the packet tunnelling cost of reactive FMIPv6, denoted as C_PT^(Re-FMIPv6), is calculated in Eq 27. Packet tunneling cost for TFMIPv6. Here PT^(TFMIPv6) represents the additional load incurred when tunnelling through the suggested framework; its value is determined in Eqs 33-35. Since the proposed approach does not involve any tunnelling, as the data packets are sent directly to the MN, the value of PT^(TFMIPv6) is calculated to be zero. As a result, the tunnelling cost of TFMIPv6 is given in Eqs 36-38. Total cost The total cost is represented as C_T^(·), which can be expressed as the sum of two components: the packet delivery cost C_PD^(·) and the signalling cost C_BU^(·). Total cost for MIPv6. The total cost of MIPv6 is obtained by adding the costs of signalling and packet delivery specific to MIPv6. This calculation is given in Eq 39. Total cost for FMIPv6. The total cost for Pre-FMIPv6 and Re-FMIPv6 is determined by adding their individual costs of signalling and packet delivery. The specific calculations are provided in Eqs 40 and 41. Total cost for HMIPv6. The total cost for HMIPv6 is determined by adding its costs of signalling and packet delivery [28,38]. The specific calculation is given in Eq 42. Total cost for PMIPv6. The total cost for PMIPv6 is determined by adding its individual costs of signalling and packet delivery [28,35]. The specific calculation is given in Eq 43. Total cost for TFMIPv6. The total cost for TFMIPv6 is determined by adding its individual costs of signalling and packet delivery. The specific calculation is given by Eq 44. Results and discussion This section shows the findings of the signalling cost analysis of the tunnel-free mobility management approach in contrast to the existing IETF-standardized mobility management protocols. When a node changes its attachment point, it also experiences a change in its logical address, and the connection to the previous link is considered lost. Mobility management protocols are designed to enable continuous active transmissions despite the change in the node's logical address. These protocols facilitate the mobility of nodes within a network, allowing them to move across various networks and access points while sustaining uninterrupted communication sessions.
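As a quick aside, the total-cost bookkeeping defined above reduces to a simple per-protocol sum, C_T = C_BU + C_PD (the tunnelling cost is reported separately); a minimal sketch, with component values that are placeholders rather than the paper's data:

# Total cost per protocol: C_T = C_BU + C_PD. The numbers below are
# illustrative placeholders only, chosen to mirror the qualitative ranking.
costs = {
    "MIPv6":   {"C_BU": 120.0, "C_PD": 300.0},
    "PMIPv6":  {"C_BU":  70.0, "C_PD": 320.0},
    "TFMIPv6": {"C_BU":  60.0, "C_PD": 230.0},
}
for proto, c in costs.items():
    print(f"{proto:8s} C_T = {c['C_BU'] + c['C_PD']:6.1f}")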
This study aims to examine the cost associated with the existing mobility management protocols, specifically the signalling costs. To accomplish this, a simulation environment was required to evaluate the performance of both the established IETF approaches and the proposed approach regarding latency reduction, blocking probabilities, packet losses, and various cost factors. Network Simulator 2 (NS2) [36] was selected as the simulation platform for this research. NS2 was chosen due to its open-source nature, making it an accessible and widely used tool for conducting network simulations. Simulation parameters In the simulation, different parameters are set up and adjusted to monitor how the network behaves and responds. The parameters involved in the simulation process are given in Table 4; for the sake of evaluation and comparison, the numerical analysis in this study utilizes system parameter values obtained from [28,37-40]. Results are obtained for the signalling costs of the proposed framework. Signalling cost Figs 2 and 3 in our study offer a detailed comparison of the signalling costs associated with the various mobility management protocols. This analysis is conducted under the parameters of a fixed radius (R) of 500 meters, with the velocities (v) of the mobile nodes (MN) varying from 0 to 30 meters per second. In Fig 2, the effectiveness of the proposed TFMIPv6 protocol is highlighted. This protocol distinguishes itself by requiring a significantly lower number of mobility signalling messages, which are crucial for supporting mobility services. As the velocity of the MN increases, an expected rise in signalling cost is observed across all protocols. However, the TFMIPv6 method consistently maintains lower signalling costs than its counterparts. Specifically, within a tunnel-free domain, the Binding Mobility Anchor (BMA) manages the MN's mobility. During a transition between connection points by the MN, the exchange of messages is confined to the BMA and the MN, involving specific communications like Binding Acknowledgement (BAck), Binding Update (BU), New Binding Acknowledgement (NBAck), and New Binding Update (NBU).
In Fig 3, we examine a scenario where the MN's velocity is set at 20 meters per second and the radius ranges between 400 and 800 meters. The results reinforce the superior performance of our suggested TFMIPv6 framework in comparison to the other existing mobility management protocols. It is important to note that in our simulation, Re-FMIPv6 was compared against the other protocols instead of Pre-FMIPv6. This choice was informed by observations indicating Re-FMIPv6's superior performance in certain contexts and comparable efficacy in others. In the protocols MIPv6 and FMIPv6, whenever the MN alters its connection point, it communicates this change by sending a Binding Update (BU) message to both the Correspondent Node (CN) and the Home Agent (HA). On the other hand, HMIPv6 demonstrates the second-best performance, followed by PMIPv6. The reason behind this ranking lies in the local management of the MN in both HMIPv6 and PMIPv6, which effectively reduces the signalling requirements for mobility. A surprising revelation from our analysis is the underperformance of FMIPv6, as depicted in both Figs 2 and 3. This is primarily attributed to the additional signalling demands imposed by its advanced mobility management, which facilitates buffering systems for seamless and rapid handovers. In conclusion, our detailed analysis and comparison of these protocols clearly demonstrate the efficiency and effectiveness of the TFMIPv6 method in managing mobility signalling, particularly in scenarios involving high velocities and varying radii. Table 5 summarizes the results from Figs 2 and 3, capturing the comparative analysis of signalling costs for the various mobility management protocols under the specified conditions.

Table 5. Comparative analysis of signalling costs (radius 500 m, velocity 0-30 m/s).
MIPv6: Binding Update (BU) message sent to both the Correspondent Node (CN) and the Home Agent (HA) when the MN changes its connection point.
FMIPv6: Higher signalling demands due to advanced mobility management for seamless handovers; unexpected underperformance observed.
HMIPv6: Second-best performance; local management of the MN reduces signalling requirements.
PMIPv6: Similar to HMIPv6 in the local management of the MN, effectively reducing signalling needs.

Table 5 provides a concise comparison of the different protocols based on the results from the study, highlighting their performance in terms of signalling costs under varying velocities and radii. The TFMIPv6 protocol emerges as the more efficient option due to its lower signalling requirements, especially in high-velocity scenarios. To clarify the mechanisms behind TFMIPv6's reduced signalling costs, we outline the key features that distinguish TFMIPv6 from the other protocols:

1. Binding Mobility Anchor (BMA): TFMIPv6 introduces the Binding Mobility Anchor (BMA), a central component responsible for managing mobility within the tunnel-free domain. The BMA efficiently exchanges only essential messages with the mobile node (MN), reducing the signalling burden by limiting the need to communicate with multiple network entities.

2. Optimized Handover Procedures: The Make-Before-Break (MBB) methodology ensures a smooth handover process by preparing the next access point before disconnecting from the current one. This proactive approach minimizes packet loss and avoids the additional signalling that reactive handover methods require.
3. Tunnel-Free Domain: TFMIPv6 eliminates tunneling by directly routing packets between the Correspondent Node (CN) and the MN through the BMA. This reduces signalling overhead compared to protocols like MIPv6, where multiple messages are required for both Home Agent (HA) and CN registration.

4. Efficient Message Exchange: The signalling exchange in TFMIPv6 is limited to necessary messages, such as Binding Update (BU) and Binding Acknowledgement (BAck), which are handled directly between the BMA and the MN. Other protocols like FMIPv6 require additional buffering or tunneling messages that increase signalling costs.

5. Localized Management: Unlike protocols that rely heavily on centralized entities, TFMIPv6 manages mobility locally through the BMA. This localized approach reduces the signalling load by minimizing the distance and frequency of signalling message exchanges.

These technical advantages of TFMIPv6 are clearly reflected in the reduced signalling costs demonstrated in our comparative analysis. By streamlining message exchanges, optimizing handover procedures, and eliminating tunneling, TFMIPv6 consistently delivers lower signalling costs than the other mobility management protocols.

Packet delivery cost

In our study, Figs 4 and 5 are pivotal in analyzing the cost implications associated with packet delivery across the different mobility management protocols. These figures offer insights into how varying parameters impact the efficiency of these protocols in a real-world setting. Fig 4 focuses on the relationship between the packet delivery cost and different values of λ_s (the arrival rate of packets), while keeping ω (the weight factor for secondary path usage) constant at 0.2 and E(S) (the expected session duration) at 10. This graph clearly illustrates an upward trend in packet delivery costs as λ_s increases. Among the protocols evaluated, the TFMIPv6 method emerges as the most cost-effective, showcasing its ability to handle increasing packet arrival rates without a significant rise in delivery costs. MIPv6, while not as efficient as TFMIPv6, still performs commendably, securing a solid second place in terms of cost-effectiveness. Moving to Fig 5, the parameters are adjusted to set λ_s at 1 and E(S) at 10, with ω varying from 0.1 to 1. This configuration brings to light some intriguing observations. The TFMIPv6, PMIPv6, and HMIPv6 protocols show remarkable resilience to changes in ω, maintaining consistent packet delivery costs regardless of the variation in the weight factor. This suggests a robust design in these protocols, capable of adapting to different network conditions without incurring additional costs. Conversely, FMIPv6 and MIPv6 exhibit significant sensitivity to changes in ω. As ω increases, there is a marked escalation in the packet delivery costs for these two protocols. This is primarily due to a higher proportion of data packets taking the secondary route, which in turn increases the overall cost. Notably, FMIPv6 fares the worst in this scenario. This is attributed to the excessive tunnelling that FMIPv6 requires, which adds to the cost and complexity of packet delivery.
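A small sketch can reproduce the qualitative ω-dependence described above. It follows the Eq 13 pattern, C_PD = ω λ_s E(S) P_I + (1 − ω) λ_s E(S) P_D, with purely illustrative per-path unit costs P_I and P_D; protocols whose delivery path does not depend on ω are modeled with P_I = P_D, which is why their curves stay flat.

```python
# Sketch of the omega-sweep behind Fig 5: packet delivery cost as a weighted
# mix of indirect- and direct-path costs (Eq 13 pattern). The per-path unit
# costs below are illustrative placeholders, not values from the paper.

lam_s = 1.0   # packet arrival rate, lambda_s, as in the Fig 5 setup
E_S = 10.0    # expected session duration, E(S)

# (P_I, P_D): indirect- and direct-path unit delivery costs per protocol.
paths = {
    "MIPv6":  (8.0, 3.0),
    "FMIPv6": (12.0, 3.0),   # extra tunnelling inflates the indirect path
    "HMIPv6": (4.0, 4.0),    # P_I == P_D: cost independent of omega
    "PMIPv6": (4.0, 4.0),
    "TFMIPv6": (2.0, 2.0),   # direct routing via the BMA, no tunnel
}

def delivery_cost(p_i: float, p_d: float, omega: float) -> float:
    """Eq 13 pattern: C_PD = w*lam_s*E(S)*P_I + (1-w)*lam_s*E(S)*P_D."""
    return omega * lam_s * E_S * p_i + (1 - omega) * lam_s * E_S * p_d

for omega in (0.1, 0.5, 1.0):
    row = ", ".join(f"{name}={delivery_cost(p_i, p_d, omega):.0f}"
                    for name, (p_i, p_d) in paths.items())
    print(f"omega={omega}: {row}")
```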
Overall, these findings from Figs 4 and 5 offer a comprehensive view of how the different mobility management protocols perform under varying conditions. They underscore the cost-effectiveness and adaptability of the TFMIPv6 method, particularly in handling diverse network scenarios and traffic conditions. They also highlight areas where other protocols, such as FMIPv6 and MIPv6, may face challenges. This nuanced analysis provides valuable insights for network administrators and developers and guides future enhancements of mobility management protocols. Table 6 summarizes the packet delivery cost implications of the various mobility management protocols under different conditions.

Table 6. Packet delivery cost implications of the mobility management protocols.
TFMIPv6: Most cost-effective; handles increasing packet arrival rates well without a significant rise in delivery costs.
MIPv6: Second in cost-effectiveness; performs well with increasing λ_s.
PMIPv6: Maintains consistent costs across varying ω, indicating a robust design.
HMIPv6: Similar performance to TFMIPv6 and PMIPv6; adaptable to ω changes without additional costs.
FMIPv6: Sensitive to ω changes, leading to increased packet delivery costs; particularly affected by excessive tunnelling.
MIPv6: Exhibits sensitivity to changes in ω, with costs escalating as ω increases.

Table 6 concisely summarizes the key findings from Figs 4 and 5, highlighting how the different protocols perform in terms of packet delivery costs under various conditions. The TFMIPv6 protocol is cost-effective and adaptable, especially in scenarios with changing packet arrival rates and secondary path usage. Conversely, protocols like FMIPv6 and MIPv6 show sensitivity to these changes, resulting in higher packet delivery costs.

Tunneling cost

Fig 6 in our study presents a detailed and critical analysis of the tunnelling costs associated with the various mobility management protocols, with a particular emphasis on how they compare to the proposed TFMIPv6 method. The settings for this analysis involve a constant ω (weight factor for secondary path usage) of 0.2 and E(S) (expected session duration) set at 10. While these parameters mirror those used in the packet delivery cost analysis, there is a notable and significant difference in the context of packet tunnelling.

The key highlight from Fig 6 is the distinct advantage of the TFMIPv6 method in tunnelling costs. Unlike the other protocols examined, TFMIPv6 is not affected by the costs associated with packet tunnelling. This is because TFMIPv6 employs a tunnel-free approach, effectively eliminating the additional overheads and complexities that tunnelling typically introduces. In contrast, the other protocols, which rely on packet tunnelling for mobility management, show varying degrees of cost increase due to this process. The tunnelling of packets, while a necessary component for these protocols, adds a layer of cost that cannot be overlooked. It involves additional steps in the data transmission process, potentially leading to increased latency and resource consumption. This distinction sets TFMIPv6 apart from its counterparts and underscores its efficiency and cost-effectiveness. By avoiding the tunnelling process, TFMIPv6 not only simplifies the mobility management process but also presents a more streamlined and economically viable option for handling mobile data transmission.
Overall, Fig 6 offers a compelling argument for the TFMIPv6 method, especially considering the cost implications of packet tunnelling in mobility management. This insight is particularly valuable for network designers and administrators seeking to optimize mobility management protocols while keeping operational costs in check. The tunnel-free nature of TFMIPv6 highlights its potential as a superior alternative in mobile network management. Table 7 summarizes the findings from Fig 6, focusing on the tunnelling costs associated with the various mobility management protocols and their comparison with the TFMIPv6 method. Table 7 captures the essence of Fig 6, highlighting the significant advantage of the TFMIPv6 method in tunnelling costs. TFMIPv6 stands out due to its tunnel-free approach, which effectively removes the additional overheads and complexities inherent in the other protocols that utilize packet tunnelling. This distinction emphasizes the efficiency and cost-effectiveness of TFMIPv6, making it an appealing option for network designers and administrators focused on optimizing mobility management protocols while managing operational costs.

Total cost

In our detailed analysis in Fig 7, we examine the total cost associated with the various mobility management protocols. The cornerstone of this analysis is the Session Mobility Ratio (SMR), denoted as S_F. SMR is a critical metric, calculated as λ_s/μ_c, where λ_s represents the session arrival rate and μ_c denotes the mobility rate [29,41,42]. This metric, often employed in mobile networking performance evaluations, is akin to the call-to-mobility ratio, providing a balanced perspective on both the communication and mobility aspects of the network. The configuration for this analysis in Fig 7 includes specific parameter settings: ω is fixed at 1, λ_s is set to 0.2, the radius (R) is maintained at 500 meters, and the velocity (v) varies from 5 to 50 meters per second. These parameters form the basis for comprehensively evaluating the total cost incurred by the different protocols under various mobility conditions. The findings depicted in Fig 7 are particularly revealing. The TFMIPv6 strategy emerges as the most cost-effective approach among all the protocols evaluated. Its performance in managing the costs associated with mobility management is notably superior, setting it apart from the alternatives. Following TFMIPv6, PMIPv6 and HMIPv6 also show commendable performance, indicating their effectiveness in managing costs, albeit not to the same extent as TFMIPv6. A key observation from this analysis is the resilience of the TFMIPv6 strategy against changes in SMR. Even as the SMR increases, which typically could impact the balance between session management and mobility management costs, TFMIPv6 maintains its leading position. This consistency in performance, irrespective of the rise in SMR, underscores the robustness and efficiency of the TFMIPv6 strategy in managing total costs in mobile networks. In summary, Fig 7 not only highlights the superior cost-efficiency of the TFMIPv6 strategy but also demonstrates its steady performance across varying network conditions. This insight is invaluable for network planners and engineers in designing and optimizing mobile networks for efficiency and cost-effectiveness, making TFMIPv6 a potentially preferred choice in mobility management solutions. Table 8 summarizes the results from Fig 7, focusing on the total cost analysis of the various mobility management protocols based
on the session-to-mobility ratio (SMR). Table 8 encapsulates the findings from Fig 7, showcasing the TFMIPv6 strategy as the most cost-effective approach, particularly in managing mobility management costs under varying network conditions. TFMIPv6's resilience against changes in SMR is a key highlight, indicating its robustness and efficiency in managing total costs in mobile networks. The table also compares the performance of PMIPv6 and HMIPv6, acknowledging their effectiveness but noting their limitations compared to TFMIPv6. The analysis provides essential insights for network planners and engineers in optimizing mobile networks for efficiency and cost-effectiveness. The strengths and weaknesses of the TFMIPv6 protocol are as follows:

Strengths of TFMIPv6
• Maintains cost-efficiency even with increased SMR, demonstrating resilience and robustness.
• Outperforms PMIPv6 and HMIPv6, which perform well but not as efficiently as TFMIPv6.

Limitations of TFMIPv6
• Scalability Concerns: The protocol's performance might be affected in extremely high-density mobile environments, where signalling overhead may still accumulate despite the reduced tunneling.
• Complexity in Implementation: Developing an efficient DMM protocol requires a precise design to balance signalling and data delivery costs. The implementation demands careful planning to ensure compatibility with existing network architectures.
• Interoperability Issues: Different vendors and standards may lead to interoperability challenges in hybrid environments where nodes use different protocols, requiring complex translation or adaptation layers.

Comparison with other protocols
• Re-FMIPv6: Shows superior performance in certain contexts but incurs higher signalling costs than TFMIPv6.
• MIPv6 and FMIPv6: Both protocols exhibit significant sensitivity to changes in the weight factor, resulting in escalating packet delivery costs due to increased reliance on secondary paths.
• HMIPv6 and PMIPv6: Both demonstrate commendable performance in reducing signalling costs through local management of mobile nodes. However, their overall cost-effectiveness does not match TFMIPv6 due to their persistent reliance on tunneling.

Practical advantages
• TFMIPv6's simplified signalling model makes it suitable for networks with fluctuating mobility demands.
• The tunnel-free approach is particularly advantageous for latency-sensitive applications, ensuring faster data packet delivery.

These limitations and advantages highlight that while TFMIPv6 and similar DMM protocols have promising benefits, they require careful design and implementation to ensure robust, efficient, and secure mobility management.
Conclusions and future research

In this paper, we introduce TFMIPv6, a novel tunnel-free protocol for distributed mobility management. Through extensive comparative analysis and simulation, our results demonstrate that TFMIPv6 significantly reduces signalling, packet delivery, tunneling, and total costs compared to the other existing protocols. The key findings from our study include:

(1) Signalling costs: TFMIPv6 achieves up to a 50% reduction in signalling costs due to its use of the Binding Mobility Anchor (BMA), which confines message exchanges to a localized domain. The Make-Before-Break (MBB) methodology also ensures efficient handovers, reducing signalling overhead during network transitions.
(2) Packet delivery costs: By directly routing packets through the BMA and eliminating tunneling, TFMIPv6 minimizes delivery delays and achieves a 23% reduction in packet delivery costs.
(3) Tunneling costs: TFMIPv6 eliminates tunneling entirely, removing the complexity and cost often associated with the other protocols.
(4) Total costs: With improvements in signalling, packet delivery, and tunneling, TFMIPv6 achieves a 13% reduction in total costs, proving its cost-effectiveness and robustness.

The signalling cost of Re-FMIPv6 consists of three components: the signalling cost for handover preparation C_(BU-ready)^(Re-FMIPv6), the signalling cost for CN registration C_(BU-CN)^(Re-FMIPv6), and the signalling cost for HA registration C_(BU-HA)^(Re-FMIPv6). Denoting the signalling cost of Re-FMIPv6 as C_BU^(Re-FMIPv6), it is given in Eq 5.

C_PD^(Pre-FMIPv6) = ω λ_s E(S) P_I^(Pre-FMIPv6) + (1 − ω) λ_s E(S) P_D^(Pre-FMIPv6)   (13)

where P_D^(Pre-FMIPv6) represents the cost associated with the direct path in predictive FMIPv6. The corresponding packet delivery cost of reactive FMIPv6 is C_PD^(Re-FMIPv6).

C_PD^(HMIPv6) = λ_s E(S) P_D^(HMIPv6)   (15)

where PT_D^(PMIPv6) represents the additional data transmission overhead involved in direct-path tunnelling of PMIPv6. This overhead is given in Eq 18,

PT_D^(PMIPv6) = ϖ a d_(LMA-AR)   (18)

where d_(LMA-AR) is the mean hop count between the AR and the LMA.

Packet delivery cost for TFMIPv6. The data packets sent towards the MN by the CN are directed to the present position of the MN via the BMA. The packet delivery cost of TFMIPv6, C_PD^(TFMIPv6), is given in Eq 19.

C_PT^(MIPv6) = ω λ_s E(S) PT_I^(MIPv6) + (1 − ω) λ_s E(S) PT_D^(MIPv6)   (21)

where PT_I^(MIPv6) represents the additional load caused by tunnelling through the indirect path and PT_D^(MIPv6) the extra load incurred when tunnelling through the direct path; these can be calculated from Eqs 22 and 23.
The packet tunneling cost of Predictive FMIPv6, C_PT^(Pre-FMIPv6), is expressed by using Eq 24,

C_PT^(Pre-FMIPv6) = ω λ_s E(S) PT_I^(Pre-FMIPv6) + (1 − ω) λ_s E(S) PT_D^(Pre-FMIPv6)   (24)

where PT_I^(Pre-FMIPv6) stands for the additional load caused by tunnelling through the indirect path and PT_D^(Pre-FMIPv6) for that through the direct path.

Packet tunneling cost for HMIPv6. The cost of packet tunneling of HMIPv6 [28,38], C_PT^(HMIPv6), is determined by using Eq 28,

C_PT^(HMIPv6) = λ_s E(S) PT_D^(HMIPv6)   (28)

where PT_D^(HMIPv6) represents the additional load incurred when tunnelling through the direct path in HMIPv6 and is given in Eq 29,

PT_D^(HMIPv6) = ϖ a d_(MAP-AR) + ϖ b d_(AR-MN)   (29)

Packet tunneling cost for PMIPv6. The cost of tunnelling of packets of PMIPv6, C_PT^(PMIPv6), is determined in Eq 30, where PT_D^(PMIPv6) represents the additional load incurred when tunnelling through the direct path in PMIPv6. Its value is specified in Eq 31,

PT_D^(PMIPv6) = ϖ a d_(LMA-AR)   (31)

Packet tunneling cost for TFMIPv6. The cost for packet tunnelling of TFMIPv6, C_PT^(TFMIPv6), is determined by the formula given in Eq 32,

C_PT^(TFMIPv6) = λ_s E(S) PT^(TFMIPv6)   (32)

Table 7. Total cost analysis of the various mobility management protocols based on the Session-to-Mobility Ratio (SMR).

Fig 7 highlights TFMIPv6's cost-effectiveness across a range of Session-to-Mobility Ratios (SMR). It outperforms the other protocols, demonstrating its resilience and robustness, particularly under varying network conditions.
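As a worked illustration of how the Fig 7 curves arise from C_T = C_BU + C_PD, the sketch below sweeps the SMR. The assumption that the signalling component scales with the mobility rate μ_c = λ_s/SMR, and all unit-cost numbers, are illustrative placeholders rather than values from the paper.

```python
# Sketch of the total-cost comparison behind Fig 7: C_T = C_BU + C_PD
# (Eqs 39-44), plotted against the session-to-mobility ratio SMR = lam_s/mu_c.
# Per-protocol unit costs below are illustrative placeholders.

lam_s = 0.2  # session arrival rate, as in the Fig 7 setup

# (per-handover signalling cost, per-session delivery cost) placeholders.
unit = {"MIPv6": (10.0, 6.0), "FMIPv6": (14.0, 8.0), "HMIPv6": (6.0, 5.0),
        "PMIPv6": (7.0, 5.0), "TFMIPv6": (4.0, 3.0)}

def total_cost(protocol: str, smr: float) -> float:
    """C_T = C_BU + C_PD, with C_BU proportional to the mobility rate mu_c."""
    mu_c = lam_s / smr            # mobility rate implied by the SMR
    c_bu_unit, c_pd_unit = unit[protocol]
    c_bu = mu_c * c_bu_unit       # signalling scales with handover frequency
    c_pd = lam_s * c_pd_unit      # delivery scales with session arrivals
    return c_bu + c_pd

for smr in (0.5, 1.0, 2.0, 5.0):
    row = ", ".join(f"{p}={total_cost(p, smr):.2f}" for p in unit)
    print(f"SMR={smr}: {row}")
```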
8,680
2024-08-07T00:00:00.000
[ "Computer Science", "Engineering" ]
Substrate-tuning of correlated spin-orbit oxides revealed by optical conductivity calculations We have systematically investigated substrate-strain effects on the electronic structures of two representative Sr-iridates, a correlated-insulator Sr2IrO4 and a metal SrIrO3. Optical conductivities obtained by the ab initio electronic structure calculations reveal that the tensile strain shifts the optical peak positions to higher energy side with altered intensities, suggesting the enhancement of the electronic correlation and spin-orbit coupling (SOC) strength in Sr-iridates. The response of the electronic structure upon tensile strain is found to be highly correlated with the direction of magnetic moment, the octahedral connectivity, and the SOC strength, which cooperatively determine the robustness of Jeff = 1/2 ground states. Optical responses are analyzed also with microscopic model calculation and compared with corresponding experiments. In the case of SrIrO3, the evolution of the electronic structure near the Fermi level shows high tunability of hole bands, as suggested by previous experiments. conductivity. Note that a similar approach was applied to honeycomb iridate systems to successfully explain key experimental findings 9 . Hybrid functional scheme with inclusion of the SOC term is employed, and the results are analyzed and compared with various experimental strain studies on Sr-iridates, especially, with optical experiments 8,[10][11][12][13] . We have found that the tensile strain on 214 system can effectively tune the strengths of both electronic correlation and the SOC. Strong interplay among the moment direction, the SOC, and the substrate strain in the J eff = 1/2 ground state is reflected in the optical conductivities as peak shifts or intensity changes of α and β optical peaks. On the other hand, in semimetallic 113 system, upon strain, the J eff = 1/2 electronic structure is found to be rather fragile, but low energy physics coming from narrow hole bands is found to be easily tunable. Results Sr 2 IrO 4 . Tensile strain increases both Ir-O-Ir angle (θ) and Ir-O bond length (d) of IrO 6 octahedron, as shown in Fig. 1(d). The increases in θ and d play mutually competing roles, as the former enhances the bandwidth (W), while the latter localizes 5d electrons to increase effective Coulomb correlation (U). Recent optical experiment on 214 system showed the systematic shift of α-peak with enhanced broadening upon tensile strain 11 . This feature was explained by the enhancements of both U and W, which increase the separation of UHB and LHB and makes both bands more dispersive, respectively. As typical temperature-dependent behavior shows the enhancement of one parameter with simultaneous suppression of the other, the enhancements of both U and W are quite unusual 14 . To cover the epitaxial strain range of experimental reports, we have chosen LaAlO 3 (LAO), SrTiO 3 (STO), and GdScO 3 (GSO) substrates. As shown in Fig. 2, LAO and GSO substrates yield compressive and tensile strains, respectively, with + 1.9% and − 3.2% enhancements of c/a ratio compared to bulk 11 . In the case of the STO substrate, the lattice mismatch is small, and so the corresponding c/a ratio change is as small as − 0.6%. Optimized c/a ratio changes of LAO (+ 1.2%), STO (− 2.1%), and GSO (− 5.3%) cover well the experimental results. Ir-O-Ir bond angle (θ) and Ir-O bond length (d) of corresponding 214 systems are summarized in Table 1. 
Our calculation results for 214 films demonstrate a more prominent role of U than W upon strain. As shown in Table 2, both spin and orbital magnetic moments systematically increase as the substrate is changed from LAO to GSO, along with corresponding shifts of the optical peaks. In accordance with our results, a recent resonant inelastic X-ray scattering (RIXS) experiment observed that the most significant effect of substrate change is the variation of bond lengths, which is manifested in the strengthening (weakening) of the magnetic interaction of the 214 film upon compressive (tensile) strain 12 . To get further insight into the role of the strain and to directly compare with the experiments, we have calculated the optical conductivity, σ(ω), using the ab initio band methods as described above. Figure 3(a) presents the calculated σ(ω)'s for the 214 system on different substrates. σ(ω) for bulk is also presented for comparison. The two-peak structure (α and β) is clearly manifested. Note that, upon tensile strain, the position of the α peak is shifted to a higher energy side. As schematically depicted in Fig. 1(b,c), this feature is suggestive of the enhancement of effective U, which also agrees with the increase in the magnetic moment upon strain (Table 2). In contrast, the β peaks are not affected much by the strain, which suggests the different nature of the α and β peaks (see Fig. 1(b,c)). The peak positions of (α, β) are (0.61, 1.05), (0.67, 1.05), and (0.71, 1.02) eV for LAO, STO, and GSO, respectively, which agree well with the existing experiment 11 . It is seen in Fig. 3 that the optical spectrum becomes broadened upon strain. This strain-dependent broadening is interpreted as the increased itinerancy due to the change in the bond angle 11 . Despite the prominent role of U, as revealed by the shift of the α-peak, the broadening of the optical spectrum would not be well described in our approach due to the lack of dynamical effects 15 . Thus, a possible explanation of the broadening in Fig. 3 is that the tensile strain enhances the effective U, which reduces the coherency of the electrons. Then, without much change in the band width W, there occurs a broadening of the peaks. The difference between the temperature and the strain dependence of the optical conductivities can be attributed to the altered coherency due to the effective U variation 14 , which is a subject of further studies. Due to the two-dimensional (2D) nature of the 214 system, the overall optical responses are composed of in-plane characters only (σ xx and σ yy ). The strain-dependent density of states (DOS), band structure, and hopping parameters are provided in the supplement materials.

Table 2. Calculated spin and orbital magnetic moments, their ratio, and the peak intensity ratio (μ S , μ O , μ O /μ S , and I β /I α ) for the 214 system on different substrates. I β /I α here is defined by A β ε α /A α ε β , as described in Eq. (1) and below. Bulk results are also given for comparison. The unit of μ S and μ O is μ B /Ir.

According to previous studies on J eff = 1/2 systems, the SOC and the tetragonality are crucial parameters to stabilize the in-plane ordering of the system 6,16,17 . To investigate the roles of the SOC and the magnetic moment direction in determining the strain-dependent electronic structure of the system, we analyzed σ(ω)'s (i) for different magnetic moment directions, the real in-plane (IP) and a hypothetical out-of-plane (OOP) antiferromagnetic (AFM) ordering, and (ii) for normal and enhanced SOC strengths.
As discussed below, the OOP configuration is related to the magnetic structure of the Sr 3 Ir 2 O 7 (327) system. In Fig. 3(b), the calculated σ(ω)'s for the OOP case are plotted. Compared to the IP case, the OOP case shows a quite different response of the electronic structure to the substrate strain. The overall shifts are very large for the OOP case. As the substrate changes from LAO to GSO, the α peak positions change by 0.10 eV and 0.24 eV for the IP and OOP cases, respectively, while the β peak positions change by −0.03 eV and 0.28 eV for the IP and OOP cases, respectively. Namely, when the 214 system has the IP-AFM ordering, the electronic structure is rather robust against the epitaxial strain, whereas, when the system has the OOP-AFM ordering, the overall electronic structure becomes more susceptible to the strain. In fact, Boseggia et al. 18 linked the IP magnetic ordering in 214 to the J eff = 1/2 electronic structure, on the basis of its insensitiveness to the structural distortion, which is in agreement with our calculations. When the SOC strength of the system is doubled (2 × SOC), the most pronounced effect is the large shift-down in energy of the J eff = 3/2 state, as schematically plotted in Fig. 1(b,c), which is reflected by the huge shift-up of the β peak in Fig. 3(c). Another notable change is the reduction in the relative intensity of the α and β peaks (I β /I α ). As each substrate case has a different ω β /ω α value (1.72, 1.57, and 1.44 for LAO, STO, and GSO in the 1 × SOC IP case) and as there is a 1/ω dependence in the optical conductivity, the intensity is not to be defined by the height of each peak. We have quantitatively analyzed the intensities within a two-peak picture, taking into account the 1/ω dependence of the optical conductivity curve, and fitted the data with the following Lorentzian-type equation:

σ(ω) = (1/ω) Σ_{j=α,β} (A_j /π) ε_j / [(ω − ω_j)² + ε_j²]   (1)

where we can define the peak intensity at each frequency position as I α = A α π −1 /ε α or I β = A β π −1 /ε β . As Kim et al. 19 have shown, the β peak, which is thought to arise from the transition from the low-lying J eff = 3/2 band to the J eff = 1/2 UHB in a simple picture, has in fact large J eff = 1/2 LHB contributions. With increasing SOC parameter, the J eff = 1/2 and J eff = 3/2 bands are decoupled, and I β /I α is diminished because of the reduction of the J eff = 1/2 contribution to the β peak. Namely, the effective increase of the SOC strength can be identified by the decrease of I β /I α . We can clearly see the reduction of I β with respect to I α for the 2 × SOC cases in Fig. 3(c,d), regardless of moment directions and substrate types (see Table 2). Surprisingly, the I β /I α ratio is found to decrease systematically upon strain, as shown in Table 2 for the different substrate-strain cases. This feature suggests that the tensile strain acts similarly to an increased SOC strength. The ratio of orbital and spin magnetic moments (μ O /μ S ) also shows a similar trend. As the tensile strain is applied, the μ O /μ S value increases and approaches 2 (see Table 2), which corresponds to the value for the ideal J eff = 1/2 state in the strong SOC limit. The β peak shift, which occurs for increased SOC strength (2 × SOC), has been observed in the experiment 11 , even though it is not identified within our studied substrate-strain range. This feature indicates that the SOC can be enhanced effectively by means of the tensile strain. However, according to the atomic microscopic model, the strain-dependent hopping parameter is also found to produce similar optical behavior for a fixed SOC strength.
Thus the overall optical behaviors are expected to come from the combined effects of both the SOC strength and the hopping parameters. For the OOP-AFM case, upon tensile strain, a similar reduction of I β /I α is obtained, but μ O /μ S decreases, as opposed to the IP case (see Table 2). This feature occurs due to the eventual breakdown of the J eff = 1/2 electronic state rather than an increase in the SOC strength. Table 3 provides the band gap dependence on the magnetic moment direction in the 214 system. Considering that the ideal J eff = 1/2 picture is validated in the insulating limit, the overall increasing behavior of the band gap upon strain is quite reasonable 20 .

Table 3. Band gaps (in eV) of the 214 system on different substrates, depending on the SOC strength and magnetic moment direction. Bulk results are also given for comparison.

To confirm the enhanced U and SOC behaviors upon strain, we obtained σ(ω) using microscopic model calculations with varying physical parameters. Figure 4(a,b) presents σ(ω)'s with respect to U and λ, respectively. The dominant optical spectra are attributed to the electron-hole (e-h) excitations in the vicinity of the Mott gap. With increasing U, the optical peaks shift up due to the enhancement of the Mott gap. In addition, the shape of the optical spectrum varies depending on the U values. A change from a three-peak to a two-peak structure is observed. An interesting finding is that the middle peak is depleted when the shape of the optical spectrum changes. This is expected to occur due to the Fano-type coupling between the spin-orbit (SO) exciton and the e-h excitation of the J eff = 1/2 band 19 . Whether the three-peak structure really appears in σ(ω) of iridates is not so certain, because the four-site cluster we have considered in Fig. 4 may not be sufficient to describe the full kinetics of the lattice. However, it is legitimate to infer that some optical spectral-weight transfer to the higher peak (β peak) occurs with increasing U, which corresponds to the tensile strain behavior. When the SOC increases, the splitting between the J eff = 1/2 and 3/2 bands increases and the Mott gap is slightly enhanced. These features are well reflected in the optical conductivity shown in Fig. 4(b). The lowest energy peak becomes slightly higher and the highest energy peak shifts up somewhat when λ increases. As in the case of weak U (< 1.8 eV), a three-peak structure appears for large λ (> 0.45 eV). This happens because the Fano-type coupling is weakened as the excitation energy of the SO exciton becomes higher than that of the e-h excitations. Because some spectral weights depleted for small λ are recovered for large λ, the spectral weight near the β peak diminishes when λ becomes larger. The I β /I α behavior shows a reduction upon increasing λ, which also mimics the tensile-strain effect in the ab initio-based optical data, suggesting an effective increase of the SOC strength. Note that, in the current model approach, I β /I α is also affected by the change in the hopping parameters due to substrate strain, namely, the enhanced hopping between J eff = 1/2 bands and the reduced hopping between J eff = 1/2 and J eff = 3/2 bands under the tensile strain. Thus, regarding the peak intensities, the enhanced optical spectral weight of the e-h excitation of J eff = 1/2 is expected to yield a similar effect to the enhanced SOC strength. In general, care should be taken in applying a low-energy atomic model to an itinerant 5d system.
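Where the two-peak analysis of Eq (1) is applied in practice, the intensity extraction can be sketched as a least-squares fit of two Lorentzians with the 1/ω weighting. The snippet below is a minimal sketch on synthetic data, assuming the Eq (1) form reconstructed above and peak positions near the quoted STO values; the amplitudes, widths, and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-peak fit of sigma(omega) with the 1/omega weighting discussed in the
# text. Intensities follow I = A / (pi * eps); all numbers are illustrative.

def two_lorentzians(w, A_a, w_a, e_a, A_b, w_b, e_b):
    """sigma(w) modeled as a sum of two Lorentzians divided by omega."""
    L_a = (A_a / np.pi) * e_a / ((w - w_a) ** 2 + e_a ** 2)
    L_b = (A_b / np.pi) * e_b / ((w - w_b) ** 2 + e_b ** 2)
    return (L_a + L_b) / w

# Synthetic "measured" spectrum for demonstration purposes only,
# with peaks near the quoted STO positions (0.67 eV, 1.05 eV).
w = np.linspace(0.3, 1.6, 300)
true_params = (0.8, 0.67, 0.10, 1.2, 1.05, 0.15)
rng = np.random.default_rng(0)
sigma = two_lorentzians(w, *true_params) + 0.01 * rng.standard_normal(w.size)

popt, _ = curve_fit(two_lorentzians, w, sigma, p0=(1, 0.6, 0.1, 1, 1.0, 0.1))
A_a, w_a, e_a, A_b, w_b, e_b = popt

# Peak intensity ratio as defined in the text:
# I_beta/I_alpha = (A_b/e_b) / (A_a/e_a) = A_b*e_a / (A_a*e_b).
print(f"alpha peak at {w_a:.2f} eV, beta peak at {w_b:.2f} eV")
print(f"I_beta/I_alpha = {(A_b / e_b) / (A_a / e_a):.2f}")
```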
Since the intensity of σ(ω) in the model approach is obtained by the sum of the four possible spectral weights from d 4 -d 6 multiplet configurations 19 , the analysis of each spectral weight upon parameter change is possible. In the ab initio methods, however, the strain-dependent change in I β /I α can be the result of cooperative changes in many physical parameters, not solely of the SOC strength. As we have seen in the IP and OOP cases, the additional information on the μ O /μ S change is necessary to conclude that the primary tuning parameter in the IP case is the SOC strength, while it is not in the OOP case. The opposite behavior of μ O /μ S for IP and OOP can also be understood in terms of a simple atomic picture. For a state close to the ideal J eff = 1/2 state, μ O /μ S can be expressed as in Eqs (2) and (3), where δ = 2Δ/λ (λ: SOC strength) represents the small deviation from the ideal cubic case due to the tetragonal crystal field splitting (Δ) (see the supplement materials for the derivation). Considering the itinerant character of the 5d system, the atomic model may not give a full description of the system, but the strain dependency is expected to be well described. As δ becomes more negative upon tensile strain, the IP (OOP) case shows a clear increase (decrease) in μ O /μ S . The more rapid decrease for the OOP case agrees well with the tendency shown in Eqs (2) and (3) (see Table 2). The different strain dependence between the 214 and Sr 3 Ir 2 O 7 (327) systems is also expected to come from the different magnetic moment directions, as the former and the latter have IP- and OOP-AFM orderings, respectively. The strain dependence of the 327 system resembles the hypothetical OOP-AFM phase of the 214 system 15 , which suggests that the response of the electronic structure upon strain is more related to the moment direction than to the dimensionality of the Sr-iridates. The tensile strain can effectively change the J eff = 1/2 nature of the system through the change of the moment direction as well as the change in the electronic correlation 17 . In conjunction with recent analysis of resonant X-ray scattering of iridate systems, we corroborate that the moment direction plays the role of another degree of freedom that can be tuned using substrate engineering, especially for a system with many competing energy scales 21 . Note that the different responses upon strain between the IP and OOP cases can also be viewed as an increased anisotropy in the electronic structure. The isotropic J eff = 1/2 ground state becomes anisotropic due to the crystal field δ coming from the strain, and the relatively larger change in the electronic structure shown in the OOP case can be interpreted as a stronger dependence of the electronic structure on the tetragonal distortion δ, as shown by the μ O /μ S behaviors (see Eqs (2) and (3)). The 2 × SOC cases show overall similar strain trends, but with more robustness of the electronic structures. As can be seen from the increased μ O /μ S along with the reduced I β /I α (Table 2), the electronic structure for 2 × SOC becomes closer to that of the J eff = 1/2 state, which is reflected by the highly reduced optical peak shifts upon external strain in Fig. 3(c). According to the model by Jackeli et al. 6 , the direction of the magnetic moment of the J eff = 1/2 system can be switched from IP to OOP by changing the local crystal-field splitting.
Indeed, a recent first-principles study 17 showed that the change of the magnetic order from in-plane to out-of-plane occurs when the ratio of the apical and planar Ir-O bond lengths exceeds 1.09. This value, however, is at or beyond the limit of coherent growth of perovskite oxides through epitaxy. Our substrate strain covers the bond length ratio from 1.045 (LAO) to 0.998 (GSO), and so the stable IP magnetic order is retained for the entire studied substrate-strain range. This feature is also supported by our energetics study, which shows that the IP-AFM structure is more stable than the OOP-AFM by 100 meV/f.u. In fact, to flop the magnetic moment of the 214 system, direct doping of magnetic ions seems to be much more efficient 22 . The substrate tuning approach is expected to be efficient rather for the 327 bilayer system, in which the energy difference between IP-AFM and OOP-AFM is much smaller. The 2 × SOC case shows an even larger energy difference between IP-AFM and OOP-AFM, which suggests a strong interconnection between the J eff = 1/2 electronic structure and the magnetic moment direction of the system. On the basis of the above studies, it is worthwhile to check the possible magneto-electric effect in the 214 system. As the tensile strain increases, the overall electronic structures of the 214 system for the IP and OOP cases become progressively distinct, which is revealed by the differences in I β /I α , μ O /μ S , and the optical conductivity shapes for the different substrates (Table 2 and Fig. 3). The different electronic structures for the IP and OOP moment directions can be utilized to generate a strong magneto-electric effect, especially for a strained system, for example by applying a strong magnetic field 23 . Namely, the control of the electronic structure, such as the optical gap, would be feasible by employing strained iridate systems.

SrIrO 3 . Differently from the 214 system, the 113 system is known as a correlated metal with semimetallic character, being located at the boundary of the magnetic metal and magnetic insulator in the phase diagram 4,15,24,25 . We have found that the 113 system is a paramagnetic metal for the entire studied substrate-strain range. In the 113 system, the response of the electronic structure to the epitaxial strain is expected to be reduced with respect to the 214 system, due to the 3D connectivity of the IrO 6 octahedra. As shown in Fig. 1(d), in the 113 system, the planar strain effects are expected to be compensated by the change in the apical connectivity of the IrO 6 network, which can be seen in the apical and in-plane bond length and bond angle variations upon strain (see Table 4). Accordingly, the overall change of the electronic structure is suppressed, in stark contrast to the 214 system, which exhibits a larger band width variation through direct control of the orthorhombic distortion. The related change of the hopping parameters is presented in the supplement materials. An optical experiment for the 113 system has shown that the β peak position is shifted to a higher energy side as the tensile strain is applied, while the α peak is not clearly identified 8 . We also obtained a shift of the β peak by 0.06 eV from the LAO to the GSO substrate (Fig. 5(a)). The α peak, which has not been identified in experiment, appears in our calculation due to the incapability of describing the dynamical correlation effect 15,26 .
Since the 113 system is weakly correlated, a careful change of the relative W and U parameters using substrate strain would produce the correlated three-peak structure in the DOS and locate the α peak in the vicinity of the Drude part. Due to the 3D connectivity of the 113 system, the α peak shifts are highly suppressed (0.03 eV shift from the LAO to the GSO case) with respect to the case of the 214 system. Note in Fig. 5(a) that, upon tensile strain, a systematic separation of the α and β peaks occurs with a reduction of the β peak intensity, as observed in the 214 system. According to recent experiments on 113 films, the position of the β peak under small compressive strain shows only a little shift 8,13 . Considering that our calculation covers a wider range of strain, further experiments with various substrates are needed to get more information on the substrate effects. Also, the recent finding of enhanced scattering for the compressive strain case, which was ascribed to the disorder effect rather than to the correlation effect 27 , can be examined through the α peak shift upon substrate strain. For the 2 × SOC case in Fig. 5(b), a much larger shift-up of the β peaks is shown, as in the 214 system. Again, the enhanced J eff = 1/2 ground state of the system is well described by the highly reduced I β /I α and a more insulating nature of the DOS. Combined with the 3D nature of the system, the enhanced SOC highly stabilizes the electronic structure against strain, which is evident from the almost complete locking of both optical peaks in Fig. 5(b). All the substrate cases for 2 × SOC are almost insulating, with no Drude contribution in σ(ω), in agreement with the reported ab initio phase diagram 24 . In the case of the 214 system, the IP-AFM ordering was essential for the effective tuning of the SOC, while the OOP-AFM case shows the breakdown of the J eff = 1/2 picture upon strain. For the nonmagnetic 113 system, the reduction of I β /I α cannot be claimed to be due to an enhancement of the effective SOC (see Table 5). According to recent reports, the ground state of 113 deviates strongly from the J eff = 1/2 state, and the mixing of J eff = 1/2 and J eff = 3/2 is found to be significant upon the entrance of octahedral rotations, in sharp contrast to the layered 214 system 28,29 . Since the substrate strain directly changes the octahedral rotations, we can deduce that the shift and reduction of the β peak in the 113 system are due to the deviation of the ground state from the J eff = 1/2 state, and that the I β /I α reduction is due to the enhanced optical spectral weight of the J eff = 1/2 e-h excitation, which is totally different from the case of the 214 system. Finally, we want to discuss the low-energy electronic structure of the 113 system upon strain. Even though the strain dependency is highly reduced with respect to the 214 system due to the dimensionality change, the narrow-band semimetallic nature of the 113 system near the Fermi level (E F ) makes the system very tunable upon small changes of external parameters at low energy scales. As shown in Fig. 6(a-c), the overall band structures of the 113 system on different substrates are similar to that of the bulk system 24 , but a few points should be noted. First, we found that the 113 system on STO has an almost cubic electronic structure, which can be recognized by the highest e g band located near 2 eV above E F . The tetragonal crystal field in the presence of the substrate strain lifts the degeneracy of the e g states, lowering one of the two e g states (z 2 for LAO and x 2 − y 2 for GSO) toward E F .
Second, while the electron pockets are retained at k = T and U, hole pockets emerge at different k's depending on the strain, i.e., at k = S and R for LAO and near k = Γ for GSO. For the STO case, the morphologies of the hole pockets are in-between the LAO and GSO cases, with a very narrow band character near Γ-S and R-Γ, which enables easy tuning upon the epitaxial strain. The Fermi surface topology also changes accordingly, as shown in Fig. 6(d-f). In relation to recent experiments, the heavier effective mass of hole carriers than of electron carriers 8,28 can be identified from the band structure of the STO substrate case (Fig. 6(b)). The more symmetric electron-hole band structure for the tensile strain case is also consistent with transport measurements 8 . Under compressive strain, electron pockets at U and T and a hole pocket at R are formed, as shown in Fig. 6(a,d), which is in good agreement with angle-resolved photoemission spectroscopy (ARPES) 29 . In view of the small band renormalization factor of 1.25 from the ARPES experiment for the 113 system 29 , our results successfully explain the low-energy electronic structure for both the compressive and tensile strain cases, and suggest the further possibility of strain engineering. The inconsistency of the simple tight-binding model with ARPES may come from the highly susceptible low-energy electronic structure of the system 29,30 . The Dirac-cone-like nodes at U and T suggested by tight-binding calculations were not detected in the recent ARPES measurement 29,31 . Our band structure shows a protected node at T upon strain but no Dirac node at U, which needs confirmation by further experimental studies. As the band structure of the 113 system depends strongly on the U value, which is interconnected with the SOC strength, a proper estimation of the electronic correlation U is crucial from the theory side 31,32 . Also, a recent study on 113 films demonstrated that the breaking of the crystal symmetry upon strain can lift the Dirac node of the 113 system 33 , which reflects that the electronic structure of the system is highly tunable upon systematic epitaxial strain.

Conclusion

We have analyzed the substrate strain effects in Sr-iridate systems, employing both ab initio optical conductivity calculations and a microscopic model approach. By analyzing the optical peak positions and relative intensities along with the obtained magnetic moments, we have found that, in the layered 214 system, tensile strain can effectively tune the electronic correlation strength U as well as the SOC strength. The robustness of the J eff = 1/2 electronic structure, which is found to be highly correlated with the magnetic moment direction of the system, can also be controlled by employing the substrate strain effect. On the other hand, in the 113 system, tensile strain easily breaks the overall J eff = 1/2 ground state, and the band topology shows a highly tunable hole character in the vicinity of E F . Our systematic study demonstrates that strain engineering of iridate systems, in which various energy scales compete, provides additional tunable parameters, U and the SOC, as shown by the peak and weight changes of the optical conductivity. This can offer new dimensions on top of the current epitaxial strain studies, especially when combined with very recent studies based on superlattice structures 34 .

Ab initio calculation.
We have performed electronic structure calculations employing the full-potential linearized augmented plane wave (FLAPW) band method 35,36 implemented in the WIEN2k package 37 . For the exchange-correlation energy functional, we used the local density approximation (LDA), which has been generally employed for 5d systems. To treat the correlation at the functional level, we employed the hybrid functional 38,39 , which is given by

E_xc = E_xc^LDA[ρ] + γ (E_x^HF[Ψ corr] − E_x^LDA[ρ corr])

Here Ψ corr and ρ corr correspond to the Kohn-Sham wave functions and the electron density of the correlated electrons, respectively. The exchange-correlation energy functional is constructed with a fraction (γ) of the Hartree-Fock (HF) exchange energy replacing the LDA correspondence for the correlated electrons (5d electrons in the present case). This functional form is the LDA correspondence of the so-called PBE0 40,41 . Compared to the normally employed LDA+U method, the hybrid-functional approach can treat the correlation effects of different systems in a consistent way, and the non-local exchange energy can be included in the HF term. The hybrid-functional scheme has been employed for numerous transition-metal (TM) perovskites, from 3d to 5d systems, and is thought to be one of the best computational schemes 42 . Especially for the more itinerant 5d systems, a recent calculation found that the hybrid functional scheme successfully described the electronic structures and magnetic properties 43 . The important SOC term is included in the second variational scheme. To determine the proper γ parameter, we performed calculations on the bulk Sr 2 IrO 4 (214) system with various γ values, using both the LDA and PBEsol functionals. As shown in Table 6, both functionals show similar results of increasing gap size with γ. Considering the observed optical gap size of around 0.4 eV, a γ value in between 0.20 and 0.25 looks appropriate. In the present study, we chose the LDA functional with γ = 0.20 to fit the observed optical peak positions. However, the overall strain dependency is expected to be similar for various γ values. Our choice of γ = 0.20 is somewhat smaller than the often-used typical value of γ = 0.25, but systematic studies for perovskite systems showed that the smaller value of γ produces much better results 42 . The substrate strain effects were taken into account by fixing the in-plane (IP) lattice parameters of the 214 and SrIrO 3 (113) systems to those of the substrates: LaAlO 3 (LAO), SrTiO 3 (STO), and GdScO 3 (GSO). Since the relevant optical experiments were performed not on ultrathin films, we did not consider the substrate materials explicitly. We assumed collinear magnetic structures for both the IP and out-of-plane (OOP) cases, based on the fact that the IP ferromagnetic (FM) component due to the canted antiferromagnetic (AFM) structure is substantially weakened for the film case 44 . We optimized the c/a ratio first with fixed a, which determines the tetragonality of the system, and then performed the internal relaxations for the given volume of every system with a force criterion of 1.0 mRy/a.u. within the LDA limit. With the obtained structures, we performed the hybrid-functional calculations with the inclusion of the SOC term. In a system where the SOC plays a dominant role, the inclusion of the non-diagonal parts of the spin density matrices is found to be crucial. Especially for iridates, inclusion of only the diagonal parts does not describe the energy gap and magnetic moments of the system, and even changes the energetics of the 214 system.
Without the non-diagonal parts, the magnetic moment direction of the system is found to be OOP, which is corrected only after the inclusion of the full matrix elements. In addition to the hybrid functional parts, we included non-diagonal elements of the density matrices corresponding to U = 2 eV in generating the orbital potentials, for the description of the weakly correlated Ir 5d electrons. The valence wave functions inside the muffin-tin spheres were expanded with spherical harmonics up to l max = 10. The wave function in the interstitial region was expanded with plane waves up to

Table 6. Band gap dependence on the size of the mixing parameter γ for the LDA and PBEsol functionals. Calculations were done for the experimental bulk Sr 2 IrO 4 (214) system.
7,574
2015-12-18T00:00:00.000
[ "Physics" ]
Terahertz Broadband Absorber Based on a Combined Circular Disc Structure To solve the problem of the complex structure and narrow absorption band of most of today's terahertz absorbers, this paper proposes and uses the finite element (COMSOL) method to numerically simulate a broadband absorber based on a straightforward periodic structure consisting of a disk and a concentric ring. The final results show that our designed absorber has an absorption rate of over 99% in the broadband range of 9.06 THz to 9.8 THz and an average of over 97.7% in the ultra-broadband range of 8.62 THz to 10 THz. The reason for the high absorption is explained by the depiction of the electric field on the absorber surface at different frequencies. In addition, when the material of the top pattern of the absorber is replaced by Cu, Ag, or Al, the absorber still achieves perfect absorption with the different metal materials. Due to the perfect symmetry of the absorber structure, the absorber is very polarization-insensitive. The overall design is simple and easy to process and produce. Therefore, our research offers great potential for applications in areas such as terahertz electromagnetic stealth, sensing, and thermal imaging.

Introduction

THz technology has received increasing attention and interest in recent years [1]. Terahertz waves are high-frequency electromagnetic waves in the frequency band of 0.1 THz to 10 THz, which occupy a critical position in the electromagnetic wave spectrum [2]. As a transition interval between electronics and optics, the terahertz band is widely used in communication, detection, sensing, stealth, and other fields because of its many unique advantages, such as low photon energy and short pulse duration [3-9]. The terahertz absorber based on electromagnetic metamaterials is one of the important devices applied in electromagnetic detection and stealth. Moreover, the perfect metamaterial absorber in the terahertz band has been a research hotspot and challenge in recent years. The desired electromagnetic characteristic parameters can be obtained by changing the composition structure, geometric parameters, and arrangement of metamaterials [10-12], which provides us with an effective approach to the design and application of terahertz perfect absorbers. For example, an absorber composed of a metal ring, a silica spacer, and a vanadium dioxide film was proposed by Lingling Chen et al., which enables single-band absorption in the terahertz band [13]. The absorber has a very simple structure consisting of only circular rings. In 2020, Wangyang Li et al. proposed a tunable dual-band terahertz perfect metamaterial absorber (MMA), composed of two stacked square STO resonator structures and a metal substrate [14]. The repeated double-ring structure is responsible for the formation of the dual-band absorption and provides a basis and design idea for multi-band and broadband absorbers. Moreover, in 2020, Yuqian Wang et al. designed a Dirac semimetal-based absorber that consists of a square-wave oscillator with four BDS films and a closed loop to achieve multiband absorption in the terahertz band [15]. However, broadband absorption is an even greater hotspot and difficulty in research on perfect metamaterial absorbers. Broadband absorption can better meet practical needs than most single-band, dual-band, and multi-band absorbers [16-20]. In practice, there are two general methods to achieve broadband absorbers.
The first is the planar construction method [21-23]: the combination and arrangement of different patterns to change the electromagnetic properties of the absorber. The second is multi-layer stacking [24-26]. This method strengthens the interaction between layers by increasing the number of absorption layers, in order to change the characteristics of the absorbing structure. In addition, we have studied the resonance mechanism of the two methods: both achieve high broadband absorption through the superposition of resonances at different frequencies. Although the multilayer stacking method can achieve broadband absorption, the design of a multilayer structure is usually very difficult in the actual process of surface preparation [27-29], because in actual production, precise alignment between the layers at the dimensions of the absorber we design is required, usually at the micron and nanometer level. This means that each additional stacked layer of the absorber structure makes the process more geometrically difficult. Therefore, it is of great significance to design a conventional metamaterial that can realize broadband absorption. In this study, we designed a metamaterial based on a metal-dielectric-metal structure [30-34] for a broadband perfect absorber in the terahertz band. The top pattern consists of a closed gold ring and a disc, which provides excellent performance and a simple construction. The middle layer is made of silicon dioxide as a dielectric layer, and the bottom layer is a gold film acting as a reflective layer. Through simulations, we have studied the effect of different structural parameters, polarization angles, and incidence angles on the absorption performance and used this as a basis for optimizing the structural parameters. We analyzed the absorption mechanism and polarization characteristics by combining the distributions of the electric field and current density on the surface of the metamaterial, and thus obtained the planar combination of a ring and a disk as an effective method to design broadband absorbers. Finally, the results show that our designed absorber has a polarization-insensitive absorption rate of over 99% in the broadband range of 9.06 THz to 9.8 THz and over 97.7% in the ultra-broadband range of 8.62 THz to 10 THz. This suggests that our research has enormous potential for applications in terahertz electromagnetic stealth, sensing, and thermal imaging.

Mathematical and Experimental Methods

The structure proposed in this paper is shown in Figure 1b; the absorber consists of three units in a periodic metal-dielectric-metal structure. We chose gold as the target metal because its chemical stability ensures a higher environmental suitability of the absorber. In the design of the absorber, the first layer from the bottom up is the reflective layer, which ensures that there are no transmitted electromagnetic waves. The second and third layers are the dielectric and absorption layers, respectively, which ensure the loss of electromagnetic waves. The bottom layer of the absorber is a continuous gold film with a thickness h = 8 µm, and the dielectric layer is silica with a thickness d of 6 µm and a relative dielectric constant ε p = 1.46. The gold pattern in the top layer of the absorber is a combination of rings and disks with a thickness t of 0.1 µm.
We used the finite element method on the COMSOL simulation platform to simulate and optimize the metamaterial absorber, with the following optimal parameters: p1 = 35 µm, R1 = 7 µm, R2 = 12 µm, R3 = 14 µm. We implement the periodic array with a single structural unit by setting Floquet periodic boundary conditions. When the electromagnetic wave is incident perpendicular to the absorber surface, the electric and magnetic fields are parallel to the x and y axis directions, respectively. The z-direction is terminated with a perfectly matched layer, as shown in Figure 1a. In the simulation, all the metallic layers are composed of gold. In the closed-loop disc structure with the geometrical parameters we designed, electromagnetic resonance at specific frequencies occurs when the electromagnetic wave interacts with the absorber. By means of the Drude model [35][36][37], we can calculate the dielectric constant of gold, where the volume plasma frequency is ω_p = 1.37 × 10^16 s^−1 and the collision frequency is ω_t = 1.23 × 10^14 s^−1 [38]. When the thickness of the underlying metal is greater than the maximum skin depth of the metal in the terahertz spectrum, the transmittance of the absorber is T = 0. Therefore, the absorptance of the absorber, A(ω) = 1 − R(ω) − T(ω) [39][40][41][42][43], simplifies to A(ω) = 1 − R(ω), where R is the reflectance of the absorber. In this paper, the thickness of the reflective gold film is 8 µm, which is larger than the skin depth and ensures that the transmittance T(ω) = 0. Because the design of the periodic metal structure array on top of the absorber affects the impedance of the absorber, we can adjust the structural and material parameters to match the impedance of the absorber with that of free space [44][45][46], so that the reflection coefficient tends to zero and the absorption rate tends to 1. The dielectric layer provides enough space for the propagation of electromagnetic waves inside the absorber and realizes their dissipation. The absorber is insensitive to polarization owing to the perfect symmetry of our closed-loop disc combination structure. Our closed-loop disc structure has a higher impedance than a short-circuited open-loop form of the metal pattern, which means the absorber is more easily impedance-matched to free space, achieving high absorption rates over a wide frequency band with a small feature size. Results and Discussion In order to construct an absorber with a broadband perfect absorption effect and a simple structure, we have designed a perfect absorber based on a closed-loop disc combination structure. On the COMSOL simulation platform, as shown in Figure 2, we calculated the absorptance of the absorber in TE and TM modes at normal incidence. As can be seen from the graph, our designed absorber is not only insensitive to polarization, but also has an absorption >99% over the broadband range of 9.06 to 9.8 THz and still achieves a high average absorption of >97.7% over the ultra-wideband range of 8.62 to 10 THz. The reason for the polarization insensitivity is that the nanostructure unit of the absorber shows perfect symmetry [44][45][46][47]. Our absorber has more advantages compared to the others, as shown in Table 1 [47][48][49][50].
Compared to the 5-layer structures and even the complex structures with more than 10 layers in Table 1, our absorber, with only 3 layers, is very simple and easy to fabricate. Moreover, it has a broad absorption bandwidth of 1.38 THz, which is a significant improvement over the absorbers in Table 1.
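For readers who want to sanity-check the absorptance bookkeeping outside COMSOL, the short Python sketch below evaluates the Drude permittivity of gold with the plasma and collision frequencies quoted above and applies A(ω) = 1 − R(ω). The paper does not spell out which Drude variant was used, so the ε(ω) = 1 − ω_p²/(ω² + iωω_t) form is an assumption, and the reflectance values are placeholders standing in for a full-wave simulation, since the reflectance of the patterned structure has no closed form here.

```python
import numpy as np

# Drude parameters for gold quoted in the text (taken as angular frequencies).
OMEGA_P = 1.37e16   # volume plasma frequency, s^-1
OMEGA_T = 1.23e14   # collision frequency, s^-1

def drude_permittivity(omega):
    """Relative permittivity of gold from a common Drude form:
    eps(w) = 1 - wp^2 / (w^2 + i*w*wt). The choice eps_inf = 1 is an assumption."""
    return 1.0 - OMEGA_P**2 / (omega**2 + 1j * omega * OMEGA_T)

def absorptance(reflectance):
    """With an optically thick ground plane, T(w) = 0, so A(w) = 1 - R(w)."""
    return 1.0 - np.asarray(reflectance)

# Frequency axis spanning the band studied in the paper (8-10 THz).
freq = np.linspace(8e12, 10e12, 201)
omega = 2 * np.pi * freq
print(drude_permittivity(omega)[0])   # permittivity at 8 THz

# 'reflectance' would normally be exported from the full-wave (COMSOL) run;
# here it is a flat placeholder array, for illustration only.
reflectance = np.full_like(freq, 0.02)
print(absorptance(reflectance).mean())  # ~0.98 for the placeholder data
```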
In order to gain a deeper understanding of how the absorber produces optimum absorption, we have plotted the absorption rate of the absorber with only the closed ring and with only the disc, as shown in Figure 3. The distribution of the electromagnetic field at the top of the absorber for the different modes (TE, TM) at different frequencies, together with the distribution of the current density, is shown in Figure 4. In Figure 3, we calculate the absorptivity of an absorber with only a closed ring and one with only a disk. Neither structure reaches a high absorption rate (both are below 70%) in the 8.0-10.0 THz range, as shown in Figure 3a,b. This is because, with only the ring or only the disc, the absorber does not form an effective resonant coupling with the external terahertz wave. However, the first structure shows an enhanced absorption effect as the frequency increases, and the second structure shows a significant increase in absorption in the 8.7-9.7 THz range compared to the other bands. Both structures thus have some resonance potential at high frequencies, which provided the idea for our design: an innovative closed-loop disk combination structure that increases the interaction between ring and disk and achieves high absorption performance.

In Figure 4, we depict the electric field and current density distributions on the surface of the closed-loop disc for electromagnetic waves in TE and TM modes at frequencies of 9 THz, 9.5 THz, and 10 THz. To explain why the absorber produces perfect absorption, we plotted the electric field and current density distributions at these three frequencies together with the absorption curve. The high absorption rate over a wide frequency band is mainly caused by the resonant coupling of the combined closed ring and disc structure [51,52]. Comparing the electric field distributions at 9 THz and at 10 THz shows that resonant coupling plays a significant role in both the low and high frequency bands. Since the polarization directions in TE and TM modes are different, Figure 4 shows that the position of the resonance is determined by the polarization direction: in TE mode, the resonant coupling of the combined closed-loop and disc structure in the longitudinal direction is relatively stable and prominent, compared with the transverse resonant coupling in TM mode. The electric field distribution on the closed-loop disc shows that the terahertz wave induces a strong electromagnetic field on the metal ring and disc. The closed-loop disc and the underlying metal are separated by a dielectric layer, so the pair can be treated as an electric dipole. As the electromagnetic waves act on the closed-loop disc, charge builds up and the electric dipole strengthens. The strong electric dipole resonates with the external electromagnetic wave on the disc and ring, culminating in high absorption [53,54].

In the above discussion, we have calculated and analyzed the absorption performance of the closed-loop and disc combination absorber under ideal conditions. Let us now consider one of the main factors affecting the absorption performance in practice: process error during fabrication. In actual production, the precise alignment of the absorber layers and the accurate control of the structural parameters are two major difficulties. Owing to the very simple structure of the absorber we have designed, this is much less difficult in practice, but process errors still need to be considered.
The radius of the disc and the thickness of the dielectric layer largely determine the electromagnetic resonance of the absorber and thus its absorption. In the following we analyze and discuss the influence of these two parameters on the performance of the absorber. In Figures 5 and 6 we depict the absorption curves of absorbers with different dielectric layer thicknesses and different disc radii at normal incidence. From Figure 5, we can see that there is a significant redshift in the absorption curve as the thickness of the dielectric layer changes, but in general the absorber is still able to maintain a high level of absorption over a broad band, which implies a good tolerance to small production errors in the dielectric layer thickness. We can also see from Figure 5 that as the dielectric layer thickness increases, the absorption decreases at low frequencies and increases at high frequencies. This is related to the surface plasmon polaritons (SPPs) of the metallic materials in the terahertz band: losses weaken the surface plasmon response, making the absorption less effective through conductivity, polarization, hysteresis, and the finite propagation distance [55][56][57]. In fact, the dielectric layer provides the transmission space and the dissipation path for the terahertz waves entering the absorber and plays a decisive role in its performance. In terms of impedance matching theory, variations in the thickness of the dielectric layer affect the impedance matching between the absorber and free space, and ultimately the absorber's absorption effectiveness [58,59]. From Figure 6 we can see that the absorption curve varies with the radius of the disk, again showing a redshift as the radius changes. When the disk radius is 5 µm, however, the absorption is much lower than for the other values. This is because at this radius the disk-ring interaction is weak and the magnetic dipole in the absorber cannot be excited effectively, resulting in poor absorption. When the disc radius varies from 6 µm to 9 µm, the absorption remains strong over the broadband, which also means that the absorber has a good tolerance for fabrication.
Moreover, we calculated the absorption behavior for various material compositions. The material forming the ring and disk at the top of the absorber was changed without changing the structural parameters of the designed absorber, and the effects of the different materials on the absorption properties were observed. Here we use Cu and Ag, which are in the same group as Au, and Al, a low-refractive-index metal. From Figure 7 we can see that the absorber with closed-loop disc combinations of different materials still maintains a high and nearly identical absorption at high frequencies, with only some differences at lower frequencies. This is because Au, Ag, and Al are all low-refractive-index metals, and Cu, Ag, and Au belong to the same group of elements with similar properties [60][61][62][63]; they all give rise to SPPs in the terahertz band. The results show that our closed-loop disc absorber is adaptable to a wide range of materials. The designed absorber thus combines an excellent absorption effect, a simple structure that is easy to fabricate, and good adaptability to materials.

Conclusions Overall, we have designed a metamaterial broadband terahertz perfect absorber with a simple structure. By examining the distribution of electric field intensity and current density on the surface of the absorber, the mechanism producing the high absorption was studied. The results show that the absorption of the absorber exceeds 99% in the broadband range from 9.06 THz to 9.8 THz and averages over 97.7% in the ultra-broadband range from 8.62 THz to 10 THz. In addition, when the material of the top pattern is replaced by Cu, Ag, or Al, the absorber still achieves near-perfect absorption.
This means that our research will have great potential for applications in areas such as terahertz electromagnetic stealth, sensing, and thermal imaging. Compared to the absorbers proposed in recent years for the terahertz band, our absorber has a lower process difficulty, higher absorption rate and bandwidth, and better universality, providing a new idea for the study of metamaterial perfect absorbers for the terahertz band. Author Contributions: M.H.: conceptualization, formal analysis, investigation, data curation, writing-original draft, writing-review and editing. K.W.: conceptualization, formal analysis, investigation, data curation, funding acquisition. P.W.: conceptualization, formal analysis, investigation, data curation, writing-original draft, writing-review and editing. D.X.: conceptualization, formal analysis, revision. Y.X.: conceptualization, formal analysis, revision. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding.
Nearly assumptionless screening for the mutually-exciting multivariate Hawkes process. We consider the task of learning the structure of the graph underlying a mutually-exciting multivariate Hawkes process in the high-dimensional setting. We propose a simple and computationally inexpensive edge screening approach. Under a subset of the assumptions required for penalized estimation approaches to recover the graph, this edge screening approach has the sure screening property: with high probability, the screened edge set is a superset of the true edge set. Furthermore, the screened edge set is relatively small. We illustrate the performance of this new edge screening approach in simulation studies.

In this section, we provide a very brief review of the multivariate Hawkes process. A more comprehensive discussion can be found in Liniger (2009) and Zhu (2013). Following Brémaud and Massoulié (1996), we define a simple point process N on R+ as a family {N(A)}_{A ∈ B(R+)} taking integer values (including positive infinity), where B(R+) denotes the Borel σ-algebra of the positive half of the real line. Further let t_1, t_2, ... ∈ R+ be the event times of N. In this notation, N(A) = Σ_i 1[t_i ∈ A] for A ∈ B(R+). We write N[t, t + dt) as dN(t), where dt denotes an arbitrarily small increment of t. Let H_t be the history of N up to time t. Then the H_t-predictable intensity process of N is defined as

λ(t) dt = E[dN(t) | H_t].  (1)

Now suppose that N is a marked point process, in which each event time t_i is associated with a mark m_i ∈ {1, ..., p} (see, e.g., Definition 6.4.I. in Daley and Vere-Jones, 2003). We can then view N as a multivariate point process (N_j)_{j=1,...,p}, of which the jth component process is given by N_j(A) = Σ_i 1[t_i ∈ A, m_i = j] for A ∈ B(R+). To simplify the notation, we let t_{j,1}, t_{j,2}, ... ∈ R+ denote the event times of N_j. The intensity λ_j(t) of the jth component process is defined analogously to (1). In the case of the linear Hawkes process, this function takes the form (Brémaud and Massoulié, 1996; Hansen, Reynaud-Bouret and Rivoirard, 2015)

λ_j(t) = μ_j + Σ_{k=1}^{p} Σ_{i: t_{k,i} ≤ t} ω_{j,k}(t − t_{k,i}).  (2)

We refer to μ_j ∈ R as the background intensity, and ω_{j,k}(·) : R+ → R as the transfer function. For p fixed, Brémaud and Massoulié (1996) established that the linear Hawkes process with intensity function (2) is stationary given a regularity condition on the transfer functions (Assumption 1). We now define a directed graph with node set {1, ..., p} and edge set

E ≡ {(j, k) : ω_{j,k} ≢ 0, 1 ≤ j, k ≤ p},

for ω_{j,k} given in (2). Let s denote the maximum in-degree of the nodes in the graph.  (4)

In this paper, we propose a simple screening procedure that can be used to obtain a small superset of the edge set E.

Estimation and theory for the Hawkes process. We first consider the low-dimensional setting, in which the dimension of the process, p, is fixed, and T, the time period during which the point process is observed, is allowed to grow. In this setting, asymptotic properties such as the central limit theorem have been established; for instance, see Bacry et al. (2013) and Zhu (2013). Consequently, estimating the edge set E is straightforward in low dimensions.
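As an aside, the linear intensity (2) is easy to state in code. The Python sketch below evaluates λ_j(t) for a toy event history, using the transfer function ω(t) = 2t exp(1 − 5t) that appears in the simulation study later in the paper; the helper names are ours, not from any reference implementation.

```python
import numpy as np

def transfer(delta):
    """Transfer function from the paper's simulation study:
    w(t) = 2 t exp(1 - 5 t) for t >= 0, and 0 otherwise."""
    delta = np.asarray(delta, dtype=float)
    return np.where(delta >= 0, 2.0 * delta * np.exp(1.0 - 5.0 * delta), 0.0)

def intensity(t, mu_j, event_times, parents_of_j):
    """Conditional intensity (2): lambda_j(t) = mu_j + sum over parents k
    and events t_{k,i} <= t of w_{j,k}(t - t_{k,i}).

    event_times: dict mapping node k -> sorted array of event times of N_k.
    parents_of_j: iterable of nodes k with (j, k) in the edge set E.
    """
    lam = mu_j
    for k in parents_of_j:
        past = event_times[k][event_times[k] <= t]
        lam += transfer(t - past).sum()
    return lam

# Toy history: events of node 1 excite node 0.
history = {1: np.array([0.2, 0.9, 1.4])}
print(intensity(2.0, mu_j=0.75, event_times=history, parents_of_j=[1]))
```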
In high dimensions, when p might be large, we can fit the Hawkes process model using a penalized estimator of the form

minimize_{ω_{j,k} ∈ F, 1 ≤ j,k ≤ p}  L({ω_{j,k}}; {N_j}_{j=1}^p) + λ P({ω_{j,k}}; {N_j}_{j=1}^p),  (5)

where L(·; {N_j}_{j=1}^p) is a loss function, based on, e.g., the log-likelihood (Bacry, Gaïffas and Muzy, 2015) or least squares (Hansen, Reynaud-Bouret and Rivoirard, 2015); P(·; {N_j}_{j=1}^p) is a penalty function, such as the lasso (Hansen, Reynaud-Bouret and Rivoirard, 2015); λ is a nonnegative tuning parameter; and F is a suitable function class. Then, a natural estimator for E is {(j, k) : ω̂_{j,k} ≢ 0}. Recently, Reynaud-Bouret and Schbath (2010), Bacry, Gaïffas and Muzy (2015), and Hansen, Reynaud-Bouret and Rivoirard (2015) have established that under certain assumptions, penalized estimation approaches of the form (5) are consistent in high dimensions, provided that the edge set E is sparse. For instance, Hansen, Reynaud-Bouret and Rivoirard (2015) establish an oracle inequality for the lasso estimator for the Hawkes process, given that certain conditions hold on the observed event times. However, to show that these conditions hold with high probability for arbitrary samples, these theoretical results require that the point process is mutually-exciting, that is, an event in one component process can increase, but cannot decrease, the probability of an event in another component process. This amounts to assuming that ω_{j,k}(Δ) ≥ 0 for all Δ ≥ 0, for ω_{j,k} defined in (2). When the dimension p is large, penalized estimation procedures of the form (5) (Bacry, Gaïffas and Muzy, 2015; Hansen, Reynaud-Bouret and Rivoirard, 2015) become computationally expensive: they require O(T p^2) operations per iteration in an iterative algorithm. This is problematic in contemporary applications, in which p can be on the order of tens of thousands (Ahrens et al., 2013). These concerns motivate us to propose a simple and computationally efficient edge screening procedure for estimating the true edge set E in high dimensions. Under very few assumptions, our proposed screening procedure is guaranteed to select a small superset of the true edge set E.

Organization of paper. The rest of this paper proceeds as follows. In Section 2, we introduce our screening procedure for estimating the edge set E, and establish its theoretical properties. We present simulation results in support of our proposed procedure in Section 3. Proofs of theoretical results are presented in Section 4, and the Discussion is in Section 5.

2. An edge screening procedure. Approach. For j = 1, ..., p, let Λ_j denote the mean intensity of the jth point process introduced in Section 1. That is,

Λ_j dt = E[dN_j(t)].  (6)

Following Equation 5 of Hawkes (1971), for any Δ ∈ R, the (infinitesimal) cross-covariance of the jth and kth processes is defined as

V_{j,k}(Δ) = E[dN_j(t) dN_k(t + Δ)] / (dt dΔ) − Λ_j Λ_k − δ(Δ) Λ_j 1[j = k],  (7)

where δ(·) is the Dirac delta function, which satisfies ∫ δ(t) f(t) dt = f(0) for continuous f. For a given value of Δ, we can estimate the cross-covariance function V_{j,k}(Δ) using kernel smoothing (8). In this paper, we focus on kernel functions that are bounded by 1 and are defined on a bounded support (e.g., the Epanechnikov kernel). Let B denote a tuning parameter that defines the time range of interest, [−B, B]. For any ζ, we define the set of screened edges as

Ê(ζ) ≡ {(j, k) : ‖V̂_{j,k}‖_{2,[−B,B]} ≥ ζ},  (9)

where ‖f‖_{2,[l,u]} ≡ (∫_l^u f^2(t) dt)^{1/2} is the L2-norm of a function f on the interval [l, u]. The screened edge set Ê(ζ) in (9) can be calculated quickly: ‖V̂_{j,k}‖_{2,[−B,B]} can be calculated in O(T) computations, and so Ê(ζ) can be calculated in O(T p^2) computations. The procedure can be easily parallelized.
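To make the procedure concrete, here is a minimal Python sketch of the screening step. The displayed estimator (8) did not survive extraction above, so the kernel cross-covariance below is a standard plug-in stand-in and should be read as an assumption; the thresholding itself follows the definition of Ê(ζ) in (9).

```python
import numpy as np
from itertools import product

def epanechnikov(u):
    # Bounded by 1 and supported on [-1, 1], as required in the text.
    return 0.75 * np.clip(1.0 - u**2, 0.0, None)

def cross_cov_hat(times_j, times_k, delta, h, T):
    """Kernel estimate of V_{j,k}(delta). One plausible form of (8),
    assumed here because the displayed estimator was lost in extraction."""
    diffs = times_k[None, :] - times_j[:, None]        # t_{k,i'} - t_{j,i}
    raw = epanechnikov((diffs - delta) / h).sum() / (T * h)
    lam_j, lam_k = len(times_j) / T, len(times_k) / T  # plug-in intensities
    return raw - lam_j * lam_k

def screen_edges(times, T, B, h, zeta, n_grid=81):
    """Return E_hat(zeta) = {(j,k): ||V_hat_{j,k}||_{2,[-B,B]} >= zeta}."""
    grid = np.linspace(-B, B, n_grid)
    p = len(times)
    edges = set()
    for j, k in product(range(p), repeat=2):
        v = np.array([cross_cov_hat(times[j], times[k], d, h, T) for d in grid])
        l2 = np.sqrt(np.trapz(v**2, grid))             # ||.||_{2,[-B,B]}
        if l2 >= zeta:
            edges.add((j, k))
    return edges

# Toy usage on random (non-Hawkes) event times, for illustration only.
rng = np.random.default_rng(0)
times = [np.sort(rng.uniform(0, 100, 80)) for _ in range(3)]
print(screen_edges(times, T=100.0, B=5.0, h=0.5, zeta=0.05))
```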
There are three tuning parameters in the procedure: the bandwidth h in (8), the range B in (9), and the screening threshold ζ in (9). The bandwidth h can be chosen by cross-validation. The range B can be selected based on the problem setting. For instance, when using the multivariate Hawkes process to model a spike train data set in neuroscience, we can set B to equal the maximum time gap between a spike and the spike it can possibly evoke. The choice of screening threshold ζ can be determined based on the sparsity level that we expect from prior knowledge. Alternatively, we may wish to use a small value of ζ in order to reduce the chance of false negative edges in Ê(ζ), or a larger value due to limited computational resources in our downstream analysis.

Theoretical results. We consider the asymptotics of triangular arrays (Greenshtein and Ritov, 2004), where the dimension p is allowed to grow with T. When unrestricted, it is possible to cook up extreme networks in which, for instance, the mean intensity Λ_j in (6) diverges to infinity. To avoid such cases, we pose the following regularity assumption.

Assumption 2. There exist positive constants Λ_min, Λ_max, and V_max such that 0 < Λ_min ≤ Λ_j ≤ Λ_max and max_{Δ ∈ R} |V_{j,k}(Δ)| ≤ V_max for all 1 ≤ j, k ≤ p, where Λ_j and V_{j,k} are defined in (6) and (7), respectively. Furthermore, Λ_min, Λ_max, and V_max are generic constants that do not depend on p.

Next, we make some standard assumptions on the transfer functions ω_{j,k} in (2). Assumption 3. (a) ω_{j,k}(Δ) ≥ 0 for all Δ ≥ 0 and all 1 ≤ j, k ≤ p. (b) The non-zero transfer functions are non-negligible. (c) There exist positive constants b, θ_0, and C such that, for all 1 ≤ j, k ≤ p, supp(ω_{j,k}) ⊆ [0, b], ω_{j,k} is bounded by θ_0, and |ω_{j,k}(Δ_1) − ω_{j,k}(Δ_2)| ≤ C|Δ_1 − Δ_2| for any Δ_1, Δ_2.

Assumption 3(a) guarantees that the multivariate Hawkes process is mutually-exciting: that is, an event may trigger (but cannot inhibit) future events. This assumption is shared by the original proposal of Hawkes (1971). Furthermore, existing theory for penalized estimators for the Hawkes process requires this assumption (Bacry, Gaïffas and Muzy, 2015; Hansen, Reynaud-Bouret and Rivoirard, 2015). Assumption 3(b) guarantees that the non-zero transfer functions are non-negligible. Such an assumption is needed in order to establish variable selection consistency (Bühlmann and van de Geer, 2011; Wainwright, 2009) for the penalized estimator (5). Assumption 3(c) guarantees that the transfer functions are sufficiently smooth; this guarantees that the cross-covariances are smooth (see Section A.2 in the Appendix), and hence can be estimated using a kernel smoother (8). Instead of Assumption 3(c), we could assume that ω_{j,k} is an exponential function (Bacry, Gaïffas and Muzy, 2015) or that it is well-approximated by a set of smooth basis functions (Hansen, Reynaud-Bouret and Rivoirard, 2015). Recall that s was defined in (4). We now state our main result.

Theorem 1. Suppose that the Hawkes process (2) satisfies Assumptions 1-3. Let h = c_1 s^{−1/2} T^{−1/6} in (8) and ζ = 2 c_2 s^{1/2} T^{−1/6} in (9) for some constants c_1 and c_2. Then, for some positive constants c_3 and c_4, with probability at least 1 − c_3 T^{7/6} s^{1/2} p^2 exp(−c_4 T^{1/6}), (a) E ⊆ Ê(ζ), and (b) card(Ê(ζ)) = O(card(E) s^{−1} T^{1/3}).

Theorem 1(a) guarantees that, with high probability, the screened edge set Ê(ζ) contains the true edge set E. Therefore, screening does not result in false negatives. This is referred to as the sure screening property in the literature (Fan and Lv, 2008; Fan, Samworth and Wu, 2009; Fan and Song, 2010; Fan, Feng and Song, 2011; Fan, Ma and Dai, 2014; Liu, Li and Wu, 2014; Song et al., 2014; Luo, Song and Witten, 2014).
Typically, establishing the sure screening property requires assuming that the marginal association between a pair of nodes in E is sufficiently large; see, e.g., Condition 3 in Fan and Lv (2008) and Condition C in Fan, Feng and Song (2011). In contrast, Theorem 1(a) requires only that the conditional association between a pair of nodes in E is sufficiently large; see Assumption 3(b). Theorem 1(b) guarantees that Ê(ζ) is a relatively small set, on the order of O(card(E) s^{−1} T^{1/3}). Suppose that p^2 ∝ s^{−1/2} exp(c_4 T^{1/6−ε}) for some positive constant ε < 1/6; this is the high-dimensional regime, in which the probability statement in Theorem 1 converges to one. Then the size of Ê(ζ), O(card(E) s^{−1} T^{1/3}), can be much smaller than p^2, the total number of node pairs. We note that the rate of T^{1/3} is comparable to existing results for nonparametric screening in the literature (see, e.g., Fan, Feng and Song 2011; Fan, Ma and Dai 2014). To summarize, Theorem 1 guarantees that under a small subset of the assumptions required for penalized estimation methods to recover the edge set E, the screened edge set Ê(ζ) in (9) is small and contains no false negatives. We note that this is not the case for other types of models. For instance, in the case of the Gaussian graphical model, Luo, Song and Witten (2014) considered estimating the conditional dependence graph by screening the marginal covariances. In order for this procedure to have the sure screening property, one must make an assumption on the minimum marginal covariance associated with an edge in the graph, which is not required for variable selection consistency of penalized estimators (Cai, Liu and Luo, 2011; Luo, Song and Witten, 2014; Ravikumar et al., 2011; Saegusa and Shojaie, 2016). It is important to note that Theorem 1 considers an oracle procedure, in which the tuning parameters depend on unknown parameters. The heuristic selection guidelines suggested at the end of Section 2.1 may not satisfy the requirements of Theorem 1. We leave the discussion of optimal tuning parameter selection criteria for future research. Also, note that the bandwidth h ∝ T^{−1/6} is wider than the typical bandwidth for kernel smoothing, which is T^{−1/3} (Tsybakov, 2009). This is because we aim to minimize a concentration bound on V̂_{j,k} − V_{j,k} (see the proof of Lemma 3 in the Appendix), rather than the usual mean integrated squared error as in, e.g., Theorem 1.1 in Tsybakov (2009).

Remark 1. In light of Theorem 1, consider applying the constraint that ω_{j,k} ≡ 0 for all (j, k) ∉ Ê(ζ) in the penalized estimation problem (5); we refer to the resulting constrained problem as (10). Theorem 1 can be combined with existing results on consistency of penalized estimators of the Hawkes process (Bacry, Gaïffas and Muzy, 2015; Hansen, Reynaud-Bouret and Rivoirard, 2015) in order to establish that (10) results in consistent estimation of the transfer functions ω_{j,k}. As a concrete example, Hansen, Reynaud-Bouret and Rivoirard (2015) considered (10) with L(ω_{j,k}; {N_j}_{j=1}^p) taken to be the least-squares loss, and P(ω_{j,k}; {N_j}_{j=1}^p) a lasso-type penalty. Our simulation experiments in Section 3 indicate that in this setting, (10) can actually have better small-sample performance than (5) when p is very large. Furthermore, solving (10) can be much faster than solving (5): the former requires O(T^{4/3} s^{−1} card(E)) computations per iteration, compared to O(T p^2) per iteration for the latter (using, e.g., coordinate descent; Friedman, Hastie and Tibshirani, 2010). In the high-dimensional regime, when p^2 ∝ s^{−1/2} exp(c_4 T^{1/6−ε}) for some positive constant ε < 1/6, we have that T^{1/3} s^{−1} card(E) ≪ p^2, so the per-iteration cost of (10) is far below that of (5).
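Written out explicitly, the constrained problem (10) combines the loss and penalty from (5) with the screening constraint. The display itself was not recovered above, so the following LaTeX statement is our reconstruction from the surrounding description:

```latex
% One natural way to write the screened-and-penalized problem (10):
\operatorname*{minimize}_{\,\omega_{j,k}\in\mathcal{F},\;1\le j,k\le p}\;
  L\bigl(\{\omega_{j,k}\};\{N_j\}_{j=1}^{p}\bigr)
  \;+\;\lambda\,P\bigl(\{\omega_{j,k}\};\{N_j\}_{j=1}^{p}\bigr)
\qquad \text{subject to}\qquad
  \omega_{j,k}\equiv 0\ \text{ for all } (j,k)\notin\widehat{E}(\zeta).
```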
We note that in order to solve (10), we must first compute Ê(ζ), which requires an additional one-time computational cost of O(T p^2).

Simulation set-up. In this section, we investigate the performance of our screening procedure in a simulation study with p = 100 point processes. Intensity functions are given by (2), with μ_j = 0.75 for j = 1, ..., p, and ω_{j,k}(t) = 2t exp(1 − 5t) for (j, k) ∈ E. By definition, ω_{j,k} ≡ 0 for all (j, k) ∉ E. We consider two settings for the edge set E, Setting A and Setting B. These settings are displayed in Figure 1. In what follows, it will be useful to think about the (undirected) node pairs as belonging to three types. (i) We let Ẽ denote the set of node pairs {j, k} such that (j, k) ∈ E or (k, j) ∈ E. (ii) With a slight abuse of notation, we will use Ẽ^c ∩ supp(V) to denote node pairs that are not in Ẽ but have non-zero population cross-covariance, defined in (7). (iii) Continuing to slightly abuse notation, we will use Ẽ^c \ supp(V) to denote node pairs that are not in Ẽ and that have zero population cross-covariance. Throughout the simulation, we set the bandwidth h in (8) to equal T^{−1/6}, and the range of interest B in (9) to equal 5. Thus, h satisfies the requirements of Theorem 1, and [−B, B] covers the majority of the mass of the transfer function ω_{j,k}. However, these simulation results are not sensitive to the particular choices of h or B.

Investigation of the estimated cross-covariances. In Setting A, within a single connected component, all of the node pairs that are not in Ẽ are in Ẽ^c ∩ supp(V). However, for the most part, the population cross-covariances corresponding to node pairs in Ẽ^c ∩ supp(V) are quite small, because they are induced by paths of length two and greater. This can be seen from the left-hand panel of Figure 2. Given the left-hand panel of Figure 2, we expect the proposed screening procedure to work very well in Setting A: for a sufficiently large value of the time period T, there exists a value of ζ such that, with high probability, Ê(ζ) = Ẽ. In Setting B, six nodes receive directed edges from the same set of four nodes. Therefore, we expect the pairs among these six nodes to be in the set Ẽ^c ∩ supp(V), and to have substantial population cross-covariances. This intuition is supported by the center panel of Figure 2, which indicates that the node pairs in Ẽ^c ∩ supp(V) have relatively large estimated cross-covariances, on the same order as the node pairs in Ẽ. In light of Figure 2, we anticipate that for a sufficiently large value of the time period T, the screened edge set Ê(ζ) will contain the edges in Ẽ as well as many of the edges in Ẽ^c ∩ supp(V).

Size of smallest screened edge set. We now define ζ* ≡ max{ζ : Ẽ ⊆ Ê(ζ)}, and calculate card(Ê(ζ*)). This represents the size of the smallest screened edge set that contains the true edge set. Results, averaged over 200 simulated data sets, are shown in Figure 3. We see that in Setting A, for sufficiently large T, card(Ê(ζ*)) = card(Ẽ), which implies that Ê(ζ*) = Ẽ. In other words, in Setting A, the screening procedure yields perfect recovery of the set Ẽ defined in (11). This is in line with our intuition based on the left-hand panel of Figure 2. In contrast, in Setting B, even when T is very large, card(Ê(ζ*)) > card(Ẽ), which implies that Ê(ζ*) ⊋ Ẽ. This was expected based on the center panel of Figure 2.
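Because Ê(ζ) shrinks monotonically as ζ grows, ζ* is simply the smallest screening statistic among the pairs in Ẽ, and card(Ê(ζ*)) counts the pairs scoring at least that value. A minimal Python sketch, assuming the screening statistics have already been precomputed into a dict:

```python
def smallest_superset_size(stats, true_pairs):
    """stats: {(j, k): ||V_hat_{j,k}||_{2,[-B,B]}}; true_pairs: the set E-tilde.
    Returns (zeta_star, card of E_hat(zeta_star))."""
    zeta_star = min(stats[pair] for pair in true_pairs)
    size = sum(1 for s in stats.values() if s >= zeta_star)
    return zeta_star, size

# Toy example with four node pairs, two of which are true edges.
stats = {(0, 1): 0.9, (1, 2): 0.4, (2, 3): 0.2, (0, 2): 0.5}
print(smallest_superset_size(stats, true_pairs={(0, 1), (1, 2)}))
# -> (0.4, 3): zeta* = 0.4, and E_hat(zeta*) contains 3 pairs
```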
Performance of constrained penalized estimation. We now consider the performance of the estimator (10), which we obtain by calculating the screened edge set Ê(ζ), and then performing a penalized regression subject to the constraint that ω_{j,k} ≡ 0 for (j, k) ∉ Ê(ζ). Note that rather than assuming a specific functional form for ω_{j,k}, Hansen, Reynaud-Bouret and Rivoirard (2015) use a basis expansion to estimate ω_{j,k}. Following their lead, we use a basis of step functions, of the form 1_{((m−1)/2, m/2]}(t) for m = 1, ..., 6. Instead of applying a lasso penalty to the basis function coefficients (Hansen, Reynaud-Bouret and Rivoirard, 2015), we employ a group lasso penalty for every 1 ≤ j, k ≤ p (Yuan and Lin, 2006; Simon and Tibshirani, 2012). Thus, (10) consists of a squared error loss function and a group lasso penalty. We measure performance in terms of the estimation error of ω̂_{j,k}, where ω̂_{j,k} solves (10). Results are shown in Figure 4. In Setting A, solving the constrained optimization problem (10) leads to substantially better performance than solving the unconstrained problem (5). The improvement is especially noticeable when T is small. In Setting B, solving the constrained optimization problem (10) leads to only a slight improvement in performance relative to solving the unconstrained problem (5), since, as we have learned from Figures 2 and 3, the screened set Ê(ζ) contains many edges in Ẽ^c ∩ supp(V). In both settings, solving the constrained optimization problem leads to substantial computational improvements.

Proofs of theoretical results. In this section, we prove Theorem 1. In Section 4.1, we review an important property of the Hawkes process, the Wiener-Hopf integral equation. In Section 4.2, we list three technical lemmas used in the proof of Theorem 1. Theorem 1 is proved in Section 4.3. Proofs of the technical lemmas are provided in the Appendix.

The Wiener-Hopf integral equation. Recall that the transfer functions ω = {ω_{j,k}}_{1≤j,k≤p} were defined in (2), the cross-covariances V = {V_{j,k}}_{1≤j,k≤p} were defined in (7), and the mean intensities Λ = (Λ_1, ..., Λ_p)^T were defined in (6). If the Hawkes process defined in (2) is stationary, then for any Δ ∈ R+, the quantities V, ω, and Λ jointly satisfy an integral equation, (13). Equation (13) belongs to a class of integral equations known as the Wiener-Hopf integral equations.

Technical lemmas. We state three lemmas used to prove Theorem 1, and provide their proofs in the Appendix. The following lemma is a direct consequence of (13) and our assumptions. Recall that [0, b] is a superset of supp(ω_{j,k}), introduced in Assumption 3. Lemma 1. Under Assumptions 1-3, the elements of V are non-negative and uniformly bounded. The next lemma shows that the cross-covariance is Lipschitz continuous, given the smoothness assumption on ω_{j,k} (Assumption 3(c)). We will use this lemma in the proof of Theorem 1, in order to bound the bias of the kernel smoothing estimator (8). Recall that s, the maximum node in-degree, was defined in (4). Lemma 2. Under Assumptions 1-3, the cross-covariance function is Lipschitz continuous, with Lipschitz constant θ_1 s. Recall that the bandwidth h was defined in (8). The final lemma, Lemma 3, is a concentration inequality on the estimated cross-covariance.

Discussion. In this paper, we have proposed a very simple procedure for screening the edge set in a multivariate Hawkes process. Provided that the process is mutually-exciting, we establish that this screening procedure can lead to a very small screened edge set, without incurring any false negatives.
In fact, this result holds under a subset of the conditions required to establish model selection consistency of penalized regression estimators for the Hawkes process (Wainwright, 2009; Hansen, Reynaud-Bouret and Rivoirard, 2015). Therefore, this screening should always be performed when estimating the graph for a mutually-exciting Hawkes process. The proposed screening procedure boils down to screening pairs of nodes by thresholding an estimate of their cross-covariance. In fact, this approach is commonly taken within the neuroscience literature, with the goal of estimating the functional connectivity among a set of p neuronal spike trains (Okatan, Wilson and Brown, 2005; Pillow et al., 2008; Mishchencko, Vogelstein and Paninski, 2011; Berry et al., 2012). Therefore, this paper sheds light on the theoretical foundations of an approach that is often used in practice.

A.1. Proof of Lemma 1. Proof. First, we observe that, if V_{j,k} is non-negative for all j and k, then ω_{j,l} * V_{l,k} is non-negative for any j, l, k. Under Assumption 1, we know that (13) holds, and the desired bound follows from (13), where the inequality uses Assumption 2. We now show that the elements of V are non-negative, i.e., V_{l,k}(Δ) ≥ 0 for 1 ≤ l, k ≤ p and Δ ∈ R. Recall from the definition (7) in the main paper that V_{l,k}(Δ) can be written in terms of conditional expectations of the increments of the process, where the second equality follows from (23). In this proof, we use the Stieltjes integral to rewrite λ_l(t) in (2) in integral form (24). Plugging λ_l(t) from (24) into (22), using the cluster representation of the Hawkes process (see, e.g., Hawkes and Oakes (1974)), and rearranging the terms gives (25). Next, we rewrite (25) by taking the conditional expectation of dN_k or dN_m as in (23). When Δ′ < Δ, we condition dN_m on the history; when Δ′ > Δ, we condition dN_k on the history up to t − Δ. These cases are treated separately. In the first case, for each integral in the summation, expanding λ_m(t) from its definition in (2), and expanding λ_k and Λ_k, yields a non-negative quantity (26), by the nature of the mutually-exciting process. Similarly, for Δ′ ≥ Δ, we obtain (27). Applying both (26) and (27) to (25) shows that V_{l,k}(Δ) ≥ 0.

A.2. Proof of Lemma 2. Proof. For any Δ ≥ 0, the integral equation (13) gives an expression for V_{j,k}(Δ). For any x, y ≥ 0, we can therefore decompose the difference V_{j,k}(x) − V_{j,k}(y) into a term I and terms II_l (30), where the last inequality holds since ω_{j,l} ≡ 0 for l ∉ E_j. For I, we know from Assumptions 2 and 3(c) that it is of order |x − y| (31). For II_l, we can expand the convolution. Without loss of generality, we consider only the case x ≥ y, and decompose the integrals into parts on subintervals (32), where we use Assumption 3(c) in the second inequality, Assumption 2 in the third inequality, and the boundedness of ω_{j,l} from Assumption 3(c) in the last inequality. Recalling that x ≥ y, and plugging (31) and (32) into (30), gives the claimed Lipschitz bound, where we set θ_1 ≡ θ_0 Λ_max + b θ_0 V_max + 2C V_max. Note that the last inequality holds as long as s ≥ 1. (The result also holds if s = 0: in this case, the second term in (30) is zero for every j, and the bound (31) suffices.)

A.3. Proof of Lemma 3. Recall that the estimator of the cross-covariance (8) takes the form of a kernel smoother. The proof of Lemma 3 uses the following result, which is based on Proposition 3 of Hansen, Reynaud-Bouret and Rivoirard (2015); for completeness, we provide its proof in Section A.4. Lemma 4. Suppose that Assumption 1 holds. Then the concentration bounds (34) and (35) hold, where c_4, c_5, and c_6 are constants. We are now ready to prove Lemma 3. Proof. First, we decompose the deviation V̂_{j,k} − V_{j,k} into a stochastic term and a bias term, where we use the definition of V in the third equality.
Using the fact that the kernel K(·/h) is supported on [−h, h], we can bound the bias term, where the first inequality follows from Lemma 2.

A.4. Proof of Lemma 4. Lemma 4 follows directly from the proof of Proposition 3 in Hansen, Reynaud-Bouret and Rivoirard (2015). The only difference is that we want a polynomial bound on the deviation, whereas Hansen, Reynaud-Bouret and Rivoirard (2015) consider a logarithmic bound. For completeness, we state the proof of Lemma 4 below, but note that it is almost identical to the proof of Proposition 3 in Hansen, Reynaud-Bouret and Rivoirard (2015). We refer interested readers to the original proof in Section 7.4.3 of Hansen, Reynaud-Bouret and Rivoirard (2015) for more details. Throughout this section, we assume that N ≡ (N_1, ..., N_p)^T is defined on the full real line. We first state some notation that is used only in this section. 1. Following Hansen, Reynaud-Bouret and Rivoirard (2015), we use C^{(i)}_{a_1,a_2,...} to denote a constant that depends only on a_1, a_2, ...; the superscript i indicates that this is the ith constant appearing in the proof. 2. Without loss of generality, we assume that supp(ω_{j,k}) ⊂ (0, 1], as in Hansen, Reynaud-Bouret and Rivoirard (2015). 3. As in Hansen, Reynaud-Bouret and Rivoirard (2015), we introduce a function Z(N) such that Z(N) depends only on {dN(t′), t′ ∈ [−A, 0)}, and there exist two non-negative constants η and d for which Z satisfies the growth condition required in (35). 4. We also introduce the (time) shift operator S_t, so that Z ∘ S_t(N) depends only on {dN(t′), t′ ∈ [−A + t, t)}, in the same way as Z(N) depends on the points of N in [−A, 0). We are now ready to prove the lemma. When proving the bound (34), we discuss only the case j ≠ k; the proof for the case j = k follows from the same argument and is thus omitted. Proof. In this proof, we will consider a probability bound for the event that ∫ (Z ∘ S_t(N) − E(Z)) dt ≥ u, where u is defined, for some κ ∈ (0, 1) to be specified later, in (43). Note that, by applying the bound to −Z(·), we can obtain a bound for |∫ (Z ∘ S_t(N) − E(Z)) dt|. To complete the proof, we will verify the statements (34) and (35) by considering some specific choices of Z(·). For any positive integer k such that x ≡ T/(2k) > A, we can split the integral into 2k blocks of length x, where the resulting inequality follows from the stationarity of N. As in Reynaud-Bouret and Roy (2006), let {M^x_q}_{q=1}^∞ be a sequence of independent Hawkes processes, each of which is stationary with intensities λ(t) ≡ (λ_1(t), ..., λ_p(t))^T; see Section 3 of Reynaud-Bouret and Roy (2006). The resulting bound involves T_{e,q}, the time to extinction of the process M^x_q. The extinction time T_{e,q} is introduced in Sections 2.2 and 3 of Reynaud-Bouret and Roy (2006); roughly speaking, it is the last time at which there is an event for the Hawkes process with intensity λ(t) of the form (2), with background intensity μ ≡ (μ_1, ..., μ_p)^T set to 0 for t ≥ 0. Since the T_{e,q} are identically distributed, we can focus on a single T_{e,q}. Denote by a_l the ancestral points with marks l and by H^l_{a_l} the length of the corresponding cluster whose origin is a_l. Then, by the exact argument on page 48 of Hansen, Reynaud-Bouret and Rivoirard (2015), there exists a constant C^{(1)}_A depending on A such that if we take k = C^{(1)}_A T^κ, for some κ ∈ (0, 1) to be specified later, the corresponding probability is bounded by an exponential term, where c_4 is a constant. Note that x = T/(2k) ≈ T^{1−κ} is larger than A for T large enough (depending on A). Now, note that the event T ≡ {T_{e,q} ≤ T/(2k) − A, for all q = 0, ..., k} depends only on the process N.
We will first find a probability bound for the first term in (45). In other words, we will show that, given the event T, the bound (49) holds. Consider the measurable events Ω_q, defined by requiring that sup_t M^x_q|_{[t−A,t)} ≤ Ñ, where Ñ is a constant that will be chosen later and M^x_q|_{[t−A,t)} represents the number of points of M^x_q lying in [t − A, t). Let Ω = ∩_{0≤q≤k−1} Ω_q. We have P(Ω^c) ≤ Σ_q P(Ω^c_q), where each P(Ω^c_q) can be easily controlled. Indeed, it is sufficient to split [2qx − A, 2qx + x] into intervals of size A (there are about C^{(2)}_A T^{1−κ} of these) and require the number of points in each sub-interval to be smaller than Ñ/2. By stationarity, and then using Proposition 2 in Hansen, Reynaud-Bouret and Rivoirard (2015) with u = Ñ/2 + 1/2, we obtain a bound that decays exponentially in Ñ (50). Note that this control holds for any positive choice of Ñ. Hence, by taking Ñ = C_A T^{1−κ} for C_A large enough, the right-hand side of (50) is smaller than C^{(2)}_A T^{1−κ} exp(−c_4 T^{1−κ}). It remains to obtain the rate of D ≡ P(Σ_q F_q ≥ u/2 and Ω). Note that if sup_t M^x_q|_{[t−A,t)} ≤ (l+1)Ñ for an integer l, then |F_q| ≤ x d[(l+1)^η Ñ^η + 1] + x E(Z). Hence, cutting Ω^c_q into slices of the type {lÑ < sup_t M^x_q|_{[t−A,t)} ≤ (l+1)Ñ} and using (50), we control the contribution of each slice. In the same way, following Hansen, Reynaud-Bouret and Rivoirard (2015), we can write a Bernstein-type bound with z_b ≡ x d[Ñ^η + 1] + x E(Z) = C_{η,A} d T^{(1−κ)(1+η)}. Then, by stationarity, with σ^2 ≡ E[(Z(N) − E(Z))^2], going back to (51) and applying (52), we obtain a bound using the fact that log(1 + u) ≤ u. One can choose c_6 in the definition (43) of u (not depending on d) such that u/2 − k z_1 ≥ √(2k z_v z) + (1/3) z_b z for some z = c_4 T^{κ−2η(1−κ)}. One can then choose constants (as in the proof of the Bernstein inequality in Massart (2007), page 25) to obtain a bound on the right-hand side of the form e^{−z}. We can then choose c_4 large enough, depending only on η and A, to guarantee that D ≤ e^{−z} ≤ c_5 exp(−c_4 T^{1−κ}). In summary, we have shown that, given the event T, the bound (49) holds. With a slight abuse of notation, letting c_5 = max(c_5, C_A) gives (49). To complete the proof, we apply the concentration inequality (49) with some specific choices of Z(·).
LigSearch: a knowledge-based web server to identify likely ligands for a protein target LigSearch is a web server for identifying ligands likely to bind to a given protein. It can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch. Introduction Over the last few years a number of public small-molecule databases have been established, each with its own focus. The main databases are KEGG (Kotera et al., 2012), BRENDA (Scheer et al., 2011), ChEMBL (Gaulton et al., 2012), ChEBI (de Matos et al., 2012), ZINC (Irwin et al., 2012) and PubChem (Bolton et al., 2008). The KEGG small-molecule database focuses on substrates and products found in metabolic pathways and contains about 17 000 molecules. BRENDA is a collection of enzyme functional data and, as such, contains information about the small molecules (~175 000) involved in enzymatic reactions. In comparison, the ChEMBL database contains ~1.4 million bioactive, drug-like small molecules as well as data relating to various molecular properties such as logP and Lipinski parameters. ChEBI is an ontology-based dictionary of ~34 000 biologically interesting small molecules. ZINC complements these databases by providing a free database of ~21 million commercially available small molecules. The PubChem database contains information on the biological activities of small molecules as derived from the various NIH databases and contains data on over 100 million substances. Finally, the Worldwide Protein Data Bank (wwPDB; Berman et al., 2003) holds over 64 000 three-dimensional structures of protein-ligand complexes and hence is an especially rich source of information on the binding of small molecules to proteins. Given that a protein-ligand complex can provide valuable data on both the binding site of a protein and its biochemical function, crystallographers often need to identify small molecules that might bind to their protein. Rather than use trial and error, which is expensive and time-consuming, it is far better to identify potential ligands prior to the start of the experiments. However, even identifying likely molecules can involve a great deal of time and effort. For this reason, we have developed LigSearch, a web server that automates the process of identifying potential ligands for a given protein. The method uses sequence information and so is suitable for proteins of known and unknown three-dimensional structure alike. It should be noted that the aim of LigSearch is not to identify a protein's binding site, although in some cases this may be a spin-off. There are plenty of methods that do this already, some using structural information, some using sequence information and others using a combination of both. The server merely aims to use various existing resources to identify small molecules that are likely to bind to a given protein. It then clusters the results, grouping the molecules by their similarity, and ranks the clusters and the molecules within each cluster using a scoring scheme that aims to place the more promising hits nearer the top of the output. Methods The LigSearch pipeline is shown schematically in Fig. 1 and is described in more detail below. The user can submit a protein sequence (either via a UniProt ID or a pasted sequence) or a protein structure (via a PDB code or an uploaded PDB file), from which the sequence is extracted. Results are emailed to the user in the form of a password-protected link to a web page of ranked ligand hits.
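The pipeline stages described in the following subsections can be summarized in code. The Python sketch below is purely schematic: every helper is a dummy stub standing in for a real service (BLAST, the ChEMBL/ChEBI/KEGG web services, the PDB domain search, SMSD clustering), and none of the names correspond to an actual LigSearch API.

```python
# All helpers are placeholder stubs so the sketch runs; real implementations
# would call the external services named in the text.

def blast_swissprot(seq, top_n=20):
    return [f"P{i:05d}" for i in range(top_n)]          # dummy UniProt IDs

def known_ligands(uniprot_id):
    return {f"ligand_of_{uniprot_id}"}                  # ChEMBL/ChEBI/KEGG hits

def pdb_ligands_with_scores(seq, relatives):
    return {"HEM": 12, "ATP": 7}                        # ligand -> interaction score

def cluster_by_similarity(ligands, cutoff=0.4):
    return [sorted(ligands)]                            # one dummy cluster

def ligsearch(seq):
    relatives = blast_swissprot(seq)                    # query + closest relatives
    ligands = set()
    for uid in relatives:                               # database associations
        ligands |= known_ligands(uid)
    scored = pdb_ligands_with_scores(seq, relatives)    # PDB superposition step
    clusters = cluster_by_similarity(ligands | set(scored))
    # Rank clusters by the best interaction score among their PDB ligands.
    return sorted(clusters, key=lambda c: -max(scored.get(l, 0) for l in c))

print(ligsearch("MKTAYIAKQR"))
```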
Sequence searches The submitted protein sequence is first searched against the curated entries in UniProtKB/Swiss-Prot (The UniProt Consortium, 2012) using BLAST (Altschul et al., 1990). The top 20 matched sequences represent the query sequence plus a set of its closest relatives. The UniProt identifiers for each of these 20 hits are searched against ChEMBL, ChEBI and KEGG, using their respective web services, to retrieve the small-molecule compounds that these databases have associated with each of the given proteins. For the most part, these data come from the scientific literature. ChEMBL search results are filtered to remove compounds having no binding constants and those with an IC50 value of more than 100 nM. ChEBI search results are filtered to exclude all compounds with fewer than four atoms and/or a molecular weight of less than 50 Da. The results from the three searches are combined and duplicates are removed, giving a list of compounds known to be associated with the query sequence and/or its closest relatives. Searches against the PDB The original sequence is then searched against the protein sequences in the PDB. To improve the chances of successful matches, and to increase the number of hits, the sequence is first broken up into regions that are likely to correspond to structural or sequence domains. This is performed as follows. The top 20 BLAST hits found above are located in the Gene3D database (Lees et al., 2012), which primarily identifies likely structural domains in 15 million UniProt sequences. Gene3D is compiled by deriving hidden Markov models (HMMs) to map sequences to the protein structural domains defined in the CATH domain database (Sillitoe et al., 2013). Any long stretch of sequence that cannot be mapped to a CATH domain is instead mapped to a Pfam (Finn et al., 2010) sequence domain. From the Gene3D domain assignments for the 20 UniProt sequences, the query sequence is partitioned into predicted domains. Starting with the most similar sequence, the domains are applied to the query sequence provided that they do not overlap with a previously assigned domain. Once the domains have been assigned, any unassigned regions of over 100 residues are designated domains of unknown type. Linker sections between domains are split in half and each half is assigned to the domain adjacent to it. The sequence of each of these protein domains is then searched against the sequences in the PDB. Again, to increase the chances of successful matches, the database of sequences contains not just the full-length sequences of each protein chain in the PDB but also the sequences corresponding to each CATH domain. The latter are particularly important for matching 'split' domains, which might otherwise be missed by the sequence search. For example, PDB entry 1got consists of two CATH domains, the first being a domain comprising residues 6-57 and 177-331 and thus 'split' over two segments of the protein, while the second domain spans the region in between, residues 58-176. The search itself is performed using FASTA (Lipman & Pearson, 1985) and a multiple alignment is derived from the resultant pairwise sequence alignments. For each domain searched, the structure with the best match (based upon the maximum Smith-Waterman score produced by FASTA) is used as a reference structure and all others are superposed onto it using the main-chain atoms of equivalent residues in the alignment. Several iterations of superposition may be required to obtain an r.m.s.d.
below a 12 Å cutoff, with the residues having the highest r.m.s.d. values being removed at each iteration. This compensates for any imperfections in the sequence alignment. The superposition of all of the structures onto the reference structure brings with it any bound ligands. The net result is that the various ligands overlap in one or more binding sites of the superposed proteins. These ligands represent the predicted binding partners of the query protein. A scoring system is used to order them from the most to the least promising candidates. The scores take into account the numbers and types of interaction that each ligand makes with its protein partner and also the similarity between the residues that it interacts with and the corresponding residues in the query protein. Specifically, the ligand scores 1 for each hydrogen bond it makes to the protein, with the score being 2 if the interaction is with a similar residue to that in the query sequence, 3 if it is with an identical residue type and −1 if there is no equivalent (i.e. a gap in the alignment). The equivalent scores for nonbonded contacts are 0, 1, 2 and −1, respectively, although where there are several contacts to any given protein residue only one is counted (a minimal code sketch of this scheme is given below). The scoring system is somewhat arbitrary and is difficult to optimize without extensive experimental ligand-binding data; however, its aim is merely to provide a qualitative ranking of ligands according to how similar the residues they interact with are to the corresponding residues in the query sequence. Many ligands in the PDB interact with more than one protein domain, so a domain-based sequence search and superposition will miss any interactions that the ligand makes with other domains. This is taken into account by merging the results from the separate domain searches. Molecular similarity and result ranking The ligands identified by the three sequence searches and those from the search against the PDB are clustered according to their molecular similarity as calculated by SMSD (Small Molecule Subgraph Detector; Rahman et al., 2009). SMSD computes the maximum common subgraph between two small molecules and provides a similarity score based on the matching subgraphs. Clustering uses a similarity cutoff of 0.4 between the most distant members of the cluster and is solely based on molecular similarity and not on whether the molecules bind in the same protein binding site. The clusters obtained are ranked on the basis of the highest interaction score for the PDB ligands in each cluster. Within each cluster the PDB ligands are ranked by their interaction scores, while the hits from ChEMBL, ChEBI and KEGG, having no interaction score, are listed at the end of each cluster in decreasing order of the sequence similarity of their associated protein to the query sequence and then by their number of cross-references in UniChem (Chambers et al., 2013). UniChem is a nonredundant database of links between chemical structures and EMBL-EBI chemistry resources. The number of cross-references indicates in how many other databases the small molecule appears. This provides a qualitative measure of its likely 'importance'. OpenBabel (O'Boyle et al., 2011) is used to calculate various molecular properties such as logP and the polar surface area where such data are lacking. The ordered list of predicted ligands can be downloaded in a tab-separated file. Validation and benchmarking To validate the LigSearch pipeline, we chose a set of enzymes as our test group of proteins.
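As flagged above, here is a minimal sketch of the interaction-scoring scheme in Python; the contact records and the function name are hypothetical illustrations, not LigSearch's actual implementation:

# Hypothetical sketch of the LigSearch-style interaction score.
# Each contact is (kind, equivalence, residue_id), where kind is "hbond" or
# "nonbonded" and equivalence describes the query residue aligned to the
# contacted PDB residue: "identical", "similar", "different", or "gap".
HBOND_SCORES = {"identical": 3, "similar": 2, "different": 1, "gap": -1}
NONBOND_SCORES = {"identical": 2, "similar": 1, "different": 0, "gap": -1}

def ligand_score(contacts):
    """Sum per-contact scores; count at most one nonbonded contact per residue."""
    total = 0
    seen_nonbond_residues = set()
    for kind, equivalence, residue_id in contacts:
        if kind == "hbond":
            total += HBOND_SCORES[equivalence]
        else:  # nonbonded contact: only one counted per protein residue
            if residue_id not in seen_nonbond_residues:
                seen_nonbond_residues.add(residue_id)
                total += NONBOND_SCORES[equivalence]
    return total

# Example: two hydrogen bonds (identical, similar) and two nonbonded
# contacts to the same residue (only one of which is counted).
contacts = [
    ("hbond", "identical", "ARG23"),
    ("hbond", "similar", "PHE66"),
    ("nonbonded", "identical", "TYR100"),
    ("nonbonded", "identical", "TYR100"),
]
print(ligand_score(contacts))  # 3 + 2 + 2 = 7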
For the most part, the molecules that bind to enzymes are known (the reactants, products and any cofactors) and hence provide a means of validating the LigSearch predictions. However, because of the very fact that the cognate ligands are known, they will inevitably be returned by the KEGG, ChEBI and ChEMBL searches and so such hits need to be discarded before analysing what is left. Furthermore, if the three-dimensional structure of any of the enzymes is known, this will bias the results returned from the searches against the PDB. To prevent this, we first used the Enzyme Structures Database, part of PDBsum (Laskowski, 2009), to select a data set of enzymes which have no three-dimensional structure in the PDB. This gave 3431 EC classes (as of July 2013). From these, we selected only those enzymes whose cognate molecules were given in the ENZYME database (Bairoch, 2000) and had an associated .mol file in the Enzyme Structures Database. This was to ensure that the correct answers (i.e. the substrates and/or cofactors) were known and could be compared against the molecules returned by LigSearch. The result was a set of 2334 enzymes. A list of proteins belonging to each of these enzyme classes was then extracted from UniProtKB/Swiss-Prot (i.e. the reviewed part of UniProtKB). The 2334 enzyme classes encompassed 195 754 UniProtKB sequences. To select a manageable data set, sequences were randomly selected from this list, ensuring that no EC class was represented more than once, to give 200 proteins. The number of identifiable reaction molecules associated with this set was 620. The search sequences and reaction molecules are listed in Supplementary Table S1, together with the results of the searches, as described below. Firstly, we analysed the results returned by the searches against the PDB. For each of the 200 protein sequences, the PDB ligands returned by LigSearch were compared against the protein's cognate molecules using the SMSD program. For 102 (51%) of the proteins at least one of the molecules suggested by LigSearch was a perfect match to one of these cognate molecules. In a further 18 cases a molecule with a match score of 0.8 or higher to a known substrate was identified. Thus, for 120 of the 200 enzymes (60%) an identical or very similar molecule to a known binder was identified in the PDB (see Fig. 2a). These were not merely trivial matches from close homologues. Just two of the 200 came from a protein with sequence identity higher than 65% to the query sequence. Over 50% came from proteins with a sequence identity of 30% or less (see Fig. 2b). As described above, the LigSearch score for each PDB hit reflects the similarity between the residues interacting with the ligand in the PDB complex and the corresponding residues in the query protein. The higher the score, the more equivalent interactions with identical or similar residue types are possible. Indeed, the best matches in the enzyme data set tend to have the highest LigSearch scores (Fig. 2c), so the scores do provide a guide to which molecules are more likely to bind to the query protein. From the results in Fig. 2(c) it would appear that ligand scores higher than around 10-15 tend to be associated with the correct answers. Secondly, to test the results returned by the ChEBI, ChEMBL and KEGG searches, we took each cognate molecule in turn and counted how many other molecules were returned in the same cluster as the cognate molecule. Fig. 2(d) shows the results.
For 22 (11%) of the enzymes none of the molecules returned by the ChEBI, ChEMBL and KEGG searches were similar enough to any of the cognate molecules to be in the same cluster. However, for the remaining 178 (89%) of the enzymes at least one of the small molecules returned was similar to one of the cognate ligands. Figure 2. Validation results for LigSearch runs on 200 randomly selected enzymes with no three-dimensional structural model in the PDB. (a) Histogram of the molecular-similarity scores for the closest PDB ligand match, as computed by the SMSD program, to any of the enzyme's cognate ligands. (b) Histogram of the sequence identities between the query enzyme sequence and the PDB protein from which the best ligand match has a similarity of 0.8 or greater to one of the cognate ligands. (c) Histogram of the LigSearch scores for the best matches to cognate ligands. The counts are grouped into four sets according to the similarity score, s, of the best-matching molecule. Lowest similarity scores (s < 0.7) are shown in blue, scores 0.7 ≤ s < 0.8 are shown in green, scores 0.8 ≤ s < 0.9 are shown in orange and closest matches with s ≥ 0.9 are shown in red. (d) Histogram of counts of molecules with similarity s ≥ 0.8 to at least one of the enzyme's cognate ligands as returned by LigSearch for the non-PDB hits. The cognate molecules themselves are, of course, excluded from the results. Together, the validation study suggests that in the majority of cases the answers returned by LigSearch include molecules that are highly likely to bind, owing to their high similarity to known binders, and these molecules tend to be those that score highly using the LigSearch interaction score. Results To demonstrate the usefulness of the system in practice, we obtained ligand-testing data from one of the crystallographers at the Midwest Center for Structural Genomics (MCSG). Her project had necessitated a search for candidate molecules to cocrystallize with thymidylate synthase from Staphylococcus aureus (UniProt ID P65248). Using results from literature mining, she had identified a number of potential compounds in February 2012. Of these, 26 were selected and 13 were used in crystallization trials (E. Filippova, personal communication). Table 1 lists the 26 compounds. The structures of two protein-ligand complexes were eventually solved and deposited in the PDB as entries 4dwj and 4eaq in February and March 2012, respectively. In fact, 4dwj was a trivial case as the ligand selected had already been solved in complex with the same protein (PDB entries 2ccg and 2ccj). We submitted this protein sequence to LigSearch to compare the hits returned against the molecules that had been manually compiled. The PDB contains many thymidylate synthase structures from various organisms, so it was not surprising that LigSearch returned many hits. All structures solved after February 2012 were discarded in order to present the results as they would have been at the time of the original study. In all, 62 unique ligands were matched in the PDB. Additionally, a further 126 unique molecules were obtained from ChEBI. In this example, no hits were returned by the ChEMBL and KEGG searches. The clustering of all of the candidate molecules by SMSD resulted in 47 separate clusters, five of which contained a single metal ion. Fig. 3 shows the highest-scoring members of each of these 47 clusters plotted using multi-dimensional scaling on the basis of their all-by-all similarities.
The top-scoring clusters, ranked 1, 2 etc., tend to group in the bottom right-hand corner of the plot. The metals and various very small molecules are grouped at the bottom left. Most of the 26 compounds from the manual selection exercise listed in Table 1 were identified in the LigSearch output and indeed fell into three of LigSearch's clusters: 1, 3 and 15, as shown in the table. The top-scoring ligand in each of these three clusters is depicted in Fig. 4, showing the atoms that interact with the protein. In fact, LigSearch identified even higher scoring molecules than those that had been selected by hand, but these are not shown in the table. LigSearch missed two of the 26 manually selected compounds, but in both cases the compounds are substructures of other molecules returned by LigSearch and thus in effect are not significant omissions. The rightmost columns of the table show the PDB entries from which the ligands came and the sequence similarity of each protein to the query sequence. Many of the latter lie in the 20-30% range, suggesting the predicted ligands come from distantly related proteins. Their high interaction scores, however, are suggestive of conservation in the binding site and indicate that there is a strong chance the ligands may bind to the query protein. Table 1. The 26 molecules manually selected in February 2012 for crystallization trials of S. aureus thymidylate synthase (UniProt ID P65248) and used here for testing the LigSearch results. The molecules that were trialled are shown in bold and the two from which crystal structures were obtained are annotated with footnotes. The molecules have been grouped here by the three LigSearch clusters that they occurred in: LigSearch clusters 1, 3 and 15. The rightmost columns show details of the PDB entry from which the LigSearch match came. [The table's columns are Candidate compound, LigSearch PDB code (chain), Score and Sequence identity (%); the first compound listed under cluster 1 is P1-(5′-adenosyl)-P5-(5′-thymidyl)pentaphosphate. The remaining rows are not reproduced here.] Fig. 5 shows an example of such a case. It compares the protein-ligand interactions for the same 3′-azido-3′-deoxythymidine 5′-monophosphate ligand bound to S. aureus thymidylate synthase in PDB entry 4eaq and human thymidylate synthase in PDB entry 1e99. Despite the low overall sequence identity between the two proteins (22.6%), there are several conserved residues in the binding site that make identical interactions (three arginines and one phenylalanine) and other interactions made by similar residues in the same three-dimensional locations in the two structures. Cases such as this demonstrate that matches to even distantly related proteins can provide valid predictions about which ligands are worth considering. Discussion LigSearch is a convenient tool for identifying possible ligands for a given protein and hence can provide crystallographers with a list of candidate molecules for crystallization trials. It reduces the amount of manual searching and literature mining by providing the results automatically and conveniently clustering the resultant molecules into groups according to molecular similarity. Arguably, the most useful matches are those that come from protein-ligand complexes in the PDB. Even if the proteins are distant relatives, the match can identify likely binding residues and indicate where the ligand might bind. The clusters represent a set of molecules that are all at least 40% similar to each other.
This does not imply that they bind in the same binding site in a protein, as the clustering is performed based on molecular similarity and not three-dimensional location. These clusters provide a very good way to identify any potential molecular frameworks that might bind to the protein. In addition, the ChEBI and ChEMBL results provide a good enrichment of the main framework in each cluster. In this example, one of the clusters has thymidine 5′-diphosphate as the highest scoring compound. From the ChEBI and ChEMBL results, another six variants such as 5′-thymidylic acid and thymidine triphosphate were added. When looking at all of the clusters found for our example, it becomes clear that there are a large number of different molecular frameworks present. The highest scoring clusters tend to contain the substrate/product molecules as well as various versions of these molecules. A number of clusters consist of molecules commonly found in crystallization solutions, such as sulfate, phosphate and acetic acid. These molecules are difficult to exclude as they might be involved in the protein function. Some of the clusters contain highly reactive/unstable molecules such as phosphorus pentachloride and 3H-phosphole. Owing to the nature of the ChEBI and ChEMBL databases and their annotation, these molecules will be included in the results but will usually cluster together. This makes it easier for the crystallographer to disregard them. Figure 3. A plot of the top-scoring molecules in each of the 47 clusters returned by LigSearch for UniProt entry P65248. The molecules have been laid out using multi-dimensional scaling on the basis of their all-by-all similarities. Thus, similar molecules tend to be grouped together. The labels show the cluster number in square brackets and the PDB Het Group three-character name or ChEBI identifier. Red labels correspond to molecules from matches to PDB entries, while blue labels are molecules returned by ChEBI searches. The molecular diagrams were plotted using ChemDraw (http://www.cambridgesoft.com). The ordering of the PDB ligand hits within each cluster relies on a somewhat arbitrary scoring scheme. The rationale for the weights assigned to each interaction seems reasonable, although it would require a great deal of experimental testing to try to optimize it. Possibly it cannot, and maybe it need not, be optimized. Larger ligands tend to give a better score as they usually make more interactions with the protein, but this in itself may be suggestive of a good candidate for binding to the query protein. We welcome any collaborations willing to help with further experimental testing and ranking improvements. Some of the searches return results for highly volatile or unstable compounds as well as compounds known to be insoluble in water. Hence, one of the improvements that is planned for the future is a more chemistry-aware filter. The implementation of a smart rule-based system, in combination with other parameters such as logP, would improve the results by removing molecules that are unlikely to be of any practical or biological use. An additional improvement, currently in the planning stages, is that all hits found should be screened against the ZINC database, checking whether the compound is purchasable. Another potential use for LigSearch might be to tackle the 'unknown ligand problem' in which a protein structure solved by X-ray crystallography is found to have mystery density belonging to some unknown molecule in the binding site.
By submitting the sequence and/or structure of such a protein to LigSearch, one could obtain a list of candidate molecules that might account for the unidentified density. Figure 4. Three cluster representatives for the molecules listed in Table 1. The molecules are annotated according to the interactions that they make with the protein in the top-scoring PDB entry for the cluster. Atoms making hydrogen bonds to protein are depicted with spokes radiating from them, while hydrophobic interactions have a grey circle around them (none in this example). The colour of the spokes corresponds to the similarity of the residue to which the hydrogen bond is made and the corresponding residue in the query protein (which in this case is thymidylate synthase from S. aureus; UniProt ID P65248): red for identical residue type, orange for similar and dark grey for different. The images are provided in the results section for every query with PDB hits. The molecules and the PDB entries from which the data come are (a) P1-(5′-adenosyl)-P5-(5′-thymidyl)pentaphosphate (PDB entry 4tmk), (b) thymidine 5′-diphosphate (PDB entry 3hjn) and (c) 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (PDB entry 2pir). Figure 5. A schematic diagram of the protein-ligand interactions in two distantly related proteins: (a) thymidylate synthase from S. aureus (PDB entry 4eaq) and (b) human thymidylate synthase (PDB entry 1e99). The ligands (blue bonds) in both are identical: 3′-azido-3′-deoxythymidine 5′-monophosphate. Equivalent protein residues in the two plots are circled in red and occupy the same positions in each plot: for example, Glu37 is equivalent to Phe42, Phe66 is equivalent to Phe72 and Tyr100 is equivalent to Phe105. Hydrogen bonds are depicted by green dotted lines and labelled with their length in Å, while hydrophobic interactions are represented by red arcs whose spokes radiate towards the ligand. The diagram was generated using LigPlot+ (Laskowski & Swindells, 2011).
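As a closing illustration, the similarity clustering described in the Discussion can be approximated as follows; note that LigSearch itself uses SMSD maximum-common-subgraph similarity, whereas this sketch substitutes RDKit Morgan-fingerprint Tanimoto similarity (an assumption made purely for illustration):

# Illustrative approximation of LigSearch's 0.4-cutoff similarity clustering.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

smiles = ["CCO", "CCN", "c1ccccc1", "c1ccccc1O"]  # toy molecules
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

def similar(i, j, cutoff=0.4):
    return DataStructs.TanimotoSimilarity(fps[i], fps[j]) >= cutoff

# Greedy complete-linkage clustering: a molecule joins a cluster only if it
# is at least `cutoff`-similar to every current member (the "most distant
# members" criterion described in the Methods).
clusters = []
for i in range(len(mols)):
    for cluster in clusters:
        if all(similar(i, j) for j in cluster):
            cluster.append(i)
            break
    else:
        clusters.append([i])
print(clusters)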
5,931.8
2013-11-19T00:00:00.000
[ "Biology", "Computer Science" ]
Effect of the Molecular Weight of Carboxymethyl Cellulose on the Flotation of Chlorite The present study aimed to investigate the mechanism by which carboxymethyl cellulose (CMC) influences the flotation of fine chlorite. To this end, a series of flotation tests, sedimentation tests, and microscope analyses were conducted. Flotation tests revealed an inverse relationship between particle size and the recovery of chlorite, indicating that finer particles exhibited higher recovery rates. Moreover, it was observed that the recovery of fine chlorite was significantly associated with the water recovery (the proportion of water entering the floated product relative to the weight of water in the initial flotation suspension) and with the frother type. Based on these findings, it can be inferred that froth entrainment may constitute a crucial component of the recovery mechanism of fine chlorite. Thus, reducing froth entrainment (the phenomenon of hydrophilic minerals entering floated products through foam water) is the key to depressing chlorite flotation. Flotation tests indicate that fine chlorite recovered into froth products can be depressed effectively by CMC with a high molecular weight. The results of sedimentation tests and microscope analyses in the presence of CMC prove that CMC with a high molecular weight flocculates fine chlorite particles while CMC with a low molecular weight does not. It is suggested that the depression of chlorite flotation may be attributed to the reduction in entrainment resulting from the flocculation induced by CMC. Introduction As a prevalent magnesium silicate gangue mineral, chlorite is frequently associated with sulfides, such as copper-nickel sulfide [1,2]. Due to its low hardness, chlorite is susceptible to grinding, leading to the production of chlorite slimes which can negatively impact sulfide ore flotation via a phenomenon known as "slime coating" [3]. In addition, chlorite's inherent floatability can result in it reporting to the concentrate during flotation, thereby reducing the concentrate grade and causing downstream processing issues, including heightened smelting costs [4,5]. Based on the above, the deleterious effect of magnesium silicate gangue minerals on the flotation of sulfide ores is mainly attributed to the "slime coating" of magnesium silicate gangue minerals on sulfide surfaces and to the entry of magnesium silicate gangue minerals into the concentrate. Numerous additives have been explored to mitigate the adverse impact of magnesium silicate gangue minerals. Sodium hexametaphosphate has been found to be effective in inhibiting magnesium silicate gangue minerals; however, its excessive use can generate phosphorus wastewater and result in environmental concerns [6]. In addition, oxalic acid has been utilized to alleviate the adverse effect of magnesium silicate gangue minerals on sulfide flotation [7]. Unfortunately, oxalic acid's use is limited by its toxicity and potential health hazards upon ingestion. Thus, it is urgent to develop an ecological and efficient depressant for the flotation separation of sulfide from magnesium silicate gangue minerals. It has been reported that the primary recovery mechanism of fine magnesium silicate gangue mineral particles (i.e., those with a diameter less than 20 µm), including chlorite, is entrainment facilitated by water recovery. In addition, the entrainment of minerals may be correlated with particle size [8].
However, most studies focus on the interaction between silicate and sulfide minerals rather than on entrainment, which also has a great influence on the concentrate grade of sulfide [9][10][11][12]. Thus, it is also very important to investigate the entrainment of chlorite in order to devise effective strategies for its elimination. It is reported that the depression of silicate minerals from sulfide ore, especially when the entrainment of silicate minerals is serious, can be achieved with some polymers [13][14][15]. Carboxymethyl cellulose (CMC), the most commonly employed polysaccharide depressant, is known for its environmentally benign nature. Moreover, owing to its superior depressant performance, CMC finds extensive usage in the flotation separation of sulfide from silicate gangue minerals [16][17][18]. Many possible interaction mechanisms involving the possible contributions of electrostatic, chemical, hydrogen, and hydrophobic bonding between CMC and the surface of the magnesia-bearing mineral have been proposed [19,20]. Because the adsorbed amount of CMC depends on electrolyte concentration and pH, some studies have suggested that electrostatic interactions are involved in the adsorption process of CMC onto minerals [3,21,22]. Liu et al. believed that the nature of the interaction between mineral surfaces and natural polysaccharides, including CMC, is likely an acid/base interaction [23]. Fu et al. demonstrated that the adsorption of CMC on chlorite is significantly influenced by solution conditions [24]. The investigation conducted by Feng et al. revealed that the adsorption density of CMC onto chlorite was promoted by both copper ions and calcium ions; however, the underlying mechanisms of action for these two types of ions were found to differ [25]. In addition to silicate minerals, CMC also presents good depression performance for the flotation of other gangue minerals through selective adsorption [26][27][28]. CMC is known to be effective in reducing the floatability of minerals and its applications are diverse, with numerous systematic investigations having been conducted. However, the understanding of the various ways in which CMC influences chlorite of different particle sizes is currently inadequate, thereby impeding the broader application of CMC. Furthermore, it remains unclear whether the depressive effects of CMC are predominantly attributable to flocculation/dispersion [29]. Thus, the present study aims to explore the impact of flocculation/dispersion induced by CMC with varying molecular weights on the flotation of chlorite, with the ultimate goal of facilitating the separation of sulfide from magnesium silicate gangue minerals during flotation. Samples and Reagents The chlorite utilized in the entire experiment was sourced from Haicheng, Liaoning Province, China. The XRD analysis (Figure 1) and chemical analysis (Table 1) data confirmed its high purity, with only trace amounts of talc present. The samples were subjected to dry grinding and screened to obtain three distinct size fractions: −100 + 75 µm, −75 + 38 µm, and −38 µm, which were collected separately for subsequent analyses. CMC used for all tests was purchased from Aladdin Industrial Corporation, Shanghai, China. The molecular weights of the CMC used in this work were 90,000, 250,000, and 700,000; all three kinds of CMC had the same degree of substitution, 0.7.
Terpilenol, MIBC (methyl isobutyl carbinol), and hexanol were used as frothers and were all obtained from Tianjin Guangfu Fine Chemical Research Institute, Tianjin, China. Hydrochloric acid (HCl) and sodium hydroxide (NaOH) were employed as pH modifiers and were procured from Tianjin Kermil Chemical Reagents Development Centre, Tianjin, China. All the chemicals were of analytical grade quality. Stock solutions of CMC were prepared by dispersing a predetermined amount of solid into 100 mL of vigorously stirred cold distilled water; stirring was continued for about 30 min until the CMC powders were dissolved completely. The solutions were freshly prepared each day. The HCl stock solution was prepared by adding a known weight of 36% HCl solution to the appropriate amount of cold distilled water and stirring. The NaOH stock solution was prepared by adding a known weight of NaOH solid to the appropriate amount of cold distilled water and stirring until the NaOH powders were dissolved completely. Deionized double-distilled water was used for all experiments. Flotation Tests Flotation tests were performed using an XFG-type mechanical agitation flotation machine made by the Changchun Prospecting Machine Factory. In a typical single mineral flotation test, 2.0 g of mineral was added to 40.0 mL of distilled water, followed by conditioning. The pH of the mineral suspension was adjusted to the desired value using HCl or NaOH stock solution. Afterwards, CMC stock solution (if necessary) and a frother were added to the pulp and conditioned for 5 min and 1 min, respectively. Flotation was conducted for a duration of 4 min. Both the floated and sink products were collected, filtered, and subsequently dried before being weighed to facilitate the calculation of recovery. Each experiment was conducted in triplicate, and the average value was taken as the final result.
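The recovery bookkeeping used throughout these tests can be sketched as follows (a minimal illustration based on the definitions given in this paper; the numbers and function names are hypothetical):

# Hypothetical sketch of froth-flotation recovery calculations.
# Mineral recovery: floated (dry) mass over total (dry) mass.
# Water recovery: water reporting to the floated product over the
# water in the initial flotation suspension.

def mineral_recovery(floated_dry_mass_g, sink_dry_mass_g):
    total = floated_dry_mass_g + sink_dry_mass_g
    return 100.0 * floated_dry_mass_g / total

def water_recovery(floated_wet_mass_g, floated_dry_mass_g, initial_water_g):
    water_in_froth = floated_wet_mass_g - floated_dry_mass_g
    return 100.0 * water_in_froth / initial_water_g

# Example with made-up numbers: 2.0 g of mineral in 40.0 mL (~40 g) of water.
print(mineral_recovery(1.2, 0.8))        # 60.0 (%)
print(water_recovery(13.0, 1.2, 40.0))   # 29.5 (%)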
Sedimentation Tests Flocculation/dispersion of the chlorite was assessed via settling tests carried out using a graduated cylinder. An amount of 0.1 g of chlorite powder was conditioned in a 100 mL beaker at the desired pH for 5 min, following which CMC stock solution was added and gentle stirring was carried out for an additional 5 min using a magnetic stirrer. The suspension was then transferred to a 100 mL cylinder and the water level was adjusted to 100 mL using distilled water. The cylinder was stoppered, inverted twenty times, and then allowed to remain in an upright position for a fixed duration of 10 min. The suspension in the upper 25 mL of the cylinder was siphoned out and measured using a WGZ-3(3A) type scattering turbidimeter fabricated by the Shanghai Xinrui Instrument Company. The degree of flocculation/dispersion of the suspension was assessed based on the turbidity of the supernatant liquor; lower turbidity values were indicative of superior flocculation. Each test was repeated thrice, and the average value was considered as the final outcome. Microscope Analyses Visual examination of the flocculation/dispersion state of fine chlorite was performed using a Leica DM4800 polarized optical microscope. The procedure for the preparation of the slurry was identical to that mentioned in the sedimentation tests. A drop of slurry was dispensed onto a glass slide by pipette during the stirring of the slurry, following which the sample was examined using the microscope, which was fitted with a video camera. Figure 2 illustrates the variation in the flotation recovery of chlorite with different particle sizes as a function of pH. It can be observed that the flotation recovery of the fine fraction (−38 µm) without using any collector is dramatically higher than that of the two coarser fractions (−100 + 75 µm and −75 + 38 µm) over the entire range of pH values tested. Apparently, the recovery of chlorite increases as the particle size of chlorite decreases, which indicates that the recovery mechanism of fine chlorite particles may be froth entrainment. Similar results were also obtained by Pietrobon et al. in their research [8]. Li et al. [30] believed that during the flotation process of fine-grained minerals, hydrophilic minerals would be mechanically entrained into the concentrate, leading to a decrease in the concentrate grade, which has been a major issue in the flotation of fine-grained minerals. Kirjavainen et al. [31] studied the flotation of fine sericite and quartz in the absence of hydrophobic minerals and found that the entrainment of hydrophilic gangue minerals was influenced by the quality and shape of the particles. Smaller particle sizes corresponded with higher levels of entrainment.
When the particle size of the minerals was close to the colloidal particle size, the entrainment ratio was mainly determined by the particle size and was close to 1. The Recovery Mechanism of Fine Chlorite In the flotation process, fine-grained minerals and hydrophilic minerals may be entrained into the concentrate by foam water. It is well known that froth entrainment is closely linked to the water recovery (the proportion of water entering the floated product relative to the weight of water in the initial flotation suspension) during flotation. Figure 3 illustrates the flotation recovery of −38 µm chlorite as a function of water recovery. It shows that at the beginning of flotation the recovery of fine chlorite increases faster than the water recovery; thereafter, the recovery of fine chlorite increases essentially linearly with water recovery at an acceptable error level. Many previous studies have confirmed a correlation between the entrainment recovery of gangue and the water recovery of floated products in flotation [32]. Li et al. [30] proposed that the recovery of hydrophilic gangue caused by froth entrainment is linear in the water recovery of the concentrate, conforming to the following equation: Rg = e·Rw, where Rg is the recovery of hydrophilic gangue caused by froth entrainment (%), e is the entrainment factor of hydrophilic gangue, and Rw is the water recovery of the floated concentrate (%). The water recovery in flotation is determined by the froth's behavior, which is bound up with the properties of the frothers, and frothers almost do not change the hydrophobicity of the mineral. Thus, in order to verify the froth entrainment, the flotation of −38 µm chlorite was carried out with three types of frothers.
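As a side note, the linear entrainment model above can be illustrated numerically (the data are hypothetical; the sketch fits the entrainment factor e by least squares through the origin):

# Fit the entrainment factor e in Rg = e * Rw from hypothetical paired
# measurements of gangue recovery Rg (%) and water recovery Rw (%).
import numpy as np

Rw = np.array([5.0, 10.0, 20.0, 30.0])   # water recovery, %
Rg = np.array([3.1, 6.2, 11.8, 18.3])    # fine-chlorite recovery, %

# Least squares through the origin: e = sum(Rw*Rg) / sum(Rw^2)
e = np.sum(Rw * Rg) / np.sum(Rw ** 2)
print(f"entrainment factor e = {e:.3f}")
print("predicted Rg at Rw = 25%:", round(e * 25.0, 1))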
No collector was used in the flotation tests. The results are presented in Figure 4. It indicates that the flotation recovery of the fine chlorite rises in tandem with an increase in the frother concentration, a trend observed for all three frothers used in the study, namely terpilenol, hexanol, and methyl isobutyl carbinol (MIBC). Figure 4 also shows that the recovery with terpilenol as the frother is obviously higher than that with hexanol or MIBC, and the recovery is almost the same when the latter two are used as frothers. From Figures 3 and 4 we can conclude that the fine chlorite particles are recovered into froth products because of not only hydrophobicity but also froth entrainment. It is not surprising that the recovery of the fine chlorite is higher at a higher frother concentration, since froth entrainment is closely related to foam volume, and the foam is rich at a high frother concentration. In addition, it is reasonable that there are significant differences among the recoveries of fine chlorite under different frothers. It has been reported that terpilenol presents better foam stability, higher foam viscosity, and higher water recovery than MIBC when they are used as frothers, whereas hexanol, a six-carbon alcohol, is similar to MIBC in foaming ability [33,34]. That is, the flotation recovery of the fine chlorite should be higher under terpilenol than under MIBC because of the higher water recovery, and the recoveries under hexanol and MIBC should be the same in theory. The results presented in Figure 4 confirm this expectation: the recovery of fine chlorite is directly proportional to the water recovery, owing to froth entrainment in flotation. Influences of CMC on the Flotation of Chlorite with Different Particle Sizes The findings from the flotation tests conducted on chlorite with different particle sizes in the presence of CMC are illustrated in Figures 5-7. The results indicate that the influence of a given type of CMC, especially CMC with a low molecular weight, is not the same for chlorite of different particle sizes. Chlorite in all the particle size fractions can be almost fully depressed by the CMC with a high molecular weight of 700,000. Additionally, the depressing effect on the −100 + 75 µm and −75 + 38 µm chlorite caused by the CMC with a molecular weight of 90,000 is as strong as that brought by the CMC with molecular weights of 250,000 and 700,000. However, it is particularly noteworthy that, for the fine chlorite (−38 µm), the CMC with a low molecular weight of 90,000 is not an effective depressant, as evidenced by the dramatically high recovery even at the highest dosage of this CMC. In fact, as shown in Figure 7, CMC with a higher molecular weight gave a lower flotation recovery of the fine chlorite. Taking into account the flotation behavior of chlorite observed in the previous tests, it can be concluded that froth entrainment may contribute to the difficulty of depressing fine chlorite with low molecular weight CMC.
In addition, it is interesting that, in the flotation of the −38 µm chlorite (Figure 7), the two types of lower molecular weight CMC display a rise in recovery upon the addition of CMC at a low concentration. Correlation between Froth Entrainment and Flocculation Caused by CMC It has been mentioned above that the flotation recoveries of the fine chlorite (−38 µm) in the presence of the three types of CMC, with molecular weights of 90,000, 250,000, and 700,000, respectively, are drastically different (Figure 7), and that the differences may be generated by froth entrainment in the flotation. It is well known that recovery due to froth entrainment is also closely linked with fine particles. To investigate the influence of CMC molecular weight on the flotation of fine chlorite, a series of settling tests was carried out. The results are presented in Figure 8. They show that the CMC with a molecular weight of 90,000 disperses the fine chlorite. On the other hand, the CMC with a higher molecular weight (250,000 and 700,000, respectively) induces strong flocculation of fine chlorite, and the higher the molecular weight, the stronger the flocculation. In addition, Figure 8 indicates that the two types of lower molecular weight CMC have a strong dispersing effect at the lowest dosage. This is complementary to the rise in flotation recovery at the same concentrations presented in Figure 7. Figure 9, which is composed of four images taken with a polarized optical microscope under different conditions, provides direct evidence of the flocculation/dispersion state of the fine chlorite. The images confirm that, when conditioned without CMC or in the presence of the CMC with a molecular weight of 90,000, the fine chlorite particles are in a well-dispersed state. However, with the addition of the CMC with high molecular weights of 250,000 and 700,000, respectively, the apparent sizes of the fine chlorite particles are enlarged, which indicates that flocculation occurs on the fine chlorite particles. We can also see clearly that the CMC with a higher molecular weight results in stronger flocculation, as mentioned above.
These results reveal that the CMC with a low molecular weight hardly affects the settling behavior of the fine chlorite and is not an effective depressant, because it fails to reduce froth entrainment in flotation despite the fact that it can render the fine chlorite particles hydrophilic. On the other hand, the CMC with a high molecular weight, which flocculates the fine chlorite particles, gives a low flotation recovery of the fine chlorite because of the reduction of froth entrainment. Thus, the conclusion can be drawn that froth entrainment is an important factor in the difference between the low molecular weight CMC and the high molecular weight CMC in the depression of fine chlorite. In addition, the flocculation caused by the high molecular weight CMC is significant in reducing the froth entrainment of fine chlorite particles, which is consistent with the findings suggested by Liu et al. [23].
Conclusions This study systematically investigated the influence of CMC with different molecular weights on the flotation of chlorite. Flotation results indicate that the recovery of fine chlorite increases as particle size decreases, owing to entrainment through water recovery. A high molecular weight CMC is found to be more effective in depressing the flotation of fine chlorite than a low molecular weight one. Sedimentation tests and microscope analysis show that the flocculation/dispersion state plays an important role in the depression effect of CMC on the flotation of fine chlorite. A high molecular weight CMC can flocculate fine chlorite particles to reduce froth entrainment as well as render them hydrophilic, while a low molecular weight CMC fails to do so. It is suggested that the reduction of entrainment through CMC-induced flocculation is key to realizing the depression of fine chlorite flotation, which also provides a reference for the flotation of other fine minerals. Conflicts of Interest: The authors declare no conflict of interest.
6,459.6
2023-04-25T00:00:00.000
[ "Materials Science" ]
Early Detection of Cancer using Machine Learning (ML) Techniques I. INTRODUCTION As the sixth most prevalent malignancy, mouth cancer is a major cause of cancer-related morbidity and mortality globally. The incidence of mouth cancer is highest in Maharashtra. Early cancer discovery through clinical diagnosis results in earlier treatment, which reduces the risk of morbidity and mortality. Implementing a screening programme increases the likelihood of finding cancer early so that patients with the disease can receive treatment quickly. Since many diseases share the same clinical symptoms and scales, clinical diagnosis gathers information and features from the patient's history, which can complicate the diagnosis. Although oral cancer is treated with cutting-edge clinical techniques including surgery, radiation therapy, and chemotherapy, the mortality rate linked with mouth cancer has increased over the past 40 years. Tumours may be benign, premalignant, or malignant; malignant tumours are cancerous, and the malignancy of mouth cancer is the main cause of death. The mortality rates of oral cancer can be decreased by early assessment of precancerous lesions in the mouth. The dentistry profession continues to face challenges in the diagnosis of oral cancer, particularly in the detection, assessment, and treatment of early-phase changes or frank illness. It is difficult to predict oral leukoplakia (premalignant) and oral squamous cell carcinoma. One of the most fascinating and difficult tasks for doctors is making an accurate prognosis about how an illness will progress. Due to the difficulties in diagnosing clinical disorders, several specialists have looked to the medical and computer science fields for potential answers. Numerous researchers used various techniques, such as early-stage screening, and created novel ways for the early prediction of cancer therapy outcomes. In the realm of medicine, cutting-edge technologies are used, and the medical research community has access to the vast volumes of cancer data that have been gathered. Machine learning techniques are now a popular tool for medical researchers. Numerous machine learning techniques, including feature selection and classification, are frequently used in cancer detection. Machine learning techniques are applied to find patterns and relationships in large datasets. The paper is organized as follows: an overview of oral cancer is covered in Section II. The review of studies using machine learning techniques for the identification of oral cancer is presented in Section III. The methods of machine learning are covered in Section IV. The manuscript is concluded in Section V. II.
ORAL CANCER Oral cancer is a type of head and neck cancer that starts in the squamous cells that line the mouth, tongue, and lips. Most often, mouth cancer is first found after it has spread to the neck lymph nodes. Types of oral cancer include those that affect the lips, tongue, inner cheek lining, gums, mouth floor, and both the hard and soft palate. The following oral cancer signs should prompt early diagnosis and appropriate treatment: 1) red, white, or combined red and white patches on the lips or inside the mouth; 2) mouth bleeding; 3) swallowing problems or pain; 4) a lump in the neck. Because 70% of cases recur and result in death, treatments for oral cancer, such as surgery, radiation therapy, and chemotherapy, are often ineffective. If the lesion is not detected in a timely manner, therapy will not be successful, since the lesion is frequently disregarded and the patient presents after it has become incurable. The sixth to eighth most frequent cancer in the world is oral cavity cancer (OC), a malignant tumour on the lip or in the mouth. Such conditions lead to the development of pre-malignant lesions, and clinical screening techniques are used to identify morphologically altered tissue in which cancer is more likely to occur than in normal tissue. Such lesions may show epithelial dysplasia (ED) on histopathologic examination. Screening techniques are used to identify mouth cancer, or precancerous lesions that may result in mouth cancer, at an early stage when lesions are most easily removed and most likely to be cured. Vital staining, light-based detection systems, histological, cytological, and molecular analyses, imaging diagnostics, and onco-chip screening procedures are some examples. However, screening techniques have not been shown to be effective in saving lives, so doctors must overcome difficulties in the oral exam for oral cancer screening. Early oral cancer identification and diagnosis can increase patient survival and lower morbidity rates. As a result, modern computer science techniques are currently used for precise diagnosis. III. LITERATURE SURVEY In this study [1], a method for using an orthopantomogram to detect oral tumours is proposed. A novel mathematical morphological watershed approach is suggested to preserve edge characteristics in images, where the conventional watershed causes oversegmentation despite pre-processing. Marker-controlled watershed segmentation is used to segment tumours in order to prevent oversegmentation. In paper [2] a hybrid model is put forth that consists of two stages: the first uses the ReliefF-GA feature selection method to identify the best feature subset, and the second uses ANFIS classification to categorise patient survival a specific number of years after diagnosis. Two oral cancer datasets, with clinicopathologic and genomic markers respectively, were used to test the suggested predictive model. The suggested model was shown to perform better when both types of datasets are used, compared with other techniques such as logistic regression, support vector machines, and artificial neural networks. High-risk markers can be found using this prognostic model, which can also be utilized to assist physicians in the decision-support stage and more accurately forecast each patient's chance of surviving oral cancer. Researchers Wafaa K. Shams and Zaw Z.
In paper [2], a hybrid model is put forth that consists of two stages: the first uses the ReliefF-GA feature selection method to identify the best feature subset, and the second uses ANFIS classification to categorise patient survival a specific number of years after diagnosis. Two oral cancer datasets, with clinicopathologic and genomic markers respectively, were used to test the suggested predictive model, which was shown to perform better when both types of datasets are used than other techniques such as logistic regression, support vector machines, and artificial neural networks. High-risk markers can be found using this prognostic model, which can also be utilized to assist physicians at the decision-support stage and to forecast each patient's chance of surviving oral cancer more accurately. Researchers Wafaa K. Shams and Zaw Z. Htike [3] predict the potential emergence of oral cancer in OPL patients. They chose pertinent features from the gene expression array using Fisher discriminant analysis; Deep Neural Network (DNN), Multi-Layer Perceptron (MLP) with back propagation, Support Vector Machine (SVM), and Regularized Least Squares (RLS) are employed as classifier methods. Themis P. Exarchos and Konstantina Kourou [4] gave a survey of contemporary ML techniques used in the modelling of cancer progression; in this research, multiple supervised ML approaches, as well as a variety of input attributes and data samples, are used to discuss various predictive models.

Researchers K. Anuradha and K. Sankaranarayanan [5] conducted a thorough analysis of the various techniques used for the early identification of oral malignancies, comparing several cancer classification and identification techniques and covering every stage of the cancer detection algorithms. According to Shikha Agrawal and Jitendra Agrawal [6], classification of cancer is a hot research topic in the field of medicine; they provided an overview of various neural network methods. Convolutional neural networks, according to Hakan Wieslander, Gustav Forsli, and Ewert Bengtsson [7], have been shown to be reliable for image classification tasks. Using two datasets encompassing oral cells and cervical cells, the performance of two distinct network designs, ResNet and VGG, was assessed; ResNet was the preferable network according to the results, with a better degree of accuracy and a lower standard deviation. The ED&P framework was introduced by Neha Sharma and Hari Om [8] and is used to create a data mining model for the early identification and prevention of oral cavity cancer. K. Anuradha and Dr. K. Sankaranarayanan [9] presented their research on employing image processing to find oral tumours: noise is removed from the input dental X-ray image using linear contrast stretching, and tumours are segmented from the enhanced image using an improved marker-controlled watershed segmentation. The segmentation algorithms' speed and accuracy are compared, and the upgraded method is found to offer better segmentation.

An integrated diagnostic model with hybrid feature selection methods for the detection of oral cancer, which lowers the number of features obtained from various patient records, was developed by Fatihah Mohd, Noor Maizura, and Mohamad Noor [10]. Oral cancer patients' diagnoses are predicted using updateable classifiers such as Multilayer Perceptron, K-Nearest Neighbors, and Support Vector Machine [11]. They also stated that, after adding feature subset selection with SMOTE during the preprocessing stages, the Support Vector Machine outperforms other machine learning techniques.

IV. MACHINE LEARNING Machine learning creates a model that is a good and useful approximation to the data and uses it to solve problems in the real world [12,13]. Machine learning is now common due to the expanding amounts and types of data available, cheaper computational processing, and more capable and reasonably priced data storage [14]. Even on a very large scale, machine learning swiftly and automatically creates models that can evaluate larger, more complicated data sets and produce faster, more accurate results. Without human assistance, a machine learning model can provide very accurate predictions that can be utilised to make smarter judgments and take clever actions [15].
To progress the field of machine learning and employ it in a number of study fields, including healthcare, it is necessary to create newer algorithms. According to recent studies, machine learning can be used in the fight against cancer [16,17]. A new era of individualized medicine, with swift and sophisticated data analysis previously unreachable, is beginning with the use of machine learning and AI techniques in basic and translational cancer research. Countless data sets and machine learning algorithms aid in the diagnosis, treatment, and prognosis of cancer, among other aspects of the fight against the disease [18,19]. Machine learning makes it possible to tailor the therapy to the patient, which would not be possible without it.

On EMR databases, various machine learning techniques are used to search for hidden patterns that aid in cancer diagnosis [20]. Deep learning neural networks are utilized to evaluate CT and MRI scans, and natural language processing (NLP) is used to interpret doctors' prescriptions [21][22][23][24]. Big data and machine learning can be used for diagnosis: the diagnosis is correct when it is based on sufficient, high-quality data, and when the dataset is vast, machine learning algorithms may query databases to detect similarities and provide precise predictive models. The discovery of new drugs is also being revolutionized by big data and machine learning. Although big data and machine learning have improved the process of cancer diagnosis, therapy, and drug discovery, scientists still confront numerous obstacles in this field; in hospitals where data is not digitalized, it is collected and recorded using antiquated techniques and cannot be processed using cutting-edge technologies.

The foundation of machine learning is the learning process, which is split into two phases: training and testing. When building a learning model, a learning algorithm is utilised in which features are learned from input examples in the training data. During testing, the learning model uses the execution engine to make predictions on production data. The output of the learning model is classed or tagged data that provides the final forecast. Machine learning techniques are divided into the following broad categories:

1. Supervised Learning: In supervised learning, labelled examples and the desired output are provided as inputs [25][26][27][28]. Features and labels are both present in the training dataset. Using labelled training data made up of a collection of training instances, the goal of supervised learning is to infer a function; this is used to build learning models that forecast an object's label given a set of features. The learning algorithm takes a set of features as inputs along with the corresponding correct outputs, and learns by comparing its actual output with the correct outputs; if errors occur, it modifies the model accordingly. If training data is missing, the model is not capable of inferring predictions correctly. Supervised learning is used in applications where data must be classified or values predicted. For instance, detecting an object and categorising it into different groups, such as a star or a galaxy, is a classification problem in astronomy; when the label (age) is instead a continuous number, determining an object's age based on observations is a regression problem.
2. Unsupervised Learning: In this type of learning, unlabeled data are utilized as the input, and learning is carried out to examine the data and identify patterns among the objects; the labels are found from the data itself [29][30]. Unsupervised learning is applied to transactional data: for instance, grouping customers who share characteristics so that they might be addressed similarly in marketing initiatives. Whereas classifying objects into galaxies and stars is a task carried out in supervised learning, unsupervised learning instead uses extensive observations of distant galaxies to identify the traits or feature combinations that are most crucial for differentiating between galaxies. In contrast to classification, which requires prior knowledge of the groupings, clustering is an unsupervised operation that divides a set of inputs into groups. Popular unsupervised techniques include nearest-neighbor mapping, self-organizing maps, k-means clustering, and singular value decomposition (a minimal k-means sketch appears after this list).

3. Semi-supervised Learning: Semi-supervised learning is used in a variety of real-world learning contexts, including text processing, video indexing, and bioinformatics, where there is a plentiful supply of unlabeled data but only a finite amount of labelled data can be produced at a reasonable cost [31][32][33][34]. Training material in semi-supervised learning therefore includes both labelled and unlabeled data, and the learning model needs to learn the structures in order to organise the data and produce predictions. Semi-supervised learning is helpful when the cost of labelling is too high to permit a completely labelled training procedure. As in supervised learning, classification, regression, and prediction techniques can all be utilized. An illustration would be recognizing a face on a webcam [35][36][37][38].

4. Reinforcement Learning: In reinforcement learning, the learner interacts with a dynamic environment and completes a specific task without the help of an instructor; the environment only provides feedback on whether the learner has achieved the task. Learning techniques are utilized to predict which behaviours will result in the highest rewards and to discover those actions through trial and error [39]. As an illustration, when playing chess, a player must make a series of trial-and-error moves in order to succeed. The three fundamental components of reinforcement learning are the learner, the environment, and the behaviours. The learner's main objective is to select behaviours that produce the desired outcome over a specified period of time; to accomplish this goal, the learner selects the optimum policy.
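As a minimal sketch of the unsupervised clustering described above, the following Python snippet applies k-means with scikit-learn (one of the libraries named in Section VI). The two-dimensional toy data and the choice of three clusters are illustrative assumptions, not drawn from any study surveyed here.

```python
# Minimal k-means sketch: cluster unlabeled 2-D points into groups.
# The toy data and k=3 are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Three synthetic "customer segments" with different centers.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print("Cluster centers:\n", kmeans.cluster_centers_)
print("First ten labels:", kmeans.labels_[:10])
```

Note that no labels are supplied anywhere: the grouping emerges purely from the distance structure of the data, which is exactly the contrast with the supervised setting in item 1.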
V. ML ALGORITHMS On the basis of learning style, a broad range of algorithms are implemented to create machine learning models, which are categorised as follows:

1. Regression Algorithms: Regression is a modelling method that iteratively refines the link between continuously varying variables, such as price and temperature, using a measure of error. Logistic regression and linear regression are the two most widely used regression methods [40][41][42].

3. Decision Tree Algorithms: In this algorithm, a predictive model known as a decision tree is utilised to map observations on input data to forecasts of the item's target value. In these tree-like structures, leaves stand in for class labels, while branches represent the characteristics that give rise to those class labels. Decision trees are fast and accurate algorithms which are trained on data for classification and regression applications. Popular decision tree algorithms include Classification and Regression Tree (CART) and Chi-Squared Automatic Interaction Detection (CHAID).

4. Bayesian Algorithms: Probability theory, which is utilised to describe uncertainty, is the foundation of Bayesian algorithms. These specifically use Bayes' Theorem to solve classification- and regression-related issues. Naive Bayes, Gaussian Naive Bayes, Multinomial Naive Bayes, Bayesian Belief Network, and Bayesian Network are the most widely used Bayesian algorithms.

5. Clustering Algorithms: A clustering algorithm divides objects into a number of categories. Clustering is a sort of unsupervised learning in which the data set is divided into clusters based on shared properties and a predetermined distance metric. Hierarchical clustering and partitional clustering are the two types of clustering techniques. The most often used clustering algorithms are hierarchical clustering, k-means, k-medians, and expectation maximisation (EM).

6. Artificial Neural Network Algorithms: Based on supervised learning, this family of algorithms has a structure comparable to that of biological neural networks. The units, artificial neurons, are strongly interconnected, and learning is accomplished by altering the connection weights to carry out distributed processing in parallel. Perceptron, Back-Propagation, Hopfield Network, and Radial Basis Function Network are the most widely used artificial neural network techniques.

7. Deep Learning Methods: Updated artificial neural networks exploit abundant, affordable computation to create deep learning. Deep learning methods are built on significantly larger and more complicated neural networks and suit semi-supervised settings with big datasets that contain only a small amount of labelled data. Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), and Convolutional Neural Network (CNN) are the three most often used deep learning methods.

8. Dimensionality Reduction Algorithms: When an item is described using a large number of dimensions, the computing cost is decreased by removing unnecessary and redundant data with the dimensionality reduction approach. Principal Component Analysis (PCA), Principal Component Regression (PCR), Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), and Flexible Discriminant Analysis (FDA) are common dimensionality reduction algorithms.

VI. APPLICATION AND TOOLS Machine learning applications are categorised based on learning types such as supervised and unsupervised learning. Supervised learning is used for classification problems in areas such as pattern identification, facial recognition, character recognition, medical diagnosis, and web advertising. Clustering, association analysis, customer segmentation in CRM, image compression, and bioinformatics are examples of applications based on unsupervised learning. Robot control and game play are examples of reinforcement learning applications.

To make use of the finest machine learning algorithms, the correct tool selection is crucial. Machine learning employs effective techniques to provide quicker and simpler prediction. Machine learning tools provide an intuitive interface onto the sub-tasks by offering good mapping and appropriateness in the task's user interface. Great machine learning tools embody best practices for methodology, configuration, and implementation: algorithms can be configured automatically, and the tool's structure is constructed with a well-executed method. ML tools are categorised into platforms and libraries; while a library merely offers a collection of modelling algorithms required to finish a project, a platform offers the environment needed to perform a project. The WEKA Machine Learning Workbench, the R platform, and the Python scientific stack around SciPy, together with Pandas and scikit-learn, are a few examples.

VII. CONCLUSION In this study, machine learning methods for early oral cancer prediction are reviewed. Additionally, an overview of various machine learning methodologies used in the detection of oral cancer is provided, along with findings regarding these methodologies.
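To make the surveyed approach concrete, the following hedged sketch mirrors the kind of pipeline reported as best-performing in [11]: SMOTE oversampling followed by an SVM classifier. The synthetic data stands in for clinicopathologic records, the third-party imbalanced-learn package is an assumed dependency, and all parameters are illustrative, not the authors' settings.

```python
# Hedged sketch of a SMOTE + SVM pipeline, in the spirit of [11].
# Synthetic data stands in for real clinicopathologic records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE  # third-party imbalanced-learn

# Imbalanced toy data: ~10% positive ("malignant") cases.
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Balance only the training set, then fit the classifier.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = SVC(kernel="rbf", C=1.0).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```

Oversampling only the training split, never the test split, is the key design choice here: it keeps the evaluation honest while letting the classifier see a balanced view of the rare positive class.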
4,261.6
2023-03-31T00:00:00.000
[ "Computer Science" ]
Influence of heteroatoms on optical properties and photoluminescence kinetics of carbon dots Carbon dots have attracted much attention due to the ease of their synthesis and their stable photoluminescence with relatively high quantum yield. By doping carbon dots with heteroatoms, their optical transitions can be tuned over a wide spectral range. In this work, the impact of heteroatoms on the optical properties of carbon dots has been investigated: the appearance of luminescent states attributed to the CD surface states and the dependence of the photoluminescence lifetime on emission wavelength have been revealed.

1. Introduction Carbon dots (CDs) are a novel class of fluorophores which possess unique properties such as tunable photoluminescence, high quantum yield, relatively high photostability, and the possibility of realizing room-temperature phosphorescence. Moreover, another significant characteristic of CDs is the ability to dope their core and surface with a variety of heteroatoms, which allows their optical transitions to be altered over a wide range. In recent studies, the influence of doping with nitrogen, sulfur, and phosphorus on the optical properties of CDs has been widely investigated [1][2][3]. However, it is still under discussion how doping changes not only the CDs' energy structure but also the kinetics of creation and relaxation of charge carriers. Understanding these mechanisms will help to predict and control CD properties through heteroatom doping. In this work we discuss the impact of sulfur compounds on CDs' optical properties with a focus on photoluminescence kinetics.

2.1. Synthesis of CDs The CDs were synthesized by a solvothermal procedure described in [4]. The nitrogen and/or sulfur compound was mixed with citric acid and then dissolved in dimethylformamide. The solution was then transferred to a Teflon-lined autoclave and heated at 160 °C for 6 hours. After the reaction, the autoclave was cooled to room temperature. Urea, thiourea, and thioacetamide were used as heteroatom sources; the synthesized CDs are designated hereafter as CD-u, CD-tu, and CD-ta, respectively. For further investigation the synthesized product was diluted to obtain an appropriate optical density of the CD solution.

2.2. Experimental setup Absorption (Abs) and photoluminescence (PL) spectra were obtained by a UV-3600 spectrophotometer (Shimadzu) and an FP-8200 fluorescence spectrophotometer (Jasco), respectively. PL decay measurements were carried out with a MicroTime 100 (PicoQuant) at an excitation wavelength of 405 nm in the spectral range of 430-780 nm. For spectral selection, interference filters with a full width at half maximum (FWHM) of 10 nm were used.

3. Results The Abs spectra of the samples presented in Fig. 1a possess a band at 350 nm which is typical of CDs and attributed to the n-π* transition in the CD core. The position of the low-energy band attributed to surface states, e.g. those originating from the dopants, is redshifted across the set CD-ta, CD-tu, and CD-u. The PL spectra of the CDs presented in Fig. 1b follow the trend of the Abs spectra: the PL band is centered at 470, 485, and 495 nm for CD-ta, CD-tu, and CD-u, respectively. The PL decay of the CDs (Fig. 1c) over the wide spectral range can be approximated by a multi-exponential function with average PL lifetimes of 9.4, 10.1, and 11.0 ns for CD-ta, CD-tu, and CD-u, respectively.
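As a minimal sketch of how such multi-exponential decays can be analyzed (the bi-exponential model, the synthetic trace, and the intensity-weighting scheme are illustrative assumptions, not the authors' exact procedure), a decay curve can be fitted with SciPy and reduced to an average lifetime:

```python
# Hedged sketch: fit a PL decay trace with a bi-exponential model
# and report the intensity-weighted average lifetime.
# Model, synthetic data, and parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay: components of 3 ns and 12 ns, plus noise.
t = np.linspace(0, 60, 600)  # time in ns
rng = np.random.default_rng(1)
signal = biexp(t, 0.6, 3.0, 0.4, 12.0) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 2.0, 0.5, 10.0))
a1, tau1, a2, tau2 = popt

# Intensity-weighted average: sum(a_i * tau_i^2) / sum(a_i * tau_i)
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns, <tau> = {tau_avg:.2f} ns")
```

Repeating such a fit for each narrow emission window is what yields the lifetime-versus-wavelength dependence discussed next.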
To understand the process of radiative relaxation of charge carriers in the synthesized CDs, spectrally resolved PL decay measurements were carried out. Average PL lifetimes vs PL spectra of the CDs are presented in Fig. 2. The PL lifetime measured in a narrow spectral range within the CDs' PL band depends on the emission wavelength, which confirms that the PL signal originates from the CDs rather than from molecular fluorophores. At the same time, the PL lifetime behavior is complicated and varies across the synthesized CD set. For CD-ta (Fig. 2a) the PL lifetime decreases with increasing emission wavelength, which may be attributed to the influence of trap states on the radiative relaxation process. For CD-tu (Fig. 2b) the PL lifetime reflects the shape of both the Abs and PL bands, with a prolonged value in the low-energy spectral region. For CD-u (Fig. 2c) an increase of the PL lifetime with emission wavelength is observed, which is presumably due to nonradiative energy transfer of the photoexcitation within the CD. These results indicate the complexity of the energy structure and, in particular, of the carrier relaxation processes in the investigated CDs. In conclusion, it is shown that the presence of heteroatoms in CDs may result in the altering of their optical properties: the appearance of luminescent states with lower energies, and a complicated dependence of the PL lifetime on emission wavelength. Revealing the physical mechanism behind the observed optical properties requires further investigation, in light of the CDs' utilization in the wide fields of photonics, sensing, and bio-applications.
1,121.4
2020-03-01T00:00:00.000
[ "Physics", "Chemistry" ]
Reactivating Immunity Primed by Acellular Pertussis Vaccines in the Absence of Circulating Antibodies: Enhanced Bacterial Control by TLR9 Rather Than TLR4 Agonist-Including Formulation Pertussis is still observed in many countries despite high vaccine coverage. Acellular pertussis (aP) vaccination is widely implemented in many countries as primary series in infants and as boosters in school-entry/adolescents/adults (including pregnant women in some). One novel strategy to improve the reactivation of aP vaccine-primed immunity could be to include genetically-detoxified pertussis toxin and novel adjuvants in aP vaccine boosters. Their preclinical evaluation is not straightforward, as it requires mimicking the human situation where T and B memory cells may persist longer than vaccine-induced circulating antibodies. Toward this objective, we developed a novel murine model including two consecutive adoptive transfers of the memory cells induced by priming and boosting, respectively. Using this model, we assessed the capacity of three novel aP vaccine candidates, including genetically-detoxified pertussis toxin, pertactin, filamentous hemagglutinin, and fimbriae adsorbed to aluminum hydroxide, supplemented or not with Toll-Like-Receptor 4 or 9 agonists (TLR4A, TLR9A), to reactivate aP vaccine-induced immune memory and protection, reflected by bacterial clearance. In the conventional murine immunization model, TLR4A- and TLR9A-containing aP formulations induced similar aP-specific IgG antibody responses and protection against bacterial lung colonization as current aP vaccines, despite IL-5 down-modulation by both TLR4A and TLR9A and IL-17 up-modulation by TLR4A. In the absence of serum antibodies at time of boosting or exposure, TLR4A- and TLR9A-containing formulations both enhanced vaccine antibody recall compared to current aP formulations. Unexpectedly, however, protection was only increased by the TLR9A-containing vaccine, through both earlier bacterial control and accelerated clearance. This suggests that TLR9A-containing aP vaccines may better reactivate aP vaccine-primed pertussis memory and enhance protection than current or TLR4A-adjuvanted aP vaccines.
Keywords: vaccine, pertussis, adjuvant, TLR9 agonist, TLR4 agonist

INTRODUCTION B. pertussis (Bp), the causative agent of whooping cough, is a gram-negative bacterium highly transmissible in humans across all ages and an important cause of morbidity and mortality in infants worldwide. Introduced in the 1950s, whole-cell pertussis (wP) vaccines dramatically reduced disease incidence in infants and young children. However, vaccine-associated reactogenicity and unjustified fears of vaccine-induced encephalopathy affected public confidence and compliance. This led in the late 1990s to their replacement in most developed countries by less reactogenic acellular pertussis (aP) vaccines (1). Pediatric aP vaccines are composed of 1-5 Bp antigens adsorbed to Alum, combined with diphtheria (DT) and tetanus (TT) toxoids (DTaP) ± polio, Haemophilus influenzae b and hepatitis B antigens. Adolescent/adult booster vaccines (Tdap) include lower amounts of DT and Bp antigens. Alternatively, novel vaccine formulations may prove better at boosting aP vaccine-primed memory than current Tdap vaccines. To circumvent the limitations of preclinical models in which antibodies persist at much higher levels than in humans, we previously reported the usefulness of an adoptive transfer model in which aP-induced memory cells were transferred to naïve recipients prior to boosting with Tdap (37). To address the specific influence of various booster formulations, we subsequently developed a novel model including two consecutive adoptive transfers, in which the memory cells induced by boosting aP-primed-cell recipient mice are transferred to naïve recipient mice prior to bacterial challenge. Using this model, we tested three modified (m)Tdap formulations composed of gdPT, filamentous hemagglutinin (FHA), pertactin (PRN), and fimbriae type 2 and 3 (FIM2,3) antigens, adjuvanted with Alum and supplemented or not with TLR4A or TLR9A (Table 1). We show here that this model readily discriminates among TLR agonist-adjuvanted modified Tdap vaccines and identifies TLR9A as more effective than TLR4A against Bp challenge.

Mice Adult female CD1 and BALB/cByJ mice were purchased from Charles River (L'Arbresle, France) and kept under specific pathogen-free conditions. Mice were used at 6-8 weeks of age. All animal experiments were carried out in accordance with Swiss and European guidelines and approved by the Geneva Veterinary Office and by the French Ministry of Higher Education, Research and Innovation and its ethics committee.

Adoptive Transfer Spleens were harvested 42 days after priming or boosting BALB/cByJ mice. Single-cell suspensions were obtained by mechanical disruption and processed for red blood cell lysis.
50 × 10^6 splenocytes (experimentally defined as optimizing the recall of immune memory, unpublished data) in 100 µl were transferred intravenously (i.v.) by retro-orbital injection into naïve BALB/cByJ mice.

Antibody Quantification In the experiments shown in Figure 1, pertussis antigen-specific IgG1 and IgG2a antibodies were titrated in a multiplex MSD U-PLEX assay (Meso Scale Discovery). The coating proteins were coupled to biotin to allow their subsequent coupling to the linkers present in the bottom of the U-PLEX plate. U-PLEX plates were coated with PT (2 µg/ml), PRN (10 µg/ml), FHA (3 µg/ml), FIM2,3 (4 µg/ml), DT (4 µg/ml), or TT (8 µg/ml) (all antigens from Sanofi Pasteur). Serial dilutions of serum samples, controls, and reference sera (the WHO/NIBSC reference Bp anti-serum (NIBSC code: 97/642) for IgG1 and an in-house pool of hyperimmune sera for IgG2a) were added, a wash step was performed, and IgG1 or IgG2a antibodies bound to each antigen were detected using anti-IgG1 or anti-IgG2a (Jackson ImmunoResearch) antibodies linked to SULFO-TAG (RD-Biotech) using the MSD GOLD SULFO-TAG NHS-Ester Conjugation kit (Meso Scale Discovery).

Statistical Analysis Values are expressed as mean ± SEM. Statistical analyses were performed using an unpaired t-test, or one-way ANOVA followed by a Tukey multiple comparison test when more than two groups of mice were tested. All analyses were done using Prism 7.0 (GraphPad Software). Differences with p > 0.05 were considered insignificant.

Modified Acellular Pertussis Vaccines Protect Efficiently Against Pertussis Challenge Independently of TLR4A or TLR9A Supplementation We first tested the capacity of three novel mTdap formulations to boost immune memory elicited by current DTaP vaccines. CD1 mice were primed i.m. with DTaP and boosted 42 days later with mTdap with/without TLR4A or TLR9A. A control group was primed and boosted with DTwP, known to protect better than DTaP (6, 7) (see Table 1 for abbreviations and vaccine content). DTaP/mTdap elicited similar titers of PRN-, FHA-, and FIM2,3-specific IgG1 and IgG2a antibodies as DTwP/DTwP and higher PT-specific IgG1 6 weeks after boosting (Figure 1A), in line with the lower PT content of DTwP (39). The addition of TLR4A or TLR9A to mTdap did not significantly affect antibody titers (Figure 1A) nor their IgG1/IgG2a ratio (data not shown). T cell responses were also assessed 42 days after boosting for the secretion of Th2 (IL-5), Th1 (IFNγ), and Th17 (IL-17) cytokines. In line with the respective Th2- and Th1-inducing properties of aP and wP vaccines (16,20), mTdap significantly induced IL-5-secreting splenocytes whereas DTwP preferentially induced IL-17- and IFNγ-producing cells, although differences in IFNγ did not reach statistical significance in this experimental setting (Figure 1B). Compared to mTdap, the mTdap/TLR4A and mTdap/TLR9A formulations significantly reduced IL-5 responses (to similar levels as DTwP), without increasing IFNγ-producing cells. IL-17-producing cells were only observed after mTdap/TLR4A boosting, reaching similar numbers as in DTwP-primed/boosted mice (Figure 1B). DTaP/mTdap elicited similar antibody and T cell responses as DTaP/Tdap (data not shown).
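As a hedged sketch of the group comparisons described in the Statistical Analysis paragraph above (one-way ANOVA followed by Tukey's multiple comparison test), the snippet below uses invented placeholder titers, not the study's data, and the freely available SciPy/statsmodels stack rather than Prism:

```python
# Hedged sketch: one-way ANOVA followed by Tukey's multiple comparison
# test, as in the Methods. Group values are invented placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder antibody titers (log10) for three vaccine groups.
groups = {
    "mTdap":       np.array([3.1, 3.4, 3.2, 3.5, 3.3]),
    "mTdap_TLR4A": np.array([3.6, 3.8, 3.7, 3.9, 3.5]),
    "mTdap_TLR9A": np.array([4.0, 4.2, 3.9, 4.1, 4.3]),
}

# Omnibus test across the three groups.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with Tukey's HSD (alpha = 0.05).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```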
To evaluate the protective efficacy of mTdap-based boosters through bacterial clearance, mice were challenged intranasally with Bp 42 days after boosting. Bacterial loads in the lungs of naïve mice remained high: they initially increased and started to decrease only after day 3 (Figures 1C,D). In contrast, a rapid bacterial decline was observed in the lungs of all immunized mice. By day 3, bacterial colonization was slightly but significantly lower in mice primed and boosted with mTdap formulations (with/without TLR4A or TLR9A) compared to DTwP, as previously reported (40). Nevertheless, by day 7, most of the mice had cleared the infection (Figures 1C,D). DTaP/mTdap elicited similar protection as DTaP/Tdap (data not shown).

An Adoptive Transfer Model of Pertussis Immunity to Better Recapitulate the Human Situation The rapid protection conferred in mice by DTaP/DTwP priming and mTdap/DTwP boosting may reflect the contribution of both pertussis-specific antibodies and T cell effectors present at time of challenge (Figure 1). In humans, however, vaccine-induced antibodies rapidly wane and are frequently low or absent at time of exposure by boosting or infection. To mimic this condition and assess in mice the protective efficacy of novel vaccine formulations in the absence of circulating antibodies, we developed adoptive transfer models. Building upon our single adoptive transfer model, which enables characterization of the influence of priming (37), we developed here a double-transfer model to assess the influence of boosting, likewise in the absence of serum antibodies (Figure 2A). Despite the use of distinct mouse strains, bacterial strains, and experimental procedures (anesthesia, etc.) in Lyon/France and Geneva/Switzerland, similar antibody responses and bacterial clearance patterns were observed both in naïve and immunized mice [Figures 1C,D, 2E and (37)]. This allowed us to use and further develop in BALB/c mice the adoptive transfer model developed in Geneva. To establish the benchmarks with current vaccines, we primed mice with DTaP or DTwP and transferred 50 × 10^6 splenocytes into naïve recipient BALB/c mice, which were subsequently boosted with Tdap or DTwP. After priming, anti-PT, FHA, PRN, and FIM2,3 IgG antibodies were similar to those observed in CD1 mice (Figure 2B; Supplementary Figure 1). Six days after the first adoptive transfer (d-1), PT-, PRN-, FHA-, and FIM2,3-specific serum antibodies were undetectable in recipient mice, as intended (Figure 2C). Tdap and DTwP boosting rapidly reactivated antibody responses in recipient mice, with detectable PT-, PRN-, FHA-, and FIM2,3-specific IgG antibodies from 7 days onwards, reaching a plateau by day 14 (Figure 2C). We previously reported that Tdap boosting of non-transferred naïve mice induces much lower and slower kinetics of anti-PT antibody responses, which only appear by day 14 (37). While all prime/boost strategies similarly recalled FHA- and FIM2,3-specific responses, mice transferred with DTaP-primed cells developed faster and higher anti-PT IgG responses than recipients of DTwP-primed cells (Figure 2C), consistently with their primary responses to PT (Figure 2B). In contrast, mice transferred with DTwP-primed cells developed significantly more robust anti-PRN responses, independently of the boosting strategy (Figure 2C). This confirmed that the adoptive transfer of spleen memory cells preserves the relative ratio of antigen-specific primed cells. To evaluate the protective efficacy of booster formulations in the absence of circulating antibodies at time of challenge, we performed a second adoptive transfer into naïve BALB/c mice, 7 days prior to bacterial challenge.
The kinetics of Ag-specific antibody responses were very slow in naïve mice: PRN- and FHA-specific IgG appeared by day 14, PT by day 21, and FIM2,3 IgG antibodies remained undetectable at all time-points (Figure 2D). As expected, recipients of memory cells raised faster antigen-specific responses: Bp infection mostly reactivated strong anti-PRN responses, and robust but slower anti-PT and FHA responses. FIM2,3-specific antibodies were detectable early but at low titers only in recipients of DTwP/DTwP immune cells (Figure 2D). Of note, the Tohama I Bp strain used here and in the following experiments expresses only serotype 2 of FIM (41), explaining the very low or undetectable Ab titers for FIM2,3 after the challenge. Lungs were collected at various time-points and analyzed for their bacterial content. In contrast to the early (day 3) reduction of bacterial counts observed when circulating antibodies are present at time of challenge (Figures 1C,D and data not shown in BALB/c mice), bacterial loads increased between day 0 and day 7 in Tdap-boosted recipient mice, and Tdap boosting had no impact on Bp clearance, independently of DTaP or DTwP priming. In contrast, a plateau of bacterial CFUs was observed on day 7 in DTwP/DTwP recipients, followed by a significantly faster clearance (Figures 2E,F): by day 21, only the recipients of DTwP-boosted splenocytes had all cleared the bacteria (data not shown). In summary, this novel double-transfer adoptive model allows discriminating the ability of different boosting strategies to reactivate Bp immunity and to confer protection/enhanced bacterial clearance against challenge in the absence of confounding high titers of serum antibodies. In this model, DTwP/DTwP boosting, but not Tdap boosting, enhanced bacterial clearance, thus validating the double-transfer model and benchmarking the optimal protective efficacy for novel booster candidates.

Bp challenge recalled rapid and strong anti-PT, PRN, and FHA IgG responses in recipients of immune cells (Figure 3B). Bacterial challenge better reactivated PT memory responses elicited by mTdap/TLR9A than by mTdap/TLR4A, as shown by significantly faster and stronger IgG titers. Only minor differences were observed for PRN- and FHA-specific responses, while anti-FIM2,3 IgG remained barely detectable (Figure 3B). Both adjuvanted formulations significantly enhanced bacterial clearance compared to naïve mice (Figure 3C). mTdap/TLR9A provided earlier bacterial control than mTdap/TLR4A, as shown by significantly lower bacterial loads after day 10. However, the two formulations conferred similar protection at day 14 and all mice had cleared the infection by day 21 (Figure 3C), resulting in similarly smaller AUCs (mTdap/TLR9A: 64.7%; mTdap/TLR4A: 72.1%) compared to naïve mice (Figure 3D). Thus, a single dose of either mTdap/TLR4A or mTdap/TLR9A induces potent memory responses that confer protection against Bp when reactivated in the absence of serum antibodies.

Boosting DTaP With mTdap/TLR9A but not mTdap/TLR4A Favors a Th1-Associated IgG2a Antibody Profile Despite DTaP Priming We next investigated whether these formulations remain sufficiently Th1-driving, and thus protective, in the Th2-skewed setting elicited by DTaP priming. To this end, we used the double adoptive transfer model described in Figure 2. Recipients of DTaP-primed splenocytes were boosted with Tdap (control), mTdap/TLR9A, or mTdap/TLR4A.
As the large number of mice required for these double adoptive transfer experiments did not allow the direct assessment of T cell responses, IgG1 and IgG2a titers were used as surrogates for Th2- and Th1-associated responses, respectively. Both adjuvanted mTdap formulations reactivated robust IgG responses, reaching significantly higher titers than Tdap boosting (Figure 4A). Overall, antigen-specific IgG1 and IgG2a titers mirrored those of total IgG, and significantly higher IgG1 and IgG2a titers were observed following mTdap than Tdap boosting (Figure 4B). Interestingly, mTdap/TLR9A further increased IgG2a responses to PT, PRN, and FHA as compared to mTdap/TLR4A, resulting in a significantly smaller IgG1/IgG2a ratio for these three antigens (Figure 4C). Given the higher antigen content of mTdap vs. Tdap (Table 1), we first compared booster responses in our double adoptive transfer model. mTdap boosting elicited slightly but significantly higher PT and PRN titers than Tdap, likely reflecting the higher antigen content, but similar FHA and FIM2,3 antibody responses (Supplementary Figure 3A). Thus, when used in the absence of serum antibodies, adjuvanted mTdap formulations designed to boost DTaP priming increase humoral responses, but only mTdap/TLR9A enhances Th1-associated IgG2a antibody responses in the context of DTaP-induced Th2 primary responses.

Boosting DTaP With mTdap/TLR9A but not mTdap/TLR4A Enhances Protection Against Bp As previously observed in Figure 2, Bp-induced responses remained extremely low/slow in naïve mice. In contrast, a faster and stronger reactivation of PT (day 10) and FHA (day 7) IgG responses was observed in recipients of DTaP-primed/mTdap/TLR9A-boosted cells as compared to recipients of DTaP/Tdap-boosted cells (day 14) (Figure 5A). Recipients of DTaP-primed/mTdap/TLR4A-boosted cells showed an intermediate phenotype with slower/lower PT and FHA responses (Figure 5A). Anti-PRN IgG responses were similar in all groups and anti-FIM2,3 IgG antibodies were again barely detectable (Figure 5A). In line with booster responses (Figure 4B), we observed a significant increase in PT- and PRN-specific IgG2a antibody titers in mice that received DTaP-primed/mTdap/TLR9A-boosted cells, despite overall low levels of IgG2a antibodies (Figure 5B). Again, as previously observed (Figure 2E), the kinetics of bacterial clearance were similar between naïve mice and recipients of DTaP-primed/Tdap-boosted cells (Figure 5C). Despite slightly higher and faster PT and PRN antibody recalls, boosting with mTdap did not improve protection compared to Tdap (Supplementary Figures 3B-D), confirming that the currently used Tdap vaccine could serve as control. DTaP priming/mTdap/TLR4A boosting conferred slightly earlier bacterial control, reflected by significantly lower bacterial counts on day 10, but with no overall impact compared to DTaP priming/Tdap boosting (Figures 5C,D). In contrast, mTdap/TLR9A boosting after DTaP priming significantly enhanced bacterial clearance (Figures 5C,D), to a similar extent as observed after a DTwP prime/boost schedule, as reflected by smaller AUCs (DTwP-DTwP: 83.5%; DTaP-mTdap/TLR9A: 75.25%) (Figures 2E, 5D). In conclusion, TLR4A and TLR9A added to mTdap vaccines behave differently in the absence of circulating antibodies, a condition in which mTdap/TLR9A induces memory responses that are better recalled upon bacterial challenge and markedly enhance bacterial clearance.

DISCUSSION The shortcomings of current aP vaccines raise the need for third-generation pertussis vaccines.
Given the importance of priming, efforts are currently dedicated to defining how to best prime young infants against pertussis, inducing potent and long-lasting B and Th1/Th17 cell effectors and memory. However, licensing a novel infant vaccine will be most challenging given the resources required to demonstrate its safety, its efficacy, its non-interference with responses to other infant vaccines, and its sustained boostability. The development of new aP formulations proving better at boosting and/or redirecting aP-primed memory responses in adolescents and adults is thus an interesting approach. Using a model of adoptive transfer, we show here that, despite DTaP priming, an alum-based Tdap booster vaccine including genetically- instead of chemically-detoxified PT (in addition to FHA, PRN, and FIM2,3 antigens) and a TLR9 agonist enhances Th1-associated IgG2a responses, induces memory responses that are better recalled by Bp, and enhances protection against Bp. The correlates of protection for pertussis vaccines are not well defined. A critical role in mediating protection has been attributed to antibodies (42), also supported by the transfer of pertussis-specific maternal antibodies to newborns (43). However, several murine studies have demonstrated an important role for CD4+ Th1/Th17 cells in long-lasting protection (44,45), and these are often considered as critical effectors for novel pertussis vaccines. Here we demonstrate the critical role of antibodies, which rapidly clear all bacteria if present at sufficient titers at time of challenge, in contrast with the much slower bacterial clearance (only initiated when antibodies appear) when serum antibodies are absent at time of challenge. Our adoptive transfer model thus strongly suggests that the sole reactivation of memory Th1/Th17 cells is not sufficient to protect mice against Bp, which also requires the reactivation of B cell memory into potent antibody-secreting cells. Human studies have demonstrated the importance of priming in imprinting lifelong vaccine-specific T cell responses, as illustrated by the persistence of wP-induced Th1/Th17 polarization despite repeated aP boosters (46,47). However, Bp challenge can boost and shift aP-induced immune responses toward a Th1 response (15). Considering the important cohort of aP-vaccinated subjects worldwide, the identification of formulations able to redirect aP-driven Th2 responses toward Th1/Th17 represents an important milestone for the development of novel booster vaccines. In this study, we assessed alum-based formulations complemented with TLR4A or TLR9A because (1) studies in TLR4-deficient mice have identified the contribution of TLR4 signaling to the immunogenicity of wP vaccines (48) and to protective immunity against Bp infection (49) induced by aP or wP immunization (23,40), and (2) TLR9 signaling is known to promote Th1 responses (50). Importantly, TLR agonists have already been included in currently licensed human vaccines [MPL/TLR4 (51) and CpG/TLR9 (52)]. Consistent with previous data (13,14,26), both TLR ligands reduced the number of IL-5-producing T cells. This did not correlate with an increased number of IFNγ-producing cells, and only TLR4A-based formulations elicited IL-17-secreting cells. The induction of Th17, but not Th1, cells by TLR4A and not TLR9A is consistent with previous studies using a meningococcal LPS as TLR4 ligand in combination with alum (14), or CpG as TLR9 ligand in substitution for Alum (13), in aP formulations.
The role of TLR4 signaling in Th17 cell responses has been demonstrated in TLR4-deficient mice, which showed impaired IL-17 secretion upon wP but not aP immunization (23). Thus, Bp LPS, present in wP formulations, is a key factor in the induction of Th17 responses. The failure of TLR9A to enhance Th1 immune responses, which contrasts with two previous reports (13,29), may have several explanations. First, neither of these two studies demonstrated the effective induction of Th1 responses by TLR9A-based formulations in an aP-primed Th2-biased setting, i.e., following aP priming. Second, Ross et al. used CpG without Alum, thus avoiding the Th2-promoting intrinsic properties of Alum (13). Last, the C57BL/6 mouse strain used in the latter study is a prototypical Th1-prone mouse strain, in contrast to the more Th2-oriented mouse strains used here. Based on decreased Th2 and increased Th17 responses (Figure 1), TLR4 signaling seemed more promising than TLR9 signaling at improving protection against Bp. However, mTdap/TLR9A slightly enhanced protection compared to mTdap/TLR4A when Bp challenge was performed after a single adoptive transfer. We observed significantly decreased IgG1/IgG2a ratios after a single dose of mTdap/TLR9A as compared to mTdap/TLR4A, indirectly suggesting that TLR9 signaling elicits stronger Th1-polarized responses than TLR4. The recall of PT and FHA antibody responses by Bp challenge was much faster in recipients of mTdap/TLR9A-primed cells. As protection relies mostly on the reactivation of memory B cells rather than T cells in the absence of circulating antibodies, this more rapid antibody response likely contributes to the better protective efficacy of mTdap/TLR9A. mTdap/TLR9A showed potent efficacy after a single dose, or when given as a booster after the transfer of aP-primed splenocytes: this was reflected by higher and faster B cell memory recall and improved bacterial clearance. Lower IgG1/IgG2a ratios after boosting indirectly suggest that adding TLR9A to alum is able to redirect aP-induced Th2-associated IgG1 primary responses toward a more Th1-associated IgG2a profile. However, the observed changes are modest, and direct analysis of T cell responses would be needed to confirm the extent of the ability of TLR9 ligands to redirect alum-induced pertussis-specific Th2 responses toward Th1 responses. This has been previously demonstrated upon neonatal/adult immunization against hepatitis B (53,54), but not yet in the context of pertussis immunization. The chemical treatment used in most current aP vaccines to detoxify PT is known to destroy many of its important protective epitopes (34), reducing the induction of neutralizing antibodies (35). By comparing the immunogenicity of various aP vaccines including chemically- or genetically-detoxified PT in infants, Edwards et al. clearly showed enhanced immunogenicity of the genetically-detoxified PT (55). Consistently, mouse studies showed that the gdPT used here generally exhibits higher immunogenicity than PT, especially when assessing neutralizing Ab titers (56). However, replacing PT by gdPT did not increase the protective efficacy of Tdap here. This may have two explanations. First, in the standard prime/boost murine model, the rapid clearance mediated by high antibody titers to all vaccine antigens likely masks any difference in the neutralizing ability of anti-PT antibodies.
Second, in our adoptive transfer model, increased anti-PT antibodies were observed after boosting, confirming the higher immunogenicity of gdPT; however, booster-induced antibody responses do not contribute to protection, as the challenge is performed following a second adoptive transfer, in the absence of circulating antibodies. This likely explains why the higher immunogenicity of gdPT is not reflected by improved protection in these murine models. Although mice share multiple features of pertussis disease with humans, they do not cough, they fail to transmit the disease to other mice, and they raise different lung pathophysiological responses (57). Using a murine model of intranasal infection, we show enhanced protective efficacy of the mTdap/TLR9A formulation, reflected by faster bacterial clearance. However, murine models may not be used to assess colonization and transmission, in contrast to non-human primates (NHP) (58), which develop symptoms of pertussis disease similar to those of humans (59). However, the NHP model does not permit assessment of the respective contributions of memory T and B cell-mediated protection, as bacterial challenge is performed in the presence of high levels of vaccine-induced antibodies. Consequently, the conclusions drawn from NHP studies may apply better to priming than to previously primed adolescent/adult vaccinees who have lost circulating antibodies. Our double adoptive transfer murine model overcomes this drawback, highlighting the importance of using diverse animal models to evaluate the various aspects of the protective efficacy of novel vaccines (60). Nevertheless, caution should be exercised when extrapolating from one species to humans, especially for adjuvanted formulations, given differences in the expression of TLR4/TLR9 (61,62). In conclusion, a double adoptive transfer murine model allows us to dissect the ability of different boosting strategies to recall Bp immunity and enhance bacterial clearance in the absence of circulating antibodies, a setting that resembles the human situation. It shows that the presence and/or rapid recall of pertussis antibodies is crucial to protection and that TLR9 agonists (better than TLR4 agonists) may improve current aP vaccines and thus possibly better protect adolescents and adults against pertussis.

DATA AVAILABILITY The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.

ETHICS STATEMENT This study was carried out in accordance with the recommendations of Swiss and European guidelines and approved by the Geneva Veterinary Office and by the French Ministry of Higher Education, Research and Innovation and its ethics committee.

AUTHOR CONTRIBUTIONS FA, BM-G, MC-R, NR, MG, NM, YL, P-HL, MO, and C-AS designed the study. FA, BM-G, PF, MC-R, and NR performed the experiments. MG and NM manufactured and characterized vaccine formulations. FA, MB, BM-G, MC-R, NR, YL, P-HL, MO, and C-AS analyzed and/or interpreted the results. FA and C-AS wrote the manuscript. All authors contributed to manuscript revision, and read and approved the submitted version.

FUNDING This study was supported by funding provided by Sanofi Pasteur and research grants of the Center for Vaccinology and Neonatal Immunology.
6,193.6
2019-07-03T00:00:00.000
[ "Biology", "Medicine" ]
New Variations of the Online k-Canadian Traveler Problem: Uncertain Costs at Known Locations In this chapter, we study new variations of the online k-Canadian Traveler Problem (k-CTP) in which there is an input graph with a given source node O and a destination node D. For a specified set consisting of k edges, the edge costs are unknown (we call these uncertain edges). Costs of the remaining edges are known and given. The objective is to find an online strategy such that the traveling agent finds a route from O to D with minimum total travel cost. The agent learns the cost of an uncertain edge when she arrives at one of its end-nodes, and decides on her travel path based on the discovered cost. We call this problem the online k-Canadian Traveler Problem with uncertain edges. We analyze both the single-agent and the multi-agent versions of the problem. We propose a tight lower bound on the competitive ratio of deterministic online strategies together with an optimal online strategy for the single-agent version. We consider the multi-agent version with two different objectives, and we suggest lower bounds on the competitive ratio of deterministic online strategies for these two problems.

Introduction The online k-Canadian Traveler Problem (k-CTP) is a well-known navigation problem within the field of combinatorial optimization. In the online k-CTP, the objective is to reach a destination in a network within minimum travel time under uncertainty of some information. Uncertain information is revealed while one or more travelers (agents) discover it during their travels. In the k-CTP and its variants studied in the literature, the uncertainty concerns the locations of blocked edges in the input graph: it is known that there are at most k blocked edges, but their locations are not known. In this study, we consider new variations of the k-CTP where a known set of edges have unknown (uncertain) travel times (costs). To the best of our knowledge, this variant of the k-CTP, with given locations of edges that have unknown traveling costs, has not yet been studied in the literature.

Uncertainty in travel times arises in various situations, such as following a disaster or in daily urban traffic systems. After a disaster, uncertainty in travel times arises due to both damage on road segments and traffic congestion on some parts of the road network. We typically know which roads are likely to be damaged or congested, but the actual travel times can be estimated more accurately when we observe the situation right on the spot. Regarding urban traffic systems, problematic road segments can be detected beforehand, since in most current traffic management systems data indicating locations with high accident frequency are available, but it is difficult to predict the time of occurrence or the intensity of an accident accurately. Also, we usually know where there is a high likelihood of heavy traffic, but travel times show variability. Moreover, nowadays navigation applications indicate which locations have heavy traffic, but the travel times are still not known with certainty, and the situation evolves dynamically as we reach the locations themselves. In many real-world emergency operations, including response to disasters and daily medical or fire emergencies, operations managers must make dispatching decisions urgently under uncertain travel times. Therefore, it is useful to develop online strategies beforehand.
For example, for effective disaster response, these strategies can be adopted before the disaster so that they can be implemented in the shortest time after the disaster. Likewise, when traveling in traffic, in order to reach the desired destination in the shortest time, we need a strategy defined on a network which answers the following questions: when to travel to an end-node of an uncertain edge to learn its travel cost, and when to avoid visiting it; when the travel time of an uncertain edge is learned, whether to take it or change the travel route; and, if there exists a route to the destination without any uncertain edges, whether to take it or not. In this chapter, we focus on both developing effective online strategies that answer these questions and analyzing their performance theoretically to reveal their worst-case behavior. We next define our problem and its variants formally.

The online k-CTP with uncertain edges Let G = (V, E, k) denote an undirected graph with O as the source and D as the destination, in which the costs of k edges with given locations in the graph are unknown; a traveling agent can only discover their costs when she reaches one of their end-nodes. The costs of the remaining edges are known and deterministic. We call the edges with unknown costs uncertain edges and the edges with known costs deterministic edges. The objective is to provide an online strategy such that the traveling agent, who is initially located at O, receives G = (V, E, k) and the known costs as input and targets to reach D with minimum total travel cost under uncertainty. Since the problem is a new variation of the k-CTP, we call this problem the single-agent k-CTP with uncertain edges, in short the S-k-CTP-U. We also study the multi-agent version of this problem where there are L agents, who are initially located at O. We assume that the agents have the capability to transmit their location and edge cost information to the other agents in real time. We consider the multi-agent version of the problem with two different objectives, where the traveling agents follow an online strategy to ensure that the time when (1) the first agent and (2) the last agent arrives at D is minimum. We call these problems the M-k-CTP-U-f and the M-k-CTP-U-l, respectively. In the real-life applications mentioned before, e.g., disaster response, the objective of the M-k-CTP-U-f is applicable when search-and-rescue teams try to reach a target in the shortest time, whereas the objective of the M-k-CTP-U-l is applicable when a convoy of vehicles delivers aid to a point of distribution.

Competitive analysis The key concept in analyzing an online strategy is to compare a solution produced by the online strategy with the best possible solution under complete information, which is called the offline optimum solution. An offline strategy solves the same problem as an online strategy, except that all information about the problem inputs is revealed to the offline strategy from the beginning. An optimal offline strategy is the optimal strategy in the presence of complete input information, which produces the offline optimum solution. To analyze the performance of online strategies, the competitive ratio was introduced in [1] and has been used by many researchers. The competitive ratio is the maximum ratio of the cost of the online strategy to the cost of the offline strategy over all instances of the problem. In our problems, the costs of the uncertain edges are known in the offline counterparts; hence, the offline problems reduce to the shortest path problem.
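To make the competitive ratio concrete, the following toy Python calculation compares a simple online rule against the offline optimum over all realizations of a single uncertain edge cost. The graph, the costs, and the threshold strategy are illustrative assumptions, not the strategies analyzed in this chapter.

```python
# Toy competitive-analysis illustration (all numbers are assumptions).
# Reaching node A from O costs 4; edge (A, D) is uncertain with cost in
# [0, M]; a fully deterministic route O -> B -> D costs 10.
M = 100

def online_strategy_cost(c_ad):
    """Walk to A, learn c(A, D), then take it or backtrack."""
    if c_ad <= 4 + 10:        # cheaper than backtracking A -> O -> B -> D
        return 4 + c_ad       # O -> A -> D
    return 4 + 4 + 10         # O -> A -> O -> B -> D

def offline_optimum(c_ad):
    """The offline optimum knows c(A, D) from the start."""
    return min(4 + c_ad, 10)

worst = max(online_strategy_cost(c) / offline_optimum(c)
            for c in range(M + 1))
print(f"Worst-case ratio over integer costs: {worst:.2f}")  # 1.80 here
```

The worst case arises when the uncertain edge turns out expensive: the online agent has already paid to inspect it and must backtrack, while the offline optimum never visits it at all. Bounding this gap over all instances is exactly what the competitive ratio captures.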
Next, we discuss related work in the literature. We then state our contributions to the defined problems later in this section.

Previous studies

We focus on studies of the k-CTP conducted from the online optimization and competitive analysis perspective, since these are the works most closely related to ours. First, we review the literature on the single-agent variants. Next, we discuss the relevant studies on the multi-agent versions.

Single-agent k-CTP and variants

The CTP was first defined in [2]. Papadimitriou and Yannakakis [2] proved that devising an online strategy with a bounded competitive ratio is PSPACE-complete for the CTP. Bar-Noy and Schieber [3] also considered the CTP and its variants. They introduced the k-CTP, where an upper bound k on the number of blocked edges is given as input. They showed that for arbitrary k, the problem of designing an online strategy that guarantees the minimum travel cost is PSPACE-complete. Westphal [4] considered the k-CTP from the competitive ratio perspective. He showed lower bounds of 2k + 1 and k + 1 on the competitive ratio of deterministic and randomized online strategies, respectively. He also presented an optimal deterministic online strategy for the k-CTP, called the backtrack strategy. Xu et al. [5] also considered the k-CTP and presented two online strategies, the greedy and the comparison strategy, proving competitive ratios of 2^{k+1} − 1 and 2k + 1, respectively, for these strategies. Bender and Westphal [6] presented a randomized online strategy for the k-CTP which meets the lower bound of k + 1 in special cases. Shiri and Salman [7] modified the strategy given in [6] and proposed an optimal randomized online strategy for the k-CTP on O-D edge-disjoint graphs.

Multi-agent k-CTP and variants

A generalization of the k-CTP with multiple agents was first considered by Zhang et al. [8]. They analyzed the multi-agent k-CTP in two scenarios, with limited and with complete communication. They proposed lower bounds of 2⌊(k − 1)/L_1⌋ + 1 and 2⌊k/L⌋ + 1 on the competitive ratio of deterministic online strategies for the cases with limited and complete communication, respectively. In these lower bounds, L denotes the total number of agents and L_1 denotes the number of agents who benefit from complete communication. They also proposed an optimal deterministic online strategy for the case of two agents. Shiri and Salman [9] also investigated the multi-agent k-CTP. They provided an updated lower bound on the competitive ratio of deterministic online strategies for the case with limited communication. They also presented a deterministic online strategy which is optimal in both the complete and the limited communication cases on O-D edge-disjoint graphs. Randomized online strategies for the multi-agent k-CTP are investigated in [10], where lower bounds on the expected competitive ratio, together with optimal randomized online strategies on O-D edge-disjoint graphs, are proposed for the cases with limited and complete communication. Xu and Zhang [11] focused on a real-time rescue routing problem from a source node to an emergency spot in the presence of online blockages. They analyzed the problem with the objective of making all the rescuers arrive at the emergency spot with minimum total cost. They studied the problem in two scenarios, without communication and with complete communication.
They investigated both scenarios on grid networks and on general networks, respectively. They showed that considering both the grid network structure and the rescuers' communication can significantly improve rescue efficiency.

Our contributions

In the literature, the common unknown information in the k-CTP variants is the locations of the blocked edges in the graph. In fact, in all versions of the online k-CTP, all edges are equally likely to be blocked, and the agents have to explore the blockages in the graph to identify a route from the source node to the destination node with minimum total travel cost. However, in many real-world instances, assuming that all edges are equally likely to be congested or blocked ignores valuable information. In other words, there may exist many edges in the graph for which the agent is assured, before she starts her travel, that they are not blocked. Hence, considering all edges to be blocked with equal chance is not a realistic assumption in some real-world applications of the k-CTP. As discussed at the beginning of this section, in many real-world settings, such as urban traffic and post-disaster response, it is possible to identify the potential locations of the blocked edges in the graph. We introduce a new variation of the k-CTP with at most k uncertain edges that have given locations and unknown traveling costs. We call this new problem the online k-Canadian Traveler Problem with uncertain edges. We consider both single-agent and multi-agent versions of this problem. In the multi-agent version, we analyze the problem with two different objectives, where the agents aim to ensure the first and the last arrival of the agents at D with minimum travel cost, respectively. The main contributions of our study are detailed below:

1. We introduce new variations of the online k-CTP which find applications in real-world problems, namely, the S-k-CTP-U, the M-k-CTP-U-f, and the M-k-CTP-U-l.
2. We provide a tight lower bound on the competitive ratio of deterministic online strategies for the S-k-CTP-U and introduce an optimal deterministic online strategy.
3. We derive lower bounds on the competitive ratio of deterministic online strategies for the M-k-CTP-U-f and the M-k-CTP-U-l.

The rest of this chapter is organized as follows. In Section 2, we describe the assumptions and give preliminaries. In Section 3, we analyze the single-agent version of the problem and provide a tight lower bound and an optimal strategy for it. In Section 4, we suggest lower bounds on the competitive ratio for the multi-agent versions of the problem. Finally, we conclude in Section 5.

Preliminaries

We consider the single-agent and the multi-agent problems defined in Section 1.1 under the following assumptions [1]:

1. The agent(s) are initially located at O. We call this stage the initial stage of the problem.
2. If any k edges are removed from the graph, there still exists a path between the source and the destination node. This is a standard assumption in the literature.
3. The cost of an uncertain edge can take any value between 0 and M. An uncertain edge whose explored cost equals M is considered a blocked edge.
4. Once the cost of an uncertain edge is learned, it remains the same whenever the traveler visits that edge. In other words, the cost is not assumed to be time-dependent.
5. We call the time periods in which the cost of a new uncertain edge is identified the stages of the problem.
That is, there are k stages in the problem; stage 1 corresponds to the time period starting at the initial stage and ending just before the cost of the first uncertain edge is learned. We apply the following symbols and definitions to describe our results. We call the O-D paths that contain uncertain edges uncertain paths, and those that do not, deterministic paths. Let D_i denote the shortest deterministic path at the ith stage and d_i (i = 1, 2, ..., k) denote its corresponding cost. If there is more than one shortest deterministic path at the ith stage, one of them can be selected as D_i arbitrarily. Note that at any stage of the problem there exists at least one deterministic O-D path, by Assumption 2. We define the optimistic cost of an O-D path as the cost of the path after setting the costs of the unvisited uncertain edges on it equal to 0. The optimistic shortest O-D path at the ith stage of the problem is denoted by π_i; it is the shortest O-D path after setting the costs of the remaining uncertain edges equal to 0. We denote its corresponding cost by p_i (i = 1, 2, ..., k). That is, π_1 is the optimistic shortest O-D path at the initial stage of the problem. We denote the shortest path after the status of all the uncertain edges has been explored by π_{k+1}; i.e., π_{k+1} is the offline optimum and p_{k+1} is its corresponding cost.

Single-agent k-CTP with uncertain edges

In this section, we analyze the single-agent problem, namely, the S-k-CTP-U. We present a lower bound for this problem and prove its tightness by introducing a simple strategy. To suggest a lower bound on the competitive ratio of deterministic strategies, we need to analyze the performance of all deterministic strategies on a special instance. Below, we propose our lower bound for the S-k-CTP-U by analyzing an instance of O-D edge-disjoint graphs. Note that an O-D edge-disjoint graph is an undirected graph G with a given source node O and a destination node D such that any two distinct O-D paths in G are edge-disjoint, that is, they do not have a common edge.

Theorem 1.1. For the S-k-CTP-U, there is no deterministic online strategy with competitive ratio less than min{d_1/p_1, 2k − 1}.

Proof. Consider the special graph in Figure 1. For each deterministic strategy, we consider the instance in which the cost of each of the first k − 1 visited uncertain edges equals M and the cost of the last visited uncertain edge equals 0. Hence, the cost of the offline shortest path equals p_1. For a strategy, we call this instance the adverse instance. In the special graph in Figure 1, any deterministic strategy corresponds to a permutation which specifies in which order the uncertain paths and D_1 (not necessarily all of them) are going to be selected. For each of these strategies, consider the adverse instance. We define α as a binary coefficient which equals 1 if the agent takes D_1 and equals 0 if she does not. Suppose that the agent has taken i uncertain paths before taking D_1 when α equals 1. In this case, the competitive ratio of deterministic strategies on the special graph shown in Figure 1 can be formulated as (2i·p_1 + α·d_1 + (1 − α)(2(k − 1 − i)p_1 + p_1))/p_1. Note that in the adverse instance, the agent has to incur a cost equal to 2p_1 in each of her first k − 1 trials at the uncertain paths, since she has to come back to O after finding the uncertain edges blocked.
However, since the cost of the kth visited uncertain edge equals 0, the agent incurs p_1 in her kth trial at the uncertain paths and reaches D. Now, we present our proof by considering two cases.

• Case 1. d_1/p_1 ≤ 2k − 1.
• α = 1. In this case the competitive ratio of the corresponding strategies can be formulated as 2i + d_1/p_1, which is at least d_1/p_1 and hence greater than or equal to the lower bound of the problem.
• α = 0. In this case the minimum competitive ratio of the corresponding strategies equals 2k − 1, which is greater than or equal to the lower bound of the problem.
• Case 2. d_1/p_1 > 2k − 1. We also consider this case for α = 0 and α = 1 separately.
• α = 1. In this case the minimum competitive ratio of the corresponding strategies equals d_1/p_1, attained when i = 0, which is greater than the proposed lower bound of the problem.
• α = 0. In this case the minimum competitive ratio of the corresponding strategies equals 2k − 1, which matches the lower bound of the problem.

Since we proved that the competitive ratios of all deterministic strategies for this special instance are greater than or equal to min{d_1/p_1, 2k − 1}, the proof is complete.

Now, we introduce a new deterministic strategy which meets the presented lower bound. We call this strategy the pessimistic strategy, since the agent avoids exploring more than one uncertain edge at each iteration.

Pessimistic strategy

• Initialization. Put i = 0, where i denotes the iteration number. At each iteration the agent starts her travel from O and either explores the cost of one uncertain edge or reaches D without visiting any unvisited uncertain edge; in the latter case the strategy ends. Note that each iteration corresponds to one of the stages of the problem, because at each iteration the cost of one of the uncertain edges is learned; that is, the first iteration corresponds to stage 1 of the problem. Also note that p_i is nondecreasing in i, where p_i is the cost of the optimistic shortest O-D path at the beginning of the ith iteration. Let c_i denote the cost of the uncertain edge which is learned at the ith iteration; p_{i+1} is computable immediately after the agent observes c_i. Let S denote the set of the uncertain edges in the graph.
• Step 1. If d_1/p_1 ≤ 2k − 1, the agent takes D_1 to reach D and the strategy ends. Otherwise, go to step 2.
• Step 2. If i = k − 1, then go to step 3; otherwise, put i = i + 1 and find π_i. If it does not contain uncertain edges, the agent takes it to reach D. Otherwise, she takes π_i to reach the ith visited uncertain edge, observes c_i, sets the value of the newly visited uncertain edge equal to c_i, and removes it from S; that is, it is not considered as an uncertain edge hereafter. Next, she checks the following conditions.
• Condition 1. Check whether (2∑_{t=1}^{i−1} p_t + p_i + c_i)/p_{i+1} ≤ 2k − 1 and there exists no unvisited uncertain edge in the selected path; if so, the agent proceeds along it to reach D. Otherwise, check condition 2.
• Condition 2. Note that immediately after the agent observes c_i, D_{i+1} and d_{i+1} are computable. Check whether (2∑_{t=1}^{i} p_t + d_{i+1})/p_{i+1} ≤ 2k − 1; if so, the agent goes back to O and takes D_{i+1}. Otherwise, she returns to O and goes to the beginning of step 2.
• Step 3. Take π_k and observe c_k. Then compare A = (2∑_{t=1}^{k} p_t + p_{k+1})/p_{k+1} and B = (2∑_{t=1}^{k−1} p_t + p_k + c_k)/p_{k+1}. If A < B, return to O and take the shortest path (π_{k+1}); otherwise, travel through the uncertain edge in the kth uncertain path and reach D.
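To make the control flow concrete, the function below simulates the pessimistic strategy on O-D edge-disjoint instances of the kind shown in Figure 1. The encoding (one deterministic path of cost d1, and uncertain path j given by a known stem of cost stem[j] followed by a single uncertain edge of hidden cost hidden[j] ending at D) and all names are our own simplification for illustration; the chapter's strategy is defined on general graphs.

pessimistic <- function(d1, stem, hidden) {
  k <- length(stem)
  known <- rep(NA_real_, k)                  # learned uncertain-edge costs
  path_cost <- function()                    # path costs, unexplored edges at 0
    stem + ifelse(is.na(known), 0, known)
  p <- function() min(d1, path_cost())       # optimistic shortest O-D cost
  if (d1 / p() <= 2 * k - 1) return(d1)      # step 1: take D_1 outright
  travelled <- 0                             # cost incurred so far
  psum <- 0                                  # p_1 + ... + p_{i-1}
  for (i in 1:k) {
    p_i <- p(); costs <- path_cost()
    if (d1 <= min(costs)) return(travelled + d1)        # pi_i is deterministic
    j <- which.min(costs)
    if (!is.na(known[j])) return(travelled + costs[j])  # pi_i fully explored
    travelled <- travelled + stem[j]         # walk the stem, learn the cost
    known[j] <- hidden[j]
    if (i < k) {                             # step 2
      p_next <- p()
      ## condition 1 (in this encoding the rest of the path holds no
      ## further uncertain edge, so only the inequality matters)
      if ((2 * psum + p_i + known[j]) / p_next <= 2 * k - 1)
        return(travelled + known[j])         # cross over to D
      d_next <- min(d1, (stem + known)[!is.na(known)])  # d_{i+1}
      if ((2 * (psum + p_i) + d_next) / p_next <= 2 * k - 1)
        return(travelled + stem[j] + d_next) # condition 2: back to O, take D_{i+1}
      travelled <- travelled + stem[j]       # otherwise backtrack and iterate
      psum <- psum + p_i
    } else {                                 # step 3
      p_fin <- p()                           # p_{k+1}: all edges are known now
      A <- (2 * (psum + p_i) + p_fin) / p_fin
      B <- (2 * psum + p_i + known[j]) / p_fin
      if (A < B) return(travelled + stem[j] + p_fin)
      return(travelled + known[j])
    }
  }
}

For example, pessimistic(d1 = 11, stem = c(3, 8), hidden = c(3, 2)) returns 6: step 1 does not fire (11/3 > 3), the agent walks the cheaper stem, and condition 1 then sends her across the learned edge to D.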
Below we show that our strategy is optimal by using the inequalities which appear in the different steps of the pessimistic strategy.

Theorem 1.2. The pessimistic strategy is optimal for the S-k-CTP-U.

Proof. Note that if the strategy ends in either step 1 or step 2, the competitive ratio is less than or equal to the lower bound. Hence, we just need to analyze the cases where the strategy ends in step 3. Note that the competitive ratio of the strategy does not exceed min{A, B} in step 3. Thus, it is enough to show that either A or B does not exceed the proposed lower bound of the problem if the strategy ends in step 3. We consider three different scenarios for π_{k+1} to show the optimality of the pessimistic strategy when the strategy ends in step 3.

• Scenario 1. π_{k+1} contains the uncertain edge which is visited in the kth iteration. In this case, we show that B meets the proposed lower bound of the problem. Since both π_{k+1} and π_k contain the kth visited uncertain edge (note that p_{k+1} ≥ p_k), p_k + c_k equals p_{k+1}. Hence we can replace p_k + c_k by p_{k+1} in the numerator of B. We can also replace the p_i values for i = 1, 2, ..., k − 1 by p_{k+1} in the numerator of B, since p_i is nondecreasing in i. In this case, B would be at most 2k − 1, which equals the lower bound of the problem. Note that π_{k+1} does not contain the kth visited uncertain edge in the next two scenarios.
• Scenario 2. π_{k+1} contains the uncertain edge which is visited in the (k − 1)th iteration. Note that k ≥ 2 in this scenario, since π_{k+1} does not contain the kth visited uncertain edge and contains the (k − 1)th visited uncertain edge. In this case, we show that A meets the proposed lower bound of the problem. Consider condition 1 in step 2 at the (k − 1)th iteration. Since we have assumed that the strategy ends in step 3, we have (2∑_{i=1}^{k−2} p_i + p_{k−1} + c_{k−1})/p_k > 2k − 1. Since both π_{k+1} and π_{k−1} contain the (k − 1)th visited uncertain edge (note that p_{k+1} ≥ p_{k−1}), p_{k−1} + c_{k−1} is less than or equal to p_{k+1}. Hence, we can replace p_{k−1} + c_{k−1} by p_{k+1} in the numerator above. We can also replace the p_i values for i = 1, 2, ..., k − 2 by p_k in the numerator, since p_i is nondecreasing in i. We obtain (2k − 4)p_k + p_{k+1} > (2k − 1)p_k; hence, p_{k+1} > 3p_k. Now, we replace the p_i values for i = 1, 2, ..., k by p_k in the numerator of A. We obtain A ≤ (2k·p_k + p_{k+1})/p_{k+1}. Now, we can replace 2k·p_k by (2k/3)·p_{k+1} in the numerator of A. In this case, A would be at most 2k/3 + 1, which is less than or equal to the lower bound for k ≥ 2, since we are comparing 2k/3 + 1 and min{d_1/p_1, 2k − 1} for k ≥ 2. Note that since the strategy ends in step 3, min{d_1/p_1, 2k − 1} equals 2k − 1.
• Scenario 3. π_{k+1} does not contain the uncertain edges which are visited in the (k − 1)th and the kth iterations. In this case, we show that A meets the proposed lower bound of the problem. Note that when k ≤ 2, π_{k+1} = D_1 in this scenario; thus, the strategy ends in step 1 when k ≤ 2. For k ≥ 3, consider condition 2 in step 2 at the (k − 2)th iteration. We have (2∑_{i=1}^{k−2} p_i + d_{k−1})/p_{k−1} > 2k − 1. Since π_{k+1} does not contain the uncertain edges which are visited in the (k − 1)th and the kth iterations, π_{k+1} is equivalent to D_{k−1}. Hence we can replace d_{k−1} by p_{k+1} in the numerator above. We can also replace the p_i values for i = 1, 2, ..., k − 2 by p_{k−1} in the numerator, since p_i is nondecreasing in i. We obtain (2k − 4)p_{k−1} + p_{k+1} > (2k − 1)p_{k−1}; thus, p_{k+1} > 3p_{k−1}. Now, we replace the p_i values for i = 1, 2, ..., k − 1 by p_{k−1} in the numerator of A. We obtain A ≤ ((2k − 2)p_{k−1} + 2p_k + p_{k+1})/p_{k+1}. Now, we can replace (2k − 2)p_{k−1} by ((2k − 2)/3)·p_{k+1} in the numerator of A. We also replace p_k by p_{k+1}, since p_i is nondecreasing in i. In this case, A would be at most (2k − 2)/3 + 3, which is less than or equal to the lower bound for k ≥ 3.

Since we showed that the competitive ratio of the pessimistic strategy is less than or equal to the lower bound, the proof is complete.
As an illustrative example for the pessimistic strategy, consider the instance given in Figure 2, which represents a part of the Gulf Coast area of the United States. In Figure 2, the nodes represent the cities, and the numbers on the edges denote the edge travel times (in hours) in a post-disaster scenario. The edges (2,6) and (5,6) are the uncertain edges whose costs are not known at the beginning. The traveling agent is initially at node 1, and node 6 is the destination node. Path 1-3-6 is the shortest deterministic path (D_1), and path 1-2-6 is the shortest optimistic path (π_1) at the initial stage, i.e., d_1 = 11 and p_1 = 3. When step 1 of the pessimistic strategy is implemented, the agent compares d_1/p_1 = 11/3 with 2k − 1 = 3; since d_1/p_1 > 2k − 1, the strategy enters step 2. Next, the agent takes the shortest optimistic path π_1 and arrives at node 2 after traversing edge (1,2). We assume that the costs of the uncertain edges (2,6) and (5,6) are 3 and 2, respectively. When the agent arrives at node 2, she learns the traveling time of edge (2,6), i.e., c_1 = 3. Then she checks whether (p_1 + c_1)/p_2 < 2k − 1. Since 6/6 < 3, the agent takes edge (2,6) to arrive at node 6, and the strategy ends. Note that the cost of the offline optimum is 6. Therefore, the competitive ratio of the pessimistic strategy is one in the described scenario.
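The comparisons of this walk-through can be checked directly (the values below are taken from the text):

## Checking the comparisons of the Figure 2 walk-through
k  <- 2
d1 <- 11; p1 <- 3            # shortest deterministic / optimistic costs, stage 1
d1 / p1 > 2 * k - 1          # TRUE, so step 1 does not end the strategy
c1 <- 3;  p2 <- 6            # learned cost of edge (2,6); optimistic cost, stage 2
(p1 + c1) / p2 < 2 * k - 1   # TRUE (1 < 3), so condition 1 lets the agent cross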
Multi-agent k-CTP with uncertain edges

In this section, we study the M-k-CTP-U-f and the M-k-CTP-U-l. Note that L denotes the number of agents in the graph in these problems. We assume that there is no distinction between the L agents and that all agents benefit from complete communication, in the sense that they can transmit their locations and the explored uncertain edges' cost information to the other agents in real time. By considering an instance of O-D edge-disjoint graphs, we derive lower bounds on the competitive ratio of deterministic online strategies for the M-k-CTP-U-f and the M-k-CTP-U-l.

Theorem 1.3. For the M-k-CTP-U-f and the M-k-CTP-U-l, there is no deterministic online strategy with competitive ratio less than min{d_1/p_1, 2⌊k/L⌋ + 1} and min{d_1/p_1, 2⌈k/L⌉ + 1}, respectively.

Proof. We again consider the special graph in Figure 1. In this case, any deterministic strategy corresponds to a permutation which describes in which order the uncertain paths and D_1 (not necessarily all of them) are going to be selected by the agents. For all of these strategies, consider the adverse instance defined in the proof of Theorem 1.1. Note that the agents will not reach D via the uncertain paths unless the costs of all the uncertain edges are specified. Before we present the rest of our proof, we need the following lemma.

Lemma 1.4. In the adverse instance, the competitive ratio of the strategies in which the agents arrive at D via the uncertain paths is at least 2⌊k/L⌋ + 1 and 2⌈k/L⌉ + 1 for the M-k-CTP-U-f and the M-k-CTP-U-l, respectively.

Proof. Note that the agents will not reach D via the uncertain paths unless the costs of all the uncertain edges are specified, since we are considering the adverse instance. We present our proof for each claim separately.

• M-k-CTP-U-f. In this problem, the agents have to incur a cost of at least 2⌊k/L⌋·p_1 to discover the costs of L⌊k/L⌋ of the uncertain edges and backtrack to O. The agents have to incur at least p_1 more to learn the costs of the remaining uncertain edges and deliver at least one of the agents to D. Since the cost of the shortest path is at least p_1 in the adverse instance, the competitive ratio of deterministic strategies when none of the agents takes D_1 would be at least (2⌊k/L⌋·p_1 + p_1)/p_1 = 2⌊k/L⌋ + 1.
• M-k-CTP-U-l. In this problem, it takes a cost of at least 2⌈k/L⌉·p_1 to explore the costs of all k uncertain edges and backtrack the agents to O. It takes at least p_1 for all the agents to take the shortest path and arrive at D. Since the cost of the shortest path is p_1 in the adverse instance, the competitive ratio of deterministic strategies when none of the agents takes D_1 would be at least (2⌈k/L⌉·p_1 + p_1)/p_1 = 2⌈k/L⌉ + 1.

Note that since we are considering arrivals of the agents at D via the uncertain paths, the performance of the strategies is not improved if one or more agents take D_1. The proof is complete.

Now, we present the rest of our proof for each problem separately:

• M-k-CTP-U-f. We present our proof by considering two cases:
• Case 1. d_1/p_1 ≥ 2⌊k/L⌋ + 1. In this case, the competitive ratio of the strategies in which the first arrival of the agents at D is via D_1 is at least d_1/p_1, which is greater than or equal to min{d_1/p_1, 2⌊k/L⌋ + 1}. The competitive ratio of deterministic strategies in which the first arrival of the agents at D is via the uncertain paths would be at least 2⌊k/L⌋ + 1, which matches the proposed lower bound of the problem.
• Case 2. d_1/p_1 < 2⌊k/L⌋ + 1. In this case, the competitive ratio of deterministic strategies in which the first arrival of the agents at D is via the uncertain paths would be at least 2⌊k/L⌋ + 1, which is greater than the proposed lower bound of min{d_1/p_1, 2⌊k/L⌋ + 1}. The competitive ratio of the strategies in which the first arrival of the agents at D is via D_1 is at least d_1/p_1, which matches the proposed lower bound of the problem.
• M-k-CTP-U-l. The proof proceeds by the same two cases, with 2⌈k/L⌉ + 1 in place of 2⌊k/L⌋ + 1.

We have thus proved that the competitive ratio of deterministic strategies in the adverse instance is not better than min{d_1/p_1, 2⌊k/L⌋ + 1} and min{d_1/p_1, 2⌈k/L⌉ + 1} for the M-k-CTP-U-f and the M-k-CTP-U-l, respectively. Hence, we conclude that the competitive ratio of the problems cannot be better than the proposed lower bounds.

Conclusions

We introduced new variants of the online k-CTP which find various important real-life applications. In these variants, the locations of the uncertain edges are known, whereas the traveling costs of these edges are unknown. We investigated both the single-agent and the multi-agent versions of the problem. We proposed a tight lower bound on the competitive ratio of deterministic online strategies and an optimal strategy for the single-agent problem, which we call the S-k-CTP-U. We derived lower bounds on the competitive ratio of deterministic online strategies for the multi-agent problems, called the M-k-CTP-U-f and the M-k-CTP-U-l. Providing optimal strategies for the M-k-CTP-U-f and the M-k-CTP-U-l that match the proposed lower bounds can be considered a future research direction. Analyzing the problem on special networks, such as grid networks, is another future research direction for these new variations.
SYSTEMS OF DIFFERENTIAL EQUATIONS ON THE LINE WITH REGULAR SINGULARITIES

INTRODUCTION

Consider the Dirac system on the line with a regular singularity, referred to below as system (1), where µ is a complex number, the q_j(x) are complex-valued absolutely continuous functions, and q′_j(x) ∈ L(−∞, +∞). In this short note we construct special fundamental systems of solutions for system (1) with prescribed analytic and asymptotic properties. The behavior of the corresponding Stokes multipliers is established. These fundamental systems of solutions will be used for studying direct and inverse problems of spectral analysis by the contour integral method and by the method of spectral mappings [1, 2]. Differential equations with singularities inside the interval play an important role in various areas of mathematics as well as in applications. Moreover, a wide class of differential equations with turning points can be reduced to equations with singularities. For example, such problems appear in electronics when constructing parameters of heterogeneous electronic lines with desirable technical characteristics [3, 4]. Boundary value problems with discontinuities at an interior point appear in geophysical models for oscillations of the Earth [5]. The case when a singular point lies at an endpoint of the interval was investigated fairly completely for various classes of differential equations in [6-8] and other works. The presence of a singularity inside the interval produces essential qualitative modifications in the investigation (see [9]).

Our plan is the following. In the next section we consider a model Dirac operator with the zero potential Q(x) ≡ 0 and without the spectral parameter. It is important that this system is studied in the complex x-plane. We construct fundamental matrices for the model system. Using analytic continuations and symmetry, we calculate directly the Stokes multipliers for the model system. Then we consider the Dirac system on the real x-line with Q(x) ≡ 0 and with the complex spectral parameter, and carry over our constructions to this system. In the last section, Section 3, we construct fundamental matrices for system (1) with the necessary analytic and asymptotic properties. Asymptotic properties of the Stokes multipliers for system (1) are also established.
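For orientation, the general shape of a first-order system with a regular singularity at the origin can be written as follows (a generic illustration with our own symbols A and R, not the exact display of system (1)):

\[
y'(x) \;=\; \Bigl(\frac{A}{x} + R(x)\Bigr)\,y(x),
\]

where A is a constant matrix and R(x) is integrable in a neighborhood of x = 0; the coefficient matrix thus has a first-order pole at the singular point.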
SYSTEMS WITHOUT SPECTRAL PARAMETERS

Let, for definiteness, Re µ > 0 and 1/2 − µ ∉ ℕ. Consider the model Dirac system (2) in the complex x-plane. Let x = re^{iϕ}, r > 0, ϕ ∈ (−π, π], x^ξ = exp(ξ(ln r + iϕ)), and let Π− be the x-plane with the cut x ≤ 0. Let the numbers c_10, c_20 be such that c_10·c_20 = 1. Then equation (2) has a matrix solution C(x). We agree that if a certain symbol denotes a matrix solution of the system, then the same symbol with one index denotes the columns of the matrix, and this symbol with two indices denotes its entries. The functions C_k(x), k = 1, 2, are entire in x; they form a fundamental system of solutions for (2), and det C(x) ≡ 1. Note that the matrix e_0(x) is a solution of the system BY′(x) = Y(x).

SYSTEMS WITH THE SPECTRAL PARAMETER

Now we consider system (1). In this section we construct fundamental matrices for system (1) and establish properties of their Stokes multipliers. The following assertion is proved by a well-known method (see, for example, [1, 2]).

Theorem 4. System (1) has a fundamental system of solutions S_j(x, λ) = x^{µ_j} S̃_j(x, λ), j = 1, 2, where the functions S̃_j(x, λ) are solutions of the Volterra integral equations (5). The functions S̃_j(x, λ) are entire in λ, and |S̃_j(x, λ)| ≤ C on compact sets.

The results were obtained in the framework of the national tasks of the Ministry of Education and Science of the Russian Federation (project no. 1.1436.2014K) and supported by the Russian Foundation for Basic Research (project no. 13-01-00134).
mudfold: An R Package for Nonparametric IRT Modelling of Unfolding Processes

Item response theory (IRT) models for unfolding processes use the responses of individuals to attitudinal tests or questionnaires in order to infer item and person parameters located on a latent continuum. Parametric models in this class use parametric functions to model the response process, which in practice can be restrictive. MUDFOLD (Multiple UniDimensional unFOLDing) can be used to obtain estimates of person and item ranks without imposing strict parametric assumptions on the item response functions (IRFs). This paper describes the implementation of the MUDFOLD method for binary preferential-choice data in the R package mudfold. The latter incorporates estimation, visualization, and simulation methods in order to provide R users with utilities for nonparametric analysis of attitudinal questionnaire data. After a brief introduction to IRT, we provide the methodological framework implemented in the package. A description of the available functions is followed by practical examples and suggestions on how this method can be used even outside the field of psychometrics.

Introduction

In this paper we introduce the R package mudfold (Balafas et al., 2019), which implements the nonparametric IRT model for unfolding processes MUDFOLD. The latter was developed by Van Schuur (1984) and later extended by Post (1992) and Post and Snijders (1993). IRT models have been designed to measure mental properties, also called latent traits. These models are used in the statistical analysis of categorical data obtained from the direct responses of individuals to tests and questionnaires. Two response processes that result in different classes of IRT models can be distinguished. The cumulative (also called monotone) processes and the unfolding (also called proximity) processes in the IRT framework differ in the way they model the probability of a positive response to a question from a person as a function of the latent trait, which is termed the item response function (IRF). Cumulative IRT models, also known as Rasch models (Rasch, 1961), assume that the IRF is a monotonically increasing function. That is, the higher the latent trait value for a person, the higher the probability of a positive response to an item (Sijtsma and Junker, 2006). This assumption makes cumulative models suitable for testing purposes, where latent traits such as knowledge or abilities need to be measured. The unfolding models, also known as proximity models, consider nonmonotone IRFs. These models originate from the work of Thurstone (1927, 1928) and were formalized by Coombs (1964) in his deterministic unfolding model. In unfolding IRT the IRF is assumed to be a unimodal (single-peaked) function of the distance between the person and item locations on a hypothesized latent continuum. Unimodal IRFs imply that the closer an individual is located to an item, the more likely it is that he responds positively to this item (Hoijtink, 2005). Unfolding models can be used when one is interested in measuring bipolar latent traits such as preferences, choices, or political ideology, which are generally termed attitudes (Andrich, 1997). Such latent traits, when analyzed using monotone IRT models, usually result in a multidimensional solution.
In this sense, unfolding models are more general than the cumulative IRT models (Stark et al., 2006; Chernyshenko et al., 2007) and can be seen as a form of quadratic factor analysis (Maraun and Rossi, 2001). Parametric IRT (PIRT) models for unfolding exist for dichotomous items (Hoijtink, 1991; Andrich and Luo, 1993; Maydeu-Olivares et al., 2006), polytomous items (Roberts and Laughlin, 1996; Luo, 2001), and bounded continuously scored items (Noel, 2014). Typically, estimation in PIRT models exploits maximum likelihood methods like the marginal likelihood (e.g., Roberts et al., 2000) or the joint likelihood (e.g., Luo et al., 1998), which are optimized using expectation-maximization (EM) or Newton-type algorithms. Unfolding PIRT models that infer model parameters by adopting Bayesian Markov chain Monte Carlo (MCMC) algorithms (Johnson and Junker, 2003; Roberts and Thompson, 2011; Liu and Wang, 2019; Lee et al., 2019) are also available. PIRT models, however, make explicit parametric assumptions for the IRFs, which in practice can restrict measurement by eliminating items with different functional properties. Nonparametric IRT (NIRT) models do not assume any parametric form for the IRFs but instead introduce order restrictions (Sijtsma, 2005). These models have been used to construct or evaluate scales that measure, among others, internet gaming disorder (Finserås et al., 2019), pedal sensory loss (Rinkel et al., 2019), partisan political preferences (Hänggli, 2020), and relative exposure to soft versus hard news (Boukes and Boomgaarden, 2015). The first NIRT model was proposed by Mokken (1971) for monotone processes. His ideas were used for the unfolding paradigm by Van Schuur (1984), who designed MUDFOLD as the unfolding variant of Mokken's model. MUDFOLD was extended by Van Schuur (1992) for polytomous items, and Post (1992) and Post and Snijders (1993) derived testable properties for nonparametric unfolding models that were adopted in MUDFOLD. Usually, NIRT methods employ heuristic item selection algorithms that first rank the items on the latent scale and then use these ranks to estimate individual locations on the latent continuum. Such estimates of individuals' ideal points in unfolding NIRT were introduced by Van Schuur (1988) and later by Johnson (2006). NIRT approaches can be used for exploratory purposes, preliminary to PIRT models, or in cases where parametric functions do not fit the data. IRT models can be fitted by means of psychometric software implemented in R (Choi and Asilkalkan, 2019), which can be downloaded from the Comprehensive R Archive Network (CRAN). An overview of the R packages suitable for IRT modelling can be found at the dedicated task view Psychometrics. PIRT models for unfolding where the latent trait is unidimensional, such as the graded unfolding model (GUM) (Roberts and Laughlin, 1996) and the generalized graded unfolding model (GGUM) (Roberts et al., 2000), can be fitted with the R package GGUM (Tendeiro and Castro-Alvarez, 2018). Sub-models in the GGUM class are also available in the Windows software GGUM2004 (Roberts et al., 2006). A large variety of unfolding models for unidimensional and multidimensional latent traits can be defined and fitted to data with the R package mirt (Chalmers, 2012). To our knowledge, software that fits nonparametric IRT models in the unfolding class (analogous to the mokken package (Van der Ark, 2007) in the cumulative class) is not yet available in R.
In order to fill this gap, we have developed the R package mudfold. The main function of the package implements the item selection algorithm of Van Schuur (1984) for scaling the items on a unidimensional scale. Scale quality is assessed using several diagnostics, such as scalability coefficients similar to the homogeneity coefficients of Loevinger (1948), statistics proposed by Post (1992), and newly developed tests. Uncertainty of the goodness-of-fit measures is quantified using the nonparametric bootstrap (Efron et al., 1979) from the R package boot (Canty and Ripley, 2017). Missing values can be treated using multiple multivariate imputation by chained equations (MICE, Buuren et al., 2006), which is implemented in the R package mice (van Buuren and Groothuis-Oudshoorn, 2011). Estimates of the person locations derived from Van Schuur (1988) and Johnson (2006) are available to the user of the package. Generally, the MUDFOLD algorithm is suitable for studies where there are no restrictions on the number of items that a person can "pick". Besides these pick-any-out-of-N study designs, individuals are sometimes restricted to select a prespecified number of items, i.e., pick-K-out-of-N. The latter design, due to the violation of independence, does not respect the IRT assumptions. However, our package is also able to deal with such situations.

Methodology

Consider a sample of n individuals randomly selected from a population of interest in order to take a behavioral test. Participants, indexed by i = 1, 2, ..., n, are asked to state whether or not they agree with each of j = 1, 2, ..., N statements (i.e., items) towards a unidimensional attitude θ that we intend to measure. Let X_ij be the random variable associated with the 0-1 response of subject i on item j, and let x_ij denote its realization. Subsequently, we can define the IRF for an item j as a function of θ: the probability of positive endorsement of item j by individual i with latent parameter θ_i is written P_j(θ_i) = P(X_ij = 1 | θ_i). In PIRT models for unfolding, P_j(θ_i) is a parametric unimodal function of the proximity between the subject parameter θ_i and the item parameter β_j. NIRT unfolding models avoid imposing strict functional assumptions on the IRFs. In the latter case, the focus is on ordering the items on a unidimensional continuum. The item ranks are then used as a measurement scale to calculate person-specific parameters (ideal points) on the latent continuum.

Assumptions of the nonparametric unfolding IRT model

In unidimensional IRT models, unidimensionality of the latent trait and local independence of the responses are common assumptions. However, the usual assumption of monotonicity that we meet in cumulative IRT models needs modification in unfolding IRT, where unimodally shaped IRFs are considered. For obtaining diagnostic properties for the nonparametric unfolding model, Post and Snijders (1993) proposed two additional assumptions for the IRFs. The assumptions of the nonparametric unfolding model are:

A1. Unidimensionality (UD): There exists a unidimensional latent variable θ ∈ ℝ on which individuals and items are scaled.

A2. Local Independence (LI): The responses of individuals to distinct items are independent given the latent parameter θ, i.e., the joint conditional probability of the N responses simplifies into the likelihood form P(X_i1 = x_i1, ..., X_iN = x_iN | θ) = ∏_{j=1}^{N} P_j(θ)^{x_ij} (1 − P_j(θ))^{1−x_ij}.

A3. Unimodality (UM): For every item j, P_j(θ) is a weakly unimodal function of θ.
For the sake of clarity, a function P_j(θ) : ℝ → ℝ is weakly unimodal if there exists a β_j ∈ (−∞, +∞) such that P_j(θ) is nondecreasing for all θ ≤ β_j and nonincreasing for all θ ≥ β_j. The location parameter β_j for the jth item is the value of the latent trait at which the IRF P_j(θ) reaches its maximum (or the midpoint of the interval where P_j(θ) is maximal, when β_j is not unique).

A4. Stochastic Ordering (SO): For any probability distribution G(θ) of latent trait values and any value θ_0 on the latent scale, P_G(θ > θ_0 | X_j = 1) is a nondecreasing function of j, for all j such that p_j > 0. Given the item ordering, this assumption is equivalent to two properties of the IRFs: first, given that a single item is chosen, the posterior densities g of θ have a monotone likelihood ratio (MLR) in θ; and second, the IRFs have a monotone traceline ratio (MTR). The next assumption concerns only unfolding models and is not applicable to cumulative IRT.

Assumption A1 implies that there exists only one latent trait that explains the responses of persons to the items. Assumption A2 is mathematically convenient, since it reduces the likelihood to a simple product, and implies that, given the latent trait value, no other information on the other items is relevant for predicting the responses to a particular item. The next assumption concerns the conditional distribution of each item given the latent trait. The unimodality assumption described in A3 restricts the IRFs to a single-peaked shape without imposing any explicit functional form. If A3 holds for all the IRFs, then we can order the items on the unidimensional continuum based on their location parameters β_j such that β_1 ≤ β_2 ≤ ... ≤ β_N. The set of assumptions A1-A3 is the core of unfolding IRT models. Additionally, two assumptions about the individuals {i | i = 1, ..., n} and the distribution G of their latent trait values {θ_i | i = 1, ..., n} are needed in order to obtain testable properties for the nonparametric unfolding model (Post and Snijders, 1993). Assumption A4 is analogous to the invariant item ordering (IIO) assumption in monotone IRT models and implies that the posterior distribution of θ, given a positive response to an item located at β_j, is stochastically ordered by the location β_j (Johnson, 2006). In simple words, A4 assumes that an individual who responds positively to an item with a higher rank should have a larger latent trait value than individuals who respond positively to a low-rank item. For example, if a person responds positively to an item that is considered politically conservative, then this person is more likely to be a conservative compared to a person who responded positively to a liberal statement. Despite the fact that this assumption seems intuitive, not all parametric unfolding models require it. Assumption A5 suggests that an individual i who endorses item j has a latent trait value θ_i that is most likely close to the item location β_j and less likely to be either much lower or much higher on the latent scale. Post (1992) shows that the measurement assumptions A4-A5 are related to the mathematical property of total positivity of order 2 (TP2) (Karlin, 1968). In addition, if the IRFs P_j(θ) are positive for all j, then these assumptions hold if and only if the IRFs satisfy the property of TP3.

Errors and scalability coefficients

PIRT approaches use well-defined IRFs that explicitly parametrize persons and items on some known parameter space.
Estimates of the parameters can be obtained using suitable frequentist or Bayesian methods, and the fit of the model to the data is assessed using goodness-of-fit indices. Contrarily, in NIRT modelling the functional form of the IRF is unknown and alternative estimation methods are needed (Mokken, 1997). Models in the NIRT class typically employ item selection algorithms that construct ordinal measurement scales for persons by iteratively maximizing some scalability measure over the items. The resulting scales are then used to locate the individuals on the latent continuum based on their responses. Usually, these item selection algorithms are bottom-up methods that are divided into two parts. In the first part, the algorithm seeks the best minimal scale, that is, a minimal set of items that meets certain scalability requirements. The best minimal scale is the starting point for the second part of the scaling procedure, where it is extended iteratively by adding in each step the item that best fulfills the prespecified scalability criteria. As in other NIRT models, MUDFOLD adopts a two-step item selection algorithm that identifies the unique rank order for a maximal (sub)set of items. In this algorithm, scalability coefficients analogous to the ones defined by Mokken (1971) are used as tests for the goodness of fit. Mokken's coefficients are similar to the H coefficients proposed by Loevinger (1948), which were defined on the basis of violation probabilities of the deterministic cumulative model (see Guttman, 1944) for ordered item pairs. In the same line, the scalability coefficients in MUDFOLD are defined on the basis of violation probabilities of the deterministic unfolding model of Coombs (1964) for triples of items. MUDFOLD's scalability coefficient for a triple of items compares the number of errors observed (i.e., the number of {1, 0, 1} responses, which falsify the Coombsian model) with the number of errors that we would expect if the items were statistically independent. A triple of items is a permutation (ordering) of three distinct items. The observed errors in an ordered triple (h, l, k) are counted as O_hlk = ∑_{i=1}^{n} x_ih (1 − x_il) x_ik, where x_i. is the realization of the random variable X_i. and x_i. = 1 if the ith individual responds positively to item (.); otherwise x_i. = 0. It can be seen that the number of observed errors for three items stays invariant under the permutations (h, l, k) and (k, l, h), for any h ≠ l ≠ k ≠ h in the integer set {1, 2, ..., N}. Expected errors (EO) in an ordered item triple (h, l, k) under random ordering is the expected frequency of {1, 0, 1} responses if the items h, l, and k were statistically independent, multiplied by the sample size: EO_hlk = p(h)(1 − p(l))p(k)·n. We can estimate p(j) for item j by the relative frequency p̂(j) = ∑_{i=1}^{n} x_ij / n. The scalability coefficient (H) for any ordered item triple (h, l, k) is defined as the value obtained by subtracting from unity the ratio of observed errors over expected errors for this triple: H_hlk = 1 − O_hlk / EO_hlk. Using the scalability coefficients for triples, we can extend the notion of scalability to a scale s consisting of m items, where 3 < m ≤ N, and to an item j ∈ s. The H coefficient for an item j ∈ s, j = 1, 2, ..., m, is given by H_j(s) = 1 − (∑_{(s_h,s_l,s_k) ∈ T_j(s)} O_{s_h s_l s_k}) / (∑_{(s_h,s_l,s_k) ∈ T_j(s)} EO_{s_h s_l s_k}), where T_j(s) = {(s_h, s_l, s_k) | s_h < s_l < s_k : j ∈ {s_h, s_l, s_k}} is the set of all item triples (with respect to the item order) that include item j. Given that the m items constituting the scale are ordered, we can calculate the H coefficient for the total scale s by summing the observed errors and the expected errors over all m!/(3!(m − 3)!) triples of items of s and calculating their error ratio. Subtracting the obtained number from unity results in a total scalability measure, H_total(s) = 1 − (∑_{(s_h,s_l,s_k) ∈ T(s)} O_{s_h s_l s_k}) / (∑_{(s_h,s_l,s_k) ∈ T(s)} EO_{s_h s_l s_k}), where T(s) = {(s_h, s_l, s_k) | s_h < s_l < s_k} is the set of all item triples for a given scale s.
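As a toy numerical check of these definitions (our own miniature data matrix, not the ANDRICH items used later):

## Four persons answering three items h, l, k
x  <- matrix(c(1, 0, 1,
               1, 1, 0,
               0, 1, 1,
               1, 0, 1), ncol = 3, byrow = TRUE)
O  <- sum(x[, 1] * (1 - x[, 2]) * x[, 3])     # observed {1,0,1} error patterns
EO <- mean(x[, 1]) * (1 - mean(x[, 2])) * mean(x[, 3]) * nrow(x)
1 - O / EO                                    # H_hlk; negative here, a bad triple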
Perfect fit of the scale to the data yields a scalability coefficient of H_total(s) = 1; this means that no error patterns are observed for this scale. Likewise, H_total(s) = 0 implies that the number of observed errors equals what would be expected under a random ordering. Values around 0.5 suggest a moderate unfolding scale. Calculating the triple scalability coefficients for all the items is the first step in the construction of a MUDFOLD scale. We will demonstrate how the H coefficients for triples are calculated using the dataset ANDRICH, which comes with the mudfold package in R data format. The dataset contains the binary responses of n = 54 students to N = 8 statements towards capital punishment. This attitudinal test was constructed by Andrich (1988) in order to measure attitudes towards capital punishment.

Calculating scalability coefficients for the ANDRICH data

We can install and subsequently load the package and the data into the R environment.

## Install and load the mudfold package and the ANDRICH data
install.packages("mudfold")
library(mudfold)
data("ANDRICH")
N <- ncol(ANDRICH)                # number of items
n <- nrow(ANDRICH)                # number of persons
item_names <- colnames(ANDRICH)   # item names

Functions for calculating the observed errors, expected errors, and H coefficients for each possible item triple are available internally in the mudfold package. These functions can be accessed with the ::: operator. For the ANDRICH data, the H coefficients for triples can be calculated as follows.

experr <- mudfold:::Err_exp(ANDRICH)   # errors expected
obserr <- mudfold:::Err_obs(ANDRICH)   # errors observed
hcoeft <- 1 - (obserr / experr)        # H coefficients

Generally, there exist N^3 item permutations of length three with repetitions that can be obtained from N items. Thus, the H coefficients of each possible item permutation of length three can be stored in a three-way array of dimension N × N × N. In the ANDRICH data example, the scalability coefficients for the item permutations of length three are stored in a three-way array of dimension 8 × 8 × 8. It can be seen that the H coefficients for symmetric permutations stay invariant, and we demonstrate this feature below. Consider the ordered triple of items (HIDEOUS, DONTBELIEV, DETERRENT) and its symmetric permutation (DETERRENT, DONTBELIEV, HIDEOUS).

## Index the three-way array by item names
## (assuming its dimnames carry the item names)
triple_HDODE <- cbind("HIDEOUS", "DONTBELIEV", "DETERRENT")
triple_DEDOH <- cbind("DETERRENT", "DONTBELIEV", "HIDEOUS")
## Compare H coefficients
hcoeft[triple_HDODE] == hcoeft[triple_DEDOH]

The H_hlk coefficients form the basis for calculating the scalability coefficients for items and scales. The item selection algorithm implemented in the package runs in two steps, and scalability criteria are used in both steps.

Scale construction

In the first step of the item selection algorithm, a search for the best triple of items is conducted. A lower bound λ1 that controls the scalability properties of the best triple can be specified by the user (the default value is λ1 = 0.3). The value of λ1 is used as a threshold to determine whether the triple is good enough to continue the scaling process. Larger values of λ1 lead to stricter criteria, while lower values of λ1 relax these criteria.
In its second step, the item selection algorithm extends the best elementary scale repeatedly until no more items fulfill its scalability criteria. A second threshold, λ2 = 0, is explicitly used in the first criterion of this step. This threshold controls the scalability properties of the triples containing a candidate item in the scale extension procedure. As for λ1, larger values of λ2 lead to stricter scalability requirements, while lower values relax these requirements.

Step 1: search for the best unique triple

The search for the optimal item triple in the first step requires the calculation of the scalability coefficients for every possible permutation of length three that can be obtained from the N starting items. Among the set of all permutations of length three, we seek those that fulfill certain scalability criteria, and we call this set of permutations unique triples. Unique triples form a finite set containing all (h, l, k), with h, l, k ∈ {1, 2, ..., N} and h ≠ l ≠ k ≠ h, for which only one of the three possible orderings (h, l, k), (h, k, l), and (l, h, k) presents a positive H coefficient. This guarantees that triples in the set of unique triples are "uniquely" represented on the latent dimension, i.e., they are scalable together in only one permutation besides the reverse permutation. From the set of unique triples, the triple (h, l, k) that has the maximum H_hlk is called the best unique triple, and it will be selected as the best starting scale if its scalability coefficient is positive and greater than a specified lower bound λ1. If more than one triple fulfills the requirements for being the best unique triple, it can be shown that all of them converge to the same solution in the second step. If the set of unique triples is empty, the algorithm stops automatically without proceeding to the second step. The same holds in the case in which unique triples exist but their scalability coefficients are lower than the bound specified by the user.

First step: search for the best minimal scale in the ANDRICH data

Here we describe how the main function of the mudfold package searches for the best minimal unfolding scale in the first step of the implemented algorithm. After we have calculated the observed errors, the expected errors, and the scalability coefficients for each triple of items in the ANDRICH dataset, we need to determine the optimal triple for the first step of MUDFOLD's item selection algorithm. The triples of items in the order (h, l, k) for the ANDRICH data can be obtained with the combinations() function from the R package gtools (Warnes et al., 2015). These combinations are then permuted twice to yield the orderings (h, k, l) and (l, h, k), respectively.

## Install and load the library "gtools"
install.packages("gtools")
library(gtools)

The set of unique triples can then be obtained.

## Find the set of unique triples: keep, for each combination, the ordering
## whose H coefficient is positive while the H coefficients of the two
## alternative orderings are not (a sketch of this computation)
combs <- combinations(n = N, r = 3, v = item_names)   # orderings (h, l, k)
perm2 <- combs[, c(1, 3, 2)]                          # orderings (h, k, l)
perm3 <- combs[, c(2, 1, 3)]                          # orderings (l, h, k)
H1 <- hcoeft[combs]; H2 <- hcoeft[perm2]; H3 <- hcoeft[perm3]
one_pos <- (H1 > 0) + (H2 > 0) + (H3 > 0) == 1
unq <- rbind(combs[one_pos & H1 > 0, , drop = FALSE],
             perm2[one_pos & H2 > 0, , drop = FALSE],
             perm3[one_pos & H3 > 0, , drop = FALSE])

The set of unique triples in the ANDRICH data example contains sixteen item triples. With the command hcoeft[unq] we can see that all except one of the triples show H_hlk coefficients greater than the lower bound. The ordered triple of items (INEFFECTIV, DONTBELIEV, DETERRENT) is selected as the best starting scale, with a maximum scalability coefficient of 0.853, which is indeed larger than λ1. This triple will be extended repeatedly in the second step of the algorithm. In each iteration one of the remaining items is added to the scale in a specific position if certain scalability requirements are met.
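In practice, users do not need to carry out these steps manually; the whole search is performed by the package's main function. A minimal end-to-end call might look as follows (the summary() and plot() methods are assumptions based on the package description above; consult the package manual for the exact interface):

fit <- mudfold(ANDRICH)   # two-step item selection, default lambda1 = 0.3
summary(fit)              # scale, scalability coefficients, diagnostics
plot(fit)                 # nonparametric estimates of the IRFs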
Step 2: extending the best starting scale

Given the best unique triple obtained in the first step of the algorithm, in the second step of the item selection process the algorithm investigates the remaining N − 3 items repeatedly to find the best fourth, fifth, etc., item to add to the scale. In each iteration of this step, all the possible scales that contain one of the remaining items in every possible position are investigated in order to choose the most appropriate one. For a scale consisting of m items (3 ≤ m ≤ N − 1), we intend to find one of the remaining N − m items to add to the scale. For the (m + 1)th item there exist m + 1 possible scale positions that have to be investigated with respect to their scalability properties. In each iteration of the MUDFOLD scaling algorithm, the number of candidate scales under investigation is (N − m)(m + 1). In order to determine the best-fitting (m + 1)th item, we test three criteria. The first criterion uses an explicit value λ2 (by default λ2 = 0) as a lower bound for the scalability coefficients. The scalability criteria in the second step are:

1. All the m(m − 1)/2 item triples in the scale (with respect to the item order) containing the candidate item must have an H_hlk coefficient greater than λ2.
2. If more than one item fulfills the first criterion, then the item with the minimum number of possible scale positions is chosen.
3. The scalability coefficient H_j(s) of the selected item has to be higher than λ1.

It can be the case that more than one scale fulfills these criteria. In such instances, the algorithm continues by choosing the scale that includes the most uniquely represented item and shows the minimum number of expected errors. The scale extension process continues as long as the scalability criteria described above are fulfilled.

Second step: scale extension for the ANDRICH data

For the ANDRICH data, after the first step of the item selection process, where we obtained the best unique triple, the remaining five items can still be added to the scale.

BestUnique <- unq[which.max(hcoeft[unq]), ]      # Best unique triple
ALLitems  <- colnames(ANDRICH)
Remaining <- ALLitems[!ALLitems %in% BestUnique] # Remaining items

Next, an iterative procedure needs to be defined for the second, scale extension step of the MUDFOLD algorithm. Adding one item in each repetition implies that a maximum of N − 3 = 5 iterations can take place if all items fit in a MUDFOLD scale. In each iteration we construct the scales to be evaluated, where each scale contains one of the remaining items in a specific position. For example, in the first iteration of the scale extension step for the ANDRICH dataset, all the scales that need to be assessed can be constructed as follows.

## Candidate scales: insert each remaining item into every possible
## position of the current scale (a sketch of this construction)
Candidates <- list()
for (it in Remaining) {
  for (pos in 0:length(BestUnique)) {
    Candidates[[length(Candidates) + 1]] <- append(BestUnique, it, after = pos)
  }
}

Each of these scales will be judged in terms of its scalability properties. For instance, let us consider the first scale that is constructed in the first iteration of the scale extension step for the ANDRICH data. This scale has been constructed by inserting the item HIDEOUS into the first possible position of the minimal scale (INEFFECTIV, DONTBELIEV, DETERRENT). The first scalability criterion for this scale determines whether the H_hlk coefficients of the triples that contain the new item (i.e., HIDEOUS) are larger than a user-specified λ2 (default λ2 = 0). We can extract all the triples for this specific scale using the combinations() function.
Examplescale <- Candidates[[1]]   # (HIDEOUS, INEFFECTIV, DONTBELIEV, DETERRENT)
les <- length(Examplescale)
ExamplescaleTRIPLES <- combinations(n = les, r = 3, v = Examplescale, set = FALSE)

From the four triples in total, only the first three contain the new item HIDEOUS. We can obtain the H coefficient for each of these triples with

hcoeft[ExamplescaleTRIPLES[1:3, ]]

and we can see that the triple (HIDEOUS, INEFFECTIV, DETERRENT) has an H coefficient which is lower than λ2. Hence, this scale does not fulfill the first criterion and should be excluded from the scale extension process. The first criterion is evaluated for every possible scale, and the scales that conform to this criterion continue the scale extension process. Lowering the value of λ2 to a negative number will allow more scales to pass this criterion, while setting λ2 to a large negative number, e.g., −99, will allow all scales to pass it. The second scale assessment determines which scale or scales contain the item that is the most "uniquely" represented. Let us assume that the number of scales that fulfill the first criterion is six. Moreover, assume that five out of these six scales contain the item MUSTHAVEIT and one scale contains the item CRIMDESERV. In this scenario the scale that contains the item CRIMDESERV will be the one that continues the scale extension. The scales that contain the least frequently observed item are checked against a third criterion. The third and last criterion in the iterative scale extension phase concerns the scalability properties of the new item. The scale that contains the new item with the highest item scalability coefficient will be chosen as the best MUDFOLD scale if and only if H_j(s) > λ1, where λ1 is the lower bound that has also been used in the first step of the item selection algorithm. In the ANDRICH example the algorithm completes five iterations in the second step, which means that all the items are included in the MUDFOLD scale. The latter consists of eight items and shows a scale scalability coefficient equal to 0.64. After a MUDFOLD scale with a good fit is obtained, one can assess its unfolding quality. This is done with the scale diagnostics described by Post (1992) and Post and Snijders (1993). These diagnostics are based on sample proportions, from which the unimodality assumption of the scale is evaluated and nonparametric estimates of the item response functions are obtained.

MUDFOLD diagnostics

In this section, we discuss diagnostics implemented in the mudfold package, which can be used to assess whether a scale s consisting of m items, j = 1, ..., m, conforms with assumptions A2 to A5 of a unidimensional nonmonotone homogeneous MUDFOLD scale.

Diagnostic for assumption A2

Let us denote by X_{−j} the n × (m − 1) matrix that contains the responses of the n individuals to all the items in the scale except item j. Testing whether A2 (local independence) holds is equivalent to testing whether a positive response to an item depends solely on the latent trait θ, i.e., P(X_j = 1 | X_{−j}, θ) = P(X_j = 1 | θ). If p_j = P(X_j = 1) denotes the probability of a positive response to item j, testing this hypothesis implies fitting the regularized logistic regression model log(p_j / (1 − p_j)) = β_0 + β_θ·θ̂ + ∑_{k} β_k X_{−jk} (4), where X_{−jk} denotes the kth column of X_{−j} and θ̂ = (θ̂_1, ..., θ̂_n) is a nonparametric estimate of the latent attitude with regression parameter β_θ. The response regression parameters β_k are penalized using the least absolute shrinkage and selection operator (LASSO, Tibshirani, 1996). LASSO shrinks the coefficients β_k of the regression in (4) towards zero.
Diagnostic for assumption A3

The condition A3 required by MUDFOLD is the assumption of unimodality of the IRFs, which are unknown nonlinear functions of the latent trait. In order to obtain estimates of these functions, we use a nonlinear generalized additive model (GAM, Wood, 2011) that is implemented in the R package mgcv (Wood, 2017). Specifically, for each item the probability of a positive response $p_j$ is modelled as a smooth function of the latent trait, that is,

$\mathrm{logit}(p_j) = f_\theta(\hat{\theta}),$   (5)

where $f_\theta$ is a smooth function of the estimate $\hat{\theta}$. Plotting the probability of positive response modelled by (5) against a nonparametric estimate of the latent trait $\hat{\theta}$ should yield a single-peaked curve if the unimodality assumption for the IRFs holds.
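Continuing the hedged sketch above (reusing the illustrative objects X, j, and theta_hat, which are assumptions rather than package internals), the smooth IRF estimate could be inspected with:

library(mgcv)

irf_fit <- gam(X[, j] ~ s(theta_hat, k = 4), family = binomial)  # smooth estimate of the IRF
plot(irf_fit, trans = plogis)  # on the probability scale; expect a single peak under A3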
Diagnostics for assumptions A4 and A5

For the assumptions A4 and A5, diagnostic statistics that quantify the extent to which the scale agrees with these assumptions have been proposed by Post (1992). These statistics are based on conditional IRF probabilities, which are estimated by their corresponding sample proportions and collected into a matrix that is called the conditional adjacency matrix (CAM). The CAM contains in its (j, k) element the conditional frequency with which a subject from the sample chooses the row item j given that the column item k is chosen. The probability $P(X_j = 1 \mid X_k = 1)$ is estimated from the data by dividing the joint frequency of choosing both items j and k by the frequency of choosing item k. That is,

$\mathrm{CAM}(j,k) = \frac{\sum_{i=1}^{n} x_{ij} x_{ik}}{\sum_{i=1}^{n} x_{ik}}, \qquad j \ne k.$   (6)

In the package mudfold, the CAM can be obtained using the function CAM(), which takes as input either a fitted MUDFOLD object or a dataset with the complete responses of n individuals to m items. In the ANDRICH dataset example, the CAM of the original data can be calculated using the command CAM(ANDRICH). Each row of the CAM is regarded as an empirical estimate of the corresponding IRF. Hence, if the ordering of the items is correct, and if assumptions A1 to A5 hold, then (i) the observed maxima of the different rows of the CAM are expected to appear around the principal diagonal (moving maxima property), and (ii) the rows of the CAM are expected to show a weakly unimodal pattern. One can evaluate the unfolding model by checking how strongly the observed row patterns of the CAM deviate from these expected patterns.

Max statistic (MAX): The moving maxima property of the CAM corresponds to condition A4, which assumes stochastic ordering of the items by their location parameter $\beta_j$. In order to formally check this assumption, Post (1992) proposed a statistic that quantifies the violations of the moving maxima property for the rows of the CAM, called the max statistic (MAX). The calculation of MAX can be done in two ways, namely a top-down and a bottom-up method, both based on $M_j$, the position of the maximum in the jth row of the CAM. In order to create a measure of the moving maxima property that is bounded within the interval [0, 1], we divide $MAX_j$ by the number of potential violations of the moving maxima property, which is approximately equal to $m^2/12$. The sum over all rows yields the total MAX statistic of the scale, i.e. $MAX_{total} = \sum_{j=1}^{m} MAX_j$. The quantity $MAX_{total}$ is the same for both methods; however, the number of items showing a positive MAX can differ. In this situation the method that yields the minimum number of items showing a positive MAX is chosen. If the number of items with positive MAX is the same for both methods, then we arbitrarily choose the top-down method. In the case where $M_j$ is next to a diagonal element, the maximum in the jth row can have two positions, and the position that yields the lower MAX value is chosen.

The MAX statistic can be calculated using the function MAX() from the R package mudfold, which takes as input either a fitted MUDFOLD object obtained from the main mudfold() function, or an object of class "cam.mdf" calculated by the function CAM(). The argument type of the MAX() function controls whether the MAX for the items or for the whole scale is returned to the user. Visual inspection of the observed maxima pattern can also be useful: if the maximum values of the CAM rows are close to the diagonal, the unfolding model holds. The diagnostics() function will return and plot a matrix with a star at the maximum of each CAM row for visual inspection of their distribution.

Iso statistic (ISO): In order to quantify whether the rows of the CAM show a weakly unimodal pattern, the iso statistic (proposed by I. Molenaar, personal communication) was introduced. The iso statistic (ISO) is a measure of the degree of unimodality violation in the rows of the CAM. ISO can be obtained for each item ($ISO_j$), and their summation results in the total ISO for the scale ($ISO_{total}$). To come up with an ISO value for an item j, one should first locate the maximum in each row of the CAM. If $m^*$ indexes the maximum in row j of the CAM, ISO measures the deviations from unimodality to the left and right of $m^*$, i.e.

$ISO_j = \sum_{k < l \le m^*} \max\left(0, \mathrm{CAM}(j,k) - \mathrm{CAM}(j,l)\right) + \sum_{m^* \le k < l} \max\left(0, \mathrm{CAM}(j,l) - \mathrm{CAM}(j,k)\right).$

The total ISO statistic for a scale consisting of m items is calculated as the sum of the individual $ISO_j$'s, i.e. $ISO_{total} = \sum_{j=1}^{m} ISO_j$. The ISO statistic, both for an item and for the scale, is zero if the unimodality in the rows of the conditional adjacency matrix is not disturbed, and positive if disturbances in unimodality occur. The user can calculate the ISO statistic using the function ISO(), which takes as input output either from the mudfold() function or from the function CAM(), and returns a vector with the $ISO_j$'s for each $j \in \{1, 2, \ldots, m\}$, or the sum of this vector if type = "scale".

All the diagnostic tests discussed in this section are implemented in the function diagnostics() of the mudfold package. The function diagnostics() can be used with fitted objects from the main mudfold() function.
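For illustration, a minimal usage sketch for the ANDRICH data; only type = "scale" is confirmed by the text above, so treat the calls as a sketch rather than a definitive interface:

camANDRICH <- CAM(ANDRICH)       # conditional adjacency matrix from the raw data
MAX(camANDRICH, type = "scale")  # total MAX statistic for the scale
ISO(camANDRICH, type = "scale")  # total ISO statistic for the scale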
Uncertainty estimates for MUDFOLD statistics

Since the sampling distributions of MUDFOLD's goodness-of-fit and diagnostic statistics are non-standard, calculating their standard errors is not straightforward. Instead, to provide uncertainty estimates of the MUDFOLD statistics both at the item and at the scale level, the nonparametric bootstrap is used (Efron, 1979). The bootstrap is a resampling technique that can be used for assessing uncertainty in instances where statistical inference is based on complex procedures. With bootstrapping we draw R times n samples with replacement from a dataset of size n. The bootstrap replicates of a statistic obtained from the R iterations are then used to approximate the sampling distribution of the statistic. Given a MUDFOLD scale s, item statistics such as $O_j(s)$, $EO_j(s)$, and $H_j(s)$, and scale statistics such as $O_{total}$, $EO_{total}$, and $H_{total}$, are bootstrapped R times. The bootstrap procedure implemented in mudfold depends on the function boot() from the R package boot (Canty and Ripley, 2017). Using the boot package allows the user of the mudfold package to obtain different types of confidence intervals for assessing uncertainty using the function boot.ci().

In addition to the uncertainty estimates, a bootstrap estimate of the unfolding scale can also be calculated. This estimate corresponds to the most frequently obtained MUDFOLD scale in the R bootstrap iterations. In many instances the bootstrap estimate will coincide with the MUDFOLD scale obtained by the item selection algorithm. When the two estimates differ, the bootstrap scale estimate can be used to correct the MUDFOLD scale after assessing its properties carefully.

Nonparametric estimation of person ideal points

With MUDFOLD, after obtaining an item ordering (scale) that consists of a (sub)set of m items, m ≤ N, one can estimate in a nonparametric way the subject locations on a latent continuum. Two nonparametric estimators with slightly different properties can be used, both based on the Thurstone (1927, 1928) estimator for the measurement of attitudes. Originally, the Thurstone estimator $\hat{\theta}_i^{\beta}$ of the ith respondent's location parameter, given a vector of known item location parameters $\beta = (\beta_1, \beta_2, \ldots, \beta_m)$, was defined as

$\hat{\theta}_i^{\beta} = \frac{\sum_{j=1}^{m} \beta_j x_{ij}}{\sum_{j=1}^{m} x_{ij}},$

where $x_{ij}$ is the response of person i on item j. The parameter estimate $\hat{\theta}_i^{\beta}$ for each i takes values within the item parameter range. In MUDFOLD, however, the item parameter vector β is unknown, so we need to estimate it. In order to do so, we make use of two alternative estimates for the β's, proposed by Van Schuur (1988) and Johnson (2006), respectively. The former uses item ranks as approximations of the item locations, while the latter uses item quantiles.

Van Schuur's person parameter estimator uses the item ranks obtained from MUDFOLD's item selection algorithm as estimates for the vector $\beta = (\beta_1, \beta_2, \ldots, \beta_m)$. Since MUDFOLD estimates only the rank order of the parameter vector, i.e. $r = (r_1, r_2, \ldots, r_m)$, one can define a rank estimate $\hat{\beta}_j = r_j$, where $r_j$ is the rank of item j on the MUDFOLD scale. By using the estimated ranks as approximations of the parameter vector, we can estimate a respondent's location as the mean of the endorsed item ranks. That is,

$\hat{\theta}_i = \frac{\sum_{j=1}^{m} r_j x_{ij}}{\sum_{j=1}^{m} x_{ij}}.$

Alternatively, Johnson's quantile estimator bounds both the estimates for the θ's and the β's within the unit interval. This estimator uses the item ranks divided by the length of the scale m, i.e. $\hat{\beta}_j = r_j / m$, as approximations for the β vector. For all the estimators described in this section, no estimate can be defined for individuals with a total score $X_{+i} = \sum_{j=1}^{m} x_{ij}$ equal to zero. These individuals endorse no item and therefore provide no information on whether they belong to the extreme right or the extreme left of the scale. The user of the package mudfold can choose between Van Schuur's and Johnson's estimators for obtaining person scores on the factors.
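As a hedged base R illustration of the rank-based estimator (the object names are hypothetical; in practice these scores are returned by the package, e.g. via the coef() method described below):

scaleItems <- c("INEFFECTIV", "DONTBELIEV", "DETERRENT")  # items in their MUDFOLD rank order
X <- as.matrix(ANDRICH[, scaleItems])                     # responses, columns in scale order
r <- seq_along(scaleItems)                                # rank estimates beta_hat_j = r_j
theta_hat <- as.vector(X %*% r) / rowSums(X)              # mean endorsed rank; NaN when no item is endorsed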
Missing values

Missing data occur when intended responses from one or more persons are not provided. Handling missing values is critical, since they can bias inferences or lead to wrong conclusions. One approach is to ignore the missing observations by applying list-wise deletion. This, however, can lead to a great loss of information, especially if the number of missing values is large. The other approach is to replace the missing values with actual values, which is called imputation. In the case of random missing value mechanisms, such as missing completely at random (MCAR) and missing at random (MAR) (Rubin, 1976; Little and Rubin, 1987), different approaches can be used to impute the missing observations. Imputation within IRT is in general associated with more accurate estimates of item location and discrimination parameters under several missing data generating mechanisms (Sulis and Porcu, 2017). In the package mudfold, missing values can be imputed using the logistic regression version of multivariate imputation by chained equations (MICE), which is available from the R package mice. MICE imputation within mudfold can be used on its own or in combination with bootstrap uncertainty estimates. In the latter case, each bootstrap sample is imputed before fitting a MUDFOLD scale, while in the former the data are imputed M times and the results are averaged across the M datasets.

The mudfold package

The R package mudfold contains a collection of functions related to the MUDFOLD item selection algorithm. In the following we describe the functionality of the package; the ANDRICH dataset is used for demonstration purposes.

Description of the functions mudfold() and as.mudfold()

The main function of this package, called mudfold(), fits Van Schuur's item selection algorithm to binary data in order to obtain a unidimensional ordinal scale for the persons. The mudfold() function can be called with

mudfold(data, estimation, lambda1, lambda2, start.scale, nboot, missings, nmice, seed, mincor, ...)

The function has ten main arguments, of which only the first is obligatory. These are:

data: The input data, i.e. an n × N data.frame or matrix with persons in the rows and items in the columns, containing the binary responses of n individuals on N items.

estimation: This argument handles the nonparametric estimation of the person parameters. The default, estimation = "rank", uses a rank-based estimator (Van Schuur, 1988). Alternatively, person parameters are obtained by a quantile estimator (Johnson, 2006), which is accessible by setting estimation = "quantile".

lambda1: The parameter λ1, 0 ≤ λ1 ≤ 1, is a user-specified lower bound for the scalability criteria used in MUDFOLD's item selection algorithm. In the default setting, λ1 = 0.3. Larger values of λ1 lead to stricter criteria in the item selection procedure.

lambda2: The lower bound λ2 (default λ2 = 0) for the scalability coefficients of the item triples in the second step of the item selection algorithm, as described above.

start.scale: The user can pass to this argument a character vector of length greater than or equal to three, containing ordered item names from colnames(data) that are used as the best elementary scale for the second step of the item selection algorithm. If start.scale = NULL (default), the first step of the item selection algorithm determines the best elementary triple of items, which is then extended in the second step.

nboot: Argument that controls the number of bootstrap iterations. If nboot = NULL (default), no bootstrap is applied.

missings: Argument that controls the treatment of missing values. If missings = "omit" (default), list-wise deletion is applied to data. If missings = "impute", then the mice function is applied to data in order to impute the missings nmice times.
nmice: Argument that controls the number of mice imputations (used only when missings = "impute" and nboot = NULL).

seed: Argument that is used for reproducibility of the bootstrap results.

mincor: This can be a scalar, a numeric vector of size ncol(data), or a square numeric matrix of size ncol(data), specifying the minimum threshold(s) against which the absolute correlations in the data are compared. See ?mice::quickpred for more details. To be used when mice becomes problematic due to collinear terms.

...: Additional arguments to be passed to the boot() function (see ?boot in R).

The function mudfold() internally has four main steps: a data checking step, the first step of the item selection process, the second step of the item selection process, and the bootstrap step if the user chooses this option. The output of mudfold() is a list() of class "mdf" that contains information for each internal step of the function. The first element of the output list contains information on the function call. The second element contains the results of the data checking step. The next element of the output contains descriptive statistics obtained from the observed data, and the last element of the output holds all the information from the fitting process (triple statistics, first step, second step). If the bootstrap is applied to estimate uncertainty, an additional element that contains the bootstrap information is added to the output.

For example, if you want to fit a MUDFOLD scale to the ANDRICH data and run a nonparametric bootstrap with R = 100 iterations in parallel, you can specify this directly in the mudfold() function as follows.

fitANDRICH <- mudfold(ANDRICH, nboot = 100, parallel = "multicore", seed = 1)

In the example above, the first two arguments are core arguments of the mudfold() function. The third argument, parallel, is an argument of the boot() function that runs the bootstrapping in parallel in order to reduce computation time. The last argument, seed, is used to ensure reproducibility of the bootstrap results.

In some cases the unfolding scale may be known in advance. In such instances, the user is interested in obtaining the MUDFOLD goodness-of-fit and diagnostic statistics for the given scale. The function as.mudfold() can be used for treating a given rank order of the items as a MUDFOLD scale. The function uses only the first two arguments of the mudfold() function. In principle, this function transforms a given scale into an S3 object of class "mdf".

Description of the generic functions

For "mdf" objects from the mudfold() or as.mudfold() functions, generic functions for print(), summary(), plot() and coef() are available. The generic function print.mdf() can be accessed with

print(x)

where x is an "mdf" class object. This function prints information about x, such as the time elapsed for fitting, warnings from the data checking step, convergence for each step of the algorithm, and statistics with bootstrap confidence intervals if nboot is not NULL. In the ANDRICH data example, the command print(fitANDRICH) prints information from the fitANDRICH object to the console. The function call, together with the time elapsed to fit the model, the number of individuals, and the number of items used in the analysis, forms the first part of the output. Next, the values of the mudfold() arguments are given, followed by convergence indicators for each step of the item selection algorithm.
Scale statistics such as the scalability coefficient and the ISO statistic are also printed, together with their percentile confidence intervals obtained in the 100 bootstrap iterations. A summary of the bootstrap iterations finalizes the output when printing the fitANDRICH object.

The function summary is a generic function that summarizes information from model fitting functions. In our case the output of summary.mdf() is a list object summarizing results from the mudfold() function. The function can be called via

summary(object, boot, type = "perc", ...)

and consists of three arguments:

object: a list of class "mdf", output of the mudfold() function.

boot: logical argument that controls whether bootstrap confidence intervals and a bootstrap summary for each coefficient are returned. If boot = FALSE (default), no bootstrap information is returned. When boot = TRUE, confidence intervals, standard errors, and biases calculated from the bootstrap iterations are given for each parameter.

type: The type of bootstrap confidence intervals to be calculated if the argument boot = TRUE. Available options are "norm", "basic", "perc" (default), and "bca". See the argument type of boot.ci() for details.

The output of summary.mdf() is a list with two main components. The first component of the list is a data.frame with scale statistics and the second component is a list with item statistics. Typing summary(fitANDRICH, boot = TRUE) into the R console will return the summary of the scale fitted to the ANDRICH data. The output consists of six distinct data.frame objects. The first data.frame contains information on scale statistics together with their bootstrap statistics. The next four data.frame objects correspond to the H coefficients, the ISO statistics, the observed errors, and the expected errors for each item in the scale, together with their bootstrap summary statistics. The last data.frame gives descriptive statistics for the items in the scale.

A generic function for plotting S3 class "mdf" objects is also available to the user. The function plot.mdf() returns empirical estimates of the IRFs, the order of the items on the latent continuum, or a histogram of the person parameters. You can plot "mdf" class objects with the following R syntax.

plot(x, select = NULL, plot.type = "IRF")

This function consists of three arguments, of which the first is the usual argument x, the "mdf" object to be plotted. The argument plot.type controls the type of plot that is returned, and three types of plots are available. If plot.type = "scale", a unidimensional continuum with the items in the obtained rank order is returned. In the default setting of this function (i.e. plot.type = "IRF"), the corresponding plot has the items on the x-axis, indicating their order on the latent continuum, and the probability of a positive response on the y-axis; the IRF of each item along the latent scale is plotted in a different colour. Setting plot.type = "persons" returns a plot with the distribution of the person parameters on the latent continuum. The argument select is optional and gives the user the possibility to plot a subset of items by providing a vector of item names. If select = NULL, the function returns the estimated IRFs for all items in the obtained MUDFOLD scale.
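A few usage examples with the fitANDRICH object from above (the item names passed to select are taken from the ANDRICH data):

plot(fitANDRICH, plot.type = "scale")    # item order on the latent continuum
plot(fitANDRICH, plot.type = "persons")  # distribution of the person estimates
plot(fitANDRICH, plot.type = "IRF", select = c("HIDEOUS", "DETERRENT"))  # subset of IRFs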
For plotting S3 class "mdf" objects, we use the functions na.approx(), melt() and ggplot() from the R packages zoo (Zeileis and Grothendieck, 2005), reshape2 (Wickham, 2007), and ggplot2 (Wickham, 2009), respectively.

A generic coef.mdf() function for S3 class "mdf" objects can also be used. This function is a simple wrapper with a single argument named type. From a fitted MUDFOLD object, coef.mdf() will extract nonparametric estimates of the person ranks when type = "persons", the item ranks when type = "items", or both when type = "all".

The diagnostics() function

After a scale has been obtained, scale diagnostics need to be applied in order to assess its unfolding properties. The MUDFOLD diagnostics described earlier in this paper are implemented in a function named diagnostics() that can calculate all of them simultaneously. The function syntax is

diagnostics(x, boot, nlambda, lambda.crit, type, k, which, plot)

and uses eight arguments, described below.

x: a list of class "mdf", output of the mudfold() function.

boot: logical argument that controls whether bootstrap confidence intervals and summaries for the H coefficients and the ISO and MAX statistics are returned. If boot = FALSE (default), no bootstrap information is returned. When boot = TRUE, confidence intervals, standard errors, and biases calculated from the bootstrap iterations are given for each diagnostic.

nlambda: The number of regularization parameters to be used in the cv.glmnet() function when testing local independence.

lambda.crit: String that specifies the criterion to be used by cross-validation for choosing the optimal regularization parameter. Available options are "class" (default), "deviance", "auc", "mse", and "mae". See the argument type.measure of the cv.glmnet() function for more details.

type: The type of bootstrap confidence intervals to be calculated if the argument boot = TRUE. Available options are "norm", "basic", "perc" (default), and "bca". See the argument type of boot.ci() for details.

k: The dimension of the basis of the thin plate spline that is used when testing for IRF unimodality. The default value is k = 4.

which: Selects which diagnostic is computed and plotted; the applications below use the values "LI" (local independence), "STAR" (moving maxima pattern of the CAM), and "UM" (IRF unimodality).

plot: Logical. Should plots be returned for the diagnostics that can be plotted? The default value is plot = TRUE.

For the ANDRICH data example, the command diagnostics(fitANDRICH) will calculate and plot the scale diagnostics for the fitANDRICH object.

Unfolding data simulation and description of the mudfoldsim() function

In order to give the user the flexibility of simulating unfolding data, the function mudfoldsim() is available in the mudfold package. The responses of subjects to distinct items are simulated with a flexible parametric IRF that generalizes proximity relations between item and person parameters. Assume that we want to simulate a test dataset with responses from n individuals, indexed by i = 1, 2, . . . , n, on N proximity items (indexed by j), with latent parameters $\theta_i$ and $\beta_j$ respectively. The vector of item parameters $\beta = (\beta_1, \ldots, \beta_N)$ is drawn at random from a standard normal distribution. For the person parameters, the user can choose whether they follow a standard normal distribution or are drawn uniformly in the range of the item parameters. Simulating person parameters from a standard normal distribution may imply that a number of individuals are located too far to the left or right of the most extreme items (due to sampling variation). These subjects will not agree with any item.
Such responses are not useful in unfolding analysis, since they provide no discriminating information for the items in the scale. The user of the mudfold package is free to include or exclude this type of response.

Unfolding models are also known as distance models, since they model the probability of positive endorsement of item j by individual i as a function of the proximity between $\theta_i$ and $\beta_j$. We consider a linear transformation $\tau_{ij}$ of the squared difference $d_{ij}^2 = (\theta_i - \beta_j)^2$, given by $\tau_{ij} = \gamma_1 + \gamma_2 d_{ij}^2$, where the parameters $\gamma_1$ (deterministic parameter) and $\gamma_2$ (discrimination parameter) are fixed. Using $\tau_{ij}$ with the standard logistic function one obtains a parametric IRF $f(\tau_{ij}) = \frac{1}{1 + e^{-\tau_{ij}}}$. Consequently, the positive binary response of individual i to item j can be considered the outcome of a Bernoulli trial with "success" probability $1 / (1 + e^{-\tau_{ij}})$. Hence, the item response variables $X_{ij}$, which contain the binary responses of n individuals on N items, follow a Bernoulli distribution according to

$X_{ij} \sim \mathrm{Bernoulli}\left(\frac{1}{1 + e^{-\tau_{ij}}}\right)$ for $i = 1, \ldots, n$, $j = 1, \ldots, N$.

In the mudfoldsim() function, the model parameters $\gamma_{(.)}$ are user specified, with default settings $\gamma_1 = 5$ and $\gamma_2 = -10$, respectively. This specific setup of the model parameters produces nearly deterministic response curves for the subjects, which in turn guarantees that the number of observed errors is small. We note that the IRF proposed by Andrich (1988) is a special case of the one implemented in the mudfoldsim() function, for $\gamma_1 = 0$ and $\gamma_2 = -1$.

This parametric simulation method is implemented in a flexible R function available in the mudfold package. The function has several arguments that allow the user to control the unfolding properties of the simulated data. In its default settings, the function can be called with the following syntax (a brief usage sketch follows the argument descriptions):

mudfoldsim(N, n, gamma1 = 5, gamma2 = -10, zeros = FALSE, parameters = "normal", seed = NULL)

and makes use of seven user-specified arguments:

N: An integer corresponding to the number of items to be simulated.

n: The number of persons to be simulated.

gamma1: This argument is passed to the IRF and controls the deterministic parameter $\gamma_1$. The higher this parameter, the larger the number of items that individuals tend to endorse, if the parameter $\gamma_2$ is kept constant.

gamma2: The discrimination parameter of the IRF (i.e. $\gamma_2$). As the value of this parameter decreases, individuals tend to make fewer "errors" in their responses (i.e. their responses are more in line with the unfolding scale).

zeros: A logical argument that controls whether individuals who endorse no items will be simulated. If zeros = TRUE, the function allows for individuals that do not endorse any of the items. On the other hand, if zeros = FALSE (default), only individuals who endorse at least one item will be part of the simulated data.

parameters: Argument for the person parameters, with two options available. In the default option, parameters = "normal", the person parameters are drawn from a standard normal distribution. Alternatively, the user can set this argument to "uniform", which implies that the subject parameters are drawn uniformly in the range of the item parameters.

seed: An integer to be used in the set.seed() function. If seed = NULL (default), the seed is not set.
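A minimal usage sketch (inspecting the structure of the result rather than assuming element names, which are described below):

sim <- mudfoldsim(N = 10, n = 500, seed = 1)  # 10 items, 500 persons, default gamma1/gamma2
str(sim, max.level = 1)                       # components of the returned list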
The output of the mudfoldsim() function is a list containing the simulated data (in a random item order), the parameters used in the IRF, and the matrix of probabilities under which the binary data have been sampled.

Description of the pick() function

Since the main mudfold() function is designed for dichotomous (binary) items, we provide the user with the function pick(), which is used to transform quantitative or ordinal variables into binary form. The underlying idea of this function is that the individual selects the items with the highest preference. This transformation can be done in two different ways: either by user-specified cut-off value(s), or by assuming a pick K out of N response process (individuals are asked to explicitly pick K out of N items), where each response vector consists of the K highest-valued items. Dichotomization is performed row-wise by default, but the user can also perform the transformation column-wise. The R function pick() can be called with the following code,

pick(x, k = NULL, cutoff = NULL, byItem = FALSE)

and makes use of four parameters. These are:

x: A data.frame or matrix with persons in the rows and items in the columns, containing quantitative or ordinal responses of n individuals/raters on N items. Missing values are not allowed.

k: This integer (1 ≤ k ≤ N) controls the number of items a person can pick (default k = NULL). This argument is used if one wants to transform the data into pick K out of N form. If the parameter k is provided by the user, then cutoff should be NULL, and vice versa.

cutoff: The numeric value(s) that will be used as threshold(s) for the transformation (default cutoff = NULL). Any value greater than or equal to the cutoff becomes 1, and 0 otherwise. The length of this argument should be equal to 1 (the same threshold for all rows of x) or equal to n (when byItem = FALSE), which imposes an explicit cut-off value for each individual in x. If byItem = TRUE, then the length of this parameter should be 1 (global cut-off value) or N (explicit cut-off per item).

byItem: This is a logical argument. If byItem = TRUE, the transformation is applied to the columns of x. In the default, byItem = FALSE, the function "picks" items row-wise.

In the default parameter settings of the function pick(), the parameters k and cutoff are both equal to NULL. In this case, the mean of the N responses is used as a person-specific cut-off value (if byItem = FALSE). When byItem = TRUE (with k and cutoff equal to NULL), the item mean over all individuals is used as an item-specific cut-off value. The parameters k and cutoff trigger different dichotomization processes and cannot be used simultaneously, which means that only one of the two arguments can be different from NULL. If the user chooses to transform the data assuming that persons pick exactly K out of N items, ties can occur. If x_i is a response vector subject to transformation in which ties exist, then we select among the tied items at random.

Generally, dichotomization should be avoided, since it can distort the data structure and lead to a loss of information. For polytomous data, models that take the information in the different categories into account should be preferred over dichotomization.
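Two hedged invocations for the Loneliness data used below (the responses are presumably coded 1 to 3, so that cutoff = 3 keeps only "yes" answers):

data("Loneliness")
pickK <- pick(Loneliness, k = 3)       # each respondent 'picks' their 3 highest-rated items
pickC <- pick(Loneliness, cutoff = 3)  # 1 if the response is >= 3, and 0 otherwise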
Applications

In this section we provide examples of how to use the MUDFOLD method on two datasets that are provided with the mudfold package. The first application is from the field of psychometrics, while the second example is a linguistic application. The commands install.packages("mudfold") and library(mudfold) will download, install and load the mudfold package so that it can be used. The command set.seed(1) sets the seed for reproducibility.

Loneliness data

In order to demonstrate the functionality of the mudfold package, we re-analyze questionnaire data following the strategy suggested by Post et al. (2001). For this purpose, we use a unidimensional measurement scale for loneliness that follows the definitions of a Rasch scale and was constructed by de Jong-Gierveld and Kamphuis (1985). The de Jong-Gierveld loneliness scale consists of eleven items, five of which are positive and six negative. The items in the loneliness scale are given below; the sign next to each item corresponds to the item content.

A: There is always someone I can talk to about my day to day problems +
B: I miss having a really close friend -
C: I experience a general sense of emptiness -
D: There are plenty of people I can lean on in case of trouble +
E: I miss the pleasure of company of others -
F: I find my circle of friends and acquaintances too limited -
G: There are many people that I can count on completely +
H: There are enough people that I feel close to +
I: I miss having people around -
J: Often I feel rejected -
K: I can call on my friends whenever I need them +

Each item in the scale has three possible response levels, i.e. "no", "more or less", "yes", and dichotomization methods that involve reverse coding of items have been proposed by De Jong and van Tilburg (1999). These methods, as well as the determination of the dimensionality of this scale, have been under critical discussion. Following this discussion, Post et al. (2001) reanalyzed the loneliness scale data obtained from the NESTOR study (Knipscheer et al., 1995) using MUDFOLD in a three-step analysis routine. Persons with missing responses are removed from the data ($n_{miss}$ = 69). The dataset with the complete responses is included in the R package mudfold in R data format. List-wise deletion in this case yields identical results to MICE imputation.

Following the routine suggested by Post et al. (2001), the responses of each subject are dichotomized setting "yes" against "no" and "more or less". The threshold used for the main analysis was determined on the basis of MUDFOLD scale analyses of datasets with different thresholds. Specifically, the data were dichotomized using as thresholds the responses (i) "yes", (ii) "more or less", and (iii) different thresholds per item, where the response category "more or less" is collapsed with the smaller of the categories "yes" and "no". The results of this analysis showed that dichotomizing the data at the highest preference yields the best unfolding measurement scale for loneliness. Dichotomizing the data at "yes" is straightforward with the pick() function.

data("Loneliness")
dat <- pick(Loneliness, cutoff = 3)

In the first step of the analysis, we conduct a MUDFOLD scale search on the transformed binary responses of n = 3987 individuals on N = 11 items. The λ1 parameter in the mudfold() function is set to λ1 = 0.1, since the default value leads to a minimal scale of length three.

Lonelifit <- mudfold(dat, lambda1 = 0.1, nboot = 100, seed = 1)

The function takes about five minutes to run 100 bootstrap iterations. The resulting scale and its associated statistics can be obtained by summarizing the Lonelifit object.
loneliSummary <- summary(Lonelifit, boot = TRUE)

The MUDFOLD scale for the Loneliness data, in its estimated rank order, is:

loneliScale <- loneliSummary$ITEM_STATS$ITEM_DESCRIPTIVES$items
loneliScale
## "G" "H" "D" "K" "C" "E" "I" "F"

The scale has length eight, with the first four items positively formulated and the last four negatively formulated. Items A, B, and J are excluded from the scale. This is because some triples (with respect to the item rank order) that include these items have a scalability coefficient $H_{hjk}$ lower than λ2. Statistics for the resulting MUDFOLD scale and for each individual item can be accessed directly from the summary object loneliSummary. Scale statistics with their bootstrap uncertainty estimates are available in the first component of the summary; each row of this output shows a scale statistic, and its columns correspond to the bootstrap properties of this statistic. The H coefficient for the scale shows strong evidence for unidimensionality ($H_{total}(s) \approx 0.54$, se = 0.031), the ISO statistic is low ($ISO_{total} \approx 0.08$, se = 0.459), denoting a small amount of violations of manifest unimodality, and the MAX statistic is zero (se = 0.683), meaning no violations of the stochastic ordering.

Scale diagnostics are given in Figures 1 and 2. Visual inspection of whether the maxima of the CAM rows are a nondecreasing function of the item ranks, of violations of the local independence assumption, and of the IRF of each item in the Loneliness unfolding scale can be obtained with the diagnostics() function as shown below.

par(mfrow = c(1, 2))
diagnostics(Lonelifit, which = "LI")    # testing for local independence
diagnostics(Lonelifit, which = "STAR")  # visual inspection of the moving maxima
par(mfrow = c(2, 4))
diagnostics(Lonelifit, which = "UM")    # visual inspection of IRF unimodality
par(mfrow = c(1, 1))

The H coefficients for each item in the scale are also available in the summary object. From the item fit we can see that the H coefficient of each item in the scale is above 0.5, which means that all the items are scalable together. The column boot(iter) of the output gives the number of times each item was included in a MUDFOLD scale in the R = 100 bootstrap iterations. The item G was the most frequently included item (96%), while the items K and I were included less frequently than the other items (60% and 47%, respectively). Typing loneliSummary$ITEM_STATS$ISO_MUDFOLD_items into the R console returns a summary of the ISO statistic for each item in the scale, which shows that only small violations of unimodality occur for the items in the scale. The same holds for the MAX statistic (accessible via loneliSummary$ITEM_STATS$MAX_MUDFOLD_items), which shows zero values for all the items in the scale.

After the scale is obtained and checked for its conformity to the unfolding principles, we can visualize the estimated empirical IRFs and the distribution of the estimated person parameters. Plots of the IRFs and the person parameters can be obtained by:

plot(Lonelifit, plot.type = "IRF")
plot(Lonelifit, plot.type = "persons")

Figures 3 and 4 show the empirical estimates of the IRFs and the distribution of the person parameters, respectively.
In Figure 3 it can be seen that the scale clearly consists of four positively formulated items at its beginning, whose IRFs decrease as one moves from the left to the right of the scale, and four negatively formulated items at the end, whose IRFs increase as one moves from the left to the right of the scale. In Figure 4 we can see that the sample under consideration tends to feel less lonely, since the distribution of the person parameters is skewed to the right. In such an example, any parametric model that assumes a normal distribution of the latent person parameters would clearly be inappropriate.

Plato's seven works data

In this section we present an application of the MUDFOLD method to the Plato7 dataset. This dataset is available from the R package smacof (de Leeuw and Mair, 2009) and has also been included in the mudfold package. The data can be loaded into the R environment with the command data("Plato7"). Plato7 contains information on the quantity distribution of sentence endings in seven works of Plato (Cox and Brandwood, 1959). Specifically, the last five syllables of each sentence in the seven works of Plato are extracted and categorized as short or long. This produces $2^5 = 32$ possible short-long combinations of length five, which are called clausulas and can be used to identify rhythmic changes in the literary style. The quantity of the clausulas in each work of Plato is recorded in terms of proportions.

The question is whether it is possible, using these data, to assign a chronological order to the works of Plato. It is known that Plato wrote the Republic first and the Laws last. In between, Plato wrote the Critias, Philebus, Politicus, Sophist and Timaeus; however, the exact order of these five works is unknown. Assuming that the change in Plato's literary style was monotone in time, we might be able to assign a time order to his works by analyzing the clausula distribution in each of them. We consider the development of Plato's literary style as a unidimensional scale on which clausulas and works are ordered. In this analysis we assume that the quantity of clausula i in Plato's work j is governed by a proximity relation: each clausula with parameter $\theta_i$ on a latent literary-style continuum tends to appear most frequently in the works of Plato with parameters $\beta_j$ close to $\theta_i$.

Since the data are given in continuous form, we transform the percentages into binary format in order to apply MUDFOLD. We consider the mean quantity of each clausula as an explicit cut-off value for the transformation. This can be seen as a pick any out of N response process, where the number of items "picked" varies across subjects. We can apply the transformation with the function pick() from the mudfold package in its default settings as follows.

dat.Plato <- pick(Plato7)

After the transformation, we end up with a matrix containing the binary preferences of n = 32 clausulas on N = 7 works of Plato. Now we can fit a MUDFOLD scale (with a bootstrap for assessing parameter uncertainty) to the transformed data with the default search settings and study its summary.

fitPlato <- mudfold(dat.Plato, nboot = 100, seed = 1)
summaryPlato <- summary(fitPlato, boot = TRUE)

We can check the MUDFOLD scale from the summary object. The results show that the MUDFOLD scale has length five and that the items Critias and Timaeus have been excluded from the measurement process.
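The estimated order can be extracted from the summary object; the element path below is assumed to be analogous to the Loneliness example above:

summaryPlato$ITEM_STATS$ITEM_DESCRIPTIVES$items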
Republic is correctly ordered first and Laws is correctly ordered last among Plato's works. Almost all the items are strong unfolding items, with $H_j(s)$ higher than 0.5, which means that the items are scalable together in one dimension. The item Sophist shows moderate unfolding strength, with the lowest item scalability coefficient (i.e. $H_j(s) = 0.41$), while the item Republic is the strongest unfolding item in the scale. Since the ISO statistic for the scale is positive, one may want to check which items are responsible for the small amount of manifest unimodality violations that is observed. Assessing these violations for each item involves checking their ISO statistics. The summary output for the ISO statistics of the items in the MUDFOLD scale shows that Republic is the item with the largest manifest unimodality errors in its estimated IRF, with an ISO statistic value of 0.1. The highest uncertainty is observed for the item Philebus, which shows a bootstrap standard error of 0.1.

The empirical IRF estimates and the smoothed IRF estimates for the items in the Plato7 unfolding scale can be visualized with

plot(fitPlato, plot.type = "IRF")
par(mfrow = c(2, 3))
diagnostics(fitPlato, which = "UM")
par(mfrow = c(1, 1))

and the output is shown in Figures 5 and 6, respectively. From Figure 5 it can be seen that the scale consists of two items in the first positions (i.e. Republic and Sophist) with decreasing empirical IRFs as one moves from the left to the right-hand side of the latent scale. These two items show a small amount of manifest unimodality violations, which can be seen at the end of their IRFs, where the value of the curve for the item Laws is larger than that for the item Philebus. Third in the scale is the item Politicus, whose empirical IRF shows a single-peaked shape. Politicus is followed by the items Philebus and Laws, with increasing empirical IRFs, at positions four and five of the scale. The IRF estimates shown in Figure 6 exhibit no obvious violations of IRF unimodality. Other diagnostics can be obtained with the diagnostics() function.

In this example, the bootstrap estimate of the scale and the estimated MUDFOLD scale are slightly different. In such instances, an additional element with a summary of the scale estimated by the bootstrap is included in the output. Accessing the summary of the bootstrap scale is straightforward with summaryPlato$BOOT_SCALE.

Summary

In this paper we introduced an R package named mudfold (Balafas et al., 2019), which is available under the general public license (GPL ≥ 2) from the Comprehensive R Archive Network (CRAN) at http://CRAN.R-project.org/package=mudfold. This package implements a nonparametric item response theory model for unfolding proposed by Van Schuur (1984, 1988) and further developed by Post (1992; see also Johnson, 2006). The mudfold package is an addition to a broad family of R packages that fit IRT models. The approach described here is an additional exploratory and validation method when fitting such models. Moreover, it complements the package mokken for the case in which proximity item response data need to be analysed.

Looking to the future, our focus will be on extending the functionality of this package. In particular, we aim at implementing a more efficient item selection algorithm, which can reduce the computational cost of the iterative algorithm presented here when the sample size and the number of items increase substantially.
Methodologies for handling multicategory items (Van Schuur, 1984) are not yet implemented in the package; however, we plan to extend its applicability in the future. Last but not least, a parametric version of the MUDFOLD method, based on the IRF implemented in mudfoldsim(), would offer a complete framework for the analysis of data that have been generated under an unfolding response process.

Bibliography

D. Andrich. The application of an unfolding model of the PIRT type to the measurement of attitude. Applied Psychological Measurement, 12(1), 1988.
Preliminary study on the tea dust explosion: the effect of tea dust size

Food-based dusts are considered combustible dusts: they are composed of distinct particles which, regardless of size or chemical composition, present a fire or deflagration hazard when suspended in air or any other oxidizing medium over a range of concentrations. The explosion effect of food-based dust can have catastrophic consequences, because the initial shock wave from the explosion lifts up more dust and triggers a chain reaction through the plant. One of the parameters that can enhance the explosion is the particle size of the dust. In this study, the effect of four different particle sizes of tea dust on the dust explosion severity was tested in a confined 20 L explosion bomb. Tea dust tends to explode due to its molecular structure, which contains carbon-hydrogen bonds that can release a significant amount of thermal energy. The experimental results showed that the values of Pmax and (dP/dt)max of tea dust were most severe for the 160 μm particle size, namely 1.97 bar and 4.97 bar/s before drying and 2.09 bar and 7.01 bar/s after the drying process. The finer dust reacted more violently than the coarser one. As particle size decreases, the rate of explosion pressure change increases, as long as the size is capable of supporting combustion.

Combustible dust explosions have caused several large property losses at industrial plants in the past decade. A wide variety of materials that can be explosible in dust form exist in many industries, such as food, grain, tobacco, wood, plastics, pulp, paper, rubber, pharmaceuticals, pesticides, dyes, coal and metals. These materials are used in a wide range of industries and processes and may also occur naturally, for example as pollens, volcanic ashes and sandstorms. The mechanisms that create dust and keep it suspended in air emerge from aerodynamic forces; the dust is then carried to other places by air currents. Combustible dust consists of fine particles that present an explosion hazard or a blast risk when suspended in air under specific conditions. As defined by the National Fire Protection Association (NFPA) [1], combustible dust is characterized as a combustible particulate solid that exhibits a fire or deflagration hazard when suspended in air, or some other oxidizing medium, over a range of concentrations, regardless of particle size and shape.

In dust explosion studies, the focus has mainly been on dust explosion mechanisms and preventative safety measures for carbonaceous and metal dust explosions. Agricultural dust explosions, however, especially in the food and beverage industries, are seldom studied. Furthermore, many people do not know that food-based dusts such as flour, grain, sugar, coffee, tea, and spices are among the highly combustible dusts. Under the right conditions, table sugar can be as flammable as wood, which is made of cellulose, i.e. many sugar molecules linked together. According to Yan and Yu [2], these particles are much more flammable because of their surface area-to-volume ratio. Taveau [3] noted that a primary dust explosion, which is usually followed by a secondary explosion, will lead to serious damage to nearby units.
The overpressure and flames from the primary explosion play an important role in triggering a secondary explosion. Tea originated in South Eastern China; nowadays it is cultivated in many countries all over the world and has more than 82 different species. Adnan et al. [4] stated that the chemical components in tea include amino acids, polysaccharides, volatile acids, vitamins, lipids, alkaloids (theobromine, caffeine, and theophylline), polyphenols (catechins and flavonoids) as well as inorganic elements. Furthermore, during photosynthesis, plants store energy in the form of starches and sugars, also known as carbohydrates. Plants later use this stored energy to fuel important reactions. In tea, the enzymatic reactions that occur during oxidation are fuelled by the carbohydrates, and additionally they are responsible for the formation of polyphenols in young tea leaves.

During the processing of tea dust, much dust is generated, and this leads to a dust explosion hazard. The explosion effect of food-based dust can have catastrophic consequences, because the initial shock wave from the explosion lifts up more dust and triggers a chain reaction through the plant. As a result, equipment and buildings can be destroyed, and employees can be injured or killed. In order to prevent such accidents, Proust et al., Dufaud et al. and Dobashi [5-7] stated that the chemical properties of the dust, the dust explosion sensitivity parameters such as particle size, and the dust explosion severity characteristics, namely the maximum explosion pressure (Pmax), the rate of pressure rise (dP/dt) and the dust deflagration index (KSt), are required. Thus, in this paper, the chemical properties of tea and the explosion severity are analyzed.

1.1 Sample preparation

The sample used in this research was tea dust, which can be purchased from local stores; the selection was made on the basis of brand popularity. The samples were ground using a high-performance laboratory blender. After the grinding process, the samples were sieved into four different sizes: 125 μm, 160 μm, 180 μm and 220 μm. Prior to testing, the samples were dried at a temperature of 105 °C in an oven for one hour to remove the moisture [4].

1.2 Experimental equipment and methodology

1.2.1 Chemical properties identification: analysis by thermogravimetry (TGA)

This equipment is used to measure the amount and the rate of change of the weight of a material as a function of temperature or time in a controlled atmosphere. First, 5 mg of sample was weighed in a platinum pan. Next, the ramping programme was chosen and the sample was heated at a rate of 10 °C per minute until the temperature reached 900 °C. The components are calculated at specific temperatures, i.e. T = 105 °C for moisture content, T = 500 °C for volatility and T = 600 °C for fixed carbon, while ash was determined as the residual. The following equations were used to analyze the mass loss and the differential loss.

% of Moisture = (W − W105)/W × 100%  (1)
% of Ash = W600/W × 100%  (2)
% of Volatility = (W105 − W500)/W × 100%  (3)
% of Fixed Carbon = (W500 − W600)/W × 100%  (4)

Where W is the initial mass of the sample (mg), and W105, W500 and W600 are the masses of the sample at temperatures of 105 °C, 500 °C and 600 °C, respectively.
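As a small illustration, Eqs. (1)-(4) can be evaluated directly; this R sketch (function name hypothetical) mirrors the definitions above:

proximate <- function(W, W105, W500, W600) {
  c(moisture     = (W - W105) / W * 100,      # Eq. (1)
    ash          = W600 / W * 100,            # Eq. (2)
    volatility   = (W105 - W500) / W * 100,   # Eq. (3)
    fixed_carbon = (W500 - W600) / W * 100)   # Eq. (4)
}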
1.2.2 Analysis of explosion data

Fig. 1. Schematic diagram of the Siwek 20 L spherical vessel [8]

The 20 L spherical vessel shown in Figure 1 was used to obtain the flammability and severity data. The explosion experiments were performed using two 5 kJ chemical igniters, connected to the ignition leads, as the standard ignition source. The ignition delay time tv was fixed at 60 ms. The pressure inside the spherical vessel was measured by two "Kistler" piezoelectric pressure sensors. The dust was loaded directly into the storage container and was dispersed through a rebound nozzle, connected to an outlet valve located at the bottom of the vessel, using compressed air pressurized at 20 bar (gauge). The vessel was interfaced with a computer, which controls the dispersion and firing sequence and the data collection using the control system named KSEP. As part of the experimental programme, three repeat tests were performed for each condition, and these demonstrated the repeatability of the measurements.
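Although not computed explicitly in this text, the deflagration index mentioned in the introduction is conventionally obtained from 20 L sphere data via the cube-root law, KSt = (dP/dt)max · V^(1/3). A hedged R sketch using the (dP/dt)max value reported in the Results below for the dried 160 μm sample:

V <- 0.02                  # vessel volume in m^3 (20 L)
dPdt_max <- 7.01           # bar/s, 160 um tea dust after drying
KSt <- dPdt_max * V^(1/3)  # cube-root law, in bar.m/s
KSt                        # approximately 1.9 bar.m/s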
2 Results and discussion

2.1 Chemical properties identification: moisture content, volatility, fixed carbon and ash

The thermogravimetric (TGA) method, based on the ASTM (2008) procedure, was applied to determine the chemical properties of the tea dust. The percentage of weight loss of the four different sizes of tea dust can be calculated from the TGA curves. From the data obtained, chemical parameters such as moisture, volatility, fixed carbon and ash content can be determined. Particle size distribution plays a significant role in the flame propagation process; it is the dominant physical parameter that affects the explosion severity and ease of ignition of a combustible dust [8]. Benedetto et al. [9] suggested that when coarser particles exist, devolatilization and particle heating could control the explosion process. Table 1 shows the results obtained from TGA. As shown in Table 1, the moisture content was highest for the 220 μm size, followed by the 180 μm, 160 μm and 125 μm sizes. Tea dusts tend to absorb moisture from the air, and a layer of water molecules forms on the particle surface. This layer causes the particles to agglomerate, which increases the effective particle size and reduces the surface area. It can be concluded that the greater the particle size, the higher the moisture content. With increasing moisture content, the ignition sensitivity of the dust weakens significantly, and the lower heating value of the dust decreases linearly. Wade et al. and Du et al. [10,11] suggested that the energy of an ignition source is absorbed by the water in the particles; consequently, the maximum explosion pressure and the rate of pressure rise decrease as the moisture content rises. Another parameter obtained from Table 1 is the volatility of the tea dust. Table 1 shows that the volatility of tea decreases as the size increases. As stated by Abbasi and Abbasi [12], in the dust explosion mechanism, the smaller the particle size of the dust, the more volatiles are expelled. The measurement of the volatile content by TGA is a slow heating process; it is possible that under the fast heating in a flame front, the carbon is converted to CO and adds to the volatiles. It was also found that the volatile-release activation energy increased with the content of water and ash in the tea. Devolatilization and particle heating could control the explosion process when coarser particles exist, as mentioned by Todaka et al. [13]. The more volatile the dust, the less heat is needed to ignite the dust/air mixture. The lower flammability limit coincided well with the conditions in which the mass density of the smaller particles was above the limit. Table 1 also shows the fixed carbon of the tea dust. Fixed carbon is the solid flammable residue that remains once the particle is heated and the volatile matter is removed. Its value was calculated as the difference between 100 and the sum of the moisture, volatile matter and ash. The TGA results showed that the fixed carbon of the tea dust ranged from 3 to 30 wt%. From Table 1, the particle size of 160 μm has the highest fixed carbon value, 30.46%, which might be due to the greater surface area of the particles and the moisture, volatile matter and ash contents of the sample. The last parameter obtained from Table 1 is the ash content.
Ash is the residue remaining after water and organic matter have been removed by a heating process in the presence of oxidizing agents. Cashdollar, Fumagalli et al. and Bershad [14,15,16] mentioned that the ash present in a dust sample is a measure of the inorganic material content and represents the incombustible fraction. Based on Table 1, the ash content increases as the particle size increases. The higher ash content might be due to the lower moisture content of the sample [15]. Through absorption of the thermal energy released by the combustion reaction, the incombustible ash may act as an inertant [16] and does not contribute to the combustion and explosion. From these results, it can be concluded that as the particle size increases, the moisture content and ash content also increase while the volatility decreases.

Analysis of explosion data

Maximum explosion overpressure (P max) of tea dust

In order to identify the explosion characteristics of the tea dust, the tests were performed in a 20 L vessel. Figure 2 shows P max as a function of time. This value is one of the explosive properties estimated in the experiment to measure the severity of a dust explosion: it is the maximum explosion overpressure generated in the test vessel. In Figure 2, all dust sizes start to ignite at t = 0.1 s. Combustion begins slowly for all dust sizes; after a few milliseconds it becomes fast, owing to flame acceleration as the mass burning rate increases. However, the particle size of 160 μm takes the longest time to complete the combustion and explosion, about 1.4 s before and 0.9 s after drying. According to Lemkowitz et al. [17], for a flammable mixture in a closed vessel undergoing deflagration and ignited at the centre, the flame expands spherically from the centre of the vessel until it reaches the wall; during this process, the pressure in the vessel rises continuously, and both the pressure and the rate of pressure rise reach a maximum when the flame reaches the wall. From Figure 2, P max for particle sizes from 125 μm to 220 μm before drying ranges from 0.07 bar to 1.97 bar. After the drying process, P max for all sizes increased, by 6.16%, 5.94%, 10.20% and 30.09%, respectively. The particle size of 160 μm shows the highest P max both before and after drying, 1.97 bar and 2.09 bar respectively. As discussed by Lemkowitz and Pasman [18], the behaviour of a dust explosion depends strongly on the particle size: when the particle size decreases, the minimum energy required to ignite the dust cloud decreases, and thus P max increases. Lee et al. [19] also suggested that P max increases with decreasing particle size and moisture content. Inoka et al. [20] showed that the dispersibility of the dust increases as its moisture content decreases. A smaller particle has a larger surface area and lower moisture content. The moisture content increases the energy needed for ignition, which makes the larger dust sizes, 180 μm and 220 μm, more difficult to ignite. This is shown in Figure 2, where the P max values decline slowly with increasing particle size. The particle size has a large influence on velocity and acceleration: particles with a smaller size and larger surface area absorb heat more readily and rapidly form ignitable mixtures. Based on the research done by Suhaimi et al. [21], the mass burning rate speeds up the flame propagation and results in the highest and steepest explosion overpressure development in the pressure-versus-time curves shown in Figure 2. In Figure 2, the particle size of 160 μm shows a significantly steep rise, from 0.08 bar to 1.95 bar before drying and from 0.07 bar to 1.97 bar after drying. The burning rates for the particle sizes of 180 μm and 220 μm are slow, owing to the higher moisture content of those dusts.

Maximum rate of pressure rise (dP/dt) max

From Figure 3, (dP/dt) max for particle sizes from 125 μm to 220 μm before drying ranges between 0.21 bar/s and 4.97 bar/s. After drying, (dP/dt) max increases to 1.4 bar/s, 7.01 bar/s, 4.63 bar/s and 2.78 bar/s, respectively. The results in Figure 3 show that (dP/dt) max decreases as the particle size increases. This might be due to the particle size distribution: particulates with a similar average particle size typically have different particle size distributions. Dahoe et al. [22] stated that a dust with exactly the same chemical composition but with a narrower particle size distribution around the same median size may not explode at all under the standard test conditions, whereas the same dust with a wider particle size distribution may show high explosion severity and sensitivity. This is because a dust containing a significant fine fraction is more sensitive to ignition than the coarse fraction. According to Dobashi [7], the particles may have irregular shapes, which result in a larger surface area than a sphere of the same volume and make the dust more explosive. Also, the larger particle sizes have a higher moisture content, which increases the ignition energy and reduces the (dP/dt) max value. Regarding the flame propagation mechanism, Cashdollar [14] indicates that smaller particles are likely to react faster than larger particles of the same material. Furthermore, the smaller particles disperse more easily and remain airborne longer, which is why the particle size of 160 μm gives the highest (dP/dt) max compared with the other sizes. The particle shape and porosity can also affect the surface area and reaction rate; shapes with a greater surface area propagate flame more readily and are thus more hazardous. Ramírez et al. [23] also indicate that (dP/dt) max reflects the speed of particle combustion. A faster and stronger explosion can be created by smaller particulates, since this greatly increases the value of (dP/dt) max; it may also result in a more powerful pressure wave, since (dP/dt) max represents how much pressure develops within a second. Based on Figure 3, the particle size of 160 μm showed the highest rate of pressure rise both before and after the drying process. Although a smaller particle size could give greater P max and (dP/dt) max values, the surface area-to-volume ratio must be taken into consideration. The tests showed that 160 μm was the optimum tea dust size for generating high P max and (dP/dt) max. Eckhoff [24] stated that for most organic materials, a further decrease in particle size no longer increases the combustion rate, as devolatilization no longer controls the explosion rate. This explains why the particle size of 125 μm has the lowest P max and (dP/dt) max even though it is the smallest size.

Fig. 3. Graph of (dP/dt) max versus various particle sizes
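As an illustration of how (dP/dt) max is read off a pressure-time record such as those behind Figures 2 and 3, the sketch below differentiates a sampled trace numerically. The synthetic sigmoid trace only stands in for real 20 L vessel data.

```python
import numpy as np

def max_pressure_rise(t, p):
    """Maximum rate of pressure rise (bar/s) of a sampled trace p(t)."""
    dpdt = np.gradient(p, t)      # central differences
    i = int(np.argmax(dpdt))
    return dpdt[i], t[i]

# Synthetic trace: smooth rise to ~Pmax = 1.97 bar around t = 0.5 s.
t = np.linspace(0.0, 1.5, 1501)                  # s
p = 1.97 / (1.0 + np.exp(-(t - 0.5) / 0.05))     # bar
rate, t_at = max_pressure_rise(t, p)
print(f"(dP/dt)max = {rate:.2f} bar/s at t = {t_at:.2f} s")
```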
Deflagration index, K St

Besides P max, another property used to determine the explosion severity is K St. This value is the normalized rate of pressure rise of a combustible dust. The relationship between the deflagration index and the maximum rate of pressure rise is given by the equation below, known as the cube-root law:

K St = (dP/dt) max · V^(1/3)

where V is the volume of the vessel and (dP/dt) max is the maximum rate of pressure rise. K St represents the maximum mass burning rate and corresponds to the moment in the explosion when the flame area is at its maximum. Generally, K St increases with increasing P max. The flame propagation during a dust explosion starts with devolatilization before proceeding to vapour-phase combustion, which replicates the gas explosion mechanism. From the TGA results discussed earlier, the 160 μm tea dust has a larger surface area and is more volatile than the 180 μm and 220 μm fractions. Amyotte et al. [25] stated that the hazard posed becomes more dramatic as the volatile content increases. This suggests that a dust with high volatility gives a higher value of K St and thus a higher dust explosion severity. Moreover, K St decreases linearly with increasing moisture content: the water may inhibit the explosibility and severity of the particles and tends to lower the ignition sensitivity of the material. This explains why the particle sizes of 180 μm and 220 μm have lower K St, as their moisture contents are higher than those of the 125 μm and 160 μm fractions. Since the K St of the tea dust lies in the range 0-200 bar·m/s, it falls in class St 1. According to OSHA [26], most food-based dusts are class St 1. Ramírez et al. [23] reported that materials such as wheat grain dust and alfalfa have K St values of 148 bar·m/s and 50 bar·m/s, respectively. Although the class is the same, the moisture content and the particle size of those materials did not match those of the tea dust sample in this research. Even though these materials are class St 1, such K St values can generate enough power to cause a flash fire, compromise the containment of a piece of equipment or blow out the walls of a building, as mentioned by Dastidar et al. [27].

Conclusion

This paper examined the effect of tea dust particle size on dust explosion severity. The experiments were performed in a 20 L spherical vessel over the particle sizes 125 μm, 160 μm, 180 μm and 220 μm. The conclusions of this study are as follows: (i) As the particle size increases, the surface area of the particle decreases, the moisture content increases, the volatility decreases and the ash content increases. (ii) The values of P max and (dP/dt) max of the tea dust were most severe for the particle size of 160 μm: 1.97 bar and 4.97 bar/s respectively before drying, and 2.09 bar and 7.01 bar/s respectively after drying. (iii) Low moisture content would be the main factor contributing to a higher K St of the dust; the high volatility of the dust could also give a higher K St and hence a higher dust explosion severity. As the moisture content decreases, the mass of the dust particles decreases, which increases their ability to remain suspended in the air and to contribute to dust distribution and layering. (iv) Since the K St value for the tea dust is between 0 and 200 bar·m/s, it falls in class St 1.
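A minimal sketch of the cube-root law and the St classification used in the conclusions. The inputs are the paper's after-drying figures for the 160 μm fraction; the class thresholds follow the usual St 0/St 1/St 2/St 3 scheme.

```python
# Cube-root law: K_St = (dP/dt)_max * V^(1/3), with V in m^3.

def k_st(dpdt_max_bar_s, volume_m3=0.020):
    return dpdt_max_bar_s * volume_m3 ** (1.0 / 3.0)

def st_class(kst):
    """Usual St classification thresholds in bar.m/s."""
    if kst == 0:
        return "St 0"
    if kst <= 200:
        return "St 1"
    if kst <= 300:
        return "St 2"
    return "St 3"

kst = k_st(7.01)  # (dP/dt)max = 7.01 bar/s after drying, V = 20 L
print(f"K_St = {kst:.2f} bar.m/s -> {st_class(kst)}")
```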
High Q-Factor Wideband Resonators for Millimeter and Submillimeter Applications

Physical principles are presented for designing a multipurpose set of high Q-factor quasioptical and corrugated resonators with automatic frequency tuning (Q > 6 × 10^4, VSWR < 1.6) that can operate in the frequency band from 37.5 to 400 GHz. The electrodynamical calculation methods for the resonators, their constructive particularities, and the methods and results of the experimental research are considered. This set of resonators can be used as a universal measuring resonator for measuring radio-signal fluctuations and the parameters of different media, in particular nanotube composites and high-temperature superconductors.

Introduction

In recent years, the present-day world of science and practical engineering has seen UHF- and SHF-band MW technology evolving at a vigorous pace. At the same time, research and development work in mm- and submm-wave technology is still ongoing and gaining momentum; this is mainly because radar, radio navigation, communication equipment, guidance, and control systems tend to enhance their capabilities, which can be achieved over these bands alone (specifically, resolving power, detection, hitting of a target, noise immunity, and speed of response) [1-3]. As regards the MW technology, we are concerned with designing devices and components having the required parameters. In addition, a series of MW components such as resonators, detectors, mixers, oscillators, attenuators, directional couplers, and others have been engineered not only for the UHF and MW bands (from 1 to 180 GHz) but also for submm-band wavelengths (from 180 to 405 GHz). The basic advantages of these components are broad bandwidth, low power losses, and a low noise level. Measuring systems enabling the amplitude and phase characteristics of communication and radar facilities to be determined over the frequency range of 1 to 180 GHz are likewise under development. The noise amplitude measurement is based upon low-noise detectors for an output power of 1 μW. The phase noise is measured by means of a two-channel frequency discriminator at an output power of 1 mW. The basic technical characteristics are as follows: (1) the frequency range varies between 1 and 118 GHz (9 models); (2) the measurement sensitivity at a 10 kHz offset from the carrier is between −150 dB/Hz and −165 dB/Hz, depending on the model, for amplitude noise measurements, and between −110 dB/Hz and −145 dB/Hz for phase noise measurements.

Systems intended to measure the parameters of different dielectric materials are currently well under development. A good deal of effort is being undertaken to create new types of transmission lines whose operation is based on novel physical principles.

High-quality VHF resonators are used to create high-quality signal sources for radar and navigation systems, and for spectral and frequency measurements and physical research. VHF resonators are used in VHF electronics (resonance and stabilizing systems of generators), in measuring techniques (wavemeters, filters, signal spectrum measuring instruments, frequency discriminators), and in experimental physics (electron paramagnetic resonance spectroscopy, measurement of material parameters, and so forth).
Principles of Q-Factor Increasing for mm and Submm Resonators

At present, the technological possibilities for increasing the Q-factor at room temperature (use of high-conductivity materials, high-quality processing of surfaces) are practically exhausted. In this context, the use of new physical principles gains a special role. The simplest approach is to use the higher modes in multimode resonators. However, in this case it is necessary to take supplementary measures for mode selection, because of the existence of degenerate modes in a multimode wideband resonance system. The use of quasioptical open resonators [4], which are one subject of the present report, is a highly effective means of mode selection. It should be noted that even in them the degree of selection is sometimes insufficient, and additional means must be used (see below). In this paper two types of open resonators are considered: type A, with two identical spherical mirrors, and type B, having a spherical mirror and a planar disk mirror. The other approach consists in using the effect of loss reduction at a corrugated surface. This effect was used to reduce the H11-mode attenuation in flexible cylindrical waveguides [5] and horn antennas [6]. In the present work, a cylindrical resonator with a longitudinally corrugated lateral surface (operating on the E_0mn modes with m = 1, 2 and n = 15, ..., 25) was designed on this basis. The method of impedance-type effective boundary conditions, generalized to the case of finite conductivity of the material [6], is the basis of the electrodynamical calculation of this resonator (Figure 1).

The Wide-Range Matching of Multimode Resonators with Single-Mode Rectangular Waveguides

The simplest coupling element is a diaphragm with a round hole (of radius R) located in the cross-section of the waveguide (R ≪ λ). The calculation of such an element is usually based on the dipole approximation. However, using the usual expression for the magnetic polarizability of the hole does not, in this instance, lead to satisfactory agreement with experiment. The reason is that the magnetic polarizability is greatly changed under the action of the waveguide walls [7]. This influence is taken into account in the present work, which brings theory and experiment into agreement. The non-diaphragm coupling element, a narrow slot in the center of the spherical mirror, is more efficient. It is linked to a smooth waveguide transition from the slot cross-section to that of a standard rectangular waveguide, which is orthogonal to the mirror's external surface. A high degree of matching over a broad frequency range is achieved by an optimal choice of the irregularity law. The Oliner model, traditionally used for the analysis of microwave integrated circuits, is applied to the calculation of this coupling element. Using the Oliner model, we first pass from the real coupling element to its prototype, which, instead of the narrow "electric" walls of the waveguide, has "magnetic" ones. The condition for the physical equivalence of the prototype to the given element is the equality of the waveguide impedances of the operational modes at corresponding cross-sections of the original waveguide and its prototype. For this, the size of the wide wall of the waveguide in the prototype should be b_eff = b·v, where v = [1 − (λ/2h)^2]^{1/2}. The operational TEM_0nq oscillation in the resonator will not change its field structure if the "magnetic" walls are continued into the resonator. This Oliner model enables us to easily calculate the power radiated into the waveguide.
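As a quick numerical illustration of the prototype relation b_eff = b·v, the sketch below evaluates v = [1 − (λ/2h)^2]^{1/2} for an assumed waveguide geometry at the lower edge of the band; the dimensions (and the interpretation of h) are illustrative assumptions, not the paper's designs.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def b_eff(b, h, f_hz):
    """Effective wide-wall size in the Oliner prototype, b_eff = b*v."""
    lam = C / f_hz
    v = math.sqrt(1.0 - (lam / (2.0 * h)) ** 2)
    return b * v

b = 7.112e-3   # wide wall, m (WR-28-like, assumed)
h = 7.112e-3   # height parameter in the v-formula, m (assumed)
print(f"b_eff = {b_eff(b, h, 37.5e9) * 1e3:.3f} mm at 37.5 GHz")
```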
For the solution of the 2D-prototype boundary value problem, Galerkin's incomplete method with a semi-inversion of the singular operators in the boundary conditions [6] is used. One more type of coupling element investigated in this work is a semitransparent grating of strips applied to a dielectric substrate. This type of coupling element is manufactured using integrated-circuit technology.

Additional Measures for Mode Selection

As is well known, the main principle of mode selection in an open resonator is based on the strongly differing radiation Q-factors of the different modes [4]. An additional measure is the use of dissipative mode selection. The example considered in this work is a semitransparent grating of strips located above a mirror of the resonator (the effect of resonance absorption in such gratings is used) [8]. In the corrugated resonator, dissipative mode selection is realized automatically: the effect of anomalously small absorption is very sensitive to the angle of incidence, the frequency, and the polarization of the field [6].

Experimental Results

The typical characteristics of the resonators under study are given in Table 1. All resonators are provided with a varactor section for electronic tuning of the frequency. The measured values of the VSWR, the passband, and the bandwidth of the electronic frequency tuning are presented in Figure 2 as functions of frequency f. It can be seen that the varactor section allowed us to obtain an automatic electronic frequency tuning of 1 GHz/μs without significantly affecting the loaded Q-factor of the open resonator (the decrease in the loaded Q-factor was so small that its value was comparable with the measurement error).

A uniform electronic tuning of an open resonator is obtained. The use of the varactor section located on the ridged waveguide eliminates the dead zones, that is, the frequency regions in which there is no electronic frequency tuning.

Conclusions

The results obtained in this paper have been used for designing a multipurpose set of six high Q-factor quasioptical resonators with automatic frequency tuning (Q > 6 × 10^4, VSWR < 1.6) which can operate in the frequency band 37.5-400 GHz. A series of theoretical and experimental studies has been made of the principles of improving the Q-factor, mode selection, and wide-band matching of all types of MW resonators with single-mode waveguides.

The proposed resonators, apart from their use as reference measuring resonators, can be employed to stabilize oscillator frequency, to measure the dielectric and magnetic parameters of media and the surface impedance of metals and superconductors, and to measure small mechanical oscillations. These resonators can likewise be used as wavemeters and bandpass filters.

We have also drawn on the examined concepts of matching multimode resonators with standard waveguides in developing MW devices that operate on the principle of microwave heating technology.

Figure 2: The measured values of VSWR, 2Δf, ΔF as a function of resonant frequency f.

Table 1: Typical technical characteristics of resonators.
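For reference, the matching level VSWR < 1.6 quoted above corresponds to the following standard transmission-line figures (this is generic microwave arithmetic, not specific to these resonators):

```python
import math

def match_figures(vswr):
    """Reflection coefficient, return loss and mismatch loss from VSWR."""
    gamma = (vswr - 1.0) / (vswr + 1.0)           # |reflection coefficient|
    rl_db = -20.0 * math.log10(gamma)             # return loss, dB
    ml_db = -10.0 * math.log10(1.0 - gamma ** 2)  # mismatch loss, dB
    return gamma, rl_db, ml_db

g, rl, ml = match_figures(1.6)
print(f"|Gamma| = {g:.3f}, return loss = {rl:.1f} dB, mismatch loss = {ml:.2f} dB")
```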
Critical and Subcritical Anisotropic Trudinger–Moser Inequalities on the Entire Euclidean Spaces

We investigate the subcritical anisotropic Trudinger–Moser inequality in the entire space R^N, obtain the asymptotic behavior of the supremum for the subcritical anisotropic Trudinger–Moser inequalities on the entire Euclidean spaces, and provide a precise relationship between the supremums for the critical and subcritical anisotropic Trudinger–Moser inequalities. Furthermore, we prove critical anisotropic Trudinger–Moser inequalities under a nonhomogeneous norm restriction and obtain a similar relationship with the supremums of the subcritical anisotropic Trudinger–Moser inequalities.

In 2000, Adachi and Tanaka [6] obtained a sharp Trudinger–Moser inequality on R^N, where Φ_N(t) := e^t − Σ_{i=0}^{N−2} t^i/i!. Note that inequality (3) has the subcritical form, that is, α < α_N. Later, in [7,8], Li and Ruf showed that the best exponent α_N becomes admissible if the Dirichlet norm ∫_{R^N} |∇u|^N dx is replaced by the Sobolev norm ∫_{R^N} (|u|^N + |∇u|^N) dx. The proofs of both the critical and the subcritical Trudinger–Moser inequalities (3) and (4) rely on the Pólya–Szegö inequality and the symmetrization argument. Lam and Lu [9,10] developed a symmetrization-free method to establish the critical Trudinger–Moser inequality (see also Li, Lu, and Zhu [11]) in settings such as the Heisenberg group, where the Pólya–Szegö inequality fails. Such an argument also provides an alternative proof of both the critical and subcritical Trudinger–Moser inequalities (3) and (4) in the Euclidean space. In fact, the equivalence and relationship between the supremums of the critical and subcritical Trudinger–Moser inequalities were established by Lam, Lu, and Zhang [12]. In 2012, Wang and Xia [21] investigated a sharp Trudinger–Moser inequality involving the anisotropic Dirichlet norm (∫_Ω F^N(∇u) dx)^{1/N} on W^{1,N}_0(Ω) for N ≥ 2. Here, k_N is the volume of the unit Wulff ball W_F := {x ∈ R^N : F^0(x) ≤ 1}, F is convex and homogeneous of degree 1, and its polar F^0 represents a Finsler metric on R^N. Similarly to B. Ruf's work [8], when the anisotropic Dirichlet norm (∫_Ω F^N(∇u) dx)^{1/N} is replaced by the full anisotropic Sobolev norm (∫_Ω (F^N(∇u) + |u|^N) dx)^{1/N}, Zhou [22] extended the results of Wang and Xia [21] to the entire space, provided λ ≤ λ_N; the integral above tends to infinity for any λ > λ_N. In this paper, we establish the Adachi–Tanaka-type subcritical Trudinger–Moser inequality and the equivalence relationship between the supremums of the critical and subcritical Trudinger–Moser inequalities involving the anisotropic norm restriction, similarly to [12]. Our main results can be stated as follows. If λ is close enough to λ_N, then there exist constants c(N, β) and C(N, β) bounding AAT(λ, β) from below and above, where λ_N is sharp, that is, AAT(λ_N, β) = ∞. Then, AMT_{a,b}(β) < ∞ if and only if b ≤ N, and λ_N is sharp; a two-sided estimate of the same type also holds in this case.

Finsler Metric and Some Useful Lemmas

Before giving the proofs, for the convenience of the reader, we provide some notation and basic facts about the Finsler metric. Let F: R^N → R be a nonnegative convex function of class C^2(R^N \ {0}) which is even and positively homogeneous of degree 1, so that F(tξ) = |t| F(ξ) for any ξ ∈ R^N and t ∈ R. Because of the homogeneity of F, there exist two constants 0 < a ≤ b such that a|ξ| ≤ F(ξ) ≤ b|ξ|. The set W_r := {x ∈ R^N : F^0(x) ≤ r} is a Wulff ball of radius r with center at 0.
Next, according to the assumptions on F, we can list some properties of the function F. In the following, we give two lemmas that will be used later.

Lemma 1. By the homogeneity of F(x), the stated identity holds, and the proof is finished. □

By Lemma 1, when we consider the sharp Trudinger–Moser inequality, we can always assume ‖u‖_{L^N} = 1.

Lemma 2. The sharp subcritical Trudinger–Moser inequality is a consequence of the sharp critical Trudinger–Moser inequality. More precisely, if AMT_{a,b}(β) is bounded, then AAT(λ, β) is also bounded. Indeed, since ‖F(∇v)‖^a_{L^N} + ‖v‖^b_{L^N} ≤ 1, the claim follows. □

Equivalence between the Critical and Subcritical Anisotropic Trudinger–Moser Inequalities under the Homogeneous Norm Restriction

In this section, we give the asymptotic behavior of the supremum for the subcritical anisotropic Trudinger–Moser inequalities, show the equivalence between the critical and subcritical anisotropic Trudinger–Moser inequalities under the homogeneous norm restriction, and finish the proof of Theorem 3. We first estimate the volume of Ω_u and rewrite (38) accordingly. In the region Ω_u, we take v ∈ W^{1,N}_0(Ω_u), for which the desired bound follows easily. Set ε = λ_N/λ − 1; for any a, b, ε > 0 and p > 1, we use the elementary inequality (a + b)^p ≤ (1 + ε)^p a^p + C(ε, p) b^p. Using the singular Trudinger–Moser inequality under the anisotropic norm in a bounded domain [23], the upper bound follows. Next, we show that AAT(λ_N, β) = ∞. We construct a sequence u_k(x); by calculation, there exists a sufficiently large constant M_1 such that, for k ≥ M_1, the corresponding integrals blow up. We then establish the lower bounds of AAT(λ, β). When λ/λ_N ≥ 1/2, there exists a very large constant M_2, independent of λ, such that the estimate holds for all k ≥ M_2. When λ is close enough to λ_N, we can always find a suitable k satisfying 1 ≤ (1 − λ/λ_N)k ≤ 2, and the claimed asymptotics follow.

Critical Anisotropic Trudinger–Moser Inequalities under the Nonhomogeneous Norm Restriction and the Relationship with the Subcritical Anisotropic Trudinger–Moser Inequalities

In this section, we prove critical anisotropic Trudinger–Moser inequalities under the nonhomogeneous norm restriction and give a precise relationship between the supremums for the critical and subcritical anisotropic Trudinger–Moser inequalities under the nonhomogeneous norm restriction. By a simple calculation and Theorem 3, the upper bound is obtained. When θ → 1, we can use L'Hospital's rule to estimate the penultimate term, and by Theorem 3 the bound follows. Next, we prove that the constant λ_N(1 − β/N) is optimal. We use the same sequence u_k(x) as in (47); recall that u_k(x) satisfies the normalization with λ_k ∈ (0, 1), and since λ_k → 1, there exists a large enough k such that the required inequality holds. Now, we treat the case AMT_{a,b}(β) < ∞. By Lemma 2, taking u_k(x) to be a maximizing sequence of AMT_{a,b}(β), we have AAT(λ, β) < ∞. Since AMT_{a,b}(β) < ∞, from Theorem 3 we obtain that, when λ is close enough to λ_N from below, AAT(λ, β) ∼ (1 − (λ/λ_N)^{N−1})^{(β−N)/N}, which is impossible because b > N. Hence, we complete the proof.
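Since the displayed formulas for inequalities (3) and (4) were lost in extraction, it may help to recall the form in which the Adachi–Tanaka and Li–Ruf inequalities are usually stated in the literature; the normalization below may differ slightly from the authors' conventions.

```latex
% Subcritical (Adachi--Tanaka) inequality (3): for every 0 < \alpha < \alpha_N,
\sup_{\|\nabla u\|_{L^N(\mathbb{R}^N)} \le 1}
  \frac{1}{\|u\|_{L^N}^{N}}
  \int_{\mathbb{R}^N} \Phi_N\!\bigl(\alpha |u|^{N/(N-1)}\bigr)\,dx < \infty,
\qquad
\alpha_N := N\,\omega_{N-1}^{1/(N-1)},
% where \omega_{N-1} is the surface area of the unit sphere S^{N-1};
% the supremum is infinite for \alpha \ge \alpha_N.

% Critical (Li--Ruf) inequality (4): with the full Sobolev norm the
% critical exponent \alpha_N becomes admissible:
\sup_{\int_{\mathbb{R}^N}(|u|^N + |\nabla u|^N)\,dx \le 1}
  \int_{\mathbb{R}^N} \Phi_N\!\bigl(\alpha |u|^{N/(N-1)}\bigr)\,dx < \infty
\quad \Longleftrightarrow \quad \alpha \le \alpha_N .
```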
Data Availability

Some or all data, models, or codes that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Evaluation of Factors Affecting Prescribing Behaviors in the Iran Pharmaceutical Market by Econometric Methods

The prescribing behavior of physicians is affected by many factors. The present study is aimed at discovering the simultaneous effects of the evaluated factors (price, promotion and demographic characteristics of physicians) and at quantifying these effects. In order to estimate these effects, Fluvoxamine (an antidepressant drug) was selected and the model was estimated by the panel data method in econometrics. We found that insurance and advertisement, respectively, are the most effective factors in increasing the frequency of prescribing, whilst a negative correlation was observed between price and the frequency of prescribing a drug. The brand type is also more sensitive to the negative effect of price than the generic. Furthermore, demand for a prescription drug is related to physician demographics (age and sex). According to the results of this study, pharmaceutical companies should pay more attention to the demographic characteristics of physicians (age and sex) and to their advertisement and pricing strategies.

Introduction

Providing medicines in an accessible and affordable manner is the aim of all health systems. The establishment and reinforcement of local pharmaceutical manufacturing is one of the strategies to achieve this aim, but the viability and competitiveness of the local industry is necessary. To this end, the local industry needs to know and understand the behavior of the target market and the factors affecting the selection of a pharmaceutical product by customers. On the other hand, understanding the effects of the different factors can be useful for optimizing promotion activities, which in turn can lead to cost reduction.

Iranian pharmaceutical context

The year 1981 witnessed the beginning of a round of actions aimed at adopting and implementing policies to modernize the Iranian pharmaceutical sector, which influenced this industry all the way up to 1994 (1). Analysis of the Iranian pharmaceutical market over this 13-year period shows an annual sales growth equal to 28.38%. A study of domestic production and import revealed 9.3% and 42.3% annual growth, respectively (2). The mentioned actions, entitled the Generic Scheme, sometimes also called the Generic Concept, formed the foundation of the new pharmaceutical system in the country. In recent years, the national pharmaceutical system has moved toward brand-generic and brand systems. This policy increased competition in the pharmaceutical industry (3).

Both the generic and the brand forms of the selected drug are available in the Iran drug market; therefore, we could analyze the effects of advertisement and insurance coverage on the acceptability of a drug by physicians. In order to evaluate the effect of advertisement on prescription, we compared the generic and brand (Luvox®) types in the Iranian drug market. We have assumed that the importing company uses advertising for the brand type, whereas the domestic producers of the generic type do not. The domestic generic type is covered by insurance but the brand form is not, so we could examine the effects of insurance coverage and advertising.

Method

In order to find out the effect of each factor on the rate of prescription of each medicine, a multivariate model was proposed and the impact of each component was studied within that model.

Statistical method

According to the model offered in this study, the data on the drug prescribed by physicians were gathered for the period between 2007 and 2009, and the combined data method in econometrics (panel data) was used for estimation of the model.
Panel (data) analysis is used in epidemiology, econometrics and social science, and deals with two-dimensional (cross-sectional/time-series) data. In other words, the data for each case are usually collected over time, and a regression is then run over these two dimensions. Multidimensional analysis is an econometric method in which data are collected over more than two dimensions (typically time, individuals, and some third dimension). A common panel data regression model looks like y_it = a + b·x_it + ε_it, where y is the dependent variable, x is the independent variable, a and b are coefficients, and i and t are indices for individuals and time. The error term ε_it is very important in this analysis; assumptions about the error term determine whether we speak of fixed effects or random effects. In a fixed effects model, ε_it is assumed to vary non-stochastically over i or t, making the fixed effects model analogous to a dummy variable model in one dimension.

The population of Iran is now over 74 million. The country's gross domestic product (GDP) per capita in 2011 was reported to be over US$12,000, and the country spends about 6% of its GDP on health (4).

Literature review

Physicians and patients have a principal-agent relationship that arises under conditions of imperfect information (5). As agents, physicians play the main role in deciding which medication or method of treatment best fits the patient's health condition (6, 7). Many factors affect medication prescribing, including pharmaceutical industry influences, academic detailing interventions, efforts to educate health care providers, personal experience with a medication or class of medications, and patient requests (8). The competition between pharmaceutical companies in selling their products in domestic and international markets has led to huge investment in developing marketing strategies with a direct focus on physicians and, in some territories, patients (9, 10). Pharmaceutical companies seek the best strategies in their targeted markets regarding the physicians' and patients' attitudes and market characteristics (11). Several studies have been conducted to assess factors affecting the sale of prescription drugs, such as the age and sex of the prescriber, price, and advertising (12, 13). Considering previous studies conducted on this subject, in this study we aimed to assess the effects of five main factors, namely advertisement, insurance coverage, price, and the gender and age of the prescriber, on drug sales in the Iran health market (12, 14).

Data and variables

In an effort to investigate the effects of price, advertisement and insurance coverage as well as the characteristics of the physician (age and sex), the group of antidepressant drugs was selected, and from this group a well-sold drug whose data were accessible in Iran was chosen: Fluvoxamine. We selected Fluvoxamine because it was new, which facilitated the evaluation of the effect of age and sex on the physicians' acceptance of the new drug. Both the domestic generic and the imported brand product (Luvox®) are available in the Iranian market.

In a random effects model, ε_it is assumed to vary stochastically over i or t, requiring special treatment of the error variance matrix (16). Panel data sets for economic research possess several major advantages over conventional cross-sectional or time-series data sets (17, 18). First, panel data usually give the researcher a large number of data points (n_i × m_t), increasing the degrees of freedom and reducing the collinearity among explanatory variables.
Second, and more importantly, longitudinal data allow a researcher to analyze a number of important economic questions that cannot be addressed using cross-sectional or time-series data sets. Third, panel data provide a means of reducing the magnitude of econometric problems that often arise in empirical studies, namely the often-heard assertion that the real reason one finds (or does not find) certain effects is the presence of omitted (mis-measured or unobserved) variables that are correlated with the explanatory variables (19).

Since both brand and generic types of the drug are available, two models, a generic one and a brand one, were estimated. Models were estimated using the Eviews 6.0 software package. Our sample includes 200 physicians who prescribed Fluvoxamine between 2007 and 2009.

In this study, the antidepressant drug category was chosen because: First, these drugs are often used for chronic diseases, so we can measure the impact of various factors over a long-term time frame. Second, there is no strong clinical evidence that the various antidepressants have different rates of efficacy; thus, these drugs can be replaced with other drugs in the antidepressant category, so the impact of other factors such as price and advertisement can be better measured (20, 21). The antidepressant drug Fluvoxamine was chosen because, at the time the study was conducted, both generic and brand types were available. Also, this drug was new in the pharmaceutical market of Iran; therefore, we could measure the response of physicians to new medicines. We calculated how often each doctor prescribed this medication. Data were taken from the Social Security Organization of Iran (SSOI). The amount of money spent annually on advertising the drug was obtained from the importing company. The proposed model follows hereunder; the factors affecting the sale of prescribed drugs in Iran were analyzed using this model. The variables of the model are explained in Table 1.

Y_ijt = F(P_jt, M_t, AD_jt, D_1, D_2)

Results

In order to investigate the factors affecting the prescription of the generic and the brand type of the drug (the brand type is not covered by insurance), two models were estimated as follows. According to Table 2, the coefficients of all independent variables are statistically significant. There is a positive relation between age and the frequency of prescription of the generic product, and also a positive relation between the male gender of the physician and the frequency of prescription of this product. The estimated model for brand Fluvoxamine, which enjoys advertisement, is summarized in Table 3. There is a negative relation between the age of the physician and the frequency of prescription of the drug, whereby older physicians prescribe the brand type less than younger ones. As the results show, advertisements have a positive impact on prescription. Increasing price has a negative effect for both the generic and the brand type.

Discussion

The results of the estimation imply that foreign brand (Luvox) advertisements had a positive and significant effect on the sale of this drug. From a quantitative perspective, it can be concluded that a marginal increase equal to one Rial (the Iranian currency) in advertising the drug is associated with an increase equal to 0.000123 in the number of prescriptions for this drug. In other words, spending one million Rials on advertising to doctors leads to one hundred and twenty-three additional prescriptions.
As the results show, the gender and age of physicians had a significant effect on the frequency of prescription of the generic product. As can be seen from Table 2, male gender has a positive effect, which may be interpreted as follows: for Fluvoxamine, a new drug in the Iranian market, male doctors are more inclined to prescribe new drugs. For the brand type, age had a negative and significant effect, meaning that younger doctors are more willing to prescribe the brand type. Regarding the responsiveness of prescription to prices, there is a statistically negative and significant relation between the prices of the generic and brand types of the drug and the frequency of prescription. The results show a reduction of at least 69 prescriptions for a one-Rial increase in price. It should be noted that Fluvoxamine, as an antidepressant drug, must be used for a long time; therefore the price elasticity increases and physicians are more willing to prescribe a less expensive drug. In the Iranian health market, since the brand type is much more expensive than the generic type, doctors are more sensitive to price changes in the brand type. Insurance has a positive effect, and as the results show (Table 3), its effect is very high compared with the other factors. The results of this study are consistent with previous studies which have estimated a positive effect of different kinds of advertisement strategies on prescription frequency by physicians (22-25). There are also studies on the effect of price and price-affecting factors, including health insurance coverage, with results similar to those of this study (26). The effect of different approaches to pricing regulation has also been the subject of many studies (27). Empirical evidence shows that insurance coverage is associated with rising health expenditure (28, 29). Because doctors' prescriptions are a major source of health expenditures, exploring whether and why doctors respond to patients' insurance is essential for understanding why expanding insurance coverage leads to rising expenditures. Some studies show that a doctor is more likely to prescribe brand-type drugs for insured patients. There are also many articles about the effect of the gender of consumers and providers on the sales of a product. Most of these works have been carried out on women as customers of particular series of products and show that men are more independent, more certain, competitive, and enthusiastic about change and risk (32). Similarly, for physicians, a study conducted on 358 women and men showed that male physicians pay more attention to new technologies than female physicians and therefore prescribe newer drugs (33). Stevenson and Tamblyn conclude in a qualitative report that female physicians generally prescribe fewer drugs, carry out fewer diagnostic activities and tend to be more favorable toward the prevention of drug consumption (34). Regarding age, studies have indicated that older physicians are less willing to use newer drugs (35). It has also been shown that the year of graduation from university is an effective factor in prescription (36).

Limitations

In this study we faced some limitations. We had problems with data collection because the cost of companies' promotion for each product was not clear. Also, due to the lack of prescription data from all insurer companies, we assumed that the SSOI insurer data can be extended to the whole insurance population.
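As an illustration of the fixed-effects panel specification y_it = a + b·x_it + ε_it described in the Method section, the following sketch estimates physician fixed effects with dummy variables on simulated data. The variable names (price, ads) merely mirror the model's regressors; the data and coefficients are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate a balanced panel: 200 physicians observed over 3 years.
rng = np.random.default_rng(0)
n_docs, n_t = 200, 3
doc = np.repeat(np.arange(n_docs), n_t)
alpha = rng.normal(0.0, 1.0, n_docs)           # physician fixed effects
price = rng.normal(10.0, 2.0, n_docs * n_t)
ads = rng.normal(5.0, 1.0, n_docs * n_t)
y = alpha[doc] - 0.7 * price + 0.4 * ads + rng.normal(0, 1, n_docs * n_t)

df = pd.DataFrame({"doc": doc, "price": price, "ads": ads, "y": y})

# Fixed effects via physician dummies (the "dummy variable model in one
# dimension" mentioned above), estimated by OLS.
X = pd.get_dummies(df["doc"], prefix="doc", drop_first=True, dtype=float)
X[["price", "ads"]] = df[["price", "ads"]]
X = sm.add_constant(X)

fit = sm.OLS(df["y"], X).fit()
print(fit.params[["price", "ads"]])   # should recover roughly -0.7 and 0.4
```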
Topologically massive gauge theories from first-order theories in arbitrary dimensions

We prove that a large class of topologically massive theories of the Cremmer-Scherk-Kalb-Ramond type in any d dimensions corresponds to gauge non-invariant first-order theories that can be interpreted as self-dual models.

One defines the duality operation, where μ is a mass parameter introduced to render the ⋆-operation dimensionless; this is basically a functional curl (rotational operator). We speak of self- (anti-self-) duality when the relations ⋆f = ±f are (respectively) satisfied. The so-called Self-Dual Model (Townsend, Pilch and van Nieuwenhuizen [7]) is given by an action (2) whose equation of motion is the self-duality relation (3). This model is claimed to be chiral, and the chiralities χ = ±1 are defined precisely by this self-duality. On the other hand, the gauge-invariant combination of a Chern-Simons term with a Maxwell action is the topologically massive theory, which is known to be equivalent [13] to the self-dual model (2); here F_μν is the usual Maxwell field strength. This equivalence has been verified with the Parent Action Approach [14]: the general parent action proving this equivalence was proposed by Deser and Jackiw in [13] and involves the Chern-Simons action [1]. For general dimensions, it is possible to define self- (and anti-self-) duality for pairs (doublets) of form-fields with different ranks [6]; thus a parallel of this structure with the one in d dimensions will be observed. The problem of defining the Hodge duality in all dimensions is well known; for instance, in Lorentzian four-dimensional spacetime, the main obstruction to self-duality comes from the double-dualization relation for a rank-two tensor, **F = (−1)^s F, where s is the signature of spacetime. For the case of the Lorentzian metric, where s is an odd number, the self-duality concept seems inconsistent with the double dualization operation, due to the minus sign in (9). This problem remains for dimensionality d = 4m (m ∈ Z^+) [15]; in contrast, it is absent for d = 4m − 2. Thus, self-duality is claimed to be well defined (only) in such dimensionalities.

First, let us recall that (9) has led to the prejudice that the (Abelian) Maxwell theory would not possess manifest self-duality solutions. The resolution of this obstruction came with the recognition of an internal two-dimensional structure hidden in the space of fields. Transformations in this internal duality space extend the self-duality concept to this case; they are currently known under the names of Schwarz and Sen [16], but this deep unifying concept has also been appreciated by others [17]. The actions worked out correspond to self-dual and anti-self-dual representations of a given theory and make use of the internal-space concept. The duality operation is now defined to include the internal (two-dimensional) indices (i, j), where the 2 × 2 matrix e depends on the signature and dimension of the spacetime: it is either σ^1, the first of the Pauli matrices, or the totally antisymmetric 2 × 2 matrix ε^{αβ} with ε^{12} = 1. The double dualization operation then generalizes (9) so as to be consistent with self-duality. It has been shown that this prescription works in the construction of self-dual Maxwell actions [18].
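The displayed Deser–Jackiw parent action referred to above was lost in extraction; in the form usually quoted (up to signs and overall normalization, which depend on conventions) it reads:

```latex
% Deser--Jackiw master (parent) action in 2+1 dimensions, schematically:
S_P[f, A] \;=\; \int d^3x\;\Bigl[
    \tfrac{\mu^2}{2}\, f_\mu f^\mu
  \;-\; \mu\,\epsilon^{\mu\nu\rho} f_\mu \partial_\nu A_\rho
  \;+\; \tfrac{\mu}{2}\,\epsilon^{\mu\nu\rho} A_\mu \partial_\nu A_\rho
\Bigr].
% Eliminating f by its equation of motion yields the Maxwell--Chern--Simons
% (topologically massive) theory; eliminating A instead, one finds
% A = f + (pure gauge), and substitution yields the self-dual model.
```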
This structure has always been considered in the literature only for tensorial objects where the field has the same tensorial rank as its corresponding dual. However, we may generalize these ideas further by introducing more general doublets [6]. Consider a d-dimensional spacetime with signature s, and a generic element Φ ≡ (a, b) in the space of pairs where a and b are a p-form and a (d − p)-form, respectively. One may then define a Hodge-type operation for these objects, where S_q is a number defined by the double-dualization operation for a generic q-form A: *(*A) = S_q A. This number depends on the signature (s) and dimension of the spacetime. Notice that * applied to doublets is defined so that the components are interchanged, with a supplementary change of sign for the second component.

For our purpose in this paper, we are more interested in proposing and working with another type of dual operation, of a nature similar to the duality described above for the 2+1-dimensional case. In a d-dimensional spacetime with signature s, we consider the tensor doublet F ≡ (f, g), where f is a p(< d)-form (a totally antisymmetric tensor of type (0; p)) and g is a (d − p − 1)-form. There is also a well-defined notion of self- (and anti-self-) duality for the objects in this space. Consider the action with topological coupling, equation (16). For a more concise notation, in terms of forms, consider the definitions d(f, g) ≡ (df, dg) and, once more, the operation * applied to objects in Δ_p supposes component interchange and an appropriate change of sign for the second component. In so doing, the equations of motion derived from the action (16) read as the self-duality relation (18), where m is a mass parameter introduced for dimensional reasons. It may trivially be verified that these equations require that F satisfies a Proca equation with mass m. Notice that equation (18) looks like (3); in that sense, we state that S_DSD describes doublet self-duality. The other remarkable similarity of this model with the SD model (in 2+1 dimensions) is that it is dual to a topologically massive theory (of CSKR type, with a BF coupling between two gauge forms) in the same way as the SD-MCS duality in three dimensions. This constitutes our main point, which confirms and generalizes some recent results [8]. Below, we are going to prove this correspondence. Note that this structure is insensitive to the spacetime dimension and the tensorial ranks of the doublet components. Thus, a Deser-Jackiw-inspired parent action may be written in d spacetime dimensions. Consider the doublet of gauge fields A ≡ (a_{μ1···μp}, b_{μ1···μ_{d−p−1}}) in addition to F = (f_{μ1···μp}, g_{μ1···μ_{d−p−1}}); the proposed parent action is (19), where (20) may be recognized as a BF action. Varying S_P with respect to F and plugging the result back into (19), we recover the topologically massive gauge action (CSKR). We observe that this is invariant under the gauge transformations A → A + dD, where dD is a pure gauge doublet, i.e., a pair of exact differentials of (p − 1)- and (d − p − 2)-forms. Now, varying S_P with respect to A, we find that the differences a − f and b − g may locally be written as exact forms; expressing the solution of these equations in this way and putting it back into the action (19), we recover the SD theory (16) up to topological terms. This completes the proof of our main statement. As an example, one can particularize this result to the special dimensionality d = 3 + 1.
In this case, only two tensor doublets may be chosen: G = (A_μ, B_{νρ}) and H = (φ, F_{νρα}). The first one describes a Cremmer-Scherk-Kalb-Ramond massive spin-one particle and, by virtue of the general result proven above, its dynamics may alternatively be described by either the Cremmer-Scherk-Kalb-Ramond theory or a first-order SD model, which is gauge non-invariant. This confirms the result recently presented in ref. [8]. The second possible doublet in four dimensions describes a scalar (spin-zero) massive particle whose dynamics may be given by a topologically massive action or, alternatively, by a first-order (SD) model.

Doublet Hodge duality has been defined in a sense similar to the duality in 3d [6]. This suggests a list of formal correspondences between theories in 3d which involve self-duality and similar models in other dimensions. This constitutes by itself a very important application of this formalism, since one can, in principle, translate the constructions of 3d to arbitrary dimensions. An interesting possibility that we open up is the study of bosonization in arbitrary dimensions, mainly in higher dimensions. This is not a trivial matter [19,20,21], but with the help of the technique suggested here, d ≥ 4 bosonization comes out in connection with a topologically massive model that mixes different gauge forms. Results on this issue shall soon be reported elsewhere [22].

Acknowledgements: The author is indebted to J. A. Helayel-Neto for invaluable discussions and pertinent corrections on the manuscript. Thanks are due to the GFT-UCP for the kind hospitality. CNPq is also acknowledged for the invaluable financial help.
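For concreteness, a commonly quoted form of the four-dimensional CSKR action for the doublet G = (A_μ, B_{νρ}) discussed above is sketched below; signs and normalizations vary between references.

```latex
% Cremmer--Scherk--Kalb--Ramond (topologically massive BF) action in d = 4,
% up to sign and normalization conventions:
S_{\mathrm{CSKR}} \;=\; \int d^4x\;\Bigl[
   -\tfrac14\, F_{\mu\nu}F^{\mu\nu}
   \;+\;\tfrac{1}{12}\, H_{\mu\nu\rho}H^{\mu\nu\rho}
   \;+\;\tfrac{m}{4}\,\epsilon^{\mu\nu\rho\sigma} B_{\mu\nu} F_{\rho\sigma}
\Bigr],
\qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\quad
H_{\mu\nu\rho} = \partial_\mu B_{\nu\rho} + \partial_\nu B_{\rho\mu}
               + \partial_\rho B_{\mu\nu}.
```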
LIMIT CYCLES OF PIECEWISE SMOOTH DIFFERENTIAL EQUATIONS ON THE TWO DIMENSIONAL TORUS

In this paper we study the limit cycles of some classes of piecewise smooth vector fields defined on the two dimensional torus. The piecewise smooth vector fields that we consider are composed of linear and Riccati vector fields and of perturbations of these two classes. For these kinds of piecewise smooth vector fields we study their global dynamics, upper bounds for the maximum number of limit cycles that they can exhibit, and the existence of non-trivial recurrences and of a continuum of periodic orbits. We also present a family of piecewise smooth vector fields that possesses a finite number of fold points and such that, for any positive integer k, there are values of the parameters of this family for which the piecewise smooth vector field exhibits k limit cycles.

Introduction

The theory of piecewise smooth vector fields (PSVF) has been studied intensively in recent years, mainly due to its strong relation with branches of the applied sciences. These PSVF lie on the boundary between mathematics, physics and engineering; for more details see for instance the two recent surveys [7] and [13], and the two books [4] and [12] on this subject, where models of PSVF from control theory are also considered. Roughly speaking, PSVF are formed by several smooth differential systems defined in different regions of the global domain of definition of the PSVF. The common frontier between the regions that separate the different smooth vector fields is called the switching manifold (or discontinuity manifold). Let T be the two dimensional torus. We decompose T as the union of T^+ and T^−, where T^+ denotes the closed upper half of the torus T (homeomorphic to a closed annulus), and T^− the closed bottom half of this torus (also homeomorphic to a closed annulus). We denote by Σ = T^+ ∩ T^− a smooth curve, formed by two circles, which separates T into two connected components, each one homeomorphic to an open annulus. Let X^+ and X^− be smooth vector fields on T^+ and T^−, respectively. A precise definition of T, T^+, T^− and Σ is given at the beginning of Section 2. In this paper we consider piecewise smooth differential equations of the form (1). The dynamics over Σ is defined following the Filippov convention (see [5]). For simplicity, a differential system (1) will be denoted by (X^+, X^−) and referred to as the vector field (1). The study of piecewise smooth dynamical systems defined on the torus is not new, but as far as we know it has been restricted to the case of discrete dynamical systems. There are a large number of results for piecewise maps [1], [2], [3] and [14], but there is a lack of theoretical results for piecewise dynamical systems in which the flow is the solution of a piecewise differential system. The study of the number and stability of limit cycles of some classes of vector fields is one of the most relevant problems of the qualitative theory of dynamical systems; this kind of study started with Poincaré in [11] and [10]. The main objective of this paper is to begin this study, first for the PSVF (1) when the smooth vector fields on T^+ and T^− are either linear, or Riccati, or belong to some families of perturbations of them coming from the applications (see (3)), and then for a family of PSVF presenting a finite number of fold points (see (6)). This paper is organized as follows.
This paper is organized as follows. In Section 2 we formalize some basic concepts on PSVF, such as the first return map in this scenario, and present some techniques that we shall use in the proofs of the main results. In Section 3 the main results are presented, in Section 4 we prove these results, and in Section 5 we end the paper by presenting some numerical examples of PSVF with the maximum number of limit cycles that they can exhibit.

Denote by X^r the space of C^r-vector fields on T endowed with the C^r-topology, with r = ∞ or r ≥ 1 large enough for our purposes. Call Ω^r the space of PSVF X : T → T defined piecewise by X^+ on T^+ and X^- on T^-; here ⟨·, ·⟩ denotes the Euclidean inner product. We may consider Ω^r = X^r × X^r endowed with the product topology, and we denote any element in Ω^r by X = (X^+, X^-), which we accept to be multivalued at points of Σ. In this context the basic results on PSVF were stated by Filippov in [5]. Related theories can be found in [4, 9, 13] and the references therein.

On Σ we generically distinguish three regions: the crossing region Σ^c = {p ∈ Σ : X_2^+(p) X_2^-(p) > 0}, the stable sliding region Σ^s = {p ∈ Σ : X_2^+(p) < 0, X_2^-(p) > 0}, and the unstable sliding region Σ^u = {p ∈ Σ : X_2^+(p) > 0, X_2^-(p) < 0}. Following Filippov's convention, if q ∈ Σ^s the sliding vector field associated to X ∈ Ω^r is the vector field X^s tangent to Σ^s which, after a time rescaling, is topologically equivalent to the normalized sliding vector field X^s(q) = (X_1^+ - X_1^-)(q). A point q ∈ Σ such that X^s(q) = 0 is called a pseudo-equilibrium of X, and a point p ∈ Σ such that X^+h(p) X^-h(p) = 0 is called a tangential singularity of X (i.e. the trajectory through p is tangent to Σ). We say that a point q ∈ Σ is a regular point if q ∈ Σ^c, or if q ∈ Σ^s and X^s(q) ≠ 0. The flow φ_X of X ∈ Ω^r is obtained by the concatenation of the flows of X^+, X^- and X^s, denoted by φ_{X^+}, φ_{X^-} and φ_{X^s}, respectively. Let X = (X^+, X^-) ∈ Ω^r; we say that p ∈ Σ is a fold-regular point of X if p is a fold point of X^+ and X^-(p) is transversal to Σ at p.

2.2. Extended Chebyshev systems. Let I be a proper real interval. An ordered set of functions F = {g_j : I → R for j = 0, 1, . . . , k} is an extended Chebyshev system on I if and only if every nontrivial linear combination of functions of F has at most k zeros, taking into account their multiplicities. F is an extended complete Chebyshev system on I if and only if, for any s with 0 ≤ s ≤ k, the ordered set (g_0, g_1, . . . , g_s) is an extended Chebyshev system. For details and proofs see [6]. A necessary and sufficient condition for F to be an extended complete Chebyshev system on I is that W(g_0, g_1, . . . , g_s)(t) ≠ 0 on I for 0 ≤ s ≤ k, where W_s(t) = W(g_0, g_1, . . . , g_s)(t) is the Wronskian of the functions (g_0, g_1, . . . , g_s) with respect to t. In [8] the authors proved that, for a family of n + 1 linearly independent analytic functions of which at least one has constant sign on its domain, there exists a linear combination of these functions having at least n simple zeros.
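As a quick illustration of the Wronskian criterion just stated, the sketch below computes W_0, W_1, W_2 for the hypothetical ordered set F = {1, t, t²} (this set is not one used in the paper). Since every Wronskian is a nonzero constant, F is an extended complete Chebyshev system on any interval, consistent with the elementary fact that a nontrivial quadratic has at most two zeros.

import sympy as sp

t = sp.symbols('t')
F = [sp.Integer(1), t, t**2]

def wronskian(funcs):
    # Wronskian matrix: row i holds the i-th derivatives of the functions
    n = len(funcs)
    M = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], t, i))
    return sp.simplify(M.det())

for s in range(1, len(F) + 1):
    print(f"W_{s-1} =", wronskian(F[:s]))
# W_0 = 1, W_1 = 1, W_2 = 2  -> all nonzero on the whole real line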
Precisely, they proved the following result.

Theorem B. Let F = {g_0, g_1, . . . , g_n} be an ordered set of real C^∞ functions on (a, b) for which there exists ξ ∈ (a, b) with W(g_0, g_1, . . . , g_{n-1})(ξ) = W_{n-1}(ξ) ≠ 0. Then the following statements hold. (a) If W_n(ξ) ≠ 0, then for each configuration of m ≤ n zeros, taking into account their multiplicity, there exists a linear combination of the functions of F having this configuration of zeros. (b) If W_n(ξ) = 0 and W_n'(ξ) ≠ 0, then for each configuration of m ≤ n + 1 zeros, taking into account their multiplicity, there exists a linear combination of the functions of F having this configuration of zeros.

3. Main results for PSVF on the two-dimensional torus. One of the main objectives of this paper is to study the linear and Riccati vector fields on T, which we denote by X_L^ω = (a^ω y + b^ω, c^ω y + d^ω) and X_R^ω (whose second component is e^ω + f^ω y + g^ω y²), respectively, where a^ω, b^ω, c^ω, d^ω, e^ω, f^ω, g^ω ∈ R and either ω = + or ω = -, according to whether the vector field is defined in T^+ or in T^-. The special case of X_L^ω with a^ω = c^ω = 0 will be denoted by X_C^ω (constant vector field). In what follows we shall perturb these PSVF by considering the functions defined on T in (3), where η_1, η_2 ∈ R are small. We denote by X_LL the PSVF composed of two linear vector fields, one in each half torus; by X_LR the PSVF composed of a linear vector field in T^- and a Riccati vector field in T^+; and by X_RR the PSVF composed of Riccati vector fields in each half torus. Considering the PSVFs X_LL, X_LR and X_RR, we perform the perturbations listed in (4), where 0 denotes the null vector field (0, 0) on T.

Remark 1. We only consider perturbations of X_LL and X_RR in T^+ because, due to the symmetry of the problem, we would obtain the same results by considering perturbations in T^-.

For each of the families presented in (4) we consider the subfamilies listed in (5). Prior to presenting the theorem, we define the real numbers ∆_LL, ∆_LR, ∆_RR and their perturbed analogues; in Theorem 2 we prove that these subfamilies correspond to the piecewise smooth vector fields whose first return map P : Σ_1 → Σ_1 is well defined.

Theorem 2. Consider the PSVFs defined in (5). (a) If ∆_LL ∈ Q then X_LL has a continuum of periodic orbits, and if ∆_LL ∉ Q then all trajectories of X_LL are dense. (b) If ∆_LR ∈ Q then X_LR has a continuum of periodic orbits, and if ∆_LR ∉ Q then all trajectories of X_LR are dense. (c) If ∆_RR ∈ Q then X_RR has a continuum of periodic orbits, and if ∆_RR ∉ Q then all trajectories of X_RR are dense. Considering the perturbations F_i^ω we have: (d) If ∆_LL2+ ∈ Q then X_LL2+ has a continuum of periodic orbits, and if ∆_LL2+ ∉ Q then all trajectories of X_LL2+ are dense. (e) If ε > 0 then the maximum number of limit cycles of X_RR1+ is two, and this upper bound is reached. (f) If ∆_RR2+ ∈ Q then X_RR2+ has a continuum of periodic orbits, and if ∆_RR2+ ∉ Q then all trajectories of X_RR2+ are dense. (g) The maximum number of limit cycles of X_RR3+ is two, and this upper bound is reached. (h) The maximum number of limit cycles of X_LR1- is two, and this upper bound is reached. (i) If ∆_LR2- ∈ Q then X_LR2- has a continuum of periodic orbits, and if ∆_LR2- ∉ Q then all trajectories of X_LR2- are dense. (j) The maximum number of limit cycles of X_LR3- is two, and this upper bound is reached. (l) If ∆_LR2+ ∈ Q then X_LR2+ has a continuum of periodic orbits, and if ∆_LR2+ ∉ Q then all trajectories of X_LR2+ are dense.

In what follows we consider a PSVF X_Ck = (X_C, X_k) on T having a finite number of fold-regular points in Σ, where k is a positive integer and α, β ∈ R (see (6)). For this PSVF there exists a choice of the parameters of X_Ck such that X_Ck exhibits a finite number of limit cycles depending on k. More precisely, we have the following result.

Theorem 3. The PSVF X_Ck has at most k limit cycles, and this upper bound is reached for every k ≥ 1.

Remark 4. Note that a vector field in the family X_Ck can have no limit cycles. In such a case there are sliding regions on the switching manifold and X_Ck may present chaotic behavior; see for instance [15].
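The dichotomy in Theorem 2 can be visualised directly: whenever the first return map is the rigid rotation P(x_0) = x_0 + ∆_X (mod 1), orbits are periodic for rational ∆_X and dense for irrational ∆_X. The sketch below uses illustrative values of ∆_X, not values coming from a specific PSVF of the paper.

import numpy as np

def orbit(delta, x0=0.0, n=2000):
    # iterate the rigid rotation P(x) = x + delta (mod 1), vectorised
    return (x0 + delta * np.arange(n)) % 1.0

rational = orbit(3 / 8)             # rational rotation number: periodic orbit
irrational = orbit(np.sqrt(2) - 1)  # irrational rotation number: dense orbit

print(len(np.unique(rational)))              # -> 8 distinct points
print(np.max(np.diff(np.sort(irrational))))  # largest gap shrinks as n grows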
4. Proof of the main results. 4.1. Preliminary results. Before proving the main results of this paper we need some auxiliary results. The next lemma provides the expressions of the first return maps of the PSVFs X_LL, X_RR, X_LR and of their perturbations.

Lemma 5. Consider the PSVFs defined in (5) and the functions defined in (3). (a) If X_LL ∈ Ω_{L+} ∩ Ω_{L-}, then the first return map P_LL : Σ_1 → Σ_1 is well defined and is given by P_LL(x_0) = x_0 + ∆_LL. (b) If X_LR ∈ Ω_{L-} ∩ Ω_{R+}, then the first return map P_LR : Σ_1 → Σ_1 is well defined and is given by P_LR(x_0) = x_0 + ∆_LR. (c) If X_RR ∈ Ω_{R+} ∩ Ω_{R-}, then the first return map P_RR : Σ_1 → Σ_1 is well defined and is given by P_RR(x_0) = x_0 + ∆_RR. (d) If X_LL2+ ∈ Ω_{L+}, then the first return map P_LL2+ : Σ_1 → Σ_1 is well defined and is given by P_LL2+(x_0) = x_0 + ∆_LL2+. (e) If ε is a small positive number, then the first return map P_RR1+ : Σ_1 → Σ_1 is well defined. (f) If X_RR2+ ∈ Ω_{R+}, then the first return map P_RR2+ : Σ_1 → Σ_1 is well defined and is given by P_RR2+(x_0) = x_0 + ∆_RR2+. (g) If X_RR3+ ∈ Ω²_{R-} ∩ Ω¹_{R+} and ε is a small positive number, then the first return map P_RR3+ : Σ_1 → Σ_1 is well defined. (h) If ε is a small negative number, then the first return map P_LR1- : Σ_1 → Σ_1 is well defined. (i) If X_LR2- ∈ Ω_{R-}, then the first return map P_LR2- : Σ_1 → Σ_1 is well defined and is given by P_LR2-(x_0) = x_0 + ∆_LR2-. (j) If X_LR3- ∈ Ω¹_{L+} ∪ Ω²_{L+} ∩ Ω²_{R-} and ε is a small positive number, then the first return map P_LR3- : Σ_1 → Σ_1 is well defined. (l) If X_LR2+ ∈ Ω_{R-}, then the first return map P_LR2+ : Σ_1 → Σ_1 is well defined and is given by P_LR2+(x_0) = x_0 + ∆_LR2+.

Proof. The flows φ_X(t) of the vector fields X_Lω, X_Rω, X_L2ω, X_R1ω, X_R2ω and X_R3ω through the point p = (x_0, y_0) at t = 0 can be computed explicitly. In the following we detail the proof for X_LL. Considering the flow φ_{X_L-}(t) = (x_1(t), y_1(t)) starting at the point p = (x_0, 0) ∈ Σ_1, we compute the smallest positive time t_1(p) such that φ_{X_L-}(t_1(p)) ∈ Σ_2. In this way we obtain the lower half return map P_L^- : Σ_1 → Σ_2, given by P_L^-(x_0, 0) = φ_{X_L-}(t_1(p)) = (x_1, 1/2). Considering now the flow φ_{X_L+}(t) = (x_2(t), y_2(t)) with initial condition p_1 = (x_1, 1/2), the smallest positive time t_2(p_1) such that φ_{X_L+}(t_2(p_1)) ∈ Σ_1 provides the upper half return map P_L^+ : Σ_2 → Σ_1, given by P_L^+(x_1, 1/2) = φ_{X_L+}(t_2(p_1)) = (x_2, 0). A sufficient condition for t_1(p) and t_2(p_1) to be the smallest positive times is that the second components of the two fields do not vanish. Finally, the first return map P_LL : Σ_1 → Σ_1 is given by the composition P_LL = P_L^+ ∘ P_L^-. Working in a similar way as in the computation of the first return map P_LL, we obtain the domains of definition and the expressions of the other first return maps.
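The computation sketched in the proof of Lemma 5 can be reproduced symbolically. The sketch below assumes a generic linear field with second component c + d·y, c, d > 0 (a hypothetical parametrisation, not necessarily the paper's), solves for the crossing time of Σ_2 and evaluates the arrival x-coordinate of the half return map.

import sympy as sp

a, b, c, d = sp.symbols('a b c d', positive=True)
t, x0 = sp.symbols('t x0')

# y(t) solves y' = c + d*y with y(0) = 0 (start on Sigma_1 = {y = 0})
y = (c / d) * (sp.exp(d * t) - 1)
# smallest positive time at which the orbit reaches Sigma_2 = {y = 1/2}
t1 = sp.solve(sp.Eq(y, sp.Rational(1, 2)), t)[0]
# x(t) solves x' = a + b*y(t), so the arrival x-coordinate is
x_arrival = x0 + sp.integrate(a + b * y, (t, 0, t1))

print(sp.simplify(t1))         # crossing time: log(1 + d/(2*c))/d
print(sp.simplify(x_arrival))  # half return map P_L^-(x0) on Sigma_2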
4.2. Proof of Theorem 2. Now we are able to perform the proof of Theorem 2. Let P_X be the first return map of each PSVF X considered in this paper, and define the displacement map d_X(x_0) = P_X(x_0) - x_0. The limit cycles of X are given by the simple zeros of d_X. Lemma 5 provides the first return maps of the PSVF. Thus the proof of statements (a), (b), (c), (d), (f), (i) and (l) follows directly because, in each one of these cases, the first return map is given by P_X(x_0) = x_0 + ∆_X, where ∆_X is a real number given in terms of the coefficients of X. Therefore the iterates of P_X are P_X^k(x_0) = x_0 + k∆_X or, equivalently, the k-th iterate of the displacement map is d_X^k(x_0) = k∆_X, where k is an integer. Considering the equivalence relation (2) that defines the two-dimensional torus, we have that the orbit returns to x_0 if and only if there exists an integer k_0 such that k_0∆_X ∈ Z or, equivalently, ∆_X is a rational number. Otherwise, if ∆_X is not a rational number, then the trajectory passing through x_0 never closes. In other words, P_X^k is a rotation of the circle with irrational rotation number, so we conclude that all trajectories are dense in the torus, and the proof follows in these cases.

In the following we detail the proofs of statements (e) and (h). The first return map of X_RR1+ is given in statement (e) of Lemma 5, so the displacement map in this case can be computed directly; recall that ε > 0. For the case (g), similarly to the previous cases, the displacement map of X_RR3+ is obtained from Lemma 5. As X_RR3+ ∈ Ω²_{R-} ∩ Ω¹_{R+}, we have ξ_5 < 0 and ξ_6 > 0. The solutions of the equation d_{X_RR3+}(x_0) = 0 are of the form x_0^1 + k_1 and x_0^2 + k_2, where k_1, k_2 are integers. In the torus we obtain only two distinct points, and the integers k_1 and k_2 are the smallest ones such that x_0^1, x_0^2 ∈ [0, 1]. In Example 8 we exhibit a PSVF of this family with two limit cycles. Moreover, if we consider the ordered set of functions F = {g_0(z), g_1(z), g_2(z)}, the Wronskians W_1(z) and W_2(z) can be computed explicitly, and W_1(z) ≠ 0 and W_2(z) ≠ 0. Therefore, by Theorem B, the upper bound for the number of zeros of any linear combination of functions of F is two and, besides, there exists a linear combination of the functions of F presenting exactly two zeros. In this way, as the displacement map d_{X_LR3-} is a specific linear combination of the functions of F, we guarantee that the upper bound for the number of zeros of d_{X_LR3-} is two (as a function of z = tan(πx_0)). But if z_1, z_2 are the zeros of d_{X_LR3-}, then there exist real numbers x_0^1 + k_1 and x_0^2 + k_2, with k_1, k_2 integers, such that tan(π(x_0^1 + k_1)) = z_1 and tan(π(x_0^2 + k_2)) = z_2. We choose the integers k_1 and k_2 such that x_0^1, x_0^2 ∈ [0, 1]. Although the displacement map d_{X_LR3-} is a specific linear combination of g_0, g_1 and g_2, in Example 9 we present values of the parameters such that X_LR3- has exactly two limit cycles.

4.3. Proof of Theorem 3. To find limit cycles we have to find the simple zeros of the displacement map. We show that, for every m = 0, . . . , k - 1, there is at most one solution of (7) with x_0 ∈ (m/k, (m + 1)/k); thus there are at most k limit cycles for X_Ck. We study the solutions of (7), where m ∈ Z and x ∈ (m/k, (m + 1)/k). Fix m = 0 and k = 1 without loss of generality (we can always restrict ourselves to x ∈ [0, 1/k]). Note that the function g(x_0) = d^+ arcsin((kπα + β sin(2πkx_0))/β) has two critical points, 1/4 and 3/4, so it is monotone on (1/4, 3/4). Therefore the straight line 2x_0 d^+ kπ - b^+ kπ - 2m d^+ π meets the graph of g in at most one point. Thus there is at most one limit cycle for X_Ck with k = 1, and it is easy to see that there are at most k limit cycles for X_Ck in general. In Example 10 we provide values of the coefficients α, β, b^+ and d^+ for which the PSVF X_Ck presents one limit cycle for k = 1.
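Numerically, limit cycles are located exactly as in the proofs above: as simple zeros of the displacement map d_X(x_0) = P_X(x_0) - x_0 on [0, 1). The displacement map used in the following sketch is a hypothetical stand-in with two simple zeros; it is not one of the maps of Lemma 5.

import numpy as np
from scipy.optimize import brentq

def displacement(x0, eps=0.05):
    # hypothetical displacement map of a perturbed return map x0 + eps*f(x0)
    return eps * (np.sin(2 * np.pi * x0) - 0.3)

# bracket sign changes of d on [0, 1] and refine each root
grid = np.linspace(0, 1, 401)
vals = displacement(grid)
roots = [brentq(displacement, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print(roots)  # two simple zeros -> two hyperbolic limit cycles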
5. Final remarks and some examples. In the present section we exhibit explicit values of the parameters of the PSVFs X_RR1+, X_RR3+, X_LR1- and X_LR3- for which they realize the upper bound on their maximum number of limit cycles.

Example 10. Finally, we provide an example with exactly k limit cycles for X_Ck (see Theorem 3), for k = 1. Given the vector field X_k(x, y) = (α, β cos(2kπx)) with α, β > 0 and k > 0 an integer, we construct a vector field X_C(x, y) = (b^+, d^+) with a limit cycle. Note that for every m = 0, . . . , k - 1 we obtain a restriction on the parameters; the first one is -1 < kπα/β + √2/2 < 1. Now we fix m = 0 and prove that there is at least one limit cycle for x ∈ [0, 1/k]; we consider X_C(x, y) = (∆_Ck, 1/2), i.e. b^+ = ∆_Ck and d^+ = 1/2. By construction, P_{X_Ck}(1/8k, 0) = (1/8k, 1), so we have a fixed point of the Poincaré map of X_Ck and, consequently, a limit cycle. The derivative of the Poincaré map at this fixed point is nonzero under generic conditions, so this is an isolated fixed point, providing a limit cycle; see Figure 2. Thus we have exactly k = 1 limit cycles.
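The last step of Example 10, checking that the fixed point of the Poincaré map is isolated, can also be done numerically by estimating the derivative of the return map at the fixed point: P'(x*) ≠ 1 means the derivative of the displacement map is nonzero, hence the limit cycle is isolated (and attracting if |P'(x*)| < 1). The return map below is again a hypothetical stand-in, not the map of Example 10.

import numpy as np

def P(x0, eps=0.05):
    # hypothetical return map with an attracting fixed point
    return (x0 + eps * (np.sin(2 * np.pi * x0) - 0.3)) % 1.0

x = 0.3
for _ in range(100):          # iterate toward the fixed point
    x = P(x)
h = 1e-6
dP = (P(x + h) - P(x - h)) / (2 * h)  # central-difference derivative
print(x, dP)                  # dP != 1 -> isolated fixed point / limit cycle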
A fractional Landweber iterative regularization method for stable analytic continuation

In this paper, we consider the problem of analytic continuation of the analytic function g(z) = g(x + iy) on a strip domain Ω = {z = x + iy ∈ C | x ∈ R, 0 < y < y_0}, where the data are given only on the line y = 0. This is a severely ill-posed problem. We propose the fractional Landweber iterative regularization method to deal with it. Under both the a priori and the a posteriori regularization parameter choice rules, we obtain error estimates between the regularized solution and the exact solution. Some numerical examples are given to verify the efficiency and accuracy of the proposed methods.

Introduction

The problem of analytic continuation of an analytic function is encountered in many practical applications (see, e.g., [1][2][3][4] and the references therein), and numerical analytic continuation is an interesting and hard problem. It is well known to be, in general, ill-posed in the sense that the solution does not depend continuously on the data. To obtain stable numerical algorithms for ill-posed problems, effective regularization methods must be adopted, and several nonclassical methods have been developed rapidly in recent years. In [5], the authors used the Fourier truncation regularization method to solve this problem. In [6], the authors used the modified kernel method. In [7], the authors used the generalized Tikhonov regularization method. In [8], the authors used the optimal filtering method and obtained the optimal error estimate. However, in [5][6][7][8] the regularization parameter is chosen by the a priori choice rule, which depends on the a priori bound E. In practice the a priori bound is not known exactly, and working with a wrong constant E may lead to a bad regularized solution. In [9], the authors study a continuous fractional regularization method called FAR; in that work it is rigorously proved that FAR is an accelerated algorithm relative to the comparable order-optimal regularization method if the fractional order is in the range (1, 2). In [10], the authors study the convergence of Landweber iteration for linear and nonlinear inverse problems in Hilbert scales. Different from the usual application of Hilbert scales in the framework of regularization methods, the case s < 0 (for Tikhonov regularization) corresponds to a weaker regularization standard. In [11], the authors propose a new iterative regularization method for solving ill-posed linear operator equations. The prototype of these iterative regularization methods is a second-order evolution equation with a linear vanishing damping term. It can be regarded not only as an extension of asymptotic regularization, but also as a continuous analogue of the Nesterov accelerated scheme. They also discuss the application of the newly developed accelerated iterative regularization method, with an a posteriori stopping rule, to diffusion-based bioluminescence tomography, which is modeled as an inverse source problem for an elliptic partial differential equation with both Dirichlet and Neumann boundary data. In [12,13], the authors used the modified Tikhonov regularization method to solve this problem, and gave error estimates between the regularized solution and the exact solution under the a priori and a posteriori regularization parameter choice rules, respectively.
However, under the a posteriori regularization parameter choice rule, the error estimate obtained there is of logarithmic form. In [14][15][16][17], the authors used an iterative regularization method, a wavelet regularization method, a modified Tikhonov regularization method and a modified Lavrentiev iterative regularization method to solve this problem; under the two regularization parameter choice rules, Hölder-type error estimates were obtained in all cases. In this paper, we use the fractional Landweber iterative regularization method and the Landweber iterative regularization method to solve this problem. The Landweber regularization method, which is very useful for solving inverse problems, overcomes the saturation phenomenon of the Tikhonov regularization method. The Landweber regularization method goes back to [18] and has since been used to solve many inverse problems; see [19][20][21][22][23][24]. In [25], Xiong first proposed the fractional Landweber regularization in 2017; compared with the standard Landweber method, it greatly reduces the number of iteration steps.

We can state the problem as follows. Suppose the domain Ω is in the complex plane C, i is the imaginary unit and y_0 is a positive constant. The function g(z) = g(x + iy) is analytic in Ω, and g(· + iy) ∈ L²(R) for all y ∈ [0, y_0]. The data at y = 0 are the given measurement data, i.e., g(z)|_{y=0} = g(x) ∈ L²(R). Let the noise data be g^δ(x) ∈ L²(R), and let g(x) and g^δ(x) satisfy ‖g^δ - g‖ ≤ δ, where δ is the noise level and ‖·‖ is the L²(R) norm. Moreover, assume there holds an a priori bound in terms of a fixed positive constant E. Here ĝ denotes the Fourier transform of the function g(x), ĝ(ξ) = (2π)^{-1/2} ∫_R g(x) e^{-iξx} dx, and the inverse Fourier transform of the function ĝ(ξ) is g(x) = (2π)^{-1/2} ∫_R ĝ(ξ) e^{iξx} dξ. The L²(R) norm of g(x) is defined by ‖g‖ = (∫_R |g(x)|² dx)^{1/2} and, by the Parseval formula, ‖g‖ = ‖ĝ‖. The problem to be solved is to use the measurement data g(x + iy)|_{y=0} to recover g(x + iy) for 0 < y < y_0. Applying the Fourier transform with respect to the variable x, we get the following equation:

e^{yξ} \widehat{g(· + iy)}(ξ) = ĝ(ξ).  (1.11)

The outline of this paper is as follows. In Section 2, the fractional Landweber iterative regularization method and the a priori error estimate are presented. In Section 3, the a posteriori error estimate between the exact solution and the approximate solution is given. In Section 4, several examples are selected to show the effectiveness of this method for solving this problem.

2. The convergence error estimate with an a priori parameter choice rule. We now take up the Landweber regularization method. Eq (1.11) can be written as an operator equation, and the Landweber iterative scheme then has the form of a fixed-point iteration indexed by m = 1, 2, 3, ..., the iteration step. Combined with the noise-contaminated data ĝ^δ(ξ), the approximate solution obtained by the Landweber iterative regularization method can be expressed in closed form through a filter function; the fractional Landweber regularized solution is obtained by raising this filter to the power γ, where 0 < γ < 1 is a constant.
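To make the construction concrete, the following sketch applies a fractional Landweber filter of the standard form [1 - (1 - a|k|²)^m]^γ / k in Fourier space to a model one-dimensional multiplication operator. The smoothing symbol k(ξ) = e^{-y|ξ|} below is a stand-in model symbol rather than the exact multiplier e^{yξ} of Eq (1.11), and the values of the step a, the iteration count m and the fractional order γ are illustrative.

import numpy as np

np.random.seed(0)
N, L, y = 1024, 20.0, 0.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k = np.exp(-y * np.abs(xi))              # stand-in forward multiplier, |k| <= 1

u_true = np.exp(-x**2)                   # quantity to be recovered
g = np.fft.ifft(k * np.fft.fft(u_true)).real
g_delta = g + 1e-3 * np.random.randn(N)  # noisy data, noise level ~ 1e-3

a, m, gamma = 1.0, 200, 0.55             # step, iterations, fractional order
filt = (1 - (1 - a * np.abs(k)**2)**m) ** gamma
u_rec = np.fft.ifft(filt / k * np.fft.fft(g_delta)).real

print(np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))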
We first give some useful lemmas.

4. Numerical implementation and numerical examples. In this section, we illustrate the effectiveness of the fractional Landweber iterative regularization method for solving this problem through different examples. The examples are the same as those in [5]. For the sake of calculation, we fix y_0 = 1 and work on a bounded computational domain in x. The data g(x) are contaminated by noise generated with 'randn(·)', which produces random numbers obeying the standard normal distribution, and the noise level is measured by ε. For the numerical computation, we select M = 100, p = 2 and the a priori bound E = ‖g‖_{L²}. The approximate solution is computed via the fast Fourier transform. We present the error between g(x + iy) and g^{m,δ}(x + iy) in the L² norm,

e_a(g(x + iy)) := ‖g^{m,δ}(x + iy) - g(x + iy)‖.  (4.4)

Example 1. Take the function g(z) = e^{-z²} = e^{-(x+iy)²} = e^{y²-x²}(cos(2xy) - i sin(2xy)), with g(x) = e^{-x²}, Re g(z) = e^{y²-x²} cos(2xy) and Im g(z) = -e^{y²-x²} sin(2xy).

Example 2. Take the function g(z) = cos(z), with g(x) = cos(x), Re g(z) = cosh(y) cos(x) and Im g(z) = -sinh(y) sin(x).

Figure 1 shows the comparison of the real parts of the exact and approximate solutions at y = 0.5 and y = 0.9 for the noise levels ε = 0.1, 0.001 in Example 1. Figure 2 shows the comparison of the imaginary parts of the exact and approximate solutions at y = 0.5 and y = 0.9 for the noise levels ε = 0.1, 0.001 in Example 1. Figure 4 shows the comparison of the imaginary parts of the exact and approximate solutions at y = 0.1 for the noise levels ε = 0.01, 0.001 and at y = 0.5 for the noise level ε = 0.001 in Example 2. From Figures 1-4 we find that the smaller ε is, the better the computed approximation, and the bigger y is, the worse the computed approximation. The fitting effect in Example 2 is better than that in Example 1.

Table 1 shows the error results for different y and ε in Example 1; we take γ = 0.55 and γ = 1 for comparison. According to the data in Table 1, the smaller γ is, the smaller the error. The larger y is, the smaller the error result, which is consistent with the error estimates obtained in Section 3. Table 2 shows the results of Example 1 for different iteration steps m, y and ε; again we take γ = 0.55 and γ = 1 for comparison. From the data in Table 2, the smaller γ is, the smaller the number of iteration steps m. When y is larger, the number of iteration steps m is larger, which means that the better the fitting effect, the greater the number of iteration steps m. In addition, when γ = 0.55 the regularization method involved is the fractional Landweber iterative regularization method, and when γ = 1 it is the standard Landweber iterative regularization method. It can be seen from Tables 1-3 that the results of the fractional Landweber iterative regularization method are significantly smaller than those of the Landweber iterative regularization method, both in the error and in the iteration steps m. Therefore, the fractional Landweber iterative regularization method is more effective than the Landweber iterative regularization method.
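The ill-posedness visible in Figures 1-4 can be reproduced in a few lines: continuing the noisy data of Example 1 by the bare Fourier multiplier e^{-yξ} amplifies the noise at large |ξ|, while cutting high frequencies keeps the error e_a of (4.4) small. The sign convention of the multiplier, the noise model and the cutoff value below are illustrative assumptions, not parameters taken from the paper.

import numpy as np

np.random.seed(1)
N, Lx, y, eps = 512, 20.0, 0.5, 0.001
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)

g0 = np.exp(-x**2)                                 # data on the line y = 0
g_delta = g0 * (1 + eps * np.random.randn(N))      # multiplicative noise model
exact = np.exp(y**2 - x**2) * (np.cos(2*x*y) - 1j*np.sin(2*x*y))

def continue_to(data, cutoff=np.inf):
    # e^{-y*xi} amplifies the modes with xi < 0; truncation regularizes
    mult = np.where(np.abs(xi) <= cutoff, np.exp(-y * xi), 0.0)
    return np.fft.ifft(mult * np.fft.fft(data))

for cutoff in (np.inf, 10.0):
    e_a = np.linalg.norm(continue_to(g_delta, cutoff) - exact)
    print(cutoff, e_a)  # naive continuation blows up; truncated stays small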
Conclusion. In this paper, we use the fractional Landweber regularization method to solve the problem of analytic continuation on a strip domain. We give not only the a priori regularization parameter choice rule but also the a posteriori regularization parameter choice rule, and under these two parameter selection rules the corresponding convergence error estimates are obtained. For the ill-posed problem discussed in this paper there are, besides the fractional Landweber iterative regularization method used here, other regularization methods, such as the Tikhonov regularization method and the quasi-boundary regularization method. For these methods the process of obtaining a priori and a posteriori convergence error estimates is similar but, in the results obtained, the Tikhonov regularization method produces a saturation effect, while the fractional Landweber iterative regularization method does not. The numerical examples also show that this regularization method is effective.
Causation and Information: Where Is Biological Meaning to Be Found? The term 'information' is used extensively in biology, cognitive science and the philosophy of consciousness in relation to the concepts of 'meaning' and 'causation'. While 'information' is a term that serves a useful purpose in specific disciplines, there is much to the concept that is problematic. Part 1 is a critique of the stance that information is an independently existing entity. On this view, and in biological contexts, systems transmit, acquire, assimilate, decode and manipulate it, and in so doing, generate meaning. I provide a detailed proposal in Part 2 that supports the claim that it is the dynamic form of a system that qualifies the informational nature of meaningful interactive engagement, that is, that information is dependent on dynamic form rather than that it exists independently. In Part 3, I reflect on the importance of the distinction between the independent and dependent stances by looking specifically at the implications for how we might better interpret causation and emergence.

'… the secret connexion, which binds them together, and renders them inseparable' (Hume 1748, Section 7, part 1: 108).

That the secret connexion is entirely concealed should be a cautionary note: is it not the case that the concept of information has found its way into that very role? Information has become a metaphor for this secret connexion. As such, it is the unseen 'commodity' which connects one causal agent to the next, ensuring the determination of one event to another in an inexorable chain of informational events. Much of this way of thinking is implied, but Johansson (2009: 84) is explicit: 'we can't get any information from a system without interacting causally with it … information is a causal process'. Fresco et al. (2018: 547) provide another example: 'functional information is a special type of causal information'. So, too, does Jablonka (2002: 582), who details the consistent causal role that she says information plays in contributing to functional, goal-oriented behaviours. From this conception, it is but a small step to have this informational commodity appear to bear a correspondence with, and to become a carrier of, meaning. It seems that the adoption of information as the secret connexion in causal process has, in general, underscored a creeping bias which has granted causation its volitional character in virtue of the transmission of an informational (read, 'meaningful') directive. This stance has greatly influenced thinking in the fields of biology, cognitive science and philosophy of consciousness. I am broadly in agreement with Levy (2011), whose 'fiction-based' explanation concerning the application of the concept of information in biology indicates the fallacy in treating information as a concrete physical entity: 'Informational notions have theoretical significance, but this should not lead us to reify them' (p. 653). Levy argues that applying an informational schema is a pretence for qualifying the causal facts. In this paper, I reconsider the causal-informational relation and explore in what way it makes sense to connect information with meaning. Firstly, in Part 1, I explore and critique the unqualified assumption that information is a commodity that meaningfully informs causal process.
In Part 2, I defend the claim that information, in relation to biology and mental content, can make sense only in reference to any given Entity's, Agency's, System's, or Observer's (EASO) particular meaningful categorisation of interactive events, and that this ultimately depends on the EASO's own dynamic form. In Part 3, I will indicate how these two opposing positions on information have a bearing on emergence and on the metaphysics of causation.

Information as a Commodity

The use of the acronym EASO is in itself a statement of intent, for it encompasses a very broad range of views, definitions and disciplines. But it also reflects the tendency for an overlap in the use of terminology; many of these terms are used interchangeably. A more encompassing term that I might use instead of EASO is 'construct'. But for the purposes of this paper I will use the acronym because each of the terms that the acronym encompasses has a unifying connection to the concept of information: they are all said, in one form or another, to use it, process it, transmit it, decode it and so forth. In other words, they are all terms that support the premise that information is a property or commodity that can be 'moulded' in a meaningful way. It is due to this manner of thinking, through the use of the terms which make up the acronym, that the concept of information has become the metaphor in place of Hume's secret connexion. This notion of information as a substantive property is ubiquitous in the fields of biology, cognitive science and philosophy of consciousness, where researchers typically have come to speak of information as being acquired, used, detected, read, processed, transmitted, received, extracted, converted, utilised, exchanged, coded and stored. As such, it is passed on from one instantiation to the next, from environment to agent, from one system to another, or from object to observer or interpreter. It is then the task of the EASO to assimilate it, and it is in this capacity, usually, that meaning is said to be constructed (later I will question the veracity of this syntax-to-semantics order; see Section 1.4). Subsequently, I refer to this stance, which treats information as a causally efficacious commodity, as the EASO-independent stance. This is the view that information exists independently of any class of EASO that might register and assimilate it. For most researchers, information informs the processes of physical, biological and mental coherence.

Information and the Construction of Biological Meaning

In their treatment of information as a commodity, researchers typically say, for instance, that plants and animals make practical use of environmental information; genes and cells carry semantic information; phenotypic traits are coded for by genes which contain information; cell processes execute a program of information stored in genes; information flows from one generation to the next; mind content is constructed from information; mental representations may consist of analogue or digital information; information enters into our perceptual world through our senses where processes of cognition convert syntactical information into semantical meaning, and so on. In this capacity, information is something that endures until it is read and processed by an EASO.
This reading and processing is considered possible only if the EASO (be it natural or artificial, noting Dennett 1987) possesses the necessary 'functional complexities' to facilitate the construction of the counterfactual aspects of meaningful information. This objectification of information, that is, the tendency to view information as a substantive thing, is reaffirmed by the idea that information can be reduced quantitatively into smaller 'bits' (see Peters's 1988 historical overview). Inevitably, where information is thought to consist of incremental bits, somewhere along an arbitrary line of assimilating complexity, which for many entails decoding or computation, certain organisational mechanisms, it is assumed, must also be capable of constructing meaning from it. (Concerning computation, Piccinini and Scarantino (2010) argue that the cybernetic movement (see McCulloch and Pitts 1943; Rosenblueth et al. 1943; von Neumann 1945; Ashby 1952) influentially contributed to the conflation of the concept of information with its computational treatment in many disciplines, including cognitive science. On information flow, see Bergstrom and Rosvall (2011) and the many commentaries on their work; consider Hoffmeyer's (2002) critique of the notion of information flow and transmission, and of the fallacy that 'instructions' are passed from DNA to protein; and Bickhard (2009: 575), who argues that 'the information flow model of perception, cognition, and language is wrong from top to bottom'.) For many in biology, it is convenient and expedient to utilise information in this manner and assume that somewhere along the line 'meaning just happens', courtesy perhaps of 'Mother Nature' (noting Fodor's 1996 critique). Of course, biological meaning comes in many forms. I am not sure how contentious it would be if one were to try to make a tentative list of these forms. For instance, I consider biological meaning to include such things as function, quality, quantity, process, structure, temporality (e.g. 'memory') and spatial differentiation. But the point is not whether these forms should be included in a list of what might or might not constitute meaning in biological contexts. The point is that there is always the temptation to equivocate between the concept of information and the forms of meaning to which it might be affiliated. This equivocation then allows certain assumptions about information to be left unchallenged. Jablonka (2002) presents an interesting example (see also Fresco et al. 2018). On Jablonka's functional account, a source in the environment becomes information only when the interpretative system of a receiver facilitates a functional reaction; a source is not information otherwise. This represents an observer-dependent stance on information. But if we take her shortened definition from the abstract, 'a source becomes an informational input when an interpreting receiver can react to the form of the source (and variations in this form) in a functional manner,' we find that it can be written alternatively, without recourse to the concept of information, as follows: 'a source is meaningful when a receiver reacts to the form of the source in a functional manner'. In other words, Jablonka's definition, as I have put it, is about biological meaning specifically as goal-oriented function. As such, it is not clear what purpose the concept of information serves in the original untampered definition. If we say that the phenomenal experience of, for example, 484 THz light in two different species is qualified in virtue of their contrasting ontogenetic and phylogenetic ancestry, in what way does it make sense to say that the source (484 THz) 'has information' unique to each creature? Are we not merely talking about an environmental source 'meaning' different things to different organisms? One might seek to incorporate information into Jablonka's thesis concerning function and its relation to biological meaning by drawing a distinction between functional, syntactic and semantic information.
Clearly, though, to do this is to introduce distinct conceptual versions of information which, in effect, end up leading to an equivocation between the EASO-independent and EASO-dependent positions.

Information and the Construction of Mental Content

In regard to the philosophy of consciousness, Akins (1996: 337-8) argues that attempts at the naturalisation of mental content typically rest upon an intuitive view of what the senses do, namely, that they function to inform the brain of what is going on 'out there' in the external world. This orthodoxy regards the human mind as an information processing system with storage and processing capabilities that operate on internal representations. Importantly, Akins (1996: 350) cites empirical studies to reinforce her point. This evidence indicates that sensory signals do not correspond with some property in the world: sensory signals clearly and evidentially do not encode external properties. In essence, Akins is questioning the veracity of the view that the sensory or mental states by which we ascribe meaning to the world bear a correspondence with an informational world. Prakash et al. (2020) apply 'interface games' (a class of evolutionary game) to indicate that perceptions never faithfully report the structures of the observer-independent world: natural selection, they argue, shapes perceptual systems not in order to provide veridical perceptions, but to serve as species-specific 'interfaces' that guide adaptive behaviour. Steward (1997, 2012) also criticises the standard conception of perception and agency, proposing instead that we view agency as a special form of downward causation (see Part 3). Other advocates of Akins's position include Cohen and Nichols (2010), who make the case that 'colour' perception is a construct of the perceiver rather than that colour is EASO-independent, and Bickhard (2009: 573-5), who questions the classic notion of 'information flow' from perception to cognition by arguing that perception is not a matter of sensory encoding. There are a number of reasons why the EASO-independent position has gained traction in recent history. For instance, there has been the influence of the cognitive revolution of the 1960s and 1970s (see Gardner 1985) and of cybernetics and computer science. There was also the potent idea of genes as information-bearing following the discovery of the double helix. They all seemed to point to an information-bearing world. The task was then about how 'complex systems' constructed meaning from it. Simon (1978: 3) expresses unreserved confidence when adopting this position: 'The human brain encodes, modifies, and stores information that is received through its various sense organs, transforms that information by the processes that are called "thinking", and produces motor and verbal outputs of various kinds based on the stored information.
So much is noncontroversial.' Dretske (1981: 194) echoes Simon's stance: 'In teaching someone the concept red … we exhibit the colored objects under conditions in which information about their color is transmitted, received, and (hopefully) perceptually encoded. … it is the information that the object is red that is needed to shape the internal structure that will eventually qualify as the subject's concept red' (emphasis in original). Other examples that illustrate the EASO-independent stance on information are numerous, but note Tye's (1995: 145) unquestioning observance of the orthodoxy: 'The obvious view, suggested by our color experiences (and compatible with my position), is that the colors we see objects and surfaces to have are simply intrinsic, observer-independent properties of those objects and surfaces' (emphasis added). Clearly, Akins stood against a significant tide of opinion at the time, one to which Lycan (1996: 54-5) also gave voice: 'When real human beings regard a physical object from different visual points of view, they take in different and all highly selective bunches of information about that object.' In each of these quoted examples, some kind of informational property is assumed to exist out there in the world independently of the observer. To take this view prompts those who are interested in naturalisation to consider the great mystery of where and how meaning comes about: how and why does information from the objective informational world get re-expressed as biological, phenomenal or conceptual meaning? This prompting inevitably leads many to consider the role of representation and/or computation in the biological, phenomenal and conceptual processing of environmental information: 'Many representational models hold that representation is constituted in some special ... relationship between the representation and the represented. Typically, this special relation is thought to be causal, nomological, or informational' (Bickhard 2009: 559). Representational and computational theories typically begin, then, from the problematic premise that there is such a thing as a physical entity that possesses the 'know-how' to qualify informational differences and, subsequently, to instruct a meaningful and comparable measurement of those differences (noting Bateson 1970; Dennett 1998: 142-9; Stoffregen 2000).

Information: The Aether and the Meaning-Maker

It is evident that the EASO-independent view of information performs an ideological alchemy which is deeply problematic. How does this ideological alchemy work? Well, consider 'aether'. It was once thought by the greats, such as Newton, Maxwell, Lorentz and Kelvin, that there was an all-pervasive aether, a field or space-filling medium throughout the Universe, that facilitated the propagation of electromagnetic and gravitational forces. The concept of information serves a very similar function: information is typically viewed as an all-pervasive entity that exists everywhere. In this capacity, information is given the status of a medium that occupies every corner of existence (note Wiener 1948: 155; Günter 1963; Stonier 1990: 21, 1991, 1996) and that facilitates the transmission, storage and measurement of 'value-laden' properties across the broadest possible range of interactions, be they physical, biological or mental. Unfortunately, in its capacity as a kind of aether, information has become the realist's prop for the 'intrinsic property of reality'.
In tandem with this conceptualisation, the term 'system' has found its place in the lexicon of academia to refer to any complex process that is deemed to have access to, to read, and to interpret this 'information-aether'. In this manner, and across a very broad range of disciplines, the term 'system' has become a metaphor for that which organises complexity (or 'makes sense of it'), or that turns information into meaning. For illustration, consider these examples from three prominent philosophers. The first is from Dretske's influential book, Knowledge and the flow of information (Dretske 1981): 'a semantic structure may be viewed as a system's interpretation of incoming, information-bearing signals' (p. 181); 'the system making the conversion necessarily abstracts and generalizes. It categorizes and classifies' (p. 182; emphasis in original); 'the system has interpreted the signal as meaning … The system has seen a red square' (Dretske 1981: 181; emphasis in original). Similarly, Chalmers (1995; see also Chalmers 2011), whose intention in his seminal paper is to define and classify the problem of consciousness, utilises the term 'system' as the entity or process that derives meaning from a world full of informational properties: 'Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behavior' (p. 201). A third example is taken from the transcription of an interview between Warburton and Dennett (2013) concerning Searle's (1980) much debated Chinese Room argument, and illustrates how the EASO-independent stance influences the conception of the argument:

Dennett: Imagine the capital letter D. Now turn it 90 degrees counterclockwise. Now, perch that on top of a letter J. What kind of weather does that remind you of?
Nigel Warburton: The weather today, raining . . .
Dennett: That's right; it's an umbrella. Now notice that the way you did that is by forming a mental image. You know that coz you are actually manipulating these mental images … Now, that would be a perfectly legitimate question to ask; in the Chinese Room scenario, and … if Searle, in the backroom, actually followed the program, without his knowing it, the program would be going through those exercises of imagination: it would be manipulating mental images. He would be none the wiser coz he's down there in the CPU opening and closing registers, so he would be completely clueless about the actual structure of the system that was doing the work. Now, everybody in computer science, with few exceptions, they understand this because they understand how computers work, and they realize that the understanding isn't in the CPU, it's in the system … that's where all the competence, all the understanding lies. (09:14; emphasis in original)

In each of these three examples, we see the authors referring to 'systems' as doing the hard work of creating meaning. The last quotation in particular, from Dennett, is illustrative of the way in which the term 'system' is so readily used in its role as a meaning creator (the system, in this particular example, being a computer), and indicates how effectively it is accepted as an explanation.
In relation to mental content, the conceptual basis underpinning the term 'system', which has also become central to the predominant expository language, perpetuates the notion of a syntactic-semantic dichotomy. This dichotomy must exist in such cases because the concept 'system' assumes the role of a facilitator through which an external syntactical information-aether gets re-presented as content that is meaningful; the concept 'system', as meaning creator, is the convenient bridging concept that plugs the syntactic-semantic gap. Without that plug, the concept of information becomes virtually redundant. Now, the reader might interject that this criticism is all very well, but what is the alternative? In Part 2, I propose inverting the syntactic-semantic order by putting meaning first and positioning information as an EASO-dependent construct. To understand this view, it is perhaps easiest to consider it in relation to human written or spoken language. An individual human, for instance, may possess a meaningful subjective perspective about the world and their place within it. They might then articulate part of that perspective through a syntactic, digitised, socioculturally adapted linguistic code. A second individual might then read that coded text. While doing so, this reader, I suggest, does not upload or convert that code's meaning from author to reader: 'A book itself cannot transmit or "install" information (clues, ideas, and images) into readers' minds' (Sukhoverkhov 2010: 164). Rather, the reader moderates their own existing world-view: they reconstruct a new meaning about the world that incorporates some of the ideas implied in the text. This new meaning, like all fingerprints, will be slightly different from all others. What I suggest is that there is no meaning in the text itself whose substance, as 'information', is transmitted or relayed from author to reader; information is not a vehicle that carries meaning via some causal correspondence from one medium to another. Rather, any sense of information from the author that is established by the reader is an appraisal of the reader's re-evaluated worldview: theirs is a new, constructed meaning that bears a relation to that of the author due to a multitude of equivalent cultural, social and experiential references and definitions. For the reader to talk of this new meaning in informational terms is for them to attempt to standardise, by digitising and conceptualising, their meaningful worldview, that is, to seek to make it relational. My position is a broad metaphysical one that states that this is the case in all instances and for all EASOs.

Information: An Expansive Metaphysical Proposition

Searle (2013) adopts the position that information is EASO-dependent in expressing the view that 'Information is only information relative to some consciousness that assigns the informational status' (sec. 6). But I advocate a more expansive metaphysical proposition that extends this view, namely that information is EASO-dependent, to include any kind of EASO, be it mental, biological or even purely physical. This view reflects that of Josephson (2017), who has proposed the incorporation of meaning into fundamental physics, drawing specifically on the insights of biosemiotics. It is also implied by Pharoah's (2018) hierarchical model, which proposes distinct ontological levels of interactive discourse that might be extrapolated into the realm of quantum mechanics (see also Barad 2006; notably, Fuchs (2010), who argues for the subjective nature of quantum information, emphasises that supporters of the informational point of view about quantum states have tried to have it both ways: on the one hand that quantum states are not real physical properties, yet on the other that there is a right quantum state that is agent independent).
One of the consequences of this expansionist EASO-dependent stance is that it allows for the possibility that a host of distinctive categories of EASO possess differing categories of informational relations to their environment. Nevertheless, for the purposes of this paper the focus is only on the biological and mental aspects of information and meaning. In these areas, the proposal outlined in this paper aligns with Bateson (1972: 453) when he expresses the view that the informational status of an object, such as a piece of chalk, is dependent on the subject observing it and, therefore, is dependent on the nature of that subject's dynamic construct. It is also consistent with Jordan and Vinson (2012: 9), who take the embodiment position that 'organisms do not need to be "informed" by environments in order to be about environments' because they are necessarily 'about' the contexts they embody. In other respects, the paper advocates the enactivist view that meaningful information necessitates a description in terms of the locus or characterisation of engagement. 'For the enactivist, sense is not an invariant present in the environment that must be retrieved by direct (or indirect) means. Invariants are instead the outcome of the dialog between the active principle of organisms in action and the dynamics of the environment' (Di Paolo et al. 2010: 39; also consider Torrance and Froese 2011). Consider also Akins (1996, discussed above) and Thelen and Smith (1994), who propose that systems do not store information acquired from the world but that multimodal systems structures categorise the world informationally. Similarly, for Merleau-Ponty (1962, 1963), biological reactions are not reducible to structural parts within the organism or to localised bits of stimuli but rather to the interacting relation of an agency's entire dynamic form with its environment (bearing in mind that an environment may be, for some interactive agents, the internal world of a single cell and, for others, such as a species, the transgenerational environment).

Dynamic Internal Adjustment and Meaning as a Construct of the Whole

Merleau-Ponty expresses the view that biological reactions are an embodied interacting relation between an agency's entire dynamic form and its environment: 'It can happen that, submitted to external forces which increase and decrease in a continuous manner, the system [comprising 'the individual'], beyond a certain threshold, redistributes its own forces in a qualitatively different order' (Merleau-Ponty 1963: 137, 'Structure in Physics'). One can interpret this in relation to the evolutionary and processual nature of biology, adaptation and mental engagement: what Merleau-Ponty is saying is that individual bodies maintain a balance in virtue of internal adjustments that influence the whole. Interestingly, on his account the system redistributes its own forces qualitatively.
Merleau-Ponty qualifies this with the following: 'physical stimuli act upon the organism only by eliciting a global response which will vary qualitatively when the stimuli vary quantitatively; with respect to the organism they play the role of occasions rather than of cause; the reaction depends on their vital significance rather than on the material properties of the stimuli. Hence, between the variables upon which conduct actually depends and this conduct itself there appears a relation of meaning, an intrinsic relation. One cannot assign a moment in which the world acts on the organism, since the very effect of this "action" expresses the internal law of the organism.' Clearly, Merleau-Ponty subscribes to the view that information does not exist out there in the environment (these passages indicate that he would also subscribe to Levy's 2011 bifold distinction): that which would otherwise be bland causal mechanics becomes qualified (informationally), notably, not by 'the material properties of the stimuli', but rather by the 'internal law of the organism', in virtue of the adjustment of its internal dynamics. Vehkavaara's (1998: 210) view that 'The interaction between a living system and its surroundings is not considered as causal chains of the necessary causes and effects, but as sign processes' echoes Merleau-Ponty's position, where 'the sign process' equates, for Merleau-Ponty, to the 'internal law of the organism' (see below concerning the sign process). For Merleau-Ponty, meaning is a construct of the whole, where the whole, I suggest, may be an organism, a cell, an organelle, even a replicating lineage or a consciousness. But what is this 'internal law' of which he speaks? Well, we can confidently say that the internal law must be that which ensures the maintenance of a dynamic stability following interaction, because a dynamic stability is what persists: when any given EASO responds to an interaction through an adjustment of its internal dynamics, we can note that it is adjusting to this end. Whatever else it could be, this end, in all its complex and diverse incarnations, must be the observance of its internal law (until such time as the maintenance of stability undermines structural and/or functional integrity, or when a system reaches 'beyond a certain threshold').

The Dynamic Whole that Qualifies Meaning as Informational

Insofar as there is a dynamic stability at any given instance in observance of its internal law, we can say that an EASO's dynamic construct itself must be that which qualifies the nature of its response to interactive impulses. Additionally, if we say of an EASO's dynamic state that it responds differently to a variety of interactive impulses in subtle pursuance of its particular internal law, we can surmise that all its responses are due to the nature of its own particular dynamic state. It is the internal law that generates significance and meaning via a dynamic coupling with the environment. Thus we can extend Ingold's (1990: 216) notion that 'enfolded within the organism itself is the entire history of its environmental conditions' by suggesting that the construct of the organism, namely, its ancestral history and its changing environment, as well as its experiences and thinking, constitute the informational integrity of its entire 'form' (a view shared in Jablonka 2002). This 'form' is what informs its embedded and meaningful relation to the world.
Without that form, there is no meaning, there is no information (consider here Cao's 2012 teleosemantic approach to information). We should note, therefore, that there must be an ongoing informational relation between an EASO's dynamic stability and its environment as it adjusts to interactions. Furthermore, it is an EASO's changing dynamic form itself which qualifies the ongoing informational relationship that it has with its environment. Consequently, the argument supporting the key claim is made, namely: if there exists an informational character or status to physical interaction, it is not a commodity that is transmitted by a direct causal correspondence from one instantiation to another but, rather, is a reflection of the specifics of the internal dynamics of any given EASO. In other words, information is EASO-dependent, not EASO-independent. As a physical principle, we have good reason to claim that this is true of any given EASO, whether it be mental, biological or even physical in nature. This conclusion undermines the view that the mind is principally a computational module and that the body is an information processing device: 'the body is not a puppet controlled by the brain but a whole animate system with many autonomous layers of self-constitution, self-coordination, and self-organisation and varying degrees of openness to the world that create its sense-making activity' (Di Paolo et al. 2010: 42).

Semiosis and Information as EASO-Dependent

Peircean semiotics has greatly influenced biosemioticians in their desire to interpret biological meaning and clarify the nature of its origins in informational and transactional terms. But, I suggest, biosemiotics should not focus on studying living organisms in terms of their ability to generate and to interpret information; life is not intrinsically related to information processing and its communication (Sharov 2010: 1051). Indeed, the term 'interpretant' can be misleading (Savan 1988 expresses the view that interpretants need to be understood as 'translations'). This is not to say that the triadic object-sign-interpretant model is incorrect, but that it readily suggests an information-processing interpretative agent; this is clearly problematic from the EASO-dependent standpoint. What the EASO-dependent stance requires instead is an emphasis on the role of the dynamic interpretant in the final analysis. Thus, the EASO-dependent stance might be considered to incorporate semiosis as a triadic process whereby a construct mediates objects as sign relations in virtue of their meaningful relevance, be that relevance functional, qualitative, quantitative, processual, objective, temporal or spatial: these being aspects or modes of biological meaning. This stance is evident in Sukhoverkhov (2010), in his analysis of temporal consistency in biological, mental and social contexts: 'A sign is a heteronomous phenomenon whose being depends on subjects . . . Signs as material objects or "sign vehicles" have no special semiotic (representative) essence, only material existence. The semiotic essence is assigned to them …' (p. 162; emphasis in original). The EASO, as a construct, generates its own interpretant as a unique translation of interactive experience. This reconfigures the Peircean stance, where semiosis is often interpreted as a process of communication of a form, from the object, to the interpretant, through sign mediation.

Part 3: Emergence and Causation

Is the EASO-Dependence-Independence Distinction Worth Examining?
Hoffmeyer (2002: 9) emphasises that in recent years 'quite a few biologists and philosophers have claimed that efficient (Aristotelian) causation cannot exhaustively account for the dynamics of living natural systems (Juarrero 1999; Riedl 1997; Rosen 1991; Salthe 1993; Ulanowicz 1997)'. Hoffmeyer goes on to say that this change towards a richer concept of causation makes the idea of information flow look a little like the antiquated notion of phlogiston. But many still persist with the view that the dependent-independent distinction is unimportant or that the viewpoints amount to the same thing. They might say, for example, that an aeroplane they see flying across the sky exists independently of their seeing it, and therefore, so too must exist the informational properties by which they and others identify it as an aeroplane: the information, they claim, is obviously not dependent on the perceiving subject because the object exists irrespective of the subject's observation of it. But to clarify, the view that information is EASO-dependent is not a denial of realism. Rather, all it says is that information is not a commodity that is extracted from the environment and from which meaning is then constructed, re-presented or computed. Instead, the EASO-dependent stance holds that the nature of an EASO's meaningful relation to the world is a function of its existing dynamic construct. This meaningful construct can only then be viewed in informational terms. The importance of this distinction is best examined in relation to emergence, for in the biological disciplines the idea of emergence is widely accepted (Korn 2005; Rothschild 2006; Okasha 2011). In the next section, however, what I intend to do is show that emergence fits uncomfortably with the view that information is EASO-independent. This is not the case if one adopts the EASO-dependent stance on information.

Emergence and the Upward-Downward Causation Paradox

Kim's (1999, 2006a) argument against emergentism is under-appreciated in biological contexts (note Bickhard 2009: 550-1; O'Connor and Wong 2020). Kim states that downward causation is the emergentist's most problematic issue: how is it that lower-level processes cause higher-level processes which in turn exercise downward causal influences on lower-level processes? Kim (2006b) presents the following argument to support the view that emergentism faces a problematic overdetermination:

I would like to give an idea of the difficulties that confront anyone who wants causal efficacy for emergent properties. Suppose a claim is made to the effect that an emergent property, M, is a cause of another emergent property, M* (this is short for saying that an instance of M causes an instance of M*). As an emergent property, M* is instantiated on this occasion because, and only because, its basal condition, call it P*, is present on this occasion. It is clear that if M is to cause M*, then it must cause P*. The only way to cause an emergent property is to bring about an appropriate basal condition; there is no other way. So the M-M* causation implies a downward causal relation, M to P*. But M itself is an emergent property and its presence on this occasion is due to the presence of its basal condition, call it P. When one considers this picture, one sees that P has an excellent claim to be a cause of P*, displacing M as a cause of P*. The deep problem for emergent causal powers arises from the closed character of the physical domain, which can be stated as follows: . . .
If a physical event has a cause, it has a physical cause. And if a physical event has an explanation, it has a physical explanation. (p. 199)

To summarise: emergent property M has basal properties P; emergent property M* has basal properties P*. If M causes M*, it does so in virtue of causing M*'s basal properties, P*. But P also has a claim to cause P*, displacing M's claim to causing P*. My view, however, is that the Kim paradox relies on the assumption that a causal entity determines an effect through some direct instructional correspondence. The causal impetus, it is assumed, provides the information to the system to act in a particular way courtesy of that impetus's causal properties. The Kim paradox, therefore, aligns with the EASO-independent standpoint; it is from this standpoint that he undermines the emergentist position. Is it not possible, therefore, to disarm Kim's argument of overdetermination by undermining his EASO-independent standpoint? For instance, if we say of a certain environmental impetus C that it corresponds with a certain kind of action E1 by a system S1, one might assume with Kim that C causes S1 to E1. Furthermore, if we say that C corresponds with an alternative kind of action E2 by an alternative system S2, we have reason to assume that C also causes S2 to E2 (noting comparable arguments in Alexander 1920: 43; van Cleve 1990: 221; O'Connor and Wong 2005: 665-70). Consequently, we might justifiably deduce of C that it could, conceivably, correspond to En in virtue of the possibility of Sn (where n is an integer). But this plurality of possible actions, En, is, I suggest, nonsensical, for we can conclude nothing substantive from any given observation concerning the informational properties pertaining to C. This indicates that the observer-independent position on information is an idealised one. When it is said that an entity has causal properties, the term 'property' implies the existence of an informational commodity that can be transmitted up the causal chain. The problem with Kim's stance relates, therefore, to the assumed determination of action by causal entities courtesy of their informational properties; hence his claim that there is a paradox of overdetermination for the emergentist. What I propose instead is that the nature of any given action E, following an impetus from a property C, is indicative not of C being the cause of E at all, but rather of the respective S effecting a particular action courtesy of its particular structure and mechanism of interactive engagement, or, as Bickhard (2009: 553) puts it, that it is internal organisation that is the locus of causal power. The nominal appearance, then, is that C causes E, that is, that there is a direct, mechanistically informed correspondence from any given C to E (a point emphasised by Merleau-Ponty), but this appearance is a deception. Kim's argument holds only when one maintains the orthodox position that information exists as an EASO-independent entity. Under this formulation, the secret connexion is 'information'. And in this idealised role, information just is what it is; determines what it determines; is the difference which causes the difference; and is that which allows for the possible construction, representation, decoding and computation of meaning by EASOs.
This position holds that information is the aether of our modern era, filling every space of existence and providing the required conceptual bridge from interaction without meaning (in physics, chemistry and biology) to meaningfulness in biology and mental content. Alternatively, if one takes the position that information is not a commodity that exists, and, therefore, that information is not passed along the causal chain, then this forces a revision of such things as meaning and its characterisation. How might this revision look? Meaningful information is a dynamic construct of any given EASO. It is expressed through the requirement of the whole in the maintenance of a dynamic stability. In this capacity EASOs exert their influence through meaningful action following interactive engagement. How do we reconcile this with the notion that there is upward influence? This is a difficult question to address, but I suggest that the physical, biological and mental operate at ontologically distinct levels of meaningful engagement. As such, they can all operate in parallel, each effecting meaningful engagement at distinct levels. This view implies that there is no 'upward' causation.

Summary

I hope this paper gives some foundational ideas on how to revisit the concept of information in the context of biological meaning and mental content. The primary objective in Part 1 has been to provoke a critical appraisal of the orthodox view and, in Part 2, to make a case for an alternative stance. This stance indicates that biological meaning takes form as spatiotemporal, functional, structural and qualitative extension. In this regard, it can be interpreted as an informational construct of the world. Environmental interaction leads to modifications of these constructs, which is to say that biological meaning evolves. Notably, then, meaning is not constructed from information. Information is not something that moves from one space or from one time to another. Structures are not made of it. An informational world is not represented meaningfully. There is no causal correspondence or power in the transmission of information in biology. In Part 3, my intention was to show that the distinction is important by indicating that it has profound implications for how we might interpret such things as emergence and causation.
Interface Trap-Free, 100% Yield, Wafer-Scale, Non-Volatile Optically-Guided Memory Array from Cumulatively-Stacked Small Molecules/Fluoropolymer/Copper-Oxide Nanoparticles Heterostructure

Optically-guided memory devices, with their photo-response, allow photodetector and memory functions to be combined in a single device. As a result, the issue of unnecessary signal delay can be alleviated by reducing the metal wiring between the photodetector and the memory device, and the functions of two different devices can be performed simultaneously, which enables an image recognition system to be miniaturized. With such advantages, optically-guided memory is considered highly promising as a potential key component for next-generation applications where image detection and processing capabilities are paramount. Here, a wafer-scale 12 × 12 dinaphtho[2,3-b:2′,3′-f]thieno[3,2-b]thiophene (DNTT)-based optically-guided memory transistor (OMT) array with a cumulatively stacked small-molecules/fluoropolymer/copper-oxide nanoparticles structure is demonstrated. The proposed OMT is formed in 4 different states depending on the light intensity. Furthermore, the read current (I_Read) and threshold voltage (V_th) in the programming state (P-state) of the OMT are maintained stably even after 20 days. Based on the optimized DNTT thickness, a wafer-scale 12 × 12 OMT array is fabricated consisting of 144 devices for text image detection with 100% yield. This study also demonstrates text image detection with non-volatile memory characteristics depending on the presence or absence of light irradiation.

Introduction

In this era of the 4th industrial revolution, many technologies, such as artificial intelligence, are developing at tremendous speeds. For a practical optically-guided memory, several requirements should be satisfied. i) First, the device must show stable operation characteristics, demonstrated by the dual-sweep operation in the transfer curve of the OMT in each memory state. ii) Second, previously reported studies showing memory characteristics through light only showed the programmed state and the erased state [19-21]. In a conventional memory device, this is sufficient only if specific logical values such as 0 and 1 need to be distinguished. However, since the core of an OMT is to store information about light, multi-state memory characteristics according to wavelength or light intensity, which carry the light information, are required. iii) Third, the materials that have primarily been used for a floating-gate have been noble metals such as Au and Ag or high-cost organic materials, thus requiring a high-cost process for mass production that is limited by their scarcity [22-24]. Therefore, there is a need for research on alternative materials that are relatively low-cost and stable even when exposed to ambient air. iv) Fourth, and most emphasized, the cumulative deposition of heterogeneous layers with a trap-free interface is needed. As the floating-gate structure is made through at least four multilayered depositions (i.e., blocking dielectric, floating-gate, tunneling dielectric, and semiconductor channel layer) [25-28], each layer must have a surface without interfacial traps or defects to achieve operational stability and reduced leakage current during programming and erasing. v) Finally, although many studies have been reported to implement OMT characteristics, most studies have demonstrated only the unit device level.
It is essential to obtain a sufficient yield on a large-area substrate and to develop arrays beyond unit devices for applications with more practical functions such as image detection. In this paper, we present a wafer-scale 12 × 12 array of 144 dinaphtho[2,3-b:2′,3′-f]thieno[3,2-b]thiophene (DNTT)-based OMTs with a cumulatively stacked small-molecules/fluoropolymer/copper-oxide nanoparticles (CuO NPs) structure. The proposed OMTs satisfy the above-described five requirements i)-v). To understand the optically-guided memory operation, we analyze the optical programming and erasing characteristics according to the thickness of the DNTT. The proposed OMT is not programmed under dark, while under light irradiation the devices provide four distinct memory states as a function of the light intensity. Furthermore, the read current (I_Read) and threshold voltage (V_th) in the programming state (P-state) of the OMT are maintained stably, i.e. non-volatile, even after 20 days. Based on the explored OMT device, we fabricate a 12 × 12 OMT array consisting of 144 devices on a 4-inch wafer. It is emphasized that the fabricated 144 OMTs exhibit 100% yield of the optically-guided memory operation with no hysteresis behavior, and it is confirmed that the devices have a low variation of I_Read in the P-state and erased state (E-state), demonstrating their uniformity. Finally, we demonstrate text image detection for the letter "G" with non-volatile memory properties, with or without light irradiation during programming, through I_Read and V_th mapping. The proposed OMT was composed of boron-doped silicon with 1.46 × 10^16-4.44 × 10^14 atoms cm^-3 and SiO2 as the back gate and blocking dielectric, respectively. CuO NPs and CYTOP were formed by spin coating on the SiO2 as the floating-gate and tunneling dielectric, respectively. Then, 56 nm thick DNTT and 100 nm thick Au were deposited as the channel and source/drain electrodes using a thermal evaporator. The channel length and width of the fabricated OMT are 100 and 1000 µm, respectively. The fabrication process of the OMT is depicted in detail in Figure S1, Supporting Information. Figure 1a shows a structural illustration of the proposed OMT. The cross-section scanning electron microscope (SEM) image of the fabricated OMT shows that each layer is cumulatively stacked without pin-hole formation (Figure 1b). Figure 1c shows the optical microscopy image of the fabricated OMT. Figure 1d shows the absorbance of the DNTT used as the channel layer in the proposed OMT, which indicates that DNTT absorbs visible light between 450 and 500 nm. Figure 1e shows an atomic force microscopy (AFM) image of the surface of a Si/SiO2 substrate coated with CuO NPs to be applied as the floating-gate. The measured AFM image of the CuO NPs exhibited periodically distributed island-shaped particles with heights of several tens of nanometers, indicating that the CuO NPs are uniformly distributed without aggregation. The implication of the island-shaped particles as a floating-gate for preventing interference between adjacent devices in an array will be discussed below. Regarding the roughness of the CuO NPs surface, it has previously been reported that when DNTT used as a channel layer is deposited on a rough surface, it leads to significantly degraded electrical characteristics, including effective mobility, due to the formation of many grain boundaries [29].
Therefore, we analyzed the surface roughness through AFM images of each layer after depositing the CuO NPs, CYTOP, and DNTT, respectively (Figure 1e-g). The root-mean-square surface roughness (R_q) was 1.829 nm immediately after the CuO NPs coating, and it improved to 0.483 nm after the CYTOP coating. Therefore, the DNTT channel layer could be formed without degradation. On the other hand, after the DNTT was deposited, the surface roughness increased to 7.091 nm; however, this increased DNTT surface roughness enlarges the contact area with the Au electrode and facilitates charge injection [30]. Figure 1h shows a plot comparing the surface roughness of each layer. The OMTs were prepared by sequentially depositing the CuO NPs, CYTOP, DNTT, and Au contact electrodes. A more detailed description of the fabrication process is given in the Experimental Section below. Figure 2a depicts a schematic symbol of the proposed OMT. The scheme for measuring the optically-guided memory characteristics of the OMT is shown in Figure 2b. For the evaluation of the optically-guided memory operation, each of the fabricated OMTs was irradiated by visible-region multiple-wavelength LED light, and the wavelength characteristics are provided in Figure S2, Supporting Information. Figure 2c shows the respective transfer curves of the proposed OMT in the pristine state and the E-state. It is emphasized that hysteresis-free electrical characteristics were observed. The hysteresis-free operation of the proposed OMT, derived from the interfacial trap-free fluoropolymer, was maintained even after erasing/programming operations. After the erasing operation (V_G = -100 V) under dark, the V_th of the OMT was shifted from -4.14 to -20 V. The shift of the V_th toward the negative gate voltage direction indicates that hole carriers are trapped in the floating-gate when the large negative gate voltage (V_G = -100 V) of the erasing operation is applied. After the erasing operation under dark, no shift of the transfer curve was observed following a programming operation (V_G = 100 V) performed under dark; thus, the same memory state as the E-state was maintained (Figure 2d). In contrast, a significant positive shift of V_th (ΔV_th = 21.5 V) in the transfer curve occurred when the programming operation (V_G = 100 V) was performed under light illumination at L_int = 5500 lx (P_inc = 0.81 mW cm^-2), resulting in a clearly distinct memory state (Figure 2e). For a quantitative comparison of the programming behavior with or without light illumination, Figure 2f,g shows I_Read and V_th for the E-state and the two P-states (dark, w/ light). Comparing the I_Read for each state, the P-state formed under dark was almost identical to the I_Read of a few pA at V_D = -1 V and V_G = 0 V in the E-state, so the memory states were not distinguishable. On the other hand, the P-state formed under light of L_int = 5500 lx led to a significantly increased I_Read = 2 × 10^-7 A. Therefore, the I_Read ratio of P-state to E-state (I_P-state/I_E-state) could reach up to 2.34 × 10^5, which is a 10^5-fold improvement compared to I_P-state/I_E-state = 2.36 for the P-state formed under dark. The increased I_P-state/I_E-state enabled a clear distinction between the P-state and E-state. The V_th shift according to the presence or absence of light irradiation during the programming operation also showed the same trend as the change in I_Read.
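As a quick sanity check, the ratio arithmetic quoted above can be reproduced in a few lines of Python. Note that the E-state current below is back-solved from the reported ratio rather than taken directly from a measurement, so it is an inferred value.

```python
# Sanity check of the read-current ratios reported for the OMT.
# I_P_light and ratio_light are taken from the text; the E-state
# current is inferred by back-solving the ratio.
I_P_light = 2e-7        # A, read current after programming under 5500 lx
ratio_light = 2.34e5    # reported I_P-state / I_E-state under light

I_E = I_P_light / ratio_light
print(f"implied E-state read current: {I_E:.2e} A (~{I_E * 1e12:.2f} pA)")

ratio_dark = 2.36       # reported ratio when programming under dark
print(f"light vs dark programming gain: {ratio_light / ratio_dark:.1e}x")
```

The script returns an implied E-state current of roughly 0.9 pA, consistent with the "few pA" quoted above, and a light-over-dark programming gain of about 10^5, matching the reported improvement.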
The V_th of the P-state (under dark) was measured to be -19.55 V, shifted by only 0.42 V (a 1.05% change at the operating voltage of -40 V) compared to that of the E-state: a negligible difference. On the other hand, the V_th of the P-state (w/ light) formed under light irradiation (L_int = 5500 lx) was located at 1.53 V, shifted by 21.5 V (a 53.75% change at the operating voltage of -40 V). The memory states modulated by light irradiation thus demonstrate the light-dependent memory characteristics of the proposed OMT. The proposed OMT properly switched its state depending on the erasing and programming operations as a function of time (Figure S3, Supporting Information). To further investigate the optically-guided memory properties, we evaluated the retention characteristics of the OMT (Figure 2h). The P-state and E-state of the proposed OMT were stably maintained with a programming/erasing current ratio of about 10^6 or more even after 2000 s. This high I_P-state/I_E-state ratio enables the formation of several multi-memory states corresponding to intermediate values between I_P-state and I_E-state, which will be presented in the next paragraphs. Furthermore, we further evaluated the retention characteristics of the proposed OMT, and the measured retention test showed that I_Read and V_th were maintained unchanged even after 20 days (Figure S4, Supporting Information). To understand the operating mechanism of the proposed OMT, we performed ultraviolet photoelectron spectroscopy (UPS) and UV-vis spectroscopy to investigate the energy band structure of the DNTT; Figure 3a,b presents the secondary cut-off region and valence band edge region. The Fermi level (E_F) and highest occupied molecular orbital (HOMO) level of the DNTT measured by UPS were equal to -4.42 and -4.97 eV, respectively. In addition, the energy bandgap (E_g) of the DNTT measured from UV-vis spectroscopy was 2.61 eV (Figure 3c), and we additionally extracted the lowest unoccupied molecular orbital (LUMO) level of the DNTT as -2.36 eV based on the UPS- and UV-vis-based HOMO level and energy band gap values. Considering that the work function (W_F) of Au is equal to -4.7 eV according to a previously reported study [31], with the energy band information of the DNTT and Au mentioned above, we represented the energy band diagrams as shown in Figure 3d. The difference between the HOMO level of DNTT (-4.97 eV) and the W_F of Au (-4.7 eV), corresponding to the hole injection barrier (Φ_b,hole), is 0.27 eV. On the other hand, the electron injection barrier (Φ_b,electron), the difference between the LUMO level of DNTT and the W_F of Au (-4.7 eV), is 2.34 eV, which is 8.6 times larger than the hole injection barrier. In the structure of the DNTT channel and Au contact electrode, the formation of a low hole injection barrier (Φ_b,hole = 0.27 eV) and a high electron injection barrier (Φ_b,electron = 2.34 eV) implies that hole injection is preferable while electron injection is limited, which is directly related to the programming mechanism of the proposed OMT. The energy band diagram of the proposed OMT explaining the operating mechanism is provided in Figure 3e. When the programming operation (V_G = 100 V) was performed under dark, the device showed almost the same transfer curve characteristics as in the E-state despite the high positive gate voltage (Figure 2d).
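The band-alignment arithmetic behind this injection asymmetry follows directly from the quoted UPS and UV-vis values; a minimal sketch (all energies in eV relative to vacuum, values as quoted in the text):

```python
# Reconstruct the DNTT/Au energy-level arithmetic from the quoted values.
HOMO_dntt = -4.97   # eV, from the UPS valence band edge
E_gap     =  2.61   # eV, from the UV-vis absorption onset
LUMO_dntt = HOMO_dntt + E_gap   # = -2.36 eV, as reported
WF_au     = -4.70   # eV, Au work function (literature value cited as [31])

phi_hole = abs(HOMO_dntt - WF_au)   # hole injection barrier
phi_elec = abs(LUMO_dntt - WF_au)   # electron injection barrier
print(f"LUMO: {LUMO_dntt:.2f} eV")
print(f"hole barrier: {phi_hole:.2f} eV, electron barrier: {phi_elec:.2f} eV")
print(f"barrier ratio: {phi_elec / phi_hole:.1f}")   # ~8.7 (text rounds to 8.6)
```

Holes therefore face a 0.27 eV barrier while electrons face 2.34 eV, which is why electron injection from the Au contact is effectively blocked in the dark.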
As mentioned above, electron injection into the DNTT cannot be achieved due to the significantly high electron injection barrier, Φ_b,electron, as large as 2.34 eV, which results in no trappable electrons in the floating-gate CuO NPs (Figure 2n). The difficulty of electron injection from the Au into the DNTT still exists when programming under light irradiation. However, when light is irradiated, even if electrons are not injected from the Au, excitation of electron carriers occurs in the DNTT that has absorbed photon energy. In this regime, the generated electrons can be trapped in the CuO NPs by a high positive gate voltage (V_G = 100 V) (Figure 3g). The positive shift in the transfer curve appears as a result of the trapping of electrons in the floating-gate (Figure 2e). As the thickness of the DNTT increased, the absorbance increased, as shown in Figure S5, Supporting Information; the increase in absorbance means an increase in the number of excited electrons, and a correspondingly larger positive shift of the transfer curve was obtained, as shown in Figure S6, Supporting Information. The observed DNTT thickness-dependent memory characteristics support the conclusion that the optically-induced programming behavior is due to electron carriers generated from the light absorption of the DNTT. Based on these results, we analyzed whether the proposed OMT can distinguish and store light information depending on the light intensity. Figure 4a shows an illustration of the device structure of the OMT, and Figure 4b,c illustrates the states in which holes and electrons are trapped in the CuO NPs in the E-state and P-state, respectively. We initialized the OMT through the erasing operation (V_G = -100 V, 3 s) before performing the programming operation for storing light information. We used three light intensities of 880 lx (P_inc = 0.13 mW cm^-2), 3800 lx (P_inc = 0.56 mW cm^-2), and 5500 lx (P_inc = 0.81 mW cm^-2) to compare the memory states formed according to the light intensity. We observed that the amount of positive shift in the transfer curve increased as a function of the light intensity (Figure 4d-f). The P-state of the OMT was thus determined by the light intensity irradiated during programming. We denote the programming states as P-state i (880 lx), P-state ii (3800 lx), and P-state iii (5500 lx), respectively. We emphasize that hysteresis-free electrical characteristics were observed in all memory states. These hysteresis-free electrical properties indicate that charge trapping in the CuO NPs occurs only through the erasing/programming operations. As another observation, the hysteresis-free operation indicates that charge trapping did not occur at the interface between the DNTT channel and the CYTOP tunneling dielectric surface. This behavior resulted from, as mentioned previously, the interfacial trap-free fluoropolymer CYTOP used as the tunneling dielectric. Figure 4g,h shows the comparison of I_Read and V_th of the memory states formed as the light intensity increases (from P-state i to P-state iii), respectively. I_Read, which was 1.24 pA in the E-state, increased to 1.12 and 13 nA in P-state i and P-state ii as the light intensity gradually increased to 880 and 3800 lx, respectively. When the high-intensity light (5500 lx) was irradiated, I_Read was measured to be 1.44 µA in P-state iii, about 10^6 times that of the E-state (1.24 pA).
Similarly, V_th, which was measured to be -6.26 V in the E-state, was positively shifted to -1.59 and -0.24 V in P-state i and P-state ii, respectively. Also, V_th shifted positively by as much as 21 V, from -6.26 V (E-state) to 14.73 V (P-state iii), when irradiated with the light intensity of 5500 lx. Since the I_Read increase and V_th shift of the OMT change incrementally as the light intensity increases, light intensity-dependent multi-level memory operation is available in the proposed OMTs. We measured the retention of the four distinct memory states, i.e. the E-state and the three P-states (P-state i, P-state ii, P-state iii) (Figure 4i). The four memory states were maintained unchanged for 2000 s, and in particular, the current difference between adjacent states remained distinguishable, as large as about 100 times. This multi-level operation was enabled by the considerably high current ratio between P-state and E-state, as large as ≈10^7. Next, we fabricated a 12 × 12 OMT array with a total of 144 devices on a 4-inch wafer substrate using the proposed OMT. The fabricated OMT array is shown in Figure 5a. The transfer curves of all OMTs constituting the array in the pristine state are shown in Figure 5b. The transfer curves of the pristine OMTs show that the 144 devices exhibited uniform characteristics, with an average threshold voltage of -2.94 ± 0.64 V. It is worthy of note that the optically-guided memory characteristics operated with a yield of 100% across the evaluated 144 devices, as shown in Figure 5c. All 144 fabricated devices exhibited transfer curve shift behavior arising from light-induced electron trapping in the floating-gate. The linear-scale transfer curves, from which the V_th in the E-state and P-state of all OMTs constituting the 12 × 12 array is approximated, are shown in Figure S7, Supporting Information. We also checked for interference between adjacent devices during programming or erasing. The programming operation of a single OMT did not interfere with the memory states of adjacent OMTs around the programmed device (Figure S8, Supporting Information). This is because the CuO NPs of the floating-gate were dispersed in island form without an additional patterning process, thereby separating the floating-gates of adjacent devices. Figure 5d,e shows mapping images of I_Read when a gate voltage of 0 V is applied in the E-state and P-state, respectively. The measured I_Read mapping images confirmed that the P-state and E-state of all OMTs are clearly distinguished in I_Read through the optically-guided memory behavior, with a high I_P-state/I_E-state of 10^6 (Figure S9, Supporting Information). In addition, the distributions of V_th in the E-state and P-state for the 144 OMTs are given in Figure 5f,g. The V_th distribution was equal to -6.73 ± 0.8 V for the E-state and 30.7 ± 8.2 V for the P-state. Furthermore, we demonstrated a selective image detection test through sequential device-to-device programming in the fabricated 12 × 12 array, as shown in Figure 5h. We determined the illuminated and non-illuminated device addresses representing the text image of "G." As a result, the alphabet "G" image was optically programmed, and the selectively programmed devices provided a distinct I_Read increase compared to that of dark-programmed devices (Figure 5i). The distinct alphabet "G" text image was also obtained in the extracted V_th values of the selectively programmed 12 × 12 OMT array (Figure 5j).
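To illustrate how such an array readout separates programmed from erased pixels, the toy sketch below simulates a 12 × 12 I_Read map with the roughly 10^6 P/E contrast reported above and recovers a stored bitmap with a single threshold. The "G" bitmap and the 20% device-to-device spread are illustrative assumptions, not the actual programming mask or measured statistics.

```python
import numpy as np

# Toy readout of a 12x12 optically-programmed array: illuminated pixels
# sit ~1e6 above erased pixels in I_Read, so one threshold recovers the image.
rng = np.random.default_rng(0)
g_mask = np.zeros((12, 12), dtype=bool)   # illustrative "G" bitmap
g_mask[2, 3:9] = True      # top bar
g_mask[3:9, 2] = True      # left stroke
g_mask[9, 3:9] = True      # bottom bar
g_mask[6:9, 8] = True      # right stroke, lower half
g_mask[6, 6:9] = True      # inner bar

i_e = 1e-12   # A, order-of-magnitude erased-state current (pA range)
i_p = 1e-6    # A, order-of-magnitude programmed-state current (uA range)
# ~20% multiplicative spread mimics device-to-device variation
i_read = np.where(g_mask, i_p, i_e) * rng.normal(1.0, 0.2, size=(12, 12))

threshold = np.sqrt(i_e * i_p)            # geometric mean of the two levels
recovered = i_read > threshold
print("image recovered:", np.array_equal(recovered, g_mask))
```

The geometric-mean threshold sits midway between the two current levels on a logarithmic scale, which is why the large P/E ratio makes the readout robust to device-to-device variation.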
In summary, we demonstrated an optically-guided memory array at the 4-inch wafer scale, consisting of 144 devices with 100% yield. Due to the cumulatively stacked small-molecules/fluoropolymer/CuO NPs structure, all 144 devices had hysteresis-free switching behavior, and the statistical test provided a uniform V_th distribution with 2% and 20.2% variation for the P-state and E-state, respectively. Compared with OMTs reported in the last 3 years (Table S1, Supporting Information) [10,13,19,32-34], our results show the possibility of expanding to a more practical sensor array beyond unit-device-level OMTs by demonstrating a 12 × 12 array of 144 devices with hysteresis-free operation, superior yield as high as 100%, and large-area manufacturing on a 4-inch wafer (Figure 5k). Furthermore, we investigated the optically-guided memory characteristics by means of comprehensive analysis, including AFM, cross-section SEM, UPS, and UV-vis measurements, to understand the operating principle of the OMT. The optically-guided memory behavior can be extended to the realization of image sensor and storage applications with various materials, as long as an injection barrier can be formed for one carrier type (either holes or electrons) between the contact metal and the photoreactive channel. Through the comprehensive investigation in this study, the proposed OMT satisfies the above-mentioned five technical requirements i)-v), opening up possibilities for practical applications beyond the basic exploration level. Thus, we believe that this study will contribute to the development of next-generation applications based on image processing used in future technological areas.

Experimental Section

Synthetic Procedure of Copper-Oxide Nanoparticles and Preparation of Coating Solution: CuO NPs were synthesized via a simple solvothermal method. Copper (II) acetate monohydrate (≥98%, Sigma-Aldrich) and tetramethylammonium hydroxide (TMAH, 10 wt% in H2O) were used as the copper precursor and the hydroxide source, respectively. First, 1.5 mmol of the copper precursor was dissolved in 30 mL of anhydrous ethanol (94-96%, Alfa Aesar) under vigorous stirring. At the same time, 3.25 mL of TMAH solution was added to 6.75 mL of anhydrous ethanol in a separate container. After the copper precursor was completely dissolved, the precursor solution was transferred to an oil bath preheated to 75 °C. The reaction was initiated by adding the TMAH solution to the copper precursor solution dropwise at regular intervals for 10 min. The continuous condensation and hydrolysis reaction was maintained for 2 h under constant stirring at a speed of 500 rpm. As the reaction progressed, the color of the solution changed to black. The as-synthesized CuO NPs were purified and collected through centrifugation in n-hexane (extra pure, DUKSAN) at 4000 rpm for 10 min. To prepare the coating solution, CuO NPs were re-dispersed in a chloroform (anhydrous, ≥99%, Sigma-Aldrich) and absolute ethanol co-solvent (chloroform, 75% v/v) at a concentration of 5 mg mL^-1. The average size of the CuO NPs in the coating solution was measured to be 16.8 ± 4.02 nm (Figure S10, Supporting Information).

Fabrication Process of Optically-Guided Memory Transistor: A 300 nm thick SiO2/Si substrate was prepared. The substrate was cleaned by sonication in ethanol and IPA for 10 min each. For the floating-gate layer, 0.35 mL of the CuO NPs solution was dropped on the Si/SiO2 substrate, spin-coated at 3000 rpm for 30 s, and dried at 120 °C for 30 min.
The tunneling dielectric was formed by mixing CYTOP and its solvent at a 1:10 ratio, dropping 0.2 mL, and spin coating at 1000 rpm for 30 s. In addition, annealing was performed at 120 °C for 1 h to dry the CYTOP solvent. Then, 56 nm thick DNTT and 100 nm thick Au as channel and source/drain electrodes were deposited using a thermal evaporator and patterned with a shadow mask.

Characterization: The size distribution of the CuO NPs in the coating solution was measured by a dynamic light scattering spectrophotometer (ELS-8000, Otsuka Electronics Co. Ltd.). The memory characteristics of the OMT were obtained using a Keithley 4200 semiconductor parameter analyzer in ambient conditions. The memory operation under light irradiation was measured using a multi-wavelength white LED (Figure S2, Supporting Information). The intensities of the multi-wavelength white LED were classified into three levels through a manual lever. Lights of 880, 3800, and 5500 lx, corresponding to P_inc = 0.13, 0.56, and 0.81 mW cm^-2, were used to measure the programming states (P-state i, ii, iii) according to the light intensity, respectively. The white LED was applied only during the programming operation, and the erasing operation was performed under dark.

Supporting Information: Supporting Information is available from the Wiley Online Library or from the author.
MHD waves generated by high-frequency photospheric vortex motions

In this paper, we discuss simulations of MHD wave generation and propagation through a three-dimensional open magnetic flux tube in the lower solar atmosphere. By using self-similar analytical solutions for modelling the magnetic field in a Cartesian coordinate system, we have constructed a 3-D magnetohydrostatic configuration which is used as the initial condition for non-linear MHD wave simulations. For a driver we have implemented a high-frequency vortex-type motion at the footpoint region of the open magnetic flux tube. It is found that the implemented swirly source is able to excite different types of wave modes, i.e. sausage, kink and torsional Alfvén modes. Analysing these waves by magneto-seismology tools could provide insight into the magnetic structure of the lower solar atmosphere.

Introduction

There is observational evidence of a variety of magnetic field structures, i.e. open magnetic flux tubes, solar coronal loops, etc., in the higher layers of the solar atmosphere. These configurations, while in magnetohydrostatic equilibrium and thus relatively long-lived, are subject to wave motions if a perturbation acts on them. The studies of recently reported ubiquitous wave motions observed in a range of magnetic field configurations in the photosphere, chromosphere and corona are of particular interest, since through different mechanisms of wave damping they might be responsible for the heating of the solar plasma (see e.g. Banerjee et al., 2007; Taroyan and Erdélyi, 2009). Furthermore, one can exploit these waves by the technique of solar magneto-seismology to probe the fine structure of the Sun's magnetised and highly dynamic atmosphere (see e.g. Goossens et al., 2002; Arregui et al., 2007; Erdélyi and Fedun, 2007; Goossens et al., 2008; Verth, 2008; Andries et al., 2005, 2009; Ruderman and Erdélyi, 2010). A popular excitation mechanism of waves in magnetic chromospheric and coronal structures is the leakage of oscillatory modes from the inner layers of the solar photosphere along magnetic regions, because the magnetic field favourably shifts the acoustic cut-off frequency. Periodic motions at the footpoint regions of magnetic flux tubes can generate waves that may supply energy to the upper part of the solar atmosphere. For example, granular buffeting motion could be responsible for the excitation of kink (transverse) waves. Turbulence in the convection zone near the surface of the Sun excites solar p-modes (Goldreich and Keeley, 1977). These acoustic waves are transmitted to the photospheric region of the solar atmosphere and are able to drive longitudinal magneto-acoustic waves in magnetic flux tubes. These motions have large spatial scales. A variety of one- and multi-dimensional MHD numerical simulations of wave generation in a solar magnetic flux tube by various periodic vertical or horizontal sources have already been performed (see for example Bogdan et al., 2003; Hasan et al., 2005; Vigeesh et al., 2009; Ofman, 2009; Fedun et al., 2009; Felipe et al., 2010; Fedun et al., 2011). Here we push these frontiers further.
Modern state-of-the-art ground- and space-based solar observational instruments, e.g. the Swedish Solar Telescope (SST), DST/ROSA, Hinode, SDO, etc., deliver the highest spatiotemporal resolution observations of dynamical processes in the lower and upper regions of the solar atmosphere (see e.g. Bonet et al., 2008; Wedemeyer-Böhm and Rouppe van der Voort, 2009; Wedemeyer-Böhm, 2010). Recently, Bonet et al. (2008) discovered localised vortex-type motions created at downdrafts where the plasma returns to the solar interior after cooling down. The reported swirls have diameters of 0.2-1.5 Mm and lifetimes of 3-7 min. They found numerous examples of convectively driven vortex flows with clockwise and counterclockwise rotation. Wedemeyer-Böhm and Rouppe van der Voort (2009) analysed time series of spectral scans through the Ca II 854.2 nm spectral line with the CRISP instrument mounted at the Swedish Solar Telescope and found rotational motions higher up in the chromosphere. They interpret these swirly motions and the associated bright point motions as a direct indication of upper-atmospheric magnetic field twisting and braiding as a result of convective buffeting of magnetic footpoints. We show that such observed rotational motions could be a natural driver for not just Alfvén waves but a range of MHD waves, i.e., slow (SMAW) and fast (FMAW) magnetoacoustic waves. In related studies, authors have analysed the linearised propagation of axisymmetric twists on axisymmetric vertical flux tubes. Open and closed configurations of magnetic flux tubes, which may model e.g. coronal holes and active region loops, have been studied. It was found that torsional Alfvén waves may produce enough energy flux to heat the solar corona (see also Antolin et al., 2008). Verth et al. (2010) have shown that observation of the eigenmodes of torsional Alfvén waves can provide temperature diagnostics of both the internal and surrounding plasma, i.e. these waves are an ideal magneto-seismological tool for probing radial plasma inhomogeneity in solar waveguides. Due to the incompressible nature of the Alfvén wave, the detection of this MHD wave mode in the solar atmosphere is rather challenging. Only recently, Jess et al. (2009), by studying a chromospheric magnetic bright point (MBP) group and analysing periodic Hα spectral line broadening with no intensity variations, clearly showed the existence of this wave mode in the Sun. Numerically, vortex-type motion in radiative MHD simulations of magnetoconvection has recently been found by Shelyag et al. (2011a,b). These authors have shown a direct connection between magnetic vortices and rotary motions of photospheric bright points, and suggested that there may be a connection between the MBP rotation and small-scale swirly motions observed higher in the atmosphere.
Based on these theoretical, observational and numerical results, in this short paper we focus on waves emanating from a spatially localised vortex source at the footpoint region of an open magnetic flux tube. Due to the swirly motion, it is anticipated that such drivers are natural exciters of torsional Alfvén waves. Analysing the evolution of the horizontal cross section of a simulated magnetic flux tube, we are able to establish clear evidence of propagating MHD kink and sausage modes as well.

Self-similar magnetic field constructions are popular for the description of various solar magnetic phenomena. Currently, magnetic fields of this type are widely used in numerical simulations in helioseismology and coronal physics (e.g. Gordovskyy and Jain, 2007; Cameron et al., 2008; Shelyag et al., 2009; Fedun et al., 2011). In our particular three-dimensional case, due to the self-similarity, the open magnetic flux tube is obtained analytically from a set of relations in which B_0z(z) is the vertical z-component of the background magnetic field along the symmetry axis towards the top boundary of the model, r_0 is a radial scaling and G is an arbitrarily chosen function which describes the radial profile of the vertical magnetic field component. In the present case this function is chosen to be exponential, with an arbitrary constant A. To implement this magnetic field configuration into the background hydrostatic equilibrium, we have used the same principles as Fedun et al. (2011) for the two-dimensional case. The constructed 3-D magnetic flux tube is visualised by selected magnetic field lines as demonstrated in Fig. 1. The different colors along the field lines correspond to the magnetic field strength. The magnetic field strength at the footpoint region is B_0 = 1000 G and reduces with height to a few G in the mid-chromosphere (in the horizontal direction the magnetic field has its maximum absolute value at the centre and minimum value at the edges of the flux tube). The footpoint flux tube radius is 100 km.

In the Introduction we noted that vortex-type motion in the photospheric region is widely observed. Swirly motions and the associated MBP motions are a direct indication of upper-atmospheric magnetic field twisting and braiding as a result of convective buffeting of magnetic footpoints. Based on these observational results, we have implemented a vortex-type high-frequency periodic source located at the footpoint of the magnetic flux tube as the driver in our numerical simulations. The V_x and V_y components of the velocity perturbation have a Gaussian spatial distribution in the x-, y- and z-directions, where A_0 is the amplitude of the initial perturbation, Δr and Δz are the half-widths of the Gaussian spatial profiles of the driver in the radial and vertical directions, respectively, T is the period of the driver, and r^2 = x^2 + y^2 is the radial distance. In our numerical simulations Δr = 0.1 Mm and Δz = 0.01 Mm in order to mimic observed drivers.

In Fig. 2 the initial state and the disturbances generated by the vortex driver at a later elapsed time (t = 120 s) are shown as a two-dimensional horizontal slice at height h = 0.12 Mm, i.e.
just above the region where the driver is located in the computational domain. We have plotted a color rendering of the perturbed components of the magnetic field (b_r, b_φ, b_z) and velocities (V_r, V_φ, V_z) in a cylindrical coordinate system. The disturbances excited by the implemented vortex driver in the computational domain are similar to those obtained by Shelyag et al. (2011a,b) in direct radiative MHD simulations of magneto-convection and to the vortex motions observed by Bonet et al. (2008) in photospheric G-band bright points.

The horizontal component of velocity V_x is shown in Fig. 3 as iso-surfaces. Illustrative snapshots are captured at times t = 7.2 and 312 s, respectively. At the centre of each snapshot we have overplotted the vertical cut of the background magnetic field component B_z, which shows the internal magnetic structure of the constructed magnetic flux tube. The high-frequency (T = 30 s) vortex driver is located at the footpoint region of the magnetic flux tube (see the upper left panel of Fig. 3). After ten periods (see the right panel of Fig. 3) we observe a 3-D pattern of the SMAW and FMAW distribution in the computational domain. The SMAWs are located near the axis of the magnetic flux tube and propagate vertically upward, while the FMAWs propagate obliquely. It can be seen that the SMAWs generated by the driver are weak and present only up to the height h = 0.7 Mm, i.e., the SMAWs do not supply considerable energy to the upper part of the chromosphere. Note that in a 3-D geometry we can now clearly see the shape of the generated waves. The SMAWs have a maximum amplitude at the axis of the magnetic flux tube, equal to 800 m s^-1.

Time-distance diagrams of the radial (V_r) and vertical (V_z) components of the velocity, rendered approximately at the axis of the magnetic flux tube, are shown in Fig. 4. As we have noted previously, two types of waves with different phase speeds can be resolved. The SMAW and FMAW propagate with phase speeds V_ph = 3.8 km s^-1 and V_ph = 8.2 km s^-1, respectively. The dark horizontal regions in the time-distance diagram of the radial component (V_r) (see the left panel of Fig. 4) occur due to wave interference (of both SMAW and FMAW) in the radial direction.

In Fig. 5 we have shown iso-contours of the total magnetic field at h = 0.75 Mm for simulation times t ≈ 142, 171, 199 and 227 s. It is clearly seen that the area inside, e.g., the iso-contour at 32 Gauss decreases and increases periodically. This is clear evidence of a sausage mode excited by the vortex driver. Could this be observed? Our prediction of generated sausage waves has recently been confirmed observationally by Morton et al. (2011). Furthermore, this driver also excites the kink mode, as shown by the transverse motion of the magnetic flux tube centre (see Fig. 5). We appreciate that it is difficult to identify the changes of position and areas of iso-contours in static images. To help readers, we have also provided movies in downloadable electronic form, which can be found at http://swat.group.shef.ac.uk/simulations.html.
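Since the exact analytical form of the driver is not reproduced in this excerpt, the sketch below gives one representative Python implementation consistent with the description above: an azimuthal (vortex) velocity perturbation with Gaussian envelopes of half-widths Δr = 0.1 Mm and Δz = 0.01 Mm and a sinusoidal time dependence with period T = 30 s. The amplitude value and the exact radial scaling of the azimuthal flow are assumptions, not the form used in the actual simulations.

```python
import numpy as np

# Representative vortex-type footpoint driver: Gaussian spatial envelope,
# sinusoidal time dependence, purely azimuthal (rotational) velocity field.
A0      = 200.0     # m/s, driver amplitude (assumed, not from the paper)
T       = 30.0      # s, driver period (from the text)
delta_r = 0.1e6     # m, radial half-width (0.1 Mm, from the text)
delta_z = 0.01e6    # m, vertical half-width (0.01 Mm, from the text)

def vortex_driver(x, y, z, t, x0=0.0, y0=0.0, z0=0.0):
    """Return (Vx, Vy) of the rotational driver at (x, y, z) and time t."""
    dx, dy, dz = x - x0, y - y0, z - z0
    envelope = np.exp(-(dx**2 + dy**2) / delta_r**2 - dz**2 / delta_z**2)
    swirl = A0 * envelope * np.sin(2.0 * np.pi * t / T)
    # (-dy, dx) gives a counterclockwise azimuthal flow about the tube axis;
    # scaling by delta_r (rather than by r) keeps the field smooth at r = 0.
    return -dy / delta_r * swirl, dx / delta_r * swirl

# Example: sample the driver near the tube axis at a quarter period
vx, vy = vortex_driver(0.05e6, 0.0, 0.0, T / 4.0)
print(f"Vx = {vx:.1f} m/s, Vy = {vy:.1f} m/s")
```

Evaluating this field on the simulation grid at each time step and adding it to the footpoint velocity would reproduce the qualitative behaviour described above: a swirl that decays over roughly Δr horizontally and Δz vertically and reverses sense every half period.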
Conclusions

The purpose of this short report is to investigate numerically the generation of MHD waves in an open magnetic flux tube by a vortex-type photospheric driver in a very realistic three-dimensional geometry. We have shown that such a driver can excite both SMAWs and FMAWs. The torsional Alfvén wave is also present, and the interesting properties of this mode will be a focus of our future study. There is simply no space available here for a deep analysis. We have not investigated the possible heating contribution of the generated MHD modes. This requires much more detailed numerical and analytical analysis. For more efficient heating due to MHD waves, these waves must transfer their energy to smaller length scales through various types of instability and dissipation mechanisms (see Fedun et al., 2004; Copil et al., 2010). In order to resolve the dissipation numerically, very powerful high-performance computational equipment is needed, and we hope to make the first steps in this direction in the near future.

Fig. 1. Initial magnetic flux tube configuration. Selected colored lines correspond to the magnetic field lines. Also, color indicates the value of the magnetic field strength at different regions of the flux tube. The iso-contours of the initially constant magnetic field are overplotted at the top of the simulation box. Note, only the central part (i.e. x, y = [0.8, 1.2] Mm) of the full computational domain (i.e. x, y = [0, 2] Mm) is shown.

Fig. 2. Rendering of the magnetic field (b_r, b_φ, b_z) and velocity (V_r, V_φ, V_z) perturbation components generated by the vortex-type periodic driver (see Eq. 2) at the footpoint of the magnetic flux tube. Two different times are shown. The top and bottom sets of six horizontal slices at height h = 0.12 Mm correspond to simulation times t = 1.5 s and t = 120.02 s, respectively.

Fig. 3. Snapshots showing the iso-surfaces of the horizontal velocity component V_x during the numerical simulation. From left to right the snapshots are at t = 7.2 and 312 s. At the centre of the numerical domain we have overplotted the 2-D vertical slice of the background B_z component of the magnetic field. Note, rendering of the 3-D numerical data has been constructed using the VAPOR visualisation package.

Fig. 4. The time-distance diagrams of the radial (V_r) and the vertical (V_z) components of the velocity at x = 0.98 Mm and y = 0.98 Mm, i.e., near the axis of the magnetic flux tube.

Fig. 5. A time series of horizontal cuts at height h = 0.75 Mm of the total magnetic field. The strength of the magnetic field is shown as iso-contours. The difference in time between the snapshots is approximately equal to 30 s, i.e., the period of the footpoint driver. Note the clear evidence of the sausage and kink oscillations which propagate along the magnetic flux tube.
Weight Reduction of Automobile Using Glass-Mat Thermoplastic Composites in Spare-Wheel Well

Abstract: The reduction of carbon dioxide emission and light weighting are the most important issues in the automotive industry. Lightweight, high-strength, corrosion-resistant and easily manufactured composite materials have been used in the automotive sector in recent years. Glass-mat thermoplastic (GMT) materials are based on polypropylene, and GMT stands out among the composite materials used in the automotive industry because its long-fiber and/or endless glass-mat reinforcement provides high strength, impact performance, energy absorption and recyclability. The material used in this study utilizes a GMTex-reinforced composite structure with multi-layer glass fiber technology, reinforced with woven and randomly laid fiber structures to enhance the superior properties of GMT. The spare-wheel well (SWW), which includes many parts, weighs about 10 kg and is a potential area for weight reduction. This study has been based on the boundary conditions of the existing sheet material in this area; the current performance has been maintained with the GMT/GMTex material while providing approximately 2 kg of weight reduction. The effects of the orientations of the GMTex reinforcement on crash performance have been verified by rear-impact virtual analyses. Besides, the joining methods and some design criticalities during the integration of the composite SWW part into the BIW (Body in White) are discussed.

Keywords: GMT, GMTex, Spare-Wheel Well, Thermoplastics

INTRODUCTION

The weight of a vehicle is the predominant factor in terms of the reduction of emissions and fuel consumption [1]. As can be seen in Fig. 1, around 75% of the weight of a motor vehicle is divided between body, powertrain, suspension and chassis components. Reducing the weight of the body-in-white means smaller engines can be employed and smaller suspension systems are needed, so reducing the total weight of the body is crucial towards achieving lighter vehicles [2]. Given the automotive industry's demand for light weight and high energy capacity, composite material usage is increasing day by day thanks to the high specific strength and stiffness and high energy absorption capacity of composites [1]. The development of automobile technologies necessitates the use of lighter materials in automobile bodies. Different researchers have implemented different types of composite materials, such as carbon fiber reinforced plastic (CFRP), glass fiber reinforced plastic (GFRP), sheet moulding compound (SMC), and glass mat thermoplastic (GMT), for components such as the bumper beam and the spare wheel well to improve performance, as composites can offer light weight as well as reduced energy consumption [3-5]. Currently, SMC and GMT are widely used because of easy formability and low material and manufacturing costs, even though CFRP and GFRP can offer better mechanical performance. GMT is more appreciated in the automotive industry because of its short shaping and curing cycles. Moreover, GMT is a recyclable material because of its thermoplastic matrix. Nowadays in the European Union (EU), about 75% of end-of-life vehicles are recyclable, i.e. their metallic part. The rest (~25%) of the vehicle is considered to be waste and generally goes to landfills [6]. EU legislation requires the reduction of this waste to a maximum of 5% by 2015. To take this directive into account, in this paper a recyclable GMT material has been chosen as a potential candidate for the spare-wheel well (a closed-section thin-walled structure) construction, and its performance is compared with the reference material, steel, and with non-recyclable composite solutions such as CFRP. GMT stands out as a material with high mechanical strength, high energy absorption capacity and recyclability [2]. Furthermore, GMT provides good resistance to chemicals and moisture at low temperatures [3]. The weights of automotive structural parts can be reduced using these benefits without sacrificing their mechanical performance [4]. The SWW is a common component on most passenger vehicles with a trunk or rear hatch (back door). This round or square pan is mounted into the trunk opening, where it holds an extra wheel, tire iron, and jack [5] (Fig. 1). Moreover, it provides a big advantage for the structure and crash safety of the vehicle; that is to say, it is an accomplished candidate for composite-metal conversion [6].
This paper presents a weight-reduction study using glass-mat thermoplastic composites (GMT) in the spare-wheel well (SWW) of the car body, aiming to reduce the weight of the vehicle without sacrificing mechanical performance. The GMT materials manufactured by Mitsubishi Chemical Advanced Materials Composites are based on polypropylene (PP) and polyamide (PA). Reinforcement is provided by long fibers and/or endless glass-mat technology, and this is the key to GMT's success: unreinforced and short-fiber-reinforced thermoplastics become brittle at low temperatures and shatter dangerously in crashes, whereas the long-fiber technology of GMT ensures high energy absorption before fracture and, thereafter, benign failure behavior without sharp lines of fracture. Based on GMT, the composite GMTex has been further developed for applications demanding high impact resistance, strength, and durability. In the production of these high-performance thermoplastic composites, matrices of polypropylene (PP), polyamide (PA) and thermoplastic polyester (TPP) are reinforced with woven and randomly laid fibers (multi-layer glass fiber technology). GMTs are an established material class used to produce complex components, principally for the automotive industry [7]. GMTex also offers better impact damping than standard GMT [3]. For this reason, in addition to providing more weight-reduction benefit than GMT, it can also replace metal materials in structural parts [7] (Fig. 2) [9].

DEVELOPMENT PROCEDURE The spare-wheel well (Fig 1) is a structural component of the underbody structure between the rear rails and the front of the rear panel. This component should be able to absorb the energy transferred during a rear crash. Many other automotive makers achieve lightweighting by using composite materials instead of steel in the spare-wheel-well area (Fig 3) [8]. GMT is an abbreviation for glass-mat thermoplastic material. GMT consists of a thermoplastic matrix and glass fiber reinforcement, which can be short or continuous [2]. GMT has a short history and rapid development. It has good mechanical properties and a short shaping cycle, and it can be formed into large and complex parts with good dimensional stability (Fig 4) [9]. Its excellent properties, weight-reduction potential and the advantages it provides during production are the main reasons to prefer GMT over other composite materials. A molded GMT part is almost isotropic, with a strength of 50-300 N/mm², and has good impact resistance and recyclability. Therefore, GMT is becoming widespread, especially in the European automobile industry, for producing front ends, seat frames, engine noise shields, bumpers, instrument panel brackets, and so on [2]. Various metal brackets, plastic plugs and insulating materials on the steel part are integrated into the new composite design (Fig 5), which also gave a cost advantage. The rear floor design of the vehicle is almost completely changed by the composite design.
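To make the steel-to-composite substitution logic concrete, the following sketch illustrates the stiffness-equivalent gauge substitution commonly used in such conversions. All material properties and the sheet thickness below are illustrative handbook-style assumptions, not values measured in this study.

# Illustrative sketch: stiffness-equivalent gauge substitution for a
# steel-to-GMT panel conversion. All values are typical handbook
# figures (assumed), not the properties of the specific GMT/GMTex
# grades used in the paper.

E_STEEL = 210e9      # Young's modulus of steel, Pa
RHO_STEEL = 7850.0   # density of steel, kg/m^3
E_GMT = 6e9          # Young's modulus of a glass-PP GMT, Pa (assumed)
RHO_GMT = 1200.0     # density of GMT, kg/m^3 (assumed)

def equivalent_thickness(t_steel: float, e_steel: float, e_comp: float) -> float:
    """Composite thickness giving the same plate bending stiffness.

    Plate bending stiffness scales as D ~ E * t^3, so matching D gives
    t_comp = t_steel * (E_steel / E_comp)**(1/3) (Poisson effects ignored).
    """
    return t_steel * (e_steel / e_comp) ** (1.0 / 3.0)

t_steel = 0.7e-3  # 0.7 mm steel sheet (assumed)
t_gmt = equivalent_thickness(t_steel, E_STEEL, E_GMT)

mass_ratio = (RHO_GMT * t_gmt) / (RHO_STEEL * t_steel)
print(f"GMT thickness for equal bending stiffness: {t_gmt * 1e3:.2f} mm")
print(f"areal mass relative to steel:              {mass_ratio:.2f}")
# -> roughly 2.3 mm of GMT at about half the areal mass of the steel sheet

In practice the achievable saving on a real part is smaller than this idealized panel estimate, since joints, crash load cases, and integrated brackets constrain the design, which is why the study relies on the virtual analyses described next.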
Load Cases for the Spare-Wheel Well The SWW is located at the rear of the vehicle and is assembled by spot welding; it carries a spare tire weighing over 23 kg (Fig. 6). In this study, a finite element model of the SWW zone and all surrounding components was created using Altair HyperMesh software, with the spare wheel and repair kit weights added. NVH, safety and structural virtual analyses were performed at the complete-vehicle level, and target values were thereby defined for the composite material (Fig. 5). A composite material that is to replace the steel part must first meet these target values, so that weight is saved without compromising vehicle performance.

Modal Analysis Modal analysis for NVH calculations is the basic analysis that determines vehicle body performance in the product development process. This analysis is usually performed in each project phase and after body design changes, so that the body NVH performance is constantly monitored. Modal analysis is performed to calculate the natural frequencies of the vehicle; the results are evaluated, and the values are required to be above the target values. For the modal analysis of the vehicle, the BIW (Body in White) model was used (Fig 7). Figure 7. Targets of modal analysis. Natural frequency analysis of the relevant region was carried out, and the results were found to meet the target values (Fig 8).

Torsional Stiffness The torsional stiffness of the vehicle was evaluated with the composite SWW, and there is no decrease compared to the present design (Fig 9). Figure 9. Torsional stiffness calculation.

Rear Crash This component must pass a number of tests. Spare-wheel wells mounted into the vehicle must meet impact requirements: the wheel well must stay attached to the vehicle frame after a crash [5]. For the evaluation of the rear crash performance of the vehicle, the TRIAS 33 procedure was executed. TRIAS 33 basically checks the deformations of the fuel pipes and fuel tank. Within TRIAS 33, a rigid barrier impacts the vehicle from the rear at 52 km/h [11] (Fig 10). Figure 10. Rear impact condition of the vehicle for TRIAS 33 validation [11]. The analyses showed that the amount and position of the GMTex reinforcement in the composite part have a strong effect, especially on the rear crash performance.

FIXING METHOD The main function of adhesives is to join dissimilar materials without direct contact between the parts. They allow load transmission with a more favorable stress distribution than conventional mechanical joining methods. In addition to forming the connection at lower cost and weight, adhesives can provide bond strength equivalent or superior to mechanical joining methods. Design flexibility, improved joint stiffness, and the ability to damp noise and vibrations are the other main advantages of adhesive joining [12]. Instead of being riveted to the BIW, composite wheel wells are elastomerically bonded, generally with the same urethane adhesive system used to bond glass windshields to the frame. To ensure the part stays flat and cures properly against the steel frame, a 10-kg weight is set inside the well for approximately 30 min during assembly; it is removed when the spare tire is placed inside the well [5]. With a structural adhesive paste applied around the composite part, the incoming force is carried across the entire bonding surface, and the water infiltration problem is eliminated.
The composite part and the seating surfaces in contact with it must be designed to allow for paste application. The application cross-section can be seen in Figure 11.

CONCLUSION Although steel is still the most common material used in vehicle bodies, automotive manufacturers have moved to reduce vehicle weight by using alternative materials in order to reduce CO2 emissions. With its easy recycling, short shaping cycle, and high mechanical and chemical performance, GMT is a successful alternative material. In this study, GMT/GMTex material has been used for the SWW design, achieving a 2 kg weight reduction per vehicle (from 12.2 kg to 10.15 kg) with no loss of vehicle performance. Design studies were carried out for the spare-wheel well, the resulting designs were examined by numerical analysis methods, and weak points were improved through the necessary design activities. As a result of these improvements, guided by the numerical analysis reports, the expected results were obtained. It was observed that the geometric forms in the part design have a strong effect on modal behavior and torsional stiffness, while the proportion of GMTex in the GMT-GMTex material governs the rear crash performance. With the use of the composite spare-wheel well in passenger vehicles, it was also observed that the improved NVH quality of the vehicle positively affects the customer's perceived quality. It was found that an improved design supported by numerical analyses can meet the same functional requirements as the sheet-metal architecture.
3,291.4
2020-03-20T00:00:00.000
[ "Materials Science" ]
Iberian Lynx Lynx pardinus Temminck, 1827 (Mammalia: Carnivora: Felidae) in central Spain: trophic niche of an isolated population

Understanding predator-prey relationships is fundamental to developing effective conservation plans. Between 2015 and 2018, we combed 21 transects, each 7 km long, searching for Iberian Lynx Lynx pardinus scat within the province of Madrid in central Spain. In order to minimise the inherent subjectivity of visual identification as much as possible, we performed a double specific nested polymerase chain reaction (PCR) followed by a primer extension assay addressed to two Iberian Lynx diagnostic single nucleotide polymorphisms. Forty-six scat samples were positively identified as belonging to the Iberian Lynx through genetic analysis. From these, we extracted remains of consumed prey, which we determined to the lowest possible taxonomic level, mainly through hair identification. Identified prey was divided into four types: lagomorphs, small mammals, birds, and ungulates. The species' diet composition was described based on the frequency of occurrence (FO) of each prey type and on niche breadth, and was also compared with prior knowledge of the species, using four earlier studies as a comparative reference through the calculation of the niche overlap value. The FO of lagomorphs (39%) was the lowest, while the FO of small mammals (54%) was the highest recorded to date. The niche breadth (0.36) was higher than recorded in prior studies, but still reflects the specialist character of the Iberian Lynx. Niche overlap was low (C = 0.49), showing differences in trophic niche between the population in our study area and the one studied in southern Spain. This does not necessarily indicate that the Iberian Lynx is adept at switching its main prey, an ability that has previously been firmly rejected. It is, however, capable of adapting to alternative prey more often than recorded to date, which could be a behavioural response to the patchy distribution of the European Rabbit Oryctolagus cuniculus in the study area.

INTRODUCTION The Iberian Lynx Lynx pardinus is endemic to the Iberian peninsula (Rodríguez & Delibes 1992) and is regarded as a trophic super-specialist (Ferrer & Negro 2004). Since the 1950s, the Iberian Lynx population has declined continuously (Valverde 1963; Cabezas-Díaz et al. 2009). Only 93 individuals were recorded in 2002 (Guzmán et al. 2004). Following conservation measures such as reintroductions of captive-bred Iberian Lynxes in southern Spain, this population experienced constant growth (Simón et al. 2011; Rodríguez & Calzada 2015), reaching 589 individuals in 2017 (Simón 2018). Furthermore, Cruz et al. (2019) confirmed the presence of Iberian Lynxes outside the currently known range of the species in the southern Iberian peninsula, suggesting the continued existence of a stable population in central Spain within the province of Madrid. The diet of a species is a fundamental aspect of its ecology that depends mainly on the abundance and availability of prey types (Terraube & Arroyo 2011), but also on the learning and experience of individuals (Shipley et al. 2009). A widespread phenomenon in many vertebrate and invertebrate taxa (Bolnick et al. 2003) is the so-called 'niche variation hypothesis'. This occurs when some co-occurring individuals of a species actively select different prey types in their shared environment (Araujo et al. 2011).
The niche variation could be a response to two main factors: (i) a change in environmental conditions that affects prey availability and prompts all individuals of a population to use a larger spectrum of resources, or (ii) each individual continuing to use a narrow range of resources that diverges from that of conspecifics, thus minimizing intraspecific competition (Costa et al. 2008). Understanding predator-prey relationships is fundamental to identifying conservation priorities prior to the design of conservation programmes for vulnerable or endangered species (Popp et al. 2018). Lacking information on these relationships could result in ill-informed conservation strategies that lead to a failure to reach conservation goals and, at the same time, to a gross waste of resources, as occurred in the Doñana National Park with the restocking of European Rabbits Oryctolagus cuniculus (Carro et al. 2019). Knowledge about the diet plasticity of a species is key to assessing the transferability of results obtained in one area to another, and to assessing whether an alternative management will provide similar results (Terraube & Arroyo 2011). A relevant descriptor of the niche is breadth, which is a function of the proportion of each resource used with regard to the total consumed resources (Smith 1982). Therefore, a species that uses a wide range of trophic resources in similar proportions will show a high niche breadth and, consequently, will be regarded as a generalist for the studied resources (Symondson et al. 2002). On the contrary, a species that uses a high proportion of a narrow range of resources will be regarded as a specialist (Shipley et al. 2009). Rodríguez & Delibes (1992) were the last authors to report an Iberian Lynx population in the province of Madrid before Cruz et al. (2019). The territorial and solitary behaviour of the Iberian Lynx (San Miguel 2006; Calzada et al. 2007; Martín et al. 2007) results in a low-density spatial organization that makes it extremely difficult to find and track (Alfaya et al. 2019). The central Spanish population was already small in the early 1990s (Rodríguez & Delibes 1992), remained elusive, and was not considered in the conservation programs initiated in 2002 (Rodríguez & Calzada 2015) that led to the recovery of the population in southern Spain (Simón 2018). In this article, we report the diet composition of the Iberian Lynx based on analysis of scat collected in a study area in central Spain. We discuss the trophic niche breadth of this population in the light of research conducted on the species' diet in southern Spain.

STUDY AREA The research was performed in the western region of the province of Madrid (Figure 1), which is delimited by the boundaries with the community of Castilla-León in the north and northwest, the community of Castilla-La Mancha in the south, and the Manzanares River basin in the east. The study area ranges in elevation from 440 to 2,320 m. It comprises three main landscape regions: (i) the Guadarrama Mountains, a mountainous granitic zone; (ii) the foothills, a gradient of siliceous sand and soft slopes; and (iii) the depression, a terrain characterised by interfluvial hills (Rivas-Martínez 1982; Zabía & del Olmo 2007). The meso-Mediterranean zone is the dominant bioclimatic belt within the study area, but the oro- and supra-Mediterranean zones are also present in the Guadarrama Mountains (Rivas-Martínez 1982).
The main climatic features of the study area are the seasonal variation in temperature between -8°C and 44°C, summer drought, and irregular precipitation ranging from 400 to 2,000 mm per year (Zabía & del Olmo 2007). The landscape in the study area is a mosaic of pastures with scrub and Holm Oak Quercus ilex groves interspersed with villages and patches of agricultural land (Schmitz et al. 2007; Image 1). Local people traditionally use the pastures during the summer for grazing transhumant cattle, periodically perform selective logging in the forests, and clean the understorey (Arnaiz-Schmitz et al. 2018).

Sample collection Evidence of the presence of Iberian Lynxes within the study area was collected between January 2015 and May 2018. We designed 21 transects of 7 km each that were combed by at least two researchers. We searched for scat on foot along pathways and firebreaks, since both the Iberian Lynx and the European Wildcat Felis silvestris usually move along such linear structures (Lozano et al. 2013; Garrote et al. 2014). Sometimes we also combed less regular landscape features, such as the bases of large rocks, the surroundings of Rabbit holes, and riversides, where scat was more likely to be found (Martín et al. 2007). Along these transects, we searched for scat that is morphologically compatible with scat of the Iberian Lynx. This ranges in length from 5 cm to 9 cm and in width from 1.5 cm to 2 cm, and is divided into several fragments (Rodríguez 1993). It ranges in colour from ash-grey to dark-brown, and is entirely covered by a mucous patina when fresh (Iglesias & España 2010). The probability of an erroneous identification of the scat of the Iberian Lynx, however, is high (Boshoff & Kerley 2010; Molinari-Jobin et al. 2012; Garrote & de Ayala 2015). It has often been misidentified due to its similarity with the scat of the European Wildcat and the Red Fox Vulpes vulpes (Palomares et al. 2002). To reduce this probability as much as possible, we performed the specific genetic identification analysis designed by Cruz et al. (2019). This genetic analysis consists of a double specific nested PCR followed by a primer extension assay addressed to two Iberian Lynx diagnostic single nucleotide polymorphisms (SNPs). The product of the double nested PCR is already specific for the Iberian Lynx, since we used the primer DL7F [5'-CTT AAT CGT GCA TTA TAC CTTGT-3'] developed by Palomares et al. (2002), which was aligned to orthologous sequences of carnivores including the Eurasian Lynx Lynx lynx, Canada Lynx L. canadensis, European Wildcat, and Domestic Cat Felis catus in order to select diagnostic positions. We then identified two SNPs specific to the Iberian Lynx. These SNPs were marked with fluorescence and detected through capillary electrophoresis. This method of analysis provides increased sensitivity and straightforward verification of the species of origin through the diagnostic SNPs, and is strongly protected against false positive results. For further details see Cruz et al. (2019).

Content analysis Subsequent to the positive genetic identification of scat samples as belonging to the Iberian Lynx, we analysed the contents of these samples. We used a stereomicroscope to identify and remove remains of consumed prey, such as broken bones, teeth, feathers, and hair. Teeth and bone remains were identified with a stereomicroscope, while feathers and hair required the use of a 40x microscope. We washed hairs, first with distilled water and detergent, and then with 70% alcohol, as described in Teerink (1991).
After drying the hairs, we poured a thin layer of transparent nail varnish over a slide and let it dry for 30 seconds. We then placed each hair on the slide for 30 minutes and covered it with a cover glass. That way, we obtained a hair cuticle mould with a scale pattern showing a certain, although limited, taxonomic value (Short 1978). Removed remains were identified to the family level, except those belonging to Wild Boar Sus scrofa, because of their easy identification. We identified hair using Teerink (1991) and Valente et al. (2015), teeth using Dueñas et al. (1985), and feathers using Dove & Koch (2011). Hutchinson (1957) defined the niche as an n-dimensional hypervolume where the distribution of environmental variables and/or factors would allow a certain species to exist indefinitely. This approach provides a quantitative perspective of the niche concept and, therefore, established the conceptual basis for studies in many different fields of ecology (Smith 1982). We defined the trophic niche as the n-dimensional hypervolume, n being the number of prey types consumed by the target species, constrained by the trophic resources used that would allow the species to exist indefinitely.

Diet composition For the diet description, we grouped consumed prey into four categories: birds, lagomorphs, small mammals, and ungulates. We calculated the frequency of occurrence (FO) of each category with regard to the total of analysed scat samples, and also the niche breadth using Levins' index, B = 1 / Σ p_i², where p_i is the proportion of prey category i among consumed resources. To compare this with other populations, we used the standardisation suggested by Colwell & Futuyma (1971), B_stand = (B - 1) / (n - 1), where n is the number of prey categories consumed. This index shows the degree of specialisation of a certain species: a value close to 0 is indicative of a specialist predator, while a generalist predator shows values close to 1 (Colwell & Futuyma 1971). Both FO and B_stand calculated from the analysed scat samples were compared with prior knowledge of the trophic ecology of the Iberian Lynx. For that, we selected four relevant studies as a comparative reference (Table 1) and regrouped their results into our four prey types. This was not possible for the study by Fedriani et al. (1999), who used a broad classification of prey items, e.g., 'other vertebrates', referring to all non-lagomorph vertebrates. We therefore calculated the FO of each prey category and B_stand for all four reference studies, and compared the results with those obtained in our study area. We compared the trophic niche of the Iberian Lynx population in the province of Madrid (M) with that described in prior studies (A). For the latter, we calculated the average FO of each prey category in the reference studies. Then we used the index formulated by Schoener (1970) for calculating the niche overlap between both populations, M and A: C = 1 - ½ Σ |p_iM - p_iA|, where p_iM is the proportion of occurrence of category i within population M, and p_iA is the same within population A. C takes a minimum value of 0 when there is no overlap, and a maximum of 1 when the proportions of consumed resources are the same in both populations. Lastly, we compared the FO and B_stand obtained in the province of Madrid between the two periods when samples were collected, i.e., spring-summer and autumn-winter. We used Fisher's exact test, which is suitable for small sample sizes.
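For concreteness, the metrics described above can be computed as in the following sketch. All prey counts and reference proportions are invented placeholders (the first two counts are merely chosen so the resulting FOs resemble the reported 39% and 54%); none of these numbers are the study's data.

# Minimal sketch of the niche metrics described above, with placeholder data.
import numpy as np
from scipy.stats import fisher_exact

categories = ["lagomorphs", "small_mammals", "birds", "ungulates"]

# occurrences: number of scat samples (out of n_scats) containing each category
occ_M = np.array([18, 25, 7, 4])   # hypothetical counts for population M
n_scats = 46
fo_M = occ_M / n_scats             # frequency of occurrence per category

# Levins' niche breadth and its standardisation (Colwell & Futuyma 1971)
p = occ_M / occ_M.sum()            # proportion of each prey category
B = 1.0 / np.sum(p ** 2)
B_stand = (B - 1.0) / (len(p) - 1.0)

# Schoener's overlap with a reference population A (hypothetical proportions)
p_A = np.array([0.80, 0.07, 0.08, 0.05])
C = 1.0 - 0.5 * np.sum(np.abs(p - p_A))

# Seasonal comparison: a 2x2 Fisher's exact test per prey category
# (category present/absent x season), suitable for small samples.
present_ss, absent_ss = 14, 17     # spring-summer scats with/without lagomorphs
present_aw, absent_aw = 4, 11      # autumn-winter scats with/without lagomorphs
_, p_value = fisher_exact([[present_ss, absent_ss], [present_aw, absent_aw]])

print(f"FO: {dict(zip(categories, np.round(fo_M, 2)))}")
print(f"B = {B:.2f}, B_stand = {B_stand:.2f}, C = {C:.2f}, Fisher p = {p_value:.2f}")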
RESULTS Between January 2015 and May 2018, we collected 98 scat samples along 21 transects that were each combed twice, once in spring-summer from May to July and once in autumn-winter from October to February. Through genetic analysis, we positively identified 46 of these samples as belonging to the Iberian Lynx, with 31 collected in the spring-summer season and 15 in the autumn-winter season. As our genetic method only allows for identifying scat of the Iberian Lynx, we did not attempt to assign the remaining 52 scat samples to other species. The content analysis of the 46 scat samples revealed an overall niche breadth B_stand of 0.36, with small mammals constituting the majority of prey items. Fisher's exact test shows marginally non-significant (p = 0.07) differences in diet composition between the two seasons considered (Figure 2). Details are provided in Table 2.

DISCUSSION The B_stand (0.36) calculated for the population in our study area shows the specialist character of the Iberian Lynx. However, this value is higher than those obtained for the comparative reference studies in Table 1. Furthermore, the obtained C value of 0.49 shows the trophic niche shift of this population with regard to what was known so far. Figure 3 shows that the trophic niche of the Iberian Lynxes within the study area is directed towards predation on small mammals. The Iberian Lynx is regarded as a trophic specialist, strictly dependent on the European Rabbit (Delibes 1980; Aymerich 1982; Beltrán et al. 1985; Beltrán & Delibes 1991; Calzada & Palomares 1996; Palomares et al. 2001; Ferrer & Negro 2004; Gil-Sánchez et al. 2006). There are such strong links between these two species that the collapse of Rabbit populations can even inhibit the reproductive capability of the Iberian Lynx, which has been interpreted as its 'inability' to switch its main prey (Ferreras et al. 2011). In this research, we compared niche breadth and overlap between a potential population in the central Iberian peninsula and prior knowledge obtained from southern populations. Our results show differences in comparison with those obtained from the four studies used as a comparative reference (Fig. 3). Our study is thus the first record in which lagomorphs are not the main prey, showing a FO about 30% lower than the lowest record so far (70%, Beltrán & Delibes 1991). On the contrary, the FO of small mammals is clearly over-represented (47.5% higher) in comparison with prior studies. A pattern similar to the one observed here was already recorded by Delibes et al. (1975). In that study, carried out in the provinces of Cáceres and Salamanca (closer to our study area than to the southern populations), the recorded FO for Rabbits was 56.5%, while small mammals and birds occurred in 27% and 12% of the samples, respectively. These results, although still different from ours, show a pattern of Iberian Lynxes farther inland feeding on alternative prey other than Rabbits more frequently. Rabbit distribution within our study area shows clear differences between the main landscape regions. The population in the north is naturally fragmented, most likely because of the patchy distribution of suitable habitat (Virgós et al. 2003). In the south, Rabbits are widespread (Blanco & Villafuerte 1993) due to the existence of a high density of boundaries between croplands and scrublands (Calvete et al. 2004), where they find a suitable combination of trophic resources and shelter (Tapia et al. 2014).
As far as we know, there is no more up-to-date information on population trends, but the described spatial arrangement coincides with our field observations throughout the sampling period. The observed pattern in our study area could be a response to: (i) an adaptation of the Iberian Lynx, showing different trophic behaviour in different environments. Note that 65% of the Iberian Lynx scat samples analysed were collected in the landscape region of the Guadarrama Mountains, where Rabbit distribution is patchy. This could lead to the exploration of different trophic niches in areas where Rabbit abundance is lower. A similar pattern was recorded by Sáez-Gómez et al. (2018) and Nájera et al. (2019), who observed Iberian Lynxes preying on Red-necked Nightjar Caprimulgus ruficollis eggs and on Domestic Cats, respectively, as a response to the decline of Rabbit abundance; (ii) an uncertain proportion of our Iberian Lynx scat samples could come from juvenile individuals, whose habitat requirements are less restrictive than those of resident individuals (Gastón et al. 2016). Their trophic plasticity could therefore be wider too, which would add some noise to our results; and (iii) the FO of small species might have been overestimated (Torres et al. 2015). Small prey have more hair and other indigestible matter per unit of body mass, which can cause their occurrence in a higher number of scat samples per unit of consumed mass (Floyd et al. 1978). Despite this, earlier studies on the trophic ecology of the Iberian Lynx did not suggest evidence of over-representation of small prey (Delibes 1980; Aymerich 1982; Beltrán et al. 1985; Beltrán & Delibes 1991; Calzada & Palomares 1996; Palomares et al. 2001; Ferrer & Negro 2004; Gil-Sánchez et al. 2006). Therefore, the results remain comparable. The observed seasonal variation in the diet of the Iberian Lynx in our study area corroborates the results of previous studies on the species (Delibes 1977; Beltrán & Delibes 1991; Gil-Sánchez et al. 2006) as well as on the Eurasian Lynx (Krofel et al. 2011). Lagomorph predation showed a 27% lower value during the spring-summer period, while the consumption of small mammals showed a 10% increase in comparison with the autumn-winter period. Bird predation was only recorded in spring-summer (FO = 21%). B_stand also shows differences between the two seasons, being higher in spring-summer (0.58) than in autumn-winter (0.41). Therefore, during the cold season of the year, the Iberian Lynx consumes a lower variety of trophic resources, whilst this pattern changes in the warm season. This could be motivated by two facts that are likely to produce a synergistic effect: (i) during autumn-winter, when high precipitation and low temperatures occur, the daily activity of prey is reduced, making it less available to Iberian Lynxes (Beltrán & Delibes 1994). During the spring-summer season, on the other hand, climatic conditions are less adverse, which allows for an increase in daily activity and, therefore, a higher availability of different prey species; (ii) the Rabbit reproduction period begins in October-November and can last until June-July, depending on environmental conditions. This produces a maximum peak of abundance just before summer. Rabbits then become the most abundant prey and, as a consequence, predators apply the highest pressure to this single trophic resource. Moreover, Rabbits do not reproduce during summer (Soriguer & Palacios 1994).
Therefore, a rapid and pronounced decrease in Rabbit abundance occurs, forcing Iberian Lynxes to prey on alternative trophic resources (Delibes 1980) for the rest of the summer. Our results reinforce the key role that lagomorphs play in the diet of the Iberian Lynx: this category is the most frequent prey when the diversity of available prey is lower. Here, however, we provide evidence for a lower trophic dependence of the Iberian Lynx on lagomorphs than in the areas of Doñana-Aljarafe and Andújar-Cardeña. In our study area, the Iberian Lynx shows its adaptive capacity, adopting a relatively generalist strategy when trophic diversity is high, and a more specialist strategy when diversity is low. Despite this, the low number of samples collected in the autumn-winter season (n = 15) must be taken into account and, therefore, the pattern shown here may change with a larger dataset. Knowledge of predator-prey relationships is fundamental for the adequate design and implementation of species conservation plans (Popp et al. 2018). The results of our research therefore provide baseline information for designing conservation actions for the Iberian Lynx in central Spain. We show that the Iberian Lynx is capable of adapting to a wider prey spectrum than previously assumed by Ferrer & Negro (2004) and Ferreras et al. (2011). Based on the described pattern, we think that Iberian Lynxes can profit from an increase in prey diversity provided in enrichment programmes carried out at captive breeding centres (Rivas et al. 2016). Familiarising them with a broader prey diversity may enhance the ability of reintroduced individuals to colonise and survive in new territories. Future research efforts on the trophic ecology of the Iberian Lynx should focus on increasing the number of scat samples for the analysis of diet composition, but also on prey availability and on estimating the 'real' proportion that each prey species contributes to the diet by means of correction factors, as suggested by Wachter et al. (2012) and Klare et al. (2011). This will provide more reliable information about the trophic needs of the Iberian Lynx.
5,052.6
2020-02-17T00:00:00.000
[ "Environmental Science", "Biology" ]
Active Learning for Abstractive Text Summarization

Construction of human-curated annotated datasets for abstractive text summarization (ATS) is very time-consuming and expensive, because creating each instance requires a human annotator to read a long document and compose a shorter summary that preserves the key information relayed by the original document. Active Learning (AL) is a technique developed to reduce the amount of annotation required to achieve a certain level of machine learning model performance. In information extraction and text classification, AL can reduce the amount of labor by up to multiple times. Despite its potential for aiding expensive annotation, as far as we know, there have been no effective AL query strategies for ATS. This stems from the fact that many AL strategies rely on uncertainty estimation, while, as we show in our work, uncertain instances are usually noisy, and selecting them can degrade the model performance compared to passive annotation. We address this problem by proposing the first effective query strategy for AL in ATS, based on diversity principles. We show that, given a certain annotation budget, using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores. Additionally, we analyze the effect of self-learning and show that it can further increase the performance of the model.

Introduction Abstractive text summarization (ATS) aims to compress a document into a brief yet informative and readable summary that retains the key information of the original document. State-of-the-art results in this task are achieved by neural seq2seq models (Lewis et al., 2020; Zhang et al., 2020; Qi et al., 2020; Guo et al., 2021; Liu and Liu, 2021) based on the Transformer architecture (Vaswani et al., 2017). Training a model for ATS requires a dataset that contains pairs of original documents and their short summaries, which are usually written by human annotators. Manually composing a summary is a very tedious task, which requires one to read a long original document, select crucial information, and finally write a small text. Each of these steps is very time-consuming, which makes constructing each instance in an annotated corpus for text summarization very expensive. Active Learning (AL; Cohn et al., 1996) is a well-known technique that helps to substantially reduce the amount of annotation required to achieve a certain level of machine learning model performance. For example, in tasks related to named entity recognition, researchers report annotation reduction by 2-7 times with a loss of only 1% of F1-score (Settles and Craven, 2008a). This makes AL especially important when annotation is expensive, which is the case for ATS.
AL works iteratively: on each iteration, (1) a model is trained on the so-far annotated dataset; (2) the model is used to select informative instances from a large unlabeled pool using a query strategy; (3) the informative instances are presented to human experts, who provide gold-standard annotations; (4) finally, the instances with their annotations are added to the labeled dataset, and a new iteration begins. Traditional AL query strategies are based on uncertainty estimation techniques (Lewis and Gale, 1994; Scheffer et al., 2002). The hypothesis is that the most uncertain instances for the model trained on the current iteration are informative for training the model on the next iteration. We argue that uncertain predictions of ATS models (uncertain summaries) are not more useful than randomly selected instances. Moreover, they usually introduce more noise and are detrimental to the performance of summarization models. Therefore, it is not possible to straightforwardly adapt the uncertainty-based approach to AL in text summarization. In this work, we present the first effective query strategy for AL in ATS, which we call in-domain diversity sampling (IDDS). It is based on the idea of selecting diverse instances that are semantically dissimilar from already annotated documents but, at the same time, similar to the core documents of the considered domain. The empirical investigation shows that while techniques based on uncertainty cannot outperform the random sampling baseline, IDDS substantially increases the performance of summarization models. We also experiment with the self-learning technique, which leverages a training dataset expanded with summaries automatically generated by an ATS model trained only on the human-annotated dataset. This approach shows improvements when one needs to generate short summaries. The code for reproducing the experiments is available online. The contributions of this paper are the following: • We propose the first effective AL query strategy for ATS that beats the random sampling baseline. • We conduct a vast empirical investigation and show that, in contrast to such tasks as text classification and information extraction, in ATS uncertainty-based AL query strategies cannot outperform the random sampling baseline. • To our knowledge, we are the first to investigate the effect of self-learning in conjunction with AL for ATS, and we demonstrate that it can substantially improve results on datasets with short summaries.

Related Work Abstractive Text Summarization. The advent of seq2seq models (Sutskever et al., 2014), along with the development of the attention mechanism (Bahdanau et al., 2015), consolidated neural networks as a primary tool for ATS. The attention-based Transformer (Vaswani et al., 2017) architecture has formed the basis of many large-scale pre-trained language models that achieve state-of-the-art results in ATS (Lewis et al., 2020; Zhang et al., 2020; Qi et al., 2020; Guo et al., 2021). Recent efforts in this area mostly focus on minor modifications of the existing architectures (Liu and Liu, 2021; Aghajanyan et al., 2021; Liu et al., 2022).
Active Learning in Natural Language Generation. While many recent works leverage AL for text classification or sequence-tagging tasks (Yuan et al., 2020; Zhang and Plank, 2021; Shelmanov et al., 2021; Margatina et al., 2021), little attention has been paid to natural language generation tasks. Among the works in this area, it is worth mentioning (Haffari et al., 2009; Ambati, 2012; Ananthakrishnan et al., 2013). These works focus on neural machine translation (NMT) and suggest several uncertainty-based query strategies for AL. Peris and Casacuberta (2018) successfully apply AL in interactive machine translation. Liu et al. (2018) exploit reinforcement learning to train a policy-based query strategy for NMT. Although there is an attempt to apply AL in ATS (Gidiotis and Tsoumakas, 2021), to the best of our knowledge, there is no published work on this topic yet.

Uncertainty Estimation in Natural Language Generation. A simple yet effective approach to uncertainty estimation in generation is proposed by Wang et al. (2019). They use a combination of the expected translation probability and the variance of the translation probability, demonstrating that it can handle noisy instances better and noticeably improve the quality of back-translation. Malinin and Gales (2021) investigate ensemble-based measures of uncertainty for NMT. Their results demonstrate the superiority of these methods for OOD detection and for identifying generated translations of low quality. Xiao et al. (2020) propose a method for uncertainty estimation of long sequences of discrete random variables, which they dub "BLEU Variance", and apply it to OOD sentence detection in NMT. It is also shown to be useful for identifying instances of questionable quality in ATS (Gidiotis and Tsoumakas, 2022). In this work, we investigate these uncertainty estimation techniques in AL and show that they do not provide any benefits over annotating randomly selected instances.

Diversity-based Active Learning. Along with the uncertainty-based query strategies, a series of diversity-based methods have been suggested for AL (Kim et al., 2006; Sener and Savarese, 2018; Ash et al., 2019; Citovsky et al., 2021). The most relevant work among them is (Kim et al., 2006), where the authors propose to use a Maximal Marginal Relevance (MMR; Carbonell and Goldstein, 1998)-based function as a query strategy in AL for named entity recognition. This function aims to capture uncertainty and diversity and selects instances for annotation based on these two perspectives. We adapt this strategy to the ATS task and compare the proposed method with it.

Uncertainty-based Active Learning for Text Generation In this section, we give a brief formal definition of the AL procedure for text generation and of uncertainty-based query strategies. Here and throughout the rest of the paper, we denote an input sequence as x = (x_1, ..., x_m) and the output sequence as y = (y_1, ..., y_n), with m and n being the lengths of x and y, respectively.
Let D = {(x^(k), y^(k))}, k = 1, ..., K, be a dataset of pairs (document, summary). Consider a probabilistic model p_w(y | x) parametrized by a vector w. Usually, p_w(y | x) is a neural network, and parameter estimation is done via the maximum likelihood approach: ŵ = argmax_w L(D, w), where L(D, w) = Σ_k log p_w(y^(k) | x^(k)). (1)

Many AL methods are based on greedy query strategies that select instances for annotation by optimizing a certain criterion A(x | D, ŵ), called an acquisition function: x* = argmax_{x ∈ U} A(x | D, ŵ), (2) where U denotes the unlabeled pool. The selected instance x* is then annotated with a target value y* (a document summary) and added to the training dataset: D := D ∪ (x*, y*). Subsequently, the model parameters w are updated, and the instance selection process continues until the desired model quality is achieved or the available annotation budget is depleted. The right choice of an acquisition function is crucial for AL. A common heuristic for acquisition is selecting instances with high uncertainty. Below, we consider several measures of uncertainty used in text generation.

Normalized Sequence Probability (NSP) was originally proposed by Ueffing and Ney (2007) and has been used in many subsequent works (Haffari et al., 2009; Wang et al., 2019; Xiao et al., 2020; Lyu et al., 2020). This measure is given by NSP(x) = 1 - p̄_ŵ(y | x), where we define the geometric mean of the probabilities of the tokens predicted by the model as p̄_ŵ(y | x) = exp((1/n) log p_ŵ(y | x)).

A wide family of uncertainty measures can be derived using the Bayesian approach to modeling. Under the Bayesian approach, it is assumed that the model parameters have a prior distribution π(w). Optimization of the log-likelihood L(D, w) in this case leads to the optimization of the posterior distribution of the model parameters: π(w | D) ∝ exp(L(D, w)) π(w). Usually, the exact computation of the posterior is intractable, and to perform training and inference, a family of distributions q_θ(w) parameterized by θ is introduced. The parameter estimate θ̂ minimizes the KL-divergence between the true posterior π(w | D) and the approximation q_θ̂(w). Given such an approximation, several uncertainty measures can be constructed.

Expected Normalized Sequence Probability (ENSP) is proposed by Wang et al. (2019) and is also used in (Xiao et al., 2020; Lyu et al., 2020): ENSP(x) = 1 - E_{q_θ̂(w)}[ p̄_w(y | x) ]. In practice, the expectation is approximated via Monte Carlo dropout (Gal and Ghahramani, 2016), i.e. by averaging multiple predictions obtained with activated dropout layers in the network.

Expected Normalized Sequence Variance (ENSV; Wang et al. (2019)) measures the variance of the sequence probabilities obtained via Monte Carlo dropout: ENSV(x) = Var_{q_θ̂(w)}[ p̄_w(y | x) ].
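A minimal sketch of these three measures follows, assuming per-token log-probabilities are available for each (possibly stochastic) forward pass of the summarization model; the numbers and the interface are illustrative assumptions, not the paper's implementation.

# Sketch of NSP, ENSP, and ENSV from per-token log-probabilities.
import math
import statistics
from typing import List, Tuple

def nsp(token_logprobs: List[float]) -> float:
    """Normalized Sequence Probability: 1 minus the geometric mean of
    token probabilities, i.e. 1 - exp(mean of token log-probs)."""
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return 1.0 - math.exp(mean_logprob)

def mc_dropout_scores(sample_logprobs: List[List[float]]) -> Tuple[float, float]:
    """ENSP and ENSV from T stochastic passes (MC dropout): 1 minus the
    mean, and the variance, of per-pass geometric-mean probabilities."""
    p_bars = [math.exp(sum(lp) / len(lp)) for lp in sample_logprobs]
    return 1.0 - statistics.fmean(p_bars), statistics.pvariance(p_bars)

# Usage with invented numbers: log-probs of a 4-token generated summary
greedy = [-0.2, -0.9, -0.4, -1.3]
print(f"NSP = {nsp(greedy):.3f}")

# Three MC-dropout passes over the same instance (invented values)
passes = [[-0.3, -1.0, -0.5, -1.2],
          [-0.2, -0.8, -0.6, -1.4],
          [-0.4, -1.1, -0.3, -1.1]]
ensp, ensv = mc_dropout_scores(passes)
print(f"ENSP = {ensp:.3f}, ENSV = {ensv:.5f}")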
BLEU Variance (BLEUVar) is proposed by Xiao et al. (2020). It treats documents as points in some high-dimensional space and uses the BLEU metric (Papineni et al., 2002) for measuring the difference between them. In such a setting, it is possible to calculate the variance of the generated texts in the following way: BLEUVar(x) = (1 / (T(T - 1))) Σ_{i ≠ j} (1 - BLEU(y^(i), y^(j)))², where y^(1), ..., y^(T) are summaries generated for x with Monte Carlo dropout. The BLEU metric is calculated as a geometric mean of n-gram overlaps up to 4-grams. Consequently, when summaries consist of fewer than 4 tokens, the metric is equal to zero, since there will be no common 4-grams. This problem can be mitigated with the SacreBLEU metric (Post, 2018), which smoothes the n-grams with zero counts. When we use this query strategy with the SacreBLEU metric, we refer to it as SacreBLEUVar.

In-Domain Diversity Sampling. IDDS queries instances that are dissimilar to the annotated instances but, at the same time, are similar to unannotated ones (Figure 1). We propose the following acquisition function that implements the aforementioned idea (the higher the value, the higher the priority for annotation): IDDS(x) = λ · (Σ_{x′ ∈ U} s(x, x′) / |U|) - (1 - λ) · (Σ_{x′ ∈ L} s(x, x′) / |L|), (8) where s(x, x′) is a similarity function between texts, U is the unlabeled set, L is the labeled set, and λ ∈ [0; 1] is a hyperparameter. Below, we formalize the resulting algorithm of the IDDS query strategy.

1. For each document x in the unlabeled pool, we obtain an embedding vector e(x). For this purpose, we suggest using the [CLS] pooled sequence embeddings from BERT. We note that using a pre-trained checkpoint straightforwardly may lead to unreasonably high similarity scores between instances, since they all belong to the same domain, which can be quite specific. We mitigate this problem by using task-adaptive pre-training (TAPT; Gururangan et al. (2020)) on the unlabeled pool. TAPT performs several epochs of self-supervised training of the pre-trained model on the target dataset to acquaint it with the peculiarities of the data. 2. Deduplicate the unlabeled pool: instances with duplicates would have an overrated similarity score with the unlabeled pool. 3. Calculate the informativeness scores using the IDDS acquisition function (8). As a similarity function, we suggest using the scalar product between document representations: s(x, x′) = e(x) · e(x′).

The idea of IDDS is close to the MMR-based strategy proposed in (Kim et al., 2006). Yet, despite the resemblance, IDDS differs from it in several crucial aspects. The MMR-based strategy focuses on the uncertainty and diversity components. However, as shown in Section 6.1, selecting instances by uncertainty leads to worse results compared to random sampling. Consequently, instead of using uncertainty, IDDS leverages the unlabeled pool to capture the representativeness of the instances. Furthermore, IDDS differs from the MMR-based strategy in how the diversity component is calculated. MMR directly specifies the usage of the "max" aggregation function for calculating the similarity with the already annotated data, while IDDS uses the "average" similarity instead and achieves better results, as shown in Section 6.2. We note that IDDS does not require retraining an acquisition model, in contrast to uncertainty-based strategies, since document vector representations and document similarities can be calculated before starting the AL annotation process. As a result, no heavy computations are required during AL. Consequently, IDDS does not harm the interactivity of the annotation process, which is a common bottleneck (Tsvigun et al., 2022).
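The IDDS score of Eq. (8) can be evaluated in batch from precomputed document embeddings, as in the following sketch; λ = 0.67 and the query size of 10 follow the paper's settings, while the random vectors standing in for TAPT-adapted BERT embeddings and the function names are illustrative assumptions.

# Minimal sketch of batch IDDS scoring (Eq. 8) over precomputed embeddings.
import numpy as np

def idds_scores(pool_emb: np.ndarray, labeled_emb: np.ndarray,
                lam: float = 0.67) -> np.ndarray:
    """Score = lam * mean similarity to the unlabeled pool
             - (1 - lam) * mean similarity to the labeled set,
    with similarity s(x, x') = e(x) . e(x') (unnormalized dot product)."""
    rep = (pool_emb @ pool_emb.T).mean(axis=1)         # representativeness term
    if labeled_emb.shape[0] > 0:
        div = (pool_emb @ labeled_emb.T).mean(axis=1)  # similarity to labeled set
    else:
        div = np.zeros(pool_emb.shape[0])              # cold start: nothing labeled yet
    return lam * rep - (1.0 - lam) * div

# Usage with random embeddings standing in for real document vectors
rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 768))      # unlabeled pool embeddings
labeled = rng.normal(size=(30, 768))     # already annotated documents
scores = idds_scores(pool, labeled)
query = np.argsort(-scores)[:10]         # top-10 instances to annotate next
print(query)

Because the embeddings are fixed throughout AL, only the labeled-set term changes between iterations, which is why no model retraining is needed to make a query.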
Self-learning Pool-based AL assumes that there is a large unlabeled pool of data. We propose to use this data source during AL to improve text summarization models with the help of self-learning. We train the model on the labeled data and generate summaries for the whole unlabeled pool. Then, we concatenate the generated summaries with the labeled set and use this data to fine-tune the final model. We note that the generated summaries can be noisy: irrelevant, grammatically incorrect, or factually inconsistent, and they can harm the model performance. We detect such instances using the uncertainty estimates obtained via NSP scores and exclude the k_l% of instances with the lowest scores and the k_h% of instances with the highest scores. We choose this uncertainty metric because, according to our experiments in Section 6.1, high NSP scores correspond to the noisiest instances. We note that adding the filtration step does not introduce additional computational overhead, since the NSP scores are calculated simultaneously with the summary generation for self-learning.

Active Learning Setting We evaluate IDDS and other query strategies using the conventional scheme of AL annotation emulation applied in many previous works (Settles and Craven, 2008b; Shen et al., 2017; Siddhant and Lipton, 2018; Shelmanov et al., 2021; Dor et al., 2020). For uncertainty-based query strategies and random sampling, we start from a small annotated seeding set selected randomly. This set is used for fine-tuning the summarization model on the first iteration. For IDDS, the seeding set is not used, since this query strategy does not require fine-tuning the model to make a query. On each AL iteration, we select the top-k instances from the unlabeled pool according to the informativeness score obtained with a query strategy. The selected instances with their gold-standard summaries are added to the so-far annotated set and are excluded from the unlabeled pool. On each iteration, we fine-tune a summarization model from scratch and evaluate it on a held-out test set. We report the performance of the model on each iteration to demonstrate the dynamics of the model performance depending on the invested annotation effort. The query size (the number of instances selected for annotation on each iteration) is set to 10 documents. We repeat each experiment 9 times with different random seeds and report the mean and the standard deviation of the obtained scores. For the WikiHow and PubMed datasets, on each iteration we use a random subset of the unlabeled pool, since generating predictions for the whole unlabeled dataset is too computationally expensive. In the experiments, the subset size is set to 10,000 for WikiHow and 1,000 for PubMed.

Baselines We use random sampling as the main baseline. To our knowledge, in the ATS task, this baseline has not yet been outperformed by any other query strategy. With this baseline, an annotator is given randomly selected instances from the unlabeled pool, which means that AL is not used at all. We also report the results of uncertainty-based query strategies and of an MMR-based query strategy (Kim et al., 2006) that has been shown to be useful for named entity recognition.
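The NSP-based filtering step of the self-learning procedure described above can be sketched as follows; the NSP values are randomly generated placeholders, and only the k_l / k_h settings come from the paper (k_l = 10, k_h = 1 for AESLC/Gigaword, k_l = 38, k_h = 2 for WikiHow, as reported in Section 6.3).

# Sketch: drop the k_l% lowest- and k_h% highest-NSP pseudo-labeled instances.
import numpy as np

def filter_pseudo_labels(nsp_scores: np.ndarray, k_l: float, k_h: float) -> np.ndarray:
    """Return indices of generated summaries to keep. High NSP means high
    uncertainty, so k_h trims the noisiest tail, while k_l trims the most
    confident (often trivial) tail."""
    lo = np.percentile(nsp_scores, k_l)
    hi = np.percentile(nsp_scores, 100.0 - k_h)
    return np.where((nsp_scores >= lo) & (nsp_scores <= hi))[0]

rng = np.random.default_rng(1)
nsp_scores = rng.uniform(0.0, 1.0, size=10_000)   # placeholder NSP values
kept = filter_pseudo_labels(nsp_scores, k_l=10, k_h=1)
print(f"kept {kept.size} of {nsp_scores.size} generated summaries")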
Metrics Quality of Text Summarization. To measure the quality of the text summarization model, we use the commonly adopted ROUGE metric (Lin, 2004). Following previous works (See et al., 2017; Nallapati et al., 2017; Chen and Bansal, 2018; Lewis et al., 2020; Zhang et al., 2020), we report ROUGE-1, ROUGE-2, and ROUGE-L. Since we found the dynamics of these metrics to coincide, for brevity we keep only the results with the ROUGE-1 metric in the main part of the paper. The results with the other metrics are presented in the appendix.

Factual Consistency. Inconsistency (hallucination) of the generated summaries is one of the most crucial problems in summarization (Kryscinski et al., 2020; Huang et al., 2021; Nan et al., 2021; Goyal et al., 2022). Therefore, in addition to the ROUGE metrics, we measure the factual consistency of the generated summaries with the original documents. We use SummaC-ZS (Laban et al., 2022), a state-of-the-art model for inconsistency detection, with granularity = "sentence" and model_name = "vitc".

Datasets We experiment with three datasets widely used for the evaluation of ATS models: AESLC (Zhang and Tetreault, 2019), PubMed (Cohan et al., 2018), and WikiHow (Koupaee and Wang, 2018). AESLC consists of emails with their subject lines as summaries. WikiHow is a collection of how-to articles with their headlines as summaries. PubMed is a collection of scientific articles from the PubMed archive with their abstracts. The choice of datasets is stipulated by the fact that AESLC contains short documents and summaries, WikiHow contains medium-sized documents and summaries, and PubMed contains long documents and summaries. We also use two non-intersecting subsets of the Gigaword dataset (Graff et al., 2003; Rush et al., 2015) of sizes 2,000 and 10,000 for the hyperparameter optimization of ATS models and for additional experiments with self-learning, respectively. Gigaword consists of news articles with their headlines as summaries. The dataset statistics are presented in Table 2 in Appendix A.

Models and Hyperparameters We conduct experiments using the state-of-the-art text summarization models BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020). In all experiments, we use the "base" pre-trained version of BART and the "large" version of PEGASUS. Most of the experiments are conducted with the BART model, while PEGASUS is only used for the AESLC dataset (results are presented in Appendices B, C), since running it on the two other datasets in AL introduces a computational bottleneck. We tune the hyperparameter values of the ATS models using the ROUGE-L score on a subset of the Gigaword dataset. The hyperparameter values are provided in Table 3 in Appendix A. For the IDDS query strategy, we use λ = 0.67. We analyze the effect of different values of this parameter in Section 6.2.

Uncertainty-based Query Strategies In this series of experiments, we demonstrate that the selected uncertainty-based query strategies are not suitable for AL in ATS. Figure 2a and Figures 6, 7 in Appendix B present the results on the AESLC dataset. As we can see, none of the uncertainty-based query strategies outperforms the random sampling baseline for either the BART or the PEGASUS model. The NSP and ENSP strategies demonstrate the worst results, with the former having the lowest performance for both ATS models. Similar results are obtained for the WikiHow and PubMed datasets (Figures 2b and 2c).
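As a side note, ROUGE scores of the kind reported in these experiments can be computed with the rouge_score package (pip install rouge-score); the strings below are toy examples, not dataset instances.

# Sketch: computing ROUGE-1/2/L for a single (reference, generated) pair.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "meeting moved to friday at noon"
generated = "the meeting is moved to friday"
scores = scorer.score(reference, generated)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.2f} recall={s.recall:.2f} f1={s.fmeasure:.2f}")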
In some previous work on NMT, uncertainty-based query strategies outperform the random sampling baseline (Haffari et al., 2009; Ambati, 2012; Ananthakrishnan et al., 2013). Their low results for ATS compared to NMT might stem from the differences between these tasks. Both NMT and ATS are seq2seq tasks and can be solved via similar models. However, NMT is somewhat easier, since its output is tightly constrained by the source text, whereas a document admits many valid summaries, which makes uncertainty estimates of generated summaries far less reliable (see Table 5 in Appendix E).

In-Domain Diversity Sampling In this series of experiments, we analyze the proposed IDDS query strategy. Figure 3a and Figures 10, 11 in Appendix C show the performance of the models with various query strategies on AESLC. We can see that the proposed strategy outperforms random sampling on all iterations for both ATS models and, consequently, outperforms the uncertainty-based strategy NSP. IDDS demonstrates similar results on the WikiHow and PubMed datasets, outperforming the baseline by a large margin, as depicted in Figures 3b and 3c. We also report the improvement of IDDS over random sampling in percentage terms for several AL iterations in Table 4. We can see that IDDS provides an especially large improvement in the cold-start AL scenario, when the amount of labeled data is very small. We carry out several ablation studies for the proposed query strategy. First, we investigate the effect of various models for document embedding construction and the necessity of performing TAPT. Figures 17 and 18 in Appendix F illustrate that TAPT substantially enhances the performance of IDDS. Figure 17 also shows that the BERT-base encoder appears to be better than SentenceBERT (Reimers and Gurevych, 2019) and Longformer (Beltagy et al., 2020).

Second, we try various functions for calculating the similarity between instances. Figures 19, 20 in Appendix F compare the originally used dot product with the Mahalanobis and Euclidean distances on AESLC and WikiHow. On AESLC, IDDS with the Mahalanobis distance persistently demonstrates lower performance, and IDDS with the Euclidean distance shows a performance drop on the initial AL iterations compared to the dot product. On WikiHow, however, all the variants perform roughly the same. Therefore, we suggest keeping the dot product for computing document similarity in IDDS, since it provides the most robust results across the datasets. We also compare the dot product with its normalized version, cosine similarity, on AESLC and PubMed; see Figures 21 and 22 in Appendix F. On both datasets, adding normalization leads to substantially worse results on the initial AL iterations. We deem that this happens because normalization may damage the representativeness component, since the norm of the embedding can be treated as a measure of the representativeness of the corresponding document.

Third, we investigate how different values of the lambda coefficient influence the performance of IDDS. Table 7 and Figure 23 in Appendix F show that smaller values of λ ∈ {0, 0.33, 0.5} substantially deteriorate the performance. Smaller values correspond to selecting instances that are highly dissimilar from the documents in the unlabeled pool, which leads to picking many outliers. Higher values lead to the selection of instances from the core of the unlabeled dataset that are, however, also very similar to the already annotated part. This also results in lower quality on the initial AL iterations. The best and most stable results are obtained with λ = 0.67.
Fourth, we compare IDDS with the MMR-based strategy suggested in (Kim et al., 2006). Since it uses uncertainty, it requires a trained model to calculate the scores. Consequently, the initial query is taken randomly, as no trained model is available on the initial AL iteration. Therefore, we also use a modification in which the initial query is done with IDDS, because it provides substantially better results on the initial iteration. We also experiment with different values of the λ hyperparameter of the MMR-based strategy. Figure 24 illustrates a large gap in performance between IDDS and the MMR-based strategy on AESLC, regardless of the initialization / λ. We believe that this is attributed to the fact that strategies incorporating uncertainty are harmful to AL in ATS, as shown in Section 6.1.

Finally, we compare "aggregation" functions for estimating the similarity between a document and a collection of documents (the labeled and unlabeled pools). Following the MMR-based strategy (Kim et al., 2006), instead of calculating the average similarity between the embedding of the target document and the embeddings of documents from the labeled set, we calculate the maximum similarity. We also try replacing the "average" aggregation function with "maximum" in both IDDS components in (8). Figures 25 and 26 in Appendix F show that "average" leads to better performance on both the AESLC and WikiHow datasets. The importance of diversity sampling is illustrated in Table 6 in Appendix E. We can see that NSP-based query batches contain a large number of overlapping instances. This may partly explain the poor performance of the NSP strategy, since almost 9% of labeled instances are redundant. IDDS, on the contrary, does not produce batches with overlapping summaries at all.

Self-learning In this section, we investigate the effect of self-learning in the AL setting. Figures 4a, 4b illustrate the effect of self-learning on the AESLC and Gigaword datasets. For this experiment, we use k_l = 10, k_h = 1, filtering out 11% of the automatically generated summaries. In both cases, with AL and without, adding automatically generated summaries of documents from the unlabeled pool to the training set improves the performance of the summarization model. On AESLC, the best results are obtained with both AL and self-learning: their combination achieves up to a 58% improvement in all ROUGE metrics compared to using passive annotation without self-learning.
The same experiment on the WikiHow dataset is presented in Figure 4c. To make sure that quality does not deteriorate due to the addition of noisy, uncertain instances, we use k_l = 38, k_h = 2 for this experiment, filtering out 40% of the automatically generated summaries. On this dataset, self-learning reduces performance in both cases (with AL and without). We deem that the benefit of self-learning depends on the length of the summaries in the dataset. AESLC and Gigaword contain very short summaries (fewer than 13 tokens on average; see Table 2). Since the model is capable of generating short texts that are grammatically correct and logically consistent, such data augmentation does not introduce much noise into the dataset, resulting in a performance improvement. WikiHow, on the contrary, contains long summaries (77 tokens on average). Generating long, logically consistent, and grammatically correct summaries is still a challenging task even for state-of-the-art ATS models. Therefore, the generated summaries are of low quality, and using them as an additional training signal deteriorates model performance. Consequently, we suggest using self-learning only if the dataset consists of relatively short summaries. We leave a more detailed investigation of this topic for future research.

Consistency

We analyze how various AL strategies and self-learning affect the consistency of the model output. We measure the consistency of the generated summaries with the original documents on the test set on each AL iteration. Figure 5 shows that on AESLC, the model trained on instances queried by IDDS generates the most consistent summaries across all considered AL query strategies. On the contrary, the model trained on the instances selected by the uncertainty-based NSP query strategy generates summaries with the lowest consistency.

Figure 28 in Appendix G demonstrates that on AESLC, self-learning also improves consistency regardless of the AL strategy. The same trend is observed on Gigaword (Figure 27 in Appendix G). For WikiHow, however, there is no clear trend. Figure 29 in Appendix G shows that all query strategies lead to similar consistency results, with NSP producing slightly higher consistency and BLEUVar slightly lower. We deem that this may be because the summaries generated by the model on WikiHow are of lower quality than the gold summaries regardless of the strategy; this leads to biased scores of the SummaC model, with similar results on average.

Query Duration

We compare the average duration of AL iterations for various query strategies. Figure 30 in Appendix H presents the average training time and the average duration of making a query. Training a model takes considerably less time than selecting instances from the unlabeled pool for annotation; therefore, the duration of an AL iteration is mostly determined by the efficiency of the query strategy. The IDDS query strategy does not require any heavy computations during AL, which also makes it the best option for keeping the AL process interactive. A small bookkeeping sketch combining both measurements is given below.
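As an implementation note tying the last two subsections together, the per-iteration evaluation can record both the mean consistency score and the wall-clock cost of the query. A minimal sketch, with hypothetical model and strategy interfaces; a SummaC-style NLI scorer is assumed behind `consistency_fn` (this is not SummaC's actual API):

```python
import time
import numpy as np

def evaluate_iteration(model, strategy, unlabeled, test_docs, consistency_fn):
    """Per-iteration bookkeeping: mean document-summary consistency on the
    test set, plus the wall-clock duration of making one AL query."""
    t0 = time.perf_counter()
    batch = strategy.select(model, unlabeled)           # hypothetical interface
    query_seconds = time.perf_counter() - t0
    summaries = [model.generate(d) for d in test_docs]  # hypothetical API
    consistency = float(np.mean(
        [consistency_fn(d, s) for d, s in zip(test_docs, summaries)]))
    return batch, consistency, query_seconds
```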
Conclusion

In this work, we conduct the first study of AL in ATS and propose the first active learning query strategy that outperforms the random sampling baseline. The query strategy selects for annotation the instances with high similarity to the documents in the unlabeled pool and low similarity to the already annotated documents. It outperforms random sampling in terms of ROUGE metrics on all considered datasets. It also outperforms random sampling in terms of the consistency score calculated via the SummaC model on the AESLC dataset. We also demonstrate that uncertainty-based query strategies fail to outperform random sampling, yielding the same or even worse performance. Finally, we show that self-learning can improve the performance of an ATS model in terms of both ROUGE metrics and consistency. This is especially favorable in AL, since there is always a large unlabeled pool of data. We show that combining AL and self-learning can give an improvement of up to 58% in terms of ROUGE metrics.

In future work, we look forward to investigating IDDS in other sequence generation tasks. This query strategy might be beneficial for tasks with highly variable output, where uncertainty estimates of model predictions are unreliable and cannot outperform the random sampling baseline. IDDS promotes the representativeness of instances in the training dataset without leveraging uncertainty scores.

Limitations

Despite their benefits, the proposed methods require some conditions to be met to be successfully applied in practice. The IDDS strategy requires TAPT of the embedding-generating model, which may be computationally expensive for a large dataset. Self-learning, in turn, may harm performance when the summaries are too long, as shown in Section 6.3. Consequently, its application requires a detailed analysis of the properties of the target-domain summaries.

Ethical Considerations

It is important to note that active learning is a method of biased sampling, which can lead to biased annotated corpora. Therefore, active learning can be used to deliberately increase the bias in datasets. Our research improves active learning performance; hence, our contribution would also make it more efficient at introducing bias. We also note that our method uses pre-trained language models, which usually contain various types of biases themselves. Since bias affects all applications of pre-trained models, this can also unintentionally facilitate a biased selection of instances for annotation during active learning.

A Dataset Statistics and Model Hyperparameters

Table 2: Dataset statistics. We provide the number of instances in the training and test sets and the average lengths of documents and summaries in terms of tokens. All the datasets are English-language. We filter the WikiHow dataset since it contains many noisy instances: we exclude instances whose documents have 10 or fewer tokens and instances whose summaries have 3 or fewer tokens. (Columns: Dataset, Subset, Num. instances, Av. document len., Av. summary len.)
Table 3: Hyperparameter values and checkpoints of the models from the HuggingFace repository (Wolf et al., 2019). We imitate the low-resource case by randomly selecting 200 instances from the Gigaword training set as a train sample and 2,000 instances from the validation set as a test sample for evaluating consistency. For each model, we find the optimal hyperparameters according to evaluation scores on the sampled subset. The maximum generation length is set to the maximum summary length in the available labeled set. For the WikiHow and PubMed datasets, we reduce the batch size and increase the number of gradient accumulation steps by the same factor due to a computational bottleneck. Hardware configuration: 2x Intel Xeon Platinum 8168 CPUs (2.7 GHz, 24 cores); NVIDIA Tesla V100 GPU with 32 GB of VRAM.

Figure 2: ROUGE-1 scores of BART-base with various uncertainty-based strategies compared with random sampling (baseline) on various datasets. Full results are provided in Figures 6, 8, and 9, respectively.

Figure 4: ROUGE-1 scores of the BART-base model with the IDDS and random sampling strategies, with and without self-learning, on AESLC, Gigaword, and WikiHow. Full results are provided in Figures 14, 15, and 16, respectively.

Figure 6: The performance of the BART-base model with various uncertainty-based strategies compared with random sampling (baseline) on AESLC.

Figure 9: The performance of the BART-base model with various uncertainty-based strategies compared with random sampling (baseline) on PubMed.

Figure 17: Ablation study of the document embedding model and of the necessity of performing TAPT for it in the IDDS strategy, with BART-base on AESLC.

Figure 24: Comparison of IDDS with the MMR-based strategy suggested in (Kim et al., 2006), with BART-base on AESLC; we experiment with different λ values in MMR and different initialization schemes.

Figure 26: Comparison of the average and maximum aggregation functions in IDDS with BART-base on WikiHow.

Figure 30: Average duration in seconds of one AL query of 10 instances with different strategies on the AESLC dataset, with BART-base as the acquisition model. "Train" refers to the average time required for training the model throughout the AL cycle. (Hardware configuration as above.)

Table 1: Example instances selected with the NSP and IDDS strategies (columns: Golden Summary, Generated Summary). Excerpts of documents selected by NSP: "Aquarius - Horoscope Friday, September 8, 2000 by Astronet.com. Powerful forces are at work to challenge you (...) Don't let hurt feelings prevent you from (...)"; "Could I have the price for a 2 day swing peaker option at NGI Chicago, that can be exercised on any day in February 2002. Strike is FOM February, (...)".
In Table 1, tokens overlapping with the source document are highlighted in green, tokens that paraphrase a part of the document (together with the corresponding part) are highlighted in blue, and tokens that cannot be derived from the document are highlighted in red.

Table 4: Percentage increase in ROUGE F-scores of IDDS over the baseline on different AL iterations. "Average" refers to the average increase throughout the whole AL cycle.

Table 6: Share of fully / partly overlapping summaries among batches of instances queried with various AL strategies during AL, using the BART-base model on AESLC. We consider two summaries to be partly overlapping if their ROUGE-1 score is > 0.66. The results are averaged across 9 seeds for all strategies except IDDS, which has constant, seed-independent queries.

Table 7: ROUGE scores on AL iterations for different values of the lambda hyperparameter in IDDS. We mark in bold the largest values w.r.t. the confidence intervals.
8,087.8
2023-01-09T00:00:00.000
[ "Computer Science" ]
Simulation of a randomly percolated CNT network for an improved analog physical unclonable function

Carbon nanotube (CNT) network-based devices are well suited for physically unclonable functions (PUFs) due to the inherent randomness of the CNT network. However, CNT networks can vary significantly during manufacturing due to various controllable process conditions, which have a significant impact on PUF performance; optimization of the process conditions is therefore essential for a PUF with excellent performance. Because it is time-consuming and costly to fabricate devices directly under various conditions, we implement randomly formed CNT networks in simulation and identify the correlations among the CNT network variables that optimize PUF performance. At the same time, by implementing an analog PUF in simulation, we present a 2D-patterned PUF that offers excellent security and can compensate for error-occurrence problems. To evaluate the performance of the analog PUF, we propose a new evaluation method, different from those used for existing digital PUFs, and compare PUF performance as a function of two process variables, CNT density and metallic-CNT ratio, confirming their correlation with PUF performance. This study can serve as a basis for producing optimized CNT PUFs by applying simulation tailored to the process used to form the CNT network.

To address these problems, carbon nanotube (CNT)-based PUFs have been studied and fabricated in recent years [12,13]. Conventional CNTs have been used in various types of CNT network-based thin film transistors (TFTs) due to their numerous advantages, such as excellent electrical properties [14-20], room-temperature processing compatibility [21], transparency [22], and flexibility [23-25]. From a CNT TFT perspective, randomly formed networks and the coexistence of both metallic and semiconducting nanotubes have always been a challenge to overcome [26-30], but from a PUF perspective these properties can provide high reliability. During the CNT solution deposition process, CNTs randomly interconnect with each other to form a network, such that the unique distribution of CNTs within the network is unpredictable and cannot be reproduced identically. A previous study fabricated a PUF device based on solution-processed CNT network TFTs [31]. This demonstrates that the CNT network itself can serve as the secret key for high-level hardware security using a simple process with high CMOS compatibility and a small footprint area. However, it is not clear how to control the yield of semiconducting and metallic CNT connections to maximize the randomness of the CNT network.
In this paper, we introduce a random CNT network-based field-effect transistor (FET) as a PUF, which exploits the randomness of the CNT network in the channel to generate keys. By deriving through simulation the conditions under which the CNT network should be formed, we maximize the randomness and security level of the CNT network-based PUF. CNT networks can vary significantly during manufacturing due to various controllable process conditions, which have a significant impact on PUF performance. To fabricate a PUF with the best performance, experiments under various conditions are required; however, fabricating such devices is very time-consuming, expensive, and material-intensive. Therefore, by implementing CNT networks under various parameter settings in simulation, we establish the correlation between CNT process conditions and PUF performance without direct processing (and without wasted devices) and optimize the CNT PUF process conditions. There are many different process methods for forming CNT networks; in particular, the solution-based CNT network deposition method is completely random, because the positions and orientations of the CNTs in the solution cannot be controlled during deposition. Therefore, this simulation can be applied to any solution-based CNT network deposition method. Additionally, the CNT PUF implemented here is an analog PUF. An analog PUF is distinct from a multibit digital PUF (e.g., ternary or quaternary): a multibit digital PUF uses multilevel yet discretized values, whereas an analog PUF uses continuous values [32]. Therefore, our analog PUF can achieve a much higher level of randomness and security than traditional binary or multibit digital PUFs. Since the existing evaluation methods for digital PUFs cannot be applied to analog PUFs, we propose a new method to evaluate the randomness and uniqueness of the PUF. This work is intended to be of significance for fabricating future CNT PUFs with excellent performance.

Monte-Carlo method-based MATLAB simulation

We implemented two simulation codes in MATLAB. First, the basic code forms a CNT network with random characteristics [33]. We used a 2D thin-film model to reduce the computational demands. We used rand, a random number generation function in MATLAB, to randomly determine the positions and angles of the CNT lines. The number of CNT lines per area, the CNT line length, and the m-CNT ratio were set as random numbers with a normal distribution around the adjustable input values. The result therefore differs completely from trial to trial, but as the number of trials increases significantly, numerical results can be obtained using the Monte-Carlo method. Next, the PUF code treats the CNT network formed by the basic code as a two-terminal device with electrodes on both sides of the x-axis and extracts the resistance value of the device. Contacts form at the points where two CNT lines intersect, and a current path is created through the CNTs connected to both terminals. For the subsequent node analysis, the resistance of the entire CNT network was calculated by combining Kirchhoff's current law and Ohm's law. This process was then repeated m x n times to obtain m x n resistance values, which were extracted into an m x n matrix. A minimal Python sketch of this procedure is given below.
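The following is a minimal, self-contained Python sketch of the procedure just described, not the authors' MATLAB code. It makes one simplifying assumption that the paper does not state explicitly: each CNT stick is treated as an equipotential node, with all resistance concentrated at stick-stick junctions (a common approximation for junction-dominated CNT networks). The junction resistance values are placeholders chosen only to respect the stated four-orders-of-magnitude gap.

```python
import numpy as np

def generate_network(n_cnt, m_ratio, area=3.0, length=1.0, seed=None):
    """Scatter n_cnt sticks with random centers and angles inside an
    area x area window; each stick is metallic with probability m_ratio."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, area, size=(n_cnt, 2))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_cnt)
    half = 0.5 * length * np.column_stack((np.cos(angles), np.sin(angles)))
    return centers - half, centers + half, rng.random(n_cnt) < m_ratio

def _cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def _intersect(p1, p2, p3, p4):
    """Proper-crossing test for segments p1-p2 and p3-p4."""
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def network_resistance(e1, e2, metallic, area=3.0, r_m=1e4, r_s=1e8):
    """Two-terminal resistance with the lines x=0 and x=area as electrodes.
    Assumption: stick = node, resistance sits at junctions; a junction is
    low-resistance only when both sticks are metallic."""
    n = len(metallic)
    left, right = n, n + 1                      # electrode node indices
    G = np.zeros((n + 2, n + 2))                # nodal conductance matrix
    def add(a, b, r):
        g = 1.0 / r
        G[a, a] += g; G[b, b] += g; G[a, b] -= g; G[b, a] -= g
    for i in range(n):
        r_i = r_m if metallic[i] else r_s
        if min(e1[i, 0], e2[i, 0]) <= 0.0:      # stick reaches left electrode
            add(i, left, r_i)
        if max(e1[i, 0], e2[i, 0]) >= area:     # stick reaches right electrode
            add(i, right, r_i)
        for j in range(i + 1, n):
            if _intersect(e1[i], e2[i], e1[j], e2[j]):
                add(i, j, r_m if metallic[i] and metallic[j] else r_s)
    # Nodal analysis (Kirchhoff + Ohm): inject 1 A from left to right and
    # read the voltage drop; a tiny diagonal term handles isolated sticks.
    G += 1e-12 * np.eye(n + 2)
    rhs = np.zeros(n + 2)
    rhs[left], rhs[right] = 1.0, -1.0
    v = np.linalg.solve(G, rhs)
    return v[left] - v[right]                   # R_eff in ohms (I = 1 A)
```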
Results

Figure 1a shows AFM images of the CNT network as a function of CNT deposition time. The deposition times are 1, 3, and 14 min, and the density of the CNT network increases with deposition time. Figure 1b shows the drain current (I_DS) versus gate voltage (V_GS) of CNT network-based transistors fabricated under these deposition-time conditions. The drain voltage is -0.5 V, and the data comprise measurements of 144 devices for each deposition time [34]. As the deposition time increases, i.e., as the density of the CNT network increases, I_on (defined at V_GS = -10 V and V_D = -0.5 V) increases; the probability of a metallic interconnection between the S/D electrodes also increases, and I_off (defined at V_GS = 0 V and V_D = -0.5 V) increases correspondingly. Even when devices are fabricated under the same conditions, their electrical characteristics may vary due to the random nature of the CNT network, and the degree of deviation varies greatly depending on the process conditions. The log-scale I_off value shows a large process deviation under identical process conditions, and the deviation also differs greatly with deposition time, that is, with the density of the CNT network. Figure 1c shows the average and deviation of log(I_off) according to deposition time. Both the average log(I_off) and its deviation depend on density. When the density of the CNT network is either too low or too high, the deviation is reduced; the largest process deviation occurs at a specific CNT network density that is neither too low nor too high. These characteristics can be explained in terms of the number of connecting paths between the S/D electrodes. At low densities, where connecting paths are few, process variability is reduced because there are few possible ways to form a path through which carriers can flow. With too many connecting paths, process variability is also reduced, due to an averaging effect. Only when the number of connecting paths is moderate is the number of possible path configurations maximized, and thus the process deviation reaches its maximum. In the off-current regime (V_GS = 0 V), connecting paths form only through metallic interconnections, so the CNT network density and the semiconducting/metallic-CNT (s/m-CNT) ratio together control the number of connecting paths. In other words, both the CNT network density and the m-CNT ratio are parameters that have a significant impact on the process variation of the CNT network and can be easily controlled. Thus, we evaluated the effects of these two parameters through simulation.
We use MATLAB to conduct the simulations and determine the influence of the CNT network density and the m-CNT ratio. Simulation enables quick and easy examination of how physical parameters influence process deviation by producing results under various conditions without direct processing. The basic code specifies a region from 0 to the desired value on the x- and y-axes and randomly distributes CNTs within this region. The randomly distributed CNTs are represented as line segments, and the rand function is used to give the positions and rotation angles of the lines completely random values. (The range of the rand function used for the positions was limited to the specified x-y area, and for the rotation angles to 0-360 degrees.) The other parameters, including the number of CNT lines per area (CNT density), the m-CNT ratio, and the CNT line length, were set as random numbers with a normal distribution around the input values. Next, the PUF code treats the CNT network formed by the basic code as a two-terminal device with both sides of the x-axis as electrodes and extracts the resistance value of the device. The CNTs exhibit different resistance values depending on their type; the two resistance values are set to differ by four orders of magnitude, which is the minimum difference compared to other references. Current is allowed to flow only through paths made up of CNT lines connecting the two electrodes. This process is then simulated m x n times to obtain m x n resistance values, which are extracted into an m x n matrix.

Figure 2a,b show simulation results produced by the basic code. Figure 2a shows the results of three simulations with a CNT density of 10 #/µm², and Fig. 2b shows the results of three simulations with a CNT density of 50 #/µm². All other conditions are the same: the CNT length is 1 µm, the m-CNT ratio is 30%, and the network area is 3 µm x 3 µm. Both sides of the x-axis are electrodes; among the CNT lines, black represents s-CNTs and red represents m-CNTs. Although the three simulation runs in (a) and in (b) were each performed under identical conditions, i.e., with the same simulation code, the rand function produced CNT networks with completely different shapes. The shape of the CNT network is therefore unpredictable, and a unique shape is obtained in every simulation run.

Figure 2c shows a circuit schematic of one CNT PUF obtained through the PUF code. Using the PUF code, an m x n matrix of resistance values is generated, which can be regarded as an m x n array of two-terminal CNT network devices connected to m word lines (WLs) and n bit lines (BLs). Thus, m x n resistance values for the CNT network devices at each position can be obtained and configured into one PUF. The PUF proposed here is an analog PUF that uses analog data; it has a much larger capacity for the same area than digital bits and offers higher security. Additionally, the different resistance values obtained at each location create a unique 2D pattern for the CNT PUF [32]. The CNT PUF can strengthen encryption by adding matrix information about the locations of the extracted resistance values, and the matrix can be sized appropriately to satisfy the required security level and system specifications.
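Continuing the illustrative Python sketch from the simulation section (again, not the authors' MATLAB code), assembling the analog PUF is a loop over independently generated networks, using the `generate_network` and `network_resistance` helpers defined there:

```python
import numpy as np

def build_puf(m=10, n=10, density=20, m_ratio=0.3, area=3.0, length=1.0):
    """Fill an m x n matrix with two-terminal resistances of independently
    generated random CNT networks, one per (word line, bit line) position."""
    n_cnt = int(density * area * area)   # simplification: fixed stick count
    R = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            e1, e2, met = generate_network(n_cnt, m_ratio, area, length)
            R[i, j] = network_resistance(e1, e2, met, area)
    return R
```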
Discussion

To compare performance with respect to the two variables, CNT density and m-CNT ratio, the values of the two parameters were varied in the basic code. The CNT density was swept from 10 #/µm² to 50 #/µm² in steps of 5 #/µm², and the m-CNT ratio was set to 10%, 20%, 30%, 40%, and 50%. A PUF was then implemented for each condition through the PUF code, and the performance of the PUFs was compared. In this study, 100 simulations were performed with the PUF code to obtain one CNT PUF, and the 100 data points obtained for one CNT PUF were arranged in a 10 x 10 matrix. The characteristics of the PUF device are evaluated using the representative PUF parameters, randomness and uniqueness. Randomness is the diversity of responses to multiple challenges within one PUF. Uniqueness represents the non-correlation between the responses measured from different chips or arrays. Ideally, the responses of two selected arrays should be uncorrelated, and the logic states of PUF devices with ideal uniqueness cannot be predicted even if the states of other arrays are known. For digital PUFs, comparisons between PUFs can be made using binarized data, i.e., arrays of 1s and 0s. For example, randomness is examined via the probability of observing a "1" or "0" in the response of the selected PUF, with an ideal value of 50% over all random responses. Uniqueness is obtained by counting the number of differing responses between two PUFs using the inter-Hamming distance (inter-HD), with an ideal value of 50%. However, since the proposed CNT PUF uses analog data, a different method to evaluate the PUFs must be defined.

First, to evaluate the randomness of the analog data, we normalize the resistance values. A normalization function (Eq. (1)) was used to eliminate order-of-magnitude differences in the resistance values across conditions, where R is the resistance value, µ is the average R value within one PUF device, and Z is the normalized value. By adding a constant value of 5, all data are mapped to the range between 0 and 10, which simplifies calculations. The normalized CNT PUFs can be compared on the same scale and can be displayed in various ways, such as grid heatmaps and contour maps.

Figure 3a shows the distribution of 10 x 10 array PUFs according to CNT density as grid heatmaps. From the left, the CNT density is 10, 20, 30, and 40 #/µm², with the m-CNT ratio fixed at 30% in this simulation. In the heatmaps, darker colors indicate deviation from the median: red represents large resistance values, and blue represents low resistance values.
When the CNT density is 10, 30, or 40 #/µm², the color of the PUF is relatively light, indicating that there is almost no deviation among the data within the PUF. In contrast, the color of the PUF at a CNT density of 20 #/µm² is relatively dark, and the data deviation within the PUF is large. Therefore, the randomness within the PUF is expected to be higher at a certain density (here, CNT density = 20 #/µm²) than under conditions where the CNT density is too high or too low. To compare randomness quantitatively, we used the relative standard deviation (RSD) of the data comprising one PUF. RSD is an indicator used to compare the standard deviations of groups with different scales. When extracting the resistance values of CNT devices, order-of-magnitude differences in the overall distribution of resistance values occur depending on the variables; hence, RSD was applied for comparisons across variables. The RSD value is calculated by dividing the standard deviation (S) of the resistance values by their average value (µ) within the PUF, i.e., RSD = S/µ.

Figure 3b shows the RSD of the resistance values according to CNT density for 100 PUFs in a 10 x 10 array, classified by m-CNT ratio. The RSD decreases rapidly at very high or very low densities, peaking at a particular density. As the m-CNT ratio increases, the RSD peaks at a lower CNT density. For example, when the m-CNT ratio is 20%, the RSD is maximal at a CNT density of 35 #/µm², and when the m-CNT ratio is 50%, the RSD is maximal at a CNT density of 15 #/µm². In other words, the number of possible path configurations depends on the proportion of connecting paths through which current can flow among all paths in the CNT network. As the CNT density increases and the m-CNT ratio increases, the proportion of connecting paths increases, so both variables must be considered together in the evaluation. To confirm the reliability of the RSD parameter, RSD values were extracted while increasing the array size of the PUF, i.e., the number of iterations in the simulation. As shown in Fig. 3c, the RSD values for each variable converge as the number of iterations increases, demonstrating that the RSD parameter is reliable. When the m-CNT ratio is 30%, the RSD is consistently high at a density of 20 #/µm², and the data show that there is a density point that optimizes the CNT PUF for each m-CNT ratio.

Second, we evaluated the uniqueness of each network. Uniqueness represents the degree of difference between two different PUFs, and it was assessed in two ways for the analog PUFs in this study. The first calculates an error factor (Eq. (2)), where i and j index two different PUF elements, k is the array location within a PUF element, and N is the total number of array elements.
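For reference, the RSD randomness metric just defined (RSD = S/µ) is a one-line computation over a PUF's resistance matrix; `build_puf` is the illustrative helper sketched earlier:

```python
import numpy as np

def rsd(puf):
    """Relative standard deviation, RSD = S / mu, computed over all
    resistance values within one PUF (the randomness metric above)."""
    return float(np.std(puf) / np.mean(puf))

# Usage: compare randomness across process conditions.
# print(rsd(build_puf(density=20, m_ratio=0.3)))
```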
Figure 4a shows the error factors calculated for each of 10 PUF devices in a 10 x 10 array. When the CNT density was 15 #/µm², the m-CNT ratio of 40% showed the highest uniqueness, and the uniqueness decreased significantly when the m-CNT ratio was too high or too low. We also assessed uniqueness in a more intuitive manner: the PUF in its matrix state is displayed as a 2D contour-map image, and the differences between the images are compared using an image-matching test. To convert the PUF matrix into a 2D pattern image, it passes through the resistance-normalization process mentioned above. Figure 4b shows PUF images converted to contour maps for four simulations run under the same conditions. Although all four simulation results were obtained under the same conditions, with an m-CNT ratio of 50% and a CNT density of 15 #/µm², completely different shapes were obtained. The contour-map conversion changes the shape of the contour lines depending on the relative positions of adjacent data values, allowing the PUF to be expressed as a complex fingerprint-like pattern, further increasing uniqueness. The image match rate for each pair of these 2D pattern images was calculated using a software program (Prismatic Software Dup Detector v3.0) [32]. The difference (%) was calculated as 1 - image match rate (%) and is shown in Fig. 4c. The difference (%) is also highest for an m-CNT ratio of 40% at a CNT density of 15 #/µm². Comparing Fig. 4b,c shows that the uniqueness measured by both methods shares the same tendency. Therefore, both methods are reliable indicators and allow us to find the optimal conditions for the PUF. Additionally, expressing the PUF as a 2D pattern image solves the problem of errors caused by bit inversion, by exploiting the relative differences between adjacent resistances instead of using absolute values obtained from electrode pairs.
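A contour-map "fingerprint" like the one described above can be rendered in a few lines. Note that the normalization below is an assumed z-score shifted by +5, standing in for the paper's Eq. (1), which maps values into the 0-10 range around 5:

```python
import numpy as np
import matplotlib.pyplot as plt

def puf_fingerprint(puf, fname="puf_pattern.png"):
    """Render a PUF resistance matrix as a 2D contour 'fingerprint'.
    Normalization is an assumed z-score shifted by +5 (stand-in for Eq. (1))."""
    z = (puf - puf.mean()) / puf.std() + 5.0
    plt.contourf(z, levels=20, cmap="RdBu_r")
    plt.axis("off")
    plt.savefig(fname, bbox_inches="tight")
    plt.close()
```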
Conclusion

We implemented a random CNT network-based analog PUF through simulation. Using the rand function and the Monte-Carlo method, a statistical approximation can be obtained by greatly increasing the number of trials, while ensuring that the simulated CNT network is completely unpredictable and random on each run of the same code. Using the PUF code, the resistance values of 100 devices obtained by executing the same basic code 100 times are expressed as a PUF in a 10 x 10 array. This array contains matrix position information and can be represented as a unique 2D pattern. The resulting PUF has excellent security because it uses analog data, and since the 2D contour pattern expresses the relative differences between adjacent values, it solves the error problem caused by bit inversion. To optimize the CNT solution deposition process conditions most suitable for the PUF, the density of the CNTs and the ratio of m-CNTs were varied. A performance evaluation for analog PUFs was provided using methods different from the existing digital PUF performance parameters: randomness was compared using the RSD formula, and uniqueness was compared in two ways, by calculating the error factor of the resistance values and by an image-matching test on the 2D contour maps. The results confirm that both parameters have an optimum at a low CNT network density when the m-CNT ratio is high and at a high CNT network density when the m-CNT ratio is low. Once the CNT solution and processing method are fixed, the corresponding simulation can be used to find the optimal point for the PUF. Furthermore, a PUF that is analog in itself can provide encryption keys through a simple process without an analog-to-digital converter; it closely resembles fingerprint-based encryption and is thus a promising technology for replacing fingerprints in the future.

Figure 1: (a) AFM images of the CNT network according to deposition time. (b) I_DS-V_GS curves of the CNT network transistors; the s-CNT ratio is 90%, and the CNT deposition times are 1, 3, and 14 min, showing differences in electrical properties and process deviation depending on the process conditions. (c) Deviation of log(I_off) according to deposition time, where I_off is the I_DS value at V_GS = 0 V.

Figure 2: (a), (b) Results of basic-code simulations performed three times each, with a CNT density of 10 #/µm² in (a) and 50 #/µm² in (b). (c) Circuit schematic of one CNT PUF obtained through the PUF code.

Figure 3: (a) Grid heatmaps of normalized PUFs; from the left, the CNT densities are 10, 20, 30, and 40 #/µm². (b) RSD of the resistance values according to CNT density, plotted for m-CNT ratios of 10, 20, 30, 40, and 50%. (c) RSD of the resistance values according to the number of iterations for each CNT density, with an m-CNT ratio of 30%.

Figure 4: (a) Error factor according to the m-CNT ratio. (b) Four PUFs simulated under the same conditions, shown as contour-map images. (c) Difference (%) according to the m-CNT ratio.
5,189.8
2024-04-16T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Energy Efficient Cluster based Reactive Algorithm in Wireless Sensor Networks

Objectives: To eliminate the hot spot problem and achieve uniform load distribution among cluster heads (CHs), we propose a cluster-based reactive (CBR) algorithm with threshold-based data transmission. Methods/Statistical Analysis: This study achieves its contribution through two phases, namely cluster head selection using a vote-based measure together with the transmission power of sensor nodes, and reactive data transmission. In reactive data transmission, a sensor node sends data only when the sensed value exceeds a threshold value. Findings: A series of experiments is carried out to validate the effectiveness of the CBR algorithm in terms of energy utilization and network lifetime. The experimental results show that the network lifetime of the proposed method is increased by 55.72% and the energy consumption is reduced by 50% compared to LEACH. Application/Improvements: The proposed method can be implemented in real-time WSNs.

Introduction

A Wireless Sensor Network (WSN) consists of a group of small, low-energy sensor nodes with the ability to detect a physical phenomenon and transmit it to a base station (BS) [1]. It is useful in various areas such as border surveillance, power plants, industries, environmental monitoring, industrial automation, and so on [2]. In contrast to conventional wired systems, the deployment cost of a WSN is very low. Further, a WSN has the capability to adapt to varying environmental conditions [3]. The sensing field can be a physical environment, a building, or an information technology structure. A sensor node includes four main components, namely a sensor module, a processor module, a power supply, and a communication module. The sensor module converts the sensed data to electrical form, and each node forwards the sensed data to the BS through intermediate sensor nodes. As sensor nodes operate only on inbuilt battery power and may be deployed in hazardous or difficult environments, it is very hard or impossible to recharge or replace the power supply [4]. Therefore, clustering and routing protocols are needed to enhance the lifetime of a WSN. The lifetime of a WSN can be described as Half of the Nodes Die (HND), First Node Dies (FND), Last Node Dies (LND), and so on; the proper definition of network lifetime mainly depends on the application. For instance, FND can represent the lifetime for critical applications like healthcare monitoring, where the energy depletion of a single sensor node may lead to serious effects. On the other hand, for non-critical applications, the lifetime can be defined as the time until only a particular number of nodes remain alive [5]. In WSNs, routing is a difficult task, since it is affected by features of WSNs that distinguish them from conventional communication networks such as ad hoc networks. Firstly, it is not possible to use a global addressing scheme when deploying nodes. Secondly, in contrast to conventional communication models, every WSN application requires the flow of sensed information from many nodes to a specific BS. Thirdly, many nodes may acquire identical data in a nearby region, which leads to high data redundancy. These redundant data have to be handled by the routing protocol to utilize bandwidth and energy efficiently.
Furthermore, sensor nodes are constrained in transmission power, battery capacity, and bandwidth, so there is a need for better routing to make the best use of the available energy. Several studies have shown that energy utilization is substantially reduced by the use of clustering mechanisms, and different energy-efficient clustering algorithms have therefore been presented [6,7]. Clustering techniques group nearby nodes into clusters based on some criteria, and a leader known as the cluster head (CH) is selected in each cluster. The CH is solely responsible for its cluster, and the remaining nodes are termed cluster members. Though several measures in the literature involve energy consumption criteria, the major drawbacks are high data redundancy and the hot spot issue. The hot spot issue refers to the faster energy depletion of CHs located closer to the BS compared to other CHs. To resolve this problem, unequal clustering schemes were introduced, which construct small clusters near the BS and large clusters far from the BS. The overall unequal clustering model is depicted in Figure 1. To overcome the above-mentioned problems, this paper presents a cluster-based reactive (CBR) algorithm using an unequal clustering mechanism and threshold-based data transmission. The algorithm incorporates two main phases: (i) unequal clustering using a vote-based measure together with the transmission power of sensor nodes, and (ii) reactive data transmission, in which a sensor node sends data only when the sensed value exceeds a threshold value. This reactive approach reduces data redundancy as well as the amount of data transmission. Simulations were performed to highlight the ability of the CBR algorithm to enhance network lifetime with reduced energy consumption, as shown in Table 1.

Literature Survey

LEACH, being a popular and foundational clustering algorithm, is widely used in periodic data-gathering applications in WSNs. Nodes elect themselves as CHs with a small probability. This election probability rests on the assumption that every sensor node starts with a similar amount of energy and that every node has data to send in its time frame. When nodes have different amounts of energy, nodes with higher energy should become CHs more often than nodes with low energy, to ensure that all nodes deplete their energy at a similar time. In HEED [8], every sensor node has a probability of becoming a CH that depends on its residual energy. A sensor that is not yet under any CH doubles its probability of becoming a CH. A sensor selects as its CH the node with the lowest average minimum reachability power (AMRP) when it falls within the cluster radius of several CHs. As in LEACH, every sensor node communicates with its respective CH, and the CHs transmit the aggregated data to the BS through multihop communication. In [9], a general weight-based clustering technique (WCA) was presented, which associates every sensor with a weight. In WCA, the weight is computed using local information about the sensors, such as the transmission power, node degree, mobility, and battery level of the sensor node. CHs are chosen from the nodes with lower weight than their neighbors. This algorithm employs single-hop communication, in which every node directly transmits its data to the CH. In UCS [10], the first unequal clustering strategy was presented for uniform load distribution among the CHs.
The BS is placed at the middle of the target region and gathers data from the WSN. The locations of the CHs are fixed in advance, with all CHs arranged in concentric circles around the BS. Each cluster comprises the nodes in the Voronoi region around its CH. The sensed values of all sensors in a cluster are gathered at the CH, which aggregates the data and forwards it toward the BS. EEUC [11] is an energy-efficient unequal clustering protocol for periodic data-gathering applications. Through the use of unequal clustering along with multihop communication, the nodes are properly organized into clusters. EEUC is a competitive algorithm in which CHs are chosen by local competition, and the intermediate node with the highest remaining energy forwards the data. UCRA [12] employs a voting scheme, similar to the WCA algorithm, to elect CHs. In the cluster setup phase, the nodes exchange information to compute votes, and the node with the maximum vote is selected as CH. The CHs then broadcast control messages to notify the remaining nodes, and those nodes select the most suitable CH to join based on a fitness value. This process iterates until every node belongs to a CH.

Proposed Algorithm

The overall structure of unequal clustering routing is demonstrated in Figure 2, where circles of unequal size denote unequal-size clusters and multi-hop forwarding is indicated by the traffic between CHs. The parameters are given in Figure 2: the maximum competition radius is denoted by R_max, which is predefined; d_0 denotes the radiation radius of every sensor node; and d_min and d_max denote the minimum and maximum distances between the sink and the sensor nodes. Partitioning the nodes into clusters is known as clustering; every cluster comprises a CH and a few normal nodes as its members. A novel voting-based unequal clustering scheme for WSNs is proposed. During the CH competition, the CH is elected mainly based on each node's weight. CHs nearer to the BS should support a smaller cluster size because of their higher energy utilization; consequently, more clusters are produced at locations nearer to the BS. In other words, a decrease in the distance to the BS increases the cluster count and reduces the cluster size. Let R_max be the maximum competition radius, which is fixed. The competition radius R_i of node v_i is a function of its distance to the sink:

R_i = (1 - c * (d_max - d(v_i, BS)) / (d_max - d_min)) * R_max,   (2)

where d(v_i, BS) indicates the distance between v_i and the BS, and c is a constant coefficient lying in the range 0 to 1. The competition radius thus ranges from (1 - c) R_max to R_max according to Equation (2). In the cluster construction phase, the sensors undergo an election process using the vote method. The topology, remaining energy, and transmission energy are the three factors used to elect CHs. The sensors contend with each other during the clustering stage. When a node has many neighbors, each neighboring node gets a smaller vote, since there are more candidate nodes around it. Every neighbor gives a vote to a sensor node, and a sensor with a large number of neighbors tends to receive more votes in total. Thus, for a sensor v_i, the vote it casts for another sensor v_j is proportional to the remaining energy level e_j of v_j and is normalized over the distances d_ik from sensor v_i to its neighbors v_k.
The total vote of sensor v_i is the sum of the votes it receives from all of its voters. An advertisement value A_i is computed for every node v_i, where d_ik denotes the distance between v_i and its k-th neighbor; when a node has no neighbors, the advertisement value is set to 1. A node that receives an advertisement from another node learns the position of that neighbor, and the node with the highest advertisement value is chosen as CH. To determine the advertisement value, a sensor node needs to know the distances to its neighbors and their residual energy, so each node broadcasts its id, position, and remaining energy to its nearby nodes. Hence, each node becomes aware of its neighbors and the corresponding distances. On receiving these messages, every sensor node updates and broadcasts its advertisement to all neighboring nodes. The energy utilization of a CH depends on its node degree: in the fitness computation, d_ij is the distance between cluster head i and node j, and degree_i is the number of its neighboring nodes. When a sensor lies within the competition radius of several CHs, it selects the CH with the maximum fitness value. When the amounts of residual energy are identical, a CH with a high node degree has a lower fitness than one with few neighbors.

Most reactive protocols broadcast information periodically. This increases the number of data transmissions even though the sensed information is highly correlated. To improve energy efficiency, threshold-based data transmission is proposed, in which the CH transmits the following attributes to its members:

Hard Threshold (HT): a threshold on the sensed attribute. It is the absolute attribute value beyond which the node sensing this value should switch on its transmitter and report to the cluster head.

Soft Threshold (ST): a small change in the sensed attribute value that triggers the node to switch on its transmitter and broadcast.

The nodes sense the environment continuously and transmit data within the current cluster period only when both of the following conditions are true:

- The current value of the sensed attribute is greater than the hard threshold.
- The current value of the sensed attribute differs from SV by an amount equal to or greater than the soft threshold.

Whenever a node transmits data, SV is set equal to the currently sensed attribute value. As a result, the hard threshold reduces the number of transmissions by letting nodes transmit only when the sensed value is in the region of interest. The soft threshold further reduces the number of broadcasts by eliminating transmissions that would otherwise have occurred when there is little or no change in the sensed attribute once the hard threshold has been crossed. This transmission rule is summarized in the sketch below.
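A minimal sketch of this reactive transmission rule, assuming SV holds the last transmitted value (names and the radio API in the usage comment are illustrative):

```python
def should_transmit(sensed, sv, hard_threshold, soft_threshold):
    """Reactive (threshold-based) transmission rule described above:
    transmit only when the sensed value is in the region of interest
    (above HT) and has changed by at least ST since the last report."""
    if sensed <= hard_threshold:
        return False                      # outside the region of interest
    return sv is None or abs(sensed - sv) >= soft_threshold

# On each sensing cycle:
# if should_transmit(value, sv, HT, ST):
#     radio.send(value)                  # hypothetical radio API
#     sv = value                         # SV is updated on every transmission
```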
Simulation Results and Discussion

The proposed CBR algorithm is implemented in MATLAB R2014a. A WSN of 200 nodes is randomly deployed in an area of 500 x 500 m². For the energy consumption analysis, the first-order radio model is employed. The cluster construction of the proposed CBR method is illustrated in Figure 3. For validation, the proposed method is compared with the LEACH protocol. The results of the proposed CBR algorithm in terms of the number of clusters are shown in Figure 4.

Figure 5 demonstrates the number of alive nodes over the rounds. It is evident from Figure 5 that the number of alive nodes is highest for the CBR technique compared with LEACH. LEACH selects the CH randomly and does not involve any node parameter in selecting the CH. The proposed voting-based selection and reactive data transmission result in efficient CH selection, so nodes operate longer than they otherwise would; this leads to a higher number of alive sensor nodes in the network. Compared with LEACH, the energy utilization of the CBR algorithm is low. Figure 6 compares the energy utilization of the proposed technique with LEACH: the proposed method consumes less energy than LEACH, showing that the proposed dynamic CH selection gives superior outcomes. Overall, the network lifetime is enhanced by 55.72%, and energy utilization is reduced by half compared with LEACH.

Conclusion

This study presented a CBR algorithm using an unequal clustering mechanism and threshold-based data transmission. The study incorporates two main phases: (i) unequal clustering using a vote-based measure together with the transmission power of sensor nodes, and (ii) reactive data transmission. The reactive approach reduces data redundancy as well as the amount of data transmission. Simulations were performed to highlight the ability of the CBR algorithm to enhance the network lifetime with reduced energy consumption. For the energy utilization analysis, the first-order radio model was employed; the network lifetime of the proposed method is increased by 55.72%, and the energy consumption is reduced by 50% compared to LEACH.
3,630.2
2019-08-01T00:00:00.000
[ "Computer Science" ]
Existence and Large Time Behavior of Entropy Solutions to One-Dimensional Unipolar Hydrodynamic Model for Semiconductor Devices with Variable Coefficient Damping

In this paper, we investigate the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of Euler-Poisson equations with time- and space-dependent damping, on a bounded interval. Firstly, we prove the existence of entropy solutions through the vanishing viscosity method and the compensated compactness framework. Based on uniform estimates of the density, we then prove that the entropy solutions converge to the corresponding unique stationary solution exponentially in time. We generalize the existing results to the variable coefficient damping case.

Introduction

The present paper is concerned with the one-dimensional isentropic Euler-Poisson model for semiconductor devices with damping,

ρ_t + m_x = 0,
m_t + (m²/ρ + P(ρ))_x = ρE − H(x, t) m,     (1)
E_x = ρ − b(x),

where the space variable x ∈ [L₁, L₂] (L₁ and L₂ are two positive constants) and the time variable t ∈ [0, T] (T > 0). Here, ρ ≥ 0, m, H(x, t), P(ρ), and E stand for the electron density, the electron current density, the damping coefficient, the pressure, and the electric field, respectively. We assume the damping coefficient H(x, t) is bounded, and the pressure function is given by P(ρ) = p₀ρ^γ, where p₀ = θ²/γ and θ = (γ − 1)/2. Here, γ is the adiabatic exponent, and γ > 1 corresponds to the isentropic case. The doping profile b(x) ≥ 0 stands for the density of the fixed, positively charged background ions. In this paper, we assume

0 < b_* ≤ b(x) ≤ b^*,     (2)

where b_* and b^* are two positive constants. The initial-boundary value conditions of system (1) are given in (3), where ρ₀(x) satisfies (4).

Firstly, let us survey the related mathematical results. In 1990, Degond and Markowich [1] first proved the existence and uniqueness of the steady state of (1) in the subsonic case, which is characterized by a smallness assumption on the current flowing through the device. The existence of a local smooth solution to the time-dependent problem was proved in [2] using Lagrangian mass coordinates. However, Chen-Wang [3] showed that smooth solutions may blow up in finite time; it is therefore worthwhile to consider the existence and other properties of weak solutions. As for weak solutions, Zhang [4] and Marcati-Natalini [5] proved the global existence of entropy solutions to the initial-boundary value and Cauchy problems for γ > 1, respectively. Li [6] and Huang et al. [7] proved the existence of L^∞ entropy solutions of (1) with γ = 1 on a bounded interval and on the whole space by using a fractional Lax-Friedrichs scheme. It is worth noting that the L^∞ estimates of the entropy solutions, especially the estimate of the density, in all of the above works [4-7] depend on the time t, which prevents studying their large time behavior. We refer to [8-10] for more results on this model and topic. In this paper, for 1 < γ ≤ 3 and variable coefficient damping, we shall first verify the assumption in [11], where the density is assumed to be uniformly bounded with respect to space x and time t, and then use the entropy inequality to study the large time behavior of the obtained solutions. Based on the related results in [12-16], we are convinced that the method developed in this paper can also be applied to the bipolar Euler-Poisson system with time-dependent damping; we will investigate this problem in future papers. To state our main theorem, we define the entropy solution of system (1) as follows.

Definition 1.
For every T > 0, a triple of bounded measurable functions v(x, t) = (ρ(x, t), m(x, t), E(x, t)) is called an L^∞ weak solution of (1) with initial-boundary condition (3) if the corresponding weak formulation holds for any test function φ ∈ C₀^∞([L₁, L₂] × [0, T)), and the boundary condition is satisfied in the sense of divergence-measure fields [17]. Furthermore, we call the weak solution (ρ, m, E)(x, t) an entropy solution if the entropy inequality holds in the sense of distributions for any convex weak entropy pair (η(ρ, m), q(ρ, m)).

The stationary solution of problems (1) and (3) is the smooth solution of (7) with the boundary condition (8). Our main results in this paper are as follows.

Theorem 3 (Existence). Let 1 < γ ≤ 3, and assume that the initial data and the damping coefficient satisfy the stated bounds for some positive constants M₀ and M₁. Then there exists a global entropy solution (ρ, m, E)(x, t) of the initial-boundary value problems (1) and (3) satisfying uniform bounds with a constant C independent of t.

Preliminary and Formulation

We consider the homogeneous system (13). Firstly, we use r₁ and r₂ to denote the right eigenvectors corresponding to the eigenvalues λ₁ and λ₂; a simple calculation gives

λ₁ = m/ρ − θρ^θ,  λ₂ = m/ρ + θρ^θ.

The Riemann invariants (w, z) are given by

w = m/ρ + ρ^θ,  z = m/ρ − ρ^θ,     (15)

satisfying ∇w · r₁ = 0 and ∇z · r₂ = 0, where ∇ = (∂_ρ, ∂_m) is the gradient with respect to U = (ρ, m). A pair of functions (η, q): ℝ × ℝ⁺ ↦ ℝ² is called an entropy-entropy flux pair of system (13) if it satisfies ∇q = ∇η ∇f, with f the flux function of (13). Furthermore, if, for any fixed m/ρ ∈ (−∞, +∞), η vanishes on the vacuum ρ = 0, then η is called a weak entropy. For example, the mechanical energy-energy flux pair

η*(ρ, m) = m²/(2ρ) + p₀ρ^γ/(γ − 1),  q*(ρ, m) = m³/(2ρ²) + γp₀ m ρ^{γ−1}/(γ − 1),

is a strictly convex weak entropy pair. We approximate the equations in (1) by adding artificial viscosity to obtain smooth approximate solutions (ρ^ε, m^ε); that is, we consider system (18) with the initial-boundary value conditions (19), where M in (18) is a sufficiently large constant to be determined later and the initial data in (19) are smoothed by the standard mollifier with small parameter ε. We shall prove that the viscosity solutions of (18) and (19) are uniformly bounded with respect to the time t.

Viscosity Solutions and A Priori Estimates

For any fixed ε > 0, we denote the solution of (18) and (19) by (ρ^ε, m^ε, E^ε). Since E^ε(x, t) is uniquely determined by ρ^ε(x, t), b(x), and E₋, the system (18) may be seen as a system with the unknowns ρ^ε and m^ε. Regarding the proof of local existence of the approximate solutions, the techniques used in this article are similar to those used in [19]. To extend the local solution to a global one, the key point is to obtain a uniform upper bound on ρ^ε and |m^ε| and a lower bound on the density ρ^ε. The following theorem gives the uniform bound on (ρ^ε, m^ε).

Proof. (For simplicity of notation, we omit the superscript and write (ρ, m) for (ρ^ε, m^ε).) By the formulas for the Riemann invariants (15), we can decouple the viscous perturbation equation (18) into the equations (21) for w and z. We set the control functions (φ, ψ) as in (22); a direct calculation then gives (23). Define the modified Riemann invariants (w̄, z̄) as in (24). Inserting the above formulas into (21) yields the decoupled equations (25) for w̄ and z̄, which we rewrite as (26) with coefficients as in (27); in the above calculation, we have used the relations (28). Noting that 0 < θ ≤ 1 and |H(x, t)| ≤ M₁, and choosing M ≥ (2/(1 − θ))M₁, we obtain the desired sign conditions. On the other hand, (27) yields the corresponding bounds. Using the same calculations as in [18], we estimate the approximate electric fields and obtain a bound with a constant M₂ that depends only on the initial data.
Thus, taking M big enough, we obtain the desired bounds, and the initial-boundary values satisfy the corresponding conditions. Based on the above discussion and the corresponding comparison lemma from the literature, we arrive at (35); by (35), the uniform bound follows, and the proof of Lemma 7 is completed. From (20), the velocity u = m/ρ is uniformly bounded, i.e., |u| ≤ C. Then, following the same approach as [20], we obtain the lower bound of the density, (37). Based on the local existence of smooth solutions, the uniform upper estimates (Lemma 7), and the lower bound estimate of the density (37), we derive the following lemma.

Through Lemma 8 and the compensated compactness framework established in [19, 21-23], we can show that there is a subsequence of (ρ^ε, m^ε) (still denoted (ρ^ε, m^ε)) converging to a limit (ρ, m). Furthermore, it is clear that (ρ, m) is an entropy solution of the initial-boundary value problems (1) and (3). This completes the proof of Theorem 3.

Large Time Behavior of Weak Solutions

This section is devoted to the proof of Theorem 5. Firstly, for the stationary solution, we have the following statement from the result in [24].

Lemma 9. Under the assumption (2) on b(x), there exists a unique solution (ρ̃, Ẽ) to problems (7) and (8) satisfying uniform bounds with a constant C that depends only on γ, b_*, and b^*.

Now we shall show that the entropy solution (ρ, m, E) obtained in Theorem 3 converges strongly to the corresponding stationary solution (ρ̃, Ẽ) in the L² norm with an exponential decay rate. From (7) and (8), we derive the corresponding identities, and we define a new auxiliary function y. From (1) and (7), we obtain the evolution equation (44). Multiplying (44) by y and integrating from L₁ to L₂ yields a differential inequality (45). Lemma 7 of [25] tells us that there exist two nonnegative constants C̃₁ and C̃₂ such that the estimate (46) holds. Substituting (46) into (45), we obtain (47). Additionally, denote the relative entropy-entropy flux pair accordingly. From the entropy inequality (16), the corresponding inequality holds in the sense of distributions, and we use the theory of divergence-measure fields [17] to arrive at the decay estimate. Let Λ be sufficiently large so that Λ > b^*/δ₀ + ‖ρ̃‖_{L^∞} + 1.

Data Availability

This paper uses the method of theoretical analysis.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2,303
2020-11-22T00:00:00.000
[ "Engineering", "Physics" ]
Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting

Introduction Comparative opinion quintuple extraction (COQE) is an essential subfield of natural language processing (NLP). Its primary objective is to extract five specific components from comparative sentences, namely: subject, object, shareable aspect, comparative opinion, and preference, as defined in (Liu et al., 2021). For example, in the sentence "Like the viewfinder, the Nikon D80 has the same sensor as the D200.", "Nikon D80" and "D200" are respectively the subject and object entities, the aspect term is "sensor", the opinion word is "same", and the comparative preference is "Equal". COQE plays a crucial role in various applications, such as comparative opinion mining (Jindal and Liu, 2006b; Wang et al., 2010; Ma et al., 2020), sentiment analysis (Schouten and Frasincar, 2015; Zhang et al., 2022; Aftab et al., 2022), and customer satisfaction estimation (Ando et al., 2022). The existing pipeline-based method (Liu et al., 2021) suffers from error propagation, and its heavy reliance on extensive annotated data poses a bottleneck in the training process. To address these issues, we propose a data augmentation method with prompting for low-resource COQE. Firstly, we propose a BERT-based (Devlin et al., 2018) end-to-end deep learning model as our backbone to avoid error propagation. Although existing LLMs such as ChatGPT possess rich linguistic knowledge and impressive generative capabilities, they encounter difficulties in generating satisfactory quintuple examples due to the inherent complexity of COQE. In this paper, we therefore develop a lightweight data augmentation in which only triple examples need to be generated for augmentation, instead of unabridged quintuple examples; it is considerably easier for ChatGPT to produce qualified triple examples than quintuples. Additionally, we leverage these generated triple examples to warm up the end-to-end extraction model before training it on the benchmark quintuple dataset. To summarize, the main contributions of our work are as follows: • We introduce an end-to-end model framework that better suits data augmentation methods and avoids error propagation. Additionally, we propose a two-stage data augmentation approach for low-resource COQE, leveraging the generative capabilities of ChatGPT and transfer learning. • Experimental results demonstrate that our approach yields substantial improvements compared to the baseline and the current state-of-the-art model, resulting in a new highest performance on three COQE datasets. Furthermore, we conduct further analyses and supplementary experiments to verify the effectiveness of our approach. [Figure 1: Overview of the two-stage pipeline on Camera-COQE: obtaining aspect terms, generating pairs of compared entities, generating triplet data, generating sentences, and transfer learning. An example prompt reads: "Subject and object refer to the subject and object entities being compared; the aspect term denotes the comparative aspect term of the two compared entities. For the given aspect term in the camera domain, please generate three pairs of compared entities."] End-to-End Model We utilize BERT (Devlin et al., 2018) as the sentence encoder. Using a BPE tokenizer (Sennrich et al., 2016), we obtain context-aware representations for each token in the input sentence X.
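As a concrete illustration of this encoding step, a minimal sketch follows, assuming the HuggingFace `transformers` implementation of BERT; the model name and tokenizer choice here are illustrative stand-ins, since the paper's exact tokenizer configuration is not reproduced above.

```python
# Encode a comparative sentence into per-token representations with BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Like the viewfinder, the Nikon D80 has the same sensor as the D200."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    h = encoder(**inputs).last_hidden_state  # (1, T, 768) context-aware tokens
print(h.shape)
```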
We employ a non-autoregressive decoder (Guo et al., 2019) to generate the quintuples, which removes the dependence on previous target tokens from the input of the decoder. Specifically, we randomly initialize a vector representation Q to represent a quintuple. In each decoder layer, we update the representation of Q using formula (2). In this paper, we utilize an l-layer transformer decoder for non-autoregressive generation, and in our experiments we set l to 3. Given the output of the final decoder layer, we employ a classifier and four pointer networks to extract quintuples. Each pointer network is responsible for identifying the start and end position of one element within a quadruple. We calculate the classification probabilities using formula (3) and the extraction probabilities for the quadruple using formula (4), where W_c, W_e, W_h, b and V are all trainable parameters and q_i^l is the i-th embedding output by the final decoder layer. We optimize the combined objective function during training. L_total comprises the classification and extraction losses, both computed with the cross-entropy loss function, where N denotes the number of initialized Q and K denotes the number of loss computations required for a quadruple, which is equal to 8. Data Augmentation for Transfer Currently, there is a lack of datasets that are specifically annotated for subject, object, and aspect in comparative sentences. In this paper, we introduce a data-centric method to leverage the rich linguistic knowledge within ChatGPT and further enhance COQE performance. ChatGPT generates a dataset containing triplets {sub, obj, asp}. In this section, we take the Camera dataset as an example. A multi-stage approach is needed to generate proper sentences for automatic annotation in the dataset. Building the triplet dataset involves four steps, as depicted in Figure 1. • Obtaining Aspect Terms. Firstly, we collect the unique aspect terms separately from each dataset. The number of distinct aspect terms for the three datasets is provided in Table 2. • Generating Triplets. We generate triplets based on the aspect terms obtained in step 1. Specifically, we invoke the ChatGPT API and design an appropriate prompt to generate three triplets per aspect term. Table 1 (excerpt) — Kind: {sub, obj, asp}; Prompt: "Please generate a new comparative sentence that compares or describes the subject and object based on the given aspect term." • Generating Sentences. Based on the statistics from the Camera dataset, 221 sentences contain full triplets {sub, obj, asp}. In contrast, there are 202 sentences with only the binary combination {sub, obj}, 139 sentences with {sub, asp}, and 52 sentences with {obj, asp}. To ensure diversity in the sentences generated by ChatGPT, we prioritize the first three scenarios and design specific prompts for each scenario. For example, for the first scenario, we establish the prompt shown in Table 1. • Data Filtering and Processing. Despite the provided constraints, ChatGPT may still generate samples that do not meet the specifications. Therefore, before automatic labeling, matching the generated sentences against the given triplets is essential; only the sentences that successfully match their triplets are labeled automatically. We train the backbone model using the newly constructed triplet data to obtain feature representations. Subsequently, we employ transfer learning techniques to fine-tune on the gold quintuples based on the obtained representations.
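A minimal sketch of the triplet-generation step follows, assuming an OpenAI-style chat client; the exact prompt wording, model name, and JSON-parsing logic here are illustrative assumptions, not the authors' code, and the filtering mirrors the data-filtering step described above only schematically.

```python
import json
import openai  # assumed installed and configured with an API key

PROMPT = (
    "Subject and object refer to the two entities being compared; the "
    "aspect term denotes the shared comparative aspect. For the aspect "
    'term "{asp}" in the camera domain, generate three '
    "{{subject, object, aspect}} triplets as a JSON list."
)

def generate_triplets(aspect: str, model: str = "gpt-3.5-turbo"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(asp=aspect)}],
        temperature=0.7,
    )
    text = resp["choices"][0]["message"]["content"]
    try:
        return json.loads(text)   # keep only well-formed outputs
    except json.JSONDecodeError:
        return []                 # discarded by the filtering step

triplets = generate_triplets("sensor")
```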
Datasets and Evaluation Metrics Datasets. We evaluate our method on three datasets. Camera is an English corpus (Liu et al., 2021). It builds upon the prior work of Kessler and Kuhn (2014) by providing additional annotation of comparative sentences with comparative opinions and preferences. Besides, Liu et al. (2021) construct two Chinese datasets specifically designed for comparative opinion quintuple extraction; they extend the COAE (Songbo Tan, 2013) dataset by providing additional annotations for data points regarding comparative opinions and preferences. The statistics of the three datasets are shown in Table 2. Evaluation Metrics. We evaluate all models through Precision (P), Recall (R), and F_1 metrics. Additionally, we employ three matching strategies to evaluate prediction performance: exact-match (Ex), proportional-match (Pr), and binary-match (Bi) evaluation, where g_i and p_i denote the i-th element in the gold and predicted quintuple, respectively, and the index i ranges from 1 to 5. We report the average performance over three runs, using shuffled random seeds for each run. Compared Models We compare with the following models. MS_SVM+CRF first proposed comparative sentence identification; it utilizes an SVM (Cortes and Vapnik, 1995) for identifying comparative sentences and a CRF (Lafferty et al., 2001) for extracting comparative elements (Jindal and Liu, 2006a). MS_CRF employs a CRF-based model for comparative sentence identification and comparative element extraction (Wang et al., 2015). MS_LSTM introduces a multi-stage framework utilizing an LSTM as the text encoder: it first identifies comparative sentences and extracts comparative elements, then combines and filters these elements, and finally classifies valid quadruples into four categories (Liu et al., 2021). MS_BERT is a modified version of MS_LSTM for which Liu et al. (2021) choose BERT as the model's text encoder. Main Results We show the performance on the three test sets in Table 3. It can be observed that DAP yields substantial improvements on all three datasets compared to the current SoTA and to our backbone (E2E). In particular, even E2E is superior to MS_BERT, which confirms that the end-to-end model can effectively avoid error propagation. Besides, compared to the current SoTA results, DAP leads to F_1 score improvements of 7.70%, 6.38%, and 8.51% on the Camera, Car, and Ele datasets, respectively. This shows that COQE performance can be effectively improved by introducing the external knowledge contained in an LLM. Cross-Domain Experiments We follow Liu et al. (2021) to evaluate the cross-domain generalization ability of our method. We conduct cross-domain experiments on the two Chinese datasets, where cross-domain refers to using the training and validation sets from the source domain (SOU) and the test set from the target domain (TAR). In Table 4, "Ele → Car" denotes that the electronic domain serves as the SOU and the car domain is the TAR. It can be observed that our method DAP yields superior cross-domain experimental results surpassing previous COQE approaches, owing to the generalization afforded by our data transfer.
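To make the three matching strategies above concrete, here is a small sketch under a simplified element-equality reading (each of the five elements either matches or not); the paper's formulas may instead use span-overlap credit, so the scoring rules here are an assumption.

```python
# Exact-match: all 5 elements agree; proportional-match: fraction of
# agreeing elements; binary-match: at least one element agrees.
def match_scores(gold: tuple, pred: tuple):
    agree = sum(g == p for g, p in zip(gold, pred))  # 0..5 agreements
    exact = 1.0 if agree == 5 else 0.0
    proportional = agree / 5.0
    binary = 1.0 if agree > 0 else 0.0
    return exact, proportional, binary

g = ("Nikon D80", "D200", "sensor", "same", "Equal")
p = ("Nikon D80", "D200", "sensor", "same", "Better")
print(match_scores(g, p))  # (0.0, 0.8, 1.0)
```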
Significance Test We perform a statistical significance test (abbr. SST) to validate the reliability of our method. The sampling-based P-value (Johnson, 1999) is used as the metric for measuring significance levels. In addition, to provide a more comprehensive insight into significance, we conduct a practical significance test (abbr. PST) (Zhu et al., 2020), which is more reliable than SST; Cohen's D-value is used as the metric of PST. It is noteworthy that, in SST, a reported P-value below 0.05 (i.e., 5.0E-02) indicates a significant improvement, otherwise insignificant (Dror et al., 2018). Similarly, in PST, a reported Cohen's D-value exceeding 1 indicates a significant improvement. The results of SST and PST are presented in Table 5. It can be seen that the P-values of SST are lower than the threshold, while the Cohen's D-values of PST are higher than the threshold. This demonstrates that DAP yields significant improvements. Comparative Sentence Analysis Comparison-oriented information extraction has attracted considerable research interest. Jindal and Liu (2006a) first introduced the concept of comparative sentences and implemented comparative sentence discrimination based on rules and SVMs (Cortes and Vapnik, 1995). Park and Blake (2012) explored an extensive set of syntactic and semantic features and employed three different classifiers to identify comparative sentences. Recent studies concentrate on fine-grained component analysis and parsing of comparative sentences (Kessler and Kuhn, 2013; Arora et al., 2017; Ma et al., 2020). In particular, Liu et al. (2021) propose a novel task called comparative opinion quintuple extraction, which aims to extract quintuples from given comparative sentences. Data Augmentation Data augmentation is a technique that expands and diversifies training datasets by applying various transformation or modification approaches to the existing data. Fadaee et al. (2017) use back-translation as a data augmentation method for generating synthetic parallel sentences, thereby enhancing the performance of low-resource neural machine translation systems. Wei and Zou (2019) use synonym replacement, random insertion, random deletion, and random swap to increase the diversity of training data; these approaches help enhance the robustness of text classifiers. Chen and Qian (2020) develop a prototype generator for data augmentation in which internal and external prototypes are adopted. Conclusion and Future Work To avoid error propagation, we design an end-to-end model. Additionally, we propose a data-centric augmentation approach using the powerful generative capability of ChatGPT. The performance on three datasets achieves SoTA. Future work will focus on integrating existing annotated triplet data with automatically generated domain-specific data. Limitations Despite achieving a new state-of-the-art performance, our model still has several limitations. One obvious limitation of the method is that the original three datasets contain a certain proportion of multiple-comparison sentences, which make predictions more difficult. In this paper, we only improve performance by introducing external knowledge and extracting the quintuples from difficult to easy. Future work can concentrate on tackling the COQE problem specifically from the perspective of multiple comparative sentences. Table 1: The prompt for generating comparative sentences.
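Returning to the significance tests above, the following sketch estimates a sampling-based P-value by a paired sign-flip permutation test and Cohen's D by the usual pooled-standard-deviation formula; the paper's exact sampling scheme is not specified here, so both choices are assumptions.

```python
import numpy as np

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled

def permutation_p(a, b, n_iter=10000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.asarray(a, float) - np.asarray(b, float)  # paired run scores
    observed = diffs.mean()
    hits = 0
    for _ in range(n_iter):
        signs = rng.choice([-1, 1], size=diffs.size)     # flip pair signs
        if (signs * diffs).mean() >= observed:
            hits += 1
    return hits / n_iter

dap = [0.42, 0.44, 0.43]  # illustrative F1 scores, not the paper's numbers
e2e = [0.38, 0.39, 0.37]
print(cohens_d(dap, e2e), permutation_p(dap, e2e))
```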
Table 2: Statistics of the training, dev, and test sets. "Sent" and "Asp" indicate the total number of sentences and of unique aspect terms for each dataset, respectively.

Table 3: F1-scores for various COQE methods using the exact-match strategy. The best scores are in bold.

Table 4: Comparison with (Liu et al., 2021)'s proposed model in the cross-domain setting. The mark "-" indicates the results were not presented in (Liu et al., 2021)'s work.

Table 5: Results of the two significance tests.
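As a closing sketch of the warm-up-then-fine-tune transfer scheme described in the method: only the two-stage order comes from the paper; the optimizer, learning rate, epoch counts, and the shared loss interface are illustrative assumptions.

```python
import torch

def train(model, loader, optimizer, loss_fn, epochs):
    model.train()
    for _ in range(epochs):
        for batch, target in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch), target)
            loss.backward()
            optimizer.step()

def two_stage(model, triplet_loader, quintuple_loader, loss_fn):
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    # Stage 1: warm up on ChatGPT-generated {sub, obj, asp} triplet data.
    train(model, triplet_loader, opt, loss_fn, epochs=3)
    # Stage 2: fine-tune on the gold quintuple benchmark.
    train(model, quintuple_loader, opt, loss_fn, epochs=10)
```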
2,797.2
2023-01-01T00:00:00.000
[ "Computer Science" ]
The Implicit Function as Squashing Time Model: A Novel Parallel Nonlinear EEG Analysis Technique Distinguishing Mild Cognitive Impairment and Alzheimer's Disease Subjects with High Degree of Accuracy Objective. This paper presents the results obtained using a protocol based on special types of artificial neural networks (ANNs) assembled in a novel methodology able to compress the temporal sequence of electroencephalographic (EEG) data into spatial invariants for the automatic classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) subjects. With reference to the procedure reported in our previous study (2007), this protocol includes a new type of artificial organism, named TWIST. The working hypothesis was that, compared to the results presented by the workgroup (2007), the new artificial organism TWIST could produce a better classification between AD and MCI. Material and methods. Resting eyes-closed EEG data were recorded in 180 AD patients and in 115 MCI subjects. The data inputs for the classification, instead of being the EEG data, were the weights of the connections within a nonlinear autoassociative ANN trained to generate the recorded data. The most relevant features were selected and, at the same time, the dataset was split into two halves for the final binary classification (training and testing) performed by a supervised ANN. Results. The best results distinguishing between AD and MCI were equal to 94.10%, considerably better than those reported in our previous study (∼92%) (2007). Conclusion. The results confirm the working hypothesis that a correct automatic classification of MCI and AD subjects can be obtained by extracting the spatial information content of the resting EEG voltage with ANNs, and they represent the basis for research aimed at integrating the spatial and temporal information content of the EEG. INTRODUCTION The electroencephalogram (EEG), since its introduction, was considered the only methodology allowing a direct and online view of the "brain at work." At the same time, abnormalities of the "natural" aging of the brain had already been noticed in different types of dementias. The introduction of structural imaging technologies in the 1970s and 1980s (computed tomography and magnetic resonance imaging), and the good results in the study of brain function obtained with techniques dealing with regional metabolism, glucose and oxygen consumption, and blood flow (single-photon emission computed tomography, positron emission tomography, functional magnetic resonance imaging) during the following two decades, relegated EEG to a secondary role, particularly in the evaluation of Alzheimer's dementia (AD) and related dementias. Lately, computerized EEG analysis in aged people has been enriched by various modern techniques able to manage the large amount of information on time-frequency processes at single recording channels (wavelet, neural networks, etc.) and on the spatial localization of these processes [2-10]. The results have encouraged the scientific community to explore electromagnetic brain activity, which changes with aging and can greatly deteriorate through the different stages of the various forms of dementia. The use of neural networks represents an alternative and very promising attempt to make EEG analysis suitable for clinical applications in aging, thanks to their ability to extract specific and smooth characteristics from huge amounts of data.
Computerized processing of large quantities of numerical data in wakeful relaxed subjects ("resting" EEG) made the automatic classification of EEG signals easier, providing promising results even with relatively simple linear classifiers such as logistic regression and discriminant analysis. Using global field power (i.e., the sum of the EEG spectral power across all electrodes) as an input, some authors reached an accurate differential diagnosis between AD and MCI subjects with accuracies of 84% and 78%, respectively [11, 12]. Using the evaluation of spectral coherence between electrode pairs (i.e., a measure of functional coupling) as an input to the classification, the correct classification reached 82% when comparing AD and normal aged subjects [13, 14]. Spatial smoothness and temporal fluctuation of the EEG voltage are considered measures of synaptic impairment, along with the notion that cortical atrophy can affect the spatiotemporal pattern of the neural synchronization generating the scalp EEG. These parameters have been used to successfully discriminate the respective distributions of probable AD and normal aged subjects [15]. The interesting new idea in that study [15] was the analysis of the resting EEG potential distribution instant by instant, rather than the extraction of a global index over periods of tens of seconds or more. Table 1 summarizes results showing a higher classification rate with ANN analysis than with standard linear techniques, such as multivariate discriminant analysis or nearest-neighbour analysis [16]. Some authors [17] developed a system consisting of recurrent neural networks processing spectral EEG data; they succeeded in classifying AD patients and non-AD patients with a sensitivity of 80% and a specificity of 100%. In other studies, classifiers based on ANNs, wavelets, and blind source separation (BSS) achieved promising results [18, 19]. In a previous study from the same workgroup as this paper, we used a sophisticated technique based on blind source separation and wavelet preprocessing developed recently by Vialatte et al. [18] and Cichocki et al. [20-22], whose results appear to be the best in the field when compared to the literature. We named this method the BWB model (blind source separation + wavelet + bump modeling) [1]. The results obtained in the classification tasks comparing AD patients to MCI subjects using the BWB model ranged from 78.85% to 80.43% (mean = 79.48%). The aim of this study is to assess the strength of a novel parallel nonlinear EEG analysis technique in the differential classification of MCI subjects and AD patients, with a high degree of accuracy, based on special types of artificial neural networks (ANNs) assembled in a novel methodology able to compress the temporal sequence of electroencephalographic (EEG) data into spatial invariants. The working hypothesis is that this new approach to EEG, based on nonlinear ANN methods, can contribute to improving the reliability of the diagnostic phase in association with other clinical and instrumental procedures. Compared to the results already presented by the workgroup [1], the newly included artificial organism TWIST could produce a better classification between AD and MCI. MATERIAL AND METHODS The IFAST method includes two phases. (1) A squashing phase: an EEG track is compressed in order to project the invariant patterns of that track onto the connection matrix of an autoassociative ANN.
Each subject's EEG track is now represented by a vector of weights, without any information about the target (AD or MCI). (2) A "TWIST" (training with input selection and testing) phase: a data resampling technique based on the genetic algorithm GenD, developed at the Semeion Research Center. The new dataset, composed of the connection matrices (output of the squashing phase) plus the target assigned to each vector, is split into two subsamples, five times, each time with a similar probability density function, in order to train, test, and validate the ANN models. General philosophy The core of this new methodology is that the ANNs do not classify subjects by directly using the EEG data as an input. Rather, the data inputs for the classification are the weights of the connections within a recirculation (unsupervised) ANN trained to generate the recorded EEG data. These connection weights represent a model of the peculiar spatial features of the EEG patterns at the scalp surface. The classification, based on these weights, is performed by a standard supervised ANN. This method, named IFAST (an acronym for "implicit function as squashing time"), tries to capture the implicit function in a multivariate data series by compressing the temporal sequence of data into spatial invariants, and it is based on three general observations. (1) Every multivariate sequence of signals coming from the same natural source is a complex, asynchronous, highly nonlinear dynamic system, in which each channel's behavior is understandable only in relation to all the others. (2) Given a multivariate sequence of signals generated from the same source, the implicit function defining the above-mentioned asynchronous process is the conversion of that same process into a complex hypersurface, representing the interaction in time of all the channels' behavior. (3) The 19 channels in the EEG represent a dynamic system characterized by asynchronous parallelism. The nonlinear implicit function that defines them as a whole represents a metapattern that translates into space (a hypersurface) the interactions that all the channels create in time. The idea underlying the IFAST method is that each patient's 19-channel EEG track can be synthesized by the connection parameters of an autoassociative nonlinear ANN trained on that track's data. There can be several topologies and learning algorithms for such ANNs; what is necessary is that the selected ANN be of the autoassociative type (i.e., the input vector is the target for the output vector) and that the transfer functions defining it be nonlinear and differentiable at any point. Furthermore, it is required that all the processing for every patient be carried out with the same type of ANN, and that the initial randomly generated weights be the same in every learning trial. This means that, for every EEG, every ANN has to have the same starting point, even if that starting point is random. We have operated in two ways in order to verify this method's efficiency. (1) Different experiments were implemented based on the same samples; by "experiment," we mean a complete application of the whole procedure to every track of the sample. (2) Autoassociative ANNs with different topologies and algorithms were used on the entire sample in order to prove that any autoassociative ANN can carry out the task of translating the whole EEG track into the space domain through its connections.
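To make the squashing idea concrete, here is a minimal sketch using a toy linear autoassociator with a zeroed main diagonal as a stand-in for the nonlinear networks described in the next section; the linear transfer function, learning rate, and epoch count are illustrative assumptions, but the key point (the trained weights, not the EEG samples, become the subject's feature vector) follows the method as stated.

```python
import numpy as np

def squash_track(eeg, lr=0.01, epochs=100):
    # eeg: (T, N) array, T time samples x N channels (N = 19 here).
    T, N = eeg.shape
    W = np.zeros((N, N))                  # same starting point for every subject
    b = np.zeros(N)
    for _ in range(epochs):
        out = eeg @ W + b                 # reconstruct each sample
        err = out - eeg                   # autoassociative target = input
        W -= lr * (eeg.T @ err) / T
        np.fill_diagonal(W, 0.0)          # forbid self-connections W[j, j]
        b -= lr * err.mean(axis=0)
    # Feature vector: N*N - N off-diagonal weights plus N biases.
    return np.concatenate([W[~np.eye(N, dtype=bool)], b])

features = squash_track(np.random.randn(7680, 19))
print(features.shape)  # (361,) spatial-invariant features for one subject
```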
The squashing phase The first application phase of the IFAST method may be defined as "squashing." It consists in compressing an EEG track into the weights (W_{ij,k}, W_{0j,k}) of an autoassociative backpropagation network with two layers, with W_{0j,j} = 0. W_{ij,j} = 0 means that every i-th EEG track is processed by the two-layered autoassociative ANN in which the connections on the main diagonal are not present (see Figure 1). It is possible to use different types of autoassociative ANNs to run this search for spatial invariants in every EEG. (1) A backpropagation network without a hidden unit layer and without connections on the main diagonal (for short, AutoBP). This is an ANN featuring an extremely simple learning algorithm: AutoBP features N^2 − N internode connections and N biases, one inside every exit node, for a total of N^2 adaptive weights. This algorithm works similarly to logistic regression and can be used to establish the dependency of variables on each other. The advantage of AutoBP is its learning speed, due to the simplicity of its topology and algorithm. Moreover, at the end of the learning phase, the connections between variables, being direct, have a clear conceptual meaning: every connection indicates a relationship of faded excitement, inhibition, or indifference between every pair of channels in the EEG track of any patient. The disadvantage of AutoBP is its limited convergence capacity, due to that same topological simplicity; that is to say, complex relationships between variables may be approximated or ignored (for details, see [23, 24]). [Figure 2: the new recirculation network, with a first and a second hidden layer.] (2) The new recirculation network (for short, NRC) is an original variation [25] of an ANN that already existed in the literature [26] but was not considered useful for autoassociation between variables. The topology of the NRC which we designed includes only one connection matrix and four layers of nodes: one input layer, corresponding to the number of variables; one output layer whose target is the input vector; and two layers of hidden nodes with the same cardinality, independent of the cardinality of the input and output layers. The matrix between the input-output nodes and the hidden nodes is fully connected, and in every learning cycle it is modified in both directions, according to the corresponding update equations. The NRC thus features N^2 internode adaptive connections and 2·N intranode adaptive connections (biases). The advantages of the NRC are its excellent convergence ability on complex datasets and, as a result, an excellent ability to interpolate complex relations between variables. The disadvantages mainly have to do with the vector codification that the hidden units perform on the input vectors, which makes the conceptual decoding of its trained connections difficult. (3) The autoassociative multilayer perceptron (for short, AMLP) may be used for autoassociative purposes (encoding) thanks to its hidden unit layer, which decomposes the input vector into its main nonlinear components. The algorithm used to train the MLP is a typical backpropagation algorithm [27]. The MLP with only one layer of hidden units features two connection matrices and two intranode connection vectors (biases), according to the following definitions: N = number of input variables = number of output variables; M = number of nodes in the hidden layer. [Figure 3: the autoassociative multilayer perceptron (IFAST: noise reduction).] The advantages of the MLP are its well-known flexibility and the strength of its backpropagation algorithm.
Its disadvantages are the tendency to saturate the hidden nodes in the presence of nonstationary functions, and the vector (allocated) codification of the same hidden nodes. (4) Elman's hidden recurrent network [28] can be used for autoassociative purposes, again using the backpropagation algorithm (for short, autoassociative hidden recurrent, AHR; see Figure 4). It was used in our experimentation as a variation of the MLP with memory set to one step. In this form it cannot properly be called a recurrent ANN, because the memory is limited to the one preceding record. We used this variation only to give the ANN an input vector modulated at every cycle by the values of the previous input vector. Our purpose was not to codify the temporal dependence of the input signals, but rather to give the ANN a "smoother" and more mediated input sequence. The number of connections in the AHR is the same as in an MLP with an extended input whose cardinality is equal to the number of hidden units. [Figure 4: the autoassociative hidden recurrent network.] The IFAST software (developed in Borland C) [29] produces the squashing phase through the training operated by these four networks; in the "MetaTask" section the user can define the whole procedure by selecting (i) the files that will be processed (in our case, every complete EEG), (ii) the type of network, (iii) the sequence of the records for every file (generally random), (iv) the number of training epochs, (v) a training stop criterion (number of epochs or minimum RMSE), (vi) the number of hidden nodes of the autoassociative network, which determines the length of the output vector of the processed file, (vii) the number of matrices, depending on the type of autoassociative network selected, and (viii) the learning coefficient and delta rate. TWIST From this phase on, the procedure is completely different from the one described in our previous work [1]. The choice of following a different methodology was due to the will to improve the classification results and to remove causes of information loss. In the former study, the dataset coming from the squashing phase was compressed by another autoassociative ANN, in the attempt to eliminate the invariant patterns, codified by the previous ANN, relating to specific characteristics of the brain (anxiety level, background level, etc.) which are not useful for the classification, leaving the most significant ones unaltered. Then the new compressed datasets were split into two halves (training and test) using the T&T [30] evolutionary algorithm for the final binary classification. In this work, instead, the elimination of the noisiest features and the classification run parallel to each other. We will show that the new procedure obtains better performances. First of all, a new dataset called the "Diagnostic DB" was created for easier understanding. The diagnostic gold standard has been established for every patient in a way that is completely independent of the clinical and instrumental examinations (magnetic resonance imaging, etc.) carried out by a group of experts, whose diagnoses have also been reconfirmed over time. The diagnoses have been divided into the following two classes, based on delineated inclusion criteria: (a) elderly patients with "cognitive decline" (MCI); (b) elderly patients with "probable Alzheimer" (AD). We rewrote the last generated dataset, adding to every H_ns vector the diagnostic class that an objective clinical examination had assigned to every patient.
The H_ns vectors represent the invariant traits defined by the squashing phase for every n-th subject's EEG track; their dimensionality corresponds to the number of columns of the connection matrix, depending on the specific autoassociative network used. The dataset is then ready for the next step. This new phase is called TWIST [31] and includes the use of two systems, T&T and IS [30], both based on a genetic algorithm, GenD, developed at the Semeion Research Centre [32]. The T&T system is a robust data resampling technique able to arrange the source sample into subsamples, each one with a similar probability density function. In this way the data are split into two or more subsamples in order to train, test, and validate the ANN models more effectively. The IS system is an evolutionary system for feature selection based on a wrapper approach. While the filter approach looks at the inner properties of a dataset, providing a selection that is independent of the classification algorithm to be used afterwards, in the wrapper approach various subsets of features are generated and evaluated by a specific classification model, using its performance as guidance for the optimization of subsets. The IS system reduces the amount of data while conserving the largest amount of information available in the dataset. The combined action of these two systems allows us to solve two frequent problems in managing artificial neural networks: (1) the size and quality of the training and testing sets; (2) the large number of variables which, apparently, seem to provide the largest possible amount of information, whereas some of the attributes may contain redundant information (already included in other variables) or confused information (noise), or may not contain any significant information at all and be completely irrelevant. Genetic algorithms have been shown to be very effective as global search strategies when dealing with nonlinear and large problems. The "training and testing" algorithm (T&T) is based on a population of n ANNs managed by an evolutionary system. In its simplest form, this algorithm reproduces several distribution models of the complete dataset D_Γ (one for every ANN of the population) in two subsets (d^[tr]_Γ, the training set, and d^[ts]_Γ, the testing set). During the learning process each ANN, according to its own data distribution model, is trained on the subsample d^[tr]_Γ and blind-validated on the subsample d^[ts]_Γ. The performance score reached by each ANN in the testing phase represents its "fitness" value (i.e., the individual probability of evolution). The genome of each "network individual" thus codifies a data distribution model with an associated validation strategy. The n data distribution models are combined according to their fitness criteria using an evolutionary algorithm. The selection of "network individuals" based on fitness determines the evolution of the population, that is, the progressive improvement of the performance of each network until the optimal performance is reached, which is equivalent to the best division of the global dataset into subsets. The evolutionary algorithm mastering this process, named the "genetic doping algorithm" (GenD for short), created at the Semeion Research Centre, has characteristics similar to a genetic algorithm [33-37] but is able to maintain an inner instability during the evolution, carrying out a natural increase of biodiversity and a continuous "evolution of the evolution" in the population. The elaboration of T&T is articulated in two phases.
In a preliminary phase, an evaluation of the parameters of the fitness function that will be used on the global dataset is performed. The configuration of the standard backpropagation network that best "suits" the available dataset is determined: the number of layers and hidden units, some possible generalizations of the standard learning law, and the fitness values of the population's individuals during evolution. The parameters thus determined define the configuration and initialization of all the individual networks of the population and then stay fixed in the following computational phase. The accuracy of the ANN performance on the testing set is the fitness of that individual (i.e., of that hypothesis of distribution of the whole dataset into two halves). In the computational phase, the system extracts the best training and testing sets from the global dataset. During this phase, the individual networks of the population run according to the established configuration and initialization parameters. In parallel with T&T runs "Input Selection" (IS), an adaptive system based on the same evolutionary algorithm GenD, consisting of a population of ANNs, each of which carries out a selection of the independent and relevant variables on the available database. The elaboration of IS, as for T&T, develops in two phases. In the preliminary phase, a standard backpropagation ANN is configured in order to avoid possible overfitting problems. In the computational phase, each individual network of the population, identified by the most relevant variables, is trained on the training set and tested on the testing set. The evolution of the individual networks of the population is based on the GenD algorithm. In the IS approach, the GenD genome is built from n binary values, where n is the cardinality of the original input space. Every gene indicates whether an input variable is to be used or not during the evaluation of the population fitness. Through the evolutionary algorithm GenD, the different "hypotheses" of variable selection, generated by each ANN of the population, change over time, at each generation; this leads to the selection of the best combination of input variables. As in the T&T system, the genetic operators crossover and mutation are applied to the ANN population; the rates of occurrence for both operators are self-determined by the system in an adaptive way at each generation. When the evolutionary algorithm no longer improves its performance, the process stops, and the best selection of input variables is employed on the testing subset. The software based on the TWIST phase algorithm (developed in C-Builder) [31] allows the configuration of the genetic algorithm GenD: • the population (the number of individual networks), • the number of hidden nodes of the standard BP, • the number of epochs, • the output function SoftMax, • the cost function (the classification rate in our case). The generated outputs are the pair of files Set A and Set B (subsets of the initial database defined by the selected variables) that will be used in the validation protocol (see Section 2.3). The validation protocol The validation protocol is a fundamental procedure to verify a model's ability to generalize the results reached in the testing phase. The application of a fixed protocol measures the level of performance that a model can produce on data that are not present in the testing and/or training sample. We employed the so-called 5 × 2 cross-validation protocol (see Figure 6) [38].
This is a robust protocol that allows one to evaluate the distribution of classification errors. In this procedure, the study sample is randomly divided ten times into two subsamples, always different but containing a similar distribution of cases and controls. A good or excellent ability of the ANNs to diagnostically classify all the patients in the sample, as shown by the confusion matrices of these 10 independent experiments, would indicate that the spatial invariants extracted and selected with our method truly relate to the functional quality of the brains examined through their EEG. The samples were matched for age, gender, and years of education. Part of the individual datasets was used for previous EEG studies [2-4]; in none of these studies did we address the specific issue of the present study. Local institutional ethics committees approved the study. All experiments were performed with the informed and overt consent of each participant or caregiver. Subjects and diagnostic criteria The present inclusion and exclusion criteria for MCI were based on previous seminal studies [39-46] and designed for selecting elderly persons manifesting objective cognitive deficits, especially in the memory domain, who did not meet the criteria for a diagnosis of dementia or AD, namely, with: (i) objective memory impairment on neuropsychological evaluation, as defined by performances ≥ 1.5 standard deviations below the mean value for age- and education-matched controls on a test battery including the Rey memory list (immediate and delayed recall) and the Digit forward and Corsi forward tests; (ii) normal activities of daily living, as documented by the patient's history and evidence of independent living; (iii) a clinical dementia rating score of 0.5; (iv) geriatric depression scale scores < 13. Exclusion criteria for MCI were: (i) mild AD, as diagnosed by the procedures described above; (ii) evidence of concomitant dementia such as frontotemporal or vascular dementia, reversible dementias (including pseudodepressive dementia), fluctuations in cognitive performance, and/or features of mixed dementias; (iii) evidence of concomitant extrapyramidal symptoms; (iv) clinical and indirect evidence of depression, as revealed by GDS scores lower than 14; (v) other psychiatric diseases, epilepsy, drug addiction, alcohol dependence, and the use of psychoactive drugs, including acetylcholinesterase inhibitors or other drugs enhancing brain cognitive functions; (vi) current or previous systemic diseases (including diabetes mellitus) or traumatic brain injuries. Probable AD was diagnosed according to the NINCDS-ADRDA criteria [47]. Patients underwent general medical, neurological, and psychiatric assessments and were also rated with a number of standardized diagnostic and severity instruments, including the MMSE [48], the clinical dementia rating scale [49], the geriatric depression scale [50], the Hachinski ischemic scale [51], and the instrumental activities of daily living scale [52]. Neuroimaging diagnostic procedures (computed tomography or magnetic resonance imaging) and complete laboratory analyses were carried out to exclude other causes of progressive or reversible dementias, in order to have a homogeneous probable-AD patient sample.
The exclusion criteria included, in particular, any evidence of: (i) frontotemporal dementia, diagnosed according to the criteria of the Lund and Manchester groups [53]; (ii) vascular dementia, diagnosed according to the NINDS-AIREN criteria [54] and neuroimaging evaluation scores [55, 56]; (iii) extrapyramidal syndromes; (iv) reversible dementias (including pseudodementia of depression); (v) Lewy body dementia, according to the criteria of McKeith et al. [57]. It is important to note that benzodiazepines, antidepressant, and/or antihypertensive drugs were withdrawn for about 24 hours before the EEG recordings. EEG recordings EEG data were recorded in the wake resting state (eyes closed), usually during late morning hours, from 19 electrodes positioned according to the international 10-20 system; the analysis was carried out after the EEG data were re-referenced to a common average reference. The horizontal and vertical electrooculogram (EOG) was simultaneously recorded to monitor eye movements. An operator controlled the subject and the EEG traces online, alerting the subject any time there were signs of behavioural and/or EEG drowsiness, in order to keep the level of vigilance constant. All data were digitized (5 minutes of EEG; 0.3-35 Hz band pass; 128 Hz sampling rate). The duration of the EEG recording (5 minutes) allowed the comparison of the present results with several previous AD studies using EEG recording periods either shorter than 5 minutes [58-62] or shorter than 1 minute [7, 8]. Longer resting EEG recordings in AD patients would have reduced data variability, but they would have increased the possibility of EEG "slowing" because of reduced vigilance and arousal. EEG epochs with ocular, muscular, and other types of artefact were preliminarily identified by a computerized automatic procedure. Epochs manifesting sporadic blinking artefacts (less than 15% of the total) were corrected by an autoregressive method [63]. The performance of the software package on EOG-EEG-EMG data related to cognitive-motor tasks was evaluated with respect to the preliminary data analysis performed by two expert electroencephalographists (the gold standard). Because of its extreme importance for multicentric EEG studies, we compared the performance of two representative "regression" methods for EOG correction in the time and frequency domains, with the aim of selecting the most suitable method in the perspective of a multicentric EEG study. The results showed an acceptable agreement of approximately 95% between the human and software behaviors for the detection of vertical and horizontal EOG artifacts, the measurement of hand EMG responses in a cognitive-motor paradigm, the detection of involuntary mirror movements, and the detection of EEG artifacts. Furthermore, our results indicated a particular reliability of a "regression" EOG correction method operating in the time domain (i.e., ordinary least squares). These results supported the use of the software package for multicentric EEG studies. Two independent experimenters, blind to the diagnosis, manually confirmed the EEG segments accepted for further analysis. A continuous segment of artefact-free EEG data lasting 60 seconds was used for the subsequent analyses for each subject. Preprocessing protocol The entire sample of 466 subjects was recorded at 128 Hz for 1 minute. The EEG track of each subject was represented by a matrix of 7680 sequential rows (time samples) and 19 columns (the 19 channels).
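A small sketch of the shape conventions just stated may help: 60 s at 128 Hz gives 7680 samples by 19 channels, re-referenced to the common average as described under "EEG recordings". The file format and loading step are illustrative assumptions.

```python
import numpy as np

FS, SECONDS, N_CH = 128, 60, 19

def preprocess(raw):
    # raw: (7680, 19) artefact-free EEG segment for one subject
    assert raw.shape == (FS * SECONDS, N_CH)
    # Common average reference: subtract the instantaneous mean over channels.
    return raw - raw.mean(axis=1, keepdims=True)

track = preprocess(np.random.randn(FS * SECONDS, N_CH))
print(track.shape)  # (7680, 19)
```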
Every autoassociative ANN independently processed every EEG of the total sample, in order to assess the different capabilities of each ANN to extract the key information from the EEG tracks. After this processing, each EEG track is squashed into the weights of every ANN, resulting in 4 different and independent datasets (one for each ANN), whose records are the squashed versions of the original EEG tracks and whose variables are the trained weights of every ANN. Through the TWIST processing, the most significant features for the classification were selected and, at the same time, the training and testing sets, with similar probability distribution functions and providing the best results in the classification, were defined. The validation protocol 5×2CV was applied blindly to test the capability of a generic supervised ANN to correctly classify each record (the number of inputs depending on the number of variables selected by IS). A supervised MLP without hidden units was used for the classification task. In every experiment, in fact, we were able to train the ANN perfectly in no more than 100 epochs (root mean square error (RMSE) < 0.0001). This means that in this last phase we could also have used a linear classifier and reached the same results. RESULTS The experimental design consisted of 10 different and independent processings for the classification of AD versus MCI. Every experiment was conducted in a blind and independent manner in two directions: training with subsample A and blind testing with subsample B, and training with subsample B and blind testing with subsample A. Table 3 shows the summary of the mean results for the classification of AD versus MCI, compared to the results obtained in the experiments reported in a previous study [1] based on a different protocol (without the TWIST phase). Regarding the IFAST-TWIST protocol, ABP and AHR achieved the best results comparing AD with MCI subjects (94.10% and 93.36%), and all the performances are considerably better than those obtained in the previous study. DISCUSSION Various types of nonreversible forms of dementia represent a major health problem in all those countries where the average life span is progressively increasing. There is a growing amount of scientific and clinical evidence that brain neural networks rearrange their connections and synapses to compensate for the neural loss due to neurodegeneration [64]. This process of plasticity maintains brain functions at an acceptable level before clear symptoms of dementia appear. The length of this presymptomatic period is currently unknown, but in the case of AD, often preceded by MCI, it lasts several years. Despite the lack of an effective treatment able to block the progression and/or reverse the cognitive decline, it is generally agreed that an early beginning of the available treatment (i.e., cholinesterase inhibitors) provides the best results [65]. A significant advancement in the fight against dementias would be to have at hand a non-invasive, easy-to-perform, and low-cost diagnostic tool capable of screening, with a high rate of positive prognostication, a large at-risk population sample (i.e., MCI subjects and subjects with genetic defects, a family history of dementia, or other risk factors). To test this issue, we performed automatic classification of MCI and AD subjects, extracting the spatial content of the EEG voltage with ANNs.
The results showed that the correct automatic classification rate reached 94.10% for AD versus MCI, better than the classification rates obtained with the most advanced nonlinear techniques currently available. These results confirm the working hypothesis that this ANN-based EEG approach can contribute to improving the precision of the diagnostic phase in association with other clinical and instrumental procedures. The present results suggest that this variant of the IFAST procedure (TWIST) could be used for large screenings of MCI subjects under observation, to detect the first signs of conversion to AD and to trigger further clinical and instrumental evaluations crucial for an early diagnosis of AD (this is invaluable for beginning cholinergic therapies, which are generally administered only to overt AD patients because of gastrointestinal side effects). Indeed, the actual percentage of correct discrimination between MCI and probable AD is around 94%; this rate is still insufficient for the use of the IFAST procedure as a stand-alone diagnosis, because of the 6% of misclassifications. The present results prompt future studies on the predictive value of cortical EEG rhythms in the early discrimination of MCI subjects who will convert to AD. This interesting issue could be addressed by a proper longitudinal study. MCI subjects should be divided into "converted" and "stable" subgroups, according to the final outcome as revealed by follow-up after about 5 years (i.e., the period needed for the conversion of all MCI subjects fated to decline over time, based on the mentioned literature). Such a study should demonstrate that the spatial EEG features at the baseline measurement, as revealed by the IFAST procedure, can discriminate between converted and stable MCI subjects. Furthermore, the baseline values of the spatial EEG features of individual MCI subjects should be successfully usable as an input by the IFAST procedure to predict the conversion to dementia. These intriguing research perspectives are a sign of the heuristic value of the present findings. However, apart from the clinical perspectives, the present findings have an intrinsic value for clinical neurophysiology. They provide further functional data from a large aged population to support the idea that the spatial features of the EEG, as a reflection of cortical neural synchronization, convey information content able to discriminate the preclinical stage of dementia (MCI) from probable AD. Furthermore, the evaluation of this diagnostic contribution may motivate future scientific studies probing its usefulness for the prognosis and monitoring of AD across the temporal domain. Although EEG would fulfil all the previous requirements, the way in which it is currently utilized does not guarantee its ability in the differential diagnosis of MCI, early AD, and healthy nonimpaired aged brains. The neurophysiological community has always had the perception that there is much more information about brain functioning embedded in the EEG signals than is actually extracted in a routine clinical context. The obvious consideration is that the generating sources of the EEG signals (cortical postsynaptic currents at the dendritic tree level) are the same ones attacked by the factors producing the symptoms of dementia. The main problem is that, in the usual signal-to-noise ratio, the latter largely overcomes the former.
This paper suggests that the reasons why the clinical use of EEG has been somewhat limited and disappointing with respect to the early diagnosis of AD and the identification of MCI, despite the progress obtained in recent years, lie in the following erroneous general principles: (A) identifying and synthesizing the mathematical components of the signal coming from each individual recording site, considering each EEG channel as exploring only one discrete brain area under the exploring electrode, and summing them all up in the attempt to reconstruct the overall information; (B) focusing on the time variations of the signal coming from each individual recording site; (C) mainly employing linear analysis instruments. The basic principle proposed in this work is very simple: all the signals from all the recording channels are analyzed together, and not individually, in both time and space. The reason for such an approach is equally simple: the instant value of the EEG in any recording channel depends on its previous and following values, and on the previous and following values of all the other recording channels. We believe that the EEG of each individual subject is defined by a specific background signal model, distributed in time and in the space of the recording channels (19 in our case). Such a model is a set of background invariant features able to specify the quality (i.e., the cognitive level) of the brain activity, even in a so-called resting condition. We all know that the brain never rests, even with closed eyes and when the subject is asked to relax. The method that we have applied in this research context completely ignores the subject's contingent characteristics (age, cognitive status, emotions, etc.). It utilizes a recurrent procedure which squeezes out the significant signal and progressively selects the features useful for the classification. CONCLUSIONS We have tested the hypothesis that a correct automatic classification of MCI and AD subjects can be obtained by extracting the spatial information content of the resting EEG voltage with ANNs. The spatial content of the EEG voltage was extracted by a novel step-wise procedure. The core of this procedure was that the ANNs did not classify individuals using the EEG data as an input; rather, the data inputs for the classification were the weights of the connections within an ANN trained to generate the recorded EEG data. These connection weights represented a useful model of the peculiar spatial features of the EEG patterns at the scalp surface. Then the new system TWIST, based on a genetic algorithm, processed the weights to select the most relevant features and, at the same time, to create the best subsets, a training set and a testing set, for the classification. The results showed that the correct automatic classification rate reached 94.10% for AD versus MCI. The results obtained are superior to those obtained with the most advanced nonlinear techniques currently available. These results confirm the working hypothesis and represent the basis for research designed to integrate EEG-derived spatial and temporal information content using ANNs. From a methodological point of view, this research shows the need to analyze the 19 EEG channels of each person as a whole complex system, whose decomposition and/or linearization can involve the loss of much key information. The present approach extends that of previous EEG studies applying advanced techniques (wavelet, neural networks, etc.)
on the data of single recording channels; it also complements previous EEG studies in aged people that evaluated the spatial distributions of the EEG data instant by instant and the brain sources of these distributions [2-10]. With complex systems, it is not possible to establish a priori which information is relevant and which is not. Nonlinear autoassociative ANNs are a group of methods to extract from these systems the maximum of linear and nonlinear associations (features) able to explain their "strange" dynamics. This research also documents the need to use different architectures and topologies of ANNs and evolutionary systems within complex procedures in order to optimize a specific medical target. This study's EEG analysis used: (1) different types of nonlinear autoassociative ANNs for squashing the data; (2) a new system, TWIST, based on a genetic algorithm, which manages supervised ANNs in order to select the most relevant features and to optimize the distribution of the data into training and testing sets; (3) a set of supervised ANNs for the final pattern recognition task. It is reasonable to conclude that ANNs and other adaptive systems should be used as cooperative adaptive agents within a structured project for complex, useful applications. NOTE IFAST is a European patent (application no. EP06115223.7, date of receipt 09.06.2006). The owner of the patent is the Semeion Research Center of Sciences of Communication, Via Sersale 117, Rome 00128, Italy. The inventor is Massimo Buscema. For the software implementation, see [29]. Dr. C. D. Percio (Associazione Fatebenefratelli per la Ricerca) organized the EEG data cleaning.
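As a final illustration of the validation protocol used above, a minimal sketch of 5 × 2 cross-validation follows: five independent half-splits, each used in both directions (train on A / test on B, then train on B / test on A), giving 10 blind experiments. For brevity the splits here are simple random halves; the study additionally keeps the case/control distribution similar across halves (stratification), and the classifier is a placeholder.

```python
import numpy as np

def five_by_two_cv(X, y, train_fn, test_fn, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(5):                       # five random half-splits
        idx = rng.permutation(len(y))
        a, b = idx[: len(y) // 2], idx[len(y) // 2:]
        for tr, ts in ((a, b), (b, a)):      # both training directions
            model = train_fn(X[tr], y[tr])
            scores.append(test_fn(model, X[ts], y[ts]))
    return np.mean(scores), scores           # mean over the 10 blind tests
```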
Analysis and applications of the proportional Caputo derivative

In this paper, we investigate the analysis of the proportional Caputo derivative that has recently been constructed. We create some useful relations between this new derivative and the beta function. We discretize the new derivative. We investigate the stability and obtain a stability condition for the new derivative.

Introduction

Fractional calculus is an emerging field of mathematics [1] having important contributions in modeling the dynamics of complex systems [2,3] from various fields of science and engineering [4,5]. Nowadays a huge debate has been opened by asking the simple question: "can we classify the fractional operators?" Curiously, the answer to this question is not simple and, so far, several answers have seemed possible [6][7][8][9][10][11]. A new non-singular fractional operator was proposed by Caputo and Fabrizio [12]; their result was generalized by Atangana and Baleanu [13] and applied successfully to many complex phenomena, including biological ones. Khalid et al. [14] have studied computationally the Caputo time-fractional Allen-Cahn equation. Owolabi [15] has studied, by analysis and numerical simulation, a multicomponent system with the Atangana-Baleanu fractional derivative. Akgül [16] has presented a novel method for a fractional derivative with non-local and non-singular kernel. Akgül [17] has investigated the solutions of differential equations with the generalized fractional derivatives. Atangana et al. [18] have investigated the analysis of the fractal fractional derivatives in detail. Fernandez et al. [19] investigated the series representations for fractional-calculus operators involving generalized Mittag-Leffler functions. Wu et al. [20] have investigated fractional impulsive differential equations, including the exact solutions, integral equations and the short memory case. Some inequalities were investigated within the proportional fractional operators [21,22], and in [23] the proportional derivatives of a function with respect to another function were investigated. Very recently, a new fractional operator has been constructed in [24]:

$$ {}^{PC}_{0}D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \bigl( k_{1}(\alpha,\tau)\, f(\tau) + k_{0}(\alpha,\tau)\, f'(\tau) \bigr) (t-\tau)^{-\alpha}\, d\tau. \qquad (1.1) $$

In this paper, we aim to analyze the above derivative in detail for $k_0(\alpha, t) = \alpha t^{1-\alpha} c^{2\alpha}$ and a matching choice of $k_1(\alpha, t)$. Here $c$ is a constant with the time dimension of $t$ for the two terms involved in the new derivative (1.1). The new fractional operator in the Caputo sense is a generalization of the classical proportional derivative introduced in [24], which has deep applications in control theory, and the new fractional operator promises further applications there. Due to the physical meaning of the initial conditions, we concentrate here on the Caputo fractional generalization. For more details see [25][26][27][28]. We construct the paper as follows. We give some scientific theorems for the new derivative in Sect. 2. We present the discretization and the applications of the proportional Caputo derivative in Sect. 3. We show the stability analysis in Sect. 4. We demonstrate the numerical results in Sect. 5. We discuss the conclusion in the last section.

Analysis of the proportional Caputo derivative

We present the following scientific results for the new derivative.

Lemma 2.1 We have the following relation for the new derivative given by (1.1); the desired result then follows. This completes the proof.

Lemma 2.2 If u and v are continuous and bounded, then we get the corresponding bound.

Lemma 2.3 Assume that f and g are differentiable and bounded. Then we obtain the stated relation. This completes the proof.
Lemma 2.4 If f and g are differentiable and satisfy the stated condition, the result follows. This completes the proof.

Lemma 2.5 Let f be analytic around 0; then we obtain $\sum_{j} a_j t^j\, \Gamma(j+\alpha+1)/\Gamma(j+2)$ (2.4). We let $\tau = ht$; the desired result then follows. This completes the proof.

Discretization and applications of the proportional Caputo derivative

We consider the new derivative (1.1) from [24]. We put $t_n = n\Delta t$; then at $t_{n+1}$ we take into consideration the approach of [18] for ${}^{PC}D^{\alpha}$. Here $u(x, 0) = g(x)$, $x_m - x_{m-1} = \Delta x$, $t_{n+1} - t_n = \Delta t$, $t_n = n\Delta t$, and $x_m = m\Delta x$. The derivative can then be approximated on this grid.

Stability analysis

We discretize the following problem and investigate its stability. We consider the heat equation and replace the left-hand side with the new derivative, evaluating the result at $(t_{s+1}, x_m)$. We put $u^{s}_{m} = \delta_s \exp(i k_m x)$. Plugging this into the discretized equation, simplifying, and using the relation between the trigonometric and exponential functions, we obtain, for $s = 0$, that $\left|\frac{\delta_1}{\delta_0}\right| < 1$ implies a condition which is true for all $m$; thus we get $\frac{B_{0,\alpha}}{A_{0,\alpha} + B_{0,\alpha} + 4a} < 1$. We then assume that $\left|\frac{\delta_s}{\delta_0}\right| < 1$ and show that $\left|\frac{\delta_{s+1}}{\delta_0}\right| < 1$; again the resulting condition holds for all $m$. Therefore, the method is stable under the derived condition.

Numerical results

We consider the following problem and apply the Laplace transform to Eq. (5.1). After simplification, applying the inverse Laplace transform yields the solution. We demonstrate this solution in the following figures for different values of $\alpha$. We choose $K_1(\alpha) = (1-\alpha)w^{\alpha}$, $K_0(\alpha) = \alpha c^{2\alpha} w^{1-\alpha}$, $c = 1$, $w = 0.5$ and $u(0) = 1$ in Figs. 1-6. In Fig. 7, we choose $c = w = \alpha = 0.8$. In these figures, we can see the effect of the fractional order.

Conclusion

We presented the analysis of the proportional Caputo derivative in this paper. We presented some scientific theorems for this new derivative. We discretized the new derivative. We presented the stability analysis and experiments. We obtained the stability condition for a problem using the new derivative. We considered a problem with the constant proportional Caputo derivative. We solved the problem by the Laplace transform. We demonstrated the numerical simulations by some figures.
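To make the operator tangible, the sketch below numerically evaluates the constant-proportional-Caputo derivative by direct quadrature. The integral form follows the hybrid structure of (1.1), and the kernels $K_1(\alpha) = (1-\alpha)w^{\alpha}$ and $K_0(\alpha) = \alpha c^{2\alpha} w^{1-\alpha}$ are the constants quoted above for the figures; the midpoint quadrature itself is our own illustration, not the scheme of Sect. 3.

# Hedged numerical sketch of the constant-proportional-Caputo derivative.
import numpy as np
from math import gamma

def cpc_derivative(f, df, t, alpha, c=1.0, w=0.5, n=4000):
    """Approximate D^alpha f(t) = 1/Gamma(1-a) * integral_0^t
    (K1*f(tau) + K0*f'(tau)) * (t-tau)^(-a) dtau."""
    K1 = (1.0 - alpha) * w**alpha
    K0 = alpha * c**(2.0 * alpha) * w**(1.0 - alpha)
    tau = np.linspace(0.0, t, n + 1)
    a_, b_ = tau[:-1], tau[1:]
    mid = 0.5 * (a_ + b_)
    # Integrate the weakly singular weight (t-tau)^(-alpha) exactly on each
    # subinterval; the smooth part of the integrand is frozen at the midpoint.
    wseg = ((t - a_)**(1.0 - alpha) - (t - b_)**(1.0 - alpha)) / (1.0 - alpha)
    smooth = K1 * f(mid) + K0 * df(mid)
    return float(smooth @ wseg) / gamma(1.0 - alpha)

# Example: derivative of f(t) = t at t = 1 for a few orders.
for alpha in (0.5, 0.8, 0.9):
    print(alpha, cpc_derivative(lambda s: s, lambda s: np.ones_like(s), 1.0, alpha))

Exact integration of the singular factor over each subinterval keeps the scheme stable as alpha approaches 1, where a naive rectangle rule would degrade.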
Low-loss single-mode hollow-core fiber with anisotropic anti-resonant elements

A hollow-core fiber using anisotropic anti-resonant tubes in the cladding is proposed for low-loss and effectively single-mode guidance. We show that the loss performance and higher-order mode suppression are significantly improved by using symmetrically distributed anisotropic anti-resonant tubes in the cladding, elongated in the radial direction, when compared to using isotropic, i.e. circular, anti-resonant tubes. The effective single-mode guidance of the proposed fiber is achieved by enhancing the coupling between the cladding modes and higher-order core modes through suitable engineering of the anisotropic anti-resonant elements. With a silica-based fiber design aimed at 1.06 µm, we show that the loss extinction ratio between the higher-order core modes and the fundamental core mode can be more than 1000 in the range 1.0-1.65 µm, while the leakage loss of the fundamental core mode is below 15 dB/km in the same range.

Introduction

Light guidance in hollow-core fibers (HCFs) [1,2] has enabled new applications due to their extraordinary properties compared to solid-core fibers: when light propagates in a gas-filled core instead of glass, it propagates faster and often with low dispersion, and the gas tolerates extremely large pulse energies and allows tunable control over dispersion and nonlinearity through pressure [3]. Applications include high-power [4] and ultra-short pulse delivery [5], pulse compression [6], mid-IR transmission [7], telecommunication [8] and terahertz applications [9]. The hollow-core photonic band gap (HC-PBG) fiber is a common HCF, which guides light in the air core using a 2D periodic cladding structure showing a photonic band gap [1]. Thus, the cladding does not support modes for a certain range of optical frequencies and propagation constants. In these ranges the core mode is not able to couple with cladding modes and is thus guided in the hollow air core. However, the HC-PBG fiber suffers from limited transmission bandwidth [10], strong power overlap of the core modes with the glass cladding and high group-velocity dispersion (especially at the band-gap edges). Negative-curvature HCFs are interesting alternatives, promising a low power fraction in the cladding, low dispersion, and broadband transmission [11][12][13][14][15][16][17]. The term "negative curvature" indicates that the surface normal to the core boundary is oppositely directed from the core [12]. The hollow-core anti-resonant (HC-AR) fiber is particularly simple as it only needs a single layer of cladding tubes. The key property of the HC-AR fiber is that it has a sequence of narrow-bandwidth high-loss regions, where the core modes become resonant (phase matched) with the cladding modes. In between these high-loss regions the core modes are anti-resonant with the cladding modes, which allows air-core confinement, i.e.
low-loss transmission. The absolute amount of loss of the core modes will then depend on inhibited coupling between the core and cladding modes due to a low density of cladding modes [18][19][20][21], which is a property that can be controlled by suitable fiber design engineering. Loss is further reduced in a "node-free" design [16], where the cladding tubes do not touch each other, so the core mode no longer couples to the glass cladding mode in the nodal intersections. The current challenges of HC-AR fibers are realizing loss values comparable to HC-PBG fibers as well as single-mode operation, achieved by increasing the loss of higher-order modes (HOMs). HC-AR fibers with both circular AR tubes [12,15,22] as well as more intricately shaped AR tubes [11,13] have been investigated. A significant loss reduction is possible with nested tubes inside the AR tubes [14,16,17,23], but this substantially increases complexity. Here we propose a simpler solution using anisotropic AR tubes, elongated along the fiber radial direction, which allows simultaneously achieving (a) an increased negative curvature in the core, (b) a node-free design, and (c) a larger distance from the core to the outer capillary. All these properties could not be achieved simultaneously in the previous cases [11][12][13][14][15][16][17][22]. Importantly, these properties offer a degree of freedom in the design to reduce the losses significantly, achieve low-loss broadband transmission, and effectively suppress HOMs. Numerical results for a silica-based design targeting 1.06 μm show that the leakage loss can be reduced by 1-2 orders of magnitude (4 dB/km at 1.06 µm) compared to the standard HC-AR fiber with circular AR tubes. Moreover, the fiber is made effectively single-moded by suppressing HOMs, resulting in an extinction loss ratio between the core HOMs and the fundamental mode (FM) that is over 1000 in the 1.0-1.65 µm spectral range, while the FM in the same range experiences loss <15 dB/km. Such specifications cannot be reproduced with the standard isotropic design without using complex designs with multiple nested AR tubes [17].

Numerical results

Figures 1(a)-1(c) show the considered HC-AR fiber geometries, using a thick outer capillary with AR tubes on the inner wall. Design (a) is the usual case with touching isotropic (circular) AR cladding tubes. Starting from (a), the design is optimized for minimal losses, resulting in the circular design (b) and the elliptical design (c). The latter is the proposed anisotropic AR tube design, here an ellipse squeezed in the azimuthal direction. Other anisotropic shapes are possible. We focus on a silica fiber designed for λ = 1.06 μm (i.e. for high-power Yb lasers), which has 6 AR tubes (a larger number is also feasible, and below we will specifically compare 6 vs.
8 AR tubes), a fixed core radius R = 15 µm (large enough to enable high-power transmission) and silica strut thickness t = 0.42 µm (making the first high-loss resonance occur at around λ = 0.88 µm [16]). This choice of strut thickness implies that light at 1.06 μm is guided in the fundamental AR transmission band, which is favorable compared to the next higher-order AR transmission bands since it performs better in terms of loss and transmission bandwidth. The ellipticity is defined as η = r_y/r_x, where r_y is the radius in the azimuthal direction and r_x is the radius in the radial direction; in the following we keep r_x = 15 μm fixed, and η < 1 will reduce the loss. We used a quarter of the geometry for the numerical calculations because of mode symmetry [24], except for the bend loss calculations, where a half geometry was used due to the reduced symmetry of the elliptical case. We used the same numerical method as explained in [17], which, briefly explained, relies on finite-element simulations to calculate the fiber modes and their propagation constants.

Optimization of the leakage loss of the HC-AR fibers

First the HC-AR fibers were optimized to get the lowest loss at 1.06 µm by adjusting the size of the AR elements with the core size fixed (see [17] for details on the calculations). Figure 2(a) shows the leakage loss (or confinement loss, α_c) as a function of AR air-hole radius for the circular case. When the AR tube radius decreases from r = 15 to 10.2 µm, the leakage loss decreases to a minimum value of 30 dB/km, i.e. an improvement of around one order of magnitude. Interestingly, the AR tubes are here much smaller than in the "non-touching" node-free design (i.e. where the AR tubes are reduced just enough to prevent them from touching each other), which otherwise has been considered optimal [16,22]: the reasoning has been that the core FM then no longer has coupling loss to the cladding modes that, in the touching case, reside in the glass intersections in these nodes. However, this would imply a sharp drop in loss as r is taken below 15 μm to the non-touching value. Instead the drop is continuous, indicating that the coupling loss to the cladding modes in the glass nodes is not dominating for the chosen fiber design (however, this does not mean that it is unimportant for other designs), and the loss instead drops smoothly because the FM phase-mismatch to the cladding modes increases gradually. This is eventually balanced by the increased loss of the FM as its evanescent tail overlaps more with the outer capillary wall as the circles shrink. This is because we here fix the core size, so the shrinking circles imply that the outer capillary becomes closer to the core. Figure 2(b) shows the leakage loss for the elliptical case. As we fix the core size and decrease η, the major axis (in the radial direction) is fixed at r_x = 15 µm, and the minor axis (in the azimuthal direction) changes from r_y = 15 to 9 µm. The lowest leakage loss of 4 dB/km was obtained for η = 0.65, i.e., r_x = 15 µm and r_y = 9.80 µm. Thus, an improvement of orders of magnitude is realized by squeezing the azimuthal axis of the AR tubes. The minimum has a different explanation than in the circular case, because we are here able to fix the distance from the core to the outer capillary. Instead, the loss improvement obtained for η < 1 is due to an increased phase mismatch between the FM and the cladding modes, which is eventually balanced by an increased leakage loss as the FM starts leaking into the voids between the ever slimmer AR ellipses.
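The quoted resonance position can be cross-checked against the standard anti-resonance (ARROW) condition, lambda_m = (2t/m) * sqrt(n^2 - 1). Assuming n ≈ 1.45 for silica near 1 µm (our assumption, not stated in the paper), t = 0.42 µm indeed puts the first resonance near 0.88 µm:

# Cross-check of the quoted strut resonance via the ARROW condition
# (sketch; silica index n ~ 1.45 is an assumed round value).
from math import sqrt

def resonance_wavelengths(t_um, n=1.45, orders=(1, 2, 3)):
    """High-loss resonances (in um) of a glass strut of thickness t_um."""
    return [2.0 * t_um / m * sqrt(n * n - 1.0) for m in orders]

print(resonance_wavelengths(0.42))  # first resonance ~0.88 um, as quoted
print(resonance_wavelengths(0.35))  # thinner struts shift it to ~0.74 um

The second line also previews the behavior discussed in the strut-thickness study below: thinner struts push the resonance, and with it the low-loss window, toward shorter wavelengths.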
Figure 2(c) depicts the spectral loss distribution for different HC-AR fibers. Curves 1-3 show the circular cases: case 1 (green), where the tube walls are touching each other, thus forming glass nodes in the cladding, and case 2 (black), where the air-hole radius is reduced to 14 µm so the AR tubes no longer touch each other. Case 3 (blue) shows the additional reduction of leakage loss until the minimum is reached, achieved as mentioned above by shrinking the circular tubes further (thus separating them further). Finally, case 4 (red) shows the elliptical design, displaying significantly improved loss performance; the leakage loss is over one order of magnitude lower than for the best circular design. Moreover, the low-loss range spans nearly the entire near-IR, which is promising for broadband ultrafast applications.

Scaling the strut thickness

One of the limiting factors in high-power beam delivery is the so-called fraction of power in silica (FOPS), i.e., the optical power overlap with the silica cladding, which should be kept small. Figure 3(b) shows that FOPS can be reduced by reducing the strut thickness of the silica (~2×10⁻⁵ at 1.06 µm for t = 0.35 μm), which is several orders of magnitude lower than for the HC-PBG fiber [4]. This makes HC-AR fibers an ideal medium for high-power beam delivery. When comparing FOPS for circles and ellipses the performance is similar, but the advantage of using ellipses comes in terms of leakage loss, see Fig. 3(a), which has a local minimum at a wavelength controlled by the strut thickness. As the strut thickness is reduced, this minimum shifts towards lower wavelengths, but at the design wavelength of 1.06 µm the leakage loss becomes quite high as the strut thickness is reduced. In the elliptical case the wavelength loss-variation is instead much flatter, so the leakage loss at 1.06 µm varies only little when the strut thickness is reduced. Figures 3(c)-3(d) summarize these trends by showing the leakage loss and FOPS vs. strut thickness at 1.06 µm for both cases: elliptical tubes have one order of magnitude lower losses compared to circular tubes when the strut thickness varies from 0.35 to 0.42 µm (note the y-axis is linear); in turn the FOPS is almost the same for both cases. The elliptical design will therefore allow for more design degrees of freedom. Note that t = 0.35 μm is generally considered a practical limit for proper fiber cleaving, rather than a fabrication limit. This limit is less severe for silica HC-AR fibers at longer wavelengths as these demand larger strut thicknesses, in which case the potential of the elliptical design can be exploited fully.
Effectively single-mode operation

HC-AR fibers with large cores are not single-moded, but they can be made effectively single-moded by engineering the shape and size of the AR tubes so the HOMs experience more loss than the FM [17]. Figure 4(a) shows the relative effective indices (Δn_eff = n_eff − 1) of the first three core modes (here denoted LP01, LP11, and LP21) and the first three cladding modes. The core FM (LP01) has the highest Δn_eff, which remains constant as a function of ellipticity. The first three cladding modes have only slightly larger Δn_eff than the first core HOM (LP11) because the core area is only a few times larger than the area of a single cladding tube. Thus, the first HOM is located within the domain of the cladding modes, which increases phase matching to cladding modes [25,26]. This effect is more evident for the strongly elliptical AR tubes (η ~ 0.60-0.70), where the cladding modes are better phase-matched to the HOMs than to the FM, which effectively suppresses the guidance of the HOMs due to higher losses. The FM loss decreases much more than the HOM loss when the ellipticity is decreased, and in the η = 0.60-0.70 range the HOM losses even start increasing while the FM loss remains at 4 dB/km. This shows how the HOM losses can be made higher by suitably choosing the ellipticity. The aim is to maximize the so-called HOM extinction ratio (HOMER), defined as the ratio between the loss of the HOM with the lowest loss and the FM loss [16]. The maximum HOMER was found to be ~2500 at η ~ 0.61, while at η = 0.65, where the lowest FM loss was found at 1.06 μm, a HOMER of ~200 is found; both are high enough to make the fibers effectively single-moded. Remarkably, the η = 0.61 case is only slightly more lossy (5 dB/km) than the η = 0.65 case, so the loss penalty of maximizing HOMER is small. This again shows the design freedom of the anisotropic AR elements. The spectral loss and HOMER are shown in Fig. 4(b). The HOMER for the design with the lowest loss at 1.06 μm (η = 0.65) can be made in excess of 150 between λ = 0.95-1.8 µm, while keeping α_c < 15 dB/km. Thus, this fiber has low loss and is effectively single-moded over an octave of bandwidth. Interestingly, when the loss is increased slightly to maximize HOMER (η = 0.61), HOMER > 1000 is obtained between λ = 1.0-1.75 µm with α_c < 15 dB/km from λ = 1.0-1.65 μm. Figure 5(a) shows HOMER vs. wavelength and confirms that the elliptical case outperforms the circular case in the entire wavelength regime 0.9-2 µm. Figure 5(b) shows the bend loss (α_b) of the considered structures, calculated in both the x and y directions. The elliptical case shows an azimuthal variation of the bend loss, evidenced by a loss peak seen only in the x-direction for low bending radii due to increased core-cladding mode coupling. Therefore the circular case shows better bend-loss performance for low bend radii for the 6-tube structure studied here.

Comparison with other design cases

In Fig. 6 we compare the loss performance and HOMER of 6 and 8 AR tubes. The calculated loss spectra in Fig. 6(a) show that 6 and 8 AR tubes have similar loss performance in the 0.9-1.45 µm spectral regime; the 8-tube cases have slightly lower losses, but using 6 tubes gives a much broader low-loss transmission window for both circular and elliptical cases. We also note that the 6-tube cases (both circular and elliptical) are very smooth, while the 8-tube cases show spectral fluctuations vs.
wavelength at the end of the transmission window. Similar fluctuations have previously been attributed to the presence of nodes in the cladding [16], but clearly the origin here is different, as there are no nodes. The reason is instead found in the fact that for 8 tubes the core mode at longer wavelengths expands its mode field diameter so that it starts interacting weakly with cladding modes found at the outer capillary wall [27]; this leads to the observed fluctuations. Figure 6(b) shows the HOMER, from which we see that 6 elliptical AR tubes have a much higher HOMER compared to 8 AR tubes over the whole spectral regime. Finally, Figs. 6(c)-6(d) show the bend-loss performance in the x- and y-directions, respectively. For 8 AR tubes the elliptical case has better bend-loss performance than the circular case. We believe this is because in the 8-tube case the AR tubes are smaller than in the 6-tube case. Therefore, for small bend radii, the 6-tube design shows coupling between the core modes and tube modes due to its larger AR tubes, whereas this coupling is reduced for 8 AR tubes because of the smaller AR tubes. Figure 6 also shows contour plots of the fundamental modes for a 10 cm bending radius, in which for 6 AR tubes there is coupling between the core modes and tube modes, whereas for 8 AR tubes there is none. Therefore, choosing 6 or 8 tubes is a compromise in terms of whether loss bandwidth, HOMER or bend loss is the most important feature. We also considered introducing the ellipticity from the optimized circular design (10.2 μm AR tube radius), but instead of squeezing the ellipse azimuthally, it was elongated radially (i.e. extending the major axis) while keeping the core size fixed. This implies that the outer capillary expands as the ellipticity drops, which results in an improved loss performance compared to shrinking the minor axis (the case we have discussed so far). However, our calculations showed that the HOMER could not reach the same high values as found in Fig. 5(a). This implies that it is harder to reduce the phase-mismatch between the core HOMs and the cladding modes when extending the major axis.

Conclusion

In summary, a novel hollow-core anti-resonant fiber design has been proposed, in which the anti-resonant elements in the cladding are anisotropic in shape, in contrast to the conventional isotropic circular shape. Numerical simulations using an elliptical shape as the anisotropic cladding element showed loss performance and effective single-mode guidance that could not be achieved with isotropic (circular) cladding elements. In both cases the guiding of light in the core is based on the anti-resonances of the struts in the cladding and the inhibited-coupling mechanism. However, the anisotropic shape has improved performance because it simultaneously offers: (a) strong negative curvature in the core, (b) node-free (non-touching) anti-resonant elements, and (c) a larger distance from the core to the outer capillary for a given core curvature. This gives a design degree of freedom essential for enhancing the performance, e.g.
by fixing the size of the core and outer capillary while tuning the ellipticity to minimize loss. We studied a specific case, where a silica-based fiber was optimized for the Yb-laser wavelength of 1.06 μm. The HOM extinction ratio was over 1000 in the range λ = 1.0-1.75 µm with FM loss <5 dB/km at 1.06 µm and <15 dB/km for λ = 1.0-1.65 µm, which relied on increasing the HOM loss by reducing the phase-mismatch between the HOMs and the cladding modes, while still maintaining a large phase-mismatch between the FM and the cladding modes. These properties are extremely promising for ultra-fast extreme nonlinear optics applications exploiting fiber-based light-matter interaction [3]. The proposed design is generic, irrespective of glass composition and target wavelength, and we expect it to improve almost any type of hollow-core fiber design exploiting the anti-resonant effect.

Fig. 1. Geometry of the considered HC-AR fibers, keeping fixed the core radius R = 15 µm and silica strut thickness t = 0.42 µm. The structural parameters shown in the figure are those that optimized the leakage loss at 1.06 μm. The figures are scaled to indicate their relative size.

Fig. 2. Calculated leakage loss at 1.06 µm as a function of (a) air-hole radius for circular AR tubes and (b) ellipticity of the AR tubes keeping r_x = 15 µm. Inset: FM field profiles at 1.06 µm. (c) Loss vs. wavelength for different HC-AR fibers (dashed line: λ = 1.06 μm). All structures have the same core radius R = 15 µm and uniform silica strut thickness t = 0.42 µm.

Fig. 3. Calculated (a) leakage loss and (b) fraction of power in silica (FOPS) vs. wavelength for different strut thicknesses. Solid lines and dashed lines are calculated for t = 0.42 µm and t = 0.35 µm, respectively; (c) leakage loss and (d) FOPS vs. strut thickness at 1.06 µm.

Fig. 5. (a) Wavelength dependence of HOMER for circular (r = 10.2 µm) and elliptical (η = 0.61) AR tubes; (b) bend loss vs. bend radius for circular (r = 10.2 µm) and elliptical (η = 0.65) AR tubes with t = 0.42 µm. The FM profiles are shown on the right-hand side for a 10 cm bending radius.

Fig. 6. (a) Loss vs. wavelength, (b) HOMER, and (c)-(d) bending loss vs. bending radius for different HC-AR fibers. All structures have the same core radius R = 15 µm and uniform silica strut thickness t = 0.42 µm. All fiber designs are optimized at 1.06 µm to give minimum leakage loss. The contour plots of the fundamental air-core mode distribution are shown on the right-hand side for a 10 cm bending radius. The color of the frame corresponds to the color of the line in the plot.
Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership

Theoretical biology encompasses a broad range of biological disciplines, ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.

Main text

Theoretical Biology and Medical Modelling (TBioMed), a 10-year-old online journal, has grown steadily since its launch in late 2003 as an independent journal of BioMed Central (BMC). Included in PubMed and PubMed Central and indexed by SCI for an impact factor, the journal has increasingly attracted a broad scientific audience interested in mathematical modelling studies in biology and medicine [1]. We are pleased to announce that we have accepted the role of Editors-in-Chief of TBioMed, from June 2013, on behalf of all scholars in the community. We act as successors of Dr. Paul Agutter, a medical expert and mathematical modeller, who has made a respected effort to cultivate the field, grow TBioMed and make the journal recognized across the world. There have been four notable characteristics of TBioMed which we regard as advantageous for authors and readers, and we aim for these to remain unchanged. First, as an independent journal of BMC, TBioMed has continuously been an open access journal. An open access journal permits all scientists across the world to have unrestricted and unlimited access to every single study in the field of theoretical biology and medicine. Instead of printing and selling the journal, the authors pay an article processing charge, thereby allowing all readers to download the article via the internet and print it free of charge. Second, the online journal has another advantage for submitting authors, in that there is no limitation on the number of pages, figures or data that can be included per article. For instance, the authors can decide whether a rigorous mathematical proof should be included in the main text or in the online-only supporting material. Third, TBioMed differs from other journals in fulfilling the scope of the latter half of its title, "Medical Modelling", and attracting medical studies that are useful in applications to diagnosis, treatment and prevention. In fact, many published studies have been operational and directly applicable to existing medical problems. Fourth, the length of peer review is kept shorter than that of other journals in theoretical biology and medicine. We make an effort to ensure a fast review process, as we understand that operational studies cannot wait months and years to reach publication, even when the study involves rigorous mathematical and analytical exercises. In the field of theoretical biology and medicine, publications of new studies have tended to be restricted to those with an explicit methodological advancement and those significantly improving our understanding of biology. These types of publications, in particular, are seen in other good journals that have restricted scopes, such as PLoS Computational Biology or Journal of Theoretical Biology. Otherwise, original modelling studies are frequently considered by interdisciplinary journals, which remain rather broad and have very little restriction on content (e.g. PLoS ONE).
In relation to this point, we believe that TBioMed can beautifully fill its biomedical modelling niche, and indeed, the scope and standpoint of TBioMed are in line with this notion. We aim to offer a platform to publish, read and discuss relatively unrestricted studies, while at the same time maintaining good quality control of the scientific content. Under the Open Access publishing model, authors need to consider the financial support available to them to cover the article processing charge for publication in TBioMed [2]. For authors from developing nations in particular, with evident financial difficulty, BioMed Central operates an open access waiver fund [3], and authors who genuinely cannot afford to pay the article processing charge are able to request a discretionary waiver. Otherwise, authors are asked to promise to cover the charge upon submission, and in return, we promise to make an effort to justify the cost with academic merit. To justify the article processing charge, we aim to reach an improved impact factor and other citation metrics. Presently, the impact factor of TBioMed is low to moderate, i.e., 1.46 in 2012, and other citation metrics are not substantially high (e.g. the Eigenfactor is 0.00182). One of the most important roles for us during the next few years is to ensure greater impact and better metrics for the journal. We aim not only to recover and improve the impact factor, but also to compete with and replace the aforementioned journals in the same subject category. We plan to promote special issues on topical subject areas and publish other special materials, including award lectures. Moreover, we aim to contribute to education among students and early career researchers, and we will also consider a series of solicited review articles written by invited leading scholars. As we prepare to lead the journal, we would like to acknowledge the efforts that have been made by the founding editor, Dr. Denys Wheatley, and the former Editor-in-Chief, Dr. Paul Agutter, in establishing the journal. Dr. Wheatley has worked on cell biology to better understand the mechanism of cancer, and most notably, he has been successful in raising and growing important journals in his professional areas, including Cancer Cell International. Dr. Agutter has worked on a broad range of studies in theoretical biology, including the aetiology of deep venous thrombosis and chronic venous insufficiency, allometric scaling of metabolic rate, mechanisms of intracellular transport, and the history and philosophy of medicine and biology. We must emphasize that Dr. Agutter has spent a substantial fraction of his personal time in handling the manuscripts published in the journal to date, and we also thank him for his voluntary editing of submitted manuscripts written by authors in non-English-speaking countries. Dr. Wheatley and Dr. Agutter are not only friends but also collaborators [4]. Dr. Ed Rietman, Professor Rongling Wu and I were put forward by Dr. Agutter as his successors, providing him with a chance to take a small step back from his busy life and benefit from having more personal time. As new Editors-in-Chief, all of us have contributed to the journal in different subjects [5][6][7], sharing different expertise and having gained editorial experience as part of the editorial board of TBioMed. To deal with the broad range of subjects covered by the journal, new associate editors (who help manage peer review) and editorial board members, who represent different subject areas, have been recruited.
Moreover, we feel very confident in having Drs. Wheatley and Agutter remain involved with the journal as associate editors. Splitting the editorial responsibility three ways, while supported by our strong colleagues on the board, we are taking over with the belief that TBioMed can be further strengthened and take the lead in its subject category. We invite all authors in the community to consider TBioMed when submitting original studies in the area of theoretical biology and medicine. We adopt a broad definition of "biology", and all original research studies that focus on theoretical ideas and models associated with developments in biology and medicine are considered. Submissions that are not only technically sound but also contribute to the field, by offering either improved understanding in biology or progress in theory or methods, are highly welcome.
A Novel Dual Ultrawideband CPW-Fed Printed Antenna for Internet of Things (IoT) Applications

This paper presents a dual-band coplanar waveguide (CPW) fed printed antenna with rectangular design blocks having ultrawideband characteristics, proposed and implemented on an FR4 substrate. The size of the proposed antenna is just 25 mm × 35 mm. A novel rounded-corners technique is used to enhance not only the impedance bandwidth but also the gain of the antenna. The proposed antenna design covers two ultrawide bands, 1.1-2.7 GHz and 3.15-3.65 GHz, thus covering the 2.4 GHz Bluetooth/Wi-Fi band and most of the bands of 3G, 4G, and a future expected 5G band, that is, 3.4-3.6 GHz. Being a very low-profile antenna makes it very suitable for future 5G Internet of Things (IoT) portable applications. A step-by-step design process is carried out to obtain an optimized design with good impedance matching in the two bands. The current densities and the reflection coefficients at different stages of the design process are plotted and discussed to give good insight into the final proposed antenna design. This antenna exhibits stable radiation patterns in both planes, with low cross polarization and low back lobes, and a maximum gain of 8.9 dB. The measurements are found to be in good accordance with the simulated results.

Introduction

Internet of Things (IoT) applications incorporate major advancements in computer networking, microelectronics and modern communication systems. This technology enables physical sensing and actuating devices to be controlled remotely over the Internet. To attain reliable communication, these devices are required to be compact, cost-effective, and energy efficient, and to operate on multiple bands for LTE, WLAN (IEEE 802.11 a/b/g/n), WiMAX (IEEE 802.16), ZigBee (IEEE 802.15.4), GSM (800 MHz, 850 MHz, and 1900 MHz), and so on. The scope for the Internet of Things (IoT) operating on these bands can be seen from growth figures: in 2003 the world population was 6.3 billion and the number of connected devices per person was about 0.08, while with the population grown to 7.2 billion in 2015, the number of connected devices per person had increased to 3.4. This trend is expected to grow exponentially, so the demand for smaller devices along with better antenna modules will grow as well. Due to the miniaturization of embedded systems, multiple modules can be assembled on these small gadgets to improve efficiency, reliability, and robustness for various scenarios of environmental monitoring, smart cities, smart healthcare, smart grid, military/defense, and so on [1,2].
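For orientation, the per-person figures above imply the following device totals (plain arithmetic on the numbers quoted in the text; note the ratios are devices per person, not percentages):

# Implied totals behind the per-person figures quoted above.
pop_2003, per_capita_2003 = 6.3e9, 0.0793
pop_2015, per_capita_2015 = 7.2e9, 3.4
print(pop_2003 * per_capita_2003 / 1e9)  # ~0.5 billion connected devices (2003)
print(pop_2015 * per_capita_2015 / 1e9)  # ~24.5 billion connected devices (2015)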
Apart from the many advantages of power options, flexibility, ease of installation, and replacement, there are numerous challenges of scalability, fault tolerance, energy harvesting, and security which need to be addressed for worldwide acceptability [3]. The antenna system, being the front end of all hand-held communication devices, is expected to cover all major frequency bands of IEEE 802.11 (2.4-2.48 GHz) and IEEE 802.15.4 (2.5-2.69 GHz / 3.4-3.69 GHz / 5.25-5.85 GHz) with acceptable gain and radiation patterns for multiple integrated services. Further, it is anticipated that a modern antenna design should be flexible enough to regulate the impedance bandwidth for various center frequencies independently [4]. Table 1 (reproduced in part below) compares existing CPW designs from the literature:

References | Type | Total area (mm²) | Bandwidth | Peak gain (dBi)
[12] | Dual-band | 1020 | 2.3-2.5 and 2.9-15.0 GHz | 2.5
[13] | Dual-band | 900 | 1.86-1.97 and 3.0-12.0 GHz | 3.0
[14] | Dual-band | 1250 | 3.4-3.6 and 8-15 GHz | -
[15] | Tri-band | 1600 | 2.28-2.58, 3.38-3.66, and 5.07-5.86 GHz | 3.3
[16] | Tri-band | 839.5 | |

Some well-known techniques, such as slots in the radiating patch, defected ground structures (DGS), engraving strips on the antenna, and the introduction of band-notched structures in the designs, have been adopted to satisfy the abovementioned characteristics [4,5]. Apart from these orthodox methods, metamaterial and complementary split ring resonator (CSRR) techniques are employed in the literature for obtaining higher gain in order to reduce cross-frequency interference. Ahmed et al. implemented a magneto-electric (ME) dipole antenna that showed wide impedance bandwidth, better gain, and matching radiation patterns on both the E-plane and the H-plane [6]. But such cross-magneto-electric structures are not suitable for mass production in Internet of Things (IoT) devices due to their large size and sensitive design parameters. On the other hand, the coplanar waveguide method has advantages such as wide bandwidth, a uniplanar design and ease of integration with MMICs and active components, making it more suitable for the targeted IoT applications.
Various CPW-fed antennas have been reported in the literature, such as multiband antennas [3][4][5][6][7][8], CPW antennas with an added strip for WLAN [9][10][11], and asymmetric coplanar strip antennas [12][13][14][15][16][17][18][19][20]. However, the majority of these designs have large antenna dimensions and do not cover all the major bands of WLAN/WiMAX/LTE [13][14][15][16][18]. The dual-band antenna in [21], with an average gain of 2.5 dB and dimensions of 25 mm × 25 mm, is compact in size but operates only in the WLAN band. Further, in some other designs, the antenna gains and reflection coefficient parameters are not impressive compared to the antenna dimensions [15][16][17][18][19][20][22][23][24][25]. In [26], a CPW-fed multiband antenna having dimensions 70.4 mm × 45 mm is implemented, with an impedance bandwidth of 127 MHz for the WiMAX band only. Similarly, a 70 mm × 70 mm CPW-fed WLAN antenna implemented in [27], operating at 2.4 GHz with a peak gain of 6.5 dB, has a very low impedance bandwidth. The rounded-corners concept is found in the literature to enhance the overall gain, with stable radiation patterns [22][23][24][25]. Moreover, frequency dispersion is reduced and current is uniformly distributed on the radiating-surface corners using this procedure. Adjustable strips in our design enabled the CPW design to improve bandwidth over the higher frequency bands without compromising the overall size of 875 mm². In order to reduce size and avoid complexity, major overlapped slots have been introduced in the proposed antenna design, along with two strips above and below the main radiating patch. The ratio of the compact ground plane length to the overall length of the antenna is optimized to achieve 50 Ohm impedance matching by adjusting the microstrip width and the gap between the microstrip and the sides of the ground plane [6,8,24]. In Table 1, a comparison is made between different existing CPW designs found in the literature and our proposed work. The design working principle and the antenna dimensions are explained in the following sections, along with detailed simulated and experimental results.

Design Specifications

The feed line calculation in the CPW design is depicted in Figure 1, whereas the detailed geometry of the proposed antenna is shown in Figure 2. The antenna is fabricated on an FR4 substrate with a relative permittivity of 4.4 and a standard thickness of 1.6 mm. The length, width, and wavelength of the main rectangular patch are calculated and gradually modified by calculating the resonant frequencies for the first and second resonance bands using the guided-wavelength expression for a coplanar waveguide design,

$$ \lambda_g = \frac{c}{f \sqrt{\varepsilon_{\mathrm{eff}}}}, $$

where $c$ is the speed of light, $\varepsilon_{\mathrm{eff}}$ is the effective relative permittivity of the substrate, which is equal to 2.7, and $\lambda_g$ is the guided wavelength, which depends on the length of the upper and lower strips for both bands. The characteristic impedance of the feed line having finite-width ground planes on each side of the FR4 substrate is given by Van Caekenberghe et al. [28],
where "" is complete elliptic integral of first iteration and "" and " " are CPW line dependent variables.These two parameters are calculated as follows: Center frequencies 1 and 2 are calculated from (3) and optimized using Ansoft's High Frequency Structure Simulator (HFSS) software package.The strip lengths 4 and 2 are optimized close to a quarter wavelength of center frequency considering min around 2.1 GHz and max at 3.6 GHz.The gap between the ground and feed elements "" is 1 mm and the length of the feed line is 18.7 mm, while the radius of small rounded corners on the main antenna segment is 1.4 mm.The detailed antenna design parameters values are illustrated in Table 2.The gap between the one of the ground planes and feed line is optimized through simulation software to be 1 mm.Band stop function is realized by adding overlapped rectangular and circular slot in the main rectangular radiator.This reduces Wireless Communications and Mobile Computing the interference and creates notched frequencies between 2.4 GHz and 3.4 GHz bands.Similar kind of reactively loaded CPW antenna in [29] shows promising results with y-shaped and u-shaped slots in rectangular patch.In literature, various shapes of the slots are used to enhance the bandwidth of CPW designs including square wavelength line slot, fractal shaped slots, asymmetrical CPW slots, and circular slots [25][26][27][29][30][31][32].Square shaped slot implemented in [33] showed ultrawideband bandwidth and reduced overall antenna size effectively. The method of overlapped symmetrical rectangular and circular slots is embedded in our proposed antenna to reduce interference of adjacent frequency bands to obtain efficient antenna parameters.To determine a good impedance matching, the electrical wavelength of the top and bottom strips is kept close to the quarter wavelength along with wideband microstrip coplanar strip line to couple the electromagnetic energy for better radiation efficiency.Similar kinds of small slit loaded antennas in [31,32,34] use series inductive slits and rectangular and circular shaped slots for impedance bandwidth improvement.Through iterative simulations, it is experienced that wider overlapped circular and rectangular slots are more productive in widening the impedance bandwidth and improving antenna gain. 
Antenna Performance

The fabricated prototypes of the two final designs are shown in Figure 3. The simulations are performed in Ansoft's HFSS, and the reflection coefficients of the proposed antenna are measured using a Vector Network Analyzer (E5072A). An SMA connector is carefully coupled with the ground and feed structures to obtain the measurements. The four design steps are shown in Figures 4(a)-4(d). Simulated reflection coefficient results for all antenna design steps/types are depicted in Figure 5. The antenna design process starts from a coplanar waveguide fed printed antenna formed by attaching a rectangular patch to the feed line, which attains a very broad fractional impedance bandwidth of more than 100% (1.1-3.9 GHz) for the S11 < −10 dB threshold, but without a second resonant band. However, our design goal is to make a dual-band antenna in which each band can be tuned/modified comparatively independently, without significantly affecting the other band, as per our design needs. The second design goal is to increase the gain of the antenna, as CPW-fed printed antennas are generally omnidirectional. In order to achieve these design goals, Antenna 1 is modified by adding an additional rectangular strip that creates a second resonance around 3.4 GHz, as shown in Figure 4(b). Antenna 3 is created by adding another top strip and etching overlapped slots in the first rectangular patch; it shows a first resonance at 2.4 GHz with an impedance bandwidth from 1.0 GHz to 2.7 GHz and a second resonance at 3.4 GHz with an impedance bandwidth from around 3.1 GHz to 3.7 GHz. Finally, Antenna 4 (proposed) is simulated and fabricated with the embedded rounded-corners technique for improved performance in terms of reflection coefficient and gain. This design attains dual bands with a simulated result of around 80% fractional bandwidth (1.1-2.8 GHz) in the first band and around 23% fractional impedance bandwidth (3.0-3.75 GHz) in the second resonance band. It is worth mentioning that S11 shows a better notched-frequency characteristic between the two bands. On the other hand, it is noted that introducing rounded corners has a very small effect on the resonant frequencies, but it has effectively improved the fractional impedance bandwidth and the gain of the final antenna design. Detailed parametric studies have been carried out, including all the major lengths, widths, feed lines, and positions of the rectangular strips, to achieve higher gain for the proposed design.

Current Distribution and Impedance Matching Analysis.
Figure 6 shows the current distribution on the antenna at 2.4 GHz and 3.4 GHz, giving better insight into the antenna design; it depicts that the current varies along the antenna axis, with minimum current at the ends due to the reduced "end effect." As a matter of fact, the antenna radiates energy because of its radiation resistance. The loss resistance of the antenna is small compared to the radiation resistance and is usually considered negligible in measurements. The input pulse and the corresponding electric field intensity are related by an expression connecting the input signal to the received signal in the antenna far-field. Using Ansoft's HFSS, full-wave time-domain results are studied. Maximum current at the top strip is obtained at both frequencies, with a 90-degree phase shift that justifies the inductance of the top cladding strip. The rounded-corner concept serves well, as it reflects more and more energy to the metal strips at the resonant bands. Nevertheless, a consistently large current density is concentrated on the top strip for both frequency bands as a common characteristic. Figure 7 shows the real and imaginary components of the input impedance of the final two designs. The rounded-corners model is tightly aligned to the 50 Ω line in the real part and has less tolerance in the imaginary part. Overall, better impedance matching is achieved by the rounded-corners design for both the first and second resonance bands. The covered bands include IMT (2100 MHz), LTE (1700, 1900 MHz), and a future expected 5G band, that is, 3.4-3.6 GHz. There is generally good agreement between simulated and measured results; differences can be attributed to factors such as the small antenna size, SMA connector quality, soldering effects, and uncertainties in the substrate dielectric constant. The major rectangular/circular slot sizes and the lengths/widths of both top and bottom strips were optimized during the design process. The effects of variations in the width of the bottom strip on the reflection coefficient S11 are plotted in Figure 9. It is found that the first resonance frequency increases with a decrease in the width of the bottom strip, with a negligible effect on the second band, while variation of the top strip controls the 3.4 GHz resonance frequency (second resonance band), which makes it simple and easy to reconfigure the design for other adjacent frequencies if needed. The gains of the antennas with and without rounded edges/circular slots, in dB versus frequency, are shown in Figure 10. Comparing Antenna 3 and Antenna 4, the peak gain is improved from 6.2 dB to 8.9 dB. This shows that the gain is reasonably increased when the bottom strip edges are rounded and circular slots are overlapped on some of the corners of the upper strip; the gain varies between 6.2 dB and 8.9 dB in the range of interest. The radiation patterns of this dual-band antenna at the first and second resonance bands are illustrated in Figure 11 for both principal planes. Higher-order modes are responsible for the distribution effect at higher frequencies. It is clearly evident from the 2D patterns that the antenna performs as a directional radiator in one principal plane and quite close to bidirectional in the other. These characteristics make this novel design a strong candidate, effectively suitable for Internet of Things (IoT) applications.

Conclusion

In this article, a novel rectangular-shape CPW antenna with overlapped circular slots and rounded edges is proposed.

Figure 1: Feed line calculation in CPW design.
Figure 2: Geometry of the proposed antenna: (a) parametric details and (b) major components of the proposed antenna.

Figure 5: Simulated reflection coefficients of the four antennas.

Figure 6: Current distribution on the proposed antenna: (a) vector current distribution at 3.4 GHz, (b) vector current distribution at 2.4 GHz, and (c) current densities at 2.4 GHz and 3.4 GHz.

Figure 10: The gains of the antennas with and without the rounded edges/circular slot in dB versus frequency in GHz.

Table 1: Comparison between different existing CPW designs and our proposed work.

Table 2: Proposed CPW antenna design parameter values.
Low-cost bump bonding activities at CERN

Conventional bumping processes used in the fabrication of hybrid pixel detectors for High Energy Physics (HEP) experiments use electroplating for Under Bump Metallization (UBM) and solder bump deposition. This process is laborious, involves time-consuming photolithography and can only be performed using whole wafers. Electroplating has been found to be expensive when used for the low volumes which are typical of HEP experiments. In the low-cost bump bonding development work, electroless deposition technology for UBM is studied as an alternative to the electroplating process in the bump size / pitch window beginning from 20 μm / 50 μm. Electroless UBM deposition used in combination with solder transfer techniques has the potential to significantly lower the cost of wafer bumping without requiring increased wafer volumes. A test vehicle design of sensor and readout chip, having daisy chains and Kelvin bump structures, was created to characterize the flip chip process with electroless UBM. Two batches of test vehicle wafers were manufactured with different bump pad metallization. Batch #1 had AlSi(1%) metallization, which is similar to the one used on sensor wafers, and Batch #2 had AlSi(2%)Cu(1%) metallization, which is very similar to the one used on readout wafers. Electroless UBMs were deposited on both wafer batches. In addition, electroplated Ni UBM and SnPb solder bumps were grown on the test sensor wafers. Test assemblies were made by flip chip bonding the solder-bumped test sensors against the test readout chips with electroless UBMs. Electrical yields and individual joint resistances were measured from the assemblies, and the results were compared to a well-known reference technique based on electroplated solder bump structures on both chips. The electroless UBMs deposited on AlSi(2%)Cu(1%) metallization showed excellent electrical yields and small tolerances in individual joint resistance. The results from the UBMs deposited on AlSi(1%) metallization were non-uniform, and closer inspection revealed micro-cracks at the aluminum/electroless-nickel interface. UBM deposition was also done for Timepix wafers, and the solder ball placement process was prototyped with 40 μm balls.

Introduction

Bump bonding of pixel detectors has been shown to be the major cost driver for some of the LHC vertex detectors [1,2]. The bump bonding procedure comprises depositing bumps on sensor and readout wafers, dicing of the wafers and Flip Chip (FC) bonding of the individual dies. The conventional flip chip bump used in a hybrid pixel detector consists of an electroplated Under Bump Metallization (UBM) and a solder alloy bump. The electroplating process is laborious and has been found to be expensive for low wafer bumping volumes, which is the case for HEP experiments. Wafer bumping costs could be reduced by using Electroless Nickel (EN) deposition technology for UBMs in combination with advanced solder transfer techniques. EN deposition technology enables various Flip Chip (FC) assembly scenarios, does not require lithography and is a high-volume capable batch processing technique. For these reasons the technology is cost-efficient and attractive. In the past, Electroless Nickel (EN) technology was quickly adopted by the electronics industry without careful characterization. This led to failures and reliability issues and to the poor reputation of electroless technology.
However, during the last 10 years many of the technical challenges have been understood [3], and fully automated process equipment lines allowing precise online control of the plating chemistry have been developed. This has contributed to improved quality, reproducibility and reliability of EN depositions. Solder ball placement systems have become commonly available during the last 5 years. Solder ball placement technology is cost-effective because of its simplicity and fully automatic operation. At the moment the finest solder balls available have a 40 µm diameter. The solder balls have a very well controlled volume, facilitating high yields in flip chip assembly. The novel solder ball placement technologies aim at a 100% bumping yield by controlling the quality with an automated vision system before and after the solder transfer. If bumping defects are recorded, the solder bumps will be reworked.

Figure 1. Process flow chart of ENIG/ENEPIG UBM deposition using the double zincating process.

The optimization of the bumping throughput and quality requires both individual solder ball placement and wafer-level mass transfer techniques. Currently, mass transfer is done with 60 µm solder balls, but the technique is developing rapidly and is foreseen to move to 40 µm in the near future. The 40 µm balls, which allow for a bumping pitch of ∼100 µm, could already be used in some HEP pixel detector systems, such as in outer tracking layers. However, since this requires covering large areas, the issue of the prevailing high FC assembly costs also has to be solved. In this work EN deposition of UBMs is studied as an alternative to electroplating in fine-pitch wafer bumping processes. The yields and electrical properties of EN UBMs are characterized on test vehicle structures. In addition, solder ball placement technology with 40 µm bumps has been demonstrated on Timepix chips with EN UBMs. Although the development of cost-effective FC assembly techniques is also essential for low-cost bump bonding, it is not included in this paper.

Electroless Nickel (EN) process description

The process flow of Electroless Nickel - Electroless Palladium - Immersion Gold (ENEPIG) and Electroless Nickel - Immersion Gold (ENIG) used in this work is illustrated in figure 1. The process begins with a series of activation steps which contribute to the surface pre-treatment of the Aluminium (Al). The first step is a cleaning cycle, in which all organic contaminants and Si residues are removed from the wafers. After the cleaning step, the native oxide of the aluminium is removed by an etching process to enable the growth of Zn in the subsequent zincating steps. In the zincating process a thin layer of Zn is nucleated on the Al pads. The zincating step is a prerequisite for the autocatalytic deposition of Ni on the Al pads. The catalytic nucleation process of Zn is based on an exchange reaction between the exposed Al on the wafers and the zinc complexes in the zincating solution. In the double zincating process that is used here, the first Zn layer is stripped in a nitric acid solution, which results in a more uniform Zn layer with a finer grain structure in the second zincating step [4]. The fine-grained and uniform Zn layer has been found to result in more uniform Ni growth in the subsequent process steps and also to increase the adhesion at the Al-Ni interface [5]. The deposition of Ni is started by dissolving Zn into the plating solution and replacing it with Ni.
Once the surface of the Zn layer has been completely covered by Ni, the autocatalytic plating reaction of Ni begins. The growth of Ni is isotropic, and this has to be taken into account in the design of bump structures. The deposited Ni follows the surface of the passivation very closely, but does not adhere to it. The mechanical contact is created only with the Al pad and not with the passivation. Immediately after the plating of Ni, either a thin layer of gold (Au), or palladium (Pd) followed by Au, is deposited onto the Ni to protect it from oxidation. The plating reaction in the immersion Au process is a self-limiting exchange reaction and typically results in a gold layer with a thickness of 50 nm - 100 nm. The optional Pd layer is complementary to immersion gold: it helps to maintain the solderability of the UBM after long or multiple heating cycles, and it also hinders the formation of Inter-Metallic Compounds (IMCs) between tin and nickel. Solder ball placement technologies Individual solder ball placement systems have been developed from gold wire bonding systems using the precision placement capabilities of the equipment. These systems place preformed solder spheres on the bump pads one by one. Typically, the individual bump placement systems can achieve a rate of the order of 10 bumps per second. This is economical for wafers with a low number of I/Os (< 200,000) or eventually for single chip area arrays. If a higher number of I/Os is used, the individual solder ball placement process ceases to be economical. The conventional systems make contact with the wafer while transferring the solder bump from the nozzle to the solderable UBM. However, new contact-free methods have been emerging in recent years. Pac Tech's advanced solder placement tool SB2 has a nozzle with a high-power laser. Solder spheres are injected into the nozzle one by one, instantly melted by the laser in an inert ambient and "spit" onto the chips. This technique is well suited to chips and wafers with a solderable UBM like ENIG. This technology is especially interesting for single chip bumping of readout chips made in MPW runs. If EN is deposited on the I/O pads, solder can easily be deposited on the chips. The most powerful solder ball placement technique is the mass transfer method, in which all the solder balls are moved onto the UBM-covered wafer in one step. Pac Tech has introduced the so-called Gang Ball Placement (GBP) technology, which seems to be one of the most promising low-cost bumping solutions currently available. In GBP a stencil grid is used in combination with vacuum on the arm side to hold the wafer-level array of solder balls and to compress them against the UBM pads. The stencil is a replica of the I/O matrix of the wafer to be bumped. The solder balls are picked up from a platter using a vacuum which is applied behind the stencil plate. The balls fill all the holes, and an ultrasonic vibration pulse is applied to shake off the excess balls. An automated vision system scans through the stencil grid to analyze the bumping quality. If a "pass" signal is given by the vision system, the solder balls are transferred onto the UBM pads over the whole wafer. If there are too many defects (missing bumps and solder ball clusters) in the stencil, all solder balls are dropped and reloaded. In addition, the individual solder ball placement machines can be used to rework single bump defects after the mass transfer.
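To make the throughput economics concrete, the short sketch below compares the two placement modes. Only the ~10 bumps/s serial rate and the <200,000 I/O economy threshold come from the text; the per-wafer cycle time assumed for the gang transfer is an illustrative placeholder, not a measured figure.

```python
# Back-of-envelope throughput comparison for individual vs. mass-transfer
# solder ball placement. The ~10 bumps/s rate and the <200,000 I/O economy
# threshold are quoted in the text; the gang-transfer cycle time below
# (stencil fill, inspection, transfer) is an assumed placeholder value.

def individual_placement_hours(n_bumps: int, rate_bumps_per_s: float = 10.0) -> float:
    """Time to place n_bumps one by one at the given serial rate."""
    return n_bumps / rate_bumps_per_s / 3600.0

def gang_transfer_hours(n_wafers: int, minutes_per_wafer: float = 10.0) -> float:
    """Mass transfer moves all balls onto a wafer in one step; assume a
    fixed per-wafer cycle time (hypothetical)."""
    return n_wafers * minutes_per_wafer / 60.0

if __name__ == "__main__":
    for n_io in (50_000, 200_000, 1_000_000):
        t_ind = individual_placement_hours(n_io)
        print(f"{n_io:>9} I/Os: individual placement ~{t_ind:5.1f} h per wafer, "
              f"gang transfer ~{gang_transfer_hours(1):4.1f} h per wafer")
```

At the quoted serial rate, 200,000 I/Os already take several hours per wafer, which is why the break-even point between the two techniques sits roughly where the text places it.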
In electroplating processes, the bumping quality cannot be controlled at this level, and therefore the solder transfer processes are expected to have better yields. Test vehicle wafers Two mask sets were created for processing test sensor and test readout wafers. Both wafer types contained 44 chips on 150 mm wafers. Two batches of test wafers were processed, both including sensor and readout wafers. Batch #1 had AlSi(1%) wiring layer metallization, which is similar to the one on the pixel sensor wafers used at CERN. Batch #2 had AlSi(2%)Cu(1%) wiring metallization, which is similar to the one used on pixel readout wafers. The 8" IBM CMOS wafers which are used at CERN have AlSi(1%)Cu(0.5%) as the topmost metal. However, due to the unavailability of a sputtering target of that particular alloy, AlSi(2%)Cu(1%) metal was used instead. In total, 24 Batch #1 and 17 Batch #2 wafers were processed by the authors at the VTT Micronova facility in Espoo, Finland. Both wafer batches went through an electroless UBM deposition process at Pac Tech, Germany. 12 wafers from Batch #1 were run through an ENIG process and 13 wafers from Batch #2 were processed with ENEPIG. The target UBM thicknesses were 6 µm and 4 µm for the ENEPIG and ENIG processes, respectively. In addition to the EN UBMs deposited on wafers from Batch #1 and Batch #2, tin-lead solder bumps with nickel UBMs were electroplated on five wafers (without EN) from Batch #1 with VTT's standard process. This bumping technology has been well characterized and is known to give good FC yields; it was therefore chosen both as the solder bump deposition technology and as the reference technology. Assembly procedures All the wafers were then diced and the chips were visually inspected prior to flip chip assembly. Only the chips with flawless bumping quality were chosen for FC assembly. Test assemblies were constructed by tack bonding test sensor chips with electroplated solder bumps against test readout chips with electroless UBM pads. Two sets of assemblies were made using test readout chips from Batch #1 (8 assemblies) and Batch #2 (16 assemblies). 8 reference assemblies were constructed similarly by tack bonding test sensor chips with electroplated solder bumps against test readout chips with electroplated thin solder bumps. A SET FC150 flip chip bonder was used for all the assemblies. The tack bonding sequence was followed by a collective assembly reflow process in a formic acid ambient at 230 °C. Results and discussions After the electroless deposition of UBM pads on the test wafers, the deposition yield and quality were visually estimated. Only very few visually observable defects were seen on the wafers and the yields were estimated to be better than 99.9% for both batches. The typical defects were EN growing from cracks in the passivation layer and missing UBM pads. As the EN process is maskless, the metal will grow from all the pinholes and cracks that reveal the underlying Al. Therefore, moving from electroplating to electroless deposition will require wafers with very good quality passivation. The passivation layer on the sensor wafers currently used in some CERN pixel detectors has been observed to contain pinholes and thus might not fulfil these requirements. The daisy chains and Kelvin structures were measured from the test assemblies with a probe station and an Agilent 34970A data acquisition unit.
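The raw daisy-chain resistances were then screened against a per-joint cutoff, the 1 Ω criterion applied in the next paragraph. A minimal sketch of that screening logic follows; the chain resistances in the example are hypothetical values, not measured data.

```python
# Minimal sketch of the daisy-chain screening described below: a chain is
# classified as nonfunctional if its average resistance per joint exceeds
# a cutoff (1 ohm in this work). The example inputs are hypothetical.

def classify_chains(chain_resistances_ohm, joints_per_chain, cutoff_ohm=1.0):
    """Return (yield_fraction, per-joint average resistances)."""
    per_joint = [r / joints_per_chain for r in chain_resistances_ohm]
    good = sum(1 for r in per_joint if r <= cutoff_ohm)
    return good / len(per_joint), per_joint

if __name__ == "__main__":
    # e.g. a 512-joint chain measuring 10 ohms total -> ~20 mOhm per joint (pass)
    example = [10.2, 9.8, 11.5, 600.0]   # total chain resistances in ohms
    y, per_joint = classify_chains(example, joints_per_chain=512)
    print(f"yield = {y:.0%}, per-joint averages = "
          + ", ".join(f"{r * 1e3:.0f} mOhm" for r in per_joint))
```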
If the average resistance of a single joint in a chain exceeded 1 Ω, the chain, and all the joints in it, were classified as nonfunctional. This is a very stringent rating policy considering that the shortest daisy chains have 512 joints. The results of the daisy chain measurements are summarized in table 1 and the results of the joint resistance measurements from the Kelvin bump structures are plotted in figures 2A and B. Figure 2. Distribution of single joint resistances for A) Batch #1, B) Batch #2 and reference assemblies. It should be noted that the measured values of joint resistance, especially from the reference samples, are close to or even below the resolution of the measurement equipment and should therefore be considered indicative values. As expected, the reference technology using electroplated tin-lead solder bumps had a perfect yield and a very low solder joint resistance of a few mΩ. The test assemblies from Batch #2 also showed good results, with a yield of 99.3% and a mean joint resistance of ∼20 mΩ. However, the assemblies from Batch #1 showed poor yields and joint resistances varying over many decades, as can be seen from figure 2A. To understand the reason behind the variation of the solder joint resistances in the assemblies from Batch #1, cross-sectional samples were made of assemblies from both batches and studied with Scanning Electron Microscopy (SEM). Cracks, shown in figure 3a, were frequently found between the Al and Ni layers in assemblies from Batch #1. Besides the cracks, the Al layer was strongly etched and the Al-Ni interface was rough. In the samples from Batch #2 (figure 3b) the Al-Ni interface was significantly smoother, there was less etching of the aluminum and, most importantly, there were no signs of cracking. The cracks at the Al-Ni interface are a plausible explanation for the high joint resistance values measured from the Batch #1 assemblies. It has been previously reported that small amounts of Cu in the Al bond pad metallization improve the electroless plating quality [3]. This effect can also be clearly seen in the results here. By alloying 1% of copper into the Al pad metallization of the Batch #2 wafers, the deposition quality of EN was significantly improved. Pac Tech's SB2 solder ball jetting system was demonstrated on Timepix chips having ENEPIG UBM pads (figure 4). The UBMs were grown at 110 µm pitch using a protective photoresist layer to mask every second pixel row and column, and 40 µm solder balls were jetted onto ∼16k I/O pads. 80 missing bumps were counted on the first prototyping sample, which corresponds to a 99.5% yield. As more samples are processed, the process will be further optimized and the bumping yield is expected to improve accordingly. Shear strength tests were done on 30 individual solder bumps at Pac Tech, and an average shear strength of 8 g/bump was measured, which is a good result. The individual solder ball deposition tests will continue with the bumping of single Timepix chips, followed by flip chip bonding. Conclusions Deposition of EN UBM structures was done on two sets of test wafers with different bump pad metallization to characterize the process. Chips with electroplated solder bump structures were FC assembled against chips with the EN UBMs and the test structures in the assemblies were electrically characterized. The results indicate that the alloying of Cu into the Al pad metallization significantly improves the EN UBM deposition quality. The AlSi(2%)Cu(1%) metallization used in the Batch #2 test wafers is very similar to the topmost metal on the IBM CMOS wafers used at CERN.
Therefore, the electroless UBM technology has the potential to serve as a low-cost alternative to electroplated UBMs on readout wafers. Further development is needed to optimize the process also for the AlSi(1%) metallization that is commonly used on sensor wafers. Furthermore, in order to fully realize the cost saving potential of the EN UBM deposition technology, low-cost solder bumping processes, such as solder ball placement methods, need to be developed for the bump pitch window of 50-100 µm. Individual solder ball placement tests were successfully demonstrated on a single Timepix chip using 40 µm balls and a 110 µm pitch.
Tunable Adhesion for Bio-Integrated Devices With the rapid development of bio-integrated devices and tissue adhesives, tunable adhesion to soft biological tissues started gaining momentum. Strong adhesion is desirable when used to efficiently transfer vital signals or as wound dressing and tissue repair, whereas weak adhesion is needed for easy removal, and it is also the essential step for enabling repeatable use. Both the physical and chemical properties (e.g., moisture level, surface roughness, compliance, and surface chemistry) vary drastically from the skin to internal organ surfaces. Therefore, it is important to strategically design the adhesive for specific applications. Inspired largely by the remarkable adhesion properties found in several animal species, effective strategies such as structural design and novel material synthesis were explored to yield adhesives to match or even outperform their natural counterparts. In this mini-review, we provide a brief overview of the recent development of tunable adhesives, with a focus on their applications toward bio-integrated devices and tissue adhesives. Introduction Although adhesion has long been studied, early efforts focused on the contact between stiff materials [1]. Due to the emerging interest in reconfigurable systems [2] and bio-integrated devices [3,4], adhesion that involves a soft material with different levels of adhesion strength or even a tunable range started attracting attention. Soft materials of interest range from synthetic polymers to biological tissues [5]. Adhesion involving soft materials could be affected by the surface structure/morphology, the deformation of soft materials, and wet/dry conditions, among many others [6][7][8]. The strategies to design and achieve various levels of adhesion strength can be achieved through structural designs or material innovations. The rapid development in both these classes is greatly promoted by bio-inspiration from several marvelous animals (e.g., gecko, octopus, and mussel), which shed light on the effects of surface roughness, the directionality of the adhesive, and surface chemistry [6,[9][10][11]. Although adhesion is greatly modulated by the properties of the adhesive layer, it is also affected by the target substrates due to interaction at the adhesive-substrate interface; thus, the adhesive has to be specifically designed for each application [12]. When it comes to adhesion to biological tissues, tunable adhesion is of great importance. For instance, strong adhesion to the wound edge is expected in a tissue adhesive to suture the wound [13]. Upon completion of wound healing, a weak adhesion is then desirable for easy removal of the adhesive. Tunable adhesion is also one essential step for realizing repeatable use, as easily removed adhesive could be sanitized and prepared for further use. Current commercial tissue adhesives such as Dermabond ® [14] are designed for one-time use. However, when combined with tunable properties, tissue adhesives can be used as an alternative to surgical sutures in clinical practice to eliminate the need for stitch removal. As a decrease in adhesion is observed following multiple uses of the adhesive, strategies to minimize such a decrease need to be explored [15]. In this mini-review, we firstly provide a brief overview of the structural design for adhesives with applications mostly in the dry environment. As extensive review articles exist for dry adhesives [8,[40][41][42], only selected key developments are highlighted here. 
Next, we discuss material innovations for using adhesives in the wet environment, which are largely based on bio-inspiration from mussels. As special considerations have to be given to the application of adhesives on biological tissue surfaces, we then highlight several recently developed techniques for such applications. Structural Design for Dry Adhesion Due to its remarkable ability to climb rapidly up a variety of vertical surfaces ( Figure 1A (i)), the gecko inspired researchers to uncover the underlying mechanisms behind its significant enhanced, highly robust and repeatable, and reversible adhesion. Observation of the pad area ( Figure 1A (ii)) shows nearly 500,000 keratin setae (pillars) ( Figure 1A (iii)), with each seta consisting of branches of spatulas that are approximately 200 nm in diameter and 20-60 µm in length ( Figure 1A (iv)) [11]. Experimental evidence confirmed that the dry adhesion of gecko setae results from van der Waals forces rather than mechanisms associated with a high surface polarity such as capillary adhesion [43], which indicates that the exceptional adhesion is merely a result of the size and shape of the setae tips. The direct observation of the van der Waals interaction indicates that the adhesion is not affected by the surface chemistry, and repeatable use is possible [40]. In order to reveal the role of the van der Waals interaction on the enhanced adhesion observed in the gecko pad [41,44,45], an array of biomimetic microscopic fibrils on an elastic support was created [41,46]. In direct contrast to a flat surface that only has limited contact to the target substrate with a microscale surface roughness, the array of fibrils with a high aspect ratio in a dense arrangement [47] was observed to form intimate contact with the target substrate due to its low effective Young's modulus and increased effective contact area, especially when a preload was applied ( Figure 1B) [48,49]. The principle of contact mechanics was further applied to illustrate that contact splitting (i.e., reducing the radius of the fibril) yielded substantially improved adhesion, and the scaling was found to be applicable to animals differing in weight by six orders of magnitude ( Figure 1C) [45,47,50]. The use of soft polymers in most biomimetic systems helps increase the adhesion, but their tacky nature also makes them more susceptible to particulate fouling; thus, a hydrophobic surface with the capacity for self-cleaning is desired. In fact, a fibrillar adhesive can partially transfer particles in a certain size range from its surface to the clean substrate and recover ca. one-third of its shear adhesion [51], as observed in gecko setae [52]. In the practical application where defects commonly exist, the fibrillar structure also localizes the contact failure at individual fibrils and minimizes the effect on contact adhesion, thereby increasing the defect tolerance. Moreover, the adhesion is also affected by the underlying supports. By peeling a polydimethylsiloxane (PDMS) substrate patterned with different hexagonal arrays of cylindrical pillars (to mimic fibrils) from an acrylic adhesive, the enhancement in the adhesion was shown to be more than the increase of the contact area, and this was attributed to the deformation of the underlying support [6]. 
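The contact-splitting argument mentioned above has a compact quantitative form: in the JKR limit, the pull-off force of a circular contact scales with its radius, so splitting one contact into n self-similar subcontacts at constant total area multiplies the total pull-off force by √n. The sketch below works through that estimate; the contact radius and work of adhesion used are assumed illustrative values, not data from the cited studies.

```python
# Sketch of the contact-splitting estimate: in the JKR limit the pull-off
# force of one circular contact is F = (3/2) * pi * W_adh * R. Splitting a
# contact of radius R into n subcontacts at constant total area
# (R_i = R / sqrt(n)) gives a total force of sqrt(n) * F. Input values are
# illustrative assumptions.
import math

def jkr_pulloff(radius_m: float, work_of_adhesion_j_m2: float) -> float:
    """JKR pull-off force of a single circular contact."""
    return 1.5 * math.pi * work_of_adhesion_j_m2 * radius_m

def split_gain(n_contacts: int) -> float:
    """Force gain from self-similar splitting at constant total area."""
    return math.sqrt(n_contacts)

if __name__ == "__main__":
    R, W = 100e-6, 50e-3          # 100 um contact, 50 mJ/m^2 (assumed)
    f1 = jkr_pulloff(R, W)
    for n in (1, 100, 500_000):   # ~500,000 setae per gecko pad (from the text)
        print(f"n = {n:>7}: total pull-off ~ {split_gain(n) * f1:.3g} N")
```

The √n gain is what makes finely divided fibrillar arrays so effective, and it is consistent with the scaling across animal weights quoted above.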
The fibrillar structure can also be used directly beneath an existing viscoelastic adhesive film (e.g., pressure-sensitive adhesives) to change the dissipative crack trapping and the stress field in the viscoelastic layer for enhanced adhesion [53]. Due to the need for locomotion [54], a reversible adhesion is desirable, and animals such as geckos are observed to use direction to switch from strong to weak adhesion [55]. This direction-dependent adhesion is attributed to the angled fibrils on the gecko's foot, as evidenced by cryo-SEM imaging of the ∼20° angled setae [56]. While several methods were explored to fabricate vertical structures (e.g., e-beam lithography [57], nano-molding [58], constructing polymers from stiff thermoplastic [59], nano-drawing of stretched polymers [60], and growth of carbon nanotubes [61,62]), it is a challenge to obtain angled structures with high resolution and high aspect ratio, though several attempts were made (e.g., directional exposure in lithography [63,64], deforming the shape memory polymer of vertical structures from soft lithography [65], post directional e-beam exposure of Pt-coated vertical polyurethane acrylate (PUA) nanohairs [66], and direct laser writing [67]). In another effort to address this challenge, an angled etching technique was developed, in which a Faraday cage introduced into the conventional plasma etching system keeps the ion motion vertical, so that an angled etch is induced in the silicon substrate placed on an inclined stage [9]. Curing the polymer (e.g., polyurethane acrylate resin) in the etched Si master yields slanted structures with the designed angle and aspect ratio (Figure 1D). Combined with ultraviolet (UV)-assisted capillary force lithography, the etched Si master can be further used to create two-level hierarchical PUA hairs for enhanced robustness to a rough surface (<20 µm). In addition to the hierarchical structure [9,68,69], the shape of the tip was also found to have an important influence on adhesion strength [70] (e.g., mushroom-like and spatula tips [50,64,71,72] were shown to have higher adhesion than flat and round tips [73]). On a separate route to structural design for enhanced adhesion, octopus suckers that reversibly adhere to wettable surfaces provided another source of inspiration [74-77]. The strong adhesion in both dry and wet environments results from the pressure in the octopus suckers being lower than that of the environment. Using an external control (e.g., a suctioning system [78], vacuum pump [79], dielectric elastomer actuator [80], or magnetically actuated film [81]), the biomimetic system can be easily created. The miniaturization of the system was also achieved through the use of lithographic processes [82,83]. In one attempt to create nanoscale suction cups [84], a non-close-packed self-assembled silica nanoparticle array served as an etching mask to prepare mushroom-like structures consisting of polymer stems and silica caps (Figure 2A (i)). Drying and peeling a polyvinyl alcohol (PVA) film from the etched structure created a replica with embedded silica nanoparticles (Figure 2A (ii)), which formed a mold to yield silicone polymer with nano-sucker structures (Figure 2A (iii)). By controlling the meniscus of a liquid precursor through the applied pressure, a simple molding process from the mold with different surface energies could yield artificial micro-suckers with well-controlled cross-sectional profiles [85].
The adhesion in both dry and underwater environments was shown to increase as the curvature of the cross-sectional profile increased, due to the increased contact area from the preload ( Figure 2B). In the wet environment, a model that combined the suction effect and capillary interaction [86] captured the experimental observation. When the elastomeric PDMS film with suction-cup structures was covered by thermoresponsive hydrogel of poly(N-isopropylacrylamide) (pNIPAM), a resulting smart adhesive pad could respond to temperature change, with an increased temperature inducing an increased volume and decreased pressure in the suction cup due to the deformation of the pNIPAM layer ( Figure 2C) [87]. Design of the Material for Use in Wet Conditions Though mechanical properties (e.g., the previously discussed structural designs and Young's modulus of the structure [88][89][90]) showed significant effects on the strength of adhesion in the dry environment, many of them are compromised in the wet environment. In order to address the challenge, several bio-inspired materials [91][92][93] and their integration with structural designs [7,94] were explored. As a celebrated biological model for wet adhesion [95], mussels were shown to attach virtually all types of inorganic and organic surfaces, including classically adhesion-resistant materials such as poly(tetrafluoroethylene) (PTFE). Clues to this versatility may lie in the amino-acid composition of the specialized adhesive proteins that contain the catecholic amino acid 3,4-dihydroxy-L-phenylalanine (DOPA) and lysine [96]. DOPA and other catechol components perform well as adhesives. With inspiration from both geckos and mussels, a flexible organic nano-adhesive "geckel" was created by dip-coating the gecko-foot-mimetic PDMS pillar array in an ethanol solution of mussel-adhesive-protein-mimetic polymer ( Figure 3A (i)) [7]. With a high catechol content, the adhesive monomer, dopamine methacrylamide (DMA), was used in a free-radical polymerization to synthesize poly(dopamine methacrylamide-co-methoxyethyl acrylate) (p(DMA-co-MEA)) as the mussel-adhesive-protein-mimetic polymer. The addition of a p(DMA-co-MEA) coating on the pillars enhanced the wet adhesion by nearly 15 times ( Figure 3A (ii)), and this geckel nanoadhesive maintained its adhesive performance for over 1000 contact cycles in both dry and wet environments ( Figure 3A (iii)). Containing both catechol (DOPA) and amine (lysine) functional groups, dopamine as a simple-molecule compound also shows promise to achieve adhesion to a wide spectrum of materials [97]. The adherent polydopamine (PDA) coating produced by self-polymerization of dopamine can also serve as a versatile platform to graft various organic molecules and biomacromolecules for secondary surface-mediated reactions ( Figure 3B). Taken together with its biocompatibility and hydrophilicity, polydopamine-based materials demonstrated great potential toward biomedical applications, ranging from cell adhesion/encapsulating/patterning to tissue engineering and re-endothelialization of vascular devices [98,99]. As a versatile building block, PDA was also integrated with other materials such as a hydrogel. However, hydrogel is often associated with long-term instability from water evaporation and physical changes from use at relatively extreme temperatures [100,101]. 
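Before moving on, the force balance behind the sucker-based designs above is worth making explicit: the holding force of one sealed cup is simply the pressure difference times the sealed area. The sketch below illustrates this; the pressure drop and packing estimate are assumptions for illustration, not values from the cited works.

```python
# Sketch of the suction-cup force balance behind octopus-inspired adhesives:
# holding force = (ambient pressure - cup interior pressure) * sealed area.
# The 50 kPa pressure drop and the loose-packing estimate are assumptions.
import math

def suction_force(radius_m: float, delta_p_pa: float) -> float:
    """Holding force of one sealed cup: F = dP * A."""
    return delta_p_pa * math.pi * radius_m ** 2

if __name__ == "__main__":
    dp = 50e3                       # 50 kPa below ambient (assumed)
    for r in (50e-9, 5e-6, 1e-3):   # nano-, micro-, and macro-scale cups
        n_per_cm2 = 1e-4 / (math.pi * (2 * r) ** 2)  # loose packing estimate
        f = suction_force(r, dp)
        # Note: at a fixed packing fraction the areal suction stress is
        # scale-invariant, which is why miniaturized cup arrays still work.
        print(f"r = {r:8.1e} m: {f:9.3e} N per cup, "
              f"~{f * n_per_cm2:5.2f} N/cm^2 for the packed array")
```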
In order to provide hydrogel with long-term stable operation in a wide temperature window, a glycerol-water (GW) mixture was used as the binary solvent in hydrogel development (Figure 3C) [102], as glycerol is a well-known nontoxic anti-freezing agent. Incorporating PDA-decorated carbon nanotubes (CNTs) as conductive nano-fillers into the hydrogel imparts good conductivity (~8 S/m), enhanced toughness (~2000 J/m²), and excellent adhesion (57 kPa to porcine skin) to the resulting GW hydrogel. Due to its good adhesiveness and anti-freezing/anti-heating properties, the GW hydrogel demonstrated its capability to protect skin from damage in harsh environments (e.g., during frostbite or burn) by serving as an excellent wearable dressing. Other challenges of hydrogels include the foreign body response and poor mechanical properties (i.e., toughness and stretchability). The former could be attenuated by encapsulating mesenchymal stem cells within the hydrogel, such as poly(ethylene glycol) (PEG) [103], and the latter is addressed by the design of tough hydrogels [104-106], discussed in Section 4. Figure 3D. (i) Dip-coating of the guest copolymer on a clean Si substrate is followed by the self-assembly of the host copolymer poly(N-isopropylacrylamide) (pNIPAM)/cyclodextrin (CD) using host-guest molecular recognition. (ii) Schematic drawing showing the tunable wet adhesion that responds to a local temperature trigger. When the local temperature of the adhesive is below the lower critical solution temperature (LCST), the pNIPAM easily forms intermolecular hydrogen bonds with adjacent water molecules, and the infused water layer transforms the pNIPAM side chains into a swelling layer, which spatially stabilizes and confines the underlying adhesive moiety, DOPA. On the other hand, heating above the LCST leads to a phase transition and the collapse of the pNIPAM-CD chains to form numerous agglomerates, exposing the adhesive group. Reproduced with permission from Reference [10]; Copyright 2017, Nature Publishing Group. In order to provide reversible and tunable wet adhesion that responds to a local temperature trigger in an on-demand manner, a mussel-inspired guest-adhesive copolymer was combined with a thermoresponsive host copolymer [10]. The guest copolymer pDOPA/adamantane (AD)/methoxyethyl acrylate (MEA) consists of a mussel-inspired adhesive DOPA polymer, a guest motif adamantane (AD), and a methoxyethyl acrylate (MEA) monomer as a hydrophobic matrix to enhance the wet adhesion of DOPA (Figure 3D (i)). In the host copolymer pNIPAM/cyclodextrin (CD), the poly(N-isopropylacrylamide) (pNIPAM) undergoes a reversible lower critical solution temperature (LCST) phase transition from a swollen hydrated state to a shrunken dehydrated state when heated above the LCST, and β-cyclodextrin (β-CD) is the host molecule providing selective binding with the AD moiety in the guest copolymer. Dip-coating the as-prepared guest copolymer on the target substrate surface (e.g., Si, Ti, Al, glass, PTFE, or PDMS) allows the self-assembly of the host copolymer through the host-guest interaction. When the local temperature of the adhesive is below the LCST, the swollen hydrated pNIPAM spatially confines and stabilizes the underlying adhesive moiety, DOPA, through the host-guest interaction, resulting in a dramatically screened interaction area and reduced adhesion. In contrast, the collapsed pNIPAM exposes the adhesive moiety, DOPA, when the local temperature is above the LCST (Figure 3D (ii)).
The versatile demonstration of the wet adhesive also goes from inorganic (Si, Ti, Al, and glass) to organic surfaces (PDMS and PTFE). In addition, the gecko-like surface structure (e.g., an array of PDMS posts with a diameter of 5 µm and a height of 10 µm), discussed in Section 2, was explored to further enhance the interfacial adhesion strength, which is in direct contrast with the gecko-like dry adhesive. Although mussel-like wet adhesion was successfully realized, typical catechol functionalization and solution processing entail complex components and steps. In order to reduce the complexity, synthetic low-molecular-weight catecholic zwitterionic surfactants were developed to adhere to diverse surfaces with very strong adhesion (~50 mJ/m²) [107]. Based on catechol-modified amphiphilic poly(propylene oxide)/poly(ethylene oxide) (PPO-PEO) block copolymers, a mechanically tough zero- or negative-swelling mussel-inspired surgical adhesive was synthesized, minimizing the weakening mechanism from swelling [108]. The range of zero to −25% swelling was achieved through a hydrophobic collapse of the PPO blocks upon heating to physiological temperature. Lap shear adhesion measurements on decellularized porcine dermis showed an adhesive strength of nearly 50 kPa. Although the single-layer mussel-like adhesion is effective, a layer-by-layer (LbL) assembly may be explored to further enhance the adhesion strength due to the versatile control in the assembly process (e.g., introducing sodium chloride in the assembly process yields an adhesion enhanced by two orders of magnitude [109]). Adhesion to Biological Tissues When it comes to adhesion to biological tissues such as skin, several additional challenges are encountered, including soft properties, multiscale roughness, and biocompatibility. For instance, adhesives based on chemical bonding may irritate the skin and cause discomfort upon removal due to strong adhesion. Though several commercial adhesives were used in bio-integrated electronics on the skin [110,111], their applications are limited by their given properties, and the adhesion strength was also shown to be dependent on the target tissue. Taking a synthetic tissue adhesive (i.e., Dermabond®, 2-octyl cyanoacrylate) as an example, its adhesion to collagen films was observed to be 40 times its adhesion to muscle tissue, due to the increased wetting (and the decreased contact angle) of the Dermabond® adhesive on the collagen film [12]. In the two classes of tissue adhesives, biologic (e.g., fibrin glue) and synthetic (e.g., n-butyl-2-cyanoacrylate) [112], a variety of different bonding mechanisms were explored (e.g., physical interaction, mechanical interlocking, and chemical bonding) [113,114]. As an extensively used synthetic polymer for tissue engineering, polyethylene glycol (PEG) was used with chondroitin sulfate (CS) to form a biodegradable CS-PEG adhesive hydrogel that can covalently bond to proteins in tissue or to collagen in the extracellular matrix via amide bonds, improving the adhesion strength to ten times that of fibrin glue (Figure 4A) [115]. In a separate effort, a buckypaper (BP) film produced from oxidized multi-walled carbon nanotubes demonstrated enhanced adhesion to the rimmed muscular fascia of the abdominal wall of a female New Zealand rabbit during both peeling and shearing tests, due to soft tissue deformation from water suction resulting in water bridge formation and BP-tissue mechanical interlocking, respectively (Figure 4B) [116].
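Two different figures of merit appear throughout this section: strengths in kPa (force per bonded area, from lap-shear tests) and adhesion energies in J/m² (from peel tests). The sketch below shows how each is computed from raw test data; the forces and specimen dimensions are hypothetical example inputs.

```python
# Sketch of the two adhesion metrics used in this review: lap-shear
# strength (failure force / bonded overlap area, in kPa) and peel adhesion
# energy (steady peel force / strip width for a 90-degree peel of an
# inextensible backing, in J/m^2). All inputs are hypothetical examples.

def lap_shear_strength_kpa(force_n: float, overlap_area_m2: float) -> float:
    return force_n / overlap_area_m2 / 1e3

def peel_adhesion_energy_j_m2(peel_force_n: float, width_m: float) -> float:
    # For a 90-degree peel with an inextensible backing, G = F / w.
    return peel_force_n / width_m

if __name__ == "__main__":
    # 25 mm x 25 mm lap joint failing at 31 N -> ~50 kPa
    print(f"lap shear ~ {lap_shear_strength_kpa(31.0, 0.025 * 0.025):.0f} kPa")
    # 10 mm wide strip peeled at a steady 0.5 N -> 50 J/m^2
    print(f"peel energy ~ {peel_adhesion_energy_j_m2(0.5, 0.010):.0f} J/m^2")
```

Keeping the two metrics distinct matters when comparing reports: a kPa strength and a J/m² energy are not interchangeable, which is part of the standardization problem raised in the conclusions.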
Because of the self-cleaning effect and adaptation capability, the gecko-inspired fibrillar adhesive demonstrated repeatable and restorable adhesion to the skin surface over multiple cycles of use [5]. Though the adhesion does not show a direct correlation with classical roughness parameters, strong adhesion was shown to decrease significantly when surface roughness increased [117-119], and a clear correlation was even observed when a newly integrated roughness parameter was introduced [120]. In order to address these challenges, special considerations have to be given to the design of the adhesion layer. In one effort to utilize the unique advantages of the gecko-inspired fibrillar adhesive, a composite adhesive was designed by coating polymer microfibers with a skin-interfacing material (e.g., vinylsiloxane) to form mushroom-shaped tips (Figure 4C) [26]. As a member of the family of silicone rubbers, the biocompatible vinylsiloxane (VS), approved for biomedical applications (e.g., forming dental impressions), can fully cross-link and form covalent bonds with the PDMS microfibers within a few minutes at room temperature, enabling its direct cross-linking on, and conformal contact to, the skin surface with multiscale roughness. Due to the high flexibility and strong attachment to the skin surface, the adhesive layer transfers an efficient strain signal to the strain sensor integrated on top, thereby significantly increasing the signal-to-noise ratio when compared with medical tape or a fibrillar adhesive film fully immersed into a flat VS film. The adhesion strength of the composite adhesive to a wet skin surface was also shown to be comparable to that in a dry environment. In contrast to the gecko-inspired adhesive, the octopus-inspired adhesive that relies on the pressure difference is less affected by the surface roughness. Such an adhesive also showed adhesion strength to pigskin in moist conditions (40% of the area covered with droplets) comparable to that in dry conditions, even when hairs were present [85]. When integrated with physiological sensors and drug-delivery actuators, the octopus-inspired adhesive allowed sensitive biometric measurements and transdermal drug delivery through tight skin coupling (Figure 4D) [83]. Relying on relatively weak physical interactions, existing tissue adhesives (including mussel-inspired adhesives) are associated with low adhesive energy on the order of 10 J/m² [121], which is far from ideal, especially when compared with examples in nature. For instance, cartilage bonds to bones with an adhesion energy of 800 J/m² [122]. In order to achieve high adhesion, a synergy between an adhesive surface layer and a dissipative matrix was explored (Figure 5A) [105]. The design was inspired by a sticky and tough secretion from the slug Arion subfuscus [123], which may arise from two interpenetrating networks of polymers [124]. The strong adhesion of the adhesive surface layer to the tissue substrate can be achieved through electrostatic interactions, covalent bonds, or physical interpenetration. Meanwhile, the energy dissipation through hysteresis in the matrix amplifies the effective adhesion energy. As the surface of tissues or cells is negatively charged, a bridging polymer that bears positively charged primary amine groups enables binding via electrostatic attraction as well as covalent-bond formation. In the case of a permeable target surface, the bridging polymer penetrating into the target forms a physical entanglement and a chemical anchor for the adhesive.
As for the dissipative matrix, a substrate that can dissipate energy is used. By exploiting the synergy of these two factors, a class of tough adhesives demonstrated high adhesion energy (~1000 J/m²) on wet surfaces. In vivo demonstrations included strong adhesion to a beating porcine heart in the presence of blood, heart sealants to prevent liquid leakage, and hemostatic dressing for a deep wound (Figure 5B). This simple yet effective strategy opens up a wide range of applications, including tissue adhesives, wound dressing, and tissue repair. In order to provide a reversible adhesion that can respond to external stimuli, a responsive "stitching" polymer was explored: upon one trigger (e.g., in one pH range) it diffuses into the polymer networks of two wet materials and forms an entangled network that bonds them, while upon trigger removal (e.g., in the other pH range) it becomes soluble and the two wet materials separate (Figure 5C) [125]. In the demonstration, several stitching polymers were identified to cover the full range of pH (e.g., cellulose forms a network for pH < 13, alginate for pH < 3.5, chitosan for pH > 6.5, and poly(4-aminostyrene) for pH > 4.5). Adhesion energy as high as 1000 J/m² could be achieved when the stitching polymer introduced hysteresis into the wet hydrogel materials. The demonstrated strong adhesion also went beyond hydrogels to various porcine tissues (e.g., skin, liver, heart, artery, and stomach), and the skin was shown to exhibit a relatively high adhesion energy (100 J/m²) due to its relatively high toughness. Figure 5. (A) The tough adhesive (TA), made of a hydrogel containing both ionically (calcium; red circles) cross-linked and covalently cross-linked polymers (black and blue lines), and an adhesive surface that contains a bridging polymer with primary amines (green lines). The bridging polymer can penetrate into the TA and the substrate (light-green region) to facilitate covalent-bond formation. In the presence of a crack, the process zone (orange area) dissipates significant amounts of energy as ionic bonds between alginate chains and calcium ions break. (B) (i) Tough adhesives exhibit a rapid increase in adhesion energy to porcine skin over time. (ii) Compared with cyanoacrylate (CA), TAs showed strong adhesion even when the porcine skin was exposed to blood in the in vitro experiment. n = 4-6. (iii) The TAs were further tested in an in vivo experiment on a beating porcine heart with blood exposure. (A,B) Reproduced with permission from Reference [105]; Copyright 2017, American Association for the Advancement of Science. (C) Chitosan chains dissolve in water at pH 5 and form a network in water at pH 7. Placing an aqueous solution of chitosan at pH 5 between two hydrogels (or biological tissues) at pH 7 is followed by the diffusion of chitosan chains into the two hydrogels, forming a network that topologically entangles with the networks of both hydrogels. Reproduced with permission from Reference [125]; Copyright 2018, John Wiley and Sons. Conclusions and Future Perspectives In order to robustly adhere bio-integrated devices to soft biological tissues, an adhesive layer with tunable adhesion is of great interest. The capability to switch between strong and weak adhesion would allow the use of strong adhesion to efficiently transfer vital signals to the device for accurate measurement, followed by the use of weak adhesion for easy removal.
However, this tunable adhesion to the tissue surface has long been a challenge due to multiscale roughness, wet conditions, biocompatibility, and natural motion, among many others. Thanks to the recent developments that shed light on the underlying mechanisms of the remarkable adhesion observed in several animal species, great strides were made, and effective strategies ranging from structural design for dry adhesion to novel material synthesis for wet conditions were explored to yield adhesives that can match or even outperform those from nature. The importance of the developed adhesives also goes beyond bio-integrated devices to cell culture [126,127] and to tissue glues that can potentially replace sutures in clinical practice. When combined with the tunable properties of the adhesive, tissue glues would promise repeatable use, which can dramatically reduce the cost and pave the way for commercialization. Despite great strides made in the field of tissue adhesives, several challenges still exist, including fabricating high-aspect-ratio fibrillar structures with diameters down to submicron scales [40], long-term reliability of the tissue adhesives to diverse wet surfaces, tunable properties in the adhesives to accommodate dynamic changes in target tissues, and integration with multifunctional electronics for real-time sensing and closed-loop control [128,129]. In the burgeoning field of tissue adhesives, different testing methods and tissue models are used to evaluate the adhesive properties of newly developed structures and materials. Thus, it is a bit challenging to directly compare the results reported by different research groups. It would be desirable to have standardized testing procedures and tissue models in place to allow for direct comparison among the newly developed tissue adhesives. Nevertheless, the challenges simply represent a small fraction of the great opportunities for future development, which may require the collective wisdom of material scientists, chemists, mechanical engineers, and clinicians, among many others.
Noncommutativity, Saez-Ballester theory and kinetic inflation This paper presents a noncommutative (NC) version of an extended S\'{a}ez-Ballester (SB) theory. Concretely, considering the spatially flat Friedmann-Lema\^{\i}tre-Robertson-Walker~(FLRW) metric, we propose an appropriate dynamical deformation between the conjugate momenta and applying the Hamiltonian formalism, obtain deformed equations of motion. In our model, the NC parameter appears linearly in the deformed Poisson bracket and the equations of the NC SB cosmology. When it goes to zero, we get the corresponding commutative counterparts. Even by restricting our attention to a particular case, where there is neither an ordinary matter nor a scalar potential, we show that the effects of the noncommutativity provide interesting results: applying numerical endeavors for very small values of the NC parameter, we show that (i) at the early times of the universe, there is an inflationary phase with a graceful exit, for which the relevant nominal condition is satisfied; (ii) for the late times, there is a zero acceleration epoch. By establishing an appropriate dynamical framework, we show that the results (i) and (ii) can be obtained for many sets of the initial conditions and the parameters of the model. Finally, we indicate that, at the level of the field equations, one may find a close resemblance between our NC model and the Starobinsky inflationary model. I. INTRODUCTION To overcome the problems of standard cosmology, various alternative theories to general relativity have been established. Among them, the scalar-tensor theories have played a significant role, see, for instance, [1][2][3][4][5] and reference therein. In the Sáez-Ballester (SB) scalar-tensor theory [6], in which the scalar field is minimally coupled to gravity, a particular non-canonical kinetic term was added to the Einstein-Hilbert action. The original Lagrangian associated with the SB theory includes the ordinary matter sector, but neither cosmological constant nor a scalar potential have contributed to it. Moreover, we should mention that the SB theory possesses dimensionless parameters n and W, in which the latter specifies the strength of the coupling between the gravity and the SB scalar field. To the best of our knowledge, it has not been investigated for which values of W, the observational limits can be satisfied. The SB theory and its extended versions, in both the classical and quantum levels, have been widely applied to investigate various cosmological problems in either four or arbitrary dimensions [7][8][9][10][11][12][13][14][15][16][17][18]. Another category of alternative theories has arisen due to the incapability of the GR in predicting the effects of some phenomena at the Planck regime [19,20]. Among such theories, one can refer to some approaches to noncommutative (NC) gravity (see, [21][22][23][24][25], and references therein), which has roots in noncommutative geometry and noncommutative quantum field theories. As these frameworks are highly nonlinear, therefore, for investigating effects of noncommutativity on different aspects of the universe, noncommutative cosmology has been proposed, see, for instance, [26][27][28]. It has been believed that for constructing noncommutative models, both at the quantum as well as classical regime, cosmology can be considered as an interesting arena [29]. For instance, at the classical regime, by modifying the Poisson brackets of the classical theories, one can obtain noncommutative equations of motion. 
In these frameworks, by including an NC parameter, which is usually interpreted as the Planck (length) constant, the effects of the noncommutativity may help resolve a few open problems of cosmology [30-36]. The main objective of the present work is to establish a noncommutative cosmological model in the context of the SB scalar-tensor theory containing an arbitrary potential. Subsequently, we will study the effects of noncommutativity in a particular case where the ordinary matter, as well as the scalar potential, is absent. For such a simple model, we will see that, in addition to the NC parameter, the parameter n and the SB coupling parameter W are also significant in describing the universe at early times. Moreover, it is worth mentioning that, if the noncommutativity is present at small scales, then by the UV/IR mixing (which is a feature of the noncommutativity) it can also be observed at late times of the universe. The paper is outlined as follows. In the next section, considering a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric as the background geometry and applying the Hamiltonian approach, we will obtain the cosmological equations of motion for an extended SB cosmology in the non-deformed case. Then, we propose a general dynamical deformation (noncommutativity) to establish an interesting cosmological scenario. In section III, we first obtain analytic exact cosmological solutions for the commutative model. Subsequently, we focus on the NC model and show that, for very small values of the NC parameter, there is an inflationary phase, with graceful exit, at early times. Moreover, we show that, for our herein NC model, the nominal condition associated with inflation is satisfied. Furthermore, our numerical endeavors show that the noncommutative effects can also be seen at late times. Concretely, for the latter, we observe that the scale factor increases with zero acceleration. In section IV, by proposing an appropriate dynamical setting, we show that the above-mentioned results are confirmed. Finally, we present our conclusions in section V. II. A NONCOMMUTATIVE COSMOLOGICAL SCENARIO IN SÁEZ-BALLESTER THEORY We start with the spatially flat FLRW universe, equation (2.1), where t is the cosmic time, x, y, z are the Cartesian coordinates, a(t) is the scale factor and N(t) is a lapse function. Let us consider an extended version of the SB Lagrangian density, equation (2.2), in which χ ≡ 8πG; n and W are dimensionless independent parameters, g denotes the determinant of the metric g_μν, R is the Ricci scalar, the Greek indices run from zero to three and we have assumed units where c = 1 = ℏ. The scalar field φ is minimally coupled to gravity, V(φ) is a scalar potential, L_matt = −2ρ(a) is the Lagrangian density associated with the ordinary matter and D_α denotes the covariant derivative. Substituting the Ricci scalar associated with the metric (2.1) into (2.2), we obtain the point-like Lagrangian (2.3), where a dot denotes a derivative with respect to the time and we have neglected a total time derivative term. It is then straightforward to obtain the Hamiltonian of the model, equation (2.4), where P_a and P_φ stand for the momenta conjugate to the scale factor and the scalar field, respectively.
Considering the comoving gauge, i.e., setting N = 1, employing the Hamiltonian (2.4), and admitting the Poisson algebra {a, φ} = 0, {P_a, P_φ} = 0, {a, P_a} = 1 and {φ, P_φ} = 1 for the phase space coordinates {a, φ; P_a, P_φ}, we easily obtain the Hamilton equations (2.5)-(2.8). Using equations (2.5)-(2.8), it is straightforward to obtain the equations of the non-deformed SB cosmological model, where H ≡ ȧ/a is the Hubble parameter. In order to establish an appropriate noncommutative scenario, we propose a (dynamical) deformation solely between the conjugate momenta, equation (2.12)¹, in which h is an arbitrary function of the conjugate momenta. It is easy to show that the equations of motion associated with our herein NC framework are given by equations (2.17)-(2.19). We should note that in the particular case where the NC parameter θ vanishes, equations (2.17)-(2.19) reduce to their non-deformed counterparts. In the subsequent sections, we restrict our attention to a specific case of the formerly constructed NC framework. ¹ Some arguments for such a deformation have been presented in [30]. ² In what follows, let us briefly present another approach to obtain the NC field equations, see, for instance, [37,38]. In order to obtain the Hamiltonian corresponding to the NC model, we proceed as follows. (i) All the variables of (2.4) should be replaced by new ones, for instance, primed variables. (ii) Introducing the single transformation P_φ → P_φ − θaφ^(2n+3), and assuming that the other primed variables are equal to the corresponding unprimed ones, we can easily recover not only the deformed Poisson bracket (2.12) but also the NC Hamiltonian. (iii) Finally, using the latter together with the usual (standard) Poisson brackets, we can easily obtain the NC counterparts of (2.5)-(2.8). III. KINETIC INFLATION AND THE HORIZON PROBLEM In this section, let us investigate a very simple setup of our herein NC and commutative models, in which the ordinary matter as well as the scalar potential are absent, i.e., ρ = 0 and V(φ) = 0. Therefore, the energy density and pressure reduce to the purely kinetic expressions (3.1) and (3.2). Using equations (2.19)-(3.2), it is straightforward to show that the conservation equation for the matter (associated with the NC framework), equation (3.3), is identically satisfied. It is important to note that p_nc, in equation (3.2), depends explicitly on the NC parameter θ. Although the NC parameter does not appear explicitly in the relations associated with ρ_tot (or, equivalently, ρ_tot − p_nc), we will show that they also depend on the NC parameter implicitly. The above statements point out that there is no way to specify the commutative sector of any quantity unless θ = 0 is substituted in all the equations of motion. The field equations then yield relation (3.4) between the Hubble parameter and the scalar field, where we have assumed W > 0. Moreover, assuming n ≠ −2 and using equation (3.4), we obtain the scale factor as a function of the SB scalar field, equation (3.5)³, where a_i and φ_i are integration constants. By substituting (3.4) and (3.5) into (2.19), the wave equation becomes a differential equation for the scalar field only, equation (3.6). In what follows, we solve equation (3.6) either analytically or numerically, and thereby present analyses of the commutative and NC cosmological models. A. Commutative case Let us first investigate the commutative case, which will be required later for comparison with the corresponding NC case. Substituting θ = 0 into equation (3.6), we can easily obtain an exact solution, equation (3.7), where c_1 and c_2 are integration constants.
Moreover, substituting (3.7) into the corresponding relation for the scale factor in (3.5), we obtain equation (3.8), which implies that, in the commutative case, the scale factor of the universe decelerates forever. In a particular case where n = 0 and W = 1, the solutions (3.7) and (3.8) reduce to the corresponding ones obtained in [39]. We abstain from analyzing these results here. However, the behavior of the physical quantities for this case will be described and compared with the corresponding NC model, see, for instance, figures 1-6. B. Noncommutative case For the NC case, i.e., θ ≠ 0, it is not feasible to obtain exact solutions of the complicated differential equation (3.6) analytically. In this respect, let us investigate this case by applying numerical methods. Concretely, assuming n ≠ −2, we use the numerical solution of equation (3.6) to plot the physical quantities. In what follows, we briefly present the consequences of our numerical endeavors, which have been obtained for every proper set of the initial conditions (ICs), the values of the parameters of the model, and the integration constants. We should note that for every set, we have taken very small negative values of the NC parameter. Our results are (see, for instance, figures 1-5): 1. In the early times, both the scalar field and the scale factor experience accelerated expansion. 2. Thereafter, there is another, different phase in which both of them decelerate, see figure 1. Up to now, we can conclude that stages 1 and 2 indicate that our NC model may be considered as a successful cosmological inflationary model (see also the discussion presented in the following). Concretely, an inflationary phase took place at the earlier times, and afterward there is a radiation-dominated epoch. Moreover, the effects of the dynamical noncommutativity (2.12) provide an appropriate transition from the accelerating phase to the decelerating one, which is known as the graceful exit. 3. At late times, the scale factor grows with zero acceleration; this stage may be interpreted as a quantum gravity footprint in a coarse-grained explanation. 4. Let us now analyze the time behavior of the energy density and pressure, see, for instance, the upper right panel of figure 2. It is seen that ρ_tot always takes positive values, such that it increases during the inflationary epoch to reach its maximum value. Soon after exiting the accelerated phase, it decreases forever. Meanwhile, both p_tot and p_nc always take negative values. In contrast to ρ_tot, they decrease during the inflationary phase and increase during the radiation-dominated era, reaching their minimum value at the moment of the phase transition. 5. In contrast to exact solutions, we should not always expect a numerical solution to satisfy the conservation equation identically. In this regard, it is worth plotting the quantity on the left-hand side of equation (3.3) to find out how much it deviates from zero (for this we use the numerical solution of equation (3.6)). Therefore, for every set of the ICs and the values of the parameters which have been used to depict the behavior of the quantities, we have checked the corresponding degree of accuracy. Specifically, for every numerical set, we have plotted the corresponding numerical error to verify whether the conservation equation is satisfied.
6. We have also investigated the time behavior of φ(t) and a(t), and of their first and second derivatives (with respect to the cosmic time), for different values of the parameters W and n, see, for instance, figures 3 and 4, which show the behavior of φ(t) and a(t) against the cosmic time. Our results indicate that, for a specific set of values, changing the value of W (or n) while leaving the others unchanged produces no perceptible changes in the general behavior of the quantities reported in stages 1 to 5. Notwithstanding, we found that for any t, assuming W > 0, the smaller the value of W, the larger the values of a and φ. Moreover, our endeavors have shown that the smaller the value of W, the shorter the duration of the inflationary epoch. According to figure 4, a similar interpretation can be given for the case in which only n varies. 7. Up to now, we have seen that our herein NC model can provide an accelerating phase at early times, and soon after, the scale factor can gracefully exit from that accelerating phase and enter a decelerating phase, which could be assigned to the radiation-dominated era. Therefore, it seems that our model, disregarding the 60 e-fold duration, can be considered as a proper inflationary scenario. Notwithstanding, it has been believed that, among the problems associated with standard cosmology, the horizon problem is the most important one that should be resolved by a successful inflationary scenario. In this respect, let us investigate only a nominal condition, equation (3.9), as the key to resolving the horizon problem [40,41], in which d_γ denotes the distance a photon has traveled, given by equation (3.10). In order to check the satisfaction of the nominal condition (3.9), we first should obtain d_γ. In this respect, for our herein NC model, we substitute the relation for the scale factor from (3.5) into (3.10). Therefore, we obtain an integration over dt with an unknown integrand (a function of the scalar field), which, in turn, is obtained from (3.6). Moreover, we should also substitute the Hubble parameter (which can also be obtained from φ) from (3.4) into (3.9). Consequently, investigating the nominal condition (3.9) for our herein NC model is not possible unless we obtain φ(t) by solving (3.6). However, as mentioned, for the NC case we have to apply numerical analysis. Our numerical endeavors have shown that condition (3.9) is satisfied for every set of values that yields the above-mentioned stages 1 to 6, see, for instance, figure 5. IV. COSMOLOGICAL DYNAMICS IN DEFORMED PHASE SCENARIO It seems that it is impossible to reconstruct the Lagrangian of our NC model. In this respect, let us focus on it at the level of the field equations. Specifically, we can compare the evolution of the scale factor for our herein NC cosmological setting, equation (4.1), with that of the Starobinsky inflationary model [42], see section V. In order to obtain equation (4.1), we have used equations (2.18) and (3.5). Moreover, in order to confirm the results presented in the previous section, let us provide an appropriate setting for the dynamical system. In this regard, let us rewrite equation (4.1) in the more convenient form (4.2). Letting y = ȧ, equation (4.2) becomes equation (4.3), which is very sensitive to the ICs, the values of the integration constants (a_i and φ_i), and the parameters of the model, i.e., W, n, and θ. Now, by plotting the phase portrait of equation (4.3), the difference between the commutative and noncommutative cases is clearly visible; a schematic sketch of such a computation is given below.
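The following is a minimal sketch of the phase-portrait construction just described. Equation (4.3) itself is not reproduced in this extract, so the right-hand side f(a, ȧ) below is only a qualitative stand-in with the features described in the text (a small θ-dependent boost that matters only at small a, on top of a decelerating commutative flow); it is not the actual field equation.

```python
# Schematic phase-portrait sketch for a first-order system (a, adot).
# f(a, y) is a hypothetical stand-in for the right-hand side of eq. (4.3),
# NOT the actual equation: it combines a decelerating commutative-like
# term with a small NC-like contribution that dominates only at small a.
import numpy as np
import matplotlib.pyplot as plt

def f(a, y, theta=-1e-3):
    """Placeholder dynamics d(adot)/dt = f(a, adot); theta is the assumed
    (small, negative) NC parameter entering through its magnitude."""
    return -y**2 / a + abs(theta) / a**4

a, y = np.meshgrid(np.linspace(0.05, 2.0, 40), np.linspace(0.0, 1.5, 40))
da, dy = y, f(a, y)          # da/dt = adot, d(adot)/dt = f

plt.streamplot(a, y, da, dy, density=1.2)
plt.xlabel("a")
plt.ylabel(r"$\dot{a}$")
plt.title("Schematic phase portrait (stand-in for eq. (4.3))")
plt.savefig("phase_portrait.png", dpi=150)
```

Even with this toy right-hand side, the qualitative picture reported below emerges: trajectories starting at small a first gain speed, then roll over into the decelerating regime.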
For the former (see the left panel of figure 6), for all the solutions (a, ȧ), it is seen that ȧ always decreases while a increases. For the latter (see the right panel of figure 6), in contrast, for very small values of the scale factor we observe an additional interesting behavior for all the solutions (a, ȧ). More concretely, during a very short time, ȧ increases until it reaches its maximum value. Thereafter, it decreases while the scale factor increases. Finally, depending on the ICs and the values of the parameters of the model, it approaches constant values at late times. It is worth noting that figure 6 includes a vast range of solutions, such that the particular solution shown in figure 1 corresponds to one of the trajectories plotted in figure 6. This phase portrait, with more complete specifications, certifies the inflationary phase (with a graceful exit) described in the previous section. We should emphasize that our herein NC model, for various sets of the parameters, can yield the interesting results presented in the preceding section. For instance, figure 7 shows the phase portrait of equation (4.3) for other values of n, W, and θ.

V. CONCLUSIONS

In this work, by considering the spatially flat FLRW metric and applying the Hamiltonian formalism, we first retrieved the equations of motion associated with a generalized SB theory. Subsequently, by proposing a dynamical deformation between the conjugate momenta (associated with the scale factor and the SB scalar field), in which the deformation parameter appears linearly, we have constructed a noncommutative SB framework, such that in the particular case θ = 0 the commutative cosmological model is recovered (cf. Section II). In order to elucidate the kinetic acceleration arising from our herein NC model, we restricted our attention to a simple case in which both the scalar potential and the Lagrangian density associated with the ordinary matter are absent. Then, we have shown that it is straightforward to write the NC Klein-Gordon equation in terms of only the SB scalar field and its time derivatives. It is worth noting that the NC parameter also appears linearly in that wave equation. Subsequently, to carry out the comparison between the two cases, we first obtained an exact cosmological solution for the standard case. Concerning the NC model, however, we found that it is not feasible to obtain an exact analytic solution of the complicated nonlinear differential equation (i.e., the NC Klein-Gordon equation). Therefore, we resorted to numerical methods to analyze the time behavior of the physical quantities. In contrast to the corresponding standard SB model, our numerical endeavors have indicated that our herein simple NC model (in which the NC parameter appears linearly not only in the proposed deformed Poisson bracket but also in the field equations) exhibits several interesting features. Let us be more precise. We have shown that our NC model yields a kinetic acceleration at early times. Thereafter, the scale factor can exit gracefully from that phase and enter a decelerating one, which can be assigned to the radiation-dominated phase. Therefore, we have interpreted this outcome, which is attained due to the presence of the NC effects, as an inflationary phase for the universe at early times and shown that its corresponding expected nominal condition is satisfied.
Moreover, at late times, we also observed the effects of the noncommutativity: we showed that the scale factor increases with constant speed (the presence of a zero-acceleration epoch), which can be assigned to a coarse-grained explanation. We have also depicted the time behavior of the NC energy density and NC pressure and compared them with the corresponding counterparts of the non-deformed model. We have shown that the time behavior of the quantities depends not only on the NC parameter but also on the values taken by the parameters W and n, as expected. Finally, we constructed an appropriate dynamical setting by which we easily illustrated the effect of the noncommutativity. More concretely, using the same set of ICs and model parameters applied for plotting figure 1, we have depicted the phase portrait of equation (4.3). It is seen that a single trajectory of that plot corresponds to the curves of figure 1 (for either the commutative or the NC model), which confirms all the results of subsection III B. Before closing this section, it is worth making a few comments regarding the strengths and shortcomings of our herein NC model:

• In this work, we have investigated the effects of the noncommutativity for a particular case. More concretely, we have restricted our attention to a special case where (i) the ordinary matter and the scalar potential are absent; and (ii) a particular dynamical deformation between only the conjugate momenta was proposed. Obviously, by removing one or more of the above restrictions, one can construct more extended models, which may yield more interesting results. For instance, generalizing this work to an NC model including a non-vanishing scalar potential, while still admitting the other constraints, we can establish NC counterparts of the deformed versions of the Lucchin-Matarrese model [43] and the Barrow-Burd-Lancaster-Madsen model [44,45]. Generalized versions of both models have been established in the non-deformed phase space in the context of SB theory [46]. Such extended frameworks have been investigated and will be presented in our forthcoming works.

• In comparison with the NC model presented in [47], we observe that our herein NC model has two extra free parameters, i.e., W and n, by which one can not only obtain different behaviors for the physical quantities but may also retrieve appropriate values of the e-fold number (one of the essential features of an expected inflationary epoch) in agreement with the observational data. In the particular case where W = 1 and n = 0, we recover the corresponding model investigated in [47]. Moreover, in another particular case where (a) = 0 and n = −2, using the transformation (5.1), where ϕ_i = constant and ϕ carries the same dimension as φ, the Lagrangian (2.2) transforms to that of the corresponding standard minimally coupled scalar field. Therefore, in the commutative case, we recover the model studied in [47]. However, it is important to note that, under the transformation (5.1), the deformation (2.12) is not equivalent to the one chosen in [47]; concretely, the NC case corresponding to n = −2 will be different from that investigated in [47]. This particular kinetic model has also been incorporated into the extended NC models mentioned in the preceding comment.
• We should emphasize that it is almost impossible to retrieve the Lagrangian associated with our NC model, and therefore it is a complicated procedure to investigate the quantum features of the model by means of perturbation analysis. In this respect, at the level of the field equations, we have obtained a proper NC differential equation associated with the evolution of the scale factor. By means of such a procedure, as well as by establishing the corresponding dynamical setting, one may probe the possible relations between the parameters appearing in our model (i.e., the NC parameter, the SB coupling parameter, n, and the integration constants) and the quantum corrections observed in the Starobinsky inflationary model, so as to find a feasible correspondence between these scenarios.
5,605.6
2022-03-01T00:00:00.000
[ "Mathematics" ]
Assessment of Carbon Dioxide Storage Capacity of Selected Aquifers in 'J' Field, West Africa: A combination of seismic data and petrophysical logs from five wells acquired in 'J' Field, Niger Delta, Nigeria, has been analyzed to assess the carbon dioxide (CO2) storage potential of some saline aquifers in 'J' field. The study aims to evaluate the volume of CO2 that can potentially be stored in the aquifers and the risk of CO2 leakage from the storage. The sand aquifers were correlated across the five wells to evaluate their thicknesses and lateral extent. Porosity, permeability, formation water resistivity, and net sand thickness were estimated in the different wells. The horizons corresponding to the tops of the aquifers were mapped, and time- and depth-structured maps were generated for structural analysis and volumetric estimations. The risk of CO2 leakage through the sealing layers (cap rocks) was evaluated in terms of caprock integrity and the pore-pressure sealing mechanism. Results of the study showed that four aquifers, namely, L20, M30, N40, and P50, are laterally extensive across the five wells and have thicknesses that range from 14 to 352 m. The individual CO2 storage capacities of L20, M30, N40, and P50 were estimated to be 6.97×10^10, 1.48×10^10, 7.78×10^9, and 1.49×10^10 tons, respectively. The combined aquifer storage capacity was estimated to be 1.07×10^11 tons. The sealing layers have a low risk of CO2 leakage. The study concluded that the aquifers have good potential for CO2 storage and a low risk of leakage. The study ranked L20 as the best among the four aquifers.

INTRODUCTION

Emission of gases by large industries, oil refineries, and automobile engines releases a vast amount of CO2 and other air pollutants to the atmosphere, thereby contributing to the greenhouse gas effect. Globally, about 80% of the greenhouse gas emission is attributed to CO2 released to the atmosphere from fossil fuel during energy production and consumption (Metz et al., 2006; Bachus, 2015; 2016; Berghout et al., 2019). Despite the global efforts to generate energy from non-fossil fuel sources such as solar and wind, about 80% of the global energy need is still being met from fossil fuel (IEA 2017; EPA 2018). Therefore, there is a need to develop strategies to deal with the negative consequences of the consumption of fossil fuels while maximizing the efforts to increase non-fossil fuel sources of energy. The leading solution to greenhouse gas emissions and the consequent global warming is to isolate and store CO2 away from the atmosphere in geological storage. Studies that confirmed the safety and reliability of Carbon Capture and Storage, and that demonstrated the capability of seismic tomography for detecting CO2 leakages in geological formations, include Saito et al. (2006), Ajo-Franklin et al. (2013), Chadwick et al. (2014), Chadwick et al. (2016), Furre et al. (2015), Raji et al. (2018), and Raji et al. (2021). The new trend in Carbon Capture and Storage (CCS) research is to characterize the storage site and quantify the volume of CO2 that can be stored in some of the geological formations. CCS is the method of capturing carbon dioxide which would otherwise have been released to the atmosphere, converting the CO2 to a supercritical state, and injecting it into deep geological formations such as depleted oil and gas reservoirs, deep saline aquifers, deep coal seams, and salt caverns, among others.
CO2 storage in a subsurface geological formation requires site characterisation, estimation of the potential storage capacity, and evaluation of the risk of leakage from the geological formation. These three factors are important for the safety of the environment. Prior knowledge of the quantity of CO2 that can be stored in local fields and of the properties of the regional geological formation is crucial to the successful execution of CCS projects. Estimation of the volume of CO2 that can be stored in the saline aquifers in 'J' Field, Nigeria, and evaluation of the risk of leakage are the key foci of this paper. To the best of the authors' knowledge, the only published study on the CO2 sequestration potential of saline aquifers in Nigeria is a recent paper by Raji et al. (2021). At the same time, such studies are important to demonstrate the readiness of Nigeria to comply with the Kyoto Protocol and the United Nations Framework Convention on Climate Change with regard to global warming. Furthermore, the 2015 World Bank report showed that Nigeria is rated number 39 in the global ranking of carbon emission from all sources. Studies by Saito et al. (2006), Ajo-Franklin et al. (2013), Xu and Lei (2006), and Bohm et al. (2015), among others, have shown that injection of CO2 into saline water aquifers or hydrocarbon reservoirs can change the seismic velocity of the reservoirs or aquifers by up to 30%. Seismic velocity tomography can be used to image the velocity changes in the CO2-injected geological structures to monitor possible leakages. Raji et al. (2018) simulated the time-lapse CO2 movement in the complex reservoir structure of Marmousi in Angola; the study showed the capability and effectiveness of seismic velocity tomography for monitoring the movement of CO2 in stratigraphically complex geological storages. Accurate estimates of the CO2 containment of a sequestration site are critical for determining the life span of a storage site, facility costing, and field planning prior to injection. Saline aquifers, when compared to other geological formations such as oil reservoirs and coal beds, have the largest CO2 storage capacity. This is because some of the aquifers are regional in size and have higher porosities compared to hydrocarbon reservoirs and coal seams. For this reason, saline aquifers are considered the most abundant geological storage for CO2 (Tomić et al. 2018). This is especially true for Nigeria. The first project on CO2 storage in an offshore saline aquifer in Europe started in 1996 at Sleipner, Norway; more than 17 Mt of CO2 has been injected into that aquifer (IEA, 2017). Large projects on CO2 storage in onshore saline aquifers are ongoing at In Salah, Algeria, and at Weyburn, Canada, where over 1 Mt of CO2 is being injected into the aquifers per year (Ajo-Franklin and Orr 2009). Unlike in developed countries such as the U.S.A., Australia, Norway, Canada, and the Netherlands, where there have been extensive published studies on the CO2 storage potential of subsurface geological media (e.g., Bachus, 2002; Friedman et al., 2005; Solomon, 2007; Kaldi and Gibson-Poole, 2008; Ramirez et al., 2009; Godec et al., 2013; Boyd et al., 2013; Sayer et al., 2013), studies on carbon capture and storage (CCS) in the Nigerian geologic space are scarcely published. To the best of our knowledge, except for Raji et al. (2021), there are no published studies on the CO2 storage potential of aquifers in Nigeria.
However, published research on CO2 storage potential and leakage assessment in Nigeria is essential to demonstrate prior knowledge and the state of the art for future projects. Further, recent studies showed that the nature of CO2-brine-rock behavior at a geosequestration site depends on the phase of the CO2, the mineral composition of the rock, and the age of the storage (Peter et al., 2022). Visco-acoustic modelling of P- and S-wave velocity models of complex structures suitable for CO2 storage, and wavefield separation of complex seismic data, are described in Raji (2017) and Raji et al. (2019). The future research agenda includes large-scale storage at GtCO2/year and reservoir characterisation from the nanometer to the kilometer scale (Kelemen et al., 2019). The current study extends the work of Raji et al. (2021), which estimated the volume of CO2 storable in some saline aquifers in the Niger Delta of Nigeria, by including the computation of spatial petrophysical maps of aquifer properties and the assessment of the potential for CO2 leakage through the cap rocks. The overall aim of this study is to evaluate the volume of CO2 that can potentially be stored in the aquifers and the risk of CO2 leakage from them.

II. GEOLOGY OF THE NIGER DELTA

The study area is located in the Niger Delta Province of Nigeria. A detailed description of the field is not provided for proprietary reasons. The Niger Delta is located between latitudes 4° and 6° N and longitudes 3° and 9° E (Figure 1). It formed in a rift basin related to the opening of the South Atlantic Ocean. It is one of the largest sub-aerial basins in Africa, covering about 300,000 km² with a sediment fill of 9-12 km. The geology of the area, originally described by Short and Stauble (1967) and Doust and Omatsola (1990), is briefly reviewed in this section. The three main lithostratigraphic units in the Niger Delta are: (i) the shale-dominated Akata Formation, (ii) the sand-dominated Agbada Formation, and (iii) the Benin Formation. The Akata Formation is the lowest and oldest unit. This formation underlies the entire Niger Delta area, having a sediment thickness of up to 7 km in some places (Doust and Omatsola, 1990). The Akata Formation's age ranges from Paleocene to Recent, and it primarily consists of shale, clay, and silt. The shale in the Akata Formation forms the potential source rock; it is sufficiently thick and rich in organic matter to be capable of generating hydrocarbon (Evamy et al., 1978). The Agbada Formation overlies the Akata Formation and is made of sands and shales of fluvio-marine origin. Agbada is the main hydrocarbon-bearing interval in the Niger Delta (Evamy et al., 1978). The formation is about 3700 m thick, dated Eocene to Recent, and forms the hydrocarbon-prospective sequence in the Niger Delta. Most exploration wells in the Niger Delta have bottomed in the Agbada Formation. Hydrocarbon traps in the Agbada Formation are formed mainly by stratigraphic traps; in a few cases, there are structural traps and combinations of structural and stratigraphic traps. The roll-over anticline, which occurs in front of growth faults, is the main target of hydrocarbon explorationists in the Niger Delta of Nigeria. The Agbada Formation houses the reservoir, the trap, and the seal; in the exploration sense, it is the most important lithofacies in the Niger Delta petroleum system (Jibrin and Raji 2014; Adeoye et al. 2018). The Benin Formation is the youngest (Oligocene to Recent) and shallowest among the three lithofacies in the Niger Delta.
It directly overlies the Agbada Formation and consists of coarse-grained to gravelly sandstones. The Benin Formation hosts the most prolific aquifers in the Niger Delta region of Nigeria. The aquifers range from shallow to intermediate and deep. The deep aquifers in the Benin Formation are the candidate facility for CO2 storage in this study. These deep aquifers have good internal regional hydraulic connections and are separated by shale layers of significant thickness. The shale layers have characteristically low permeability and porosity, enabling them to serve as cap rocks for the aquifers and making the sand layers good candidates for the storage of CO2.

III. MATERIALS AND METHODS

A. Materials

To estimate the potential CO2 storage capacity of any geological formation, evaluation of the area, thickness, porosity, and permeability, among other properties of the formation, is required. This information is often derived from well logs and core data. The data used for this study were provided by the Department of Petroleum Resources (DPR), Nigeria. The data set comprised petrophysical logs from five wells and 3D seismic data covering the 'J' Field. The wells are named Pearl 01, 02, 03, 04, and X01; the well logs provided include gamma-ray, resistivity, spontaneous potential, and porosity logs. Core data from the wells were not available for this study. Five thick and laterally extensive saline sand aquifers penetrated by the wells were selected for the study. The seismic and well-log data were evaluated using the volumetric approach, and Petrel 2009 (by Schlumberger) was used to plot the maps and correlate the aquifers across the wells in the area.

B. Evaluation of the Selected Aquifers and Estimation of their Storage Capacity

A combination of gamma-ray and spontaneous potential (SP) logs was used to discriminate sand from shale layers using a cut-off of 70 API (American Petroleum Institute) units. Then, resistivity logs were used to ascertain that the thick sand aquifers selected were saline aquifers, not freshwater aquifers. The values of the deep resistivity logs were examined at the reservoir intervals and compared to the freshwater resistivity in the same area; the resistivity of fresh water in the Niger Delta is typically greater than 10 Ωm (Oteri, 1987). The selected aquifers were examined for lateral continuity across the five wells using lithologic correlation. The lithologic correlation template in Petrel 2009 was applied to correlate the sand layer in one well to the equivalent sand layer in another well, and then across all five wells. Following the correlation, one of the five sand layers initially selected for the study was rejected due to poor lateral continuity. The four sand layers that have good lateral continuity and vertical extent were further evaluated. For reference and clarity, the four saline aquifers were named L20, M30, N40, and P50. The gross thicknesses of the saline aquifers were estimated from the logs, followed by the net thicknesses (Nt). Other petrophysical parameters, such as formation water resistivity, hydraulic conductivity, porosity, and permeability of the aquifers, were also estimated within the aquifer intervals and plotted for spatial correlation. The 3D seismic volume had been preprocessed for signal enhancement, and it was interpreted to better define the structural framework of 'J' Field. Well-to-seismic ties were performed to determine the horizons that correspond to the tops of the saline aquifers on the seismic section.
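Before moving to the seismic workflow, the log-based screening described above can be made concrete with a minimal sketch. The following is not the authors' actual workflow (which was carried out in Petrel); it simply flags saline-sand samples from hypothetical digitized logs using the 70 API gamma-ray cut-off and the 10 Ωm freshwater threshold quoted above.

```python
import numpy as np

# Hypothetical digitized logs: depth (m), gamma-ray (API), deep resistivity (ohm-m).
depth = np.arange(900.0, 960.0, 0.5)
gr = np.random.uniform(40, 110, depth.size)   # stand-in values, for illustration only
rt = np.random.uniform(0.1, 20, depth.size)

GR_CUTOFF = 70.0   # sand if GR < 70 API (cut-off used in the paper)
RW_FRESH = 10.0    # Niger Delta fresh water is typically > 10 ohm-m (Oteri, 1987)

is_sand = gr < GR_CUTOFF
is_saline = rt < RW_FRESH                      # low resistivity suggests saline water

candidate = is_sand & is_saline                # saline-sand samples
net_sand = 0.5 * np.count_nonzero(candidate)   # net thickness, with 0.5 m sampling
print(f"Net saline-sand thickness in window: {net_sand:.1f} m")
```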
Synthetic seismic data were generated from density and velocity (inverse sonic) logs using the reflectivity method with a Ricker wavelet as the source impulse. Then, the horizons corresponding to the tops of the four aquifers, namely, L20, M30, N40, and P50, were picked using seed detection and a line-based interpretation strategy. Time-domain structural maps were generated for each aquifer, and these time-structured maps were converted to their corresponding depth-structured maps using the check-shot data. The depth-structured maps were used to calculate the aquifer surface areas required for the volumetric estimations, and to evaluate the structural framework and the potential trapping mechanisms within the aquifers. The CO2 storage capacity, G_CO2, of each aquifer was calculated following the method of Bachus (2015) as

G_CO2 = A × h × φ × (1 − S_w) × ρ_CO2 × E,

where A is the average area of the aquifer, h is the average thickness of the aquifer, φ is the average porosity, ρ_CO2 is the CO2 density, E is the storage efficiency factor, and S_w is the average water saturation. The density of supercritical CO2 at a depth interval of 1000 to 2500 m and a temperature of 67 °C is 0.54 g/cm³ (540 kg/m³). In addition to the storage properties and the viscosity of the fluid, the CO2 storage efficiency E of an aquifer depends on a combination of four factors described in Bachus (2015): (i) the in-situ conditions of the aquifer (temperature, pressure, lithology, porosity, permeability, heterogeneity, anisotropy, among others); (ii) the characteristics of the confining aquitard or cap rock (capillary entry pressure and permeability); (iii) the characteristics of the CO2 operation (injection rate, duration of injection, number of injection wells and their spacing); and (iv) regulatory constraints (the maximum bottom-hole injection pressure, the relevant aquifer area, and the scale of assessment, local or regional). The results obtained are presented in Figures 2 to 5 and discussed in Section IV.

C. Evaluation of the Caprocks for Leakages

The caprocks (seals) covering the aquifers were examined for the possibility of CO2 leakage. The sealing layers of the aquifers were mapped, and their thickness and lateral coverage were evaluated from the well logs. The densities of each of the four sealing layers were plotted against depth following Skerlec's model (Skerlec, 1982) to evaluate the in-situ ductile-brittle behaviour of each sealing layer (cap rock) and to predict its response to pressure. The results obtained are presented in Figures 6a and 6b and discussed in the next sections.

IV. RESULTS AND DISCUSSION

The correlation panel in Figure 2 shows that the four selected saline aquifers are sufficiently thick and laterally continuous across the five wells. This suggests that the aquifers can store a significant quantity of CO2. Further, Figure 2 shows that the saline aquifers are located within a depth range of 910 to 2300 m, which is deeper than the minimum depth of 800 m required for a CO2 storage site; the greater the depth, the lower the chance of CO2 leakage to the atmosphere. Figure 2 also shows the depth sequence of the aquifers, indicating that aquifer L20 is the shallowest, while aquifer P50 is the deepest. The average values of porosity, permeability, hydraulic conductivity, water saturation, formation water resistivity, and aquifer thickness estimated in the aquifer intervals are shown in Tables 1-4.
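As a concrete illustration of the volumetric formula above, the following sketch evaluates G_CO2 at the paper's four efficiency factors. The area, thickness, porosity, and water-saturation values below are hypothetical placeholders, not the actual 'J' Field parameters (those are in Tables 1-5); only the CO2 density and the efficiency factors are taken from the text.

```python
# Illustrative volumetric storage-capacity calculation (hypothetical inputs).
A = 4.0e7        # average aquifer area, m^2 (placeholder value)
h = 250.0        # average thickness, m (placeholder value)
phi = 0.25       # average porosity (placeholder value)
sw = 0.40        # average water saturation (placeholder value)
rho_co2 = 540.0  # supercritical CO2 density, kg/m^3 (from the text)

for E in (0.01, 0.04, 0.10, 0.15):   # efficiency factors used in the paper
    g_co2_kg = A * h * phi * (1.0 - sw) * rho_co2 * E
    print(f"E = {E:4.0%}: G_CO2 = {g_co2_kg / 1000.0:.3e} tons")
```

The linearity of the formula in E is why the paper reports capacities at several efficiency factors and then averages them.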
Core data for the interval under study are not available; however, published data on an adjacent oil field (Etu-Efeotor and Akpokodje, 1990) confirmed the validity of the porosity and permeability estimates. The porosity and permeability models around the wells are shown in Figure 3. Porosity and permeability are key parameters for storage and fluid flow, while bulk resistivity and formation water resistivity are important parameters for predicting the nature of the fluid in the aquifer and the chemical reactions CO2 may undergo during storage in the aquifers. Water saturation is important for estimating the fraction of the pore space that is readily available for CO2 storage at in-situ conditions. When the injection pressure is higher than the pore pressure, the pressure difference can force CO2 to replace the formation water in the pore spaces of the aquifers. Finer details of the petrophysical parameters of the aquifers, from one well to another, are shown in Tables 1-4. The tables show that the four saline aquifers L20, M30, N40, and P50 have sand thicknesses that range from 219 to 277 m, 105 to 147 m, 59 to 79 m, and 28 to 105 m, respectively. Table 1 also shows that the formation water resistivity is very low, ranging from 0.12 to 0.25 Ωm, thereby suggesting that the aquifers contain saltwater, not freshwater; the freshwater aquifers in the Niger Delta area have resistivity values greater than 10 Ωm (Oteri, 1987). The seismic section showing the stratigraphic succession of the saline aquifers is presented in Figure 4. The horizons corresponding to the tops of the aquifers were picked, and some faults were mapped using different colours, as shown in Figure 4. The shallowest and deepest horizons correspond to L20 and P50, respectively. The depth maps used for the volumetric estimation of CO2 storage in the aquifers are shown in Figure 5. The maps show the positions of the wells, the depths (colour-coded), and some structural elements such as fault-assisted closures. The estimated volumes of CO2 potentially storable in aquifers L20, M30, N40, and P50 are presented in Table 5, at 1%, 4%, 10%, and 15% efficiency factors. The reason for calculating the CO2 volume at different efficiency factors is that there is no consensus among CO2 sequestration researchers on the most appropriate efficiency factor for estimating the CO2 storage potential of aquifers. Also, the efficiency factor depends on a number of factors that are still not completely understood, including the characteristics of the aquifer and the caprock (Bachus, 2015). The values commonly used in the literature range from 1% to 20% (Van der Meer, 1982; 1995; Holloway et al., 2006; EERC, 2009). Consequently, the storage potentials of the aquifers estimated at 1%, 4%, 10%, and 15% are shown in Table 5. The results in Table 5 show that aquifer L20 has the highest storage capacity at all the efficiency factors, while aquifer P50 has the lowest storage capacity at all the efficiency factors. The total volumes of CO2 that can be stored in the combined aquifers are 2.78×10^11 tons, 5.90×10^11 tons, 3.11×10^10 tons, and 5.94×10^10 tons at 1%, 4%, 10%, and 15% efficiency factors, respectively. For this study, the average estimated storage capacity of each aquifer was computed as the mean of the storage capacities of the respective aquifer at the 1%, 4%, 10%, and 15% efficiency factors.
The average estimated storage capacities of the individual aquifers are 6.97×10^10 tons, 1.48×10^10 tons, 7.78×10^9 tons, and 1.49×10^10 tons for the L20, M30, N40, and P50 aquifers, respectively. The estimated combined aquifer storage capacity, being the sum of the estimated average storage capacities of the four aquifers, is 1.07×10^11 tons. The estimated volumes are comparable with those obtained in previous studies (Sayers et al., 2015; Kelemen et al., 2019): keeping other factors constant, the thicker the aquifers, the higher the CO2 volume storable in them. The cap rocks (seals) were found to be laterally extensive, covering the entire aquifer area. Figure 6a shows the estimated thicknesses of seals 1-4 in the different wells, where seals 1, 2, 3, and 4 are the respective seals of aquifers L20, M30, N40, and P50 (Table 3 gives the petrophysical parameters, including K (mD), Rw (Ωm), k (m/day), and Sw, of aquifer N40 across the five wells). The thicknesses of the seals (Figure 6a) range from 14 to 352 m, which is above the minimum seal thickness of 10 m required for CO2 sequestration (Kaldi et al., 2008). Seal 4 is consistently the thickest in all the wells, while seal 1 is the second thickest. The thicker the seal, the lower the risk of CO2 leakage due to breakage or diffusion. Using the Skerlec (1982) model to assess the brittle-ductile behaviour of the cap rocks (seals), Figure 6b shows that all the cap rocks plot in the lower part of the ductile section, within density values of 2.0 to 2.35 g/cm³, at a depth range of 910 to 2300 m. This result suggests that the seals are moderately ductile and have a low risk of breakage. Ductility in shale is a function of the compaction state; the greater the ductility, the lower the risk of breakage. Overall, aquifer L20 has the highest storage capacity, and its seal has the second-best rating; therefore, it is rated as the best aquifer in terms of CO2 storage and risk of leakage. Low-density, less-compacted shale layers are very ductile, while high-density, well-compacted shale layers are usually brittle. The ductility of the caprock allows it to deform without developing high-permeability pathways for leakage. Redox reactions and carbonate precipitation in caprocks can further reduce CO2 diffusion when there are no large permeability features (Wang and Tokunaga, 2015). Considering a density range of 1.2 to 2.8 g/cm³ within a depth range of 100-5000 m according to Skerlec's model, and the result in Figure 6b, where the seals (shales) plot at medium density values of 2 to 2.35 g/cm³ within a depth range of 910 to 2300 m, the seals are interpreted to be moderately ductile. Therefore, the seals have a low risk of breakage and CO2 leakage. Further, the depth-structured maps shown in Figure 5 reveal the presence of fault-assisted closures that are potentially useful for CO2 trapping within the aquifers.
Considering capillary pressure and the trapping mechanism for CO2 in storage media, capillary pressure generally serves as either a driving or an opposing force for CO2 leakage through the sealing layer, depending on the prevailing conditions and the properties of the storage formation, especially in the cap-rock transition zone.

Table 5. Estimated CO2 storage capacities (tons) of the aquifers at different efficiency factors (E.F.).

E.F.                      | L20        | M30        | N40        | P50        | Combined
1%                        | 9.61×10^9  | 2.03×10^9  | 1.07×10^9  | 2.06×10^9  | 2.78×10^11
4%                        | 3.88×10^10 | 8.14×10^9  | 4.30×10^9  | 8.25×10^9  | 5.90×10^11
10%                       | 9.61×10^10 | 2.03×10^10 | 1.07×10^10 | 2.06×10^10 | 3.11×10^10
15%                       | 1.35×10^11 | 2.84×10^11 | 1.52×10^10 | 2.88×10^10 | 5.97×10^10
Average of 1, 4, 10, 15%  | 6.97×10^10 | 1.48×10^10 | 7.78×10^9  | 1.48×10^10 | 1.07×10^11

As seen in the log signatures, porosity heterogeneity at the aquifer-cap rock (seal) transition zone will lead to residual trapping of CO2 in the cap rock, and this would play a major role in opposing CO2 leakage through the cap rock (see also Al-Menhali and Krevor, 2016). Solubility trapping of CO2 is also possible due to the presence of brine in the pore spaces of the media. However, core sample analyses are required to describe the detailed trapping mechanisms. Further, the presence of interbedded layers of shale and sand at the top and base of the storage media will cause significant porosity heterogeneity at the top and base of the aquifers. This heterogeneity will limit the capillary pressure driving CO2 migration into the caprock. Furthermore, the stratigraphic traps caused by porosity heterogeneity can store a significant CO2 volume, block the pores in those zones, and further reduce the chance of CO2 leaking through the cap rock.

V. CONCLUSION

The volume of CO2 that can be stored in the saline aquifers in 'J' Field, Niger Delta, Nigeria, has been estimated. Furthermore, the risk of CO2 leakage through the cap rocks overlying the aquifers has been evaluated. The aquifers were found to be sufficiently thick and laterally extensive to store a significant volume of CO2. The storage capacity of the combined aquifers was estimated to be 1.07×10^11 tons, while the individual storage capacities of the L20, M30, N40, and P50 aquifers are 6.97×10^10 tons, 1.48×10^10 tons, 7.78×10^9 tons, and 1.49×10^10 tons, respectively. The caprocks (seals) are formed by shales that are moderately ductile, sufficiently thick, and laterally extensive, covering the entire surface area of the respective aquifers to be used for storage. Aquifer L20 has the highest storage capacity, and its seal has the second-best rating. Aquifer P50 has the best sealing layer and the least storage capacity. In terms of storage capacity and the risk of leakage, aquifer L20 is rated as the best. The stratigraphic succession of the selected aquifers causes the aquifers to be sandwiched between competent top and bottom shale layers, which further reduces the risk of CO2 leakage. The study concludes that aquifers L20, M30, N40, and P50 are good and reliable for the safe and secure storage of CO2 in 'J' Field. Findings from this study are important for the basin-wide evaluation of CO2 storage in the Nigerian geological space for the mitigation of the greenhouse gas effect. Similar studies on depleted hydrocarbon reservoirs in the Niger Delta of Nigeria are recommended, with a view to preparing a template for a pilot study.
6,116.6
2022-09-23T00:00:00.000
[ "Geology" ]
Novel Water Probe for High-Frequency Focused Transducer Applied to Scanning Acoustic Microscopy System: Simulation and Experimental Investigation

A scanning acoustic microscopy (SAM) system is a common non-destructive instrument used to evaluate material quality in scientific and industrial applications. Technically, the tested sample is immersed in water during the scanning process; therefore, a robot arm is incorporated into the SAM system to transfer the sample for in-line inspection, which makes the system complex and increases time consumption. The main aim of this study is to develop a novel water probe for the SAM system, that is, a waterstream. During the scanning process, water is supplied using the waterstream instead of immersing the sample in water, which leads to a simple design of an automated SAM system and a reduction in time consumption. In addition, using a waterstream in the SAM system can avoid contamination of the sample caused by long immersion in water during scanning. The waterstream was designed based on the measured focal length calculation of the transducer and simulated to investigate the internal flow characteristics. To validate the simulation results, the waterstream was prototyped and applied to the traditional TSAM-400 and the fast W-FSAM SAM systems to successfully image samples such as carbon fiber-reinforced polymers, a printed circuit board, and a 6-inch wafer. These results demonstrate the design method of a water probe applied to the SAM system.

Introduction

Scanning acoustic microscopy (SAM) is a powerful, non-destructive instrument for material quality evaluation owing to its capability of visualizing the internal structure of a sample without destruction. In the 1970s, the SAM system was developed by Lemons and Quate [1]; it was used for imaging inside solid samples and biological tissues [2,3]. Nowadays, SAM is a well-accepted imaging instrument for scientific and industrial applications. In scientific applications, several studies have utilized the SAM system to analyze the biological characteristics and quality of samples. The SAM system has been employed to characterize both soft tissues [4,5] and hard tissues [6,7]. Kundu et al. [8] and Soon et al. [9] used the SAM system to visualize live cells and determine their mechanical properties. The conditions of biological matter were illustrated using the SAM system [10,11]. Moreover, the SAM system was used to measure cell properties such as sound speed, thickness, and density [12]. Using the C-scan image of the SAM system, the structural homogeneity of a sample can be examined by identifying the shape and size of internal defects. The delamination between the mold compound and the lead frame of an integrated circuit (IC) chip was visualized in a C-scan image [13-15]. Wang et al. [16] used the SAM system to evaluate the quality of the flip-chip assembly process: by interpreting the C-scan image, the delamination region was detected, thereby evaluating the quality of a printed circuit board (PCB). Owing to the complicated manufacturing process of wafers, internal defects can occur, which can be observed using the SAM system [17]. Twerdowski et al. [18] used the SAM system to locate the disbonded and weakly bonded regions inside a wafer. Noh et al.
[19] evaluated the bonding quality by using the SAM system to visualize bonding behavior. In addition, the SAM system has been proposed for the investigation of microstructure damage [20] and the evaluation of composite adhesive joints [21] in carbon fiber-reinforced polymer (CFRP) plates. The SAM system has also been used to evaluate the quality of other samples, such as welded joints [22-26], coating materials [27], and batteries [28].

For industrial applications, many commercial SAM systems have been developed, such as those by KSI, Honda, Sonix, Nordson, and PVA TePla. These systems have been used for product inspection in mass production. Automated SAM systems have been proposed to enhance the system's capability for in-line inspection. In this approach, a robot arm is incorporated into the SAM system to transfer the sample, for instance, in the SAM 300 system (PVA TePla): the wafer sample is transferred from the tray to a fixture, and, to implement the scanning process, the fixture is controlled to fully immerse the sample in water. By integrating a robot arm and fixture into the SAM system, the entire system becomes complicated, thereby increasing time consumption.

Technically, water is used as the medium for ultrasound wave propagation from the transducer to the sample. Therefore, in most SAM systems, the sample is fully immersed and the transducer is semi-immersed in water during the scanning process. A sample fully immersed in water can suffer damage due to contamination, such as oxidation, especially for metallic materials. To overcome the aforementioned problems, a water probe has been proposed for use in the SAM system. However, water probes are only used in commercial SAM systems (Sonoscan, Nordson), and there is no published research focused on water probe development for SAM systems. Moreover, in the commercial versions, the water pressure at the outlet of the water probe is high, which may cause damage to the sample's top surface, especially for soft materials.
This study aims to design and fabricate a novel water probe for the SAM system, that is, a waterstream. The waterstream supplies a continuous flow between the transducer and the sample, which maintains acoustic coupling during the scanning process. In other words, the tested sample is not immersed in water, which avoids damage to the sample due to contamination. In addition, using a waterstream leads to a simple design of an automated SAM system and reduces time consumption by transferring the sample with a standard mechanism (i.e., a conveyor). The waterstream was designed based on the water domain, which was simulated to investigate the internal flow characteristics. The water domain was modeled using the measured focal length calculation of the transducer. Using the PISO (Pressure-Implicit with Splitting of Operators) algorithm integrated into the open-source OpenFOAM 8 software, the instantaneous values of velocity and pressure were plotted. Furthermore, the water pressure at the outlet (the sample's top surface) was determined to be approximately equal to atmospheric pressure, which could protect the sample's top surface, particularly for soft materials (e.g., soft tissues). Based on the simulation results of the water domain, the waterstream was prototyped and applied to the traditional SAM (TSAM-400) and wafer fast SAM (W-FSAM) systems (Ohlabs Corp., Busan, Republic of Korea) to conduct experiments. The setup of the water supply system was simple; it comprised an immersion pump with a manual valve. Using the SAM systems with the waterstream, CFRP, PCB, and 6-inch wafer samples were successfully captured.

The remainder of the paper is organized as follows. Section 2 describes the schematic of the SAM system and the focal length calculation of the transducer. Based on the results of Section 2, the water domain is modeled and set up for simulation in Section 3. Section 4 shows the simulation results and the waterstream prototype. The waterstream is used in two SAM systems to conduct the experiments that are presented in Section 5. Finally, the conclusions are given in Section 6.

Scanning Acoustic Microscopy (SAM) System

The operation of the SAM system is built on the characteristics of the ultrasound (US) transducer, using the sensitivity of US waves to visualize the internal structure of the sample without destruction. The US transducer propagates US waves to the sample through the water and receives the echo signals reflected off the sample, which are converted to digital signals. The transducer is attached to the scanning module, and the sample is fixed to the sample table. The scanning module can be adjusted along the z-axis to define the focal zone. Technically, the scanning module consists of two linear motions, along the x- and y-axes. During the scanning process, the scanning module is controlled in a sequence to generate the scanning images: when the scanning module completes one line along the x-axis, it moves one step along the y-axis.
In previous studies, the traditional SAM (TSAM-400) and fast SAM (W-FSAM) systems were developed to successfully capture samples that were fully immersed in water. The motion along the y-axis of both systems is implemented by a ball-screw mechanism. TSAM-400 uses a linear motor to produce the linear motion along the x-axis and has been used to evaluate the quality of spot-welded sheets [29]. To enhance the efficiency of the SAM system with regard to in-line inspection, the W-FSAM system was designed by exploiting a slider-crank mechanism to conduct high-speed motion along the x-axis. Using the W-FSAM system, the scanning time was significantly reduced while maintaining the high resolution of the imaging results. In this study, a waterstream is used to supply water instead of immersing the sample in water during the scanning process; it is applied to the TSAM-400 and W-FSAM systems. Figures 1 and 2 show schematics of the TSAM-400 and W-FSAM systems using the waterstream, respectively. The waterstream is attached to the transducer. Using an immersion pump, water is supplied to the waterstream from a water container. The flow input is controlled using a manual valve to maintain the flow velocity at the inlet. The water returns to the water container via a water tank, as shown in Figures 1 and 2.
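As a small illustration of the raster sequence described above (scan one full line along the x-axis, then step once along the y-axis), the following is a minimal sketch of a serpentine scan-coordinate generator; the grid size and step values are arbitrary placeholders, not the actual scan parameters of the TSAM-400 or W-FSAM systems.

```python
from typing import Iterator, Tuple

def raster_scan(nx: int, ny: int, dx: float, dy: float) -> Iterator[Tuple[float, float]]:
    """Yield (x, y) positions of a serpentine raster: one full x line, then one y step."""
    for j in range(ny):
        xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)  # reverse alternate lines
        for i in xs:
            yield (i * dx, j * dy)

# Example: a 5 x 3 grid with 0.1 mm pitch (placeholder values).
for x, y in raster_scan(5, 3, 0.1, 0.1):
    print(f"x = {x:.1f} mm, y = {y:.1f} mm")
```

The serpentine ordering avoids a return stroke at the end of each line, which is one common way such a line-plus-step sequence is realized in practice.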
Focal Length Calculation of the Transducer

To generate high-contrast images, the transducer and sample are aligned along the z-axis to define the focal zone before the scanning process. The focal length (f) measured in water is given in the transducer's technical report. For a specific sample, the measured focal length of the transducer depends on the material's characteristics. The acoustic wave propagates to the top surface of the sample at an incident angle (aperture angle, Θ_w) through the water. Due to the difference between the acoustic properties of the water and of the sample material, the acoustic wave is refracted when it enters the sample. The refracted angle (Θ_s) is calculated using Snell's law [30,31],

sin Θ_s / sin Θ_w = c_s / c_w,

where c_w and c_s denote the longitudinal velocities of acoustic waves propagating in the water and in the sample, respectively. As the acoustic velocity in most materials is higher than that in water, the focal length in the sample is effectively shortened, as illustrated in Figure 3. The measured focal length inside the sample (f_s) is determined from the quantities h and dp,
which are in turn calculated from D and wp, where D and wp stand for the element size of the transducer and the distance from the transducer to the sample's top surface, respectively. The waterstream is designed based on the value of wp.

Water Domain Modeling

The waterstream is used to continuously maintain the water environment between the transducer and the sample during the scanning process. Water is supplied at the inlet through a one-touch fitting, whereas the outlet is defined at the top surface of the sample. The water domain is modeled based on the wp value, which depends on the characteristics of the transducer and the sample. In this study, a 100 MHz focused transducer with a focal length (f) of 8 mm and an element size (D) of 3 mm was used, giving an aperture angle (Θ_w) of 10.81°. To increase the applicability of the waterstream, the water domain was modeled with the minimum value of wp, corresponding to the maximum values of dp and c_s. The maximum value of dp was taken as the penetration depth of the transducer, which is 0.4 mm for the 100 MHz focused transducer [32]. The value of c_s was 4660 m/s, corresponding to copper. All the experiments were conducted at room temperature (25 °C); thus, the value of c_w was set to 1500 m/s. Therefore, the value of wp was 6.4 mm.

Figure 4 shows the modeling of the water domain. The inlet diameter was 2.3 mm, corresponding to the hole diameter of the one-touch fitting. The distance between the transducer surface and the outlet was set equal to wp. Water flowed from the inlet to the outlet through the wall boundary, which was modeled to cover the transducer's outside diameter of 22.9 mm.
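The angles quoted above can be reproduced with a few lines of code. The sketch below computes the aperture angle as Θ_w = arcsin(D/2f) (which reproduces the paper's 10.81°) and the refracted angle from Snell's law. The paper's exact defining equation for wp is not reproduced in this excerpt, so the water-path estimate shown uses a common first-order focal-shift approximation, wp ≈ f − dp·(c_s/c_w), purely for illustration; the paper itself arrives at wp = 6.4 mm.

```python
import math

f = 8.0       # focal length in water, mm
D = 3.0       # element size, mm
dp = 0.4      # penetration depth in the sample, mm
c_w = 1500.0  # longitudinal velocity in water, m/s
c_s = 4660.0  # longitudinal velocity in copper, m/s

theta_w = math.degrees(math.asin((D / 2.0) / f))  # aperture angle
theta_s = math.degrees(                            # Snell's law refraction
    math.asin((c_s / c_w) * math.sin(math.radians(theta_w))))

wp_approx = f - dp * (c_s / c_w)  # first-order focal-shift estimate (assumption)

print(f"aperture angle  Theta_w = {theta_w:.2f} deg")   # 10.81 deg, as in the text
print(f"refracted angle Theta_s = {theta_s:.2f} deg")
print(f"approximate water path wp = {wp_approx:.2f} mm (paper: 6.4 mm)")
```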
Governing Equations

In this study, all the experiments are conducted at room temperature (25 °C), and the flow is treated as incompressible. The flow is governed by the continuity and Navier-Stokes equations,

∂u/∂x + ∂v/∂y + ∂w/∂z = 0,

∂u_i/∂t + u_j ∂u_i/∂x_j = −∂p/∂x_i + υ ∂²u_i/(∂x_j ∂x_j) + g_i,

where u, v, and w denote the flow velocities along the x-, y-, and z-axes, respectively; the gravitational acceleration along the j-axis (j = x, y, z) is given by g_j; and p and υ denote the kinematic pressure and kinematic viscosity, respectively. The Reynolds number (Re), used to predict the flow pattern, is calculated as

Re = U d / υ,

where d and U are the inlet diameter and the magnitude of the flow velocity, respectively. In this study, the flow velocity was maintained at 12 m/s using a manual valve, and the water kinematic viscosity (υ) was 10⁻⁶ m²/s. Therefore, Re was 27,600, which indicates turbulent flow. The kappa-epsilon (k-ε) model is widely used to describe turbulent flow at high Reynolds numbers; two transport equations, for the turbulent kinetic energy (k) and its dissipation rate (ε), represent the turbulent properties. To conduct the numerical simulation of the k-ε model, the initial conditions are defined as

k = (3/2)(U I)², ε = c_µ^(3/4) k^(3/2) / l,

where I denotes the initial turbulent intensity, given by I = 0.16 Re^(−1/8), c_µ is a k-ε model parameter with a value of 0.09, and l is the turbulent length scale, calculated as l = 0.07d.

Simulation Setup

Before the simulation process, the water domain was created and the mesh was generated using the open-source SALOME software, version 9.7.0. Figure 5a shows the mesh construction of the water domain, which contains 295,161 elements. The mesh constructions of the inlet and outlet are shown in Figures 5b and 5c, respectively. The convergence and runtime depend on the element sizes, which were set with minimum and maximum values of 0.01 and 0.07 mm, respectively.
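Before turning to the solver settings, the initialization quantities defined above can be checked numerically. The sketch below uses the standard textbook estimates written in the previous subsection (the turbulent-intensity relation I = 0.16 Re^(−1/8) is an assumption here, since the paper's Table 1 is not reproduced in this excerpt); the computed Re matches the value of 27,600 quoted in the text.

```python
# Recompute Re and the k-epsilon initialization from the quantities in the text.
d = 2.3e-3    # inlet diameter, m
U = 12.0      # inlet flow velocity, m/s
nu = 1e-6     # kinematic viscosity of water, m^2/s
c_mu = 0.09   # k-epsilon model constant

Re = U * d / nu                    # 27,600, matching the text
I = 0.16 * Re ** (-1.0 / 8.0)      # standard turbulent-intensity estimate (assumption)
k = 1.5 * (U * I) ** 2             # turbulent kinetic energy, m^2/s^2
l = 0.07 * d                       # turbulent length scale, m
eps = c_mu ** 0.75 * k ** 1.5 / l  # dissipation rate, m^2/s^3

print(f"Re = {Re:.0f}, I = {I:.4f}, k = {k:.3f} m^2/s^2, epsilon = {eps:.1f} m^2/s^3")
```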
The k-ε turbulent model was simulated using OpenFOAM. The pressure and velocity values were obtained through the PISO scheme. The initial pressure was set to 101.325 m²/s² (atmospheric pressure expressed as kinematic pressure). The velocities on the wall boundaries were defined with a no-slip condition. The initial conditions of the simulation are listed in Table 1.

Simulation Results

In this study, a waterstream is used to maintain the water environment between the transducer and the sample during the scanning process. To investigate the flow characteristics, the instantaneous values of velocity and pressure in the water domain were determined and saved at each time step. Using OpenFOAM, these values were plotted at t = 250 × n × Δt, where n denotes the integers from 0 to 8. Figure 6 shows the instantaneous velocity distribution inside the water domain, plotted as streamlines. At t = 0 s, water is injected from the inlet at the highest velocity. Owing to the high flow velocity, turbulence was generated inside the water domain. As the flow moved toward the outlet, the turbulence was dissipated because of the decrease in flow velocity. From t = 0.1 s, the flow pattern was stable, forming a continuous flow moving toward the outlet.
The pressure distribution inside the water domain is shown in Figure 7. According to Bernoulli's principle, low-pressure regions were obtained in the areas that exhibited high flow velocity; in other words, low pressure appeared in the turbulent areas. The flow velocity in the inlet throat was maintained at a high value for all time steps, thereby generating low pressure in the inlet throat. Similar to the velocity distribution, a stable state of pressure was obtained from t = 0.1 s. An animation of the velocity and pressure distributions was recorded for illustration (Supplementary Movie S1).

For further understanding of the velocity and pressure behaviors inside the water domain, the velocity and pressure profiles along the centerline between the transducer surface and the outlet are shown in Figure 8. The velocity was highest at Z = 3 mm, near the middle orifice. The flow velocity decreased to 0.55 m/s when the flow reached the transducer surface. As expected, the pressure profile behaved inversely to the velocity profile; that is, the pressure increased when the velocity decreased and vice versa. At the outlet, the pressure was approximately equal to atmospheric pressure, whereas the flow remained continuous at 1 m/s.
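The inverse velocity-pressure behavior can be checked roughly with Bernoulli's principle in kinematic units (pressure in m²/s², as in the simulation setup). A small illustrative sketch, neglecting turbulence losses, using the two Figure 8 endpoint velocities:

```python
# Kinematic Bernoulli check along the centerline (inviscid approximation):
# p + 0.5*U^2 should stay roughly constant between two stations.
p_atm = 101.325      # kinematic pressure (m^2/s^2), as in the simulation setup

U1, U2 = 1.0, 0.55   # outlet and near-transducer velocities from Figure 8 (m/s)
p1 = p_atm           # outlet pressure ~ atmospheric
p2 = p1 + 0.5 * (U1**2 - U2**2)   # predicted pressure where the flow is slower

print(f"p2 - p1 = {p2 - p1:.3f} m^2/s^2 (pressure rises where velocity drops)")
```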
According to the simulation results, the flow was stable after 0.1 s, which indicated that the scanning process could be started almost immediately after the water supply was turned on. The flow behavior demonstrated that the acoustic propagation path between the transducer and sample was continuous, so no signals were missed in the data collection for image processing. In addition, the pressure at the outlet was approximately 1 atm, potentially protecting the sample's top surface from damage.

Waterstream Prototype
The waterstream is designed based on the modeling parameters, which cover the entire water domain. To simplify the design and fabrication processes, the waterstream concept comprises seven components, as shown in Figure 9a,b. A one-touch fitting was fixed on the upper plate by a thread joint, which defined the inlet of the water domain. The walls and outlet were created in the cavity plate. Figure 9a shows a two-dimensional (2D) drawing of the waterstream. The upper and cavity plates were connected by bolts. The waterstream was mounted on the transducer using a grub screw. Figure 9b shows an exploded view of a three-dimensional (3D) drawing of the waterstream. An O-ring was used between the upper and cavity plates to restrict water overflow. Grub screws were fixed on both sides of the cavity plate to prevent water leakage.

During the scanning process, the transducer was controlled to move in a sequence. To facilitate these motions, the gap between the bottom face of the cavity plate and the top face of the sample was set at a minimum value of 3 mm; thus, the distance between the transducer surface and the bottom face of the cavity plate was 3.4 mm, as shown in Figure 9a.
Figure 9c shows the rendered concept of the waterstream. The upper and cavity plates were fabricated from acrylic. Finally, the waterstream was assembled as shown in Figure 9d.

Experimental Results
The TSAM−400 and W−FSAM systems were used with the waterstream to validate the simulation results. Water was supplied to the two waterstreams using the immersion pump. Two independent manual valves were used to maintain the flow velocity through the one-touch fittings. To demonstrate the capabilities of the waterstream, several samples were successfully imaged using the SAM systems: a CFRP, a PCB, and a 6-inch wafer.

Waterstream for the TSAM−400 System
A CFRP sample was prepared to be scanned using the TSAM−400 system with the waterstream. To reduce the scanning time, two identical transducers were arranged within a distance of 77.5 mm along the x-axis, as shown in Figure 10. Figure 11a shows the CFRP sample with an area of 146 × 122 mm². To entirely cover the sample along the x-axis, the linear motor was driven to travel 77.5 mm at 4.5 Hz, the same values used in a previous study [29]. Figure 11b,c show the scanning images of the top surface and underlayer of the CFRP sample, respectively. These results indicate good adhesive quality between the top surface and underlayer (no delamination), as shown in Figure 11c. In addition, the fiber orientation is visible in the underlayer image, as shown in Figure 11d.

Waterstream for the W−FSAM System
The W−FSAM system was developed to provide acceptable scanning images in a short time. Owing to the fast movement of the transducer, the scanning time was significantly reduced. To demonstrate the capabilities of the waterstream applied to the W−FSAM system, the CFRP sample was scanned with the same scanning parameters as those of the TSAM−400 system. Using the W−FSAM system, the image was successfully captured within approximately 6.4 min (Supplementary Movie S2), thereby reducing the scanning time by 50% compared to the TSAM−400 system.
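The reported scanning times are consistent with a simple one-B-scan-line-per-frame estimate. A minimal sketch; the 0.04 mm step size and 8 Hz frame rate are taken from the wafer scan described below and are assumed here for the CFRP case, where they are not stated explicitly:

```python
def scan_time_minutes(extent_mm: float, step_mm: float, frame_rate_hz: float) -> float:
    """Estimate C-scan acquisition time, assuming one B-scan line per frame."""
    n_lines = extent_mm / step_mm       # number of B-scan lines across the sample
    return n_lines / frame_rate_hz / 60.0

print(scan_time_minutes(122, 0.04, 8))  # CFRP short axis -> ~6.4 min
print(scan_time_minutes(146, 0.04, 8))  # wafer scan      -> ~7.6 min
```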
The quality of the PCB sample was evaluated using the W−FSAM system with the waterstream. Figure 12a shows the PCB sample with an area of 150 × 109 mm². The accuracy of the component positions is illustrated in the C-scan image of the top surface, as shown in Figure 12b. Figure 12c shows the underlayer C-scan image, which can be used to evaluate the soldering quality of the PCB. Delamination regions were detected in the soldering area, as highlighted by the yellow ellipses in Figure 12d. The regions inside the yellow ellipses were darker than the surrounding regions, indicating that the amplitude of the reflected signal there was stronger; this means that the structure in these regions was inhomogeneous, i.e., delaminated. These images were identical to the results of immersing the PCB in water using the FSAM system [33] and were obtained at the same speed. Using the W−FSAM system, only one scanning pass was needed to evaluate the quality of the entire PCB, which enables in-line inspection in PCB mass production.

The 6-inch wafer sample was prepared to be scanned using the W−FSAM system, as shown in Figure 13a. The scanning parameters were set at 155 × 146 mm², 0.04 mm, and 8 Hz, corresponding to the scanning area, step size, and B-scan frame rate, respectively. The scanning time was approximately 7.6 min (Supplementary Movie S3). Figure 13b shows the C-scan image of the wafer, which is interpreted to evaluate the quality of the wafer. There were two defects inside the wafer sample, as highlighted by the red rectangles. Some delamination areas were also detected in the C-scan image, as highlighted by the yellow ellipses. Figure 13c,d show enlarged views of the defect (I) and delamination (II) areas, respectively. These results indicate that the sample quality is unacceptable.
Conclusions
In this study, a novel water probe (waterstream) was designed and fabricated for a high-frequency focused transducer, which was used in the SAM system to enhance its capabilities in in-line inspection. The waterstream was developed based on the water domain, which was modeled using the measured focal length of the transducer. A numerical simulation was conducted to investigate the internal flow characteristics inside the water domain. The simulation was performed with the PISO algorithm using OpenFOAM. The simulation results indicated that continuous flow was maintained between the transducer surface and the outlet. The flow velocity at the outlet was approximately 1 m/s, while the pressure was equal to atmospheric pressure.

The simulation results were validated by experiments. Based on the water domain model, the waterstream was designed, fabricated, and applied to the traditional and fast SAM systems. Several samples were successfully imaged using the TSAM−400 and W−FSAM systems with the waterstream, and the images were identical to those obtained by immersing the samples in water. These results corroborate the simulation and indicate that the systems can be used for in-line inspection. Normally, when a SAM system is used to scan a sample immersed in water, a robot arm is needed to transfer the sample, which complicates the system and increases the time consumption. With a waterstream, a conveyor can be used to transfer the sample for automated inspection, which simplifies the system and reduces time consumption. In addition, the sample is scanned without being immersed in water, which protects it from contaminants. Furthermore, the water pressure at the top surface of the sample is approximately equal to atmospheric pressure, which helps avoid damage to the sample, especially for soft samples. These results demonstrate the potential application of waterstreams in automated SAM systems.

To extend the research based on the simulation and experimental results, future studies are suggested. First, waterstreams can be developed for other transducers based on their measured focal lengths and internal flow simulations. Second, the SAM system with a waterstream can be mounted on an industrial robot for imaging samples with surface curvature.
Figure 3. The measured focal length of the transducer inside the sample.
Figure 6. Velocity distribution inside the water domain, plotted as streamlines.
Figure 7. Pressure distribution inside the water domain, plotted as streamlines.
Figure 8. The velocity and pressure distributions along the centerline between the transducer and outlet.
Figure 13. (a) The 6-inch wafer sample, (b) C-scan image of the wafer, (c) enlarged view of area I, (d) enlarged view of area II.
Table 1. The initial conditions of the simulation.
9,195.2
2024-08-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Distinct Roles of Transforming Growth Factor-β-activated Kinase 1 (TAK1)-c-Rel and Interferon Regulatory Factor 4 (IRF4) Pathways in Human T Cell Lymphotropic Virus 1-transformed T helper 17 Cells Producing Interleukin-9* Investigation of helper T cell markers in HTLV-1-transformed cell lines demonstrated that HuT-102 has an IL-9-producing Th17 phenotype. We confirmed the vital role of retinoic acid-related orphan receptor C, a Th17 transcription factor, in the expression of IL-17. Interferon regulatory factor 4 (IRF4), a transcription factor overexpressed in all HTLV-1-infected cells, regulated IL-17 and IL-9 concomitantly. We further demonstrated a novel pathway for the regulation of the Tax-induced cytokines IL-9 and IL-6 through TAK1-mediated nuclear accumulation of c-Rel. A microarray analysis of IRF4-knockdown HuT-102 cells showed a significant up-regulation of the set of genes related to Th1, mainly IFN-γ and several transcription factors. T-bet and IRF1, but not STAT1 and IRF9, participated in counteracting the inhibitory effect of IRF4 on the production of IFN-γ. Finally, suppression of both IRF4 and c-Rel resulted in reduced proliferation. Collectively, these findings indicate that the TAK1-c-Rel and IRF4 pathways play distinct roles in the maintenance of the IL-9-producing Th17 phenotype of HTLV-1-transformed cells.

Human T cell lymphotropic virus 1 (HTLV-1) infects 20 million people worldwide, with 3% developing adult T cell leukemia (ATL) and a further 0.25-3% developing an inflammatory disease of the CNS known as HTLV-1-associated myelopathy/tropical spastic paraparesis (1,2). ATL is an aggressive proliferation of mature activated CD4+ T cells, usually showing very poor prognosis for treatment (3,4). Although antiviral combination therapy with IFN-α and zidovudine (AZT) is considered a treatment for ATL, patients frequently suffer relapse. This relapse emphasizes the need for new therapeutic approaches and strategies. Clonal expansions of HTLV-1 result from the expression of the viral transactivator protein Tax, which is thought to be a key molecule of ATL onset. Tax has many pathological functions, such as virus replication, immortalization of host cells, and the activation of several transcription factors and signal transduction molecules (5-7). We have also shown previously the Tax-dependent constitutive activation of the TAK1-MAPK and TAK1-IRF3 pathways (8,9). IRF4, which is preferentially expressed in lymphoid cells, was first identified as a transcription factor that negatively regulates the activity of IFN-regulated genes and TLR signaling (10,11). In 2007, Ramos et al. (12) showed that either IRF4 or c-Rel was overexpressed in antiviral-resistant ATL cells. On the other hand, IRF4 is emerging as a critical regulator of T-helper cell (Th) differentiation, playing an important role in both Th2 and Th17 development by controlling cytokine expression and apoptosis (13,14). Th1-, Th2-, and T regulatory cell-associated cytokines were shown previously to be detected in the serum of HTLV-1-infected patients (15). On the other hand, a study of T cells showed a close relationship between HTLV-1-associated myelopathy/tropical spastic paraparesis and both multiple sclerosis and experimental autoimmune encephalomyelitis lesions, which are known pathological indicators for the presence of Th17 (16,17).
In a 2004 study, ATL cells were suggested to be derived from T regulatory cells after the detection of FOXP3 gene transcription in 47% of ATL cases (18). In the same year, one year before the proposal of Th17 as a new T helper lineage, Dodon et al. (19) showed that Tax induces IL-17 gene expression. From these previous data, it is clear that the phenotype of ATL is a matter of debate. In this study, we set out to identify the T cell lineages involved in HTLV-1 infection. Subsequently, we explored the role of both IRF4 and c-Rel in the expression of pivotal cytokines of this phenotype and in proliferation. We found that IRF4 preferentially maintains the axis of IL-17-IL-9 production against IFN-γ production.

RNA Interference-Cells were transfected with siRNA using the Amaxa electroporation system. IRF1, IRF3, IRF4, IRF9, STAT1, c-Rel, RORC, Tax, and T-bet siRNAs were designed by and purchased from Invitrogen. Luc siRNA with a two-nucleotide overhang at the 3′-end of the sequence was synthesized by Hokkaido System Science (Sapporo, Japan). The target sequences are summarized in supplemental Table S1.

Cell Proliferation Assay-HuT-102 cells transfected with siRNAs against Luc, IRF4, c-Rel, or both IRF4 and c-Rel were harvested. Viable cells were counted microscopically using the trypan blue exclusion assay. The statistical significance of cell proliferation was calculated by performing the Tukey-Kramer test, and p values < 0.01 were regarded as significant.

Immunoblotting-Whole cell lysates, cytoplasmic extracts, and nuclear extracts were prepared as described previously (20), resolved by SDS-PAGE, and transferred to an Immobilon-P nylon membrane (Millipore, Bedford, MA). The membrane was treated with BlockAce (Dainippon Pharmaceutical Co. Ltd., Suita, Japan) overnight at 4°C and probed with primary antibodies, as described above. Antibodies were detected using horseradish peroxidase-conjugated anti-rabbit, anti-mouse, anti-goat, and anti-sheep IgG (DakoCytomation, Glostrup, Denmark) and visualized with the ECL system (GE Healthcare).

Immunoprecipitation-Cell lysates prepared as described previously (21) were immunoprecipitated with anti-STAT1 antibody. The immunoprecipitates were immunoblotted as described above.

Plasmid DNA-The pcDNA-IRF1 expression vector was kindly provided by Dr. Mark Perrella (Brigham and Women's Hospital, Boston, MA). Transfection was performed using the Amaxa electroporation system.

DNA Microarray-Total RNA was extracted from cells using the RNeasy Mini Kit (Qiagen, Valencia, CA). Gene expression was analyzed using a GeneChip system with the Human Genome Array U133 Plus 2.0 (Affymetrix, Santa Clara, CA) as described previously (22). In this study, six arrays were used: two for HuT-siLuc cells, two for HuT-siIRF4 cells, and two for HuT-siIRF3 cells (positive counter control). A fold change value of >2 (up-regulated) or <0.5 (down-regulated) was considered to be biologically important. The statistical significance of the fold change was calculated for the two groups by performing a Student's t test, and p values < 0.05 were regarded as significant. The microarray results were deposited in the GEO Database (accession no. 22036).
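The fold-change and significance criteria above translate directly into a simple filtering step. A minimal sketch with hypothetical expression values; the gene names, replicate counts and numbers are illustrative only, not the deposited array data:

```python
import pandas as pd
from scipy import stats

# Hypothetical expression table: two replicate columns per condition,
# mirroring the siLuc vs. siIRF4 design described above.
df = pd.DataFrame({
    "siLuc_1": [100, 250, 80], "siLuc_2": [110, 240, 90],
    "siIRF4_1": [220, 90, 85], "siIRF4_2": [230, 100, 95],
}, index=["IFNG", "IL17A", "ACTB"])

ctrl = df[["siLuc_1", "siLuc_2"]]
kd = df[["siIRF4_1", "siIRF4_2"]]

fold = kd.mean(axis=1) / ctrl.mean(axis=1)          # knockdown / control
pvals = stats.ttest_ind(kd, ctrl, axis=1).pvalue    # Student's t test per gene

# The stated criteria: fold change >2 (up) or <0.5 (down), p < 0.05.
hits = df[((fold > 2) | (fold < 0.5)) & (pvals < 0.05)]
print(hits.index.tolist())
```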
Real-time RT-PCR-Total RNA was prepared using the RNeasy Mini kit (Qiagen). First-strand cDNA was synthesized with SuperScript II reverse transcriptase (Invitrogen). The cDNA was amplified quantitatively using SYBR Premix Ex Taq (Takara Bio, Otsu, Japan). The primer sequences are summarized in supplemental Table S2. Real-time quantitative RT-PCR was performed using a Prism 7300 sequence detection system (Applied Biosystems, Foster City, CA). All data were normalized to β-actin mRNA. The data shown are representative of at least three independent experiments.

ELISA-The DuoSet ELISA development system for human IL-17 was purchased from R&D Systems. Briefly, each cell line (1 × 10^6 cells/ml) was cultured in RPMI 1640 supplemented with 10% FCS, 100 units/ml penicillin, and 100 µg/ml streptomycin. Cells were left to reach confluency by incubating at 37°C in 5% CO2. After centrifugation, supernatants were collected and used in the analysis as described by the manufacturer.

RESULTS
HuT-102 Is an IL-9-producing Th17 Phenotype-Based on previous data designating possible CD4+ phenotypes for HTLV-1-infected cells, we used two Tax-negative HTLV-1-infected cell lines (ED40515(−) and MT-1) and two Tax-positive cell lines (HuT-102 and MT-2) for Th phenotype screening. Jurkat leukemic T cells were used as HTLV-1-free control lymphocytes. We performed real-time RT-PCR of the main cytokines and transcription factors involved in Th1, Th2, Th17, and T regulatory CD4+ cells. HuT-102 cells, in contrast to the other cell lines, showed a characteristic phenotype of IL-9-producing Th17 cells. The results showed, similar to classic Th17 cells, high levels of IL-17 and RORC and moderate levels of STAT3 and IL-23. Of note, IL-9 and IL-6 were significantly expressed in the HuT-102 and MT-2 Tax-positive cell lines. On the other hand, IRF4 was expressed highly in HuT-102 as well as the other cell lines (Table 1). As IL-17 and RORC are considered the hallmark cytokine and transcription factor, respectively, for the presence of Th17, we sought to confirm the RT-PCR data for both of them. We utilized an ELISA assay to estimate the absolute cytokine content of IL-17, showing high cytokine production by HuT-102 cells (15 ng/ml), whereas the other cell lines secreted barely detectable amounts of IL-17 (48, 34, 23, and 35 pg/ml for Jurkat, ED, MT-1, and MT-2, respectively) (Fig. 1A). In agreement with this, HuT-102 cells displayed substantial expression of RORC, whereas STAT3 showed an almost identical pattern of expression among the four cell lines (Fig. 1B). On the other hand, we and others have previously confirmed the preferentially high expression levels of IRF4 in all HTLV-1-infected cells (9). To further corroborate the crucial role of RORC, we transfected cells with a specific siRNA for RORC (Fig. 1C). The knockdown of RORC caused selective down-regulation of IL-17 expression (Fig. 1D). Meanwhile, IRF4, STAT3, IL-6, and IL-9 were unchanged (Fig. 1, C and D). These results confirmed the critical role of RORC in the maintenance of the HuT-102 Th17 phenotype.

IRF4 and c-Rel Differentially Regulated IL-17, IL-9, and IL-6 in HuT-102-Ramos et al. (12) reported previously that both IRF4 and c-Rel are expressed in ATL cells derived from antiviral-resistant patients. This fact, in addition to the reported role of IRF4 in Th17, raised our interest in investigating the role of IRF4 and c-Rel in HuT-102 cells. To this end, we transfected HuT-102 cells with siRNAs against IRF4 or c-Rel and confirmed the selective knockdown of the target proteins (Fig. 2, A and C). The knockdown of IRF4 caused down-regulation of both IL-17 and IL-9 (Fig. 2B), whereas c-Rel knockdown caused down-regulation of both IL-9 and IL-6 (Fig. 2D).
Given the reported regulation of IL-6 and IL-9 by Tax in HTLV-1-infected cells (23,24), one could argue that the c-Rel regulation of IL-9 and IL-6 occurs by regulating Tax. To rule out this possibility, we checked for the effect of c-Rel knockdown on Tax expression and demonstrated that c-Rel does not control the Tax expression level (supplemental Fig. S1). In contrast, Tax knockdown caused the down-regulation of c-Rel (supplemental Fig. S1). Of note, we confirmed the down-regulation of IL-9 and IL-6 by Tax knockdown (supplemental Fig. S1).

Role of TAK1 in c-Rel-dependent Regulation of IL-9 and IL-6-Several reports have previously pointed out the possible regulation of c-Rel by the Tax oncoprotein (25). However, whether the regulation of IL-9 and IL-6 is a simple Tax/c-Rel regulatory pathway or involves other factors is still an unanswered question. Recently, our group reported Tax-dependent constitutive TAK1 activation in HTLV-1-infected cells (9). To this end, we sought further insight into the possible involvement of TAK1 in the Tax/c-Rel pathway. In that context, we checked the expression of IL-9 and IL-6 in HuT-102 cells stably transfected with a TAK1 shRNA vector (HuT-shTAK1 cells). As expected, TAK1 knockdown caused down-regulation of both IL-9 and IL-6 (Fig. 3A). To uncover a possible link between TAK1 and, subsequently, c-Rel in regulating IL-9 and IL-6, we investigated the role of TAK1 in the control of NF-κB pathways. Our results showed slight down-regulation of the total protein expression level of both p52 and c-Rel in response to TAK1 knockdown (Fig. 3B). We further elucidated this effect by fractionating the harvested cells into cytoplasmic and nuclear extracts. We found that c-Rel and p100, the precursor of p52, were not changed in the cytoplasmic fractions, whereas the nuclear fractions showed significant down-regulation of both c-Rel and the active p52 (Fig. 3C). Collectively, it was clear that Tax regulates IL-9 and IL-6 through the TAK1/c-Rel pathway. To uncover whether IL-9, commonly regulated by IRF4 and c-Rel, is controlled by separate or common pathways, we used HuT-shTAK1 and control HuT-shLuc cells and further transfected them with siRNA for either luciferase or IRF4. We noticed that the knockdown of both TAK1 and IRF4 induced an additional reduction of IL-9 expression, demonstrating that IRF4 and TAK1/c-Rel might regulate IL-9 independently (Fig. 3D). Given the important role of both IRF4 and c-Rel in regulating cytokine production, we checked the effect of knockdown of both c-Rel and IRF4 on cell proliferation. Our results showed a significant reduction in the proliferation of the double-knockdown cells (Fig. 3E).

IRF4 Knockdown Up-regulates Th1-related Genes-In addition to its significant role in regulating the two vital lineage-specific cytokines, IL-17 and IL-9, we previously reported that IRF4 counteracted TAK1-IRF3-mediated expression of interferon-inducible genes (9). Therefore, to understand the comprehensive roles of IRF4 in HuT-102 cells, we performed microarray analysis for genome-wide screening of IRF4-regulated genes. We used IRF3 as a positive counter control to confirm the expected effect of IRF4 on interferon-inducible genes. The microarray showed the up-regulation of a set of important genes related to T helper cell development, especially Th1 (Table 2).
Some of those genes are related directly to IFN-γ production, such as IRF1, IL18RAP, and Spp1, while others serve as main Th1 regulators, namely T-bet and STAT1 (26-29). We confirmed the microarray results for several important candidates in our study by real-time PCR (Table 2). On the other hand, the set of interferon-inducible genes was up-regulated by IRF4 knockdown and down-regulated by IRF3 knockdown (supplemental Fig. S2). To generalize our finding to other cell lines, we knocked down IRF4 in both the Tax-negative ED40515(−) and the Tax-positive MT-2 cell lines and found the same pattern of IFN-γ up-regulation as in HuT-102 cells (Fig. 4A). Moreover, we compared the expression of IFN-γ in HuT-102 cells transfected with siRNAs against IRF4, RORC, c-Rel, or luciferase control. The results demonstrated specific up-regulation of IFN-γ only after IRF4 knockdown (Fig. 4A). We also confirmed the effect of IRF4 knockdown in ED40515(−) cells by showing the up-regulation of both T-bet and IRF1 and the down-regulation of IL-9 (supplemental Fig. S3). Collectively, these results strongly confirm the selective role of IRF4 against IFN-γ even in the absence of high IL-17 production, as in the case of ED40515(−) or MT-2 cells. To assess the pivotal role of the Th1 cell-specific transcription factor T-bet in IFN-γ production, we showed the effect of T-bet siRNA on the specific down-regulation of IFN-γ (Fig. 4B). T-bet knockdown alone, as expected, caused the down-regulation of basal expression of IFN-γ (supplemental Fig. S4).

IRF1 Counteracts the Effect of IRF4 on IFN-γ-Consistent with the microarray results, Western blotting showed an up-regulation of transcription factors related to IFN-γ production, including IRF1, IRF9, STAT1, and STAT2 (Fig. 5A). Owing to the up-regulation of the set of genes that constitute the IFN-stimulated gene factor 3 complex (IRF9, STAT1, and STAT2), we confirmed the formation of the IFN-stimulated gene factor 3 complex using an immunoprecipitation technique for STAT1. Our results clearly demonstrated the co-precipitation of the STAT1-STAT2 heterodimer and their up-regulation by IRF4 knockdown (Fig. 5B). Eventually, we showed that the use of either STAT1 or IRF9 siRNA concomitantly with IRF4 siRNA did not repress the IFN-γ up-regulated by IRF4 knockdown (Fig. 5C). These results indicated that IFN-γ production is independent of STAT1 or STAT2. Kano et al. (26) reported that IRF1 contributes to the IFN-γ-IL-12 signaling axis and to Th1 versus Th17 differentiation of CD4+ T cells. The effect of IRF1 knockdown on the down-regulation of basal IFN-γ expression was confirmed, as shown in supplemental Fig. S4. We first confirmed that IRF4 knockdown induced IRF1 expression at both the mRNA and protein levels (Table 2 and Fig. 5D). IRF1 did counteract the effect of IRF4 on both IFN-γ and CXCL10, but not IL-17, indicating that IRF4 controls IL-17 and IFN-γ independently (Fig. 5E). In addition, IRF1 overexpression induced the expression of IFN-γ (Fig. 5F). We reported previously that interferon-inducible genes, including CXCL10, whose expression is maintained by the Tax-dependent constitutive activation of the TAK1-IRF3 pathway, are down-regulated by IRF4 in HuT-102 cells (9). To examine the role of IRF3 in IFN-γ production, we performed either single or double siRNA transfection for IRF4 and IRF3. The up-regulation of IFN-γ by IRF4 knockdown was not reversed by IRF3 knockdown, showing that the effect on IFN-γ is independent of the constitutive activation of IRF3 (supplemental Fig. S2).
DISCUSSION
In this study, we described HuT-102 as an IL-9-producing Th17 phenotype using microarray, RT-PCR, and Western blot analyses. Our evidence was based on the expression of IL-17, RORC, STAT3, and IRF4, in addition to the microarray, which highlighted the expression of CCR6, Ahr, BATF, IL-23R, and RORα (data not shown). On the other hand, the other cell lines used in our comparative study did not show a clear phenotypic character. Nevertheless, MT-2 cells presented a complicated model, expressing a variety of genes related to totally divergent CD4+ phenotypes. This might raise interest in the possibility of finding different CD4+ phenotypes in the future. The presentation of HuT-102 cells, in contrast to the other cell lines used, with a clear T helper cell phenotype was the cornerstone for selecting HuT-102 cells as the main subject of our study. Moreover, being a Tax-positive cell line makes the results more generalizable to fresh cells from ATL, which have the same morphological and biochemical phenotype as cells that express Tax (4). On the other hand, the main findings of the study were supported by experiments done on the ED40515(−) and/or MT-2 cell lines to strengthen our evidence (Fig. 4A and supplemental Fig. S3). Low expression of Th1-specific transcription factors and cytokines was also shown in HuT-102 cells, raising the question of the extent of plasticity of T helper cells. We confirmed the essential role of RORC in maintaining IL-17. Furthermore, we determined the role of IRF4 in controlling the lineage-specific cytokines IL-17 and IL-9, whereas c-Rel contributed to the regulation of IL-6 and IL-9. In Table 1, it appears that the expression patterns of RORC, T-bet, and Foxp3, in addition to IRF1, are related either positively or negatively to the expression of IL-17 and/or IFN-γ; yet, according to our findings, we assume that IRF4 regulates at least both T-bet and IRF1 and, subsequently, IFN-γ. On the other hand, the regulation of IL-17 was not only confirmed by our knockdown experiments for IRF4 but was also reported previously (13). Collectively, it is difficult to argue whether IRF4 directly controls cytokine expression or not. We would consider RORC a master regulator of IL-17, whereas IRF4 might act as a gear that fine-tunes the cytokine profile either directly or through several intermediates such as IRF1 and T-bet. In other words, IRF4 can act as a player within a multiplayer network controlling cytokine expression. The double knockdown of IRF4 and c-Rel and its significant effect in reducing cell proliferation were further elucidated. One possible explanation for this reduction in cell proliferation is the induction of apoptosis. Although c-Rel was shown to be required for transcriptional activation of IL-2 (31), it is unknown how c-Rel participates in the proliferation of HTLV-1-infected cells. We propose that c-Rel might have essential roles in controlling cell growth through regulating IL-6 and IL-9. On the other hand, another group recently reported the essential role of IRF4 in the development of Th9 cells (32). This report further enhances the impact of our findings regarding the possible role of IRF4 in the regulation of HTLV-1 pathogenicity. Altogether, IRF4 is now believed to be involved in the development of all currently known Th cell subsets through regulation of the hallmark cytokines IFN-γ, IL-17, IL-4, and, recently, IL-9. Tax protein was reported previously to induce the activation of the canonical and non-canonical NF-κB/Rel pathways.
In that study, Tax-induced constitutive activation of p65 was shown to subsequently activate the transcription of the c-Rel gene (33). Currently, we identify TAK1 as the upstream regulator of c-Rel; nevertheless, a role for TAK1 in p65 activation was not identified (8). Thus, we believe that the regulation of c-Rel and p52 by TAK1 involves a more complex mechanism that may point to a possible role for TAK1 in the non-canonical NF-κB pathway. On the other hand, we previously reported that TAK1 is involved in the regulation of IRF3 in addition to the p38 and JNK MAPKs (8,9). In this regard, we verified using siRNA transfection that IRF3 and the MAPKs were not involved in inducing either IL-9 or IL-6 (data not shown). In the current study, we reported for the first time in a Th17 phenotype that IRF4 knockdown causes up-regulation of Th1 transcription factors/cytokines. Most importantly, the strong up-regulation of IFN-γ production was demonstrated in HuT-102 cells and, similarly, in other HTLV-1 cell lines after IRF4 knockdown. One of the major limitations of knockdown experiments using siRNA is a possible off-target effect. To rule out this limitation, we conducted experiments concurrently using two different IRF4 siRNA sequences. Both sequences specifically down-regulated IL-17 and ultimately up-regulated IFN-γ (data not shown). On the other hand, we also confirmed our main knockdown finding by performing IRF1 overexpression experiments, which up-regulated IFN-γ. An interesting point for discussion is the extremely high up-regulation of IFN-γ in the ED40515(−) cell line, whereas the MT-2 cell line showed relatively low up-regulation compared with both the HuT-102 and ED40515(−) cell lines. One possible explanation for this is the high basal expression level of IFN-γ in MT-2 cells. It is worth mentioning here that a model of HTLV-1 Tax-transgenic mice deficient in IFN-γ showed enhanced tumorigenesis, highlighting the functionally important role of IFN-γ as a possible avenue against HTLV-1 pathogenesis (34). We performed our study to show that the inhibition of IFN-γ by IRF4 is dependent on IRF1 but not on the IFN-stimulated gene factor 3 complex or IRF3. Accordingly, we analyzed the IFN-inducible gene CXCL10 to confirm the role of IRF1 in the IRF4-IFN-γ axis. This mechanistic finding can serve as an additional tuning factor for IFN-γ. In conclusion, we have shown that the modulatory effect of IRF4 knockdown in HuT-102 appears to be a "death by a thousand cuts" because numerous IRF4 target genes play crucial roles in the modulation of HuT-102 cells. The doors opened by the microarray in this study are worth further future investigation. The effect of IRF4 knockdown on the up-regulation of Th1 transcription factors and cytokines included a wide range of regulators, such as T-bet, IRF1, STAT1, PHF11, and others. From another point of view, an interesting set of chemokines, namely CXCL9, CXCL10, and CXCL11, which are also related to Th1 cell accumulation, were up-regulated in response to IRF4 knockdown (35,36). On the other hand, the regulatory effect of IRF4 on IL-9 expression was confirmed by the concomitant expression of muc5ac, which was previously reported to be directly stimulated by IL-9 (37). Overall, the induction of a Th1 response in the absence of IRF4 could negatively regulate both Th17 and Th9. The response of HTLV-1 to a combined therapy of IFN-α/AZT has been a popular topic in several studies (12, 38-40).
According to our findings, it seems that molecular targeting of IRF4 and c-Rel, together with antiviral therapy, could serve as a powerful treatment tool for AZT-resistant patients. It is worth mentioning that the phenotypes of IRF4-deficient mice are strictly limited to the immune system, including defects in the differentiation of plasma cells and certain dendritic cell subsets, as well as in lymphocyte activation. Notably, mice lacking one allele of IRF4 are phenotypically normal (41), yet an average 50% knockdown of IRF4 mRNA and protein was sufficient to kill myeloma cell lines (30). Thus, a therapeutic window could exist in which IRF4-directed therapy would kill IRF4-addicted malignant cells while sparing normal cells.
5,359.2
2011-04-15T00:00:00.000
[ "Biology", "Medicine" ]
Unified (p,q)-analog of Apostol Type Polynomials of Order α

In this work, we introduce a new generating function for the (p,q)-analog of Apostol type polynomials of order α, including the Apostol-Bernoulli, Apostol-Euler and Apostol-Genocchi polynomials of order α. By making use of their generating function, we derive some useful identities. We also introduce the (p,q)-analog of the Stirling numbers of the second kind of order v, by which we construct a relation involving the aforementioned polynomials.

Introduction
Throughout the paper we make use of the following notations: N := {1, 2, 3, ...} and N₀ := N ∪ {0}. Here, as usual, Z denotes the set of integers, R denotes the set of real numbers and C denotes the set of complex numbers.

The (p,q)-number is defined by [n]_{p,q} = (p^n − q^n)/(p − q), p ≠ q. Obviously, when p = 1, we have [n]_q = (1 − q^n)/(1 − q), which stands for the q-number. One can see that the (p,q)-number is closely related to the q-number through the relation [n]_{p,q} = p^{n−1} [n]_{q/p}; a numerical check of this relation is sketched at the end of this section. By appropriately using this obvious relation between the q-notation and its variant, the (p,q)-notation, most (if not all) of the (p,q)-results can be derived from the corresponding known q-results by merely changing the parameters and variables involved.

In the next section, we define the family of unified (p,q)-analogs of the Apostol-Bernoulli, Apostol-Euler and Apostol-Genocchi polynomials of order α and investigate some of their properties. Moreover, we consider the (p,q)-analog of a new generalization of the Stirling numbers of the second kind of order v, by which we derive a relation involving the unified (p,q)-analog of Apostol type polynomials of order α.

Unified (p,q)-Analog of Apostol Type Polynomials of Order α
Inspired by the generating function f_{a,b}(x; t; k, β) of [25], in this paper we consider the following Definition 2.1 based on (p,q)-numbers.

Definition 2.1. The unified (p,q)-analog of the Apostol-Bernoulli, Apostol-Euler and Apostol-Genocchi polynomials of order α is defined by means of the generating function

(2^{1−k} z^k / (β^b e_{p,q}(z) − a^b))^α e_{p,q}(xz) E_{p,q}(yz) = Σ_{n=0}^{∞} P^{(α)}_{n,β}(x, y; k, a, b : p, q) z^n/[n]_{p,q}!.

We note that P_{n,β}(x, y; k, a, b : p, q) := P^{(1)}_{n,β}(x, y; k, a, b : p, q); these are called the unified (p,q)-analog of Apostol type polynomials.

We now give some basic properties of P^{(α)}_{n,β}(x, y; k, a, b : p, q) in the following four Lemmas 2.3-2.6 without proofs, since they can be proved by using Definition 2.1. Lemmas 2.3 and 2.4 state relations satisfied by P^{(α)}_{n,β}(x, y; k, a, b : p, q); it immediately follows from Lemma 2.4 that the special values P^{(α)}_{n+k,β}(0, y; k, a, b : p, q) and P^{(α)}_{n+k,β}(x, −1; k, a, b : p, q) arise. From Lemma 2.3 and Lemma 2.5, we obtain the following Theorem 2.7.

Theorem 2.7. We have a summation formula expressed in terms of P^{(α)}_{n+k,β}(0, y; k, a, b : p, q). (6)

Corollary 2.8. Setting α = 1 in Eq. (6) gives the corresponding relation (7).

Here is a recurrence relation for the unified (p,q)-analog of Apostol type polynomials.

Theorem 2.9. The following relationship holds true for P_{n,β}(x, y; k, a, b : p, q):

a^b P_{n,β}(x, y; k, a, b : p, q) = β^b Σ_{j=0}^{n} [n choose j]_{p,q} q^{C(n−j,2)} P_{j,β}(x, y; k, a, b : p, q) − ([n]_{p,q}!/[n−k]_{p,q}!) 2^{1−k} [x + y]^{n−k}_{p,q}.

Proof. Since

a^b · (2^{1−k} z^k e_{p,q}(xz) E_{p,q}(yz))/(β^b e_{p,q}(z) − a^b) = β^b e_{p,q}(z) · (2^{1−k} z^k e_{p,q}(xz) E_{p,q}(yz))/(β^b e_{p,q}(z) − a^b) − 2^{1−k} z^k e_{p,q}(xz) E_{p,q}(yz),

we derive that

a^b Σ_{n≥0} P_{n,β}(x, y; k, a, b : p, q) z^n/[n]_{p,q}! = β^b e_{p,q}(z) Σ_{n≥0} P_{n,β}(x, y; k, a, b : p, q) z^n/[n]_{p,q}! − 2^{1−k} Σ_{n≥0} [x + y]^n_{p,q} z^{n+k}/[n]_{p,q}!.

Using the Cauchy product and then equating the coefficients of z^n/[n]_{p,q}! completes the proof.
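The relation [n]_{p,q} = p^{n−1} [n]_{q/p} invoked in the Introduction can be verified exactly with rational arithmetic. A minimal sketch (the chosen values of p and q are illustrative):

```python
from fractions import Fraction

def pq_number(n: int, p: Fraction, q: Fraction) -> Fraction:
    """[n]_{p,q} = (p^n - q^n)/(p - q), assuming p != q."""
    return (p**n - q**n) / (p - q)

def q_number(n: int, q: Fraction) -> Fraction:
    """[n]_q = (1 - q^n)/(1 - q), assuming q != 1."""
    return (1 - q**n) / (1 - q)

p, q = Fraction(3, 2), Fraction(1, 2)
for n in range(1, 6):
    lhs = pq_number(n, p, q)
    rhs = p**(n - 1) * q_number(n, q / p)   # [n]_{p,q} = p^{n-1} [n]_{q/p}
    assert lhs == rhs                        # exact equality with Fractions
    print(n, lhs)
```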
We now provide the following explicit formula for the unified (p,q)-analog of Apostol type polynomials of order α.

Theorem 2.10. P^{(α)}_{n,β}(x, y; k, a, b : p, q) satisfies the relation

P^{(α)}_{n,β}(x, y; k, a, b : p, q) = Σ_{j=0}^{n} [n choose j]_{p,q} …

Proof. The proof is derived from Eq. (4) and Theorem 2.9, so we omit it.

The (p,q)-integral representations of P^{(α)}_{n,β}(x, y; k, a, b : p, q) are given in the following theorem.

Theorem 2.11. ∫ P^{(α)}_{n,β}(x, y; k, a, b : p, q) d_{p,q}y = …

Proof. Using Lemma 2.5 and Eq. (3), the result can be easily proved, so we omit it.

The following theorem involves a recurrence relationship for the unified (p,q)-analog of Apostol type polynomials of order α.

Theorem 2.12. …

Proof. The proof can be carried out following the technique of Mahmudov in [16].

We are now in a position to state some recurrence relationships for the unified (p,q)-analog of Apostol type polynomials.

Theorem 2.13. The following recurrence relation holds true for n, k ∈ N₀ and x, y ∈ R:

P_{n+1,β}(x, y; k, a, b : p, q) = y q^k p^{n−k} P_{n,β}((q/p)x, (q/p)y; k, a, b : p, q) + p^{n+1−k} ([k]_{p,q}/[n+1]_{p,q}) P_{n+1,β}(x, y; k, a, b : p, q) + x q^k p^{n−k} P_{n,β}(x, y; k, a, b : p, q) − … Σ_j [n+k choose j]_{p,q} P_{j,β}(x, y; k, a, b : p, q) q^j p^{n−j} P_{n+k−j,β}(1, 0; k, a, b : p, q).

Proof. Following the method of Kurt [9], for α = 1 in Definition 2.1, applying the (p,q)-derivative operator with respect to z to the generating function of P_{n,β}(x, y; k, a, b : p, q) yields the desired result.

We now give the following Theorem 2.14.

Theorem 2.14. For n ∈ N₀ and x, y ∈ R, the following formulas are valid: P^{(α)}_{n,β}(x, y; k, a, b : p, q) = …

Proof. The proof follows the method of Mahmudov [16], so we omit it.

Combining Theorem 2.12 with Theorem 2.14 gives the following theorem.

Let us define the (p,q)-analog of the Stirling numbers of the second kind of order v as follows.

Definition 2.17. The (p,q)-analog of the Stirling numbers of the second kind of order v, S_{p,q}(n, v; a, b, β), is defined by means of the following generating function: …

A correlation between the family of unified polynomials P^{(α)}_{n,β}(x, y; k, a, b : p, q) and the generalized (p,q)-Stirling numbers S_{p,q}(n, v; a, b, β) of the second kind of order v is presented in the following Theorem 2.18.

Theorem 2.18. The relation … = Σ_j P^{(α)}_{j,β}(x, y; k, a, b : p, q) S_{p,q}(n − j, v; a, b, β) is true.

Proof. It follows from Definition 2.17.

In the case when α = 0 in Theorem 2.18, we have the following corollary.

Corollary 2.19. … = Σ_j P_{j,β}(x, y; k, a, b : p, q) S_{p,q}(n − j, v; a, b, β).

Conclusion
In this paper, we have introduced the unified (p,q)-analog of Apostol type polynomials of order α. We have also analyzed some of their properties, including an addition property, derivative properties, recurrence relationships, integral representations, and so on. By defining the generalized (p,q)-Stirling numbers of the second kind of order v, a correlation between these numbers and the unified (p,q)-analog of Apostol type polynomials of order α is obtained. We note that the results obtained here reduce to known results for the unified q-polynomials when p = 1. Also, when q → p = 1, our results reduce to those for the unified Apostol-Bernoulli, Apostol-Euler and Apostol-Genocchi polynomials.
1,961.4
2018-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Bound-state energy spectrum and thermochemical functions of the deformed Schiöberg oscillator

In this study, a diatomic molecule interaction potential, the deformed Schiöberg oscillator (DSO), is applied to diatomic systems. By solving the Schrödinger equation with the DSO, analytical equations for the energy eigenvalues, molar entropy, molar enthalpy, molar Gibbs free energy and constant-pressure molar heat capacity are obtained. The obtained equations were used to analyze the physical properties of diatomic molecules. With the aid of the DSO, the percentage average absolute deviations (PAAD) of the computed data from the experimental data of the 7Li2 (2 3Πg), NaBr (X 1Σ+), KBr (X 1Σ+) and KRb (B 1Π) molecules are 1.3319%, 0.2108%, 0.2359% and 0.8841%, respectively. The PAAD values obtained by employing the equations of molar entropy, scaled molar enthalpy, scaled molar Gibbs free energy and isobaric molar heat capacity are 1.2919%, 1.5639%, 1.5957% and 2.4041%, respectively, from the experimental data of the KBr (X 1Σ+) molecule. The results for the potential energies, bound-state energy spectra, and thermodynamic functions are in good agreement with the literature on diatomic molecules.

The canonical partition function is a prelude to obtaining statistical-mechanical models (or analytical equations) for the calculation of thermochemical properties of gaseous molecules. The partition function takes into account the vibrational, rotational and translational effects of the diatomic system. Analytical equations for the prediction of the molar entropy (S), enthalpy (H), Gibbs free energy (G), and isobaric heat capacity (Cp) exist in the literature. Different potential functions have been employed to construct such analytical model equations 6,[59][60][61][62][63][64][65][66]. The present study is centered on the Schiöberg potential energy function. Previously, the bound-state solutions of the Schrödinger and Dirac equations have been obtained with the Schiöberg potential 67,68. Using the Varshni conditions 6, Wang and coworkers demonstrated the equivalence of the Manning-Rosen, Deng-Fan and Schiöberg potentials 64. The Schiöberg oscillator incorporates three independent input parameters, viz. De, re and ωe. In the quest to model an efficient version of the Schiöberg oscillator, the authors in Ref.
69 employed the transformation r → r − r₀ and the Varshni conditions 6 to construct the reparameterized Schiöberg potential. The reparameterized Schiöberg oscillator is expected to encapsulate four independent input parameters, De, re, ωe and αe; nevertheless, the explicit form of the parameter r₀ was not deduced. Many diatomic molecule oscillators have been used by physicists and chemists to predict the thermochemical properties of gaseous molecules 6,55,59,[61][62][63][64][65]. However, for this purpose, the deformed Schiöberg potential has not been considered in the literature. It must be emphasized that q-deforming a potential energy function and subsequently subjecting it to the Varshni conditions for a diatomic molecule potential yields a model equivalent to the reparameterized version 35. For this reason, the primary objectives of the present study are to obtain the energy spectra and thermochemical functions of the deformed Schiöberg oscillator. The remaining parts of the paper are organized as follows: In section "Construction of the DSO", the deformed Schiöberg oscillator is constructed. In section "Equation for the energy spectra of the DSO", an explicit equation for the energy spectra is derived. Thermochemical functions are obtained in section "Thermochemical functions of the DSO". The results of numerical calculations are presented in section "Results and discussion". A brief conclusion of the work is given in section "Conclusions".

Construction of the DSO
In this section, the deformed Schiöberg oscillator (DSO) is constructed by employing the Varshni conditions 6. The suggested model potential is given by

U(r) = U₀ [1 − σ coth_q(αr)]², (1)

where coth_q(αr) = cosh_q(αr)/sinh_q(αr), cosh_q(αr) = ½(e^{αr} + q e^{−αr}), sinh_q(αr) = ½(e^{αr} − q e^{−αr}), r is the interparticle separation, U₀ is the depth of the potential well, and q, α and σ are potential parameters. Evidently, Eq. (1) is the q-deformed version of the oscillator given in Ref. 64. The main difference between Eq. (1) and expression (1) of Ref. 70 lies in the functional forms of the two models.

Equation (1) is a diatomic molecule oscillator if it satisfies the Varshni conditions 6

U(∞) − U(re) = De, U′(re) = 0, U″(re) = 4π²μc²ωe², (2)

where the prime in (2) denotes the derivative with respect to r, the speed of light is designated c, and μ is the reduced mass of the molecule. Inserting Eq. (1) into each of the expressions in (2) gives the parameters U₀ and σ in terms of De and ωe, with γ = αre. The next step is to determine the potential screening parameter, α. The αe-ωe relationship given in publication 64 can be used, viz. Eq. (4), where Be = ħ/(4πcM₀re²) and ħ = h/2π, h being the Planck constant. U″(re) and U‴(re) are obtained from Eq. (1) as in Eq. (5). Putting Eq. (5) into (4) and simplifying leads to Eq. (6).

Equation for the energy spectra of the DSO
In this section, an analytical equation for the energy spectra is derived by solving the radial SE confined by the DSO. Different analytical methods for solving the SE exist in the literature [22][23][24][25][26][27][28]. However, owing to the simplicity of the parametric Nikiforov-Uvarov (PNU) technique 27, it is considered in this work.
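Before turning to the PNU solution, the shape of the DSO can be sketched numerically from the deformed hyperbolic functions defined above. A minimal sketch; Eq. (1) is used in the closed form reconstructed above, and the parameter values are illustrative only, not the molecular constants of Table 1:

```python
import numpy as np

def cosh_q(x, q):
    return 0.5 * (np.exp(x) + q * np.exp(-x))

def sinh_q(x, q):
    return 0.5 * (np.exp(x) - q * np.exp(-x))

def coth_q(x, q):
    return cosh_q(x, q) / sinh_q(x, q)

def dso(r, U0, sigma, alpha, q):
    """Deformed Schioberg oscillator, Eq. (1): U(r) = U0*(1 - sigma*coth_q(alpha*r))^2."""
    return U0 * (1.0 - sigma * coth_q(alpha * r, q)) ** 2

r = np.linspace(0.5, 8.0, 400)          # interparticle separation (arbitrary units)
U = dso(r, U0=1.0, sigma=0.5, alpha=0.8, q=1.0)
print(f"well minimum: U = {U.min():.4f} at r = {r[U.argmin()]:.3f}")
```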
A brief outline of the PNU method
The PNU method asserts that, with the aid of a suitable coordinate transformation, a second-order differential equation of the hypergeometric type can be expressed in the standard form of Ref. 27, where αj (j = 0, 1, 2) are constant coefficients, n = 0, 1, 2, … is the vibrational (or principal) quantum number and ℓ = 0, 1, 2, … is the rotational (or orbital momentum) quantum number. The quantization condition leading to the energy spectra is written as in Ref. 27.

Analytical equation for the energy levels of the DSO by the PNU method
The radial SE for a particle of mass M₀ moving in a radial potential field U(r) is given by

d²u_{nℓ}(r)/dr² + (2M₀/ħ²)[E_{nℓ} − U(r) − ħ²J/(2M₀r²)] u_{nℓ}(r) = 0, (10)

where J = ℓ(ℓ + 1) is the angular momentum term of the system, u_{nℓ}(r) is the radial wave function and E_{nℓ} is the bound-state energy eigenvalue. Owing to the presence of the factor r⁻² in the centrifugal term, expression (10) has no exact solution with the potential (1), except for the special case ℓ = 0 (the pure vibrational state).

The maximum vibrational quantum number is deduced from the condition E′n(n_max) = 0; substituting (18) into this condition gives n_max, which is essentially a positive integer: the value of n at which the energy of the system is a maximum.

Thermochemical functions of the DSO
Having obtained the equation for the vibrational state energies, in this section some important analytical models for the prediction of thermochemical properties of substances are developed for the DSO. The canonical partition function from which the thermodynamic expressions are deduced is derived first. The canonical partition function is written as Z(T) = Z_vib Z_rot Z_tra, where T is the temperature of the system and Z_vib, Z_rot and Z_tra are the vibrational, rotational and translational partition functions, respectively 44,55. The vibrational partition function depends on the oscillator used to model the diatomic system; it is given as 34

Z_vib = Σ_{n=0}^{n_max} exp(−βE_n), (20)

where β = 1/(k_B T) and k_B is the Boltzmann constant. Putting Eq. (18) into (20) gives Eq. (21). The series in (21) can be evaluated with the help of the modified Poisson summation formula 72. The modified Poisson summation approach is used here because it is simple to implement and has yielded very accurate results with many oscillator models, such as those in Refs. 35,38,73,74. Other methods for evaluating the vibrational partition function, including the phase-space sampling method and the Euler-Maclaurin summation approach, are given in Refs. 75,76. Based on the modified Poisson summation formula, one can write Eq. (23) 72. Substituting the second expression in (22) into the right-hand side of (23) and expanding out the summation gives Eq. (24).
The last two terms in (24) are quantum correction terms. For the moderate to high temperature range of diatomic systems, the quantum correction terms are small and can be ignored. Therefore, expression (24) is recast as Eq. (25). Using the substitution z = ς{y + δ − κ/(y + δ)}, followed by the mapping x = (z² + 2ςκ²)^{1/2}, to evaluate the integral, the summation in (25) is obtained as Eq. (26). Thus, inserting (26) into (21), the vibrational partition function is obtained in the compact form of Eq. (27), with the abbreviations collected in (28).

Based on the formalism of the rigid-rotor approximation for diatomic molecules, the rotational and translational components of the partition function are expressed as 36,40,46,50

Z_rot = T/(υΘ_rot), (29)

Z_tra = (2πm k_B T/h²)^{3/2} V, (30)

where V satisfies pV = RT, m is the mass of the gas molecules enclosed in the volume V, the gas pressure is denoted by p, R is the molar gas constant, and Θ_rot = ħ²/(2μr_e²k_B) is the characteristic rotational temperature of the gas. The symmetry parameter υ takes the value 2 if the gas is homonuclear and 1 for heteronuclear gas molecules. Using the expression for the partition function, explicit equations for the molar entropy, enthalpy, Gibbs free energy and constant-pressure heat capacity are developed for the DSO as follows.

Molar entropy equation for the deformed Schiöberg oscillator

The molar entropy (J mol⁻¹ K⁻¹) of the system can be evaluated from the relation 53

S = R ln Z + RT ∂(ln Z)/∂T. (31)

Substituting Z(T) = Z_vib Z_rot Z_tra into (31) and using Eqs. (27), (29) and (30) in the result, one obtains Eq. (32), where for compactness the abbreviation Z′_vib defined in Eq. (33) is used.

Molar enthalpy model of the DSO

The molar enthalpy (J mol⁻¹) of the DSO can be deduced from the expression 54

H = RT² ∂(ln Z)/∂T + RT. (34)

The substitution Z(T) = Z_vib Z_rot Z_tra, together with Eqs. (27), (29) and (30), in (34) yields Eq. (35). Equation (35) can be used to compute molar enthalpy data for diatomic substances. However, to enable the results obtained in this study to be compared with the available literature, scaled values of (35) are needed. The scaled molar enthalpy is written as 44,45

H_scaled = H − h_298.15, (36)

where h_298.15 is Eq. (35) evaluated at a temperature of 298.15 K and a pressure of 0.1 MPa; it denotes the molar enthalpy of the molecule under those reference conditions.

Molar Gibbs free energy of the DSO

Here, the analytical equation for the molar Gibbs free energy is derived for the DSO. The Gibbs free energy is given by

G = H − TS. (37)

Replacing (34) and (31) in (37) gives

G = RT(1 − ln Z_vib − ln Z_rot − ln Z_tra). (38)

For the purpose of relating to observed data, the scaled Gibbs free energy is defined as 44,45

G_scaled = −(G − H_298.15)/T. (39)

Isobaric molar specific heat capacity model of the DSO

The constant-pressure (isobaric) molar heat capacity (in J mol⁻¹ K⁻¹) is evaluated from C_p = (∂H/∂T)_p. Substituting expression (35) into this relation gives Eq. (40), where Z′_vib and Z″_vib are given by Eqs. (33) and (41), respectively.

Results and discussion

In this section, the equations derived for the energy levels and thermochemical functions are applied to diatomic substances, namely the ⁷Li₂(2³Π_g), NaBr (X¹Σ⁺), KBr (X¹Σ⁺) and KRb (B¹Π) molecules. The model parameters for these molecules are given in Table 1. The experimental values of D_e, r_e, ω_e and α_e were obtained from Refs. 77-79. The values of the potential parameters, also listed in Table 1, were computed with Eqs. (3) and (6).
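Since the closed-form spectrum, Eq. (18), is not reproduced here, an independent numerical cross-check is useful: given parameters of the kind listed in Table 1, the pure vibrational levels can be obtained by diagonalizing a finite-difference discretization of the radial SE (10) at ℓ = 0. The sketch below is such a check under stated assumptions, using scaled units with ħ²/(2M_0) = 1 and illustrative (not molecule-specific) potential parameters.

```python
import numpy as np

def coth_q(x, q=1.0):
    # q-deformed hyperbolic cotangent
    return (np.exp(x) + q * np.exp(-x)) / (np.exp(x) - q * np.exp(-x))

U0, sigma, alpha, q = 500.0, 0.6, 1.2, 1.1   # illustrative, scaled units

def U(r):
    # DSO potential, Eq. (1); coth_q is singular at r = ln(q)/(2*alpha) ~ 0.04,
    # so the grid below starts just above that point.
    return U0 * (1.0 - sigma * coth_q(alpha * r, q)) ** 2

# Finite-difference Hamiltonian for -u'' + U(r) u = E u (hbar^2/(2 M0) = 1),
# with Dirichlet boundaries u(r_min) = u(r_max) = 0.
n = 1500
r = np.linspace(0.05, 12.0, n)
dr = r[1] - r[0]
H = (np.diag(2.0 / dr**2 + U(r))
     + np.diag(-np.ones(n - 1) / dr**2, k=1)
     + np.diag(-np.ones(n - 1) / dr**2, k=-1))

E = np.linalg.eigvalsh(H)
E_bound = E[E < U0 * (1 - sigma) ** 2]   # keep states below U(r -> inf)
print(E_bound[:5])   # lowest E_n, for comparison with the analytical Eq. (18)
```

Because the grid, boundaries and parameters are assumptions, the output is only a sanity check: the low-lying eigenvalues should track any correct closed-form E_n for the same inputs.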
To numerically affirm the accuracy of the model equations, the percentage average absolute deviation (PAAD) of the predicted results from the observed data for a molecule is employed as the accuracy indicator. The PAAD values are interpreted according to the Lippincott criterion for the applicability of a model equation. The Lippincott criterion requires that the PAAD of the predicted data from the observed data be at most 1% of the experimental results. The smaller the PAAD value, the better the model equation. The PAAD is written in compact form as 80

PAAD = (100/N_p) Σ_{i=1}^{N_p} |(X_i − Y_i)/Z_i|, (42)

where N_p is the number of observed data points and X, Y and Z are chosen in relation to the predicted and observed data.

Numerical results for potential energies

Utilizing the spectroscopic parameters in Table 1, Eq. (1) is used to generate numerical results for the potential energy U over the range of interparticle separations r_min ≤ r ≤ r_max. The results obtained are given in Tables 2, 3, 4 and 5. Available experimental Rydberg-Klein-Rees (RKR) data 77,79 and multireference configuration interaction (MRCI) data 78 for the molecules are also included in the tables, to allow comparison of the predicted potential energies with the observed data. The variation in the potential energy of the molecules as a function of interparticle separation is shown in Figs. 1, 2, 3 and 4, where the experimental RKR data are also plotted. The figures show that the computed potential energies agree with the experimental data for the molecules. The accuracy with which the DSO models the experimental RKR data can be quantified by letting X = RKR, Y = U and Z = D_e in Eq. (42).

Table 1. Model parameters of the diatomic molecules investigated in this study.

Applicability of the Pekeris approximation scheme to diatomic systems

To ascertain the significance of the Pekeris-type approximation model (11) suggested for the centrifugal barrier of the SE, the function F_1 = r⁻² is plotted as a function of the interparticle separation. On the same axes, the approximation function F_2 = d_1 + d_2 coth_q(αr) + d_3 coth_q²(αr) is also plotted. The plots of F_1 and F_2 for the diatomic molecules are shown in Figs. 5, 6, 7 and 8. It is evident from the figures that, over the range of interparticle separations chosen, the Pekeris approximation F_2 is a good representation of the function F_1. The implication is that, for the parameters of the diatomic molecules considered in this study, the approximation F_2 can be used in place of F_1 to solve the SE (10) analytically.

Numerical results for pure vibrational state energies

With the aid of Eq. (18), pure vibrational state energies are generated for the selected diatomic molecules. The computed results are summarized in Tables 2, 3, 4 and 5. To quantitatively compare the obtained bound-state energies with the experimental RKR results for the molecules, the parameters in Eq. (42) are adjusted so that X = Z = RKR and Y = E_n. The PAAD values obtained are 1.0956%, 0.2935%, 3.8667% and 1.4629% for the ⁷Li₂(2³Π_g), NaBr (X¹Σ⁺), KBr (X¹Σ⁺) and KRb (B¹Π) molecules, respectively. Therefore, based on the Lippincott requirement for the applicability of a model equation, the present formula for the pure vibrational state energies satisfactorily predicts the experimental data for the NaBr (X¹Σ⁺) molecule and marginally models the results for the ⁷Li₂(2³Π_g) and KRb (B¹Π) molecules. The PAAD value obtained for the KBr (X¹Σ⁺) molecule is relatively high (≈ 4% of the observed data), suggesting that the present energy-levels equation cannot satisfactorily predict the observed data for that molecule.
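The PAAD computation itself is a one-liner. The sketch below restates Eq. (42) as reconstructed above; the RKR and calculated energies are hypothetical stand-ins, since the tables are not reproduced here.

```python
import numpy as np

def paad(X, Y, Z):
    # Percentage average absolute deviation, Eq. (42) as reconstructed:
    # PAAD = (100 / Np) * sum_i |(X_i - Y_i) / Z_i|.
    # Z may be per-point (e.g. Z = X = RKR for the energy levels) or a
    # scalar (e.g. Z = De for the potential-energy comparison).
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    Z = np.broadcast_to(np.asarray(Z, float), X.shape)
    return 100.0 * np.mean(np.abs((X - Y) / Z))

# Hypothetical observed (RKR) and predicted vibrational energies, in cm^-1:
rkr = np.array([210.0, 620.0, 1015.0, 1390.0])
e_n = np.array([212.1, 617.5, 1020.3, 1383.9])
print(paad(rkr, e_n, rkr))      # X = Z = RKR, Y = E_n (Lippincott: <= 1%)
print(paad(rkr, e_n, 5500.0))   # potential-style scaling with Z = De
```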
Investigation of thermochemical properties of diatomic substances

In this section, the thermodynamic functions developed for the DSO are used to analyze the thermochemical properties of pure substances. To substantiate the accuracy of the model equations, numerical data are obtained analytically and the results are compared with the literature on gaseous substances. The experimental results were retrieved from the National Institute of Standards and Technology (NIST) database 81. NIST data are available for the gaseous NaBr and KBr molecules only; our discussion is therefore restricted to these two molecules. PAAD values computed in the temperature range 300-6000 K are used to gauge the accuracy of the model equations. Tables 6 and 7 summarize the data obtained using Eqs. (32), (36), (39) and (40). The NIST data for the molecules are also listed in the tables under the columns S_NIST, H_NIST, G_NIST and C_pNIST. Plots of the thermochemical functions versus temperature are presented in Figs. 9, 10, 11 and 12, together with the corresponding NIST data. Because the figures for the NaBr and KBr molecules are similar, only the plots for the NaBr molecule are presented.

Figure 9 shows the molar entropy plotted against temperature. The figure shows that the predicted molar entropy agrees with the experimental results. To appraise the quality of the molar entropy model, the parameters X, Y, Z in Eq. (42) are chosen such that X = Z = S_NIST and Y = S. The PAAD values deduced are 0.5401% and 1.2919% for the NaBr and KBr molecules, respectively. These values show that the molar entropy equation proposed for the DSO satisfactorily predicts the NIST data for the gaseous NaBr molecule and only marginally exceeds the Lippincott limit for KBr.

In Fig. 10, the scaled molar enthalpy is plotted as a function of temperature. The agreement between the observed and predicted data is evident in the figure. An estimate of the accuracy of the molar enthalpy model can be obtained by letting X = Z = H_NIST and Y = H_scaled in Eq. (42). Using the data in Tables 6 and 7, the computed PAAD values are 1.9428% and 1.5639% for the NaBr and KBr molecules, respectively. The PAAD values reveal that the DSO model for the scaled molar enthalpy can only marginally predict the experimental results for the gaseous molecules. It is also noted from the tables that as the molecules are excited from the moderate to the high temperature region, the discrepancy between the predicted and observed data increases. The increased difference can be linked to the lowest-order approximation used to obtain expression (36): the absence of the quantum correction terms in the molar enthalpy equation is responsible for PAAD values exceeding 1%.

The variation in the molar Gibbs free energy with temperature is shown in Fig. 11. The figure shows that the results obtained by analytical computation are in good agreement with the data reported in the NIST database for the gaseous substances. With the help of the data in Tables 6 and 7, and setting X = Z = G_NIST and Y = G_scaled in (42), the PAAD values obtained are 0.8164% and 1.5957% for the ground-state NaBr and KBr molecules, respectively. Based on the Lippincott condition, it can be inferred that the molar Gibbs free energy model for the DSO satisfactorily predicts the Gibbs free energy of the selected diatomic molecules.
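Before turning to the heat capacity, the four thermochemical functions can be cross-checked numerically from any vibrational ladder by direct summation of Eq. (20) and numerical differentiation of ln Z, following the reconstructed relations (29)-(31), (34) and (40). The sketch below is a numerical stand-in, not the paper's closed-form Eqs. (32)-(41); the anharmonic ladder and the NaBr-like constants (θ_rot, molecular mass) are rough, assumed values.

```python
import numpy as np

R = 8.314462618                     # molar gas constant, J mol^-1 K^-1
kB, h = 1.380649e-23, 6.62607015e-34

def ln_Z(T, E_n, theta_rot, m_kg, upsilon=1, p=1.0e5):
    # ln Z = ln Z_vib + ln Z_rot + ln Z_tra at temperature T.
    z_vib = np.sum(np.exp(-E_n / (kB * T)))                # Eq. (20)
    z_rot = T / (upsilon * theta_rot)                      # Eq. (29)
    V = R * T / p                                          # molar volume, pV = RT
    z_tra = (2 * np.pi * m_kg * kB * T / h**2) ** 1.5 * V  # Eq. (30)
    return np.log(z_vib) + np.log(z_rot) + np.log(z_tra)

def thermo(T, E_n, theta_rot, m_kg, dT=1e-2):
    # S, H, Cp via numerical derivatives of ln Z (Eqs. 31, 34, 40).
    f = lambda t: ln_Z(t, E_n, theta_rot, m_kg)
    def enthalpy(t):
        dlnZ = (f(t + dT) - f(t - dT)) / (2 * dT)
        return R * t**2 * dlnZ + R * t                     # Eq. (34)
    dlnZ = (f(T + dT) - f(T - dT)) / (2 * dT)
    S = R * f(T) + R * T * dlnZ                            # Eq. (31)
    Cp = (enthalpy(T + dT) - enthalpy(T - dT)) / (2 * dT)  # Eq. (40)
    return S, enthalpy(T), Cp

# Hypothetical anharmonic ladder standing in for Eq. (18) (cm^-1 -> J),
# truncated at an illustrative n_max; constants are rough NaBr-like values.
n = np.arange(0, 60)
E_n = 1.9864e-23 * (280.0 * (n + 0.5) - 0.75 * (n + 0.5) ** 2)
print(thermo(1000.0, E_n, theta_rot=0.15, m_kg=1.71e-25))
```

Agreement of such a brute-force sum with the Poisson-summation closed forms over 300-6000 K is a useful internal consistency test of Eqs. (27)-(41).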
In Fig. 12, the constant-pressure molar heat capacity is plotted against the temperature of the molecules. The figure shows that in the low temperature range the predicted isobaric molar heat capacity agrees with the experimental data for the molecules. However, in the moderate to high temperature domain, the predicted heat capacity values are smaller than, and deviate significantly from, the observed data. The relatively large deviation can again be attributed to the quantum correction terms that are absent from Eq. (40). Taking X = Z = C_pNIST and Y = C_p, the PAAD values deduced are 2.9770% and 2.4041% for the ground-state NaBr and KBr molecules, respectively. The results suggest that the isobaric molar heat capacity model cannot accurately predict the experimental results for the NaBr and KBr molecules over the full temperature range. Nevertheless, the results in the tables suggest that the model can be used to obtain the molar heat capacity of the molecules within the low temperature range.

Conclusions

In this work, the necessary conditions for a diatomic molecule oscillator were used to construct an improved version of the deformed Schiöberg oscillator (DSO). By employing the parametric Nikiforov-Uvarov solution recipe to solve the radial SE with the DSO, analytical expressions for the energy spectra and the canonical partition function were obtained. Using the obtained partition function, thermodynamic properties such as the molar entropy, enthalpy, Gibbs free energy and isobaric heat capacity were developed for the DSO. The obtained equations were used to analyze the physical properties of diatomic substances, namely the ⁷Li₂(2³Π_g), NaBr (X¹Σ⁺), KBr (X¹Σ⁺) and KRb (B¹Π) molecules. The percentage average absolute deviation (PAAD) of the predicted data from the experimental data of the molecules was used as the goodness-of-fit indicator. The PAAD values obtained with the DSO potential energies are 1.3319%, 0.2108%, 0.2359% and 0.8841% for the respective molecules. The equation for the bound-state energy levels gave PAAD values of 1.0956%, 0.2935%, 3.8667% and 1.4629% from the experimental data of the ⁷Li₂(2³Π_g), NaBr (X¹Σ⁺), KBr (X¹Σ⁺) and KRb (B¹Π) molecules. PAAD values were also obtained using the molar entropy, scaled molar enthalpy, scaled molar Gibbs free energy and constant-pressure molar heat capacity models. The results obtained for the NaBr (X¹Σ⁺) molecule are 0.5401%, 1.9428%, 0.8164% and 2.9770%; the corresponding results for KBr (X¹Σ⁺) are 1.2919%, 1.5639%, 1.5957% and 2.4041% from the NIST data. The results are in good agreement with theoretical data reported in the existing literature and with the available experimental data on diatomic systems, and could find practical applications in fields such as solid-state physics, chemical physics, chemical engineering and molecular physics.

Table 6. Predicted and observed data on the molar entropy (J mol⁻¹ K⁻¹), reduced molar enthalpy (kJ mol⁻¹), reduced molar Gibbs free energy (J mol⁻¹ K⁻¹) and constant-pressure molar heat capacity (J mol⁻¹ K⁻¹) for the NaBr (X¹Σ⁺) molecule. RGFE: reduced Gibbs free energy; CPHC: constant-pressure heat capacity.

Table 7. Predicted and observed data on the molar entropy (J mol⁻¹ K⁻¹), reduced molar enthalpy (kJ mol⁻¹), reduced molar Gibbs free energy (J mol⁻¹ K⁻¹) and constant-pressure molar heat capacity (J mol⁻¹ K⁻¹) for the KBr (X¹Σ⁺) molecule. RGFE: reduced Gibbs free energy; CPHC: constant-pressure heat capacity.
Figure 3. Modeling of the deformed Schiöberg potential with ab initio MRCI interparticle potential energy data for the KBr (X¹Σ⁺) molecule.
Figure 4. Modeling of the deformed Schiöberg potential with experimental RKR interparticle potential energy data for the KRb (B¹Π) molecule.
Figure 6. Modeling of the Pekeris approximation scheme F_2 to the function F_1 for the NaBr (X¹Σ⁺) molecule.
Figure 7. Modeling of the Pekeris approximation scheme F_2 to the function F_1 for the KBr (X¹Σ⁺) molecule.
Figure 8. Modeling of the Pekeris approximation scheme F_2 to the function F_1 for the KRb (B¹Π) molecule.
Figure 10. Graphical representation of the scaled molar enthalpy versus temperature for the ground-state NaBr molecule.
Figure 11. Graphical representation of the scaled molar Gibbs free energy versus temperature for the ground-state NaBr molecule.
Figure 12. Graphical representation of the isobaric molar heat capacity versus temperature for the ground-state NaBr molecule.
The harmonic mean p-value and model averaging by mean maximum likelihood

Analysis of 'big data' frequently involves statistical comparison of millions of competing hypotheses to discover hidden processes underlying observed patterns of data, for example in the search for genetic determinants of disease in genome-wide association studies (GWAS). Model averaging is a valuable technique for evaluating the combined evidence of groups of hypotheses, simultaneously testing multiple levels of groupings, and determining post hoc the optimal trade-off between group composition and significance. Here I introduce the harmonic mean p-value (HMP) for assessing model-averaged fit, which arises from a new method for model averaging by mean maximum likelihood (MAMML), underpinned by the generalized central limit theorem. Through a human GWAS for neuroticism and a joint human-pathogen GWAS for hepatitis C viral load, I show how the HMP easily combines information to detect statistically significant signals among groups of individually non-significant hypotheses, enhancing the potential for scientific discovery. The HMP and MAMML have broad implications for the analysis of large datasets by enabling model averaging for classical statistics.

The family-wise error rate (FWER) is the probability of falsely rejecting a null in favour of an alternative hypothesis in one or more of all the tests performed. Controlling the FWER when some subset of the alternative hypotheses tested might be true is considered the strongest form of protection against false positives. However, the simple and widely used Bonferroni method for controlling the FWER tends to be conservative, especially when the individual tests are positively correlated, as often occurs when alternative hypotheses are compared against the same data. In practice, the conservative nature of Bonferroni correction exacerbates the stringent criterion of controlling the FWER, jeopardizing sensitivity to detect true signals.

Alternatives to controlling the FWER have been proposed based on arguments for less stringency. Controlling the false discovery rate (FDR) guarantees that, among the significant tests, the proportion in which the null hypothesis is incorrectly rejected in favour of the alternative is limited. 4 The widely used Benjamini-Hochberg procedure 4 for controlling the FDR shares with the Bonferroni method a robustness to positive correlation between individual tests, 5 but does not share the consequent problem of becoming overly conservative. These advantages have increased the popularity of FDR control, but necessitate the acceptance of a less rigorous standard of control than the FWER, which in practice can produce large numbers of false positives.

Bayesian statistics has an answer to these problems. While the posterior odds of any individual hypothesis test are inevitably decreased by increasing the number of alternative hypotheses, model averaging allows alternative hypotheses to be combined, so that comparing a group of alternatives against a common null may rule out the null hypothesis collectively. In the case of GWAS, even if no individual variant shows sufficiently strong evidence of association in a region, the model-averaged signal may still achieve sufficiently strong posterior odds. 6,7 Since, in the absence of whole-genome sequencing (WGS), associations at typed variants must be interpreted merely as markers of regional signals, it is clear that analysing variation exhaustively can only improve the prospects for discovery.
However, there is no general method for combining evidence across hypotheses by model averaging in classical statistics. While Bayesian arguments might advocate abandoning classical statistics altogether, 8 p-values from likelihood-based inference have been shown to be mathematically closely related to Bayesian quantities. 9,10 Pragmatically, factors including the difficulty of specifying prior information, typically slower methods and simple inertia mean that the uptake of Bayesian methods still lags behind classical approaches in many settings, including GWAS. Here I show that hypotheses can be model-averaged quickly and easily through the harmonic mean p-value, improving the prospects for scientific discovery using classical statistics and prompting a re-evaluation of the issue of controlling false positive rates in analyses of big data.

Results

The harmonic mean p-value

For observed data X, consider L mutually exclusive alternative hypotheses M_i, i = 1 … L, all with the same nested null hypothesis M_0. Suppose each alternative has been tested against the null to produce a p-value, p_i. The main result of this paper is that the weighted harmonic mean p-value (HMP) of any subset R containing |R| of the p-values,

p̊_R = (Σ_{i∈R} w_i) / (Σ_{i∈R} w_i/p_i), (1)

combines the evidence in favour of the group of alternative hypotheses R against the common null and is an approximately well-calibrated p-value for small values; further, the test

reject M_0 in favour of the alternatives in R when p̊_R ≤ α w_R, where w_R = Σ_{i∈R} w_i, (2)

controls the strong-sense family-wise error rate (FWER) at level approximately α for α ≤ 0.05, no matter how many subsets are tested. Moreover, the generalized central limit theorem (see e.g. Ref. 11) can be used to obtain a p-value that becomes exact for large |R|, because 1/p̊_R tends towards a Landau distribution, 12 which in standard form has probability density function

f(x) = (1/π) ∫_0^∞ e^{−t log t − xt} sin(πt) dt.

This allows tables of significance thresholds to be computed for direct interpretation of the HMP (Table 1), and direct computation of a better-calibrated p-value using the HMP as a test statistic (Equation 3).

The HMP outperforms Bonferroni and Simes correction when the tests are mutually exclusive. The HMP complements Fisher's method for combining p-values, the HMP being more appropriate when (i) rejecting the null implies that only one alternative hypothesis may be true, not all of them, and (ii) the p-values might be positively correlated and cannot be assumed independent. The theory giving rise to the HMP is explained in the next section; readers most interested in applications of the HMP can skip to the following sections.

Model averaging by mean maximum likelihood

A classical analogue of the Bayes factor is the maximized likelihood ratio, which measures the evidence for the alternative hypothesis against the null:

R_i = max_{θ∈Θ_{M_i}} L(θ; X) / max_{θ∈Θ_{M_0}} L(θ; X).

In a likelihood ratio test (LRT), the p-value is calculated as the probability of obtaining a ratio as or more extreme than R_i if the null hypothesis were true. For nested hypotheses (Θ_{M_0} ⊂ Θ_{M_i}), Wilks' theorem 13 approximates the null distribution of R_i as LogGamma(α = ν/2, β = 1) when there are ν degrees of freedom. The idea of this paper is to develop a classical analogue of the model-averaged Bayes factor by deriving the null distribution of the mean maximized likelihood ratio,

R̄ = Σ_{i=1}^{L} w_i R_i,

where the weights could represent prior evidence for each alternative hypothesis. Formally, this means the model is treated as a random effect. The distribution of R̄ cannot be approximated by the central limit theorem because the LogGamma distribution is heavy-tailed, with undefined variance.
Instead, the generalized central limit theorem can be used, 11 which states that for equal weights (w_i = 1/L) and independent and identically distributed R_i's, the standardized mean (R̄ − b_L)/a_L converges in distribution to a Stable law with tail index 1, where 1 is the heavy-tail index of the LogGamma(ν/2, 1) distribution and a_L and b_L are constants. When ν = 2, the specific form of the Stable distribution is the Landau. The assumptions of equal weights, independence and identical degrees of freedom can be relaxed; full details of the Stable distribution approximation are in the Methods.

Notably, when ν = 2 and the assumptions of Wilks' theorem are reasonable, the p-value equals the inverse maximized likelihood ratio, p_i = 1/R_i, so the mean maximized likelihood ratio equals the inverse HMP:

R̄ = Σ_{i=1}^{L} w_i R_i = Σ_{i=1}^{L} w_i/p_i = 1/p̊.

Under these conditions, interpreting R̄ and the HMP are exactly equivalent. This equivalence motivates use of the HMP more generally because:

1. The HMP will capture similar information to R̄ regardless of the degrees of freedom.
2. The Landau distribution gives an excellent approximation for R̄ with ν = 2, and so for 1/p̊.
3. Combining the p_i's rather than the R_i's automatically accounts for differences in degrees of freedom.

Further, the HMP is approximately well calibrated because the LogGamma cumulative distribution function is regularly varying, meaning that the model-averaged p-value (Equation 3) is approximated by the HMP itself for small values (see e.g. Ref. 14).

Directly interpreting the HMP using Equation 2 constitutes a multilevel test, in the sense that any significant subset of hypotheses implies that the HMP of the superset will also be significant. However, this is only approximate, because the exact significance threshold varies with the number of hypotheses combined (Table 1). The formal definition of the multilevel test that controls the strong-sense FWER at level α, and outperforms the power of Bonferroni and Simes correction, is to reject the null for subset R whenever p̊_R ≤ α_L w_R, where α_L is the exact threshold for L tests from Table 1. So Equation 2 is formally a shortcut procedure that guarantees either superior power over Bonferroni and Simes or strong-sense FWER control, depending on whether α or α_L is employed. In practice, the less stringent threshold α could be used as a first pass, and the significance of important results could then be checked using the exact threshold from Table 1. The key insights here are that model averaging yields more powerful tests, arbitrary combinations of hypotheses can be tested for the same cost, and yet a non-significant overall HMP implies no subsets are likely to be significant.

HMP enables adaptive multiple testing correction by combining p-values

That the Bonferroni method for controlling the FWER can be overly stringent, especially when the tests are non-independent, has long been recognized. In Bonferroni correction, a p-value is deemed significant if p ≤ α/L, a threshold that becomes more stringent as the number of tests L increases. Since human GWAS began routinely testing millions of variants by statistically imputing untyped variants, a new convention was adopted in which a p-value is deemed significant if p ≤ 5 × 10⁻⁸, a rule that implies the effective number of tests is no more than L = 10⁶. Several lines of argument were used to justify this ad hoc threshold, 16,17 most of them applicable only to human GWAS. In contrast, the HMP affords strong control of the FWER while avoiding both ad hoc rules and the undue stringency of Bonferroni correction, an advantage that increases when tests are non-independent.
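A minimal Python sketch of Equations 1 and 2 may make the procedure concrete; the p-values and weights below are invented for illustration, and the exact Table 1 thresholds are deliberately not hard-coded here.

```python
import numpy as np

def hmp(p, w):
    # Weighted harmonic mean p-value of a subset R, Equation 1:
    # hmp_R = (sum_{i in R} w_i) / (sum_{i in R} w_i / p_i)
    p, w = np.asarray(p, float), np.asarray(w, float)
    return np.sum(w) / np.sum(w / p)

def hmp_test(p, w, alpha=0.05):
    # First-pass multilevel test, Equation 2: reject the common null for the
    # subset when hmp_R <= alpha * w_R. Borderline results should then be
    # confirmed against the exact thresholds (Table 1, not reproduced here).
    return hmp(p, w) <= alpha * np.sum(w)

# Toy example: L = 10 tests, equal weights w_i = 1/L; combine the first four.
p = np.array([0.008, 0.012, 0.020, 0.015, 0.60, 0.41, 0.72, 0.33, 0.90, 0.12])
w = np.full(p.size, 1.0 / p.size)
print(hmp(p[:4], w[:4]))       # ~0.0123 <= alpha * w_R = 0.02 -> significant
print(hmp_test(p[:4], w[:4]))  # True
```

Note that the combined subset is declared significant even though no individual p-value survives Bonferroni correction at α/L = 0.005, which is precisely the behaviour exploited in the GWAS reanalyses below.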
To show how the HMP can recover significant associations among groups of tests that are individually non-significant, I reanalysed a GWAS of neuroticism, 15 defined as a tendency towards intense or frequent negative emotions and thoughts. 18 Genotypes were imputed for L = 6 524 432 variants across 170 911 individuals. I used the HMP to perform model-averaged tests of association between neuroticism and variants within contiguous regions of 10, 100 and 1000 kilobases (kb), 10 megabases (Mb), entire chromosomes and the whole genome, assuming equal weights across variants. Figure 1 shows the p-value from Equation 3 for each region R, adjusted by a factor w_R⁻¹ to enable direct comparison with the significance threshold α = 0.05. Similar results were obtained from direct interpretation of the HMP (Figure S1). Model averaging tends to make significant and near-significant adjusted p-values more significant. For example, for every variant significant after Bonferroni correction, the model-averaged p-value for the corresponding chromosome was found to be at least as significant. Model averaging increases significance more when combining a group of comparably significant p-values, e.g. the top hits in chromosome 9. The least improvement is seen when one p-value is much more significant than the others, e.g. the top hit in chromosome 3. This behaviour is predicted by the tendency of harmonic means to be dominated by the smallest values. In the extreme case that one p-value dominates the significance of all others, the HMP test becomes equivalent to Bonferroni correction. This implies that Bonferroni correction might not be improved upon for 'needle-in-a-haystack' problems. Conversely, dependency among tests actually improves the sensitivity of the HMP, because one significant test may be accompanied by other correlated tests that collectively reduce the harmonic mean p-value. In some cases, the HMP found significant regions where none of the individual variants were significant. For example, no variants on chromosome 12 were significant by Bonferroni correction nor by the ad hoc genome-wide significance threshold of 5 × 10⁻⁸. However, the HMP found significant 10 Mb regions spanning several peaks of non-significant individual p-values. One of those, variant rs7973260, which showed an individual p-value for association with neuroticism of 2.4 × 10⁻⁷, had been reported as also associated with depressive symptoms (p = 1.8 × 10⁻⁹). In chromosome 3, individual variants were found to be significant by the ad hoc threshold of 5 × 10⁻⁸, but neither Bonferroni correction nor the HMP agreed that those variants or regions were significant at a FWER of α = 0.05. Indeed, the HMP found chromosome 3 non-significant as a whole. Variant rs35688236, which had the smallest p-value on chromosome 3 of 2.4 × 10⁻⁸, had not validated when tested in a quasi-replication exercise that involved testing variants associated with neuroticism for association with subjective wellbeing or depressive symptoms. 15 These observations illustrate that the HMP adaptively combines information among groups of similarly significant tests where possible, while leaving lone significant tests subject to Bonferroni-like stringency, providing a general approach to combining p-values that does not require specific knowledge of the dependency structure between tests.
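The claimed robustness to positive dependency can be probed with a small null simulation (my own illustration, not an analysis from the paper): equicorrelated z-scores give positively correlated two-sided p-values, mimicking linked variants. Bonferroni should become conservative (null rejection rate well below α), while the first-pass HMP test should stay near its approximately calibrated level.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
L, alpha, rho, n_sim = 100, 0.05, 0.5, 20000

reject_hmp = reject_bonf = 0
for _ in range(n_sim):
    # Equicorrelated z-scores under the null: pairwise correlation rho.
    shared = rng.standard_normal()
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(L)
    p = 2 * norm.sf(np.abs(z))
    if L / np.sum(1.0 / p) <= alpha:   # HMP of all L tests, w_i = 1/L
        reject_hmp += 1
    if p.min() <= alpha / L:           # Bonferroni
        reject_bonf += 1

print("null rejection rates:", reject_hmp / n_sim, reject_bonf / n_sim)
```

Under independence the direct-α HMP test is only approximately calibrated, which is why the exact thresholds of Table 1 are recommended for confirmatory use.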
HMP allows large-scale testing for higher-order interactions without punitive thresholds

Scientific discovery is currently hindered by avoidance of large-scale exploratory hypothesis testing, for fear of attracting multiple testing correction thresholds that render signals found by more limited testing no longer significant. A good example is the approach to testing for pairwise or higher-order interactions between variants in GWAS. The Bonferroni threshold for testing all pairwise interactions is (L + 1)/2 times more stringent than the threshold for testing variants individually, and strictly speaking this must be applied to every test, even though this is highly conservative because of the dependency between tests. The alternative of controlling the FDR risks a high probability of falsely detecting an association of some sort. Therefore interactions are not usually tested for. To show how model averaging using the HMP greatly alleviates this problem, I reanalysed human and pathogen genetic variants from a GWAS of pre-treatment viral load in hepatitis C virus (HCV)-infected patients. 19 Jointly analysing the influence of human and pathogen variation on infection is an area of great interest, but requires a Bonferroni threshold of α/(L_H L_P) when there are L_H and L_P variants in the human and pathogen genomes respectively, compared with α/(L_H + L_P) if testing the human and pathogen variants separately. In this example, L_H = 399 420 and L_P = 827. In the original study, a known association with viral load was replicated at human chromosome 19 variant rs12979860 in IFNL4 (p = 5.9 × 10⁻¹⁰), below the Bonferroni threshold of 1.3 × 10⁻⁷. The most significant pairwise interaction I found, assuming equal weights, involved the adjacent variant, rs8099917, with p = 2.2 × 10⁻¹⁰. However, this did not fall below the more stringent Bonferroni threshold of 1.5 × 10⁻¹⁰ (Figure 2A). If the original study's authors had performed and reported all 330 million tests, they could have been compelled to declare the marginal association in IFNL4 non-significant, despite what intuitively appears to be a clear signal. Model averaging using the HMP reduces this disincentive to perform additional related tests. Figure 2B shows that despite no significant pairwise tests involving rs8099917, model averaging recovered a combined p-value of 3.7 × 10⁻⁸, below the multiple testing threshold of 1.3 × 10⁻⁷. Additionally, two viral variants produced statistically significant model-averaged p-values of 5.5 × 10⁻⁵ and 4.8 × 10⁻⁵ at polyprotein positions 10 and 2061, in the capsid and NS5a zinc finger domain (GenBank AQW44528), below the multiple testing threshold of 6.0 × 10⁻⁵. These results show how model averaging using the HMP can enhance scientific discovery by (i) encouraging tests for higher-order interactions when they otherwise would not be attempted and (ii) recovering lost signals of marginal associations after performing an 'excessive' number of tests.

Untangling the signals driving significant model-averaged p-values

When more than one alternative hypothesis is found to be significant, either individually or as part of a group, it is desirable to quantify the relative strength of evidence in favour of the competing alternatives. This is particularly true when disentangling the contributions of a group of individually non-significant alternatives that are significant only in combination.
Sellke, Bayarri and Berger 9 proposed a conversion from p-values into Bayes factors which, when combined with the prior information contained in the model weights, produces posterior model probabilities and credible sets of alternative hypotheses. Because the form of the relationship is a regularly varying function, Bayes factors for similarly favoured alternatives are approximately proportional to the inverse p-value. This linearity mirrors the HMP itself, whose inverse is an arithmetic mean of the inverse p-values. After conditioning on rejection of the null hypothesis by normalizing the approximate model probabilities to sum to 100%, the probability that the association involved human variant rs8099917 was 54.4%. This signal was driven primarily by the three viral variants with the highest probability of interacting with rs8099917 in their effect on pre-treatment viral load: position 10 in the capsid (10.9%), position 669 in the E2 envelope (8.7%) and position 2061 in the NS5a zinc finger domain (11.4%) (Figure 3). Even though the model-averaged p-value for the envelope variant was not itself significant, this revealed a plausible interaction between it and the most significant human variant, rs8099917.

Discussion

The HMP provides a way to calculate model-averaged p-values, offering a powerful and general method for combining tests while controlling the strong-sense FWER. It provides an alternative to both the overly conservative Bonferroni control of the FWER and the lower stringency of FDR control. The HMP allows the incorporation of prior information through the model weights, and is robust to positive dependency between the p-values. The HMP is approximately well calibrated for small values, while a null distribution, derived from the generalized central limit theorem, is easily computed. When the HMP is not significant, neither is any subset of the constituent tests. The HMP is more appropriate than Fisher's method for combining p-values when the alternative hypotheses are mutually exclusive, as in model comparison. When the alternative hypotheses all have the same nested null hypothesis, the HMP is interpreted in terms of a model-averaged likelihood ratio test. However, the HMP can be used more generally to combine tests that are not necessarily mutually exclusive but which may have positive dependency. It can be used alone or in combination, for example with Fisher's method, to combine model-averaged p-values between groups of independent data. The theory underlying the HMP provides a new way to think about controlling the FWER through multiple testing correction. The Bonferroni correction factor increases linearly with the number of tests, whereas the HMP is the reciprocal of the mean inverse p-value. To maintain significance under Bonferroni correction, the minimum p-value must shrink in proportion to the growing number of tests. This strongly penalizes exploratory analyses. In contrast, to maintain significance with the HMP, the mean inverse p-value need only remain constant as the number of tests increases. This does not penalize exploratory analyses so long as the 'quality' of the additional hypotheses tested, measured by the inverse p-value, does not decline. Through example applications to GWAS, I have shown that the HMP combines tests adaptively, producing Bonferroni-like adjusted p-values for 'needle-in-a-haystack' problems when one test dominates, but capitalizing on numerous strongly significant tests to produce smaller adjusted p-values when warranted.
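As a sketch of the conditioning step described above, the snippet below normalizes the approximate per-hypothesis scores w_i/p_i, following the observation that approximate Bayes factors scale with the inverse p-value; the four p-values are hypothetical stand-ins, not the HCV results.

```python
import numpy as np

def approx_model_probs(p, w=None):
    # Approximate posterior model probabilities, conditional on rejecting the
    # common null: probabilities proportional to w_i / p_i, per the text's
    # inverse-p-value approximation to the Bayes factor.
    p = np.asarray(p, float)
    w = np.full(p.shape, 1.0 / p.size) if w is None else np.asarray(w, float)
    scores = w / p
    return scores / scores.sum()

p = np.array([2.2e-10, 6.3e-9, 4.1e-9, 8.5e-8])   # hypothetical interaction tests
print(np.round(100 * approx_model_probs(p), 1))    # percentages, summing to 100
```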
I have shown how model averaging using the HMP encourages exploratory analysis and can recover signals of significance among groups of individually non-significant tests, properties that have the potential to enhance the scientific discovery process.
Selective scepticism over thought: Am I ever justified in doubting that I think that thought but not this one? In this paper, I subject a number of statements avowing selective doubt about an act of thinking to philosophical analysis (e.g., "A thought occurred just now but I do not believe that I was thinking it") to ascertain those circumstances under which they constitute a legitimate expression of scepticism. Can a case be made for epistemic discrepancy sufficient to justify the following claim: "I doubt that I think that thought but not this one"? In support of selective scepticism, I discuss the ontological and epistemic properties evident in an indirect form of Moore's paradox which features beliefs about a thought and a thinker: notably, "I experienced a thought just now but I do not believe that I was thinking it". I argue that the conjunction above contains conjuncts which are ontologically equivalent but epistemically distinct. This difference explains not only why the statement is indirectly Moore paradoxical but how selective scepticism over thought might be justified. To further support my claim for the legitimacy of selective scepticism, I consider research on how a child acquires beliefs about thinking, and speculate over the cause of a rare pathological condition known as thought insertion.

ABOUT THE AUTHOR

Garry Young is a senior lecturer in psychology at Nottingham Trent University. His research interests include delusional beliefs, embodied cognition and the ethics of virtual interactions within video games and cyberspace more generally. His publications include Philosophical Psychopathology: Philosophy without thought experiments (Palgrave Macmillan, 2013), Ethics in the Virtual World: The morality and psychology of gaming (Routledge, 2013), and Transcending Taboos: A moral and psychological examination of cyberspace (Routledge, 2012; co-author Monica Whitty). The current paper is part of an ongoing research programme examining the nature of belief, including delusional beliefs.

PUBLIC INTEREST STATEMENT

Typically, we do not doubt that we think the thoughts we experience; it seems self-evident to us that we do think them. Only in pathological cases-e.g. thought insertion-does the relationship between thought and thinker break down, such that the subject does not believe that s/he thinks certain thoughts. This paper considers whether I could ever be justified in doubting that a particular thought I experience is one I think. In other words, is it ever legitimate to assert that I doubt thinking that thought but not this one? I am not suggesting that what is doubted could be true (that I really am not the one thinking the thought); rather, I am asking whether such selective doubt is justifiable, insofar as I have a reason to doubt that I was thinking that thought. Such justification would of course fly in the face of our everyday experience and beliefs about our thoughts.

PART 1

1. Introduction

Is there not some God, or some other being by whatever name we call it, who puts these reflections into my mind? (Descartes, 1641/1997, §24; emphasis added)

The question above forms part of Descartes' Second Meditation. In its original context, and without the added emphasis, Descartes is entertaining the possibility that all of his thoughts (his reflections) are exogenous.
By adding the emphasis, however, I wish to create the sense in which Descartes, rather than expressing ubiquitous doubt over the source of his thoughts-that they are thoughts he is thinking (as we traditionally, and rightly, take him to be doing)-is here being more selective. In this amended version of his Second Meditation, let us allow that he is sceptical about the origin of these thoughts, specifically, whatever these thoughts happen to be. If we accept as necessary the connection between a thought and a thinker, such that the existence of a thought entails a thinker of that thought, is selective scepticism over thought ever justified? To illustrate, consider the following example taken from a patient suffering from thought insertion (a condition we will return to in Section 6):

[S]he said that sometimes it seemed to be her own thought … "but I don't get the feeling that it is". She said her "own thoughts might say the same thing … but the feeling isn't the same … the feeling is that it is somebody else's …" (taken from Allison-Bolger, 1999, #68, cited in Hoerl, 2001)

Here, the subject is distinguishing between thought she takes to be her own and thought she does not. Is such a distinction ever legitimate, such that I (or anyone) could (legitimately) make the following claim: I doubt that I think that thought but not this one? To be clear, I am not asking whether it is ever legitimate to entertain the possibility that one could doubt that one is thinking certain thoughts; after all, that is precisely what I am doing here and Descartes is doing in my amended version of his Second Meditation. Rather, I am concerned with the legitimacy of the proposition: I doubt that I think that thought but not this one. My query does not stem from a motivation to challenge the entailment between thinker and thought. As noted, for the purposes of this paper, I accept this without defence. Instead, I am interested in whether, epistemically, I could ever be justified in doubting that a particular thought is one that I think, as illustrated by the example of "thought insertion" above.

The aim of this paper is to subject a number of statements avowing selective doubt about an act of thinking to philosophical analysis, in order to ascertain those circumstances under which they constitute a legitimate expression of scepticism. I intend to show that there is an unconventional sense in which one could justify selective scepticism with regard to thought and who is thinking it-namely, adopting the view from nowhere-but that the unconventional nature of this example risks trivialising the scepticism involved (Section 2). That said, the introduction of an "objective stance" does highlight the seemingly important role played by perspectivity in abating scepticism. I say "abating" rather than "eradicating" because I intend to illustrate, through the use of an indirect form of Moore's paradox, how selective scepticism may be granted a degree of legitimacy under certain circumstances, even in the case of thinking thoughts constitutive of one's perspective (Sections 3 and 4). What these circumstances are, and therefore what might constitute a reason for one's scepticism, will be considered in Part 2 when discussing how a child develops an awareness of the act of thinking (Section 5) and the rare pathological condition known as thought insertion (Section 6).

Selective scepticism 1: adopting the view from nowhere

Consider the following claim:

D 1 I doubt (qua do not believe) that I think that thought but not this one.
D 1 discriminates between thoughts that I doubt thinking and thoughts that I do not. Is this discrimination ever justified? One defence of D 1 requires that we adopt an objective stance, or what Nagel (1986) refers to as the view from nowhere. Assuming the possibility of other minds, there are countless thoughts which I do not think or have any direct awareness of (nor do I hold the belief that I do). Thoughts are being generated every moment of every day which I have no direct involvement in or access to. From an objective stance-when conceiving of the whole of thought-it would appear to be perfectly legitimate (and indeed rational) for me to doubt that I think those thoughts. 1 But if I were to adopt an objective stance, then given the nature of this stance, what would be the basis for my differentiation of those thoughts from, say, these thoughts, or simply that one from this? In other words, how am I to identify and so discriminate thoughts it is legitimate for me to doubt that I think from thoughts it is not? Adopting the view from nowhere means that there are an indeterminate number of thoughts in existence at any given time; thoughts with different content and (let us allow, pace Russell and Nietzsche) 2 different thinkers. But, to reiterate, how is each thought individuated such that I am able to pick out one particular thought from another in a way that justifies doubting that I think that thought, specifically, and therefore for the corresponding proposition regarding my doubt of that thought to be legitimate? Borrowing from Williams (1978), when adopting an objective stance, the problem of individuation can be illustrated as follows: take the thought (T1) "I believe that this is true" and the thought (T2) "I do not believe that this is true". Where the demonstrative pronoun is referring to the same thing, both of these thoughts cannot be expressing a truth unless (T1) and (T2) constitute the content of different thought-worlds: for in the same thought-world, it cannot be the case that whatever this is referring to is both believed and not believed to be true. 3 If each thought is to be upheld as legitimate, then the contradictory content of (T1) and (T2) requires that they are located in different thought-worlds, otherwise such contradictory content, contained within the same thought-world, would (in fact, should) be judged irrational. 4 Based on an objective stance, contradictory content would seem to be sufficient for thought individuation, but only insofar as such a criterion (i.e. contradiction) individuates thoughts into different thought-worlds based on assumed rationality. Where one cannot assume rationality, the individuation of (T1) and (T2) into different thought worlds is not possible. Within the same thought-world, the following thoughts-(T3) "I doubt that that is true" and (T4) "I do not doubt that this is true"-can be true without fear of contradiction: for each demonstrative pronoun ("this" and "that") is picking out a different event the truth of which is either doubted or not doubted. Importantly, though, the same can be said of these thoughts irrespective of whether they are from the same or different thought-worlds. Given this, in the case of (T3) and (T4), the problem of individuation remains. From an objective stance, how do we know if (T3) and (T4) belong to the same or different thought-worlds? 
With examples (T3) and (T4), it is not even the case that the first person pronoun "I" is able to distinguish between thoughts in terms of individuating thought-worlds. Each thought could be from the same or a different thought-world, with "I" (as an indexical term) referring either to someone different or the same person depending on which thought-world the two thoughts are from. In order to overcome the problem of thought individuation in the absence of contradictory content and assumed rationality, there needs to exist some kind of identity relation in which the "I" refers to (and is understood to refer to) that which constitutes a particular thought-world. Consider, then, (T5) "A thinks 'I believe that this is true'", and (T6) "B thinks 'I do not believe that that is true'". Thought individuation is made apparent if and therefore because (in this case) there exists a different identity relation in each thought's respective use of "I". In (T5), "I" refers to A, and in (T6) it refers to B, although it could simply refer to "not A". Adapting D 1 so that it can be expressed from an objective stance, we get: D 2 I (qua A) doubt that I think that thought (where the demonstrative pronoun "that" refers to the content of a thought-world individuated as "not A") but not this thought (where the demonstrative pronoun "this" refers to the content of the thought-world individuated by A). What does A represent? It represents a particular thought-world and therefore a particular perspective. The figurative use of A signifies this point of view and the thoughts constitutive of this point of view are these thoughts (the only thoughts of this perspective). The use of the first person pronoun "I", in being indexical, is fixed to a given perspective (this perspective, in this case), figuratively individuated by A. The certainty with which "I" constitutes something more than an indexical that is attached to a particular perspective (figuratively individuated by A or "not A", as the case may be) and instead identifies a substantive subject of thought (in the shape of, say, Descartes' res cogitans), is beyond the scope of this paper to discuss. Suffice it to say that all that is needed for this discussion is for there to exist a belief that "I" refers to the substantial subject of this perspective. When understood in this way, and in accordance with D 2 , the thought "I doubt that I think that thought but not this one" expresses a legitimate doubt. Of course, under D 2 , I do not experience the thought I doubt thinking as if from nowhere; instead, I only conceive of it as belonging to a different thought-world, and therefore a different perspective, even if this perspective is simply understood as "other than mine". 5 As Williams (1978) notes when referring to the particular perspective constitutive of that which I (qua A) experience, which he calls Cartesian reflection: "There is nothing in the pure Cartesian reflection to give us that perspective [the view from nowhere]. The Cartesian reflection merely presents, or rather invites us into, the perspective of consciousness" (p. 100; emphasis added). Cartesian reflection, by inviting us into consciousness, is perspectival; and in being perspectival, there is something-it-is-like to have thoughts (thoughts constitutive of a particular point of view; see Nagel, 1974). As Williams informs us, from this perspective-which I will call my perspective-experiential events either happen for me or they do not. 
I cannot experience events as happening outside of this perspective. Indeed, the quotation from Descartes' Second Meditation presented at the start of this paper reflects this perspectival requirement: for Descartes, scepticism concerns thoughts "located" within his mind and therefore constitutive of his perspective. If the demonstrative pronoun "that" is referring to a thought-event in a different thought-world, identified figuratively as "not A", then, given my restricted (and unique) perspective, within the phrase "I doubt that I think that thought", the term "that", for it to be legitimate, must be referring to a thought-event outside my experience. If it is a thought I cannot experience, then I do not stand in any first person epistemic relation to it. At best, it refers to something which I either conceive as a possibility or come to believe occurs in virtue of some mediated third person epistemic relation-some inference-based on my folk psychology interpretation of the actions of another (e.g. if I see S put up his umbrella then I infer the existence of the belief "it is raining"). In contrast, the demonstrative pronoun "this", in referring to a thought I do not doubt, must be referring to something which I experience (in virtue of constituting a thought-event from within my perspective) and therefore something which I stand in a first person epistemic relation to. Let us reconsider D 2 , this time taking into account the fact that my unique perspective necessarily provides a means of differentiating thought-events which I experience from those which I do not. D 2 therefore becomes:

D 3 I doubt that I think that thought (where "that" refers to some indeterminate thought-from within thought-world "not A"-whose existence I can only conceive of and infer rather than experience directly, and so cannot stand in a first person epistemic relation to), but not this thought (where "this" refers to a thought-event constitutive of my perspective which I experience directly and therefore stand in a first person epistemic relation to).

According to D 3 , what justifies my doubt that I am thinking that thought but not this one is my lack of direct experience of the former thought. That which I experience, I refer to using the demonstrative "this" and (according to D 3 ) consider the fact that I experience it (and therefore stand in a first person epistemic relation to it) sufficient to make illegitimate any claim about doubting that I think it. A thought I do not experience I refer to using the demonstrative "that", and the fact that I can only conceive of it occurring (and stand in a mediated third person epistemic relation to it) justifies my doubt that I think it. 6 What this means in terms of the use of demonstratives, of course, is that, in the context in which they are employed, "that" refers to an experiential event of which I can only conceive (namely, the occurrence of any indeterminate thought from outside of my perspective) whereas "this" refers to a particular experiential event that only I can experience. 7 It is worth noting that underlying D 3 is a more refined version of the ontological position stated earlier-"a thought necessitates a thinker"-in which the assumption is now that a thought I experience (a thought within my thought-world) necessitates that I am its thinker (henceforth "author"). I do not wish to dispute this ontological position.
Instead, I seek to challenge the epistemic relationship described within D 3 which complements it (namely, that I do not doubt thinking this thought: the one I experience). It is my contention that the assertion that I do not doubt thinking the thought I experience (in virtue of experiencing it), if interpreted as necessitating that I cannot legitimately doubt thinking it (which is a reasonable interpretation of D 3 ), is erroneous. To understand why, consider the following statement:

(1) A thought occurred just now but I do not believe that I was thinking it.

Statement (1) is entirely consistent with D 3 . There are lots of thoughts occurring right now that I am not thinking; it is therefore quite legitimate for me to doubt that I am thinking any of these. In light of this, consider statement (2):

(2) I experienced a thought just now but I do not believe I was thinking it.

Statement (2) is not consistent with D 3 . At first glance, this may not appear to be a problem. After all, statement (2) seems a rather odd, perhaps even contradictory, thing to say; so why be concerned if a proposition that is likely to be illegitimate is inconsistent with D 3 ? To understand why statement (2) is potentially problematic for D 3 , we first need to understand that statement (2) expresses what I am claiming is an indirect form of Moore's paradox. It is indirectly Moore paradoxical because it lacks the formal structure of Moore's paradox (as I will demonstrate below) whilst retaining a seemingly contradictory set of conjuncts (indicative of Moore's paradox). On closer inspection, however, the first conjunct (concerned with experiencing a thought) does not contradict the second conjunct (not believing that I was thinking the thought) unless one enforces the entailment between experiencing a thought and thinking it. But even if one accepts that "I am experiencing p" entails "I am thinking p" (an ontological claim), this does not negate the possibility that one's epistemic relationship to the thought is such that there are nevertheless grounds-based on the nature of this epistemic relationship-for doubting that one is the author of the thought, and, importantly, that these grounds are justified. A closer examination of the indirect form of the paradox is therefore informative, as it reveals an epistemic disparity in the relationship between oneself (qua author) and the thought as expressed within the two conjuncts in (2), even where one tacitly accepts the assumption I have been making about the entailment between a thought and its thinker. It is my contention that this epistemic discrepancy, in the face of ontological equivalence (see 4.1), is what legitimises statement (2)'s challenge to D 3 , thereby making erroneous the claim "I do not doubt this thought because it is a thought I experience" (where "do not" is taken to mean "cannot"). This, in turn, opens up the possibility of justified doubt regarding one's authorship of a thought one experiences, even when, owing to the ontological equivalence already noted, one is necessarily the author of the thought one has just experienced.
In order to defend the claim that statement (2) is an indirect form of Moore's paradox, and in order to gain further insight into what this reveals about the nature of the epistemic discrepancy said to exist in the presence of ontological equivalence, and how this makes legitimate the proposition "I doubt that I think that thought"-where "that" (contra D 3 ) refers to a thought I experience-let us briefly consider the structure of a traditional Moore paradoxical utterance. Moore's paradox The following is an example of Moore's paradox: (3) I went to the cinema today but I do not believe that I did. 8 More formally, this can be written p & ~ IBp, where p equates to going to the cinema today, and ~ IBp represents not believing that I did. 9 To those unfamiliar with Moore's paradox, at first glance, the proposition may appear to be a simple case of contradiction and therefore a somewhat peculiar or even absurd thing to say. Certainly, any alleged paradox may not be immediately apparent to the reader. So what is paradoxical about statement (3)? The first conjunct (p) concerns some event in the world-a fact-which is either true or false: either I went to the cinema today or I did not (either p or ~ p must hold). The second conjunct (~ IBp) refers to some "inner" mental state of mine which, independent of the first conjunct, can also be true or false depending on whether I hold the belief or not. As a consequence, the first conjunct has nothing to say about my mental states, and the second tells me nothing about my actual cinema behaviour (Lawlor & Perry, 2008): for irrespective of whether I went to the cinema today, I can believe that I did or not. The truth or falsity of p is not therefore dependent on the truth or falsity of my belief about p, in much the same way as the truth or falsity of whether I hold a belief about p is not dependent on p. 10 Given the independence of each conjunct, it is possible that I went to the cinema today and equally possible that I do not believe that I did; just as (3) describes. Yet as Moore observed, even though each conjunct could be true-thus making the statement non-contradictory-the assertion of the conjunction (I went to the cinema today but I do not believe that I did) remains an absurd thing to say because it implies a contradiction. What is paradoxical about p & ~ IBp, then, is that despite the fact that the conjunction as a whole can be true and therefore non-contradictory, it cannot be coherently asserted (Vahid, 2005). Selective scepticism 2: Evidence from an indirect form of Moore's paradox Statement (2) has the appearance of a Moore paradoxical utterance although it does not conform to the formal structure found in (3). Instead, it takes the following form-q & ~ IBp(q)-where q equates to "experiencing a thought just now" and ~ IBp(q) equates to "not believing that I was thinking that thought". Given this structure, the truth or falsity of q is independent of the truth or falsity of ~ IBp(q). For (2) to be Moore paradoxical, one would have to endorse the entailment between q and p(q), such that q (experiencing a thought just now) entails p(q) (thinking that thought). Only by endorsing such an entailment would statement (2) contain all the hallmarks of a Moore paradoxical utterance. To illustrate, consider statement (4): (4) I experienced a thought just now (which entails I was thinking it) but I do not believe that I was thinking it. 
Put differently, but reflecting the entailment in (4), the traditional Moorean structure (p & ~ IBp) becomes more evident when expressed as follows: (5) I was thinking a thought just now but I do not believe that I was thinking it. Ontological equivalence; epistemic discrepancy The ontological implication of "q entails p(q)" is that experiencing the thought and thinking it amount to the same event. This means that statements (2) and (5) are ontologically equivalent insofar as the mental event <experiencing a thought> is equivalent to the mental event <thinking that thought> even though they are differently described, at least within the first conjunct of conjunctions (2) and (5). Of course, where it is accepted that q is ontologically equivalent to p(q) (more formally, q = p(q)), then it is somewhat unremarkable to add that q entails p(q). But if statements (2) and (5) are ontologically equivalent, insofar as they pick out the same mental event, then what is revealed by any further comparison between (2) and (5) is that the epistemic relationship between the first and second conjuncts described in (5) does not match the epistemic relationship the subject has with what we now understand to be the same mental event described in (2). This is evidenced by the fact that statement (5) expresses an epistemic relationship between the first and second conjuncts that is prima facie contradictory in a way that the relationship between the first and second conjuncts in statement (2) is not. To illustrate, the seeming contradiction within traditional Moore paradoxical utterances (p & ~ IBp) is often explained as follows: when uttered intelligently, "that p" is understood to be equivalent to one's belief that p (Evans, 1982; Williams, 2004). What is implied within statement (3) (I went to the cinema today but I do not believe that I did) is made explicit in statement (3*) "I believe I went to the cinema today but I do not believe that I did". Transferring this to (2) (I experienced a thought just now but I do not believe that I was thinking it), we get (2*) "I believe I experienced a thought just now but I do not believe that I was thinking it". What the subject believes within the first conjunct of (2*) is therefore not equivalent to what they believe within the first conjunct of (5), as is made even more apparent when expressed as follows: (5*) "I believe I was thinking a thought just now but I do not believe that I was thinking it". The epistemic positions differ insofar as the subject believes, according to the first conjunct in (2*), that they have just experienced a thought, compared to believing they were thinking a thought (as claimed within the first conjunct of (5*)). Now, whilst the mental event that each belief is about may be ontologically equivalent, owing to the entailment between q and p(q) (as discussed), the manner in which each mental event is described within the context of the belief expressed, which differs in the respective first conjuncts of statements (2*) and (5*), means that the epistemic relationship the subject has with the mental event is not equivalent. As a result of this difference (and to reiterate my earlier claim), the epistemic relationship between the first and second conjuncts of statement (2*) is not prima facie contradictory, unlike that found in statement (5*).
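For ease of reference, the forms in play can be collected in one place. The following is a compact restatement in the paper's own notation (IB abbreviating "I believe"); it adds nothing beyond the forms already introduced:

```latex
\begin{align*}
\text{(3)}\quad  & p \land \lnot IBp          && \text{omissive Moorean form}\\
\text{(3*)}\quad & IBp \land \lnot IBp        && \text{assertion makes the implicit belief explicit: contradictory}\\
\text{(2)}\quad  & q \land \lnot IBp(q)       && \text{indirect form, where } q \text{ entails } p(q)\\
\text{(2*)}\quad & IBq \land \lnot IBp(q)     && \text{no surface contradiction}\\
\text{(5*)}\quad & IBp(q) \land \lnot IBp(q)  && \text{contradictory, given } q = p(q)
\end{align*}
```

Roughly put, (2*) collapses into the contradictory (5*) only if belief is closed under the entailment from q to p(q); the argument of this section is precisely that the subject's epistemic position need not respect that closure, which is what leaves room for the selective doubt under discussion.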
Given the prima facie contradiction inherent in statement (5*) but absent from (2*), consider the following remark by Shoemaker (1995): "what can be coherently believed constrains what can be coherently asserted" (p. 227). In the case of (5*), Shoemaker's constraint clearly applies because what is believed is prima facie contradictory (as noted) and therefore cannot be coherently asserted. But what about (2*)? Here, what is believed is not prima facie contradictory, even though what is believed cannot be true given the entailment between q and p(q) and the ontological implication of this. Given that there is no prima facie contradiction in (2*), might there be grounds to justify those beliefs underlying the assertion found in (2*), thus making the proposition "I doubt that I think that thought but not this one" legitimate? As a means of considering this question, in Part 2 I present findings from the field of developmental psychology indicating that children learn-qua acquire the belief-that the thoughts they experience are their thoughts that they think. The association between a thought and its thinker, then, irrespective of whether one endorses the ontological position presented here, is not an epistemic given. In other words, it does not constitute innate knowledge that we possess but, rather, is something we come to believe over time. If we acquire this belief then one might conjecture that such a belief is open to change, such that under certain circumstances one might come to doubt the association between thought and thinker. When speculating about what these circumstances might be, I draw on contemporary explanations of thought insertion to inform my discussion. Developing an awareness of thinking According to Flavell, Green, and Flavell (1995; see also Flavell, 1999), preschool children (aged between 3 and 5) understand that thinking is a private activity which occurs "in the head". Around this time, they also acquire an understanding of themselves as "knowers": that their thoughts amount to a source of knowledge for them (Kuhn, 2000). Preschoolers are, however, poor at identifying when someone is thinking, even in the case of their own thoughts. They do not assume, for example, that they must have been thinking when engaged in a task (Flavell, Green, & Flavell, 1993); neither do they always show an understanding of what they might have been thinking about or even what others might be presently thinking about in a given context (Flavell et al., 1995). In fact, Flavell and Wong (2009) conclude that, to a large extent, preschoolers severely underestimate the amount of mental activity taking place within a person (including themselves) at any given time. 11 They fail to realise (it seems) that individuals, including themselves, experience a continual flow of mental content; an unstoppable stream of consciousness (James, 1890)-what Harris (1995) calls an "involuntary pulsation" (p. 51)-even in someone who may not be trying to think of anything in particular (see Flavell, Green, & Flavell, 1998). This has led Flavell, Green, Flavell, and Lin (1999) speculatively to claim that "children are less aware than adults of the experiential, what-it-is-like-to-have-them, aspects of conscious mental states such as thoughts and percepts, and instead focus almost exclusively on their cognitive content" (p. 411). 
In short, an accurate description of the preschoolers' ability is that these children are introspectively aware of thought content before they begin to understand what thinking is (that it involves a thinker, for example), let alone that they themselves continually engage in thinking. Alongside this general lack of awareness of the fact that they are thinking, preschoolers show little understanding of cognitive cueing (Gordon & Flavell, 1977). They do not appear to understand that mental events trigger other mental events, usually in a coherent manner related to one's experiences. Thus, Flavell et al. (1995) describe young children's concept of thought as being quite different to that of adults. First, the concept of a thought (thinking, mental activity etc.) is doubtless less salient for them than it is for most adults; they do not think about thoughts very often spontaneously. When they are brought to think about them, however, they are more likely than adults to regard them as isolated and largely inexplicable mental happenings, not linked to preceding cues or subsequent effects. Although they may occasionally become aware that something instigated a thought (e.g. an instruction to think, an emotionally arousing situation) or that a thought instigated something (e.g. an action based on that thought), the question of possible causes and effects usually does not even arise for them when thinking about thinking. (pp. 84-85) It would seem that even if preschoolers do demonstrate awareness of the act of thinking, because they lack an understanding of cognitive cueing, it is not equivalent to the older child/adult's awareness of their stream of consciousness. Instead, this awareness, such as it is, is of isolated islands of thought. Moreover, because preschoolers lack sustained awareness of their own mental activity (owing to their sporadic ability to perform meta-cognition, or reflexive thinking), they are less likely to be aware of their own "mental history" (Flavell et al., 1995; Flavell, Green, & Flavell, 2000). So, when performing a task which requires them to express their thoughts, the process by which they arrive at a given solution or judgement will be less accessible to them compared to older children and adults. As such, by the time children enter kindergarten, the child's ability to perform metacognitions is only rudimentary and effortful (Dimmitt & McCormick, 2012). Therefore, if distracted, such that their thoughts "stray", they may be less likely to notice that this distraction has impacted their own thinking, resulting in the production of less relevant thoughts. In fact, on those limited occasions when a preschooler is aware of the fact that she is thinking x, she remains largely ignorant of the fact that this process forms part of her continuous stream of consciousness, and of whether it was a thought she initiated (through the process of cognitive cueing) or was unbidden (Flavell et al., 1998). Preschoolers understand that thoughts are private inner entities, but this does not necessitate that they further equate inner and private with personal ownership (either with these thoughts being my thoughts, or with them being thoughts I think); nor are they aware of the subjective, what-it-is-like-to-have-them quality of thoughts. The discrepancy between an awareness of the content of thought and the act of thinking that thought indicates that preschoolers do not yet understand that these private inner entities are necessarily generated, let alone self-generated.
As such, "[f]or a child for whom the world of thought is largely causeless, any thought might occur at any time" (Flavell et al., 1995, p. 86). Given this, even if we endorse the premise that experiencing a thought necessitates thinking it, it does not follow that experiencing thoughts necessitates experiencing thinking: if by "experiencing thinking" one means experiencing it as thinking: for, as we have seen, this is not the case in preschoolers. Where S has cognitive abilities equivalent to the preschooler, S would not have formed the belief that they are thinking and are therefore the author of the content of the thoughts experienced. But how does this help us assess the legitimacy of selective scepticism regarding thought? Given S's cognitive abilities, a lack of belief about the authorship of a thought is the case for any thought content experienced, not just for some: that one, say, but not this one. What the developmental literature reveals is that one's belief about being the author of thought is not a given; it is something we acquire. Once acquired, the mechanism underlying this belief acquisition must function consistently. For selective doubt to occur, one could conjecture that this mechanism is functioning intermittently. This possibility will be explored in the next section with reference to the pathological condition known as thought insertion. Thought insertion Thought insertion is characterised by the subject's attribution of their thoughts to someone else, such that "it is as if another's thoughts have been engendered or inserted in them" (Cahill & Frith, 1996, p. 278). As Gerrans (2001) further explains: [T]he subject has thoughts that she thinks are the thoughts of other people, somehow occurring in her own mind. It is not that the subject thinks that other people are making her think certain thoughts as if by hypnosis or psychokinesis, but that other people think the thoughts using the subject's mind as a psychological medium. (p. 231) The subject claims that certain thoughts are being put into their mind, like "Kill God" (Frith, 1992, p. 66). There is something about the occurrence of these thoughts specifically that leads the subject to believe that they have been inserted. What, then, is the means or mechanism by which the subject is discriminating between thoughts they doubt thinking and thoughts they do not? 12 A number of explanations of thought insertion have been proposed over the years. My intention, here, is not to critique these or provide any kind of review. Instead, by presenting suitable candidates from among these explanations, I aim to support the legitimacy of the proposition "I doubt that I think that thought but not this one" and so provide a way to justify, epistemically, selective scepticism. As a means to that end, I present explanations proffered by Fernández (2010) and Billon (2013). When considering the phenomenon of inserted thoughts, Fernández asks: What does the subject experience? In response, he states: she experiences a lack of commitment to a particular belief; a belief which, in virtue of this lack of commitment, is experienced as "inserted". To understand how this might come about, Fernández presents an account of how we typically become committed to our beliefs (concerning our thoughts and perceptions); something he refers to as the "bypass" procedure or model of self-knowledge. As he explains: The bypass model of self-knowledge … is a view about what constitutes our epistemic grounds for believing that we have a certain belief. 
The view is that the mental states that constitute our evidence or grounds for a given belief (states such as our perceptual experiences or our memory experiences) perform a sort of double duty. They entitle us to have that belief, and they also constitute our evidence or grounds for the meta-belief that we have it. (2010, p. 81) By his own admission, Fernández employs an "undemanding notion of epistemic justification" (ibid.) in which a belief is justified if it is a belief that regularly co-occurs with a particular mental state (e.g. the belief that there is a chair in front of me is justified if it regularly co-occurs with the perception I have of a chair in front of me). Certainly, the epistemic relationship described here is not of the kind demanded by Descartes' method of doubt (for example); Fernández does not require certainty, just reliability. The epistemic relationship employed within the "bypass model" therefore seems compatible with selective scepticism. To explain: the "bypass" process concerns the manner in which we acquire evidence to justify a particular belief we hold. Let us say that I perceive a chair in front of me. In doing so, I acquire the first-order belief with content "there is a chair in front of me". Given this is the case, consider the extent to which the following two questions differ: (i) Do you believe that there is a chair in front of you? (ii) Is there a chair in front of you? Based on the level of epistemic justification we are operating at, it would make little sense to answer "yes" to one and "no" to the other. One's response to the question "Is there a chair in front of you?" reveals one's belief on the matter. Thus, when asked "Do you believe that there is a chair in front of you?", I do not need to introspect and search out my belief; rather, I turn my gaze outward to see if there is indeed a chair there. What justifies my belief (my meta-belief: believing that I have the first-order belief) such that I feel justified in believing that there is a chair in front of me is the same evidence that justifies my first-order belief with content "there is a chair in front of me". When answering (i), I bypass the need to introspect in order to justify my meta-belief, and in fact answer the question using the same process I would use when responding to (ii). Under normal circumstances, when answering "yes" to the question "Is there a chair in front of you?", I am committed to the belief (my meta-belief) that there is a chair in front of me (I meta-believe that there is a chair in front of me; recall the explanation of Moore's paradox in Section 3; see also Evans, 1982; Williams, 2004). Similarly, if I am thinking about chairs and their relative location to me, typically, I am committed to the following: "I meta-believe I am thinking about chairs … (etc.)". Through "bypass", the epistemic justification for one's first-order belief and one's meta-belief is the same (again, based on the relatively undemanding epistemic justification we are operating at, which is compatible with D 3 ); but, more than this, one's meta-belief acts to endorse the content of one's first-order belief: one is committed to its content as something one believes. What Fernández suggests in the case of thought insertion, however, is that the subject cannot always commit to a first-order belief based on the process of "bypass".
Where there is disruption in the process of "bypass", the same evidence used to justify the first-order belief corresponding to the presence of a chair in front of me no longer provides sufficient justification for the meta-belief "I believe that there is a chair in front of me". In the absence of such a meta-belief, the thought I am experiencing with content "there is a chair in front of me" does not correspond to any meta-belief I possess (in contrast to what would be the case if the process of "bypass" were working normally) and so is not recognised by me as a thought I have initiated. By not committing to the first-order belief, I do not endorse it. Consequently, I doubt that I am its author. What remains unclear, however, is how the disruption in "bypass" (which results in a lack of commitment and endorsement) manifests itself to the subject, such that it should be experienced as "inserted", and whether this change is the means by which discrimination in authorship occurs. Is there, for example, a change in the subjective quality of the experience of the first-order belief, and is it this qualitative shift that justifies (at least to the subject) selective scepticism? It is not clear from Fernández's account what the evidence is (again, from the subject's perspective) that justifies the lack of commitment to the first-order belief and so justifies that that thought does not correspond to a (meta-) belief I possess, which then allows me to doubt it is a thought I was thinking. There is, of course, the possibility that the failure to commit occurs at the subpersonal level (although, given the absence of research, such a possibility remains unsubstantiated). In short, what remains unresolved is what it is like (if it is like anything) for a failure in "bypass" to occur and whether this change acts as a means of discriminating between thoughts, and so as a means of justifying the kind of selective scepticism we are discussing here. Interestingly, Billon (2013) proffers just such a qualitative shift as a means of identifying and therefore distinguishing "inserted" from "non-inserted" thoughts. Billon acknowledges that subjects are reflexively aware of putatively inserted thoughts, insofar as they have adequate introspective access to them, and also that they accept that these thoughts occur within the boundary of their experience (constitutive of their perspective; again, as required by D 3 ). However, for Billon, what inserted thoughts are not is phenomenally conscious. Typically, thoughts, in virtue of occurring within the bounds of my perspective, are accompanied by a certain something-it-is-like-for-me to have them (as touched on previously), thereby making the thought phenomenally conscious. This quality, one might conjecture, increases my commitment to the thought (the first-order belief), in keeping with Fernández's account, and so contributes to the formation of the corresponding meta-belief with the same content (again, as described by Fernández). What is missing, in the case of thought insertion, Billon conjectures, is this phenomenal quality, such that (as subject) there is nothing-it-is-like-for-me to have these thoughts.
What I experience, then, is a thought I have normal introspective access to, occurring within my perspective, but for which there is nothing-it-is-like-for-me to have the thought (recall the extract presented in Section 1, taken from the patient suffering from thought insertion, in which she described something feeling different in the case of the putatively inserted thought). The lack of phenomenal quality corresponds to a qualitative shift, I contend, which one might surmise, in virtue of the absence of phenomenal consciousness (or even a salient sense of its loss with regard to that thought) means that there is insufficient evidence to justify any commitment to the thought (the first-order belief). Something to consider, of course, is whether the lack of phenomenal consciousness is the result of a failure to commit to the first-order belief (which is compatible with a higher-order thought theorist's approach to phenomenal consciousness in which a higher-order thought (meta-belief) is necessary for phenomenal consciousness) or whether, as the result of some form of disruption, there occurs a lack of phenomenal consciousness which would normally accompany the first-order belief (as posited by first-order thought theorists), the absence of which prevents one's commitment, or perhaps further reinforces one's lack of commitment, to the thought. Billon's position would seem to be more compatible with the latter view. As a final point, it is worth emphasising that, throughout this paper, the epistemic justification I have sought is merely that which justifies one's selective doubt; in other words, that which provides a reason to be sceptical over one's authorship of certain thoughts. I am not trying to defend a stronger position in which the subject is justified in believing "not p" (as opposed to doubting-qua not believing-p). After all, one could have a reason to be sceptical about p without having a sufficient and therefore justified reason to believe "not p". In the case of thought insertion, the subject typically makes the stronger claim: "I believe 'not p'" (namely, I believe that this thought is not something I think). My selective use of explanations of thought insertion is therefore purely instrumental, insofar as drawing on these explanatory accounts allows me to conjecture over the mechanism(s) that may impact one's belief system. Such impact, I contend, is enough to justify doubt over the authorship of some thoughts-at least from the point of view of the subject-as it provides the subject with sufficient evidence (and therefore reason) to doubt the authorship of those thoughts; but I would not go so far as to say that what has been presented here justifies the belief that one is not the author of certain thoughts. Conclusion In conclusion, what I hope to have shown in this paper is that there are epistemic grounds to justify selective scepticism over the authorship of one's thoughts, thereby making the proposition-"I doubt that I think that thought but not this one"-legitimate. Even where one accepts the entailment between a thought and its thinker (culminating in a particular ontological position), there is nevertheless a case to be made for epistemic discrepancy sufficient to invite and even justify the kind of selective scepticism discussed here. 
Whilst the argument presented has drawn on respected empirical work in the field of child development, strongly suggesting the acquisition of certain beliefs regarding thought and oneself as thinker (as opposed to this being a given), I nevertheless acknowledge that the same argument has selectively drawn on more speculative work, specifically regarding explanations of thought insertion (pace these authors). My aim in presenting some of these tentative claims has been merely to proffer conjecture: to speculate over what form the mechanisms underlying selective scepticism might take and, subsequently, what might constitute the epistemic basis for this scepticism. Funding The author received no direct funding for this research. Citation information Cite this article as: Selective scepticism over thought: Am I ever justified in doubting that I think that thought but not this one?, Garry Young, Cogent Arts & Humanities (2016), 3: 1145567. Notes 1. One could also argue, in this case, that in addition to doubting that I think those thoughts, one is justified in believing that those thoughts are not thoughts that I think. The focus of this paper, however, is on what constitutes evidence to justify selective scepticism. I do not intend to discuss what might constitute sufficient grounds for a belief (even a belief that something is not the case). 2. Although I say pace Russell and Nietzsche (see Nietzsche, 1886/2003; Russell, 1927/1970, p. 171, 1946/1961), in a sense this is unnecessary because these authors question the certainty with which Descartes, using his method of doubt, could legitimately claim to know that there is an "I" which thinks and not necessarily that an "I" which thinks, or is the subject of thought, exists. This epistemological objection was, of course, first raised by Lichtenberg (1806/1990). 3. The normative position alluded to here is based on the law of non-contradiction, whereby believing ɸ and not believing ɸ is contradictory. Following the law of non-contradiction, if one were to believe ɸ, and equally not believe ɸ, then one would be considered irrational. Priest (2006), however, challenges this position. He accepts that, prima facie, a contradiction presents itself but adds that, on occasion, one could nevertheless rationally believe and not believe the truth of ɸ. By adopting dialetheism, Priest proposes the truth of some contradictions: for example, that the proposition "I always lie" is both true and false when uttered by a liar, thereby making it something that can be believed and not believed to be true. Given dialetheism, the law of non-contradiction is not universally accepted (I thank the anonymous reviewer for bringing this fact to my attention). Nevertheless, the possibility of the truth of contradictions, and therefore the possibility of contradictory beliefs, does not threaten the point I am making. If anything, it adds to the problem of individuating thought under the circumstances described. 4. Again, such a normative claim is based on the law of non-contradiction (see Note 3). 5. For the sake of argument, I assume the existence of other minds (therefore other thought-worlds). I therefore assume that there is sufficient evidence to justify the belief in the existence of other minds, even if this evidence is not direct experiential evidence. What I doubt, based on a lack of direct experiential evidence, is that I think those thoughts. 6.
It is the case, of course, that I do not experience thoughts occurring at the subpersonal level (that is, below the level of conscious awareness). The extent to which one is justified in doubting these thoughts would make for an interesting discussion that, unfortunately, is beyond the scope of this paper. 7. For simplicity's sake, I am using the term "thought" in the context of an experiential state to mean something linguistic rather than, say, pictorial (an image). Thus, the thought I experience is, for example, the phrase "Mary, Mary, quite contrary", rather than the image of a young lady tending her garden. 8. Adapted from a version used by Moore (1942, p. 543). 9. This is an example of the omissive form of the paradox. The paradox can also be presented in the commissive form: p & IB~ p (I went to the cinema today and I believe that I did not). Only the omissive form is discussed in this paper. This fact should not detract from the argument presented, however. 10. The same cannot be said for the truth or falsity of the content of the belief, of course (that is, what the belief is about), which is dependent on p (in this case, on whether I went to the cinema today or not). 11. Interestingly, preschoolers are likely to overestimate the "strength" of their memory; claiming prior to a memory test that they will remember far more of the items to be recalled (sometimes all of them) than they actually do (Lipko, Dunlosky, & Merriman, 2009; Van Overschelde, 2008). 12. In this paper, I am concerned with the act of doubting (qua not believing) "that p" and not with believing "not p". Those suffering from thought insertion often believe "not p" (believe, that is, that they did not think these thoughts) rather than simply doubting (qua not believing) p: that they were the one thinking these thoughts. A prerequisite of believing "not p", however, is doubting "that p". For this reason, I consider the phenomenon of thought insertion to be pertinent to the issue of selective doubt.
12,094.6
2016-02-10T00:00:00.000
[ "Philosophy" ]
Availability of Financial and Medical Resources for Screening Providers and Its Impact on Cancer Screening Uptake and Intervention Programs Interventions for residents and medical/financial resources available to screening providers can improve cancer screening rates. Yet the mechanisms by which the interactions of these factors affect the screening rates remain unknown. This study employed structural equation modeling to analyze the mechanisms underlying these factors. Data for Japanese municipalities' medical/financial status, their implementation of screening interventions, and the number of municipality-based cancer screening appointments from April 2016 to March 2017 were obtained from an open database. Five cancer screenings were included: gastric, lung, colorectal, breast, and cervical cancer screening; all are nationally recommended for population screening in Japan. We defined two latent variables, namely, intervention for residents and medical/financial resources, and then analyzed the relationships between these variables and screening rates using structural equation modeling. Models were constructed for gastric, lung, and breast cancer screening, and similar relationships were observed. With these cancer types, medical/financial resources affected the intervention for residents, which in turn directly affected screening rates. One limitation of this study is that it only included screening by municipalities, which may cause selection bias. In conclusion, financial pressures and lack of medical resources may cause a reduction in screening intervention programs, leading to stagnant screening rates. Ensuring consistent implementation of interventions for residents may improve local and regional cancer screening rates. Introduction Evidence shows that cancer screening reduces mortality for several types of cancer through early detection and treatment [1]. The importance of cancer screening is increasing, given that the burden of cancer is expected to grow due to aging [2], since aging is one of the main risks of cancer. Previous evidence indicates that several factors associated with participants and some interventions by screening providers affect the screening rate. For example, higher income is associated with higher cancer screening rates; specifically, the screening rates for cervical and breast cancer in the lowest income quartile were 61.6% and 53.8%, respectively, and in the highest income quartile they were 73.4% and 68.3%, respectively [3]. Older age also positively affects screening rates: "men and women 65 years and older had higher rates of any recommended colorectal cancer test (55.8% and 48.5%, respectively) than persons 50 to 64 years (males, 41.0%; females, 31.4%)" [4]. Further, high educational background [4,5] and high socioeconomic status [6,7] also positively affect screening rates. Interventions by screening providers, such as screening invitations and reminders for residents [8-11], co-payment strategies for cancer screening at the public expense [12], and education for the target population [8,13,14], positively affect screening rates. Additionally, screening providers' financial status and the availability of medical resources also contribute to the screening rates. Previous studies have shown that financial pressures on screening providers and insufficient numbers of public health nurses are associated with low screening rates [15,16]. In clinical settings, these factors would interact to influence screening rates.
However, it is unknown which mechanisms work in this process. Clarifying these mechanisms would make it possible to identify reasons for low screening rates and suggest efficient measures to improve them. Thus, describing these mechanisms is needed, particularly in Japan, where nationally organized screening programs are not yet in place [17]. In Japan, insurers (municipalities and companies) responsible for managing cancer screening are not obligated to implement such screening interventions for the insured or have them screened. Therefore, insurers can decide whether to implement screening interventions at their discretion [17]. This situation allows screening providers to make insufficient efforts to improve screening rates in order to avoid the financial or medical burden of providing screenings. Thus, there may be an intervening factor, such as screening interventions, in the previously reported relationship between financial/medical resources and screening rates [15,16]. Analyzing these mechanisms may help correctly recognize the problems related to an ineffective screening system, and help to develop efficient strategies to improve the screening rates. For example, if sequential mechanisms exist, such as medical/financial resources affecting the screening intervention and subsequently having an effect on screening rates, providing resources can help to both increase interventions and improve the screening rates. Alternatively, if each of these factors affects the screening rates independently, it would be necessary to mandate the screening interventions in addition to providing support for resources. In this study, we used structural equation modeling (SEM) to analyze how Japanese municipalities' medical/financial resources and screening intervention policies affect cancer screening rates. Our objective was to elucidate the causal relationships between cancer screening rates and multiple factors of municipalities' policies and conduct. SEM is a statistical technique used to model hypothesized relationships among observed and unobserved variables. Variables that cannot be observed are treated as latent variables in SEM, and constructed from measured variables. The accuracy of SEM results is evaluated based on fit indices and the overall fit of the model, which leads to a valid analysis of the relationships. Given these characteristics, SEM was suitable for this study, which aimed to assess causal relationships between screening rates and factors that can affect them, including unmeasurable ones, such as the availability of medical/financial resources of municipalities or how municipalities provide screening interventions. Study Design An ecological study was conducted using an open database. Most of the data were collected from e-stat, an open online database provided by the Statistics Bureau of the Ministry of Internal Affairs and Communications [18]. The sources of all the acquired data are listed in the Supplementary Materials. Cancer Screening In Japan, insurers, mainly local municipalities and corporate employers, implement population-based cancer screening. The data on participants in company-based cancer screening are not formally recorded; therefore, our analysis focused on the number of participants in municipality-based screening to calculate the screening rate in this study. As some data were exclusively available at the prefecture level, we combined data from each local municipality by prefecture, resulting in data on 47 prefectures.
Five cancer screenings were included: gastric cancer, lung cancer, colorectal cancer, breast cancer, and cervical cancer; implementation of these testing programs is nationally recommended for population screening in Japan [17,19]. Since gender differences have been reported in previous studies [20-22], we analyzed data according to gender as well. The number of participants screened was derived from the Report on Regional Public Health Services and Health Promotion Services conducted in 2016 [18]. In Japan, a two-day fecal occult blood test for over 40-year-olds is recommended for colorectal cancer screening. A biennial Pap smear test for people over the age of 20 is recommended for cervical cancer. The number of participants in these screenings was used to calculate the screening rates for colorectal and cervical cancer. Screening methods for gastric, lung, and breast cancer are not standardized in Japan; thus, we defined the number of people screened for these cancers based on their screening recommendations [23]. For breast cancer, the sum of the people who undertook biennial mammography with or without visual palpation was used. The number of people over 40 years taking an annual chest X-ray examination was used for lung cancer. Some municipalities conduct sputum cytology for heavy smokers, but the eligibility criteria for the test differ among municipalities. Therefore, in an effort to maintain uniformity of data and accuracy, we did not count these people. The number of people over 40 years old who underwent an annual gastric X-ray examination was used for gastric cancer. Due to issues in data availability, we did not include the number of participants in the endoscopic screening (Appendix A), which is recommended biennially for those over 50 years old. While all residents can participate in the municipality-based screening, it is often customary for employed individuals to take the screening provided by their insurer, which in most cases is their company. Those who do not have the opportunity to be screened by companies, such as those self-employed or unemployed, are the target of these municipality-based screenings. Thus, the total population minus the number of people employed indicates the population eligible for municipality-based screening:

Eligible population = Total population - Number of employed individuals (1)

Screening rates were then calculated, stratified by cancer type, prefecture, and sex, as follows:

Screening rate = Number of screened participants / Eligible population (2)

The number of employed individuals and primary industry workers was obtained from the 2015 Census. Indicators of Financial Resources Three financial indicators were obtained from the 2016 Survey of Local Financial Conditions [18], and two variables to be used in the analysis were created by us using these indicators. The first variable was the sum of the municipal Health and Sanitation Expenditure and the Public Health Center Expenditure per capita (public health expenditures). Health and Sanitation Expenditure represents local municipalities' expenditures for various health projects, and the costs of projects related to cancer screening are also included in this category. Public Health Center Expenditure is the operating expense of public health centers, which are responsible for public health projects in each municipality. In some municipalities, public health centers are responsible for cancer screening services, and in such cases, the costs concerning cancer screening are recorded as Public Health Center Expenditures.
Since a fixed standard for the categorization of cancer screening costs is lacking, assigning expenditure categories is at each municipality's discretion. For this study, these variables were summed and treated as a single variable. We assumed that public health expenditures reflect the cost invested in public health projects, including cancer screening. The second variable was the availability of municipalities' general revenue divided by their population (general revenue per capita). The general revenue is the municipality's budget and its use is decided at each municipality's discretion. Cancer screening services in Japan are implemented using this budget in all municipalities. We assumed that the general revenue per capita reflects the financial capacity to implement the projects, including projects aimed at improving cancer screening. Indicators of Medical Resources Four variables were obtained from surveys conducted in 2016 as indicators of health care resources [18]. The number of nurses and public health nurses per 1000 individuals in each prefecture was noted. We included public health nurses, as previous studies show that they have an impact on the screening rate, as they are responsible for the practical work of cancer screenings, such as sending screening invitations to the residents. The third variable was the number of hospitals, including clinics, per 1000 individuals. We excluded hospitals and clinics specializing in psychiatry as they rarely provide cancer screening. The last variable for medical resources was the number of physicians per 1000 individuals in each prefecture. Indicators of Screening Interventions Eight indices were used as indicators of screening interventions. All indicators were obtained from the Survey on Cancer Screening Practices in Municipalities [24], a national survey. We used the rate of municipalities that met the following conditions: sending screening invitations (Call), re-invitations to the unscreened after the call (Recall), providing cancer screenings free of charge (Charge-free), providing screenings in the evenings or on holidays (After-hours), providing an opportunity for the residents to take screening in other municipalities (Extra-region), providing cancer screenings using a modality outside the recommendation of the national guideline (Modality extension), providing cancer screenings with little evidence of mortality reduction (Out of evidence), and limiting the number of people who can participate in the cancer screenings (Upper-limit set). Other Indicators The aging rate was obtained from the 2015 Census [18]. Average annual household income by prefecture was obtained from the 2016 National Household Income Structure Survey [18]. Statistical Analysis The SEM was conducted to visualize the relationship between these variables and the screening rates. We defined two latent variables, medical/financial resources and screening interventions, and hypothesized that these latent variables affected screening rates directly, while medical/financial resources affected screening interventions. Other details of the prespecified model based on the hypothesis are shown in Figure 1 (a pre-analysis model built on the initial hypothesis).
(Figure 1 legend: Call, screening invitations to the residents; Recall, re-invitations to those unscreened after the call; Modality extension, providing cancer screening using a modality outside the recommendation of the national guideline; Out of evidence, providing screening with little evidence of mortality reduction; Upper limit set, limiting the number of people who can participate in the cancer screenings; Extra-region, providing an opportunity for the residents to take screening in other municipalities; After-hours, providing screenings on holidays or in the evenings; Charge-free, providing cancer screenings free of charge.) To evaluate the goodness of fit of the data to the model, a χ2 test was conducted to examine the model's reliability, and the model fit indices were calculated. The fit indices assessed in this study were the goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), standardized root mean square of residual (SRMR), comparative fit index (CFI), and root mean square error of approximation (RMSEA). In the χ2 test, the cut-off of the p-value was >0.05. For SRMR and RMSEA, the cut-off was <0.08, indicating a good fit [25]; and ≥0.10, indicating a poor fit [26]. For GFI, AGFI, and CFI, we considered a cut-off value of >0.90, indicating a good model fit [27]. The GFI and AGFI are strongly affected by sample size, and it is suggested that these factors should be assessed with other fit indices [26]. Therefore, we decided to use the p-value of the χ2 test, RMSEA, SRMR, and CFI as measures to assess the acceptability of the analysis, with cut-offs of >0.05, <0.10, <0.10, and >0.90, respectively. When all indices met the criteria, the fit of the data was considered to be good and acceptable. In the analysis, we assumed correlations of the residual errors between call and recall, between call and upper limit, and between general revenue and public health expense. The maximum likelihood estimation method is used in SEM, and this method requires that the data follow a multivariate normal distribution. However, some of the data used in this study, such as policy implementation rates and financial indicators, were not expected to follow a normal distribution. We performed normalization and standardization using the Box-Cox transformation to deal with such variables. For some of the variables related to screening interventions, the distributions were highly skewed and could not be approximated to a normal distribution by the Box-Cox transformation. The variables that could not be transformed into a normal distribution were "upper-limit" in lung and colorectal cancer screening; "extra-region," "after-hours," "out of evidence," and "modality extension" in all types of cancer screenings; and "charge-free" for all cancers except lung cancer. Since these variables had substantial biases, it was considered difficult to analyze their effects on the outcomes, so we decided to exclude these variables from the analysis. Data were analyzed using R (Ver. 4.0.3) (R Core Team, Vienna, Austria) [28] with the lavaan and semPlot packages [29,30] (an illustrative sketch of this specification is given after the data summary below). Characteristics of Collected Data Characteristics of the obtained data are shown in Tables 1 and 2. The variables related to cancer screening interventions differed by prefecture and cancer type. Calls ranged from the lowest of 43.5% to the highest of 100%. Recall ranged from merely 5.6% to 93.3%. The upper limit ranged from 0% to 100% and was exceptionally high for breast cancer screening, with a median of 52%.
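To make the prespecified model concrete, here is a minimal sketch in R. It is illustrative only: the column names (pub_health_exp, call, and so on) are hypothetical stand-ins for the study's variables, the Box-Cox helper is a simple profile-likelihood version written for this sketch (it assumes strictly positive inputs), and the structural part shown corresponds to the gastric-type model described in the Results rather than to any exact published specification.

```r
library(lavaan)

# Box-Cox normalization followed by standardization (assumes x > 0)
boxcox_transform <- function(x) {
  loglik <- function(lambda) {
    y <- if (abs(lambda) < 1e-8) log(x) else (x^lambda - 1) / lambda
    -length(x) / 2 * log(mean((y - mean(y))^2)) + (lambda - 1) * sum(log(x))
  }
  lambda <- optimize(loglik, c(-3, 3), maximum = TRUE)$maximum
  y <- if (abs(lambda) < 1e-8) log(x) else (x^lambda - 1) / lambda
  as.numeric(scale(y))
}

# df: one row per prefecture; all column names are hypothetical
vars <- c("pub_health_exp", "gen_revenue", "phn_per_1000",
          "call", "recall", "upper_limit",
          "screen_rate", "income", "aging_rate")
df[vars] <- lapply(df[vars], boxcox_transform)

model <- '
  # measurement model: two latent variables
  resources    =~ pub_health_exp + gen_revenue + phn_per_1000
  intervention =~ call + recall + upper_limit
  # structural model: resources -> intervention -> screening rate
  intervention ~ resources
  screen_rate  ~ intervention + income
  resources    ~ aging_rate
  # residual correlations assumed in the analysis
  call ~~ recall
  call ~~ upper_limit
  pub_health_exp ~~ gen_revenue
'

fit <- sem(model, data = df)  # maximum likelihood estimation by default
summary(fit, fit.measures = TRUE, standardized = TRUE)
fitMeasures(fit, c("pvalue", "rmsea", "srmr", "cfi"))
```

The last call returns exactly the four quantities used above as acceptability criteria (χ2 p-value >0.05, RMSEA <0.10, SRMR <0.10, CFI >0.90), so the decision rule can be checked directly against its output.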
It was notable that "out of evidence" showed a median of 90.5%, covering screenings for cancers such as thyroid cancer, endometrial cancer, and prostate cancer. (Legend for Tables 1 and 2: Recall, re-invitations to the unscreened after the call; Charge-free, providing cancer screenings free of charge; After-hours, providing screenings in the evenings or on holidays; Extra-region, providing an opportunity for the residents to take screening in other municipalities; Modality extension, providing cancer screening using a modality outside the recommendation of the national guideline; Out of evidence, providing cancer screenings with little evidence of mortality reduction; Upper-limit, limiting the number of people who can participate in the cancer screenings.) Results of SEM For gastric cancer, lung cancer, and breast cancer, the fit indices achieved acceptable levels, and the structure of the final model was similar for these cancer types. We could not construct a model that achieved sufficient fit indices for colorectal and cervical cancers (Figures 2-6). Regarding the cancer types for which models could be constructed, interventions for the residents directly impacted the screening rate. Annual household income had a direct impact on the screening rate, except in the case of lung cancer screening among men. However, the impact was consistently smaller than that of the intervention for residents. Compared with annual household income, the standardized path coefficients of the intervention on the screening rate were approximately twice as large for gastric cancer screening, approximately 2.8 times as large for lung cancer screening for women, and approximately 3.1 times as large for breast cancer screening. Further, interventions for residents were influenced by medical/financial resources, and the aging rate influenced the medical/financial resources. The aging rate directly impacted the screening rate for lung cancer screening for men and breast cancer screening. The indicators related to latent variables were similar among the cancer types. In every model, public health expenditure, number of public health nurses, and general revenue per capita were affected by medical/financial resources. The standardized path coefficients between these indicators and the latent variable ranged from 0.60 to 0.95 (p-value < 0.01). The latent variable indicating screening interventions affected "call", "recall", and "upper-limit" in gastric and breast cancer screening; it also affected "call", "recall", and "charge-free" in lung cancer screening. It positively affected "call", "recall", and "charge-free", while negatively affecting "upper-limit". The goodness of fit indices for the constructed models are shown in Table 3. (Legend for Table 3 and Figures 2-6: All the values are standardized. Call, sending screening invitations; Recall, re-invitations to the unscreened after the call; Charge-free, providing cancer screenings free of charge; Upper-limit, limiting the number of people who can participate in the cancer screenings. GFI, goodness of fit index; AGFI, adjusted goodness of fit index; SRMR, standardized root mean square of residual; CFI, comparative fit index; RMSEA, root mean square error of approximation.) Discussion In this study, we visualized the relationships between cancer screening rates and multiple factors relating to screening providers. For gastric, lung, and breast cancer, the fit indices of the constructed model met acceptable levels, and similar relationships were observed regardless of cancer types.
Medical/financial resources affected screening interventions in these cancer types, and interventions affected screening rates. According to these results, we can presume that screening interventions such as call and recall are implemented depending on the medical/financial resources of the providers (municipalities), and that the screening intervention mainly determines the screening rate. Results suggested that the number of public health nurses comprises one of the medical resources influencing interventions in the population. Previous studies showed that the number of public health nurses affected the screening rates [15,16]. Our results are consistent with these previous reports, suggesting that the relationship between the number of public health nurses and the screening rates is probably mediated by the screening interventions. Public health nurses are engaged in tasks related to screening interventions. Therefore, municipalities facing a shortage of public health nurses may be unable to implement these interventions adequately. The number of medical nurses, a similar indicator, was excluded in the final models. However, it was consistently related to the medical/financial resources in the model building process, regardless of cancer types (Appendix B). This study suggested that general revenue and public health expenditures are part of the financial resources that influence screening interventions. This result is consistent with previous studies indicating that financial pressure due to the lack of subsidies negatively influences screening rates [15,16]. The general revenue is the budget each municipality executes at its discretion. In Japan, cancer screening programs compete financially with other programs funded by the general revenue. It is claimed that this is one of the causes of Japan's low cancer screening rate [15,16,31]. It is not easy for public health providers to obtain a screening budget out of their limited financial resources. Under these circumstances, the providers may reduce their screening interventions to lessen their short-term expenditures. A similar case has been observed in Greece, which recently suffered a serious financial collapse and implemented a policy limiting screening participation to reduce short-term health care costs. This policy has been criticized for its risk of increasing cancer cases and future health care costs [32]. These previous studies and cases support the hypothesis derived from this study that financial pressures on screening providers will reduce their efforts to improve the screening rates. This study included resident-related variables, namely household income and age, as factors affecting screening rates. Household income positively influenced screening rates for every cancer type, consistent with previous results [3]. The results for age are consistent with previous studies and provide further information. There was an indirect, positive effect through latent variables and a direct negative effect on screening rates, as observed in the model for breast cancer. The sum of these two effects remained positive for screening rates, consistent with previous results [4]. However, the results of this study indicate that age may affect screening rates through multiple mechanisms, suggesting that the effect of age on screening rates may change with the target population and screening system differences.
Our findings showed that there may be a sequential, causal relationship in the cancer screening program, starting from medical/financial resources, through intervention, and then screening rates. A survey supporting this result was conducted among Japanese municipalities that limited screening participation, with 64% reporting that the limitation was due to the limited capacity of the screening centers and 27% reporting that it was due to financial restrictions [33]. However, the mechanisms at work between these factors have not been previously analyzed. Although previous studies report that many factors affect screening rates, how these factors interact or the underlying mechanisms revealing how they affect the screening rate remained unknown. This study is the first to analyze multiple factors affecting screening rates not independently, but as a model that considers the complex mechanism, including the interacting effects. This result would not be achieved by conventional regression analysis, and the novelty of this study is the description of this mechanism by introducing SEM. These results enable us to undertake a more informed strategy to improve cancer screening. Our results indicate that expanding medical/financial resources may help implement screening interventions and consequently improve screening rates. Therefore, identifying and resolving the resource bottleneck of each municipality may help improve screening rates. For example, for those municipalities experiencing difficulty making invitations due to a shortage of public health nurses, recruiting more public health nurses may be an effective way to improve the screening rates. Regarding the system, financial subsidies earmarked for screening use may lead to an increase in screening rates. Previous findings support this presumption in that cutting cancer screening expenses by 10% in the municipalities surveyed was associated with a 9.3% decrease in screening attendance compared with the previous year [15]. This result is consistent with the presumption made throughout our study. This study had some limitations. First, this study covered only screening by municipalities, and screening performed by companies was excluded, as the associated data for that screening were unavailable, potentially causing a selection bias. Second, this study did not consider some individual factors such as educational background [4,5] and socioeconomic status [6,7] that are known to affect screening rates. Our preliminary analysis using a multiple regression included educational backgrounds, but their impact on screening rates differed depending on the screening system, whether it was conducted by a municipality or a company. Thus, we excluded this variable from the analysis to reduce systemic error. The small sample size, given the number of prefectures in Japan, was also a limitation of our study that restricted the number of estimators in the SEM. We attempted to keep the number of estimated parameters to no more than one-fifth of the number of observations (prefectures) to acquire the most reliable result possible [34]. The results we obtained were reasonable and consistent, and the fit indices were acceptable for gastric, lung, and breast cancer. Therefore, although the sample size was small, we consider it to be reasonable. Finally, there was a limitation in the review process. Other studies may have analyzed the relationship between the screening rates and the factors affecting it using different statistical methods.
In addition, since we searched only English and Japanese literature, we did not examine previous studies in other languages. However, to the best of our knowledge, no previous studies have analyzed the entire relationship among factors affecting screening rates as a model, nor are there any studies that used SEM to investigate the relationship between screening rates and the factors affecting them. Conclusions Our findings indicate that interventions by screening providers to promote cancer screening directly impact screening rates. In addition, the availability of health care resources and the economic status of screening providers may affect screening rates through these interventions. These results indicate that by improving the financial situation of screening providers and expanding their medical resources, screening providers may strengthen their interventions and thus improve screening rates. The results of this study suggest that identifying and supporting the financial/medical resources lacking in each municipality could improve screening interventions and, consequently, increase screening rates. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph191811477/s1, Table S1: Data sources of the interventions for the residents; Table S2: The number of screened residents obtained from e-stat; Table S3: Data sources for other variables used in the analysis. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Regarding Gastric Cancer Screening Participants For gastric cancer screening, we could not include the number of participants in the biennial endoscopic screening due to data availability. The reasons were as follows: the Japanese guidelines for gastric cancer screening recommend both an annual gastric X-ray examination and a biennial gastric endoscopy. To calculate the number of participants in biennial endoscopic screening, the number of examinees in the subject year and the previous year, and the number of examinees who participated in cancer screening in two consecutive years, are required. However, the number of people who had an endoscopic screening for two consecutive years was not included in the data; hence, the correct number of participants in the endoscopic screening could not be ascertained. Therefore, only the participants of gastric X-ray screening were used in this study for accuracy. This may lead to a relative overestimation of the number of examinees in prefectures where endoscopic examinations cannot be performed due to financial/medical resource limitations, and an underestimation of the number of examinees in prefectures where endoscopic examinations are performed more actively and X-ray examinations are reduced.
6,150.2
2022-09-01T00:00:00.000
[ "Medicine", "Economics" ]
Spectroscopic Observations of Magnetic Reconnection and Chromospheric Evaporation in an X-shaped Solar Flare We present observations of distinct UV spectral properties at different locations during an atypical X-shaped flare (SOL2014-11-09T15:32) observed by the Interface Region Imaging Spectrograph (IRIS). In this flare, four chromospheric ribbons appear and converge at an X-point where a separator is anchored. Above the X-point, two sets of non-coplanar coronal loops approach laterally and reconnect at the separator. The IRIS slit was located close to the X-point, cutting across some of the flare ribbons and loops. Near the location of the separator, the Si IV 1402.77 Å line exhibits significantly broadened line wings extending to 200 km/s but an unshifted line core. These spectral features suggest the presence of bidirectional flows possibly related to the separator reconnection. At the flare ribbons, meanwhile, the hot Fe XXI 1354.08 Å line shows blueshifts and the cool Si IV 1402.77 Å, C II 1335.71 Å, and Mg II 2803.52 Å lines show evident redshifts up to a velocity of 80 km/s, which are consistent with the scenario of chromospheric evaporation/condensation. Introduction Solar flares (see a recent review by Fletcher et al. 2011) are energetic events in the solar atmosphere, which are believed to be powered by magnetic reconnection in the corona (Priest & Forbes 2002; Shibata & Magara 2011). The energy released by reconnection usually heats the local plasma and accelerates particles. Through thermal conduction and/or non-thermal particle beams, the energy is then transported downward to the lower atmosphere. Consequently, the chromospheric plasma is heated and emits enhanced radiation, which outlines the flare ribbons. An impulsive energy deposition leads to a local pressure excess that drives the heated plasma up into the corona, referred to as chromospheric evaporation (Neupert 1968; Hirayama 1974; Acton et al. 1982). The evaporated hot plasma fills the flare loops, which are clearly visible in soft X-ray and EUV passbands. Magnetic reconnection, as the dominant energy release mechanism in flares, has been reported in spectroscopic observations from different instruments. Using the Solar Ultraviolet Measurements of Emitted Radiation (SUMER; Wilhelm et al. 1995) spectrometer on the Solar and Heliospheric Observatory (SOHO), Innes et al. (2003a,b) observed evident blue-wing enhancements at 800-1000 km s −1 in the Fe xxi 1354.08Å line on the top of flare arcades (viewed at the solar limb), which are associated with supra-arcade (or reconnection) downflows (McKenzie & Hudson 1999). A highly blueshifted jet (with a velocity up to 600 km s −1 along the line of sight) and a redshifted jet (∼300 km s −1 ) were also recorded by SUMER in the Fe xix 1118.07Å line near the top of erupting loops, both of which were explained as reconnection outflows (Wang et al. 2007). In the era of the Hinode EUV Imaging Spectrometer (EIS; Culhane et al. 2007), Hara et al. (2011) reported reconnection outflows with a velocity of ∼200-400 km s −1 in the Fe xxiv 192.03Å and Ca xvii 192.86Å lines as well as reconnection inflows with a velocity of ∼20 km s −1 in the Fe xii 195.12Å and Fe x 184.54Å lines around the loop-top region. In addition, Simões et al. (2015) detected high redshifts (40-250 km s −1 ) in the EIS Fe xxiv 192.03Å and Fe xii 192.39Å lines at a coronal source, interpreted as reconnection downflows.
Recently, using the high-resolution UV spectra from the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al. 2014), Tian et al. (2014) reported a large redshift (∼125 km s −1 ) in the Fe xxi 1354.08Å line on the cusp-shaped structure and interpreted the redshift as a signature of reconnection downflows. The authors also observed a downward-moving blob manifested as a greatly enhanced redshifted component at ∼60 km s −1 in the Si iv 1402.77Å line. Moreover, Reeves et al. (2015) found intermittent fast flows of 200 km s −1 in dome-shaped coronal loops in the IRIS Si iv 1393.76Å line and considered the fast flows to be a result of magnetic reconnection between an erupting prominence and the pre-existing overlying magnetic field. Reconnection signatures were also observed in the cooler Hα and Ca ii 8542Å lines by the ground-based Fast Imaging Solar Spectrograph (FISS; Chae et al. 2013), manifested as bidirectional outflows with velocities of ±(70-80) km s −1 (Hong et al. 2016). Note that evidence of magnetic reconnection has also been found in small-scale explosive events (Dere et al. 1989). For example, broad non-Gaussian Si iv line profiles with both wings extending to hundreds of km s −1 have been observed and explained as the consequence of bidirectional reconnection jets (Dere et al. 1991; Innes et al. 1997, 2015; Tian et al. 2016). Chromospheric evaporation, i.e., a dynamic response to the energy deposition from magnetic reconnection, can be detected by Doppler shift measurements in spectral lines. In general, the evaporated (or upward) plasma motions generate blueshifts (or blueshifted components) in soft X-ray and EUV lines. Based on momentum balance (Canfield et al. 1987, 1990), chromospheric evaporation is usually accompanied by a compression of chromospheric plasma, called chromospheric condensation, which produces redshifts (or red-wing enhancements) in some relatively cool lines. Blueshifts (redshifts) caused by chromospheric evaporation (condensation) have been reported in a large number of studies, for example, blueshifted components with velocities of 200-400 km s −1 in the Ca xix 3.18Å line (Antonucci et al. 1982, 1985; Antonucci & Dennis 1983; Zarro & Lemen 1988; Wülser et al. 1994; Ding et al. 1996) from the Bent and Bragg Crystal Spectrometer (BCS; Acton et al. 1980) on board the Solar Maximum Mission (SMM) and Yohkoh/BCS (Culhane et al. 1991), blueshifts of 60-300 km s −1 in the Fe xix 592.23Å line (Teriaca et al. 2003, 2006; Del Zanna et al. 2006; Brosius & Phillips 2004) from the Coronal Diagnostic Spectrometer (CDS; Harrison et al. 1995) on board SOHO, and redshifts of tens of km s −1 in some chromospheric and transition region lines (like Hα, He ii 303.78Å, O iii 599.59Å, and O v 629.73Å; Wülser et al. 1994; Ding et al. 1995; Czaykowska et al. 1999; Teriaca et al. 2003, 2006; Brosius 2003; Kamio et al. 2005; Del Zanna et al. 2006). In particular, blueshifts and redshifts can appear at a given flaring location in different emission lines, as observed by Hinode/EIS (Milligan & Dennis 2009; Chen & Ding 2010; Li & Ding 2011; Doschek et al. 2013), confirming the coexistence of chromospheric evaporation and condensation.
Recently, IRIS also observed blueshifts of hundreds of km s −1 in the hot Fe xxi 1354.08Å line and redshifts of tens of km s −1 in the cool Si iv 1402.77Å (or 1393.76Å), C ii 1335.71Å (or 1334.53Å), and Mg ii 2803.52Å (or 2796.35Å and 2791.59Å) lines at flare ribbons or kernels, which have been explained by chromospheric evaporation and condensation, respectively (Tian et al. 2014; Young et al. 2015; Li et al. 2015a; Graham & Cauzzi 2015; Brosius & Daw 2015; Battaglia et al. 2015; Li et al. 2015b; Polito et al. 2015, 2016; Sadykov et al. 2015, 2016; Dudík et al. 2016). Note that the Fe xxi line is usually blueshifted as a whole, while the Mg ii, C ii, and Si iv lines typically only show a red-wing enhancement. According to the observed Doppler shifts (or, depending on the heating rate), chromospheric evaporation can be divided into two types: gentle evaporation and explosive evaporation (Fisher et al. 1985a,b,c; Milligan et al. 2006a,b). When the hot lines (such as Fe xxi, Fe xix, and Ca xix) show blueshifts and the cool lines (such as Hα, He ii, O iii, and Si iv) show redshifts, the case is referred to as explosive evaporation. When only blueshifts are detected, this corresponds to gentle evaporation. Both types of evaporation have been observed in a single flare (Brosius 2009; Li & Ding 2011). In addition, explosive evaporation can occur in major flares (Milligan et al. 2006b; Veronig et al. 2010) as well as in microflares (Brosius & Holman 2010; Chen & Ding 2010). In this paper, we present the spatio-temporal variation of the UV spectra of Si iv, C ii, and Mg ii observed by IRIS for an atypical X-shaped flare, in which magnetic reconnection takes place at a separator. The separator reconnection creates four chromospheric ribbons that converge at an X-point, as revealed by the IRIS slit-jaw 1330Å images (SJIs). Accordingly, two sets of non-coplanar flare loops take part in the reconnection, as shown in the EUV images from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) on board the Solar Dynamics Observatory (SDO). The IRIS slit was located near the X-point, capturing some of the X-shaped flare ribbons and also the non-coplanar flare loops involved in the separator reconnection. From the observed spectra, we detect convincing upward and downward reconnection outflows near the location of the separator in the Si iv line, which are rarely reported in previous flare studies. In addition, we find some interesting features corresponding to chromospheric condensation at the flare ribbons, such as entirely redshifted Si iv line profiles. Observations and Data Reduction The X-shaped flare is a GOES M2.3 event that occurred on 2014 November 9 in the active region NOAA 12205 near the disk center (N14E11). It started at 15:24 UT, peaked at 15:32 UT, and lasted until 16:05 UT (see the GOES 1-8Å soft X-ray flux in Figure 1(a)). IRIS observed this flare from 15:17 UT to 16:05 UT (indicated by the short vertical blue lines in Figure 1(a)), covering the entire rise phase and almost all of the decay phase. The IRIS SJIs at 1330Å show that four chromospheric ribbons appear in a quadrupolar magnetic field and converge at an X-point (Figure 1(b) and Animation 1) where a separator is anchored. In addition, two sets of non-coplanar coronal loops approach laterally and reconnect at the separator, as revealed by the AIA images (Figure 1(b) and Animation 1).
IRIS also observed spectra over a small region (marked by the white dotted lines in Figure 1(b)) to the west of the X-point, with an offset of ∼4″. This small region contains some of the flare ribbons as well as coronal loops that are associated with the separator reconnection. The slit has a width of 0.″33 and is located in the middle of the SJIs, which have a pixel scale of 0.″167. For this flare, IRIS observed an area of 119″×119″ in SJIs at 1330, 2796, and 2832Å with a cadence of 37 s; the former two passbands are sensitive to the plasma of the upper chromosphere and the latter to the upper photosphere (De Pontieu et al. 2014). The slit scanned a small area of 6″×119″ in four steps (i.e., each step moves 2″ across the slit). Each run took 37 s, with an exposure time of 8 s at each step. In the present study, we focus on the SJIs at 1330Å for the X-point region (an area of 36″×36″; see the white box in Figure 1(b)) as well as the spectra from the first and fourth steps (referred to as S1 and S4 hereafter and marked by the two magenta dotted lines in Figure 1(b)) within the time range of 15:20-15:50 UT (denoted by the two magenta dash-dotted lines in Figure 1(a)). Note that S1 is the closest to the X-point and S4 the farthest. We use the IRIS level 2 data, which have been processed with the subtraction of dark current as well as corrections for flat field, geometry, and wavelength. The spectra studied here include the Mg ii line at 2803.52Å (with a formation temperature of ∼10 4.0 K), the C ii line at 1335.71Å (∼10 4.3 K), the Si iv line at 1402.77Å (∼10 4.8 K), and the Fe xxi line at 1354.08Å (∼10 7.0 K). The chromospheric Mg ii and C ii lines are optically thick (i.e., formed through a complex radiative transfer process) and usually show a central reversal in the line core. We therefore adopt a moment method to analyze these two lines and obtain the spectral parameters, i.e., total intensity (the zeroth order moment), line shift (the first order moment), and line width (the second order moment). The transition region Si iv line is usually regarded as an optically thin line that can be fitted by a Gaussian function (Brannon et al. 2015). It should be noted, however, that in some observations the Si iv line profiles deviate significantly from a Gaussian shape. Considering this, we first apply the moment analysis to all of the observed Si iv profiles; then, at specific locations that exhibit Gaussian-shaped profiles, we also implement a single or multiple Gaussian fitting. For the coronal Fe xxi line, which is optically thin, we simply apply a Gaussian fitting to derive the spectral parameters. Note that the Fe xxi line is blended with some other weak lines; thus, we adopt a multiple Gaussian fitting to separate the Fe xxi component from the other components (Li et al. 2015a). To calculate the Doppler velocity from the line shift, we first determine the reference line center by averaging the observed line centers before the flare onset for the Mg ii, C ii, and Si iv lines. For the hot Fe xxi line, which cannot be seen before the flare onset, we use the theoretical line center, i.e., 1354.08Å. This value is very close to the reference line centers independently determined by Young et al. (2015) and Brosius & Daw (2015). The uncertainty in the Doppler velocity for all the lines is estimated to be less than 10 km s −1 (Li et al. 2015a).
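To make the spectral analysis concrete, the following is a schematic sketch (not IRIS pipeline code) of the moment method and the single-Gaussian fitting described above, applied to one line profile. The arrays wl (wavelength in Å) and intensity are assumed inputs from the level 2 spectra; continuum treatment and error propagation are omitted.

import numpy as np
from scipy.optimize import curve_fit

C_KM_S = 2.998e5  # speed of light in km/s

def moments(wl, intensity):
    """Zeroth, first, and second order moments of a line profile."""
    total = np.trapz(intensity, wl)                        # total intensity
    center = np.trapz(wl * intensity, wl) / total          # line shift (centroid)
    width = np.sqrt(np.trapz((wl - center) ** 2 * intensity, wl) / total)
    return total, center, width

def doppler_velocity(center, ref_center):
    """Doppler velocity in km/s; positive values denote redshift."""
    return C_KM_S * (center - ref_center) / ref_center

def gaussian(wl, amp, cen, sigma, bg):
    return amp * np.exp(-0.5 * ((wl - cen) / sigma) ** 2) + bg

# For an optically thin, Gaussian-shaped profile such as Si IV 1402.77 A:
# popt, _ = curve_fit(gaussian, wl, intensity,
#                     p0=[intensity.max(), 1402.77, 0.1, intensity.min()])
# v = doppler_velocity(popt[1], 1402.77)

For the optically thick Mg ii and C ii lines only the moments would be used, while the blended Fe xxi profile would require a multi-component fit of the kind illustrated later for the Si iv wings.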
In this study, we also use the AIA EUV and UV images, with a pixel scale of 0.″6 and cadences of 12 s and 24 s, respectively. The images of AIA and IRIS are co-aligned by comparing the sunspot features visible in the AIA 1700Å and SJI 2832Å images (as shown in Animation 2). The IRIS SJIs themselves are also co-aligned by correcting a drift of ∼2″ in the X-direction throughout the flare (see the slit position in Animation 2). The uncertainty in the co-alignments of the different images is estimated to be ∼1″.
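As an illustration of this kind of cross-instrument co-alignment, a feature such as a sunspot visible in both channels can be registered by cross-correlation. The sketch below is not the procedure actually used in this work; it simply shows the idea with scikit-image, using a synthetic shifted image in place of real AIA 1700Å and SJI 2832Å maps resampled to a common plate scale.

import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
aia_map = rng.random((256, 256))                  # stand-in for an AIA 1700 map
sji_map = np.roll(aia_map, (3, -5), axis=(0, 1))  # stand-in for a shifted SJI map

# Sub-pixel offset between the two images from the phase-correlation peak.
shift, error, _ = phase_cross_correlation(aia_map, sji_map, upsample_factor=10)
print("offset (y, x) in pixels:", shift)

Multiplying the recovered pixel offset by the plate scale gives the pointing correction; a residual uncertainty of order 1″, as quoted above, would remain from feature evolution and resampling.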
Spatial Context of the Event The morphological evolution of this X-shaped flare has been well described in Li et al. (2016). In short, the four chromospheric ribbons approach each other and converge at the X-point around the flare peak time; then they move outward, with the two ribbons on the right separating away from the polarity inversion line (the ribbon motion pattern is also shown in Figure 2(a)). The observed ribbon motions as well as the reconstructed magnetic topology (see Figure 3 and more details in Li et al. 2016) suggest that magnetic reconnection takes place at a separator connecting to the X-point. More specifically, the inward and outward motions of the ribbon brightenings illustrate that the reconnection occurs along a curved separator (or current sheet) consisting of a vertical part above the X-point and a horizontal part extending to the right (see the sketch in Figure 4 of Li et al. 2016). The IRIS slit cut across parts of the flare ribbons near the X-point (see Figure 2 and Animation 1). It also crossed some flaring loops and perhaps the curved separator as well (Figure 3 and Animation 1). These observations provide us with an opportunity to study the dynamics near the location of separator reconnection and at the flare ribbons from the observed UV spectra. From Figure 2 and Animation 1, it is seen that the first step of the slit, i.e., S1, is 6″ closer to the X-point than the fourth step, S4, and that S1 cuts across only one of the flare ribbons while S4 cuts both ribbons to the right of the X-point. To investigate the ribbon dynamics, we present space-time diagrams of different spectral parameters for S1 and S4 (Figures 4-7) as well as some typical line profiles (Figures 8 and 9) at three ribbon pixels R1, R2, and R3 (R1 is located at the north ribbon cut by S1, and R2 and R3 are located at the north and south ribbons cut by S4, respectively). In addition, we show the line profiles (Figures 10 and 11) for some other pixels outside the flare ribbons, labeled L1, L2, and L3 (L1 is on S4, and L2 and L3 are on S1), which display quite different dynamic features. The three pixels L1-L3 are presumably located on some loop structures or even at the separator, as suggested in Figure 3. It should be noted that there is an overlapping effect along the line of sight, and some of the sample pixels may correspond to different structures in different passbands. In the following section, we first present the spectral features at the flare ribbons, particularly at locations R1-R3 (Sect. 4.1); then we show the distinct dynamic features at locations L1-L3 outside the flare ribbons (Sect. 4.2). General Picture from the Moment Analysis Based on the moment method, we generate space-time diagrams of the total intensity, line shift (or Doppler velocity), and line width of the Si iv line for S1, as shown in Figure 4. One can see that the north ribbon (where R1 is located) spreads up toward the north as time evolves (Figure 4(c)). This can also be seen in the SJI 1330Å and AIA 1600Å images (Animation 1). Along with the apparent motion of the ribbon, evident redshifts appear in the Si iv line, as revealed in the velocity map (see the green contour in Figure 4(d)). These redshifts also match some broadenings of the line (see the same green contour in Figure 4(e)). Note that there are some other brightenings, redshifts, and broadenings around R1 before the flare peak time (∼15:30 UT), which are likely related to the dynamics at the east footpoint of a flux rope (marked by the black arrow in Figure 3) that erupts early in this event. Similar features are also visible in the cooler lines. From the moment maps of the C ii and Mg ii lines in Figure 5, it can be seen that evident redshifts appear and extend as the ribbon spreads (see the green contours). The redshifts are also coincident with significant broadenings in these two lines. Slightly different from S1, S4 cuts both the north and south ribbons, where R2 and R3 are located, respectively. The two ribbons move apart from each other as the flare evolves, which can be clearly seen from the intensity maps for S4 in Figures 6 (Si iv) and 7 (C ii and Mg ii). Along with this, two redshift (and broadening) bands spread out. Such outward motion is more significant at the south ribbon (see the green contours in these figures). We find that the lifetime of such dynamic features, i.e., simultaneous redshifts (mostly >20 km s −1 ) and line broadenings (above 50% of the maximum), particularly in the Si iv line, is about 1-8 minutes at a given site, and the spatial scale is about 1-5 arcsec along the slit. Line Profiles at R1-R3 We further present the temporal evolution of the spectra at the three ribbon locations, i.e., R1 on S1 and R2 and R3 on S4, and show typical line profiles with prominent dynamic features at some selected times (marked by square symbols in Figures 4-9). The top panels of Figure 8 show the results for the Si iv, C ii, and Mg ii lines at R1. It is seen that all these cool lines are brightened, redshifted, and broadened around the flare peak time (denoted by a square). The line profiles at the peak time (15:32 UT) are over-plotted with green solid curves. We can see that the Si iv profile is Gaussian-like and redshifted as a whole, and can be well fitted by a single Gaussian function (red dashed curve). The velocity derived from the Gaussian fitting is 56 km s −1 , very similar to the value of 59 km s −1 derived from the moment method. The C ii and Mg ii lines also show significant redshifts, with velocities of 62 and 38 km s −1 , respectively. In particular, these two line profiles do not show a central reversal, which is a common feature of the C ii and Mg ii profiles observed in quiet-Sun regions (see the white curves). All these spectral features at R1, including the entirely redshifted Si iv and the singly peaked C ii and Mg ii, are also seen in the line profiles at R2 and R3, as plotted in the middle and bottom panels of Figure 8. Note that the Si iv line is slightly saturated at these ribbon locations. We also check the hot Fe xxi line at R1-R3, as shown in Figure 9. The Fe xxi emission is not evident at R1 at 15:32 UT and might be hidden in the enhanced continuum background; yet this hot emission is clearly visible at R2 and particularly at R3. We then use a multiple Gaussian function to fit the line profile at R3 and obtain a blueshift velocity of 28 km s −1 for Fe xxi. Note that the Fe xxi emission at R2 (marked by the red arrow) corresponds to a strong blueshift velocity of ∼170 km s −1 .
In addition, we notice that all three locations show an enhanced continuum emission at 15:32 UT, which is a common feature of flare ribbons. Interpretation and Discussion The blueshifts in the hot Fe xxi line and, in particular, the evident redshifts in the cool Si iv, C ii, and Mg ii lines, along with the ribbon spreading and line broadenings in the impulsive phase of the flare, are well consistent with the scenario of chromospheric evaporation/condensation (e.g., Tian et al. 2015; Li et al. 2015a) caused by an energy deposition at the flare ribbons. The existence of both blueshifts (indicative of upflows) and redshifts (downflows) revealed by different spectral lines also suggests that an explosive evaporation occurs in this flare. It is interesting that in this X-shaped flare the Si iv line profiles at the ribbons are redshifted as a whole and thus can be well fitted by a single Gaussian function. This result is somewhat different from previous studies. Li et al. (2015a) analyzed an X1.0 flare, also observed by IRIS, on 2014 March 29 and found that the Si iv profiles at four ribbon pixels (see the top right panels of Figures 3-6 in their paper) exhibited a redshifted component plus a rest component, which were better fitted by a double Gaussian function. Tian et al. (2015) also reported that the Si iv line is often not entirely redshifted but rather shows an evident red-wing enhancement at the ribbons. It is worth mentioning that some fully redshifted Si iv profiles were reported by Warren et al. (2016) and also reproduced in numerical simulations by Reep et al. (2016). However, those Si iv profiles generally exhibit multiple components and might be better fitted by a double or multiple Gaussian function. The entirely redshifted Si iv line reported here implies that this line is formed within a layer (around the transition region) that is moving downward as a whole, most probably due to chromospheric condensation; the absence of a rest component suggests that IRIS may spatially resolve the condensation region in this particular flare. Another possibility is that almost all the plasma at transition region temperatures is pushed downward, presumably by the overpressure of a local energy deposition, which might imply that considerable energy is deposited around the narrow transition region. We notice that in this X-shaped flare the ribbons near the X-point are not correlated with any evident non-thermal hard X-ray emission, while in the X1.0 flare of Li et al. (2015a) and also the events in Tian et al. (2015) and Warren et al. (2016), hard X-ray sources are co-spatial with the ribbons. Thus, we speculate that the released energy near the X-point in this flare might be more thermal and deposited primarily in a relatively higher and narrower layer (say, the transition region) as compared with the non-thermal case (usually in the chromosphere). The shape of the Si iv line profiles, i.e., wholly shifted or not, and their potential relation to the energy deposition will be studied in detail in more flare events as well as in numerical simulations in the future. In this X-shaped flare, we find that the time and spatial scales of the ribbon dynamics are about 1-8 minutes and 1-5 arcsec, respectively. The spatial scale is similar to that of the X1.0 flare (2-3 arcsec) reported by Li et al. (2015a), but the time scale is a little longer than the one (1-2 minutes) reported in that flare. As pointed out by Li et al. (2015a),
these scales are determined by several factors, such as the spread speed of the flare ribbon, the duration of energy deposition, the hydrodynamic time scale, and the temporal and spatial resolutions of the observations. We find that in this X-shaped flare the spread speed of the flare ribbons, especially the north and south ribbons cut by S4, is ∼15 km s −1 around the flare peak time and decreases to several km s −1 in the late decay phase. These speeds are smaller than the apparent speed of the ribbon front in the X1.0 flare, which is ∼20 km s −1 . The smaller speeds here may lead to a narrower band, say ∼1 arcsec, with dynamic features in the late decay phase. The relatively longer time scale may be explained as follows. Firstly, the slight drift of the IRIS slit during the observations may have caused the time scale of the dynamic features to appear a little longer than the actual one. Secondly, the energy is released and deposited on many small-scale strands within a single IRIS pixel; this multi-thread scenario was modeled by Reep et al. (2016), who reproduced a long-duration redshift of the Si iv line. Finally, we do not exclude the possibility that a longer duration of energy deposition occurs in this X-shaped flare as compared with the X1.0 flare of Li et al. (2015a). We notice that the time scale of the dynamics in the Si iv line appears to be longer than the time scale of energy deposition in most flare models. Nevertheless, the lasting redshifts of up to 8 minutes at some ribbon locations are most likely contributed by chromospheric condensation flows rather than cooling downflows, as discussed in Brosius (2003) and Tian et al. (2015), for the following reasons. (1) These redshifts are co-spatial with the ribbon brightenings (i.e., signatures of magnetic reconnection) as represented by the enhancements in SJI 1330Å and AIA 1600Å, which continue to show up until 15:42 UT (see Figure 2(a)). (2) The redshifts are accompanied by hot Fe xxi emission that exhibits blueshifts (indicative of chromospheric evaporation; as seen in Figure 9(b)). Note that some redshifts and also brightenings are still visible in the Si iv line in the late decay phase (for example, after 15:42 UT), which might be unrelated to chromospheric condensation and instead caused by cooling downflows. Spectral Features Outside the Flare Ribbons As described above, evident redshifts along with significant broadenings in the cool Si iv, C ii and Mg ii lines show up at the flare ribbons around the flare peak time, indicative of chromospheric condensation. Meanwhile, significant line broadenings but no evident intensity enhancements or line shifts appear at some other places outside the flare ribbons, such as at locations L1-L3. From Figures 4-7, it is seen that L1 (L2) shows clear broadenings before (after) the flare peak time (indicated by plus symbols), and L3 displays intermittent broadenings throughout the flare observations. We examine these broadened line profiles (Figure 10) and find that their line cores are almost unshifted and that both of their line wings are markedly enhanced, extending to hundreds of km s −1 , particularly in the Si iv line. Note that at L1-L3 hardly any hot Fe xxi emission is seen (see Figure 11) when the cool lines show broadened wings. Broadened Line Wings From Figures 6 (for Si iv) and 7 (for C ii and Mg ii), it is seen that all the cool lines exhibit evident broadenings at L1 several minutes before the flare peak time.
The temporal evolutions of the spectra at L1 are plotted in the top row of Figure 10 (black-white images). Some featured line profiles, for example at 15:28 UT (indicated by a plus symbol), are over-plotted in the figure (yellow curves). One can see that the Si iv profile deviates from a single Gaussian shape, with the line core at rest. Both the blue and red line wings are significantly enhanced and extend to 200 km s −1 . Here we use a multiple Gaussian function to fit the line profile and obtain a blueshifted component with a velocity of 64 km s −1 and a redshifted component with a velocity of 59 km s −1 . The C ii and Mg ii lines also show broadened wings with an unshifted and centrally reversed core. In particular, these two profiles exhibit an extended and more intense red wing (i.e., a red asymmetry). Broadened line wings are found at L2 as well, but a few minutes after the flare peak time. The featured line profiles, for example at 15:39 UT (indicated by a plus symbol), are given in the middle row of Figure 10. It is seen that the Si iv line is unshifted in the core and notably shows some bumps in the far wings, somewhat similar to the featured Si iv profile at L1 (15:28 UT). Here we also use a multiple Gaussian function to fit the Si iv profile and derive a blueshifted component with a velocity of 150 km s −1 and a redshifted component with a velocity of 151 km s −1 . Moreover, the Mg ii and especially the C ii lines at L2 exhibit a red asymmetry, similar to L1 as well. Line broadenings in Si iv, C ii and Mg ii are also seen at L3; however, there are some differences among L1-L3. The line broadenings at L3 start before the flare onset and persist for a long time (about 30 minutes; see Figures 4 and 5), while the ones at L1 and L2 appear only during the flare and last for a relatively short time (several minutes). We notice that the broadenings at L3 appear intermittently, as can be seen from the temporal evolution of the Si iv, C ii, and Mg ii spectra (see the bottom row of Figure 10). For each spectral line, we plot a featured profile from 15:40 UT (marked by a triangular symbol). One can see that the Si iv line shows bumps in both wings, with velocities of about ±100 km s −1 from a multiple Gaussian fitting, and that the C ii line displays an obvious red asymmetry, which looks quite similar to the ones at L1 (15:28 UT) as well as at L2 (15:39 UT). Interpretation and Discussion The dynamic features shown at L1-L3 are distinct from the spectral features at the flare ribbons. The broadened line wings, especially the bumps at 100-150 km s −1 in the Si iv line profiles, cannot be caused by micro-turbulence but are very likely a result of bulk plasma flows, similar to the high-speed jets in explosive events (Innes et al. 1997, 2015). We propose that the Si iv profiles with bumps in both wings indicate the existence of bidirectional (i.e., upward and downward) flows located closely within the formation layer of the line. In addition, the C ii and Mg ii profiles with a red asymmetry may imply downward flows in the formation layer of the two lines. Based on the flare morphology, L1 and L2 are supposed to be at the location of the separator, where magnetic reconnection occurs in the X-shaped flare. The strong bidirectional flows detected at L1 before the flare peak time (15:28 UT) and at L2 after the peak time (15:39 UT) are thus likely the upward and downward reconnection outflows around the separator.
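The multiple-Gaussian decomposition applied to these non-Gaussian Si iv profiles can be sketched as follows. This is an illustrative three-component model (a near-stationary core plus blueshifted and redshifted wing components), not the exact 3- or 5-Gaussian setups used for L1-L3; the initial guesses are placeholders.

import numpy as np
from scipy.optimize import curve_fit

WL0 = 1402.77     # Si IV reference wavelength in Angstrom
C_KM_S = 2.998e5  # speed of light in km/s

def three_gaussians(wl, a1, c1, s1, a2, c2, s2, a3, c3, s3, bg):
    g = lambda a, c, s: a * np.exp(-0.5 * ((wl - c) / s) ** 2)
    return g(a1, c1, s1) + g(a2, c2, s2) + g(a3, c3, s3) + bg

# Wavelength offset corresponding to +/-100 km/s wing components.
dl = 100.0 / C_KM_S * WL0   # about 0.47 Angstrom

p0 = [1.0, WL0, 0.10,       # near-stationary core component
      0.3, WL0 - dl, 0.15,  # blueshifted wing component
      0.3, WL0 + dl, 0.15,  # redshifted wing component
      0.0]                  # flat background

# Given observed arrays wl and intensity:
# popt, _ = curve_fit(three_gaussians, wl, intensity, p0=p0)
# v_blue = C_KM_S * (popt[4] - WL0) / WL0   # velocity of the blue component
# v_red  = C_KM_S * (popt[7] - WL0) / WL0   # velocity of the red component

The fitted component centroids, converted to Doppler velocities, give numbers directly comparable to the roughly 60-150 km s −1 component velocities quoted for L1-L3.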
From the AIA 131Å images (∼10 MK) at 15:28 UT, as plotted in Figure 3, one can see that L1 corresponds to the hot flux rope along the line of sight. We speculate that magnetic reconnection probably occurs at a relatively low site under the flux rope, where the reconnection outflows (both upward and downward) are mainly captured in the Si iv line. Moreover, we find that the north and south ribbons of the flare seem to separate from each other starting from ∼15:28 UT around L1 (see Animation 1). This supports the scenario that L1 could be the reconnection site at that time. We conjecture that the strong bidirectional flows detected at L2 at 15:39 UT are also possible reconnection outflows, produced in the decay phase of the flare. From Figure 2(a), one can see that the footpoint brightenings continue to appear at the outer edge of the X-shaped ribbon until 15:42 UT, demonstrating that the reconnection proceeds into the decay phase. The reconstructed magnetic topology (as plotted in the left panels of Figure 3) also provides some insight into the separator reconnection at L1 and L2. It is seen that the slit at steps S1 and S4 crossed the location of the separator (denoted by the red curve) calculated from the model. Some small spatial offsets may come from the uncertainty in the connectivity of the magnetic topology and/or be caused by the projection effect. Therefore, it is conceivable that bidirectional reconnection outflows can be detected at L1 and L2. The bidirectional reconnection outflows at L1 and L2 are mainly captured in the Si iv line, indicating that the separator reconnection (at least part of it) most likely occurs in the transition region. Transition-region reconnection could also produce some dynamic responses in the lower atmospheric layers, such as downward outflows visible in chromospheric lines. The Mg ii and especially the C ii line profiles at L1 (15:28 UT) and L2 (15:39 UT) display a red asymmetry, implying that the downward reconnection outflows (or reconnection downflows) are observed in the chromosphere. It is worth noting that bidirectional reconnection outflows revealed in Si iv are usually reported in small-scale explosive events but very rarely in large-scale flare events. This is because flare reconnection generally takes place in the corona, and the reconnection outflows are primarily detected in hot coronal lines, including Fe xxi and Fe xix from SUMER, Fe xxiv and Ca xvii from EIS, as well as Fe xxi from IRIS. In this X-shaped flare, however, we detect convincing reconnection signatures of bidirectional outflows in the Si iv line at a transition-region temperature. This is owing to the special geometry of the separator reconnection: the reconnection in this flare can happen at a very low layer, as the chromospheric ribbons converge at the X-point. To the best of our knowledge, this is the first time that such reconnection signatures have been detected in the cool Si iv line during an X-shaped flare. Finally, the intermittent bidirectional flows revealed in the Si iv line at L3 could also correspond to repetitive upward and downward reconnection outflows. In fact, intermittent loop brightenings occurred around L3 (see the SJIs at 1330Å in Animation 3), where flux cancellations can also be seen in the HMI magnetograms (indicated by the arrows in Animation 3). These intermittent brightenings look quite similar to the explosive events that occur repetitively due to small-scale reconnections in the transition region.
Such sporadic small-scale reconnections occurring close to the X-point start before the flare onset and persist throughout the flare. We conjecture that these reconnections might play a role in triggering the flare separator reconnection by destabilizing the magnetic structure and enabling the coronal loops to flow more easily into the X-point. Summary In this paper, we have presented the spatio-temporal variation of the UV spectra, including the Si iv, C ii, and Mg ii lines, for an atypical X-shaped flare observed with IRIS. Distinct spectral features are found at and outside the flare ribbons, which can be explained by chromospheric evaporation/condensation and separator magnetic reconnection, respectively. At the flare ribbons, evident redshifts (up to 80 km s −1 ) along with line broadenings are present in the cool Si iv, C ii, and Mg ii lines in the impulsive phase of the flare. Meanwhile, blueshifts are observed in the hot Fe xxi line. These blueshifts/redshifts are well consistent with the scenario of chromospheric evaporation/condensation and suggest an explosive evaporation occurring in the flare. We find that the dynamic features spread out in the same manner as the ribbon separation, with a time scale of 1-8 minutes and a spatial scale of 1-5 arcsec. We also find that the Mg ii and C ii lines are singly peaked, without a central reversal in the line core, which is consistent with earlier studies. An interesting result is that the Si iv line is entirely redshifted with no rest component. This is different from some previous studies and will be investigated in a future work. More importantly, at some locations outside the flare ribbons, all the cool lines exhibit significant line broadenings along with low intensity and little line shift. In particular, the Si iv line presents broadened blue and red wings that extend to 200 km s −1 . Such broadened wings likely indicate strong bidirectional flows, which can be interpreted as upward and downward outflows produced by the separator reconnection, based on the observed SJIs and AIA images as well as the reconstructed magnetic topology. Such spectroscopic signatures of separator reconnection have rarely been reported in previous flare studies. Moreover, some intermittent bidirectional outflows are detected before and during the flare and could play a role in triggering the separator reconnection. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at the NASA Ames Research Center and major contributions to downlink communications funded by the Norwegian Space Center (NSC, Norway) through an ESA PRODEX contract. SDO is a mission of NASA's Living With a Star Program. The authors thank Dana Longcope for valuable discussions and thank the referee for constructive comments that improved the manuscript. Y.L., M.D.D., and W.Q.G. are supported by NSFC under grants 11373023, 11403011, 11733003, 11233008, and 11427803, by NKBRSF under grant 2014CB744203, and by ASO-S grant U1731241. Y.L. is also supported by the CAS Pioneer Hundred Talents Program, the Key Laboratory of Solar Activity of the National Astronomical Observatories of the Chinese Academy of Sciences (KLSA201712), and by ISSI and ISSI-BJ through the team "Diagnosing Heating Mechanisms in Solar Flares through Spectroscopic Observations of Flare Ribbons" led by Hui Tian. The work at MSU is supported by NSF grant 1460059. Part of the work was conducted during the NSF REU Program at MSU.
[Figure caption fragments recovered from the original layout: Fig. 2: SJI view of the X-point region, with the first and fourth slit steps (S1 and S4) plotted as magenta lines and sample pixels marked (R1-R3 on the ribbons, L1-L3 outside them); also an HMI magnetogram before the flare and AIA 1600 and 304Å images around the flare peak time, with the X-point indicated by a blue cross. Fig. 3: blue and dark green lines show the modeled separatrix traces and the red line the separator (i.e., the intersection of separatrices) connecting to null points; cyan, yellow, and orange lines in the AIA 94Å image highlight representative field lines (cyan for the pre-reconnection domain, yellow and orange for the post-reconnection domain); the black arrow in the AIA 131Å image marks the erupted flux rope. Figs. 8-9: featured line profiles (in green) at the times marked by square symbols, with Doppler velocities from the moment analysis given in each panel (positive values for redshifts); Gaussian fits shown in red; the vertical blue dotted line marks the reference wavelength of each line, and the white curve shows a scaled typical quiet-region profile. Fig. 10: featured line profiles (in yellow) at the times marked by plus and triangular symbols; the Si iv profiles are fitted with multiple Gaussians (5 components for L1, 3 for L2 and L3), with the total fit as a magenta dashed curve, shifted components as red solid curves, stationary components as green dotted curves, and the blueshifted/redshifted component velocities given in the Si iv panels. Fig. 11: temporal evolution of the Fe xxi spectra at L1-L3 (black-white images) with featured profiles over-plotted; no enhanced Fe xxi emission shows up in these profiles.]
9,777.6
2017-08-29T00:00:00.000
[ "Physics" ]
Multilayer modelling of waves generated by explosive submarine volcanism. Theoretical source models of underwater explosions are often applied in studying tsunami hazards associated with submarine volcanism; however, their use in numerical codes based on the shallow water equations can neglect the significant dispersion of the generated wavefield. A non-hydrostatic multilayer method is validated against a laboratory-scale experiment of wave generation from instantaneous disturbances and against field-scale submarine explosions at Mono Lake, California, utilising the relevant theoretical models. The numerical method accurately reproduces the range of observed wave characteristics for positive disturbances and suggests a previously unreported relationship of extended initial troughs for negative disturbances at low dispersivity and high nonlinearity parameters. Satisfactory amplitudes and phase velocities within the initial wave group are found using underwater explosion models at Mono Lake. The scheme is then applied to modelling tsunamis generated by volcanic explosions at Lake Taupō, New Zealand, for a magnitude range representing ejecta volumes between 0.04 and 0.4 km³. Waves reach all shores within 15 minutes, with maximum incident crest amplitudes around 4 m at shores near the source. This work shows that the multilayer scheme used is computationally efficient and able to capture a wide range of wave characteristics, including dispersive effects, which is necessary when investigating submarine explosions. This research therefore provides the foundation for future studies involving a rigorous probabilistic hazard assessment to quantify the risks and relative significance of this tsunami source mechanism, with data revealing areas of varying exposure to hazardous waves, from above 4 m near the source to 0.3 m in the most sheltered areas, and waves reaching throughout the lake within 15 minutes. A probabilistic investigation is required to assess the full range of possible scenarios at this location, including eruption geometry and size, while potentially considering further complexity, such as any syn-eruptive variation in initial conditions. This will help resolve the significance of this hazard source compared to alternative tsunamigenic sources and volcanic hazards across varying magnitudes of eruption. Further work could include numerical investigation of non-detonation eruptions involving sustained jetting. However, tsunamis are often neglected from hazard maps of volcanoes; recent events such as Anak Krakatau in late 2018 have underlined the need to consider them in any disaster risk response (Grilli et al., 2019; Williams et al., 2019; Ye et al., 2020). This is especially pertinent for communities that are not exposed to or are less familiar with seismogenic tsunamis and may not consider themselves at risk. While experimental data and detailed field studies of these phenomena are rare, some reliable observations exist, for example, at Karymskoye Lake, 1996, where a Surtseyan-style eruption was partially witnessed from the air, including six explosions followed by tsunamis and base surges. Later ground investigation revealed run-up along the lake ranging from 19 m down to 1.8 m at distances of 0.5 to 3 km from the vent, debris flows down the Karymskaya River, and boulder transportation up to 60 m inland (Belousov et al., 2000; Torsvik et al., 2010; Ulvrová et al., 2014; Falvard et al., 2018).
The Ritter Island volcano generated one of the largest known tsunamigenic flank collapses in 1888, leaving only a small remnant above the water surface, and has since experienced occasional submarine eruptive activity and small local tsunamis in 1972, 1974 and 2007 (Johnson, 1987; Dondin et al., 2012). In the Caribbean near Grenada, Kick'em Jenny volcano was discovered mid-eruption in 1939 and has been regularly active and progressively shoaling, with eruption columns breaching the surface in 1939 and 1965 generating minor tsunami waves (Shepherd, 1993, 1996). Many other candidate eruptions are historically documented with small amplitude waves, such as Kavachi, Solomon Islands, or lack detailed proximal observations, which leaves uncertainty as to the source mechanism responsible, for example, the 1952 Myojin-Sho submarine eruption which destroyed a naval research vessel, or the 1883 eruption of Krakatau (Dietz and Sheehy, 1954; Nomanbhoy and Satake, 1995). Explosive volcanic eruptions are characterised by a directional gas-driven escape from the source, exsolution of water vapour and, in submarine settings, potentially violent vaporisation of sea or lake water on interaction with hot magma. This phreatomagmatic eruption can lead to rapid expansion of the resultant water vapour at depth, leading to disturbance of the water surface and propagation of waves. To investigate the potential hazard range, the relationship between the source parameters of the eruption and the nature of the waves they generate needs to be understood. Underwater explosions are well documented (Cole, 1948; Mirchina and Pelinovsky, 1988; Le Méhauté, 1971; Kedrinskiy, 2006; Egorov, 2007), primarily owing to military reports and research in blast mitigation and structural response in, for example, ship hulls and other coastal or off-shore structures (Klaseboer et al., 2005; Aman et al., 2012; Liu et al., 2018). As a result, significant research efforts have usually been focused on non-linear fluid-structure interactions such as pressure loading from shock waves rather than any wave generation relationships. Still, some tests were conducted on this matter during the nuclear-testing age and led to the development of theoretical models describing explosion-surface interaction and dynamics of the resultant wave field (Le Méhauté and Wang, 1996). Physical experimentation since the end of nuclear testing has been rare due to cost, practicalities, environmental concerns and the challenges of scale experienced by previous tests. In their place, numerical investigations are now the predominant area of research and offer the most likely route to advance the understanding of these processes. The current theoretical models summarised by Le Méhauté and Wang (1996) have been used in recent years to simulate the wavefield generated from events that produce analogous water surface cavitation, such as submarine volcanic explosions (Torsvik et al., 2010; Ulvrová et al., 2014) and asteroids impacting in ocean basins (Ward and Asphaug, 2000). However, numerical solutions often either utilise the empirically derived relations without validating their use in a numerical scheme against a suitable explosive physical experiment, or test a generation mechanism in the local spatial range only at the cost of neglecting investigation of the generated wave field.
Often, models such as those based on non-linear shallow water equations are applied to these problems without considering how dispersive the resultant waves may be. This work uses a recently developed non-hydrostatic multilayer solver for free-surface flows to model the physical problem. Firstly, the method is validated against a laboratory-scale experiment of released columns of water to ascertain the numerical solution's robustness in resolving a simplified comparable wave generation mechanism. Secondly, data from one of the last military explosive test series focused on surface wave observations is compared with results produced by implementing the theoretical model's initial conditions in the numerical method. These tests are to establish fitness of the underlying models, which are then applied to hypothetical explosive submarine eruptions at Lake Taupō, New Zealand. Underwater eruption model An explosive subaqueous volcanic eruption is a dynamic and complex event involving abrupt fragmentation, volume change and numerous high energy interactions between pressurised magma, volatiles and water. Its wave generation capability depends on numerous physical parameters, including eruptive energy, depth, duration, and vent geometry (Egorov, 2007; Paris, 2015). Scarce availability of field observations combined with practical limitations, both in the field and in the laboratory, necessitates simplifications for an explosive eruption model, such as considering it as a point-source explosion, as proposed and utilised by Torsvik et al. (2010) and Ulvrová et al. (2014), among others. The models developed for submarine explosions and their waves are derived from experimental data and visual observations from chemical and nuclear explosive testing during the 20th century. As documented at the time, water disturbances are born from the generation and rapid expansion of a gas bubble that interacts with the free surface by collapsing into a crater-like cavity, accompanied by central jets of water and an initial dissipative cylindrical bore, which radially expands outward. The resultant cavity rapidly fills under gravity to produce a second, larger jet which produces a further cylindrical bore, after which the disturbance oscillates until rest, precipitating waves of decreasing amplitude. This free-surface interaction is strongly linked with the depth of the explosion relative to its energy; small-yield or deep detonations lead the explosive bubble to transfer a large portion of its energy to the surrounding water through rapid oscillations and significantly reduce wave-making efficiency (Le Méhauté, 1971; Le Méhauté and Wang, 1996). Bubble dynamics is a very active area of research in computational fluid dynamics (CFD), though, in the explosive realm, the focus is usually on pressure waves and solid interactions. These studies are usually short in temporal range and are very computationally expensive, as modelling the full problem requires accounting for compressibility and multiphase flow; thus, this has spawned specialist codes for their solution (Hallquist, 1994; Li et al., 2018). Only in recent years have studies appeared that directly simulate expanding explosive bubbles interacting with deformable beds and a free surface (Petrov and Schmidt, 2015; Daramizadeh and Ansari, 2015; Xu et al., 2020).
However, there is minimal focus on the subsequent surface waves, let alone relations tested for their generation mechanisms or far-field propagation. Following development of the physical theory of underwater explosions, mathematical models were developed by applying inverse methods to experimental time series and simplifying the result to a two-parameter model corresponding to initial conditions on the free-surface elevation (η0) representing the maximum surface displacement from the explosive disturbance (Le Méhauté and Wang, 1996). ηc corresponds to the maximum depth of the disturbance below equilibrium and R to its radial extent. These parameters physically represent the size of the initial cavity and are functions of explosive yield E, water depth h, burst depth z and bed characteristics, for which calibration is made with empirical data. The initial disturbance can therefore be described analytically by one of a number of candidates for the general profile, as in Eqs. (1)-(2). Note that Eq. (1) is discontinuous at its edge, while Eq. (2) returns back to zero. A schematic diagram illustrates the problem and initial profiles in Figure 1a-c. Depth classification The relations between the parameters ηc, R and explosive characteristics described here are derived empirically after many series of small and larger scale experimental observations and are well described and reviewed by Le Méhauté and Wang (1996). These relations depend on classifications of water depth h and charge depth below the water surface z relative to the explosive energy released E. In terms of a depth parameter D = ch/E^(1/3), where c = 406.2 is an imperial unit conversion constant, three categories (deep, intermediate and shallow) are specified when considering wave generation. For deep and intermediate cases, the cavity parameters ηc and R are defined by calibrated empirical relations. For shallow cases, it is implicitly assumed that the explosion develops a cavity that extends through the entire water column and exposes the bed, so that its radius is larger than the water depth (R > h); in this instance, the cavity radius is defined by its own empirical relation. The data calibrating these models include charges ranging from small (< 500 lb or < 9.5 × 10 8 J) to a handful larger (< 9500 lb or < 1.8 × 10 10 J) and further include a 23-kT nuclear test (Le Méhauté and Wang, 1996). Volcanic context For a volcanic case, illustrated in Fig. 1d, such explosions would occur on or near an edifice, meaning that the charge depth is equivalent to the water depth at that point (z = d); therefore, events that are capable of hazardous wave generation fit into the shallow category: such an explosion would occur at maximum depth where z = h, and the crater diameter CD can be measured or calculated using the estimated ejecta volume V with Eqs. (9)-(10). This is valid where the released explosion energy is estimated from the volcanic crater diameter CD using an empirical relationship by Sato and Taniguchi (1997). This method has recently been used for probabilistic hazard analysis of volcanogenic tsunamis at the Campi Flegrei caldera, Italy and at Taal Lake, Philippines (Pakoksung et al., 2021). Some land studies suggest that the size of a volcanic crater cannot be assumed to directly reflect the size of the largest explosion causing it (Valentine and White, 2012), so a further relation from Sato and Taniguchi (1997) relates the ejecta volume V to the released explosion energy. Numerical method To compute simulations of the models described earlier, we use the open source CFD framework, Basilisk (Popinet, 2013).
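The workflow implied above can be summarised in a short sketch: estimate the explosion energy from the ejecta volume or crater diameter, classify the event with the depth parameter, obtain the cavity parameters ηc and R from the empirical relations, and impose the cavity as the initial free surface. The empirical relations themselves (Eqs. 3-10) are not reproduced in the text, so they appear below only as a placeholder hook; the parabolic crater-with-lip profile is one commonly quoted candidate shape consistent with the description of Eq. (1), discontinuous at its edge.

import numpy as np

C_IMPERIAL = 406.2  # imperial unit conversion constant quoted above

def depth_parameter(E, h):
    """Depth parameter D = c*h / E**(1/3) used to classify an explosion
    as deep, intermediate, or shallow (thresholds in Le Mehaute & Wang 1996)."""
    return C_IMPERIAL * h / E ** (1.0 / 3.0)

def cavity_parameters(E, h, z):
    """Placeholder for the empirical relations giving eta_c and R
    (Eqs. 3-8 of the source model, not reproduced in the text)."""
    raise NotImplementedError("take eta_c and R from Le Mehaute & Wang (1996)")

def crater_profile(r, eta_c, R):
    """Parabolic cavity with lip: eta(r) = eta_c*(2*r**2/R**2 - 1) for r <= R,
    zero outside, hence discontinuous at r = R (cf. Eq. 1)."""
    return np.where(r <= R, eta_c * (2.0 * (r / R) ** 2 - 1.0), 0.0)

The resulting η(r) would then be passed to the wave solver as the initial free-surface elevation, with zero initial velocity.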
The software is widely used in studies involving multiphase problems, from jet dynamics to viscoelastic and surface tension investigations, and includes several free-surface solvers with application to tsunamis, wavefield transformation and other hydrodynamic problems. Multilayer Scheme The majority of the following work utilised the non-hydrostatic multilayer free-surface solver developed and described by Popinet (2020). A brief outline is given here. The scheme of n layers is a horizontally gridded and vertically discrete approximation of the incompressible Euler equations with a free surface and gravity. It is described by a system of equations in which, in the x-z reference frame, k is the layer index, h k the layer thickness, g the gravitational acceleration, u k , w k the horizontal and vertical velocity components, φ k the non-hydrostatic pressure, η the free-surface height (the sum of the layer thicknesses and the bathymetry height z b ), and z k+1/2 the height of the layer interfaces. Between them, the set expresses the evolution of layer thickness (Eq. 11), conservation of momentum (Eqs. 12 and 13), and conservation of volume/mass (Eq. 14). The framework allows the model to be built modularly, starting from the hydrostatic case where φ = 0 and vertical momentum conservation (Eq. 13) is removed; these effectively become the generalised multilayer Saint-Venant (SV) or stacked shallow water equations. More components are then added, for example, vertical remapping, adaptivity, non-hydrostatic and Keller box vertical projections, and a wave breaking method. The latter is implemented by introducing dissipation, handled by limiting the maximum vertical velocity by setting w ← sgn(w) min(|w|, b √(g|H|∞)), where √(g|H|∞) is the characteristic horizontal velocity scale of the wave and b is a specified breaking parameter smaller than one; sgn and min are the sign and minimum functions, respectively. Lastly, terrain is handled by looking up a pre-processed k-dimensional tree indexed database of heights, and the model is able to discriminate and resolve areas of wetting and drying. The scheme has been tested against numerous benchmark cases, including standing waves, sinusoidal wave propagation over a bar, the Tohoku tsunami of 2011 and its dispersive features, viscous hydraulic jumps, and breaking Stokes wave propagation and shore run-up (Popinet, 2020). Other Schemes Accompanying models used for validation and comparative purposes include a Volume-of-Fluid method (N-S/VOF) in the same framework, which solves the two-phase Navier-Stokes equations for interfacial flows, including variable density and surface tension (Popinet, 2009, 2018), a solver for the shallow water or Saint-Venant equations (SV), and finally another for the Serre-Green-Naghdi equations (SGN), a Boussinesq higher order approximation for non-linear and weakly dispersive flows (Popinet, 2015). Their inclusion is to support and inform evaluation of the multilayer scheme against well known and commonly used methods. In this work's context, the main discriminations between the schemes used are both the hydrostatic assumptions involved and their resolution of vertical gradients such as the velocity profile: in the hydrostatic SV solver, a constant velocity profile is assumed; the non-hydrostatic SGN equations represent a Boussinesq-type analytical approximation of the vertical structure; the multilayer method resolves the vertical into n layers with the capability to include non-hydrostatic terms; the N-S/VOF scheme fully resolves the vertical as an additional dimension.
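The breaking treatment just described amounts to a one-line limiter on the vertical velocity field. A minimal Python transcription is given below for illustration only (the actual implementation lives in Basilisk's C source); the breaking parameter b is a user choice smaller than one.

import numpy as np

def limit_vertical_velocity(w, H_inf, b, g=9.81):
    """Cap |w| at b*sqrt(g*|H|_inf), keeping the sign of w:
    w <- sgn(w) * min(|w|, b*sqrt(g*|H|_inf))."""
    w_max = b * np.sqrt(g * np.abs(H_inf))
    return np.sign(w) * np.minimum(np.abs(w), w_max)

Dissipating energy this way mimics the loss that wave breaking would produce without having to track the overturning interface itself.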
The addition of having to fully solve a 2D slice through the vertical, versus 1D, means the N-S/VOF scheme has a larger domain to compute for the same region when considering water waves.

Laboratory-scale validation

To determine the suitability of the numerical method for modelling a submarine disturbance, validation against a suitable case study is required. Prins (1958) conducted a flume experiment investigating surface waves produced from an instantaneously raised or depressed column of water. This case is replicated using the multilayer scheme and, additionally, the N-S/VOF, Saint-Venant (SV) and Boussinesq-type (SGN) schemes. While the SV solution exhibits a good approximation of the phase velocity of the initial wave (4.39 ft s⁻¹), it is, unsurprisingly, unable to resolve any of the trailing wave field. The SGN scheme resolves this element well at far field; however, it often overestimates the initial wave amplitude and the overall dispersivity. The multilayer scheme is shown to be very accurate compared to the N-S/VOF result and the experimental trace, and reinforces the good fit found in Fig. 2. This suggests that the scheme faithfully replicates the process of a collapsing water column or the infilling of a uniform depth and the associated wave generation, along with reflections from behind the initial disturbance.

The generation process is shown in Figure 4, which illustrates how the four models resolve an initial disturbance and reveals significant variations in handling the vertically critical jump in the first half-second. The SV source quickly develops into the classic steep-fronted crest often seen in dam-break problems, where the amplitude is proportional to the initial disturbance. Also note graphical artefacts on free-surface height at the leading gradient in the N-S solution, which occur at coarser regions of the adaptive grid. Despite these minor variations, it is clear that the multilayer model has greater validity in application to this case than either of the single-layer models and is remarkably consistent with the physical experiment as well as with the directly simulated approach.

Finally, Table 1 presents performance metrics for the numerical schemes across three maximum refinement levels for an example run case. All were performed with OpenMP parallelism on eight CPU cores. The multilayer scheme sits between the SV and SGN methods in terms of wall-time processing and offers a vast improvement in computational efficiency compared with the N-S/VOF solver considering result similarity. It is also faster than the dispersive SGN method, primarily because of the computation time required to solve the higher-order approximation.

Wavefield classification

All simulated cases are plotted in Figure 5 by the size of the initial disturbance relative to water depth, using a nonlinearity parameter |Q|/h against a dispersivity parameter k*·h, where k* = 2π/(2L). Six additional simulations were run outside the original experimental parameter space to extend the reach of the model dataset. As done by Prins (1958), the +Q runs are categorised into groups with similar wavefield characteristics, starting with strong oscillatory properties (blue) where k*·h > 10·|Q|/h, tending through increasingly solitary wave properties (purple and green) once k*·h ≤ 10·|Q|/h, until a succession of diminishing-amplitude solitary waves results where k*·h < |Q|/h (orange).
Beyond this region, the initially generated bore survives far enough down the numerical flume before it would likely separate (black). For the −Q domain, all resultant wave fields were similar except for the length of the initial trough relative to the following periodic wave group. This ratio initially remains approximately unity in the same region as the +Q oscillatory-character group and grows larger towards higher |Q|/h and lower k*·h.

Results of the +Q disturbance (Fig. 5a) corroborate the experimental wavefield descriptions of Prins (1958), including the transition from an oscillatory field through to solitary initial waves. Bore formations in the first stages were also observed during the experiment; those lasting a considerable length of the flume match the model results. Accounting for tolerance in the qualitative descriptions, the groups match closely, and the additional results beyond the experimental scope further confirm these definitions in the studied range. Such an analysis of the −Q part was not attempted in the original research; however, effort in this area can be made with the numerical results (Fig. 5b). The length ratio of the initial trough to the following oscillatory waves increases with higher |Q|/h and lower k*·h. This matches the trend towards solitary characteristics with +Q. Intriguingly, this pattern holds regardless of the length ratio that defines the initial disturbance (i.e. Q/L).

In suitable replication of the experimental findings, the present numerical scheme is seen to be fit for generating accurate waves from initialised disturbances and modelling their near-field propagation across a significant regime range where non-linear and dispersive effects may be prevalent. The method also demonstrates suitability for further investigations, either beyond or in complement, such as widening parameter spaces at relatively low computational expense.

Field-scale validation

The next stage is to assess use of the underwater explosion models of Sect. 2.1 within the numerical scheme. To do this, we utilise datasets from the Mono Lake test series of 1965, conducted by the Waterways Experiment Station and documented by Walter (1966); Wallace and Baird (1968); Whalin et al. (1970); Pinkston et al. (1970). This was one of the largest chemical explosive test series designed to investigate subsequent water wave generation and shore effects. A series of ten approximately 9,250 lb (∼4,196 kg) spherical TNT charges were detonated off the south shore of Mono Lake, California. The test area is illustrated in Figure 6a, which also shows the locations of wave gauges arranged in four radials directed away from ground zero (GZ), along with contoured terrain. GZ was located at a site of approximate water depth h = 39 m; shot parameters are given in Table 2. Considering doubts regarding the charges raised in the report of Walter (1966), additional lower-yield simulations were added. Maximum crest amplitudes for all gauges are shown in Figure 7. The maximum amplitude of the initial envelope decreases with radial distance from GZ, as expected, matching experimental observations. Shoaling is most pronounced at the closest incident shoreline, in the region 500 m east of Radial 1, and becomes far less significant with distance, as seen on the western shore of the region.
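A simple way to quantify such radial decay is to fit a power law to the gauge maxima in log space, as the sketch below does. The distances and amplitudes here are placeholder values chosen only for illustration, not the measured Mono Lake data.

```python
import numpy as np

# Placeholder gauge data (radial distance r in m, max crest amplitude a in m);
# illustrative values only, not the Mono Lake measurements.
r = np.array([200.0, 400.0, 800.0, 1600.0])
a = np.array([1.20, 0.80, 0.55, 0.36])

# Fit a = a0 * r**(-n) by least squares on the logarithms.
slope, intercept = np.polyfit(np.log(r), np.log(a), 1)
print(f"decay exponent n ≈ {-slope:.2f}")  # ~0.58 for these numbers
```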
Maximum crest amplitudes at gauge locations follow similar patterns in all runs; however, the higher-energy-yield simulations produce greater amplitudes (an additional 0.05-0.11 m) throughout and experience more significant shoaling in all shallow areas. The lower-yield simulations are a closer match to the experimental observations for both shots, especially in shallow zones; however, the experimental data have noticeably greater variation at shore.

The experimental gauge time series at the locations closest to GZ are plotted alongside the lower-yield model traces in Figure 8 and are useful for comparing the phase arrival times and the initial development of the wave train. For both simulations, the first arriving phases match the experimental record very well, with the exception of minor noise immediately following detonation in Shot 9, which is likely to be shock or debris related. The time of maximum envelope amplitude also conforms well, with differences of 9 and 7 seconds for Shots 3 and 9 respectively. The latter part of the initial wave group maintains higher amplitudes in the experimental trace for both records, whereas the envelope decays sooner in the numerical model. Shot 3 also seems to exhibit a positive amplitude shift in the early part of the experimental envelope.

Model implications

While the models used fit the experimental results and trends well overall, it is significant that the lower-yield data are a much better fit. In reports following the test series (e.g. Whalin et al. (1970)) it is noted that, except for two charges, all shots delivered below the average expected maximum crest amplitudes predicted by earlier experimentation, particularly the deep-water shots. A further similar deep test at Mono Lake in the following year delivered much greater amplitude waves in line with expectations, leading to the suggestion by Wallace and Baird (1968) that, beyond scaling effects or measurement issues, the charges used in the series may have been faulty and delivered a lower yield. This is supported by the numerical data: when the energy is reduced, the maximum crest amplitudes can be accurately predicted, in addition to the resultant early wave group and individual phases.

Many data from the experimental series were unreported or discarded following the series due to various problems, including excess noise generated by the explosion itself or by wind-driven waves, and are thus missing from the comparisons in this work. Instrumental issues meant gauges near the shore often could not be acceptably calibrated for amplitude. Many experienced positive noise due to wave breaking and bores, responsible for some spikes in measurements; however, these were kept for reliable arrival times and periods (Wallace and Baird, 1968).

Despite the experimental challenges, the multilayer scheme is shown to be excellent at resolving the initialised disturbance into a wave field very consistent with that generated by an intermediate-depth submarine explosion. With the capability to accurately propagate such a source from the relatively deep near field through to the shore, it demonstrates suitability for modelling such events at these scales. However, this test series did not contain any shallow-depth underwater explosions, for which there are no case examples at this scale.

Taupō Scenario

Lake Taupō is New Zealand's largest freshwater lake, with an area of approximately 616 km².
It lies in the southern section of the Taupō Volcanic Zone and conceals most vent sites and features of Taupō volcano; the lake itself is the result of caldera collapse after the c. 25.5 ka Oruanui supereruption, with modifications in following events. The lake has experienced numerous volcanic episodes post-Oruanui, with some 21 events occurring from 7.05-1.8 ka, including the ∼232 CE Taupō eruption, globally one of the most powerful eruptions of the Holocene (Barker et al., 2020). Figure 9 illustrates the lake bathymetry and surrounding terrain, accompanied by geological features and settlement locations.

The lake is fed by multiple rivers and notably from the Tongariro hydroelectric power station via the Tokaanu Tailrace Canal. The sole outflow is the Waikato River, controlled by gates at the largest settlement on the lake, Taupō, which leads to numerous further hydroelectric dams downstream. Surrounded by abundant geothermal resources and strong trout-fishing and agricultural industries, the area also boasts plentiful tourism opportunities, hosting over one million tourists each year, and, together with the surrounding land, is of great cultural significance.

In building a model representing an example tsunamigenic explosive eruption in Lake Taupō, a terrain dataset was built by combining a bathymetric model of the lake (Rowe et al., 2002) with an elevation model generated from LiDAR datasets of the Waikato Regional Council, which cover the entire foreshore. The limiting resolution of the resultant digital terrain model is that of the bathymetric model (10 m); the simulations were therefore performed at a grid refinement level of 11, resulting in a horizontal resolution of 16 m. The eruption site chosen is within the region where most Holocene vents are located (Barker et al., 2020) and, for this example, two simulations are run corresponding to the range of most eruptions occurring during this period, excepting three larger episodes. The estimated ejecta volume V is between 0.04-0.4 km³ (Wilson, 1993) and, using Eqs. (9)-(10), the equivalent explosion energies were derived; the resulting models were run for 1000 seconds of simulated time. A further simulation of the scenario was performed with the Saint-Venant scheme with identical terrain and model geometry for comparison.

The initial wave at all locations is a crest, reflecting the positive-amplitude lip of the initial condition. The earliest arrivals at shore occur at the closest point east of the source at 3 min, after which arrival time generally scales with radial distance, except for areas with extended shallow zones. The highest wave heights incident to the shore are, unsurprisingly, located nearest the event on the neighbouring eastern and northern shores, where crest amplitudes reach over 4 m. The lowest are found in the farther area of the south-west beside Gauge 8 (Tokaanu) and in sheltered parts away from direct paths, such as by Gauges 10 and 11 (Kinloch). Taupō township is relatively sheltered compared to the surrounding shoreline due to shielding by the lake morphology.

Figure 11 presents numerical gauge time series for both runs. Throughout the domain, the high-ejecta-volume run returns significantly higher maximum crest amplitudes than the low run. Wave periods vary from ∼65 s early in the group to ∼15 s towards its end, 10 min later. The first arrival is generally the longest-period wave but rarely the greatest amplitude at the gauge locations.
Gauges 4 and 5, positioned in shallow zones near the eruption, initially record bores of amplitude 1.6 m and 3.5 m respectively in the high run. While phase velocities are very similar between the runs, the generated group velocity is slower in the low run. This is best seen at Gauges 2, 3 and 6, which are positioned with relatively unobstructed direct paths from the source. The inclusion of numerical wave breaking therefore adds robustness to the near-shore solution for this type of source. Notably, the explosion model prescribes an initial condition that intersects the bathymetry in the present high-ejecta-volume case, resulting in additional mass added via the volume of the lip surrounding the cavity. It would be expected that a higher-energy explosion in similarly shallow water would transmit less energy into the water and towards wave-making. To rectify the mass imbalance, the lip height could be lowered to better match the excavated volume; however, no alteration to the explosion model is made in the present work.

Hazard implications

These preliminary results suggest many potential implications for wave hazard from explosive subaqueous eruptions. If an eruption of sufficient magnitude at Lake Taupō produces an initial explosion, there is clearly a threat posed to nearby shores. As with most lacustrine tsunami hazards, there is minimal time from source to shore impact; no possible warning system would ever be able to respond to an eruption with sufficient speed. Instead, resilience should come from preparation for events, including disaster management and exclusion-zone planning by local authorities, with ongoing monitoring of the volcanic system such as seismicity, ground deformation, changes to the geothermal system, and geophysical imaging. The underlying caldera has frequently experienced minor unrest (Potter et al., 2015), and current thought suggests eruption probabilities in the near term are not negligible; for instance, an event of magnitude at least in the upper range considered here (0.2 km³) is estimated to have a 5% probability within 100 years (Bebbington, 2020). Taupō volcano can produce far greater magnitude events than considered here, including the aforementioned 232 CE Taupō eruption (24 km³) and the c. 25.5 ka Oruanui supereruption (> 1100 km³). Events of such magnitude undoubtedly carry wide-ranging hazards well beyond the lake's proximity, and even those towards the lower end of the scale could produce numerous volcanic dangers, including ashfall and pyroclastic density currents. Therefore, an additional complexity of modelling the suite of hazards posed by submarine volcanism will be to determine the relative weight each source component possesses for events of varying location and magnitude.

Only a brief effort is made at present on modelling this hazard, as an example application of the multilayer numerical scheme and its benefits in resolving the resultant wavefield and significant outputs. Further work should incorporate this or similar numerical methods into conditional probabilistic hazard models for assessing the relative significance of this tsunami source mechanism, using a wide parameter space comprising all likely sublacustrine eruption locations and magnitudes. Such a model would be able to take advantage of this scheme's broad wave-regime validity and computational efficiency, potentially investigating inundation in detail at various points onshore with, for instance, building data or similar additional model layers.
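The short warning times quoted above follow directly from long-wave kinematics. As a rough check, a first-order arrival estimate can be computed from the radial distance and the mean depth along the path; the sketch below uses hypothetical numbers chosen only to show the order of magnitude, consistent with the ~3 min earliest arrivals reported for the scenario.

```python
import math

def arrival_time(distance_m, mean_depth_m, g=9.81):
    """First-order long-wave estimate: t = L / sqrt(g * h_mean)."""
    return distance_m / math.sqrt(g * mean_depth_m)

# Hypothetical path: 6 km of travel over ~100 m mean depth.
t = arrival_time(6_000.0, 100.0)
print(f"~{t / 60:.1f} min to shore")  # about 3 minutes
```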
Conclusions

The non-hydrostatic multilayer scheme used in this paper has been shown to accurately replicate the collapse of various initial disturbances into a resultant wavefield exhibiting varying degrees of non-linear properties and frequency dispersion. By capturing some depth aspects of the model without fully resolving the vertical, it is superior in accuracy to shallow-water-equation-based schemes while being far more computationally efficient than direct numerical methods. The method was used to verify experimental results of positive-amplitude disturbance wave generation and to probe their validity over a wider parameter range, while also further investigating negative-amplitude disturbances, revealing extension of the leading trough relative to the trailing oscillations for larger disturbances and smaller water depths.

Initialisations of wave generation via underwater explosion were tested for use with the multilayer scheme by simulating detonations of explosives as in the US Army test series at Mono Lake. Including consideration of the uncertainty in the experimental data, the combination of the empirically derived underwater explosion model and the numerical scheme was able to capture the significant elements of the generated waves as measured experimentally, and helps validate use of the underlying empirical relations.

These were then used to simulate a volcanic explosion under Lake Taupō based on eruptive energy estimated from ejecta volume. Implications for tsunami hazard in lakes are suggested for small- to medium-magnitude phreatomagmatic eruptions, depending on source location and the surrounding topography.

We acknowledge the use of New Zealand eScience Infrastructure (NeSI) high-performance computing facilities as part of this research.
Decision Support Simulation Method for Process Improvement of Electronic Product Testing Systems

The spread of the Jidoka concept can be described as a trend in the production of electronic products. In most cases, applying this concept allows the development of separate testing procedures (for quality assurance purposes) for finished products to be avoided. Where the production of products of appropriate quality cannot be assured safely, a product testing process must be established following production; changes in the number of product varieties, changes in the requirements concerning electronic products (e.g., instructions related to energy consumption or noise level) and variation in the required testing capacity then make modification of the established testing process necessary. The implementation of the related plans often leads to problems (e.g., an inappropriate storage area, material flow process or material handling equipment is chosen). The method of process configuration affects sustainability, since a poorly established process can lead to additional usage of non-renewable natural resources and unjustified environmental impact. Using simulation modelling, one of the tools of Industry 4.0, we developed a state-of-the-art examination method by which changes to the testing process can be effectively examined and evaluated, so that unnecessary planning failures can be prevented. The application of the developed method is also shown through a case study.

Introduction

Nowadays, only those companies can remain competitive which, besides low unit costs, are able to satisfy individual customer needs [1,2]. Individual customer needs primarily mean an increasing number of product varieties to be manufactured and continuously changing quantities, which increases the complexity of the testing process for finished products [3]. In the course of testing electronic products, we check whether the products comply with statutory provisions (energy efficiency rating, noise level, etc.) and with customer needs (appropriate operation of functions, life expectancy, etc.) [4]. A lot of factors can trigger the modification of an already established product testing process, most importantly the following:

• significant change in the quantity of the products to be manufactured,
• testing of new product type(s),
• changing legal provisions (testing of parameters that have not been investigated before becomes necessary),
• lean process development [5,6].

The replanning of an existing process can result in plenty of failures (inappropriate material handling equipment, testing machines or process scheduling), which affects sustainability. The literature defines sustainability as satisfying the present needs of mankind while preserving the environment and natural resources for the next generations [7]. In the field of logistics, we can take steps towards sustainability by rationalizing our processes and/or applying environmentally friendly technologies, and its significance is continuously increasing because of population growth [8,9].
When modifying the testing process of electronic products, we can take steps towards increasing sustainability by keeping the following in mind:

• usage of renewable energy resources in the operation of facilities, material handling tools and testing equipment,
• choosing material handling device(s) [10-12] and testing equipment with the appropriate capacity (no unnecessary acquisition of assets, so the usage of non-renewable natural resources decreases),
• minimization of the material handling path and the size of inter-operational storage areas when establishing the testing process (a smaller area is required for performing the given activity, so the usage of non-renewable natural resources needed for infrastructural investments decreases),
• application of a line planning method with minimum material handling capacity (energy consumption decreases) [13,14],
• checking the establishment of workflow processes to avoid customer complaints (in the case of delivering a defective product, the usage of non-renewable natural resources significantly increases) [15].

Because of the increasing complexity, the application of simulation modelling, one of the tools of Industry 4.0, can provide substantial support in establishing the testing process. Simulation is basically a method capable of realistically modelling processes and systems so that their state changes can be evaluated [16]. Simulation models can be classified according to several aspects. If the input data of the simulation model contain probabilistic elements, we speak of a stochastic simulation model; otherwise it is a deterministic one [17]. According to another classification aspect, we can distinguish continuous and discrete simulation models. In a discrete model, the system states change at countable points in time, while in a continuous model they change as a continuous function of time [18,19].

Due to the regular replanning of processes, there is an ever stronger need for simulation examination of the testing process of electronic products; however, the domestic and international literature does not sufficiently address this field. In this paper, we present the concept of a parameterizable discrete simulation model by which process modifications can be evaluated even before implementation, so that unnecessary investment costs and the losses inherent in the processes can be avoided.

Introduction of the Important Tools of Industry 4.0

Fundamentally, the industrial revolutions can be connected to social, economic and technological changes, since the primary condition for the invention and spread of each technology is the availability of the appropriate economic and social environment [2]. The beginning of the first revolution is dated to the invention of the modern steam engine, while the biggest novelty of the second industrial revolution was the introduction of electricity. In the third industrial revolution, the spread of electronics, IT systems and automation was crucial. Nowadays, the growing cohesion between information technology and the automation of cyber-physical systems has brought the beginning of a new era, i.e., the fourth industrial revolution [20-22]. Figure 1 shows the main features of the industrial revolutions. The appearance of a lot of tools can be connected to the fourth industrial revolution.
With their application, production and service companies gain new opportunities [23,24]. The important tools are the following:

• Additive manufacturing (AM): In additive manufacturing, parts are made with the help of powder ingredients and laser light, in contrast to separating and forming procedures. Its advantage is that parts with more complex geometry can be manufactured without tools [25,26].
• Simulation: A method capable of realistically modelling processes and systems so that their state changes can be evaluated.
• Digital twin: The digital transformation of manufacturing is promoted to a significant extent by digital twin technology. It realizes the digital twin of technological devices, employees, processes and systems with the help of different hardware and software. The digital twin is connected to its physical pair through the Internet of Things (IoT), about which it collects data with the help of sensors [27,28].
• Big Data concept [29,30]: By disclosing correlations between data, we can draw useful conclusions and create new services from large quantities of data (e.g., forecasting the price of an airline ticket). Some say that the big data concept will significantly change the future, since, relying on large quantities of data, we will most likely be able to make appropriate decisions even without knowing the causal relationships.
• Internet of Things (IoT): The term "Internet of Things" was first used by Kevin Ashton in 1999 [31]. There are several Hungarian expressions for this term, but perhaps "Dolgok internete" expresses its essence best. The IoT makes different devices (car, fireplace, safety system, parts, material handling devices, etc.) accessible through the internet or some kind of network, and in certain cases it provides communication between devices.
• Cyber-physical systems [32,33]: The development of informatics and automation, and the increasing cohesion between them, made the application of cyber-physical systems possible (cyber-physical systems are electronic devices having control items and a network connection). Cyber-physical systems are able to collect data from their environment and to act after analyzing their situation.
• Virtual reality (VR): An artificial world created as a computer-based environment, in which users try to involve themselves in the events happening in the given virtual reality as much as they can [34].
• Augmented reality (AR): Some kind of virtual extension of reality, in which we project virtual elements into a real environment using, e.g., a telephone or special glasses [35-37].
• Machine learning: Machine learning is a branch of Artificial Intelligence (AI) dealing with systems that are able to learn, i.e., they generate knowledge from experience [38,39].

Features of the Applied Simulation Framework

We adapted the elaborated simulation method to version 10.1 of the Plant Simulation framework [40]. Naturally, if necessary, the model can be applied in other frameworks as well, following its adaptation (e.g., Simul8, Arena, etc.). The most important features of the applied framework are the following [41]:

• Discrete event-driven operation: It makes fast running of the prepared models possible, because the software always examines the discrete moments of the next substantial event (e.g., a truck arrives, a product is finished, etc.).
• Object-oriented approach: The framework contains predefined objects whose behavior can in most cases be set with the help of predefined data entry fields (if necessary, the SimTalk programming language can be applied).
• Interactive intervention option: It is possible to modify the input data, even in the course of running the program.
• Access to external databases: The framework can be joined to external databases (e.g., Oracle, SQL, ODBC, XML, etc.).

The main structural elements of the framework are the following:

• Class library: It includes all the objects needed for the preparation of the simulation model. One element of the class library is called a class, whose parameters can be modified optionally; new classes can be created as well (by copying or by inheritance).
• Toolbar: It makes accelerated access to the objects possible. Its elements refer to objects that are originally available in the class library; thus, an unambiguous relationship can be established between certain objects of the toolbar and of the class library.
• Modelling area: In effect, the display of a "frame" in which the simulation model can be created. Within a "frame", more "frames" can be created (even in a hierarchical structure), which ensures the transparency of the model (for example, when modelling the site of a company, a separate frame can contain the raw material warehouse and the production plant).
• Console: With its help, we can display information about the current status of the elements of the model (e.g., values of variables, error messages, etc.) during the running of the simulation model.

Parameterizable Simulation Model of the Testing Process of Electronic Products

For the preparation of the simulation examination model of electronic products, we have elaborated a process with 10 steps (see Figure 2).

Determination of the Goal of the Simulation Examination

Before the preparation of the simulation program, the goal(s) of the examination [42] must be clearly determined, which can be the following in the case of finished product testing processes:

• Avoiding planning failures (e.g., selection of inappropriate material supply methods or of technological equipment with inadequate capacity).
• Comparison of planning changes (e.g., comparison of several versions using the values of indicators originating from the simulation examination).
• Determination of limit performances and limit states (e.g., necessary storage capacities, testing capacity).

Assignment of the Examined Logistics System

On the basis of the goals, the parts and limits of the examined logistics system must be determined.

Getting Acquainted with the Operation of the Examined System

The persons performing the simulation examination must become familiar with the material flow and the operating properties of the elements of the delimited logistics system, in order to make available all the factors that are important for the modelling. From the aspect of examining the testing process, particular attention should be paid to the possible product deficiencies and to the implementation of the related logistics processes.

Preparation of the Simulation Model of the Material Flow Process

The framework suitable for the preparation of the simulation examination contains predefined objects, which make the preparation of the simulation model of the examined material flow system possible. The important objects are described below.
The limits of the examined material flow system are set with the help of the source and drain objects (Figure 3a). The elements can be connected by the connector object (Figure 3b). The source object is suitable for generating moveable units (e.g., parts, storage units, transport vehicles (Figure 3c)) according to the set time interval and frequency, while the drain object absorbs the prepared moveable units (besides recording the statistical data). The examined logistics system can be created in the modelling area between these objects. Depending on whether we need to model the testing tasks of one or more products, we speak of an elementary or a parallel action object (Figure 4a). If, in the course of testing, an assembling or disassembling action must be performed, the object in Figure 4b can be used. Modelling of the inter-operational storage and the storage tasks can be performed by the objects shown in Figure 4c. For the modelling of manpower, a separate, predefined object system was created in the simulation framework. The object of Figure 6a is used for designating the place of performing work, the object of Figure 6b for controlling the work of employees, and the object of Figure 6c for selecting a temporary place for the workers (when there is no work, the workers can stay here).

Preparation of the Simulation Model of the Information Flow Process

For the implementation of the simulation of material flow processes, the data structures necessary for the production, material handling and testing of the electronic products to be tested must be created and filled with data. These data structures are introduced in the following:

• Testing plan data table (Table 1): It shows, when electronic products are manufactured on several production lines in a given shift, which product type will be manufactured and with what proportions of product deficiencies. The proportions of product deficiencies were recorded on the basis of empirical data, and their discovery takes place virtually in the testing process. It must also be noted separately that an operation time should be interpreted for material handling equipment on which a testing task is also performed (e.g., a roller conveyor stops for performing a short measurement). The maximum quantity of gathered products is the number of products that can be aligned behind each other (e.g., on a roller conveyor), while the rolling time is the time needed for the rotation of the rotating table.
• Data table belonging to the performance of the testing operations: Table 3 contains the parameters of the actions included in the testing process. It shows which turn-on, testing and turn-off periods are connected to the examination of a given product type on a given piece of technological equipment.

Determination of Evaluative Indicators

For the sake of rationalizing the plans made for the modification of the testing process, it is important to choose the indicators most suitable for the goal of the examination.

• In case of a decision to be made on storage capacity: From the utilization rate of the inter-operational storages in the course of running the created simulation model, we can determine the ideal storage capacity, which can help us in determining the ideal storage areas. Important factors: maximum inventory level of inter-operational storage, relative frequency function of the utilization rate of storage areas.
• In the decision regarding the material handling devices to be applied: As a result of running the simulation examination, we can determine the performance data of the material handling devices with the set parameters and, on the basis of this, decide on possible modification of the type and number of the devices to be purchased. Important factors: average and maximum utilization rate of material handling devices, capacity of material handling devices.
• In making a decision regarding the testing equipment: In the course of performing the simulation examination, we can get information about the compliance of the testing equipment, i.e., we can determine whether the given type and quantity fulfil the requirements or not. Important factors: average and maximum utilization rate of testing equipment, capacity of testing equipment.
• In making decisions regarding the testing process: Before making a decision regarding the modification of the planned testing process, it is important to examine the fulfilment of the requirements defined by the investigated company. Important factors: operation cost of the testing process, capacity of the testing process.

Implementation and Testing of the Operation of the Simulation Model

After the placement of the objects necessary for the material and information flow, the following actions must be made for the operation and testing of the simulation program.

• Setting of the preparation of product types: Using the data of the data structures recorded in Table 1, we must set the way product types are generated in the input object(s). When starting the simulation program, this object will generate the objects of good quality and those having different product deficiencies at regular intervals and frequency.
• Setting of the path of material flow: With regard to the material flow objects, the forwarding path for good-quality products must be set, together with the paths to be applied to products having different product deficiencies. In practice this means that, when exiting from an object, the product deficiency attribute is checked and the direction of forwarding is determined on the basis of its value.
• Setting of the parameters of objects performing material handling: The parameter setting of the objects (e.g., material handling devices, human resources) applied in the material flow system must be based on the data structure values recorded in Table 2. The parameter values of the objects performing material handling are recorded at the zero time instant of the simulation run with the help of an application created for recording parameters.
• Setting of the parameters of testing equipment: The setting of the parameter values of the testing operations must be realized on the basis of the data structure values recorded in Table 3. The parameter values of the testing devices are recorded at the zero time instant of the simulation run with the help of an application created for recording parameters.
• Testing: The operation of the examination model must be certified together with corporate professionals (during certification, we examine whether the elaborated model reflects reality or not). It can occur in many cases that smaller corrections must be made to the examination model for the sake of appropriate operation (e.g., data and process corrections, etc.).
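To make the event-driven logic of these steps concrete, the following framework-independent Python sketch mimics, in a heavily simplified form, what the configured model does: products are generated at an input object, routed by a randomly assigned deficiency attribute, and absorbed with statistics recorded. All parameters are illustrative placeholders, not the data of Tables 1-3, and the real model is of course built from Plant Simulation objects and SimTalk rather than plain Python.

```python
import heapq
import random

def simulate_testing_line(shift_min=480.0, interarrival=2.0, test_time=1.5,
                          p_deficiency=0.08, seed=1):
    """Minimal discrete event-driven sketch of one testing station.

    Events are processed in time order from a priority queue, in the spirit
    of the framework's discrete event-driven operation."""
    rng = random.Random(seed)
    events = [(rng.expovariate(1.0 / interarrival), "arrival")]
    waiting, busy_until = 0, 0.0
    passed, routed_to_rework = 0, 0

    while events:
        t, kind = heapq.heappop(events)
        if t > shift_min:
            break
        if kind == "arrival":
            waiting += 1
            heapq.heappush(events, (t + rng.expovariate(1.0 / interarrival), "arrival"))
        # Start a test whenever the station is idle and a product is waiting
        # (counted at test start for brevity).
        if waiting and busy_until <= t:
            waiting -= 1
            busy_until = t + test_time
            # The deficiency attribute decides the forwarding direction.
            if rng.random() < p_deficiency:
                routed_to_rework += 1
            else:
                passed += 1
            heapq.heappush(events, (busy_until, "station_free"))

    return {"passed": passed, "rework": routed_to_rework}

print(simulate_testing_line())
```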
Running, Setting of the Indicator(s)

From the data collected in the course of running the simulation model, the factors necessary for making the development decision(s) are created. In order to minimize the risks inherent in development decisions, it may be necessary to consider the sensitivity of the parameters relevant to the decision. During this examination, we observe the effect of changing the parameter(s) to be determined on the logistic indicators. With the help of the test, we can determine the ideal parameters and parameter intervals (e.g., operating time, speed, etc. for given technological equipment).

Decision about the Implementation of the Development Plan

On the basis of the completed examinations, a decision is made whether to perform further examinations (continuing from step 4) or to realize the prepared plan.

Implementation of the Development Plan

The approved development concepts are implemented.

Application of the Simulation Examination Method to the Modification of the Testing Process of Refrigerators

To present the practical application of the examination method shown in Chapter 4, we describe the preparation and application of a parameterizable simulation model suitable for reviewing the development plan of the product testing process of a company dealing with the production of refrigerators. For reasons of confidentiality, the examined company is not identified and the data used in the case study are not fully presented.

Goal of the Simulation Inspection

The product testing process of a company dealing with the production of refrigerators was replanned for the sake of satisfying future increased needs. Main objective of the inspection:

• Determination of the ideal operating time of the packaging equipment to reduce purchasing costs and maximize performance.

Limitation of the Examined Logistics System

The production of refrigerators is performed on two production lines. Afterwards, the testing of the products is performed through two symmetrical testing processes. The examined logistics system includes the process from the manufacturing of the products (Figure 7, limit lines A and B) until the storage of the finished products (Figure 8, limit lines C and D).

Getting Acquainted with the Operation of the Examined System

Before the preparation of the simulation model, we got acquainted with the operation of the existing process, and we recorded in detail the modifications planned in connection with the transformation of the process, on the basis of consultations with the professionals of the company.

Preparation of the Simulation Model of the Material Flow Process

In preparing the model, we modelled the system limits with two input objects and two output objects (Figure 3a). In the examined material flow system, the material handling is performed by roller conveyors, driverless forklifts and human resources. Thus, the belt conveyor object (Figure 5a) and the traffic route object (Figure 5b) were used. The modelling of human work was done with the objects of Figure 6a-c. The modelling of the testing operations was realized with the elementary action object (Figure 4a). We realized the connections between the placed objects with the connector element (Figure 3b).

Preparation of the Simulation Model of the Information Flow Process

For the implementation of the simulation of material flow processes, the data structures necessary for the production, material handling and testing of the electronic products to be tested must be created and filled with data.
These data structures are introduced in the following:

• Testing plan data table (Table 4): It shows in which shift and within which intervals the product types arrive from production to the testing process. Regarding the product types, we define the probability of the arising product deficiencies.
• Data table containing the parameters of the material handling units: In the examined logistics process, roller conveyors, driverless forklifts and human resources perform the material handling tasks. Table 5 summarizes the important parameters of these units. It must also be noted separately that an operation time should be recorded for material handling equipment on which a testing task is also performed (e.g., a roller conveyor stops for performing a short measurement). The maximum quantity of gathered products is the number of products that can be aligned behind each other (e.g., on a roller conveyor), while the rolling time is the time needed for the rotation of the rotating table.
• Data table belonging to the performance of the testing operations: Table 6 contains the parameters of the actions included in the testing process. It shows which turn-on, testing and turn-off periods are connected to the examination of a given product type on a given piece of technological equipment.

Determination of Evaluation Indicator(s)

After running the simulation model, the following two indicators need to be determined:

• System performance: Shows how many refrigerators can be tested at given packaging times i and j. It is defined using Equation (1), P(i, j) = |θ_i^II| + |θ_j^III|, where θ_i^II and θ_j^III are the sets of unique identifiers of the products tested during testing processes II and III, respectively.
• System maximum performance: Shows the packaging times i and j giving the maximum testing performance. It is determined on the basis of Equation (2), (i*, j*) = arg max over (i, j) of P(i, j).

Implementation and Testing of the Operation of the Simulation Model

After the placement of the objects necessary for the material and information flow, the following actions were made for the operation and testing of the simulation program.

• Setting of the preparation of product types: The product types are prepared for testing processes II and III with the help of the input objects. The generation is performed with the values recorded in Table 4, using a method prepared by us.
• Setting of the path of material flow: In connection with both testing processes, six material flow routes were set (one for appropriate products, and five for the product deficiency types). After performing a given operation, the direction of product forwarding is determined with the help of the quality attribute assigned to the product.
• Setting of the parameters of objects performing material handling: The parameter setting of the objects applied in the material flow system is performed on the basis of the data recorded in Table 5. There is a separate method for this setting, which is run at the start of the simulation.
• Setting of the parameters of testing equipment: The setting of the parameter values of the testing operations was performed on the basis of the values of Table 6. There is a separate method for this setting, which is run at the start of the simulation.
• Setting of the running time of the simulation model: In the simulation framework, a 24-hour running time was set.
• Setting of the preparation of indicators: The values of the indicators to be prepared can be determined from the statistical data belonging to the packaging equipment and the drain objects.
• Testing: We tested the examination model together with corporate professionals.
The placing of the objects, the establishment of the connections between them and the setting of the parameters necessary for the running were in accordance with the plans; thus the simulation program is suitable from the aspect of the examination.

Running, Setting of the Indicator(s)

From the data collected in the course of running the simulation model, the factors necessary for making the development decision(s) were created.

• Performance values determined at different packaging times: The results of running the simulation program at different packaging operation times are shown in Table 7 and Figure 9 (performance of the testing process as a function of packaging time).

Decision about the Implementation of the Development Plan

Based on the simulation study carried out on the developed improvement plan, it can be stated that the performance of the system meets the expectations. For process II, a packaging machine with an operating time of at least 24 s/unit is required, and for process III one with at least 25 s/unit.

Implementation of the Development Plan

Planning and implementation of the approved development concepts.

Summary

This paper highlights the fact that, with the appropriate establishment of logistics processes, we can make significant steps towards sustainability by using renewable resources and/or by reducing environmental impact. As a result of the digitalization hallmarked by the fourth industrial revolution, more and more accurate data become available, thus the application of simulation modelling technology is becoming more and more effective. We described that the replanning of the testing process of electronic products can become necessary in several cases (e.g., significant changes in legislative provisions or in the quantity of products to be tested in the future); thus, the significance of the application of parameterizable simulation examination models is increasing. To our knowledge, the domestic and international literature has not dealt with the method of establishing these kinds of simulation examination processes, so we have tried to fill the related gap. We have developed a simulation examination method by following whose steps the decisions related to the testing process of electronic products can be safely underpinned. We presented a case study, which included the steps of implementation of the simulation examination related to the testing process of refrigerators. With the spread of digitalization, the application of digital twin technology will be prioritized in the field of the testing of electronic products; due to this, we will be able to support development decisions by analyzing real-time data.

Funding: This research was funded by the "Younger and Renewing University - Innovative Knowledge City - institutional development of the University of Miskolc aiming at intelligent specialization" project, grant number EFOP-3.6.1-16-00011.

Conflicts of Interest: The authors declare no conflicts of interest.
Salidroside alleviates oxidative stress in the liver with non-alcoholic steatohepatitis in rats

Background

Nonalcoholic steatohepatitis (NASH) is characterized by fat accumulation in hepatocytes, inflammation, liver cell injury and varying degrees of fibrosis, and can lead to oxidative stress in the liver. Here, we investigated whether Salidroside, a natural phenolic antioxidant product, can protect rats from liver injury during NASH.

Methods

The NASH model was established by feeding male SD rats a high-fat and high-cholesterol diet for 14 weeks. Four groups of male SD rats were studied: a normal diet control group, a NASH model group, and two Salidroside treatment groups receiving 150 mg/kg and 300 mg/kg, respectively. Salidroside was given by oral administration to NASH rats from week 9 to week 14. At the end of week 14, liver and serum were harvested, and liver injury, oxidative stress and histological features were evaluated.

Results

NASH rats exhibited significant changes in the following parameters as compared to normal diet control rats: fat droplets with foci of inflammatory cell infiltration appeared in the liver; ALT and AST in serum and TG and TC in hepatocytes were elevated; and expression of the oxidative-stress-responsive genes CYP2E1 and Nox2 increased. Additionally, the NASH model decreased the antioxidant defenses SOD, GSH, GPX and CAT in the liver, owing to their rapid depletion after battling against oxidative stress. Compared to the NASH model group, treatment of rats with Salidroside effectively reduced lipid accumulation and inhibited liver injury in a dose-dependent manner. Salidroside treatment restored antioxidant enzyme levels and inhibited the expression of CYP2E1 and Nox2 mRNA in the liver, which prevented the initial step of generating free radicals in NASH.

Conclusion

The data presented here show that oral administration of Salidroside prevented liver injury in the NASH model, likely through exerting antioxidant actions to suppress oxidative stress and the free-radical-generating enzymes CYP2E1 and Nox2 in the liver.

Background

Nonalcoholic steatohepatitis (NASH) is the progressive form of nonalcoholic fatty liver disease (NAFLD), and features of NASH on liver biopsy include steatosis, inflammation and varying degrees of fibrosis [1]. NASH is associated with obesity, type 2 diabetes and the metabolic syndrome, and its prevalence and clinical severity are increasing; thus it is quickly becoming a significant public health concern [2]. While a variety of factors are involved in NASH development and pathogenesis, it is well accepted that the development of NASH follows a two-hit model [3]. The "1st hit" involves excess lipid accumulation in the liver, which sensitizes the liver to the "2nd hit"; the "2nd hit" involves inflammation, oxidative stress, liver damage and fibrosis. While this two-hit hypothesis is helpful in understanding the processes that contribute to the development and progression of NASH, the risk factors and underlying cellular and molecular mechanisms of NASH development remain largely undefined, which has limited the development of therapies to prevent/treat NAFLD/NASH. Oxidative stress is thought to be a major contributor to the pathogenesis and progression of NASH [4]. Oxidative stress has been defined as an imbalance between oxidants and antioxidants in favor of the former, resulting in an overall increase in cellular levels of reactive oxygen species (ROS) [5].
In patients with histopathologically progressive NASH, the production of antioxidants is reduced, and the total antioxidant capacity is apparently insufficient to compensate for oxidative stress [6]. Therefore, it is speculated that agents, such as vitamin E, that promote cellular antioxidant defense activity are likely to have therapeutic potential in NASH prevention/treatment [7]. Salidroside (SDS, p-hydroxyphenethyl-β-D-glucoside) is a natural phenolic secondary metabolite from Rhodiola rosea L., which has been used as a herbal medicine for centuries. Salidroside, in particular, has potent antioxidant activities [8,9]. Moreover, recent studies have shown that SDS has great protective efficacy in liver disease via its antioxidant activities [10-12]. However, whether SDS can provide protection against NASH with oxidative stress remains unknown. Therefore, in this study, we used a liver oxidative stress model induced by a high-fat and high-cholesterol (HFHC) diet in rats and evaluated the protective effects of SDS against NASH.

Methods

Male SD rats, weighing 140-160 g and 6 weeks of age, were provided by the Laboratory Animal Center at Dalian Medical University. Rats were housed individually. A normal laboratory diet and water were available ad libitum. The animal room was maintained at a constant temperature of 23 ± 1°C and 50 % relative humidity with a 12 h (7:00 a.m.-7:00 p.m.) light/dark cycle. Food was removed the night before the experiment. All experimental procedures were examined and approved by the Animal Care and Use Committee of Dalian Medical University (Dalian, China) and performed in strict accordance with the People's Republic of China Legislation Regarding the Use and Care of Laboratory Animals. After 1 week on the normal diet, 40 animals were randomly divided into 4 groups: normal diet control rats (n = 10), fed a normal diet for 14 weeks; the NASH model group (n = 10), fed the HFHC diet for 14 weeks; and SDS-treated rats (two groups, n = 10 each), fed the HFHC diet for 14 weeks with daily oral feeding of SDS at 150 mg/kg or 300 mg/kg, respectively, by intragastric (i.g.) gavage from week 9 to week 14. At the end of week 14, all rats were anesthetized 24 h after the last treatment. Blood was collected by cervical decapitation and centrifuged at 1500 g for 20 min at 4°C to obtain serum. Liver tissue was homogenized in ice-cold PBS buffer and centrifuged at 1800 g for 10 min at 4°C to precipitate the insoluble material, and the supernatant was used in the following assays.

Serum biochemical tests

Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) in serum were used as indicators of hepatocyte function and injury. ALT and AST levels were measured with a Bayer 1650 automatic analyzer (Germany).

Triglyceride and total cholesterol detection in hepatocytes

For the determination of total cholesterol and triglycerides, liver samples (100 mg) were homogenized and lipids were extracted with 2 ml of chloroform and methanol (2:1), as described by Folch et al. [14]. Lipids were dissolved in 2 % Triton X-100 (Sigma, St. Louis, MO), as described by Carr et al. [15]. Hepatic triglyceride and cholesterol levels were determined using commercially available reagents.
Lipid peroxidation and antioxidative enzyme activity in the liver

Malondialdehyde (MDA), glutathione (GSH) and the enzymatic activities of superoxide dismutase (SOD), glutathione peroxidase (GPX) and catalase (CAT) in the liver were measured using commercial testing kits according to the manufacturers' instructions. MDA and GSH levels are expressed as nmol/mg protein and mg/g protein, respectively. The enzymatic activities of SOD, GPX and CAT are expressed as U/mg protein.

RNA isolation and quantitative real-time PCR analysis

CYP2E1 and Nox2 mRNA expression was tested by quantitative real-time PCR. Total RNA was extracted from liver tissue samples with the TRIzol kit (Gibco/Life Technologies) according to the manufacturer's protocol. The RNA was then reverse-transcribed to cDNA using SuperScript II (Invitrogen), and the target genes were amplified using Power SYBR Green PCR Master Mix reagent (Applied Biosystems). The amplification was performed in a real-time PCR system (Applied Biosystems) with a modified protocol as follows: initial denaturation at 95°C for 2 min, followed by 35 cycles at 95°C for 30 s and 60°C for 30 s. The housekeeping gene β-actin was used as an internal control, and gene-specific mRNA expression was normalized against β-actin expression. Relative quantification by the 2^-ΔΔCT method was realized by comparison to the control groups (a brief worked sketch of this calculation is given below). The primer sequences used for real-time PCR: CYP2E1, GenBank ID NM_031543, forward GACTGTGGCCGACCTGTT, reverse ACTACGACTGTGCCCTTGG; Nox2, GenBank ID NM_001128123, forward TCAAGTGTCCCCAGGTATCC, reverse CTTCACTGGCTGTACCAAAGG; β-actin, forward TGTCACCAACTGGGACGATA, reverse AACACAGCCTGGATGGCAC.

Statistical analyses

Differences among groups were examined by one-way ANOVA followed by Tukey-Kramer multiple comparison tests. Values are expressed as the mean ± SD. A value of P < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS software (version 11.0, SPSS, Inc.).

Treatment with SDS prevents NASH liver injury and steatosis

To study the protective effects of SDS, we established an HFHC-diet-induced NASH rat model. As shown in Fig. 1, liver steatosis affecting a large number of hepatocytes, with foci of inflammatory cell infiltration throughout the lobule (arrow), was observed in H&E-stained liver sections from HFHC diet rats, whereas liver sections from normal control rats were normal (Fig. 1a, b). Significantly increased serum ALT and AST (Fig. 2a, b) and liver TG and TC (Fig. 2c, d) were also detected in HFHC diet rats. These results indicate that 14 weeks of HFHC diet feeding sufficiently induced NASH in rats. Importantly, compared to the NASH model group, a dramatic reduction both in lipid droplets and in the inflammatory infiltration was detected in the livers of the SDS-treated groups (Fig. 1c). Consistent with the observations from H&E staining, the average NASH scores were significantly reduced in SDS-treated rats (Fig. 1d). In addition, the levels of ALT and AST in the serum and of TG and TC in hepatocytes also decreased in the SDS-treated groups compared with those in the NASH model group (Fig. 2a-d). These results demonstrate that SDS is a potential agent for protecting rats from HFHC-diet-induced NASH.
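As referenced in the Methods above, the fold changes reported here rest on the 2^-ΔΔCT calculation. A minimal sketch of that arithmetic follows; the Ct values are hypothetical placeholders, not measured data from this study.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method:
    ddCt = (Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_control."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for CYP2E1 normalized to beta-actin:
print(fold_change(22.1, 17.0, 24.3, 17.1))  # ≈ 4.3-fold increase vs control
```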
Treatment with SDS prevents NASH liver injury and steatosis

To study the protective effects of SDS, we established an HFHC diet-induced NASH rat model. As shown in Fig. 1, liver steatosis affecting a large number of hepatocytes, with foci of inflammatory cell infiltration throughout the lobule (arrow), was observed in H&E-stained liver sections from HFHC diet rats, whereas liver sections from normal control rats were normal (Fig. 1a, b). Significantly increased serum ALT and AST (Fig. 2a, b) and liver TG and TC (Fig. 2c, d) were also detected in HFHC diet rats. These results indicate that 14 weeks of HFHC diet feeding was sufficient to induce NASH in rats. Importantly, compared with the NASH model group, a dramatic reduction both in lipid droplets and in inflammatory infiltration was detected in the livers of the SDS-treated groups (Fig. 1c). Consistent with the H&E staining observations, the average NASH scores were significantly reduced in SDS-treated rats (Fig. 1d). In addition, the levels of ALT and AST in serum and of TG and TC in the liver also decreased in the SDS-treated groups compared with the NASH model group (Fig. 2a-d). These results demonstrate that SDS is a potential agent for preventing HFHC diet-induced NASH in rats.

SDS suppresses liver injury-induced oxidative stress mediators

An imbalance between oxidative and antioxidative stress responses is well accepted as a critical pathogenic factor in NASH development. As SDS has antioxidative activity, we speculated that SDS may protect rats from NASH through this antioxidative function. To test this hypothesis, we analyzed the expression of antioxidant defenses. HFHC diet feeding resulted in a significant increase in MDA level and a large decrease in the antioxidant defenses, including SOD, GSH, GPX, and CAT, in the liver as compared with the normal controls (Table 1). Similarly, the mRNA levels of CYP2E1 (Fig. 3a) and Nox2 (Fig. 3b) were increased in the livers of HFHC diet rats. Interestingly, compared with the untreated HFHC diet group, SDS treatment significantly reduced MDA and increased SOD, GSH, GPX, and CAT levels in a dose-dependent manner. Further analysis indicated that SDS dose-dependently suppressed CYP2E1 and Nox2 mRNA expression in the liver. These results therefore suggest that SDS can protect rats from NASH by inhibiting hepatic oxidative stress.

Discussion

Our preclinical studies demonstrated that SDS protects rats from HFHC diet-induced NASH by suppressing oxidative stress-induced liver damage, as evidenced by the findings that SDS significantly reduced the elevated MDA and restored the suppressed antioxidant defenses, such as SOD, GSH, GPX, and CAT, in the NASH-injured liver. At the same time, SDS significantly reduced the elevated CYP2E1 and Nox2 mRNA expression in the liver. These results suggest that SDS can protect the liver from NASH-induced injury, most likely through the inhibition of oxidative stress mediators. In this experiment, rats were treated by oral administration of salidroside from week 9 to week 14. By the ninth week, mild to moderate pathology should already be established; therefore, salidroside appears able to reverse pathology even after it has developed. Oxidative stress is closely associated with NASH, and antioxidants can ameliorate the development of NASH [16]. The current work revealed that SDS rendered the increase in CYP2E1 less pronounced in the NASH-injured rat liver. The CYP2E1 enzyme is a hepatic cytochrome P450 isoform that creates free radicals during phase I metabolism [17]. CYP2E1 is critically important in NASH development, promoting oxidative stress and inflammation. CYP2E1 overexpression results in increased oxidative and nitrosative stress in a mouse model of non-alcoholic fatty liver [18], whereas CYP2E1-null mice are protected against NASH progression [19]. The fact that SDS can prevent the upregulation of the cytochrome P450 enzyme CYP2E1 suggests that SDS exerts hepatoprotection by acting early in the process of oxidative stress, which is probably capable of blocking the entire cascade that leads to liver injury and inflammation. However, the precise cellular and molecular mechanisms by which SDS binds to targets upstream of CYP2E1 remain to be elucidated. The results of the current study also revealed that SDS treatment reduced the upregulation of Nox2 mRNA expression in the NASH rat model. Nox2 is a membrane-bound enzyme complex that has been shown to be involved in cellular respiratory bursts and free radical production in a variety of cells, including hepatocytes [20]. Nox2-derived reactive oxygen species (ROS) may be involved in the activation of inflammatory apoptotic pathways, and NOX2-generated oxidative stress is associated with the severity of liver steatosis in patients with non-alcoholic fatty liver disease [21].
For this reason, Nox2 has been proposed as a potential therapeutic target for reducing ROS-related injury, such as ischemia-reperfusion-associated liver injury. It has been reported that the NO donor KMUP-1 improves hepatic ischemia-reperfusion and hypoxic cell injury by inhibiting Nox2- and reactive oxygen species (ROS)-mediated inflammation [22]. The CYP pathway is known to be coupled with the NOX pathway [23,24]. The increased expression of CYP2E1 leads to the generation of more free electrons, which is coupled with the conversion of NADPH to NADP+ via Nox2 and/or Nox4. The CYP2E1 reaction cycle produces ROS as a result of uncoupling of the reaction. In addition, Nox2 and Nox4 may promote the recycling of NADP+ to produce superoxides and peroxides, which can further generate peroxides and ROS through Fenton chemistry. These CYP2E1/NOX2-coupled reactions increase caspase-3 activity, induce DNA fragmentation, and ultimately result in apoptosis in liver tissue [20], accompanying NASH-induced steatosis in the liver. Thus, the Nox2 pathway is another therapeutic target for diseases that involve oxidative stress [25,26]. The current results demonstrated that SDS can suppress the increased CYP2E1 and Nox2 expression, suggesting that SDS may exert its hepatoprotective effect through inhibition of the CYP2E1/Nox2 coupling reaction, reducing oxidative stress and ameliorating liver injury caused by NASH. This is the first report to show that SDS prevents NASH via this mechanism of action.

Table 1 footnote: Data are mean ± SD, n = 10 per group; *P < 0.01 vs. control group; **P < 0.05, ***P < 0.01 vs. NASH model group.

Fig. 3: Dose-dependent inhibitory effects of SDS (150 mg/kg and 300 mg/kg) on hepatic CYP2E1 (3a) and Nox2 (3b) mRNA expression in livers with NASH induced by a high-fat, high-cholesterol diet in rats. (Data are mean ± SD, n = 10 per group. ##P < 0.01 compared to normal control; *P < 0.05, **P < 0.01 compared to NASH model.)
3,311
2016-04-14T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
A Survey Study on Relation Extraction for Web Pages

Natural language is the language humans use to communicate, and Natural Language Processing (NLP) helps machines understand it. Natural language text on web pages contains many semantic relations between entities, and discovering significant types of relations from the web is challenging because of its open nature. In this paper we survey several important types of semantic relations. The paper also covers relation extraction (RE) approaches, which are divided into the supervised approach, comprising feature-based and kernel-based methods, and the unsupervised approach. Three relation extraction algorithms are discussed: Support Vector Machine (SVM), the genetic algorithm, and the Naive Bayes classifier. This survey should be useful for three kinds of readers: first, newcomers to the field who want to learn quickly about relation extraction; second, researchers who want to know how the various relation extraction techniques have developed over time; and third, practitioners who need to know which RE technique works best in different settings.

1-Introduction:

With the growth of the World Wide Web, increasing amounts of information, text, and knowledge are available in digital archives, and web content has mostly been kept in HTML, "Hyper Text Markup Language" [1]. In this form the web is oriented toward human use, because content is displayed as syntax-based HTML. Query ambiguity reduces HTML retrieval quality; for example, "bank" may denote the border of a body of water or a monetary establishment. Web pages carry additional information, such as HTML tags, hyperlinks, and anchor text, alongside the regular text content visible in a browser; these characteristics are useful for classification [2]. There has been increasing demand for "Information Extraction" (IE), which recognizes relevant information (usually of predefined types) from text documents on a specific subject and gathers it in a structured format [3]. One of the purposes of relation extraction is to identify named entities and to extract the relationships between entities and events [4]. Relation extraction is defined as the process of discovering and describing the "semantic relations" between entities in text [5]. Most relation extraction algorithms begin with some linguistic analysis, parsing the text to find relations directly from the sentences [6]. The relation extraction system in (Figure 1), which is inspired by [7], takes as input the text in a document and produces a list of (entity, relation, entity) triples as its output.
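The pipeline of Figure 1 (text in, a list of (entity, relation, entity) triples out) can be illustrated with a minimal dependency-parse sketch. The code below is a toy version of the linguistic-analysis step described above, not any of the surveyed systems; it assumes spaCy and its small English model ("en_core_web_sm") are installed.

```python
# Toy sketch of the Figure 1 pipeline: text in, (entity, relation, entity) triples out.
# Uses a dependency parse to pull subject-verb-object triples; illustration only.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed to be installed

def extract_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
                objects = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("Obama visited Berlin. The company acquired a startup."))
# e.g. [('Obama', 'visit', 'Berlin'), ('company', 'acquire', 'startup')]
```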
2-Data Source:

This survey reviews web documents that derive their information from several sources, such as Wikipedia, ACE RDC 2003 and 2004, social networks (Twitter and Facebook), the Clueweb09 dataset, MEDLINE, the PharmGKB database, and PubMed. A web document can be:

2.1 XML document. "eXtensible Markup Language" is a standard format used to share and transfer information in different fields, because it can carry the content of logical structures within documents and is platform-independent [8].

2.2 HTML document. Hypertext Markup Language (HTML) is the standard markup language for producing web pages and web applications [9]. A document may contain many links, a technical text, or a short answer to a specific question [10].

3-Text relations

Text relations are the relations between the words in a sentence; they can be syntactic, lexical, or semantic. A syntactic relation describes how words are grouped and connected to each other in a sentence [11], while a lexical relation is a pattern of association that exists between lexical units in a language [12].

3.1-Semantic Relations

The primary aim of recent research is to extract relevant documents. As the web develops into its next generation, called the "Semantic Web" [13], attention will move from looking for documents to obtaining facts and useful information [12]. The increasing ability to find information in the form of entities contained within documents makes extracting the relations between these entities an important goal [14]. Relationships are fundamental to semantics because they join meanings to words, terms, and entities [15]. The main types of word semantic relationships are the following:

• Synonyms: a synonymy relation holds between words with the same or nearly the same meaning in the same language [16].

• Antonyms: words with opposite meanings, such as good/bad; opposites can also be formed by adding the prefixes un-, il-, im-, in-, ir- to words, as shown in Table 1 [6].

• Metonyms: words used in place of another word with which they have a strong relation (Figure 4).

• Hyponym and hypernym: the term hyponym means a subcategory of a more general class, as in the relationship between "dog" and "animal", while hypernymy is the state of being a hypernym, or superordinate (a general class under which a set of subcategories is subsumed) (Figure 5) [17].

• Polysemy: a word, phrase, or concept that has more than one meaning or connotation (Figure 6) [18]. For example, "paper" may refer in one sentence to a piece of paper, in another to a research paper, and in a third to a newspaper.
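The relation types above are catalogued in lexical resources such as WordNet and can be queried programmatically. The following minimal sketch uses NLTK's WordNet interface; it assumes NLTK is installed and the wordnet corpus has been downloaded, and is illustrative only.

```python
# Minimal sketch querying the semantic relations discussed above from WordNet.
# Assumes NLTK is installed and the corpus fetched via nltk.download("wordnet").
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog")[0]                                  # first sense of "dog"
print("synonyms:", dog.lemma_names())                       # words sharing this sense
print("hypernyms:", [s.name() for s in dog.hypernyms()])    # more general classes
print("hyponyms:", [s.name() for s in dog.hyponyms()][:5])  # more specific classes

good = wn.synsets("good", pos=wn.ADJ)[0].lemmas()[0]
print("antonyms of 'good':", [a.name() for a in good.antonyms()])  # e.g. ['bad']

# Polysemy: one surface form, several senses.
print("number of senses of 'paper':", len(wn.synsets("paper")))
```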
4-Relation Extraction (RE)

The aim of relation extraction is to discover semantic relations between entities [19]. This is challenging in the open domain of the web: an RE system must be able to deal with very large and rapidly growing scale, multiple styles of documents, and many more types of relations than exist in closed settings [20]. To find these relations, a system should not expect a specific set of relation types, nor rely on a rigid set of relation argument types; it must also be able to handle huge volumes of data efficiently [21]. Large amounts of hand-labeled data are needed when supervised learning algorithms are used, but annotating training data is an undesirable and time-consuming job [22]. On the web, manually labeling data for each subject area is impractical, because the number of subjects of interest is simply very large. Relation extraction with automated labeling is called "unsupervised relation extraction" [23].

4.1-Supervised Relation Extraction Approach

Supervised approaches concentrate on relation extraction in a particular area. These approaches need labeled data in which each mentioned entity pair is labeled with one of the pre-defined relation types [24].

Feature-Based Approach. Feature-based methods look for useful lexical features, syntactic structure features, and so on, as shown in Table 2. In Lishuang Li et al. [25], the cost of the prediction phase when combining feature-based and kernel-based calculation is lower than in the other methods, but the computational cost of the training phase is higher. The feature-based approach is an excellent method for extracting the logical structures of HTML tables and moving them into XML documents; Yeon-Seok & Yeon-Seok [8] do this using area segmentation and structure analysis algorithms, as well as a semantic coherency feature. Bonnie & Gaasterland [26] use a feature-based approach to identify the tense of sentences from Penn Treebank parse-tree tags; their work extracts, reanalyzes, and reinterprets both temporal and non-temporal relations between two events.

Kernel-Based Approach. Kernel-based approaches compare the structure of two patterns using the syntax tree, from the top node ("root") to the lowest node ("child"). This approach still has restrictions in measuring patterns of multiple types, which decreases its performance on new relation extraction. The main advantage of kernel-based methods is that explicit feature engineering is avoided [27], as shown in Table 3. The framework of Zhang et al. [28] exploits "trigger words" as a semantic constraint to guide the "bootstrapping iterations". It extends work on the usual bootstrapping model for relation extraction by constructing a novel way of defining trigger words, pattern representation, similarity measurement, and evaluation. Furthermore, a novel "bottom-up kernel" algorithm was defined to determine whether the pattern resulting from a new sentence is in relation form or not. Maengsik & Harksoo [29] use an SVM with a kernel-based approach on social network data to identify named entities. Zhou et al. [3] combine different types of syntactic and semantic information into one tree structure and capture these varieties via a novel context-sensitive convolution tree kernel.

4.2-Unsupervised Relation Extraction Approach

This refers to the task of automatically finding interesting relations between entities in large text corpora [30], as shown in Table 4. Ya-nan et al. [4] used a proposed "statistical score S" to measure the association between strongly related events and to prune relations with low S values. Ying et al. [31] investigated social networks using unsupervised feature-based extraction of named-entity features in a disambiguation system; the main advantage is that the unsupervised features, collected from broad resources, can effectively improve the robustness of a disambiguation system. Bonan et al. [21] used an algorithm that handles the polysemy of relation instances on the Clueweb09 dataset and achieved a significant improvement in recall while maintaining the same level of precision. Yulan et al. [30] worked on Wikipedia; their method can abstract away from the different surface realizations of text in which relations are expressed, namely the different "dependency structures" with redundant information arising from the growing number of web pages.

5-Relation Extraction Algorithms

In this section three algorithms (Support Vector Machines, the genetic algorithm, and the Naive Bayes classifier) are discussed for relation extraction.

5-1 Support Vector Machines (SVM)

The Support Vector Machine is a vector-space-based machine-learning method used to find a decision boundary between two classes that lies as far as possible from any point in the training data. Apart from performing linear classification, SVMs can perform non-linear classification efficiently using what is called the "kernel trick", implicitly mapping their inputs into high-dimensional feature spaces [32]. Table 5 illustrates the different uses of SVM in relation extraction.
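As a concrete illustration of the supervised, feature-based setting just described (lexical features feeding a linear SVM), here is a minimal sketch. The tiny inline dataset and relation labels are invented purely for illustration; real systems use far richer lexical and syntactic features.

```python
# Minimal sketch of a feature-based supervised relation classifier:
# lexical n-gram features from the entity-pair context, fed to a linear SVM.
# The tiny dataset and relation labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Each example is the sentence context around an entity pair, labeled with
# one of the pre-defined relation types (the supervised setting of Section 4.1).
contexts = [
    "PER was born in LOC",
    "PER grew up in LOC",
    "ORG is headquartered in LOC",
    "ORG opened an office in LOC",
    "PER works for ORG",
    "PER was hired by ORG",
]
labels = ["born_in", "born_in", "located_in",
          "located_in", "employed_by", "employed_by"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(contexts, labels)
print(clf.predict(["PER joined ORG", "ORG has its base in LOC"]))
```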
Bonan & Ralph [19] found that "one-pass annotation" is more cost-effective than annotation with effective assurance, while Zhou et al. [33] found benefit in correctly unifying multiple types of syntactic and semantic information into one tree structure and capturing such differences via a good context-sensitive convolution tree kernel.

5-2 Genetic Algorithm (GA)

Christy & Thambidurai [34] show that the genetic algorithm performs well in mining rules and optimizing the features of a text. The authors of [35] deploy a genetic algorithm and obtain high precision but low recall; they combine the benefits of ML algorithms with "rule-based" techniques to find related Arabic named entities. Each algorithm used a linguistic module to produce significant results against the previous one, but the method is unable to capture some of the relations between words that are far from the named-entity locations, especially in long and complex sentences. Table 6 illustrates the use of GA in relation extraction.

5-3 Naive Bayes classifier

The Naive Bayes classifier is a method that learns from both annotated and unannotated documents in a "semi-supervised" algorithm. Suresh & Kumar [36] applied the Naive Bayes classifier to Q/A systems using "lexico-syntactic and lexico-semantic features", reaching high precision and recall (the ideal case).

6-Evaluation Metrics

A common, well-motivated way of evaluating the results of machine learning experiments is to use recall, precision, and the F1-measure [37]. Precision, as shown in equation (1), is the percentage of correctly retrieved items among all retrieved items [38]; a good system produces high precision in retrieving correct items [39]:

Precision = correct retrieved items / total retrieved items. (1)

Recall, on the other hand, is the percentage of correctly retrieved items among the total number of correct items, as computed in equation (2); a higher recall rate indicates fewer missed correct items [40]:

Recall = correct retrieved items / total correct items. (2)

Finally, the F1-measure is the harmonic mean of precision and recall, as in equation (3); the F-measure is popular because in many studies it is the best single measurement of classifier performance [40]:

F1 = 2 × Precision × Recall / (Precision + Recall). (3)

Table 7 illustrates the evaluation metrics for the different algorithms that have been used in relation extraction to extract a specified feature for a given application.
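The metrics in equations (1)-(3) reduce to a few lines of code; the counts in the sketch below are invented for illustration.

```python
# Minimal sketch of the evaluation metrics in equations (1)-(3).
# The counts below are invented for illustration.
def precision(correct_retrieved, total_retrieved):
    return correct_retrieved / total_retrieved       # equation (1)

def recall(correct_retrieved, total_correct):
    return correct_retrieved / total_correct         # equation (2)

def f1(p, r):
    return 2 * p * r / (p + r)                       # equation (3): harmonic mean

p = precision(correct_retrieved=80, total_retrieved=100)  # 0.80
r = recall(correct_retrieved=80, total_correct=120)       # ~0.67
print(f"precision={p:.2f}  recall={r:.2f}  F1={f1(p, r):.2f}")  # F1 ~ 0.73
```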
Conclusion

This survey paper discussed the importance of relation extraction techniques in the field of natural language processing, the different approaches widely used for the relation extraction task, and the evaluation metrics. It is apparent that the Naive Bayes classifier, using "lexico-syntactic and lexico-semantic features", gives the best evaluation measures, near the ideal case. On the other hand, it is very important to reduce the time needed to extract web relations accurately without losing efficiency. The use of pattern-based methods with local dependency trees increases the accuracy and recall of the event-argument extraction process. Supervised approaches, for their part, can do well when the domain is more restricted, while unsupervised approaches appear more appropriate for unrestricted-domain relation extraction systems, because they can simply grow with the database size and scale to new relations easily. Rule sets take advantage of sentence structure and grammar to capture more specific information; moreover, these rule sets can be placed in an ontology that allows modification of relationships and inference over them [41]. This work suggests that future work in this area could apply fuzzy logic, which is a principal component of soft computing.
3,089.8
2020-03-01T00:00:00.000
[ "Computer Science" ]
Dipole-Dipole Non-Radiative Energy Transfer Mediated by Surface Plasmons on a Metallic Interface

We have investigated the surface plasmon-mediated energy transfer between two optically active ions in vacuum near a metallic surface using methods of molecular quantum electrodynamics. We have studied the electric dipole-electric dipole energy transfer process only, this being the dominant mechanism of interionic interaction between two ions in a medium when their wavefunctions do not significantly overlap. The matrix elements for energy transfer, and hence the energy transfer rates, are calculated using two classes of Feynman diagrams. The intermediate states for one class of diagrams do not satisfy the energy conservation principle, hence they are purely virtual states. Of particular interest are the dependencies of the energy transfer process on (1) the relative positions of the ions with respect to one another projected onto the interface and (2) the distance of each ion from the metal surface. The overall energy transfer process has been found to have both short-range and long-range components, the former being driven by virtual plasmons and the latter by real plasmons in a non-lossy medium. © The Author(s) 2019. Published by ECS. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY, http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse of the work in any medium, provided the original work is properly cited. [DOI: 10.1149/2.0121902jss]

Recently there has been considerable interest in utilizing surface plasmons in metallic thin films to improve the overall extraction efficiency of luminescent materials [1][2][3][4][5][6][7][8][9][10][11][12] and light-emitting diodes (LEDs). [13][14][15][16][17][18][19][20][21][22] The presence of surface plasmons could contribute additional modes to which the emitting ions could radiate, thus affecting their radiative lifetime. In a recent publication, Mishra et al. 23 discussed this aspect of the interaction between emitting systems and two-dimensional surface plasmon modes. In the present paper, we address another important process of luminescence, namely, the process of energy transfer [24][25][26][27] between two optical ions in the presence of surface plasmons. It is shown that, similar to the electromagnetic fields associated with radiation, the electromagnetic fields caused by the surface plasmons can facilitate the energy transfer process. Specifically, we have formulated transition rates for nonradiative energy transfer between two ions in vacuum near a metallic thin film by using methods of molecular quantum electrodynamics 28 to describe a surface plasmon-mediated coupling scheme between two optical ions. Non-radiative energy transfer between two optical ions (Figure 1) is one of the most extensively studied phenomena in luminescence. It is also a topic of considerable importance for designing novel phosphor systems. [24][25][26] The process can be described as follows: optical ion A is in an excited state due to absorption of a photon. The energy is then transferred resonantly from ion A to a nearby ion, B. At the end of the transfer process ion B is found in an excited state and ion A has returned to a lower energy state. If this energy transfer process is caused by the emission and subsequent reabsorption of a real photon, it is a radiative transfer process.
When the energy transfer is mediated by a virtual photon, a messenger particle that cannot be directly observed, it is known as a non-radiative energy transfer process. In this paper, this resonant, nonradiative energy transfer process is mediated by virtual plasmons associated with the surface plasmon waves at the interface of a metallic layer and a dielectric medium. For simplicity, we have assumed the dielectric medium to be vacuum in developing a quantitative theory of surface plasmon-mediated nonradiative energy transfer. The theory developed here can be easily extended to any dielectric medium in which the optical ions are embedded. Moreover, any broadening of the energy levels, including that which occurs in solids, and its consequence for energy transfer rates have been rigorously treated in the present work. Since the problem will be formulated in a manner similar to electron-radiation interaction, we first examine the underlying theory of the non-radiative energy transfer process via coupling of the radiation field with the electrons in an optical ion. Dexter 29 developed the first comprehensive theory of the non-radiative energy transfer process in a classic paper in 1953, considering the Coulomb interaction between the two electrons involved in the energy exchange process as the perturbation driving the process. The interaction Hamiltonian is then obtained in terms of the electrostatic multipole-multipole interaction between the two electrons as a function of the interionic distance, R_AB, by a Taylor series expansion. First-order time-dependent perturbation theory was used to calculate the probability of a transition between an initial composite state, described by ion A being in the excited state and ion B in a lower-energy (ground) state, and a final state in which A is in the ground state and B is in the excited state. These composite states describe the ion pair before and after the energy transfer. The perturbing interaction is assumed to be instantaneous. Subsequently, the energy transfer process is described in terms of electric dipole (ED)-ED, ED-electric quadrupole (QD), and QD-QD interactions between the electrons, in analogy to the static interaction between two charge distributions. A more elegant formulation of the resonance interaction using the ED-ED interaction can be found in Craig and Thirunamachandran 28 within the framework of molecular quantum electrodynamics. This method can be extended to describe energy transfer processes caused by ED-QD, QD-QD, magnetic dipole (MD)-ED, and MD-MD interactions using a higher-order approximation of the spatial variation of the vector potential, a, describing the radiation field near the optical ions. This approach was later adopted to describe ED-MD, ED-ED, and MD-MD interactions. 30,31 In the present manuscript, we extend this formalism to describe an energy transfer process caused by the fields due to surface plasmons. The adaptation of molecular quantum electrodynamics is possible for any divergence-free electric field associated with a propagating wave. 28 This is indeed the case for plasmons associated with a propagating surface wave in the interface region between a metal and vacuum. In order to appreciate this assertion one has to understand how the electric multipolar coupling with the radiation field occurs. Each ion senses the radiation field through its electric or magnetic multipoles.
The energy transfer process between the two ions proceeds by an exchange of virtual photons which, by their creation and subsequent annihilation, couple the donor decay and acceptor excitation processes over a distance too large for any direct overlap of the wave functions of the electrons. The plasmon-mediated process is similar: the electric field associated with surface plasmons will be sensed by the electric multipoles, and the coupling of the decay and excitation processes occurs through the creation of virtual surface plasmons at one site and their annihilation at the other. Apart from the fact that the surface plasmons are excited in the plane of the interface, the energy transfer process mediated by plasmons is similar to that mediated by photons. In order to calculate the transition rate for energy transfer, one needs the quantized electromagnetic fields associated with surface plasmons. Fortunately, Archambault et al. 32 have already accomplished this for a lossless medium. These quantized fields have been used in this work to calculate the transition probability for energy transfer from ion A to ion B. Only the case of ED-ED interaction has been studied; an extension to higher-order multipoles could be done using a similar approach.

Microscopic Fields Associated with Surface Plasmons

The fields associated with surface plasmons at an interface between vacuum and metal have been extensively studied in the literature. 33 We describe the field vectors by e, b, d and h; e and b denote the electric and magnetic vectors, and the auxiliary fields are denoted by d and h. They are given by d = ε_0 ε e and h = b/(μ_0 μ), where ε_0 and μ_0 represent the electric permittivity and magnetic permeability of free space, respectively. The dimensionless dielectric constant and relative permeability of a homogeneous medium are denoted by ε and μ. In regions free of any charge and current, Maxwell's equations are given by Eq. 3 - Eq. 6. For a nonmagnetic medium, Eq. 6 can be expressed in the form of Eq. 7, where v and c represent the velocities of the electromagnetic waves in a homogeneous medium and in vacuum, respectively, and their dependence on ε, ε_0 and μ_0 is implicit in Eq. 7. The electric and magnetic fields can be expressed in terms of the vector potential, a, and the scalar potential, φ, as e = −∂a/∂t − ∇φ [8] and b = ∇ × a [9]. Since the electric field is divergence-free, we choose the scalar potential to vanish everywhere, φ = 0 [10]. Additionally, we choose the Coulomb gauge, ∇ · a = 0 [11], in order to quantize the field and use the quantum electrodynamics formalism for energy transfer between two ions discussed earlier. The wave equations are then given by Eq. 12 - Eq. 14, and the velocity, v, in the medium is given by

v = (ε_0 μ_0 ε)^(−1/2). [15]

Now we consider the field associated with the surface plasmons at the interface between a metal (medium 1) and vacuum (medium 2). The normal to the interface (Figure 2) is chosen to be the z-axis and the interface to be the x-y plane. With respect to an arbitrarily chosen x-axis, we define a p-polarized wave with wavevectors k_1 and k_2 inside the metallic medium and the vacuum, respectively. The choice of a p-polarized wave is deliberate, since this allows solutions to the field equations, Eq. 3 - Eq. 6, that can be considered as a surface wave in the interface. 33
The fields associated with a p-polarized wave can then be described as follows. For z ≤ 0, inside the metal below the interface, the fields h_1 and e_1 are given by Eq. 16 and Eq. 17. For z ≥ 0, in the vacuum,

h_2 = ŷ h_2y e^(i(k_2x x + k_2z z − ωt)), [18]

e_2 = (x̂ e_2x + ẑ e_2z) e^(i(k_2x x + k_2z z − ωt)). [19]

The continuity of the tangential component of the electric field at the interface leads to

e_1x = e_2x. [20]

We identify k in Eq. 21 as the wavevector of the plasmonic surface wave. Eq. 6 can be rewritten for the auxiliary field, h, for a medium with dielectric constant ε, as in Eq. 22. In the present case, the dielectric constant of medium 1 is denoted by ε and that of vacuum is unity. Eq. 22, together with the continuity condition for the tangential component of the electric field in Eq. 20, leads to Eq. 23; the continuity condition for the auxiliary field h at the interface gives Eq. 24. A nontrivial solution to Eq. 23 and Eq. 24 leads to the constraint in Eq. 25. The wave equation for the auxiliary field, h, can be shown to be satisfied with the appropriate dielectric constant, ε, for each medium (Eq. 26). Using Eq. 16 for h_1 in medium 1 and Eq. 18 for h_2 in medium 2 in Eq. 26, one obtains Eq. 27 and Eq. 28. Using k_2z from Eq. 25 in Eq. 27, one can show Eq. 29, and on subtracting Eq. 29 from Eq. 28, one obtains the dispersion relation for the surface plasmons, Eq. 30. Using Eq. 30, Eq. 27 and Eq. 29, it can be shown that k_1z² and k_2z² satisfy Eq. 31 and Eq. 32. When ω is less than the plasmon frequency, ω_p, we have ε < 0; thus both k_1z² and k_2z² will be negative, so that k_1z and k_2z are imaginary, indicating that the fields attenuate as one moves away from the interface. This is characteristic of a surface wave whose amplitude attenuates with distance from the interface. This allows us to choose two positive attenuation constants, α_1 and α_2, such that the fields attenuate away from the interface (Eq. 33). Therefore, for ε(ω) < 0, i.e., the frequency regime of interest in this work, the attenuation constant α_1 in the metal is given by Eq. 34; similarly, the attenuation constant α_2 in the vacuum is given by Eq. 35. The electric fields e_1 and e_2 can now be expressed as Eq. 36 and Eq. 37. Since the electric fields are divergence-free (Eq. 3), Eq. 38 and Eq. 39 follow. Using Eq. 36 and Eq. 38, e_1 can be expressed as Eq. 40; similarly, one obtains e_2 from Eq. 37 and Eq. 39 (Eq. 41). Using Eq. 22 and Eq. 33, it can be shown that the auxiliary field h_1 takes the form of Eq. 42; similarly, one can express the auxiliary field, h_2, in vacuum (Eq. 43). We will focus on the fields in the vacuum (z > 0) from here onwards, since our interest is to calculate the energy transfer rate between two ions located in this region near a metallic surface. We have also seen that all the field vectors are known if the field amplitude e_2x is known. With an eye to the formulation of this problem using quantum electrodynamics methods, we will first express this field parameter in terms of the vector potential amplitude, a_2x, which will be related to the plasmon density later. From Eq. 8, we can write e_2x in terms of a_2x (Eq. 44). From Eq. 41 and Eq. 44, we can assume the form of Eq. 45; thus, the magnetic field b_2 can be expressed as Eq. 46.

Mode Expansion of the Electromagnetic Fields Associated with the Surface Plasmons

In order to make the transition to quantum mechanics, the difficulties with normalization of the continuum states are avoided by an analysis of the allowed modes for the electromagnetic fields. 28 In the earlier discussion, we assumed for convenience that the plasmon wavevector, k, lies along the x-axis. In reality, the plasmon wavevector k is confined to the plane of the interface between metal and vacuum.
Therefore, for the purpose of mode counting, we introduce a virtual area of dimension s, with sides of length l, in which the plasmon solutions are confined. The fields are then required to obey periodic boundary conditions, namely, they take the same values on opposite sides of the square. In this manner, we obtain an infinite but countable discrete set of solutions. The periodic boundary conditions restrict the allowed components of k to k_x = 2πn_x/l and k_y = 2πn_y/l, where n_x and n_y are integers. Now, in medium 2, the vector potential can be expressed as a mode sum (Eq. 52 and Eq. 53). In Eq. 52 we have introduced another length parameter, L, as a function of k, which allows us to assign a thickness to the surface modes in order to make the transition from a two-dimensional surface to a three-dimensional volume, for the proper dimensionality of the density of states for use in the Fermi golden rule. 23,32 This length parameter, L, should not be confused with l, introduced here to define the virtual surface area, s. Also, in Eq. 53, ρ represents a radius vector in the x-y plane and ẑ the unit vector perpendicular to this plane. A unit vector along k is denoted by k̂. The electric and magnetic field vectors then follow from the vector potential.

Quantization of Plasmonic Fields

We will follow the scheme for quantization of the plasmonic field originally proposed by Archambault et al. 32 It was shown that the total energy associated with the surface waves can be expressed in terms of the mode amplitudes, and one may associate the quantum mechanical annihilation operator â_k and creation operator â†_k with the field amplitudes a_k and a*_k in the usual manner. The Hamiltonian for the surface waves can then be expressed as Eq. 59. The annihilation and creation operators in Eq. 59 satisfy the following relations:

â†_k |n_k⟩ = √(n_k + 1) |n_k + 1⟩, [60]

â_k |n_k⟩ = √(n_k) |n_k − 1⟩. [61]

In Eq. 60 and Eq. 61, n_k is the occupation number for the plasmon mode k, and thus an eigenvalue of the (occupation) number operator â†_k â_k corresponding to the eigenvector |n_k⟩. The operator forms of the vector potential and of the electric and magnetic fields then follow.

Matrix Element for Energy Transfer Between Two Ions Through Electron-Surface Plasmon Interaction

Consider two optical ions, A and B, located at R_A and R_B, respectively, in vacuum near a metallic surface (region/medium 2). The non-relativistic Hamiltonian for the system of these two ions together with the surface plasmon fields on the interface is given by

H = H_ion + H_sp + H_int, [65]

where H_ion corresponds to the Hamiltonian of the system of electrons of the ions, H_sp to that of the field associated with the surface plasmon waves, and H_int to that of the interaction between the electrons associated with the ions and the electromagnetic fields associated with the surface wave. The total Hamiltonian, H, in Eq. 65 is expressed in a manner similar to that of a system of optical ions and radiation fields in vacuum. 28,30,31 The exact form of the interaction Hamiltonian is similar to that developed for the interaction between radiation fields and ions using a multipolar expansion. This approach is justified since the interaction is mediated by the electromagnetic fields associated with photons or plasmons in the Coulomb gauge. Since there are no cross terms involving ions A and B in the interaction Hamiltonian, any interaction between the electrons associated with these ions is mediated by the electromagnetic fields associated with the surface waves travelling at a finite velocity, v, given by the plasmon dispersion relation in Eq. 30. Therefore, the interaction potential is fully retarded, not instantaneous, as expected.
The first two terms in Eq. 65 refer to the zeroth-order Hamiltonians of two canonical systems, one consisting of ions A and B, and the other that of the surface plasmons derived earlier in Eq. 59, whose eigenstates define the composite states of the ions and the surface plasmons, the latter in terms of mode occupancy. The energy transfer process between two ions through interaction with the radiation field within the framework of molecular quantum electrodynamics was discussed earlier. 28,30,31 The same formalism will be followed here, except that the field is caused by the quantized surface waves. The interaction Hamiltonian, H_int, is given by Eq. 66, where the electric dipole operator for ion ς is given by Eq. 67. In Eq. 67, the summation is over all the electrons, α, associated with the ion ς; additionally, q_α is the position operator of the electron α. The process of energy transfer from ion A to ion B is schematically described in Fig. 1.

Figure 3. The Feynman diagram corresponding to the first class of intermediate states. The interaction Hamiltonian that drives each ion from the initial to the final state is indicated near the corresponding vertex.

Initially, ion A is in an excited state, ion B is in the ground state, and there are no plasmons in the interface region. Thus, we express the initial state of the ion-plasmon system as |i⟩ (Eq. 68), with eigenvalue E_i given by Eq. 69. After the energy is transferred from A to B, A is in a lower (ground) energy state and B is in an excited state, with again no plasmons in the interface region. Thus, the final state, |f⟩, is described by ion A being in the ground state |E_A^0⟩ with energy eigenvalue E_A^0 and ion B being in the excited state |E_B^n⟩ with energy E_B^n, and no plasmons in the interface region. We express the final state of the ion-plasmon system as |f⟩ (Eq. 70), with eigenvalue E_f given by Eq. 71. Since energy is conserved in the energy transfer process, E_i = E_f (Eq. 72). The matrix element for the transition from the initial state, |i⟩, to the final state, |f⟩, can be obtained from second-order perturbation theory as

M_fi = Σ_I ⟨f|H_int|I⟩⟨I|H_int|i⟩ / (E_i − E_I). [73]

The summation in Eq. 73 is over intermediate states |I⟩ with energy E_I. There will be two classes of intermediate states involved in this summation: one with both ions in the ground state and one plasmon with wavevector k, and the other with both ions in the excited state and one plasmon with wavevector k. They lead to two different types of Feynman diagrams (Figures 3 and 4), and their contributions to M_fi are M¹_fi and M²_fi, respectively. These matrix elements are evaluated separately and summed to obtain M_fi. Before we proceed to evaluate M_fi in Eq. 73, we first choose a coordinate system using the position vectors of ions A and B and the metal-vacuum interface, the normal to which is chosen as the z-axis. Ion A is chosen to be on the z-axis, so that its position vector is R_A = ẑ Z_A [74]. We then choose the position vector of ion B as R_B = x̂ X_B + ẑ Z_B [75]. Thus, the interionic vector from A to B is R_AB = R_B − R_A [76]. Therefore, the x-axis is defined by the projection of R_AB on the x-y plane, i.e., the plane of the interface. The y-axis is then chosen to be perpendicular to the x-z plane, defined by R_AB and the z-axis, to yield a right-handed coordinate system. Any surface wavevector, k, lies in the x-y plane, i.e.,

k = k k̂, with k̂ = cos θ x̂ + sin θ ŷ, [77]

where θ denotes the angle between the x-axis and the plasmon wavevector, k. In this coordinate system, u_2k from Eq. 52 can be expressed in terms of θ (Eq. 78), and thus u_2k takes the form of Eq. 79. Now we consider contributions from the first class of intermediate states, |I⟩, as shown in Fig. 3, k being an arbitrary plasmon wavevector lying in the x-y plane.
The energy denominator of the matrix element in Eq. 73 for this class is then given by Eq. 81. The dependence of the angular frequency, ω, on the plasmon wavevector k is determined from the dispersion relation in Eq. 30. We can now evaluate the numerator of the matrix element M¹_fi using Eq. 63 for the quantized electric field in region 2 and the states of the composite system in Eq. 68, Eq. 70 and Eq. 80; the result can be simplified to yield Eq. 83 and Eq. 84. Using Eq. 81, Eq. 83 and Eq. 84 in Eq. 73, one obtains the contribution to the matrix element from this class of intermediate states, Eq. 85. Here X_BA is the separation between ion B and ion A projected onto the x-y plane, and is equal to X_B in our chosen coordinate system. Now we calculate the contributions to the matrix element from the second class of intermediate states, |I⟩, shown in Fig. 4. The corresponding energy denominator follows in the same way and, proceeding in a manner similar to the first class of intermediate states, we obtain M²_fi. Both matrix elements, M¹_fi and M²_fi, involve summations over the allowed, discrete wavevector modes k in the x-y plane of the interface. The discrete summations over k can be replaced by integration in the plane of the interface. First we perform the integration over the polar angle; the only nonzero terms that survive integration over the polar angle involve 1, cos θ, sin²θ and cos²θ only. 34 Thus we obtain from Eq. 85 an expression that, on integrating over the polar angle, θ, yields Eq. 91; in a similar manner, M²_fi can be simplified to Eq. 92. In Eq. 91 - Eq. 92, J_n represents the Bessel function of the first kind of integer order n. The quantity X_BA is equal to X_B, which is the same as the interionic distance between ions A and B when projected onto the x-y plane, and therefore we shall refer to X_BA from here on as ρ. We can now obtain the total matrix element M_fi from the contributions of the two classes of intermediate states listed in Eq. 91 and Eq. 92; thus M_fi = M¹_fi + M²_fi (Eqs. 93 and 94). Since we are considering surface plasmon-mediated energy transfer, it is convenient to express the energy difference between the excited and ground states of the optical ions, E_n0, in terms of the plasmon frequencies. Thus, we express E_n0 = ħω(p) = ħ p v(p) [95], where the plasmon frequency corresponding to plasmon wavevector p is denoted by ω(p), the associated phase velocity being v(p). Unlike photons, the phase velocity of the surface plasmons depends on the plasmon wavenumber, p, and is defined by the dispersion relation in Eq. 30, i.e., v(p) = ω(p)/p [96]. Since ε depends on the plasmon frequency, ω, and hence on p, the phase velocity is a function of the corresponding plasmon wavenumber, p. Because the dominant contribution to the matrix element M_fi will come from the pole in Eq. 94 satisfying the resonance condition, we assume that near resonance the phase velocity is weakly dependent on the plasmon wavenumber, k (Eq. 98). Then we can express M_fi in the form of Eq. 99. The expression for M_fi in Eq. 99 is exact except for the assumption about the phase velocity of the surface wave in Eq. 98. Additionally, the attenuation constant α_2 and the length parameter L in Eq. 100 depend implicitly on the plasmon wavevector through the explicit dependence of ε on the plasmon frequency. The length parameter, L, to be used in the present work is the same as that derived by Archambault et al. 32 Both L and α_2 are even functions of k and are real and positive. The integrand in Eq. 99 is an odd function of the plasmon wavenumber, k. The integration in Eq. 99 cannot be performed analytically.
However, we will assume that most of the contribution comes from the pole near wavenumber p. 35 Evaluating near the pole, k = p, we obtain Eq. 102.

Nonradiative Energy Transfer Rate Between Two Optical Ions

In order to calculate the rate of nonradiative energy transfer using the Fermi golden rule for randomly oriented electric dipole moments of the optical ions, we need to calculate |M_fi|² averaged over all orientations. It can be shown that the orientational averages take the forms of Eq. 103 and Eq. 104, where μ represents the magnitude of the dipole moment of the ion. Using Eq. 102, Eq. 103 and Eq. 104, one can show Eq. 105, where F(p, ρ) is given by Eq. 106. Using Eq. 95 and Eq. 96, it can be shown that p satisfies Eq. 107. Substituting p from Eq. 107 in Eq. 105, one obtains Eq. 108. Using α_2 from Eq. 35, F(p, ρ) in Eq. 106 can be further simplified to Eq. 109. So far we have not considered the fact that the energy levels of the optical ions are usually broadened for a variety of reasons. The effect of broadening of the energy levels can be described by the line shape function f_A(E) for emission from ion A and F_B(E) for absorption by ion B; the line shape functions satisfy the usual normalization relations. 36 The differential transition rate for energy transfer from ion A to ion B is then given by the Fermi golden rule, Eq. 112. Using |M_fi|² from Eq. 108 in Eq. 112, one obtains Eq. 113. We can obtain |μ(A)|² and |μ(B)|² from the lifetime of ion A, τ_A, and the integrated absorption intensity of ion B, Q(B), respectively; they are given by Eq. 114 and Eq. 115. 29,31 Using |μ(A)|² and |μ(B)|² from Eq. 114 and Eq. 115, respectively, in Eq. 113, one obtains Eq. 116. Upon integrating the right-hand side of Eq. 116 over E, one obtains the transition rate, Eq. 117, where we have ignored any explicit dependence of L on energy.

Results and Discussion

Eq. 117 gives the nonradiative energy transfer rate between two ions near a metallic surface. As in the case of nonradiative energy transfer mediated by photons, the rate depends on the overlap of the emission and absorption profiles of the donor and acceptor ions, on the integrated absorption cross-section of the acceptor (ion B), and on the radiative lifetime of the donor (ion A). 29 The dependence of the rate on the locations of the two ions is more interesting. It is governed by two factors. The factor e^(−2α_2(Z_A + Z_B)) depends explicitly on the distances of the ions from the plane defined by the metallic surface. The other factor, F(pρ), depends on the separation of ions A and B projected onto the plane of the interface. The origin of the first factor is obvious from the dependence of the electric field on the z-coordinates of the ions in Eq. 46: as the ions move away from the interface, they sense a field strength that decreases exponentially. Thus, the rate decreases exponentially with increasing z-coordinate of either ion. The coordinates of the ions are here additive, i.e., it is the sum of the z-coordinates that determines the exponential decay. This is in contrast to photon-mediated energy transfer, where the energy transfer rate for a dipole-dipole interaction goes as R_AB^(−6). 29 The second factor, F(pρ), depends on the separation between the two ions projected onto the x-y plane, ρ. This is not surprising, since the surface plasmon waves propagate in this plane; also, the interface destroys the isotropy of space. F(pρ) further depends on the dielectric constant of the metal, through the wavevectors p and α_2 given by Eq. 107 and Eq. 35, respectively.
Specifically, F(p, ρ) is a function of the ratio p/α_2, which has the form given in Eq. 118. For a metal in the frequency range of interest, ε ≤ −1, so that 1 + ε = −(|ε| − 1). This leads to the expression for p/α_2 in Eq. 119. Thus, for ε ≤ −1, F(pρ) in Eq. 106 can be written in the form of Eq. 120. In order to observe the behavior of this function, let us choose the case of a photon of energy E_n0 = ħω corresponding to λ = 500 nm. For a silver interface, the dielectric constant corresponding to λ = 500 nm is ε ≈ −9. 37 Under these conditions, ε² ≈ 81, so that F(pρ) takes the form of Eq. 121. A plot of F(pρ), normalized to unity at ρ = 0, is given in Fig. 5. For comparison, also shown in Fig. 5 is the plot of J_0²(pρ), the first term in F(pρ). Comparing the two plots, it is obvious that the J_0²(pρ)ε² term in Eq. 120 is the dominant term, with the other terms playing a minor role. In the visible region, the dielectric constant of silver ranges from ε ≈ −3 at λ = 400 nm to ε ≈ −22 at λ = 700 nm. Thus, at wavelengths longer than 500 nm, ε becomes more negative, and the dominant term in Eq. 120 becomes even more dominant; at shorter wavelengths, ε becomes less negative, and that term becomes less dominant. For any given wavevector, p, the plot of F(pρ) in Fig. 5 shows that the maximum energy transfer rate occurs when ρ = 0, that is, when ion A and ion B have the same x and y coordinates, differing only in their z coordinates. Thus, the maximum transfer rate occurs when one ion is "on top" of the other as observed from the nearest point on the plane of the interface. As ρ increases, the energy transfer rate initially decreases rapidly. At longer distances, the transfer rate continues to decrease, but much more slowly. Oscillations in F(pρ) are present at both large and small values of ρ. The "fast" and "slow" rates of decrease in F(pρ) indicate the presence of both short-range and long-range transfer mechanisms. To estimate the distance scale of the short-range mechanism, let us assume a surface plasmon of wavelength 500 nm, so p ∼ 2π/500 nm⁻¹. Using Fig. 5, we estimate that the "fast" component of the transfer rate extends to pρ ∼ 2.5, corresponding to ρ ∼ 200 nm, which is less than one wavelength. When pρ = 50, however, we find that ρ ∼ 4000 nm, which is several wavelengths long. Our interpretation is that the short-range mechanism results from energy transfer mediated primarily by virtual surface plasmons, and the long-range mechanism corresponds formally to energy transfer mediated primarily by real surface plasmons. Moving from small ρ to large ρ, the mediating surface plasmons change from virtual to real in a continuous fashion. An analogous interpretation is also used for the short- and long-range mechanisms in photon-mediated resonance interaction between two molecules. 28 We note here that we have considered a lossless medium; in a lossy medium, the long-range interactions will be inhibited. It is interesting to recall that the formula for F(pρ) (Eqs. 109 and 120) was derived from Eq. 102, which specifies the role of the various dipole moments responsible for the energy transfer. Table I shows the matrix elements that contribute to each term in Eq. 120. An examination of Eq. 120 and Eq. 121 reveals that the dominant term in F(pρ) is J_0²(pρ)ε². As shown in Table I, this term is driven by dipole moments at both ions A and B oriented in the z-direction. This observation is consistent with the fact that only a p-polarized wave incident on a metallic interface can produce a surface plasmon; s-polarized waves have no electric field component normal to the surface, and so cannot establish a surface plasmon.
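The behavior described above can be checked numerically by summing the five contributions listed in Table I. The sketch below assumes that those five terms, as read from Table I, constitute the complete expansion of Eq. 120; with ε = −9 it reproduces the stated dominance of the J_0²(pρ)ε² term (81/81.5 ≈ 99.4% at ρ = 0) and the slow, oscillatory long-range tail.

```python
# Numerical sketch of F(p*rho) as the sum of the five dipole contributions listed
# in Table I, for a silver layer near 500 nm (eps = -9). Assumes the Table I terms
# are the complete expansion of Eq. 120; illustration only.
import numpy as np
from scipy.special import j0, j1

eps = -9.0  # dielectric constant of silver near 500 nm (see text)

def F(x):
    """x = p*rho; uses the small-argument limit J1(x)/x -> 1/2 at x = 0."""
    x = np.asarray(x, dtype=float)
    safe_x = np.where(x == 0.0, 1.0, x)
    j1_over_x = np.where(x == 0.0, 0.5, j1(x) / safe_x)
    return (j0(x) ** 2
            + (j0(x) ** 2) * eps ** 2      # z-z dipole term; dominant (~99.4%)
            + 2.0 * j1_over_x ** 2
            + 2.0 * (j1(x) ** 2) * eps
            - 2.0 * j0(x) * j1_over_x)

x = np.linspace(0.0, 50.0, 6)
print(F(x) / F(0.0))   # normalized to unity at rho = 0, as in Fig. 5
```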
In plasmon-mediated energy transfer, the main contribution is due to the dipoles oscillating in the z-direction, leading to electric fields preferentially oriented normal to the interface and thereby allowing the establishment of surface plasmons, real or virtual. The contributions of the various dipole moments to F(pρ) evaluated at ρ = 0 are listed in Table I.

Table I. Dipole matrix element contributions to F(pρ): J_0²(pρ); J_0²(pρ)ε²; 2J_1²(pρ)/(p²ρ²); 2J_1²(pρ)ε; −2J_0(pρ)J_1(pρ)/(pρ). Also shown are the percent contributions of the various dipole matrix elements for the case of λ = 500 nm with a silver layer, at ρ = 0.

The J_0²(pρ)ε² term (when ε = −9) is by far the most dominant contributor, accounting for 99.4% of the energy transfer. Overall, the shape of F(pρ) roughly follows that of J_0²(pρ), with the maxima and minima occurring at roughly the same values of pρ; the main difference is that the minima of F(pρ) do not go to zero.

Conclusions

We have applied a molecular quantum electrodynamics approach in the multipolar approximation 28 to the nonradiative energy transfer between two ions near a metallic interface; the process may be mediated by surface plasmons. The ion-ion energy transfer rate has a distance dependence that reflects the symmetry of the situation: the transfer rate decreases exponentially with the distances of both ions from the metallic surface, and also decreases with the distance between the two ions projected onto the x-y plane, the plane of the interface. As in the case of photon-mediated energy transfer, the transfer rate depends on the overlap of the emission and absorption profiles of the donor and acceptor, respectively. Though we have assumed the ions to be in vacuum, the theory is easily extended to the case of ions in a dielectric.
8,073.8
2019-01-01T00:00:00.000
[ "Physics" ]
Development of an Environmentally Friendly Resist-Removal Process Using Wet Ozone

We investigated the removal of polymers with various chemical structures and the removal of ion-implanted resists using wet ozone. The removal rates of polymers that have carbon-carbon (C-C) double bonds in the main chain were high; the main chain of these polymers may be decomposed. The removal rates of polymers that have C-C double bonds in the side chain were low; the benzene ring in the side chain changes into carboxylic acid, so their ability to dissolve in water increases. Polymers without C-C double bonds were not removed. Removal of B and P ion-implanted resists became more difficult with increasing acceleration energy of the ions at implantation. Resist with plastic-deformation hardness less than twice that of nonimplanted resist could be removed similarly to nonimplanted resist. Using TOF-SIMS, we clarified that the molecules of the cresol novolak resin were destroyed and carbonized by ion implantation.

Introduction

Photosensitive resin (resist) is used in semiconductor (IC, LSI) and liquid crystal display (LCD) manufacturing processes. The pattern is transferred to the resist in three steps (spin coating, exposure, and development). The substrate is etched using the resist as a mask, and ions are implanted. Finally, the unneeded resist is removed. Resist removal from substrates in a semiconductor manufacturing process conventionally uses oxygen plasma [1,2] and/or chemicals (e.g., sulfuric acid-hydrogen peroxide mixture and ammonia-hydrogen peroxide mixture). These environmentally unfriendly chemicals are used in large amounts and cause environmental damage [3,4]. Also, oxygen plasma ashing may cause oxidation of substrates and metal wiring, because this process requires high temperature (above 250 °C) [5,6]. Therefore, several other resist removal methods have been developed (e.g., atomic hydrogen [7][8][9][10], UV/ozone [11,12], and YAG laser [13][14][15]). We examine the wet ozone process, which is an environmentally friendly, low-temperature process. In this process, ozone gas mixed with a small amount of water is irradiated onto the resist at a temperature below 100 °C, and the resist is changed into hydrophilic carboxylic acid by the ozone and condensed water [16][17][18].

Figure 1 presents a schematic diagram of the experimental apparatus for wet ozone (Mitsubishi Electric Corp. and SPC Electronics Corp.). Ozone gas mixed with a small amount of water vapor (wet ozone) is generated by bubbling ozone gas through hot water. A small amount of water vapor condenses on the resist due to the difference in temperature (ΔT = T_1 − T_2) between the wet ozone (T_1) and the Si wafer (T_2). The amount of water condensed on the resist was controlled by adjusting ΔT [19]. Figure 2 depicts the chemical reaction of the carbon-carbon double bond (C-C double bond) with ozone and the hydrolysis of the resulting ozonide. In resist removal using wet ozone, the C-C double bonds in the benzene rings of the resist react with ozone to generate ozonide [20,21]. The ozonide is hydrolyzed by the water condensed on the resist, and carboxylic acid is generated. Finally, the carboxylic acid is washed off the Si wafer by a pure-water rinse.
We removed base polymers that had different chemical structures and evaluated the chemical reactivity of wet ozone with the chemical structures of the polymers. We also investigated the relationship between ion-implanted resist removability and the acceleration energy of the ions at implantation.

Figure 1: Experimental apparatus for resist removal using wet ozone.

We examined the structure of ion-implanted resists by SEM observation and by stripping ion-implanted resists using chemicals, and we clarified the characteristics of ion-implanted resists by nanoindentation [22][23][24] and time-of-flight secondary ion mass spectrometry (TOF-SIMS). We also removed a positive-tone novolak photoresist (AZ6112; AZ Electronic Materials) as a reference.

Removal of Ion-Implanted Resists Using Wet Ozone. The wet ozone irradiation conditions were the same as those described in Section 2.1. In this study, the ion-implanted resist was a positive-tone novolak resist (AZ6112; AZ Electronic Materials) with B and P ions implanted at a dose of 5 × 10^14 atoms/cm² at each of several acceleration energies (10 keV, 70 keV, and 150 keV). We observed cross-sections of the ion-implanted resists using scanning electron microscopy (SEM: JSM-6360; JEOL Ltd.). SEM images were secondary electron images taken with an acceleration voltage of 20 kV. Also, after dissolving the ion-implanted resist in ethylene carbonate (EC), we measured the thickness of the stripped resist film. We previously clarified the presence of a damaged layer at the surface of ion-implanted resists [25][26][27][28]; the ion-implanted resists were composed of two layers (the damaged layer and the normal layer). We calculated the percentage of the damaged layer in the ion-implanted resist. The temperature of the EC was 70 °C. We examined plastic-deformation hardness by varying the maximum load from 1 to 260 mgf by nanoindentation. The loading rate was 1/2000 for loads exceeding 8 mgf and 0.004 mgf/ms (the lower limit) below 8 mgf. We used a Berkovich diamond indenter with an apex angle of 115°. Also, in order to evaluate the effect of ion implantation on resists, we normalized the plastic-deformation hardness of the ion-implanted resists by that of nonimplanted resists; we refer to this value as the normalized plastic-deformation hardness H_2. In order to evaluate the composition of the ion-implanted resists, we cut the resists at a slant using SAICAS (NN-04; DAIPLA WINTES) and conducted composition depth-profile analysis using TOF-SIMS (TOF-SIMS 5; ION-TOF). The primary ion was Bi_3^2+, and the acceleration voltage was 25 kV. In order to clarify the degree of hardening, we used ion-implanted resists in which B ions were implanted at a dose of 5 × 10^15 atoms/cm² at several acceleration energies (10 keV, 70 keV, and 150 keV).
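Before turning to the results, note that removal rates like those reported in Table 2 are typically obtained as the slope of a film-thickness-versus-irradiation-time line. The short sketch below illustrates this with invented data points; it is not the measurement procedure of this study.

```python
# Minimal sketch of estimating a removal rate (as in Table 2) from a linear fit
# of remaining film thickness versus wet-ozone irradiation time.
# The data points below are invented for illustration.
import numpy as np

time_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                 # irradiation time, min
thickness_nm = np.array([1000.0, 810.0, 595.0, 410.0, 190.0])  # remaining resist, nm

slope, intercept = np.polyfit(time_min, thickness_nm, 1)
print(f"removal rate ~ {-slope:.0f} nm/min")  # negative slope = thickness loss
```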
Removal of Polymers with Various Chemical Structures Using Wet Ozone. Figure 3 plots the results of removing polymers using wet ozone, and Table 2 presents the removal rate of each polymer. The removal rates of the polymers that have C–C double bonds in the main chain (novolak resin and cis-1,4-polyisoprene) were the highest. The removal rates of the polymers that have C–C double bonds in the side chain (PVP and PS) were low compared with those of novolak resin and cis-1,4-polyisoprene. In novolak resin and cis-1,4-polyisoprene, the main chain may be decomposed by the reaction with wet ozone. In PVP and PS, the benzene ring in the side chain changes into carboxylic acid by reaction with wet ozone; thus, PVP and PS should be removed because their solubility in water increases. In contrast, the polymers that have no C–C double bond (PMMA and PVC) were not removed.

Removal of Ion-Implanted Resists Using Wet Ozone. Figure 4 plots the removal of B and P ion-implanted resists using wet ozone. Removal of B and P ion-implanted resists became more difficult with increasing acceleration energy of the ions at implantation: resist with B ions implanted at an acceleration energy of 150 keV could not be removed, nor could resist with P ions implanted at acceleration energies of 70 keV and 150 keV. The removal rate decreased with increasing acceleration energy; it was assumed that the reactivity of ozone with the resists decreases with increasing acceleration energy. After the damaged surfaces of the ion-implanted resists were removed, the resists were removed at the same rate as nonimplanted resists. The damaged layer formed at the surface of the ion-implanted resists, and the lower layer was the normal layer (nonimplanted resist). The estimated thicknesses of the damaged layer of the B ion-implanted resists were 40 nm (10 keV) and 200 nm (70 keV), and that of the P ion-implanted resist was 30 nm (10 keV).

SEM Images of Ion-Implanted Resist. Figure 5 presents SEM images of B and P ion-implanted resists at various acceleration energies. The ion-implanted resists are composed of two layers, and the percentage of the damaged layer increased with increasing acceleration energy. Table 3 lists the damaged-layer thickness measured by resist removal using wet ozone, by SEM imaging, and by resist removal using chemicals, and Table 4 lists the percentage of the ion-implanted resist damaged layer determined by SEM and by stripping of the ion-implanted resist using chemicals. The percentage of the damaged layer increased with increasing acceleration energy.

Plastic-Deformation Hardness of Ion-Implanted Resist. Figure 6 plots depth profiles of the normalized plastic-deformation hardness of B and P ion-implanted resists versus photoresist depth. The normalized plastic-deformation hardness H2 was obtained by dividing the plastic-deformation hardness of the ion-implanted resists by that of non-ion-implanted resists. The plastic-deformation hardness and the thickness of the hardened layer increased with increasing acceleration energy. Therefore, the removability of the ion-implanted resists using wet ozone decreased with increasing acceleration energy, because the hardness of the resist increases with increasing acceleration energy, as determined by the nanoindentation measurements. Based on the results of the removal of ion-implanted resists using wet ozone and the nanoindentation measurements, a resist with a plastic-deformation hardness five times that of normal resist should not be removed. After the ion-implanted resist surfaces were removed, the resists were removed at the same rate as nonimplanted resist. Ion-implanted resists whose plastic-deformation hardness was less than twice that of normal resist should be removed similarly to nonimplanted resist.
TOF-SIMS Measurement of Ion-Implanted Resist. Figure 7 presents the secondary negative ion mass spectra of ion-implanted resists at various acceleration energies. "Counts" on the vertical axis of Figure 7 is the number of ions that entered the detector, counted for each analysis area. The C₈H₉O⁻ ion (m/z 121.08; its chemical structure is drawn in Figure 7), originating from the cresol novolak resin, was detected as a component of the non-ion-implanted resists. In addition, C₁₀H⁻ (m/z 121.01), a highly unsaturated hydrocarbon, was detected as a component of the damaged (hardened) layer of the ion-implanted resists. We interpret C₁₀H⁻ as follows: hydrogen escapes from the resist (C₈H₉O⁻) during ion implantation, and the remaining material is presumably a compound such as graphitized or amorphous carbon, although we cannot determine whether the material consists solely of such carbon. The ion intensity of C₁₀H⁻ increased with increasing acceleration energy; therefore, it was assumed that removing ion-implanted resists becomes difficult because the cresol novolak resin is carbonized by ion implantation. The count of C₁₀H⁻ at 150 keV is smaller than that at 70 keV because the area producing C₁₀H⁻ at 150 keV is smaller: as seen in Figure 8, the hardened-layer region (h) yielding C₁₀H⁻ at 150 keV is narrower than that at 70 keV. Figure 8 presents the secondary negative ion images of ion-implanted resists at various acceleration energies. C₁₀H⁻ was detected from deeper layers of the resist with increasing acceleration energy. In contrast, the ion intensity of C₈H₉O⁻ was weak at the surface and strong in the lower layer; therefore, the resists are carbonized more deeply with increasing acceleration energy. Also, for all ion-implanted resists, the ion intensity of C₈H₉O⁻ was greater than that of the non-ion-implanted resist; it was assumed that the molecules of cresol novolak resin were easily ionized. For 10 keV, C₈H₉O⁻ was detected in the lower layer rather than at the surface because the surface was stripped when the resist was cut at a slant; it was assumed that the hardened layer fell away.

Next, we measured the surface profile of the resist that was cut by SAICAS using a stylus-type surface-profile measurement instrument (Dektak 6M; ULVAC). From the surface-profile measurement, in the resist with ions implanted at 10 keV, a resist layer 100 nm deep from the surface was stripped; at 70 keV, 400 nm was stripped, and at 150 keV, 600 nm. Therefore, the estimated thickness of the hardened layer of the ion-implanted resist was above 100 nm at 10 keV, above 400 nm at 70 keV, and above 600 nm at 150 keV.
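The two fragment ions lie only about 0.06 u apart at nominal m/z 121, which is why high-mass-resolution TOF-SIMS is needed to separate them. The following check of the monoisotopic masses is our own calculation from standard isotope masses, not a value taken from the paper:

```python
# Monoisotopic masses (u) of the relevant isotopes.
M_C, M_H, M_O = 12.000000, 1.007825, 15.994915
M_E = 0.000549  # electron mass, added for a singly charged anion

# C8H9O- fragment from cresol novolak resin vs. C10H- from the
# carbonized (hardened) layer.
m_c8h9o = 8 * M_C + 9 * M_H + 1 * M_O + M_E   # ≈ 121.066
m_c10h = 10 * M_C + 1 * M_H + M_E             # ≈ 121.008

delta = m_c8h9o - m_c10h                      # ≈ 0.058 u
resolving_power = m_c8h9o / delta             # m/Δm ≈ 2100

print(f"C8H9O-: {m_c8h9o:.3f}  C10H-: {m_c10h:.3f}")
print(f"Δm = {delta:.3f} u -> required m/Δm ≈ {resolving_power:.0f}")
```

These values are close to the m/z 121.08 and 121.01 quoted in the text and show that a mass resolving power of roughly 2000 suffices to distinguish the two peaks.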
Conclusion

We investigated the removal of polymers with various chemical structures and the removal of ion-implanted resists using wet ozone. The removal rates of the polymers that have C–C double bonds in the main chain (novolak resin and cis-1,4-polyisoprene) were the highest. The removal rates of the polymers that have C–C double bonds in the side chain (PVP and PS) were lower than those of novolak resin and cis-1,4-polyisoprene. In novolak resin and cis-1,4-polyisoprene, the main chain may be decomposed by the reaction with wet ozone. In PVP and PS, the benzene ring in the side chain changes into carboxylic acid by reaction with wet ozone; thus, PVP and PS should be removed because their solubility in water increases. However, the polymers that have no C–C double bond (PMMA and PVC) were not removed.

Removal of B and P ion-implanted resists became more difficult with increasing acceleration energy. Resist with B ions implanted at an acceleration energy of 150 keV could not be removed, nor could resist with P ions implanted at acceleration energies of 70 keV and 150 keV. It was assumed that the reactivity of ozone with the resists decreases with increasing acceleration energy. From the results of the removal of ion-implanted resists using wet ozone and nanoindentation, a resist with a plastic-deformation hardness five times that of normal resist could not be removed, whereas a resist with a plastic-deformation hardness up to about twice that of normal resist should be removable similarly to nonimplanted resist. Using TOF-SIMS, C₈H₉O⁻ (m/z 121.08) from cresol novolak resin was detected as a component of non-ion-implanted resists, and C₁₀H⁻ (m/z 121.01), a hydrocarbon with unsaturated bonds, was detected as a component of the damaged (hardened) layer of ion-implanted resists. The ion intensity of C₁₀H⁻ increased with increasing acceleration energy. We thus clarified that the cresol novolak resin molecules were destroyed and carbonized by ion implantation.

Figure captions. Figure 2: Chemical reaction of the carbon-carbon double bond with wet ozone. Figure 3: Results of removing polymers using wet ozone. Figure 5: SEM images of B and P ion-implanted resists at various acceleration energies. Figure 6: Depth profiles of normalized plastic-deformation hardness of B and P ion-implanted resists. Figure 7: Secondary negative ion mass spectra of ion-implanted resists at various acceleration energies. Figure 8: Secondary negative ion images of ion-implanted resists at various acceleration energies. Table 2: Removal rates of polymers using wet ozone. Table 3: Damaged-layer thickness measured by resist removal using wet ozone, SEM images, and resist removal using chemicals. Table 4: Percentages of the ion-implanted resist damaged layer determined by SEM and by stripping with chemicals.
Characterization of the Anodic Film and Corrosion Resistance of an A535 Aluminum Alloy after Intermetallics Removal by Different Etching Times

The objective of this study was to improve the corrosion resistance of an A535 alloy by removing intermetallics on the alloy surface by alkaline etching, thereby improving the morphology and properties of the anodic film sealed with different sealants. It was found that alkaline etching for 4 min was suitable for dissolving intermetallic particles while simultaneously providing sufficient roughness for the adhesion of an oxide film to the Al matrix. Alkaline etching decreased the intermetallic fraction from 21% to 16% after etching for 2 and 4 min, respectively, which corresponded to increases in the surface roughness and in the thickness and consistency of the anodic film. It was also demonstrated that the surface morphology of the anodic films after stearic acid sealing was more uniform and compact than after nickel fluoride sealing. The electrochemical polarization curves and the salt spray test proved that the alloy etched for 4 min and sealed with stearic acid had better corrosion resistance than the alloy sealed with nickel fluoride.

Introduction

The A535 aluminum-magnesium casting alloy is widely used in marine and other corrosion-prone applications due to its excellent resistance to corrosion in seawater; moreover, heat treatment and natural aging are not required to reach maximum properties [1,2]. Aluminum casting alloys also have clear economic advantages over wrought alloys, such as mass production of net-shape components and no requirement for homogenization [3]. However, the critical problem for corrosion resistance in Al-Mg cast alloys with more than 3 wt.% Mg is the presence of numerous intermetallic phases, which makes it challenging to enhance corrosion resistance by the anodizing process. Furthermore, these alloys have increased susceptibility to corrosion [4,5], which is related to the precipitation of Mg-rich intermetallic compound (IMC) particles at grain boundaries and at the free sample surface [6,7], affecting stress corrosion cracking [8,9] and the properties of oxide films [10]. Previous work investigated the corrosion behavior of an Al-Mg alloy (5A06) and found that Si-containing intermetallics were embedded in the anodic film surface [11]. It was also reported that the active Al₃Mg₂ intermetallic, with a lower corrosion potential of about −1.29 V vs. the saturated calomel electrode (SCE), compared with −0.73 V vs. SCE for the matrix, led to increased anodic activity in a corrosive environment [5].

Anodizing is an electrochemical finishing process that forms porous anodic oxide films consisting of an inner barrier layer and an outer porous layer, and it significantly improves the corrosion resistance of the alloys [3]. It is commonly used to improve the corrosion resistance of aluminum alloys for military and seawater applications [12]. The application of anodizing to cast alloys (to provide corrosion resistance without homogenization treatment) is considered more challenging than for wrought aluminum alloys because of the Al-Mg and Al-Mg-Mn IMCs embedded in the aluminum matrix, which prevent the formation of a uniform anodic film [12]. It was also reported that anodizing often results in defects around IMC particles [13,14]. As a result, painting is commonly used instead of anodizing to increase the corrosion resistance of cast products.
However, the corrosion resistance of aluminum castings obtained in this way is lower than that obtained by anodizing, because anodic films are usually much stronger and better adherent than paint and metal plating [15,16]. Thus, a better understanding of the anodization of aluminum castings is necessary to overcome this drawback. Surface pretreatments can play a significant role in enhancing the corrosion protection performance by providing an appropriate surface roughness, which can result in stronger adhesion between the anodic film and the aluminum substrate [17]. Moreover, mechanical polishing and electropolishing can reduce the surface roughness, assuring the surface quality needed for a high-quality anodic oxide film [18,19]. On the other hand, a porous structure can form on the surface due to the removal of intermetallics from the matrix by surface treatment [11]; the anodic oxide films are then heterogeneous, and the corrosion resistance can be adversely affected. To obtain a consistent oxide film with good properties, the surface pretreatment must be carefully selected. Different techniques to clean the natural surface oxides and to reduce the number of intermetallic compounds on the surface have been suggested [20].

Sealing is a chemical treatment that reduces the porosity of oxide layers and improves the corrosion resistance of aluminum alloys [21,22]. In nickel fluoride cold sealing, the pores are filled with aluminum hydroxide, aluminum fluoride, and nickel hydroxide on top of the anodic oxide at low temperature [23,24]. However, the use of nickel or fluoride anions should be avoided because of health and safety regulations and the burden on wastewater treatment. Therefore, green, non-toxic acids, such as the long-chain organic stearic and isostearic acids, have been developed to seal the anodic oxide layer of anodized aluminum alloys [25,26]. Recent work reported that mixed sealing using modified nickel fluoride in cold sealing decreased the corrosion current density, which enhanced the corrosion resistance of an anodized aluminum alloy [27]. In our previous work, we found that the corrosion resistance of an A535 alloy can be improved using stearic acid sealing [28]. However, such aluminum alloys contain high contents of magnesium, silicon, and copper as alloying elements, which form intermetallics that impede anodizing and sealing [29,30]. Thus, there is a need for a surface treatment that can remove intermetallics from the surface.

Therefore, this research aims to study the effect of intermetallic removal by alkaline etching in a pretreatment step on the anodic oxide layer, and to compare the effects of nickel fluoride and stearic acid sealing on the properties of the anodic oxide layer and, eventually, on the corrosion resistance of an anodized A535 alloy. The alloy was anodized in a sulfuric-oxalic mixed acid modified with an aluminum sulfate addition as a corrosion inhibitor and sealed in nickel fluoride or stearic acid. The morphologies of the etched surface and of the anodic oxide films were observed by scanning electron microscopy (SEM). The anodic oxide film hardness was measured with a micro-Vickers hardness tester. Salt spray and electrochemical tests were used to characterize the corrosion resistance of the oxide film with respect to the alkaline etching and sealing treatments.

Materials

An AlMag35 (A535) aluminum-magnesium casting alloy was used in this study.
The experimental alloy was prepared in a silicon carbide crucible in an induction furnace at a temperature of 750 °C from 99.9 wt.% pure Al, 99.5 wt.% pure Mg, 100 wt.% pure crystalline Si, and Al-20 wt.% Mn, Al-10 wt.% Ti, and Al-20 wt.% Fe master alloys. To minimize gas porosity and clean the liquid metal of entrained oxides, the argon bubbling technique and cover fluxes were used to trap the oxides in the molten alloy dross, which was removed before casting. Next, the alloy melt was poured at 650 °C into a cylindrical copper mold (30 mm diameter, 40 mm height), which provided a uniform cooling rate. The chemical composition of the alloy was determined using an optical emission spectrometer (OES), and the results are shown in Table 1. The microstructure of the as-cast alloy was observed by optical microscopy (Zeiss Axiolab 5).

Surface Preparation

The cast cylinder was cut into several discs of 30 mm diameter and 5 mm thickness, which were used as the samples in this study. The surfaces of each disc were treated mechanically and chemically as follows. First, the samples were polished with silicon carbide (SiC) abrasive papers of decreasing granulometry. Next, the microstructure was observed at the center of the specimen, using an optical microscope, to avoid a chill-zone effect. Subsequently, the specimens were alkaline etched by immersion in a 5% w/v aqueous sodium hydroxide solution at 45-50 °C for durations from 1 to 4 min, followed by desmutting in 25 vol.% nitric acid for 2 min and rinsing in DI water for 30 s. The samples were then cleaned with DI water in order to determine the effects of alkaline etching on the anodic oxide film. The average surface roughness (Ra) of the samples was determined by a surface roughness tester (Talysurf Series 2, Taylor-Hobson, Leicester, England); five measurements of Ra were obtained for each sample to reflect the amount of intermetallic compound remaining at the surface after the alkaline pretreatment. The area fraction of intermetallics on the surface after alkaline etching was also determined by image analysis (ImageJ software 1.53k, Wayne Rasband and contributors, National Institutes of Health, Bethesda, MD, USA).

Anodizing and Sealing Treatment

Individual samples were anodized in a mixed electrolyte of 175 g/L sulfuric acid, 0.16 mol/L aluminum sulfate, and 30 g/L oxalic acid. The temperature of the solution was controlled in the range of 18-20 °C by a bath chiller with a temperature controller, using a constant current density of 0.1 A/cm² at potentials of 15 to 20 V for 30 min. The current was supplied by a direct-current (DC) stabilized power supply (PSP-603, Good Will Instrument, New Taipei City, Taiwan), and a pure aluminum electrode was used as the cathode. After anodizing, all samples were sealed, rinsed in ethyl alcohol, and dried in warm air. The anodized samples were sealed with two different sealants: nickel fluoride (cold sealing) or stearic acid (hydrothermal sealing). The processing parameters of the sealing treatments are shown in Table 2 (process parameters of sealing techniques applied to anodic oxide films, based on ref. [22]), and Table 3 lists the experimental conditions for this investigation.
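The area fractions used in this work (intermetallics after etching and, later, voids in the anodic films) were measured with ImageJ; the same thresholding workflow can be sketched in a few lines of Python. This is an illustrative stand-in for the ImageJ procedure, not the authors' script, and the file name `surface.png` is hypothetical:

```python
import numpy as np
from skimage import io, filters

# Load a grayscale SEM image of the etched surface (hypothetical file).
img = io.imread("surface.png", as_gray=True)

# Otsu's method picks a global threshold separating dark features
# (intermetallic particles or voids) from the brighter Al matrix;
# flip the comparison if the phase of interest is the bright one.
threshold = filters.threshold_otsu(img)
mask = img < threshold

# Area fraction = segmented pixels / total pixels.
area_fraction = 100.0 * mask.sum() / mask.size
print(f"area fraction ≈ {area_fraction:.1f}%")
```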
Characterization of Anodic Oxide Films

The surface morphology of the anodic oxide films after sealing was examined with a scanning electron microscope (SEM, JSM-6610 LV, JEOL, Tokyo, Japan). The samples were coated with a 1-nm-thick layer of gold to reduce the charging effect on the surface and thus improve the image quality. The area fraction of voids on the anodic oxide films was examined on SEM micrographs and quantified with ImageJ software using three positions on three independent images. The average thickness of the anodic oxide film was measured according to ASTM B487 using cross-sectional SEM micrographs. X-ray diffraction patterns of both deposited oxide films were recorded at room temperature using a Rigaku SmartLab diffractometer operating with Cu Kα radiation (1.541874 Å). The intensities were recorded for 2θ from 10° to 90° at a scan speed of 12°/min, with a current of 30 mA and a voltage of 40 kV.

Vickers Microhardness

Microhardness was measured under a load of 300 g for 15 s using a Vickers microhardness tester (FM-700e, FUTURE-TECH, Tokyo, Japan). The microhardness was measured at five points on each specimen, and the mean of these measurements was taken as the microhardness of the anodic film.

Corrosion Testing

A salt spray test was performed to investigate the appearance of corrosion products on the surface after exposure to a corrosive environment. It was conducted according to ASTM B117: a solution of 5 wt.% NaCl was sprayed for 336 h onto the specimens in a closed chamber (PT2070, PERFECT, Taipei, Taiwan) with the temperature maintained at 37 °C. The appearance of the corroded samples was assessed through macro- and microstructure by OM and SEM. Electrochemical corrosion tests were performed in a 5 wt.% NaCl aqueous solution at room temperature in a three-electrode system using a potentiostat-galvanostat (Autolab P302N). The specimen was the working electrode with an evaluated area of 1 cm², a platinum wire was the counter electrode, and a saturated calomel electrode was the reference. The specimens were exposed to the electrolyte until a stable open circuit potential (OCP) was reached before the electrochemical tests. The measurements were done at a scan rate of 0.033 V/s, and the test was done on three replicates to ensure repeatability. The polarization curves were recorded with the Autolab potentiostat, and the corrosion current density (Icorr) was determined at the intersection of the rectilinear sections of the anodic and cathodic branches of a Tafel plot by extrapolating the linear portion of the curve to Ecorr using Nova Autolab software (version 1.11.2, Metrohm, Herisau, Switzerland). However, before fitting the slope, we smoothed the data with the adjacent-averaging method, which replaces each point with the average of a user-specified number of surrounding data points, and then calculated the polarization parameters, including the current density (Icorr).
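A minimal sketch of this smoothing-and-extrapolation step is given below. It is our illustration on a synthetic polarization curve, not the Nova implementation, and the Tafel fitting windows are chosen by hand:

```python
import numpy as np

def adjacent_average(y, window=5):
    """Replace each point with the mean of `window` surrounding points."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

# Synthetic polarization data: potential E (V) and current density i (A/cm^2),
# built from a Butler-Volmer-like expression with known E_corr and i_corr.
E = np.linspace(-0.9, -0.1, 400)
E_corr, i_corr = -0.5, 1e-7
i = i_corr * (10 ** ((E - E_corr) / 0.06) - 10 ** (-(E - E_corr) / 0.12))

log_i = np.log10(np.abs(adjacent_average(i)) + 1e-15)

# Fit the linear (Tafel) region of each branch, well away from E_corr.
anodic = (E > E_corr + 0.10) & (E < E_corr + 0.25)
cathodic = (E < E_corr - 0.10) & (E > E_corr - 0.25)
pa = np.polyfit(E[anodic], log_i[anodic], 1)
pc = np.polyfit(E[cathodic], log_i[cathodic], 1)

# The two Tafel lines intersect at (E_corr, log10 i_corr).
E_est = (pc[1] - pa[1]) / (pa[0] - pc[0])
print(f"E_corr ≈ {E_est:.3f} V, i_corr ≈ {10 ** np.polyval(pa, E_est):.2e} A/cm^2")
```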
Figure 1 shows the microstructure of the as-cast A535 alloy, comprising dendrites of the aluminum solid solution and eutectic Al₃Mg₂ IMC at the grain boundaries. These intermetallics form at the boundaries of the Al dendrites as a divorced eutectic in alloys containing more than 3 wt.% Mg and may lead to corrosion of the alloy [7]. Some other intermetallics formed by Mn, Fe, and Si (e.g., Mg₂Si) are also present, but in much smaller quantity.

Figure 2 shows the relationship between the average roughness (Ra) and the etching time. The average roughness of the samples increased from 0.098 ± 0.025 µm at the beginning of etching to 0.744 ± 0.021 µm after 5 min of etching due to the chemical attack on the surface. This agrees well with previously reported increases in roughness with etching time due to the dissolution of Si on the surface of Al-Si alloys [31].

Figure 3a,c show the surfaces of specimens alkaline etched for 2 and 4 min, respectively. One can see typical porosity with irregular pore shapes on the surface after etching for 2 min due to the chemical attack on the Al matrix (Figure 3a); the dissolution of the surface was relatively uniform, as demonstrated in Figure 3b. After 4 min of alkaline etching, round-shaped pores appeared in addition to irregular grain-boundary pores (Figure 3c). At this stage, the surface acquired pronounced roughness, as revealed by the cross-sectional characterization in Figure 3d, caused by the slower dissolution of the intermetallic particles compared with the Al matrix. This resulted in trench or groove shapes around the IMC particles due to the preferential dissolution of the Al matrix at the periphery of the particles during etching and desmutting. Consequently, the particles fell off the matrix, leaving scallops, which subsequently expanded into larger pores as etching continued. With increased etching time, the diameter of the pores increased significantly due to the higher reactivity of the aluminum matrix with the alkaline solution [32]. Similar observations were reported for the behavior of Mg₂Si particles in an AA5052 alloy [33]. Based on these results, we selected alkaline etching pretreatments of 2 and 4 min to investigate the formation and properties of the oxide film. The results also showed a decrease in the IMC area fraction after alkaline etching for 4 min, in agreement with previous research [34].
Surface Morphology of Anodic Oxide Films

SEM micrographs of the anodic oxide film surface are shown in Figure 4. The anodic oxide films of the unsealed samples 2-A and 4-A show remains of IMC on the surface, and the oxide film appears discontinuous. After 2 min of etching, IMC with retained morphology are still clearly visible (Figure 4a), whereas after 4 min of etching the IMC are largely dissolved, with only voids and grooves remaining (Figure 4d). This can be explained by the removal of particles by prolonged alkaline etching. When the particles remain on the surface, the voids cannot be sealed completely by either NiF₂ or stearic acid (Figure 4b,c). However, when the intermetallics fall off during longer etching, these voids can be covered by the sealants, which results in a more uniform anodic oxide film, as demonstrated in Figure 4e,f. Apparently, the intermetallics inhibit the oxide film growth, as evidenced by the EDS element mapping of the oxide film surface in Figure 5, which shows that the oxide layer is discontinuous. This confirms the harmful role of intermetallics in the formation of a uniform oxide film and the necessity of a proper pretreatment time, as reported previously for other Al-Mg-Si alloys [35].

These observations are confirmed by the quantitative analysis of the void area fraction on the surface in Figure 6. Sample 4-A, etched for 4 min, shows a larger portion of voids on the surface due to the removal of IMC; the area fraction is 4.48%. After sealing, the area fraction of pores decreases, especially after sealing with stearic acid (specimen 4-S), when it comes down to 4.25%.
The area fraction of pores is as low as that of the plasma electrolytic oxidation coating formed on the surface of an Al-Mg-Si alloy [36]. The nature of these voids is related to the formation of grooves in place of intermetallics on the surface as a result of alkaline etching. Longer etching produces more voids and also accelerates the growth of the anodic oxide film [35]. The results also show that the alloy etched for 4 min and sealed with nickel fluoride still has more voids left on the surface compared with the stearic acid sealant. This may be caused by the formation of Ni(OH)₂ during the chemical reaction: the Ni(OH)₂ deposit is formed by hydrolysis of Ni²⁺ ions, with alumina mainly converted to aluminum hydroxide and with F⁻ ions reacting with alumina to form AlF₃ [37]. These products form within the pores of the anodic films and block them, as can be seen in the element mapping of oxygen and nickel in the particles on the oxide film in the inset in Figure 7a. Therefore, nickel fluoride cannot penetrate the pores down to the substrate, which causes incomplete coverage of the surface. Stearic acid does not produce these byproducts: the stearic solution reacts with the aluminum hydroxide of the sealed anodic layer to form a layer of long-chain fatty soap film with the alumina. It was reported that these films have strong hydrophobic properties [24], as we can see from the sealing film between voids in our results. Consequently, the voids of the anodic film layer were sealed entirely, as shown in Figure 7b,c.

Figure 8 shows the surface morphology of the unsealed and sealed anodic oxide layers. The anodizing process without sealing produces an uneven oxide layer without a superficial film, as can be seen in Figure 8a,d. After sealing in nickel fluoride solution or stearic acid, the surface is covered by a sealing film. In the specific case of nickel fluoride sealing (Figure 8b,e), the sealing surface is uniform, and the morphology of this overlayer resembles a hydroxide sheet formed by the conversion of porous alumina into Al(OH)₃, with Ni(OH)₂ also deposited on the oxide film surface, similar to that reported elsewhere [27,38]. These oxides were confirmed by XRD: Figure 8g shows the XRD spectrum of sample 4-N, in which the peaks of Al(OH)₃ and Ni(OH)₂ were detected.
On the other hand, the sample sealed with stearic acid exhibits layers of rose-petal-shaped formations covering the entire surface, as can be seen in Figure 8c,f. These formations represent the inorganic salt of Al(OH)₃ formed during the sealing reaction when the liquid acid spreads onto the surface [39]. Such formations have been reported before in Al-Si and some other alloys [27] and are known to be indicative of a Cassie impregnating wetting regime, in which the liquid remains on the surface without wetting it, owing to the special interaction with the surface roughness. Such surfaces are expected to exhibit high adhesive bond strength between the substrate and the oxide film [40].

Figure 9 shows cross-sections of the anodic oxide films without sealing after etching for 2 and 4 min in alkaline solution (Figure 9a,d) and after sealing with NiF₂ (Figure 9b,e) and stearic acid (Figure 9c,f). The oxide films without sealing show non-uniform growth disturbed by intermetallics and exhibit flaws and voids on the film surface, the more so the longer the etching, as indicated in Figure 9a,d. After sealing with NiF₂ or stearic acid, the anodic oxide film appears more uniform, with the flaws sealed, as can be seen in Figure 9b,c,e,f. Specifically, the samples etched for 4 min and then sealed exhibit a smooth oxide film. This may result from the greater roughness, as reported in Figure 2, which increases the adhesion between the anodic film and the aluminum substrate [17].

Thickness and Hardness of the Anodic Oxide Film on an A535 Alloy

The thickness of the anodic film was also measured. The results in Figure 10 show that the anodic layer thickens as the alkaline etching time increases to 4 min, due to the removal of intermetallics and the increased growth rate of the anodic film [35]. The average thickness increases from 40 µm up to about 70 µm, depending on the type of sealant. In addition, the type of sealant also affects the hardness of the oxide film layer. Interestingly, the maximum film thickness, obtained on the sample sealed with nickel fluoride, does not provide the highest hardness: the maximum hardness of about 550 HV is shown by a thinner film (approx. 60 µm) on the sample sealed with stearic acid. This is because the hardness of an oxide film depends on both the homogeneity and the thickness of the anodic oxide layer [41]. As demonstrated in Figure 4, some voids remain on the surface because NiF₂ cannot seal them completely, which also agrees with a previous study on an AA5052 alloy [42]. The porosity of the surface layer decreases its hardness [43].
It can be seen that the film thickness of the 4-S sample, which was treated for a longer time by alkaline etching (4 min) and sealed with stearic acid, is not much different from that obtained with NiF₂ sealing, but the hardness of the oxide film is significantly higher. This may result from the stearic solution reacting with the aluminum hydroxide of the sealed anodic layer to form a layer of long-chain fatty soap film with the alumina. It was reported that these films have strong hydrophobic properties and improve surface hardness and corrosion resistance [24,44,45].

Corrosion under Salt Spray Exposure

Corrosion resistance is the ultimate purpose of the treatment. The ability to resist corrosion depends on the properties of the alloy as well as on how the alloy is treated. Figure 11 presents the appearance of the as-cast alloy and of the alloys anodized and sealed with nickel fluoride (samples 2-N, 4-N) and stearic acid (samples 2-S, 4-S) before and after corrosion for 336 h in the salt spray test. The results show a significant enhancement of salt spray resistance after anodizing and sealing compared with the as-cast alloy. For the as-cast A535 alloy, corrosion products can be observed in Figure 11(a-2), with general darkening of the surface from the very early stages of the test. Among the anodized and sealed samples, the 2-N sample exhibits shallow local corrosion, as can be seen in Figure 11(b-2). This might be the result of chemical attack due to insufficiently long alkaline etching (2 min), which failed to provide the required adhesion strength for the oxide film because of the remaining IMCs and the lower surface roughness. For the other treatment conditions, the overall corrosion resistance was good.
It is known that the corrosion resistance under salt spray tests depends on the surface quality as well as on the distribution and characteristics of the corrosion products, which play a stabilizing role [46]. Figure 12 shows SEM images of the surface of the sample etched in alkaline solution for 2 min and sealed with NiF₂ after exposure to 5 wt.% NaCl for 336 h. The oxide film is cracked, and shallow local corrosion occurs in the places where the oxide layer is severely damaged or chipped off. This damage can be facilitated by infiltration of the nickel fluoride sealant, which reacts with the Al-matrix substrate, forming the corrosion products Al(OH)₃, Ni(OH)₂, and AlF₃ [37] (see Figure 11a); these products deposit on the surface after the corrosion reaction.

The difference between the sealants is illustrated in Figure 13, which shows the surface morphology of the anodized and sealed samples after the salt spray test.
In the case of the 2-N sample shown in Figure 13a, the oxide film is cracked, and corrosion products are deposited on the surface. This may be caused by pitting corrosion after the oxide film broke and the chloride solution reacted with the Al substrate to form Al(OH)₃ [47]. It was reported that chloride ions adsorb on the surface and corrode the films, and that the dissolution of the surface causes corrosion pits and immediately decreases the hardness of the oxide film [48]. In addition, the oxide film of the 4-N sample was damaged, but there are no corrosion products, which indicates that, although the anodic oxide film has been damaged, it still prevents the chemical reaction with the Al matrix (Figure 13b). The alloy etched for 2 min and sealed with stearic acid had a better surface after exposure to the chloride solution, as can be seen in Figure 13c; this may be explained by the propagation of the petal-shaped sealing film, which can reduce the corrosion. On the other hand, the oxide layer of the 4-S sample shows a corroded area on the petal-shaped sealing film in Figure 13d. Moreover, it can be seen that the petal-shaped pattern still covers the anodic oxide film. This may be due to the greater thickness of the sealing film in the 4-S sample, as shown in Figure 8f. The greater thickness of the sealing layer can also be related to the pretreated surface, which had fewer intermetallics and more groove area after etching for 4 min. These data suggest that stearic acid sealing improves the general corrosion resistance. The initially hydrophobic film formed by stearic acid can be transformed into a layer of fatty soaps by a chemical reaction with alumina, facilitating the access of the stearic acid to the oxide film and the sealing of surface irregularities [22,49], which agrees well with our data in Figure 5 showing a lower pore fraction in the 4-S samples.

Figure 14 shows the potentiodynamic polarization curves of specimens prepared with different pretreatment etching times (2 and 4 min) and sealed with nickel fluoride or stearic acid after anodic film coating. The quantitative results of the potentiodynamic tests are summarized in Table 4. Technically, the more positive the electrochemical corrosion potential (Ecorr) and the smaller the corrosion current density (Icorr), the better the corrosion resistance of the material.
The corrosion potential of the substrate sealed with NiF₂ is lower (Ecorr = −0.58 V) for the 2-N condition than for stearic acid sealing (Ecorr = −0.53 V). At the same time, the Icorr for this condition is significantly higher (Icorr = 9.40 × 10⁻⁷ mA/cm²) than for the samples sealed with stearic acid (Icorr = 8.15 × 10⁻⁹ mA/cm²). This increase in the corrosion current density can be attributed to the presence of intermetallic particles on the alkaline-etched surface [50], as observed in Figure 3.

Moreover, the instability of the polarization curve of the oxide film is clearly seen, as discussed in relation to Figures 13 and 14; it indicates the onset of localized corrosion. The pitting potential is determined in the anodic branch of the polarization curve, with an Epit value of 0.5 V in Figure 14, which corresponds to the potential associated with the dissolution of the passive film formed by oxides or even by corrosion products. Breakdown of the oxide film is indicated by the localized corrosion in Figure 15a. This may be caused by the remaining intermetallics present on the surface: when the passive oxide cracks, these intermetallics can react with the solution and then increase the positive charges in the electrolyte [51]. Similar corrosion behavior was reported when the potential changed to positive values after the passivation region in an aluminum alloy, which was associated with the electrolyte concentration and the corrosive environment that accompanied localized corrosion [51-53].

The intermetallics in question are most likely the remaining Al₃Mg₂ intermetallics in the alloy that was alkaline etched for only 2 min, as these intermetallics have a low corrosion potential of about −1.5 V to −1.3 V vs. the saturated calomel electrode (SCE) and commonly act as an active cathodic electrode [5,7]. As the corrosion reaction proceeds, the oxide film degrades, and the oxidation reaction of the active intermetallic Al₃Mg₂ takes place with the electrolytic solution. However, the detrimental effect of intermetallics can be alleviated by alkaline etching for 4 min and sealing with stearic acid, as can be seen in Figure 14. The alloy alkaline etched for 4 min and sealed with stearic acid clearly has a higher corrosion potential (Ecorr = −0.35 V) than the alloy etched for 4 min and sealed with NiF₂ (Ecorr = −0.85 V), which indicates that the metal oxidation is low at this stage of the corrosion reaction.
It was reported that the corrosion potential of the alloy depends on the type of active intermetallic [7] and on the type of sealant, and it was shown that these sealing films have strong hydrophobic properties and improve the surface hardness [24,37,44].

Electrochemical Corrosion Behavior

The morphologies of the oxide films formed after different etching times and sealing, following electrochemical corrosion in 5 wt.% NaCl solution, are shown in Figure 15. After 2 min of alkaline etching and sealing with either of the two sealants, the corroded surface shows voids similar to corrosion pits (Figure 15a,b). In contrast, the exposed surface of the sample subjected to 4 min of alkaline etching and sealed with stearic acid (Figure 15d) is pore-free and appears more uniform than that of the 4-N sample (Figure 15b). This observation is further confirmed by a closer examination of the surface in Figure 16, which shows that the corrosion pits on the oxide film sealed with NiF₂ are about 5-10 µm, whereas the alloy sealed with stearic acid still has a petal-shaped layer on the surface, which is beneficial for the corrosion resistance.

Conclusions

The surfaces of as-cast A535 alloy samples after different alkaline etching times, anodizing, and sealing with either nickel fluoride (NiF₂) or stearic acid were studied, and the effect on the corrosion performance of the anodic oxide film was evaluated through salt spray and electrochemical testing.
The key conclusions are drawn as follows:
• A suitable alkaline etching time of 4 min can effectively remove intermetallic particles from the matrix surface, providing a grooved surface with higher roughness for anodic oxide film growth. At a shorter etching time, the corroded surface showed local pitting, which resulted from the remaining intermetallics that reacted with the electrolyte and increased the positive charges in the corroded area.
• Etching the surface of the alloy for 4 min assured lower porosity and a more uniform anodic film with a thickness of up to 70 µm and a maximum hardness of 550 HV.
• The corrosion resistance of the A535 alloy can be improved by sealing with stearic acid, which provided a better-quality sealed layer than NiF₂ due to the fewer reaction products and improved wettability.
• After the salt spray test for 336 h, the samples sealed with stearic acid did not show pitting sites or corrosion products.
• The corrosion behavior evaluated from the fitted polarization curves revealed that the anodic oxide film sealed with stearic acid had better corrosion resistance than the anodic oxide film sealed with NiF₂, as quantified by the lower corrosion current density (Icorr).

Highlights
• Alkaline etching can remove intermetallic particles at the surface of an A535 alloy.
• The intermetallic phase remaining on the alkaline-etched surface was responsible for the change in current density, which led to pitting corrosion.
• Etching for 4 min produces better surface adhesion of the anodic film and fewer voids.
• An anodized A535 alloy sealed with stearic acid has better corrosion resistance.
• Etching for 4 min produces better surface adhesion of the anodic film and fewer voids. • An anodized A535 alloy sealed by stearic acid has better corrosion resistance.
Distinct directional couplings between slow and fast gamma power to the phase of theta oscillations in the rat hippocampus

It is well established that theta (~4-10 Hz) and gamma (~25-100 Hz) oscillations interact in the rat hippocampus. This cross-frequency coupling might facilitate neuronal coordination both within and between brain areas. However, it remains unclear whether the phase of theta oscillations controls the power of slow and fast gamma activity or vice versa. We here applied spectral Granger causality, the phase slope index and a newly developed cross-frequency directionality (CFD) measure to investigate directional interactions between local field potentials recorded within and across hippocampal subregions CA1 and CA3 of freely exploring rats. Given the well-known CA3-to-CA1 anatomical connection, we hypothesized that interregional directional interactions are constrained by anatomical connectivity, and that within-frequency and cross-frequency directional interactions are always from CA3 to CA1. As expected, we found that CA3 drove CA1 in the theta band, and theta phase-to-gamma power coupling was prominent both within and between the CA3 and CA1 regions. The CFD measure further demonstrated that the directional coupling with respect to theta phase differed between slow and fast gamma activity. Importantly, CA3 slow gamma power phase-adjusted CA1 theta oscillations, suggesting that slow gamma activity in CA3 entrains theta oscillations in CA1. In contrast, CA3 theta phase controlled CA1 fast gamma activity, indicating that communication in the CA1 fast gamma band is coordinated by CA3 theta phase. Overall, these findings demonstrate dynamic directional interactions between theta and slow/fast gamma oscillations in the hippocampal network, suggesting that anatomical connections constrain the directional interactions.

In the entorhinal-hippocampal network, gamma-band activity is often divided into slow gamma (~25-55 Hz) and fast gamma (~60-100 Hz) oscillations (Colgin et al., 2009). Slow and fast gamma oscillations emerge in different phases of the theta rhythm and thus provide a mechanism to temporally segregate potentially interfering information (Hasselmo, Bodelon, & Wyble, 2002a). It has been demonstrated that slow gamma activity is associated with prospective coding while fast gamma is related to retrospective coding (Zheng, Bieri, Hsiao, & Colgin, 2016). Moreover, fast and slow gamma oscillations have recently been shown to be associated with different running speeds in rats: fast gamma activity driven by the medial entorhinal cortex is predominant during high running speeds, whereas slow gamma arriving from CA3 is prevalent at low running speeds (Ahmed & Mehta, 2012; Zheng, Bieri, Trettel, & Colgin, 2015). Taken together, slow and fast gamma oscillations are suggested to correspond to distinct functional states in the entorhinal-hippocampal network (Colgin, 2015a). While there is strong evidence that gamma power is coupled to the theta phase, it remains unclear whether the theta phase causally modulates the gamma power or whether bursts of gamma power phase-adjust the theta oscillations. Local field potential (LFP) recordings from hippocampal structures in behaving rats provide an excellent opportunity to investigate the directional interactions between theta oscillations and slow and fast gamma activity. To this end, we analysed LFPs recorded from CA1 and CA3 in rats exploring an open field environment.
We used spectral Granger causality (GC) and the phase slope index (PSI) to investigate the dominant within-frequency information flow between CA3 and CA1 (Dhamala, Rangarajan, & Ding, 2008; Nolte et al., 2008). While PSI is mainly a measure estimating the dominant direction of interaction, GC allows for estimating bidirectional interactions (Nolte, Ziehe, Krämer, Popescu, & Müller, 2010). To reconcile PSI and GC, we therefore also estimated interregional GC differences and compared them to PSI. More critically, a novel measure of cross-frequency directionality (CFD), which can evaluate the directional coupling between the phase of slow oscillations and the power of fast oscillations in a robust manner, was applied to these data (Jiang, Bahramisharif, van Gerven, & Jensen, 2015). The CFD measure was validated by means of simulation studies in Jiang et al. (2015); significant CFD values reflect consistent time delays between phase and power. We speculate that such delays are a consequence of neural transmission, albeit this still needs to be demonstrated by further electrophysiological recordings. Along the tri-synaptic pathway within the hippocampus, it is well established that CA3 projects to CA1 but not the reverse, because pyramidal cells of CA3 provide a major input to CA1 through the Schaffer collaterals (Amaral & Witter, 1989; Andersen, Bland, Myhrer, & Schwartzkroin, 1979). Based on these anatomical connection constraints, we predicted that theta and gamma within-frequency and cross-frequency directional interactions are always from CA3 to CA1.

Animals

Six male Long Evans rats weighing between 350 and 500 g were used in this study. Recording methods were similar to those described previously (Bieri, Bobbitt, & Colgin, 2014; Zheng et al., 2015). In brief, electrode drives containing independently moveable tetrodes ("hyperdrives") (Gothard, Skaggs, Moore, & McNaughton, 1996) were implanted in CA1 and CA3. The rats' light-dark cycle was inverted (lights off from 8:00 to 20:00 and lights on from 20:00 to 8:00) so that behavioural testing could take place during the rats' active phase (Beeler, Prendergast, & Zhuang, 2006), and the behavioural sessions were executed during the dark cycle. Behavioural training and data collection started at least one week after recovery from the surgery. The rats were food-deprived to about 90% of their free-feeding weight during data collection. All experiments were approved by the University of Texas at Austin and conducted according to the protocol of the United States National Institutes of Health Guide for the Care and Use of Laboratory Animals, in accordance with the Society for Neuroscience's Policies on the Use of Animals in Neuroscience Research.

Tetrode placement

Over the course of a few weeks after surgery, tetrodes were slowly lowered towards either the CA1 or CA3 stratum pyramidale. For each hyperdrive, one tetrode was placed at the corpus callosum or higher and used as a reference, as is typical for such recordings since this location is considered relatively silent (Bieri et al., 2014; Zheng et al., 2015, 2016). To make sure that reference tetrodes were placed in a silent location, they were recorded continuously against the ground. After the experiment finished, all recording locations were histologically verified. The final recording sites were located in or close to the CA1 and CA3 stratum pyramidale (Figure 1). CA1 tetrodes were selected as the tetrodes that were located approximately in the middle of the proximodistal axis of CA1.
CA3 tetrodes were selected as those located as close as possible to the middle of the proximodistal axis of CA3 (i.e. as close as possible to the middle of CA3b).

Data collection

Data collection began when cells were recorded at approximately the proper depth for the region of interest with amplitudes exceeding ~4-5 times the noise level. EEG characteristics (e.g. polarity and amplitude of sharp waves in the hippocampus, theta modulation) additionally helped establish recording locations. We used a Neuralynx data acquisition system (Neuralynx) to record the data. A unity-gain, multichannel headstage (HS-54; Neuralynx) was connected to the recording drive. Continuous LFP recordings were sampled at 2000 Hz and digitally bandpass filtered between 0.1 and 500 Hz. Using a breakout board (MDR-50 breakout board; Neuralynx), we duplicated the reference signal, and the reference signal was recorded against the ground continuously. LFPs were then obtained by differencing against the reference signal.

Behaviour

Rats restarted behavioural training at least one week after recovering from surgery. Six rats were trained to run in a 60 cm × 60 cm open field enclosure in three 10-min sessions each day. Small pieces of cookies were randomly scattered throughout the enclosure to motivate the rats to run. To make sure the rats were familiar with the environment, data acquisition was conducted after two days of familiarization training. Following each recording session, rats had about 10 min of rest in a towel-lined, elevated flowerpot. In each recording session, the LFP recording from the whole session was divided into 1-s epochs and the average running speed was calculated for each epoch. To make sure the rats were actively exploring, only the epochs with the highest 33.3% of running speeds (speed rank > 66.7%) were used for later data analysis.

Spectral analysis

The analyses were done using the FieldTrip toolbox (Oostenveld, Fries, Maris, & Schoffelen, 2011) and in-house MATLAB scripts (MATLAB and Statistics Toolbox Release 2014b, The MathWorks, Inc., Natick). Spectral coherence, GC and PSI estimates were computed by a fast Fourier transform (FFT) using a multitaper approach (7 Slepian tapers) (Mitra & Pesaran, 1999). The 1-s epoch lengths resulted in 1 Hz spectral resolution and ±2 Hz spectral smoothing by multitapering. For GC analysis, non-parametric spectral matrix factorization was applied to the cross-spectral density. Non-parametric GC analysis is superior to the parametric approach since it does not require the autoregressive model order to be estimated (Dhamala et al., 2008). Additionally, PSI was applied to assess the within-frequency directionality. PSI is a robust method to estimate the direction of information flow by computing the slope of phase differences in a pre-specified frequency range (Nolte et al., 2008). We used a 2 Hz bandwidth to calculate the phase slope.
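To make the PSI idea concrete, the following minimal Python sketch computes a phase slope index between two signals from segment-wise Fourier coherency. This is an illustrative NumPy re-implementation of the Nolte et al. (2008) definition, not the FieldTrip code used in the study; the segment length, frequency band and synthetic test signals are assumptions for demonstration.

```python
import numpy as np

def phase_slope_index(x, y, fs, band, seg_len=1.0):
    """Phase slope index (Nolte et al., 2008) between two signals.
    Positive values suggest x leads (drives) y within `band`."""
    n = int(seg_len * fs)
    segs = min(len(x), len(y)) // n
    X = np.fft.rfft(x[: segs * n].reshape(segs, n), axis=1)
    Y = np.fft.rfft(y[: segs * n].reshape(segs, n), axis=1)
    freqs = np.fft.rfftfreq(n, 1 / fs)

    # Complex coherency between x and y, averaged over segments.
    C = (X * Y.conj()).mean(0) / np.sqrt(
        (np.abs(X) ** 2).mean(0) * (np.abs(Y) ** 2).mean(0))

    # Sum Im(C*(f) C(f+df)) over the band: a lag that grows linearly in
    # phase across frequencies yields a consistently signed imaginary part.
    idx = np.where((freqs >= band[0]) & (freqs <= band[1]))[0]
    return np.imag(np.sum(C[idx[:-1]].conj() * C[idx[:-1] + 1]))

# Hypothetical usage: y is a delayed, noisy copy of x, so PSI(x, y) > 0.
fs, lag = 1000, 10  # 10-ms lag
x = np.random.randn(60 * fs)
y = np.roll(x, lag) + 0.5 * np.random.randn(60 * fs)
print(phase_slope_index(x, y, fs, band=(4, 10)))
```

The sign convention mirrors the one used in the paper: a positive PSI for the (CA3, CA1) pair indicates information flow from CA3 to CA1.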
Cross-frequency coupling (CFC) and directionality (CFD) analysis

Let $x_s$ denote the raw signal at segment $s$, and let $y_{v,s}$ denote the power envelope of segment $s$, where $v$ is the frequency of the fast oscillation. We define $X_s$ and $Y_{v,s}$ as the FFTs of $x_s$ and $y_{v,s}$, respectively, and let $\Psi_{v,s} = X_s (Y_{v,s})^*$ be the cross-spectrum between $X_s$ and $Y_{v,s}$, where $*$ denotes the complex conjugate. Each of these complex-valued vectors is centred at frequencies $f \in \{0, \Delta f, 2\Delta f, \ldots, \frac{n_{\mathrm{FFT}}}{2}\Delta f\}$, where $\Delta f = F_s / n_{\mathrm{FFT}}$ is the frequency resolution, with sampling frequency $F_s$ and Fourier transform length $n_{\mathrm{FFT}}$. We use the notation $X_s(f)$, $Y_{v,s}(f)$ and $\Psi_{v,s}(f)$ to denote the elements of these vectors centred at $f$. The measure of cross-frequency coherence is based on the coherence between the power envelope of a high-frequency signal and the low-frequency raw signal centred at $f$:

$$\mathrm{CFC}_{x,y_v}(f) = \frac{\left| \sum_{s=1}^{S} \Psi_{v,s}(f) \right|^2}{\sum_{s=1}^{S} |X_s(f)|^2 \; \sum_{s=1}^{S} |Y_{v,s}(f)|^2}, \tag{1}$$

where $S$ is the number of data segments. CFD is computed to evaluate the directionality of interactions between neuronal oscillations and is based on the PSI between the phase of slower oscillations and the power envelope of faster oscillations (Jiang et al., 2015). PSI is a robust method to quantify directionality because it allows one to infer whether one signal is leading or lagging a second signal by considering the slope of phase differences in a pre-specified frequency range (Nolte et al., 2008). The assumption is that a constant lag in the time domain translates into phase differences that change linearly with frequency in the considered range. Let the complex coherency be defined as

$$C_{x,y_v}(f) = \frac{\sum_{s=1}^{S} \Psi_{v,s}(f)}{\sqrt{\sum_{s=1}^{S} |X_s(f)|^2 \; \sum_{s=1}^{S} |Y_{v,s}(f)|^2}}. \tag{2}$$

It should be noted that Equation (2) and Equation (1) are related, because Equation (1) is the modulus squared of Equation (2). The CFD between signal $x$ and the power envelope of the signal $y_v$ at frequency tile $(v, f)$ is defined as

$$\mathrm{CFD}_{x,y_v}(f) = \mathrm{Im}\left( \sum_{f' \in [f - \beta/2,\; f + \beta/2]} C^*_{x,y_v}(f')\, C_{x,y_v}(f' + \Delta f) \right), \tag{3}$$

where $\beta$ is the bandwidth used to calculate the phase slope and $\mathrm{Im}$ denotes the imaginary part. For the CFC and CFD calculations, the high-frequency power envelope was extracted using a sliding-time-window approach. This was implemented by applying a discrete Fourier transform to successive segments of the data after multiplying with a Hanning taper (5 cycles long with respect to the frequency of interest). The high-frequency power envelope was extracted from 20 to 100 Hz in steps of 2 Hz. To calculate CFD, the bandwidth $\beta$ for estimating the phase slope index was set to 2 Hz at central phase frequencies from 2 to 20 Hz in 1 Hz steps. It is worth mentioning that we used zero padding to increase the frequency resolution to 0.5 Hz, so the number of frequency bins used to compute the phase slope index was 4. CFC and CFD values were normalized by dividing by the absolute maximum value of all intraregional and interregional interactions per animal; thus, all CFC and CFD values are in the range (0, 1] and [−1, 1], respectively.
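The sketch below is an illustrative NumPy/SciPy re-implementation of Equations (1)-(3) for a single channel, not the authors' FieldTrip/MATLAB pipeline; the Hilbert-based envelope extraction, segment length, synthetic test signal and omission of the per-animal normalization are simplifying assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def cross_frequency_directionality(x, fs, v_band, theta_freqs, beta=2.0, seg_len=1.0):
    """Sketch of CFD (Jiang et al., 2015): PSI between a raw signal and
    the power envelope of its fast-oscillation band.
    Positive CFD: low-frequency phase leads high-frequency power."""
    # 1. Power envelope of the fast band (Hilbert transform for brevity;
    #    the paper used a sliding-window Fourier approach).
    b, a = butter(4, [v_band[0] / (fs / 2), v_band[1] / (fs / 2)], btype="bandpass")
    env = np.abs(hilbert(filtfilt(b, a, x)))

    # 2. Cut both signals into segments and FFT them.
    n = int(seg_len * fs)
    segs = len(x) // n
    X = np.fft.rfft(x[: segs * n].reshape(segs, n), axis=1)
    Y = np.fft.rfft(env[: segs * n].reshape(segs, n), axis=1)
    freqs = np.fft.rfftfreq(n, 1 / fs)

    # 3. Complex coherency between raw signal and power envelope (Eq. 2).
    coh = (X * Y.conj()).sum(0) / np.sqrt(
        (np.abs(X) ** 2).sum(0) * (np.abs(Y) ** 2).sum(0))

    # 4. Phase slope across a beta-wide band around each theta frequency (Eq. 3).
    cfd = []
    for f0 in theta_freqs:
        idx = np.where((freqs >= f0 - beta / 2) & (freqs <= f0 + beta / 2))[0]
        cfd.append(np.imag(np.sum(coh[idx[:-1]].conj() * coh[idx[:-1] + 1])))
    return np.array(cfd)

# Synthetic usage: fast-gamma amplitude lags theta by 20 ms, so theta phase
# "drives" gamma power and the CFD around 8 Hz should come out positive.
fs = 1000
t = np.arange(0, 60, 1 / fs)
theta = np.sin(2 * np.pi * 8 * t)
mod = 1 + np.roll(theta, int(0.02 * fs))  # delayed modulation envelope
lfp = theta + 0.3 * mod * np.sin(2 * np.pi * 75 * t) + 0.1 * np.random.randn(t.size)
print(cross_frequency_directionality(lfp, fs, (60, 90), theta_freqs=[6, 8, 10]))
```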
Statistical testing

To assess the significance of within-frequency GC differences, we applied a non-parametric cluster permutation approach to correct for multiple comparisons (Maris & Oostenveld, 2007). First, for every frequency bin, we computed a two-sided two-sample t test between the CA3-to-CA1 and CA1-to-CA3 Granger influences, resulting in a t-statistic map. Cluster candidates were determined by t values that exceeded the 95th percentile of the t statistic (p < .05). To form clusters, at least two neighbouring t-map candidates were required, and cluster scores were computed as the summed t values within the cluster. Next, we circularly shifted the CA1 signal by a random number of time points while keeping the original CA3 signal, and recomputed the GC, the permuted t values and the cluster scores. This procedure was repeated 1,000 times, resulting in 1,000 maximum cluster scores in the cluster-level reference distribution. Observed cluster scores higher than the 97.5th percentile or lower than the 2.5th percentile of the permutation distribution were considered statistically significant at p < .05. For the PSI statistical assessment, the procedure was similar, but we used a one-sample t test as the test statistic instead because PSI is antisymmetric. To evaluate the CFC and CFD statistics, we used the R package "ARTool" for nonparametric two-way repeated measures ANOVAs (https://cran.r-project.org/web/packages/ARTool/index.html) due to the small number of animals (n = 6). ARTool relies on a preprocessing step that "aligns" the data before applying averaged ranks, after which point common ANOVA procedures can be used (Wobbrock, Findlater, Gergle, & Higgins, 2011).

Within-frequency interaction in the hippocampal network

First, we quantified the power spectra in CA1 and CA3. During active exploration, there was a prominent peak in the 6-10 Hz theta band in the power spectra of both CA1 and CA3 (Figure 2a). The interregional CA1-CA3 synchronization was then computed with the coherence metric. The coherence spectrum generally revealed two distinct regimes: one in the theta band (~4-10 Hz) and a less prominent broad peak in the slow gamma band (~30-50 Hz) (Figure 2b). Next, we investigated frequency-specific directional influences by computing the spectral GC and PSI, respectively (Dhamala et al., 2008; Nolte et al., 2008). The GC measure from CA3 to CA1 revealed a distinct peak in the theta band (~8 Hz) (Figure 2c). A cluster permutation approach was applied to statistically assess the difference in GC values while controlling for multiple comparisons over frequency bins. This was done by keeping the original CA3 signal while circularly shifting the CA1 signal by a random number of time points 1,000 times and recalculating the GC difference t-maps to obtain a reference distribution. This analysis confirmed that the GC influence from CA3 to CA1 was significantly stronger than the influence from CA1 to CA3 in the theta band (Figure 2c). Likewise, PSI showed that theta-mediated information flow was from CA3 to CA1 (Figure 2d), similar to our GC findings. In short, this within-frequency directionality analysis demonstrates that CA3 activity drives CA1 activity in the theta band more than in the reverse direction.

General cross-frequency interactions in the hippocampal network

Next, we quantified the cross-frequency couplings in the hippocampal network. This was first done by calculating the grand average of the interactions both within and between the CA1 and CA3 regions (CA1 phase to CA1 power; CA3 phase to CA3 power; CA1 phase to CA3 power; CA3 phase to CA1 power). CFC provides a measure of how much the power envelope of faster oscillations correlates with the phase of slower oscillations. The grand average CFC map showed that the phase of theta oscillations (4-10 Hz) was strongly coupled to gamma power (~30-100 Hz) (Figure 3a). We next estimated the CFD. The CFD quantifies whether the phase of slower oscillations drives the power of faster oscillations (positive CFD) or, conversely, whether the power of faster oscillations drives the phase of slower oscillations (negative CFD). Visual inspection of the grand averaged CFD map revealed two distinct gamma patterns: negative CFD from theta (4-10 Hz) phase to 30-50 Hz gamma power and positive CFD from theta (4-10 Hz) phase to 60-90 Hz gamma power (Figure 3b). Thus, we defined these two distinct gamma frequency bands as slow gamma (30-50 Hz) and fast gamma (60-90 Hz), respectively.

Hippocampal subregional cross-frequency interactions

We then investigated hippocampal subregional cross-frequency interactions by quantifying CA1 and CA3 intraregional and interregional cross-frequency interactions.
Since slow and fast gamma power were directionally coupled to theta phase differently in the hippocampal network, we evaluated the CFC and CFD effects of theta phase (4-10 Hz) on slow gamma power (30-50 Hz) and on fast gamma power (60-90 Hz) separately. The measures were calculated by averaging the corresponding values in the phase and power frequency ranges identified in Figure 3. To assess statistical significance, two-way repeated measures ANOVA analyses were performed for intraregional and interregional interactions. We first quantified the CA1 and CA3 intraregional cross-frequency interactions using a two-way repeated measures ANOVA with factors region (CA1 vs. CA3) and gamma range (slow gamma vs. fast gamma power). We found only a significant main effect of region (F(1, 20) = 16.08, p < .005) (Figure 4a), indicating that CA3 intraregional CFC between theta phase and gamma power (mean = 0.24, SD = 0.13) was significantly higher than CA1 intraregional CFC (mean = 0.09, SD = 0.06). With respect to the CA1 and CA3 intraregional CFD, there were no significant main or interaction effects (Figure 4b). With regard to interregional directional phase-to-power coupling, the CA1-CA3 interregional ANOVA on CFD revealed a significant main effect of direction (F(1, 20) = 8.71, p = .008) (Figure 5b). Specifically, the CFD of CA1 theta phase to CA3 slow gamma power coupling was negative (mean = −0.071, SD = 0.070), implying that CA3 slow gamma power was driving CA1 theta phase. Conversely, the CFD of CA3 theta phase to CA1 fast gamma power was positive (mean = 0.26, SD = 0.13), implying that CA3 theta phase was controlling CA1 fast gamma power. Additionally, there appeared to be two separate CFD patterns in the 60-90 Hz fast gamma range: positive 4-8 Hz theta to fast gamma CFD and negative 8-10 Hz theta to fast gamma CFD. However, on closer inspection, this was driven by one rat and thus was not general.

[Figure 2 caption: Power spectrum, coherence, GC and PSI in CA1 and CA3 regions. (a) CA1 (red line) and CA3 (blue line) power spectra. (b) Coherence between CA1 and CA3 (blue line). (c) GC spectra between CA3 and CA1 in both directions (CA1 to CA3, blue line; CA3 to CA1, red line). Frequency ranges with significant differences are marked by the black dashed line (p < .05). (d) PSI between CA3 and CA1. Frequency ranges with significant differences are marked by the black dashed line (p < .05). Note that PSI is antisymmetric, since PSI from CA3 to CA1 is the negative of PSI from CA1 to CA3. Positive PSI suggests that information flow is from CA3 to CA1, and vice versa. The shaded area represents standard deviations.]

[Figure 3 caption: Grand averaged CFC and CFD in the hippocampal network, obtained by averaging CA1 phase to CA1 power, CA3 phase to CA3 power, CA1 phase to CA3 power, and CA3 phase to CA1 power interactions.]

[Figure 4 caption: CA1 and CA3 intraregional CFC and CFD. (a) CA1 and CA3 intraregional CFC. Left panel: CA1 and CA3 intraregional CFC phase-power comodulograms. Right panel: mean theta phase (4-10 Hz) to slow gamma power (30-50 Hz) or fast gamma power (60-90 Hz) CA1 and CA3 intraregional CFC, obtained by averaging the CFC values in the defined phase and power frequency ranges. Error bars represent standard deviations across six rats. (b) Same as (a) but for CA1 and CA3 intraregional CFD. ***p < .005]
The statistics revealed that the main effect of gamma range was also significant (F(1, 20) = 5.35, p = .03), with positive CFD for fast gamma (mean = 0.24, SD = 0.14) and negative CFD for slow gamma (mean = −0.07, SD = 0.05). There was no significant interaction between the direction and gamma range factors in the CA1-CA3 interregional CFD. Of note, when further examining each CA1-CA3 interregional CFD individually, we found significant negative CA1 theta phase to CA3 slow gamma power CFD (Wilcoxon signed-rank test, p = .03) and positive CA3 theta phase to CA1 fast gamma power CFD (Wilcoxon signed-rank test, p = .03), suggesting that these were the main driving factors underlying the significant main effects of direction and gamma range. Overall, these results indicate distinct directional couplings between theta phase and slow/fast gamma power in the rat hippocampal CA1-CA3 circuit. In particular, the CA1-CA3 circuit cross-frequency dynamics are mainly reflected by CA3 slow gamma driving CA1 theta phase and CA3 theta phase controlling CA1 fast gamma.

DISCUSSION

We have provided novel insights into the neuronal dynamics supporting both within-frequency and cross-frequency directed information flow between hippocampal subregions. This was done by analysing LFP recordings from hippocampal subregions CA1 and CA3 in exploring rats. We found that CA3 drove CA1 in the theta band. We then confirmed the presence of prominent coupling between theta phase and gamma power within and between the CA3 and CA1 regions (Belluscio et al., 2012; Bragin et al., 1995; Colgin et al., 2009). Importantly, we demonstrated distinct directional functional couplings between slow and fast gamma power and the phase of theta oscillations in the rat hippocampal CA1-CA3 network: CA3 slow gamma activity controls the CA1 theta oscillation, while CA3 theta phase controls CA1 fast gamma activity.

Distinct directional couplings of slow and fast gamma power to theta phase

What might be the purpose of the coupling between CA3 slow gamma power and the CA1 theta phase from a functional point of view? Theta-modulated slow gamma has been proposed to facilitate memory retrieval (Colgin, 2015b). Memory retrieval is thought to be supported by CA3 due to its extensive recurrent collaterals (Brun et al., 2002; Steffenach, Sloviter, Moser, & Moser, 2002; Treves & Rolls, 1991). Phase synchronization between CA3 and CA1 in the slow gamma band might facilitate the transfer of retrieved memory representations from CA3 to CA1 (Carr, Karlsson, & Frank, 2012; Colgin et al., 2009). One possibility is that bursts of slow gamma activity in CA3 may phase-reset theta activity in CA1 to ensure that memory representations reflected by gamma-band synchronization are effectively transmitted from CA3 to CA1 within discrete theta cycles. This is consistent with the notion that related information is packaged together within individual theta cycles (Colgin, 2013). What is the functional role of the CA3 theta oscillation entraining CA1 fast gamma activity? Theta-modulated fast gamma has been proposed to facilitate memory encoding (Colgin, 2015b). One explanation is that CA3 theta phase coordinates multi-item memory information represented by CA1 fast gamma activity. This theta-gamma code is used to format memory encoding, whereby different pieces of information are represented in different fast gamma subcycles of a theta cycle.

What makes an oscillation distinct and how should we define its range?
While gamma oscillations in the hippocampus are often reported and discussed as if they were well defined, the exact gamma sub-band ranges are often opaque and inconsistent across studies (Belluscio et al., 2012; Bieri et al., 2014; Fernandez-Ruiz et al., 2017; Schomburg et al., 2014; Zheng et al., 2015). This is mainly due to methodological differences rather than any agreed-upon phenomenon. For example, Belluscio et al. (2012) defined the ranges based on distinct gamma bands associated with different phases of theta waves. Schomburg et al. (2014) identified distinct gamma sub-bands with phase-amplitude coupling comodulograms. Fernandez-Ruiz et al. (2017) determined different gamma bands via current source density analysis as well as independent component analysis decomposition of the multi-electrode LFP. Here, we defined gamma sub-bands with phase-amplitude directional coupling comodulograms. Although it is difficult to determine the best practice for defining gamma sub-band ranges, future studies should try to reconcile these approaches as much as possible.

Concerns about artifactual coupling

We characterized the relationship between theta oscillations (4-10 Hz) and gamma oscillations (30-90 Hz). In particular, we demonstrated CFC between theta and gamma oscillations by quantifying theta phase-to-gamma power coupling. A concern when interpreting CFC results pertains to whether fast oscillations are associated with distinct neuronal activity in the gamma band or whether the coupling is explained by the non-sinusoidal shape of theta oscillations (Aru et al., 2015). In the latter case, the coupling would be artifactual. The point has been made that non-sinusoidal wave shapes in the theta band can create spurious phase-amplitude coupling (Kramer, Tort, & Kopell, 2008; Lozano-Soldevilla, ter Huurne, & Oostenveld, 2016). To check this potential confound, we examined the sharpness of theta activity in relation to CFC (Cole et al., 2017). Sharpness is defined as the asymmetry ratio between the sharpness of the oscillatory signal peaks and that of the troughs: if the signal is perfectly symmetric and sinusoidal, the sharpness ratio equals 1. The sharpness ratios of CA1 and CA3 were greater than 1 (CA1: 1.09 ± 0.08; CA3: 1.47 ± 0.19), indicating that the signals are non-sinusoidal and raising the question of whether the slow/fast gamma power reflected independent oscillators or was a by-product of the non-sinusoidal properties. To test between these possibilities, we conducted a complementary analysis of time-frequency representations (TFRs) of induced power locked to theta oscillation peaks. If slow/fast gamma power is phase-locked to the theta rhythm as the CFC suggests, specific theta phase segments (i.e. peaks or troughs) should be associated with power modulations. As illustrated in Figure 6, we observed that (1) in CA1, slow gamma appeared around the theta peak while fast gamma appeared around the theta trough, and (2) in CA3, slow gamma was phase-locked to the theta peak. Therefore, slow and fast gamma power are indeed modulated within the theta cycle.
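The sharpness check above can be made concrete with a minimal Python sketch of one way to compute a peak/trough sharpness ratio. The exact definition here (the average absolute voltage difference between each extremum and samples a few milliseconds around it) follows the general approach of Cole et al. (2017), but the window width, filter settings and synthetic test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def sharpness_ratio(lfp, fs, theta_band=(4, 10), win_ms=5):
    """Peak/trough sharpness ratio: ~1 for a symmetric sinusoid,
    > 1 when peaks are sharper than troughs (non-sinusoidal theta)."""
    # Locate theta peaks and troughs on a theta-filtered copy of the signal.
    b, a = butter(3, [theta_band[0] / (fs / 2), theta_band[1] / (fs / 2)],
                  btype="bandpass")
    theta = filtfilt(b, a, lfp)
    peaks, _ = find_peaks(theta)
    troughs, _ = find_peaks(-theta)

    # Sharpness of an extremum: mean absolute difference between the raw
    # signal at the extremum and the samples win_ms before and after it.
    k = int(win_ms * fs / 1000)
    def sharp(idx):
        idx = idx[(idx > k) & (idx < len(lfp) - k)]
        return np.mean(np.abs(lfp[idx] - lfp[idx - k]) +
                       np.abs(lfp[idx] - lfp[idx + k])) / 2

    return sharp(peaks) / sharp(troughs)

# Synthetic usage: a second harmonic that sharpens the peaks and flattens
# the troughs should typically yield a ratio above 1.
fs = 1000
t = np.arange(0, 30, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) - 0.3 * np.cos(2 * np.pi * 16 * t)
print(sharpness_ratio(lfp, fs))
```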
Moreover, we checked the bicoherence in CA1 and CA3 (Figure 7). Bicoherence has been suggested to assess the level of phase-power coupling, and not of phase-phase coupling as commonly accepted (Hyafil, Giraud, Fontolan, & Gutkin, 2015; Shahbazi Avarvand et al., 2018). In the bicoherence comodulograms, true phase-power coupling is characterized by strong bicoherence outside the diagonal regions, while spurious phase-power coupling due to sharp peaks and harmonics is reflected by bicoherence in the diagonal regions (Kovach, Oya, & Kawasaki, 2018). Therefore, the theta-theta bicoherence in both CA3 and CA1 might be related to the harmonic of theta, while the theta-slow/fast gamma bicoherence might indicate true phase-power coupling. Lastly, both the slow and fast gamma oscillations were associated with fast neuronal oscillations as seen in spike-field recordings (Colgin et al., 2009). Taken together, these observations reduce concerns that the CFC in the rat hippocampus is created by non-sinusoidal theta oscillations (see Jensen, Spaak, & Park, 2016 for discussion).

[Figure 6 caption: Grand average of time-frequency representations (TFRs) of induced power locked to theta oscillation peaks, identified after applying a 4-10 Hz bandpass filter to the data. (a) Top panel: grand averaged TFRs time-locked to theta peaks (t = 0 s) in CA1. TFRs were calculated for each 0.2-s time window (−0.1 to 0.1 s) around the theta peak and then averaged. The colour bar represents the relative power change normalized to the whole 0.2-s time window. Bottom panel: grand mean representation of the theta peak-triggered trace over the 0.2-s time window around the theta peak in CA1. (b) Same as (a) but for CA3.]

[Figure 7 caption: Grand average of CA1 and CA3 bicoherence. Prominent theta-theta bicoherence in both CA3 and CA1 indicates the second harmonic of theta. Additionally, there is relatively strong theta-slow/fast gamma bicoherence in CA3 and CA1.]

Relation to other studies

At the single-neuron level, Csicsvari et al. (2003) demonstrated that CA3 pyramidal neurons discharge CA3 and CA1 interneurons at different latencies, with CA3 pyramidal neurons firing significantly earlier than CA1 interneurons. Here, we found that CA3 slow gamma led CA1 theta activity and CA3 theta activity led CA1 fast gamma. Taken together, these findings might suggest that the firing order of CA3 and CA1 neurons is reflected at the general neuronal population level in terms of theta activity, slow gamma activity and fast gamma activity between the CA3 and CA1 regions. Of note, slow gamma driving theta activity has been found in the hippocampal circuit before. For example, both in vitro and in vivo LFP recordings in rats showed CFC between CA3 gamma (25-40 Hz) and subicular theta (Jackson et al., 2014). To assess whether the theta-gamma coupling was associated with a particular delay, the authors shifted the theta phase across different lags while keeping the gamma amplitude timing constant. Directed CA3 slow gamma to subicular theta interaction was found to be the dominant directional interaction when rats were exposed to a novel open field environment. Moreover, Vaidya and Johnston (2013) demonstrated gamma-to-theta power conversion in the dendrites of CA1 pyramidal neurons. Gamma-frequency synaptic bursts could generate theta-frequency components important for oscillatory synchrony. In our study, this gamma-to-theta power conversion might occur in the slow gamma band between CA3 and CA1, since we found CA3 slow gamma power driving CA1 theta activity. Lastly, Nandi, Swiatek, Kocsis, and Ding (2019) investigated the interregional directional interactions between the hippocampus and the prefrontal cortex (PFC) and between the dentate gyrus (DG) and the Ammon's horn (CA1).
The ground truth was provided by the known anatomical connections, which predict hippocampus → PFC and DG → CA1 directionality. They found that (1) hippocampal high-gamma amplitude was significantly coupled to PFC theta phase, but not vice versa, and (2) DG high-gamma amplitude was significantly coupled to CA1 theta phase, but not vice versa. Similarly, we found that the theta and gamma within-frequency and cross-frequency directional interactions were always from CA3 to CA1, suggesting that anatomical connections constrain the directional connectivity in the hippocampal CA3-CA1 network. In conclusion, our analysis reveals complex directional interactions between theta and slow/fast gamma oscillations in the hippocampal network. In particular, CA3 slow gamma activity entrains the onset of CA1 theta cycles, while the CA3 theta oscillation controls CA1 fast gamma activity. These findings provide novel insight into how information flow is controlled in the hippocampus. In future studies, it would be of great interest to examine these directional interactions in other behavioural states, such as under anaesthesia, during sleep and during memory tasks.
The democracy and economic growth nexus: do FDI and government spending matter? Evidence from the Arab world

The purpose of the paper is to examine the direct and indirect links between democracy and economic growth. To do so, the authors estimate a dynamic panel simultaneous equations model on a sample of 16 Arab countries during the period 2002-2013. This study focuses on two particular channels through which democracy affects growth, namely FDI inflows and public consumption expenditure. The results show that there is no clear relationship between democracy and economic growth in the Arab countries, which confirms the skeptical approach. The ambiguity of this relationship can be explained by the fact that democracy promotes growth indirectly by stimulating FDI inflows and hinders growth by generating higher public consumption expenditure. (Published in Special Issue FDI and multinational corporations) JEL C3 O40 P16

Introduction

In the wake of the popular uprisings of 2011, which first broke out in Tunisia and subsequently spread to neighboring countries, the Arab world seemed to witness a new phase of socio-political change marking a turning point in the history of the region. The peaceful protests pursued in the name of freedom and democracy 1 have enabled some Arab countries to finally break with the persistent authoritarian regimes that had escaped the various waves of democratization that swept the world. In light of these political upheavals, studying the effect of democracy on economic growth in the Arab world context is of key importance, given that such a relationship could be influenced by the specificities of this region. From both theoretical and empirical points of view, democracy has an ambiguous effect on economic growth, as existing studies on this topic provide evidence of positive, negative and even no significant relationships between democracy and economic growth (Sirowy and Inkeles, 1990). Investigating the economic consequences of democratization in the Arab countries is clearly relevant in that few empirical studies examining this issue have been conducted on this set of countries. In addition, most studies carried out on this sample of countries have been limited to merely studying the direct link between democracy and growth while neglecting the transmission channels through which democracy may affect economic growth (Elbadawi, 2005; Elbadawi and Soto, 2014; Selim and Zaki, 2014; Rachdi and Saidi, 2015). This paper aims to fill this gap by examining the direct and indirect relationship between democracy and economic growth in the Arab world. To this end, we estimate a dynamic panel simultaneous equations model on a sample of 16 Arab countries during the period 2002-2013 2, using public consumption expenditure and FDI inflows as potential transmission channels. The choice of these two channels stems from the importance of state intervention in Arab economies and the increasing flow of FDI as an outcome of globalization.

1 The Arab revolutionary movements also appear as a response to the economic downturns resulting from the global financial crisis, the low economic performance of the Arab countries and their inability to deal with high unemployment, the lack of economic opportunities and the spread of corruption.

2 Several political and economic events have marked this time interval.
The most remarkable events were the invasion of Iraq by the United States in 2003, in an attempt to establish democracy in the Middle East after the attacks of September 11, 2001, and the emergence of the global economic crisis in 2008, which affected the economies of the Arab countries, notably those of the oil-rich countries and of the North African countries that have close ties with the EU.

The remainder of the current study is organized as follows. Section 2 briefly reviews the related literature. Section 3 presents the econometric methodology and the data. Section 4 presents the empirical findings. Section 5 reports the robustness checks of the obtained results. Finally, section 6 concludes and provides some policy implications.

Literature review

Theoretical and empirical studies that have examined the effect of democracy on economic growth reveal a lack of consensus on the nature of the relationship between democracy and economic growth. Theoretically, the direct link between democracy and economic growth has been analyzed on the basis of three approaches: the "compatibility view", which sustains that democracy promotes economic development; the "conflict view", according to which democracy hampers economic development; and the "skeptical view", which advocates that there is no systematic relationship between democracy and economic development (Sirowy and Inkeles, 1990; Helliwell, 1994; De Haan and Siermann, 1995; Feng, 1997). The ambiguity of this relationship could be explained by the fact that democracy can affect economic growth indirectly through various channels (Helliwell, 1994; Barro, 1996; Tavares and Wacziarg, 2001; Baum and Lake, 2003). Nevertheless, these channels may have controversial indirect effects. In fact, several studies have shown that some of these channels transmit a positive impact of democracy on economic growth, while others transmit a negative influence. From an empirical perspective, a number of studies have used simultaneous equations models to examine the direct and indirect relationship between democracy and economic growth. Interestingly, Helliwell (1994) constructed a two-equation system for a sample of 125 countries during the period 1960-1985. The results suggest that democracy has a negative direct effect on economic growth and a positive indirect impact via education and investment. Helliwell (1994) also argues that this positive indirect effect offsets the negative direct effect and that the net effect of democracy on economic growth seems impossible to discern. Further evidence of the negative and insignificant correlation between democracy and economic growth is provided by Tavares and Wacziarg (2001) for a sample of 65 industrialized and developing countries covering the period 1970-1989. The results show that democracy stimulates growth indirectly by promoting the accumulation of human capital and by reducing income inequality. However, it negatively affects economic growth by hindering the accumulation of physical capital and increasing public consumption. In the same vein, Kurzman et al. (2002) have shown, on the basis of a panel of 106 countries covering the period 1951-1980, that no significant direct effect of democracy on growth is captured. However, the authors identified two potential channels through which democracy affects growth. On the one hand, democracy stimulates investment, which is considered a key factor in economic growth. On the other hand, democracy tends to reduce public spending, which is detrimental to economic growth.
Using data for a sample of 128 countries over a 30-year period, Baum and Lake (2003) conclude that there is no direct influence of democracy on economic growth. These authors find that democracy tends to promote economic growth via improved access to education and public health. However, using an instrumental variables technique for a sample of 175 countries during the period 1960-2010, Acemoglu et al. (2014) find a positive and significant effect of democracy on economic growth. These authors argue that democracy promotes growth by encouraging economic reforms, stimulating investment in primary education and health, and mitigating social unrest. Similarly, Gründler and Krieger (2015) have demonstrated, using the GMM estimation technique, that democracy promotes economic growth as it is associated with more developed education, higher investment rates and lower fertility rates.

Econometric methodology and data

The aim of this paper is to study the channels through which democracy may affect economic growth. To this end, we use a dynamic panel simultaneous equations model for 16 Arab countries from 2002 to 2013. We consider that the effect of democracy on economic growth operates mainly through its impact on FDI and public consumption expenditure. On the one hand, in the wake of globalization, FDI flows have grown rapidly in the world economy. FDI inflows to Arab countries have increased considerably since the early 2000s (IMF, 2016). Like many developing countries, Arab policy-makers have paid particular attention to FDI inflows. These additional resources are needed to improve the recipient country's economic performance (Borensztein et al., 1998; Agosin and Mayer, 2000). More specifically, FDI inflows favor the increase of the country's production and productivity, encourage local investment and stimulate development and technological progress. On the other hand, public spending plays an important role in the Arab economies, particularly in the oil-producing countries, where a large share of government revenues comes from the export of oil and hydrocarbons. Although public spending is highly sensitive to fluctuations in oil prices, a disproportionate share of these expenditures is allocated to wages, subsidies and security. In fact, the proportion of public servants in the region as a whole is twice the world average (Malik, 2016). Specifically, more than 50 per cent of the budgets of these countries is devoted to public consumption spending, including public sector wages and the provision of social services. Indeed, Arab governments use public employment as a political tool to ease social tensions and preserve stability. Moreover, in order to preserve internal security, the Arab countries, in particular those of the GCC, devote an enormous proportion of public expenditure to defense and national security. This may explain the stability of the Arab regimes and the persistence of authoritarianism in the region.

Model specification

The equations of our model are formulated on the basis of the theoretical developments reviewed above. The system of equations can be written schematically as follows (subscripts i and t index countries and years):

$$\text{GROWTH}_{it} = \alpha_0 + \alpha_1 \ln\text{GDP}_{i,t-1} + \alpha_2 \text{DEM}_{it} + \alpha_3 \text{FDI}_{it} + \alpha_4 \text{GOV}_{it} + \alpha_5 \text{INV}_{it} + \alpha_6 \text{POP}_{it} + \alpha_7 \text{RENTS}_{it} + \alpha_8 \text{OPEN}_{it} + \varepsilon_{it}^{1} \tag{1}$$

$$\text{DEM}_{it} = \beta_0 + \beta_1 \text{DEM}_{i,t-1} + \beta_2 \ln\text{GDP}_{i,t-1} + \beta_3 \text{GROWTH}_{it} + \beta_4 \text{RENTS}_{it} + \beta_5 \text{OPEN}_{it} + \varepsilon_{it}^{2} \tag{2}$$

$$\text{FDI}_{it} = \gamma_0 + \gamma_1 \text{FDI}_{i,t-1} + \gamma_2 \text{DEM}_{it} + \gamma_3 \text{GROWTH}_{it} + \gamma_4 \text{RENTS}_{it} + \gamma_5 \text{OPEN}_{it} + \gamma_6 \text{INF}_{it} + \gamma_7 \text{LAW}_{it} + \varepsilon_{it}^{3} \tag{3}$$

$$\text{GOV}_{it} = \delta_0 + \delta_1 \text{GOV}_{i,t-1} + \delta_2 \text{DEM}_{it} + \delta_3 \text{GROWTH}_{it} + \delta_4 \text{POP}_{it} + \delta_5 \text{RENTS}_{it} + \delta_6 \text{DEBT}_{it} + \delta_7 \text{INF}_{it} + \delta_8 \text{OPEN}_{it} + \varepsilon_{it}^{4} \tag{4}$$

where GROWTH is the growth rate of real GDP per capita, GDP the level of real GDP per capita, DEM the democracy index, FDI net FDI inflows, GOV public consumption expenditure, INV the investment rate, POP the population growth rate, RENTS natural resource rents, OPEN trade openness, INF inflation, LAW the law and order index and DEBT public debt. Eq. (1) examines the determinants of economic growth based on a standard growth model that relates the growth rate of real GDP per capita to the initial level of real GDP, the investment rate and the population growth rate.
Our growth equation is augmented by a set of variables: democracy, our variable of interest, whose effect on growth is ambiguous (Helliwell, 1994; Tavares and Wacziarg, 2001); FDI inflows, which are expected to stimulate growth by promoting technology and knowledge transfer (Borensztein et al., 1998); public consumption expenditure, which is considered non-productive and harmful for growth (Barro, 1997; Afonso and Furceri, 2010); natural resource rents, which should stimulate economic growth by generating resources to finance development; and trade openness, which is supposed to have a positive effect on growth (Frankel and Romer, 1999). Eq. (2) examines the determinants of democracy. According to the "modernization theory", democratization is influenced by income per capita and other socioeconomic variables such as economic growth (Lipset, 1959). However, many studies have advocated that the positive impact of income on democracy disappears once it is achieved through oil wealth (Ross, 2001). Democratization is also affected by external factors. Indeed, countries that are more open to international trade are likely to be more democratic (Csordás and Ludwig, 2011). Eq. (3) highlights the impact of democracy on FDI inflows. Many studies argue that a democratic regime can create an attractive institutional environment for FDI by providing better protection of property rights (Busse and Hefeker, 2007), promoting economic freedom (Mathur and Singh, 2011) and guaranteeing better control of corruption (Kalenborn and Lessmann, 2013). Other determinants of FDI have been included in the equation, namely: economic growth, which increases the country's attractiveness for receiving FDI (Asiedu and Lien, 2011); natural resources, which tend to attract FDI (Poelhekke and van der Ploeg, 2010); trade openness, which positively affects FDI destined to serve foreign markets and negatively affects FDI destined to serve domestic markets (Blonigen, 2005); inflation, to take into account the detrimental effect of macroeconomic instability on FDI (Schneider and Frey, 1985); and law and order, to check whether good institutional quality stimulates FDI (Staats and Biglaiser, 2011). Eq. (4) evaluates the impact of democracy on public consumption expenditure. The literature suggests that democracy favors the rise of public spending due to increased redistribution demands (Aidt et al., 2006), trade union pressure for wage increases (Rodrik, 1999) and the opportunistic behavior of politicians during elections (Drazen and Eslava, 2010). A number of explanatory variables are introduced into the equation: economic growth, which leads to an increase in demand for public services (Adsera and Boix, 2002); population growth, which is assumed to have a negative effect on public consumption due to economies of scale (Alesina and Wacziarg, 1998); natural resource rents, which are often used to finance public expenditure (Ross, 2001); public debt, which has a crowding-out effect on public expenditure (Mahdavi, 2004); inflation, which can lead to a reduction in public spending due to the deterioration in the real value of tax revenues (Zakaria and Shakoor, 2011); and trade openness, which can lead to lower taxes and thus lower spending (Schulze and Ursprung, 1999).
Estimation method

The main econometric problem that may arise when estimating a simultaneous equations model for dynamic panel data is the endogeneity of the explanatory variables. This endogeneity bias 3 is due essentially to the problem of reverse causality between economic development and democracy (Przeworski and Limongi, 1993; Barro, 1996; Tavares and Wacziarg, 2001). In fact, as noted above, according to the modernization theory (Lipset, 1959), economic development may lead to the emergence of democracy. Similarly, the dynamic structure of the model makes the traditional estimators (fixed effects, random effects) biased, since the lagged level of the dependent variable is correlated with the error term. To overcome this problem, we use the difference-GMM estimator suggested by Arellano and Bond (1991). This estimation method makes it possible to instrument the lagged dependent variable as well as the endogenous explanatory variables with their own past values. This method controls not only for the endogeneity of the lagged dependent variable but also for that of some explanatory variables. The validity of the instruments is tested using the Hansen test and the Arellano-Bond test for second-order autocorrelation. The null hypothesis of the Hansen test is that the instruments are uncorrelated with the error term, whereas that of the Arellano and Bond (1991) test assumes the absence of second-order autocorrelation of the residuals.
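As a concrete illustration of why lagged levels can serve as instruments in a dynamic panel, the following minimal Python sketch implements the simplest instance of this idea: an Anderson-Hsiao-style just-identified IV on first differences, rather than the full Arellano-Bond difference-GMM used in the paper. The simulated data and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a dynamic panel: y_it = rho * y_i,t-1 + mu_i + e_it
N, T, rho = 200, 12, 0.5
mu = rng.normal(size=(N, 1))               # country fixed effects
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu[:, 0] + rng.normal(size=N)

# First-differencing removes mu_i, but the regressor dy_i,t-1 is correlated
# with de_it, so OLS on differences is biased. Instrument dy_i,t-1 with the
# level y_i,t-2, which is correlated with dy_i,t-1 but not with de_it.
dy   = (y[:, 3:] - y[:, 2:-1]).ravel()     # dy_it (dependent variable)
dy_1 = (y[:, 2:-1] - y[:, 1:-2]).ravel()   # dy_i,t-1 (endogenous regressor)
z    = y[:, 1:-2].ravel()                  # instrument: y_i,t-2

# Just-identified IV estimator: rho_hat = (z'dy) / (z'dy_1)
rho_iv  = (z @ dy) / (z @ dy_1)
rho_ols = (dy_1 @ dy) / (dy_1 @ dy_1)      # biased benchmark
print(f"true rho = {rho}, IV = {rho_iv:.3f}, OLS on differences = {rho_ols:.3f}")
```

Arellano-Bond difference-GMM generalizes this single instrument to the full set of available lagged levels, weighted optimally, which is what the paper's estimates rely on.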
Data

In this study, we employ an unbalanced panel of 16 Arab countries covering the period 2002-2013 (see the Appendix for the country list). We use two different measures of democracy. Our main democracy measure is the Freedom House index, widely used in the political science literature. This measure is composed of two indices: the political rights index, which refers to how fair and free elections are, and the civil liberties index, which involves a set of fundamental rights and freedoms, mainly freedom of expression and belief, associational and organizational rights, the rule of law and individual rights. More specifically, the Freedom House index defines democracy by the set of freedoms it is supposed to assure, thus leading to a maximalist definition of democracy (Munck and Verkuilen, 2002). The Freedom House index is constructed by averaging the political rights and civil liberties sub-indices. The index is measured on a 1-7 scale, with 1 representing the most free and 7 the least free. The scale has been inverted, so that higher values indicate more democratic countries. To assess the robustness of our results, we use the Polity2 index from the Polity IV database as an alternative measure of democracy. The Polity2 index ranges from -10 to 10, with higher values reflecting more democratic countries. In contrast to the Freedom House index, the Polity IV index defines democracy by the set of rules and procedures that ensure the transfer of political power and electoral participation, thereby providing a minimalist definition of democracy. Both the Freedom House and the Polity IV measures of democracy are normalized between zero and one, with higher values indicating a higher level of democracy. In this paper, we suppose that democracy affects economic growth through its impact on FDI inflows and public consumption expenditure. Fig. 1 and Fig. 2 present scatter plots of democracy against FDI and public consumption expenditure over the period 2002-2013, respectively. The dispersion diagram shown in Fig. 1 indicates a positive correlation between democracy and FDI inflows. This positive relationship between the two variables is also displayed in the correlation matrix reported in Table A.3 of the Appendix. This points out that the emergence of democracy in the Arab countries tends to promote the attractiveness of the region for FDI. Likewise, the positive slope shown in Fig. 2 suggests that there is a positive correlation between democracy and public consumption expenditure. This amounts to saying that democracy tends to stimulate public consumption expenditure in the Arab countries. Variable descriptions and data sources, as well as summary statistics of the main variables used in the current study, are provided in the Appendix.
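To make the scaling of the two democracy measures explicit, here is a minimal sketch of the min-max normalization described above. The paper states only that both indices are rescaled to [0, 1] with higher values meaning more democracy; the exact transformation formulas below are the natural implied ones, not quoted from the paper.

```python
def normalize_freedom_house(fh_raw: float) -> float:
    """Freedom House index: average of political rights and civil liberties,
    originally on a 1-7 scale where 1 = most free. Invert, then rescale
    so that 1 = most democratic and 0 = least democratic."""
    return (7.0 - fh_raw) / 6.0

def normalize_polity2(polity2: float) -> float:
    """Polity2 index: -10 (full autocracy) to +10 (full democracy),
    rescaled to [0, 1]."""
    return (polity2 + 10.0) / 20.0

# Hypothetical examples: a country rated 6.5 by Freedom House (not free)
# and -7 on Polity2 maps close to 0 on both normalized scales.
print(normalize_freedom_house(6.5))   # ~0.083
print(normalize_polity2(-7))          # 0.15
```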
Results

The estimation results of the growth equation are presented in Table 1. The regressions suggest that democracy has a positive but insignificant effect on economic growth, confirming the skeptical approach according to which there is no clear relationship between democracy and growth. This result is similar to those obtained by Helliwell (1994), Tavares and Wacziarg (2001), Kurzman et al. (2002) and Baum and Lake (2003). Regarding the other explanatory variables, the results obtained are consistent with those reported in prior empirical studies dealing with the determinants of economic growth. The conditional convergence hypothesis is verified, since the initial GDP coefficient is consistently negative. Similarly, the population growth rate has the expected negative sign. The effect of investment on economic growth, although positive, is found to be insignificant. In fact, investment in the Arab countries is largely considered unproductive. The low productivity is mainly due to the predominance of public investment and to the low level of private investment 4 (Sala-i-Martin and Artadi, 2003; Hakura, 2004; Makdisi et al., 2006). For natural resource rents, the positive and significant coefficient indicates that natural resources in Arab countries are a blessing rather than a curse for economic growth, which contrasts with Elbadawi and Soto (2014) and Selim and Zaki (2014), who argue that natural resource revenues in the Arab world are negatively associated with economic growth due to poor institutional quality and the persistence of authoritarian regimes in these countries. Contrary to our expectations, trade openness appears to have a negative and significant effect on economic growth. This can be attributed to the fact that exports from Arab countries are not very diversified and are concentrated on low value-added products (Galal and Selim, 2012; IMF, 2015).

The estimation results of the democracy equation support the modernization theory of Lipset (1959), according to which an increase in income per capita stimulates democracy. In addition, economic growth seems to favor democracy, which reinforces the conclusions of Lipset (1959). In line with Csordás and Ludwig (2011), we find no significant relationship between trade openness and democracy. The results also reveal a negative and significant effect of natural resource rents on democracy. These findings are consistent with recent research suggesting that natural resources are a barrier to the emergence of democracy (Elbadawi and Makdisi, 2007; Tsui, 2011; Fayad et al., 2012; Bougharriou et al., 2017). This is tantamount to saying that, in resource-rich countries, governments use the rents derived from these resources to reduce social pressure and ensure their stay in power. Notes: Standard errors are in parentheses. The Hansen and AR(2) tests indicate that we cannot reject the validity of our instruments. *, ** and *** denote significance at the 10%, 5% and 1% level, respectively.

In light of the estimation results of the FDI equation presented in Table 3, it seems that democracy stimulates FDI inflows significantly. These results are in line with those of Busse (2004) and Jakobsen and de Soysa (2006). This brings us to the point that democratic countries tend to create an investment climate that provides better protection of property rights, better control of corruption and an efficient legal system that guarantees economic freedom, thereby attracting foreign investors. In line with our expectations, economic growth appears to be positively and significantly related to FDI inflows. These findings support those of Moosa (2009) and Mottaleb and Kalirajan (2010). The estimates also show that inflation has a negative and statistically significant effect on FDI. This result, consistent with that obtained by Schneider and Frey (1985), implies that an unstable macroeconomic environment impedes the entry of foreign firms. Similarly, trade openness has a negative and significant coefficient. This may be justified by the fact that FDI in Arab countries is essentially horizontal in nature, generally intended for the local market, thus confirming the tariff-jumping hypothesis (Almounsor, 2007). Moreover, we find that natural resources affect FDI inflows positively, but not significantly. This is not surprising in view of the fact that several studies sustain that the effect of natural resources on FDI flows depends on institutional quality (Poelhekke and van der Ploeg, 2010; Asiedu, 2013). More specifically, natural resources tend to stimulate FDI significantly only in countries with good institutional quality. This is well illustrated by the positive and significant coefficient associated with the "law and order" variable, reflecting that a strong legal system creates an investment-friendly environment and strengthens foreign investors' confidence (Biglaiser and Staats, 2010; Alexander, 2014).

The results shown in Table 4 indicate that democracy stimulates public consumption expenditure. Our findings are consistent with those reported by Aidt et al. (2006) and Profeta et al. (2013), who advocate that the extension of the right to vote to the masses, most notably the poor, causes an increase in demands for income redistribution, which favors the increase of public spending and social transfers. Workers' unions can also lobby for wage increases. In such a situation, the political elites find themselves obliged to meet these requirements in an attempt to remain in power. This is illustrated by the fact that, in response to the events of the Arab Spring, Arab governments increased wages and employment in the public sector in order to alleviate social discontent. The results also suggest a negative and significant relationship between economic growth and public expenditure. This implies that, in times of economic downturn and in order to absorb unemployment, governments increase public spending by stimulating public sector employment and raising subsidies to calm social frustration. Similarly, population growth has the expected negative effect. Regarding macroeconomic indicators, we find that inflation is negatively associated with public expenditure. These findings support those of Zakaria and Shakoor (2011) and Eterovic and Eterovic (2012), who argue that high inflation tends to reduce the real value of tax revenues, which can hamper the growth of government spending. As well, the results reveal no evidence that public debt and trade openness have significant explanatory power. The estimates also indicate that an increase in natural resource revenues favors an increase in public spending.
This result can be explained by the fact that, in the resource-rich Arab countries, oil rents have led to the expansion of public spending, mainly on wages. In fact, politicians tend to increase employment in the public sector in order to retain popular support and contain political protests so that they can ensure their political survival (Ali and Elbadawi, 2012). Notes: Standard errors are in parentheses. Diff-GMM regression uses robust standard errors clustered by country. We employ the two-step GMM estimator with the Windmeijer (2005) finite sample correction for standard errors. To avoid overfitting endogenous variables, we collapse the instrument set as suggested by Roodman (2009). The Hansen and AR(2) tests indicate that we cannot reject the validity of our instruments. *, ** and *** denote significance at the 10%, 5% and 1% level, respectively.

Robustness checks

To check the robustness of our results, we use the Polity2 index of the Polity IV database as an alternative measure of democracy. As can be clearly seen in Table A.4 of the Appendix, democracy does not appear to have a significant effect on economic growth in the Arab countries even when measured by the Polity IV indicator. It is therefore important to mention that our core results are not affected by the democracy index employed. Similarly, Table A.4 shows that the effect of FDI and public consumption expenditure on economic growth is significant and the estimated coefficients have the signs initially obtained. The results also remain unchanged for most control variables. For the democracy equation, the results reported in Table A.5 of the Appendix show that the coefficient on the initial level of income per capita remains consistently positive even after using an alternative measure of democracy, which again confirms the modernization theory. As for the other explanatory variables, the results are consistent with those obtained previously. With regard to the FDI equation, Table A.6 of the Appendix indicates that democracy continues to have a positive and significant effect on FDI. The control variables seem to exert the same effects as those obtained in our benchmark model, except for inflation, which becomes insignificant. The reported estimates of the public consumption expenditure equation in Table A.7 of the Appendix confirm the positive effect of democracy on public expenditure. The results also show that some control variables retain their significance and keep the same sign, while others gain significance.

Conclusion and policy implications

The revolutions of the Arab Spring have fostered the fall of some Arab authoritarian regimes that had held power for several decades, opening the way for democratic changes in the region. In light of these political developments, it is particularly interesting to study the relationship between democracy and economic growth in the Arab world context, as little empirical research has been conducted on this subject. The purpose of the paper is to examine the direct and indirect links between democracy and economic growth. To do so, we estimate a dynamic panel simultaneous equations model on a sample of 16 Arab countries during the period 2002-2013. This study focuses on two particular channels through which democracy affects growth, namely FDI inflows and public consumption expenditure.
The results show that there is no clear relationship between democracy and economic growth in the Arab countries, which confirms the skeptical approach (Helliwell, 1994; Tavares and Wacziarg, 2001; Kurzman et al., 2002; Baum and Lake, 2003). The ambiguity of this relationship can be explained by the fact that the impact of democracy on economic growth operates through different channels, each of which affects growth differently. Interestingly, our model shows that democracy promotes growth indirectly by stimulating FDI inflows and hinders growth by generating higher public consumption expenditure. More specifically, a democratic country offers a favorable climate for investment that ensures the rule of law and the protection of private property, thereby making itself more attractive to foreign investors. At the same time, democracy is associated with higher public spending. In fact, to cope with social pressures and to keep themselves in power, politicians increase public spending by raising social transfers and subsidies to satisfy citizens' demands for income redistribution, and by stimulating public employment to reduce unemployment during economic recessions. These results are robust to the use of an alternative measure of democracy. In view of the results obtained from our model, it should be emphasized that democracy has a growth-enhancing effect only if its benefits outweigh its costs. In other words, the benefits of FDI must exceed the costs of public spending. Hence, a number of policy implications for the Arab countries may arise from our findings. First, as democracy is associated with an increase in administrative salaries and expenses, a reduction in current expenditure is of paramount importance. Accordingly, the adoption of public sector reforms is highly desirable. On the one hand, it is essential to create incentives that motivate public servants to move towards employment in the private sector. On the other hand, Arab governments have to undertake expenditure reforms and improve the quality of their budget institutions. Indeed, the implementation of effective spending rules can help control public spending. Reducing the excessive dependence on natural resources and fostering economic diversification are also expected to lower public spending. Second, improving institutional quality and the business environment seems to be a key solution to attract more FDI. Therefore, reforms aimed at promoting good governance are needed. Stimulating economic diversification in the Arab countries and attracting FDI concentrated in the non-oil sector would also enhance economic growth (IMF, 2016). In view of the above, it is important to note that the simultaneous equations model cannot take into consideration all the costs and benefits of democracy. In fact, the current research is limited to studying only the effects of two transmission channels, which are supposed, from our point of view, to be the most influential in the Arab world context. Nevertheless, other channels can also be taken into account when examining the link between democracy and economic growth. This may be the subject of future research. Variable description (Appendix): the law and order index ("law") lies between 0 and 6, with higher values indicating a more efficient legal system. Notes: Standard errors are in parentheses. Diff-GMM regression uses robust standard errors clustered by country. We employ the two-step GMM estimator with the Windmeijer (2005) finite sample correction for standard errors.
To avoid overfitting endogenous variables, we collapse the instrument set as suggested by Roodman (2009). The Hansen and AR(2) tests indicate that we cannot reject the validity of our instruments. *, ** and *** denote significance at the 10%, 5% and 1% level, respectively.
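To illustrate what collapsing the instrument set means in practice, here is a minimal numpy sketch contrasting the standard Arellano-Bond instrument matrix with its collapsed counterpart for a single panel unit. The toy series and panel length are placeholders; this is a conceptual sketch of the Roodman (2009) device, not the estimation code used in the paper.

```python
import numpy as np

def diff_gmm_instruments(y, collapse=False):
    """Build instrument columns for a differenced equation at t = 2..T-1,
    using lagged levels y[t-2], y[t-3], ... (Arellano-Bond style).

    y : 1-D array of T observations for one panel unit.
    Returns a (T-2, n_cols) matrix; rows index the differenced periods.
    """
    T = len(y)
    rows = T - 2                          # differenced observations start at t = 2
    if collapse:
        # collapsed: one column per lag depth; entry in row r is the
        # available level y[t - 2 - lag], zeros where it does not exist
        Z = np.zeros((rows, rows))
        for r, t in enumerate(range(2, T)):
            for lag in range(t - 1):      # levels y[t-2], ..., y[0]
                Z[r, lag] = y[t - 2 - lag]
    else:
        # standard: one column per (period, lag) pair, block-diagonal layout
        cols = sum(t - 1 for t in range(2, T))
        Z = np.zeros((rows, cols))
        c = 0
        for r, t in enumerate(range(2, T)):
            for lag in range(t - 1):
                Z[r, c] = y[t - 2 - lag]
                c += 1
    return Z

y = np.arange(1, 7, dtype=float)                      # toy series, T = 6
print(diff_gmm_instruments(y).shape)                  # (4, 10): count grows ~ T^2
print(diff_gmm_instruments(y, collapse=True).shape)   # (4, 4): count linear in T
```

The point of collapsing is visible in the shapes: the instrument count stops growing quadratically with the panel length, which mitigates the overfitting of endogenous variables the notes refer to.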
Effects of Touch Location and Intensity on Interneurons of the Leech Local Bend Network

Touch triggers highly precise behavioural responses in the leech. The underlying network of this so-called local bend reflex consists of three layers of individually characterised neurons. While the population of mechanosensory cells provides multiplexed information about the stimulus, not much is known about how interneurons process this information. Here, we analyse the responses of two local bend interneurons (cells 157 and 159) to mechanical stimulation of the skin and show their response characteristics to naturalistic stimuli. Intracellular dye-fills combined with structural imaging revealed that these interneurons are synaptically coupled to all three types of mechanosensory cells (T, P, and N cells). Since tactile stimulation of the skin evokes spikes in one to two cells of each of the latter types, interneurons combine inputs from up to six mechanosensory cells. We find that touch location and intensity can be estimated reliably and accurately based on the graded interneuron responses. Connections to several mechanosensory cell types and specific response characteristics of the interneuron types indicate specialised filter and integration properties within this small neuronal network, thus providing evidence for more complex signal processing than previously thought. The medicinal leech possesses a relatively simple and easily accessible neuronal system 11,12 with individually identifiable, monopolar neurons 13, and accurate behavioural patterns. Three types of mechanosensory cells with distinct receptive fields 14-19 (see Fig. 1) are situated in each segmental ganglion of the leech: six T (touch) cells, four P (pressure) cells and four N (nociceptive) cells 14. Additionally, each ganglion contains interneurons (INs) and motor neurons (MNs); as a result, one isolated ganglion, with its 400 neurons in total, is sufficient for eliciting this behaviour 10,11. Earlier studies focused on P cells as the main trigger for the local bend response, since T cells showed only minor contributions to muscle movements during the behaviour 3,9,18,20. However, Thomson and Kristan 1 found that electrical stimulation of two ventral P cells with overlapping receptive fields resulted in a less precise muscle movement than that induced by mechanical skin stimulation. Indeed, we showed in preceding studies 21,22 that T cells encode touch locations very precisely. These studies suggest that T cells might play a substantial role in the local bend response. At the next network level, at least nine types of INs are known to be involved in the local bend response 5. These neurons have synaptic connections onto MNs, which elicit the muscle contraction or elongation during the local bend 4,5. Most of the local bend INs receive input from all four P cells in one ganglion, indicating that these INs are not specialised for eliciting only one local bend direction but are rather activated by a wider range of touch locations mediated by the corresponding mechanosensory cells 5. At least some of the local bend INs also receive input from T cells 22, but the relative contributions of the different types of mechanosensory cell inputs are not known yet. Here, we focused on two local bend INs 5 (cells 157 and 159), which respond to synaptic inputs from mechanosensory cells with graded membrane potential changes and spikes of very small amplitude (spikelets).
Influence of touch location and intensity. We next examined how touch properties were reflected in the IN responses. All recorded cells identified as cell 157 showed an influence of touch location on the graded response features amplitude, integral, latency (Fig. 3A-F) and slope (not shown). Each of these features depended significantly (Friedman test, p < 0.001; N cells = 5) on touch location for all examined intensities. Like mechanosensory cells 21, cell 157 seemed to have a spatially structured receptive field (Figs 2A and 3A-C), showing more pronounced EPSPs when the touch location was closer to the receptive field centre, which lies on the body wall ipsilateral to the IN cell body in the ganglion (Figs 2 and 3). This result was found for cell 157 on both sides of the ganglion (Fig. 3D-F). One exemplary recording of a cell 159 on the left side of the ganglion showed a similar tendency, responding with smaller EPSPs at +20° (right of the ventral midline) and higher amplitudes at −30° (Fig. 3G,H). These results confirm previous conjectures about the IN receptive fields 7,11,19. In addition to touch location, cell 157 responses also reflected touch intensities: higher intensities elicited significantly stronger responses (for all graded response features, Friedman test, p < 0.001; N cells = 5). Amplitude and integral of cell 157 increased in a linear manner between 10 and 70 mN, while latency decreased (Fig. 4). At the ventral midline, responses of cell 157 on both sides of the ganglion depended similarly strongly on touch intensity (Fig. 4B-D). In response to tactile stimulation, the IN responses in both cell types started shortly after the first T cell spikes and before the first P cell spike occurred 21 (Figs 3F and 4G). For different touch intensities applied at the ventral midline, the response latencies of the two mechanosensory cell types (Fig. 4G; 'P', red; 'T', blue) and the cell 157 latency (Fig. 4G; grey) reveal that the IN response reliably starts earlier than the first P cell spike. We found in a previous study that, at the ventral midline, the latencies of the two ventral mechanosensory cells of one cell type (P or T cells) are equally long, but that T cells have a significantly shorter response latency than P cells 21,22. Taken together, this suggests that the fast and precise T cells add valuable information about the touch stimulus to the IN response. Morphological and physiological connections between mechanosensory cells and interneurons. Connections between the different mechanosensory cells (ventral P and T cells and lateral N cells) and cell 157 as well as cell 159 were analysed morphologically by intracellular dye-fills (Fig. 5A,B) and physiologically by paired intracellular recordings using electrical stimulation of the mechanosensory cells (Fig. 5C). We found significant changes of the membrane potential of cell 157 due to P cell, T cell or N cell spikes in all ipsilateral recordings (Kolmogorov-Smirnov, p < 0.001; Fig. 5C, left column) as well as in contralateral recordings (Kolmogorov-Smirnov, p < 0.001). Connections of T and P cells to cell 157 were also suggested by cell-specific structural imaging (Fig. 5A). Magnifications of a subset of confocal microscope layers showed putative input sites of P (Fig. 5A; red, arrowheads) as well as of T cells (Fig. 5A; blue, arrows) onto cell 157 (Fig. 5A; green). Cell 159 is located near cell 157 (Figs 1 and 5B) and showed a distinct response pattern to tactile stimulation (Fig. 2).
The EPSPs in response to tactile stimulation follow the response of the T cells, which typically generate a burst at stimulus onset and a burst after stimulus offset (Fig. 2). A preliminary data set of intracellular recordings and dye-fillings of cell 159 and mechanosensory cells shows responses in cell 159 consequent on T cell stimulation. Overall, responses of cell 157 allowed significant discrimination of very small (5°) touch location differences based on graded response features, with amplitude and integral performing similarly well (Fig. 6A). A combination of these two features did not improve the estimation significantly (Fig. 6A). As an additional response feature, we tested the spikelet count of cell 157. This feature did not yield a discrimination performance significantly higher than the 75% threshold (Fig. 6A). The classification of nine locations led to 43.75% (median) correct estimation for the integral of the cell 157 responses (Fig. 6B). The other response features also led to classification results well above chance level (Fig. 6B). Intuitively, the good estimation performance of the integral is not surprising, since this response feature depends on amplitude as well as slope and hence may reflect the IN response shape most reliably. We also tested how strongly two touch intensities have to differ to be distinguishable based on the response features. A combination of integral and amplitude allows the detection of 30 mN intensity differences significantly above threshold (Fig. 6C). This result is in agreement with behaviourally determined detection thresholds 10. The best classification result for five intensities (increment 10 mN) was obtained by the integral, yielding a median of 28% correct classification (Fig. 6D). The other response features led to percentages of correct classification even closer to the chance level of 20% (Fig. 6D). However, it should be kept in mind that the ability of leeches to discriminate stimulus intensities behaviourally is higher for low intensities and falls off linearly with rising intensities 10.

Discussion

Small neuronal systems can be used to investigate how information from sensory stimuli is translated into surprisingly accurate behavioural outputs. The local bend response is one of the fastest behaviours of the leech, with muscle movements starting only 200 ms after stimulus onset 10,18. Furthermore, the precision of the animal's ability to discriminate two touch locations is comparable to that of the human fingertip 1,10. Here, after investigating touch encoding by mechanosensory cells 21, we examined how this information is processed at the IN level. Interneuron responses to tactile stimulation. The receptive field of cell 157 (Fig. 3) fits in with the receptive fields of other local bend INs, as inferred by Lewis 19. Most IN types are paired 5 and receive inputs from more than one mechanosensory cell 5,11. Consequently, INs may have broader receptive fields than the latter, suggesting a receptive field of up to 360°, whereas the mechanosensory cells innervate an area of about 180° of the circumference 11. Thus, the receptive fields of an IN pair span the whole circumference of the segment with a huge overlap, while on the level of the mechanosensory cells, the same area is innervated by four (P cells) or six (T cells) cells 11. Responses of the INs also depended on touch intensity (Figs 4 and 6). Similar dependencies were found for mechanosensory cells 21,22.
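For concreteness, the graded response features referred to throughout (amplitude, integral, latency and slope) could be extracted from a single membrane potential trace along the following lines. This is a minimal numpy sketch under assumed conventions (baseline window, 10%-of-peak latency criterion); the authors' actual analysis parameters are not stated here.

```python
import numpy as np

def graded_response_features(v, t, stim_onset, baseline_win=0.1):
    """Amplitude, integral, latency and slope of a graded (EPSP-like) response.

    v : membrane potential trace (mV), t : time axis (s),
    stim_onset : stimulus onset time (s).
    Baseline window and latency criterion are illustrative assumptions.
    """
    baseline = v[(t >= stim_onset - baseline_win) & (t < stim_onset)].mean()
    resp = v[t >= stim_onset] - baseline           # depolarisation re. baseline
    tr = t[t >= stim_onset] - stim_onset           # time re. stimulus onset

    amplitude = resp.max()                         # peak depolarisation (mV)
    integral = np.trapz(resp, tr)                  # area under the response (mV*s)
    onset_idx = int(np.argmax(resp >= 0.1 * amplitude))
    latency = tr[onset_idx]                        # first crossing of 10% of peak
    peak_idx = int(resp.argmax())
    dt = max(tr[peak_idx] - tr[onset_idx], 1e-9)   # guard against zero division
    slope = (resp[peak_idx] - resp[onset_idx]) / dt
    return amplitude, integral, latency, slope
```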
It is not yet clear to what extent INs of the leech are specialised in processing single touch properties or their combinations, and whether multiplexing plays a similar role as found for mechanosensory cells 21. In the stick insect, descending INs were found that were specialised for a single stimulus property but were additionally activated by other stimulus properties 25. The leech local bend network provides a good model system to further investigate the fundamental question of how combinations of relevant stimulus properties are processed at the IN level to elicit specific, accurate behavioural responses. Connections between mechanosensory cells and interneurons. Previous studies on the local bend behaviour focused on P cells and synaptic connections between this cell type and the local bend INs 4-8,18,26-28, and the first assumption was that the local bend network could be represented as a simple feed-forward circuit 11. However, more recent studies revealed more complex mechanisms within this circuitry, such as lateral connections of mechanosensory cells and motor neurons 29-31. T cells receive polysynaptic, mostly inhibitory input from P cells and N cells 29, and P cells also form inhibitory polysynaptic chemical connections onto other P cells in the same ganglion 30. These lateral interactions at the sensory cell level might play a role in the localization of the local bend response 23,30. Additionally, lateral inhibition among motor neurons and a widespread type of inhibition were also found, suggesting that the local bend network may use balanced excitation and inhibition for gain control 31. To our knowledge, no excitatory connections from T to P cells or from N to P cells have been found 29. This is consistent with our own experience: we never saw effects in P cells when stimulating the other mechanosensory cell types electrically. For the polymodal N cells, it was found that high-frequency stimulation can cause potentiation of P cell synapses 23,32. However, in our experiments N cells never fired at high frequencies 21,22. By labelling multiple cells in the leech nervous system, we found putative input sites of P and T cells onto cell 157, as well as of T cells onto cell 159 (Fig. 5A,B). Electrophysiological experiments confirmed these findings and showed that single N cell spikes also elicited EPSPs in cell 157 (Fig. 5C). Furthermore, we found that cell 157 was influenced by spikes of ipsi- as well as contralateral mechanosensory cells. This is in line with results shown by Lockery and Kristan 5 for paired intracellular recordings of dorsal P cells and cell 157. Remarkably, the INs showed short response latencies that were slightly longer than the T cell and shorter than the P cell response latencies 21 (Figs 3 and 4). This strong T cell influence on the initiation of the IN response supports our previous findings 21,22, clearly suggesting the involvement of T cells in the local bend network. Here, we did not explicitly test the kind of synaptic connection between cell 157 and the mechanosensory cells, nor did we define the synaptic weight of single mechanosensory cells by evaluating the EPSPs relative to the elicited number of mechanosensory spikes. Even though the cell-specific structural imaging might suggest monosynaptic coupling between the mechanosensory cells and cells 157 and 159, the connection could be monosynaptic or polysynaptic, electrical or chemical.
This circuit needs to be characterised in more detail in future studies to discern correlations among stimulus properties, activity of mechanosensory cells and INs, and the behavioural muscle response. The examined INs seem to use different strategies for combining mechanosensory cell input. Cell 157 tends to integrate EPSPs coming from all three types of mechanosensory cells with a long time constant (Fig. 2A). In contrast, responses of cell 159 seem to follow mainly the fast-adapting T cell responses, leading to shorter, more transient membrane potential fluctuations (Fig. 2B). These findings may indicate principal differences in the roles of different INs in the network, e.g., integration versus coincidence detection of multiplexed information about several stimulus properties. Furthermore, previous findings may suggest an involvement of cells 157 and 159 in other behaviours: Briggman and colleagues 33,34 used voltage-sensitive dye (VSD) recordings to investigate decision-making in the leech and found neurons that discriminated very early in time between the two behaviours of swimming and crawling 33,34. These neurons were found in the region of the ganglion where cells 157 and 159 are located (Fig. 4C in Briggman et al. 33; Fig. 5B in Briggman and Kristan 34). Multifunctional INs relevant for several neuronal circuits were also described by Frady and colleagues 35. The recent availability of double-sided VSD imaging could help to shed light on these multifunctional INs and to give an overview of the neuronal circuits in the whole ganglion 36. Overall, the small system of the leech allows basic conclusions to be drawn about the processing of information through a multi-layered network with a defined set of behavioural outputs. Estimation of stimulus properties based on graded signals. Most studies on neural coding and stimulus estimation have focused on the analysis of spike trains 37,38. However, at the level of INs, graded responses play a significant role in information processing, at least in invertebrate systems and also in vertebrate sensory systems like the retina 39. Bipolar cells transfer the graded photoreceptor information to ganglion cells, and this signal is modulated by retinal INs, horizontal and amacrine cells, solely through graded signals. De Ruyter van Steveninck and Laughlin 40 concluded in a computational study that graded signals are specialised for accurate information processing over short distances. In our study, features of graded responses were used to estimate underlying stimulus properties. Very small location and intensity differences could be discriminated based on responses of one IN type (cell 157) receiving input from three mechanosensory cell types (T, P, N) simultaneously. The IN responses thus decode the input of the mechanosensory cell population in a precise manner. The best and most reliable stimulus estimation results were obtained from the integral. The other response features, in particular the latency, yielded less reliable stimulus estimates and are more susceptible to stochastic membrane potential fluctuations and spikelets. Emergence and origin of spikelets were investigated in different species and sensory systems 24,41-44, but the role of spikelets in neuronal information processing remains to be clarified. However, Lockery and Kristan 5 did not find a correlation between these small action potentials and motor neuron spikes in the leech.
In agreement with these findings, we found in this study that spikelet counts and inter-spikelet intervals (not shown) did not improve the stimulus estimation and yielded results in the range of the IN response latency and slope (Fig. 6). The local bend network appears to be a small but complex neuronal circuit 29-31. This study suggests that, in addition to P cells, T cells and possibly N cells provide input to the network. The different response patterns of the IN types may indicate specialisations involved in multiplexed population coding, as suggested by Pirschel and Kretzberg 21. Local bend INs might process the relative latencies as coincidence detectors and consequently decode the touch location. Or they might merge, as slow integrators, the spike counts for decoding the touch intensity. Moreover, the simple nervous system of the leech processes information in the form of spike trains of mechanosensory cells, which result in graded signals of INs, which are in turn translated back into spike trains by motor neurons. Thus, this animal model allows insights into general principles of sensory coding, up to the behavioural importance of multifunctional INs and distinct information processing mechanisms.

Materials and Methods

Physiology. The leeches (adult Hirudo verbana; hermaphrodites; distributed by bbez, Biebertal, Germany) weighed 1-2 g and were kept at room temperature in ocean sea salt at 1:1000 dilution with purified water. Body-wall preparation (Fig. 1), mechanical stimulation and electrophysiological recordings were carried out as previously described in detail by Pirschel and Kretzberg 21. In total, 39 body-wall preparations were included in this study. Throughout, the directional terms 'left' and 'right' are from the experimenter's perspective 1,21 (Fig. 1). Touch locations to the left of the ventral midline (defined as 0°) were denoted as negative and to the right as positive numbers of degrees (Fig. 1). We performed intracellular recordings from mechanosensory cells and INs of the local bend network while stimulating the skin mechanically 21,22. The local bend INs were identified according to morphological and physiological properties described by Lockery and Kristan 5. We recorded from mechanosensory cells with ventral receptive fields: ventral P and T cells and lateral (polymodal) N cells 14-17 (Fig. 1). The mechanosensory cell types were identified based on their properties described in previous studies 3,10,14-18 as well as their responses to tactile stimulation. Mechanical stimulation was provided using the Dual-Mode Lever Arm System 1,10,21 (Aurora Scientific, Ontario, Canada, Model 300B; poker tip size 1 mm²) at the 3rd annulus of segment 10 (identified by the location of the sensilla 17). We present results for touch locations from −20° to +20° in 5° steps at a touch intensity of 50 mN (N cells = 5, consisting of 3 left and 2 right cells; N animals = 4; Figs 2 and 3). The stimulus intensity varied between 10 and 100 mN and was presented in low (<50 mN) and high (50-100 mN) intensity groups with a touch duration of 200 ms 1,18 (Fig. 4). All combinations of stimulus properties were presented 10 times in a pseudo-randomized order, as sketched below. Location estimation was done across five experiments with cells 157, which were stimulated at locations −20° to +20° in 5° steps with 50 mN stimulus intensity.
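A pseudo-randomized presentation order of all stimulus combinations, as described above, could be generated along these lines. This is a trivial sketch; the 50 mN location protocol is used as the example, and the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
locations = np.arange(-20, 25, 5)   # degrees relative to the ventral midline
intensities = [50]                  # mN; the location protocol uses 50 mN
n_repeats = 10

trials = [(loc, i) for loc in locations for i in intensities] * n_repeats
rng.shuffle(trials)                 # pseudo-randomized presentation order
```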
For intensity estimation, cells 157 (N cells = 7, consisting of 4 left and 3 right cells; N animals = 7) were stimulated with intensities between 10 and 50 mN at location 0°. To identify synaptic connections between INs and mechanosensory cells, intracellular recordings of the INs were obtained while mechanosensory cells were stimulated intracellularly by current pulses. The pulse strength was chosen between 1 and 2.5 nA, based on the spike thresholds of the mechanosensory cells, and lasted for 50 ms. For cell 157, we analysed 29 paired recordings with respect to their location in the ganglion (for the definition, see Lockery and Kristan 5): 5 ipsi- and 5 contralateral P cells; 3 ipsi- and 6 contralateral N cells; and 4 ipsi- and 6 contralateral T cells. For cell 159, one ipsilateral combination with each mechanosensory cell type was considered. The datasets generated and analysed during the current study are available from the corresponding author on reasonable request. Morphology. For anatomical studies, isolated ganglia of the 10th segment were used. To visualise cell morphologies and points of contact, we used the same approach as described previously in Kretzberg et al. 22. Stimulus estimation. Our method provides an insight into possible encoding strategies which may be used by the neuronal system. Following our preceding study 21, we used two different estimation approaches, a pairwise discrimination and a classification, both based on the maximum-likelihood method 45 with a leave-one-out validation 46. Basically, the maximum-likelihood method predicts the presented stimulus that most likely elicited the neuronal response. For each neuronal response, the response features amplitude, slope, response latency, and spikelet count were determined. The presented stimulus was characterised by the value of the varied stimulus property, i.e. touch location or intensity. The estimation was expected to reveal the specific response features that encoded the presented stimulus property best. To enable a fair and reliable comparison of the different response features, we used response feature classes containing ranks rather than the raw data for the stimulus estimation 21. For feature combinations, the feature ranks were combined to yield one data set 21. This rank-based approach simplified the comparison of response features having different statistical properties, and of different combinations of features 21. The leave-one-out validation was used to define test and training data: each recording trial was used once as test data, while all other trials comprised the training data set. For the training data set, it was known which stimulus condition elicited the response. Therefore, the training data set was used to determine probability distributions of response feature classes for each stimulus condition. This knowledge provided the basis to determine the stimulus condition that had the highest probability (maximum likelihood) of eliciting the response feature value observed in the test data 21. If this result, the estimated stimulus condition, matched the actual stimulation that elicited the response in the test data, the trial was counted as a correct estimation. This procedure was repeated for each recording trial, leading to a percentage of correct stimulus estimations. Based on this approach, the pairwise discrimination 1,21 allows two stimuli to be discriminated based on their neuronal responses, resulting in minimum distinguishable differences of intensities or locations.
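The rank-based, leave-one-out maximum-likelihood procedure described above can be condensed into a short sketch. This is our paraphrase in numpy, not the authors' Matlab code; the number of rank classes and the smoothing constant are assumptions.

```python
import numpy as np

def loo_ml_correct_rate(feature, stimulus, n_classes=10):
    """Leave-one-out maximum-likelihood stimulus estimation from one
    graded response feature (rank-based, cf. the description above).

    feature  : (n_trials,) feature values (e.g. EPSP integrals)
    stimulus : (n_trials,) presented stimulus condition per trial
    """
    n = len(feature)
    ranks = np.argsort(np.argsort(feature))             # rank transform
    classes = (ranks * n_classes // n).clip(max=n_classes - 1)
    conds = np.unique(stimulus)
    correct = 0
    for i in range(n):                                  # each trial once as test data
        train = np.arange(n) != i
        # per-condition probability distribution over feature classes
        probs = np.array([
            np.bincount(classes[train & (stimulus == c)], minlength=n_classes)
            for c in conds
        ], dtype=float) + 1e-9                          # avoid zero likelihoods
        probs /= probs.sum(axis=1, keepdims=True)
        estimate = conds[np.argmax(probs[:, classes[i]])]   # maximum likelihood
        correct += (estimate == stimulus[i])
    return correct / n
```

Pairwise discrimination corresponds to calling this with the trials of only two stimulus conditions (chance level 0.5); classification uses all conditions at once, with chance level 1/N.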
Results are represented as mean values with standard error of the mean (SEM) and fitted with a logistic function. The chance level of pairwise discrimination is 0.5, and the discrimination threshold is defined as 0.75, which corresponds to 75% correct estimation 1,47. The classification 21 compares the complete set of stimulus conditions and indicates how well these stimuli could be distinguished. For the estimation of the touch location, we used locations between −20° and +20°, which results in nine possible stimulus conditions and, since in our data set all stimuli were presented with equal probability, a chance level of 11.11%. The chance level for this method was defined as 100/N %, where N represents the number of stimulus conditions. Results are given in %-correct and displayed in boxplots (Fig. 6). Black dots mark the median values and box edges the 25th (q 1) and 75th (q 3) percentiles. Significance tests. Significant dependencies of response features on stimulus properties were identified with the Friedman test 48, a non-parametric version of the one-way analysis of variance (ANOVA). The Kolmogorov-Smirnov test was used to investigate significant membrane potential changes of INs in response to spikes of mechanosensory cells. To determine whether the pairwise discrimination results are significantly above the performance threshold of 75%, we applied a one-tailed t-test against 0.75. The classification results were tested with the Wilcoxon rank sum test (equivalent to a Mann-Whitney U-test) 48,49, with the null hypothesis that the two independent data sets are from identical distributions with equal medians. Tests were computed using the Matlab Statistics Toolbox (MathWorks, Natick, MA, USA). For a more detailed description see Pirschel and Kretzberg 21.
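Although the authors used the Matlab Statistics Toolbox, the same battery of tests maps one-to-one onto scipy.stats. A brief sketch with placeholder data follows; the arrays are synthetic and only illustrate the calls (the `alternative` keyword needs scipy 1.6 or later).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholder data: one response feature measured over 20 trials
# at each of three touch locations
f1, f2, f3 = rng.normal(loc=(1.0, 2.0, 3.0), scale=1.0, size=(20, 3)).T

# Friedman test: dependence of a response feature on stimulus condition
print(stats.friedmanchisquare(f1, f2, f3))

# Kolmogorov-Smirnov test: membrane potential change after mechanosensory spikes
print(stats.ks_2samp(rng.normal(0.0, 1.0, 200), rng.normal(0.5, 1.0, 200)))

# one-tailed t-test of pairwise discrimination results against the 75% threshold
discrim = rng.uniform(0.7, 0.95, 10)
print(stats.ttest_1samp(discrim, 0.75, alternative='greater'))

# Wilcoxon rank sum test (Mann-Whitney type) for classification results
print(stats.ranksums(rng.normal(0.4, 0.1, 10), rng.normal(0.3, 0.1, 10)))
```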
Real-time determination of earthquake focal mechanism via deep learning

An immediate, fully automated report of the source focal mechanism after a destructive earthquake is crucial for timely characterising the faulting geometry, evaluating the stress perturbation, and assessing the aftershock patterns. Advanced technologies such as Artificial Intelligence (AI) have been introduced to solve various problems in real-time seismology, but real-time determination of the source focal mechanism is still a challenge. Here we propose a novel deep learning method, named Focal Mechanism Network (FMNet), to address this problem. The FMNet, trained with 787,320 synthetic samples, successfully estimates the focal mechanisms of four 2019 Ridgecrest earthquakes with magnitude larger than Mw 5.4. The network learns the global waveform characteristics from theoretical data, thereby allowing extensive application of the proposed method to regions of potential seismic hazard with or without historical earthquake data. After receiving data, the network takes less than two hundred milliseconds to predict the source focal mechanism reliably on a single CPU.

3) Advantage of synthetic data
The argument that a model trained on synthetic data is better in 'scenarios without enough historical source focal mechanisms for training the neural network model, especially for those regions with limited seismicity but having the potential seismic hazards', only holds if the performance of models trained on real data generalizes poorly to regions outside of the training area. This does not seem to be the case: Hara et al. (2019) show that a model trained to estimate P-wave first motions transfers well to other regions, even without fine-tuning. In general, CNNs built for picking tend to generalize very well, as the task is relatively simple.
4) Lack of test set on synthetic data
The performance of the model is only shown for testing and validating data (Figures 2 and 3 of the Supplementary). There does not seem to be any testing set on synthetic data. It would be useful to report the model's performance on a real test set instead.
5) Diversity in the examples of real earthquakes
Given that the four examples analyzed have nearly identical focal mechanisms, it is difficult to assess whether this approach would work well in general. Specifically, many damaging earthquakes occur in subduction areas where the mechanisms are not strike-slip as those analyzed here, and where seismic stations can be farther away from the epicenter (as many earthquakes occur offshore). It is unclear whether the approach would work in such cases.
2) Figure 2: When you say that the model 'output[s] the earthquake focal mechanism directly', it would be useful to show that this output corresponds to distributions of strike, dip, and rake. This figure is not very clear.

This paper proposed an exciting approach using deep learning to determine the earthquake focal mechanism in near real-time. Using simulation-generated synthetic waveforms as the training dataset for a fixed network in a region, the authors trained a CNN model to estimate the strike, dip and rake as Gaussian distributions. The test results on the real Ridgecrest earthquakes look great. The first part of the trained model (the compression part) can be used as an encoder to compress the waveforms into a sparse representation. The authors also tested the hypothesis that similarity in the feature domain is equivalent to similarity in the data domain. This opens the door for large database queries using the compressed features.
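The phrase "estimate the strike, dip and rake as Gaussian distributions" suggests training targets built by placing a Gaussian over a discretised angle grid, and the acceptance rule quoted later (maximum predicted probability above 0.7) is then a one-liner. The sketch below is our reading, with assumed bin sizes and label width; the real FMNet discretisation is not specified here, and a production version would also need circular wrapping for strike.

```python
import numpy as np

def gaussian_label(true_angle, lo, hi, step=1.0, sigma=10.0):
    """Gaussian probability label over a discretised angle range,
    peaking at 1.0 on the true value (step and sigma are assumptions)."""
    grid = np.arange(lo, hi + step, step)
    return np.exp(-0.5 * ((grid - true_angle) / sigma) ** 2)

# one label vector per source parameter, e.g. for a strike-slip mechanism
strike = gaussian_label(320.0, 0.0, 360.0)   # note: no wrap-around handled here
dip = gaussian_label(85.0, 0.0, 90.0)
rake = gaussian_label(-10.0, -180.0, 180.0)

def success_rate(pred_strike, pred_dip, pred_rake, thresh=0.7):
    """Fraction of test samples whose predicted distributions all peak above
    `thresh` -- one reading of the paper's 0.7 acceptance rule."""
    ok = (pred_strike.max(axis=1) > thresh) \
         & (pred_dip.max(axis=1) > thresh) \
         & (pred_rake.max(axis=1) > thresh)
    return ok.mean()
```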
Overall, this paper is well written, and the proposed methods and tests show promise for future use. The key point of this paper is to use synthetics as the training data; in a sense, this encodes the geophysical knowledge into the deep learning model and then turns the geophysics problem into a pattern matching problem. By doing this, it can determine the focal mechanism faster without human intervention. It provides a nice proof of concept and applies to the real cases successfully. I only have some minor comments, and I would recommend publication of this paper after minor revision. It will be a good contribution to the community. Qingkai Kong - Berkeley Seismology Lab.

Comment 1: There are also some disadvantages of using this approach that may need to be expanded on a little in the paper, such as the network being fixed (not so flexible), the effect of the velocity model (if no good velocity model exists in a region), and the method being usable only for larger earthquakes (because of the frequency band used). I hope the authors can expand the discussion in the paper to make this clear or provide some workaround. Therefore, I suggest the authors do some quick tests to show the stability of the method if some stations are missing (making the approach a little more flexible), such as changing the number of stations. For example, if some stations have problems during normal operation, so that the recordings are not reliable, how do the results change (assuming one or two stations have no data, replacing the waveforms with zeros, etc.)? Generate some examples using a different velocity model, and monitor the changes of the errors (also a better way to quantify the mismatch of the FM).

Authors: Thank you very much for your helpful comments and suggestions. Following the comments and suggestions, we have added several tests to report the performance of our model in the revised manuscript: 1. In the first test, we generate a new test dataset of 1,000 unseen synthetic samples with a diversity of focal mechanisms (Supplementary Fig. S4) to test and report the model performance. This new test dataset is generated at a variety of random locations within the study area. We also add realistic noises from real recordings and picking errors into this new test data. After prediction on this new test dataset, we define a successful prediction only if the maximum value of the predicted Gaussian probability is larger than 0.7 for each test sample. With this threshold, 91.04% of the test samples can be successfully recalled. Also, we adopt the Kagan angle analysis (Kagan 1997; 2001), in which each Kagan angle quantitatively characterises the difference in rotation angle between the true and the predicted focal mechanisms, to evaluate the estimation errors. The Kagan angle distribution results (Supplementary Fig. S5) show that our model can successfully predict a diversity of focal mechanisms with reasonable estimation errors. 2. In the second test, we assume that two stations have recording problems and their waveform signals are missing (zero amplitudes as shown in Supplementary Fig. S13). Then we predict the focal mechanism using our network model. From the test results (Supplementary Fig. S14), we find our model can produce Gaussian probability distributions similar to the true distributions. This indicates that missing data at a few stations does not affect the prediction results very much. And we report this test in the Discussion section: "To further verify our model on the cases with outliers, we test the scenario that some of the recording stations have data issues and waveforms are missing, but the azimuthal coverage is still good (Supplementary Fig. S13). We find that the predicted probability distributions can match well with the true distribution in terms of their shape and maximum values when partial data are missing (Supplementary Fig. S14)."
3. In the third test, we generate a new test example using a different velocity model (Supplementary Fig. S7). We perturb the true velocity model by a maximum of 10 percent in each layer to generate the perturbed velocity model. From the prediction results (Supplementary Fig. S8), we find that the inaccurate velocity model will increase the estimation errors. We think this is because we train the neural network associated with a particular velocity model, so it is somewhat model-dependent. And we report this test in the Discussion section: "In the Supplementary materials, we present a numerical study using a velocity model with perturbations (Supplementary Fig. S7) [...]".

1. In the first test, we halve the available stations and put them on one side of the event (Supplementary Fig. S11). From the test results (Supplementary Fig. S12), we find that the predicted probability distributions differ from the true labels in terms of both shape and maximum values. Two secondary local peaks in strike appear and the maximum values are lower. This test shows that a poor azimuthal coverage of stations will increase the estimation errors compared to a good azimuthal coverage. This is because the azimuthal coverage, which provides the constraints on the source radiation pattern of the focal sphere, directly affects how well the focal mechanism is constrained. And we report this test in the Supplementary materials (Supplementary Fig. S11): "The event is assumed to occur in an area with training data available. From the test results, we find that both strike and dip are well resolved, but the rake angle is off by nearly 30˚, and the prediction probability of rake is significantly lower (about 0.5) (Supplementary Fig. S12). Therefore, it is important to evaluate the prediction probabilities." 2. In the second test, we design an event that occurs out of the study area (Supplementary Fig. S15). From the test result (Supplementary Fig. S16), we find that the predicted Gaussian probability distributions tend to have smaller maximum values (about 0.6) than the true distributions. Although this test shows only one example, we can use the predicted maximum probability to evaluate the reliability of the predicted results. And we report this test in the Discussion section: "We also test a case where an event occurs out of the study area (Supplementary Fig. S15). The test results show that the predicted probability is much smaller (about 0.6), which can help quantify the reliability of the predicted results (Supplementary Fig. S16)." We gratefully thank you for all these test suggestions. They greatly help improve our manuscript!
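The missing-station test described above amounts to a simple input perturbation. A minimal numpy sketch of that perturbation is given below; the array layout (samples x stations x channels x time) and the generator seed are our assumptions, not the authors' code.

```python
import numpy as np

def drop_stations(batch, n_drop=2, seed=None):
    """Zero out the waveforms of `n_drop` randomly selected stations per sample,
    mimicking the missing-data robustness test.

    batch : array of shape (n_samples, n_stations, n_channels, n_timesteps)
    """
    rng = np.random.default_rng(seed)
    out = batch.copy()
    for sample in out:
        dead = rng.choice(sample.shape[0], size=n_drop, replace=False)
        sample[dead] = 0.0              # missing recordings become flat traces
    return out

# usage: perturbed = drop_stations(test_batch, n_drop=2, seed=42)
```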
Comment 3: Maybe this is some future work: in Ridgecrest, there are many smaller earthquakes that have focal mechanisms by now (hundreds of M3.5+), while in this paper the authors only tested the 4 large events. I think it is worth testing all these smaller events as well, to see the limits of the model at different frequency bands.

Authors: Thank you very much for this insightful comment. Yes, we agree that the extension to even smaller earthquakes (M3.5+) would be very meaningful and useful, and we will consider this in our future study. We have tried to process the real waveforms (alignment, bandpass filtering, and normalization) of several smaller earthquakes (M4). But we find that the waveforms of the real data are mainly dominated by noise within the selected frequency band, and therefore the results are not promising. Increasing the frequency band would require a finer 3D velocity model. In our future efforts, we will work on smaller earthquakes (M3.5+) with a high-resolution 3D velocity model and an efficient waveform modeling tool. Following your suggestions, we have added more discussion of this in the Discussion section: "Since we use synthetics associated with a 1-D velocity model to create a dataset for training and testing, it limits the application to low-frequency data, which are generally available from moderate and large events." And, "Further development efforts are needed to combine the P-wave first motions and waveform data to handle smaller events. Generating a 3-D velocity model with great details could help model the high-frequency data as well." We shall include these efforts in our next research plans, and we gratefully thank you again for this very insightful comment!

Comment 4: When you generated the synthetics for training purposes, did you use a range of magnitude events on different grids? I cannot find this information in the paper; please specify so that the readers can see what you did.

Authors: We do not consider the magnitude information, which is eliminated in the normalization step for each data sample. To specify this information, we have added text in the Result section: "Since we normalize the waveforms of each synthetic sample based on the maximum amplitude, we choose a fixed magnitude for all events when modeling the synthetic waveforms."

Comment 5: In the paper, line 279, it says "the FMNet does not require the pre-knowledge of the location or depth of a real earthquake as long as it is within the monitoring area". But during training and testing, you did align these waveforms based on the theoretical P; therefore, I think this statement is not accurate. In real applications, how do you align these waveforms? If you use the theoretical P, then you do need the location information. I guess if you use the trigger onset instead of the theoretical P, then, because the stations are in order when you form the matrix of the input data, this automatically encodes some information about the travel time of the later phases. But please make this clear.

Authors: Yes, we use the picked onset of P-waves to align the waveforms. To make this clear, we have added this information in the Result section: "For real data, we need to take the picked onset time of each trace for carrying out the waveform alignment."

Comment 6: In preparing the training data, how did you add the realistic noise? Please make this clearer in the Methods section. And in the paper it mentions adding a random 10 s shift error: on all the waveforms, or something else?

Authors: To clarify, we have added the following information in the Result section: "The realistic noises are extracted from the real recordings at each seismic station. The random time shifts are added to each trace of the training samples to account for the picking errors."

Comment 7: Please report the training time of this particular training run and the specification of the GPU (if used); this information is usually of interest to the community.
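The preprocessing chain that recurs in Comments 3, 5 and 6 (alignment on the picked P onset, bandpass filtering, and normalization by the maximum amplitude) could look like the following sketch. The window length, pre-onset padding and corner frequencies are placeholders, not the values used for FMNet.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_trace(v, fs, p_onset_idx, pre_s=2.0, win_s=60.0, band=(0.02, 0.1)):
    """Align one trace on its picked P onset, bandpass it, and normalize it.

    v : raw waveform samples, fs : sampling rate (Hz),
    p_onset_idx : sample index of the picked P arrival.
    """
    start = max(int(p_onset_idx - pre_s * fs), 0)
    seg = v[start:start + int(win_s * fs)]          # aligned window around the pick
    b, a = butter(4, np.array(band) / (fs / 2.0), btype="bandpass")
    seg = filtfilt(b, a, seg)                       # zero-phase bandpass filter
    return seg / np.max(np.abs(seg))                # normalize by max amplitude
```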
We gratefully thank the very helpful comments and suggestions from reviewer #2. Following the comments and suggestions, we have made substantial efforts to address all the comments by adding more test results, figures, and descriptions. These comments and suggestions greatly help improve our manuscript. Also, we have prepared point-by-point responses. (-From authors)

Reviewer #2 (Remarks to the Author): The authors propose a deep learning approach for earthquake focal mechanism determination. Estimating the focal mechanism of an earthquake is of interest in order to understand its physical characteristics, in particular regarding local stress redistribution and future aftershock locations. P-wave first motion estimates by deep learning are likely to be extremely fast. Furthermore, their major advantage is that these detections are not region dependent: the same algorithm can be applied anywhere, as it is usually based on a single-station analysis. These approaches have also been found to work well on smaller events. Therefore they appear simpler and more suitable for the determination of focal mechanisms than the methodology proposed here by the authors. Given that i) [...]

Authors: We also conducted more tests and would like to further address some of the concerns. Using the P-wave first motions to invert for the source focal mechanism is a classical approach, which includes two steps: 1) the first motion estimation; and 2) the focal mechanism inversion using the estimated first motions. The first step (first motion estimation) has been greatly improved by the recent deep learning approaches. We revised the Introduction as follows: "Several recent studies first apply deep learning to estimate the P-wave first-motion polarities 9,39-41, and then apply the first motions to carry out focal mechanism inversion using programs such as HASH 42. One of such efforts leads to improved focal mechanisms in California compared to existing catalogs 9. Several seismological studies also suggest that utilizing waveform data can provide better constraints for deriving the focal mechanism than using the P-wave first-motion polarities 37,38,43. Our objective is to develop a seamless real-time solution for obtaining the focal mechanism in an automated fashion. Directly obtaining the focal mechanism of an event from waveform data with as little processing effort as possible is more appealing." We revised the Discussion as follows: "Different from the approach using the P-wave first motions, which requires sufficient azimuthal station coverage (Supplementary Fig. S9 and Fig. S10) 4-7 [...]" In addition to the differences in data contribution, please also note that our objective in this study is to develop a seamless real-time method for obtaining the source focal mechanism in an automated fashion. Conducting numerical inversions often requires fine-tuning parameters and quality control, which may be challenging in real time. On the other hand, the FMNet approach involves more effort in the training data preparation and testing phase, but is straightforward when dealing with a new event. We hope our revisions, new comparison test, and explanations can help clarify this concern.

Comment 2: Timeliness of estimates. The authors report the computation time of the focal mechanism estimates (about 200 milliseconds). This is not what matters for applications in early warning. Indeed, estimating an earthquake's focal mechanism will require that i) the waves reach the seismic stations, and ii) the data are processed. Therefore, in real scenarios, the time to get a focal mechanism estimate after the occurrence of an earthquake will be much larger, likely of the order of several tens of seconds.
This is not analyzed at all in the paper.

Authors: In addition, generating synthetics using a velocity model is not a complicated effort. For different regions and recording networks, the approach can be repeated easily using the same programs, and once the data preparation is completed, the process is straightforward for processing any new event. The Kagan angle distribution results (Supplementary Fig. S5) show that our model can successfully predict a diversity of focal mechanisms with reasonable estimation errors (mostly <10˚ and maximum of 25˚). This new test validates the generalization ability of our model on predicting a diversity of focal mechanisms and also quantitatively shows the estimation errors. 2. In the second test, we assume that two stations have recording problems and their waveform signals are missing (zero amplitudes as shown in Supplementary Fig. S13). Then we predict the focal mechanism using our network model. From the test results (Supplementary Fig. S14), we find our model can produce Gaussian probability distributions similar to the true distributions. This indicates that missing data at a few stations does not affect the prediction results very much. And we report this test in the Discussion section: "To further verify our model on the cases with outliers, we test the scenario that some of the recording stations have data issues and waveforms are missing, but the azimuthal coverage is still good (Supplementary Fig. S13). We find that the predicted probability distributions can match well with the true distribution in terms of their shape and maximum values when partial data are missing (Supplementary Fig. S14)." 3. In the third test, we design an event that occurs out of the study area (Supplementary Fig. S15). From the test result (Supplementary Fig. S16), we find that the predicted Gaussian probability distributions tend to have smaller maximum values (about 0.6) than the true distributions. Although this test shows only one example, we can use the predicted maximum probability to evaluate the reliability of the predicted results. And we report this test in the Discussion section: "We also test a case where an event occurs out of the study area (Supplementary Fig. S15). The test results show that the predicted probability is much smaller (about 0.6), which can help quantify the reliability of the predicted results (Supplementary Fig. S16)."

Reviewer #1 (Remarks to the Author): Thanks to the authors for addressing my comments and adding more tests to improve the paper. Overall, the authors answered all my comments with more tests and discussions in the paper; I only have a few follow-up comments based on this and hope the authors can answer and test.
* Regarding the answers to my comment 1: since the authors have already done the tests with dropping stations, are these dropped stations randomly selected? If yes, please specify in the paper.
* Also, the authors showed that when perturbing the velocity model, or in the out-of-network events test, the performance of the model does degrade; it is better to clearly specify these limitations in the discussion instead of just listing the results, unless this can be addressed.
* Based on the answer to my comment 4, the authors used a fixed magnitude for generating the training samples, which may introduce problems. Though the authors normalized the waveforms, which reduces the effect of amplitude, there are more factors that change when the magnitude varies, such as the duration of the waveform and the SNR, for example.
For a waveform-based method, instead of only using the first motion polarity, I think this will have an effect; it is better to study this well. Especially since the real test results are only shown for a few big events, it is hard to evaluate these aspects. My concern is that the trained model is only tuned to estimate the results well on a very limited range of events, but in reality you have events of various magnitudes, which makes the simulation more complicated.
* Regarding the application of the model, I do agree with the other reviewer that the model is currently still limited, i.e., it depends on the region and the network, and only works on large earthquakes. I hope the authors can continue to make improvements to the model in the future.

Reviewer #2 (Remarks to the Author): Many thanks to the authors for their comments, and detailed additions that helped a lot to improve and clarify the manuscript. The modifications are very precise and well explained, with several additional paragraphs and figures in the Supplementary to illustrate and quantify the tests conducted. However, I have some concerns regarding the results in testing. The previous version of the manuscript did not include any measure of model performance, and I am puzzled by those now added in the test analysis. In what follows, comments from the authors are in brackets, and responses are not.

Comment 1: Comparison with P wave first motion estimates. « We revised the Introduction as follows [...] » Thank you very much for the added literature references, which are useful to put the proposed methodology in perspective compared to other existing methods. The paragraph added to the introduction conducts a detailed analysis of the literature relative to automated P wave first motion estimates, and the description of waveform-based versus first motion estimates of focal mechanisms is also very helpful in the context of this paper. « We revised the Discussion as follows [...] » Many thanks for the added paragraphs. « We have also designed a new test to compare the performance between the P-wave first motion method and the FMNet method. » Many thanks for the additional tests, which are of great interest in order to emphasize the strength of the proposed approach. Indeed, the algorithm presented here seems to outperform first-arrival methods (with lower estimation errors), and to perform better when the number of stations is low. Something I'm curious about is the relative computation time of both approaches. Even in the absence of fine-tuning, the proposed methodology is likely to be much faster; do you have an overall idea of the speed difference? Is the fine-tuning a time-consuming exercise? Overall I find these tests very convincing in highlighting the advantages of the algorithm as a real-time estimator of the earthquake focal mechanism. Computing the Kagan angles on the test set led to quite different results than those provided in Figure S5. In particular, while Figure S5 is cut at 25 degrees, there appear to be heavy tails in the distribution (Figure attached). More than 10% of the estimates have an error larger than 20 degrees, which seems high for a model trained and tested on synthetic data. The fraction of errors above 50 or 60 degrees is also far from negligible. Therefore, while the model is probably faster than existing methods, I'm not sure that one could argue that it outperforms them in terms of precision. Looking at a few individual examples, it is likely that a symmetry issue is impacting the estimations of the neural network.

We gratefully thank the comments and suggestions from reviewer #1.
Following the comments and suggestions, we have made revisions to address all the comments. We have also prepared point-by-point responses. (From the authors) Reviewer #1 (Remarks to the Author): Thanks to the authors for addressing my comments and adding more tests to improve the paper. Overall, the authors answered all my comments with additional tests and discussion in the paper; I only have a few follow-up comments and hope the authors can answer and test them. Comment 1: Regarding the answers to my comment 1: since the authors have already done the tests with dropped stations, were these dropped stations randomly selected? If yes, please specify this in the paper. Authors: Yes, these dropped stations are randomly selected. We have added this information in Supplementary Materials Section 8: "In such a case, we randomly select two recording stations and replace the waveforms with zeroes". Comment 2: Also, the authors showed that the performance of the model does degrade when the velocity model is perturbed or for out-of-network events; it would be better to clearly specify these limitations in the Discussion instead of just listing the results, unless this can be addressed. Authors: Thank you very much for your comment. We have revised the Discussion to specify these limitations: "From these test results, we find that an inaccurate velocity model, poor azimuthal coverage, or events outside the network might degrade the prediction performance and lower the predicted probability. Therefore, using the predicted probability to quantify the reliability of the predicted result is essential." Comment 3: Based on the answer to my comment 4, the authors used a fixed magnitude for generating the training samples, which may introduce problems. Although the authors normalized the waveforms, which reduces the effect of amplitude, more factors change when the magnitude varies, such as the duration of the waveform and the SNR. For a waveform-based method, as opposed to one that uses only the first-motion polarity, I think this will have an effect, and it would be better to study it thoroughly. Especially since the real-data results are shown for only a few large events, it is hard to evaluate these aspects. My concern is that the trained model is tuned to perform well only on a very limited range of events, whereas in reality events of various magnitudes make the situation more complicated. Authors: Thank you very much for your comment. We agree that earthquakes with different magnitudes will have different source durations and might present different SNRs. When preparing the training data, we considered these two factors to mitigate their effect. "The current FMNet is designed for monitoring local or regional events within the coverage of a seismic network. Similar to the state-of-the-art methodology for resolving source focal mechanisms by applying moment tensor inversion, FMNet is limited to moderate and large earthquakes that can be numerically modeled. Developing the capability to simulate waveforms of small earthquakes at high frequency warrants further study." We gratefully thank reviewer #2 for the very helpful comments and suggestions. Following the comments and suggestions, we have made substantial efforts to address all the comments, which have greatly helped improve our manuscript. We have also prepared point-by-point responses. In what follows, comments from the authors are in brackets, and responses are not. ('It is challenging to model the high-frequency theoretical waveforms with a simple 1D velocity model'.)
Adding a small caveat on the use of synthetic data for training the model would be useful. Authors: We agree with you on the limitations here for small earthquakes, and we have specified these limitations in the manuscript. This is also a limitation of all existing methods in current seismology that adopt waveform matching with a simplified 1D velocity model for source focal mechanism inversion. We will need to further develop the modeling capability for small earthquakes. Modeling the high-frequency theoretical waveforms will require an accurate 3D velocity model and an efficient modeling tool with tremendous computational effort (such as Wang and Zhan, 2020). To ease this concern, we specify the current limitations of the model in the Discussion. Second, we agree that the use of the recall score for this regression problem may not be appropriate. Therefore, following your suggestion, we have omitted the use of the recall score in the revised manuscript. No evaluation of the model was provided in the original manuscript. Since the evaluation of the model presented in the new Supplementary paragraphs was strange, I took a look at the code and re-ran it on the test set provided. Computing the Kagan angles on the test set led to quite different results from those provided in Figure S5. In particular, while Figure S5 is cut at 25 degrees, there appear to be heavy tails in the distribution (Figure attached). More than 10% of the estimates have an error larger than 20 degrees, which seems high for a model trained and tested on synthetic data. The fraction of errors above 50 or 60 degrees is also far from negligible. Therefore, while the model is probably faster than existing methods, I am not sure that one could argue that it outperforms them in terms of precision. Looking at a few individual examples, it is likely that a symmetry issue is impacting the estimations of the neural network. Figure provided by Reviewer #2. Authors: Thank you very much for testing our codes. Following your comments, we have carefully investigated this test, and we would like to clarify the following: 1. Please kindly follow the detailed steps specified in the "README" file when you run the codes. Figure R2. Test performance of the improved FMNet model. 3. Since the improved FMNet model shows improved test performance, to ensure the consistency of our manuscript, we have also taken this opportunity to re-examine the improved FMNet model on both the real-data application and the other tests. Using the improved FMNet model, we redo both the real-data application and the other tests (Supplementary Fig. S4): i. "From the testing results (Supplementary Fig. S8), we can tell that the estimation errors for dip and rake are about 8˚ and 20˚, respectively" (revised from 12˚ and 30˚). ii. "From the test results, we find that both the strike and dip angles are well resolved, but the rake angle is off by nearly 20˚ (revised from 30˚), and the prediction probability of rake is significantly lower (about 0.5) (Supplementary Fig. S12)." To briefly summarize, following your comments, we have corrected the mistake in the Kagan angle calculation and improved the FMNet model. Using the improved FMNet model, we also re-examined the real-data application and the other tests to ensure the consistency of our manuscript. Lastly, we gratefully thank you for all your comments. These comments have greatly improved the strength and completeness of our manuscript, and we hope our responses and revisions address your comments and concerns.
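For context, the Kagan angle discussed in this exchange is the smallest rotation relating the principal-axis frames of two double-couple mechanisms, minimized over the four symmetry-equivalent orientations of a double couple; missing that minimization is one plausible source of a heavy tail of large apparent errors, since a mechanism and its auxiliary-plane representation would then be scored as a large rotation. The sketch below is a minimal illustration, not the authors' released code: axis conventions follow Aki and Richards, the symmetry handling follows Kagan (1991), and the function names are illustrative.

```python
import numpy as np

def dc_frame(strike, dip, rake):
    """Orthonormal frame (T, P, B axes) of a double couple; angles in degrees."""
    phi, delta, lam = np.radians([strike, dip, rake])
    # Fault normal and slip vector, Aki & Richards convention (x=N, y=E, z=down).
    n = np.array([-np.sin(delta) * np.sin(phi),
                   np.sin(delta) * np.cos(phi),
                  -np.cos(delta)])
    d = np.array([np.cos(lam) * np.cos(phi) + np.cos(delta) * np.sin(lam) * np.sin(phi),
                  np.cos(lam) * np.sin(phi) - np.cos(delta) * np.sin(lam) * np.cos(phi),
                 -np.sin(lam) * np.sin(delta)])
    t, p, b = (n + d) / np.sqrt(2.0), (n - d) / np.sqrt(2.0), np.cross(n, d)
    return np.column_stack([t, p, b])

def kagan_angle(mech1, mech2):
    """Minimum rotation angle (degrees) between two (strike, dip, rake) mechanisms."""
    m1, m2 = dc_frame(*mech1), dc_frame(*mech2)
    best = 180.0
    # A double couple is invariant under 180-degree rotations about each of its
    # principal axes, so all four equivalent orientations must be tested.
    for signs in ([1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]):
        rot = m2 @ np.diag(signs) @ m1.T
        cos_theta = np.clip((np.trace(rot) - 1.0) / 2.0, -1.0, 1.0)
        best = min(best, np.degrees(np.arccos(cos_theta)))
    return best

print(kagan_angle((40.0, 50.0, 60.0), (40.0, 50.0, 60.0)))  # identical -> 0.0
print(kagan_angle((40.0, 50.0, 60.0), (45.0, 55.0, 70.0)))  # two nearby mechanisms
```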
Authors: We are glad that you are happy with our previous revisions.
7,165.2
2020-09-25T00:00:00.000
[ "Geology", "Computer Science" ]
Safety, Tolerability, and Immunogenicity of V160, a Conditionally Replication-Defective Cytomegalovirus Vaccine, in Healthy Japanese Men in a Randomized, Controlled Phase 1 Study Cytomegalovirus (CMV) infection can cause newborn morbidity and mortality; no pharmacological method of reducing CMV infection during pregnancy is currently available. In a phase 1 study in the United States, V160, a conditionally replication-defective CMV vaccine, was immunogenic and well tolerated. This placebo-controlled study (NCT03840174) investigated the safety and immunogenicity of a three-dose V160 regimen administered over six months. A total of 18 healthy adult Japanese males (9 seronegative and 9 seropositive) were enrolled at a single center and randomized 2:1 to intramuscular V160 or placebo. V160 induced high CMV-specific neutralizing antibody (NAb) titers (50% neutralization titer [NT50], 3651; 95% confidence interval [CI], 1688–7895) in the CMV-seronegative per-protocol immunogenicity (PPI) population one month after the third vaccine dose was administered, compared with no change in the placebo arm (NT50, <94; 95% CI, <94–115). The geometric mean titer ratio in the seronegative population versus baseline was 77.7 (95% CI, 23.9–252.4). CMV NAb titers in the CMV-seropositive PPI population were similar to baseline NAb titers observed in the CMV-seropositive population. V160 was well tolerated, and no vaccine viral DNA shedding was observed. In conclusion, the immunogenicity and safety profile of V160 in Japanese participants was consistent with that in other populations. Introduction Cytomegalovirus (CMV) infection is the most frequent cause of newborn malformation in developed countries, resulting in hearing loss, neurological deficits, and developmental delays in up to 20% of infants with congenital infection [1,2]. Approximately 300,000 babies born in Japan each year are at risk of congenital CMV infection [3], and around 1 in 1000 live births in Japan results in congenital CMV infection-related disability, which is similar to the incidence of Down syndrome [3]. Accordingly, the annual economic burden of congenital CMV infection in Japan was estimated to be JPY 27.6 billion in 2019, predominantly due to the social costs associated with congenital infection [4]. Approximately 83% of people worldwide express anti-CMV immunoglobulin G (CMV seropositivity), indicating past infection [5]. In Japan, the reported prevalence of CMV positivity is lower, particularly in people of child-bearing age, at approximately 58% among individuals in their 20s, increasing to approximately 73% among individuals in their 30s [6]. This means that many Japanese women of childbearing age are at risk of CMV infection, especially because adults are most often infected after being exposed to the virus when caring for infected children who are excreting CMV in their urine, saliva, or other secretions [7,8]. CMV can also be transmitted via blood transfusion, breast milk, sexual intercourse, and transplanted organs [2,9]. In addition, CMV infection is generally asymptomatic in healthy individuals [8], so when symptoms of CMV infection occur, such as fever, sore throat, fatigue, and/or swollen glands, they are often mild and easily mistaken for other infectious illnesses [10]. Therefore, maternal CMV infection is infrequently recognized [9]. Furthermore, CMV establishes lifelong latency and may reactivate after infection [8].
Preventive measures, such as educating pregnant women about the risk of CMV infection and using appropriate hygiene measures, are relied on to reduce the risk of infection. Still, overall awareness of CMV among expectant mothers in Japan is low [9,11]. There is currently no effective pharmacological method of reducing the risk of infection during pregnancy, or any recognized intervention that can effectively reduce transmission of CMV from a newly infected pregnant woman to her fetus [9]. The maternal adaptive immune response can be effective in reducing the risk of congenital CMV infection, but women who develop primary CMV infection during pregnancy are at particular risk of placental CMV transmission [2]. However, the high frequency of CMV re-infection with a different strain not recognized by the immune system, or reactivation, means that most cases of congenital CMV infection occur in mothers who are seropositive [2]. Therefore, both CMV-seronegative and -seropositive individuals are considered to be candidates for a CMV vaccine [2]. A CMV vaccination is considered to be a high public health priority, but early attempts to develop a vaccine have failed to achieve sufficient immunogenicity, particularly in women of childbearing age [7]. V160 is a vaccine that comprises a replication-defective CMV that effectively induces neutralizing antibodies and a T-cell-mediated response against wild-type CMV [12]. In particular, V160 expresses the CMV pentameric complex necessary to elicit potent neutralizing antibody (NAb) titers [7]. In a phase 1 study conducted in the United States (U.S.), V160 was generally well tolerated with no serious adverse events (SAEs) observed, and only transient injection site reactions were reported that were mild-to-moderate in severity [13]. NAb titers and T cell responses induced by V160 vaccination in CMV-seronegative individuals were consistent with those observed with natural infection and maintained for at least 18 months [13]. Vaccination with V160 has also been found to be effective against a number of genetically distinct clinical CMV isolates and to protect against viral infection of several different types of human cells in vitro [14]. This study aimed to investigate the immunogenicity and safety of the V160 CMV vaccine, including assessing post-vaccination plasma virus levels, viral DNA shedding, and leakage from the injection site, in a healthy Japanese population of both CMV-seronegative and -seropositive individuals. Study Design A phase 1, randomized, double-blind, placebo-controlled, single-center safety and immunogenicity study was conducted in healthy adult Japanese males (V160-003-00). This study evaluated the safety and immunogenicity of a 3-dose regimen of V160 human CMV vaccine (100 units with aluminum phosphate adjuvant [225 µg] per 0.5 mL dose) administered intramuscularly (IM) over 6 months. This dosage was selected based on the results from a previous phase 1 study carried out in the U.S. The study was performed in compliance with the International Conference on Harmonisation (ICH) guidelines, Good Clinical Practice guidelines, local Japanese regulations, and in line with the principles of the Declaration of Helsinki. Approval for type 1 use of a living modified organism was obtained under the Cartagena law prior to initiating the study. Institutional review board approval was also obtained prior to initiating the study. All participants provided written informed consent prior to enrollment. 
The study was prospectively registered on ClinicalTrials.gov (study identifier: NCT03840174) prior to enrolling the first participant. Study Population The study enrolled Japanese males who were 20-64 years of age and judged by the investigators to be healthy after a medical history was obtained and a physical examination performed. Each participant was serologically confirmed to be CMV-seropositive or CMV-seronegative at visit 1 (screening visit; within 21 days prior to vaccination) and agreed to remain abstinent or use contraception for the duration of the study. Participants were ineligible to participate in the study if they had a history of any allergic reaction to any vaccine component; had a recent (<72 h prior to receipt of study intervention) history of febrile illness (oral temperature ≥38 °C or equivalent); were immunocompromised or had been diagnosed as having an immunodeficiency, hematological malignancy, or other autoimmune disease that required immunosuppressive medication; had a condition in which repeated venipuncture or injections posed more than minimal risk for the participant; had a major psychiatric illness; had previously received any CMV vaccine; had any live virus vaccine administered or scheduled to be administered within ±4 weeks of receipt of study intervention; had any inactivated vaccine administered or scheduled within ±14 days of study intervention; had received any immunosuppressive therapy; or had received any antiviral agent (e.g., letermovir, ganciclovir, valganciclovir, foscarnet, or valacyclovir) with proven or potential activity against CMV within 14 days prior to vaccination, or was likely to receive such an agent within 14 days after vaccination. Study Procedures Participants were randomized in a 2:1 ratio, with stratification by CMV serostatus (seropositive vs. seronegative), to receive 3 IM injections of V160 or placebo (saline solution) (Figure 1). CMV seropositivity was determined by assessing serum CMV immunoglobulin G levels by enzyme immunoassay at visit 1. Study vaccinations were administered intramuscularly on day 1, at month 2, and at month 6.
Study Assessments Serum samples were collected from all participants at visit 2 (day 1, prior to the first vaccination) and visit 8 (month 7, 1 month after the third vaccination) to assess NAb titers. Functional antibodies were measured by an in vitro viral NAb assay assessing the ability of vaccine-induced immune sera to inhibit the infection of ARPE-19 cells by the AD169rev strain of CMV expressing a green fluorescent protein (GFP) reporter [15]. NAbs present in test serum prevent the entry of CMV into target cells and the subsequent expression of the GFP reporter. Serum samples were serially diluted and mixed with an epithelial cell-tropic CMV before being added to cells, which were fixed after 48 h of incubation with the serum/virus mixture and subsequently scanned using an EnSight imager (PerkinElmer Inc., Waltham, MA, USA). Neutralizing activity is presented as the interpolated dilution corresponding to the 50% point between the maximum (median of the no-serum control wells) and the minimum (median of the no-virus control wells) signals. The lower limit of quantitation for the reciprocal dilution of serum required to inhibit viral infection by 50% (NT50) was 94; lower values are reported as NT50 <94.
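To make the interpolation concrete, a minimal sketch is given below. This is an illustrative reconstruction, not the study's validated assay code; the dilution series, signal values, and function name are all hypothetical.

```python
import numpy as np

def nt50(dilutions, signals, no_serum_median, no_virus_median):
    """Reciprocal serum dilution at which infection is reduced to 50%."""
    # Scale each well between the no-virus (0%) and no-serum (100%) control medians.
    pct = (np.asarray(signals, float) - no_virus_median) \
          / (no_serum_median - no_virus_median) * 100.0
    dil = np.asarray(dilutions, float)
    order = np.argsort(dil)
    dil, pct = dil[order], pct[order]
    # Infection rises toward 100% as the serum is diluted out; find the pair of
    # neighboring dilutions bracketing the 50% crossing and interpolate in
    # log-dilution space.
    for i in range(len(dil) - 1):
        if pct[i] <= 50.0 <= pct[i + 1]:
            f = (50.0 - pct[i]) / (pct[i + 1] - pct[i])
            return 10 ** (np.log10(dil[i]) + f * np.log10(dil[i + 1] / dil[i]))
    return None  # no crossing found; titers below the LLOQ are reported as NT50 < 94

# Hypothetical 2-fold series starting at the 1:94 lower limit of quantitation.
print(nt50([94, 188, 376, 752, 1504], [2, 10, 35, 70, 95], 100.0, 0.0))
```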
Viral DNA was extracted from plasma, urine, saliva, injection site swab, and adhesive tape swab samples and assayed for the presence of CMV (including V160) by a polymerase chain reaction (PCR) assay to evaluate viral detection in plasma, viral DNA shedding, and injection site leakage. Plasma samples were collected prior to vaccination, at 0 min (immediately following vaccination) and 3 h after the first vaccination on day 1, and on days 3, 7, and 14. Saliva and urine samples were collected prior to each vaccination, on days 3, 7, and 14 after the first vaccination, and 1 month after the third vaccination. Injection site swab samples were collected from the injection site at 0 min, 10 min, 20 min, and 30 min after vaccination on day 1. Adhesive tape was placed over the injection site after the injection site swab sample was taken and the area was wiped with alcohol swabs, and the tape was replaced at 10 min intervals up to 30 min after vaccination on day 1. Swab samples were taken from both the inside and the outside of the used tape at 10 min, 20 min, and 30 min after vaccination on day 1. If CMV DNA was detected in plasma, urine, or saliva samples, V160 vaccine virus DNA and non-vaccine virus DNA were distinguished using a separate PCR assay. All participants were observed for 30 min after each vaccination for any immediate reactions. A vaccine report card was used to document solicited injection site adverse events (AEs) occurring on days 1-5 following dosing, oral temperature, solicited systemic AEs, and concomitant medications; in addition, any other injection site or systemic AEs were collected on days 1-14 after each vaccination dose. Telephone contact was made to remind participants to complete their vaccination report card 14 days after the second and third vaccinations. Heart rate, respiratory rate, blood pressure, and oral temperature were assessed at screening, day 1, month 2, month 6, and month 7. Study Endpoints The primary endpoints were solicited injection site reactions on days 1-5 after each vaccination visit and solicited systemic AEs and vaccine-related SAEs on days 1-14 after each vaccination visit. Solicited injection site reactions included pain/tenderness, erythema/redness, and swelling. Solicited systemic AEs included headache, tiredness, muscle pain, and joint pain. Any oral temperature ≥38.0 °C (or equivalent) on days 1-14 following vaccination was considered an AE (fever). Secondary endpoints included CMV-specific NAb titer and detection of V160 viral DNA in plasma, urine, saliva, injection site swabs, and adhesive tape swabs. The safety and tolerability of the V160 vaccine were also assessed. AEs were recorded using the Medical Dictionary for Regulatory Activities (MedDRA) version 22.1. Statistics Participants were considered to have completed the study if they received all 3 doses of the study vaccine at the time points specified in the study protocol and completed the month 7 study visit. The primary immunogenicity analyses were based on the per-protocol immunogenicity (PPI) population, which comprised randomized participants who received all three vaccinations within the vaccination visit window specified in the protocol and had not deviated from the protocol in ways that could affect the immune response to vaccination. Supportive immunogenicity analyses were conducted using the full analysis set (FAS) population, which consisted of all randomized participants who received ≥1 vaccination and had ≥1 post-randomization evaluable serology result. Safety analyses were performed on the all-participants-as-treated (APaT) population, which included all randomized participants who received ≥1 vaccination. CMV-specific NAb geometric mean titers (GMTs) 1 month after the third vaccination were analyzed for CMV-seronegative participants and CMV-seropositive participants using an analysis of variance model. NAb GMTs were log transformed prior to analysis, and the treatment difference and its 95% confidence interval (CI) were estimated. The estimate of the treatment difference and the corresponding 95% CI were then back-transformed to determine the GMT ratio and its 95% CI. Analyses were performed using observed data only. With a sample size of 9 CMV-seronegative and 9 CMV-seropositive participants and 2:1 randomization, the study provided 90% confidence that, for any specific AE not observed in the study, the true incidence among vaccinated participants in each serostatus category was ≤46%.
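To make the back-transformation concrete, a minimal sketch of a GMT ratio computation is shown below, using a pooled-variance two-group comparison in the spirit of the analysis of variance model described above. The titer values are invented for illustration and are not study data.

```python
import numpy as np
from scipy import stats

v160 = np.array([2400.0, 5100.0, 3300.0, 4800.0])   # hypothetical NT50 titers
placebo = np.array([94.0, 94.0, 115.0])             # at or near the <94 limit

log_v, log_p = np.log(v160), np.log(placebo)
gmt_v, gmt_p = np.exp(log_v.mean()), np.exp(log_p.mean())

# Difference of log-means with a pooled variance, as in a two-group ANOVA.
df = len(v160) + len(placebo) - 2
sp2 = ((len(v160) - 1) * log_v.var(ddof=1)
       + (len(placebo) - 1) * log_p.var(ddof=1)) / df
diff = log_v.mean() - log_p.mean()
se = np.sqrt(sp2 * (1.0 / len(v160) + 1.0 / len(placebo)))
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df) * se

# Back-transform the log-scale difference to a GMT ratio with its 95% CI.
print(f"GMTs: {gmt_v:.0f} (V160) vs {gmt_p:.0f} (placebo)")
print(f"GMT ratio: {np.exp(diff):.1f} (95% CI, {np.exp(ci[0]):.1f}-{np.exp(ci[1]):.1f})")
```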
Results In total, 18 healthy Japanese males were enrolled at a single center. Nine participants were CMV-seronegative and nine were CMV-seropositive. Six CMV-seronegative and six CMV-seropositive participants were randomly assigned to V160 and the remaining six participants to placebo. The first participant had their first study visit on 8 March 2019, and the last study visit for the last participant occurred on 7 November 2019. The mean (standard deviation) age of the study population was 36.4 (14.3; range, 20-63) years. Seventeen participants received all three vaccine doses. One participant who was CMV-seronegative and randomized to V160 withdrew from the study and was excluded from the FAS population because no post-randomization evaluable serology result was available. Two participants in the V160 CMV-seronegative group (including the participant who discontinued) were excluded from the PPI population because they did not receive all three vaccinations within the vaccination visit window specified in the protocol (Figure S1). CMV-Specific NAb Titers 1 Month after Dose 3 V160 induced high CMV-specific NAb titers (NT50, 3651; 95% CI, 1688-7895) in the CMV-seronegative PPI population 1 month after the third vaccine dose was administered, compared with no change in the placebo arm (NT50, <94; 95% CI, <94-115) (Table 1). The GMT ratio was 77.7 (95% CI, 23.9-252.4). Similar trends were observed in the FAS population. In the CMV-seropositive PPI population, CMV NAb titers 1 month after the third vaccination were similar between the vaccine and placebo groups (Table 1). The CMV-specific NAb geometric mean fold rise (GMFR) at 1 month after the third vaccination was 1.8 in the V160 group and 1.2 with placebo, but this was not considered to represent a clinically relevant difference. Viral Detection in Plasma CMV viral DNA was detected in plasma on day 3 after the first dose in all participants randomized to V160 (n = 12, 100%). Although V160 viral DNA was detected in all seronegative participants, V160 viral DNA was observed in only 3 (50.0%) seropositive participants (Table S1). For the remaining 3 seropositive participants, wild-type CMV was detected on day 3 in one participant; a discriminatory assay could not be performed for the other 2 seropositive participants because of low sample viral loads. CMV viral DNA was not detected in plasma at any other timepoint, except for the immediate assessment (0 min after vaccination) on day 1 in 1 CMV-seronegative participant administered V160.
Viral DNA Shedding V160 viral DNA was not detected in the urine or saliva of any participant, although non-V160 CMV DNA was observed in saliva samples from 2 (33.3%) CMV-seropositive participants and in a urine sample from 1 (16.7%) CMV-seropositive participant administered V160 during the study. These events were observed on day 14 after the first V160 dose, at month 2 (prior to second dose administration), and at month 7 (after third dose administration). All events were determined to be wild-type virus (i.e., considered most likely to be reactivation of natural infection and unrelated to V160 administration). No viral DNA shedding was observed in any CMV-seronegative participants administered V160 or in any participants administered placebo. Viral Leakage Injection site swabs were positive for viral leakage for all participants immediately after administration of V160 (0 min), but by 30 min after the first injection, viral leakage was observed for only 4/6 (66.7%) seropositive and 3/6 (50.0%) seronegative participants administered the V160 vaccine. In addition, viral DNA was detected on swabs from the inside of adhesive tape samples for 2 (33.3%) and 3 (50.0%) seropositive participants at 10 min and 20 min after V160 vaccination, respectively, and for 1 (16.7%) seronegative participant at 10 min. Viral DNA was not detected on swabs from the inside of the adhesive tape at 30 min or on any swabs taken from the outside of the adhesive tape. Safety At least one AE (solicited or unsolicited) was reported by 5 (83.3%) and 6 (100%) V160 recipients in the seropositive and seronegative populations, respectively, compared with 2 (66.7%) and 1 (33.3%) seropositive and seronegative participants administered placebo, respectively (Table 2). A higher number of injection site AEs was observed among participants administered V160 than among those administered placebo, in both CMV-seropositive and CMV-seronegative participants, but the overall incidence of non-injection site AEs was similar. All injection site pain events were mild in intensity. All injection site erythema and swelling events were <2.5 cm in diameter. The only systemic AE observed in more than one seronegative participant administered V160 was fatigue (Table 2). All systemic AEs were mild in intensity. No participant experienced fever (oral temperature ≥38.0 °C) during the 14-day post-vaccination period. No AEs leading to discontinuation and no SAEs were reported. No participants died during the study. Discussion A vaccine against CMV infection has the potential to fill an urgent public health need by reducing the risk of congenital CMV infection [7]. However, attempts to develop anti-CMV vaccines have often resulted in suboptimal titers of NAbs and have demonstrated only modest immunogenicity against CMV infection in CMV-seronegative women [7]. In this study, the V160 vaccine was found to be generally well tolerated in healthy Japanese males and effective in inducing NAb titers in CMV-seronegative participants. It is notable that the outcome of this study is consistent with the results reported in a larger phase 1 study conducted in the U.S. that enrolled both males and females, supporting the generalizability of outcomes from phase 1 studies of V160 [13]. Furthermore, the Japanese participants in this study were younger (mean age, 36 vs. 44 years), offering a study population that was more representative of people in an age range where parenthood may be expected [13].
This is particularly important given the higher risk of congenital CMV in younger people in Japan and because ethnic differences in the risk of congenital CMV infection have been observed in the past [7]. The NAb titer achieved in CMV-seronegative participants using a V160 dose comprising 100 units and aluminum phosphate adjuvant was also consistent with previous observations [13,16]. However, the vaccine administered in this study was lyophilized, whereas the U.S. study used a frozen preparation, and the assay used to make the assessments in this study has key differences. In particular, the NAb assay used in the earlier phase 1 study utilized a near-infrared dye-tagged immunostaining reagent to detect immediate early proteins expressed in CMV-infected ARPE-19 cells, whereas this study applied a NAb assay that utilized a GFP reporter. The current assay may therefore return lower NAb values than the earlier assay (unpublished data). The safety profile in this study was also consistent with previous reports. Injection site pain was the most commonly reported injection site reaction, with swelling and erythema reported in a minority of participants [13]. Headache, fatigue, myalgia, and arthralgia were also reported in this study, but at much lower rates than in the U.S.-based study [13]. Furthermore, the absence of V160 viral DNA in urine and saliva samples from all participants, apart from transient detection in plasma, also confirms the replication-defective design of the V160 vaccine, which is consistent with previous observations [13]. Leakage around the injection site, as observed in this study, may be expected following the administration of a vaccine by injection, but this study indicates that V160 is unlikely to be excreted into the environment following injection in standard clinical practice. Post-vaccination NAb levels in seronegative participants in this study were consistent with baseline NAb levels among CMV-seropositive individuals in a previous study in the U.S., but were below the mean baseline levels observed in CMV-seropositive participants [13]. However, the results in seronegative participants are consistent with observations in a similar population enrolled in a phase 2b study in the U.S. [16]. In the absence of an immunologic correlate of protection for the prevention of maternal-fetal CMV transmission, natural immunity offers a reasonable benchmark for evaluating the efficacy of V160, because immunity to CMV, and an early response to primary CMV infection, can protect against maternal-fetal CMV transmission [7]. The ability to vaccinate women of childbearing age who are CMV-seronegative offers an important intervention for reducing the risk of congenital CMV infection, especially as many women with a primary CMV infection may not be correctly diagnosed and the risk of congenital CMV-related disability is greatest when primary infection occurs during the first trimester of pregnancy [9]. In particular, given that prior natural infection decreases the risk of congenital CMV infection by approximately 70%, vaccination against CMV infection may be expected to substantially reduce the burden of congenital CMV infection [9]. However, prior attempts to administer CMV vaccines to CMV-seronegative women failed to prevent infection upon exposure in a daycare setting, even though vaccination demonstrated efficacy in preventing serious disease in transplant patients [7].
The pentameric complex of proteins present on the surface of CMV particles in the V160 vaccine is a key feature, because a rapid response to this complex has been linked to protection against placental transmission in pregnant women [7]. A vaccine also offers a useful alternative to other methods of preventing maternal CMV infection that have been investigated but have failed to demonstrate sufficient efficacy to justify further development, such as conferring passive immunity using immune globulin [9]. This study is limited by its small sample size and short duration. However, a previous study suggests that the antibody response to V160 is durable [13]. Further information about the immune response, such as the T cell responses induced by the vaccine, would also be valuable. In addition, the Japanese study population may limit generalizability, although this study is largely complementary to an earlier phase 1 study performed in the U.S. [13]. Conclusions In conclusion, the immunogenicity and safety profile of the V160 vaccine in this study is generally consistent with the profiles observed in other populations. Further clinical investigation of V160 for the prevention of CMV infection is required to understand whether vaccination can prevent maternal-fetal CMV transmission and congenital CMV infection. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antib12010022/s1. Figure S1: CONSORT flow diagram. Table S1: Participants with any positive viral detection in plasma by CMV serostatus and study visit after dose 1 (A) and by virus type (B). Author Contributions: S.M., T.I., Y.F. and M.S. substantially contributed to the conception, design, or planning of the study; Y.F. substantially contributed to acquisition of the data; N.O. substantially contributed to analysis of the data. S.M., T.I., Y.F. and M.S. substantially contributed to the interpretation of the results. All authors critically reviewed or revised the manuscript for important intellectual content. All authors have read and agreed to the published version of the manuscript.
6,153
2023-03-01T00:00:00.000
[ "Medicine", "Biology" ]
Classification of grape seed residues from distillation industries in Europe according to the polyphenol composition highlights the influence of variety, geographical origin and color Grape seed residues represent the raw material to produce several value-added products, including polyphenol-rich extracts with nutritional and health attributes. Although the impact of variety and environmental conditions on the polyphenol composition of fresh berries is recognized, no data are available regarding grape seed residues. The chemical composition of grape seed residues from wine distilleries in France, Spain and Italy was characterized by mass spectrometry. Forty-two metabolites were identified, belonging to non-galloylated and galloylated procyanidins as well as amino acids. Polyphenol concentrations in the red varieties originating from Champagne or Veneto were twice as high as those in white varieties from the Loire Valley. The chemical profiles of grape seed residues were mainly classified according to variety color, with galloylated procyanidins as biomarkers of white varieties and non-galloylated procyanidins as biomarkers of red ones. The present approach might assist the selection of grape seed residues as quality raw materials for the production of polyphenol-rich extracts. Introduction Grape (Vitis vinifera L.) is one of the most valuable fruit crops in the world. In 2020, the global production of fresh grapes reached 78 million tons, with about 80% used by the winemaking industry (FAOSTAT, 2023). Grape pomace, the most abundant by-product of the wine industry, is produced after pressing and fermentation and consists of stalks, grape seeds and skins. In the western European countries Spain, France and Italy, where viticulture is an important agricultural activity, the production of grape pomace can reach 800,000 to 1,000,000 tons per year (OIV, 2018). The distilleries ensure the removal and processing of grape pomace within a wine-producing region, and a single company can process up to 90,000 t per year. After distillation of grape pomace for alcohol production, the corresponding residues still contain relevant concentrations of bioactive compounds, notably condensed tannins, also called procyanidins (Devesa-Rey et al., 2011). Grape seed polyphenols have been reported for preventive and therapeutic use in Alzheimer's disease (Wang et al., 2009), for chemoprevention of various cancers through antioxidant activities (Mancini et al., 2023) and for the prevention of aortic atherosclerosis development in cardiovascular disease (Auger et al., 2004). Alternatively, grape seed-based bioactive compounds have been proposed for several industrial applications, including cosmetics and nutraceutics (Salem et al., 2023). The polyphenol fraction of grape seeds is composed of a complex mixture of monomeric flavan-3-ols as well as oligomeric and polymeric forms with high structural complexity (Ma et al., 2018; Pasini et al., 2019; Rockenbach et al., 2012). This peculiar chemical complexity makes it challenging to assess the quality and composition of grape seed residues used as raw material to produce commercial grape seed extracts with high procyanidin contents (Padilla-González et al., 2022).
Metabolomics aims to explore the complex small-molecule profiles of a biological system from a given genotype under the influence of environmental factors (Fiehn, 2002). Metabolomics combined with chemometric methods has been successfully applied to a variety of plant products to evaluate their quality, authenticity and safety, and can also be used to address geographical origin or the control of adulteration (Pereira et al., 2023; Sarkar et al., 2023). Grape metabolomics has been relevant for classifying different genotypes based on berries, wines and by-products (Billet et al., 2018, 2021; Chira et al., 2009; Mattivi et al., 2006), and the signature of the geographical origin was revealed in wine and grape quality (Anesi et al., 2015; Canizo et al., 2018). Focusing on grape seed residues from the ethanol-distillation industry, only limited data are available. Different extraction methods have been proposed, showing the possible recovery of valuable polyphenols even after long thermal distillation (Peralbo-Molina et al., 2013), and HPLC methods were developed to control adulteration in grape seed extracts (Govindaraghavan, 2019; Villani et al., 2015). Nowadays, standardized extracts based on grape seed residues from selected varieties are released on the market, but no studies have reported the influence of variety and geographical origin. The development of analytical approaches is therefore required to assess the complex polyphenol composition of grape seed residues in order to assist the selection of raw materials. During grapevine growth, the biosynthesis of flavan-3-ols starts before flowering and increases until véraison, with accumulation in the skin and seeds of berries. During berry development, the change in procyanidin composition is responsible for a seed color change from green to brown, and this feature is used by winegrowers to estimate the maturity stage. An increase in the degree of polymerization in grape seeds has been observed during maturation, but these observations remain controversial (Geny et al., 2003; Rousserie et al., 2019). Polyphenol metabolism plays a major role in plant adaptation to environmental stress, including biotic and abiotic factors; consequently, polyphenol variation in grape seed residues according to geographical origin may be expected but has not been reported. The impact of viticultural practices, including leaf removal, water deficit irrigation or pruning, was investigated as a possible cause of seed polyphenol changes; however, no tangible impact was observed (Rousserie et al., 2019). The aim of the study was to assess the variability of the polyphenol composition of grape seed residues from several distilleries covering 8 grape varieties and 4 wine-producing regions in Europe. UPLC-MS-based semi-targeted metabolomic profiling was applied to identify the compounds in the extracts of grape seed residues. The major polyphenols were quantified, allowing a ranking of raw materials according to polyphenol content. Chemometric tools, including principal component analysis (PCA), hierarchical cluster analysis (HCA) and orthogonal partial least squares discriminant analysis (OPLS-DA), were used to classify the samples and propose biomarkers.
Samples preparation Polyphenol extraction from grape seed residues was based on Narduzzi et al. (2015). Grape seed samples (10 × 5 replicates) were ground for 2 min in a cooled analytical mill (Ika-Werke A10, Staufen, Germany). Fifty mg of each sample powder were extracted with 1 mL of a methanol/water/chloroform (2:1:2; v/v/v) mixture containing 0.1% formic acid. The samples were then placed for 1 h in an ultrasonic bath filled with ice (AL04-12-230, Advantage Lab) and centrifuged for 10 min at 16,800 g at 4 °C. The upper aqueous phase (400 μL) was collected and added to 600 μL of water/acetonitrile (95:5, v/v) acidified with 0.1% formic acid. The samples were centrifuged a second time for 10 min at 18,000 rpm at 4 °C. The supernatants were stored at −20 °C prior to UPLC-DAD-MS analyses. UPLC-DAD-MS analyses The semi-targeted UPLC-DAD-MS method was adapted from a previous study (Billet et al., 2020) using a Xevo TQD mass spectrometer operated in positive and negative ionization modes (Waters, Milford, MA). Analytes were eluted with a linear gradient from 5 to 30% of solvent B (acetonitrile containing 0.1% formic acid) on an ACQUITY UPLC HSS T3 1.8 μm (2.1 × 150 mm) column (Waters, Milford, MA). Solvent A consisted of water containing 0.1% formic acid. Quality control (QC) samples, representing a mixture of all samples from the study, were injected regularly, every 10 samples, during the batch. Treatment of MS data Full-scan data acquisition in the range 50-2000 m/z was used for the metabolic profiling of grape seed extracts from the 10 different origins. Analyte identification was established from retention times, m/z values and UV spectra by comparison with commercial standards, or with data from the literature when no standards were available. Moreover, electrospray ionization (ESI) in-source fragmentation provided key information for the identification. Once metabolic profiling was completed, quantitative UPLC-MS analyses were performed using selected ion monitoring (SIM) mode by targeting the 42 molecular ions, as either [M+H]+ or [M−H]−. The generated chromatograms were integrated using the TargetLynx application of the MassLynx 4.2 software. Every integrated peak was visually checked and manually corrected if necessary. Absolute quantification was performed for catechin, epicatechin, procyanidins B1-B4 and procyanidin C1 using 6-point calibration curves (0-10 ppm) of pure standards. Standards were injected under the same analytical conditions and in the same sample set as the grape seed samples. Quantification was achieved through selected ion monitoring (SIM) mode as described above, targeting the m/z corresponding to the [M−H]− ions. Statistical analyses Multivariate statistical analyses were conducted with the SIMCA 17.0 software (Umetrics AB, Umeå, Sweden). Principal component analysis (PCA) was applied to all samples, and hierarchical cluster analysis (HCA) was performed using Ward's method. Co-occurrence networks were established as previously described (Billet et al., 2023). Orthogonal partial least squares discriminant analysis (OPLS-DA) was conducted according to variety color to identify the variables important in the projection (VIP > 1). Kruskal-Wallis tests were used for nonparametric univariate statistics.
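As an illustration of this chemometric workflow, the sketch below reproduces the unsupervised steps (log scaling, PCA, Ward HCA, and the co-occurrence correlation threshold) using scikit-learn and SciPy in place of SIMCA; the peak table is simulated and the variable names are illustrative. The OPLS-DA step is omitted here because it was run in the SIMCA software.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
peak_table = rng.lognormal(size=(50, 42))       # stand-in for SIM peak areas

# Log-transform and autoscale the 42 metabolite intensities.
X = StandardScaler().fit_transform(np.log1p(peak_table))

# Unsupervised projection: the first two components play the role of the
# score plot used to separate variety colors and metabotypes.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance:", pca.explained_variance_ratio_)

# Hierarchical clustering with Ward's method on Euclidean distances.
tree = linkage(X, method="ward")
groups = fcluster(tree, t=4, criterion="maxclust")

# Co-occurrence network: keep metabolite pairs with Pearson R > 0.6
# (the published network additionally filtered on p-value < 0.05).
corr = np.corrcoef(X, rowvar=False)
edges = [(i, j, round(corr[i, j], 2))
         for i in range(corr.shape[0]) for j in range(i + 1, corr.shape[0])
         if corr[i, j] > 0.6]
print(len(groups), "samples clustered;", len(edges), "network edges")
```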
Another ion, at m/z 287, was assigned to galloylcatechin A (Narduzzi et al., 2015). UPLC-DAD-MS chromatograms (Fig. 2) presented a typical baseline increase from 5 min, called an "unresolved hump" or "bulge", explained by the elution of condensed tannins with high degrees of polymerization (up to DP20) (Villani et al., 2015; Ma et al., 2018; Peng et al., 2001; Rockenbach et al., 2012; Tsang et al., 2005). Considering the glycosylated metabolites, only two compounds, catechin glucoside and monogalloyl glucose, could be detected in the present study. This is far fewer than the 14 different flavan-3-ol monoglycosides reported for fresh grape seeds (Delcambre & Saucier, 2012). The apparent loss of glycosylated compounds in grape seeds as distillery by-products, compared with grape seeds from fresh berries, could be explained by thermal deglycosylation occurring during the distillation of grape pomace. Absolute quantification The absolute quantification of the major polyphenols (catechin, epicatechin, procyanidins B1-B4, procyanidin C1 and C-type procyanidins) of grape seed extracts from five white grape cultivars (Chardonnay, Chenin, Melon, Muscat, Sauvignon) and three red grape cultivars (Pinot Gris, Pinot Meunier, Pinot Noir) was performed by UPLC-MS in SIM mode (Fig. 3A; Table S1). The monomeric flavan-3-ols (catechin and epicatechin) were the two major compounds in all tested samples, with the highest concentrations observed in the red grape varieties compared with the white ones. The highest concentrations of catechin were detected in Pinot Gris and Pinot Noir from Veneto, with 0.85 ± 0.06 and 0.80 ± 0.10 mg/g DW respectively, as well as in Pinot Meunier and Pinot Noir from Champagne, with 0.76 ± 0.04 and 0.73 ± 0.11 mg/g DW, respectively. White cultivars were characterized by relatively lower concentrations of monomeric flavan-3-ols. The lowest concentrations of catechin were observed in Sauvignon and Chenin from the Loire Valley, with 0.36 ± 0.03 and 0.34 ± 0.03 mg/g DW, respectively. After the monomeric flavan-3-ols, several B-type (dimeric) procyanidins were the most abundant compounds, again with the highest concentrations observed in the red grape varieties compared with the white ones. Among procyanidins B1, B2, B3 and B4, the concentrations of B1 and B2 were often higher than those of B3 and B4. The concentrations of procyanidin B1 were maximal in Pinot Noir, Pinot Gris and Pinot Meunier from Champagne and Veneto, ranging from 0.43 ± 0.03 to 0.49 ± 0.02 mg/g DW, whereas they were only 0.12 ± 0.01 and 0.14 ± 0.01 mg/g DW in Chenin and Sauvignon from the Loire Valley, respectively. The levels of procyanidin B2 were maximal in Pinot Noir, Pinot Gris and Pinot Meunier from Champagne and Veneto, ranging from 0.38 ± 0.02 to 0.54 ± 0.01 mg/g DW, whereas Chenin from the Loire Valley contained only 0.14 ± 0.02 mg/g DW. Procyanidin B3 accumulated in the range of 0.06 ± 0.01 (Sauvignon from the Loire Valley) to 0.31 ± 0.02 mg/g DW (Pinot Noir from Veneto), and procyanidin B4 from 0.06 ± 0.01 (Sauvignon from the Loire Valley) to 0.25 ± 0.01 mg/g DW (Pinot Noir from Veneto). The levels of procyanidin C1 were maximal in Pinot Noir, Pinot Gris and Pinot Meunier from Champagne and Veneto, ranging from 0.24 ± 0.01 to 0.36 ± 0.01 mg/g DW, whereas Chenin from the Loire Valley contained only 0.01 ± 0.001 mg/g DW. In all varieties, the procyanidin C1 concentration was higher than that of the other C-type procyanidins (Fig. 3A; Table S1).
Fig. 3B presents the ranking of the polyphenol contents in grape seed residues as the sum of catechin, epicatechin, procyanidins B1-B4, procyanidin C1 and C-type procyanidins. Pinot Noir, Pinot Gris and Pinot Meunier from Champagne and Veneto showed the highest concentrations, with >4.39 ± 0.51 mg/g DW, whereas Chenin and Sauvignon accumulated <1.44 ± 0.17 mg/g DW (Table S1). While procyanidins and flavan-3-ols have usually been analyzed in grape seeds from fresh berries or during the winemaking process, no report has examined the corresponding levels in grape seeds as by-products of distilleries. The present catechin and epicatechin levels in grape seed residues were much lower than those described for grape seeds from fresh berries (Bozan et al., 2008; Chira et al., 2009; Popov et al., 2017; Yilmaz & Toledo, 2004). Indeed, a great portion of seed polyphenols is extracted during the winemaking process, and later, during pomace distillation, thermal degradation is likely to occur (Cisneros-Yupanqui et al., 2022; Rousserie et al., 2019). Interestingly, the present ranking of cultivars according to polyphenol contents corresponded to previous results for grape seeds from fresh berries, with high accumulation in Pinot Noir and Pinot Gris and low accumulation in Chardonnay, Muscat and Sauvignon (Popov et al., 2017). Fig. 2. UPLC-DAD-MS chromatographic profile of grape seed extract from a quality control sample. The identification of the annotated peaks is presented in Table 1. Multivariate statistical analyses PCA was performed to show similarities and differences in the metabolomic composition of grape seed extracts depending on varietal and geographical origin (Fig. 4). The PCA score plot explained 67.6% of the dataset variability on the first two principal components, with the first principal component (PC1) accounting for 53.2% and the second (PC2) for 14.4% of the overall variance. Quality control (QC) samples appeared well grouped at the intersection of PC1 and PC2, confirming the robustness of the measurements and the low analytical variability. A perfect separation of sample groups was obtained according to grape variety color, as represented by the two ellipses on the PCA score plot (Fig. 4A). Additionally, most of the sample groups appeared well clustered, highlighting specific metabolomic compositions (metabotypes) according to the varietal and geographical origins of the grape seeds. Two sample groups overlapped (Pinot Meunier from Champagne and Pinot Noir from Veneto), revealing a close phytochemical composition. Interestingly, a supergroup of samples was projected on the PC1 and PC2 negative scores, corresponding to Pinot Noir, Pinot Gris and Pinot Meunier. This group of metabotypes presented similarities even when the samples originated from two geographical origins (Pinot Noir from Veneto and Champagne) and corresponded to closely related genotypes previously called "Noiriens", defined as an eco-geographical group by ampelographers (Bisson, 1999; Levadoux, 1948). Nevertheless, the PCA score plot also enabled assessment of the impact of geographical origin, as shown by the separation of the two sample groups of Chardonnay and Pinot Noir originating either from Champagne or from Veneto.
The loading plot (Fig. 4B) presented the underlying metabolites responsible for the separations, with polyphenol-rich metabotypes projected on the PC1 negative side, in the opposite direction to metabotypes showing over-accumulation of amino acids on the PC1 positive side, except for L-tryptophan (m12). As an example, Sauvignon samples projected on the PC1 positive side were described by relatively high accumulation of L-tyrosine (m2), L-leucine (m3), L-isoleucine (m4) and L-phenylalanine (m6) and poor levels of procyanidins. In contrast, Pinot Noir and Chardonnay originating from Champagne, projected on the PC1 negative side, presented high contents of all procyanidins and low amounts of amino acids. This opposition could be explained by the trade-off between primary and secondary metabolism in plants (Neilson et al., 2013). Variations along the PC2 axis corresponded to the relative compositions of galloylated and non-galloylated procyanidins. Red grape varieties, projected on the PC2 negative side, presented higher contents of dimeric (m14, m16, m21, m23), trimeric (m7, m13, m18, m20, m22, m25, m30, m33) and tetrameric (m19, m26, m29, m34) procyanidins, while white grape varieties, projected on the PC2 positive side, presented higher levels of galloylated procyanidins (dimers: m31, m32, m41; trimers: m24, m28, m36, m38, m9; tetramers: m37). Consequently, galloylated procyanidins appeared as biomarkers of white grape varieties in grape seeds, and non-galloylated procyanidins as biomarkers of red grape varieties. The presence of galloyl groups is known to affect the physicochemical properties of polyphenols, and galloylation usually improves the biological activities of procyanidins by increasing their bioavailability (Karas et al., 2017). In the future, it could therefore be interesting to develop grape seed extracts based on selected white varieties rich in galloylated procyanidins, with the aim of improving the bioavailability of bioactive compounds. A co-occurrence network based on the 42 metabolites was built to reveal similar patterns of accumulation among the 8 tested genotypes from 4 geographical origins (Fig. 4C). It showed 507 significant positive correlations at the threshold R > 0.6 and p-value <0.05. Short node distance (Pearson correlation coefficients) indicates high correlation. As a result, the correlation network showed that structurally related compounds were intercorrelated and clustered together. Polyphenols were highly correlated in a supercluster showing specific subclusters depending on the degree of oligomerization and galloylation of the procyanidins. Three amino acids, L-isoleucine (m3), L-leucine (m4) and L-phenylalanine (m6), were also correlated in a second cluster. Hierarchical clustering analysis To go further in the classification of grape seed residues, HCA was applied to the loading matrix based on the relative abundance of the 42 metabolites. The dendrogram showed the overall structural similarities of metabotypes determined by Ward's clustering based on Euclidean distance (Fig. 5). A perfect separation of all sample groups was observed in the HCA, enabling the discrimination of grape seed residues by varietal and geographical origin. Interestingly, sample positions in the dendrogram correspond to the order observed when ranked by total polyphenol content (Fig. 3).
The dendrogram structure showed subgroups, suggesting different degrees of similarity between metabotypes. One subgroup consisted of the three red varieties, namely Pinot Noir, Pinot Gris and Pinot Meunier, corresponding again to the "Noiriens" group (Bisson, 1999; Levadoux, 1948). On the other hand, the five white varieties, Sauvignon, Muscat, Melon, Chenin and Chardonnay, were grouped together. We observed that the sample groups corresponding to Chardonnay from Veneto and Champagne were positioned close to the "Noiriens" group. Previous attempts at classification based on metabolomic analyses of grape cane extracts reported close similarities between the metabotypes of Chardonnay and Pinot Noir (Billet et al., 2018). The direct lineage of Chardonnay from Pinot Noir, as confirmed by genetic studies (Lacombe et al., 2013), could explain the closeness of these metabotypes. Sauvignon from the Loire Valley presented the chemical signature furthest from the other sample groups, explained by high amino acid amounts and low polyphenol contents. Conclusions A semi-targeted metabolomic approach was applied to characterize the composition of grape seed residues from several distilleries covering 8 grape varieties and 4 wine-producing regions in Europe. Forty-two metabolites were identified, belonging to non-galloylated and galloylated procyanidins (dimers, trimers and tetramers) as well as amino acids. Polyphenol concentrations, as the sum of catechin, epicatechin, procyanidins B1-B4, procyanidin C1 and C-type procyanidins, allowed a ranking of the raw materials, with the red varieties from Champagne and Veneto showing the highest contents. Fig. 1. Location of the wine-producing areas where grape seed residues of different varieties were collected for the present study. Fig. 3. Concentration of single polyphenols (A) and total polyphenol concentration (B) in extracts from grape seed residues. Error bars represent the standard deviation. Significant differences were found between values with different letters (ANOVA, p-value <0.05). Fig. 4. Unsupervised classification using principal component analysis on metabolomic data from extracts of grape seed residues. In the score plot (A), colors correspond to varieties and symbols to geographical origins. In the loading plot (B), colors correspond to the metabolite class and numbers to the metabolite name. Co-occurrence networks on metabolites from extracts of grape seed residues (C). Threshold: R > 0.6 and p-value <0.05. Short node distance indicates high correlation. Fig. 6. Supervised classification using OPLS-DA with "color variety" as the discriminant variable on metabolomic data of grape seed extracts (A). VIP scores of the OPLS-DA (B). The color corresponds to the polyphenol class and the numbers to the metabolite name (Fig. 4). Validation plot of 200 permutation tests for the OPLS-DA model built for grape seed extracts. Table 1. List of compounds identified in the studied grape seed extracts.
4,498.2
2024-04-01T00:00:00.000
[ "Agricultural and Food Sciences", "Chemistry" ]
Loss of Leucine-Rich Repeat Kinase 2 (LRRK2) in Rats Leads to Progressive Abnormal Phenotypes in Peripheral Organs

The objective of this study was to evaluate the pathology time course of the LRRK2 knockout rat model of Parkinson's disease at 1-, 2-, 4-, 8-, 12-, and 16-months of age. The evaluation consisted of histopathology and ultrastructure examination of selected organs, including the kidneys, lungs, spleen, heart, and liver, as well as hematology, serum, and urine analysis. Starting at 2-months of age, the LRRK2 knockout rat displayed abnormal kidney staining patterns and/or morphologic changes that were associated with higher serum phosphorus, creatinine, cholesterol, and sorbitol dehydrogenase, and lower serum sodium and chloride, compared to the LRRK2 wild-type rat. Urinalysis indicated pronounced changes in LRRK2 knockout rats in urine specific gravity, total volume, urine potassium, creatinine, sodium, and chloride that started as early as 1- to 2-months of age. Electron microscopy of 16-month-old LRRK2 knockout rats displayed an abnormal kidney, lung, and liver phenotype. In contrast, there were equivocal or no differences in the heart and spleen of LRRK2 wild-type and knockout rats. These findings partially replicate data from a recent study in 4-month-old LRRK2 knockout rats [1] and expand the analysis to demonstrate that the renal and possibly lung and liver abnormalities progress with age. The characterization of LRRK2 knockout rats may prove to be extremely valuable in understanding potential safety liabilities of LRRK2 kinase inhibitor therapeutics for treating Parkinson's disease.

Introduction

Parkinson's disease (PD) is the second most common neurodegenerative disease, affecting 1-2% of the population over the age of 60 [2,3]. The cardinal clinical features include tremor, rigidity, bradykinesia and/or postural instability, as well as neuropathological loss of dopaminergic neurons in the substantia nigra (SN), decreased dopamine (DA) neurotransmission, and the presence of neuronal intracellular Lewy body (LB) inclusions [2]. In addition, non-motor features such as depression, constipation, pain, and sleep disorders are important manifestations of the disease [4]. Although PD was historically believed to have no strong genetic component, mutation or variation in a number of genes is now recognized as a causal or risk-associated factor in a growing number of PD cases [5][6][7]. Mutations in the leucine-rich repeat kinase 2 (LRRK2) gene are the most common cause of familial and late-onset PD identified to date [3]. The most common LRRK2 mutation, G2019S, accounts for as much as 30-40% of Parkinsonism in Ashkenazi Jews and North African Arab-Berber populations [8,9]. Furthermore, LRRK2 mutations account for up to 2% of sporadic Parkinsonism [10]. The LRRK2 gene encodes a large multi-domain protein containing an ankyrin repeat region, a leucine-rich repeat domain, a Ras of complex proteins (Roc) GTPase domain, a C-terminal of Roc (COR) domain, a kinase domain, and a WD40 domain [11]. The LRRK2 G2019S mutation in the kinase domain appears to increase its enzymatic activity [12], and since LRRK2-related PD and sporadic PD display a similar phenotype [13], pharmaceutical companies are pursuing LRRK2 kinase inhibitors to reduce this gain of function as a promising therapeutic option for people with PD. To be viable for human therapeutic development, drug makers must demonstrate that inhibition of LRRK2 activity is safe.
In the absence of optimal tool compounds (i.e., potent and selective for LRRK2), researchers have utilized genetically modified rodent models to explore potential liabilities of targeting LRRK2 kinase activity. Studies in LRRK2-deficient mice have found morphological and histopathological abnormalities in both kidney and lung tissue that have been associated with impairments in the autophagy pathway [14][15][16][17]. LRRK2 knockout (KO) mice display large kidneys that are dark red, with microscopic presence of microvacuoles in the proximal tubule epithelial cells. A lung phenotype (increased number and size of lamellar bodies) has also been found in LRRK2 KO but not kinase-dead (KD) mice, suggesting that the LRRK2 protein-protein binding domains, rather than the kinase domain, may be crucial for normal lung function [16]. However, the LRRK2 mouse studies published to date have not examined any clinical chemistry or other biomarkers that may be associated with these deficits. This information could be critical to guiding the development of appropriate safety measures for future clinical trials.

Recently, it was reported that LRRK2 KO rats at 4-months of age exhibit perturbations in renal morphology accompanied by significant decreases of lipocalin-2 (NGAL) in both urine and plasma [1]. Although consistent with reports in KO mice, this finding is inconsistent with renal damage, since an increase in NGAL is an early responder to nephrotoxicity and tubular damage. The authors speculate that the decrease in NGAL may be independent of renal function but associated with alterations of immune homeostasis [1]. Furthermore, significant alterations in the cellular composition of the spleen between LRRK2 KO rats and wild-type (WT) animals were detected, with subtle differences in response to dual infection with rat-adapted influenza virus and Streptococcus pneumoniae. A molecular pathway analysis revealed links between LRRK2 and the thioredoxin system, which interacts with PRDX3, TXNIP, and TXNRD1. These proteins are associated with nutrient sensing, adiposity, and human obesity. The authors suggest that there might be a link between the reported LRRK2 KO weight gain, LRRK2 deficiency, and the thioredoxin pathway [1]. Given that this characterization of LRRK2 KO rats was limited to one age, and that there is still ambiguity regarding the clinical pathology markers associated with the LRRK2 KO renal phenotype, the present study extends these findings by examining the morphology, histopathology, ultrastructure, blood, and urine chemistry in LRRK2 KO and WT rats in 6 different age groups spanning a 16-month period.

Materials and Methods

Ethics Statement: All animal work in these studies complies with National Institutes of Health guidelines for humane animal welfare and was approved by the WIL Research and VA Medical Center/Portland IACUC committees.

LRRK2 KO and Long-Evans WT rats

Three separate cohorts of homozygous LRRK2 KO and WT male Long Evans rats from Sigma Advanced Genetic Engineering (SAGE) Laboratories were maintained and aged to 1-, 2-, 4-, 8-, 12-, and/or 16-months of age. All breeding was conducted as homozygous x homozygous, so the WT and KO rats were not littermates. For the first cohort (4-, 8-, and 12-months of age; n=4 per group), organs were examined macroscopically and weighed. Rats were euthanized by decapitation and tissues were snap frozen in liquid nitrogen.
For the second cohort (1-, 2-, and 8-months of age; n=4 per group), rats were deeply anesthetized by an intraperitoneal injection of sodium pentobarbital and perfused in situ (4.0% paraformaldehyde in a 0.1 M phosphate buffer solution). At the time of necropsy, the tissues were dissected, placed in 10% neutral-buffered formalin fixative for 24-48 hours, and then transferred to 70% ethanol. For the third cohort, 16-month-old rats (n=4 per group) were anesthetized and then perfused transcardially with 350 ml of electron microscopy (EM) fixative, consisting of 1% glutaraldehyde, 0.5% paraformaldehyde, and 0.1% picric acid in 0.1 M phosphate buffer. The different tissue preparations for the three cohorts arose from the varying requirements of the analyses.

Microscopic Examination (Cohorts 1 and 2)

Microscopic examination of hematoxylin-eosin (H&E) stained paraffin sections was performed on all tissues collected at necropsy from all animals. Also, since LRRK2 has a role in autophagy and the kidney has been shown to be affected in KO rodents, tubular lysosomes were assessed using a variety of histochemical and immunohistochemical methods (see Supplement S1 for specific methodologies). Stained histologic sections were examined by light microscopy. Grading of lesions noted on H&E stained sections, and of staining patterns in histochemical and immunohistochemical stained sections, is detailed in Supplement S2.

Lipofuscin stain (AFIP method; kidney, lung, spleen, heart, and liver). This histochemical stain, using carbol fuchsin and picric acid, detects residues of lysosomal digestion. Lipofuscin is considered a pigment associated with cell organelle damage and aging [18].

Chromotrope aniline blue (CAB; kidney only). This stain is used to detect protein-containing hyaline droplets in the tubular epithelium; the CAB has a high affinity for protein and stains it a bright red [19].

N-acetylglucosaminidase IHC (NAGLU; kidney only). This is a lysosomal enzyme involved in the breakdown of glycosaminoglycans [21].

Kidney Injury Molecule-1 IHC (KIM-1; kidney only). This protein is expressed at low levels in the normal kidney and is a type 1 cell membrane glycoprotein that regulates cell-cell adhesion and endocytosis. Endocytosis is one function of the proximal tubular epithelium in which lysosomes play a crucial role [22].

Electron Microscopy (Cohort 3)

Following perfusion of cohort 3, the lung, liver, kidney, spleen, heart, and brain were collected and placed in EM fixative overnight at 4°C. Each organ was then cut into 2 mm³ sections, processed for EM using a newly developed microwave (Pelco BioWave, Ted Pella, Inc.) procedure as previously described [23], and embedded in Epon-Spurr resin overnight at 60°C. After each tissue block was evaluated for quality, selected blocks were thin-sectioned on an ultramicrotome to 60 nm thickness using a diamond knife (Diatome, Hatfield, PA) and then counterstained with uranyl acetate and lead citrate. Images were taken randomly throughout the tissue section with a JEOL 1400 Transmission Electron Microscope (JEOL, Peabody, MA) and photographed using a digital camera (AMT, Danvers, MA). Between 30 and 50 photos per tissue section were taken. Once morphological changes between the LRRK2 KO and the WT were found, further analysis was performed using ImagePro Plus software (Media Cybernetics, Rockville, MD). In the lung, the number of lamellar bodies per cell, the area of the lamellar bodies, and the area of alveolar Type II cells were calculated. In the liver, the area of the hepatocytes and of the lipid droplets was calculated, as well as the number of lipid droplets per cell. After the data were collected, differences between the LRRK2 KO and WT groups were determined using Student's t-test, and the data were graphed using GraphPad Prism.
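The group comparisons described above reduce to a two-sample Student's t-test per morphometric endpoint. A minimal sketch follows; the numbers are hypothetical per-animal values, not data from this study.

```python
# Two-sample t-test sketch for a KO vs. WT morphometric endpoint.
# Values are hypothetical (e.g., lamellar bodies per Type II cell, n = 4 per group).
import numpy as np
from scipy import stats

ko = np.array([14.0, 16.5, 15.2, 17.1])        # LRRK2 KO group
wt = np.array([8.9, 10.2, 9.5, 11.0])          # wild-type group

t_stat, p_value = stats.ttest_ind(ko, wt)      # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # difference called significant if p < 0.05
```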
Clinical Pathology

Hematology, coagulation, serum chemistry, and urinalysis parameters were evaluated in all animals in cohort 2 (1-, 2-, and 8-months old) just prior to the scheduled necropsy. Animals at least 2-months of age were fasted overnight prior to blood collection. Blood samples were collected via the jugular vein. Urine was collected overnight using metabolism cages. The anticoagulants used were potassium EDTA for hematology parameters and sodium citrate for coagulation parameters; no anticoagulant was used for serum chemistry parameters. The clinical pathology parameters evaluated are listed in Supplement S3. Urine sodium, potassium, and chloride were measured (mEq/L) and normalized to urine creatinine (mg/L).

Statistical analysis

Organ weights (absolute, relative to body weight, and relative to brain weight) and clinical pathology parameters were analyzed by a two-sample t-test.

Gross Observations and Organ Weights

All animals were apparently healthy, viable, and survived to the scheduled necropsy (referred to as day 0). Mean body weights in the LRRK2 KO group were higher than in the Long Evans WT group at all ages on study days -1 and 0; the differences were significant (all statistical analyses employed two-tailed t-tests; p<0.05 or p<0.01) at 2- and 8-months of age. Absolute brain weight, brain length, and brain width values were higher in LRRK2 KO rats in all age groups, and all of these differences from the control group were statistically significant (p<0.01) except for the brain length value at 1-month of age. Brain measurement changes were not associated with microscopic findings.

Histopathological Observations

The most unequivocal morphologic phenotype associated with knockout of the LRRK2 gene was observed in the kidney. Kidney changes manifested grossly as dark red kidneys in 8- and 12-month-old rats and microscopically as hyaline droplets, cytoplasmic vacuolation, and brown pigment accumulation in renal tubules of 2-, 4-, 8-, and 12-month-old rats (Table 1). No LRRK2 KO-related microscopic changes were noted at 1-month of age. Microscopic changes observed in the kidney with the histochemical and immunohistochemical stains are detailed below and shown in Figures 1 through 7. Microscopic observations in the Long Evans WT rats (histochemical and immunohistochemical) were generally similar among the 1-, 2-, 4-, 8-, and 12-month age groups, with the exception of chronic progressive nephropathy incidence and severity. Figure 1 details the microscopic findings of the 4-month Long Evans WT age group, which was considered generally representative of all WT age groups.

Histochemical stains

Kidney. Brown, granular-to-globular pigment was observed in the proximal tubular epithelium of the cortex (P1 and P2 segments) at 4-, 8-, and 12-months of age (Figures 4A, 5A, 6A) and in the outer stripe (P3 segment) of the medulla at 8- and 12-months of age in LRRK2 KO rats. Pigment became more abundant and globular with age and distorted the cytoplasm, especially in the cortex. Renal medullary pigment also increased in abundance but was typically finely granular.
The pigment was also demonstrated using a lipofuscin stain, which highlighted the less abundant pigment noted at 4-months of age and the marked increase in pigment accumulation with age (Figures 4B, 5B, 6B). Lipofuscin-positive pigment was not observed in 1- and 2-month-old LRRK2 KO rats. Hyaline droplets, characterized by eosinophilic, well-demarcated, intracytoplasmic globules, were observed in the proximal tubular epithelium at 2-, 4-, and 8-months of age in LRRK2 KO rats. Hyaline droplets in the proximal tubular epithelium of LRRK2 KO rats were minimally increased in number at 2-months of age. At 4-months of age, the droplets were more prominent, irregularly shaped, larger, and greater in number. Hyaline droplets were still observed at 8-months of age but were less numerous, irregularly shaped, and smaller. Hyaline droplets were demonstrated using the CAB stain (Figures 3C, 4C, 5C, 6C), which highlighted their shape, number, and distribution and demonstrated their variable colocalization with intracytoplasmic brown pigment. Hyaline droplets were noted on CAB-stained sections of kidney in the Long Evans WT rats starting at 2-months of age, but these were pinpoint and represented normal intracytoplasmic protein (Figure 1C).

Pigment accumulation and irregular hyaline droplets were more pronounced in the P1 and P2 segments of the proximal tubule (cortex) of the LRRK2 KO kidney than in the P3 segment (medulla) and the distal convoluted tubule (where no LRRK2-related changes were observed). Phagocytosis and lysosomal activity, along with sodium and chloride reabsorption, are more extensive in the P1 and P2 segments. Alpha 2u-globulin, produced by the normal male rat liver, is phagocytosed and digested by the P1 and P2 segments. The small, regularly shaped hyaline droplets in WT rats are consistent with normal phagocytic activity toward this protein. The irregular hyaline droplets in LRRK2 KO rats suggest impaired lysosomal function, since they are similar to those noted in alpha 2u-globulin nephropathy in male rats [24].

Cytoplasmic vacuolation of the proximal tubular epithelium in the cortex, characterized by clear, well-delineated vacuoles, was observed in 4-, 8-, and 12-month-old LRRK2 KO rats. Lesions consistent with chronic progressive nephropathy (CPN), characterized by basophilic tubules, thickened basement membranes, and hyaline casts (present or absent) [24], were observed in Long Evans WT and LRRK2 KO rats at 4-, 8-, and 12-months of age. (In Table 1, "-" denotes not present, "+" present, and "++" through "++++" the relative number, size, and/or intensity of a finding.) There was a slightly higher incidence of chronic progressive nephropathy in LRRK2 KO rats than in WT rats in the 4-month age groups, but incidences and severities were similar at 8- and 12-months of age. The severities were minimal to mild at 4-months and minimal to moderate at 8- and 12-months of age (see Supplement S2 for the grading scheme). LRRK2 KO-related microscopic findings were not observed in the distal convoluted tubules in any age group.

Liver, Lung, Heart, Spleen. The only LRRK2 KO-associated microscopic abnormality noted in 1-, 2-, 4-, 8-, or 12-month-old rats was minimal to mild centrilobular hepatocellular vacuolation in the liver of 2- and 8-month-old rats in the second cohort. This vacuolation was not associated with hepatocellular degeneration. Vacuolation was not noted in the first cohort of rats, but those tissues were snap frozen rather than formalin-perfused, which may have obscured hepatocellular vacuoles.
There was no LRRK2 KO-related accumulation of pigment in these organs. LRRK2 KO-related abnormalities were not observed in the other tissues examined microscopically (see Supplement S4).

Immunohistochemical stains of the kidney

Lysosomes and some of their components were demonstrated using LAMP-1 (CD107a), LAMP-2 (CD107b), and N-acetylglucosaminidase (NAGLU) immunohistochemistry. In addition, Kidney Injury Molecule-1 (KIM-1), a marker of tubular epithelial injury, was assessed in the second cohort of rats.

WT LAMP-1 and LAMP-2 Expression. Baseline LAMP-1 and LAMP-2 expression in Long Evans WT kidneys was considered minimal at all ages. LAMP-1 staining of Long Evans WT kidneys was observed in the cortex (P1 and P2 segments) in the proximal tubular epithelium and was characterized as diffuse, brown staining in the cytoplasm (Figure 1D). Baseline LAMP-2 staining of Long Evans WT kidneys was observed in the proximal tubules of the cortex (P1 and P2 segments) and the outer stripe of the medulla (P3 segment). LAMP-2 staining was intense brown in the apical portion of the cytoplasm adjacent to the tubular lumen (Figure 1E).

LRRK2 KO LAMP-1 and LAMP-2 Expression. LAMP-1 expression in LRRK2 KO rat kidneys increased in intensity with age. In the 1- and 2-month age groups, LAMP-1 staining was minimal to mild in the cortex, diffuse in the proximal tubular cytoplasm, and slightly increased in intensity when compared to the Long Evans WT group (Figures 2D, 3D). LAMP-1 medullary staining of 1- and 2-month-old LRRK2 KO rats was similar to the Long Evans WT group. LRRK2 KO rats at 4-, 8-, and 12-months displayed LAMP-1 staining that co-localized with brown pigment accumulation in the proximal tubular epithelium. In 4-month-old LRRK2 KO rats, LAMP-1 staining was mild in the cortex and minimal in the outer stripe of the medulla (Figure 4D). In 8- and 12-month-old LRRK2 KO rats, LAMP-1 staining was moderate and granular to globular in the cortex, and diffuse and mild in the medullary proximal tubular epithelium (Figures 5D, 6D). LAMP-2 staining in 1- and 2-month-old LRRK2 KO rats was similar to that of the Long Evans WT group (Figure 1E) and maintained the intense apical staining (Figures 2E, 3E). LAMP-2 staining was mild in 4-month-old LRRK2 KO rats (Figure 4E) and moderate in 8- and 12-month-old LRRK2 KO rats (Figures 5E, 6E). LAMP-2 staining in the proximal tubules of the cortex was granular to globular and co-localized with brown pigment. As pigment accumulated with age and LAMP-2 staining became more globular and intense, some apical staining persisted in the proximal tubules. Medullary LAMP-2 staining in the outer stripe was mild and increased in intensity relative to the Long Evans WT group.

WT NAGLU Expression. Baseline N-acetylglucosaminidase (NAGLU) expression in Long Evans WT kidneys was considered minimal at all ages. Positive staining was noted in the proximal tubular epithelium in the cortex and the outer stripe of the medulla and was characterized by light brown, slightly granular staining in the cytoplasm. The second cohort of rats also demonstrated intense apical cytoplasmic NAGLU staining of the proximal tubular epithelium of the cortex. Light brown, diffuse cytoplasmic staining was observed in the medulla.

LRRK2 KO NAGLU Expression. NAGLU expression in 1-month-old LRRK2 KO rats (Figure 2F) was similar to the Long Evans WT groups (Figure 1F). In the 2-month-old LRRK2 KO rats, the staining became slightly more globular in the proximal tubules (Figure 3F).
In 4-, 8-, and 12-month-old LRRK2 KO rats, NAGLU staining was progressively mild, moderate, and severe, respectively (Figures 4F, 5F, 6F). The staining was characterized by variably sized globules in the cortical proximal tubular epithelium and granular staining of the proximal tubule cytoplasm of the outer medullary stripe. NAGLU expression co-localized with the observed brown pigment but not with hyaline droplets.

WT KIM-1 Expression. KIM-1 staining was examined in cohort 2 (1-, 2-, and 8-month-old rats). Baseline KIM-1 staining of Long Evans WT rats was minimal and multifocal in 1- and 2-month-old rats and minimal to mild and multifocal in 8-month-old rats (Figure 7A). Positive KIM-1 staining was dark brown and located in the superficial cytoplasm and lumen of tubules in the cortex, medulla, and papilla. There was rare positive staining in the parietal epithelium of Bowman's capsule. In 8-month-old WT rats, the mild staining was similar to that noted in 1- and 2-month-old rats, with additional tubular staining in areas of chronic progressive nephropathy.

LRRK2 KO KIM-1 Expression. KIM-1 staining in 1- and 2-month-old LRRK2 KO rats was similar to that of Long Evans WT rats. KIM-1 staining in the cortex of 8-month-old LRRK2 KO rats was characterized by dark brown, variably sized granules that co-localized with the brown pigment (Figure 7B). KIM-1 staining was occasionally intensely positive in the apical cytoplasm of pigment-laden cells. KIM-1 staining in the outer stripe was cytoplasmic, light brown, and granular. In addition, positive staining was noted in areas of chronic progressive nephropathy.

Electron Microscopy (16-month-old rats)

The ultrastructure of 16-month-old rats was examined since the greatest pathology might be expected in this oldest age group. The major morphological changes between the LRRK2 KO and WT animals were seen in the kidneys, lung, and liver. In the kidneys of the LRRK2 KO animals, there was an increase in the area and number of lysosomes in the proximal convoluted tubules when compared to the WT group (Figure 8). There were also differences within the glomerulus, which showed an accumulation of lipid droplets that was not seen in the WT animals. Other structures of the glomerulus and proximal convoluted tubules in the KO group were morphologically similar to the WT animals, and the distal convoluted tubules were normal in the LRRK2 KO compared to the wild types. Analysis of the lung revealed that Type II alveolar cells had significantly increased numbers of lamellar bodies, total area of lamellar bodies, and cell area in the KO compared to WT animals (Figure 9). The average size of the lamellar bodies and the density of lamellar bodies per cell were not significantly different (data not shown). The other components of the Type II alveolar cells, as well as the Type I alveolar cells, of the KO animals were morphologically the same as in the WT group. There was increased accumulation of lipid droplets in both hepatocytes and stellate cells of the KO compared to the WT animals (Figure 10). The density of lipid droplets per cell was also significantly increased in the KO rats compared to the WT group (Figure 10). The area of the hepatocytes and the total area of lipid droplets per cell were not significantly different but showed a trend towards being increased in the KOs when compared to the WT group.
There was no difference in the average area of the lipid droplets between the two groups (data not shown). There were no other morphological changes in the liver between the KO and WT animals.

Hematology and Coagulation (1-, 2-, and 8-month-old rats)

LRRK2 KO-related hematological changes were minimal or occasionally mild and were mostly observed across all age groups (Table 2). There were lower red blood cell counts (all ages), lower hemoglobin values (1-month), lower hematocrit values (all ages), higher mean corpuscular volume (MCV; all ages), higher mean corpuscular hemoglobin (MCH; all ages) and mean corpuscular hemoglobin concentration (MCHC; all ages) values, higher hemoglobin distribution width (HDW) values (1- and 2-months) or lower HDW values (8-months), and lower red cell distribution width values (8-months). The percent and absolute reticulocyte counts were higher at 1- and 2-months of age but lower at 8-months of age. Mean platelet counts were higher at 1-, 2-, and 8-months of age. The differences from the Long Evans wild-type group cited above were statistically significant (p<0.01 or p<0.05) except for the hematocrit values at 2- and 8-months of age, MCV at 1- and 8-months of age, MCH at 1-month of age, and MCHC at 1-month of age. Overall, these changes were minimal to mild and suggested minimal to mild red blood cell loss with a reticulocyte response at 1- and 2-months of age. The reticulocyte response was not apparent at 8-months of age, suggesting the regenerative ability was not as robust. There were no microscopic bone marrow abnormalities associated with the erythrocyte or platelet changes.

Urinalysis (1-, 2-, and 8-month-old rats)

Urinalysis and urine chemistry values are summarized in Table 4. Urine specific gravity in LRRK2 KO rats was lower than in Long Evans WT rats at 2- and 8-months of age and was associated with higher urine total volumes. Urine creatinine was lower and urine creatinine clearance was higher in LRRK2 KO rats, an effect most pronounced at 2-months of age; urine creatinine clearance was similar in 2- and 8-month LRRK2 KO rats. Urine sodium was higher and urine potassium was lower in LRRK2 KO rats, with the change most pronounced in 2-month-old rats. Urine chloride was lower in 1- and 2-month-old LRRK2 KO rats and slightly higher in 8-month-old rats. Urine electrolytes were normalized to urine creatinine, correcting for urine volume variability. Urine sodium/creatinine was higher in LRRK2 KO rats, with a more pronounced change at 2-months of age. Urine potassium/creatinine values were elevated in 1-month-old LRRK2 KO rats but similar to or slightly lower than WT in 2- and 8-month LRRK2 KO rats. Urine chloride/creatinine values were higher in 1-, 2-, and 8-month LRRK2 KO rats, with a more pronounced change at 2-months of age.

Discussion

The evidence is unequivocal that LRRK2 KO mice and rats exhibit an abnormal kidney phenotype [1,[15][16][17]]. We have replicated these findings but also found that these abnormalities in the LRRK2 KO rat progress with age, coincide with clinical pathology biomarkers, and extend to the lung and liver. Interestingly, the youngest LRRK2 KO cohort examined (1-month-old) displays clinical pathology alterations that are not detectable by gross examination or by histochemical or immunohistochemical stains. Prior to the emergence of any abnormal phenotype, this cohort exhibits cholesterol, creatinine, phosphorus, chloride, sodium, and SDH alterations (see Table 3).
Furthermore, the lung and liver of the oldest cohort examined (16-month-old) display abnormal ultrastructural phenotypes that have not been previously reported. This highlights the importance of examining a wide range of age groups and employing a variety of techniques to uncover LRRK2 KO-induced phenotypes. While we have replicated some of the morphological and clinical pathology findings of the Ness et al. study [1] (e.g., increased body weight; altered cholesterol, red blood cell counts, and hematocrit percentage), extending the analysis to other age groups has uncovered further alterations in the kidney, lung, and liver. It should be noted, though, that the liver findings appear to reflect a metabolic process abnormality rather than the lysosomal type observed in the kidney and lung, since the EM liver finding of increased lipid droplets resembles the renal glomerular phenotype.

The identification of age-related phenotypes in the LRRK2 KO rats has important implications. First, it suggests that LRRK2 deficiency has deleterious effects over time that may first emerge prior to any gross morphological alterations. These early peripheral (e.g., blood or urine) signals may become safety biomarkers for future LRRK2 kinase inhibitor clinical trials. Second, it facilitates the selection of LRRK2 KO rat age cohorts for pharmacological mechanism-based safety studies. Examining potential on- or off-target effects of LRRK2 kinase inhibitors requires a LRRK2 KO animal model with a phenotype that will not mask potential safety liabilities of the inhibitors. The 1-month-old LRRK2 KO cohort may be a better animal model than 2-, 4-, 8-, 12-, or 16-month-old animals for LRRK2 kinase inhibitor safety experiments, as it exhibits a milder phenotype with regard to gross morphology and histopathology.

The challenge remains to ascertain the therapeutic window for a LRRK2 kinase inhibitor. It is important to note that all of the present studies were conducted using homozygous LRRK2 KO rats. Given that the kidney of the heterozygous LRRK2 KO mouse is devoid of abnormalities and the lung abnormality is associated only with KO and not KD mice [16], pharmacological LRRK2 kinase inhibition of less than 50% may be tolerable. To ascertain this safety window, predictive safety and efficacy animal models are needed to determine the minimal amount of LRRK2 kinase inhibition required for the treatment of Parkinson's disease. One challenge in developing a LRRK2 kinase inhibitor is that there is no robust in vivo model and only a few pharmacodynamic readouts (e.g., pSer935 and pSer1292) that can be used to screen the efficacy of potential LRRK2 kinase inhibitors. Without knowing the minimal LRRK2 kinase inhibition required to obtain efficacy, a therapeutic index is unobtainable.

It is plausible, however, that genetically induced abnormal phenotypes in rodents may not translate to other species (e.g., dog, non-human primates, and humans) and/or be predictive of pharmacologically induced LRRK2 toxicity. For example, the elevated cholesterol observed in the LRRK2 KO rats is not a good model of human cholesterol-related diseases such as atherosclerosis [25], since rat serum cholesterol is primarily composed of high-density lipoproteins. Also, the elevated SDH (a marker of liver damage) observed in LRRK2 KO rats was not associated with hepatocellular degeneration.
SDH is also expressed in the kidney, but typically at its highest levels in the regions that did not show any abnormalities in the LRRK2 KO rats (i.e., the glomeruli and distal convoluted tubules) [26]. Therefore, future experiments need to determine the implications of chronic exposure to potent and selective LRRK2 kinase inhibitors in both rodent and non-rodent species. If these pathological observations are related to LRRK2 kinase inhibition in non-rodent species and are predictive of clinical pathology, then the identification of cerebrospinal fluid (CSF) biomarkers along with peripheral (e.g., blood and/or urine) safety markers could be crucial for the development of LRRK2 kinase inhibitors. One recent promising approach is to measure LRRK2 released from exosomes in the CSF and urine [27]. The prediction is that LRRK2 kinase inhibitors will diminish the total LRRK2 levels secreted into exosomes and thus allow the measurement of LRRK2 target engagement from accessible sampling compartments [27]. A CSF biomarker would especially facilitate a first-in-class LRRK2 kinase inhibitor human trial by allowing the clinician to monitor the relationship between brain LRRK2 kinase activity and safety. Although the LRRK2 genetic rodent evidence suggests potential issues in inhibiting LRRK2 kinase activity, it is important to note that none of the LRRK2 KO-induced phenotypes reported to date translate to detrimental functional deficits [1,[15][16][17]]. Further pre-clinical studies examining pharmacological inhibition of kinase activity in non-rodent species, and the identification of safety/efficacy biomarkers, are needed. With pharmaceutical companies making advances in developing LRRK2 kinase inhibitors, it is crucial that we exhaust all means to bring a safe drug into the clinic.

Supporting Information

Supplement S1. Histochemical and immunohistochemical staining procedures.
7,119
2013-11-14T00:00:00.000
[ "Biology" ]
Native Language Identification Using a Mixture of Character and Word N-grams

Native language identification (NLI) is the task of determining an author's native language based on a piece of his/her writing in a second language. In recent years, NLI has received much attention due to its challenging nature and its applications in language pedagogy and forensic linguistics. We participated in the NLI 2017 shared task under the name UT-DSP. In our effort to implement a method for native language identification, we made use of a fusion of character and word N-grams, and achieved an optimal F1-score of 77.64% using both the essay and speech transcription datasets.

Introduction

Native Language Identification (NLI) is the task of using a piece of writing in a second language to determine the writer's native language. The main applications of NLI are in language teaching and in forensic linguistics (Kochmar, 2011). In language teaching, NLI can help determine the role of native language transfer in second language acquisition, so that course designers can adapt the material to the native language of the learners (Laufer and Girsai, 2008). In forensic linguistics, NLI can be the starting point in making assumptions about the identity of a text's author, which is of interest to intelligence agencies, as it yields the linguistic background of the author (Tsvetkov et al., 2013).

The 2017 shared task contains 3 sub-challenges (Malmasi et al., 2017). The first challenge is predicting the native language of an English language learner using a standardized assessment of English proficiency for academic purposes. The second challenge is native language identification using the transcriptions of spoken responses produced by test takers. The last sub-part of the NLI Shared Task 2017 is a fusion of the two, i.e., both written and spoken responses from test takers are at our disposal for making a prediction about their native language. Our team, UT-DSP, participated in the NLI Shared Task 2017; an account of our participation is given in this paper.

Related Work

The first NLI Shared Task was organized in 2013 (Tetreault et al., 2013). The task was designed to predict the native language of an English learner based only on his/her English writing. The corpus used for the training phase was the TOEFL11 corpus (Blanchard et al., 2013), which contained 11,000 English texts written by native speakers of 11 different languages. In total, 29 teams participated, achieving overall accuracy rates between 0.319 and 0.836. According to the NLI Shared Task 2013 report, the prevailing trend among teams was the use of character, word, and POS N-grams (Jarvis et al., 2013; Henderson et al., 2013; Bykh et al., 2013). The leading team (Jarvis) used a support vector machine (SVM) with more than 400,000 unique features, including lexical and POS N-grams. A number of teams employed simple N-gram-based methods, as these approaches can be simpler to implement and, as a result, less time-consuming. Gyawali et al. (2013) developed four different models using character N-grams, word N-grams, POS N-grams, and the perplexity rates of character N-grams, and used an ensemble of these four models to achieve an accuracy of 0.75. Kyle et al. (2013) used an approach employing key N-grams and outperformed the random baseline with an accuracy of 0.59.
Three years after the first NLI Shared Task, in 2016, the Computational Paralinguistics Challenge included a sub-task aiming at the prediction of native language based on recordings of spoken responses. The accuracy rates reported by participating teams ranged from 30.9 to 47.5 percent (Schuller et al., 2016).

Data Description

The datasets for the NLI Shared Task 2017 were released by the Educational Testing Service (ETS) in 4 phases, two belonging to the training and two to the testing phases. Each released dataset contained an equal number of files belonging to each of the following 11 languages: Arabic, Chinese, French, German, Hindi, Italian, Japanese, Korean, Spanish, Telugu, and Turkish.

Train - Phase 1. In this phase, a dataset containing 12,100 essay files was released, 1,100 of which were included in a collection named dev, chosen for evaluation purposes; the rest were used for training the method.

Train - Phase 2. The dataset released in this phase contained a collection of 12,100 speech files, which were added to the essay files released in the previous phase. Similar to the previous phase, 1,100 of the speech files were chosen as the dev collection for evaluation, and the remaining files were used to train the method. As both essay and speech files were at our disposal in this stage, we could train a method to predict the test taker's native language using both essay and speech datasets simultaneously, as well as using them separately.

Test - Phase 1. The first test phase's purpose was to test the implemented methods for native language prediction using the essay and speech collections separately. The essay and speech collections contained 1,100 files each, with no overlap between the files in the two.

Test - Phase 2. The aim of this phase was to test the fusion method on a collection of files belonging to 1,100 test takers. For each test taker, an essay and a speech file were included in the collection.

Methodology

An N-gram-based language model estimates the probability of the occurrence of the next language particle (i.e., character, word, etc.) given its N-1 previous particles of the same type, using a maximum likelihood estimation (MLE) approach (Amini et al., 2016; Brown et al., 1992). For example, let N(w_{i-n+1}^{i}) denote the number of occurrences of the word sequence w_{i-n+1} w_{i-n+2} ... w_{i-1} w_i in a corpus. The n-gram probability of word w_i given the sequence of words w_{i-n+1} w_{i-n+2} ... w_{i-1} preceding it is computed using formula (1):

$$P(w_i \mid w_{i-n+1}^{i-1}) = \frac{N(w_{i-n+1}^{i})}{N(w_{i-n+1}^{i-1})} \qquad (1)$$

Our work employed a simple approach using a mixture of character and word N-grams. To do so, we trained N-grams on each of the essay and speech transcription datasets for each language. The method was implemented without the use of i-vectors. To compute the character N-grams, we first extracted two separate lists of characters from the essay and speech files. Then, for each language within each of the essay and speech groups, we computed the character trigrams and 4-grams, smoothed using the additive smoothing method with α = 0.1. To compute the word N-grams, two separate lists of words from the essay and speech files were extracted. These two lists were then limited to the words encountered more than once. Afterwards, we computed the word monograms and bigrams (considering out-of-vocabulary words), smoothed using the additive smoothing method with α = 0.01.
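A minimal sketch of such additive-smoothed N-gram models follows. This is our illustration rather than the team's released code: for brevity it fits one character model and one word model per language, whereas the paper sums trigram and 4-gram character scores and monogram and bigram word scores, and it ranks languages by summed log-probabilities, which is equivalent to ranking by the product-form probabilities described below.

```python
import math
from collections import Counter

class AdditiveNgramLM:
    """N-gram model with additive (Lidstone) smoothing over a fixed vocabulary."""
    def __init__(self, n, alpha, vocab):
        self.n, self.alpha = n, alpha
        self.vocab_size = len(set(vocab))
        self.ngrams, self.contexts = Counter(), Counter()

    def fit(self, sequences):
        for seq in sequences:                      # seq: list of chars or words
            for i in range(len(seq) - self.n + 1):
                gram = tuple(seq[i:i + self.n])
                self.ngrams[gram] += 1
                self.contexts[gram[:-1]] += 1

    def logprob(self, seq):
        """Smoothed log-probability of a full sequence under this model."""
        lp = 0.0
        for i in range(len(seq) - self.n + 1):
            gram = tuple(seq[i:i + self.n])
            num = self.ngrams[gram] + self.alpha
            den = self.contexts[gram[:-1]] + self.alpha * self.vocab_size
            lp += math.log(num / den)
        return lp

def predict_language(text, char_models, word_models):
    # char_models / word_models: dicts mapping language -> fitted AdditiveNgramLM
    scores = {lang: char_models[lang].logprob(list(text))
                    + word_models[lang].logprob(text.split())
              for lang in char_models}
    return max(scores, key=scores.get)             # language with the highest score
```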
In order to predict the native language of a text file, considered as an essay/speech transcription, we compute its probability under the character and word N-gram models of essay/speech for each language. The character-level probabilities are computed using formulas (2) and (3):

$$\mathrm{Prob}_{l,c\text{-}3}(C) = \prod_{i=1}^{m} P_{l,c\text{-}3}(c_i \mid c_{i-2}\, c_{i-1}) \qquad (2)$$

$$\mathrm{Prob}_{l,c\text{-}4}(C) = \prod_{i=1}^{m} P_{l,c\text{-}4}(c_i \mid c_{i-3}\, c_{i-2}\, c_{i-1}) \qquad (3)$$

in which Prob_{l,c-N}(C) stands for the character-level probability of the text under the character N-gram model for language l, m is the number of characters in the text, P_{l,c-3}(c_i | c_{i-2} c_{i-1}) represents the character trigram probability in language l of character c_i given its two previous characters, and P_{l,c-4}(c_i | c_{i-3} c_{i-2} c_{i-1}) represents the character 4-gram probability in language l of character c_i given its three previous characters. The word-level probabilities are computed using formulas (4) and (5):

$$\mathrm{Prob}_{l,w\text{-}1}(W) = \prod_{i=1}^{n} P_{l,w\text{-}1}(w_i) \qquad (4)$$

$$\mathrm{Prob}_{l,w\text{-}2}(W) = \prod_{i=1}^{n} P_{l,w\text{-}2}(w_i \mid w_{i-1}) \qquad (5)$$

in which Prob_{l,w-N}(W) stands for the word-level probability of the text under the word N-gram model for language l, n is the number of words in the text, P_{l,w-1}(w_i) represents the word monogram probability in language l of word w_i, and P_{l,w-2}(w_i | w_{i-1}) represents the word bigram probability in language l of word w_i given its previous word.

For the character-level N-grams, we used the 4-gram probability to predict the language of an essay file, while for speech files we used the summation of the trigram and 4-gram character probabilities. For both essay and speech files, we used the sum of the word-level monogram and bigram probabilities. These N-grams were chosen so that they achieved the best results on the dev dataset when trained on the train dataset. To compute the final probability of a text file for each language, we added the character-level and word-level probabilities together, and the language with the highest probability was chosen as the predicted language for the text. To test our system on the test dataset, we trained it using both the train and dev datasets.

Results

In the first test phase, we achieved a macro F1-score of 0.7609 and an overall accuracy of 0.7636 on the essay track, and a macro F1-score of 0.4530 and an overall accuracy of 0.4536 on the speech track. Tables 1 and 2 show our method's per-class performance, and Figures 1 and 2 show the confusion matrices obtained in the first test phase. In the second test phase, we tested our system using the essay, the speech, and the fusion of both essay and speech datasets. Table 3 shows the results achieved in each test; the best result was achieved in the fusion test. Table 4 shows our method's per-class performance, and Figure 3 shows the confusion matrix from the fusion result in the second test phase. All results reported in this section were officially submitted as part of the NLI Shared Task 2017.

Discussion

First of all, it is worth mentioning that all the results reported in this paper were achieved without the use of i-vectors; therefore, the comparisons between our results and the baseline results are done only for the essay, speech (transcriptions-only), and fusion of essay and speech transcription tracks. Our implemented method is useful for the native language identification of essays (outperforming the baseline F1-score of 0.710), but it does not perform well on speech transcriptions (baseline F1-score of 0.544) and, as a result, on the fusion of essays and transcriptions (baseline F1-score of 0.779). The reason for this may be that the file lengths of speech transcriptions vary much more than those of the essay files.
Because, in our method, the length of a file affects its probability, this variability can lead to the weaker result. As evident in Figures 1 to 3, most of the performance reduction was due to difficulty in telling Telugu and Hindi apart. Figure 2 shows that, in the speech track, these two languages were very often mistaken for each other; however, Figures 1 and 3 show that in the essay and fusion tracks Hindi was detected more accurately, while Telugu was often labeled as Hindi.

Table 2: Per-class performance for the speech track.

An interesting point worth mentioning is that, although our method did not yield a decent performance on the speech dataset, it achieved its best performance when implemented on the combination of both essay and speech files in the fusion phase. As explained in Section 3, our method is a rather simple one compared to SVMs and artificial neural networks. The combination of character N-grams and word N-grams used in our method is purely experimental and does not rest on a strong mathematical basis. All that being said, our method could still be used in combination with a form of supervised learning in order to be more effective and achieve a decent accuracy rate.
2,756.6
2017-09-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Stacked Autoencoder Framework of False Data Injection Attack Detection in Smart Grid

The advanced communication technology provides new monitoring and control strategies for smart grids. However, the application of information technology also increases the risk of malicious attacks. False data injection (FDI) is one kind of cyber attack that cannot be detected by the bad data detection of state estimation. In this paper, a data-driven FDI attack detection framework for smart grids with phasor measurement units (PMUs) is proposed. To enhance the detection accuracy and efficiency, a multiple-layer autoencoder algorithm is applied to abstract the hidden features of PMU measurements layer by layer in an unsupervised manner. Then, the features of the measurements and the corresponding labels are taken as inputs to learn a softmax layer. Last, the autoencoder and the softmax layer are stacked to form an FDI detection framework. The proposed method is applied to the IEEE 39-bus system, and the simulation results show that FDI attacks can be detected with higher accuracy and computational efficiency compared with other artificial intelligence algorithms.

Introduction

Phasor measurement units (PMUs) can measure voltage and current phasors directly with the help of the global positioning system synchronization clock [1,2]. Due to their ability to monitor the transient dynamics of power systems, more and more PMUs have been installed in the smart grid. Meanwhile, the rapid development of enhanced monitoring and information technology also facilitates malicious cyber attacks [3]. The large-scale integration of renewable energy resources poses a challenge for the security of system operation due to the inherent uncertainties of renewables [4][5][6]. Cyber attacks on the power system monitoring and data acquisition systems are a main avenue for attackers to seriously threaten the operating safety of power systems. Attackers launch a cyber attack by sending malicious information from the measurements to the control center. One of the most important functions of a state estimator is bad data detection, by which some malicious attacks can be detected because the value of the objective function increases dramatically when attacks are launched. However, one kind of serious cyber attack that cannot be detected by the bad data detection of state estimation is the false data injection (FDI) attack [7].

Up to now, much research has been devoted to different cyber attacks. Under the assumption that the network topology and parameters are known to the attackers, the FDI attack method was first proposed in [8]. However, it is hard for an attacker to obtain full knowledge of a power system. Aiming at this problem, [9] gives an FDI attack method based on only partial knowledge of the system topology and a subset of meter measurements. To reduce attack costs and detection risks, the minimal set of meters required to be compromised is taken as the objective function in [10]. In [11], the FDI attack is combined with other kinds of cyber attacks, forming an enhanced FDI attack method. Once an FDI attack is launched in a power system, it is hard to detect. To prevent the measurements from being attacked, the meters should be protected; many methods for minimizing the protection costs have been presented in [12,13]. At the same time, the corresponding FDI attack detection methods are becoming a hot research topic.
In [14], a reactance perturbation-based scheme is proposed to detect and identify originally covert FDI attacks on power system state estimation, enhancing the security of state estimation without significantly increasing the operational cost of the power system. In [15], an online anomaly detection algorithm that utilizes load forecasts, generation schedules, and synchrophasor data to detect measurement anomalies is given. In [16], the feasibility and limitations of adopting the proactive false data detection approach to thwart FDI attacks on power grid state estimation are studied, and a framework to detect FDI attacks on power grid state estimation using the proactive false data detection approach is proposed.

With the rapid development of artificial intelligence technologies, research on data-driven detection methods is increasing dramatically. Principal component analysis is used to analyze FDI attacks in a real-time environment [17], providing a more accurate and sensitive response than previous FDI detection techniques. In [18], a supervised learning method using labeled data, the support vector machine-based FDI attack detection method, is proposed; principal component analysis is used to reduce the dimension of the data to be processed, which leads to lower computational complexity. The use of deep learning for solving pattern classification problems has proven to be effective in engineering [19]. Under FDI attack conditions, spatial and temporal data correlations may deviate from those in normal operating conditions. Based on this characteristic, a discrete wavelet transform algorithm and deep neural network techniques are used to construct an intelligent system for AC FDI attack detection, proposed in [20]. In [21], the deep learning technique is applied to recognize the behavior features of FDI attacks from historical measurement data and to employ the captured features to detect FDI attacks in real time. Although deep learning is an effective method to detect FDI attacks, drawbacks such as heavy computation loads and poor generalization with a huge number of inputs restrict its further application.

Autoencoders [22,23] are one of the effective methods to cope with these problems; they can learn compressed features in an unsupervised manner and are attracting more and more interest from researchers [24,25]. However, the effectiveness of an autoencoder decreases when the number of hidden units is larger than the dimension of the input data. To address this problem, sparse autoencoders, in which sparsity is integrated into the autoencoder model to learn more efficient sparse features, have been developed [26]. In [27], a denoising autoencoder is used in wind turbine gearbox fault diagnosis, which can learn useful features from raw inputs by denoising. Due to its ability to abstract robust representations from noisy data, the denoising autoencoder has been applied in many fields in recent years [27,28]. In [29], autoencoders are used to reduce dimension and extract features from measurement datasets; the autoencoders are further integrated into an advanced generative adversarial network framework, which successfully detects anomalies under FDI attacks with only a few labeled measurement samples. However, a single-layer autoencoder cannot abstract the entire representation of the original data. Aiming at this problem, a stacked autoencoder made up of multiple autoencoders is used.
The output of the first layer of the autoencoder is taken as the input of the second layer. In this paper, a stacked autoencoder-based FDI attack detection framework for the smart grid is proposed. The main contributions are as follows:

(1) A data-driven FDI attack detection framework is proposed. The topology errors and bad data are detected by state estimation, while the hidden FDI attacks in the measurements that cannot be identified by state estimation are detected by the intelligent algorithm.

(2) The stacked autoencoder is applied to detect the FDI attacks. Compared with other methods, the performance of the stacked autoencoder is better when the numbers of ordinary and attack samples differ widely.

(3) The proposed method is applied to the IEEE 39-bus testing system. Its performance is better than that of traditional deep learning methods, making it suitable for practical applications.

The rest of this paper is organized as follows. Section 2 establishes the power system linear state estimation model and presents the bad data detection method. Section 3 gives the basic principle of FDI attacks. In Section 4, the stacked autoencoder-based FDI attack detection method is proposed. To evaluate the performance of the proposed FDI attack detection method, case studies are carried out under different conditions in Section 5. Finally, Section 6 concludes this paper.

Linear State Estimation Model

With the rapid development of PMUs, it is possible to perform linear state estimation based on phasor measurements. The linear state estimation can be solved directly without iteration; as a result, its calculation burden is lighter than that of nonlinear estimation. The measurements of the linear state estimation include the real and imaginary parts of bus voltage and current phasors, which can be measured directly. In the linear state estimation, the real and imaginary parts of the bus voltages are taken as the states to be estimated. The relationships between the branch current measurements and the states are derived from the π equivalent of the transmission lines:

$$I_{ij,r} = (g_{ij}+g_{i0})\,e_i - (b_{ij}+b_{i0})\,f_i - g_{ij}\,e_j + b_{ij}\,f_j, \quad I_{ij,i} = (b_{ij}+b_{i0})\,e_i + (g_{ij}+g_{i0})\,f_i - b_{ij}\,e_j - g_{ij}\,f_j, \qquad (1)$$

where I_{ij,r} and I_{ij,i} are the real and imaginary parts of the branch current phasor going from bus i to bus j, respectively; g_{ij} and b_{ij} are the conductance and susceptance of branch i-j, respectively; g_{i0} and b_{i0} are the conductance and susceptance of the shunt branch at bus i, respectively; and e_i and f_i are the real and imaginary parts of the voltage phasor of bus i, respectively. The matrix form of (1) is

$$\begin{bmatrix} I_{ij,r} \\ I_{ij,i} \end{bmatrix} = \begin{bmatrix} g_{ij}+g_{i0} & -(b_{ij}+b_{i0}) & -g_{ij} & b_{ij} \\ b_{ij}+b_{i0} & g_{ij}+g_{i0} & -b_{ij} & -g_{ij} \end{bmatrix} \begin{bmatrix} e_i \\ f_i \\ e_j \\ f_j \end{bmatrix}. \qquad (2)$$

Equation (2) can be rewritten as

$$z_B = H_B\, x, \qquad x = [\,e_1\ f_1\ e_2\ f_2\ \ldots\ e_n\ f_n\,]^T, \qquad (3)$$

where z_B is the vector of the branch current measurements and x is the vector of states. In addition to the branch current measurements, the injected currents and bus voltages can also be measured by PMUs. The measurement equation of the linear state estimation is

$$\begin{bmatrix} z_U \\ z_B \\ z_{IN} \end{bmatrix} = \begin{bmatrix} I_{2m\times 2n} \\ H_B \\ Y_M \end{bmatrix} x, \qquad (4)$$

where z_U and z_{IN} are the phasor measurement vectors of the bus voltages and injected currents, respectively; I_{2m×2n} is the measurement matrix of the bus voltages; m and n are the number of buses equipped with PMUs and the total number of buses, respectively; and Y_M is the injected current measurement matrix. Equation (4) can be rewritten as

$$z = Hx + v, \qquad (5)$$

where z is the measurement vector and v is the measurement error, which satisfies a Gaussian distribution with zero mean and variance σ². Equation (5) is linear, so weighted least squares can be used to estimate the states. The objective function is to minimize the sum of weighted variances:

$$J(x) = (z - Hx)^T R\,(z - Hx), \qquad (6)$$

where J is the objective function and R is a diagonal matrix whose ith diagonal element is 1/σ_i², with σ_i the variance of the ith measurement. The estimated states are

$$\hat{x} = (H^T R H)^{-1} H^T R\, z, \qquad (7)$$

where x̂ denotes the estimated states.
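A minimal numpy sketch of the weighted least-squares estimator of equations (5)-(7) follows; H, z, and the standard deviations are illustrative placeholders rather than values from the study.

```python
import numpy as np

def wls_estimate(H, z, sigma):
    """x_hat = (H^T R H)^(-1) H^T R z, with R = diag(1/sigma_i^2)."""
    R = np.diag(1.0 / sigma**2)
    G = H.T @ R @ H                      # gain matrix
    x_hat = np.linalg.solve(G, H.T @ R @ z)
    residual = z - H @ x_hat
    J = residual @ R @ residual          # objective value, used for bad data detection
    return x_hat, residual, J
```

The returned objective value J is the quantity compared against the threshold ε in the bad data test described next.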
Bad Data Detection. Under the normal condition (no bad data in the measurements), the sum of the estimated measurement variances stays below a given threshold ε; if the measurements contain bad data, the threshold ε is exceeded. The sum of the estimated measurement variances is given as

$$J(\hat{x}) = \hat{r}^T R\, \hat{r}, \qquad \hat{r} = z - H\hat{x}, \qquad (8)$$

where r̂ is the estimated measurement residual. Bad data can then be detected by the following judgement:

$$J(\hat{x}) \le \varepsilon \;\Rightarrow\; \text{no bad data}; \qquad J(\hat{x}) > \varepsilon \;\Rightarrow\; \text{bad data present}. \qquad (9)$$

If the measurements contain bad data, the suspect measurements are removed one by one and the states are estimated again until all bad data are removed.

False Data Injection Attacks

To defeat the above bad data detection, an FDI attack constructs an attack vector, added to the measurements, that bypasses the bad data detection while making the estimated states deviate seriously from their true values. Assuming that the attackers can obtain the system topology and parameters, the FDI attack is formulated as

$$z_a = z + a, \qquad (10)$$

where z_a is the attacked measurement and a is the attack vector. If a is not designed carefully, the sum of the estimated measurement variances exceeds the threshold and the attack is detected. As a result, the attacker must find a vector a that satisfies the following constraint:

$$\hat{r}_a = z_a - H\hat{x}_c = z + a - H(\hat{x} + c) = \hat{r} \quad \text{when } a = Hc, \qquad (11)$$

where r̂_a is the estimated measurement residual under the attack condition, x̂_c = x̂ + c is the estimated state under the attack condition, and c is the estimation deviation caused by the attacked measurements. It can be seen from (11) that such an attack will cause serious consequences for the power system while remaining undetected. The attacked measurements satisfy all the constraints of normal measurements, which can be presented as

$$z_a = H\hat{x}_c + \hat{r}_a = H(\hat{x} + c) + \hat{r}. \qquad (12)$$

Equation (12) shows that if the attacked measurement z_a satisfies constraint (5), the estimated states will deviate from the actual values. This characteristic makes FDI attacks hard to detect using traditional methods. In this paper, the stacked autoencoder is proposed to abstract the intrinsic features of the attacked measurements.
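A small numerical illustration of equations (10)-(12), with a random H standing in for the real measurement matrix; all dimensions and values are illustrative. It checks that an attack vector of the form a = Hc leaves the least-squares residual, and hence the detector of (9), unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_state = 20, 8                   # illustrative dimensions
H = rng.normal(size=(n_meas, n_state))
x_true = rng.normal(size=n_state)
z = H @ x_true + 0.01 * rng.normal(size=n_meas)

c = rng.uniform(-2, 2, size=n_state)      # chosen state deviation
a = H @ c                                 # stealthy attack vector of eq. (11)
z_a = z + a                               # attacked measurements, eq. (10)

def residual(Hm, zm):
    # identity weighting for brevity; the WLS case behaves the same way
    x_hat, *_ = np.linalg.lstsq(Hm, zm, rcond=None)
    return zm - Hm @ x_hat

print(np.allclose(residual(H, z), residual(H, z_a)))   # True: the attack is hidden
```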
Stacked Autoencoder. The autoencoder is a typical unsupervised learning neural network whose inputs are a set of unlabeled data. An autoencoder has two parts: an encoder and a decoder. The encoder produces a reduced-dimensional feature representation, which is taken as the input of the decoder; the decoder tries to reconstruct the original input from this reduced-dimensional feature. The structure of the autoencoder is shown in Figure 1. The measurement vector z is taken as the input of the autoencoder; y is the reduced-dimensional feature of z abstracted by the encoder and serves as the decoder's input; the output ẑ is the reconstruction of the original input z. The autoencoder tries to copy its input to its output through two transformations:

y = f(W₁ z + b₁),  ẑ = g(W₂ y + b₂),   (13)

where f and g are the activation functions of the encoder and decoder, respectively, W₁ and W₂ are weight matrices, and b₁ and b₂ are bias vectors. W₁, W₂, b₁, and b₂ are obtained by training the autoencoder on the unlabeled data z. Notably, a trained autoencoder can reconstruct different original inputs accordingly, which means that the feature representation y retains the information of the original input z in a lower-dimensional form. As a result, the objective of the autoencoder is to minimize the gap between the output ẑ and the input z. Thus, during training, the reconstruction loss function is

J_a = ‖z − ẑ‖²,   (14)

where J_a is the loss function of the autoencoder. In our FDI attack detection, once an autoencoder is trained, its output layer is discarded; only the hidden layer of the encoder is used to abstract features from the inputs. However, a single encoder has limited capability. To address this, the stacked autoencoder is used; its structure is shown in Figure 2. The outputs of one encoder are taken as the inputs of the next encoder. In this way, several encoders are stacked together to form a multilayer autoencoder, and the features of the original data are abstracted layer by layer. The stacked autoencoder is trained by layer-wise unsupervised pretraining: encoder 1 is trained on the original data z using (14); the output y₁ of encoder 1 is then used to train encoder 2; this process continues until the last encoder is trained. The output dimension of each encoder is smaller than that of the previous one. Finally, a softmax layer is trained by supervised learning, using the output of the last encoder as its input. The softmax function maps its inputs to a probability distribution with values between 0 and 1, and the softmax layer is commonly used as the output layer for classification problems. The probability function of the softmax layer is

φ(s)_l = exp(s_l) / Σ_{c=1}^{C} exp(s_c),   (15)

where φ is the probability function of the softmax layer, s is the input of the softmax layer, s_l is the lth input element, and C is the total number of input elements. The softmax output elements sum to 1, and the value of each element represents the probability of the corresponding class.

Framework of False Data Injection Attack Detection. The flowchart of the proposed FDI attack detection is shown in Figure 3. After the measurement z_k is obtained, linear state estimation is performed first. The value of the objective function is then used to detect bad data: if the value exceeds the threshold, the bad data are removed and the state estimation is repeated until all bad data are eliminated. Because FDI attacks can bypass bad data detection, the proposed FDI attack detection is applied in the next step. If an attack is detected, the attacked measurements should be identified, which is beyond the scope of this paper.
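The paper does not specify an implementation framework, so the following is a minimal PyTorch sketch of the pipeline just described: two autoencoders pretrained layer-wise on unlabeled measurements, followed by a supervised softmax layer on the last encoder's features. Layer sizes, epoch counts, and the random stand-in data are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Dimensions are illustrative; the paper's Figure 4 gives the actual layer sizes.
n_meas, h1, h2 = 64, 32, 16

class AE(nn.Module):
    """One autoencoder: encoder f and decoder g of eq. (13)."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(d_hid, d_in), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

def pretrain(ae, data, epochs=50, lr=1e-3):
    """Unsupervised pretraining: minimise the reconstruction loss, eq. (14)."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(ae(data), data)
        loss.backward()
        opt.step()
    return ae

# Toy data standing in for normal/attacked measurement vectors and labels.
z = torch.rand(1000, n_meas)
labels = torch.randint(0, 2, (1000,))

ae1 = pretrain(AE(n_meas, h1), z)
y1 = ae1.enc(z).detach()          # output of encoder 1 feeds encoder 2
ae2 = pretrain(AE(h1, h2), y1)
y2 = ae2.enc(y1).detach()

# Supervised softmax output layer on the last encoder's features, eq. (15).
clf = nn.Linear(h2, 2)            # CrossEntropyLoss applies softmax internally
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(clf(y2), labels)
    loss.backward()
    opt.step()

# Detection: class 1 = attack.
probs = torch.softmax(clf(ae2.enc(ae1.enc(z))), dim=1)
print(probs[:3])
```

In practice, the pretrained encoders and the softmax layer are often fine-tuned jointly after this layer-wise stage.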
Descriptions of the Testing System and Data. To verify the validity of the proposed FDI attack detection method, the IEEE 39-bus test system [16,19] is used in this study. The voltage and current phasors measured by PMUs are taken as the inputs of the FDI attack detector. The power system states are obtained by power flow calculation using MATPOWER [30]. To simulate practical operating conditions, the generator and load powers are created by Monte Carlo simulation. The simulated values are taken as true values, while the measured values are generated by adding random numbers with the specified distribution to the true values. The measurement errors of amplitudes and angles are 2% and 2°, respectively. The attacker is assumed to choose 5 states to attack, with the estimation deviation c ranging from −2 to 2. The attack value a = Hc is added to the measurement z to form z_a. In practice, attacked measurements are far rarer than normal measurements; in this simulation, the training set includes 5000 normal measurement samples and 500 attacked samples, and the testing set includes 3000 normal samples and 300 attacked samples.

In this study, two encoders and a softmax layer are stacked to form the stacked autoencoder-based FDI attack detection framework. The overall structure, together with the input and output dimensions of the stacked encoders, is shown in Figure 4.

Performance of the Method. To evaluate the performance of the detection method, the confusion matrix, defined in Figure 5, is used to analyze the detection results quantitatively. True positives (TP) are actual attacks correctly classified as attacks; true negatives (TN) are actual normal measurements correctly classified as non-attacks; false positives (FP) are actual normal measurements incorrectly classified as attacks; and false negatives (FN) are actual attacks incorrectly classified as non-attacks. The following three indexes are used to evaluate the proposed method:

Acc = (TP + TN) / (TP + TN + FP + FN),  Pre = TP / (TP + FP),  Rec = TP / (TP + FN),   (16)

where Acc, Pre, and Rec are the accuracy, precision, and recall, respectively. Acc represents the overall performance of the method, Rec evaluates the performance of attack detection, and Pre evaluates the probability that normal measurements are not flagged as attacks. The confusion matrix of the detection results is shown in Figure 6. All 300 attacks are detected, and the remaining samples are correctly classified as normal measurements; the values of Acc, Pre, and Rec are all 100%.

Comparison with Other Methods. Three other detection methods, namely the multilayer perceptron (MLP), support vector machine (SVM), and deep neural network (DNN), are applied in the simulation. The MLP has 15 neurons in its hidden layer; if its output is smaller than 0.5, the sample is classified as normal, otherwise as attacked. The DNN has 4 hidden layers with 150 units each. The confusion matrices of the three methods are shown in Figure 7. The TN counts of all three methods are 3000, meaning that all normal measurements are correctly detected. However, the 300 attacks are not detected accurately; their detection performance is evaluated by the Rec index shown in Table 1. Among the three methods, DNN performs best, but it is still worse than the proposed detection method.
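As a rough illustration of how such baselines might be set up (the paper gives layer sizes but no implementation details), the sketch below uses scikit-learn stand-ins with the stated hidden-layer configurations. The feature matrices are random placeholders with the paper's class imbalance, so the printed scores are meaningless except as a scaffold.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Stand-in data with the paper's class imbalance (features are random here).
X_train = rng.normal(size=(5500, 64)); y_train = np.r_[np.zeros(5000), np.ones(500)]
X_test  = rng.normal(size=(3300, 64)); y_test  = np.r_[np.zeros(3000), np.ones(300)]

baselines = {
    "MLP (15 hidden units)": MLPClassifier(hidden_layer_sizes=(15,), max_iter=500),
    "SVM": SVC(),
    "DNN (4 x 150 units)": MLPClassifier(hidden_layer_sizes=(150,) * 4, max_iter=500),
}
for name, model in baselines.items():
    model.fit(X_train, y_train)
    # Rec of eq. (16); with random features this will be near zero.
    print(name, "Rec =", recall_score(y_test, model.predict(X_test)))
```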
Sensitivity Analysis. In this section, the influence of the following factors on detection performance is studied:

(1) The number of neurons in the encoders: three cases with different neuron numbers are considered, and their confusion matrices are shown in Figure 8. Twenty attacks go undetected in Case 1, meaning that the performance of the proposed method degrades when fewer neurons are used. In Case 3, 16 attacks go undetected; the reason is that encoder 1 has only 20 neurons, which cannot abstract the full features of the measurements even though encoder 2 has 200 neurons.

(2) The number of encoders: the influence of the number of encoders stacked in the detection algorithm is studied with three cases, whose confusion matrices are shown in Figure 9. Nine attacks go undetected in Case 1 because there is only one encoder and the features cannot be fully abstracted. Although Case 3 uses 3 encoders, 7 attacks go undetected because each encoder has fewer neurons.

(3) The attack proportion of the training set: in practice, attacked samples are far rarer than normal samples, so the influence of the attack proportion in the training set is also studied. The detection framework of Figure 4 is applied, and the testing samples comprise 3000 normal measurements and 300 attacks. The following training sets are considered:

Case 1: 7000 normal samples; 500 attacks
Case 2: 9000 normal samples; 500 attacks
Case 3: 9500 normal samples; 200 attacks

The confusion matrices are shown in Figure 10. As the proportion of attack samples decreases, more attacks go undetected: the proposed method is sensitive to the proportion of attacks in the training set. The reason is that the features of FDI attacks are hard for the encoder to abstract when the attack proportion is low.

Conclusion. In this paper, a stacked autoencoder-based FDI attack detection framework is proposed and applied to the IEEE 39-bus test system under different conditions. The confusion matrix and three indexes are used to evaluate the performance of the detection methods. The simulation results show that the number of neurons in the encoders influences detection performance: with too few neurons, the features cannot be fully abstracted, resulting in low Rec values. The number of encoders is another factor influencing detection performance: with too few encoders, some attacks go undetected. Notably, if the neurons are too few, detection performance still degrades even when many encoders are stacked. The proposed detection method is also sensitive to the attack sample proportion in the training set: if the training set contains too few attacks, the features of FDI attacks cannot be fully abstracted and detection performance drops. Future work on FDI attack detection based on stacked autoencoders includes methods for determining the optimal numbers of encoders and neurons, denoising capability of the detectors, robustness to mislabeled samples, and detection with unbalanced data. Another interesting direction is to extend this work to detecting cyber attacks in integrated energy systems [31][32][33][34][35][36].

Data Availability. The IEEE 39-bus system data used to support the findings of this study are included within the article.
The Great Majority of Homologous Recombination Repair-Deficient Tumors Are Accounted for by Established Causes

Background: Gene-agnostic genomic biomarkers were recently developed to identify homologous recombination deficiency (HRD) tumors that are likely to respond to treatment with PARP inhibitors. Two machine-learning algorithms that predict HRD status, CHORD and HRDetect, utilize various HRD-associated features extracted from whole-genome sequencing (WGS) data and show high sensitivity in detecting patients with BRCA1/2 biallelic inactivation in all cancer types. When using only DNA mutation data for the detection of potential causes of HRD, both HRDetect and CHORD find that 30-40% of cases classified as HRD are due to unknown causes. Here, we examined the impact of tumor-specific thresholds and of measuring promoter methylation of BRCA1 and RAD51C on the unexplained proportions of HRD cases across various tumor types. Methods: We gathered published CHORD and HRDetect probability scores for 828 samples from breast, ovarian, and pancreatic cancer from previous studies, as well as evidence of biallelic inactivation (by either DNA alterations or promoter methylation) in HR-related genes. ROC curve analysis evaluated the performance of each classifier in specific cancer types. Tenfold nested cross-validation was used to find the optimal threshold values of HRDetect and CHORD for classifying HR-deficient samples within each cancer type. Results: With the universal threshold, HRDetect had higher sensitivity than CHORD in detecting biallelic inactivation of BRCA1/2 and yielded a higher proportion of unexplained cases. When promoter methylation was excluded, the proportion of unexplained cases in ovarian carcinoma increased from 26.8 to 48.8% for HRDetect and from 14.7 to 41.2% for CHORD; a similar increase was observed in breast cancer. Applying cancer-type-specific thresholds led to similar sensitivity and specificity for both methods. The cancer-type-specific thresholds for HRDetect reduced the number of unexplained cases from 21 to 12.3% without reducing the 96% sensitivity to known events. For CHORD, unexplained cases were reduced from 10 to 9% while sensitivity increased from 85.3 to 93.9%. Conclusion: These results suggest that WGS-based HRD classifiers should be adjusted for tumor type. When applied, only ~10% of breast, ovarian, and pancreatic cancer cases are not explained by known events in our dataset.
INTRODUCTION. The recognition of biallelic germline or somatic mutations in BRCA1/2 is, to date, one of the most clinically relevant and frequently used genetic biomarkers of homologous recombination repair deficiency (HRD) in the clinic (Dougherty et al., 2017; Hoppe et al., 2018). Patients harboring germline pathogenic variants (GPVs) in BRCA1/2 have a higher risk of developing breast and/or ovarian cancer (Mersch et al., 2015). Patients with germline or somatic mutations derive enhanced benefit from targeted therapies such as platinum-based chemotherapy or poly(ADP-ribose) polymerase inhibitors (PARPi) (Hennessy et al., 2010). The terms "BRCAness" and "HRD phenotype" refer to tumors with clinicopathological and molecular characteristics similar to those of tumors with BRCA1 and BRCA2 GPVs (Lord and Ashworth, 2016). Gene alterations in other homologous recombination-associated genes, such as PALB2 (Tischkowitz et al., 2007; Thomas and Brown, 2015) and RAD51C/D (Kondrashova et al., 2017; Polak et al., 2017), have been linked to the HRD phenotype. Inactivation of BRCA1 and RAD51C through promoter methylation has also been found to result in HRD tumors (Ruscito et al., 2014; Polak et al., 2017; Staaf et al., 2019), and these tumors likewise show increased sensitivity to PARPi and platinum (Kondrashova et al., 2018). Advances in tumor sequencing have enabled the development of methods that identify HRD tumors independently of identifying the cause. Cancer genomes of patients with BRCA1/2 mutations are enriched for particular mutational patterns as well as a high number of distinct LOH regions. In addition, BRCA1/2-deficient tumors harbor small deletions with ≥4 bp of flanking homology. Several structural variations are typical of BRCA1/2-deficient cancer genomes, including deletions of up to 100 kb, unclustered tandem duplications of ~10 kb associated with BRCA1 mutations (Willis et al., 2017), and deletions of 1-10 kb in cancers of patients with BRCA2 mutations (Degasperi et al., 2020). A specific single-base substitution signature (derived from single-nucleotide variants), referred to as COSMIC signature 3, is strongly associated with BRCA1/2 deficiency (Polak et al., 2017).
Whole-genome sequencing (WGS) data enable the detection of the different genomic alterations (base substitutions, indels, rearrangements, and copy number aberrations) that result from homologous recombination deficiency. Two HRD classifiers are based on features extracted from WGS data. HRDetect (Davies et al., 2017) is a weighted logistic regression model based on six input features: the proportion of small deletions with microhomology at the breakpoint junction, an HRD index based on genomic scars, COSMIC signatures 3 and 8, and rearrangement signatures 3 and 5. This model was trained on BRCA1/2-null breast cancers. The Classifier of HOmologous Recombination Deficiency (CHORD) (Nguyen et al., 2020) is a random forest model that uses relative counts of somatic mutation contexts from WGS data. Both classifiers label >90% of tumors with biallelic inactivation of BRCA1/2 via DNA mutation as HRD and have generally high accuracy as measured by an AUC (area under the curve) of ~0.98 (Davies et al., 2017; Nguyen et al., 2020). Mutations in PALB2, RAD51C/D, and BARD1 are associated with HRD signatures (Polak et al., 2017; Matis et al., 2021) and account for a small fraction of non-BRCA1/2-mutated HRD cases (Golan et al., 2021). Nguyen et al. (2020), in the paper that introduced CHORD, reported that a substantial proportion (~40%) of cancer samples identified as HR-deficient did not harbor any mutation in known HR-related genes (Nguyen et al., 2020), while Davies et al. (2017) reported more than 30% of such cases. These findings indicate that conventional testing for mutations in HR genes will miss a considerable number of HRD tumors in which HRD has an unknown cause. The sources of this high proportion of unexplained cases could be technical or biological. Both HRDetect and CHORD produce continuous scores designed to determine whether a tumor exhibits HRD, and both use a universal threshold that was not optimized for specific cancer types: the HRDetect threshold was developed on a breast cancer dataset but has been applied to other cancer types, and the CHORD study used a 0.5 cut-off. In addition, BRCA1/RAD51C promoter methylation is not measured in most WGS studies, or is measured in only a subset of samples. Here, we aim to examine the range of unexplained proportions of HRD samples across three tumor types in which HRD is frequently reported (breast, ovarian, and pancreatic cancer) and to determine the impact of cancer-type-specific thresholds, as well as of BRCA1/RAD51C promoter methylation, for the available subset of cases. To do so, we used published CHORD and HRDetect scores for these three cancers (Davies et al., 2017; Degasperi et al., 2020; Nguyen et al., 2020), as well as published HRDetect scores for pancreatic cancer (Golan et al., 2021) and CHORD scores that we calculated. For ovarian and breast cancers, we limited our study to the subset of patients with available BRCA1/RAD51C promoter methylation status. We determined the proportion of unexplained cases when using cancer-type-specific thresholds (for pancreatic, ovarian, and breast cancer) and promoter methylation status (for ovarian and breast cancers).
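The bookkeeping behind these proportions is simple; the sketch below shows one way to compute the unexplained fraction of HRD calls at a given threshold. The column names and values are hypothetical, not taken from the study's tables.

```python
import pandas as pd

# Hypothetical table: one row per tumour, with a classifier probability score
# and a flag for known biallelic inactivation (column names are made up).
df = pd.DataFrame({
    "score":     [0.95, 0.80, 0.10, 0.99, 0.65],
    "biallelic": [True, False, False, True, False],
})

def unexplained_fraction(df, threshold):
    """Fraction of HRD calls at this threshold lacking a known biallelic cause."""
    hrd = df[df["score"] >= threshold]
    return (~hrd["biallelic"]).mean() if len(hrd) else float("nan")

print(unexplained_fraction(df, 0.7))   # e.g. HRDetect's default cut-off
```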
MATERIALS AND METHODS. Datasets. Studies that performed homologous recombination deficiency detection analysis on the same samples using both the CHORD (Nguyen et al., 2020) and HRDetect (Davies et al., 2017; Degasperi et al., 2020) classifiers were selected. From the selected studies, we assembled the largest unique intersection of sample names containing HR-deficiency prediction scores from both classifiers. The dataset was divided into three major groups of HR-related cancers, namely breast, pancreatic, and ovarian (Supplementary Table S1), while all other cancer types were placed in a separate fourth category (Supplementary Table S2) owing to their low numbers of biallelic events and of samples labeled as HRD. We included only breast and ovarian cancer samples with verified BRCA1/2 promoter methylation status (Davies et al., 2017). Because promoter methylation of HR-related genes is considered an important underlying cause of HRD in tumors, we included only samples with validated methylation status in the downstream analysis. For the pancreatic dataset, we used 391 pancreatic samples whose data, alongside HRDetect classifier results, were provided by Golan et al. (2021); for these samples, we ran the CHORD classifier with default settings as previously described (Nguyen et al., 2020). The final combined dataset consisted of discrete datasets of 1) 371 breast cancers, 2) 66 ovarian cancers, 3) 391 pancreatic cancers, and 4) 1,238 samples belonging to other cancer types. For each sample in the selected studies, we extracted the available methylation status of the BRCA1/2 genes for the breast and ovarian cancer samples, alongside biallelic and monoallelic alterations in HR-related genes for all cancer types. We considered germline biallelic inactivation to be present when a germline pathogenic variant (GPV) was the first hit and the second hit was loss of heterozygosity (LOH) or a somatic mutation. Somatic biallelic inactivation was considered present when at least one hit was a somatic mutation, and promoter hypermethylation biallelic inactivation was defined as one hit being promoter methylation and the other a somatic mutation or LOH. Monoallelic inactivation was considered present when only one allele carried a mutation other than LOH. Samples carrying biallelic inactivation in HR-related genes were considered true HR-deficient tumors. A detailed summary of all biallelic and monoallelic alterations in the analyzed HR-related genes, alongside the sources of this information, can be found in Supplementary Tables S1 and S2.

Assessment of the accuracy of the CHORD and HRDetect classifiers through ROC and precision-recall curves. To assess the accuracy of each classifier for each of the four major cancer groups, we calculated receiver operating characteristic (ROC) curves using the R function roc from package pROC (Robin et al., 2011) and precision-recall (PR) curves using the R function pr.curve from package PRROC (Grau et al., 2015), comparing CHORD and HRDetect probability scores against samples carrying biallelic inactivation in HR-related genes. Bootstrapping (2,000 samples) was performed to estimate the 95% CI of the area under the ROC curve (AUC). Additionally, we compared the performance of the classifiers when no methylation data are available for breast and ovarian cancers to highlight the importance of promoter hypermethylation in HR-deficient tumors.
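The study performed this analysis in R with pROC/PRROC; as a rough Python analogue, a percentile-bootstrap confidence interval for the AUC could look like the following sketch (toy labels and scores; 2,000 resamples as in the paper).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the AUC (a Python stand-in for pROC's ci)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:   # need both classes in a resample
            continue
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, scores), (lo, hi)

# Toy labels (biallelic evidence) and classifier scores, for illustration only.
y = rng.integers(0, 2, 100)
s = y * 0.6 + rng.random(100) * 0.5
print(bootstrap_auc_ci(y, s))
```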
Determining the optimal threshold. We applied a tenfold nested cross-validation approach to find the optimal threshold values of HRDetect and CHORD for classifying samples as HR-deficient or HR-proficient within breast, pancreatic, and ovarian cancers. The inner tenfolds were used to calculate the average optimal threshold, while the outer folds of the cross-validation, each containing 10% of the data as a test set, were used to assess the accuracy of HR-deficient sample classification. The reported optimal threshold for each classifier was calculated as the mean of the averaged thresholds across the outer loops for each cancer type.

Statistical analysis. Probabilistic scores from the CHORD and HRDetect classifiers were compared with Spearman correlation (Spearman, 1987) using the R functions cor() and cor.test(). The one-sided partially overlapping samples z-test for dichotomous variables, implemented in the R function Prop.test from package Partiallyoverlapping (Derrick, 2018), was used to determine statistically significant differences between CHORD and HRDetect in the proportions of samples classified as HRD with and without evidence. A one-sided Fisher's exact test, using the R function pairwise_fisher_test from package rstatix (Kassambara, 2021), was used to test differences in explained and unexplained classifications between cancer types within each classifier. For comparison of ROC curves, we used DeLong's test (two-sided, paired samples) for two correlated ROC curves, implemented in the R function roc.test from package pROC (Robin et al., 2011). Corrections for multiple hypothesis testing were made using the Bonferroni method, and adjusted p-values are reported. All analyses were carried out in the R statistical programming language, version 4.1.0.
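One plausible reading of this nested procedure, sketched in Python with Youden's J as the "optimal" cut-off criterion (the paper does not state which criterion was used), is the following; labels and scores are toy stand-ins.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_curve, accuracy_score

rng = np.random.default_rng(0)

def youden_threshold(y, scores):
    """One common 'optimal' cut-off: maximise sensitivity + specificity."""
    fpr, tpr, thr = roc_curve(y, scores)
    return thr[np.argmax(tpr - fpr)]

def nested_threshold(y, scores, k=10):
    outer = StratifiedKFold(k, shuffle=True, random_state=0)
    outer_thresholds, accuracies = [], []
    for train_idx, test_idx in outer.split(scores.reshape(-1, 1), y):
        inner = StratifiedKFold(k, shuffle=True, random_state=1)
        # Average the per-fold optimal thresholds over the inner folds.
        thr = np.mean([
            youden_threshold(y[train_idx][i], scores[train_idx][i])
            for i, _ in inner.split(scores[train_idx].reshape(-1, 1), y[train_idx])
        ])
        outer_thresholds.append(thr)
        accuracies.append(accuracy_score(y[test_idx], scores[test_idx] >= thr))
    # Reported threshold: mean of the averaged thresholds across outer loops.
    return np.mean(outer_thresholds), np.mean(accuracies)

y = rng.integers(0, 2, 300)
s = np.clip(y * 0.5 + rng.normal(0.3, 0.25, 300), 0, 1)  # toy HRD-like scores
print(nested_threshold(y, s))
```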
Large Proportion of Homologous Recombination Repair Deficiency-Classified Tumors Is Explained by Biallelic Inactivation of BRCA1/2. To investigate the performance of the CHORD and HRDetect classifiers on the same samples, we utilized the classifiers' results from previous studies (Davies et al., 2017; Degasperi et al., 2020; Nguyen et al., 2020; Golan et al., 2021) across 2,066 samples from 10 cancer types. Here, we focused on comparing HRDetect and CHORD scores for a total of 828 tumors from the three cancers associated with HR deficiency: breast (n = 371), pancreatic (n = 391), and ovarian (n = 66) (Figure 1A); the remaining seven cancers are shown in the supplementary material (Supplementary Figure S2, Supplementary Table S2). Comparing the probability of each tumor possessing HRD sample by sample, CHORD and HRDetect give similar probability scores (Supplementary Figure S1; Spearman correlation 0.67). Of the 828 samples belonging to the three main HRD-related cancers, biallelic alterations (somatic, germline, deep deletion, or promoter hypermethylation) of HR-related genes were found in 163 samples. As expected, samples with higher HRD probability scores (from both classifiers) had a higher number of biallelic inactivation events in BRCA1/2 than samples with lower scores (Figure 1B). Somatic homozygous deletions, labeled as deep deletions, were observed in BRCA2 in a single breast cancer patient and, in pancreatic cancer, in RAD51B (n = 2), RAD51C (n = 2), and XRCC2 (n = 1) (Supplementary Table S1). Among the other cancer types, we observed four prostate samples with high HRD scores from both classifiers containing biallelic inactivation of BRCA1/2, and one biliary sample with a germline BRCA1 alteration in which both HRD scores were above the default threshold (Supplementary Figure S1). Owing to the lack of evidence for HR deficiency in other cancers and the smaller number of identified HRD samples, the other cancers were excluded from the downstream analysis, and we benchmarked results only for breast, ovarian, and pancreatic cancer samples.

We proceeded to compare the fraction of HRD-classified cases explained by the different types of biallelic inactivation of BRCA1/2 according to HRDetect and CHORD. The most abundant biallelic inactivation patterns in the dataset were gBRCA1/2 (n = 52 + 54) and sBRCA1/2 mutations (n = 12 + 11) (Supplementary Table S1). BRCA1 promoter methylation status was available only for breast and ovarian cancers (n = 23), and it accounted for a substantial share of the total biallelic events (23 out of 175, 12.8% (95% CI [8.7-19.3])). Nearly all cases with known biallelic inactivation (157 out of 163, 96.4% (95% CI [91.8-98.5])) were tumors scoring above the default threshold of at least one of the classifiers. The largest proportion of unexplained HRD cases was observed in ovarian cancer (14.7%, 95% CI [5.5-31.8]) using CHORD and in pancreatic samples (28.2%, 95% CI [59.7-81.6]) using HRDetect (Table 1; Figure 2). Larger fractions of unexplained cases were obtained with HRDetect than with CHORD at the default threshold (one-sided z-test for partially overlapping samples, p-value < 10⁻¹³) (Figure 2), ranging from around 10 to 28% depending on the cancer type. Looking at each classifier more closely, the largest difference in unexplained HRD cases for HRDetect is between breast and pancreatic cancers (one-sided Fisher's exact test, p-value = 0.0375). Multiple biallelic inactivation events can occur in HR genes in the same patient; for instance, one ovarian sample contained an sBRCA1 mutation together with promoter hypermethylation of RAD51C, while a pancreatic sample had somatic deep deletions of both RAD51B and RAD51C (Supplementary Table S1).

FIGURE 2 | Proportion of samples with and without biallelic alteration in HR genes classified as HR-deficient with the default thresholds of (A) HRDetect (0.7) and (B) CHORD (0.5). Only one alteration per gene is shown per sample, based on the following hierarchical order of genes: BRCA1, BRCA2, RAD51C, PALB2, and XRCC2.

Performance of the CHORD and HRDetect Classifiers. As previously reported, both CHORD and HRDetect achieved excellent performance in identifying biallelic events in breast and ovarian cancers, with areas under the ROC curve (AUC) above 0.96 and 0.9, respectively (Figure 3). In addition, we calculated the area under the precision-recall curve (AUPRC), which was high, well above 90%, across all cancer types. No statistically significant difference was detected between the CHORD and HRDetect AUC values (p > 0.05, DeLong's test).

Impact of Excluding Promoter Methylation on Performance. To assess the importance of promoter methylation in evaluating the HRD classifiers' performance, we removed the methylation data for the BRCA1/RAD51C promoters in breast and ovarian cancer, the only cancer types for which methylation data were available. We observed a significant drop in classifier performance for breast and ovarian samples (Figure 3).

Revisiting Threshold Values for Homologous Recombination Repair Deficiency Classification of Different Cancer Types. The current threshold of HRDetect (0.7) was determined on the breast cancer dataset, while the CHORD threshold of 0.5 was arbitrarily chosen.
Considering the different machine-learning algorithms underlying CHORD and HRDetect and their different training data, we sought to determine an optimal threshold value for each individual cancer type in our cohort. For each cancer type and classifier, we performed 10-fold nested cross-validation to calculate the optimal threshold value (detailed in the Methods section). The accuracy of the two classifiers at the default threshold values was similar across cancers, with the largest difference between them observed in ovarian cancer (accuracy: CHORD 0.91, HRDetect 0.83) (Table 2). The cancer-type-specific (optimal) threshold values differ from the classifiers' defaults, but the overall accuracy improves only slightly or remains the same; the one exception is the optimal value of HRDetect in ovarian cancer, where accuracy improved by 12%. Samples with evidence of biallelic alterations in HR-related genes that were classified as HR-deficient were more abundant at the optimal threshold values of the CHORD classifier in breast and pancreatic cancers than at the default threshold for the same cancer types. The proportion of classified HRD cases without known biallelic evidence in the dataset decreased for both CHORD and HRDetect. Monoallelic mutations were found in pancreatic cancer (Supplementary Figure S3). At the default threshold values, the majority of monoallelic mutations in HR-related genes occur in homologous recombination-proficient (HRP) samples; HRDetect has more unexplained HRD cases, including two carrying monoallelic mutations in HR-related genes. Monoallelic alterations were detected in HRD-labeled samples only with HRDetect, at both the default and the optimal threshold value.

DISCUSSION. Our study provides an integrated overview of the detection of homologous recombination deficiency in cancers using the CHORD and HRDetect classifiers. We focused mainly on the three cancers most commonly associated with HRD: breast, ovarian, and pancreatic. We observed that biallelic inactivation of genes explains a large fraction of samples possessing HRD when a universal default threshold is used, as demonstrated in previous studies (Davies et al., 2017; Nguyen et al., 2020; Golan et al., 2021). However, despite the classifiers' high performance at the default threshold, around 10-28% of patients were flagged without a known underlying cause. In this study, we found that applying a cancer-type-specific threshold reduced the number of unexplained cases to around 8.9-12.3% without decreasing the 96% sensitivity. We estimate that in this dataset up to ~10% of HRD cases are caused by types of alterations that have not yet been associated with HRD, and gene-centric testing for mutations in HR genes will therefore likely fail to identify them. Similar considerations apply to the analysis of other cancer types, in which HRD cancers are rarer than in the well-known HRD cancers; the low number of HRD mutations in prostate samples and other cohorts did not allow determination of a reliable cancer-type-specific threshold. The small fraction of unexplained cases is consistent with our previous proposal (Foulkes and Polak, 2019; Matis et al., 2021) that if alterations in novel genes lead to HRD, they will together account for only a very small proportion of all HRD cases.
The different cut-offs that we observed may be due to subtle differences in the mutational landscape across cancers, even for tumors with the same gene defects, especially in mutational signatures (Degasperi et al., 2020). Furthermore, as highlighted by Nguyen et al. (2020), additional threshold optimization and validation are required when applying the classifiers to WGS data generated by other variant-calling pipelines. Our cohort contained data generated by various pipelines for CHORD and HRDetect in each cancer type, which may affect the overall comparison of results between the classifiers. Beyond the threshold value, it is important to investigate other features affecting the mutational landscape of tumors, such as deficiency in mismatch repair (MMR), which may negatively affect the overall performance of the classifiers in specific tumors. Golan et al. (2021) noted that one pancreatic sample with biallelic inactivation of BRCA2 and PMS2 (responsible for MMR) was misclassified by both HRDetect and CHORD, with both scores near zero. In addition to cancer-type-specific thresholds that reduce the number of unexplained cases, we demonstrated the importance of including the promoter methylation status of BRCA1 and RAD51C when evaluating the fraction of HRD cases explained by known causes. In breast and ovarian cancers, for which methylation analysis is most often conducted, promoter methylation of BRCA1 accounts for at least 20% of explained biallelic-inactivation HRD cases labeled by either classifier, and the lack of methylation data significantly affects classifier performance. The proportion of unexplained cases in other cancer types might have been reduced had methylation data existed, especially in pancreatic cancer, where some detected monoallelic PVs could be accompanied by other events, such as promoter methylation, that would explain their HRD. These observations highlight the advantage of using these classifiers alongside conventional testing for patient selection and stratification in the clinic, as already suggested by several studies (Zhao et al., 2017; Staaf et al., 2019; Chopra et al., 2020). The relationship between the presence of HRD and the response to therapies such as PARP inhibitors is not exact, and there is currently no "ground truth" for measuring HRD. Resistance to PARP inhibitors can coexist with HRD (Dias et al., 2021), so the presence of HRD is not by itself a direct predictor of response to PARP inhibitors or to other drugs, such as platinum, that cause double-strand DNA breaks. Combinations of different approaches (WGS-based classifiers, FDA-approved assays, and newer functional assays such as the RAD51 foci assay (Pellegrino et al., 2022)) will ultimately lead to better selection of HRD patients for appropriate therapies. Hence, our re-analysis emphasizes the power of both CHORD and HRDetect in stratifying patients with an HRD phenotype across various cancers, as well as the importance of identifying and further validating as-yet-unknown oncogenic alterations.

DATA AVAILABILITY STATEMENT. The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Genetic heterogeneity of the Spy1336/R28—Spy1337 virulence axis in Streptococcus pyogenes and effect on gene transcript levels and pathogenesis

Streptococcus pyogenes is a strict human pathogen responsible for more than 700 million infections annually worldwide. Strains of serotype M28 S. pyogenes are typically among the five most abundant types causing invasive infections and pharyngitis in adults and children. Type M28 strains also have an unusual propensity to cause puerperal sepsis and neonatal disease. We recently discovered that a one-nucleotide indel in an intergenic homopolymeric tract located between genes Spy1336/R28 and Spy1337 altered virulence in a mouse model of infection. In the present study, we analyzed size variation in this homopolymeric tract and determined the extent of heterogeneity in the number of tandemly repeated 79-amino-acid domains in the coding region of Spy1336/R28 in large samples of strains recovered from humans with invasive infections. Both repeat sequence elements are highly polymorphic in natural populations of M28 strains. Variation in the homopolymeric tract results in (i) changes in transcript levels of Spy1336/R28 and Spy1337 in vitro, (ii) differences in virulence in a mouse model of necrotizing myositis, and (iii) global transcriptome changes, as shown by RNAseq analysis of isogenic mutant strains. Variation in the number of tandem repeats in the coding sequence of Spy1336/R28 is responsible for size variation of the R28 protein in natural populations. Isogenic mutant strains in which the genes encoding R28 or the transcriptional regulator Spy1337 are inactivated are significantly less virulent in a nonhuman primate model of necrotizing myositis. Our findings provide impetus for additional studies addressing the role of R28 and Spy1337 variation in pathogen-host interactions.

Introduction. Streptococcus pyogenes (group A streptococcus, GAS) is a strict human pathogen responsible for >700 million infections and ~517,000 deaths annually worldwide [1]. Human infections range in severity from relatively mild conditions such as pharyngitis to life-threatening septicemia and necrotizing fasciitis/myositis [2]. GAS also causes skin infections such as impetigo and erysipelas [3], as well as post-infection sequelae, including rheumatic fever [4], rheumatic heart disease [5], and glomerulonephritis [6]. GAS strains are commonly classified based on serologic diversity in M protein, an antiphagocytic cell-surface virulence factor, or on allelic variation in the 5'-end of the emm gene that encodes this protein [7,8]. More than 250 emm types have been identified, but the majority of infections in many countries are caused by a relatively small number of prevalent emm types: emm1, emm3, and emm12 [9][10][11][12]. Strains of emm28 (serotype M28) GAS are of special importance because they are among the top five emm types causing invasive infections in the USA [11,13] and several European countries [14][15][16]. They also have an unusual propensity to cause puerperal sepsis (childbed fever) and neonatal infections [17][18][19][20][21]. The molecular mechanisms contributing to the ability of emm28 strains to cause devastating peripartum infections are poorly understood. The R28 protein is a surface-associated virulence factor made by emm28 strains and has been studied as a vaccine candidate [22][23][24]. R28 was originally described by Lancefield and colleagues based on serologic studies [25][26][27].
The gene (Spy1336/R28) encoding the R protein in emm28 GAS strains is nearly identical to the alp3 gene encoding the Alp3 protein in group B streptococcus (GBS), a common cause of neonatal sepsis, pneumonia, and meningitis [24,28,29]. Genome sequencing of emm28 GAS strain MGAS6180 revealed that the gene encoding the R28 protein (Spy1336/R28) is located on an integrative-conjugative element (ICE)-like element originally designated region of difference 2 (RD2) [30,31]. In the genome of reference strain MGAS6180 [31], RD2 is a 37.4-kb segment of DNA with 34 annotated genes. RD2 is present in a small number of other GAS emm types and is >99% identical to a region present in the chromosome of the majority of GBS strains [31]. These characteristics suggest that RD2 has been disseminated into different streptococcal strains and species by horizontal gene transfer and recombination [32,33]; published data support this idea [33]. The presence of RD2 in GBS and emm28 GAS strains causing infections associated with the female genital tract also suggests that genes located on this ICE element are causally involved in this clinical phenotype. The Spy1337 gene is adjacent to, and divergently transcribed from, Spy1336/R28 in GAS. The Spy1337 protein is a member of the AraC family of prokaryotic transcriptional regulators. Members of this family of regulators commonly have two domains: a conserved C-terminal domain that defines the members of the family, and a variable N-terminal domain that differs among family members. The C-terminal domain comprises approximately 100 amino acids and has two helix-turn-helix (HTH) DNA-binding motifs. The N-terminal domain can be bifunctional, mediating effector binding and multimerization of the regulator [34,35]. Genes encoding Spy1337/AraC-like proteins are frequently found adjacent to genes encoding surface antigens containing the YSIRK signal sequence motif {(YF)SIRKxxxGxxS}, which is responsible for localized secretion at the division septum [36,37]. This motif is present in the N-terminal domain of R28 at amino acid positions 19 through 30. Recently, we proposed that the Spy1337 protein is a positive transcriptional regulator of both Spy1336/R28 and Spy1337, and that by regulating expression of Spy1336/R28 and other genes, Spy1337 is involved in emm28 GAS virulence [38]. To positively regulate virulence gene expression, AraC-like transcriptional regulators usually either bind chemical effectors present at the site of infection, causing a conformational change that favors DNA binding to their cognate gene targets [39,40], or bind AraC negative regulators (ANRs) that inhibit such interactions [41]. In addition, they can regulate their own expression [42][43][44][45][46]. The Spy1336/R28 gene has a centrally located long tandem-repeat (TR) motif (referred to herein as TR R28), characteristic of genes encoding Alp family proteins. Because of the homologous nature of repetitive DNA sequences, regions containing TRs frequently vary in size as a consequence of mutational events involving either unequal crossover or intramolecular recombination [62][63][64][65]. The R28 protein made by GAS emm28 reference strain MGAS6180 [31] has 13 identical TR R28 units of 79 amino acids (aa) each.
In a recent study of 492 M28 invasive isolates of GAS for which whole-genome sequence and transcriptome data were available, we used machine learning to determine that a variable-length T-nucleotide homopolymeric tract (HT, referred to herein as HT Spy1336-7) in the intergenic region between Spy1336/R28 and Spy1337 was associated with differences in the transcript levels of these two genes [38]. HTs are commonly characterized by rapid length variation [66]. When present in promoter regions, HTs can modify transcript levels by altering the distance between promoter elements [67] or changing the binding of transcription factors [68,69]. In ~94% of the strains we studied, HT Spy1336-7 had either nine or ten T residues. Comparison of human infection isolates found that strains with 9Ts had significantly lower transcript levels of the Spy1336/R28 and Spy1337 genes than strains with 10Ts. Compared with a parental strain with 9Ts, an isogenic mutant strain with 10Ts in HT Spy1336-7 had significantly increased transcript levels of Spy1336/R28 and Spy1337 and was significantly more virulent in a mouse necrotizing myositis infection model. In addition, compared with the isogenic 9T strain, the 10T strain was significantly more resistant to killing by human polymorphonuclear leukocytes ex vivo and produced more R28 protein. Thus, a one-nucleotide indel in HT Spy1336-7 altered the levels of these two transcripts and changed the virulence phenotype [38]. Of note, an HT comprising 8-15 T residues is located in the regulatory region 99 nucleotides (nts) upstream of the bca gene encoding the GBS alpha C Alp protein [68]. In the present study, we determined the extent of heterogeneity in the number of TR R28 units in Spy1336/R28 in a subset of 493 M28 invasive clinical isolates for which we had whole-genome and transcriptome data, including the reference strain MGAS6180 [38]. In addition, we analyzed size variation of the HT Spy1336-7 region located upstream of Spy1336/R28 in >2,000 emm28 GAS strains cultured from invasive human infections and compared transcriptome changes in isogenic strains containing variable lengths of HT Spy1336-7. Finally, we used isogenic mutant strains to determine the contributions of Spy1336/R28 and Spy1337 to global gene expression and virulence.

Ethics statement. The clinical isolate strains used in this study were collected as part of comprehensive population-based public health surveillance studies of emm28 S. pyogenes infections conducted in 11 states in the United States, Canada (Ontario), the Faroe Islands (Denmark), Finland, Iceland, and Norway [38]. Consent for collection of these strains was waived, and all data were fully anonymized.

Bacterial strains and growth conditions. GAS strains were grown at 37°C in Todd-Hewitt broth (Bacto Todd-Hewitt broth; Becton Dickinson and Co.) supplemented with 0.2% yeast extract (THY medium). THY medium was supplemented with chloramphenicol (20 μg ml⁻¹) as needed. Trypticase soy agar supplemented with 5% sheep blood (Becton Dickinson and Co.) was used as required. E. coli strains were grown in Luria-Bertani (LB) medium at 37°C unless indicated otherwise. LB medium was supplemented with chloramphenicol (Acros Organics; 20 μg ml⁻¹) as needed.

DNA manipulation. Standard protocols or manufacturers' instructions were used to isolate plasmid DNA and to conduct restriction endonuclease digestion, DNA ligation, PCR, and other enzymatic treatments of plasmids and DNA fragments. Enzymes were purchased from New England Biolabs, Inc. (NEB).
Q5 high-fidelity DNA polymerase (NEB) was used. Oligonucleotides were purchased from Sigma-Aldrich.

Chromosomal DNA extraction and PCR amplification of the Spy1336/R28 repeat region. Chromosomal DNA extraction was performed as described [70], using FastPrep lysing Matrix B beads in 2-ml tubes (MP Biomedicals) or the DNeasy blood and tissue kit (Qiagen). The primer sequences used for PCR-based size determination of the Spy1336/R28 repeat region are shown in S5 Table. Three different primer sets were used. All primers were designed to bind conserved regions located upstream (forward, FWD primers) or downstream (reverse, REV primers) of the DNA sequence of the Spy1336/R28 gene encoding the repeat region. Primer set 1 comprised primers JE433 (FWD), binding 60 nt upstream of the repeat region, and JE431 (REV), binding 226 nt downstream of the repeat region. Primer set 2 included primers JE432 (FWD), binding 255 nt upstream of the repeat region, and JE431 (REV). Primer set 3 comprised primers JE410 (FWD), binding 2,264 nt upstream of the repeat region, and JE412 (REV), binding 1,098 nt downstream of the repeat region. The extension time used for the PCR reactions was adjusted to accommodate the anticipated PCR fragment length. Typically, primer set 1 was used first to obtain the desired PCR product, and the other two primer sets were used if primer set 1 failed to yield an amplified product. Two different DNA ladders were used to determine the size of the PCR products: the exACTGene 100 bp-10,000 bp DNA ladder (Fisher Scientific) and the 100-bp DNA step ladder (Promega). Both ladders were loaded at least three times in every agarose gel and used as references to determine DNA fragment length (S1 Table).

Analysis of the number of T residues in HT Spy1336-7. A BLAST-searchable database of SPAdes [71] assemblies was made for 2,095 emm28 genomes [38]. Using the two adjacent 20-nt DNA sequences that flank HT Spy1336-7, the DNA region surrounding (and including) these two sequences was extracted from each strain, and the number of T nucleotides was counted. In addition, we analyzed the number of T nucleotides present in HT Spy1336-7 in the 2,095 emm28 genomes with the command-line program Jellyfish [72], which uses k-mers, and counted the occurrences of each HT Spy1336-7 variant. Using these combined approaches, we identified the length of HT Spy1336-7 in 2,074 (~99%) of the original 2,095 strains, corresponding to 30 different alleles (S2 and S3 Tables). Of these alleles, six contained indels exclusively within HT Spy1336-7, which changed the number of consecutive T nucleotides with no additional polymorphisms. These six alleles were present in 2,020 (~97%) strains.
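A minimal sketch of the flank-based counting step: given the two flanking sequences, extract the run of Ts between them from an assembled contig. The flank and contig strings below are made up for illustration; the study used the real 20-nt flanks around HT Spy1336-7 against SPAdes assemblies.

```python
import re

# Made-up flanking sequences and contig for illustration only.
left = "ACGTACGTACGTACGTACGA"
right = "GCATGCATGCATGCATGCAT"
contig = "NNN" + left + "TTTTTTTTTT" + right + "NNN"   # a 10T allele

def count_ht_ts(contig, left, right):
    """Extract the region between the two flanks and count the consecutive Ts."""
    m = re.search(re.escape(left) + "(T+)" + re.escape(right), contig)
    return len(m.group(1)) if m else None

print(count_ht_ts(contig, left, right))   # -> 10
```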
Western immunoblot analysis. Bacteria grown in THY were collected at OD600 ≈ 0.6 (mid-exponential phase, ME), centrifuged at 16,100g for 1 min, and the pellets were resuspended in PBS. The Western immunoblot procedure used has been described [38], with the following modifications: (i) protein transfer to nitrocellulose membranes was done for 80 min (Fig 6) or 45 min (Fig 2C) at 120 volts, and (ii) the anti-R28 antibody [38] was diluted 1:1,250 in PBS-T with 5% nonfat dry milk, whereas the HRP-conjugated anti-rabbit secondary antibody was used at a 1:13,500 dilution.

Construction of isogenic mutant strains. All isogenic mutant strains used in this study are listed in S6 Table. Isogenic mutant strain MGAS27961-11T, containing an 11T HT Spy1336-7, was generated using allelic exchange as described previously [73]. Briefly, primers HPN-1 and HPN-2 [38] were used to amplify a ~2,690-bp fragment from genomic DNA of MGAS11108, an emm28 clinical isolate with a naturally occurring 11T tract; the amplicon encompasses HT Spy1336-7. The resulting PCR product was cloned into suicide plasmid pBBL740 and transformed into parental strain MGAS27961-9T. The plasmid integrant was used for allelic exchange as described previously [73]. To identify strains putatively containing the allelic replacement region with the expected polymorphism (11 T nucleotides), we sequenced the Spy1336/R28 upstream region using primer HPN-seq (S5 Table). Isogenic mutant strain MGAS27961-10T-ΔSpy1336 was constructed using MGAS27961-10T genomic DNA as the template for amplification. All primers are listed in S5 Table. Primer sets 1336-1 and -2 and 1336-3 and -4 were used to amplify two fragments upstream and downstream, respectively, of Spy1336/R28. The two PCR fragments were merged by combinatorial PCR and ligated into the BamHI site of suicide vector pBBL740. The recombinant plasmid, containing a deletion encompassing the entire Spy1336/R28 gene, was transformed into strain MGAS27961-10T to replace the native Spy1336/R28 via allelic exchange. Isogenic mutant strains MGAS27961-10T-ΔSpy1337 and the MGAS27961-10T-ΔSpy1336/ΔSpy1337 double mutant were constructed with analogous methods: primer sets 1337-1 and -2 and 1337-3 and -4 were used to generate MGAS27961-10T-ΔSpy1337, and primer sets 1336-1337-1 and -2 and 1336-1337-3 and -4 were used to generate the double mutant. Whole-genome sequence analysis of the isogenic mutant strains confirmed the absence of spurious mutations.

RNAseq library preparation, sequencing, and analysis. Isogenic emm28 strains were grown in triplicate in THY and harvested at mid-exponential (ME; OD600 = 0.46-0.52) and early-stationary (ES; OD600 = 1.65-1.7) phases of growth. Bacteria from the ME phase (2 ml) and ES phase (1 ml) were added to 4 ml and 2 ml of RNAprotect Bacteria Reagent (Qiagen), respectively, incubated at room temperature for 20 min, and centrifuged at 4,000 rpm for 15 min. The supernatant was discarded, and the bacterial pellet was frozen in liquid nitrogen and stored at -80°C. The RNeasy kit (Qiagen) was used for total RNA isolation, and the quality of the total RNA was evaluated with RNA Nano chips (Agilent Technologies) and an Agilent 2100 Bioanalyzer. RNA extraction for all emm28 isogenic strains was performed as described previously [38,70,74]. The rRNA was depleted with the Ribo-Zero rRNA removal kit for Gram-positive bacteria (Illumina). The quality of the rRNA-depleted RNA was evaluated with RNA Pico chips (Agilent Technologies) and an Agilent 2100 Bioanalyzer. The NEBNext Ultra II DNA library prep kit (NEB) was used to prepare the cDNA libraries according to the manufacturer's instructions. The quality of the cDNA libraries was evaluated with High-Sensitivity DNA chips (Agilent Technologies) and an Agilent 2100 Bioanalyzer. The cDNA library concentration was measured fluorometrically with Qubit dsDNA BR and HS assay kits (Invitrogen). Analysis of the RNAseq data was performed as described previously [38].

Necrotizing myositis infection models. A mouse model of necrotizing myositis was used to compare the virulence of the 9T, 10T, and 11T isogenic strains, as previously described [38]. Briefly, 120 CD1 mice from Envigo were kept in cages containing 5 animals each, provided with chow pellets and acidified water ad libitum, and given corn cob bedding with nesting material.
The mice were inoculated in the right hindlimb with 5×10⁸ CFU (n = 40 mice per strain) and followed for 7 days. They were euthanized with an overdose of isoflurane (primary method), followed by cervical dislocation (secondary method). A well-described NHP model of necrotizing myositis was used to compare the virulence of the wild-type strain and the isogenic MGAS27961-10T-ΔSpy1336/R28 and MGAS27961-10T-ΔSpy1337 strains [70,75]. Three cynomolgus macaques (2-3 years old, 2-4 kg) were used. Animals were randomly assigned to strain treatment groups and inoculated with 5×10⁹ CFU/kg of one strain in the right limb and a different strain in the left limb; each strain was tested in triplicate. The animals were observed continuously and necropsied at 24 h post-inoculation. Lesions (necrotic tissue) were excised and measured in three dimensions, and lesion volume was calculated using the formula for an ellipsoid. A full-thickness section of tissue taken from the inoculation site was fixed in 10% phosphate-buffered formalin and embedded in paraffin using standard automated instruments. Histology of the three sections taken from each limb was scored by a pathologist blinded to the strain treatment groups [75,76]. To obtain quantitative CFU data, diseased tissue recovered from the inoculation site was weighed and homogenized (Omni International) in 1 ml PBS, and CFUs were determined by plating serial dilutions of the homogenate. Statistical differences between strain groups were determined with the Mann-Whitney test. Animal studies were approved by the Institutional Animal Care and Use Committee at Houston Methodist Research Institute (protocol numbers AUP-1217-0058 and AUP-0318-0016). The humane endpoints used to determine when animals should be euthanized were immobility, a lameness scale score of 4, a body condition score <2, development of an injection-site abscess >1 cm in diameter, rupture of the injection-site abscess, formation of a metastatic abscess (at a site other than the injection site), >10% weight loss, or other features of severe distress. Since none of these criteria applied, the duration of the experiment was 7 days for mice and 24 h for NHPs, and animals were euthanized immediately at those endpoints. The numbers of animals used were 120 (mice) and 3 (NHPs); all were euthanized, and none was found dead. Animal care and handling were provided by the comparative medicine core. Animal health and behavior were monitored at least once daily. All animal welfare considerations, including efforts to minimize suffering and distress, the use of analgesics or anaesthetics, and special housing conditions, were in accordance with guidelines specified by the Institutional Animal Care and Use Committee at Houston Methodist Research Institute (protocol numbers AUP-1217-0058 and AUP-0318-0016). Mice were housed in groups of 5 animals per individually ventilated Tecniplast cage. Cages were furnished with autoclaved quarter-inch corncob bedding and a pulped virgin cotton fiber nestlet. Irradiated Teklad Global Diet 2920 pellets and acidified reverse-osmosis water were provided ad libitum. Animals were examined at least once daily by the veterinary staff, attending veterinarian, and investigator. Similarly, nonhuman primates were housed individually in squeeze-back cages with a perch and provided chow, fresh fruit and vegetables, and water ad libitum. An enrichment program was maintained by the comparative medicine program.
Neutrophil bactericidal activity assays

Neutrophil bactericidal activity assays were performed in accordance with protocol 01-I-N055, approved by the Institutional Review Board for human subjects, National Institute of Allergy and Infectious Diseases. All volunteers gave written informed consent prior to participation in the study. Human neutrophils were isolated from the venous blood of healthy volunteers using a standard method [77]. Killing of S. pyogenes by human neutrophils was performed as described previously [38], except that assay tubes were rotated for 3 h at 37°C.

Statistical analysis

Unless otherwise stated, error bars represent standard deviation (SD), and P values were calculated using either Kruskal-Wallis or log-rank tests. Differential expression analysis was performed using DESeq2 1.16.1. Genes were considered differentially expressed if the fold-change was greater than 1.5-fold and associated with an adjusted P value (Bonferroni corrected) < 0.05. For mouse survival studies, results were graphed as Kaplan-Meier curves and data were analyzed using the log-rank test, with P < 0.05 considered significant. For the NHP virulence studies, lesion volume and CFU data were graphed as mean ± SEM and analyzed using the Kruskal-Wallis test, with P < 0.05 considered significant.

Heterogeneity in the number of ALP-family long tandem repeats in the Spy1336/R28 gene and protein

The R28 protein has three domains: the amino-terminal and carboxy-terminal domains flank the size-variable central TR R28 domain. The amino-terminal domain is composed of 424 aa and contains a secretion signal sequence with the YSIRK motif. The carboxy-terminal domain has 46 aa and contains a cell-wall anchoring sequence with the LPXTG motif characteristic of surface-attached proteins (Fig 1A). Inasmuch as the number of TRs can vary by recombination or other mechanisms [62-65], we hypothesized that the large collection of emm28 GAS strains we studied could contain different size variants of the Spy1336/R28 gene as a consequence of different numbers of TR R28 copies.

[Fig 1 legend: (A) Regions encoding the amino-terminal, carboxy-terminal, and variable repeat regions are indicated. DNA binding domain refers to two predicted helix-turn-helix DNA-binding motifs. Schematic is not drawn to scale. (B) Data shown correspond to 493 strains. ND, not determined. (*), two strains studied did not contain the RD2 mobile genetic element, as confirmed by inspection of BAM files using TABLET [78]. (C) Western immunoblot of strains with R28 proteins containing different numbers of TR R28, inferred based on gene sequence data. All strains analyzed had an HT Spy1336-7 with 10Ts, because 9T strains do not produce detectable R28 protein. TR R28, number of tandem repeats per strain. MW, inferred molecular weight of the R28 protein. https://doi.org/10.1371/journal.pone.0229064.g001]

Our previously reported Illumina paired-end 150-nt read length whole genome sequence data [38] could not be used to accurately assemble and determine the number of repeats in the TR R28 region, as the sequence reads are of insufficient length to span the 237-nt repeated motif. Thus, to determine the extent of size variation in TR R28 in the Spy1336/R28 gene, we used PCR analysis, as described in the Materials and Methods, and studied the 493 strains analyzed previously by RNAseq [38]. Overall, we identified TR R28 size variants ranging from 1 to 17 copies of the repeat (Fig 1B and S1 Table).
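Although the sizing itself was done by PCR, the arithmetic that converts an amplicon size into a repeat count is simple. The sketch below is a hypothetical illustration of that conversion: the 500-nt flanking contribution is an assumed value, since the true figure depends on where the actual primers bind.

```python
def tr_copy_number(amplicon_len_nt, flank_len_nt, repeat_len_nt=237):
    """Infer the number of TR R28 copies from the size of a PCR amplicon
    spanning the repeat region. flank_len_nt is the total non-repeat sequence
    between the primer sites (an assumed value here)."""
    copies, remainder = divmod(amplicon_len_nt - flank_len_nt, repeat_len_nt)
    if remainder:
        raise ValueError("amplicon size not consistent with a whole number of repeats")
    return copies

# With an assumed 500-nt flanking contribution, a 2,870-nt amplicon implies
# (2870 - 500) / 237 = 10 repeat copies:
print(tr_copy_number(2870, 500))
```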
The most common number of TR R28 copies identified was ten (n = 71, 14.4%), followed by nine (n = 69, 14.0%). The inferred molecular weight of several R28 variants was confirmed by Western immunoblot analysis (Fig 1C).

Heterogeneity in a homopolymeric tract in the intergenic region between Spy1336/R28 and Spy1337

We previously discovered that a single nucleotide indel located in an HT in the intergenic region between the divergently transcribed Spy1336/R28 and Spy1337 genes (Figs 1A and 2A) significantly altered the transcript levels of the two genes [38]. Specifically, strains with 9Ts in the HT produced little or no detectable transcript of these two genes, whereas organisms with 10Ts in this tract produced abundant and significantly increased levels of transcripts [38]. Moreover, increased transcript levels of Spy1336/R28 and Spy1337 resulted in increased production of the R28 virulence factor and increased virulence in a mouse necrotizing myositis infection model [38]. We reported that approximately two-thirds of 493 strains had the 10T variant of the HT Spy1336-7 region, whereas one-third of strains had a 9T variant [38]. Taken together, these observations provided the impetus to expand our study of heterogeneity in the HT Spy1336-7 region to the entire previously described cohort of 2,095 emm28 clinical isolates recovered from invasive human infections in six countries over a 26-year period [38]. The HT Spy1336-7 region was analyzed in three ways, as described in detail in Materials and Methods. First, the contigs from genome assemblies generated with SPAdes [71] for all strains were searched with the 20-nt sequences flanking HT Spy1336-7 on each side using the nucleotide Basic Local Alignment Search Tool (BLASTn). The identified HT Spy1336-7 target regions were retrieved, binned by allele, and the alleles were enumerated. Subsequently, the number of T nucleotides in the HT Spy1336-7 of each allele was counted (a minimal sketch of this counting step appears at the end of this section). This method yielded results for the great majority of strains (~89%). As a second method, we interrogated the Illumina sequencing reads for each strain using a set of eight 31-nt probes corresponding to HT Spy1336-7 alleles with 6 to 13 Ts. In the aggregate, six alleles of HT Spy1336-7 were identified that differed from one another only by the number of T residues, varying in length from 8 to 13 Ts (Fig 2B; S2 and S3 Tables). This analysis identified the same approximate frequency distribution of strains containing 9T or 10T nucleotides as described in our previous study [38], namely ~1/3 (n = 650) and ~2/3 (n = 1,226), respectively. We also discovered that ~6% (n = 128) of the strains had 11Ts in the HT Spy1336-7 sequence (Fig 2C). A small number of strains had 8Ts, 12Ts, or 13Ts in the HT Spy1336-7 sequence (S2 and S3 Tables). Third, for any strain with discrepant or no results, we visually inspected the read alignment, i.e., Binary Alignment Map (BAM) files corresponding to the Spy1336/R28-Spy1337 intergenic region, using TABLET [78].
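The flank-anchored counting used in the first (BLASTn-based) approach can be sketched in a few lines of Python. The flanking sequences below are placeholders for the real 20-nt flanks, and the code simply reports the length of a pure poly(T) tract found between them:

```python
import re
from typing import Optional

def ht_tract_length(contig: str, left_flank: str, right_flank: str) -> Optional[int]:
    """Length of the pure poly(T) tract lying between two flanking sequences,
    or None if that pattern is not found on the contig."""
    pattern = re.escape(left_flank) + r"(T+)" + re.escape(right_flank)
    match = re.search(pattern, contig)
    return len(match.group(1)) if match else None

# Placeholder flanks; the real ones are the 20-nt sequences bordering HT Spy1336-7.
LEFT, RIGHT = "ACGTACGTACGTACGTACGT", "CATGCATGCATGCATGCATG"
contig = "NNNN" + LEFT + "T" * 10 + RIGHT + "NNNN"
print(ht_tract_length(contig, LEFT, RIGHT))  # 10 -> binned as a 10T allele
```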
Construction and growth characteristics of isogenic mutant strains

We previously compared the global transcriptomes of isogenic mutant strains (MGAS27961-9T and MGAS27961-10T) that differ only in the number of T residues in the HT Spy1336-7 region [38]. In view of our finding that 6% of strains have 11Ts in this region, we constructed isogenic mutant strain MGAS27961-11T. We also constructed isogenic mutant strains MGAS27961-ΔSpy1336/R28, MGAS27961-ΔSpy1337, and MGAS27961-ΔSpy1336/ΔSpy1337, in which the target genes were deleted in a parental strain with 10Ts in the HT Spy1336-7 region. The goal of generating these strains was to perform comparative transcriptome and virulence analyses. All strains had very similar growth curves under the laboratory conditions tested (S1 Fig).

Transcriptome analysis of clinical strains

We examined the transcriptome data for 442 emm28 clinical isolates [38] to determine whether variation in the length of HT Spy1336-7 (Fig 2B) altered the transcript levels of Spy1336/R28 and Spy1337. Among these 442 clinical isolates, 423 strains had alleles exclusively containing indels in HT Spy1336-7, and all HT variants were represented among these 423 strains. We restricted examination of the transcriptome data to strains with 8 to 11 Ts in the HT Spy1336-7 region because the 12T and 13T variants were represented by only one strain each. Strains with 8 and 9 Ts in the HT Spy1336-7 region had low transcript levels of the Spy1336/R28 and Spy1337 genes (Fig 3). In contrast, strains with the 10T variant had significantly higher transcript levels of Spy1336/R28 and Spy1337 than strains with the 9T variant (P < 0.0001). Strains with the 11T variant (n = 25) had transcript levels of Spy1336/R28 (P < 0.0001) and Spy1337 (P = 0.0021) that were significantly higher than those of either 10T or 9T strains (Fig 3).

Transcriptome analysis of isogenic mutant strains

We next used RNAseq to test the hypothesis that, compared to a parental strain with 9Ts (MGAS27961-9T) in the HT Spy1336-7 region, an isogenic mutant strain with 11Ts (MGAS27961-11T) has an altered transcriptome. RNAseq analysis confirmed that the gene expression profile of MGAS27961-11T is modestly altered compared to MGAS27961-9T, and principal component analysis supported these findings (Fig 4A and 4B). Moreover, compared to strain MGAS27961-9T, isogenic strain MGAS27961-11T had differential expression of 4.7% of the GAS genome at ME (3.2% upregulated and 1.5% downregulated) and 6% at ES (0.7% upregulated and 5.4% downregulated) (S2A Fig). We note that at both phases of growth, transcript levels of Spy1336/R28 in MGAS27961-11T were significantly higher than the MGAS27961-9T values (Fig 4C and 4D, and S2A Fig). As expected, no detectable transcript of Spy1336/R28 or Spy1337 was observed in the corresponding isogenic deletion mutant strains, MGAS27961-ΔSpy1337 and MGAS27961-ΔSpy1336/ΔSpy1337 (Fig 4C-4F). Virulence genes significantly upregulated, using a 1.5-fold cut-off, included nga, encoding an NADase cytotoxin; slo, encoding the cytolytic protein streptolysin O; the sag operon, encoding streptolysin S; mga, a positive transcriptional regulator of multiple virulence genes; emm28, encoding M protein; and sof, encoding serum opacity factor (S2 Fig).
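Applying the cut-offs stated in the Statistical analysis section (fold-change > 1.5, Bonferroni-adjusted P < 0.05) to a DESeq2 results table is straightforward. The sketch below assumes the standard DESeq2 output column names log2FoldChange and padj; the example values are invented for illustration.

```python
import math
import pandas as pd

def call_differential_expression(results: pd.DataFrame,
                                 fold_change_cutoff: float = 1.5,
                                 padj_cutoff: float = 0.05) -> pd.DataFrame:
    """Filter a DESeq2 results table with the paper's cut-offs."""
    fc_ok = results["log2FoldChange"].abs() > math.log2(fold_change_cutoff)
    p_ok = results["padj"] < padj_cutoff
    return results[fc_ok & p_ok]

# Tiny illustrative table (values invented):
table = pd.DataFrame({"gene": ["nga", "slo", "mga", "recA"],
                      "log2FoldChange": [1.2, 0.9, 0.7, 0.1],
                      "padj": [1e-8, 1e-4, 0.2, 0.9]}).set_index("gene")
print(call_differential_expression(table))  # nga and slo pass both cut-offs
```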
Heterogeneity in the HT Spy1336-7 region significantly affects virulence in a mouse model of necrotizing myositis

To test the hypothesis that the number of T nucleotides in the HT Spy1336-7 region contributes to GAS virulence, we inoculated mice intramuscularly with either the parental strain with 9Ts in HT Spy1336-7 or an isogenic mutant strain with 10Ts or 11Ts. Compared to the parental 9T strain, the isogenic 10T and 11T strains each caused significantly greater mortality and larger lesions with more tissue destruction (Fig 5). Taken together, the data support the hypothesis that the number of T nucleotides in HT Spy1336-7 significantly affects virulence in this infection model, especially when comparing the 9T strain to either the 10T or 11T isogenic strain.

Analysis of the R28 protein made by the isogenic mutant strains

In most bacteria, gene transcript levels typically correlate with the amounts of the encoded proteins made, but this is not always the case [79,80]. We reported previously that the R28 protein is produced and detected by Western immunoblot in whole cell extracts and supernatants derived from MGAS27961-10T, but not from the isogenic strain MGAS27961-9T, a finding consistent with RNAseq data showing that the transcript level of Spy1336/R28 was higher in MGAS27961-10T than in MGAS27961-9T [38]. Next, we analyzed R28 protein production by the parental and isogenic mutant strains containing the three different length variants of HT Spy1336-7 (Fig 6B). Consistent with previous data, we did not detect production of R28 by parental strain MGAS27961-9T, whereas immunoreactive R28 was produced by isogenic mutant strain MGAS27961-10T [38]. Also consistent with our hypothesis and the RNAseq data, isogenic mutant strain MGAS27961-11T produced greater amounts of immunoreactive R28 than strain MGAS27961-10T and parental strain MGAS27961-9T (Fig 6B).

Spy1336/R28 and Spy1337 contribute significantly to virulence in a nonhuman primate (NHP) model of necrotizing myositis

To test the hypothesis that Spy1336/R28 and Spy1337 contribute to GAS virulence, we used NHPs, a well-studied animal model that closely resembles human hosts in terms of physiology and immune response [70,81-83]. We inoculated NHPs with parental strain MGAS27961-10T and isogenic mutant strains MGAS27961-10T-ΔSpy1336/R28 or MGAS27961-10T-ΔSpy1337. Compared to the parental strain, each of the two isogenic deletion-mutant strains caused significantly smaller lesions characterized by less tissue destruction (Fig 7A and 7C). In addition, compared to the parental strain, significantly fewer CFUs of each isogenic mutant strain were recovered from the inoculation site (Fig 7B). Taken together, the data support the hypothesis that Spy1336/R28 and Spy1337 contribute to virulence in this infection model.

Discussion

S. pyogenes, a leading cause of human morbidity, mortality, and healthcare costs globally, produces a large number of extracellular virulence factors. Strains of serotype M28 S. pyogenes are repeatedly associated with puerperal sepsis (childbed fever) and are a prominent cause of pharyngitis in many countries. The molecular mechanisms responsible for puerperal sepsis, other invasive infections, and pharyngitis caused by serotype M28 strains are poorly understood. Several lines of evidence indicate that the Spy1336/R28 gene encodes a cell surface-anchored virulence factor [24,84] that is involved in S. pyogenes pathogenesis. We analyzed the Spy1336/R28 gene encoding the R28 protein in approximately 2,000 emm28 invasive strains. We found DNA sequences located in the upstream regulatory region, and repeat sequences in the coding sequence, that make Spy1336/R28 highly polymorphic across our population sample; importantly, these polymorphisms affect virulence. We previously reported a bimodal distribution in the transcript levels of Spy1336/R28 and Spy1337 in a study of 492 emm28 strains [38].
Namely, strains with 9Ts in the HT Spy1336-7 located between Spy1336/R28 and Spy1337 produced low transcript levels (Figs 1A and 2A), whereas strains with 10Ts produced significantly greater transcript levels [38]. The current work confirmed and extended these observations. In the sample of ~2,000 strains, and considering alleles exclusively containing indels in the HT Spy1336-7, we found that (i) the number of T residues varied between 8 and 13, (ii) most strains (~90.4%) had 9T (32%) or 10T (61%) residues, (iii) an 11T variant was present in 6% of the strains (Figs 2B, 2C and 8), and (iv) non-isogenic clinical isolates and an isogenic strain with an 11T variant had significantly higher transcript levels of Spy1336/R28 and Spy1337, and produced more R28 protein, than the 9T and 10T isogenic variant strains (Figs 3 and 6, S2 and S3 Tables). Size variants of the HT Spy1336-7 may arise because HTs, especially poly(T)-containing HTs such as HT Spy1336-7, can form transient mispaired regions leading to slipped-strand mispairing during DNA replication, resulting in expansion or contraction of the HT [85-90]. HTs located in the 5' upstream untranslated regions of genes, such as HT Spy1336-7, can contribute to altered regulation of gene transcript expression [68,91]. In this regard, HTs may be involved in phase variation [92] and bacterial adaptation [65,93]. The finding that the vast majority of strains had a 9T or 10T genotype suggests that this system may influence the GAS-human interaction in some settings. Under one scenario, lack of, or very low, transcript production (the 9T genotype) is advantageous to the organism in certain currently undefined physiologic conditions. Conversely, the significantly higher transcript level conferred by the 10T genotype could be advantageous in other conditions, also currently undefined. Compared to the 9T strain, the isogenic mutant strain with the 11T genotype had a higher level of Spy1336/R28 transcript (Fig 4C and 4D). The 11T strain also produced more R28 protein (Fig 6B) than both the 9T and 10T strains. We hypothesize that the increased R28 protein made by the 11T strain confers only slightly enhanced fitness on the organism (relative to the 10T genotype), an idea consistent with the observation that relatively few (6%) natural clinical isolates have this genotype, and substantiated by the fact that the isogenic mutant strain had only modestly increased virulence in the mouse model of necrotizing myositis. Alternatively, too much R28 protein could be detrimental during the natural course of infection in the human host, by promoting overly tight adherence that decreases dissemination, or even by provoking an increased immune response. Consistent with these ideas, the 10T and 11T isogenic mutant strains did not differ substantially in the number of differentially expressed genes (S2 Fig, panels A and D) or in their resistance to killing by human PMNs ex vivo (Fig 9), although compared to the 9T wild-type parental strain, these two mutant strains had significantly enhanced resistance to killing by human PMNs and more differentially expressed genes (S2 Fig, panels A, B, and D).
Regarding the small number of strains containing additional size variants of the HT region (8Ts, 12Ts, and 13Ts), strains with 8Ts produced minimal levels of Spy1336/R28 transcript (Fig 3A), and the 12T and 13T genotypes might not further enhance pathogen fitness; alternatively, the global transcript changes occurring in strains with these genotypes may in some way decrease fitness. Clearly, additional investigations are required to address these ideas. Puopolo and Madoff reported that deletion or expansion of a 5-nt repeat (AGATT) adjacent to the poly(T) tract is associated with a null phenotype for expression of the bca gene encoding the alpha C protein in GBS [68]. This region is highly conserved in GAS and GBS; however, that study examined only a small number of strains. Here, analysis of 2,074 clinical isolates recovered from human patients with invasive disease identified only four strains with variation in this pentanucleotide motif (TCTAA in S3 Table), whereas most of the variation was found in the HT Spy1336-7 region. Thus, in the natural populations of GAS we studied, variation in HT Spy1336-7 is the key driver of changes in the transcript levels of Spy1336/R28 and Spy1337 [38]. Neither Spy1336/R28 transcript nor R28 protein production was detectable in an isogenic strain in which the Spy1337 transcription factor gene was deleted. In addition, deletion of either Spy1336/R28 or Spy1337 resulted in a significant decrease in virulence in an NHP model of necrotizing myositis (Fig 7). Thus, taken together, our present and previous [38] data are consistent with a model in which regulation of expression of the gene encoding the R28 virulence factor is partly dependent on a process whereby indels occurring in the HT Spy1336-7 affect binding of the Spy1337 transcriptional regulator to this DNA region, by altering its consensus binding site and/or by changing the spacing, and therefore the spatial orientation, between two adjacent binding sites (Fig 2A). In this regard, the DNA sequence ATTTT, present twice in the Spy1336/R28-Spy1337 regulatory region, resembles part of the consensus binding site for the AraC-family transcriptional regulator ToxT [94]. In our proposed model, Spy1337 positively regulates the expression of both Spy1336/R28 and Spy1337 (Fig 8). The R28 virulence factor binds through its N-terminal domain to integrin receptors α3β1, α6β1, and α6β4 [60], which in the human host bind laminin, an ECM protein [61]. This constitutes an additional instance of a pathogen binding to host integrins [95-98], a strategy described for other GAS proteins [99]. For example, GAS PrtF1 binds to α5β1 integrins and can trigger integrin clustering and internalization of α5β1 integrin, ultimately resulting in GAS uptake [100]. The second region of repeat sequences in Spy1336/R28 is located in its coding sequence (Fig 1A). A 237-nt repetitive sequence, corresponding to one 79-aa TR R28, is present in Spy1336/R28 in 1 to 17 tandem copies (Fig 1B and S1 Table). Ten is the most prevalent number of TR R28 copies in R28, followed by nine (Fig 1B). Thus, emm28 GAS strains make different size variants of the R28 protein, likely arising through recombination [62-65,101]. The function of TRs in virulence factors is not well understood. One possible function would be to extend the reach of a surface-anchored protein and expose its N-terminal domain at the bacterial surface without adding new mechanistic functions [102].
Alternatively, variation in the number of TRs may produce antigenic variation. In this regard, variation in the conserved region of the GAS M protein generates antigenic diversity [102], and similarly, variation in the GBS alpha C protein affects antigenicity and protective efficacy [52,103]. TR number variation might also decrease adhesion to and entry into host cells [104]. No correlation was found between the number of T nucleotides in HT Spy1336-7 and the number of TR R28 repeats (S3 Fig). Additional studies designed to address these ideas are warranted. To summarize, the R28 virulence factor of GAS is highly polymorphic in natural populations, in transcript level, in protein production, and in protein size. Differences in transcript levels are caused by variation in the number of Ts in a homopolymeric tract upstream of the Spy1336/R28 gene, whereas variation in R28 protein size is caused by variation in the number of tandem repeats in its coding sequence.
Research on the application of decision tree algorithms in private universities

This paper explores the application of decision tree algorithms in the analysis of private university students' online lending behavior. We utilize the decision tree classification algorithm to analyze and predict the risk levels of students' online lending behavior, and employ association rule mining techniques to identify potential risk patterns. Additionally, various data analysis methods are discussed for identifying abnormal online lending behavior. The research results indicate that by comprehensively applying these methods, it is possible to effectively identify and prevent online lending risks.

Introduction

With the development of internet finance, the online lending behavior of college students has become an increasingly prominent social concern. Due to a lack of experience and information asymmetry, private university students are prone to online lending risks. This study aims to apply data mining techniques, particularly the decision tree algorithm, to analyze and predict the online lending behavior of college students, providing a scientific basis for risk prevention.

Algorithm Overview

The decision tree algorithm is a popular and practical machine learning technique widely used for data classification and regression analysis. It constructs a model by learning decision rules from a labeled dataset, which can then be used to predict labels for new data. The core idea of a decision tree is to break a complex decision process down into a series of simple decision steps, forming a tree-like structure. Each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label (decision outcome).

When constructing a decision tree, the algorithm selects the optimal attribute on which to split the dataset, based on the attribute's ability to partition the data. The quality of a split is usually evaluated using criteria such as information gain, gain ratio, or Gini impurity. Information gain measures the change in information entropy before and after splitting the dataset on an attribute, and the attribute that maximizes information gain is chosen as the splitting criterion. Gini impurity assesses the disorder of the data, and the attribute that minimizes the impurity is selected for splitting.

The advantages of the decision tree algorithm lie in its easily understandable and interpretable model, as it can be visualized as a tree structure. Additionally, decision trees can handle both numerical and categorical data with low data preprocessing requirements. However, the approach also has drawbacks, such as susceptibility to overfitting and instability for certain types of data. Strategies such as pruning and ensemble methods (e.g., random forests) can be employed to address these issues.
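To make the two splitting criteria just described concrete, here is a minimal, self-contained sketch that computes Gini impurity and information gain for a toy split of borrowers into risk classes; the labels and the split are invented for illustration.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy of the parent node minus the size-weighted entropy of its children."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

# Toy split of 10 borrowers (6 low-risk, 4 high-risk) on one attribute:
parent = ["low"] * 6 + ["high"] * 4
left   = ["low"] * 5 + ["high"]   # branch 1 after the attribute test
right  = ["low"] + ["high"] * 3   # branch 2 after the attribute test
print(round(gini(parent), 3))                             # 0.48
print(round(information_gain(parent, [left, right]), 3))  # ~0.256
```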
Data Collection and Preprocessing

Before applying the decision tree algorithm, data collection and preprocessing are essential steps. Data collection forms the foundation for constructing any data mining model. In this study, the primary data collected pertain to the online lending behavior of private university students. These data may include personal information (e.g., age, gender, major), academic performance (e.g., grades, attendance rate), financial status (e.g., family income, personal spending patterns), online behavior (e.g., browsing history, online shopping records), and online lending history (e.g., loan amounts, repayment status).

Collected data are often incomplete and noisy, and may exist in different formats. Therefore, data preprocessing becomes a crucial step to ensure the effectiveness of the model. Data preprocessing includes data cleaning, integration, transformation, and reduction. Data cleaning involves handling missing values and outliers, such as filling missing values with the median or mean and identifying or deleting outliers. Data integration merges data from different sources to provide a consistent data view. Data transformation converts raw data into a format suitable for algorithm processing, such as converting text data to numerical data. Finally, data reduction simplifies data through dimensionality reduction or compression techniques, reducing data volume while maintaining integrity.

Model Training and Testing

Model training is a critical step in applying the decision tree algorithm. In this study, the Classification and Regression Trees (CART) algorithm is utilized. The CART algorithm can handle both numerical and categorical variables, and uses Gini impurity as the splitting criterion when constructing the decision tree.

Before initiating model training, the dataset is divided into training and testing sets, commonly with a holdout split such as 70% of the data for training and the remaining 30% for testing (k-fold cross-validation is an alternative). The training set is used to build the model, while the testing set is employed to evaluate the model's performance. [1]

During the training process, the CART algorithm starts from the root node, selects the best splitting attribute, and recursively builds the decision tree. Each selection is based on maximizing the reduction in Gini impurity. Once constructed, the decision tree model is used to predict the classes of the data in the testing set.

Model performance evaluation involves assessing the predicted results on the testing set. Performance metrics typically include accuracy, recall, and the F1 score, among others. When evaluating the model, attention should be given to the issue of overfitting. To avoid overfitting, pruning techniques such as pre-pruning and post-pruning can be applied, or ensemble learning methods such as random forests can be used.

Finally, the trained model is deployed in real-world applications to predict the classes of new data. Regular evaluations and updates of the model are conducted to ensure that its predictive ability remains effective over time.
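A minimal sketch of this training-and-evaluation loop follows, assuming scikit-learn. Synthetic data stand in for the study's (private) feature matrix and risk labels, and the hyperparameter values shown are illustrative pre-pruning choices, not the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score, f1_score

# Synthetic stand-in for the real features (age, grades, income, spending, ...)
# and binary risk labels; class imbalance mimics a minority of high-risk students.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

# 70/30 holdout split, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# CART with the Gini criterion; max_depth and min_samples_leaf act as
# pre-pruning parameters that limit overfitting.
clf = DecisionTreeClassifier(criterion="gini", max_depth=5, min_samples_leaf=20)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("recall:  ", recall_score(y_test, y_pred))
print("F1 score:", f1_score(y_test, y_pred))
```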
Concepts and Algorithm Selection

Association rule mining is a crucial technique in the field of data mining, used to discover interesting relationships among different items in large datasets. This technology finds wide application in areas such as retail market analysis, network usage analysis, and bioinformatics. Its core objective is to identify frequent patterns, associations, correlations, or structures, particularly when these attributes are statistically associated with a specific outcome. A typical association rule mining problem can be described as follows: "If a person buys product X, what is the probability they will also buy product Y?"

The solution to such problems relies mainly on two algorithms: the Apriori algorithm and the FP-growth algorithm. [2]

The Apriori algorithm is one of the earliest and most famous association rule mining algorithms. It is based on the concept of frequent itemsets, i.e., sets of items that frequently appear together in the dataset. The Apriori algorithm uses an iterative approach called a level-wise search, in which k-itemsets are used to explore (k+1)-itemsets. The algorithm initially generates frequent itemsets by calculating the support of all individual items, then generates itemsets containing two elements and calculates their support, and so on. This process continues until no higher-level frequent itemsets can be found. The key advantage of the Apriori algorithm lies in its simplicity and ease of understanding, but it may encounter efficiency issues when dealing with large datasets.

The FP-growth algorithm is another effective method for mining frequent itemsets; it avoids the candidate-set generation and testing process of the Apriori algorithm. The FP-growth algorithm first constructs a data structure called an FP-tree (Frequent Pattern tree) and then mines frequent itemsets by applying recursive decomposition to this tree. Compared to Apriori, the main advantage of FP-growth is its performance, especially when dealing with datasets containing numerous frequent patterns, longer patterns, or dense databases. Additionally, due to the compressibility of the FP-tree, the FP-growth algorithm exhibits excellent space efficiency. [3]

In data mining research on private university students' online lending behavior, association rule mining can reveal potential connections between students' online lending behavior and other behaviors (such as spending habits or social media usage). The choice of algorithm depends on the specific features of the dataset and the research objectives. If the dataset is relatively small, or researchers aim for a more intuitive understanding, the Apriori algorithm is a good choice. For larger or more complex datasets, the FP-growth algorithm may be more suitable, providing higher efficiency and scalability.

Data Mining and Analysis

Before conducting association rule mining, it is necessary to collect and prepare appropriate data. For the study of private university students' online lending behavior, this may include students' personal information (e.g., age, gender, major), financial information (e.g., family income, personal spending habits), academic information (e.g., grades, attendance), social media activities (e.g., posting frequency, content types), and online lending history (e.g., borrowing frequency, amounts, repayment status). These data can be collected from the school's database, social media platforms, and financial institutions, among other sources. [4]

The first step in data mining is data preprocessing, which involves cleaning data, handling missing values, normalizing data formats, and more. Additionally, some data transformation may be required, such as discretizing continuous variables for better application of association rule mining algorithms.
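Once such preprocessed, transaction-style records exist, the mining step itself is short. The sketch below assumes the mlxtend library (any Apriori implementation would do) and a handful of hypothetical student "transactions"; the behavior labels are invented for illustration.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each "transaction" is the set of behaviors observed for one student
# (hypothetical labels for illustration only).
transactions = [
    ["frequent_online_shopping", "late_night_logins", "online_loan"],
    ["frequent_online_shopping", "online_loan"],
    ["low_attendance", "late_night_logins", "online_loan"],
    ["frequent_online_shopping", "low_attendance"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```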
Once the data are prepared, the Apriori or FP-growth algorithm can be used to mine association rules. This process typically involves two main steps: first, generating frequent itemsets, and then generating association rules from these frequent itemsets. When generating association rules, a minimum support and a minimum confidence need to be set. Support refers to the frequency with which an itemset occurs across all transactions, while confidence is a conditional probability, i.e., the probability that the consequent itemset occurs given the antecedent itemset (a small sketch of both measures appears at the end of this section).

In the analysis phase, researchers evaluate the generated rules and attempt to identify meaningful patterns. For example, they may discover that specific spending behaviors are closely associated with high online lending risk. These association rules can assist schools and financial institutions in better understanding students' online lending behavior and in designing targeted intervention measures.

Furthermore, results obtained from association rule mining can be used to enhance risk assessment models, improving the accuracy of predicting online lending defaults. For instance, if certain spending patterns or social media behaviors are found to be highly correlated with online lending defaults, this information can be integrated into the risk assessment model to support more accurate loan decisions.

In conclusion, association rule mining provides a powerful tool for understanding and predicting university students' online lending behavior. By revealing hidden relationships between students' spending habits, social behaviors, and online lending behavior, it can offer schools and financial institutions more effective risk management and prevention strategies. However, it should be noted that association rules can only reveal correlation, not causation. Therefore, when applying these rules, a comprehensive judgment should be made in conjunction with other information and real-world considerations. [5]
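For concreteness, the support and confidence measures defined at the start of this section can also be computed by hand; this sketch reuses the same hypothetical transaction-as-set representation as the example above.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """support(antecedent and consequent together) / support(antecedent)."""
    joint = set(antecedent) | set(consequent)
    return support(transactions, joint) / support(transactions, antecedent)

transactions = [
    {"frequent_online_shopping", "late_night_logins", "online_loan"},
    {"frequent_online_shopping", "online_loan"},
    {"low_attendance", "late_night_logins", "online_loan"},
    {"frequent_online_shopping", "low_attendance"},
]
# support({shopping, loan}) = 2/4 and support({shopping}) = 3/4, so the rule
# "frequent_online_shopping -> online_loan" has confidence (2/4)/(3/4) = 0.667:
print(confidence(transactions, {"frequent_online_shopping"}, {"online_loan"}))
```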
Data Sources

Before delving into the identification of anomalous online lending behavior, it is crucial to have a thorough understanding of the data sources. Understanding and analyzing data from multiple channels and dimensions is essential for effectively identifying potential risk behaviors. The current digital era provides a wide range of diverse data sources, including but not limited to the following key areas.

Transaction Data Analysis

Transaction data are a primary basis for identifying anomalous online lending behavior. By deeply analyzing borrowers' transaction histories, frequencies, amounts, and timings, we can reveal their financial situations and spending habits. For example, frequent large transactions or irregular transaction patterns may imply financial stress or unstable income sources.

Login Pattern Data Analysis

Login pattern data provide detailed information about user interactions with the online lending platform, including login frequency, login times, login durations, and the types of devices used. Anomalous login patterns, such as frequent logins at unconventional times or multiple logins and logouts within a short period, may indicate fraud risk or a compromised account.

Social Media Behavior Analysis

Social media behavior has become a crucial component of modern data analysis. By analyzing an individual's activities on social media, we can indirectly understand their lifestyle, social circle, and even psychological state. Certain patterns of behavior on social media may be correlated with financial difficulties, providing valuable supplementary information for assessing the risk of online lending behavior.

Other Data Sources

In addition to the aforementioned primary data sources, other sources include geographic location data, device information, credit history, and user feedback and reports. Geographic location data can reveal a borrower's residential and work environments, aiding in assessing their creditworthiness. Device information analysis (covering device type, operating system, IP address, and so on) can be used to identify unconventional device access behaviors, thereby preventing fraud. Credit history data provide information about a borrower's past credit behavior, helping to evaluate their repayment capability and willingness. User feedback and reports serve as direct risk indicators and can be used to validate the results of other data analyses.

Through comprehensive analysis of these multi-channel, multi-dimensional data, we can construct a comprehensive borrower profile, enabling more precise identification of anomalous online lending behavior and the implementation of corresponding preventive measures. [6]

Methods and Applications

After obtaining sufficient data, various methods need to be applied to analyze the data and identify anomalous online lending behavior. This involves complex data processing techniques, including but not limited to the following.

Behavioral Analysis

Behavioral analysis involves identifying potential risks by analyzing individual activity patterns. In the context of online lending, this includes the analysis of transaction patterns, login behavior, social media activities, and more. Establishing a baseline for a user's normal transaction patterns can help identify abnormal transactions that deviate from that baseline (a minimal sketch of such a baseline check appears at the end of this section). Login behavior pattern analysis aids in identifying abnormal account access attempts, which may indicate account compromise or fraud risk.

Device Information Analysis

Device information analysis focuses on the characteristics of the devices used. By analyzing information such as device type, operating system version, and IP address, abnormal device access behavior can be identified. For example, a sudden login from an unusual location, or the use of an uncommon device, may signal risk.

Machine Learning and Artificial Intelligence Techniques

Machine learning and artificial intelligence technologies play an increasingly important role in identifying anomalous online lending behavior. These technologies can handle large volumes of data and learn patterns in order to identify anomalous behavior. By training classification models (such as decision trees, random forests, or neural networks), we can automatically identify potential risk behaviors. These models can quickly identify anomalies in large datasets, providing real-time risk monitoring.
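Complementing the learned models just described, a simple robust baseline of the kind outlined under Behavioral Analysis is often a useful first step. The sketch below flags transaction amounts that deviate strongly from a user's own history, using median and MAD rather than mean and SD so that the baseline itself is not distorted by outliers; the cutoff of 3 is a conventional choice, not one prescribed by the study.

```python
import numpy as np

def flag_anomalous_amounts(amounts, z_cutoff: float = 3.0):
    """Flag transactions whose amount deviates from the user's own baseline
    by more than z_cutoff robust z-scores (median/MAD formulation)."""
    amounts = np.asarray(amounts, dtype=float)
    median = np.median(amounts)
    mad = np.median(np.abs(amounts - median)) or 1.0  # avoid division by zero
    robust_z = 0.6745 * (amounts - median) / mad      # 0.6745 rescales MAD to ~SD
    return np.abs(robust_z) > z_cutoff

history = [120, 95, 150, 110, 130, 105, 2500]  # one suspicious large payment
print(flag_anomalous_amounts(history))          # only the last entry is flagged
```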
Privacy and Security Considerations

When applying these methods, attention must be given to privacy and security. All data collection and analysis activities must comply with relevant data protection regulations to ensure that individual privacy is not violated.

By integrating these methods, we can significantly improve the accuracy and efficiency of identifying anomalous online lending behavior. This is crucial for preventing financial fraud, protecting borrowers' interests, and maintaining the stability of financial markets. The ongoing development of technology and the enhancement of data analysis capabilities will bring further innovation and progress in identifying and preventing anomalous online lending behavior.

Case Study Design

To investigate the application of decision tree algorithms and association rule mining techniques in analyzing the online lending behavior of private university students, we selected as the case study a private university with a diverse student population and typical behavioral characteristics, with the aim of gaining comprehensive insights within a real-world context.

Case Selection

The chosen university not only has a representative student body but also exhibits diverse and challenging online lending behaviors. We extensively collected data on students' personal information, academic performance, social media activities, and past online lending history to ensure the case's breadth and depth.

Data Collection and Preprocessing

During the data collection and preprocessing phase, our focus was on protecting student privacy while ensuring the quality and consistency of the obtained data. Through detailed data cleaning and anonymization, we constructed a high-quality dataset to provide a reliable foundation for subsequent analysis.

Application of Decision Tree Algorithm

Using the decision tree algorithm, we built a model to predict the risk level of students' online lending behavior. Model training and testing used carefully partitioned datasets, with parameter tuning and pruning techniques employed to enhance the model's performance and generalization capability.

Analysis Using Association Rule Mining

Simultaneously, we applied association rule mining techniques to conduct in-depth analyses of students' consumption and online behaviors, seeking the association patterns underlying online lending risks. Using algorithms such as Apriori and FP-growth, we mined frequent itemsets, generated a series of association rules, and revealed potential connections between specific consumption habits, social media usage patterns, and high-risk online lending behavior.

Objectives and Significance

This case study aims not only to identify students engaged in high-risk online lending behavior but also to understand the underlying reasons for these behaviors. By combining decision tree algorithms and association rule mining techniques, the study provides a comprehensive approach, allowing for a deeper analysis and understanding of students' online lending behavior. This can assist private universities in developing more effective strategies for online lending risk management, promote healthy financial behavior among students, and provide decision support for maintaining the stability of financial markets.

Results and Discussion

Through empirical research and analysis, we obtained a series of meaningful findings and insights. The decision tree model successfully identified a group of students with high online lending risk, achieving high levels of accuracy and recall. This suggests that the decision tree algorithm is an effective tool for predicting and evaluating the risk of online lending behavior among university students.
Through association rule mining techniques, we discovered several patterns associated with high-risk online lending behavior. For instance, specific consumption habits and social media usage patterns were closely correlated with higher online lending risk. These findings offer new perspectives for understanding student online lending behavior and may help universities and financial institutions implement more effective measures for risk prevention and management.

Additionally, the study revealed some challenges in the practical application of data mining techniques, such as data quality control, model interpretability, and ensuring privacy protection throughout the data analysis process. These challenges underscore the need for a comprehensive strategy to ensure the validity and security of results when applying these technologies in practice.

In summary, this case study not only demonstrates the potential of data mining techniques for identifying and preventing online lending risks among private university students but also emphasizes the importance of integrating multiple technologies and methods. The findings provide valuable information and strategies for university administrators and financial institutions seeking to manage and prevent student online lending risks more effectively. Furthermore, the research outcomes lay the groundwork and set the direction for future studies in this field.

Conclusion

By applying decision tree algorithms and association rule mining techniques, this study effectively identifies and predicts the risk of online lending behavior among private university students. The research shows that the comprehensive application of multiple data mining techniques can provide universities with powerful tools for risk prevention and management. At the same time, these methods need to be continually adjusted and optimized in practice to adapt to changes in student behavior and the online environment.
A Hierarchical Aggregation Approach for Indicators Based on Data Envelopment Analysis and Analytic Hierarchy Process

Abstract: This research proposes a hierarchical aggregation approach for indicators using Data Envelopment Analysis (DEA) and the Analytic Hierarchy Process (AHP). The core logic of the proposed approach is to reflect, simultaneously, the hierarchical structures of indicators and their relative priorities in constructing composite indicators (CIs). Under hierarchical structures, indicators of similar characteristics can be grouped into sub-categories and further into categories. According to this approach, we define a domain of composite losses, i.e., reductions in CI values, based on two sets of weights. The first set represents the weights of indicators for each Decision Making Unit (DMU) with the minimal composite loss, and the second set represents the weights of indicators bounded by AHP with the maximal composite loss. Using a parametric distance model, we explore the various ranking positions of DMUs as the indicator weights obtained from a three-level DEA-based CI model shift towards the corresponding weights bounded by AHP. An illustrative example using road safety performance indicators (SPIs) for a set of European countries highlights the usefulness of the proposed approach.

Introduction

Individual indicators are multidimensional measures that can assess the relative positions of entities (e.g., countries) in a given area [1]. A composite indicator (CI) is a mathematical aggregation of individual indicators into a single score. Two simple but popular aggregation methods in the context of multi-criteria decision-making (MCDM) are the weighted sum (WS) method and the weighted product (WP) method [2]. Some researchers have recently pointed out that the WP method may have advantages over the WS method in CI construction [3-5]. However, the assignment of weights to indicators is still a main source of difficulty in the application of these methods. Fortunately, recent methodological advances in operations research and management science (OR/MS) have provided us with two powerful tools, namely data envelopment analysis (DEA) and the analytic hierarchy process (AHP), which can be used as weighting and aggregation tools in CI construction.

Data envelopment analysis is a nonparametric method for assessing the relative efficiency of a group of DMUs based on their distance from the best-practice frontier. In this method, each DMU can freely choose its own weights to maximize its performance [6]. The standard DEA models are formulated using multiple inputs and multiple outputs of DMUs. Applications of this group of models in CI construction can be found in [7-9]. However, in recent years much more attention has been focused on a new group of DEA models in the field of composite indicators, known as the "benefit of the doubt" (BOD) approach. In the BOD approach, all indicators are treated as outputs without explicit inputs, i.e., they have the property of "the larger the better" [10-12]. In light of the possibility of neglecting the priority of various indicators, some critics have questioned the validity and stability of CIs obtained via DEA. Decision makers (DMs), in some contexts, have value judgments concerning the relative priority of indicators that should be taken into account in CI construction.
Alternatively, AHP is a systematic MCDM method for generating true or approximate weights based on the well-defined mathematical structure of pairwise comparison matrices. The application of AHP in CI construction provides a priori information about the relative priority of indicators [13-15]. AHP usually involves three basic functions: structuring complexity, measuring on a ratio scale, and synthesizing [16]. One of the advantages of AHP is its high flexibility in being combined with other OR/MS techniques [17]. AHP can be combined with DEA in different ways. The most common approach is the estimation of parameters of weight restrictions on the DEA models: AHP has been used to estimate appropriate values for the parameters in absolute weight restrictions [18], relative weight restrictions [19-23], virtual weight restrictions [24,25], and restrictions on changes of input (output) units [26]. There are a number of other methods that do not necessarily apply additional restrictions to a DEA model, such as converting qualitative data in DEA to quantitative data using AHP [27-34], ranking the efficient/inefficient units in DEA models using AHP in a two-stage process [35-37], weighting the efficiency scores obtained from DEA using AHP [38], weighting the inputs and outputs in the DEA structure [39-42], constructing a convex combination of weights using AHP and DEA [43], and estimating missing data in DEA using AHP [44]. The recent studies by Pakkar [45-50] demonstrate the effects of imposing weight bounds on different variants of DEA models using AHP. To this end, AHP has been applied in single-level DEA models [47-50] and two-level DEA models [45,46]. Due to the complexity of the hierarchical structures of indicators, this paper applies AHP to an additive three-level DEA model in the context of CI construction. Theoretically, the approach proposed in this paper may also be considered the additive counterpart of the multiplicative three-level DEA-based approach to constructing CIs proposed by [51]. In a three-level hierarchy, indicators of similar characteristics can be grouped into sub-categories and further into categories. A three-level DEA model entirely reflects the characteristics of the generalized multiple-level DEA model developed in [52,53]. Since the proposed approach uses AHP in an additive three-level DEA-based model, it contributes to the set of methods currently available for CI construction.

Methodology

This research is organized to proceed through the following stages (Figure 1):

1. Computing the composite value of each DMU using the one-level DEA-based CI model (4). The computed composite values are applied in the three-level DEA-based CI model (6).
2. Computing the priority weights of indicators for all DMUs using AHP, which impose weight bounds in model (6).
3. Obtaining an optimal set of weights for each DMU using the three-level DEA-based CI model (6) (minimum composite loss η).
4. Obtaining an optimal set of weights for each DMU using model (6) bounded by AHP (maximum composite loss κ). Note that if the AHP weights are added to model (6), we obtain model (10).
5. Measuring the performance of each DMU in terms of its relative closeness to the priority weights of indicators. For this purpose, we develop the parametric distance model (11). By increasing a parameter over a defined range of composite loss, we explore how far a DM can achieve its goals; this may result in various ranking positions for a DMU in comparison to the other DMUs.

DEA-Based CI Model

A DEA-based CI model can be formulated similarly to a classical DEA model in which all data are treated as outputs without explicit inputs [54]. In the following, and in line with the more common CI terminology, we will often refer to outputs as "indicators". In order to eliminate the scale differences between all (output) indicators, and to ensure that all of them change in the same direction, the normalized counterparts of the indicators are computed using the distance-to-reference method as follows [1]:

$$y_{rj} = \frac{\hat{y}_{rj}}{\hat{y}_{r(\max)}}, \qquad \hat{y}_{r(\max)} = \max\{\hat{y}_{r1}, \hat{y}_{r2}, \ldots, \hat{y}_{rn}\} \quad \text{for desirable indicators,} \tag{1}$$

$$y_{rj} = \frac{\hat{y}_{r(\min)}}{\hat{y}_{rj}}, \qquad \hat{y}_{r(\min)} = \min\{\hat{y}_{r1}, \hat{y}_{r2}, \ldots, \hat{y}_{rn}\} \quad \text{for undesirable indicators,} \tag{2}$$

where $y_{rj}$ is the normalized value of (output) indicator r (r = 1, 2, ..., s) for DMU j (j = 1, 2, ..., n). Now assume that all DMUs have unit inputs i (i = 1, 2, ..., m), i.e., $x_{ij} = 1$. Then the fractional CCR-DEA model can be developed as follows [55]:

$$CI_k = \max \frac{\sum_{r=1}^{s} u_r\, y_{rk}}{\sum_{i=1}^{m} v_i\, x_{ik}} \quad \text{s.t.} \quad \frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \le 1 \;\; \forall j, \qquad u_r, v_i \ge 0, \tag{3}$$

where $CI_k$ is the composite indicator of the DMU under assessment, and k is the index of the DMU under assessment, with k ranging over 1, 2, ..., n. Here $v_i$ and $u_r$ are the weights of input i (i = 1, 2, ..., m) and (output) indicator r (r = 1, 2, ..., s). The first set of constraints assures that if the computed weights are applied to the group of n DMUs (j = 1, 2, ..., n), they do not attain a composite score larger than 1. The second set of constraints imposes the non-negativity conditions on the model variables.

Introducing the constraint $\sum_{i=1}^{m} v_i = 1$ and performing the operation of substitution, an equivalent linear model can be formulated as follows:

$$CI_k = \max \sum_{r=1}^{s} u_r\, y_{rk} \quad \text{s.t.} \quad \sum_{r=1}^{s} u_r\, y_{rj} \le 1 \;\; \forall j, \qquad u_r > 0 \;\; \forall r. \tag{4}$$

Model (4) looks like a DEA model without inputs, extending the standard DEA methodology to the field of CI construction.
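Model (4) is a small linear program that is solved once per DMU, so any LP solver can compute the BOD composite scores. A minimal sketch using SciPy follows; the strict positivity u_r > 0 is approximated with a small lower bound ε, and the data matrix is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def bod_composite(Y: np.ndarray, k: int, eps: float = 1e-6) -> float:
    """Benefit-of-the-doubt composite indicator for DMU k.

    Y is an (n x s) matrix of normalized indicator values (larger = better).
    Solves: max sum_r u_r * y_rk  s.t.  sum_r u_r * y_rj <= 1 for every DMU j,
    with u_r >= eps approximating the strict positivity of model (4).
    """
    n, s = Y.shape
    res = linprog(c=-Y[k],                 # linprog minimizes, so negate
                  A_ub=Y, b_ub=np.ones(n),
                  bounds=[(eps, None)] * s,
                  method="highs")
    return -res.fun

# Toy data: 3 DMUs, 2 indicators; the first two DMUs reach the frontier (CI = 1).
Y = np.array([[1.0, 0.6],
              [0.8, 1.0],
              [0.5, 0.4]])
print([round(bod_composite(Y, k), 3) for k in range(3)])
```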
Three-Level DEA-Based CI Model

We develop a three-level DEA model to aggregate the performance of indicators under the (sub-)categories they belong to by a weighted-average method (Figure 2). Let $y_{ll'rj}$ be the value of indicator r (r = 1, 2, ..., s) of sub-category l' (l' = 1, 2, ..., S') of category l (l = 1, 2, ..., S) for DMU j (j = 1, 2, ..., n) after normalizing the original data. Let $u_{ll'r}$ be the internal weight of indicator r of sub-category l' of category l, with $\sum_{r=1}^{s} u_{ll'r} = 1$. Then the value of sub-category l' of category l for DMU j is defined as $y_{ll'j} = \sum_{r=1}^{s} u_{ll'r}\, y_{ll'rj}$. Let $p_{ll'}$ be the internal weight of sub-category l' of category l, with $\sum_{l'=1}^{S'} p_{ll'} = 1$. Then the value of category l is defined as $y_{lj} = \sum_{l'=1}^{S'} p_{ll'}\, y_{ll'j}$. Let $p_l$ be the weight of category l. To develop a linear model, the new multiplier of indicator r of sub-category l' of category l is defined as $u'_{ll'r} = p_l\, p_{ll'}\, u_{ll'r}$. Similarly, the new multiplier of sub-category l' of category l is defined as $p'_{ll'} = p_l\, p_{ll'}$. Consequently, a linear three-level DEA model for indicators can be developed (model (5)), in which the composite value of DMU j is $CI_j = \sum_{l=1}^{S} \sum_{l'=1}^{S'} \sum_{r=1}^{s} u'_{ll'r}\, y_{ll'rj}$.

We develop our formulation based on the generalized distance model [56,57] in such a way that the hierarchical structures of indicators, using a weighted-average approach, are taken into consideration [52,53]. Let $CI_k^{*}$ (k = 1, 2, ..., n) be the best attainable composite value for the DMU under assessment, calculated from model (4). We want the composite value $CI_k(u'_{ll'r})$, calculated from the set of weights $u'_{ll'r}$, to be closest to $CI_k^{*}$. The degree of closeness between $CI_k(u'_{ll'r})$ and $CI_k^{*}$ is measured as $D_t = \left[\left(CI_k^{*} - CI_k(u'_{ll'r})\right)^t\right]^{1/t}$ with $t \ge 1$, where $D_t$ is a distance measure and t represents the distance parameter. Our definition of "closest" is that the largest distance is at its minimum. On the other hand, the largest distance completely dominates when $t = \infty$, for which the distance measure reduces to $D_\infty = \max\left(CI_k^{*} - CI_k(u'_{ll'r})\right)$. This yields the following model:

$$\min \eta \quad \text{s.t.} \quad CI_k^{*} - \sum_{l=1}^{S}\sum_{l'=1}^{S'}\sum_{r=1}^{s} u'_{ll'r}\, y_{ll'rk} \le \eta; \qquad \sum_{l=1}^{S}\sum_{l'=1}^{S'}\sum_{r=1}^{s} u'_{ll'r}\, y_{ll'rj} \le CI_j^{*} \;\; \forall j. \tag{6}$$

Model (6) identifies the minimum composite loss η (eta) needed to arrive at an optimal set of weights. The first constraint ensures that each DMU loses no more than η of its best attainable composite value $CI_k^{*}$. The second set of constraints ensures that the composite values of all DMUs are less than or equal to their upper bounds $CI_j^{*}$. Two sets of constraints are added to model (6): $\sum_{r=1}^{s} u'_{ll'r} = p'_{ll'}$ for each sub-category and $\sum_{l'=1}^{S'} p'_{ll'} = p_l$ for each category, where the $u'_{ll'r}$ are indicator multipliers. This implies that the sum of the weights under each (sub-)category equals the weight of that (sub-)category. It should be noted that the original (internal) weights used for calculating the weighted averages are recovered as $u_{ll'r} = u'_{ll'r}/p'_{ll'}$ and $p_{ll'} = p'_{ll'}/p_l$.
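The weighted-average aggregation over the three levels is straightforward to express in code. The sketch below uses nested dictionaries and invented road-safety-style labels purely for illustration; internal weights at each level are assumed to sum to 1 within their parent, as in the model above.

```python
def hierarchical_composite(values, w_ind, w_sub, w_cat):
    """Three-level weighted average: indicators -> sub-categories -> categories.

    values[cat][sub][ind] holds normalized indicator values; w_ind, w_sub, w_cat
    hold the internal weights at each level (each summing to 1 within its parent).
    """
    total = 0.0
    for cat, subs in values.items():
        cat_score = 0.0
        for sub, inds in subs.items():
            sub_score = sum(w_ind[cat][sub][r] * v for r, v in inds.items())
            cat_score += w_sub[cat][sub] * sub_score
        total += w_cat[cat] * cat_score
    return total

# Illustrative example with one category and two sub-categories:
values = {"speed": {"motorways": {"mean_speed": 0.8, "violations": 0.6},
                    "rural":     {"mean_speed": 0.9}}}
w_ind = {"speed": {"motorways": {"mean_speed": 0.5, "violations": 0.5},
                   "rural":     {"mean_speed": 1.0}}}
w_sub = {"speed": {"motorways": 0.6, "rural": 0.4}}
w_cat = {"speed": 1.0}
print(hierarchical_composite(values, w_ind, w_sub, w_cat))  # 0.78
```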
Prioritizing Indicator Weights Using AHP

Model (6) identifies the minimum composite loss η (eta) needed to arrive at a set of indicator weights through the internal mechanism of DEA. On the other hand, the priority weights of indicators and the corresponding (sub-)categories are defined outside the internal mechanism of DEA by AHP (Figure 3). In order to demonstrate more clearly how AHP is integrated into the three-level DEA-based CI model, this research presents an analytical process in which indicator weights are bounded by the AHP method. The AHP procedure for imposing weight bounds may be broken down into the following steps:

Step 1: A decision maker constructs a pairwise comparison matrix of the different criteria, denoted by A, with entries $a_{lq}$ (l, q = 1, 2, ..., S). The comparative importance of criteria is provided by the decision maker using a rating scale. Saaty [16] recommends using a 1-9 scale.

Step 2: The AHP method obtains the priority weights of the criteria, $w = (w_1, w_2, \ldots, w_S)^T$, by computing the eigenvector of matrix A associated with its largest eigenvalue $\lambda_{\max}$:

$$Aw = \lambda_{\max} w. \tag{7}$$

To determine whether the inconsistency in a comparison matrix is reasonable, the random consistency ratio, C.R., can be computed by the following equation:

$$C.R. = \frac{(\lambda_{\max} - N)/(N - 1)}{R.I.}, \tag{8}$$

where R.I. is the average random consistency index and N is the size of the comparison matrix. In a similar way, the priority weights of (sub-)sub-criteria under each (sub-)criterion can be computed.

To obtain the bounds for indicator weights in the three-level DEA-based CI model, this study aggregates the priority weights of the three different levels in AHP as follows:

$$u_{ll'r} = w_l\, e_{ll'}\, f_{ll'r}, \qquad \sum_{l'=1}^{S'} e_{ll'} = 1, \qquad \sum_{r=1}^{s} f_{ll'r} = 1, \tag{9}$$

where $w_l$ is the priority weight of criterion l (l = 1, ..., S) in AHP, $e_{ll'}$ is the priority weight of sub-criterion l' (l' = 1, 2, ..., S') under criterion l, and $f_{ll'r}$ is the priority weight of sub-sub-criterion r (r = 1, ..., s) under sub-criterion l'. In order to estimate the maximum composite loss κ (kappa) necessary to achieve the priority weights of indicators for each DMU, the following linear program is proposed:

$$\min \kappa \quad \text{s.t.} \quad u'_{ll'r} = \alpha\, w_l\, e_{ll'}\, f_{ll'r} \;\; \forall l, l', r, \tag{10}$$

together with the constraints of model (6) in which η is replaced by κ. The first set of constraints converts the AHP-computed weights into weights for the new system by means of a scaling factor α. The scaling factor α is added to avoid the possibility of contradictory constraints leading to infeasibility, or of underestimating the relative composite scores of DMUs [58]. The optimal solution of model (10) produces a set of indicator weights that are used to compute the performance of DMUs.
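Steps 1 and 2 are easy to reproduce computationally. The sketch below extracts the principal eigenvector of a pairwise comparison matrix and computes C.R. from Equations (7) and (8); the example judgments are illustrative only, and the R.I. values are Saaty's published averages.

```python
import numpy as np

# Saaty's average random consistency indices by matrix size N.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A: np.ndarray):
    """Priority weights (normalized principal eigenvector) and consistency
    ratio C.R. = [(lambda_max - N)/(N - 1)] / R.I. for a comparison matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    lam_max = eigvals[i].real
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Three criteria with illustrative pairwise judgments on the 1-9 scale:
A = np.array([[1.0, 1/3, 1/2],
              [3.0, 1.0, 2.0],
              [2.0, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), "C.R. =", round(cr, 3))  # C.R. < 0.1 is considered acceptable
```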
The scaling factor $\alpha$ is added to avoid the possibility of contradictory constraints leading to infeasibility or to underestimating the relative composite scores of DMUs [58]. The optimal solution to model (10) produces a set of indicator weights that are used to compute the performance of DMUs.

It should be noted that incorporating absolute weight bounds for indicator weights in a DEA-based CI model, using AHP, is consistent with the common practice of constructing composite indicators. According to this practice, the priority weights of indicators can be used directly in an aggregation function to synthesize indicator values into composite values [1]. In addition, this form of weight restriction simply allows us to identify a specific range of variation between the two systems of weights obtained from models (6) and (10).

A Parametric Distance Model

We can now develop a parametric distance model for various discrete values of the parameter $\theta$ such that $\eta \leq \theta \leq \kappa$. Let $u'_{ll'r}(\theta)$ be the weights of indicators under sub-category $l'$ ($l' = 1, 2, \ldots, S'$) of category $l$ ($l = 1, 2, \ldots, S$) for a given value of $\theta$, and let $u'_{ll'r}(\kappa)$ be the priority weights of indicators under sub-category $l'$ of category $l$ obtained from model (10). Our objective is to minimize the total deviation between $u'_{ll'r}(\theta)$ and $u'_{ll'r}(\kappa)$ with the shortest Euclidean distance measure, subject to the constraints of model (11). Because the range of deviations computed by the objective function differs for each DMU, it must be normalized by using relative rather than absolute deviations [59]. Hence, the normalized deviations are computed from $Z_k^*(\theta)$, the optimal value of the objective function for $\eta \leq \theta \leq \kappa$. We define $\Delta_k(\theta)$ as a measure of closeness, representing the relative closeness of each DMU to the weights obtained from model (10) in the range [0, 1]. Increasing the parameter $\theta$ reduces the deviation between the two systems of weights obtained from models (6) and (10), which may lead to different ranking positions for each DMU in comparison to the other DMUs. It should be noted that in the special case where $\theta = \kappa = 0$, we set $\Delta_k(\theta) = 1$.

A Numerical Example: Road Safety Performance Indicators

In this section we present the application of the proposed approach to assess the road safety performance of a set of 13 European countries (or DMUs): Austria (AUT), Belgium (BEL), Finland (FIN), France (FRA), Hungary (HUN), Ireland (IRL), Lithuania (LTU), Netherlands (NLD), Poland (POL), Portugal (PRT), Slovenia (SVN), Sweden (SWE) and Switzerland (CHE). The data for the eleven hierarchical indicators that compose the SPIs for these countries have been adopted from [52]. The eight SPIs related to alcohol and speed are undesirable indicators, while the three SPIs related to protective systems are desirable ones. The resulting normalized data, based on Equations (1) and (2), are presented in Table 1. Taking the percentage of speed-limit violations on rural roads as an example, Slovenia performs best (1.000), Poland worst (0.014), and all other countries' values lie within this interval.
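Before turning to the results, the following toy example illustrates the parametric idea behind model (11) under simplifying assumptions: as the allowed composite loss theta grows, the weights can move from the DEA-optimal solution toward the AHP priorities. The data, the interpretation of the loss as an absolute bound, and the constraint form are illustrative only and do not reproduce the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([0.9, 0.4, 0.7])        # normalized indicator values of one DMU
w_ahp = np.array([0.5, 0.3, 0.2])    # AHP priority weights (sum to 1)
ci_best = y.max()                    # best attainable composite value here

def closest_weights(theta):
    """Weights summing to 1 that stay within a composite loss of theta
    while minimizing the Euclidean distance to the AHP weights."""
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: w @ y - (ci_best - theta)}]
    res = minimize(lambda w: np.sum((w - w_ahp) ** 2), w_ahp,
                   bounds=[(0, 1)] * 3, constraints=cons)
    return res.x

for theta in (0.0, 0.1, 0.3):
    w = closest_weights(theta)
    print(theta, np.round(w, 3), round(w @ y, 3))
# theta = 0 forces the DEA-optimal corner; larger theta lets the weights
# approach (and eventually reach) the AHP priorities.
```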
The results of the AHP model for prioritizing the hierarchical SPIs, as constructed by the author in the Expert Choice software, are presented in Table 2. One can argue that the priority weights of SPIs should be judged by road safety experts; however, since the aim of this section is only to demonstrate the application of the proposed approach on numerical data, we see no problem in using our own judgment. Solving model (6) for the country under assessment, we obtain an optimal set of weights with minimum composite loss $\eta$. Since the raw data are normalized, the weights obtained from this model are meaningful and have an intuitive explanation; as a result, we can later set meaningful bounds on the weights in terms of the relative priority of indicators. Taking Austria as an example in Table 3, with the composite value of one obtained from model (4), we can observe that the alcohol-related fatality rate is 26 times less important than the mean speed of vehicles on motorways and about 3.2 times less important than the daytime usage rate of child restraints. Clearly, the other indicators are ignored in this assessment by being assigned zero weights, which is equivalent to excluding those indicators from the analysis. This kind of situation can be remedied by including the opinion of experts in defining the relative priority of indicators. It should be noted that the composite value of all countries calculated from model (6) is identical to that calculated from model (4); therefore, the minimum composite loss for the country under assessment is $\eta = 0$ (Table 4). This implies that the measure of relative closeness to the AHP weights for the country under assessment is $\Delta_k(\eta) = 0$. On the other hand, solving model (10) for the country under assessment, we adjust the priority weights of the hierarchical SPIs obtained from AHP in such a way that they become compatible with the weight structure of the three-level DEA-based CI models. Table 5 presents the optimal weights of the hierarchical SPIs as well as the scaling factor for all countries. Note that the AHP priority weights used for imposing weight bounds on indicator weights in model (10) are recovered as $u_{ll'r} = u'_{ll'r}/\alpha$; similarly, the AHP priority weights at the criteria level can be obtained as $w_l = p_l/\alpha$, and those at the sub-criteria and sub-sub-criteria levels as $e_{ll'} = p'_{ll'}/p_l$ and $f_{ll'r} = u'_{ll'r}/p'_{ll'}$, respectively. The maximum composite loss for each country to achieve the corresponding weights in model (10) equals $\kappa$ (Table 4). As a result, the measure of relative closeness to the priority weights of SPIs for the country under assessment is $\Delta_k(\kappa) = 1$. Going one step further in the solution process of the parametric distance model (11), we estimate the total deviations from the AHP weights for each country while the parameter $\theta$ ranges over $0 \leq \theta \leq \kappa$. Table 6 presents the ranking position of each country based on the minimum deviation from the priority weights of indicators for $\theta = 0$. It should be noted that in the special case where $\theta = \kappa = 0$, we set $\Delta_k(\theta) = 1$.
Table 6 shows that Switzerland (CHE) is the best performer in terms of the CI value and of the relative closeness to the priority weights of indicators in comparison to the other countries. Nevertheless, increasing the value of $\theta$ from 0 to $\kappa$ has two main effects on the performance of the other countries: it reduces the deviation from the priority weights and it reduces the value of the composite indicator. This is, of course, a phenomenon one expects to observe frequently. The graph of $\Delta(\theta)$ versus $\theta$, shown in Figure 4, describes the relation between the relative closeness to the priority weights of indicators and the composite loss for each country. This may result in different ranking positions for each country in comparison to the other countries (Appendix A).

In model (4), $CI_k$ is the composite indicator of the DMU under assessment, with $k$ ($k = 1, 2, \ldots, n$) indexing the DMU under assessment; $v_i$ and $u_r$ are the weights of input $i$ ($i = 1, 2, \ldots, m$) and of (output) indicator $r$ ($r = 1, 2, \ldots, s$). The first set of constraints ensures that, when the computed weights are applied to the group of $n$ DMUs ($j = 1, 2, \ldots, n$), no DMU attains a composite score larger than 1; the second set of constraints imposes non-negativity on the model variables.

Appendix A

Figure 1. A hierarchical aggregation approach for indicators using a three-level Data Envelopment Analysis (DEA) and Analytic Hierarchy Process (AHP).
Figure 2. A three-level DEA framework for hierarchical indicators.
Figure 3. The AHP model for prioritizing indicators.
Table 1. Normalized data on the eleven hierarchical safety performance indicators (SPIs).
Table 2. The AHP hierarchical model for SPIs.
Table 4. Minimum and maximum losses in composite values for each country.
Table 6. The ranking position of each country based on the minimum distance to the priority weights of SPIs.
Table A1. The measure of relative closeness to the priority weights of hierarchical SPIs [$\Delta_k(\theta)$] vs. composite loss [$\theta$] for each country.
Direct Visualization of Deforming Atomic Wavefunction in Ultraintense High-Frequency Laser Pulses

Interaction of intense laser fields with atoms distorts the bound-state electron cloud. Tracing the temporal response of the electron cloud to the laser field is of fundamental importance for understanding the ultrafast dynamics of various nonlinear phenomena of matter, but it is particularly challenging. Here, we show that the ultrafast response of the atomic electron cloud to intense high-frequency laser pulses can be probed with attosecond time-resolved photoelectron holography. In this method, an infrared laser pulse is employed to trigger tunneling ionization of the deforming atom. The shape of the deforming electron cloud is encoded in the hologram of the photoelectron momentum distribution. As a demonstration, by solving the time-dependent Schrödinger equation, we show that the adiabatic deformation of the bound-state electron cloud, as well as the nonadiabatic transition among the distorted states, is successfully tracked with attosecond resolution. Our work films the formation process of the metastable Kramers-Henneberger states in intense high-frequency laser pulses. This establishes a novel approach for time-resolved imaging of ultrafast bound-state electron processes in intense laser fields.

Introduction

Laser-induced distortion of the electron cloud of atoms and molecules is the intrinsic reason for various nonlinear phenomena of matter [1,2]. Probing the laser-induced dynamics of the bound electron is of essential importance for understanding the temporal properties of nonlinear processes of matter. With advanced attosecond techniques [3], various laser-induced electron dynamics, such as the valence electron motion [4], the subcycle AC-Stark shift [5], the impulsive response of the bound electron [6], and the ultrafast charge migration [7], have been probed with attosecond accuracy. Here, we reveal another interesting ultrafast bound-state electron process of atoms induced by ultraintense high-frequency laser fields. Previous theoretical studies have shown that when an atom is exposed to an ultraintense high-frequency laser field, the electronic wavefunction is stretched and separated into two parts [8]. The distortion of the wavefunction forms the so-called metastable Kramers-Henneberger (KH) states [9-11], leading to stabilization against ionization in ultraintense high-frequency laser fields [12], which is one of the most intriguing phenomena in laser-matter interaction. The KH states also play an important role in accelerating neutral atoms [13] and amplifying air lasing [14,15]. Although atomic stabilization and the KH states attracted extensive theoretical studies thirty years ago [16-22], direct observation of this distorted wavefunction has not been reported. It has been proposed that the electronic structure of the KH states can be deduced from the photoelectron momentum distribution (PEMD) [23]. Recently, it has been shown that the dichotomy of the wavefunction of the KH state would exhibit a double-slit interference pattern in the PEMDs [24], which serves as solid evidence of the existence of the KH states. In these studies, a monochromatic field with constant intensity is considered, wherein the atoms stay in a static KH state. However, in realistic pulsed fields, the atomic wavefunction evolves from the field-free ground state to the KH states following the envelope of the laser pulses [22,25].
Moreover, for a laser pulse with a rapid turn-on, nonadiabatic transitions occur and polychotomy, instead of dichotomy, is formed in the wavefunction [18,20]. This turn-on effect plays a key role in determining the degree of atomic stabilization in ultraintense laser pulses [18-22,26,27]. It is also responsible for the low-energy electron generation in intense high-frequency laser fields [28-31]. Therefore, observing the evolution of the wavefunction is even more appealing. However, this evolution has not yet been observed. Here, we demonstrate a method based on strong-field photoelectron holography (SFPH) to probe the deforming process of the bound-state electronic wavefunction of hydrogen exposed to ultraintense XUV laser pulses. The concept of SFPH was proposed about ten years ago [32]. It originates from the interference of the photoelectrons flying directly to the detector after tunneling ionization and those undergoing a near-forward rescattering [32-34]. The holographic pattern has been experimentally observed in strong-field tunneling ionization of different species of atoms and molecules [35-41] and has attracted extensive theoretical attention [34,42-50]. In this work, we employ an infrared (IR) laser pulse to induce tunneling ionization of the atom exposed to the ultraintense XUV laser pulses. The holographic patterns in the PEMDs of tunneling ionization encode dynamic information about the deforming process of the electron cloud induced by the XUV pulses. By numerically solving the time-dependent Schrödinger equation (TDSE), we demonstrate that the adiabatic evolution of the electronic wavefunction with the envelope of the XUV laser pulses can be directly tracked with SFPH. For a pulse with a more rapid ramp, the nonadiabatic transition among the KH states occurs, which is also unambiguously revealed in the hologram of the PEMDs. The scheme of our method is illustrated in Figure 1. The distortion of the atomic electron cloud by the ultraintense XUV pulses can be intuitively understood in the KH frame [51], in which the time-averaged potential has a double-well structure with the two wells located at $\pm\alpha_0 = \pm E_0/\omega^2$, where $E_0$ and $\omega$ are the strength and frequency of the field, respectively. For a laser pulse with a slowly varying envelope $f(t)$, $\alpha_0$ should be written as $\alpha_0(t) = f(t)E_0/\omega^2$ [25,52]. In this case, the electron cloud evolves adiabatically following the varying double-well potential: it is stretched along the laser polarization direction during the rising edge of the laser pulse and then recovers to the atomic wavefunction when the laser field falls off (see Supplement, Section 1), as shown in Figure 1(a). To detect the temporal evolution of the electron cloud, an IR pulse of moderate intensity is applied to induce tunneling ionization of the deforming atom. The interference of the direct and the rescattering electron wave packets (EWPs) forms the hologram in the PEMDs. It can be considered as a two-center interference, wherein the centers of the direct and the rescattering EWPs are the tunneling point and the rescattering center, respectively [Figure 1(c)]. The tunneling point depends on the structure of the electron cloud at the instant of tunneling [the red dot in Figure 1(b)].
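For orientation, the quiver radius $\alpha_0 = E_0/\omega^2$ can be estimated directly from the pulse parameters quoted below ($\omega_{XUV} = 3$ a.u., intensity $3 \times 10^{19}$ W/cm²). The sketch below uses the standard atomic unit of intensity ($\approx 3.51 \times 10^{16}$ W/cm²) for the conversion; it is a back-of-the-envelope estimate, not a calculation from the paper.

```python
import math

I_wcm2 = 3e19            # XUV intensity, W/cm^2 (from the text)
omega = 3.0              # XUV frequency, a.u. (from the text)

E0 = math.sqrt(I_wcm2 / 3.51e16)   # peak field strength, a.u.
alpha0 = E0 / omega**2             # excursion amplitude of the two KH wells

print(f"E0 = {E0:.1f} a.u., alpha0 = {alpha0:.1f} a.u.")
# -> E0 ~ 29.2 a.u., alpha0 ~ 3.2 a.u.: the electron cloud is stretched to
#    a few Bohr radii, consistent with a dichotomized KH wavefunction.
```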
So, by retrieving the tunneling point from the hologram in the PEMDs, the structure of the electron cloud and its temporal evolution in the ultraintense XUV pulse is directly tracked.

Materials and Methods

To demonstrate our scheme, we solve the three-dimensional TDSE of H in the laboratory frame to obtain the PEMDs (in atomic units; see Supplement, Section 2). Here, $A(t) = A_{IR}(t + \tau) + A_{XUV}(t)$ describes the combined vector potential of the XUV and IR fields, and $\tau$ is the time delay between the two fields. The IR field $\mathbf{A}_{IR}(t + \tau) = A_{IR}(t + \tau)\,\mathbf{e}_z$ is linearly polarized along the z-axis, and its envelope has a cos-squared shape lasting three optical cycles [Figure 2(a)]. The XUV pulse $\mathbf{A}_{XUV}(t) = A_{XUV}(t)[\cos\theta\,\mathbf{e}_z + \sin\theta\,\mathbf{e}_y]$ has a Gaussian envelope and is polarized at an angle $\theta$ with respect to the IR field. Figures 2(b) and 2(c) show the obtained PEMDs in the $(p_y, p_z)$ plane (i.e., $p_x = 0$) for XUV polarization directions of $\theta = 0°$ and $\theta = 45°$, respectively. The frequency of the XUV pulse is $\omega_{XUV} = 3$ a.u. and its intensity is $3 \times 10^{19}$ W/cm², with a full width at half maximum (FWHM) of 10 cycles (about 0.5 fs). The wavelength and intensity of the IR field are 2400 nm and $1 \times 10^{14}$ W/cm², respectively. (Note that the nondipole effect is significant in the PEMDs for the laser parameters in our calculations; it distorts the PEMDs in the laser propagation direction. However, the PEMD in the plane perpendicular to the laser propagation direction is not affected by the nondipole effect (see Supplement 1, Section 2.3).) The time delay between the two fields is adjusted so that the photoelectrons tunneling-ionized during the time window where the XUV pulse is located can be driven back to the parent ion to form the hologram in the PEMDs. In our calculations, $\tau = 0.67$ fs. We mention that we have also calculated the PEMD for the IR field alone (see Supplement, Figure S2c). The obtained signal is orders of magnitude lower than that in Figure 2. This is because the ultraintense XUV pulse greatly lowers the ionization potential of H (see Supplement, Figure S1d); thus, the tunneling ionization rate during the quarter cycle where the IR and XUV pulses overlap is much higher than that of the IR field alone. The XUV field also induces ionization through single- and few-photon absorption, but this signal in the PEMD is separated from the distribution of tunneling ionization by the IR field (see Supplement, Figure S2). Therefore, the signals in the PEMDs of Figure 2 are dominated by tunneling ionization.

Results and Discussion

Horizontal holographic fringes appear in the PEMDs of Figures 2(b) and 2(c). Obviously, the interference minima (maxima) for $\theta = 45°$ are shifted with respect to the result at $\theta = 0°$. In the following, we will show that the evolution of the deforming wavefunction is encoded in this shift of the interference fringes. The holographic fringes are determined by the phase difference between the direct and near-forward rescattering EWPs. With the adiabatic theory [53], the phase difference can be written as Equation (2) [34] (see Supplement, Section 3), where $t_i$ and $t_r$ are the ionization time and rescattering time, respectively. The first term of Equation (2) accounts for the phase difference of the direct and rescattering electrons accumulated during the propagation from tunneling ionization to rescattering [32,33]. The second term is the phase of the scattering amplitude [34]; for atoms, it is symmetric about the laser polarization direction.
In the third term, $\phi_i(p_y; t_i) = \arg\{A(p_y; t_i)\}$ is the phase of the transverse momentum distribution amplitude (TMDA) $A(p_y; t_i)$ of tunneling ionization [53,54]. It accounts for the initial phase difference of the rescattering and the direct electrons. For atoms, $\phi_i(p_y; t_i)$ is approximately a constant, and thus the third term in Equation (2) is absent [34]. When the atom is stretched by the intense XUV field along its polarization direction, there is a nonzero initial transverse displacement of the tunneling EWP if the angle between the polarization directions of the IR and XUV fields is nonzero, as shown in Figure 1(b). Then, the TMDA has a linear phase distribution. The reason is as follows. The PEMD from tunneling can be approximately considered as the Fourier transform of the tunneling EWP in position space [55,56]. According to the shift theorem of the Fourier transformation, a shift of the position of the EWP corresponds to a linear phase in the momentum distribution, $\mathcal{F}[\Psi(y - y_0)] = e^{-i y_0 p_y}\,\mathcal{F}[\Psi(y)]$. Therefore, the phase of the TMDA depends linearly on the initial transverse momentum of tunneling, $\phi_i(p_y; t_i) = \phi_i(0; t_i) - y_0 p_y$ (Equation (3)) [49,57,58], where $y_0$ is the initial transverse displacement of the tunneling EWP. The third term of Equation (2) then becomes $\phi_i(0; t_i) - \phi_i(p_y; t_i) = y_0 p_y$. So, by extracting this phase from the hologram in the PEMD, we can retrieve the initial transverse displacement $y_0$ of the tunneling EWP. This displacement is closely related to the structure of the stretched wavefunction. Therefore, by monitoring the initial transverse displacement of the tunneling EWP with SFPH, the structure of the deforming atomic wavefunction can be traced. The relation between the initial transverse displacement $y_0$ and the structure of the electronic wavefunction is illustrated in Figures 3(a)-3(c), where we display three examples of the electron density distribution of the stretched atom. In our case, the longitudinal tunneling position is at about 46 a.u. So, we take the cut of the electron density distribution at z ~ 4 a.u. (see Supplement, Figure S5), shown on the right side. In tunneling ionization, the tunneling EWP connects smoothly to the bound electron wavefunction, and thus the transverse displacement of the tunneling EWP equals the position $y_m$ of the maximum of the electron density distribution in the cut around the longitudinal tunneling exit, $y_0 = y_m$. As the atomic wavefunction is stretched longer by the XUV field along its polarization direction, the position $y_m$ increases, and the transverse displacement $y_0$ increases accordingly. Thus, the quantity $y_0$ directly reveals how far the atomic wavefunction is stretched at the instant of tunneling ionization. To retrieve the displacement $y_0$, we extract the phase $\Delta\varphi(p_y, p_z)$ from the hologram of Figure 2(c) (see Supplement, Section 4). In practice, we separately extract the phase for $p_y > 0$ and $p_y < 0$, denoted as $\Delta\varphi_+(p_y, p_z)$ and $\Delta\varphi_-(p_y, p_z)$, respectively (see Supplement, Figure S4). According to Equations (2) and (3), we arrive at Equation (4), which indicates that the phase difference $\Delta\varphi_+(p_y, p_z) - \Delta\varphi_-(p_y, p_z)$ is a linear function of the momentum $p_y$, with a slope of $2y_0$. Several examples shown in Figure 3(d) indicate that the obtained phase indeed depends linearly on $p_y$. By fitting this phase with a linear function, the displacement $y_0$ (half of the obtained slope) is obtained.
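The shift-theorem argument above can be checked numerically: displacing a wave packet by $y_0$ produces a momentum-space phase with slope $-y_0$, so a linear fit recovers the displacement, mirroring the retrieval procedure just described. The Gaussian packet and the numbers below are illustrative assumptions.

```python
import numpy as np

y = np.linspace(-50, 50, 4096)
dy = y[1] - y[0]
y0 = 3.2                                   # assumed transverse displacement (a.u.)
psi = np.exp(-((y - y0) ** 2) / 4.0)       # shifted Gaussian wave packet

# momentum grid and FFT referenced to the physical origin y[0]
py = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(y.size, d=dy))
amp = np.fft.fftshift(np.fft.fft(psi)) * np.exp(-1j * py * y[0])

mask = np.abs(py) < 1.0                    # fit only where the packet has support
phase = np.unwrap(np.angle(amp[mask]))
slope = np.polyfit(py[mask], phase, 1)[0]
print(f"retrieved y0 = {-slope:.2f} a.u.") # ~3.2, matching the imposed shift
```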
We repeat this procedure of retrieving $y_0$ at each $p_z$ in the range $p_z \in [-1.9, -0.4]$ a.u., and convert the momentum $p_z$ to time through $p_z = -A_{IR}(\tau + t_i)$. The displacement $y_0$ as a function of time is thus obtained and displayed in Figure 3(e). It shows that the transverse displacement $y_0$ increases at the rising edge of the XUV pulse and decreases during the falling edge. This reveals the stretching and restoring of the atomic wavefunction with the envelope of the XUV pulse. To check the accuracy of our method, we trace the atomic wavefunction in our calculations and track the transverse location $y_m$ of the maximum of the electron density near the longitudinal exit point (see Supplement, Section 5). The result is displayed as the solid curve in Figure 3(e). The agreement is remarkable. It indicates that the ultrafast deformation of the electron wavefunction by the intense XUV pulse is successfully revealed with attosecond resolution by SFPH. As the duration of the XUV pulse decreases, the nonadiabatic transition due to the rapidly changing envelope becomes significant. Figure 4(a) shows the PEMD from tunneling ionization of the deforming H in a shorter XUV pulse. An intriguing bifurcation structure appears in the PEMD at $(p_y, p_z) = (-1.65, \pm 0.33)$ a.u. It is seen more clearly in Figure 4(b), where the interference term $\cos\Delta\varphi(p_y, p_z)$ extracted from the PEMD is shown. This bifurcation structure is due to the nonadiabatic transition in the ultraintense XUV pulses. For the laser parameters in our calculations, the nonadiabatic transition results in the electron mainly occupying the ground $1s\sigma_g$ and the excited $2s\sigma_g$ states (see Supplement, Figure S6), i.e., $\Psi(t) = c_1(t)\,1s\sigma_g + c_2(t)\,2s\sigma_g\,e^{i\delta(t)}$, where $\delta(t)$ is the phase difference between these two states. Then, the TMDA for tunneling ionization at time $t_i$ from this superposition is written as $A(p_y; t_i) = c_1(t_i)A_{1s\sigma_g}(p_y) + c_2(t_i)A_{2s\sigma_g}(p_y)e^{i\delta(t_i)}$, where $A_{1s\sigma_g}$ and $A_{2s\sigma_g}$ are, respectively, the TMDAs of the $1s\sigma_g$ and $2s\sigma_g$ states, which can be calculated with the method of partial Fourier transformation [56] (see Supplement, Section 6). When $\delta(t_i) = n\pi$ ($n$ an integer), there is a $\pi$ phase jump in the TMDA [Figure 4(c)]. This phase jump results in the bifurcation structure in the hologram. So, the presence of the bifurcation reveals the nonadiabatic transition in the intense XUV pulse, and the location of the bifurcation indicates the instant when the phase difference between the two occupied states is $n\pi$. In Figure 4(b), the location $p_z = -1.65$ a.u. corresponds to an ionization time of $t_i = 5.7$ a.u. [obtained through $p_z = -A_{IR}(t_i + \tau)$]. So, the hologram in the PEMD indicates that the phase difference between the $1s\sigma_g$ and $2s\sigma_g$ states at the instant of 5.7 a.u. is $n\pi$. To check the validity of this result, we calculate the phase difference $\delta(t)$ with the time-dependent Floquet Hamiltonian approach [25,30] (see Supplement, Section 6), as shown in Figure 4(d). The phase difference is indeed close to $\pi$ at the instant of 5.7 a.u. So, the nonadiabatic transition and the phase difference between the occupied states are successfully revealed with our method.

Conclusion

In conclusion, we have filmed the ultrafast evolution of the atomic electron cloud in ultraintense XUV pulses with SFPH. The distortion of the electron cloud by the XUV fields induces characteristic phase structures in the TMDA of the tunneling EWPs.
By measuring the phase structures with SFPH, the adiabatic evolution of the metastable KH states is accurately tracked, and the nonadiabatic transition among the KH states in the short XUV pulse is also successfully revealed. Our work not only confirms the existence of the KH states but, more importantly, reveals the ultrafast dynamics of the formation process of the KH states in ultraintense XUV pulses. It deepens our understanding of the ultrafast response of bound electrons exposed to intense laser pulses. The dynamical information of the bound-state electron is usually imprinted on the phase of the photoelectrons; measuring the photoelectron phase should therefore be an efficient avenue for imaging attosecond bound-electron dynamics [59]. Our work demonstrates a way to track the ultrafast bound-state electron dynamics in atoms by measuring the photoelectron phase with SFPH. Extension of this method to more complex molecules, and even to nanostructures and solids, is promising and will be an exciting direction in attosecond science.

Data Availability

The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Supplementary Materials

Section I: the KH states of H in ultraintense XUV laser pulses. Section II: details of numerically solving the TDSE. Section III: strong-field photoelectron holography in tunneling ionization. Section IV: procedure for extracting the phase from the hologram in the PEMDs. Section V: details of observing the adiabatic evolution. Section VI: details of observing the nonadiabatic transition. (Supplementary Materials)
Systematic multi-scale decomposition of ocean variability using machine learning

Multi-scale systems, such as the climate system, the atmosphere, and the ocean, are hard to understand and predict due to their intrinsic nonlinearities and chaotic behavior. Here, we apply a physics-consistent machine learning method, the multi-resolution dynamic mode decomposition (mrDMD), to oceanographic data. mrDMD allows a systematic decomposition of high-dimensional data sets into time-scale dependent modes of variability. We find that mrDMD is able to systematically decompose sea surface temperature and sea surface height fields into dynamically meaningful patterns on different time scales. In particular, we find that mrDMD is able to identify varying annual cycle modes and is able to extract El Niño–Southern Oscillation events as transient phenomena. mrDMD is also able to extract propagating meanders related to the intensity and position of the Gulf Stream and Kuroshio currents. While mrDMD systematically identifies mean state changes similarly well compared to other methods, such as empirical orthogonal function decomposition, it also provides information about the dynamically propagating eddy component of the flow. Furthermore, these dynamical modes can also become progressively less important as time progresses within a specific time period, making them state dependent.

The climate system exhibits variability on a multitude of temporal and spatial scales. Due to the nonlinearity of the equations of motion, all these scales interact with each other, thereby hampering the understanding and predictability of the climate system. Here, we use multi-resolution dynamic mode decomposition (mrDMD), a physics-consistent machine learning approach, to systematically examine ocean variability on different time and spatial scales. We show that this method is able to systematically extract dynamically meaningful patterns of ocean variability.

I. INTRODUCTION

The ocean covers about 72% of Earth's surface and is an integral part of the global climate system. It provides essential environmental services, such as food and transportation, and affects atmospheric predictability and extreme events. Therefore, the ocean has a strong impact on vital aspects of our society, and it is of paramount importance to study and understand its underlying physical phenomena, which can vary on a multitude of time and space scales. One of the most important modes of ocean variability is the El Niño–Southern Oscillation (ENSO) (e.g., Timmermann et al., 2018). ENSO describes variations in winds and sea surface temperature over the tropical eastern Pacific Ocean, which have widespread effects on surface weather and climate conditions due to teleconnection patterns (Feldstein and Franzke, 2017). ENSO appears irregularly with enhanced frequency power in the range of 3-7 years. Two other important modes of ocean variability are the western boundary currents: the Gulf Stream and the Kuroshio (Kang and Curchitser, 2013). The Gulf Stream varies on monthly through decadal time scales (Seidov et al., 2019) and can shift between a northerly and a southerly location (e.g., Pérez-Hernández and Joyce, 2014). The Kuroshio current tends to vary between a stable and an unstable state. In the former, the current is relatively strong and zonal, while in the latter, the Kuroshio Extension tends to meander on large scales with higher eddy kinetic energy levels (e.g., Chen, 2005 and Oka et al., 2015).
A fascinating aspect of climate and ocean variability is that it occurs on all time scales and that, due to the underlying nonlinear equations of motion, the different time scales interact with each other (Franzke et al., 2020). However, this property also makes it difficult to understand climate and ocean variability because it is not straightforward to disentangle variability on different time scales. Moreover, another important aspect of understanding ocean variability is the identification of coherent structures with dynamical relevance, such as ENSO. A widely used method for the identification of modes of variability is empirical orthogonal functions (EOFs) (von Storch and Zwiers, 2003; Hannachi, 2021), also known as principal component analysis or proper orthogonal decomposition. Global modes of sea surface temperature have been computed by Messié and Chavez (2011). Global EOFs identify the well-known modes of ocean variability, such as ENSO, the Pacific decadal oscillation (PDO), and the Atlantic multidecadal oscillation (AMO). However, the dynamical relevance of some of these modes has been questioned (Clement et al., 2015; Mann et al., 2020). While EOFs are a powerful tool for multivariate data analysis, they have the drawback that the EOF patterns are mutually orthogonal. Thus, the EOF patterns lose physical interpretability since the ocean modes, or any physical modes, need not be mutually orthogonal (North, 1984). Hence, there is a need for better methods that are able to systematically identify dynamically relevant patterns. A promising method is dynamic mode decomposition (DMD) (Tu et al., 2014; Kutz et al., 2016a; Rowley et al., 2009; Mezić, 2005; 2013; Brunton et al., 2016), a machine learning method. DMD decomposes high-dimensional fields into complex patterns whose eigenvalues describe the growth rates and oscillation frequencies of the modes. DMD is widely used in many areas (Kutz et al., 2016a; Tu et al., 2014; Rowley et al., 2009; Brunton et al., 2016; Mezić, 2005; 2013) and recently also in geophysical and climate research (Kutz et al., 2016b; Gottwald and Gugole, 2019; Gugole and Franzke, 2019). DMD is a dimension reduction method with a strong theoretical and dynamical underpinning. For a given high-dimensional time series, DMD computes a set of complex modes; each of these modes represents an oscillation with a fixed frequency and a growth rate. For linear systems, these modes are analogous to normal modes. Furthermore, DMD is closely connected to principal oscillation patterns and linear inverse models (Hasselmann, 1988; Penland and Magorian, 1993; Tu et al., 2014). However, DMD is more general: it approximates the modes and eigenvalues of the Koopman operator and, thus, can represent nonlinear dynamics (Tu et al., 2014; Kutz et al., 2016a). DMD is different from other popular dimension reduction methods, such as EOFs. EOFs are not directly associated with a temporal behavior, while DMD modes are. However, in contrast to EOFs, DMD modes are not orthogonal. Hence, DMD might provide a less parsimonious description of the full data set than EOFs, but on the other hand, DMD modes are dynamically more meaningful, as we will show below. The multi-scale space-time structure of ocean variability also calls for multi-scale methods. The multi-resolution DMD is an attractive option for this problem since it provides a systematic multi-scale decomposition into dynamical modes (Kutz et al., 2016a; 2016b).
Here, we will demonstrate that multi-resolution DMD is able to identify dynamically meaningful patterns of ocean variability, which are of practical concern. While our study does not advance theory, our aim is to demonstrate the ability of DMD to systematically extract multi-scale dynamics from a complex real-world system, the ocean. Furthermore, our study shows how DMD can be used to deepen our understanding of ocean dynamics and how DMD can systematically extract multi-scale dynamics of a component of the climate system, which not many methods can do. We also demonstrate, by extracting the changing annual cycle of SST, how DMD can potentially lead to better predictions of the climate system. In Sec. II, we describe the ocean data sets we are using and the multi-resolution DMD method. In Sec. III, we present our results for global SST and for sea surface height dynamics in the Kuroshio and Gulf Stream. In Sec. IV, we summarize our study results.

A. Data

To demonstrate the abilities of DMD and to examine ocean variability, we use two datasets. The first one is the extended reconstructed sea surface temperature (ERSST) version 5 data set. This is a global monthly mean sea surface temperature (SST) data set on a 2° × 2° regular horizontal grid (https://doi.org/10.7289/V5T72FNM) (Huang et al., 2017). The data set covers the period January 1854-December 2020 and allows us to examine variability on medium to long time scales. The second data set is the Aviso satellite altimetry sea surface height (SSH). This is a global daily sea surface height data set on a 0.25° × 0.25° regular horizontal grid (Saraceno et al., 2008) (https://resources.marine.copernicus.eu/?option=com_cswtask=results?option=com_cswview=detailsproduct_id=SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047 and https://www.aviso.altimetry.fr/en/data/products/auxiliary-products/mss.html). The data set covers the period 1993-2018 and allows us to examine ocean eddies and larger-scale flow structures on short to medium time scales. Because of the higher temporal and spatial resolution of this data set, it is computationally challenging to consider the whole globe as a domain. Furthermore, a better scientific understanding of the dynamics of smaller-scale features, such as eddies and meanders, is also needed. Hence, we focus on two important ocean currents, the Gulf Stream and the Kuroshio, and apply mrDMD to these two areas. We define the Gulf Stream region as the area covering 280°E-340°E, 30°N-60°N and the Kuroshio region as the area covering 120°E-170°E, 25°N-50°N. For spatial pattern correlations, we use bandpass filtering of the SSH data based on a Fourier transformation, where the cut-off frequencies correspond to the respective mrDMD frequency bands. The bandpass filtering is necessary for the SSH data because the eddy scale changes considerably for different time scales; pattern correlations between fast time-scale mrDMD patterns and the full flow fields would therefore lead to small correlation values.
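A minimal sketch of such a Fourier-based bandpass filter is given below; the cutoff frequencies and array shapes are placeholders, with the actual cutoffs chosen to match the mrDMD frequency band under consideration.

```python
import numpy as np

def bandpass(field, f_lo, f_hi):
    """Keep only frequencies in [f_lo, f_hi] (cycles per time step).

    field: array of shape (time, space); returns the filtered field.
    """
    F = np.fft.rfft(field, axis=0)
    freqs = np.fft.rfftfreq(field.shape[0], d=1.0)
    F[(freqs < f_lo) | (freqs > f_hi), :] = 0.0
    return np.fft.irfft(F, n=field.shape[0], axis=0)

# e.g., retain periods between 162 and 200 days in daily SSH anomalies
ssh = np.random.randn(1000, 500)           # placeholder daily data
filtered = bandpass(ssh, 1 / 200, 1 / 162)
```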
On the other hand, no filtering is necessary for the SST data because on monthly time scales the anomalies are still relatively large scale. For the SST data, we also tested the impact of detrending the data; detrending leads to qualitatively similar results to using non-detrended data. Hence, our results are robust.

B. Dynamic mode decomposition

We consider the following dynamical system (Kutz et al., 2016a): $dx/dt = f(x, t; \mu)$ (1), where $x$ denotes the state vector, $t$ time, $\mu$ the parameters of the system, and $f$ the possibly nonlinear function representing the dynamics. Equation (1) also induces a discrete-time representation for time step $\Delta t$, $x_{k+1} = F(x_k)$, where the subscript $k$ denotes discrete time. In general, it is impossible to derive a solution to the nonlinear system (1). DMD takes an equation-free, machine learning view in which we have no knowledge of the dynamics of the system; DMD only uses observed data from the system to approximate and forecast it. Hence, DMD computes an approximate, locally linear dynamical system with the discrete-time representation $x_{k+1} = A x_k$. The solution of this system can be represented in terms of the eigenvalues $\lambda_j$ and corresponding eigenvectors $\varphi_j$ of the discrete-time matrix $A$, $x_k = \sum_j \varphi_j \lambda_j^k b_j = \Phi \Lambda^k b$, where $b$ is the vector of the initial-condition amplitudes $b_j$, $j$ is an index, and $\Phi$ is the matrix consisting of the eigenvectors $\varphi_j$. DMD now derives a low-rank eigen-decomposition of $A$ that optimally captures the trajectory of the system in a least-squares sense, so that $\|x_{k+1} - A x_k\|_2$ is minimized across all grid points; this is achieved by an eigen-decomposition of $A$. The DMD algorithm is as follows. The data can be described by two parameters:

• $n$: number of spatial grid points per time step and
• $m$: number of time steps.

We now have the following two sets of data: $X = [x_1, x_2, \ldots, x_{m-1}]$ and $X' = [x_2, x_3, \ldots, x_m]$, so that $x_{k+1} = F(x_k)$ for time step $\Delta t$. The DMD modes then correspond to the eigen-decomposition of $A$. $A$ relates the data via $X' \approx AX$, and thus $A = X'X^{\dagger}$, where $\dagger$ denotes the Moore-Penrose pseudo-inverse (Kutz et al., 2016a). The DMD method has a strong theoretical underpinning, as it is connected to the Koopman operator (Tu et al., 2014; Kutz et al., 2016a). DMD is a finite-dimensional approximation of the modes of the Koopman operator, an infinite-dimensional linear operator describing the dynamics of nonlinear systems. The Koopman operator is defined as follows (Kutz et al., 2016a): consider a continuous-time dynamical system, where $x \in M$ is a state on a smooth $n$-dimensional manifold $M$. The Koopman operator $\mathcal{K}$ is an infinite-dimensional linear operator that acts on all observable functions $g : M \rightarrow \mathbb{C}$ so that $\mathcal{K}g(x) = g(F(x))$. The Koopman operator propagates the states along with the flow $F$.

C. Multiresolution dynamic mode decomposition

Multiresolution dynamic mode decomposition (mrDMD) is an advanced DMD method for analyzing multi-scale systems, such as the ocean and the atmosphere (Kutz et al., 2016a; 2016b). Basically, it performs DMD on different time scales, similar to a wavelet analysis (Lau and Weng, 1995; Kutz et al., 2016a). Figure 1 shows a schematic of the mrDMD approach.

FIG. 1. The mrDMD approach computes DMDs successively at each resolution level (Kutz et al., 2016b). The window length is then halved, and the procedure is repeated for each window at that resolution level. At each resolution level, only slow modes are considered for the decomposition and tracked before repeating the procedure at the next resolution level.

mrDMD starts by analyzing the full time series and identifying the slowest modes of variability; then, this window is divided into two equally long windows, as displayed in Fig. 1, and the DMD analysis is repeated. This is done recursively until the fastest dynamics in the data set are reached. The frequencies of the modes are given by the eigenvalues of the mrDMD modes.
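The following compact sketch (not the authors' code) implements the two ingredients just described: a plain DMD step via the pseudo-inverse, and the mrDMD recursion that keeps only modes performing at most about two oscillations per window (the cut-off rule detailed below), subtracts their contribution, and recurses on the two half-windows. Rank truncation and the other refinements of Kutz et al. are omitted for brevity, and the toy field is purely illustrative.

```python
import numpy as np

def dmd(X, Xp, dt):
    """Plain DMD: fit X' ~ A X and return eigenvalues, modes, frequencies."""
    A = Xp @ np.linalg.pinv(X)
    lam, Phi = np.linalg.eig(A)
    keep = np.abs(lam) > 1e-8                  # drop numerically zero modes
    lam, Phi = lam[keep].astype(complex), Phi[:, keep]
    omega = np.abs(np.log(lam).imag) / dt      # oscillation frequency (rad/time)
    return lam, Phi, omega

def mrdmd(data, dt, level=0, max_level=4, out=None):
    """Recursive mrDMD: keep slow modes per window, subtract, split, recurse."""
    out = [] if out is None else out
    m = data.shape[1]
    rho = 2.0 / (m * dt)                       # at most ~2 oscillations per window
    lam, Phi, omega = dmd(data[:, :-1], data[:, 1:], dt)
    slow = omega <= 2 * np.pi * rho
    out.append((level, lam[slow], Phi[:, slow]))
    if slow.any():
        # reconstruct the slow part from the first snapshot and remove it
        b = np.linalg.lstsq(Phi[:, slow], data[:, 0].astype(complex), rcond=None)[0]
        dyn = np.exp(np.outer(np.log(lam[slow]) / dt, np.arange(m) * dt))
        data = data - ((Phi[:, slow] * b) @ dyn).real
    if level < max_level and m >= 8:
        half = m // 2
        mrdmd(data[:, :half], dt, level + 1, max_level, out)
        mrdmd(data[:, half:], dt, level + 1, max_level, out)
    return out

# Toy field: a slow (120-step) and a fast (12-step) oscillation on 32 points
t = np.arange(256.0)
x = np.linspace(0, 1, 32)[:, None]
field = np.sin(np.pi * x) * np.cos(2 * np.pi * t / 120) \
      + 0.3 * np.cos(3 * np.pi * x) * np.cos(2 * np.pi * t / 12)
levels = mrdmd(field, dt=1.0)
print([(lvl, len(lam)) for lvl, lam, _ in levels])
```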
As a cut-off frequency, we require that slow modes perform at most two oscillations within a window, in order to eliminate faster modes from a given level; see Kutz et al. (2016a; 2016b) for more details. The eigenvalues of the mrDMD modes are related to frequencies through the logarithm of the eigenvalues and the sampling interval, with the threshold $\rho = 2/T$, where $T$ is the window length; only mrDMD modes with frequencies $\omega$ smaller than $\rho$ are retained at a given level. The power of the mrDMD modes is computed as in Jovanović et al. (2014) and Kutz et al. (2016a) and is based on the dynamics: the mrDMD power is computed by separating the DMD amplitude into a product of the normalized DMD modes, a diagonal matrix of mode amplitudes, and the Vandermonde eigenvalue matrix (Jovanović et al., 2014; Kutz et al., 2016a). The Vandermonde matrix captures the exponentiation of the DMD eigenvalues, and the exponentiated eigenvalues determine the power of the DMD modes. Since DMD modes are not normalized, the amplitude of the power spectra does not correspond to a physical unit. We use the following notation: mrDMD(i,j,k) denotes the DMD from the $i$th level and the $j$th segment, while $k$ denotes the number of the corresponding DMD mode. For instance, mrDMD starts at the first level, i.e., the full time series; the second level denotes the two halves of the full time series, the first half being the first segment and the second half the second segment. The first DMD mode of the third level and the second segment is denoted mrDMD(3,2,1). As for EOFs, the sign of the DMD modes is arbitrary.

A. Global sea surface temperature

We start with the monthly global SST data set. The mrDMD power spectrum (Fig. 2) reveals that the maximum power is contained in the first level, while the second largest power is in the range containing the annual cycle. The first mrDMD mode of the first level, i.e., the mode whose real component of the eigenvalue is closest to one and which thus corresponds to an almost neutral mode, has a geographical structure that is very similar to the climatological mean state [Figs. 3(a) and 3(b)]. This level of the mrDMD decomposition has two more mrDMD modes with purely real eigenvalues [Figs. 3(d) and 3(f)]. These are almost neutral modes, though their eigenvalues are somewhat smaller than 1, with values of 0.9712 and 0.8911, and thus they are damped modes. mrDMD(1,1,2) likely represents low-frequency behavior of the sea ice edge since, in comparison with mrDMD(1,1,1), it represents an equatorward extension of negative anomalies in the polar regions and a reduction of the meridional temperature gradient. Our mrDMD modes of the first level are also different from the global SST EOF modes of the study by Messié and Chavez (2011) in that they do not correspond to the well-known modes of ocean variability, such as ENSO, the PDO, or the AMO. The comparison with EOFs is not straightforward since with mrDMD we focus on certain time scales at each level, while EOFs do not systematically distinguish between different time scales, though they tend to be ordered by an integrated auto-correlation time scale (Franzke et al., 2005 and Majda, 2006). We also computed pattern correlations by projecting the instantaneous SST fields onto the mrDMD modes [Figs. 3(f)-3(h)]. The pattern correlations also confirm the almost neutral behavior of these three mrDMD modes, though mrDMD(1,1,2) and mrDMD(1,1,3) show a slight reduction of correlation strength, which could be an imprint of their damped nature.
A powerful feature of mrDMD is that it can identify the modes of the annual cycle, which are encoded at level 8. Figure 4 shows the periods of the mrDMD modes associated with the annual cycle; they are all around 12 months but also vary, indicating that mrDMD has the ability to capture year-to-year variations in the annual cycle. Note that our segments do not correspond to the calendar annual cycle; the annual cycle dynamics are determined by the dynamics of the climate system. As an example of an annual cycle mrDMD mode, we choose the tenth segment of the eighth mrDMD level, i.e., mrDMD(8,10,2) (Fig. 5). The annual-cycle-related mrDMDs of the other segments look similar, suggesting that this is a robust feature (not shown). The corresponding first mrDMD mode corresponds to the mean state over this segment and is an almost neutral mode. The second mrDMD mode, consisting of a real and an imaginary component, corresponds to the annual cycle. Both components together make up a propagating mode with temperature anomalies of opposite sign in the two hemispheres, where the imaginary part corresponds to the transition seasons and the real part to the peak seasons [Fig. 5(e)]. Our results are consistent with the analysis by Pezzulli et al. (2005), which found substantial interannual variability in the seasonal cycle of the univariate NINO3.4 index. The length of the annual cycle seems to be determined by other large-scale processes. In Figs. 5(f) and 5(g), we display composites of SST over long [Fig. 5(f)] and short [Fig. 5(g)] annual cycle events, respectively. The composites are averaged over those segments when the mrDMD annual cycle period is either one standard deviation above or below its long-term mean period of about 12 months, respectively. The composites indicate a seesaw behavior of SST between the two hemispheres. The mrDMD method is also able to identify ENSO events, as demonstrated in Fig. 6. For instance, mrDMD(7,52,3) corresponds to the El Niño of 1987. This mrDMD mode has a period of about 14 months, which is in the range of the typical El Niño duration of between 7 and 24 months. Both parts of mrDMD(7,52,3) show the typical El Niño anomaly in the tropical Pacific. This shows that mrDMD is able to extract physically meaningful patterns from ocean data sets, consistent with the results of Kutz et al. (2016a). Since ENSO is a transient phenomenon, mrDMD represents it as real and imaginary components of a DMD mode. This is in contrast to EOFs, in which ENSO would be represented by just one EOF pattern (Messié and Chavez, 2011). This illustrates that mrDMD provides patterns that are directly dynamically interpretable, since the DMD analysis provides eigenvalues determining each pattern's oscillation frequency and growth rate. Two widely recognized modes of SST variability are the Atlantic multidecadal oscillation (Knight et al., 2006 and Ting et al., 2011) and the Pacific decadal oscillation (Mantua and Hare, 2002). The mrDMD power spectrum (Fig. 2) does not show enhanced power at decadal time scales; the power at those scales is actually rather low. This is consistent with recent studies that questioned the physical relevance of these modes, which are identified by a global EOF analysis (Messié and Chavez, 2011). Mann et al. (2020) provide evidence that both modes are not distinguishable from the noise background.
Also, Clement et al. (2015) provide evidence that the AMO is not a dynamical oceanographic mode of variability, since their model experiments do not contain ocean dynamics but still show AMO-type variability. These studies are consistent with our mrDMD results that those modes are potentially not dynamically meaningful. To further demonstrate the ability of mrDMD to identify physically meaningful patterns, we now examine variability at the fourth and fifth levels in more detail. As Fig. 2 shows, the fourth level has one large amplitude event and the fifth level has five large amplitude events. First, we focus on the large amplitude event of the fourth level (Fig. 7). As can be seen from Fig. 2, this event occurred between November 1874 and October 1895. In Fig. 7(a), we display the average over this period. This period is characterized by warm SST anomalies in the North Pacific along 40°N, in the Labrador Sea and the Fram Strait, and in the Southern Ocean between South America and Antarctica. Most of the remaining ocean is anomalously cold, especially the Arctic Ocean. In contrast, the anomalies averaged over all other times are much weaker [Fig. 7(b)]; whether this is just due to averaging over a longer period or whether it suggests that DMD systematically picks dynamically relevant and active states needs further research, ideally with very long climate model data. mrDMD(4,2,2) has a period of about 20 years. Its real component is similar to the Pacific decadal oscillation (Mantua and Hare, 2002), but this mrDMD describes more complex dynamics than just a standing pattern. We now turn to the fifth level, where we have five large amplitude events (Fig. 2). We average over the periods of these five events [Fig. 8(a)] and over all other times [Fig. 8(b)]. The high-amplitude composite shows increased SST over most of the ocean areas, with cold anomalies only in the northern North Pacific and to the south of Greenland. The composite of all other times displays mainly cold anomalies. The modes mrDMD(5,3,2) and mrDMD(5,6,2) again demonstrate that those states are the result of dynamic processes. Both mrDMD modes also have similarities to the PDO. This suggests that the PDO is an important mode of ocean variability on decadal time scales. Moreover, the fact that we identify multiple DMD modes resembling the PDO is consistent with the finding that the PDO is not a single physical mode of variability, but rather an aggregation of multiple processes, such as ENSO teleconnections, reemergence of SST, and stochastic atmospheric forcing (Newman et al., 2003; Qiu et al., 2007; Schneider and Cornuelle, 2005; Vimont, 2005).

B. Kuroshio sea surface height

In the following, we examine the Kuroshio current using mrDMD. For this purpose, we use daily Aviso SSH data. The mrDMD power frequency-time plot (Fig. 9) shows that, again, the maximum power is contained in the first level; however, the third, fifth, and sixth levels contain a sizable amount of power. The power of the first level is associated with the mean state (Fig. 10), as mrDMD(1,1,1) corresponds to the mean state [compare Figs. 10(a) and 10(c)]. The mrDMD(1,1,2) mode projects onto the linear trend [Fig. 10(b)] for most of the area, with the exception of the southern area of our chosen box, though with the opposite sign. The pattern correlation is positive and has a trend toward zero [Fig. 10(f)]. This suggests that mrDMD(1,1,2) is a damped mode, which is also indicated by its positive real eigenvalue with modulus smaller than 1.
The pattern itself represents a weakening of the central SSH gradient of the Kuroshio and has a positive anomaly at the location of the Kuroshio large meander, southeast of Japan. To focus on specific high amplitude events in the frequency plot of Fig. 9, we display the relevant mrDMDs of the third level in Fig. 11. The time scales of this level correspond to about 7.5 years. The by far largest and dominating segment at this level is the second segment (04.07.1999-03.01.2006), while all other segments are rather inconspicuous when it comes to the power spectrum. mrDMD(3,2,1) corresponds to the local mean state for that time segment. While mrDMD(3,2,1) of this time period is very similar to the overall mrDMD(1,1,1) in Fig. 10, one noteworthy aspect is the variation in the pattern correlations [Fig. 11(b)]. For most of the time, the correlations fluctuate between 0.955 and 0.96. There are two excursions to values around 0.94, corresponding to the years 1999 and 2001. These years were characterized by an exceptionally meandering current with a pair of large persistent eddies off the coast of Japan (see Fig. 2 in Qiu and Chen, 2005), i.e., a northern warm-core eddy and a southern cold-core eddy. If we now take higher modes into consideration, these exceptional dynamics are confirmed. The modes mrDMD(3,2,2) and mrDMD(3,2,4) have complex eigenvalues and are thus propagating patterns with periods of about 986 and 874 days; correspondingly, mrDMD(3,2,3) and mrDMD(3,2,5) are the complex conjugates of mrDMD(3,2,2) and mrDMD(3,2,4), respectively. All these modes highlight a propagating large meander around 30-35°N, which is characteristic of this time period. Furthermore, modes 2 and 3 propagate the signal of the eddy pair between 32-37°N and 140-145°E, which is such a dominating feature for both 1999 and 2001 (see Fig. 2 of Qiu and Chen, 2005). A part of this signal is also visible in modes 4 and 5, although the dominant role of these modes seems to lie in the general meandering of the current starting from this dipole eddy perturbation. To support this interpretation, the corresponding pattern correlations have been computed by projecting the mrDMD patterns onto bandpass filtered SSH fields, where the bandpass filter frequencies correspond to the mrDMD frequencies associated with the respective third-level modes, i.e., to periods between 874 and 986 days. One important observation here is that the real and complex modes remain at a 90° angle with respect to the correlations for the first 1.5 to two cycles between positive and negative correlations, which corresponds to the period up until the end of 2001. After that, they become increasingly mixed and also damped with respect to correlation amplitude. This suggests that these specific dynamical modes become progressively less important as this time period progresses. To conclude the mrDMD analysis of the Kuroshio SSH, we would like to point out that this diagnostic does not necessarily distinguish between the stable and unstable years of the Kuroshio (as discussed by Qiu and Chen, 2005). Instead, our results suggest that the mrDMD analysis discriminates between years that are dominated by propagating large-scale anomalies and years where the conditions are either more persistent (very long time scales) or shaped by (short-lived) chaotic behavior on temporal and spatial scales that do not correspond to the respective mrDMD level.
A potential drawback of mrDMD here is the strict, non-overlapping decomposition of the total time period, which can lead to specific events being split between different segments. The consequence may be that mrDMD does not pick up those events at certain levels but may hint at them at higher levels when the data are further subdivided. One solution for this potential drawback would be to use an overlapping windowing approach; however, this would be computationally much more expensive.

C. Gulf Stream sea surface height

Next, we examine the Gulf Stream SSH. The multiresolution DMD power spectrum is displayed in Fig. 12. Both the frequency-time and the time-averaged frequency plots show that the lowest frequencies dominate the spectrum. This is mainly due to the mean state, which is captured by mrDMD(1,1,1) [compare Figs. 13(a) and 13(b)]. The climatological mean state and mrDMD(1,1,1) are very similar, so most of the low-frequency information is associated with the mean. As the eigenvalue of mrDMD(1,1,1) is 1.0, it represents a temporally neutral mode. In order to have a closer look at the structure of the associated mrDMD patterns, we also compute pattern correlations between the respective mrDMD pattern and the SSH fields. Figure 13 shows these time-lag correlations for mrDMD(1,1,1) of level 1; the pattern correlation stays between 0.94 and 0.98 for the whole period. For mrDMD(1,1,2), the pattern correlation shows, in absolute terms, a decreasing trend for the first 20 years before increasing to large absolute values again. mrDMD(1,1,2) may show an imprint of long time-scale changes of the Gulf Stream due to climate change signals and low-frequency changes in the Gulf Stream intensity and stability. Even though it has a robust correlation of around 0.3, the features in the pattern are rather small scale, with some slight imprint of an emphasized north-south gradient along the Gulf Stream path. It is, therefore, a mixture of eddy-driven and large-scale changes in the structure, intensity, and position of the Gulf Stream. Similarly to the mrDMD of the Kuroshio region in Fig. 9, the Gulf Stream SSH also exhibits isolated large amplitude events in frequency space at low frequencies (0.0008; about 3.5 years), although these occur at higher frequencies than the level 3 mrDMDs described in Sec. III B for the Kuroshio, which occurred at periods of about 7 years. These events are associated with a strong meandering of the Gulf Stream (not shown). The next interesting mrDMD occurs at the sixth level during the period 20.04.2000-08.02.2001. mrDMD(6,9,1) again corresponds to the mean state for this time window. mrDMD(6,9,2) is again a standing pattern since it has a purely real eigenvalue. The pattern correlation of this mode reveals that it shifts the Gulf Stream from south to north and back to south over the period 20.04.2000-08.02.2001, with an emphasis between around 48°W and 72°W. The pattern bears a relatively close resemblance to the first EOF mode discussed by Pérez-Hernández and Joyce (2014) (their Fig. 3). The strengthening and weakening of the correlations with mrDMD(6,9,2) coincide with a noticeable northward shift of the Gulf Stream in October 2000, also discussed by Pérez-Hernández and Joyce (2014) (their Fig. 4). mrDMD(6,9,3) and mrDMD(6,9,5) correspond to eddy modes with periods of about 200 and 162 days, respectively. The pattern correlations confirm that these mrDMDs are propagating eddy fields, as the correlations have a periodic structure and are shifted by 90° with respect to each other (Fig. 14).
The pattern correlations confirm that these mrDMDs are propagating eddy fields, as the correlations have a periodic structure and are shifted relative to each other (Fig. 14).

Fig. 14: (a) mrDMD(6,9,1), eigenvalue 0.992, with (b) its pattern correlation against bandpass filtered SSH; (c) mrDMD(6,9,2), eigenvalue 0.959, with (d) its pattern correlation; (e),(f) real and imaginary components of mrDMD(6,9,3), eigenvalue 0.943+0.179i (period of about 200 days), with (g) its pattern correlation; (h),(i) real and imaginary components of mrDMD(6,9,5), eigenvalue 0.914+0.216i (period of about 162 days), with (j) its pattern correlation. In the correlation plots, time is in days; the black line corresponds to the real and the red line to the imaginary component.

The mrDMD of the sixth level, therefore, highlights this dynamic shifting event by a peak in the DMD power (Fig. 12) for the ninth segment and then splits the event into a mean component, a Gulf Stream shift, and further eddy dynamics with a propagating signature. According to Pérez-Hernández and Joyce (2014), other extreme northward shifts of the Gulf Stream occurred in July 1995 and April 2012. These events also show high DMD power. As already noted above, the ability of DMD to capture such events depends on the length of the segments (i.e., the frequency resolution), on the dynamical nature of these shifts, which may be picked up by low DMD modes, and on whether these events are fully captured within one segment or split between two consecutive segments. One therefore needs to look at individual segments to investigate why the power of a given segment is comparatively large or small. Pattern correlations are also a useful diagnostic for the interpretation of the DMDs.

IV. SUMMARY

In this study, we have demonstrated that the physics-consistent machine learning method multi-resolution dynamic mode decomposition (mrDMD) is able to extract dynamically relevant patterns of ocean variability. We applied mrDMD to sea surface temperature and sea surface height fields. We find that mrDMD systematically decomposes SST and SSH fields into meaningful patterns on different time scales. This allows for a systematic analysis of multiscale systems and of the climate system in particular. We show that mrDMD is able to identify annual cycle modes, which can vary from year to year, without supervision. This is an important aspect in the analysis of climate dynamics, since a time-varying annual cycle can provide an alternative basic state for the study of climate anomalies (Wu et al., 2008; Pezzulli et al., 2005). When using a fixed annual cycle, all changes, e.g., due to global warming, become part of the anomalies. However, changes in the annual cycle can have pronounced impacts on the climate system and, therefore, on our understanding of it. We also show that mrDMD is able to extract actual ENSO events from the SST data set without supervision. ENSO is one of the most important modes of climate variability and occurs on a broad range of time scales (Timmermann et al., 2018). This makes mrDMD a very attractive method for the analysis of multi-scale systems, since no a priori filtering is necessary. mrDMD also seamlessly provides a decomposition into a local basic state and eddy fields, allowing state-dependent eddy-mean flow interaction studies.
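As a rough illustration of how such a decomposition is produced, here is a heavily simplified mrDMD recursion in the spirit of the original formulation: at each level, exact DMD is applied to the current window, the slow modes are kept for that level, their contribution is subtracted, and the window is split in two. This is a sketch under our own simplifications (fixed rank, a simple slow-mode criterion), not the authors' code.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (space x time), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A = U.conj().T @ X2 @ Vh.conj().T / s          # reduced operator
    eigs, W = np.linalg.eig(A)
    Phi = X2 @ Vh.conj().T / s @ W                 # DMD modes
    amps = np.linalg.lstsq(Phi, X1[:, 0], rcond=None)[0]
    return Phi, eigs, amps

def mrdmd(X, r=10, max_level=3, level=1, rho=2.0):
    """Minimal mrDMD: keep slow modes in each window, subtract, recurse."""
    n_t = X.shape[1]
    Phi, eigs, amps = dmd(X, min(r, n_t - 1))
    freq = np.abs(np.angle(eigs)) / (2 * np.pi)    # cycles per snapshot
    slow = freq <= rho / n_t                       # slow relative to the window
    modes = [(level, Phi[:, slow], eigs[slow])]
    # time evolution of the slow modes over this window, then remove it
    dyn = np.vander(eigs[slow], n_t, increasing=True)
    X_res = X - ((Phi[:, slow] * amps[slow]) @ dyn).real
    if level < max_level and n_t >= 4:
        half = n_t // 2
        modes += mrdmd(X_res[:, :half], r, max_level, level + 1, rho)
        modes += mrdmd(X_res[:, half:], r, max_level, level + 1, rho)
    return modes
```

The non-overlapping halving is exactly what produces the segment structure discussed above, including its drawback that an event straddling a split can be missed at that level.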
Here, we used non-overlapping time windows for our mrDMD analysis, which puts some constraints on identifying specific events (such as ENSO events) if they are spread across two windows. However, this can be relaxed to an overlapping-windows analysis. mrDMD is also computationally relatively inexpensive, making it an attractive analysis method and potentially a prediction method. For instance, Gottwald and Gugole (2019) use DMD to identify regime transitions in the North Atlantic region and the Southern Hemisphere. DMD also has potential for subgrid-scale modeling, as shown by Gugole and Franzke (2019). For the sea surface height fields of the Kuroshio and the Gulf Stream, mrDMD is capable of identifying dynamically interesting and complex events related to changes in the position and intensity of the currents. While it can highlight these mean state changes about as well as other methods, such as EOF decomposition, it also provides information about the dynamically propagating eddy component of the flow. Such dynamically evolving components consist of a real and an imaginary DMD mode, whose correlations with the original SSH data are cyclic and shifted by 90° with respect to each other, signifying a propagating signal. As highlighted by the weakening of the correlations, these dynamical DMD modes can also become less important as time progresses within a specific time window. The detailed mrDMD decomposition of the flow allows us to investigate isolated events and associate relevant drivers with the respective modes.
MicroRNA-674-5p induced by HIF-1α targets XBP-1 in intestinal epithelial cell injury during endotoxemia

Intestinal mucosal integrity dysfunction during endotoxemia can contribute to translocation of intestinal bacteria and a persistent systemic inflammatory response, which both fuel the pathophysiological development of sepsis or endotoxemia. The pathogenesis of intestinal damage induced by endotoxemia remains poorly understood. Here, we identified the microRNA (miR)-674-5p/X-box binding protein 1 (XBP-1) axis as a critical regulator and therapeutic target in preventing intestinal crypt cell proliferation during endotoxemia. MiR-674-5p was markedly increased in intestinal epithelial cells (IECs) during endotoxemia, and its induction depended on hypoxia-inducible factor-1α (HIF-1α). Intriguingly, gene expression microarray analysis revealed that XBP-1 was downregulated in IECs over-expressing miR-674-5p. miR-674-5p was found to directly target XBP-1 protein expression. In vitro, anti-miR-674-5p enhanced sXBP-1 expression and facilitated intestinal crypt cell proliferation. Blockade of miR-674-5p promoted XBP-1 activity, attenuated intestinal inflammation, and expedited intestinal regeneration, resulting in protection against endotoxemia-induced intestinal injury in mice. More importantly, the survival of endotoxemia mice was significantly improved by inhibiting intestinal miR-674-5p. Collectively, these data indicate that control of the novel miR-674-5p/XBP-1 signaling axis may mitigate endotoxemia-induced intestinal injury.

Introduction

Endotoxemia is the most common cause of mortality in most intensive care units and accounts for more than 250,000 deaths in the United States annually 1. Endotoxemia is the host inflammatory response to severe, life-threatening infection and results in organ dysfunction, including that of the lung, kidney, and intestine 2. Endotoxemia-induced intestinal injury is believed to have an important impact on the pathophysiology of endotoxemia and is considered the "motor" of the systemic inflammatory response 3,4. Endotoxemia induces several aberrations in the intestinal epithelium, involving barrier dysfunction 3,4, magnified epithelial apoptosis 5-7, and production of several inflammatory factors 8,9. Moreover, intestinal epithelial integrity plays a vital role in the physical barrier dysfunction induced by endotoxemia. The small intestinal epithelium normally renews every three and a half days, with proliferating and differentiating cells moving from the crypts to the tips of the villi 10. Intestinal integrity, which is achieved by a balance of cell proliferation and cell death, has been shown to be injured in inflammatory pathological conditions such as endotoxemia or inflammatory bowel disease 11,12. Recent evidence has suggested that impaired cell proliferation is a critical factor in disturbing intestinal epithelial integrity in endotoxemia 13. Given that intestinal cell proliferation is important in endotoxemia-induced intestinal injury, researchers have been seeking protective agents for the intestine that encourage intestinal cell proliferation and maintain intestinal cell homeostasis. MicroRNAs (miRNAs) are a class of endogenously expressed noncoding RNAs of 21-23 nucleotides that bind with partial sequence homology to the 3′-untranslated region (UTR) of target mRNAs and inhibit translation 14.
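To make the targeting logic concrete: a miRNA typically represses an mRNA when the reverse complement of its seed (nucleotides 2-8) occurs in the target's 3′ UTR. The sketch below illustrates this canonical seed-match test; the miRNA and UTR sequences shown are placeholders for illustration only, not the actual miR-674-5p or XBP-1 sequences used in the paper.

```python
# Hypothetical illustration of canonical miRNA seed matching (nt 2-8).
COMP = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna, utr):
    """Return 0-based positions in `utr` matching the miRNA seed (nt 2-8).

    Sequences are 5'->3' RNA strings; the match searched for is the
    reverse complement of the seed, as it would appear in the 3' UTR.
    """
    seed = mirna[1:8]                         # nucleotides 2-8
    site = seed.translate(COMP)[::-1]         # reverse complement
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# placeholder sequences for illustration only
mir = "UCACUGAGAUGGGAGUGGUGUA"
utr = "AAACUCAGUGAAUGCACUCAGUGAA"
print(seed_sites(mir, utr))                   # -> [3, 16]
```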
High-throughput and functional studies have shown that miRNAs play crucial roles in many aspects of cellular physiology as well as in pathological processes such as inflammation and tumorigenesis 14,15. Recently, miRNAs have been shown to function as modulators of various aspects of gut epithelial homeostasis, including intestinal cell proliferation, apoptosis, and differentiation 16-18. Several intestinal epithelial-specific miRNAs, including miR-222 17, miR-322/503 19,20, miR-21/155 21, miR-195 22, miR-122b 23, and miR-29b 24, have been found to modulate intestinal epithelial cell (IEC) proliferation, apoptosis, and cell-to-cell interaction. However, the roles of miRNAs in endotoxemia-induced intestinal injury remain to be explored. In the present study, we examined 30 novel miRNAs chosen from the miRNA expression profile of mouse embryos 25 and identified changes in the expression of 10 of them in endotoxemia-induced intestinal injury. Moreover, we identified miR-674-5p as a key miRNA that was markedly induced in endotoxemia-induced intestinal injury and was found to target X-box binding protein 1 (XBP-1), which, in turn, inhibited intestinal crypt cell proliferation and exacerbated intestinal injury during endotoxemia.

Upregulation of miR-674-5p in mouse IECs during endotoxemia-induced intestinal injury

Total RNA was extracted from IECs isolated from the small intestines of mice treated with LPS and used to investigate altered miRNA expression by real-time PCR. Of ~30 miRNAs selected from the expression profile of miRNAs in mouse embryos 25, only 10 miRNAs in IECs of mice exhibited a significant change in expression following LPS treatment. While miRNAs 681, 719, 33, and 695 were downregulated, miRNAs 711, 16-1, 345, 674-5p, 301, and 143 were upregulated (Fig. 1a, b). Among these miRNAs, miR-674-5p exhibited the highest upregulation following LPS treatment. A similar change in miR-674-5p expression was also observed in IECs of septic mice treated with S. aureus (Fig. 1c). miR-674-5p induction in IECs during endotoxemia-induced intestinal injury was confirmed by northern blotting (Fig. 1d). miR-674-5p was previously identified via massively parallel signature sequencing technology, but its targets and function have remained elusive. Next, we investigated the function and significance of miR-674-5p in mouse IECs following LPS treatment.

These data confirm that XBP-1 has a protective impact on LPS-induced IEC injury in vitro. The miR-674-5p mimic significantly weakened luciferase expression in luciferase-XBP-1 3′ UTR-transfected cells, while the sequence-scrambled oligonucleotide had no effect (Fig. 5c). As expected, luciferase expression was not significantly changed by the miR-674-5p mimic or the sequence-scrambled oligonucleotide in luciferase-control 3′ UTR-transfected cells. Our results suggest that miR-674-5p directly blocks XBP-1 expression. In IEC-6 cells, XBP-1 expression remained relatively stable following LPS stimulation, but a dramatic increase in sXBP-1 and XBP-1 expression was observed following treatment with the anti-miR-674-5p oligonucleotide (Fig. 5d). Similarly, in mouse colonic adenocarcinoma CT-26 cells, miR-674-5p expression increased following LPS stimulation, and blocking miR-674-5p markedly upregulated sXBP-1 and XBP-1 expression (Fig. 5e). These results indicate that miR-674-5p directly targets XBP-1 expression under conditions of LPS stimulation.
Suppression of miR-674-5p protects against endotoxemia-induced intestinal injury by regulating XBP-1

To study the role of miR-674-5p in endotoxemia-induced intestinal injury in vivo, we used systemic injection of anti-miR-674-5p oligonucleotide to specifically lower miR-674-5p expression in mouse IECs. After three injections of anti-miR-674-5p oligonucleotide, miR-674-5p was significantly downregulated in IECs (Fig. 6a, b). Mice that received anti-miR-674-5p oligonucleotide showed significantly less intestinal epithelial injury than those that received the sequence-scrambled oligonucleotide control (Fig. 6c, d). The inflammatory biomarkers tumor necrosis factor (TNF)-α and interleukin (IL)-6 were also reduced in the small intestinal mucosa of anti-miR-674-5p-treated mice compared with the control (Fig. 6e, f). Given that intestinal inflammation is closely associated with intestinal permeability and integrity, a bacterial burden assay was performed and intestinal proliferation was assessed. We found that the bacterial burden was markedly alleviated (Fig. 6g) and IEC proliferation was significantly improved in endotoxemia mice treated with anti-miR-674-5p oligonucleotide (Fig. 6h). Further analysis revealed that, under LPS-induced ER stress, blocking miR-674-5p distinctly boosted XBP-1 expression, but not that of eIF-2α or ATF-6 (Fig. 6i). XBP-1 has been shown to promote cell survival under pathophysiological conditions 29. The morphological and molecular changes in endotoxemia mice treated with anti-miR-674-5p oligonucleotide suggest that inhibition of miR-674-5p prolongs the survival of endotoxemia mice (Fig. 6j). These results suggest that miR-674-5p-mediated downregulation of XBP-1 is important for the development of endotoxemia-induced intestinal injury.

[Fig. 2, panels c-f: (c) Western blot analysis of HIF-1α in IEC-6 cells using whole cell lysates collected at various time points after LPS stimulation. (d) IEC-6 cells transfected with a sequence-scrambled oligonucleotide control or HIF-1α small interfering RNA were subjected to LPS stimulation, and whole cell lysates were collected at 24 h. (e) Induction of miR-674-5p by LPS stimulation: HIF-1α+/+ and HIF-1α-/- mouse embryonic fibroblasts were stimulated for 24 h, and RNA was extracted for real-time PCR analysis of miR-674-5p; fold changes over the value of HIF-1α+/+ cells with LPS (arbitrarily set as 1) are shown; *P < 0.01 versus HIF-1α+/+ cells with LPS. (f) HIF-1α binding to the miR-674-5p promoter during LPS stimulation: HIF-1α+/+ and HIF-1α-/- cells were stimulated with LPS or PBS for 24 h, and cell lysates were collected for chromatin immunoprecipitation analysis of HIF-1α binding to miR-674-5p promoter DNA. Values are presented as means ± SD, n = 6 in each group; *P < 0.05 versus IECs with PBS.]

Blockade of miR-674-5p encourages intestinal crypt cell proliferation via XBP-1 under LPS stimulation

The above in vivo experiments implied that blocking miR-674-5p has a protective effect in endotoxemia-induced intestinal injury via regulation of XBP-1. We further studied the effect of miR-674-5p in vitro under LPS stimulation. Intestinal crypt cells, regarded as intestinal progenitor/stem cells, were isolated from the small intestines of mice. Ki67, a frequently used marker of proliferation, was expressed solely in intestinal crypts and in isolated crypt cells, not in villi (Supplementary Fig. 1A-C). Isolated crypt cells could form large colonies within 21 days (Supplementary Fig. 1D).
However, the proliferative capacity of intestinal crypt cells was reduced by 75% following LPS treatment and was subsequently increased by ~50% following treatment with anti-miR-674-5p oligonucleotide (Fig. 7a, b). The two proliferation biomarkers Notch1 and Bmi1 were greatly downregulated in isolated intestinal crypt cells after LPS treatment, and their expression was significantly restored by anti-miR-674-5p (Fig. 7c, d). More importantly, the ER stress-related protein sXBP-1, but not ATF-6 or eIF-2α, was significantly increased in intestinal crypt cells treated with anti-miR-674-5p oligonucleotide compared with the sequence-scrambled oligonucleotide control (Fig. 7e). These results further support that miR-674-5p inhibits the proliferation of intestinal crypt cells in response to LPS treatment through the XBP-1 pathway.

Discussion

This study identified an miRNA-mediated signaling pathway that regulates endotoxemia-induced intestinal injury. We demonstrated that miR-674-5p is a critical mediator in preventing IEC proliferation in the intestine in response to endotoxemia. Gene expression microarray analysis and the luciferase assay demonstrated that XBP-1 is a direct target of miR-674-5p and that the miR-674-5p-mediated decrease in XBP-1 increased intestinal inflammation and inhibited intestinal crypt cell proliferation. Inhibiting miR-674-5p markedly mitigated intestinal injury induced by endotoxemia and increased survival. To our knowledge, there are only a few studies in which a single miRNA has been reported to significantly exacerbate intestinal epithelial damage following endotoxemia or sepsis by inhibiting IEC proliferation. To study the role of miR-674-5p in endotoxemia-induced intestinal injury in vivo, systemic injection of anti-miR-674-5p oligonucleotides was performed to block miR-674-5p in IECs. miR-674-5p was found to markedly inhibit IEC proliferation under conditions of endotoxemia, suggesting that directly blocking miR-674-5p expression in IECs may be a potential therapeutic strategy to alleviate intestinal injury following endotoxemia. Moreover, miR-674-5p induction in IECs could be directly controlled by modulating HIF-1α expression. Several studies have indicated that HIF-1α expression is significantly induced under inflammatory conditions 26-28,31-33, which is consistent with our results (Fig. 2). HIF-1α-mediated changes in miRNA expression have been found to have a critical effect on the initiation and development of several pathophysiological processes, including ischemic kidney injury, colitis, and gastric cancer 26,34-37. The HIF-1α/miRNA pathway induced by inflammation may be a universal feature of inflammation-associated diseases. In this study, induction of HIF-1α following LPS stimulation in IECs facilitated the increase in miR-674-5p expression, resulting in impaired cell proliferation. This was supported by the evidence that the induction of miR-674-5p was abrogated in the absence of HIF-1α in LPS-treated cells (Fig. 2), suggesting that control of the HIF-1α/miR-674-5p pathway exerts cytoprotective effects in endotoxemia-induced intestinal injury. However, whether miR-674-5p modulates HIF-1α in intestinal injury caused by endotoxemia remains to be answered. XBP-1 is a member of the CREB/ATF basic region-leucine zipper family of transcription factors and functions as a key factor in the unfolded protein (ER stress) response 38.
As one of the three mechanistically distinct arms of the ER stress response, which comprise the Ire1α/XBP-1, PERK/eIF2α, and ATF-6 pathways, cleavage of cytoplasmic XBP-1 mRNA by the endoribonuclease Ire1α under conditions of ER stress produces the spliced form sXBP-1, which translocates to the nucleus and upregulates its target genes; their protein products operate in ER-associated degradation, the entry of proteins into the ER, and protein folding, ultimately regulating inflammation, the immune system, and cell proliferation 39-49. Previous studies have demonstrated that XBP-1 can modulate cell proliferation and tissue regeneration. In angiogenesis, XBP-1 was found to boost vascular endothelial cell proliferation via growth factor signaling pathways. XBP-1 was also shown to be crucial for smooth muscle cell proliferation through transforming growth factor (TGF)-β-mediated pathways that accelerate neointimal formation 42,43. Moreover, in epithelial cell homeostasis, XBP-1 appears to be required for the proliferation of pancreatic acinar cells, β-cells, and hepatocytes, which in turn expedites pancreatic and liver regeneration 44,46. In epithelial malignant neoplasms such as esophageal squamous cell carcinoma and breast cancer, XBP-1 can promote malignant cell propagation via different signaling pathways 47,48. In this study, we demonstrated that blocking XBP-1 decreased the proliferation of IECs during endotoxemia and that, by restraining miR-674-5p expression, enhanced XBP-1 could accelerate IEC proliferation, especially in the crypts, in endotoxemia-induced intestinal injury. However, one study showed that XBP-1 deficiency in IECs resulted in epithelial hyperproliferation via activation of STAT3 signaling 49.

[Fig. 5, panels c-f: (c) Luciferase reporter assay conducted using constructs with the XBP-1 3′ UTR or an antisense control sequence; CCC-HIE-2 cells were cotransfected with these constructs along with the scrambled miRNA or miR-674 mimic; *P < 0.05 versus control 3′ UTR; three independent experiments were performed. (d) IEC-6 cells transfected with a scrambled control or anti-miR-674 oligonucleotide were stimulated with LPS, and whole cell lysates were collected at the indicated time points; three independent experiments were performed. (e) Quantitative PCR analysis of miR-674-5p expression at 24 h in CT-26 cells treated with LPS; *P < 0.05 versus PBS. (f) CT-26 cells transfected with a scrambled control or anti-miR-674 oligonucleotide were stimulated with LPS, and whole cell lysates were collected at the indicated time points; western blotting of these lysates is shown.]

The disparity between our findings and this study might be partly explained by the fact that PERK and XBP-1 act as two important branches of the ER stress response that can both promote IEC proliferation. IECs with knockout of XBP-1 exhibited unresolved ER stress due to hyperactivation of Ire1α; pathological ER stress could result in high expression of PERK, which could encourage IEC proliferation following injury 49. In this study, blocking XBP-1, but not the PERK pathway, could mitigate IEC proliferation. Moreover, we identified the targeting of XBP-1 by miR-674-5p as a potential therapeutic target for improving endotoxemia-induced intestinal injury. In conclusion, this study has highlighted miR-674-5p as a critical miRNA in alleviating endotoxemia-induced intestinal injury.
Elaboration of the HIF-1α/miR-674-5p/XBP-1 signaling pathway not only provides novel and important insight into the pathogenesis of intestinal injury caused by endotoxemia or sepsis, but also suggests a novel miRNA-based therapeutic target for prevention and treatment.

Fig. 6. Suppression of miR-674-5p protects against endotoxemia-induced intestinal injury by regulating XBP-1. (a) Real-time PCR of miR-674-5p; RNA from IECs at 72 h after LPS treatment in mice with or without anti-miR-674-5p oligonucleotide treatment. Values are presented as means ± SD, n = 6 in each group. *P < 0.01 versus PBS. (b) Northern blot analysis of miR-674-5p. Total RNA (10 μg) extracted from IECs isolated from mice at 72 h after LPS treatment was used for northern blotting; 5S rRNA was probed as a loading control. (c) Hematoxylin and eosin staining of formalin-fixed tissue sections at day 5 after LPS treatment in mice with or without treatment with anti-miR-674 oligonucleotide or the sequence-scrambled oligonucleotide control. Magnification, ×400. (d) Chiu's scores were measured and compared by analysis of variance with Tukey's post-hoc test. *P < 0.01 versus scrambled control. Values are presented as means ± SD, n = 6 in each group. (e) Levels of TNF-α at 72 h after LPS treatment in the small intestinal mucosa of endotoxemia mice with or without treatment with anti-miR-674 oligonucleotide or the scrambled control. *P < 0.05 versus scrambled. Values are presented as means ± SD, n = 6 in each group. (f) ELISA analysis of IL-6 protein expression at 72 h after LPS treatment in the small intestinal mucosa of endotoxemia mice with or without treatment with anti-miR-674 oligonucleotide or the scrambled control. *P < 0.01 versus scrambled. Values are presented as means ± SD, n = 6 in each group. (g) Bacterial counts in mesenteric lymph nodes at successive time points after LPS treatment in endotoxemia mice with or without treatment with anti-miR-674 oligonucleotide or the scrambled control. *P < 0.05 versus scrambled. Values are presented as means ± SD, n = 6 in each group. (h) Average number of BrdU-positive cells per crypt at 72 h following LPS treatment, determined by counting BrdU-positive cells in intact crypts. Values are presented as means ± SD, n = 6 in each group. *P < 0.05 versus scrambled control. (i) Western blot analysis of ER stress-related proteins in IECs isolated from endotoxemia mice with or without treatment with anti-miR-674 oligonucleotide or the scrambled control; β-actin was used as a loading control. (j) Survival curves of endotoxemia mice with or without treatment with anti-miR-674 or the scrambled control.

Animals, experimental sepsis and endotoxemia induction, and anti-miRNAs

The current study was approved by the Animal Care and Use Committee of Sun Yat-sen University, Guangzhou, China (approval number: 2018007). Experimental endotoxemia and sepsis models were induced, respectively, by administering lipopolysaccharide (LPS) from Escherichia coli (17.5 mg/kg, O55:B5; Sigma-Aldrich, St. Louis, MO, USA) intraperitoneally, at a dose of 350 μg in 100 μL of saline, or Staphylococcus aureus (10^8 colony-forming units [CFU] per mouse; ATCC 14458, SEB+ TSST-1-) intravenously, to 4-6-week-old mice weighing ~20 g.
C57BL/6 male mice were monitored at 4-h intervals through critical stages of disease and euthanized with chloral hydrate at objective, predefined endpoints: loss of circulation to the tail or feet, loss of responsiveness to stimuli, or a breathing rate of <120 breaths per minute. Survivors were monitored intensively for 6 days and euthanized 15 days after injection of LPS. Small intestines were harvested 3 days after injection of LPS for immunological, histopathological, serological, and bacteriological analyses. Anti-miRNA administration was performed as described elsewhere 50. Separate solutions of anti-miR-674-5p oligonucleotide and its scrambled negative control (Ambion, Austin, TX, USA) were diluted with in vivo-jetPEI solution (Polyplus-transfection) containing 10% (wt/vol) glucose, at a ratio of in vivo-jetPEI nitrogen residues per oligonucleotide phosphate of 5, according to the manufacturer's instructions. All solutions were shaken for 10 s and incubated for at least 15 min at 37 °C prior to injection. Each mouse received 400 μL of the saline-oligonucleotide mixture (corresponding to 300 μg of oligonucleotide per dose) through tail vein injection consecutively for at least 3 days before experimental endotoxemia, and continued to receive it until tissue collection or for at most 6 days after LPS injection. The intestines were harvested 24 h after the last injection. All injections were performed using a 30-gauge needle syringe with a single-mouse restrainer.

Histology and intestinal BrdU staining

A segment of the small intestine was stained with hematoxylin and eosin. Damage to the intestinal mucosa was evaluated using the criteria of Chiu's method 51 by two independent, experienced pathologists who were blinded to the study groups. A minimum of six randomly chosen fields of view from each mouse were evaluated under a microscope and averaged to determine mucosal damage, and the results of the two pathologists were averaged. Mice were injected with BrdU (150 mg/kg; Sigma-Aldrich) 4 h prior to sacrifice. For BrdU staining, sections were deparaffinized and treated with proteinase K (20 μg/mL) for 20 min at 37 °C. Staining was performed following a standard protocol with anti-BrdU antibody (1:100 in 5% bovine serum albumin [BSA]; Sigma-Aldrich) and a secondary antibody (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA), and color was developed using a DAB kit (Dako, Copenhagen, Denmark). BrdU-positive cells were counted in high-magnification (×400) fields, and the percentage of BrdU-positive cells in total crypts was scored by counting 100 intact crypts, as described for the proliferative index, and reported as the mean ± standard deviation (SD). Eight mice were evaluated in each group.

Isolation of intestinal crypt cells

Intestinal crypt cells were isolated and cultured as described in our previous study 52. Briefly, isolated small intestines were opened longitudinally and washed with cold phosphate-buffered saline (PBS). The tissue was chopped into ~5-mm pieces and washed again with cold PBS. The tissue fragments were incubated in 2 mM EDTA in PBS for 30 min on ice. Following removal of the EDTA medium, the tissue fragments were vigorously suspended in cold PBS using a 10-ml pipette. This fraction was passed through a 70-μm cell strainer (BD Biosciences, Franklin Lakes, NJ, USA) to remove residual villous material. Isolated crypts were centrifuged at 150-200 × g for 3 min to separate crypts from single cells. The final crypts were counted and pelleted.
A total of 500 crypts were mixed with 50 μl of Matrigel (BD Biosciences) and plated in 24-well plates. After polymerization of the Matrigel, 500 μl of crypt culture medium (DMEM/F12; Invitrogen) containing growth factors (10-50 ng/ml EGF (Peprotech), 500 ng/ml R-spondin 1, and 100 ng/ml Noggin (Peprotech)) was added. Isolated crypts were incubated in culture medium for 45 min at 37 °C, followed by trituration with a glass pipette. Crypt cells were passed through a cell strainer with a pore size of 20 μm and collected in culture medium.

Bacterial culturing

For CFU analysis of E. coli in mesenteric lymph nodes (MLNs), we harvested and homogenized MLNs in sterile PBS; the homogenates were serially diluted, plated, and incubated at 37 °C for 24 h.

Northern blot analysis

The sequence used for probing miR-674-5p in northern blots was "TACACCACTCCCATCTCAGTGC", and that of the internal control was "CACGGGAAGTCTGGGCTAAGAGACA". Briefly, 10 μg of total RNA isolated using the Ambion RNA extraction kit (Applied Biosystems) was resolved using a 15% acrylamide-bisacrylamide gel (19:1) containing 7 M urea in Tris-borate-EDTA buffer. Following transfer to a Hybond membrane (Amersham, Uppsala, Sweden) and ultraviolet crosslinking, the membrane was incubated with the radiolabeled hybridization probe in ULTRAhyb-Oligo hybridization buffer (Ambion). The membrane was then washed extensively before exposure to X-ray film at -70 °C.

Chromatin immunoprecipitation

Chromatin immunoprecipitation (ChIP) analysis of HIF-1α binding to the miR-674-5p promoter was performed using an assay kit (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. The primer sequences for the miR-674-5p promoter, purchased from Shanghai Generay Biotech Co., Ltd (Shanghai, China), were: forward, "GCTACCACATTTCATCTGACTAGAG"; reverse, "AGCAAGCACTTGATTTCACATAAC". Briefly, following fixation with formaldehyde, cell lysates were collected and sonicated to shear chromatin. The samples were then centrifuged to collect the supernatant for immunoprecipitation with anti-HIF-1α antibody. After several washes, the resulting immunoprecipitates were subjected to PCR analysis using the specific primers.

Microarray experiment

A mixture containing Lipofectamine (Invitrogen, Carlsbad, CA, USA) and miR-674-5p or the scrambled control dissolved in Opti-MEM (Invitrogen) was added to cells at 80% confluence, and cells were harvested 24 h after transfection. PCR was used to confirm successful transfection. The Agilent SurePrint G3 Human Gene Expression 8×60K Array was designed with eight identical arrays per slide, each array containing probes interrogating 27,958 Entrez Gene RNAs. The array also contained 1,280 Agilent control probes. The four scrambled samples were human IECs (FHC cells) transfected with the scrambled sequence. The six miR-674-5p samples were human IECs (FHC cells) transfected with miR-674-5p. Total RNA from each sample was isolated using TRIzol reagent according to the manufacturer's instructions (Invitrogen, Carlsbad, CA, USA) and further purified using the mirVana miRNA Isolation Kit (Ambion) according to the manufacturer's instructions. The purity and concentration of the RNA were assessed from OD260/280 readings using a spectrophotometer (NanoDrop ND-1000). cDNA labeled with the fluorescent dye Cy3-dCTP was generated by Eberwine's linear RNA amplification method and a subsequent enzymatic reaction.
The procedure was optimized using the CapitalBio cRNA Amplification and Labeling Kit (CapitalBio, Beijing, China) to produce high yields of labeled cDNA. DNA polymerase and RNase H were employed to synthesize double-stranded cDNA (dsDNA), and the dsDNA products were purified using a NucleoSpin Extract II PCR kit and eluted with 30 μL of elution buffer. The eluted dsDNA products were evaporated to 16 μL and subjected to 40-μL in vitro transcription reactions at 37 °C for 14 h using a T7 Enzyme Mix. A Klenow enzyme labeling strategy was used following reverse transcription with CbcScript II reverse transcriptase. Array hybridization was carried out in a hybridization oven (Agilent Technologies, Santa Clara, CA, USA) overnight at a rotation speed of 20 rpm at 42 °C, and the arrays were washed with two consecutive solutions. Data summarization, normalization, and quality control of the array data were performed using GeneSpring software V12 (Agilent Technologies). To screen for differentially expressed genes, threshold values of ≥1.5- and ≤-1.5-fold change and a Benjamini-Hochberg corrected P value of 0.05 were used. The data were log2 transformed and median centered by genes using the Adjust Data function of CLUSTER 3.0 software and then further analyzed by hierarchical clustering with average linkage. Finally, tree visualization was performed using Java TreeView software (Stanford University School of Medicine, Stanford, CA, USA). Data from this study are available from the National Center for Biotechnology Information under GEO accession number GSE67764.

Tumor necrosis factor-α and interleukin-6 assays

The concentrations of TNF-α and IL-6 in the small intestinal mucosa of mice were measured using commercial kits (eBioscience, San Diego, CA, USA) according to the manufacturer's instructions. After the stop solution was added, the plates were read at 450 nm (with 570 nm correction) on a MicroPlate Reader (BioTek, Seattle, WA, USA). The results are expressed as pg TNF-α/mg protein and ng IL-6/mg protein.

Statistical analysis

All experiments were performed at least in triplicate. Data are expressed as mean ± SD. Six mice were used in each group. Randomization was used in each independent experiment. Statistical significance was analyzed with one-way or two-way ANOVA for gene and protein expression, miRNA expression, cellular proliferation, luciferase activity, inflammatory factors, and BrdU-positive counts and rates between groups. Survival data were analyzed by log-rank test using GraphPad Prism software. Differences were considered significant if the probability of the difference occurring by chance was <0.05 (P < 0.05).
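The differential expression screen described in the microarray section above (fold-change threshold plus Benjamini-Hochberg correction) can be sketched as follows. This is a generic illustration on a log2 expression matrix, not the authors' GeneSpring pipeline; the column layout and the use of a t-test are our own assumptions.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def screen_degs(log2_expr, ctrl_idx, treat_idx, fc_cut=1.5, alpha=0.05):
    """Flag differentially expressed genes.

    log2_expr : genes x samples array of log2 intensities
    ctrl_idx / treat_idx : column indices of the two sample groups
    """
    ctrl, treat = log2_expr[:, ctrl_idx], log2_expr[:, treat_idx]
    log2_fc = treat.mean(axis=1) - ctrl.mean(axis=1)
    _, p = stats.ttest_ind(treat, ctrl, axis=1)
    reject, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    big_fc = np.abs(log2_fc) >= np.log2(fc_cut)   # >= 1.5-fold either way
    return big_fc & reject, log2_fc, p_adj
```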
Monte Carlo Simulation on Li Monolayer System Adsorbed on Cu(001) Surface

We conducted Monte Carlo simulations for a Li system adsorbed on a Cu(001) surface at various coverages of adatoms, σ > 1/2, σ = 1/2, and σ < 1/2. We show phase diagrams of the adatom arrangements, where the axes of the diagrams are the coverage σ, the strength of the substrate potential λ, and the temperature T̂. First, we study the case in which the natural distance between adatoms is rather short, b_nat < √2a, where a is the unit length of the substrate lattice. We find that the atoms form a "ladder structure", which fills the surface completely at σ = 3/5, as experimental results showed. At σ = 1/2, a c(2×2) structure is observed; the structure factor of this arrangement shows four (±1/2, ±1/2) spots in diffraction space. At σ < 1/2, we observe a complex structure including several ladder structures. The structure factor of this arrangement reproduces arced streaks connecting the four spots, as has already been observed more clearly by LEED. Second, in the case in which the natural distance between adatoms is larger, b_nat ≥ √2a, the ladder structure does not appear at either σ > 1/2 or σ < 1/2. Streaks appear; however, they are not arced but straight lines forming a square shape. [DOI: 10.1380/ejssnt.2005.492]

I. INTRODUCTION

Ordered structures with super-cells have been observed for simple metal monolayers adsorbed on transition metal surfaces; in particular, Li on Ni(001) (see Fig. 1) [1], Mg on Cu(001) [2], and Li on Cu(001) have been studied experimentally by low-energy electron diffraction (LEED). All of them show similar tendencies in the arrangements of their adatoms. In the experiments, several spots were observed in diffraction space, and several super-cell structures were proposed. The systems have been interpreted theoretically as physisorption systems, such as the Frenkel-Kontorova model, with mutual interactions among adatoms and a substrate potential [2,3]. In Section 2 of this paper, we show phase diagrams of the arrangements obtained by Monte Carlo simulation. Further, in Section 3, we reproduce the arced streaks in reciprocal space by calculating the structure factors of the adatomic arrangements.

II. MONTE CARLO SIMULATIONS AND PHASE DIAGRAMS

We conducted Monte Carlo simulations for a Li system adsorbed on a Cu(001) surface at various coverages, σ > 1/2, σ = 1/2, and σ < 1/2, with a Lennard-Jones type interaction potential having a certain natural distance between neighboring atoms and a sinusoidal substrate potential. The Hamiltonian used in this simulation is the sum of a pair-interaction term and a substrate term,

H = Σ_{i<j} W(r_ij) + Σ_j V(r_j),

where ∂W(r_ij)/∂r_ij = 0 at r_ij = b_nat; besides, g_x = (2π/a, 0) and g_y = (0, 2π/a). The first term, W(r_ij) = W(|r_j − r_i|), is the interaction energy between adatoms, and the second, V(r_j), is the sinusoidal substrate potential of strength E_s, built from the reciprocal vectors g_x and g_y, which depends on the locations of the adatoms. In addition, we define the "relative strength of the substrate potential" as λ ≡ E_s/A, with A the energy scale of the adatom-adatom interaction, and a nondimensional temperature T̂ = k_B T/A. Hereafter, we show several atomic arrangements for typical parameters.
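A minimal Metropolis sketch of this kind of simulation is given below. The paper does not specify the exact potentials or move sizes, so the Lennard-Jones form, the substrate term V(r) = (E_s/2)[2 − cos(g_x·r) − cos(g_y·r)], and all numerical parameters here are our own assumptions chosen to match the stated conventions (minimum of W at b_nat, reciprocal vectors of length 2π/a).

```python
import numpy as np

rng = np.random.default_rng(0)

a = 1.0                  # substrate lattice constant
b_nat = 1.39 * a         # natural adatom-adatom distance (case A)
A = 1.0                  # depth of the pair potential (assumed energy unit)
lam = 0.5                # relative substrate strength, lambda = E_s / A
That = 0.5               # nondimensional temperature, k_B T / A
L, n_sites = 10 * a, 10  # periodic 10a x 10a cell
N = 50                   # 50 adatoms on 100 sites -> coverage sigma = 1/2

def pair_energy(r):
    """Lennard-Jones-type pair potential with its minimum (-A) at r = b_nat."""
    s = b_nat / r
    return A * (s**12 - 2 * s**6)

def substrate_energy(pos):
    """Sinusoidal substrate potential built from g_x and g_y (assumed form)."""
    g = 2 * np.pi / a
    corr = 2 - np.cos(g * pos[:, 0]) - np.cos(g * pos[:, 1])
    return 0.5 * lam * A * corr.sum()

def total_energy(pos):
    # O(N^2) recomputation per move: simple, not fast
    delta = pos[:, None, :] - pos[None, :, :]
    delta -= L * np.round(delta / L)              # minimum-image convention
    d = np.linalg.norm(delta, axis=-1)
    iu = np.triu_indices(N, k=1)
    return pair_energy(d[iu]).sum() + substrate_energy(pos)

# start from adatoms on randomly chosen substrate sites (avoids overlaps)
sites = np.array([(x, y) for x in range(n_sites) for y in range(n_sites)], float) * a
pos = sites[rng.choice(n_sites**2, size=N, replace=False)]
E = total_energy(pos)

for step in range(20000):                         # Metropolis sampling
    trial = pos.copy()
    i = rng.integers(N)
    trial[i] = (trial[i] + rng.normal(0.0, 0.1 * a, 2)) % L
    dE = total_energy(trial) - E
    if dE <= 0 or rng.random() < np.exp(-dE / That):
        pos, E = trial, E + dE
```

Sweeping lam, That, and the coverage in such a loop is what builds up the (λ, σ) and (T̂, λ, σ) phase diagrams discussed next.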
Then we show two types of schematic phase diagrams of the atomic arrangement of Li; one is two-dimensional, with the coverage σ as the vertical axis and the strength of the substrate potential λ as the horizontal axis. The other is three-dimensional, with axes σ, λ, and temperature T̂. We choose two values for the natural distance between adatoms, (i) b_nat = 1.39a and (ii) b_nat = 1.52a. We discuss each case in its own subsection.

A. b_nat = 1.39a (the essential point is that b_nat is less than √2a)

In this condition, adatoms fill the surface with a stable neighboring distance at σ = 3/5. In the absence of the substrate potential, they form a stable triangular lattice. If the substrate potential is finite, the atoms form a "ladder structure", in particular a (5√2×√2)R45° structure, which fills the surface completely (see Fig. 3(c)). Second, we show a schematic phase diagram in the (λ, σ) plane at T̂ = 0.5 in Fig. 4(a). Here we find the interesting stability of the c(2×2) phase. On the other hand, the ladder structure also shows re-entrant stability in the lower coverage region, which may cause the arc-shaped streaks; we will see the streaks in the following section. Finally, we show a schematic 3D phase diagram with axes T̂, λ, and σ in Fig. 4(b). Here we also see random or liquid-like phases.

Considering the case of σ > 1/2 again, adatoms are compressed at higher σ. Then the interaction energy generally becomes larger than the substrate potential. In other words, the substrate potential is effectively suppressed. Thus, adatoms may form a partial triangular lattice (see Fig. 3(c)). Here we find that the c(2×2) phase can remain at lower coverage with vacancies. On the other hand, at higher coverage, pure or mixed triangular structures appear in the arrangement (see text). Finally, we show a schematic 3D phase diagram with axes T̂, λ, and σ in Fig. 6(b).

III. STRUCTURE FACTORS AND ARC-SHAPED STREAKS

Here we calculate the structure factors of the arrangements obtained by the Monte Carlo simulation. We expect that some of the structure factors of the several arrangements will show arced streaks connecting the four spots, as observed experimentally in Fig. 1. We already know that the arced streak essentially originates from a second-neighbor distance between adatoms (d = 2a) in a c(2×2) unit in the ladder structure [4]. More specifically, the arc shape originates from a shrinking and a tilting of the second-neighbor pair [4] (see Fig. 7). We must consider the same two conditions as in Section 2, namely b_nat = 1.39a and b_nat = 1.52a. Hereafter, we argue a scenario according to Refs. 4 and 5.

A. b_nat = 1.39a: Since this distance is between d = a and d = √2a, adatoms make a ladder structure. The ladder structure includes c(2×2) units. A second-neighbor atomic pair (the basic distance in a c(2×2) unit inside a ladder structure is d = 2a) contributes to the streaks. In detail, the pair in the c(2×2) unit inside the ladder structure shrinks and tilts, thereby deforming the streaks [5].

B. b_nat = 1.52a (the natural distance between adatoms is slightly larger than √2a): Since the natural distance between adatoms satisfies b_nat > √2a, the ladder structure does not appear at either σ > 1/2 or σ < 1/2. Streaks appear; however, they are not arced but weak straight lines forming a square shape in reciprocal space (figure not shown).
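The structure factors used in Section 3 can be computed directly from the simulated positions. A minimal sketch, assuming the standard definition S(q) = |Σ_j exp(iq·r_j)|²/N evaluated on a grid of scattering vectors (the paper does not spell out its normalization):

```python
import numpy as np

def structure_factor(pos, a, qmax=1.0, nq=201):
    """S(q) = |sum_j exp(i q . r_j)|^2 / N on a q grid, in units of 2*pi/a.

    pos : (N, 2) array of adatom positions from the Monte Carlo run
    """
    n = len(pos)
    q1d = np.linspace(-qmax, qmax, nq) * 2 * np.pi / a
    qx, qy = np.meshgrid(q1d, q1d)
    phase = np.exp(1j * (qx[..., None] * pos[:, 0] + qy[..., None] * pos[:, 1]))
    return np.abs(phase.sum(axis=-1)) ** 2 / n

# a c(2x2) arrangement should produce the four (+-1/2, +-1/2) spots,
# while ladder-structure arrangements add streaks connecting them
```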
IV. CONCLUSION

Using Monte Carlo simulation, we obtained adatomic arrangements of Li on Cu(001) under the conditions b_nat = 1.39a and b_nat = 1.52a. In the former condition, where b_nat is relatively short, namely b_nat < √2a, adatoms form a ladder structure at σ = 3/5 and a c(2×2) structure at σ = 1/2. At σ < 1/2, adatoms form a complex structure partially including the ladder structure. Considering the condition of σ = 3/5 with b_nat = 1.39a again, adatoms fill the surface naturally and form a (5√2×√2)R45° structure, which is one of the ladder structures. Therefore, even at σ < 1/2, the atoms locally form a ladder structure. In the ladder structure, the second-neighbor distance of the c(2×2) unit (its normal distance is d = 2a) is generally shortened and tilted, especially by the existence of the zig-zag part of the ladder structure. Since the shortening and tilting of the atomic pairs occurs in many parts, greater intensity and sharpness of the arced streaks are expected. In the second condition, b_nat = 1.52a, the distance is relatively long, b_nat > √2a, and the ladder structure does not exist even at σ > 1/2. Neither does it exist at σ < 1/2; thus, no arced streaks appear. Adatoms form partial c(2×2) clusters or networks, but because the natural distance between adatoms is similar to the nearest-neighbor distance of the c(2×2) structure, d = √2a, only square-type streaks are produced.
Giant negative Goos-Hänchen shifts for a photonic crystal with a negative effective index

The Goos-Hänchen effects are investigated for a monochromatic Gaussian beam totally reflected by a photonic crystal with a negative effective index. By choosing an appropriate thickness for the homogeneous cladding layer, a giant negative GH lateral shift can be obtained, and the totally reflected beam retains a single beam of good profile even for a very narrow incident beam. The GH lateral shift can be very sensitive to a change in the refractive index of the cladding layer, and this property can be utilized for, e.g., switching applications. © 2006 Optical Society of America

OCIS codes: (260.2110) Electromagnetic theory; (290.4210) Multiple scattering; (120.5700) Reflection

References and links
1. F. Goos and H. Hänchen, "Ein neuer und fundamentaler Versuch zur Totalreflexion," Ann. Phys. 1, 333-346 (1947).
2. S. R. Seshadri, "Goos-Hänchen beam shift at total internal reflection," J. Opt. Soc. Am. A 5, 583-590 (1988).
3. I. Shadrivov, A. Zharov, and Y. S. Kivshar, "Giant Goos-Hänchen effect at the reflection from left-handed metamaterials," Appl. Phys. Lett. 83, 2713-2715 (2003).
4. I. Shadrivov, R. Ziolkowski, A. Zharov, and Y. Kivshar, "Excitation of guided waves in layered structures with negative refraction," Opt. Express 13, 481-492 (2005). http://www.opticsinfobase.org/abstract.cfm?URI=OPEX-13-2-481
5. L. G. Wang and S. Y. Zhu, "Giant lateral shift of a light beam at the defect mode in one-dimensional photonic crystals," Opt. Lett. 31, 101-103 (2006).
6. H. M. Lai and S. W. Chan, "Large and negative Goos-Hänchen shift near the Brewster dip on reflection from weakly absorbing media," Opt. Lett. 27, 680-682 (2002).
7. L. Wang, H. Chen, and S. Zhu, "Large negative Goos-Hänchen shift from a weakly absorbing dielectric slab," Opt. Lett. 30, 2936-2938 (2005).
8. D. Felbacq, A. Moreau, and R. Smaali, "Goos-Hänchen effect in the gaps of photonic crystals," Opt. Lett. 28, 1633-1635 (2003).
9. D. Felbacq and R. Smaâli, "Bloch modes dressed by evanescent waves and the generalized Goos-Hänchen effect in photonic crystals," Phys. Rev. Lett. 92, 193902 (2004).
10. M. Notomi, "Theory of light propagation in strongly modulated photonic crystals: Refraction-like behavior in the vicinity of the photonic band gap," Phys. Rev. B 62, 10696-10705 (2000).
11. K. Ohtaka, T. Ueta, and K. Amemiya, "Calculation of photonic bands using vector cylindrical waves and reflectivity of light for an array of dielectric rods," Phys. Rev. B 57, 2550-2568 (1998).
12. S. L. He, Z. C. Ruan, L. Chen, and J. Q. Shen, "Focusing properties of a photonic crystal slab with negative refraction," Phys. Rev. B 70, 115113 (2004).
13. H. M. Lai, C. W. Kwok, Y. W. Loo, and B. Y. Xu, "Energy-flux pattern in the Goos-Hänchen effect," Phys. Rev. E 62, 7330-7339 (2000).
14. J. J. Chen, T. M. Grzegorczyk, B. Wu, and J. A. Kong, "Role of evanescent waves in the positive and negative Goos-Hänchen shifts with left-handed material slabs," J. Appl. Phys. 98, 094905 (2005).
15. T. Tamir, "Leaky waves in planar optical waveguides," Nouv. Rev. Opt. 6, 273-284 (1975).
16. F. Schreier, M. Schmitz, and O. Bryngdahl, "Beam displacement at diffractive structures under resonance conditions," Opt. Lett. 23, 576-578 (1998).
17. R. D. Meade, K. D. Brommer, A. M. Rappe, and J. D. Joannopoulos, "Electromagnetic Bloch waves at the surface of a photonic crystal," Phys. Rev. B 44, 10961-10964 (1991).
18. S. Enoch, E. Popov, and N. Bonod, "Analysis of the physical origin of surface modes on finite-size photonic crystals," Phys. Rev. B 72, 155101 (2005).
19. R. Reinisch and M. Nevière, "Grating-enhanced nonlinear excitation of surface polaritons: An electromagnetic study," Phys. Rev. B 24, 4392-4405 (1981).
20. T. Sakata, H. Togo, and F. Shimokawa, "Reflection-type 2×2 optical waveguide switch using the Goos-Hänchen shift effect," Appl. Phys. Lett. 76, 2841-2843 (2000).

1. Introduction

The Goos-Hänchen (GH) shift refers to a lateral shift between the center of a reflected beam and that of the incident beam when total reflection occurs at the interface between two media. The GH shift effect has been studied both theoretically and experimentally for many years [1,2]. Interest in the study of the GH shift has been renewed by recent predictions of large or giant GH shifts (defined as a situation in which the absolute value of the GH shift is equal to or larger than the waist of the incident beam [3]) at the surfaces of some special media or structures, such as left-handed materials (LHMs) [3,4], a one-dimensional photonic crystal (PC) with a defect [5], and some absorbing media [6,7]. However, in these situations the reflected beam may split into two beams for a narrow incident beam [3,4], or the reflectance is small [6,7], when a giant GH shift occurs. GH shifts for a PC at a frequency inside or outside a bandgap have also been discussed [8,9]; however, they are only positive and not giant. Since a negative refractive index can enable backward waves [4], which may give a giant negative GH shift, giant negative GH shifts may also occur for a beam totally reflected from a PC with a negative effective refractive index. In the present paper, we show that a giant negative GH shift can be achieved if there is a homogeneous cladding layer of appropriate thickness. Furthermore, the profile of the totally reflected beam may remain nearly the same as that of the incident beam, even when a giant GH shift is achieved for a narrow incident beam.

2. Calculation and analysis

We study the same two-dimensional (2D) PC structure of negative refraction as the one considered in Ref. [10]. The 2D PC is formed by a triangular lattice of air holes (with radius 0.4a, where a is the lattice constant) in a GaAs background (n = 3.6). The air-PC interface is normal to the Γ-M direction. To achieve a giant negative GH shift, the GaAs background is terminated in such a way that there is a homogeneous cladding layer of thickness d over the PC half-space [see Fig. 1(a)]. For E-polarization (i.e., the electric field perpendicular to the plane of the holes), the effective refractive index (n_eff) of the PC is nearly isotropic and has a negative value in a frequency window starting from 0.29(2πc/a) [see the inset of Fig. 1(a)]. The incident beam impinges on the PC structure from air. To achieve total reflection from the PC structure (besides a giant negative GH shift), we only consider the situation where −1 < n_eff < 0. If we regard this PC as a homogeneous medium, the condition for total internal reflection (from air to the PC) at an air-PC interface [i.e., with d = 0 in Fig. 1(a)] is, from Snell's law, sin(θ_i) ≥ |n_eff|/n_Air. From Fig. 1(b) one sees that the total reflection condition calculated in this way is consistent with the numerical result calculated by a 2D layer-KKR (Korringa-Kohn-Rostoker) method [11,12].
Here the incident angle is indicated through the x component of the wave vector in air, i.e., k_x = k_0 sin(θ_i). The Gaussian beam incident on the PC can be expressed as a superposition of plane waves with a Gaussian angular spectrum centered at k_x = k_0 sin(θ_i) (see, e.g., [5]), where w is the waist of the Gaussian beam, θ_i is the mean angle of incidence of the Gaussian beam, and k_0 is the wave number in air. For simplicity, in this paper we only consider the situation when λ_0 > 2a (λ_0 is the free-space wavelength in vacuum), for which the reflection from the PC to air contains only the zero order of diffraction. We simulate the reflection of the Gaussian beam from the interface of the PC by using a layer-KKR method [11,12]. Since the center of the incident beam is at x = 0, the GH shift (the displacement between the centers of the incident and reflected beams) can be calculated from the center of gravity of the reflected beam intensity along the interface.

Figure 2 shows the GH lateral shifts as the mean incident angle increases, at several different frequencies, when the Gaussian beams (with beam width w = 25a) are totally reflected by the PC with d = 0 (i.e., no homogeneous cladding layer). The GH lateral shifts are negative for incident beams of low incident angles or low frequencies [ω ≤ 0.325(2πc/a) in Fig. 2]. The negative GH shifts are caused by the backward (total) energy flux of the evanescent wave [13,14] or leaky surface wave [15,16]. However, the lateral shifts are small (less than a; the beam waist is 25a) when the GH shift is negative. The GH shift can be enhanced greatly by a homogeneous cladding layer due to the excited leaky or surface waves [15-18], which transfer the energy of the incident beam along the interface. These surface (or leaky) waves may be backward or forward [3,15], and a giant negative or positive GH lateral shift may occur if an appropriate surface or leaky wave is excited in the PC structure of Fig. 1(a).

Figures 3(a) and 3(b) show the GH lateral shifts and the width of the reflected beam (which is calculated from the beam profile [3]) as the thickness of the cladding layer increases, when the parameters for the incident beam are chosen as ω = 0.335(2πc/a), w = 25a, and θ_i = 45°. As expected, the giant negative GH beam shift occurs at some special values of d. The corresponding (normalized) field intensity profiles of the reflected beams are shown in the insets (the unit for the lateral axis is a) in Fig. 3. From insets (2) and (3) of Fig. 3(a) one can see that the reflected beam has double peaks. This indicates that a backward leaky wave and a forward wave are excited simultaneously when a Gaussian beam is incident on such a PC structure. The main peak has a giant negative lateral shift due to the excited backward waves, while the small peak with a positive lateral shift is due to the forward waves. After optimization (by eliminating or suppressing the excitation of any forward wave), our PC structure can give a giant negative GH shift, and the peak with a positive shift due to a forward wave almost disappears [see inset (3) in Fig. 3(b)].
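The centroid definition of the GH shift is easy to evaluate numerically. Below is a minimal angular-spectrum sketch: a Gaussian spectrum is multiplied by a model unit-modulus reflection coefficient, and the shift is the difference between the intensity centroids of the reflected and incident beams. The linear reflection phase used here is a stand-in for the layer-KKR coefficient of the actual PC structure, and the value of the shift is an assumed example.

```python
import numpy as np

k0 = 2 * np.pi                    # wave number in air (lambda_0 = 1)
w = 25.0                          # beam waist
theta_i = np.deg2rad(45.0)
kx0 = k0 * np.sin(theta_i)

x = np.linspace(-200.0, 200.0, 2001)
kx = np.linspace(kx0 - 8.0 / w, kx0 + 8.0 / w, 801)   # narrow spectrum near kx0
spec = np.exp(-(kx - kx0) ** 2 * w ** 2 / 4.0)        # Gaussian angular spectrum

shift_true = -11.0                                    # assumed GH shift (units of x)
r = np.exp(-1j * shift_true * (kx - kx0))             # |r| = 1, linear phase

E_in = spec @ np.exp(1j * np.outer(kx, x))            # superpose plane waves at z = 0
E_out = (spec * r) @ np.exp(1j * np.outer(kx, x))

def centroid(field):
    inten = np.abs(field) ** 2
    return (x * inten).sum() / inten.sum()

print(centroid(E_out) - centroid(E_in))               # ~ -11 = -d(phase)/d(kx)
```

For total reflection, this centroid shift reduces to the stationary-phase result, i.e., the negative derivative of the reflection phase with respect to k_x.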
The PC structure in air can be considered as a 3-layered structure: the air layer, the cladding layer, and the PC layer, as shown in Fig. 4(a). For the configuration associated with Fig. 3, when the light (transmitted through the air-cladding interface) is incident on the PC layer, two beams of diffraction orders [i.e., the 0th and (−1)th] are reflected at the cladding-PC interface [see the right part of the ray diagram in Fig. 4(a)]. However, only the reflected ray of the 0th diffraction order can transmit into the air; that of the (−1)th diffraction order is totally reflected (internally) at the air-cladding interface, since the magnitude of its tangential wave vector component, k_{−1,x} = k_x − 2π/a, exceeds k_0 (k_{−1,x} is the x component of the wave vector for the ray of the (−1)th diffraction order). The light beams of the (−1)th and 0th reflection orders are reflected at the two interfaces, and the corresponding leaky waves can be excited when one of them satisfies the following self-consistent round-trip condition [15]:

2k_{z,i}d + φ_{A,i} + φ_{B,i} = 2mπ (m = 0, 1, 2, …; i = −1, 0), (3)

where i = −1, 0 corresponds to the beams of the (−1)th and 0th diffraction orders, respectively, k_{z,i} = (k_n² − k_{i,x}²)^{1/2} with k_n the wave number in the cladding layer, φ_{A,i} is the phase shift for the corresponding diffraction-order beam reflected back into the cladding layer at the air-cladding interface, and φ_{B,i} is that for the beam reflected by the PC layer. The resonance of the light of the (−1)th diffraction order [the three white rays in Fig. 4(a)] forms a backward leaky mode (in a zig-zag way from the right to the left), for which the energy transfers backward along the cladding layer. The resonance of the light of the 0th diffraction order gives a forward leaky mode. These leaky modes (in the cladding layer) are coupled with the leaky surface waves on the PC-cladding interface and enhance either the forward or the backward surface waves [16,19]. Both the surface waves and the leaky waves contribute to the GH effects. If our PC structure has a thick cladding, the direction of the GH shift mainly depends on the energy flux of the leaky modes excited in the cladding layer, since the energy flux of the surface wave is small compared with that of the leaky modes. However, if our PC structure has a thin cladding, the situation is quite different, and it is difficult to predict analytically whether the total energy flux of the leaky waves in the whole PC structure is forward or backward.

The negative GH shifts become large for the PC structure with a thin cladding layer [corresponding to the tips in Fig. 3(a)] when Eq. (3) for the 0th diffraction order is satisfied. This can be explained as follows. When Eq. (3) for the 0th diffraction order is satisfied, the phase difference between ray (2) (direct reflection of the incident ray at the air-cladding interface) and ray (3) [transmission into air of the wave reflected at the cladding-PC interface in the 0th diffraction order] is exactly equal to (2p+1)π (i.e., out of phase; p is an integer). Such destructive interference of rays (2) and (3) prevents the energy of ray (1) (the incident ray) from being directly reflected into air at the air-cladding interface, and also makes it more difficult for backward surface waves at the PC-cladding interface to leak into air. Consequently, the energy flowing backward along the surface of the PC structure increases, which then enhances the negative GH shift. On the contrary, in the case of constructive interference between rays (2) and (3), most of the energy of the incident light is directly reflected into air at the air-cladding interface and thus contributes little to the GH shift. Consequently, the negative GH shift effect becomes small even if a backward leaky mode in the cladding layer is excited.

When Eq. (3) is satisfied or almost satisfied for both the (−1)th and 0th diffraction orders, a backward leaky mode is excited in the cladding layer. Thus, the backward energy flux increases greatly, and the negative GH shift becomes giant [as for the cases of insets (1)-(3) in Fig. 3] or almost giant [marked with stars in Fig. 3(a)].
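To illustrate how the resonant cladding thicknesses in Fig. 3(a) arise from this round-trip condition, the sketch below solves Eq. (3) for d. The reflection phases φ_A and φ_B are unknown without the full layer-KKR computation, so they enter as free parameters; with both set to zero the code simply returns the Fabry-Perot-like spacing d_m = mπ/k_z.

```python
import numpy as np

def resonant_thicknesses(freq, n_clad, theta_i, order, phi_A=0.0, phi_B=0.0, m_max=5):
    """Cladding thicknesses d (in units of a) satisfying
    2*k_z*d + phi_A + phi_B = 2*m*pi  for a given diffraction order.

    freq   : normalized frequency, omega*a/(2*pi*c)
    order  : 0 or -1 (adds a reciprocal-lattice shift -2*pi/a to k_x for -1)
    phi_A, phi_B : interface reflection phases (assumed inputs)
    """
    k0 = 2 * np.pi * freq                         # vacuum wave number, units of 1/a
    kx = k0 * np.sin(theta_i) + order * 2 * np.pi
    kz2 = (n_clad * k0) ** 2 - kx ** 2
    if kz2 <= 0:
        raise ValueError("this order is evanescent in the cladding")
    kz = np.sqrt(kz2)
    m = np.arange(m_max)
    d = (2 * np.pi * m - phi_A - phi_B) / (2 * kz)
    return d[d > 0]

# e.g. the configuration of Fig. 3: freq = 0.335, n_clad = 3.6, theta_i = 45 deg
print(resonant_thicknesses(0.335, 3.6, np.deg2rad(45.0), order=-1))
```

With n_clad lowered to 2.0, the (−1)th order becomes evanescent in the cladding for these parameters, consistent with the disappearance of the giant negative shift discussed next.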
The resonance condition for the (−1)-th diffraction order is essential for a giant negative GH shift. We found that when the refractive index of the cladding layer is reduced to n_clad = 2.0, so that only the 0th-order diffraction beam can be transmitted into the cladding layer, the giant negative GH shift disappears [see Fig. 4(b)]. A narrower beam, which has a wider spatial spectrum [3,4], can more easily excite the backward and forward leaky waves simultaneously. Thus, the reflected beam may split into two (or even more) peaks and its width may become much wider if we do not optimize our PC structure (e.g., the thickness of the cladding layer). After optimization, our PC structure can give a giant negative GH shift while the profile of the reflected beam remains nearly the same (i.e., a Gaussian beam without any noticeable side lobe). Figure 5 shows the simulation result (with the FDTD method [4]) for the distribution of the electric fields in such a case. The parameters for the Gaussian beam are ω = 0.342(2πc/a), w = 10a, θ_i = 57°, and the cladding thickness is d = 0.113a. From this figure one sees that the Gaussian profile of the totally reflected beam is well preserved and the negative lateral shift of the center of the main peak is giant [the GH lateral shift calculated by the layer-KKR method is about 11a, which is larger than 10a (the waist of the incident beam)].

When the light in the cladding layer contains other higher orders of diffraction (such as the +1-th diffraction order), the GH shift effects become more complicated. Then the GH lateral shift can be giant positive or giant negative, and sensitive to the thickness of the cladding layer or to a small change in the refractive index of the cladding layer at some special values of d.

Conclusion

In the present paper we have studied the Goos-Hänchen lateral shift effects for total reflection upon a PC structure with negative effective refractive index. The mechanism of our PC structure is based on both the backward waves and the grating effect, whereas the mechanism of an LHM structure is based only on the backward waves (no grating effect). For example, in our PC structure, the (−1) diffraction order (due to the grating effect) plays an important role in achieving a giant negative GH shift. By choosing an appropriate thickness d of the cladding layer, the totally reflected beam can acquire a giant negative GH lateral shift while keeping a single beam of good profile, even for a very narrow incident beam. For an appropriately designed thickness of the cladding layer, the GH lateral shift can be very sensitive to a small change of the refractive index of the cladding layer, and this property can be utilized in applications such as switching, modulation and sensing.

Fig. 1. (a) Schematic diagram of the photonic crystal structure considered in this paper. The inset shows the effective index of the PC as the frequency increases. (b) The region of total reflection in the frequency range of negative refraction when d = 0. The shaded region corresponds to the total reflection region calculated by a layer-KKR method; the dashed line corresponds to the total reflection boundary estimated by sin(θ_i) = |n_eff|/n_Air, together with the light line.

Fig. 2. GH lateral shifts as the mean incident angle increases at several different frequencies, for Gaussian beams totally reflected by the PC with d = 0.
Fig. 3. (a) The GH shift as the thickness d of the cladding layer increases. (b) The width of the reflected beam as d increases. The parameters for the incident Gaussian beam are chosen as ω = 0.335(2πc/a), w = 25a and θ_i = 45°. The insets show the profiles of the field intensity for the reflected beams.

Fig. 4. (a) Schematic diagram for multi-reflection of light in the PC structure. (b) The GH shift as the cladding thickness d increases when the refractive index of the cladding layer is n_clad = 2.0. The incident Gaussian beam is the same as used in Fig. 3.

Fig. 5. FDTD simulation for the distribution of the electric fields of a Gaussian beam reflected from a PC structure with cladding thickness d = 0.113a.

Figure 6 shows the GH lateral shift as the refractive index n_clad of the cladding layer varies slightly (from 3.578 to 3.606). The mean incident angle of the Gaussian beam is θ_i = 73°, and the thickness of the cladding layer is kept as d = 0.75a. The profiles of the field intensity of the reflected beams at three different values of n_clad are also shown in the insets of Fig. 6. The center of the totally reflected Gaussian beam (whose profile remains nearly the same) can shift laterally from a positive position to a negative position by over 35a (larger than the waist of the beam) when there is a small deviation of 0.02 in the refractive index of the cladding layer (the refractive index for the host medium of the PC remains unchanged). This extraordinary property has potential applications in e.g. an optical switch (the small change of the refractive index can be induced by an applied voltage or a temperature change) [20], a modulator, and a sensor.

Fig. 6. Goos-Hänchen lateral shift as the refractive index of the cladding layer varies slightly. The thickness of the cladding layer is d = 0.75a. The insets show the corresponding profiles of the field intensity of the reflected beams at three different values of the refractive index.
4,639.6
2006-04-03T00:00:00.000
[ "Physics" ]
The stock market reaction to COVID-19 vaccination in ASEAN

Previous studies have shown that confirmed cases drive investor sentiment, which is reflected in stock returns. Based on this, vaccination growth is also expected to drive investor sentiment, which can be reflected in the return of the stock market in ASEAN. Therefore, this study explores the vaccination impact on stock returns in ASEAN countries. This study contributes to filling the gap of taking the COVID-19 vaccination impact on stock returns into account by using the panel regression model with HC and Driscoll and Kraay robust covariance matrix estimators, which address the cross-dependency and heterogeneity problems. This study is one of the early studies of the topic, especially in ASEAN. The panel regression model with HC and Driscoll and Kraay robust covariance matrix estimators uses three variables: the daily stock return, vaccine growth, and case growth. The balanced panel data include six countries and 117 daily series, making 702 observations used in the study. The results are conflicting in that daily vaccination growth negatively affects the stock return. This can arise for several reasons, such as the uncertainty in the financial market and the cross-dependency and heterogeneity detected in the model. We can see that investors still have a negative sentiment because COVID-19 has resulted in uncertainty in the financial markets of ASEAN. This gives us the practical implication that the governments of ASEAN member countries need to push vaccination policy more aggressively.

Introduction

No one could predict that the COVID-19 pandemic would be a long-lasting problem facing most of the world. All industries have been suffering from the ongoing COVID-19 pandemic until recently. This was shown by the estimation of the world's gross output (GO) in 2020, which was −3.5%; specifically, all countries also had a negative GO. For the Association of Southeast Asian Nations (ASEAN), the economic growth prospect is quite dark due to its strong dependence on the tourism sector. This sector provided 12% of ASEAN GDP in 2016, 1 and it was predicted to rise had there been no pandemic. The pandemic makes it hard for the tourism sector to recover because of the travel limitations and quarantine policies applied differently at each country's borders. 2 In the third quarter of 2020, most ASEAN countries suffered a decrease in GDP except for Vietnam, which increased its GDP by 2.91%. 3 While the pandemic is still ongoing, ASEAN will face a significant hurdle in its tourism sector, and policymakers should devise a new strategy to increase economic growth. One of the efforts to overcome the rising cases of COVID-19 is the vaccination program. Since the first quarter of 2021, almost all countries have started a COVID-19 vaccination program; however, the impact of the COVID-19 vaccination on economic growth is still rarely explored because the program is still in its early stages and there are not enough data to analyze. Despite that, the vaccination program is indeed an excellent start to cope with the spread of COVID-19. The program's goal is to slow down the spread and reduce positive cases of COVID-19. The increasing number of vaccinated people can help stabilize the situation. With the growing number of people vaccinated, people will feel more secure and think positively about the future, which will help raise reasonable expectations in the stock market.
Moreover, research in Vietnam 4 about the COVID-19 impact on the stock market shows that when Vietnam announced 0 cases of COVID-19 after the lockdown, the stock market in Vietnam rose significantly and became the best-performing market in April and May 2020. Based on this, the impact of the vaccination program is expected to have a similar outcome, namely an increase in stock market performance in other countries. However, it is not always the case that positive sentiment will increase the market's return; it can instead bring down the return because of abnormal trading volume in the market. 5 Especially in a period of uncertainty, investor behavior is hard to predict, and this is reflected in stock returns. Every country has different conditions and policies; this implies a variety of vaccination impacts on each country's stock market. Therefore, panel data analysis can be used to see the stock reaction to the vaccination program in ASEAN. Panel data can control for individual heterogeneity and identify the impact better than pure cross-section or time-series data alone. 6,7 Panel data analysis is often used in studies of stock market responses to the COVID-19 outbreak. [8][9][10][11] These studies used the stock return to see the market reaction to positive confirmed cases in the respective countries. However, the Ordinary Least Squares (OLS) estimation of the panel regression model requires the assumption of normally distributed and homogeneous errors, which is rarely met in real-life data, especially economic data such as stock prices. If the assumption of homogeneity is violated, the estimator will be biased. Many previous studies used OLS estimation but did not mention diagnostic testing. Hence, we address the issue where heterogeneity and cross-dependency occur in the panel model error. The heteroskedasticity-consistent (HC) estimator can be an alternative estimator that is robust to heterogeneity. However, heterogeneity can also arise from cross-dependency, and Driscoll and Kraay's robust covariance matrix estimator addresses this problem and improves the model results for vaccination on stock returns.

REVISED Amendments from Version 1

I have added several revisions to accommodate the reviewers' requests, as follows: 1. In the 5th paragraph of the introduction, we explained more about the research gaps in the previous studies using panel model regression and regarding the use of the heteroskedasticity-consistent (HC) estimator and Driscoll and Kraay's robust covariance matrix estimator in answering the research gaps. 2. In the 7th paragraph of the introduction, we explained the reason for selecting six countries to represent ASEAN stocks. 3. In the 1st paragraph of the Background and literature review, we added a theoretical link between the previous pandemic and the stock market. 4. In the 6th paragraph of the Background and literature review, more recent studies on the impact of vaccination on the stock market were added. 5. In the 6th paragraph of the Background and literature review (near the end), we mentioned that there is no endogeneity in the model; hence, the panel regression model is the most suitable. Any further responses from the reviewers can be found at the end of the article.

Several researchers have studied the financial market performance response to the COVID-19 vaccination. [12][13][14] These studies show that vaccination has a significant effect on the global market.
However, the variety of vaccines and each country's vaccination policies would result in different stock market reactions. Besides, to the best of our knowledge, no study considers the vaccination impact in ASEAN countries. Based on the previous studies, this study has three significant contributions. First, this study pictures the impact of vaccination on stock returns, and it also shows the benefit or the loss for future investors. Second, the object of this study is the six biggest ASEAN member countries: Indonesia, Thailand, the Philippines, Vietnam, Singapore, and Malaysia. These six countries have the highest GDPs in ASEAN, so we consider them to significantly influence the ASEAN stock market. 15 The study's outcome is expected to help ASEAN policymakers create policy that considers the vaccination effect on ASEAN stock returns. Third, this study detected several problems, such as cross-dependency and heterogeneity, that potentially lead to biased test results. Thus, this study used the HC and Driscoll and Kraay robust covariance matrix estimators to address those problems and improve the model results for vaccination on stock returns. This paper then explains the whole research process, and it is divided into several sections: introduction, background and literature review, data and methodology, results, discussion, and conclusion.

Background and literature review

The COVID-19 pandemic affects the stock market. 4 In the USA, COVID-19 impacted US stock market volatility more than any other pandemic since the 1900s. 16 The stock market itself is strongly interconnected. If we learn from a previous pandemic such as swine flu, the link from influenza to the stock market could be seen when it began to massively affect key individuals in the stock market, such as traders and market makers, and overall investing behavior, directly or indirectly, via decreasing liquidity due to the decline in information flows and production. 17 On the other hand, a pandemic can also negatively change investors' sentiment, which affects investment decisions and is reflected in stock prices. 18 In addition, research involving 64 countries 19 showed a negative impact of the COVID-19 pandemic on stock returns. In other words, stock prices reflect investors' expectations. When the downfall of the stock market can be seen as a pause in economic activities, it also means that there is price pressure from people's expectations and investors' fear. 10 Research conducted in India on the stock market reaction before and after lockdown showed that stock returns reacted positively to the policy after the lockdown announcement. 20 Based on that knowledge, the vaccination program and policies may also impact the stock market. The COVID-19 vaccination started on February 18th, 2021, with the high-risk population as the vaccination's priority target. 21 More than 905 million vaccine doses have already been administered worldwide, which means there are 12 doses available for every 100 people. But the gap between the available doses and the world population still exists. 22 The vaccination program in ASEAN has begun at different times in its member countries. The earliest country to start the vaccination program was Indonesia, on January 26th, 2021. From Figure 1A, we can see that the three most vaccinated countries in ASEAN are Singapore, Cambodia, and Brunei, while the lowest three are Indonesia, the Philippines, and Myanmar.
We also need to consider the size of the population in this case; in terms of the number of vaccinated people, Indonesia, Vietnam, and Thailand are the three highest countries in ASEAN (Figure 1B). This shows that ASEAN countries have been conducting the vaccination program, and this study explored the stock market reaction to the vaccination program in ASEAN countries using panel data analysis. Panel data analysis has often been used in analyzing stock market reactions. The research by 8 used panel regression to see the effect of uncertainty and confirmed cases on stock market returns in 43 countries; he found that confirmed cases have a greater impact in countries with a higher level of uncertainty. A similar study was done by 9, who applied panel regression to 47 countries to explore the effect of people's trust in government and society, as well as confirmed cases, on stock market volatility. They found that trust in government and society is significantly important for market volatility. The stock market reaction study in G-20 countries by 10 also used panel data regression and an event study to assess the impact of the COVID-19 outbreak on abnormal stock returns, and found that the COVID-19 outbreak has a negative effect on stock returns. Moreover, the research of 11, who studied the impact of country freedom and the COVID-19 growth rate on daily returns, also used panel regression. He found that the growth of COVID-19 significantly and negatively affected returns, and that there is a strong negative relationship between a country's freedom and the effect of the pandemic on the stock market. The link between the vaccine program and the stock market is also mentioned in several studies. The response of the global stock market to vaccine availability has been explored using five main market indexes (Dow Jones, Shanghai, S&P, FTSE, and EURONEXT). The results showed that stock prices after the vaccine arrival significantly outperformed those before the vaccine arrived. 12 Another study, using a panel data model of stock market volatility to examine the reaction to the vaccination program in the international financial market, shows that stock market volatility dropped significantly with mass vaccinations. 13 The effectiveness of the vaccine on the stock market was analyzed using a wavelet coherence approach in the USA: the COVID-19 vaccination, infection rate, and case fatality ratio significantly influence S&P 500 returns over the majority of the business cycle. In addition, a recent study shows that news about COVID-19 helps equities in general, whether it is positive or negative news. 23 Moreover, the COVID-19 vaccine announcement also positively impacted the Chinese stock market. Besides, many studies show that mass campaigns about COVID-19 vaccines receive positive sentiment from investors. [23][24][25][26] Therefore, the approvals of COVID-19 vaccines lighted the hope of humanity and economic recovery, and this phenomenon is reflected in the stock market. 14,25 However, none of these studies consider the existence of cross-dependency and heterogeneity problems. Therefore, this study incorporated COVID-19 case and vaccine growth into the analysis of ASEAN stock returns. Because the COVID-19 vaccine growth rate and positive cases are unrelated, the panel regression model is used, considering there is no endogeneity in the variables used.
We also used the HC estimators to address the issue of heterogeneity and the Driscoll and Kraay robust covariance matrix estimator to manage the cross-dependency in the panel data model.

Ethics

This study used secondary datasets obtained from investing.com and 27 that are available online. Thus, there are no ethical issues in this study.

Data

The variables used in this study are the daily stock returns, 10 computed following equation 1:

R_it = (P_i,t − P_i,t−1) / P_i,t−1,  (1)

where R_it is the daily return for index i, P_i,t is the closing price of stock index i at time t, and P_i,t−1 is the closing price of stock index i at time t−1 (a day before). The returns were calculated from the available stock indexes in ASEAN (Table 1), obtained from investing.com (among the ASEAN members, only six countries had a country stock index). The other variables are the growth of confirmed positive COVID-19 cases and the growth of vaccinated people, which follow equations 2 and 3: 9

Case Growth_i,t = (Case_i,t − Case_i,t−1) / Case_i,t−1,  (2)
Vaccine Growth_i,t = (Vaccine_i,t − Vaccine_i,t−1) / Vaccine_i,t−1,  (3)

where Case Growth_i,t is based on the confirmed cases in country i at period t and Vaccine Growth_i,t on the number of vaccinated people in country i at period t. All data are secondary data, originating as total vaccinations and confirmed COVID-19 cases gathered from 27, accessed on July 8th, 2021. All of the analyses in this paper were done using the R software. All of the variables are available as daily series; the period of this study, after the ASEAN countries had already started their vaccination programs, runs from March 13th, 2021, until July 7th, 2021.

Methodology

The general model, 28 which is estimated in the panel regression with the variables described above, follows equation 4:

Return_i,t = β_0 + β_1 Case Growth_i,t + β_2 Vaccine Growth_i,t + v_it,  (4)

where v_it = c_i + u_it is the composite error, c_i is the individual effect, u_it is the idiosyncratic error, and t = 1, …, T. Three models are estimated:
1. Pooled Model, which follows equation 5.
2. Fixed Effect Model, which follows equation 6, where c_i = z'_i α and ε_it is an error term.
3. Random Effect Model, which follows equation 7, 30 where v_it = c_i j_T + u_i, j_T is the T×1 vector of ones, and u_i is a group-specific random element (for each country).

After these models are estimated, several tests were carried out to determine which model performs best: i) the Hausman test 31 is used to determine the most suitable model between the fixed effect and random effect models, for which the null hypothesis is that the Random Effect Model is more suitable for the data; ii) the Breusch-Pagan Lagrange Multiplier test 32 is used to check for cross-sectional dependency; in the event that cross-dependency exists, the Driscoll and Kraay robust covariance matrix estimator will be used 33; iii) the Breusch-Pagan test 34 is used to check the homoskedasticity assumption; if any heteroskedasticity is detected, heteroskedasticity-consistent (HC) estimation can be used for the model estimation. 30,35,36
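Although the paper's analysis was done in R, the same pipeline can be sketched compactly with Python's linearmodels package; everything below (the file name and column names) is illustrative rather than taken from the paper's code. The kernel-based covariance in linearmodels implements the Driscoll-Kraay estimator.

```python
import pandas as pd
from linearmodels.panel import PanelOLS, PooledOLS, RandomEffects

# Hypothetical long-format panel: one row per (country, date), with columns
# stock_return, case_growth, vaccine_growth (names are illustrative).
df = pd.read_csv("asean_panel.csv", parse_dates=["date"])
panel = df.set_index(["country", "date"])

formula = "stock_return ~ 1 + case_growth + vaccine_growth"
pooled = PooledOLS.from_formula(formula, panel).fit()       # pooled model
re = RandomEffects.from_formula(formula, panel).fit()       # random effects
fe = PanelOLS.from_formula(formula + " + EntityEffects", panel)

# Heteroskedasticity-consistent (HC) standard errors for the fixed-effect model
fe_hc = fe.fit(cov_type="robust")

# Driscoll-Kraay covariance, robust to cross-sectional dependence
fe_dk = fe.fit(cov_type="kernel", kernel="bartlett")

print(fe_hc.params, fe_hc.pvalues, sep="\n")
print(fe_dk.params, fe_dk.pvalues, sep="\n")
```

Note that only the standard errors and p-values differ between the two robust fits; the fixed-effect coefficient estimates themselves are unchanged, which mirrors the comparison between Tables 2 and 4 in the paper.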
Results

Descriptive analysis

This study used balanced panel data that include six countries and 117 daily series, for a total of 702 observations. From Figure 2, we can see that the growth of confirmed cases and of vaccinated people fluctuates over time. Confirmed case growth fluctuates most in Vietnam, Thailand, Malaysia, and the Philippines. Meanwhile, vaccination growth decreased over time and sometimes stays flat because the growth did not change much. If we look at the series in Figure 3, the movement of each country evolves around the zero line, which confirms that the mean of the daily stock return is near 0.

Panel regression model of the stock reaction

The model estimations for the Pooled Model, Fixed Effect Model, and Random Effect Model are provided in Table 2. From the estimates, vaccine growth is the only variable that significantly affects the return of the stock in ASEAN countries. Moreover, both the pooled and random effect models produce similar coefficients. All of the estimates for vaccine growth have negative signs, which means that vaccine growth negatively influences the return of the stock, in contrast to the expectation. After the model estimation, several tests were conducted to find the most suitable model, using the Hausman test and the Breusch-Pagan Lagrange Multiplier test (Table 3). The Hausman test shows that the fixed effect model is more suitable than the random effect model, which means there is heterogeneity in vaccine growth and case growth on a daily basis. But the Breusch-Pagan Lagrange Multiplier test then shows that there is a cross-sectional dependence problem. This means that there is dependence among the stock returns of the ASEAN countries. In addition, the Breusch-Pagan test also shows that the variance is not homoskedastic, so we estimate the HC estimators for the fixed-effect model, which are robust to heteroskedasticity, and the Driscoll and Kraay robust covariance matrix estimator for the cross-dependency problem, in Table 4. The HC estimators and the Driscoll and Kraay robust covariance matrix estimators of the fixed effect model show results different from the previous estimations in Table 2 in terms of the standard errors and the p-values. For the HC estimators, all variables significantly affect the daily stock return in the ASEAN countries. Case growth indeed negatively and significantly influences the return of the stock, and vaccine growth also significantly affects the stock return negatively.

Discussion

Based on the results above, it is shown that case growth negatively impacted stock returns in ASEAN. This is in line with all of the previous studies. 8,9,11 But we found conflicting results for vaccination growth. Vaccination growth is supposed to impact the stock return positively, but instead it harms the stock return. This is in line with the theory that positive sentiment does not necessarily raise stock returns. 5,13 This shows that many things happen in trading: markets do not move solely on investors' sentiment, and many factors interfere with the market, such as government interventions, news, abnormal trading, etc. The goal of vaccination is herd immunity, which requires a certain proportion of the population to have immunity to a disease. Until now, except for the USA, the world is still far below the herd immunity threshold. 37 If the goal of herd immunity has not been achieved, then the stock market is still in a state of extreme uncertainty; that is why investor behavior is hard to predict. 5 Future investors must be aware of the risk if they want to invest in this situation. The practical implication of this study is that ASEAN countries need to create strategies that will outweigh the risk of this uncertainty to attract investors, such as strengthening the healthcare system to ease the uncertainty. The financial markets of more advanced countries have proven to be more robust to the pandemic effect because of their advanced technology, communication, and good citizens' welfare. 38 The ASEAN countries should strengthen their citizens' trust in them to stabilize the situation, as the Philippines did.
Second, a limitation of this study is the existence of either a cross-dependency problem or heterogeneity left in the model. The HC estimators only address the heterogeneity problem but not the cross-dependency problem. Conversely, the Driscoll and Kraay robust covariance matrix estimator is robust to the cross-dependency problem but not to heterogeneity. Neither can solve both problems simultaneously. The cross-dependency itself can be caused by spatial or spillover effects or by unobserved common factors. 39 This means that the countries depend on each other. The spatial effect of vaccination on stock returns needs to be explored further.

Conclusions

This study explores the vaccination impact on stock returns in ASEAN by using panel regression. It is found that both vaccination growth and case growth impact the return of stocks in ASEAN. While the case growth results are in line with previous studies, there are conflicting results in that vaccination growth negatively affects stock returns in ASEAN. These results contrast with the expectation that vaccination should bring positive sentiment to investors. The study fulfills the research objectives in that we found several mixed results on the vaccination impact and addressed the problems of cross-dependency and heterogeneity. These mixed results could be due to investor sentiment, which is in extreme uncertainty because of the absence of herd immunity until now. We can see that investors still have a negative sentiment because COVID-19 has resulted in uncertainty in the financial markets of ASEAN. This gives us the practical implication that the governments of ASEAN member countries need to push vaccination policy more aggressively. Even though the results showed that vaccination still negatively influences stock returns, vaccination growth does not show the distribution of vaccinated people. For example, in Indonesia, only 56.04% of the population has received the second vaccination. Meanwhile, new variants of COVID-19 keep evolving and creating new peaks in confirmed cases in many countries, including ASEAN. Thus, economic activities would also be halted if these are not carefully taken care of. So, the ASEAN countries must speed up their second vaccine distribution, and after that, they need to ensure their citizens get the third vaccine. This is a significant move to stabilize investors' trust that governments can guarantee citizens' welfare. This would make the economy slowly recover, investors' positive sentiment would eventually follow, and the financial market could be stabilized in the end. Second, there are cross-dependency and heterogeneity problems in the model, which can cause biased test results. Therefore, we suggest that future studies use another estimator or model that addresses heterogeneity and cross-dependency simultaneously, for example, spatial panel modeling or heterogeneous panel data models with cross-sectional dependence, 40 to avoid biased test results.

Walid Bakry, University of Western Sydney, Sydney, Australia

I checked the revisions required in round 1, and I can see that the author(s) did address all of the reviewers' questions.
The article is interesting, although the topic is not new; however, there are a couple of points I would suggest the authors consider: 3- The third and final point of concern for me in this article is the use of a panel model with only 2 independent variables, which explain little about the change in stock returns. You need to add some other control variables so you don't fall into the omitted variable bias issue. That is why your R² is so low and you are not getting much significance in Table 2.

"In addition, the Breusch-Pagan test also shows that the variance is not homoskedastic so we estimate the HC estimators for the fixed-effect model, which are robust to heteroskedasticity and Driscoll and Kraay robust covariance matrix in Table 4." And in the methodology, last paragraph: "In the event of cross-dependency exist, the Driscoll and Kraay robust covariance matrix estimator will be used; iii) The Breusch-Pagan Test in order to see the homoskedasticity assumption. If there is any heteroskedasticity detected, the heteroskedasticity consistent (HC) estimation can be used for the model estimation." And also in the 5th paragraph of the revised paper we have added more explanation about this: "However, the Ordinary Least Square (OLS) estimation of the panel regression model requires the assumption of normally distributed and homogenous errors, which are rarely met in many cases in real-life data, especially economic data such as stock prices. If the assumption of homogeneity is violated, the estimator will be biased. Many previous studies used OLS estimation but did not mention diagnostic testing. Hence, we address the issue of where heterogeneity and cross-dependency occur in the panel model error. Therefore, the Heteroskedasticity consistent (HC) estimator can be an alternative estimator to have robustness in heterogeneity. However, heterogeneity can also exist from cross-dependency, and Driscoll and Kraay's robust covariance matrix estimator addresses those problems and improve the model results for vaccination on stocks' return."

A non-stationary panel time-series model such as a panel VECM is not used due to the assumption that both the vaccination rate and case growth are independent, so the model has no endogeneity or multicollinearity. Hence, we used the panel data regression model. This has been addressed in the last paragraph of the Background and literature review as follows: "Because the COVID-19 vaccine growth rate and positive cases are unrelated, the panel model regression will be used, considering there is no endogeneity in the variables used."

5. Reviewer: Low R² and negative adjusted R². Answer: The low R² is the main cause of the negative adjusted R², which means that most of the variability of the dependent variable is not explained by the independent variables. This translates into the vaccination rate and case growth not explaining most of the variability of stock prices in the initial panel model. There is also a lack of assumption fulfilment regarding heterogeneity and cross-dependency in the initial panel model, so the next step was to estimate the model again using the HC estimators for the fixed-effect model, which are robust to heteroskedasticity, and the Driscoll and Kraay robust covariance matrix estimator for the cross-dependency problem. (This has been addressed in the previous question.)

© 2022 Robiyanto R.
This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Robiyanto Robiyanto, Faculty of Economics and Business, Satya Wacana Christian University, Salatiga, Indonesia

Good manuscript; however, some improvements should be made prior to indexing. This manuscript can contribute to the references regarding the dynamics of financial markets in the pandemic era. Minor changes required.

Major points: Please explain how vaccination could affect stock returns. As you know, some stock markets plummeted in the early pandemic, but these stock markets then recovered because the pandemic was already priced in. This is not merely because of vaccination; some robustness checking is needed. The awakening of retail investors (e.g., in Indonesia) etc. should be considered.

Minor points: Please sharpen the research gaps which need your contributions, especially the third contribution. Please describe some flaws in previous studies which need this solution and how your method could eliminate these flaws. Add some recent literature; some related literature has been published.

Is the work clearly and accurately presented and does it cite the current literature? Yes

Is the study design appropriate and is the work technically sound? Yes

If applicable, is the statistical analysis and its interpretation appropriate? Yes

Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes

Competing Interests: No competing interests were disclosed.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
6,571.6
2022-03-29T00:00:00.000
[ "Economics" ]
Tunable-wavelength second harmonic generation from GaP photonic crystal cavities coupled to fiber tapers

We demonstrate up to 30 nm tuning of gallium phosphide photonic crystal cavity resonances at ~1.5 μm using a tapered optical fiber. The tuning is achieved through a combination of near-field perturbations and mechanical deformation of the membrane, both induced by the fiber probe. By exploiting this effect, we show fiber-coupled second harmonic generation with a tuning range of nearly 10 nm at the second harmonic wavelength of ~750 nm. By scaling cavity parameters, the signal could easily be shifted into other parts of the visible spectrum. © 2010 Optical Society of America

OCIS codes: (350.4238) Nanophotonics and photonic crystals; (190.4390) Nonlinear optics, integrated optics; (230.5750) Resonators; (130.3120) Integrated optics devices; (190.2620) Harmonic generation and mixing

References and Links
1. H.-G. Park, S.-H. Kim, S.-H. Kwon, Y.-G. Ju, J.-K. Yang, J.-H. Baek, S.-B. Kim, and Y.-H. Lee, "Electrically driven single-cell photonic crystal laser," Science 305(5689), 1444-1447 (2004).
2. D. Englund, B. Ellis, E. Edwards, T. Sarmiento, J. S. Harris, D. A. B. Miller, and J. Vuckovic, "Electrically controlled modulation in a photonic crystal nanocavity," Opt. Express 17(18), 15409-15419 (2009).
3. A. M. Armani, R. P. Kulkarni, S. E. Fraser, R. C. Flagan, and K. J. Vahala, "Label-free, single-molecule detection with optical microcavities," Science 317(5839), 783-787 (2007).
4. D. Englund, A. Faraon, I. Fushman, N. Stoltz, P. Petroff, and J. Vucković, "Controlling cavity reflectivity with a single quantum dot," Nature 450(7171), 857-861 (2007).
5. Y. Akahane, T. Asano, B. S. Song, and S. Noda, "High-Q photonic nanocavity in a two-dimensional photonic crystal," Nature 425(6961), 944-947 (2003).
6. A. Faraon and J. Vuckovic, "Local temperature control of photonic crystal devices via micron-scale electrical heaters," Appl. Phys. Lett. 95(4), 043102 (2009).
7. D. Dalacu, S. Frederick, P. J. Poole, G. C. Aers, and R. L. Williams, "Postfabrication fine-tuning of photonic crystal microcavities in InAs/InP quantum dot membranes," Appl. Phys. Lett. 87(15), 151107 (2005).
8. G. Le Gac, A. Rahmani, C. Seassal, E. Picard, E. Hadji, and S. Callard, "Tuning of an active photonic crystal cavity by an hybrid silica/silicon near-field probe," Opt. Express 17(24), 21672-21679 (2009).
9. A. Faraon, D. Englund, D. Bulla, B. Luther-Davies, B. J. Eggleton, N. Stoltz, P. Petroff, and J. Vučković, "Local tuning of photonic crystal cavities using chalcogenide glasses," Appl. Phys. Lett. 92(4), 043123 (2008).
10. M.-K. Seo, H.-G. Park, J.-K. Yang, J.-Y. Kim, S.-H. Kim, and Y.-H. Lee, "Controlled sub-nanometer tuning of photonic crystal resonator by carbonaceous nano-dots," Opt. Express 16(13), 9829-9837 (2008).
11. G. Shambat, Y. Gong, J. Lu, S. Yerci, R. Li, L. Dal Negro, and J. Vucković, "Coupled fiber taper extraction of 1.53 µm photoluminescence from erbium doped silicon nitride photonic crystal cavities," Opt. Express 18(6), 5964-5973 (2010).
12. J.-Y. Kim, M.-K. Kim, M.-K. Seo, S.-H. Kwon, J.-H. Shin, and Y.-H. Lee, "Two-dimensionally relocatable microfiber-coupled photonic crystal resonator," Opt. Express 17(15), 13009-13016 (2009).
13. K. Rivoire, Z. Lin, F. Hatami, W. T. Masselink, and J. Vucković, "Second harmonic generation in gallium phosphide photonic crystal nanocavities with ultralow continuous wave pump power," Opt. Express 17(25), 22609-22615 (2009).
14. T. A. Birks and Y. W. Li, "The shape of fiber tapers," J. Lightwave Technol. 10(4), 432-438 (1992).
15. K. Rivoire, A. Faraon, and J. Vuckovic, "Gallium phosphide photonic crystal nanocavities in the visible," Appl. Phys. Lett. 93(6), 063103 (2008).
16. M. Kim, J. Yang, Y. Lee, and I. Hwang, "Influence of etching slope on two-dimensional photonic crystal slab resonators," J. Korean Phys. Soc. 50(4), 1027-1031 (2007).
17. C. W. Wong, P. T. Rakich, S. G. Johnson, M. Qi, H. I. Smith, E. P. Ippen, L. C. Kimerling, Y. Jeon, G. Barbastathis, and S.-G. Kim, "Strain-tunable silicon photonic band gap microcavities in optical waveguides," Appl. Phys. Lett. 84(8), 1242-1244 (2004).
18. T. Zander, A. Herklotz, S. Kiravittaya, M. Benyoucef, F. Ding, P. Atkinson, S. Kumar, J. D. Plumhof, K. Dörr, A. Rastelli, and O. G. Schmidt, "Epitaxial quantum dots in stretchable optical microcavities," Opt. Express 17(25), 22452-22461 (2009).
19. E. G. Spencer, P. V. Lenzo, and A. A. Ballman, "Dielectric materials for electrooptic, elastooptic, and ultrasonic device applications," Proc. IEEE 55(12), 2074-2108 (1967).
20. R. W. Dixon, "Photoelastic properties of selected materials and their relevance for applications to acoustic light modulators and scanners," J. Appl. Phys. 38(13), 5149-5153 (1967).
21. I. Fushman, E. Waks, D. Englund, N. Stoltz, P. Petroff, and J. Vuckovic, "Ultrafast nonlinear optical tuning of photonic crystal cavities," Appl. Phys. Lett. 90(9), 091118 (2007).
22. H. Altug and J. Vucković, "Polarization control and sensing with two-dimensional coupled photonic crystal microcavity arrays," Opt. Lett. 30(9), 982-984 (2005).
23. C. Manolatou, M. J. Khan, S. Fan, P. R. Villeneuve, H. A. Haus, and J. D. Joannopoulos, "Coupling of modes analysis of resonant channel add-drop filters," IEEE J. Quantum Electron. 35(9), 1322-1331 (1999).
24. M. V. Dutt, L. Childress, L. Jiang, E. Togan, J. Maze, F. Jelezko, A. S. Zibrov, P. R. Hemmer, and M. D. Lukin, "Quantum register based on individual electronic and nuclear spin qubits in diamond," Science 316(5829), 1312-1316 (2007).
25. K. Rivoire, A. Kinkhabwala, F. Hatami, W. T. Masselink, Y. Avlasevich, L. Mullen, W. E. Moerner, and J. Vuckovic, "Lithographic positioning of fluorescent molecules on high-Q photonic crystal cavities," Appl. Phys. Lett. 95(12), 123113 (2009).
Introduction

Nanophotonic cavities enhance light-matter interaction and have found many interesting uses in devices such as lasers [1], modulators [2], and biosensors [3], as well as in fundamental experiments employing single quantum dots [4]. Photonic crystal (PC) cavities have been particularly popular due to their small mode volumes and high quality factors [5]. However, since nanofabrication techniques frequently produce cavities at wavelengths different from their intended designs, many attempts have been made to tune cavities after fabrication. Mechanisms of both reversible and irreversible tuning that have been developed include local temperature control by Ohmic heaters [6], chemical etching [7], near-field tip perturbation [8], photosensitive material illumination [9], carbon dot deposition [10], and fiber taper probing [11,12]. Most tuning mechanisms provide a small resonance shift of only a few nanometers and are geared toward spectrally aligning cavities with quantum dots. On the other hand, large resonance shifts of light-emitting cavities may prove useful for sources of tunable visible or IR light.

Here we report the broad tuning of a photonic crystal cavity using a fiber taper probe. In past studies, fiber taper tuning was limited to a few nanometers because the cavity modes were tightly confined inside the photonic crystal membrane, minimizing the effects of the silica fiber [11]. In this study, we fabricate our structures in an optically thin membrane to increase the proximity effect of the fiber taper. Additionally, fiber-induced deformation of the thin membranes increases the cavity resonance shift. We use these effects to show that the second harmonic signal (generated by a cavity-enhanced process [13]) can be tuned by 10 nm, half the cavity tuning range.

Taper fabrication

Fiber tapers were fabricated in the same way as in our earlier work [11], using a flame brushing procedure [14] in which a standard single-mode communication fiber is simultaneously heated by a torch and pulled outward by motorized stages. The pull length was kept to a few mm to maintain the mechanical stability of the taper and to provide high enough tension to drag the taper along the sample surface. Taper diameters were approximately 1 µm to ensure single-mode behavior.

PC cavity fabrication

Samples were grown by gas-source molecular beam epitaxy on a (100)-oriented GaP wafer. A 160 nm thick GaP membrane was grown on top of a 1 µm thick sacrificial AlGaP layer. Structures were fabricated with e-beam lithography and etching, as described in [15]. The photonic crystal cavities are three-hole linear defects (L3 cavities) resonant around 1550 nm wavelength [5], with lattice constant a = 500-560 nm and hole radius r/a ≈ 0.2-0.25. The on-axis outer holes are shifted by 0.15a in order to improve the intrinsic Q of the cavity. Figure 1(a) shows an SEM picture of a tested cavity. After fabrication, the PC membranes were clearly seen to exhibit bowing, as evidenced by a circular ring in the undercut region of the GaP layer [Fig. 1(b)].
Modeling

Finite-difference time domain (FDTD) simulations were performed to determine the shift in the cavity resonance frequency produced by the fiber, both from perturbation of the cavity field and from mechanical deformation of the structure. It is assumed that these two effects can be decoupled and hence can be analyzed independently in simulation. We first model the GaP cavity without any perturbation as a t = 160 nm thick slab of refractive index n = 3.1 with lattice constant a = 530 nm and hole radius r = 125 nm. The fundamental mode [Fig. 1(c)] has wavelength 1581 nm and a quality factor of about 16,000.

The effect of the cavity field perturbation by a silica fiber taper, modeled as a cylinder of refractive index n = 1.45 covering the entire length of the photonic crystal membrane, is determined by simulating the cavity resonance and Q factor as the taper is scanned along the y-axis. The cavity resonance wavelength increases linearly as the taper offset, d, is decreased; on the cavity axis (d = 0), the cavity resonance is redshifted by 16 nm from the intrinsic value [Fig. 2(a)]. This value is much larger than previously observed taper-induced redshifts [11] because the cavity membrane is thin, and thus the field has a long evanescent tail in the direction perpendicular to the membrane. The effective index increase of the cavity is enhanced by the greater overlap of the cavity field with the silica material in the taper. To illustrate this effect, additional simulations with slabs thicker than 250 nm exhibited shifts of ~3 nm or less.

Interestingly, Fig. 2(b) demonstrates that there is not a monotonic relationship between taper offset and total cavity Q (Q_tot); rather, there appear to be points of enhanced coupling to the fiber (lower fiber Q, Q_f) at specific offsets. This is due to the fact that the cavity contains multiple polarization components with different parities and spatial patterns [e.g., see Fig. 1(c)], which couple with different strengths to the fiber depending on the taper lateral offset. Coincidentally, these points correspond to a reduction of the in-plane Q (Q_||) and lossy coupling into leaky TM modes [11,16]. The maximum coupling efficiency at an offset of ~400 nm is estimated as η_F = Q_tot/Q_f = 0.75, determined by taking the integrated flux through the fiber facets and comparing it to the total loss. Physical deformation of the PC membrane creates additional redshifts due to strain-induced elongation and the photoelastic effect [17,18]. Since the PC membrane is very thin, the force of a fiber taper in contact with its surface can enhance the bowing of the membrane. From geometrical considerations, the membrane can elongate by roughly 2% before it touches the GaP substrate below the undercut region. This elongation is an upper limit, since if the PC membrane were to touch the substrate, cavity confinement would be lost and the resonance would disappear. FDTD simulations show that if the PC membrane is within 300 nm of the GaP substrate, the cavity Q drops significantly. Therefore a realistic maximum for the membrane strain is ~1%.
Strain was modeled in FDTD as a uniform extension of both the lattice periodicity and the hole radius, since the fiber taper contacts the cavity over a small region and therefore presses down on the membrane at a central contact point. Simulations indicate that for a 1% elongation, the cavity resonance shifts by 11 nm, with a linear relation between shift and elongation for other strain values. It should be noted that the curvature of the PC membrane itself produces no observable resonance shift in FDTD for the geometrically constrained bowing radius of curvature.

A final contribution to the resonance redshift is expected from the strain-induced refractive index increase of the semiconductor material. Qualitatively, as the membrane expands, the electronic band gap decreases, increasing the absorption and also the refractive index through the Kramers-Kronig relations. This behavior is often modeled as a linear photoelastic effect given by Eq. (1):

Δn = −(n³/2) p ε,  (1)

where ε is the applied strain, p is the photoelastic coefficient, n is the refractive index, and Δn is the change in refractive index [19]. Each of these constants is a tensor reflecting the appropriate crystal axes. However, here we take an average value for p in order to obtain an approximate average index increase, and we assume isotropic strain. For a strain of ε = 0.01, and using an average value of p = −0.11 for GaP [20], we calculate a refractive index change of Δn ≈ 0.016, corresponding to a 7 nm redshift of the cavity resonance, in good agreement with the frequency-refractive index relation in [21]. All together, the three effects of fiber taper perturbation of the cavity field, physical elongation of the PC membrane, and the photoelastic refractive index increase of GaP sum to produce an expected redshift of over 30 nm.
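As a quick sanity check on the numbers quoted above, the snippet below evaluates Eq. (1) and a rough first-order estimate of the resulting resonance shift; the δλ/λ ≈ Δn/n scaling is our own approximation, while the 7 nm figure in the text comes from FDTD.

```python
# Numerical check of the photoelastic estimate, Eq. (1): dn = -(n^3/2) * p * eps
n, p, eps = 3.1, -0.11, 0.01
dn = -(n**3 / 2) * p * eps
print(f"delta n = {dn:.4f}")                 # ~0.016, as quoted above

# Rough first-order resonance shift, d(lambda)/lambda ~ dn/n (an approximation;
# the 7 nm value in the text comes from FDTD, not from this scaling)
lam = 1581.0                                 # nm, simulated fundamental mode
print(f"redshift ~ {lam * dn / n:.1f} nm")   # ~8 nm, same order as the FDTD result
```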
Cavity tuning

Fabricated cavities were first characterized by free-space cross-polarized reflectivity with a tungsten halogen lamp [22] to measure the intrinsic quality factor and resonance wavelength. Figure 3 shows a reflection spectrum from the cavity in Fig. 1(a), with an initial resonance wavelength of 1559 nm and an intrinsic Q factor (Q_0) equal to 3500.

Fig. 3. Experimental setup for performing the free-space reflectivity measurement, and results. Broadband IR light from a halogen lamp is linearly polarized and sent to the sample through a polarizing beam splitter. Cavity-coupled light is reflected off the sample and passes through the beam splitter into a spectrometer, where it is detected. The spectrum shows the resulting fundamental mode reflectivity peak at 1559 nm.

Fiber taper-coupled transmission measurements were performed to study the tuning behavior of the cavity. Fabricated tapers were mounted and aligned as shown in Figs. 4(a), 4(b) and as described in [11]. A broadband IR source (Agilent 83437a) was coupled into the fiber, polarized, and its polarization rotated to match the cavity TE polarization. The output transmission signal was monitored with an Optical Spectrum Analyzer (OSA).

The fiber taper was first positioned at an offset of d = 2-3 µm and brought into contact with the PC surface. Initially this caused no change in the transmission spectrum except a slight scattering loss. Tension was then applied in a direction perpendicular to the cavity main axis (the y-direction), and the fiber taper began to drag toward the cavity axis. When the taper reached an offset of ~1.5 µm, an initial coupling dip appeared at 1560 nm [Fig. 4(c)]. This signifies weak coupling and a nearly zero resonance shift, since the taper was far away. As the taper was brought closer to the cavity, the cavity resonance red-shifted progressively until a maximum of 1590 nm was reached for zero offset (i.e., taper aligned with the cavity axis). The coupling depth follows the qualitative behavior of Fig. 2(b), which predicts maximal coupling for a 0.4 µm laterally offset taper. Also in agreement with theory is the magnitude of the total redshift, which was 30 nm. In the experiment, the tensioned taper presses down on the PC membrane as it is dragged along the surface. This effect could be observed in the microscope image. Therefore, all the fiber- and strain-induced effects should be taking place for the maximum redshift attained. A close-up of the fundamental resonance tuning can be seen in Fig. 5, which plots many intermediate points between the wavelength limits. The data show the same results as before but with finer resolution. The measured Q values for the largest taper offset, largest coupling depth, and zero-offset taper are 2700, 520, and 1370, respectively. In order to decouple the two effects of taper redshift versus strain redshift, we repeated the transmission experiment by slowly lowering the taper over the central axis of the cavity while monitoring the cavity resonance. When the taper-cavity gap is below ~1 µm, the initial cavity resonance appears near the intrinsic value due to weak loading. As the taper is slowly lowered, the cavity resonance monotonically shifts to longer wavelengths until a maximum shift of ~17 nm is obtained at contact. Since the taper now gently rests on the surface, strain effects are minimized and the cavity resonance is redshifted because of the higher effective index of the cavity mode. From this point, the taper was tensioned while still in contact with the cavity, enhancing the visible bowing of the membrane and causing an additional 13 nm of shift. During tensioning, the contact area of the taper with the cavity was unchanged, and the only noticeable difference was an increase in the bowing of the membrane. Therefore we conclude that the fiber taper and strain effects sum together to produce a large tuning range for the cavity.
Tunable second harmonic generation

We now show that the ability to tune a resonance in a PC cavity translates into a large tuning range for the second harmonic generated in the cavity. Gallium phosphide has a large second-order nonlinearity, and previous experiments have shown that the second harmonic generation (SHG) signal from a PC cavity in GaP can be greatly enhanced by the cavity [13]. For this experiment a different cavity, with a tuning range of only ~20 nm at 1550 nm, was used to accommodate the tuning range of the pump laser. The fiber taper was first coupled to a cavity while monitoring the broadband transmission spectrum. The input to the fiber was then switched to a tunable infrared laser that was scanned through the cavity resonance. As this was done, the SHG signal was both collected at the output of the second arm of the fiber taper and seen optically on a CCD camera (Fig. 6). The scanned output profile matches the expected Lorentzian-squared curve, as seen from the fit of the data with a Q of 2200. The second harmonic generation and collection can be understood as follows: pump light from the laser first couples into the cavity TE mode from the fiber taper; the circulating pump light is then frequency doubled and coupled to a TM-like Bloch mode of the PC; finally, the TM Bloch mode couples back into the taper and is detected at the fiber output. Even though the TM mode is delocalized over the full PC membrane, there is still finite field overlap between it and the fundamental TM fiber mode, such that coupling back into the fiber takes place. Tuning of the SHG signal is performed by repeating the above process for several taper positions. At each new cavity wavelength, the pump laser was adjusted to match the resonance. Figure 7 shows a plot of five different SHG signals and a few matching transmission profiles for resonances between 1540 nm and 1560 nm, corresponding to a second harmonic tuning range of ~10 nm.

Fig. 7. Tunable second harmonic signal generated from the GaP cavity and detected through the fiber. Peaks correspond to maxima of the signal generated when the pump laser is zero-detuned from the cavity resonance. The fiber taper is used to redshift the cavity resonance, which translates into a change of the second harmonic output wavelength. Since the taper was aligned separately for each measurement, there is some variation in the transmission spectrum background and in the SHG output signal strength.

The higher coupling efficiency of pump light into the cavity attained via fiber pumping, compared to free-space pumping, can produce a larger second harmonic signal. From coupled-mode theory [23], the steady-state cavity energy for a fiber-coupled cavity is found to be 2η_F(1−η_F)P_in Q_0/ω_0, where η_F is the fiber coupling efficiency (η_F = Q_tot/Q_f as above), P_in is the input pump power, Q_0 is the intrinsic cavity Q, and ω_0 is the cavity frequency. For free-space pumping, the steady-state cavity energy for a cavity without a fiber is given by 2η_FS P_in Q_0/ω_0, where η_FS is the free-space coupling efficiency of the focused pump light into the cavity. A typical value of η_FS for this type of cavity is 5% [13], limited by the spatial mode matching of the pump beam and cavity. As the second harmonic generated is proportional to the cavity energy squared, the fiber-coupled cavity can produce a signal up to 25 times greater than the free-space pumped cavity for η_FS = 0.05.
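The 25× figure follows directly from the two cavity-energy expressions above; a small sketch makes the comparison explicit (η_FS = 0.05 as quoted in the text):

```python
# Ratio of fiber-pumped to free-space-pumped second-harmonic power, using the
# steady-state cavity energies quoted above; SHG scales as cavity energy squared.
def shg_ratio(eta_f, eta_fs=0.05):
    # U_fiber ~ 2*eta_f*(1 - eta_f)*P*Q0/w0,  U_free ~ 2*eta_fs*P*Q0/w0
    return (eta_f * (1 - eta_f) / eta_fs) ** 2

print(f"eta_F = 0.75 -> {shg_ratio(0.75):.1f}x")  # ~14x for the measured coupling
print(f"eta_F = 0.50 -> {shg_ratio(0.50):.1f}x")  # 25x maximum, at eta_F = 0.5
```

The quoted "up to 25 times" thus corresponds to the optimum η_F = 0.5, while the measured η_F = 0.75 still gives roughly a 14-fold enhancement.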
Conclusion

We have both theoretically and experimentally demonstrated a 30 nm tuning range of GaP photonic crystal cavities fabricated for 1550 nm operation. In these thin PC membranes, the evanescent tail of the cavity mode extends farther into the air cladding and is strongly affected by the fiber taper, which introduces a large effective index perturbation. The thin membrane also allows for enhanced taper-induced bowing, which deforms (stretches) the cavity structure and increases the material refractive index through the photoelastic effect. By taking advantage of the χ(2) nonlinearity in gallium phosphide along with the large tuning effect of the taper, we also demonstrate second harmonic generation that is tunable over a 10 nm range. By scaling cavity parameters, the wavelength of the tunable second harmonic can be shifted farther into the visible, since the bandgap of GaP is at 555 nm. Such a source could find applications in quantum optics spectroscopy [24], biosensing, and the imaging of molecules [25].

Fig. 1. (a) SEM image of a fabricated photonic crystal cavity in gallium phosphide. (b) Optical image of the same PC cavity. The central white strip is the linear cavity defect. (c) FDTD simulation profile of the dominant Ey component of the fundamental cavity resonance. The scale bars for (a) and (b) are 1 µm and 3 µm, respectively.

Fig. 2. (a) FDTD-simulated behavior of the cavity resonance as the fiber taper is displaced away from the cavity in the y-direction [see Fig. 1(a)], showing a wavelength shift from around 1597 nm to 1581 nm. Zero offset corresponds to the taper aligned with the cavity axis. (b) Simulated total Qtot, in-plane Q||, and fiber Qf cavity quality factors as a function of taper offset. Coupling to the fiber is strongest for a 0.4 µm offset.

Fig. 4. (a) Setup of the fiber-coupled transmission experiment. A broadband IR signal is sent through a fiber aligned along the cavity axis [x-direction in Fig. 1(a)] and the normalized transmission spectrum is measured. The blue double arrow indicates the direction of taper scanning [y-direction in Fig. 1(a)]; OL is the objective lens. (b) Cross-section schematic of the taper-induced bowing effect. The pink color indicates the GaP membrane and substrate, while the red indicates the remaining sacrificial AlGaP layer. The approximate dimensions are shown and the strain, ε, is noted. (c) Transmission spectra for the cavity with decreasing taper offsets in the direction of the black arrow. Spectra are vertically offset by 1 for clarity.

Fig. 5. Tuning of the fundamental cavity mode resonance by scanning a fiber taper from large offset in the y-direction (label 1) to zero offset (label 3). An intermediate point is also shown as label 2, where the transmission coupling is maximum.

Fig. 6. (a) Second harmonic signal (around 772 nm) collected from the fiber as a pump laser is scanned through the cavity resonance. (b) Visible SHG signal seen from an overhead CCD. The delocalized nature of the propagating TM Bloch mode can be seen from the scattered light.
5,413.8
2010-06-07T00:00:00.000
[ "Physics" ]
FLI1 induces erythroleukemia through opposing effects on UBASH3A and UBASH3B expression
Background FLI1 is an oncogenic transcription factor that promotes diverse malignancies through mechanisms that are not fully understood. Herein, FLI1 is shown to regulate the expression of the Ubiquitin Associated and SH3 Domain Containing A/B (UBASH3A/B) genes. UBASH3B and UBASH3A are found to act as an oncogene and a tumor suppressor, respectively, and their combined effect determines erythroleukemia progression downstream of FLI1.
Methods Promoter analysis combined with luciferase assays and chromatin immunoprecipitation (ChIP) analysis was applied to the UBASH3A/B promoters. RNAseq analysis combined with bioinformatics was used to determine the effect of knocking down UBASH3A and UBASH3B in leukemic cells. Downstream targets of UBASH3A/B were inhibited in leukemic cells either via lentivirus-delivered shRNAs or small-molecule inhibitors. Western blotting and RT-qPCR were used to determine transcription levels, MTT assays to assess proliferation rate, and flow cytometry to examine apoptotic index.
Results Knockdown of FLI1 in erythroleukemic cells identified the UBASH3A/B genes as potential downstream targets. Herein, we show that FLI1 directly binds to the UBASH3B promoter, leading to its activation and leukemic cell proliferation. In contrast, FLI1 indirectly inhibits UBASH3A transcription via GATA2, thereby antagonizing leukemic growth. These results suggest oncogenic and tumor suppressor roles for UBASH3B and UBASH3A in erythroleukemia, respectively. Mechanistically, we show that UBASH3B indirectly inhibits AP1 (FOS and JUN) expression, and that its loss leads to inhibition of apoptosis and acceleration of proliferation. UBASH3B also positively regulates SYK gene expression, and its inhibition suppresses leukemia progression. High expression of UBASH3B in diverse tumors was associated with worse prognosis. In contrast, UBASH3A knockdown in erythroleukemic cells increased proliferation, and this was associated with a dramatic induction of the HSP70 gene HSPA1B. Accordingly, knockdown of HSPA1B in erythroleukemia cells significantly accelerated leukemic cell proliferation. Consistently, overexpression of UBASH3A in different cancers was predominantly associated with good prognosis. These results suggest for the first time that UBASH3A plays a tumor suppressor role in part through activation of HSPA1B.
Conclusions FLI1 promotes erythroleukemia progression in part by modulating expression of the oncogenic UBASH3B and the tumor suppressor UBASH3A.
Supplementary Information The online version contains supplementary material available at 10.1186/s12885-024-12075-2.
UBASH3B has similar structural domains to UBASH3A and some overlapping functions. However, UBASH3B suppresses T-cell receptor (TCR) signaling by dephosphorylating ZAP-70 and Syk, two key molecules involved in the amplification of TCR-triggered signals [3,14,15]. UBASH3A and UBASH3B knockout mice exhibit no obvious phenotype until the TCR is stimulated. Upon stimulation, T cells from UBASH3A/UBASH3B double-deficient mice are hyper-proliferative and produce more IL-2 and IFNγ than wild-type T cells [3], underscoring the vital role of UBASH3A/B in T cell regulation and autoimmunity. UBASH3B expression is implicated in various cancers through its ability to bind CBL and block its ubiquitination activity [16,17].
Cells, culture conditions and drug therapy
The human leukemia (HEL 92.1.7, K562) and epithelial-like HEK293T (CRL-3216) cell lines were obtained from ATCC (US) and tested negative for mycoplasma. These cell lines were cultured and maintained in Dulbecco's Modified Eagle Medium supplemented with HyClone 5% fetal bovine serum (GE Healthcare, US).
RNA preparation and RT-qPCR
Total RNA was extracted using Trizol reagent (Thermo Fisher Scientific, US), cDNA was synthesized using the PrimeScript RT Reagent Kit (Takara Bio, CN), and RT-qPCR analysis was performed using the FastStart Universal SYBR Green Master Mix (Roche, CH) on a Step One Plus Real-time PCR system (Applied Biosystems/Thermo Fisher Scientific, US). The expression of the test genes is given relative to β-Actin. Three biological replicates in triplicate (n = 3) were performed for each gene. The primer sequences are listed in Table 1.
Promoter analysis and luciferase assays
The UBASH3A and UBASH3B promoter regions (see Figs. 2A and 3A) were amplified by PCR, cloned into the luciferase reporter vector pGL3-basic (Promega, US), and used in a luciferase activity assay, as previously described [38]. Briefly, 2.5 μg of the indicated promoter construct was co-transfected with either MigR1 (2.5 μg) or MigR1-FLI1 (2.5 μg) using a Lipofectamine 2000 kit (Thermo Fisher Scientific) into epithelial HEK293T cells, which had been seeded onto 6-well plates one day before, according to the manufacturer's protocol. Renilla luciferase (Promega, US) was used as an internal control for transfection efficiency.
ShRNA and siRNA expression
The construction of shFLI1 cells has been previously described [38]. The shUBASH3A, shUBASH3B, shHSPA1B and scrambled control vectors were generated by inserting oligonucleotides containing the corresponding shRNA sequences, or scrambled DNAs, into the BcuI restriction enzyme site within the PLent-GFP expression vector (obtained from Vigene Bioscience, US). The lentivirus particles were generated by co-transfecting the shRNA PLent-GFPs (10 µg) with the packaging plasmids psPAX2 (5 µg) and pMD2G (10 µg) (Addgene plasmids #12259 and #12260) into HEK293T cells, using Lipofectamine 2000. The supernatants were collected two days after transfection to transduce HEL cells. The positive cells were then selected via incubation with a medium containing puromycin (5 µg/ml; Solarbio, CN). ShRNA sequences are listed in Table 2. UBASH3A siRNAs and a negative control were purchased from GenePharma (CN). The UBASH3A siRNA was transfected into shUBASH3B cells using Lipofectamine 2000. Two days after transfection, cells were collected, RNA was extracted, and RT-qPCR was used to detect UBASH3A. For proliferation analysis, cells were transfected with siRNA for 48 h and assessed using an MTT assay every day for three days. SiRNA sequences are listed in Table 3.
RNAseq analysis and bioinformatics
Total RNA samples isolated from designated cells and appropriate controls were sent for RNAseq at BGI Genomics (CN). BGI also performed the data preprocessing, which we used to analyze the gene expression profiles between the shRNA-mediated knockdown and scrambled control groups. Differentially expressed genes (DEGs) were determined by the condition (log2FoldChange ≤ −1 or ≥ 1, padj < 0.05) and then analyzed by Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment. DEGs for UBASH3B and UBASH3A are shown in Supplementary Tables 1 and 2, respectively. The TCGA data analysis was obtained using GEPIA2 resources (http://gepia2.cancer-pku.cn/).
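As an illustration of the DEG criterion just described, the following minimal Python sketch filters a results table on |log2FoldChange| ≥ 1 and padj < 0.05. The DataFrame column names and the toy values are assumptions for the example, not the output of the authors' actual pipeline.

```python
# Illustrative DEG filter: keep genes with |log2FC| >= 1 and padj < 0.05.
import pandas as pd

def call_degs(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows meeting the differential-expression thresholds."""
    mask = (df["log2FoldChange"].abs() >= 1) & (df["padj"] < 0.05)
    return df[mask]

# Toy table (values are made up for the example):
toy = pd.DataFrame({
    "gene": ["UBASH3A", "UBASH3B", "HSPA1B", "ACTB"],
    "log2FoldChange": [1.8, -2.3, 1.2, 0.1],
    "padj": [0.001, 0.0004, 0.02, 0.9],
})
print(call_degs(toy))  # ACTB is filtered out (small fold change, large padj)
```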
Western blotting
Total protein from cell lines was extracted using RIPA buffer (Beyotime Institute of Biotechnology, CN) containing 1:100 PMSF (Solarbio, CN). The protein concentration was determined using a BCA kit (Solarbio, CN) according to the manufacturer's protocol. Equal amounts of protein were loaded into the wells of an SDS-PAGE gel and transferred to a PVDF membrane. The membrane was blocked using non-fat milk for 1 h at room temperature and incubated with primary antibody in blocking buffer overnight at 4°C. After three washes with TBST (Beyotime Institute of Biotechnology, CN) at room temperature, the membrane was incubated with anti-rabbit IgG (H + L) DyLight™ 800 4X PEG conjugated secondary antibody (5151s, Cell Signaling Technology, US) in blocking buffer at room temperature for 1 h. The following primary antibodies were used: the polyclonal rabbit test primary antibodies anti-FLI1 (ab133485, Abcam, UK), anti-UBASH3A (15823-1-AP, Proteintech, DE) and anti-UBASH3B (19563-1-AP, Proteintech, DE), and anti-GAPDH (G9545, Sigma Aldrich, US). Antibody dilution was conducted according to the manufacturer's instructions. The Odyssey system (LI-COR Biosciences) was used for western blot membrane imaging and analysis.
Apoptosis
Cells were incubated with compounds or vehicle for 24 h, as previously described [29]. Treated cells were washed with PBS, stained with an Annexin V and PI apoptosis detection kit (BD Biosciences, US) following the kit guidelines, and analyzed by flow cytometry.
Chromatin immunoprecipitation (ChIP) analysis
The ChIP analysis was performed as previously published [29]. In brief, formaldehyde was used to crosslink erythroleukemia HEL cells before they were centrifuged, and the pellet was then resuspended in Magna ChIP A/G kit lysis solution (Sigma-Aldrich, US). The fixed pellet was sonicated using a Sonics Vibra VCX150 (Ningbo Scientz Biotechnology, CN). A small aliquot of the chromatin was taken out to serve as an input control. Protein G Sepharose beads (Cell Signaling Technology, US) were added to the chromatin and incubated for one hour at room temperature. The immunoprecipitations were performed overnight at 4°C with 1 μg of ChIP-grade anti-FLI1 antibody (ab15289, Abcam, UK) or the negative control rabbit immunoglobulin G (IgG) antibody (Cell Signaling Technology, US). After centrifugation, the chromatin precipitates were washed and reverse-crosslinked. The precipitated chromatin was then incubated with proteinase K at 56°C for two hours; the DNA was purified with one phenol-chloroform extraction and resuspended in TE buffer. RT-qPCR was performed using this DNA to determine the amount of FLI1 binding within the promoter region. The percentage of input was calculated as previously described [29]. Amplified DNAs were also resolved on a 2% agarose gel. The ChIP was performed in at least three independent experiments. The primer sequences for the ChIP PCRs are as follows.
Statistical analysis
The statistical analysis was performed using a two-tailed Student's t-test or a one-way ANOVA with Tukey's post hoc test, using Prism 9 software (GraphPad Software Inc, US). The P values are indicated within the figures using a standard scheme: P < 0.05 (*), P < 0.01 (**), P < 0.001 (***), and P < 0.0001 (****). Where appropriate, the data are displayed as the mean (± SEM) from at least three independent experiments.
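The percent-of-input quantification follows ref. [29]; a commonly used formulation is sketched below for illustration only. The 1% input fraction and the Ct values are assumptions for the example, not measured values.

```python
# Common percent-of-input calculation for ChIP-qPCR (illustrative sketch).
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent of input chromatin recovered by the immunoprecipitation.

    The input Ct is first adjusted to represent 100% of the chromatin
    (subtract log2(1/input_fraction)), then compared with the IP Ct.
    """
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Toy Ct values for an anti-FLI1 ChIP versus IgG at a promoter amplicon:
print(f"anti-FLI1: {percent_input(ct_ip=26.0, ct_input=24.0):.3f}% of input")
print(f"IgG:       {percent_input(ct_ip=31.0, ct_input=24.0):.4f}% of input")
```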
FLI1 regulates UBASH3B positively and UBASH3A negatively in leukemic cells
While FLI1 is known to promote the initiation and progression of leukemias and other cancers [37], the underlying mechanism is not fully understood. To uncover its downstream targets, RNAseq analysis was used to identify genes whose expression is modulated in response to shRNA knockdown of FLI1 (shFLI1) in leukemic HEL cells [38]. Knockdown of FLI1 in HEL cells was previously shown to slow proliferation, alter the cell cycle and induce apoptosis [29,37]. Among the affected genes, UBASH3B expression was strongly downregulated in shFLI1 versus scrambled control HEL cells, whereas UBASH3A was elevated (Fig. 1A). These results raised the possibility that UBASH3A and UBASH3B may affect erythroleukemia progression through opposing functions. First, we confirmed these results by RT-qPCR, where reduced FLI1 expression (Fig. 1B) in shFLI1 cells was indeed associated with decreased UBASH3B (Fig. 1C) and increased UBASH3A (Fig. 1D).
To determine whether the differential effect of FLI1 on UBASH3A and UBASH3B is mediated by direct transcriptional regulation, we performed an in vitro FLI1 promoter binding assay. Figure 2A depicts a schematic of the UBASH3B promoter (P1 and P2), which contains a putative FLI1 binding site at positions −1611 to −1600 in P1 (Fig. 2B). The FLI1 binding site is absent in the UBASH3B-P2 promoter, which was used as a negative control (Fig. 2A). Transfection of these luciferase reporter plasmids into HEK293T cells alongside either the FLI1 expression vector (MigR1-FLI1) or vector control (MigR1) resulted in significantly higher luciferase activity for the P1 promoter when co-transfected with MigR1-FLI1. In contrast, the P2 promoter was refractory to FLI1 expression. Mutation of the FLI1 binding site in the P1 promoter (UBASH3B P1-mut, Fig. 2A) did not affect basal gene expression but conferred resistance to FLI1 over-expression (Fig. 2C). FLI1 binding to the UBASH3B promoter was further confirmed by chromatin immunoprecipitation (ChIP) (Fig. 2D), in which significantly higher binding was observed using the FLI1 antibody versus control IgG. Moreover, in ChIP-seq data in the GEO database, FLI1 strongly binds to the promoter of the UBASH3B gene (Supplemental Fig. 2). These results demonstrate direct regulation of UBASH3B expression by FLI1.
A similar strategy was used to generate the plasmids containing the UBASH3A P1 and P2 promoters (Fig. 3A); the latter contained a putative FLI1 binding site at positions −1321 to −1312 (Fig. 3B). Both the P1 and P2 promoters were associated with similar activation when co-transfected with the MigR1 expression vector (Fig. 3C), but MigR1-FLI1 inhibited luciferase activity, supporting the negative regulation of UBASH3A by FLI1. Interestingly, MigR1-FLI1 also inhibited luciferase activity when the FLI1 binding site within the P2 promoter was mutated (UBASH3A P2-mut, Fig. 3A and C). In the ChIP assay, FLI1 failed to bind the putative binding site identified within the UBASH3A promoter (data not shown). These results suggest that FLI1 indirectly regulates UBASH3A expression through another site or transcription factor. Indeed, strong binding of the GATA2 transcription factor within the UBASH3A promoter has been identified in the ENCODE database [39] (Fig. 3D). GATA2 is regulated by FLI1 [38] and thus may mediate the negative effect of FLI1 on UBASH3A expression in leukemic cells.
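For readers who want to reproduce this kind of promoter inspection, the sketch below scans a sequence for the GGA(A/T) core recognized by ETS-family factors such as FLI1. This is a generic illustration: the specific sites at −1611/−1600 and −1321/−1312 were defined by the authors' promoter analysis, not by this scan, and the sequence fragment here is hypothetical.

```python
# Locate candidate ETS-core motifs (GGAA/GGAT) on both strands of a promoter.
import re

ETS_CORE = re.compile(r"GGA[AT]")

def revcomp(seq: str) -> str:
    """Reverse complement of an upper-case DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_ets_cores(promoter: str, tss_offset: int):
    """Yield (position relative to the TSS, strand, matched core)."""
    for strand, seq in (("+", promoter), ("-", revcomp(promoter))):
        for m in ETS_CORE.finditer(seq):
            start = m.start() if strand == "+" else len(promoter) - m.end()
            yield (start - tss_offset, strand, m.group())

# Hypothetical fragment; tss_offset marks where the TSS sits in the string.
fragment = "TTACGGAAGTCCATCAGGATCC"
for hit in find_ets_cores(fragment, tss_offset=len(fragment)):
    print(hit)
```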
UBASH3A and UBASH3B downregulation affects leukemia cell proliferation
As FLI1 knockdown blocks leukemia cell proliferation [38], we next examined the impact of UBASH3A and UBASH3B on cell growth. To this end, UBASH3B was knocked down in HEL cells using lentivirus vectors containing four shRNAs, which resulted in reduced mRNA expression (Fig. 4A) and protein levels (Fig. 4B). Reduced UBASH3B expression in shUBASH3B cells resulted in significant growth suppression compared to the scrambled control cells (Fig. 4C). The expression of FLI1 was also slightly reduced (possibly due to a positive feedback) in shUBASH3B cells (Supplemental Fig. 3A and B). These results suggest an oncogenic role for UBASH3B in leukemia progression.
Three lentiviruses (shUBASH3A1-3) were also used to knock down UBASH3A in HEL cells, resulting in reduced mRNA expression (Fig. 4D) and protein levels (Fig. 4E). Unlike the effect of UBASH3B knockdown, UBASH3A depletion increased cell proliferation compared to scrambled control cells (Fig. 4F), suggesting an inhibitory role for this protein in erythroleukemic cells. As UBASH3B knockdown inhibited cell proliferation, we examined whether UBASH3A knockdown moderated this suppressive effect. Indeed, inhibition of UBASH3A in shUBASH3B cells using siRNA (Fig. 4G) significantly reduced growth inhibition compared to the control (Fig. 4H). The expression of the FLI1 oncogene was increased in shUBASH3A1 cells (Supplemental Fig. 3C and D). These results indicate that UBASH3A/B have opposing effects on leukemic cell proliferation.
UBASH3A and UBASH3B regulate the expression of common and unique genes
To uncover the mechanisms underlying the effect of UBASH3A and UBASH3B on leukemia progression, both shUBASH3B and shUBASH3A cells were assessed using RNAseq. Differentially expressed gene (DEG) analysis after UBASH3B knockdown revealed 800 genes with increased expression and 547 genes with decreased expression (Fig. 5A; Supplemental Table 2). Similarly, the DEG analysis of shUBASH3A1 versus scrambled control cells uncovered 317 upregulated and 407 downregulated genes (Fig. 5A; Supplemental Table 1). In comparison, in the shFLI1 RNAseq data [38], we identified 1373 downregulated and 916 upregulated genes (Fig. 5A). A KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway enrichment analysis for both the shUBASH3A (Fig. 5B) and shUBASH3B (Fig. 5D) regulated genes revealed significant changes associated with the MAP Kinase pathway (Fig. 5F), indicative of overlapping gene regulation. The MAP Kinase pathway genes were also altered in the shFLI1 RNAseq data (Supplemental Fig. 4A). In this analysis, 113 DEGs were common between shFLI1, shUBASH3A and shUBASH3B cells (Fig. 5E); a sketch of this overlap analysis is shown below. In addition to DEGs observed in both shUBASH3A and shUBASH3B cells, we identified DEGs unique to one of the two ubiquitin-associated ligases (Fig. 5E). A comparison between the upregulated or downregulated DEGs from shUBASH3B and shUBASH3A cells is shown in Supplemental Fig. 5A-D. This analysis reveals upregulation of MAP Kinase pathway genes in shUBASH3B cells and downregulation in shUBASH3A cells. The common DEGs in the MAP Kinase pathway and the expression variation among the genes affected by shUBASH3A, shUBASH3B and shFLI1 are shown as a heatmap (Fig. 5F and Supplemental Fig. 4B). These changes may partially account for the suppressive and oncogenic differences between these UBASH3 paralogs in leukemia cells.
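The overlap counts in Fig. 5E reduce to simple set operations on the three DEG lists. A minimal sketch, with placeholder gene sets standing in for the real DEG tables:

```python
# Count DEGs shared between, or unique to, the three knockdowns (toy sets).
shFLI1    = {"FOS", "JUN", "SYK", "HSPA1B", "GATA2", "UBASH3B"}
shUBASH3A = {"HSPA1B", "HSPA1A", "FOS", "DUSP1"}
shUBASH3B = {"FOS", "JUN", "SYK", "HSPA1B", "HSPA1A"}

common_all = shFLI1 & shUBASH3A & shUBASH3B   # shared by all three
unique_3a  = shUBASH3A - shFLI1 - shUBASH3B   # shUBASH3A only
unique_3b  = shUBASH3B - shFLI1 - shUBASH3A   # shUBASH3B only

print(f"common to all three: {sorted(common_all)}")
print(f"unique to shUBASH3A: {sorted(unique_3a)}")
print(f"unique to shUBASH3B: {sorted(unique_3b)}")
```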
UBASH3B ablation activates the AP1 (FOS-JUN) pathway and blocks expression of SYK to inhibit leukemia proliferation
The MAP Kinase pathway heatmap (Fig. 5F) revealed that FOS and JUN expression was elevated in shUBASH3B knockdown cells (compared to shUBASH3A and scrambled control cells), and this was further confirmed by RT-qPCR (Fig. 6A-C). The AP1 genes were also induced in shFLI1 cells (Supplemental Fig. 4B). Moreover, overexpression of FLI1 in K562 (K562-fli1) cells resulted in downregulation of both FOS and JUN (Supplemental Fig. 6A-C). Since elevated FOS and JUN expression was associated with growth suppression in shUBASH3B cells, we treated shUBASH3B cells with the selective AP1 inhibitor T5224 [40], which significantly accelerated their proliferation (Fig. 6D). Moreover, T5224 significantly inhibited camptothecin (CPT, an anti-FLI1 compound [20,41,42])-induced HEL apoptosis in culture (Fig. 6E, F).
RNAseq analysis in shFLI1 cells identified drastic downregulation of the spleen tyrosine kinase gene SYK, which was also detected in shUBASH3B cells, suggesting regulation of this gene by FLI1 [36]. The SYK gene has been previously linked to leukemia progression [43]. Indeed, RT-qPCR analysis confirmed downregulation of SYK in shFLI1 and shUBASH3B cells (Fig. 6G, H). Treatment of HEL cells with the SYK inhibitor R406 [44] significantly suppressed growth in culture (Fig. 6I). These results suggest that UBASH3B may partially exert its oncogenic activity by suppressing AP1 and activating other oncogenic factors.
Fig. 3 FLI1 interacts with the UBASH3A promoter and reduces its expression. A The genomic structure of the human UBASH3A promoter and its sub-derivatives UBASH3A P1 and UBASH3A P2, as well as their derivative mutant DNA, subcloned into the pGL3-basic luciferase reporter plasmid. B The sequence of the UBASH3A promoter and its potential FLI1 binding site. C HEK293T cells were co-transfected with the UBASH3A P1/P2 and mutant (UBASH3A P2-mut) luciferase vectors and either MigR1-Fli1 or the control plasmid MigR1. Luciferase activity was determined as described in Materials and Methods. D ENCODE data showing the binding of GATA2 to the UBASH3A promoter region. Maximum fold change: 33.3354
HSPA1B suppression by UBASH3A accelerates leukemia cell proliferation
Interestingly, expression of both Heat Shock Protein Family A (Hsp70) Member 1A (HSPA1A) and 1B (HSPA1B) increased in shUBASH3A and shUBASH3B cells relative to controls (Fig. 5F). The induction of HSPA1B in shUBASH3A1 and shUBASH3B cells was confirmed by RT-qPCR (Fig. 7A and B). Likewise, HSPA1B expression was significantly induced in shFLI1 cells (Fig. 7C, D and Supplemental Fig. 4B). Moreover, overexpression of FLI1 in K562 (K562-fli1) cells resulted in downregulation of HSPA1B (Supplemental Fig. 6A and D), suggesting a tumor suppressor role for this gene. To determine whether HSPA1B is involved in UBASH3A/B-mediated tumor suppression, HSPA1B was knocked down in HEL cells using three shRNAs (shHSPA1B1-3; Fig. 7E). The proliferation of shHSPA1B3 cells was significantly higher than that of scrambled control cells (Fig. 7F). Thus, HSPA1B may mediate the suppressive activity of FLI1. Since UBASH3A is induced in shFLI1 cells, the level of HSPA1B is expected to be lower, causing cell growth acceleration. In contrast, lower UBASH3B expression in shFLI1 cells caused higher expression of HSPA1B, leading to growth deceleration.
In the schematic in Fig. 7G, we propose that the oncogenic activity of FLI1 through UBASH3B activation may be partly mediated through AP1 suppression in erythroleukemic cells. Previously, we showed that UBASH3B upregulation increases PKCδ degradation, which increased drug resistance and leukemia cell survival [18]. UBASH3B also activates the oncogene SYK to promote leukemia growth. Since HSPA1B is negatively regulated by both UBASH3A and UBASH3B, its tumor suppressor activity depends upon the balance between the levels of these UBASH3 proteins, which are negatively and positively regulated by FLI1, respectively. The balance between the oncogenic and tumor suppressor activities of UBASH3B and UBASH3A, respectively, likely contributes to FLI1-induced leukemia cell proliferation.
Correlation between FLI1 and UBASH3A/B gene expression in other malignancies and prognostic impact
The aforementioned results demonstrated a positive and a negative correlation between FLI1 and the UBASH3B and UBASH3A genes in erythroleukemia cell lines, respectively. To examine a broader role of these UBASH genes in cancer, we examined the correlation between FLI1 and UBASH3A or UBASH3B in the TCGA database using GEPIA2. In most tumors, expression analysis revealed a higher level of UBASH3B versus normal samples (Fig. 8A). In Acute Myeloid Leukemia (AML) and whole blood cells, the expression of FLI1 was significantly correlated with the level of UBASH3B (Fig. 8B, C). Higher UBASH3B in AML, pancreatic adenocarcinoma, brain lower-grade glioma and lung squamous cell carcinoma was also correlated with worse prognosis (Fig. 8D, E and Supplementary Fig. 7A-C). These results further support the oncogenic function of UBASH3B in different tumors.
Positive and negative correlations between FLI1 and UBASH3A were observed in various tumors (Fig. 9A). Interestingly, a positive correlation between FLI1 and UBASH3A was seen in AML and whole blood cells (Fig. 9B and C). However, higher expression of UBASH3A was associated with a better prognosis in diffuse large B-cell lymphoma, breast invasive carcinoma, colon adenocarcinoma, head and neck squamous cell carcinoma, liver hepatocellular carcinoma and skin cutaneous melanoma (Fig. 9D and Supplementary Fig. 8A-E). Thymoma was the only tumor in which higher UBASH3A was significantly associated with worse patient outcome (Fig. 9E). These results suggest a tumor-type-dependent suppressor function for UBASH3A.
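The prognostic comparisons above are of the kind produced by GEPIA2's survival module: patients are split at the median expression of a gene and the two arms are compared with a log-rank test. A hedged sketch using the lifelines package is shown below; the input file and the column names ('expr', 'os_months', 'event') are assumptions for illustration.

```python
# Median-split survival comparison (sketch; column names are assumed).
import pandas as pd
from lifelines.statistics import logrank_test

def compare_by_median_expression(df: pd.DataFrame) -> float:
    """Split a cohort at median expression and return the log-rank p-value."""
    high = df[df["expr"] >= df["expr"].median()]
    low = df[df["expr"] < df["expr"].median()]
    result = logrank_test(
        high["os_months"], low["os_months"],
        event_observed_A=high["event"], event_observed_B=low["event"],
    )
    return result.p_value

# df = pd.read_csv("tcga_laml_ubash3b.csv")  # hypothetical cohort export
# print(f"log-rank p = {compare_by_median_expression(df):.4f}")
```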
Discussion
The ETS oncogene FLI1 is a major driver of tumor initiation and progression in diverse types of malignancies [38]. FLI1-regulated genes have been identified that control various cancer hallmarks including cell proliferation, differentiation, apoptosis, genomic stability, and immunity [37]. The combined effect of these downstream effectors contributes to the robust oncogenic activity associated with FLI1 overexpression. Herein, we show that both UBASH3A and UBASH3B are strong downstream targets of FLI1. UBASH3B was found to be a direct target of FLI1, and its activation promotes erythroleukemia growth. In contrast, UBASH3A is indirectly downregulated by FLI1 through GATA2 or possibly other transcription factors and likely acts as an inhibitor of erythroleukemic cell proliferation. RNAseq analysis identified distinct and overlapping downstream pathways for UBASH3A and UBASH3B that likely contribute to their suppressive and oncogenic activities, respectively. This study provides novel insights into the role of these factors in leukemia progression.
In Acute Myeloid Leukemia (AML) induced by the oncogene AML-ETO, UBASH3B inactivates CBL, which is predicted to inhibit the ubiquitination of its downstream effectors responsible for leukemogenesis [16]. Similarly, in triple-negative breast cancer, higher expression of UBASH3B promotes dephosphorylation and inactivation of CBL, which in turn loses the ability to ubiquitinate and induce degradation of the epidermal growth factor receptor (EGFR), leading to accelerated cancer progression [17]. We also previously identified PKCδ as one of the downstream targets of UBASH3B [18]. Interaction between UBASH3B and PKCδ accelerated ubiquitination of this kinase, resulting in leukemia cell survival and drug resistance. Moreover, a positive correlation between FLI1 and UBASH3B was observed in several cancer types and was associated with worse prognosis. These results confirm the oncogenic activity of UBASH3B in erythroleukemia and likely other cancers.
In a previous study [45], we reported regulation of FOS and JUN by FLI1 in leukemic cells. Herein, we showed that loss of FLI1, and consequently of its downstream target UBASH3B, in leukemia cells increased AP1 expression, leading to proliferation suppression and increased apoptosis. While AP1 is shown here to function as a tumor suppressor downstream of UBASH3B, this transcription factor is also known to function as an oncogene in various cancers [46]. Like TGF-β signaling, the AP1 function in cancer can go both ways [47]. In our study, AP1 (FOS and JUN) expression is negatively regulated during leukemia progression. Indeed, JUNB and JUN are critical downstream effectors of the tumor suppressor activity of another ETS family gene, SPI1/PU.1, and reduced expression of JUNB has been shown to be a common feature of acute myeloid leukemogenesis [48]. Since FLI1 knockdown or overexpressing cells exhibit increased or decreased expression of the AP1 genes, respectively [45], we propose a tumor suppressor role for AP1 in erythroleukemia. In addition to AP1, we identified the activation of the SYK gene by FLI1 through UBASH3B. Dephosphorylation of SYK and ZAP-70 by UBASH3B, two main factors involved in TCR signaling, was previously reported [3,14,15]. However, SYK kinase activation is also implicated in leukemia progression [43]. Thus, SYK activation likely contributes to the oncogenic activity of FLI1 through UBASH3B. The mechanisms by which UBASH3B suppresses AP1 transcription and activates SYK have yet to be determined. However, the interaction between UBASH3B and CBL or the downregulation of PKCδ may modify FOS/JUN and SYK regulation. This notion remains to be investigated in future studies.
Despite its critical involvement in autoimmunity, the connection between UBASH3A and cancer has not yet been established. In contrast to UBASH3B, knockdown of FLI1 in erythroleukemia cells upregulates UBASH3A expression, raising the possibility of a tumor suppressor function for this gene. In support of this observation, ablation of UBASH3A in high-FLI1-expressing erythroleukemic cells significantly accelerated cell proliferation in culture. Interestingly, UBASH3A expression was both induced and reduced relative to normal cells in various cancers. However, higher expression of UBASH3A was found to be a good prognostic marker for patient survival in most tumors, further supporting its anti-cancer activity. FLI1 indirectly controls the transcription of UBASH3A, likely through GATA2, which may warrant further investigation in future studies.
RNAseq analysis of UBASH3A and UBASH3B knocked-down cells revealed the strongest effects on the MAP Kinase pathway. Specifically, expression of HSPA1A and HSPA1B increased in both shUBASH3B and shUBASH3A cells. Knockdown of HSPA1B in leukemia cells accelerated leukemogenesis, indicating a role for these genes as negative regulators of leukemic cell growth. Interestingly, higher HSPA1A and HSPA1B expression was previously linked to poor survival in colon cancer. In hepatocellular carcinoma (HCC), expression of HSPA1B increased through Hepatitis B virus-mediated activation of ATF7, which accelerated cell proliferation by inhibiting apoptosis [49]. In contrast to solid tumors, the data presented herein suggest an inhibitory role for HSPA1B in leukemia progression, whose expression depends upon the levels of UBASH3A and UBASH3B.
Finally, UBASH3A and UBASH3B knockdown affected shared as well as unique genes, as shown here for AP1, SYK and HSPA1B. Thus, the combined oncogenic and tumor suppressor activities of UBASH3A and UBASH3B and their downstream effectors influence leukemogenesis. Examining other genes regulated by UBASH3A and UBASH3B could further define their role in leukemogenesis and uncover additional therapeutic targets.
Conclusions
FLI1 is shown in this study to promote erythroleukemia progression by inhibiting UBASH3A expression and inducing UBASH3B expression. UBASH3B acts as an oncogene, whereas UBASH3A acts as a tumor suppressor, downstream of FLI1.
Table 2 ShRNA sequences
shUBASH3A1: TCA TTG CAA TTT CAA GAG AAT TGC AAT GAT CAT GCA GCT TTTTT
shUBASH3A2: GGG ATC AAA GAC TTT GAA ATT CAA GAG ATT TCA AAG TCT TTG ATC CCT TTTTT
shUBASH3A3: CGA GTG GAA CCT GGA ATC TTT CAA GAA AAG ATT CCA GGT TCC ACT CGT TTTTT
shHSPA1B1: GCT GAC CAA GAT GAA GGA GAT TTC AAG AGA ATC TCC TTC ATC TTG GTC AGC TTT TTT
shHSPA1B2: GCG CAA CGT GCT CAT CTT TGT TCA AGA GAC AAA GAT GAG CAC GTT GCG CTT TTT T
shHSPA1B3: GGG CCA TGA CGA AAG ACA ATT CAA GAG ATT GTC TTT CGT CAT GGC CCT TTTTT
shUBASH3B1: GCG GCA GTA TGA AGA TCA AGG TTC AAG AGA CCT TGA TCT TCA TAC TGC CGC TTT TTT
shUBASH3B2: GGT GAA GCC TTG TTA GAA AGT TTC AAG AGA ACT TTC TAA CAA GGC TTC ACC TTT TTT
shUBASH3B3: GCG TTC AGA CTG CAC ATA ATA TTC AAG AGA TAT TAT GTG CAG TCT GAA CGC TTT TTT
shUBASH3B4: GGA TAC CTC CAT CAG AGT TAG TTC AAG AGA CTA ACT CTG ATG GAG GTA TCC TTT TTT
Scrambled: TTC TCC GAA CGT GTC ACG TTT CAA GAG AAC GTG ACA CGT TCG GAG AAT TTTTT
Fig. 1 FLI1 regulates UBASH3A and UBASH3B transcription in leukemic cells. A Heatmap of UBASH3A and UBASH3B expression following FLI1 knockdown (shFLI1) versus control leukemia cells. B-D RT-qPCR analysis of the expression of FLI1 (B), UBASH3B (C), and UBASH3A (D) in shFLI1 cells versus scrambled control leukemic cells. E Western blot analysis for FLI1, UBASH3A, and UBASH3B compared to the loading control GAPDH in shFLI1 versus scrambled control HEL cells. P < 0.001 (***). Relative density (Rd) determined by densitometer is shown. The full-length blots/gels for Fig. 1E are presented in Supplementary Fig. 9
Fig. 2 FLI1 binds to the UBASH3B promoter and activates its expression. A The genomic structure of the UBASH3B promoter and its indicated derivatives UBASH3B P1, UBASH3B P2, and UBASH3B P1-mut, which were subcloned upstream of the luciferase gene in the pGL3 reporter plasmid. B The UBASH3B promoter sequence and its potential FLI1 binding site. C Luciferase activity in HEK293T cells transfected with the UBASH3B P1/P2 and UBASH3B P1-mut luciferase vectors together with either the FLI1 expression vector MigR1-Fli1 or the control plasmid MigR1. D Chromatin immunoprecipitation (ChIP) analysis of the human UBASH3B promoter in HEL erythroleukemic cells for binding to FLI1 by RT-qPCR (top panel). The lower panel shows the gel image for the immunoprecipitated PCR-amplified band relative to the input. P < 0.0001 (****). The full-length gel for Fig. 2D is presented in Supplementary Fig. 10
Fig. 4 Control of cell proliferation by UBASH3A and UBASH3B. A, B The expression of UBASH3B by RT-qPCR (A) and western blot (B) in shUBASH3B and scrambled control cells. C The cell proliferation rate of shUBASH3B versus scrambled control cells. D Expression of UBASH3A in lentivirus-transduced shUBASH3A1-A3 cells by RT-qPCR. E UBASH3A levels in shUBASH3A1 cells by western blot. F The cell proliferation rate for shUBASH3A1 versus the scrambled control. G Knockdown of UBASH3A in shUBASH3B cells via siRNA (siUBASH3A1-siUBASH3A4), as detected via RT-qPCR. H The proliferation of shUBASH3B cells after treatment with siUBASH3A4. P < 0.05 (*), P < 0.01 (**), P < 0.001 (***), and P < 0.0001 (****). The full-length blots/gels for Fig. 4B and 4E are presented in Supplementary Fig. 11
Fig. 5 Regulation of the MAP Kinase pathway via UBASH3A and UBASH3B. A Compared to scrambled controls, many genes were upregulated or downregulated in shFLI1, shUBASH3A1 and shUBASH3B cells. B, C KEGG pathway enrichment analysis for shUBASH3A (B) and shUBASH3B (C) cells. D KEGG pathway enrichment analysis for DEGs commonly affected by both the UBASH3A and UBASH3B genes belonging to the MAP Kinase pathway. E Number of common or unique DEGs in shFLI1, shUBASH3A and shUBASH3B cells. F Heatmap showing the differentially expressed MAP Kinase genes in shUBASH3A1 and shUBASH3B cells
Fig. 6 AP1/SYK are regulated by UBASH3B. A-C Expression of UBASH3B (A), JUN (B), and FOS (C) was assessed by RT-qPCR in shUBASH3B cells. D The proliferation of shUBASH3B cells treated with the selective AP1 inhibitor T5224 (10 μM) compared to vehicle-treated (DMSO) cells. E HEL cells were treated with 10 nM camptothecin (CPT) (a FLI1 inhibitor) in combination with either DMSO or T5224 for 24 h; apoptosis was measured using flow cytometry. F The data are presented as the average from three experiments. G, H The expression of SYK in shFLI1 (G) and shUBASH3B (H) versus control cells, via RT-qPCR. I The proliferation of HEL cells treated with the SYK inhibitor R406 compared to vehicle-treated (DMSO) cells. P < 0.05 (*), P < 0.01 (**), P < 0.001 (***), and P < 0.0001 (****)
Fig. 7 Negative regulation of HSPA1B by UBASH3A controls cell proliferation. A, B Expression of HSPA1B in shUBASH3B (A) and shUBASH3A1 (B) cells, via RT-qPCR. C, D Expression of FLI1 (C) and HSPA1B (D) in shFLI1 cells via RT-qPCR. E Lentivirus-mediated downregulation of HSPA1B in HEL cells using the shHSPA1B1-3 expression vectors, as determined via RT-qPCR. F The proliferation of shHSPA1B3 and scrambled control cells over the indicated days was assessed using an MTT assay. P < 0.01 (**), P < 0.001 (***). G Model showing the effect of FLI1 on UBASH3A and UBASH3B expression as well as erythroleukemia progression. UBASH3B induction via FLI1 overexpression suppresses PKCδ and increases cell survival as well as drug resistance. Higher UBASH3B transcription following FLI1 overexpression also causes inhibition of AP1, which would otherwise suppress leukemia progression. In addition, UBASH3B controls the expression of SYK and partially contributes to erythroleukemia progression. Suppression of UBASH3A transcription via FLI1 overexpression increases the expression of the leukemia growth suppressor HSPA1B, which blocks proliferation. On the other hand, activation of UBASH3B by FLI1 further decreases HSPA1B expression, causing acceleration of cell proliferation. Dotted lines represent indirect regulation
Table 1 Primer sequences used for RT-qPCR
Table 3 SiRNA sequences
7,142.8
2024-03-09T00:00:00.000
[ "Medicine", "Biology" ]
Datasets on the statistical and algebraic properties of primitive Pythagorean triples
The data in this article was obtained from the algebraic and statistical analysis of the first 331 primitive Pythagorean triples. The ordered sample is a subset of the larger set of Pythagorean triples. A Pythagorean triple consists of three integers a, b and c such that a² + b² = c². A primitive Pythagorean triple is one in which the greatest common divisor is 1, that is, gcd(a, b, c) = 1; a, b and c are coprime, and in fact pairwise coprime. The dataset describes the various algebraic and statistical manipulations of the integers a, b and c that constitute the primitive Pythagorean triples. The correlation between the integers at each analysis was included. The data analysis of the non-normal nature of the integers was also included in this article. The data is open to criticism, adaptation and detailed extended analysis. All the data are in this data article.
Value of the data
The data provides the descriptive statistics of the primitive Pythagorean triples.
The data, when completely analyzed, can provide insight into the various patterns that characterize the primitive Pythagorean triples.
The data analysis can be applied to other known numbers, that is, the study of the probability distribution of numbers.
The data can provide more clues on the normal or non-normal nature of similar numbers.
Data
The data in this article is a description of some observed algebraic and statistical properties of the integers that constitute the primitive Pythagorean triples. Correlation between the pairs of the integers was investigated, and relationships of different nature and strength were obtained. Line plots were used to visualize the patterns of distribution of variability of the integers. The detailed description and the contents of the data are contained in different subsections.
The descriptive statistics of the integers a, b and c
The descriptive statistics and the differences between the ordered pairs of the integers that make up the primitive Pythagorean triples can be accessed as Supplementary Data 1. Scatter plots of the three positive integers and the differences between each pair that constitute the primitive Pythagorean triples, together with the mean plots, are shown in Supplementary Data 2. The mean is monotone increasing. Variance is the measure of variability or deviation from the mean or median. The line plots of the variance and skewness of the primitive Pythagorean triples are shown in Supplementary Data 3. The variance increases as the ordered sample size increases. Different types of correlation coefficients for the integers a, b and c of the primitive Pythagorean triples were obtained and are shown in Table 1.
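Although the article does not specify how its sample was generated, Euclid's parameterization (a = m² − n², b = 2mn, c = m² + n², with m > n ≥ 1, gcd(m, n) = 1 and m, n of opposite parity) is the standard way to enumerate primitive triples and reproduces an ordered sample of this kind. A minimal sketch:

```python
# Enumerate primitive Pythagorean triples with c <= limit_c via Euclid's formula.
from math import gcd

def primitive_triples(limit_c: int):
    """All primitive triples (a, b, c) with c <= limit_c, sorted by c."""
    triples = []
    m = 2
    while m * m + 1 <= limit_c:          # smallest c for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit_c:
                    triples.append((min(a, b), max(a, b), c))
        m += 1
    return sorted(triples, key=lambda t: t[2])

print(primitive_triples(50))
# [(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), (20, 21, 29), ...]
```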
There are strong positive correlations between b and c, and moderate positive correlations between a and b, and between a and c. Different types of correlation coefficients for the differences (b−a, c−b and c−a) of the primitive Pythagorean triples were obtained and are shown in Table 2. The quantities (b−a) and (c−b) are negatively correlated: an increase in one is associated with a decrease in the other. However, (c−a) and (b−a) are strongly positively correlated.
The trigonometric integers of the primitive Pythagorean triples
The trigonometric aspects of the integers a, b and c that constitute the primitive Pythagorean triples were considered. The details are shown in Supplementary Data 4. The summary of scatter plots of the sine, cosine and tangent of a, b and c is shown in Supplementary Data 5. Different types of correlation coefficients for the trigonometric values of the integers a, b and c of the primitive Pythagorean triples were obtained and are shown in Tables 3-5. Weak correlations were the result.
Table 2 Correlation coefficients of b−a, c−b and c−a.
Table 3 Correlation coefficients of sine a, sine b and sine c.
Table 4 Correlation coefficients of cosine a, cosine b and cosine c.
The logarithmic and exponential transformations of the integers of the primitive Pythagorean triples
The logarithmic and exponential aspects of the integers a, b and c that constitute the primitive Pythagorean triples were considered. The details are shown in Supplementary Data 8. The summary of scatter plots of the log, natural log and exponential of the inverse of a, b and c is shown in Supplementary Data 9. Different types of correlation coefficients for the logarithmic, natural log and exponential values of the integers a, b and c of the primitive Pythagorean triples were obtained and are shown in Tables 9-11. Strong positive correlations were the result.
Table 9 Correlation coefficients of log a, log b and log c.
The digital sum and digital root (iterative digit sum) of the integers of the primitive Pythagorean triples
The digital sum and iterative digit sum of the integers that constitute the primitive Pythagorean triples were considered. The details are shown in Supplementary Data 10. The summary of scatter plots of the digital sum and iterative digit sum of a, b and c is shown in Supplementary Data 11. Different types of correlation coefficients for the digital sum and iterative digit sum values of the integers a, b and c of the primitive Pythagorean triples were obtained and are shown in Tables 12 and 13. Weak correlations are the main result here.
Test of normality for a, b and c
Normality tests are conducted to show how well the given data is fitted by a normal distribution and how likely it is that the random variables underlying the given data are normally distributed. The data was subjected to some frequentist tests and the results are shown in Tables 14-16. The null hypothesis implies normality while the alternative implies otherwise.
Descriptive statistics
The mean, skewness, range and variance were obtained for the first 331 terms of the sequence. The same statistics were obtained for the trigonometric, hyperbolic, logarithmic, natural logarithmic, exponential, digital root and iterative digit sum transformations of the integers. Different data was obtained for each of the processes. The descriptive analysis of the digital sum and iterative digit sum can be obtained from the analysis. A similar pattern of digit-sum analysis can be seen in [13-16]. In addition, the algebraic properties were also analyzed.
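The digital sum and digital root used above are straightforward to compute; the sketch below applies both to the members of the primitive triple (693, 1924, 2045), chosen here only as an example.

```python
# Digital sum and digital root (iterative digit sum) of an integer.
def digital_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def digital_root(n: int) -> int:
    """Iterated digit sum; for n >= 1 this equals 1 + (n - 1) % 9."""
    while n >= 10:
        n = digital_sum(n)
    return n

for x in (693, 1924, 2045):  # a, b, c of one primitive triple
    print(x, digital_sum(x), digital_root(x))
```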
Correlation
Three different types of correlation coefficients were computed for all integers at each of the different processing steps. They are the Pearson product-moment correlation coefficient [17], Kendall's tau correlation coefficient [18] and the Spearman rank correlation coefficient [19]. In addition, three-dimensional scatter plots were obtained for all the differences between the integers that constitute the primitive Pythagorean triples.
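All three coefficients are available in SciPy, as the sketch below shows for the legs a and b of the first few primitive triples (toy input, not the article's full ordered sample of 331):

```python
# Pearson, Kendall and Spearman correlations on a small toy sample.
from scipy.stats import pearsonr, kendalltau, spearmanr

a = [3, 5, 8, 7, 20, 9, 12]      # shorter legs of the first few triples
b = [4, 12, 15, 24, 21, 40, 35]  # corresponding longer legs

for name, fn in (("Pearson", pearsonr), ("Kendall", kendalltau),
                 ("Spearman", spearmanr)):
    stat, p = fn(a, b)
    print(f"{name:8s} r = {stat:+.3f} (p = {p:.3f})")
```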
1,657.6
2017-09-01T00:00:00.000
[ "Mathematics" ]