Dynamical Properties of a Gene-Protein Model

A major limitation of the classical random Boolean network model of gene regulatory networks is its synchronous updating, which implies that all the proteins decay at the same rate. Here a model is discussed where the network is composed of two different sets of nodes, labelled G and P with reference to "genes" and "proteins". Each gene corresponds to a protein (the one it codes for), while several proteins can simultaneously affect the expression of a gene. Both kinds of nodes take Boolean values. If we look at the genes only, it is like adding some memory terms, so the new state of the gene subnetwork no longer depends upon its previous state only. In general, these terms tend to make the dynamics of the network more ordered than that of the corresponding memoryless network. The analysis here focuses mostly on dynamically critical states. It has been shown elsewhere that the usual way of computing the Derrida parameter, starting from purely random initial conditions, can be misleading in strongly non-ergodic systems. So here the effects of perturbations at both the genes' and the proteins' levels are analysed, using both the canonical Derrida procedure and an "extended" one, and the results are discussed. Moreover, the stability of attractors is also analysed, measured by counting the fraction of perturbations after which the system eventually falls back onto the initial attractor.

Some properties of RBNs are robust with respect to the updating strategy, but in general there is no guarantee that this is the case. In particular, one should be very careful when dealing with the networks' dynamical properties. We have been particularly interested in the response of genetic networks to perturbations like gene knock-out, and we have shown that, if the RBN model is chosen, the distribution of avalanches in gene expression levels in S. cerevisiae that follow a single knock-out provides information about the dynamical regime of the biological network [8,16]. This result is particularly relevant, given the importance of the "criticality hypothesis", which states that biological systems should preferentially be found in dynamically critical states [13]. If we are indeed interested in biological genetic networks, such issues should be addressed in a way that does not critically depend upon the unrealistic assumption of synchronicity: different updating schemes should be considered, privileging whenever possible those that are closer to what we know about the behaviour of real gene regulatory networks. In order to do so, while retaining the simplifications related to the use of Boolean variables and to the "generic" approach of RBNs, we introduced the GPBN model (Gene-Protein Boolean Network), where the network is composed of two different sets of nodes, labelled G and P with reference to "genes" and "proteins" [9][10][11]. It is now well established that proteins are not the only genetically encoded products which can influence the effective expression level of other genes (think for example of miRNAs [2,3]). However, in order to simplify the model description, we will call "proteins" all the products of gene activation that are able to influence the expression of other genes. Each gene corresponds to a protein (the one it codes for), while several proteins can simultaneously affect the expression of a gene.
Both kinds of nodes take Boolean values: the state at time t + 1 of a G node depends upon the state of a fixed set of P nodes at the same time, while the state at time t + 1 of a P node depends upon the state of its corresponding G node at time t. Once a P node is set active (its state is 1), it remains active for at least a fixed number of steps. If a new activation signal comes in before the protein decays, the counter is reset. If no activation signal arrives, the P node is set to 0 at the end of its "lifespan". If we look at the genes only, it is like adding some memory terms, so the new state of the network is no longer "Markovian", i.e. it no longer depends upon the previous state only. This model has been thoroughly studied and its properties have been described elsewhere [9,11]. In those papers the usual definition of dynamical criticality, based on the value of the so-called Derrida parameter, had been used. We have recently shown some limitations related to the use of that single measure to characterize critical states in RBNs [4]. In particular, the choice of a completely random initial state in the computation of the Derrida parameter has been criticized and a different measure (the "extended Derrida parameter") has been proposed [18]. This prompted a more thorough analysis of the dynamics of GPBNs, whose main features are presented in this paper. The paper is organized as follows: in Sect. 2 the GPBN model is described, while in Sect. 3 the measures of dynamical criticality are discussed and the extended Derrida parameter is introduced. In Sect. 4 the results obtained by simulating GPBNs are shown and discussed, paying particular attention to the similarities and differences between the "canonical" (i.e. standard) and the extended Derrida procedures. A different way to evaluate the robustness of the network behaviour, based upon perturbations of its dynamical attractors, is also presented. Critical discussion and suggestions for further research are summarized in Sect. 5.

The GPBN Model

A GPBN model [9][10][11] is a bipartite oriented graph containing two types of Boolean nodes: the G nodes, which represent the set of genes, and the P nodes, which represent the set of proteins (or, in general, gene products). A G node can be active or inactive (producing or not its protein), whereas a P node describes the presence (or absence) of a protein within the system. There are two types of links: synthesis links, which go from a G node to only one P node, and transcriptional regulation links, from a P node to one or more G nodes. As usual in RBNs, time evolves in discrete steps. Note that the state at time t + 1 of the GPBN model is determined by its state at time t, and the update is formally synchronous. However, due to the presence of the P nodes, the updating of the gene subnetwork is not synchronous, i.e. the states of the G nodes at time t + 1 are not determined by their states at the previous time step. Each G node, say the j-th, produces its protein when active (synthesis link), and is driven by the action of its k inputs (k being the number of its transcriptional regulation links, coming from P nodes), according to a fixed Boolean function f_j associated with it (f_j: {0,1}^k → {0,1}). The topology of the transcriptional links is random, and so is the choice of the Boolean functions: each f_j is generated by assigning at random, to each of its 2^k possible input configurations, an output equal to 1 with probability p (the so-called bias of the set of Boolean functions) and 0 otherwise.
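To make the update rules concrete, here is a minimal Python sketch of a random GPBN and one update step, following the description above; all names (make_gpbn, gpbn_step) and the array-based representation are illustrative choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_gpbn(n_nodes=100, k=2, p=0.5, mdt=3):
    """Random GPBN: each G node reads k P nodes through a random
    Boolean function with bias p; each P node has a decay time
    drawn uniformly in [1, mdt]."""
    inputs = np.array([rng.choice(n_nodes, size=k, replace=False)
                       for _ in range(n_nodes)])
    tables = (rng.random((n_nodes, 2 ** k)) < p).astype(np.int8)
    decay_times = rng.integers(1, mdt + 1, size=n_nodes)
    return inputs, tables, decay_times

def gpbn_step(g, phase, inputs, tables, decay_times):
    """One step: P nodes follow their G node (an active gene resets
    the decay counter, otherwise the counter is decremented), then
    each G node applies its Boolean function to its input P nodes."""
    phase = np.where(g == 1, decay_times, np.maximum(phase - 1, 0))
    p_state = (phase > 0).astype(np.int8)
    idx = (p_state[inputs] << np.arange(inputs.shape[1])).sum(axis=1)
    g = tables[np.arange(len(g)), idx]
    return g, phase

# Example: evolve a random initial state for 50 steps.
inputs, tables, decay_times = make_gpbn()
g = rng.integers(0, 2, 100).astype(np.int8)
phase = np.where(g == 1, decay_times, 0)   # an initialization choice
for _ in range(50):
    g, phase = gpbn_step(g, phase, inputs, tables, decay_times)
```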
To each P node, say the i-th, an integer non-negative variable h_i is also associated (its decay phase), which can change in time and which represents its residual lifetime. The maximum value of h_i is the decay time dt_i of node i, representing the lifespan of the protein once activated (i.e. just synthesized). When a P node is activated, its decay phase h_i takes the value dt_i and is later decreased by 1 at each time step, until it reaches 0 (unless the same node is activated again in that time interval). When the incoming G node is active, the corresponding P node resets its decay phase to the decay time. As long as the decay phase takes a nonzero value, the P node has a regulating role on its outgoing links (i.e. its value in the transition function is 1). The decay time of each node is taken randomly with uniform probability between 1 and a parameter defined as the maximum decay time (MDT); note that when MDT is equal to 1 the GPBN is identical to the corresponding RBN (i.e. the one with the same topology and the same activation functions). If the value of a G node is 1 at time t, then the value of the corresponding P node will be 1 at time t + 1 and its decay phase will be set to dt_i; otherwise the decay phase of the P node is decremented by one unit (when the decay phase reaches 0, the activation of the P node is set to 0). On the other hand, the value of the G node at time t is immediately determined by its function f_j, which depends on the states of its incoming P nodes at time t.

Dynamical Regimes

The asymptotic states of finite RBNs are periodic cycles; fixed points correspond to cycles with unitary period. Different dynamical regimes have been observed in RBNs [1,13,14], classified as disordered (sometimes called "chaotic", although all the attractors are indeed periodic), ordered or critical, depending upon the length of their periods and the sensitive dependence upon initial conditions. In chaotic networks the cycle length sharply increases with the network size, and nearby initial states are likely to lead to different attractors, while in ordered systems the typical cycle length shows a polynomial dependence upon the number of nodes, and basins of attraction are quite regular. Given the random nature of these systems, the analysis usually concerns families of networks built by keeping some parameters fixed, such as the number of nodes, the average number of connections per node and/or the average bias of the Boolean functions, while changing in different network realizations the topology of connections and the transition functions. Critical networks are those whose parameters lie on (or close to) the manifolds that separate regions in parameter space with ordered behaviours from the chaotic regions. It is important to stress that these terms refer to the typical features of networks with those parameters, while a single network realization can behave in a way very different from the typical one. Large deviations from typical behaviours can easily be found in critical networks [15]. The asymptotic dynamics can be identified by means of the so-called dynamical Derrida parameter λ [6,7], which measures the tendency of a temporary perturbation to vanish, to persist or to spread through the entire system: ordered, critical and chaotic dynamical regimes correspond respectively to λ < 1, λ ≈ 1 and λ > 1.
This parameter can be determined by analysing a plot of the average distance between two states at time t + 1 versus their distance at time t (the Derrida plot) and by looking at the slope of the tangent to the curve in the limit of small initial distances. Different (static) measures of the dynamical properties have also been proposed, based on an analysis of the properties of the set of Boolean functions rather than on actual simulations: they are discussed in depth in [18], alongside their relationships with the dynamical Derrida parameter described above, which is the only such measure considered in this paper. Another important remark raised in [18] concerns the dependence of the dynamical Derrida parameter on the set of initial conditions. The usual recipe is that of choosing a fully random initial state, and of considering the time behaviour of its perturbed states. While this is entirely reasonable in ergodic systems (where all accessible states are equiprobable over a long period of time), RBNs with a small number of connections per node are strongly non-ergodic [20], so it may easily happen that such purely random states are never encountered in the life of the cell modelled by the Boolean genetic network. It seems therefore physically much more appropriate to determine the dynamical Derrida parameter while limiting the set of allowed initial states only to those states that are the successors of some other states. The initial state can be found by starting the network simulation from a purely random state, letting it evolve for T_ev steps (T_ev ≥ 1) and choosing the state that has been reached as the initial state for computing the Derrida parameter. When the set of allowed initial states is limited in this way, we refer to an "extended Derrida approach", or to an "extended Derrida parameter", to distinguish it from the canonical one. Note also that different types of perturbations are possible: in GPBNs the initial perturbation could affect G nodes, P nodes, or both. In our approach a perturbation of a P node can correspond either (i) to an activity change from 0 to 1, with a decay phase h_i randomly chosen within the range [1, dt_i], or (ii) to an activity change from 1 to 0, with h_i = 0. A perturbation of a G node can correspond (i) to an activity change from 0 to 1, followed by the appropriate effect on the protein, or (ii) to an activity change from 1 to 0; in this case, the G node is not producing its protein, and the P node reduces its decay phase by one.

Results

It had already been observed in [9,11] that, as might be expected a priori, the presence of a memory term tends to make the dynamical behaviour "more ordered". This can be shown by comparing the behaviour of networks with MDT ≠ 1 with that of the corresponding networks with MDT = 1 (which are identical to the corresponding RBNs). The comparison can be made for different dynamical behaviours; in this paper we report results concerning networks that are critical if MDT = 1. Three sets of parameters, all corresponding to critical behaviours, will be discussed: [k = 2, p = 0.5], [k = 3, p = 0.21], [k = 3, p = 0.79]. Two different cases are chosen for k = 3 because in GPBNs the 0-1 symmetry of RBNs no longer holds. The stabilizing effect of memory can be seen in Fig. 1, where the number of different attractors versus the maximum decay time is shown to decrease sharply even with a short memory term [9].
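As an illustration of the procedures of Sect. 3, the following is a minimal sketch of a one-step Derrida measurement on the gene subnetwork, building on the make_gpbn/gpbn_step helpers of the earlier sketch; setting t_ev = 0 corresponds to the canonical procedure (fully random initial states), while t_ev ≥ 1 gives the extended one. The flip-one-G-node perturbation and the initialization of the decay phases are illustrative simplifications.

```python
import numpy as np

def derrida_lambda(inputs, tables, decay_times, t_ev=3, n_pairs=1000,
                   rng=np.random.default_rng(1)):
    """Estimate the Derrida parameter as the one-step growth factor of
    a minimal (one-node) perturbation, averaged over random trials."""
    n = len(decay_times)
    growth = 0.0
    for _ in range(n_pairs):
        g = rng.integers(0, 2, n).astype(np.int8)
        phase = np.where(g == 1, decay_times, 0)   # initialization choice
        for _ in range(t_ev):                      # evolve to an allowed state
            g, phase = gpbn_step(g, phase, inputs, tables, decay_times)
        g2, phase2 = g.copy(), phase.copy()
        g2[rng.integers(n)] ^= 1                   # flip a single G node
        g, phase = gpbn_step(g, phase, inputs, tables, decay_times)
        g2, phase2 = gpbn_step(g2, phase2, inputs, tables, decay_times)
        growth += np.count_nonzero(g != g2)        # Hamming distance after one step
    return growth / n_pairs                        # initial distance was 1 node
```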
Let us now turn to the dynamical regime, as determined by the Derrida procedure. As discussed in Sect. 3, perturbations can be performed either on G or on P nodes. Let us first consider the latter case. In all the simulations described below, the perturbations can be either up (i.e. setting to one the value of a P node which is 0) or down, depending on the unperturbed activity of the chosen P node. In each simulation series we create 50 different networks with 100 G-P node pairs, with 100 different initial conditions for each network. In order to allow an easier comparison between series, we take the decay time of each P node to be exactly equal to MDT.¹ In Fig. 2 the behaviour of the Derrida parameter for the critical case k = 2, p = 0.5 is shown; the two curves refer to the G-node and to the P-node subnetworks, subject to a P-node perturbation. Very large values of MDT have also been considered, and it is shown that the network remains critical notwithstanding the memory term. In Fig. 3 the same parameter is shown for the two cases with k = 3. While the G-node subnetwork remains critical, here the effect of the memory term on the P subnetwork is neither that of leaving it critical, nor that of always bringing it into the ordered region: this happens for the case with high bias, while the Derrida parameter becomes larger than one in the low-bias case.

¹ Subsequent simulation series, in which the decay time of each node is randomly chosen with uniform probability in [1, MDT], show that the main effect of this choice is that of slightly softening the shape of the curves, without altering their behaviour (data not shown).

This behaviour may seem surprising (but see the comments in Sect. 5); it is therefore interesting to consider also the extended Derrida parameter described in Sect. 3. The results are shown in Figs. 4 and 5 (in both cases T_ev = 3), where the two curves again refer to the G-node and to the P-node subnetworks, subject to a P-node perturbation. Note that, while the G subnetwork remains critical, the behaviour of the P subnetwork is different from that of the canonical Derrida parameter. In the k = 2 case it is more ordered (λ < 1 even for values of MDT slightly larger than 1), while it was critical in Fig. 2. In the k = 3, low-bias case the network is critical, while it was supercritical in Fig. 3. Only in the case of k = 3 with high bias are the two behaviours at least qualitatively the same. It should also be observed that the length of the time window T_ev may affect the outcomes: for example, by choosing it equal to one in the same case as that of Fig. 5 left, one would have concluded that the P subnetwork is slightly supercritical (data not shown here). In order to complete the description of the model behaviours, let us now consider the results that have been obtained by perturbing the gene subnetwork (recall that all the previous ones referred to perturbations of P nodes). As can be seen from Fig. 6 below, in all the cases both subnetworks are ordered even for values of MDT larger than 1. The dynamical regimes of GPBNs have been analysed so far by using canonical or modified Derrida methods, i.e. the discrete analogues of Lyapunov exponents.
A major interest concerns the robustness of networks of this kind; in order to characterize this property, a different measure, independent of T_ev or of any similar parameter, is given by the fraction of perturbations that, starting from an attractor cycle, end in the same attractor.

Fig. 6. Extended Derrida parameter vs maximum decay time for the cases k = 2 and p = 0.5, k = 3 and p = 0.21, k = 3 and p = 0.79. In all cases T_ev = 1. The curves refer to the G-node and to the P-node subnetworks, subject to a G-node perturbation.

These data are shown in Fig. 7. As expected, the fraction of perturbations that fall back onto the initial attractor decreases as the intensity of the perturbation increases. This fraction increases when a memory term is added and, as in the other cases described above, the effect is observed for small values of the maximum decay time, while further increases of MDT do not lead to any appreciable change.

Fig. 7. The fraction of perturbations that come back to the starting attractor as MDT is varied, when perturbing 1, 2, 5, 10, 15 or 20 P nodes. Each point is the average over 50 different systems with 100 G-P node pairs: in each system the attractors are identified by using 100 random initial conditions, and all states of the attractors so sampled are perturbed. In these experiments, we considered the same decay time for each P node.

Conclusion

The GPBN model of genetic regulatory systems maintains the abstraction level of the RBN framework and at the same time allows an explicit modelling of time-delay effects. It is of course extremely interesting to compare abstract-level models with real-world data. It has indeed been possible to show that RBNs can properly describe the distribution of perturbations in gene expression levels induced by single knock-outs in S. cerevisiae [15,16]. However, the techniques used for this purpose do not allow one to test the behaviour of the model when the perturbation affects several genes at the same time, a situation that is much more frequently encountered in experiments, like those related to the effects of drugs or contaminants. In these cases the comparison of model behaviour and experimental data should concern the time behaviour of the perturbation after the initial shock, but time-course data cannot be properly compared to RBNs because of their unrealistic synchronous updating. On the contrary, the introduction of memory terms in GPBNs should make it possible to deal also with time-course data following a multiple initial perturbation, thus greatly increasing the wealth of experimental data available for testing the appropriateness of the abstract framework. The kind of memory that has been introduced has different effects in the case of information transmission from G to P nodes or from P to G nodes, and poses some interesting questions about the correct way of measuring the system's dynamical regimes through Derrida-like procedures. In any case, the robustness of the system's attractors can constitute a sort of global measure related to its general "degree of order". In the future it will be interesting to analyse a Derrida parameter modified in a way different from those of Sect. 4, i.e. computed by allowing as initial states only those that belong to an attractor. In order to understand the behaviour of the GPBN model when P nodes are perturbed, it will be interesting to consider separately the effects of up and down perturbations. Indeed, the impacts of "up" and "down" perturbations of P nodes are likely to have different intensities.
The effect of a "down" perturbation, i.e. the disappearance of a protein, should typically die out quite rapidly, as the rest of the network resynthesizes that protein. On the other hand, the impact of an "up" perturbation is likely to last longer, i.e. for a number of steps equal to its decay phase. Investigating the effects of the two types of perturbations by canonical and modified Derrida parameters may therefore provide important clues about the properties of the model.
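As a complement, here is a minimal sketch of the attractor-robustness measure of Sect. 4, again building on the make_gpbn/gpbn_step helpers above; the attractor detection by state repetition, the single perturbed state per trial, and the up/down P-node perturbation are illustrative simplifications of the procedure described for Fig. 7.

```python
import numpy as np

def find_attractor(g, phase, inputs, tables, decay_times, t_max=10_000):
    """Iterate until a (G, phase) state repeats; return the cycle as a
    frozenset of byte-encoded states (None if not found within t_max)."""
    seen = {}
    for t in range(t_max):
        key = (g.tobytes(), phase.tobytes())
        if key in seen:
            return frozenset(list(seen)[seen[key]:])
        seen[key] = t
        g, phase = gpbn_step(g, phase, inputs, tables, decay_times)
    return None

def fraction_back(inputs, tables, decay_times, n_pert=100,
                  rng=np.random.default_rng(2)):
    """Fraction of single-P-node perturbations of an attractor state
    after which the system falls back onto the same attractor."""
    n = len(decay_times)
    g = rng.integers(0, 2, n).astype(np.int8)
    phase = np.where(g == 1, decay_times, 0)       # initialization choice
    att = find_attractor(g, phase, inputs, tables, decay_times)
    g_att, ph_att = next(iter(att))                # one sampled attractor state
    back = 0
    for _ in range(n_pert):
        g2 = np.frombuffer(g_att, dtype=np.int8).copy()
        ph2 = np.frombuffer(ph_att, dtype=phase.dtype).copy()
        j = rng.integers(n)
        if ph2[j] > 0:                             # "down": remove the protein
            ph2[j] = 0
        else:                                      # "up": random residual phase
            ph2[j] = rng.integers(1, decay_times[j] + 1)
        back += find_attractor(g2, ph2, inputs, tables, decay_times) == att
    return back / n_pert
```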
The role of miR-497-5p in myofibroblast differentiation of LR-MSCs and pulmonary fibrogenesis

Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive and fatal fibrotic lung disease characterized by profound changes in stem cell differentiation, epithelial cell phenotypes and fibroblast proliferation. In our study, we found that miR-497-5p was significantly upregulated both during myofibroblast differentiation of lung resident mesenchymal stem cells (LR-MSCs) and in the lung tissues of a pulmonary fibrosis model. In addition, as determined by luciferase assays and Western blot analysis, reversion-inducing cysteine-rich protein with kazal motifs (Reck) was identified to be one of the target genes of miR-497-5p, and Reck could suppress the expression of matrix metalloproteinase-2 (Mmp2) and Mmp9, which could activate latent transforming growth factor-β1 (TGF-β1). To test the potential therapeutic significance of this miRNA, we modulated the expression of miR-497-5p in LR-MSCs and relevant animal models. The results demonstrated that upregulation of miR-497-5p could induce LR-MSCs to differentiate into myofibroblasts and promote pulmonary fibrogenesis, while inhibition of its expression could effectively retard these processes. In conclusion, our work supports that controlling pulmonary fibrogenesis via inhibition of miR-497-5p expression may provide a potential therapeutic strategy for IPF.

microRNAs (miRNAs) are small noncoding RNAs that are directly associated with the developmental processes of cancer, diabetes, cardiovascular disease, and lung disease through regulating gene expression [15][16][17][18][19]. To date, a large number of miRNAs have been reported to play key roles in the development of IPF. Pandit et al. first reported that let-7d is mainly localized to the alveolar epithelium in normal lungs, but is significantly decreased in IPF lungs 15. miR-21, which was first identified as an oncogenic miRNA targeting many tumor suppressor genes 20, has been found to be highly increased in myofibroblasts, epithelial cells, as well as the cells surrounding the fibrotic foci of human IPF lungs 21,22. In our study, we observed that miR-497-5p was significantly upregulated during the myofibroblast differentiation of lung resident mesenchymal stem cells (LR-MSCs). We also verified that miR-497-5p could target the 3′-UTR of reversion-inducing cysteine-rich protein with kazal motifs (Reck), which could suppress Mmp2 and Mmp9 synthesis and secretion and impact ECM integrity 23,24. We further investigated whether this regulation was conserved in mice. In a bleomycin (BLM)-induced pulmonary fibrosis model, miR-497-5p was upregulated, and suppressing miR-497-5p expression using a lentiviral agent in vivo reduced the expression of the fibrotic markers Mmp2, Mmp9 and Tgfb1 via enhancing the expression of Reck, suggesting augmented in vivo lung repair. These data reveal a potential new therapeutic approach for IPF and an intimate interplay among miR-497-5p, Reck, MMPs, TGF-β1 and pulmonary fibrosis.

Reck is suppressed in TGF-β1-treated LR-MSCs and the lung tissues of a pulmonary fibrosis model. Reck is predicted to have a potential conserved binding site for miR-497-5p. We investigated the expression of Reck in TGF-β1-treated LR-MSCs and lung tissues administered with BLM. The results showed that Reck was dramatically suppressed (Fig. 2A and B), indicating a potential relationship between Reck and miR-497-5p.
miR-497-5p regulates the differentiation of LR-MSCs by targeting Reck. The seed sequence of miR-497-5p within the 3′-UTR sequence of mouse Reck was predicted to be a potential conserved binding site (Fig. 2C). To confirm whether miR-497-5p could regulate Reck expression, the Reck 3′-UTR was cloned into a luciferase reporter system (the GV306 vector). A Reck 3′-UTR mutant with mutations in the predicted miR-497-5p site was also cloned into a GV306 vector. The constructs were subsequently transfected into 293T cells with either LV-miR-497-5p or LV-NC. Co-transfection of LV-miR-497-5p with WT-Reck, but not with MUT-Reck, evidently diminished the normalized luciferase activity, indicating that miR-497-5p could bind to the 3′-UTR of Reck and suppress the transcription of luciferase (Fig. 2D). To further explore whether miR-497-5p upregulation was required for the myofibroblast differentiation of LR-MSCs, these cells were transfected with either LV-miR-497-5p or LV-NC, followed by culture for 7 days. As shown in Fig. 3A and B, upregulating miR-497-5p could downregulate the expression of Reck, resulting in increased protein levels of Mmp2, Mmp9 and Tgfb1. In addition, overexpression of miR-497-5p could also induce α-smooth muscle actin (Acta2) and collagen I (Col1a1) expression at both the mRNA and protein levels compared with LV-NC, along with increased protein levels of Vim (Fig. 3A and B). When LR-MSCs were transfected with either LV-miR-497-5p-inhibitor or LV-NC-inhibitor and cultured for 72 h, followed by treatment with TGF-β1 for another 7 days, downregulation of miR-497-5p in LR-MSCs could decrease TGF-β1-induced Acta2, Vim and Col1a1 expression by upregulating the expression of Reck, resulting in decreased levels of Mmp2, Mmp9 and Tgfb1 (Fig. 4A and B). We also confirmed these findings using immunofluorescence staining (Fig. 4C). These data suggested that miR-497-5p was sufficient to modulate the differentiation of LR-MSCs, and these cells play an important role in pulmonary fibrogenesis 26.

miR-497-5p induces myofibroblast differentiation of NIH/3T3 cells. As miR-497-5p was also increased in the lung tissues of the pulmonary fibrosis model, we also sought to illustrate whether miR-497-5p could induce myofibroblast differentiation of fibroblasts, as this process is believed to play a key role in the development of IPF. In NIH/3T3 cells, miR-497-5p could also impair Reck expression and augment the levels of Acta2 and Vim, suggesting the myofibroblast differentiation of NIH/3T3 cells (Fig. 5A). The migration assay showed that miR-497-5p could promote the migration of NIH/3T3 cells (Fig. 5B). In order to further confirm the regulation of Reck by miR-497-5p, we transfected NIH/3T3 cells with miR-497-5p and a mutated form of Reck that had a disrupted 3′-UTR sequence, followed by functional analysis. We found that such co-transfection suppressed the miR-497-5p-induced downregulation of Reck and upregulation of Mmp2, Mmp9 and Tgfb1, counteracting miR-497-5p-induced myofibroblast differentiation of NIH/3T3 cells, as evidenced by reduced levels of Acta2 and Col1a1 (Fig. 5C). We also reproduced this finding in LR-MSCs (Fig. 5D). Taken together, these data strongly suggest that the profibrotic effect of miR-497-5p was mediated by binding with its target gene Reck.

Discussion

Crucial new insights into the diagnosis, treatment, prognosis and pathogenesis of various human diseases, including tissue fibrosis, have been provided by genome-wide approaches to miRNA expression profiling 17,27,28.
The application of these approaches in IPF has demonstrated that let-7d, miR-21 and miR-199a-5p make critical contributions to pulmonary fibrosis 15,21,29. In our study, we explored the expression, regulation, and potential role of miRNAs in the myofibroblast differentiation of LR-MSCs. Among the significantly upregulated miRNAs, we focused on miR-497-5p and observed its high expression in the lung tissues of a pulmonary fibrosis model. miR-497-5p is one of the members of the miR-15/107 group with the seed sequence AGCAGC, which is an important determinant of target recognition 30. It has previously been reported that miR-497-5p is closely related with several cancers 31,32. However, the relationship between miR-497-5p and fibrosis remains unknown. In our investigation, we demonstrated by the luciferase assay that miR-497-5p could most likely bind to the 3′-UTR of Reck, strongly suggesting that Reck is a target gene of miR-497-5p. Additionally, we have demonstrated that upregulation of miR-497-5p induced myofibroblast differentiation of LR-MSCs and NIH/3T3 cells in vitro. We also found that the profibrotic function of miR-497-5p was mainly mediated through the activation of latent TGF-β1 anchored in the ECM by targeting the 3′-UTR of Reck, which negatively regulates at least three different Mmps, namely membrane-type 1 matrix metalloproteinase (MT1-Mmp), Mmp2 and Mmp9 (Fig. 8). Secreted Mmps (e.g. MT1-Mmp, Mmp2 and Mmp9) have been implicated in the release and activation of TGF-β 13,33,34, and may also be involved in the deregulation of the synthesis and degradation of ECM proteins, a process that leads to enlarged ECM deposition in fibrosis 11.

(Figure caption: treated as in Figs 3A and 4A, the expression of Acta2 and Col1a1 was measured by immunofluorescence; Acta2 and Col1a1 were revealed with secondary 594-labeled antibodies, and nuclei were revealed by DAPI staining.)

Pulmonary fibrosis is characterized by increased deposition of ECM and aberrant fibroblast proliferation 35. LR-MSCs undergo injury-induced phenotypic modulation to become α-SMA-positive myofibroblasts, which is a crucial step in the repair process that facilitates collagen secretion following lung injury. The collagen deposition and the proliferation of myofibroblasts at the site of injury result in scar formation, which helps to maintain alveolar structural integrity and function 36. Activation of the TGF-β1 pathway is a significant event in the fibrogenic response, as it contributes to myofibroblast differentiation of LR-MSCs and pulmonary fibroblasts, and triggers the synthesis of ECM proteins 37,38. In vivo, intratracheal inhalation of a miR-497-5p hairpin inhibitor was sufficient to induce the expression of Reck and decrease the release of activated TGF-β1 by suppressing Mmp2 and Mmp9 expression, which attenuated the fibrotic process. In contrast, upregulating the expression of miR-497-5p could induce the expression of Acta2 and Col1a1 and contribute to the phenotype of pulmonary fibrosis by activating the Mmps/TGF-β pathway. Moreover, as miRNAs can regulate more than one mRNA, some other miR-497-5p-predicted target genes, such as Smad7, a negative regulator of TGF-β signaling 39, may also be implicated in the phenotype of pulmonary fibrosis 40. Considerable data have suggested that miRNAs play key roles in the development of pulmonary fibrosis and may be explored as promising therapeutic targets for pulmonary fibrosis. For example, Liu et al.
demonstrated that miR-21 is overexpressed in the lungs of both mice with BLM-induced fibrosis and patients with IPF 21. Pandit et al. found that let-7d is downregulated in the lungs of IPF patients and that the number of epithelial cells that express let-7d is correlated with pulmonary function 15. In addition, miR-29, miR-31 and miR-200 each plays an anti-fibrotic role in the lungs [41][42][43]. However, prior to our study, no experimental evidence had shown that miR-497-5p could contribute to lung fibrosis. To our knowledge, our current work represents the first report arguing that administration of a miR-497-5p inhibitor may attenuate the fibrotic processes both in vitro and in vivo. Our data support that miR-497-5p-modifying drugs may have a therapeutic benefit for IPF.

Materials and Methods

miRNA microarray. Total RNA was isolated from LR-MSCs incubated with TGF-β1 (PeproTech, Rocky Hill, NJ) for 7 days using Trizol Reagent (Ambion, Foster, CA). miRNA profiling was performed using a low-density miRNA Taqman array service (Invitrogen, Shanghai, China). The miRNAs exhibiting an expression fold change (log2) greater than 1 or less than −1 were deemed to be differentially expressed. We validated a series of abundantly and differentially expressed miRNAs via quantitative reverse transcription polymerase chain reaction (Q-RTPCR).

Antibodies. Mouse monoclonal antibody against mouse β-actin (ab8277), mouse monoclonal antibody against mouse Mmp2 (ab86607), rabbit polyclonal antibody against mouse Mmp9 (ab38898), rabbit polyclonal antibody against mouse Acta2 (ab5694), rabbit monoclonal antibody against mouse Vim (ab92547), rabbit polyclonal antibody against mouse Col1a1 (ab34710), mouse monoclonal antibody against mouse Tgfb1 (ab27969), and rabbit polyclonal antibody against mouse Fn1 (ab2413) were purchased from Abcam (Cambridge, MA). Rabbit monoclonal antibody against mouse Reck (D8C7) was purchased from Cell Signaling Technology (Beverly, MA).

Cell culture. Mouse fibroblast cells (NIH/3T3) were obtained from the American Type Culture Collection (Manassas, VA). The cells, which were frozen down at an early passage, were cultured for a maximum of eight passages. The cells were maintained at 37 °C with 5% v/v CO2 in Dulbecco's modified Eagle's medium (DMEM, Life Technologies/Gibco, Grand Island, NY) supplemented with 10% fetal bovine serum (FBS, Gibco).

Luciferase assays. For luciferase assays, the sequence of the Reck 3′-UTR and the Reck 3′-UTRs in which the putative binding sites had been mutated were amplified with specific primers and verified by DNA sequencing. These gene fragments were then subcloned into the GV306 vector (GENECHEM) to generate the wild-type Reck plasmid (WT-Reck) and the mutant Reck plasmid (MUT-Reck). 293T cells were transiently co-transfected with 125 ng GV306 vector and 5 × 10^7 TU/ml LV-miR-497-5p or LV-NC lentiviral vector. Luciferase assays were performed 48 h later using the Dual-Luciferase Reporter System (Promega, Madison, WI). The Renilla and firefly luciferase signals were detected using a GloMax-Multi+ Detection System (Promega). The activity of the internal firefly luciferase was normalized by the Renilla luciferase activity.

Induction and treatment of pulmonary fibrosis. All animal procedures were conducted in accordance with humane animal care standards approved by the Nanjing University Ethics Committee (Nanjing, China), and the animals were maintained under specific pathogen-free conditions. The animals were acclimated to the environment for 1 week prior to treatment.
The mice were administered BLM (Nippon Kayaku, Tokyo, Japan) intratracheally at a dose of 5 mg/kg dissolved in a total of 50 μl sterile saline. The control group was similarly treated with 50 μl of sterile saline. The lentiviral vectors LV-miR-497-5p and LV-NC, as well as LV-miR-497-5p-inhibitor and LV-NC-inhibitor, were purchased from GENECHEM. Mice were administered the lentiviral vector intratracheally at a dose of 2 × 10^8 TU/ml diluted in sterile saline. Seven days later, the LV-miR-497-5p-inhibitor and LV-NC-inhibitor groups were administered BLM intratracheally at a dose of 5 mg/kg dissolved in a total of 50 μl sterile saline. The LV-miR-497-5p and LV-NC groups were similarly treated with 50 μl sterile saline. The mice were sacrificed for lung collection at day 14 after BLM administration (n = 6 for each time point).

(Fig. 8 caption fragment: miR-497-5p targets Reck, which leads to myofibroblast differentiation of lung resident mesenchymal stem cells and pulmonary fibrosis through activating latent TGF-β1.)

SDS-PAGE and immunoblotting. Briefly, whole cell or tissue lysates were separated on 12% SDS-polyacrylamide gels and transferred to a polyvinylidene fluoride (PVDF) membrane (Roche, Germany) by standard procedures. Membranes were blocked by incubation for 1 h with 5% non-fat milk in PBS containing 0.5% Tween-20 (PBST) and blotted with specific antibodies at 4 °C for 12 h. After three washes in PBST, the membranes were incubated with the secondary antibody at 37 °C for 1 h. Immunoreactive protein bands were detected using an Odyssey Scanning System (LI-COR, Lincoln, NE).

Q-RTPCR. For Q-RTPCR analysis of mRNAs and mature miRNAs, total RNA was extracted using Trizol Reagent (Ambion). An amount of 0.05 μg total RNA was reverse-transcribed using the Taqman MicroRNA Reverse Transcription Kit (Applied Biosystems, Foster, CA). Comparative quantitative PCR (Q-PCR) was performed in triplicate using Taqman Universal PCR Master Mix (Applied Biosystems) on the 7300 Real-Time PCR System (Applied Biosystems). Mature miR-497-5p probes were obtained from Applied Biosystems. Normalization was performed by using RNU6B probes (Applied Biosystems). Relative expression was calculated by using the comparative Ct (ΔΔCt) method. For analysis of mRNA, the HiScript 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China) was used for reverse transcription polymerase chain reaction (RT-PCR). Q-PCR was performed using the SYBR Green Q-PCR Kit (Roche, Germany). Specific primers for mRNAs are listed in Table 1. The Ct values were analyzed using the ΔΔCt method, and relative changes of mRNA levels were obtained by normalization to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) relative to the control.

Histopathology. The mouse lungs were inflated with a neutral buffered formalin solution overnight and embedded in paraffin before sectioning into 5 μm-thick slices. The sections were stained with hematoxylin-eosin (H&E) and Masson's trichrome to assess the degree of fibrosis.

Immunohistochemistry. Five μm-thick paraffin-embedded sections were deparaffinized with xylene (twice for 5 minutes each) before being rehydrated in water using an ethanol gradient. After washing with water, antigen retrieval was performed in a steamer using citrate buffer (pH 6.0, DAKO) for 20 minutes, and the samples were then cooled to room temperature. The sections were then washed with PBST, incubated with 3% H2O2 for 10 minutes and blocked with the avidin/biotin blocker and the serum-free blocking reagent.
The sections were subsequently incubated with mouse anti-Mmp2, rabbit anti-Mmp9, rabbit anti-Abcg2 or rabbit anti-Acta2 antibodies overnight at 4 °C. The DAB Substrate System (DAKO) was used to reveal the immunohistochemical staining.

Immunofluorescent staining. The immunofluorescence analysis was performed as described 44. Rabbit anti-Col1a1 and rabbit anti-Acta2 were employed as the primary antibodies. Alexa Fluor 594-conjugated goat anti-rabbit IgG (Invitrogen) was used as the secondary antibody. Nuclei were stained with 1 μg/ml DAPI (Sigma). The images were captured using a confocal fluorescence microscope (Olympus, Tokyo, Japan).

Statistical analysis. The data are presented as mean values ± SD. Differences were analyzed for significance (P < 0.05) by one-way ANOVA using SPSS for Windows version 11.0 (SPSS, Chicago, IL).

Supplementary files for the low-density miRNA Taqman array. The raw data of the miRNAs profiled in lung resident mesenchymal stem cells following TGF-β1-induced myofibroblast differentiation were uploaded as a supplementary file.
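The comparative Ct quantification used above follows the standard 2^(−ΔΔCt) calculation. A minimal sketch of that arithmetic is given below for reference, assuming the usual textbook definition (this is not code from the study, and the Ct values are purely illustrative):

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Comparative Ct (delta-delta Ct) method: expression of the target
    gene in a sample relative to a control, each normalized to a
    reference gene (e.g. GAPDH for mRNA, RNU6B for miRNA)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative values only: a target that amplifies 2 cycles earlier
# (after normalization) than in the control is ~4-fold upregulated.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```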
Accuracy of the non-relativistic approximation for momentum diffusion

Abstract. The accuracy of the non-relativistic approximation, which is calculated using the same parameter and the same initial ensemble of trajectories, to relativistic momentum diffusion at low speed is studied numerically for a prototypical nonlinear Hamiltonian system, the periodically delta-kicked particle. We find that if the initial ensemble is a non-localized semi-uniform ensemble, the non-relativistic approximation to the relativistic mean square momentum displacement is always accurate. However, if the initial ensemble is a localized Gaussian, the non-relativistic approximation may not always be accurate and the approximation can break down rapidly.

Introduction

Low-speed momentum diffusion in nonlinear Hamiltonian systems has been studied extensively [1][2][3][4][5][6][7][8][9][10][11][12] using non-relativistic, Newtonian mechanics. The statistical quantity that is typically used to study momentum diffusion is the mean square momentum displacement (MSMD) [1][2][3]7,10,11]. In previous studies [1][2][3] of momentum diffusion in the Newtonian standard map for the periodically delta-kicked particle, an initially non-localized semi-uniform ensemble of trajectories (where semi-uniform means that the initial positions are uniformly distributed but the initial momenta all have the same value) was typically used in the numerical calculation of the MSMD. These studies [2,3] of the Newtonian standard map have shown that, for parameters K where accelerator mode islands exist (these are stable regions in the chaotic sea [13] in which the particle accelerates continuously [13,14]), the MSMD has a power-law dependence on the kick number n: n^α, where 1 < α < 2, i.e., the diffusion is anomalous. In contrast, for parameters K where there is no accelerator mode island, the MSMD grows linearly [1][2][3], i.e., the diffusion is normal. Considerable effort has been made recently to understand anomalous diffusion in nonlinear Hamiltonian systems; see, for example, the article by Altmann and Kantz [15] and the review by Zaslavsky [16]. Recently, Matrasulov et al. [17] studied both low-speed (weak-relativistic) and high-speed (ultra-relativistic) momentum diffusion in the special-relativistic standard map for the periodically delta-kicked particle. However, a comparison of the Newtonian and special-relativistic predictions for low-speed momentum diffusion has not yet been done to ascertain if the special-relativistic prediction is always well approximated by the Newtonian prediction, as would be expected [18,19]. Such a comparison is important since Newtonian mechanics, instead of special-relativistic mechanics, is the standard theory used in practice to study low-speed momentum diffusion. In this paper, we compare the low-speed momentum diffusion predicted by the two theories, based on the same parameter and the same ensemble of initial conditions, for the periodically delta-kicked particle. In addition to the initially non-localized semi-uniform ensemble typically used in the Newtonian [1][2][3] and special-relativistic [17] calculations of momentum diffusion for the kicked particle, we also use an initially localized ensemble in our calculations for comparison. Details of the kicked particle and numerical calculations are given next, followed by the presentation and discussion of the results and the conclusion.
Methods

The periodically delta-kicked particle is a one-dimensional Hamiltonian system where the delta kicks are due to a sinusoidal potential which is periodically turned on for an instant. The Newtonian equations of motion for the periodically delta-kicked particle are easily integrated [1,20] to yield an exact mapping, known as the standard map, of the dimensionless scaled position X and dimensionless scaled momentum P from just before the (n − 1)-th kick to just before the n-th kick:

P_n = P_{n−1} + (K/2π) sin(2πX_{n−1}),  (1)
X_n = X_{n−1} + P_n (mod 1),  (2)

where n = 1, 2, . . ., and K is a dimensionless positive parameter. The transition from local to global chaos in phase space for the Newtonian standard map above occurs [21] at K = 0.971635 . . .. The special-relativistic equations of motion for the periodically delta-kicked particle are also easily integrated [22,23] to yield an exact mapping for the dimensionless scaled position X and dimensionless scaled momentum P from just before the (n − 1)-th kick to just before the n-th kick:

P_n = P_{n−1} + (K/2π) sin(2πX_{n−1}),  (3)
X_n = X_{n−1} + P_n/√(1 + β²P_n²) (mod 1),  (4)

where n = 1, 2, . . .. In addition to the parameter K, the relativistic standard map (eqs. (3) and (4)) has another dimensionless positive parameter, β. Since the particle speed satisfies

v/c = βP/√(1 + β²P²),  (5)

βP ≪ 1 implies v ≪ c (i.e., low speed), where v is the particle speed and c is the speed of light. At low speed, the relativistic standard map (eqs. (3) and (4)) is [24] approximately

P_n = P_{n−1} + (K/2π) sin(2πX_{n−1}),  (6)
X_n ≈ X_{n−1} + P_n(1 − β²P_n²/2) (mod 1),  (7)

which is close to the Newtonian standard map (eqs. (1) and (2)) since βP_n ≪ 1 in eq. (7). The mean square momentum displacement (MSMD) is defined [1][2][3]7,10,11] as ⟨(P_n − P_0)²⟩, where ⟨⋯⟩ is an average over an ensemble of trajectories. In our calculations, in addition to using an initially non-localized semi-uniform ensemble, we also use an initially localized ensemble where the initial positions and momenta are both Gaussian distributed with means X_0 and P_0, and standard deviations σ_X0 and σ_P0. In each theory, the MSMD is first calculated using 10^6 trajectories (each trajectory in the ensemble is time-evolved using the corresponding standard map, either Newtonian (eqs. (1) and (2)) or special-relativistic (eqs. (3) and (4))), where the degree of numerical accuracy is determined by comparing the 30-significant-figure calculation with the 35-significant-figure (quadruple precision) calculation. For example, if the former calculation yields 1.234 . . . and the latter yields 1.235 . . ., the 10^6 calculation is accurate to 1.23 (3 significant figures). The MSMD is then recalculated using 10^7 trajectories with the same degree-of-accuracy determination. Finally, the degree of accuracy of the MSMD is determined by comparing the 10^6 calculation with the 10^7 calculation. For example, if the 10^6 calculation is accurate to 1.23 and the 10^7 calculation is accurate to 1.24, the MSMD is accurate to 1.2 (2 significant figures). The Newtonian and special-relativistic MSMD are only compared after the degree of numerical accuracy of each MSMD has been determined by varying the numerical precision and the size of the ensemble in the manner described above. This method, which is a generalization of the standard numerical method [25] of establishing the degree of accuracy of a single trajectory, ensures that any conclusion resulting from the comparison of the Newtonian and special-relativistic MSMD is not due to numerical artifacts.

Results

In this section, we will present three examples to illustrate the general results of comparing the low-speed MSMD predicted by Newtonian and special-relativistic mechanics for the kicked particle.
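For concreteness, here is a minimal Python sketch of the two maps and the ensemble-averaged MSMD, assuming the scaled forms of eqs. (1)-(4) above; it uses ordinary double precision and a smaller ensemble than the 10^6-10^7 trajectories and 30-35 significant figures of the paper, so it only illustrates the procedure.

```python
import numpy as np

def newton_step(x, p, K):
    """Newtonian standard map (eqs. (1)-(2)), scaled units."""
    p = p + (K / (2 * np.pi)) * np.sin(2 * np.pi * x)
    x = (x + p) % 1.0
    return x, p

def rel_step(x, p, K, beta):
    """Special-relativistic standard map (eqs. (3)-(4))."""
    p = p + (K / (2 * np.pi)) * np.sin(2 * np.pi * x)
    x = (x + p / np.sqrt(1.0 + (beta * p) ** 2)) % 1.0
    return x, p

def msmd(step, x0, p0, n_kicks, **kw):
    """Mean square momentum displacement <(P_n - P_0)^2> vs kick n."""
    x, p = x0.copy(), p0.copy()
    out = []
    for _ in range(n_kicks):
        x, p = step(x, p, **kw)
        out.append(np.mean((p - p0) ** 2))
    return np.array(out)

rng = np.random.default_rng(0)
N = 10**5                         # smaller than the paper's 10^6-10^7
x0 = rng.random(N)                # semi-uniform ensemble:
p0 = np.full(N, 99.9)             # uniform X_0, fixed P_0
K, beta = 10.053, 1e-7
d_newton = msmd(newton_step, x0, p0, 100, K=K)
d_rel = msmd(rel_step, x0, p0, 100, K=K, beta=beta)
```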
In all the examples presented here, the parameter β in the relativistic standard map (eqs. (3) and (4)) is small, 10^−7, and so the mean particle speed is low [18], at most about 0.001% of the speed of light. If the initial ensemble is semi-uniformly distributed, where the initial positions X_0 are uniformly distributed between 0 and 1 and all initial momenta are P_0, then there is generally no breakdown of agreement between the Newtonian and special-relativistic MSMD, which grow either linearly or as a power law from the outset. An example (this is our first example) is given in fig. 1 for P_0 = 99.9 and K = 10.053 (an accelerator mode island does not exist [2] for this parameter), where the two MSMD grow linearly at close rates from the outset. In the second example, the parameter K is also 10.053, but the ensemble is initially Gaussian localized in phase space with means X_0 = 0.5 and P_0 = 99.9, and standard deviations σ_X0 = σ_P0 = 10^−12. Figure 2 shows that the Newtonian and special-relativistic predictions for the MSMD are very close and fluctuating for the first 10 kicks, but, from kick 11 onwards, the MSMD predicted by the two theories disagree with each other completely. For example, at kick 11, the Newtonian and special-relativistic MSMD are, respectively, 0.173781 (accurate to 6 significant figures) and 1.443340 (accurate to 7 significant figures), where the degrees of numerical accuracy were determined using the method described in the previous section. In the third example, the parameter K and the means of the initial Gaussian ensemble are the same as those in the second example, but the initial Gaussian ensemble is broader in both position and momentum, with σ_X0 = σ_P0 = 10^−7. In contrast to the result in the second example, fig. 3 shows that there is no breakdown of agreement between the Newtonian and special-relativistic MSMD in this case. The difference between the results for the second and third examples can be understood as follows. The Newtonian and special-relativistic position probability densities of the initially localized Gaussian ensemble are, as shown in [26], generally delocalized over the entire position interval from 0 to 1 when the position standard deviation reaches a saturation value of about 1/√12 ≈ 0.289, which is the standard deviation of a uniform position density in the interval 0 to 1. In the second example, the Newtonian and special-relativistic position probability densities are delocalized at kick 13 and kick 15, respectively. In the third example, the position probability densities are both delocalized earlier, at kick 8. In each theory, before the delocalization of the position probability density, the MSMD, ⟨P_n²⟩ − 2⟨P_n P_0⟩ + ⟨P_0²⟩, is approximately ⟨P_n⟩² − 2⟨P_n⟩⟨P_0⟩ + ⟨P_0⟩², since ⟨P_n P_0⟩ ≈ ⟨P_n⟩⟨P_0⟩ (and similarly for the squared terms). Moreover, the mean trajectory (⟨X_n⟩, ⟨P_n⟩) of the ensemble is well approximated by the central trajectory (X_n, P_n), that is, the single trajectory with the same initial conditions as the mean trajectory: X_0 = ⟨X_0⟩ and P_0 = ⟨P_0⟩. Hence, the MSMD in each theory is approximately given by the square momentum displacement (P_n − P_0)² of the central trajectory (see figs. 4 and 5 for, respectively, the second and third examples) before the position probability density is delocalized. In the second example, the breakdown of agreement between the Newtonian and special-relativistic MSMD at kick 11 (see fig.
2), before the delocalization of the position probability densities, is thus due to the breakdown of agreement between the Newtonian and special-relativistic central-trajectory square momentum displacements at kick 11, which is triggered by the breakdown of agreement between the Newtonian and special-relativistic central trajectories at kick 10. The breakdown of agreement between the central trajectories is not due to sensitivity to the system parameter or to the initial conditions of the central trajectories, since these are exactly the same in both theories; instead, it is due to the small difference involving v/c between the Newtonian map (eqs. (1) and (2)) and the special-relativistic map at low speed (see eqs. (6) and (7)). In contrast, in the third example, there is no breakdown of agreement between the Newtonian and special-relativistic MSMD because there is no breakdown of agreement between the Newtonian and special-relativistic central trajectories (the two central trajectories are the same as those in the second example, where the breakdown occurs at kick 10) to trigger it before the position probability densities are delocalized at kick 8. The second and third examples illustrate that the agreement between the Newtonian and special-relativistic MSMD breaks down after some time if the initial Gaussian ensemble is sufficiently localized in phase space such that the Newtonian and special-relativistic position probability densities are delocalized after the breakdown of agreement between the Newtonian and special-relativistic central trajectories. The breakdown of agreement between the two MSMD is triggered by the breakdown of agreement between the two central trajectories and occurs one kick after the agreement between the two central trajectories breaks down. The MSMD breakdown of agreement therefore occurs rapidly if the two central trajectories are chaotic, as the second example shows, but it would take a long time to occur if the two central trajectories are non-chaotic, because the difference between the two trajectories only grows, on average, linearly instead of exponentially [24]. In each theory, after the position probability density is delocalized, the behavior of the MSMD calculated using an initially Gaussian ensemble is generally similar to the behavior of the MSMD calculated using an initially semi-uniform ensemble for the same parameter K, which is either linear growth or power-law growth. The linear growth rates or power-law exponents of the former MSMD (based on the initially localized ensemble) and the latter MSMD (based on the initially non-localized ensemble) are close. In the second and third examples, where K = 10.053, the growth is linear (see the bottom plots in figs. 2 and 3). In the second example, although the Newtonian and special-relativistic MSMD both grow linearly at close rates after the delocalization of the position probability densities, they start at different values and therefore the two MSMD remain different from one another (see the bottom plot in fig. 2). On the other hand, in the third example, the two MSMD remain close after the delocalization of the position probability densities because they grow linearly at close rates from close values (see the bottom plot in fig. 3).

Conclusion

In summary, there is no breakdown of agreement between the Newtonian and special-relativistic MSMD at low speed if the two MSMD are calculated using an initially non-localized semi-uniform ensemble.
However, if an initially sufficiently localized Gaussian ensemble is used instead for the calculations, the agreement between the two MSMD breaks down after some time due, essentially, to the small difference between the Newtonian and special-relativistic maps at low speed. Since the small difference between the Newtonian and special-relativistic equations of motion at low speed is generic, we expect a similar breakdown of agreement between the Newtonian and special-relativistic predictions for low-speed momentum diffusion to occur in other nonlinear Hamiltonian systems. Therefore it should not be assumed that Newtonian calculations will always yield approximately the same results as special-relativistic mechanics.
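For reference, the localized-ensemble argument used in the second and third examples rests on the following elementary decomposition of the MSMD (restating the text above, with ⟨·⟩ the ensemble average; the approximations hold while the ensemble remains tightly localized):

```latex
\begin{aligned}
\langle (P_n - P_0)^2 \rangle
  &= \langle P_n^2 \rangle - 2\langle P_n P_0 \rangle + \langle P_0^2 \rangle \\
  &\approx \langle P_n \rangle^2 - 2\langle P_n \rangle \langle P_0 \rangle
     + \langle P_0 \rangle^2
   = \bigl( \langle P_n \rangle - \langle P_0 \rangle \bigr)^2
   \approx \bigl( P_n - P_0 \bigr)^2 ,
\end{aligned}
```

where the last expression is the square momentum displacement of the central trajectory, since the mean trajectory is then well approximated by the central one.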
On the Power-Splitting Relaying Protocol for SWIPT with Multiple UAVs in Downlink NOMA-IoT Networks

1. Introduction

Unmanned aerial vehicle (UAV) communications and non-orthogonal multiple access (NOMA) are envisioned as two key technologies for unlocking the potential of the fifth-generation (5G) and future networks [1,2]. In particular, the use of UAVs for 5G and beyond-5G networks has received much attention over the past few years [3]. Owing to their distinctive characteristics, UAVs have been adopted for a variety of wireless networks and communication applications, for example, UAV-carried flying base stations (BSs) for capacity and coverage improvement, public safety scenarios, information dissemination, UAV-based wireless backhaul, and cellular-connected UAVs as mobile users [4]. The NOMA technique exhibits benefits such as increased user data rate, massive connectivity, reduced end-to-end latency, improved fairness among users, high spectral efficiency (SE), and higher energy efficiency (EE) than traditional orthogonal multiple access (OMA) [5][6][7][8][9][10][11]. NOMA exploits two mechanisms: superposition coding (SC) and successive interference cancellation (SIC) [12,13]. Some recent studies have shown that the integration of NOMA into UAV networks can enhance the network performance in terms of max-min rate [14], sum rate [15], and mission completion time [16], over conventional OMA schemes. Two protocols are exploited for energy harvesting (EH) at relays, namely power-splitting relaying and time-switching relaying, in the decode-and-forward (DF) cooperative communication network [17,18]. NOMA in UAV-enabled cooperative amplify-and-forward (AF) relaying with an outdated relay selection algorithm is investigated in [19].

1.1. Related Works. In this section, we review recent studies on NOMA and UAV networks. UAV-assisted communication has become a potential application in industrial and academic fields. NOMA is one of the important candidates for integrating UAVs into 5G and future networks due to its overwhelming characteristics, such as superior spectral efficiency, low latency, and massive connectivity [2]. UAVs have also recently been studied as a promising solution, having proven their potential in civil applications such as aerial photography, enhanced freight distribution, and wildfire and disaster management [4,20]. A resource allocation scheme was proposed in NOMA UAV communication to enhance the transmission rate of users with worse channel state information (CSI) [21]. UAVs can act as moving BSs or relays to facilitate reliable and efficient communication with multiple users [22,23]. In [24], the authors discussed NOMA-based UAV-aided communications such as UAV-BS-enabled NOMA and NOMA-assisted cellular-connected UAVs. In [25], the authors conducted a complete performance analysis of a NOMA-aided UAV communication system, including selection combining double diversity receivers on UAVs, communicating over bivariate Rician shadowed fading channels. The works in [26,27] investigated the impact of residual hardware impairments on the performance of UAV-aided NOMA multiway relay networks by deriving approximate analytical expressions for the achievable sum rate. Specifically, the asymptotic analysis in the high signal-to-noise ratio (SNR) region is carried out by invoking the high-SNR slope and high-SNR power offset.
A similar analysis of residual hardware impairments in UAV-aided NOMA multiway relay networks was reported in [27]. In [28], the authors studied a 5G-based IoT architecture in which nodes access the 5G spectrum to transfer 5G and IoT information simultaneously: the 5G network can be used at the IoT nodes to transmit voice and video information, while the IoT network forwards sensing data. The work presented in [29] investigated a novel UAV relay-assisted IoT model with an emergency communication system that takes into account the latency requirements of Internet of Things (IoT) devices and the limited storage capacity of the UAV. A novel joint content-caching and EH scheme was studied to improve UAV communications in an IoT NOMA network wherein a UAV acts as an aerial relay serving users on demand [30]. In [31], the authors studied the throughput maximization problem with a focus on UAV-assisted wireless communication, considering a communication system with one source-destination pair, where a UAV serves as an aerial relay based on an AF scheme and EH using a PS protocol. The performance of an IoT system with an EH UAV-enabled relay under downlink NOMA over Nakagami-m fading, using the DF and AF schemes, is investigated in [32], where time-switching and adaptive power-splitting protocols are utilized at the UAV. Based on the above review, in this paper we focus on power-splitting relaying (PSR) for an RF-EH and AF-based multiple-UAV NOMA scheme in a SWIPT IoT system.

Motivation and Contribution. This paper investigates the combination of EH and AF-based multiple-UAV NOMA in a SWIPT IoT system, wherein a UAV serves as a rotary-wing relay communicating with two IoT devices (IDs). We also consider the UAV option (UAVO) scheme because it requires CSI knowledge of the one-hop links only. Hence, the use of an AF rotary-wing relay in the UAVO model is highly desirable in practice, where implementation complexity is the main concern. For the depicted system model, the outage probabilities (OPs), the system throughput, and the EE of the NOMA schemes in AF relaying systems with UAVO were studied. The major contributions of this paper are outlined as follows:
(i) We exploit NOMA access technology in a dual-hop network to improve the SE of the network.
(ii) A system model is studied that consists of a BS, N UAVs, and two IDs.
(iii) One SWIPT-based EH and information processing (IP) protocol, specifically BS-based PSR, is exploited at a UAV that serves as a rotary-wing relay in this model.
(iv) We derive closed-form expressions of the OPs, system throughput, and EE at the two IDs to assess the performance of the PSR scheme in SWIPT-based multiple-UAV cooperative NOMA systems.

1.3. Organization. The remainder of the paper is organized as follows: Section 2 presents the proposed system model and assumptions. Section 3 analyzes the performance in terms of OP, throughput, and EE. Section 4 discusses the simulation results. Finally, Section 5 concludes the paper.
System Model
We investigate a downlink cooperative two-hop rotary-wing relay system in which a base station (BS) aims to send signals to two IDs, D_1 and D_2, with the support of one out of N AF UAVs (UAV_1, UAV_2, …, UAV_N), N > 1, as shown in Figure 1. It is assumed that there is no communication between the UAVs. We mainly focus our attention on a homogeneous network topology in which all wireless links experience non-selective Rayleigh block fading and additive white Gaussian noise (AWGN). As shown in Figure 1, without loss of generality, we assume that the channels of the two IDs are ordered as |h_{SD_1}| ≤ |h_{SD_2}|. D_1 and D_2 are paired to implement the downlink cooperative NOMA system. Hence, two successive phases are involved in completing the information transmission, and the received signals can be combined by selection combining (SC): out of the N signals received, the strongest is selected. When the N signals are independent and Rayleigh distributed, the gain appears in the average power ratio as ∑_{n=1}^{N} (1/n). Table 1 lists the parameters used throughout the paper, unless otherwise stated.

2.1. BS-Based PSR Protocol for EH at UAV_n. At UAV_n, we consider the BS-based PSR protocol as the EH mechanism. Figure 2 depicts a diagram of the BS-based PSR scheme for EH at UAV_n within a block time T. For the direct link, during the whole time T, the BS transmits information directly to the two IDs D_1 and D_2. For the indirect link, the signal power collected at UAV_n is denoted by P; the BS transmits the information to UAV_n in the first half-block, while the information is sent from UAV_n to the two IDs D_1 and D_2 in the remaining time (i.e., T/2). The BS concurrently sends the superposition-coded signal to R_n; the transmitted signal at the BS can therefore be written as x_S = √(a_1 P_S) x_1 + √(a_2 P_S) x_2 (Equation (1)). Based on this superposition of the transmitted signals at the BS, as in the NOMA scheme, the observation at UAV_n and the signals received at the two IDs D_1 and D_2 follow accordingly. Since D_1 is farther from UAV_n than D_2, more power is allocated to D_1 than to D_2 to ensure user fairness. Without loss of generality, 0 < a_2 < a_1 with a_1 + a_2 = 1. Based on the PSR protocol, UAV_n divides the collected power into two portions: harvested energy and IP energy. The energy harvested at UAV_n is E_H = η β P_S |h_{SR_n}|² (T/2), where η depends on the rectifier and the EH circuitry at UAV_n. The total energy harvested in the EH phase is consumed at UAV_n while forwarding the decoded signal to D_i, i ∈ {1, 2}. The transmission power at UAV_n depends on E_H and is P_{UAV_n} = E_H/(T/2) = G_E P_S, where G_E = |h_{SR_n}|² η β denotes the EH coefficient of the PSR protocol.
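To make the PSR energy bookkeeping concrete, a minimal numerical sketch follows (ours, not the paper's). It assumes the textbook PSR relations E_H = η β P_S |h_{SR_n}|² (T/2) and P_{UAV_n} = E_H/(T/2) = G_E P_S, which are consistent with the coefficient G_E = |h_{SR_n}|² η β defined above; the parameter values and the single Rayleigh channel draw are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical, not the paper's Table 1 values)
P_S = 1.0    # BS transmit power
eta = 0.8    # energy-harvesting efficiency of the rectifier circuitry
beta = 0.7   # power-splitting ratio (fraction of received power sent to EH)
T = 1.0      # block duration

# Rayleigh block fading: the channel power gain |h_SRn|^2 is exponential
h_SRn_sq = rng.exponential(scale=1.0)

# Energy harvested at UAV_n during the first half-block (PSR protocol)
E_H = eta * beta * P_S * h_SRn_sq * (T / 2)

# Relay transmit power spent over the second half-block: P_UAV = E_H / (T/2)
G_E = eta * beta * h_SRn_sq          # EH coefficient, as defined in the text
P_UAV = G_E * P_S
assert np.isclose(P_UAV, E_H / (T / 2))

print(f"harvested energy E_H = {E_H:.4f}, relay power P_UAV = {P_UAV:.4f}")
```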
Information Processing at UAV_n and D_i. In the first phase, the BS transmits the signals x_1 and x_2 for all UAVs and the two IDs as in Equation (1). During the second phase, UAV_n transmits the signal x_{R_n} = G_n y_{R_n} to the two IDs D_1 and D_2, where G_n denotes the amplifying gain at UAV_n. Here ρ ≜ P_S/N_0 denotes the transmit SNR, and the random variables (RVs) A_i = ρ|h_{SD_i}|², B_n = ρ|h_{SUAV_n}|², and C_{in} = ρ|h_{UAV_n D_i}|² describe the instantaneous SNRs of the links BS → D_i, BS → R_n, and R_n → D_i, respectively. The signals forwarded by UAV_n to D_1 and D_2 follow accordingly. During the first phase, x_2 is treated as interference in y_{D_1}, and the instantaneous SINR at D_1 follows; in the same way, the instantaneous SINR at D_2 is obtained. Following the NOMA scheme, D_2 first decodes the message intended for D_1 and removes it via SIC; it then decodes its own message without interference, which gives the instantaneous SNR at D_2. During the second phase, the instantaneous SINR calculation is analogous to that of the first phase; in particular, the instantaneous SINR at D_1 relative to the relayed link is expressed similarly. Finally, by using (10)-(15), the signals from the relayed link and the direct link are combined by selection combining (SC), which yields the instantaneous SINR per ID. Next, we compute the OPs of the two IDs assuming the UAVO method [33]: the relay index is chosen according to the strongest first-hop link, whose SNR at UAV_n is denoted by γ_{BSUAV_n} = B_n.

Performance Analysis
3.1. Outage Performance. The target SINR of each ID is determined by that ID's quality-of-service (QoS) requirement; hence, each ID has a target SINR.
3.1.1. Outage Probability at D_1. Based on [4], the cumulative distribution functions (CDFs) of the RVs can be expressed accordingly, where Ω_{A_i} = ρΩ_{SD_i}, Ω_{B_n} = ρΩ_{BSUAV_{n*}}, and Ω_{C_in} = ρΩ_{UAV_{n*}D_i} represent the average SNRs of the respective links. According to the NOMA principle, an outage event occurs if neither the direct transmission nor the relayed transmission succeeds. Hence, the OP at D_1 follows.
Theorem 1. The OP at D_1 can be derived as in (23). Proof: see Appendix A.
3.1.2. Outage Probability at D_2. Because D_2 needs to decode the signal of D_1 first, D_2 is in outage if both the first and the second stage are in outage. Therefore, the OP at D_2 can be formulated as follows.
Theorem 2. The OP at D_2 can be derived as in (25). Proof: see Appendix B.
3.1.3. Asymptotic Outage Probability at D_1. Applying the Maclaurin expansion, we have e^x ≃ 1 + x and K_1(x) ≃ x^{−1} for small x. Hence, the asymptotic OP for P_{D_1}, whose exact analysis is presented in (23), is simplified accordingly, and a fraction of the asymptotic approximation for P_{D_1} in (24) is obtained using the identity ∑_{n=1}^{N} C(N, n) (−1)^{n−1} = 1. Ultimately, from (31) and (34), an asymptotic OP expression for P_{D_1} in (25) is obtained.
3.1.4. Asymptotic Outage Probability at D_2. The asymptotic OP for P_{D_2} is derived in the same way as for P_{D_1}, whose exact analysis is portrayed in (27); a portion of the asymptotic approximation for P_{D_2} in (28) follows, and ultimately, from (36) and (39), an asymptotic OP expression for P_{D_2} in (29) is obtained.
Table 2: Simulation parameters. The normalized distance between the BS and UAV_n, d = 0.3; the path-loss factor, m = 2; the target rates, R_1 = 0.5 and R_2 = 0.25; the energy-harvesting efficiency, η = 0.8; the power-splitting ratio, β = 0.7; the power allocation coefficients for the signals.
3.2. System Throughput. In this case, the BS sends information at a constant target rate, so the throughput is governed by the OP caused by the wireless fading channel. The system throughput of the NOMA relayed link is given as follows. 3.3. The system throughput at D_1 is determined by P_{D_1} given in (23), and the system throughput at D_2 by P_{D_2} given in (25).
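As an independent cross-check on the closed-form OPs, the outage events can also be estimated by direct Monte-Carlo simulation of the SC-combined SINRs. The sketch below is hedged: it uses generic textbook forms for the downlink NOMA SINRs and the AF end-to-end SNR bound BC/(B + C + 1), together with best first-hop relay selection, since the paper's numbered equations are not reproduced here; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
N = 3                    # number of UAV relays
rho = 10.0               # transmit SNR (linear scale), illustrative
a1, a2 = 0.8, 0.2        # power allocation, a1 + a2 = 1 and a1 > a2
R1 = 0.5                 # target rate of D1 (bits/s/Hz)
g1 = 2.0 ** (2.0 * R1) - 1.0   # SINR threshold (factor 2 for two-phase relaying)

# Rayleigh fading: instantaneous SNRs are exponentially distributed
A1 = rho * rng.exponential(size=trials)          # BS -> D1 (direct link)
B = rho * rng.exponential(size=(trials, N))      # BS -> UAV_n
C1 = rho * rng.exponential(size=(trials, N))     # UAV_n -> D1

# UAVO: select the relay with the strongest first-hop SNR
n_star = np.argmax(B, axis=1)
Bs = B[np.arange(trials), n_star]
Cs = C1[np.arange(trials), n_star]

# Generic AF end-to-end SNR bound, then the NOMA SINR for D1's symbol
g_e2e = Bs * Cs / (Bs + Cs + 1.0)
sinr_relay = a1 * g_e2e / (a2 * g_e2e + 1.0)
sinr_direct = a1 * A1 / (a2 * A1 + 1.0)

# Selection combining: outage only if both branches miss the threshold
P_out = np.mean(np.maximum(sinr_direct, sinr_relay) < g1)
throughput = R1 * (1.0 - P_out)    # delay-limited throughput estimate
print(f"OP at D1 ~ {P_out:.4f}, throughput ~ {throughput:.4f}")
```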
Energy Efficiency. EE is defined as the ratio of the sum throughput to the total power consumed in the whole network system. The energy efficiency at D_2 follows, where τ_{D_2} is calculated using (38).

Simulation Results
This section validates the analytical results derived in the previous sections. The distance between the BS and the IDs is normalized to unity. Table 2 lists the simulation parameters used to assess the SWIPT-based multi-UAV cooperative NOMA system. Figure 3 describes the OP at D_1 with the PSR protocol versus SNR without a direct link; the theoretical curves for the OP at D_1 under NOMA are plotted according to Equations (3.24) and (3.27), and the exact probability curves are consistent with the Monte-Carlo simulation results, with N ∈ {1, 2, 10} and Ω_{SD_1} = 0. Figure 4 shows the OP at D_1 with the PSR protocol versus SNR with a direct link; the theoretical curves follow Equations (3.24), (3.27), and (3.37), assuming N = 1 and Ω_{SD_1} = 1. Figure 5 describes the OP at D_2 with the PSR protocol versus SNR without a direct link; the theoretical curves follow Equations (3.24) and (3.27) and agree with the Monte-Carlo simulations, with N ∈ {1, 2, 10} and Ω_{SD_1} = 0. Figure 6 shows the OP at D_2 with the PSR protocol versus SNR with a direct link, according to Equations (3.24), (3.27), and (3.37), with N = 1 and Ω_{SD_1} = 1. Figure 7 shows the OPs at D_1 and D_2 with the PSR protocol versus SNR without a direct link, plotted according to Equations (3.24), (3.27), (3.28), and (3.30), respectively, with N = 3 and Ω_{SD_1} = Ω_{SD_2} = 0. Figure 8 depicts the OPs at D_1 and D_2 with the PSR protocol versus SNR without a direct link, with N = 3 and Ω_{SD_1} = Ω_{SD_2} = 1. Figure 9 shows the throughput at D_1 and D_2 with the PSR protocol versus SNR with a direct link, plotted according to Equations (3.43) and (3.44); the throughput curves are consistent with the Monte-Carlo simulations, with N = 3 and Ω_{SD_1} = Ω_{SD_2} = 1. Figure 10 shows the energy efficiency at D_1 and D_2 with the PSR protocol versus SNR with a direct link, plotted according to Equations (3.45) and (3.46); the EE curves are consistent with the Monte-Carlo simulations, with N = 3 and Ω_{SD_1} = Ω_{SD_2} = 1.

Conclusion
This paper has investigated the BS-based PSR protocol for the NOMA system. We used an AF rotary-wing relay network with cooperative UAV systems and the UAV option. Closed-form expressions of the OP for the two IDs were derived. Based on simulations of the OP, throughput, and EE, the results indicate that NOMA with the UAV option improves efficiency as the number of UAVs increases, but the outage performance hardly improves when the number of UAVs increases from 2 to 10 in the high-SNR region. This indicates that the NOMA protocol with UAVO does not need more than two UAVs.
Numerical results confirm that our derived analytical results match the Monte-Carlo simulation results precisely across all considered system parameters. Furthermore, deploying multiple antennas at the two IDs, together with an investigation of Rayleigh and Rician fading channels to enhance system performance, is left for future work.

Data Availability
The data used to support the findings of this study are included in the paper.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding this manuscript.
A Robust Control Chart for Monitoring Dispersion

Most robust control charts in the literature are for monitoring process location parameters, such as the mean or median, rather than process dispersion parameters. This paper develops a new robust control chart by integrating a two-sample nonparametric test into the effective change-point model. Our proposed chart is computationally simple, convenient to use, and very powerful in detecting process dispersion shifts.

Introduction
Statistical process control (SPC) has been widely used in various industrial processes. Most SPC applications assume that the quality of a process can be adequately represented by the distribution of a quality characteristic, and that the in-control (IC) and out-of-control (OC) distributions are the same with only differing parameters. Parametric methods are only useful in certain applications, however, because there is often not enough knowledge about the process distribution. For example, univariate process data are often assumed to have normal distributions, although it is well recognized that, in many applications, particularly in start-up situations, the underlying process distribution is unknown and not normal, so that the statistical properties of commonly used charts, designed to perform best under the normal distribution, could potentially be (highly) affected. Robust charts are therefore needed in such situations. A chart is called robust or distribution-free if its IC run-length distribution is nearly the same for every continuous distribution [1].
In the last several years, robust control charts have attracted much attention. For example, Bakir and Reynolds [2] proposed a cumulative sum (CUSUM) chart for grouped observations based on the Wilcoxon signed-rank statistic. McDonald [3] considered a CUSUM procedure for individual observations based on statistics called "sequential ranks." An exponentially weighted moving average (EWMA) chart for individual observations proposed by Hackl and Ledolter [4] is constructed from the "standardized ranks" of observations, which are determined by the IC distribution. If the distribution is not available, they recommended using the ranks in collected reference data instead. The robust charts considered by Chakraborti et al. [5,6] are based on the precedence test. Recently, a Shewhart-type chart and a scheme using a change-point formulation based on the Mann-Whitney test statistic were investigated by Chakraborti and van de Wiel [7], Zhou et al. [8], and Hawkins and Deng [9]. Jones-Farmer et al. [10] developed a rank-based robust Phase I control scheme for subgroup location. Other developments include Albers and Kallenberg [11] and Bakir [12,13]. A nice overview of the topic of univariate robust control charts was presented by Chakraborti et al. [1]. In addition, robust control charts for multivariate cases have been discussed by Liu [14], Qiu and Hawkins [15], and Qiu [16].
Most of the robust charts mentioned above focus on monitoring the process median, but monitoring the process dispersion is also highly desirable. However, there are far fewer robust control charts that can monitor process dispersion. Zou and Tsung [17] proposed a chart that incorporates a powerful goodness-of-fit (GOF) test [18], based on the nonparametric likelihood ratio, into an EWMA chart. It can detect more general changes than location shifts and is also very easy to compute, but it leaves a tuning parameter to choose. This paper develops a new robust control chart by integrating a two-sample nonparametric test [19] into the effective change-point model. Simulation studies show that the proposed method is superior to other robust schemes in monitoring dispersion. As it avoids the need for a lengthy data-gathering step before charting (although it is generally necessary and advisable to have at least about 20 warm-up samples) and it does not require knowledge of the underlying distribution, the proposed chart is particularly useful in start-up or short-run situations.
The rest of this paper is organized as follows. The control chart for Phase I is given in Section 2. The control chart for Phase II is derived in Section 3. Performance comparisons with two other robust control charts are discussed in Section 4. The conclusion is given in Section 5.

The Control Chart for Phase I
We begin by considering the Phase I problem of detecting a change point in a fixed-size sequence of observations. We denote the observations by {X_1, …, X_n}, and the goal is to test whether they have all been generated by the same probability distribution. We assume that no prior knowledge is available regarding this distribution other than that it is continuous. Using the language of statistical hypothesis testing, the null hypothesis is that there is no change point and all the observations come from the same distribution, while the alternative hypothesis is that there exists a single change point k in the sequence which partitions the observations into two sets, with X_1, …, X_k coming from the prechange distribution F_0 and X_{k+1}, …, X_n coming from a different postchange distribution F_1. We can test for a change point immediately following any observation by partitioning the observations into two samples S_1 = {X_1, …, X_k} and S_2 = {X_{k+1}, …, X_n} of sizes n_1 = k and n_2 = n − k, respectively, and then performing an appropriate two-sample hypothesis test. For example, to detect a change in a location parameter without making assumptions about the distribution, the Mann-Whitney statistic would be a proper test statistic [9]. In order to monitor the process dispersion, we consider the Mood test. The Mood test uses a statistic of the form M_k = ∑_{i=1}^{k} (R_i − (n + 1)/2)², where R_i is the rank of the i-th observation in the pooled sample. Under the null hypothesis, the mean and variance of the Mood test statistic are E[M_k] = n_1(n² − 1)/12 and Var[M_k] = n_1 n_2 (n + 1)(n² − 4)/180, so that the standardized statistic is M_{k,n} = (M_k − E[M_k]) / √(Var[M_k]).
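As a small illustration (ours, not the paper's), the following sketch computes the standardized Mood statistic for a candidate split point, using the classical Mood moments quoted above; the function name and example data are invented.

```python
import numpy as np

def mood_stat(x, k):
    """Standardized Mood statistic for a dispersion change after index k.

    x : 1-D array of n observations; candidate split {x[:k]} vs {x[k:]}.
    Returns (M_k - E[M_k]) / sd(M_k) under the no-change null hypothesis.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1            # ranks in pooled sample
    n1, n2 = k, n - k
    M = np.sum((ranks[:k] - (n + 1) / 2.0) ** 2)     # Mood statistic
    mean = n1 * (n * n - 1) / 12.0
    var = n1 * n2 * (n + 1) * (n * n - 4) / 180.0
    return (M - mean) / np.sqrt(var)

# Example: dispersion doubles after observation 30
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(0, 2, 30)])
print(mood_stat(x, 30))   # large in magnitude when dispersion has changed
```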
We reject the null hypothesis that no change occurs at k if M_{k,n} > h_{k,n} for some appropriately chosen value of h_{k,n}. The statistic can be integrated into the change-point model and is easy to compute. Now, since we do not know in advance where the change point is located, we do not know which value of k to use for partitioning. We therefore specify a more general null hypothesis that there is no change at any point in the sequence. The alternative hypothesis is then that there exists a change point at some unspecified value of k. We can perform this test by computing M_{k,n} at every value 0 < k < n and taking the maximum value. This leads to the maximized test statistic M_n = max_{0<k<n} M_{k,n}. If M_n > h_n for some suitably chosen threshold h_n, then the null hypothesis is rejected, and we conclude that a change occurred at some point in the data. In this case, the best estimate τ̂ of the location of the change point is the value of k which maximizes M_{k,n}. If M_n ≤ h_n, then we do not reject the null hypothesis and we conclude that no change has occurred. The choice of this threshold is discussed further in the following section.

The Control Chart for Phase II
Having considered the problem of detecting changes in a fixed-size sample, we now turn to the task of sequential Phase II monitoring, where new observations are received over time. Let X_t denote the t-th observation, where t increases over time. First, since there are only a finite number of ways to assign ranks to a set of points, the statistic can only take a discrete set of values. This creates a problem for threshold choice when t is small, since it may not be possible to find a value for h_t which gives the exact ARL_0 required; this is a general problem when dealing with discrete-valued test statistics. Therefore, we recommend that Phase II monitoring only begins after the first 20 observations have been received, which gives sufficient possibilities for rank assignments to make most ARL_0 values achievable. This seems a reasonable compromise, since in practice it would be very difficult to detect a change that occurred during the first 20 observations. We then make a modification to the statistic: supposing there are n_0 warm-up observations, and because it is impossible to have a change point within these warm-up data, we set M_{k,t} = 0 for k < n_0. Once a new observation X_t is received, we regard {X_1, …, X_t} as a fixed-size sample and employ our proposed method, based on the above modified statistic, to test whether a change point has occurred. The problem of sequential monitoring is then reduced to performing a sequence of fixed-size tests. Suppose it is desired to have an IC average run length (ARL_0) of γ. This can be achieved if we choose the h_t values so that the conditional probability of incurring a false alarm at the t-th observation equals 1/γ for every t. It is not trivial to find a sequence of h_t values which satisfies this property. The approach in Hawkins and Deng [9] is to use Monte-Carlo simulation, and we follow the same route. One million realizations of the sequence {X_1, …, X_1000} were generated.
Because the distribution of M_t is independent of the distribution of the observations, these values can be sampled from any continuous distribution so long as they are independent and identically distributed. Then, for each value of t, M_t is computed for each of the million realizations. The values of h_t corresponding to the desired ARL_0 can then be read off from them. Table 1 shows the values of h_t which give various commonly used values of the ARL_0. Note that these values appear to have converged by the 1000th observation, so if the stream contains more than 1000 observations it is reasonable to let h_t = h_1000 for t > 1000. We denote our chart by ROBUSTD, standing for Robust Control Chart for Monitoring Process Dispersion.
To be used in practice, our approach requires a computationally efficient method for computing the ROBUSTD statistic M_t. We denote by R_i^{(t+1)} the rank of the i-th observation among all (t + 1) observations. Although computing these R_i^{(t+1)} values may seem computationally expensive, the cost can be greatly reduced by noting that the arrival of a new observation X_{t+1} has only a small effect on the existing ranks: each R_i^{(t+1)} either equals R_i^{(t)} or exceeds it by one, depending on whether X_{t+1} is below or above X_i. Therefore, we can compute M_{k,t+1}, where k denotes the possible change point, from these R_i^{(t+1)} values and ultimately obtain the M_{t+1} value.

Performance Comparisons
We now evaluate the performance of our chart. As is standard in the quality control literature, we measure performance as the average time taken to detect a change of magnitude δ, which we denote by ARL_1(δ). We consider changes which affect the process dispersion. Three different process distributions are considered: the standard normal distribution N(0, 1), the Student t distribution with 3 degrees of freedom t(3), and the chi-square distribution with 3 degrees of freedom χ²₃. The latter two correspond to heavy-tailed and skewed distributions, respectively. Because our chart can be treated as a self-starting chart, the number of observations available before the change may have a large impact on its performance. We therefore consider changes which occur after both 50 and 100 observations, that is, τ ∈ {50, 100}. We compare our ROBUSTD chart to two other change-point detection algorithms. The first is the method described in Hawkins and Deng [9] for location shifts, which we denote by MWCPM. It uses a change-point model similar to ours, but its test statistic is the Mann-Whitney statistic. Second, we compare our ROBUSTD chart to that of Zou and Tsung [17], which integrates the nonparametric likelihood ratio test framework into the EWMA chart. Their chart contains a tuning parameter λ used in the EWMA scheme: large values of λ produce a chart which is more sensitive to large changes, while small values of λ produce a chart which is sensitive to small changes. We choose λ = 0.1, a value considered in their paper, and denote their chart by NLREWMA. To allow fair comparisons, we set the ARL_0 of every chart at 500. Similar results hold for other values of ARL_0, but we omit them for space reasons. For each of the three distributions, 10000 sequences were generated, and the change consists of multiplying all postchange observations by δ. The average time taken to detect the change is then recorded for each chart. Tables 2, 3, and 4 show the average times required to detect shifts in dispersion, from which we draw the following conclusions:
(i) Our chart is much better than the MWCPM in all cases of dispersion shifts.
(ii) Our chart is much better than the NLREWMA in most cases of dispersion shifts.
We can therefore conclude that, when the aim is to monitor dispersion shifts, our chart is the best choice among those compared, since it gives excellent performance across all magnitudes of shift considered.

Conclusions
We proposed a new robust and self-starting control chart to detect dispersion shifts by integrating a two-sample nonparametric test [19] into the effective change-point model. Our chart is much better than some other nonparametric methods in most cases of shifts in dispersion. As it avoids the need for a lengthy data-gathering step before charting (although it is generally necessary and advisable to have several warm-up samples) and it does not require knowledge of the underlying distribution, the proposed chart is particularly useful in start-up or short-run situations.

Table 1: Values of the threshold sequence h_t corresponding to ARL_0 values of 200, 500, and 1000.
Table 2: ARL_1(δ) for dispersion shifts in the N(0, 1) distribution, for several values of the change time τ.
Table 3: ARL_1(δ) for dispersion shifts in the heavy-tailed distribution t(3), for several values of the change time τ.
Table 4: ARL_1(δ) for dispersion shifts in the skewed distribution χ²₃, for several values of the change time τ.
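To tie the Phase I scan and the Phase II loop together, a compact sketch of the ROBUSTD monitoring procedure follows; it reuses the mood_stat helper sketched in Section 2. The threshold here is a fixed placeholder purely for illustration (in practice the sequence h_t comes from the Monte-Carlo calibration described in Section 3), and a two-sided statistic |M_{k,t}| is used so that both increases and decreases in dispersion register.

```python
import numpy as np

def robustd_scan(x, n0=20):
    """Maximized (two-sided) Mood change-point statistic over admissible splits."""
    n = len(x)
    stats = [abs(mood_stat(x, k)) for k in range(n0, n - 1)]
    k_hat = n0 + int(np.argmax(stats))
    return max(stats), k_hat

# Phase II loop: test after each new observation once 20 warm-ups are in
rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0, 1, 40), rng.normal(0, 2, 40)])
h_placeholder = 3.2    # illustration only; use the calibrated h_t in practice
data = []
for t, obs in enumerate(stream, start=1):
    data.append(obs)
    if t < 22:
        continue           # monitoring starts after the warm-up period
    M_t, k_hat = robustd_scan(np.asarray(data))
    if M_t > h_placeholder:
        print(f"signal at t={t}, estimated change point k={k_hat}")
        break
```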
Ionothermal synthesis of magnetic N-doped porous carbon to immobilize Pd nanoparticles as an efficient nanocatalyst for the reduction of nitroaromatic compounds

Carbon materials play important roles as catalysts or catalyst supports for reduction reactions owing to their high porosity, large specific surface area, great electron conductivity, and excellent chemical stability. In this paper, a mesoporous N-doped carbon substrate (denoted N-C) has been synthesized by ionothermal carbonization of glucose in the presence of histidine. The N-C substrate was modified with Fe3O4 nanoparticles (N-C/Fe3O4), and then Pd nanoparticles were stabilized on the magnetic substrate to synthesize an eco-friendly Pd catalyst with high efficiency, magnetic recoverability, reusability, and great stability. To characterize the Pd/Fe3O4-N-C nanocatalyst, different microscopic and spectroscopic methods such as FT-IR, XRD, SEM/EDX, and TEM were applied. Moreover, Pd/Fe3O4-N-C showed high catalytic activity in reducing nitroaromatic compounds in water at ambient temperature when NaBH4 was used as the reducing agent. The great catalytic durability and power of the provided nanocatalyst can be attributed to the synergetic interaction between well-dispersed Pd nanoparticles and the N-doped carbonaceous support.

Synthesis of the nitrogen-doped carbon substrate (N-C substrate)
To prepare the nitrogen-doped carbon substrate, in the first step, glucose (33.3 mmol), zinc chloride (66.03 mmol), and histidine (20 mmol) were blended well in a mortar to obtain a homogeneous composition. Subsequently, 20 mL of water was added to the mixture and mixed well. Next, the resulting combination was decanted into a 50 mL Teflon autoclave and placed in an oven at 180 °C for 20 h. After cooling to ambient temperature, the admixture was rinsed with distilled water multiple times and immersed in hydrochloric acid (0.25 M) overnight. Subsequently, the black residue was washed with ethanol and water to eliminate the excess salt and acid. Eventually, the resulting product was collected by centrifugation and dried at 40 °C.

Synthesis of the magnetic substrate (N-C/Fe3O4)
Briefly, 0.5 g of the obtained sample (N-C substrate) was dispersed in distilled water (120 mL). A blend of 2.5 mmol of FeCl2·4H2O and 5 mmol of FeCl3·6H2O was added and stirred for 1 h at room temperature. Then, under reflux conditions, the reaction temperature was brought to 60 °C while 10 mL of NH4OH was added dropwise to the above mixture, which was continuously stirred for another 1 h. Later, the magnetic sediment was separated by an external magnet, washed three times with distilled water, and dried at room temperature.

Synthesis of Pd nanoparticles on the magnetic substrate (Pd/Fe3O4-N-C)
In the final step, to synthesize Pd nanoparticles on the magnetic nitrogen-doped carbon substrate, 0.2 mmol of palladium(II) chloride and 60 mL of acetonitrile were first stirred constantly at 55 °C for 1 h. Afterward, 250 mg of the precipitate from the previous step was added to the above solution and stirred at the same temperature for another 30 min; then a solution of hydrazine hydrate (0.5 mL) in deionized water (2 mL) was added dropwise and the mixture was stirred for 24 h. The final catalyst was magnetically separated, washed multiple times with distilled water, and dried at room temperature. The procedure of Pd/Fe3O4-N-C nanocatalyst synthesis is demonstrated in Fig. 1.
Performance of the Pd/Fe3O4-N-C nanocatalyst in the hydrogenation reaction
The Pd/Fe3O4-N-C nanocatalyst prepared by this ionothermal method had high catalytic efficiency toward the hydrogenation of nitroaromatics. Reduction reactions were conducted in aqueous solution at ambient temperature, with NaBH4 as the reducing agent. Accordingly, 3 mL of water and 0.5 mmol of the nitroaromatic compound in a round-bottom flask (10 mL) were stirred vigorously at room temperature. Afterward, 5 mg of Pd/Fe3O4-N-C and 3 mmol of sodium tetrahydroborate were added to the mixture, which was stirred until the reaction was complete. Thin-layer chromatography was used to monitor the progress of the reduction reaction (n-hexane:ethyl acetate 7:3). At the end of the reaction, the Pd/Fe3O4-N-C nanocatalyst was removed using an external magnet, washed with distilled water and ethanol, and dried for reuse in the next cycle. Additionally, the final product was recrystallized for purification.
As shown in Fig. 2a, the absorption bands at 814 cm-1 and 1065 cm-1 are related to the N-H bending vibration and the C-O stretching vibration, respectively. The peak at 1434 cm-1 mainly belongs to the stretching vibration of the C-N bond. The two absorption bands observed at 1559 cm-1 and 1621 cm-1 correspond to the stretching vibration of the C=N bond. The absorption bands at 2854 cm-1 and 2924 cm-1 indicate the stretching vibration of the C-H bond. The broad peak at 3400 cm-1 refers to the stretching vibrations of N-H and O-H. These outcomes therefore reveal the presence of nitrogen in the carbonaceous framework. In Fig. 2b, the band at 577 cm-1 characterizes the vibration of the Fe-O bond, representing the formation of Fe3O4. In Fig. 2c, it is observed that adding palladium nanoparticles to the surface of the magnetic substrate produced no significant change, which indicates that the N-C/Fe3O4 substrate was stable during the synthesis of the palladium nanoparticles [55].
Information obtained from the XRD pattern confirmed the formation of Fe3O4 and Pd nanoparticles. Based on the Debye-Scherrer equation, the crystallite sizes of the Pd and Fe3O4 nanoparticles were computed to be 29.5 nm and 17.2 nm, respectively.
FESEM, TEM and HRTEM
SEM, TEM and HRTEM images of the Pd/Fe3O4-N-C nanocatalyst surface, as depicted in Fig. 4, were studied to assess its surface morphology, particle size, and uniformity. Figure 4a,b reveals that the Fe3O4 and Pd nanoparticles immobilized on the amorphous carbon support possess a spherical morphology with nanoscale particle size; the estimated size of the nanoparticles is 35-40 nm. These images also exhibit an N-doped carbon substrate well decorated with Pd and Fe3O4 nanoparticles. The magnetostatic interaction between the particles led to partial agglomeration.
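As a numerical illustration of the Scherrer estimate quoted above, the sketch below evaluates D = Kλ/(β cos θ); the reflection position and peak width are hypothetical inputs, not the measured values behind the 29.5 nm and 17.2 nm figures.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)          # peak FWHM converted to radians
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical Pd(111) reflection near 2-theta = 40 degrees with 0.3 deg FWHM
print(f"D = {scherrer_size(40.0, 0.3):.1f} nm")
```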
EDS mapping and ICP-MS
The results obtained from the EDX mapping analysis in Fig. 5 confirmed the presence of carbon, nitrogen, oxygen, iron, and palladium in the Pd/Fe3O4-N-C nanocatalyst. Additionally, the presence of nitrogen and carbon in the elemental analysis indicates that the nitrogen-doped carbon substrate was successfully formed. Thus, the synthesis of the new catalyst is fully confirmed. Moreover, the ICP-MS test was used to determine the exact amount of Pd; this analysis showed a concentration of 4.11%. To investigate the stability of the synthesized nanocatalyst, the used nanocatalyst was studied by EDS mapping and ICP. The results showed that over several uses, the percentage of palladium, the main active center of the nanocatalyst, decreased only slightly (to 3.81%), indicating the suitable stability of the synthesized nanocatalyst.
N2 adsorption-desorption isotherm
Using the Brunauer-Emmett-Teller (BET) method, N2 adsorption-desorption isotherms were measured to determine the specific surface areas of the nitrogen-doped carbon substrate (N-C) and the Pd/Fe3O4-N-C nanocatalyst. As shown in Fig. 6a, the isotherms were identified as type IV, which corresponds to the porous structure of the samples. The data produced from the adsorption isotherms are given in Table 1.
Vibrating sample magnetometer (VSM) analysis
Using the VSM technique, the magnetic behaviour of the Pd/Fe3O4-N-C nanocatalyst was assessed, as illustrated in Fig. 7. Based on the curve, Pd/Fe3O4-N-C has magnetic properties with a saturation magnetization (Ms) value of 40.3 emu/g. This nanocatalyst also exhibited superparamagnetic properties owing to the absence of a hysteresis loop. The superparamagnetic behaviour of Pd/Fe3O4-N-C allows the particles to collect quickly in the presence of an external magnetic field; once the external field is removed, the particles readily redisperse.
An effective approach for examining the electronic characteristics of the species generated on the surface is X-ray photoelectron spectroscopy (XPS), which can reveal information on the environment of the electrons, their oxidation state, and the binding energy of the metal's core electrons. The XPS spectrum of Pd/Fe3O4-N-C is displayed in Fig. 8. The Fe 2p XPS spectrum contains two significant peaks, at 712.6 and 726.2 eV, corresponding to the usual Fe 2p3/2 and Fe 2p1/2 XPS signals of magnetite. Furthermore, the Pd nanoparticles are stable in their metallic form in the nanocomposite structure, as shown by the peaks for Pd 3p3/2 and Pd 3p1/2 at 531.8 and 553.4 eV, respectively, in the Pd 3p level analysis of Pd/Fe3O4-N-C. The Pd peaks in Pd/Fe3O4-N-C are shifted to lower binding energies than the standard Pd0 binding energies (Pd 3p3/2 of about 532.4 eV and Pd 3p1/2 of about 560.2 eV). It has been reported that the position of the Pd 3p peak is usually influenced by the local chemical/physical environment around the Pd species, besides the formal oxidation state, and shifts to lower binding energy when the charge density around it increases. In the XPS elemental scan of the catalyst, the peaks for oxygen (O 1s), carbon (C 1s), and nitrogen (N 1s) are also clearly discernible.
Optimum conditions for the reduction of nitro compounds
As a model reaction for optimizing the reduction conditions of nitroaromatic compounds, the reduction of 4-nitrophenol (0.5 mmol) was assessed. The amount of the Pd/Fe3O4-N-C nanocatalyst, the type of solvent, and the temperature were evaluated, as shown in Table S1. To this end, the nanocatalyst amount was examined first. The experiments illustrated that in the absence of the catalyst, the reduction reaction did not occur, so the presence of the Pd/Fe3O4-N-C nanocatalyst is necessary for the reaction (Table S1, Entry 1). In line with the results, 5 mg of Pd/Fe3O4-N-C was selected as the optimal amount of nanocatalyst (Table S1, Entries 2-5). Moreover, increasing the amount of nanocatalyst increased the yield and shortened the reaction time.
In addition, the activity of the nanocatalyst before and after the addition of Fe3O4 nanoparticles was investigated. The efficiency of the nanocatalyst did not change significantly upon the addition of Fe3O4 nanoparticles, which indicates that the Fe3O4 nanoparticles only facilitate the separation of the nanocatalyst from the reaction medium and have no significant effect on the catalytic activity.
After determining the optimal amount of Pd/Fe3O4-N-C nanocatalyst, to examine the effect of temperature on the reaction progress, the model reaction was conducted at 25 °C and 50 °C (Table S1, Entry 6). The proper and ideal reaction temperature was 25 °C, in keeping with the principles of green chemistry and lower energy expenditure.
Eventually, the model reaction was carried out with several solvents (Table S1, Entries 7-13). As the results show, water gave the best performance with a 98% yield and was selected as the optimal solvent, being environmentally friendly and inexpensive.
Following the determination of the optimal conditions, to verify the effectiveness of the Pd/Fe3O4-N-C nanocatalyst, the reduction of various types of nitroaromatics was investigated under the optimal conditions, and the results are shown in Table 2.
Comparison of the catalytic activity of Pd/Fe3O4-N-C with other catalytic systems reported for the hydrogenation of 4-nitrophenol
The catalytic performance of Pd/Fe3O4-N-C was compared with some recent catalysts, and the results are reported in Table 3. As can be seen, all catalysts illustrated acceptable performance toward the hydrogenation of nitroaromatics; however, the Pd/Fe3O4-N-C nanocatalyst exhibited more notable activity than the reported catalysts. One of the remarkable benefits of this catalyst is the use of glucose and histidine as bio-friendly and green precursors. This work has some further benefits compared to the reported catalysts, for instance, mild reaction conditions such as a green solvent, low temperature, and short reaction time.
Reusability study of the Pd/Fe3O4-N-C nanocatalyst in the hydrogenation of nitroaromatics
In a study of the reusability and recyclability of the Pd/Fe3O4-N-C catalyst for the reduction of nitroarenes, the catalyst displayed remarkable recyclability. A magnet was used to separate the catalyst from the reaction mixture, and it was then washed repeatedly in ethanol before being used in the following cycle. Figure 9 shows that the catalyst may be recycled up to six times without significant changes in its weight or performance.
Conclusion
In order to create a new reusable magnetic nanocatalyst, N-doped, porous, magnetic, and immobilizing Pd nanoparticles, a straightforward and effective ionothermal approach has been developed in this research. The porous N-C substrate used to make this nanocatalyst offered a large number of active sites for the even distribution of Pd nanoparticles. The Pd/Fe3O4-N-C was effectively synthesized and employed as an effective heterogeneous nanocatalyst for reducing nitroaromatic compounds, based on the results of the various characterization procedures. In the presence of the Pd/Fe3O4-N-C nanocatalyst (5 mg), 4-nitrophenol in an aqueous medium was reduced with an efficiency of > 99% within 7 min. The Pd/Fe3O4-N-C nanocatalyst could be separated using an external magnet and reused up to six times without significant changes in performance. The synergetic Fe and N active sites in Pd/Fe3O4-N-C gave it a higher efficiency than other known catalysts. Because of these benefits, this catalyst is quite valuable for real-world applications.

Characterization of the Pd/Fe3O4-N-C nanocatalyst: the FT-IR spectra of the construction steps, (a) the N-C substrate, (b) N-C/Fe3O4, and (c) Pd/Fe3O4-N-C, are shown in Fig. 2 and discussed above.
Table 3: Comparison of the catalytic activity of Pd/Fe3O4-N-C and other reported catalytic systems in 4-nitroaniline hydrogenation.
An Exact Analytical Solution for the Second-Order Slip-Corrected Reynolds Lubrication Equation

We derive a general slip-corrected compressible Reynolds lubrication equation, valid for any choice of the slip velocities, and show that it possesses an exact analytical solution. The solution is obtained by a suitable transformation of the dependent variable, and it yields both the pressure distribution in the bearing and the mass flow rate through it. It can usefully be applied for testing other experimental or numerical results, obtained under the same or similar physical conditions, against this solution.

INTRODUCTION
New fabrication techniques developed during the last decade or so, in particular the production of micro-scale devices, have led to an intense application of micro-electro-mechanical systems (MEMS) technologies in our everyday life [1]. On the other hand, MEMS technologies have brought several new problems to the scientific community. In particular, in fluid mechanics it turns out that the behaviour of flow in a micro-scale device is not necessarily the same as that experienced in the macroscopic world. For example, in the context of compressible gas dynamics, rarefaction effects must be accounted for, and their presence can be recognized by the values attained by the Knudsen number Kn. As a rule of thumb, for 10^-3 < Kn < 10^-1 the Navier-Stokes equations are still valid, provided slip boundary conditions are implemented at the walls of the flow boundaries (slip-flow regime). In the range 10^-1 < Kn < 10 (transitional flow regime) the Navier-Stokes equations break down, and "higher-order", more complex, Burnett equations are necessary, or the individual particle-based direct simulation Monte Carlo (DSMC) approach is to be employed. Finally, for Kn > 10 the flow has to be treated as a free molecular flow amenable to the methods of the kinetic theory of gases.
Most MEMS devices in use today operate in the slip-flow regime. That is why most of the literature on these problems is devoted to the modelling of the slip boundary conditions at the walls (for an excellent review of these problems see [2]). We note in passing that there are several attempts in the literature to modify the existing slip boundary conditions in a purely empirical way, so as to cover all the regimes mentioned above, i.e. the entire Knudsen number range [3].
Roughly speaking, all rarefied gas flows appearing in MEMS devices can be divided into pressure-driven and shear-driven flows. A typical pressure-driven flow is a flow through a channel or a pipe. In contrast to the classical, incompressible flow case with no-slip boundary conditions, such a flow in the rarefied gas dynamics context is characterized by a nonlinear pressure drop in the direction of flow. The nonlinear first-order differential equation governing the pressure distribution in a channel or a pipe can be readily derived from the basic flow equations and solved analytically exactly for the so-called second-order slip boundary conditions [4].
Typical shear-driven flows are the Couette flow or any other flow appearing in a gas lubrication problem. The pressure distribution in a gas-lubricated bearing is governed by the so-called Reynolds equation. Under certain conditions it can be readily derived from the basic flow equations for both no-slip and slip boundary conditions [5][6][7]. This equation is also nonlinear. To the best of our knowledge only one exact analytical solution of this equation exists, and it is presented in [8]. It was found by suitably transforming the independent variable in the slip-corrected Reynolds equation.
In this paper it is shown that the same slip-corrected Reynolds equation can also be solved analytically by an appropriate transformation of the dependent variable (pressure) and by direct integration of the derived differential equation in closed form by quadratures. The validity of the solution is proved by comparison with numerical results available in the literature.

DERIVATION OF THE GENERAL SLIP-CORRECTED REYNOLDS EQUATION
For completeness of the presentation we first briefly derive a general Reynolds equation, i.e. the equation valid for an arbitrary model of the slip velocity. We consider the lubrication problem depicted in Fig. 1. Within the well-known approximations made in the derivation of the Reynolds equation [5,6], the extremely simplified Navier-Stokes equations, expressing the balance between the pressure forces and the dominant viscous forces only, read

μ ∂²u/∂y² = dp/dx,  (1)

where p(x) is the pressure and μ = const is the gas viscosity, while the other notation is clearly seen in Fig. 1. Equation (1) should be solved with the following boundary conditions:

u(x, 0) = U + u₀(x),  u(x, h(x)) = u₁(x),  (2)

where u₀(x) and u₁(x) are arbitrary slip velocities. The solution of equation (1) with boundary conditions (2) is easily found to be

u = (1/(2μ)) (dp/dx) (y² − y h) + (u₁ − u₀ − U) (y/h) + U + u₀.  (3)

This solution is further used in the continuity equation that expresses the constancy of the mass flow rate Ṁ through the bearing (per unit width):

Ṁ = ∫₀^{h(x)} ρ u dy = const,  (4)

where ρ is the variable gas density. Inserting (3) into (4), and utilizing the equation of state for an ideal gas, p = ρRT, where R is the gas constant and T is the (presumably constant) temperature, one gets an equation governing the pressure distribution in the bearing:

Ṁ = (p/(RT)) [ −(h³/(12μ)) (dp/dx) + (h/2)(U + u₀ + u₁) ] = const,  (5)

in which the independent variable x may further be replaced by h(x) (s. Fig. 1). For the solution of this equation two boundary conditions are available (s. Fig. 1): p = p_i and p = p_e, where the indices i and e refer to the inlet and exit bearing cross-sections, respectively. Any type of integration of this equation (analytical or numerical) yields as a result not only the already mentioned pressure distribution, but also the mass flow rate Ṁ, which is not known beforehand. In what follows it is instructive to write (5) and the accompanying boundary conditions in non-dimensional form. We introduce the non-dimensional quantities in the following way (s. Fig. 1):
P = p/p_e, H = h/h_e, X = x/L, U₀ = u₀/U, U₁ = u₁/U, and the bearing number Λ = 6μUL/(p_e h_e²), where U is the velocity of the infinite plate positioned at y = 0. With these, equation (5) with its boundary conditions becomes equation (6) with

P = P_i at X = 0 and P = 1 at X = 1.  (7)

Before proceeding further we evaluate the sum of the slip velocities U₀ + U₁ for the case of the second-order boundary conditions. For the problem considered herein they take the standard second-order form [2]

u_s = A₁ λ (∂u/∂y)|_wall − A₂ λ² (∂²u/∂y²)|_wall,  (8)

evaluated at the respective wall, where, in addition to the notation already used, λ is the molecular mean free path, and A₁ and A₂ are constant corrective factors (the first- and second-order slip coefficients). For an isothermal flow λ is simply inversely proportional to the pressure and thus depends on x only [9]. The first- and second-order slip coefficients, A₁ and A₂, are defined differently by several authors in the literature: Schamberg [10], Deissler [11], and Hsia and Domoto [12] each predicted theoretical values for them. Reviews of the second-order velocity slip boundary conditions are presented by Barber and Emerson [2] and Lockerby et al. [13], and according to them there is no consensus concerning the values of A₁ and A₂.
Careful evaluation of the slip velocities (8) using the general velocity field (3) now yields expression (9) for the desired sum of the non-dimensional slip velocities, in which Kn = Kn_e/(P H) is the local value of the Knudsen number (10); equation (6) then finally attains the form (11).

ANALYTICALLY EXACT SOLUTION OF THE SLIP-CORRECTED REYNOLDS LUBRICATION EQUATION
The suitable transformation of the dependent variable P that enables an analytically exact solution of equation (11) with the boundary conditions (7) reads

Π = Kn/Kn_e = 1/(P H).  (12)

A simple physical meaning can thus be given to the new dependent variable Π(H): it follows from (10) that Π = Kn/Kn_e, i.e. it is the ratio between the local value of the Knudsen number and its exit value. Equation (11) is now transformed into equation (13), where m = Ṁ/Ṁ_C is the non-dimensional mass flow rate (Ṁ_C being the Couette mass flow rate). At the same time the boundary conditions (7) become

Π = 1/(P_i H_i) at the inlet and Π = 1 at the outlet.  (15)

We tested the form of equation (13) against some other models of slip velocities existing in the literature and widely used [2] and obtained the same form of the equation from model to model. However, in all cases tested F(0) = 0 and, as expected, equations (11) and (13) reduce to their well-known forms for a no-slip compressible flow through a bearing. Equation (13) can be written in a form in which the variables H and Π are separated (16), and its solution can be obtained by quadratures. For example, applying the second of boundary conditions (15) we get expression (17), where t is a dummy variable. Application of the first of boundary conditions (15) in (17) leads to (18). The expression (18) integrates in closed form, yielding (20) for the case C₁² − 4C₂ > 0, with the parameter m then found by putting the first of boundary conditions (15) into eq. (20), which gives (21). For the case C₁² − 4C₂ < 0 the integration yields (22), and m is now found by putting the first of boundary conditions (15) into eq. (22), which gives (23).
The pressure distribution, obtained through eqs. (20) and (21) or through eqs. (22) and (23), is defined by the bearing number Λ, the reference Knudsen number Kn_e, and the inlet-to-exit microbearing height ratio H_i. First, the parameter m is determined from eq. (21) or (23) iteratively, by supposing an initial value of m and taking into account whether C₁² − 4C₂ is positive or negative. Although the variable Π cannot be explicitly expressed from eqs. (20) and (22), the correlation between Π and H is completely defined by them. According to the boundary conditions (15), Π = 1/(P_i H_i) at the bearing inlet and Π = 1 at the bearing outlet. For the Π values in that range, the appropriate values of H are found from eq. (20) or (22), while the coordinate X is determined from the function describing the variation of the channel cross-section.
If the upper boundary in Fig. 1 is in the form of an inclined plate, the gap varies linearly, H(X) = H_i + (1 − H_i) X. Then, for each pair of Π and H, the pressure is recovered as P = 1/(Π H), following (12).
Figure 2 presents pressure distributions in the microbearing computed in this way. The presented results show the reliability of the obtained analytical solution for the slip flow regime (Kn_e = 0.1), as well as for part of the transitional regime (Kn_e = 0.2, Kn_e = 0.5). The second-order boundary condition defined by Schamberg [10] leads to the best fit of the analytical solution with the numerical solution of the Boltzmann equation obtained by Fukui and Kaneko [15] in the slip regime (Kn_e = 0.1), while at the beginning of the transitional flow regime (Kn_e = 0.2) the Deissler [11] boundary condition is the most appropriate. For the higher Knudsen number value (Kn_e = 0.5) the analytical solution obtained with the Hsia and Domoto [12] slip coefficient values is in good agreement with the numerical solution of the Boltzmann equation. Thus, it is confirmed that the analytical solution is valid even for higher Knudsen number values, up to Kn_e = 0.5. For all results presented in Fig. 2, the Beskok et al. [4] boundary condition gives a pronounced deviation from the Fukui and Kaneko [15] results.
In Fig. 2 the analytical solutions corresponding to the Maxwell first-order boundary condition are also depicted. It is obvious that the Schamberg [10], Deissler [11], and Hsia and Domoto [12] second-order boundary conditions provide higher accuracy than the Maxwell [14] first-order boundary condition. The results also coincide with those of [8]: despite the fact that the final analytical solutions obtained by transformations of the dependent and of the independent variable do not have the same form, both solutions give the same result. Besides the results presented in Fig. 2, a more extensive validation of this analytical solution was already presented in [8] by comparison of the analytically obtained results with available numerical results. Namely, the slip flow results for a wide range of Knudsen numbers and for continuum flow conditions, provided by the general analytical solution from this paper and from [8], are in excellent agreement with the Fukui and Kaneko [15] numerical solution of the Boltzmann equation.

Figure 1: Microbearing geometry.
Figure 2: Pressure distribution in the microbearing obtained with the presented analytical solution, for different slip coefficients in the boundary conditions, and with the Boltzmann equation solution [15], for Λ = 1, H_i = 2 and: a) Kn_e = 0.1, b) Kn_e = 0.2, c) Kn_e = 0.5.
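For readers who want a quick numerical comparison point for such pressure distributions, a sketch follows that integrates the classical first-order (Burgdörfer-type) slip-corrected Reynolds equation, d/dX[P H³ (1 + 6 Kn_e/(P H)) dP/dX − Λ P H] = 0, for the linearly converging gap H(X) = H_i + (1 − H_i) X. Note the hedge: this is the first-order special case, not the general second-order form solved analytically above, and the parameter values simply mirror the conditions of Fig. 2.

```python
import numpy as np
from scipy.integrate import solve_bvp

Lam, Kn_e, H_i, P_i = 1.0, 0.1, 2.0, 1.0   # bearing number, exit Kn, inlet height/pressure

def H(X):                                   # linearly converging gap, H(0)=H_i, H(1)=1
    return H_i + (1.0 - H_i) * X

def rhs(X, y):
    # y[0] = P; y[1] = Q = P H^3 (1 + 6 Kn) dP/dX - Lam P H, the constant flux
    P = y[0]
    h = H(X)
    Kn = Kn_e / (P * h)                     # local Knudsen number, eq. (10)
    dPdX = (y[1] + Lam * P * h) / (P * h**3 * (1.0 + 6.0 * Kn))
    return np.vstack([dPdX, np.zeros_like(X)])

def bc(ya, yb):
    return np.array([ya[0] - P_i, yb[0] - 1.0])   # P(0) = P_i, P(1) = 1

X = np.linspace(0.0, 1.0, 101)
y0 = np.vstack([np.ones_like(X), np.zeros_like(X)])
sol = solve_bvp(rhs, bc, X, y0)
print("max pressure:", sol.y[0].max())      # compare against the analytical P(X)
```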
CONCLUSION
This paper presents a new approach to the derivation of analytical solutions of the compressible slip-corrected Reynolds lubrication equation and of the classical compressible Reynolds lubrication equation for continuum flow conditions. The first analytical solution of the isothermal, steady, compressible, quasi-unidirectional lubrication problem was reported in the open literature in [8]; it was achieved by a proper change of the independent variable. The new approach presented here is based on a suitable transformation of the dependent variable (pressure). The obtained differential equation, written in the form in which the variables H and Π are separated (eq. (16)), can be transformed into the differential equation with separated variables presented by equation (2.14) in [8].
Volume Variation Process of High-Density Polyethylene During Tensile and Creep Tests

Samples of high-density polyethylene have been subjected to tensile tests and creep experiments by means of a video-controlled testing system (VidéoTraction©). The evolution of the specific volume in this semi-crystalline polymer is determined versus true strain. In the elastic stage, we measure a hydrostatic expansion; then, in the plastic stage, we observe a competition between a compaction effect and a dilatation phenomenon. Although compaction is probably overestimated in the present testing technique, it represents a pertinent mechanism that is ascribed to the orientation of the amorphous chains during stretching. This phenomenon is characterized by X-ray diffraction measurements that show a reduction of the average distance between amorphous chains. The dilatation process is explained by the diminution of crystallinity and by the formation, growth, and coalescence of crazes inside and between spherulites. Electron microscopy reveals these defects. The competition between compaction and dilatation, controlled by the mobility of the amorphous phase, depends on temperature and time.

INTRODUCTION
Deformation damage, characterized by volume changes induced by plasticity, has always been the object of an active debate within the community of polymer researchers. Not only is the identification of voiding mechanisms at various scales of major scientific interest, but the quantitative modeling of the cavitation rate is also technologically important for the optimization of structural polymers.
Several techniques were developed to measure the volume strain of polymeric samples in real time during mechanical tests, with significant improvements in terms of flexibility and precision. Pioneering systems were essentially based on fluid dilatometers [1,3], but their performance was rather limited because they only gave access to global volume changes and were very sensitive to temperature fluctuations. Another family of systems utilized multiaxially disposed mechanical extensometers, but:
- they were complicated to manipulate;
- they caused unwanted indentation at the surface of soft polymers;
- they were generally limited to low temperatures [4,[8][9][10]13].
Although the above techniques are still applied by some researchers, the computerized video techniques developed during the last decade brought a decisive contribution to the assessment of volume strain in polymers. Originally, they were limited to the pre-necking deformation stage [12, 14-17]. However, the novel video-controlled testing system that was recently developed in this laboratory has several interesting features [18]:
- it provides in real time the true stress/strain behavior locally, within a representative volume element (RVE) situated at the center of the neck;
- it gives access to the volume strain in the same RVE;
- it allows the user to regulate dynamically either the true strain rate (for tensile tests) or the true stress (for creep tests).
Characterization of microstructural mechanisms has also been the object of many papers [5-9]. While the elastic volume strain was simply modeled on the basis of Poisson's ratio, ν [10,11], it was shown that the important dilatation of polymers under stretching is essentially due to the formation of voids in the amorphous layers between crystalline lamellae [12].

The objective of this study is to apply the video-controlled system to high-density polyethylene (HDPE), considered as a model semi-crystalline polymer, and to correlate the volume variation with microstructural mechanisms. An important feature of the method is that it remains applicable after necking has occurred, opening the way to the large-strain range. Tensile and creep tests are performed under different experimental conditions. Microstructural evolution is analyzed after unloading and recovery at different strains during tensile tests. Characterization includes scanning electron microscopy and wide-angle X-ray scattering. Experimental information is interpreted in a deformation scheme taking into account the concurrence of shear and cavitation processes in plastic deformation.

Material
The HDPE investigated in this work was manufactured by DuPont (Canada) under the reference Sclair 2907. Its number- and weight-average molecular weights, determined by previous authors [19], are equal to Mn = 16,800 g/mol and Mw = 93,600 g/mol, respectively. Cylinders 110 mm in diameter were extruded by the Plastifab Company of Montréal (Canada). Analysis of the material by differential scanning calorimetry (DSC) gives access to the index of crystallinity, Xcw = 77 wt%, to the melting temperature, Tm = 136°C, and to the average crystallite thickness, L = 12.6 nm. Dynamic mechanical analysis (DMA) at a frequency of 1 Hz indicates the glass transition temperature, Tg = -113.5°C. Hydrostatic weighing provides the density, ρ = 0.962 g/cm³. Microscopic observation shows that the material is characterized by a spherulitic morphology with regularly twisted lamellae.

Mechanical Tests
Plates of thickness 7 mm were sawn out of the cylinders parallel to the axis. Samples were machined from these plates with overall dimensions of 90 × 16 × 6 mm³. A geometric defect was milled in the center of the above specimens with a large radius of curvature, until the smallest cross-section was equal to 6.8 × 6 mm².
Uniaxial tensile tests and creep tests are carried out on a universal traction machine MTS 810. With the tensile samples described above, the deformation localizes in the central region. With the video system utilized here (VidéoTraction©, Apollor SA, Lunéville, France), the mechanical variables are determined by analyzing the displacements of seven dot markers printed on the main face of the specimens with a proprietary fluorescent paint, prior to deformation. Five markers A, B, C, D, E are aligned along the tensile axis x3 and three markers F, C, G are aligned along the transversal axis x1 (Fig. 1). The position of each dot is assessed through the coordinates (x1, x3) of its center of gravity. The representative volume element investigated is a virtual slice of material, with a thickness of 0.2 mm, situated at the smallest cross-section. The following variables are simultaneously determined in this RVE while the sample is stretched: axial true strain, transverse true strain, axial true strain rate and axial true stress. From the displacements of the dots in the longitudinal group, the system calculates four axial Hencky strains (one per pair of adjacent dots), and from the displacements of the dots in the transverse group it calculates two transverse Hencky strains. The axial true strain in the RVE, ε33, is obtained from a polynomial interpolation of the four axial strains, and the transverse true strain in the same RVE is the average value of the two transverse strains. The volume strain in the RVE is the sum of the principal strains: εv = ε11 + ε22 + ε33. Here we consider that the two transverse strains along the specimen width, ε11, and thickness, ε22, are equal, under the assumption of transversal symmetry. The appropriate stress definition associated with the Hencky strain is the Cauchy stress (also called "axial true stress"). It takes into account the reduction of the cross-sectional area, S < So, undergone by the specimen while it is stretched: σ33 = F/So × exp(-2 ε11). More details on the experimental device and on the determination of the strains have been described elsewhere [18,20].

Tensile tests are run at different temperatures for a given strain rate of 10^-3 s^-1, and at different strain rates for a given temperature of 40°C. Creep tests are run at different stresses for a given temperature of 40°C. Volume strain is characterized during unloading and recovery after tensile tests performed at ambient temperature under a strain rate of 10^-3 s^-1.

Microstructural Characterization
The microstructure is observed by scanning electron microscopy (SEM) for different residual strains in the RVE, on cryofractured surfaces obtained by fragile rupture of notched samples immersed in liquid nitrogen for 10 minutes. The microscope is a Philips FEG XL 30. Before observation, the samples are coated with a gold layer. The amorphous phase is characterized by wide-angle X-ray scattering (WAXS) by means of a 2D diffraction system (Inel, France) equipped with a copper anode (λKα1 = 0.154 nm). The point of incidence of the X-ray beam on the specimen is adjusted precisely at the center of the neck by means of a laser system. We analyze the diffracted intensity distribution in the amorphous halo, I(φ, 2θ). For each inclination angle, φ, the diffraction angle at maximum intensity, 2θmax, gives access to an equivalent Bragg's distance, d(φ) = λ/(2 sin θmax) [21,22]. The average Bragg's distance, 〈d〉, is obtained by integration of d(φ) over the whole range of inclination angles. This distance is presumably representative of the local packing of the macromolecular chains in the amorphous phase.
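To make the strain bookkeeping described under Mechanical Tests concrete, here is a minimal numerical sketch of a VidéoTraction-style reduction. All marker positions, the load, and the polynomial degree below are hypothetical illustrations, not the actual implementation:

```python
import numpy as np

# Hypothetical positions (mm) of the five axial dots A..E along x3,
# in the reference and in a deformed state.
x3_ref = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
x3_def = np.array([0.0, 2.6, 5.4, 8.6, 11.8])

# Four local axial Hencky strains, one per pair of adjacent dots.
eps_axial = np.log(np.diff(x3_def) / np.diff(x3_ref))

# Axial true strain in the RVE: polynomial interpolation of the four
# local strains, evaluated at the neck (taken here as the central dot C).
centers = 0.5 * (x3_ref[:-1] + x3_ref[1:])
eps33 = np.polyval(np.polyfit(centers, eps_axial, deg=2), x3_ref[2])

# Transverse true strain: average of the two transverse Hencky strains
# (dots F, C, G along x1); hypothetical transverse dot spacings in mm.
w_ref = np.array([3.0, 3.0])
w_def = np.array([2.55, 2.52])
eps11 = np.mean(np.log(w_def / w_ref))

# Volume strain under transverse isotropy (eps22 = eps11), and the
# Cauchy stress sigma33 = F/S0 * exp(-2*eps11).
eps_v = 2.0 * eps11 + eps33
F, S0 = 800.0, 6.8 * 6.0  # load (N), initial section (mm^2); hypothetical
sigma33 = F / S0 * np.exp(-2.0 * eps11)

print(f"eps33 = {eps33:.3f}, eps_v = {eps_v:.3f}, sigma33 = {sigma33:.1f} MPa")
```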
Material Behavior During Monotonous Loadings
The curves in Figure 2 show the influence of strain rate and temperature on the response of HDPE to large strains under uniaxial tension. When the temperature increases or when the strain rate decreases, one notes a significant decrease of the yield stress σy. The recorded values of the yield stress, σy, lie between 23.9 MPa (at T = 40°C and 10^-3 s^-1) and 29.8 MPa (at 23°C and 10^-3 s^-1). During the plastic stage, a continuous strain hardening is observed, except at 5×10^-3 s^-1, where the curve shows strain softening followed by a weak strain hardening. The corresponding volume strain vs. true strain curves are displayed in Figure 2. As shown before [10,11], the dilatation in the elastic stage is modelled in terms of hydrostatic stress and bulk modulus (εv = σ33/3K), or alternatively expressed from Poisson's ratio as εv = (1 - 2ν) ε33. Consequently, from the curve in Figure 2, one finds ν = 0.39.
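For reference, the elastic-stage relation invoked above can be written out explicitly (standard isotropic small-strain elasticity; the displayed equation did not survive extraction, so this is a reconstruction):

```latex
\varepsilon_v = \varepsilon_{11} + \varepsilon_{22} + \varepsilon_{33}
             = (1 - 2\nu)\,\varepsilon_{33},
\qquad \varepsilon_{11} = \varepsilon_{22} = -\nu\,\varepsilon_{33},
```

so that the measured ν = 0.39 corresponds to an elastic slope dεv/dε33 = 1 - 2ν ≈ 0.22.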
The evolution of the volume strain at larger strains differs with the experimental conditions. Monotonous dilatation is observed for T = 23°C / 10^-3 s^-1 and T = 40°C / 5×10^-3 s^-1, the volume strain reaching εv = 0.59 and 0.32, respectively, at ε33 = 1.50. By contrast, for T = 40°C / 10^-3 s^-1, the volume strain evolution shows compaction and dilatation successively. The extreme compaction, εv = -0.026, is measured at ε33 = 0.33. Ultimately, the dilatation reaches εv = 0.27 at ε33 = 1.5.

Material Behavior During Unloading and Recovery Stages
In Figure 3, we display the recovery of ε33 and εv for a specimen subjected to a tensile test carried out at T = 23°C and 10^-3 s^-1, subsequently unloaded from ε33 = 1.0, and eventually left to recover at zero stress for a period of 3 hours. It is noted that the axial strain and volume strain decrease rapidly during the unloading sequence (b to c) and more and more slowly during the recovery sequence (c to d). The recovery process under zero stress is due to the progressive spring-back of the amorphous chains, which are mobile at room temperature. It needs to be taken into consideration for the evaluation of the actual strain state in stretched samples whose microstructure is characterized by X-ray diffraction (WAXS) and scanning electron microscopy (SEM). Although previous authors showed that extensive shrinkage was obtained after several months at room temperature [23] or several hours at temperatures higher than 120°C [24], the recovery saturates rapidly in our experiments. In particular, we verified that ε33 and εv did not change significantly any more in the period between 3 hours and 24 hours after unloading the samples. Since the WAXS and SEM characterization of stretched samples was performed within a delay after unloading that never exceeded 24 hours, this property ensures that the features revealed by these samples correspond to the "residual strains" (ε33r and εvr) systematically determined after the nominal delay of 3 hours. The complex curve in Figure 4 shows that, for increasing strains followed by unloading and relaxation, the relative variations of volume strain and true strain between the loaded state and the relaxed state decrease gradually. For example, at ε33 = 0.2, the axial and volume strains decrease by 128% and 74%, respectively, after maximum recovery, while corresponding values of 41% and 12% are recorded at ε33 = 1.5. After recovery, the εvr vs. ε33r envelope (black dots in Fig. 4) does not show the elastic dilatation any more, but the compaction stage is more pronounced than under load, the residual volume strain attaining -0.02 for ε33r = 0.14. At large residual axial strains, a large residual dilatation is measured (εvr = 0.18 for ε33r = 1.3).

(Figure 4 caption: Volume strain vs. true strain of HDPE under growing tensile loads followed by unloading and recovery stages.)

Creep Tests
The creep experiments presented in Figure 5 were performed at 40°C under constant true stresses corresponding to 40%, 50% and 60% of the yield stress of the HDPE when tensile tested under a strain rate of 5×10^-3 s^-1. After the instantaneous elastic deformation recorded as the stress is rapidly applied, the evolution of true strain versus creep time shows two successive stages:
- primary creep, with a decreasing axial strain rate;
- secondary creep, with a constant strain rate.
Under higher stress, one notes a faster increase of the strain rate in primary creep and an earlier onset of secondary creep. As for the εv vs. time curves, they also show two stages after the elastic dilatation:
- plastic compaction at moderate strains;
- ultimate dilatation.
One observes that the compaction process increases as the applied stress is increased from 11.6 MPa to 14.5 MPa, and then it decreases from 14.5 MPa to 17.4 MPa. The extreme compaction, εv = -0.06, is recorded for σ33 = 14.5 MPa at t = 10 000 s. A rapid dilatation is observed under 17.4 MPa after a short compaction stage, εv reaching 0.09 after 20 000 s.

SEM Investigation
The evolution of the HDPE microstructure with residual strain is shown in Figure 6. The non-deformed state is characterized by a set of micro-cracks leading to a blocky structure. Micro-cracks are constituted by ridges corresponding to microdomains deformed during the propagation of the micro-cracks [25]. For a residual strain ε33r = 0.29, one notes the occurrence of cavitation. The biggest cavities correspond to crazes, which are known to grow through fibrillation of the polymer. When the residual strain increases, the cavitation phenomena are amplified. Evidence of craze coalescence is given by the observation of superimposed cavities for ε33r = 0.48. The residual volume strain εvr = 0.18 is characterized by the orientation of cavities toward the tensile direction.
X-ray Diffraction
The diffraction pattern I(φ, 2θ) and various scans I(2θ) corresponding to a residual strain ε33r = 0.48 are represented in Figure 7. The non-homogeneity of the diffraction rings denotes the strain-induced orientation of the crystallized chains. This orientation phenomenon is also shown in the different scans I(2θ), where the peaks corresponding to the (001)m, (110)o and (200)o planes (monoclinic and orthorhombic systems) and the amorphous halo are identified. From φ = 0 to 90 degrees, a reinforcement of the intensity of the crystalline peaks and of the amorphous halo is observed. Consequently, these planes and the amorphous diffraction entities rotate towards the tensile direction. The diffraction patterns provide two interesting pieces of information. Firstly, we determine the weight-based degree of crystallinity, Xcw, which is classically obtained from the relative area of the amorphous bump after integration over the azimuth angles. The result of this computation is displayed in Figure 8. It is noted that the degree of crystallinity decreases dramatically, from 73 to 53 wt%. Secondly, the diffraction analysis provides information on the local order in the amorphous phase through the variation of the diffraction angle, 2θa, at the top of the amorphous bump. Unexpectedly, it is noted that 2θa is systematically larger in the deformed PE than in the original material, and that it depends on the azimuth angle. It is nearly equal to the value of the non-deformed material for φ = 0 and maximum for φ = 90 degrees. This result shows that the Bragg's distance between chains oriented parallel to the stretching direction is d = 0.415 nm. The evolution of the average intermolecular distance 〈d〉 vs. residual strain is shown in Figure 8. In the non-deformed state, 〈d〉 = 0.428 nm. A rapid decrease of 〈d〉 is observed until the residual strain reaches 0.2. Subsequently, the decrease of 〈d〉 is slower. When ε33r = 1.30, 〈d〉 = 0.417 nm, which indicates a 2.6% decrease of 〈d〉 compared to the non-deformed state.
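As a quick numerical check of the Bragg-distance reduction used here, the following sketch converts halo positions 2θmax into equivalent distances d = λ/(2 sin θmax); the two input angles are hypothetical, chosen to reproduce the 〈d〉 values quoted above:

```python
import numpy as np

LAMBDA = 0.154  # Cu K-alpha1 wavelength (nm)

def bragg_distance(two_theta_deg: float) -> float:
    """Equivalent Bragg distance from the halo maximum: d = lambda / (2 sin theta)."""
    theta = np.radians(two_theta_deg) / 2.0
    return LAMBDA / (2.0 * np.sin(theta))

d0 = bragg_distance(20.76)  # ~0.428 nm, non-deformed state (angle is hypothetical)
d1 = bragg_distance(21.32)  # ~0.417 nm, eps_33r = 1.30 (angle is hypothetical)
print(f"<d>_0 = {d0:.3f} nm, <d>_1 = {d1:.3f} nm, decrease = {100*(1 - d1/d0):.1f} %")
```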
DISCUSSION
As shown above, the volume strain experienced by HDPE is due to the combination of elastic dilatation, plastic compaction and damage dilatation. The first phenomenon, caused by the effect of hydrostatic stress on the van der Waals bonded material, is out of the scope of this work. In the three sections that follow, we will discuss more particularly the pertinence of the compaction effect. Finally, we will briefly analyze the damage dilatation mechanisms.

Bibliographic Data on Compaction Amplitude
The compaction phenomenon is much less documented than the other processes and is much more controversial as to its absolute amplitude and underlying mechanisms. Powers and Caddell [3] assessed the volume variation of PE with mechanical extensometers during tensile tests at ambient temperature under a stretching rate equal to 0.05 cm/min. They recorded a small but significant volume decrease (εv = -0.002 for ε33 = 0.09). Unfortunately, they were limited to the pre-necking stage. Tang et al. [7] measured volume strain in PP with an axial optical extensometer and two transverse mechanical extensometers during tensile tests. They carried out the tests at ambient temperature under different strain rates. At the highest strain rates, they invariably observed a dilatation. When the strain rate decreases, they noted a small compaction effect before the ultimate dilatation. The maximum compaction effect obtained was εv = -0.01 for ε33 = 0.09 under 8.5 × 10^-3 s^-1. Gaucher-Miri et al. [15] studied volume variation in a low-density ethylene/butene copolymer with a multiaxial video extensometer during tensile tests performed at 20°C for a stretching rate of 0.5 mm/min. As in our study, they noted a net volume compaction at the beginning of deformation, followed by a volume strain increase at higher strains. The minimum volume strain was εv = -0.03 for ε33 = 0.6. Positive volume strain was found for ε33 > 2.0. Negative volume variations were also observed during creep tests on semi-crystalline polymers [4,13,26]. Cherry and Hin [13] measured the volume variation of PE using three mechanical extensometers under creep at ambient temperature. After the elastic volume expansion, these authors systematically recorded a compaction process that reached εv = -0.017 for t = 25 000 s under σ33 = 17.1 MPa. As in our results (Fig. 5), they noted that compaction increases with applied stress.

Experimental Errors on Compaction Measurement
Careful analysis of the VidéoTraction© method utilized in this work reveals three critical points:
- interpolation of the axial strain, ε33, from the longitudinal dot positions;
- inhomogeneity of the transverse strain across the RVE;
- discrepancy between ε11 and ε22.
Concerning the first point, we found that the polynomial fit employed here underestimates the true axial strain, ε33, by about 2.5% for acute neck profiles. As for the second point, it is evident that surface strains do not correspond rigorously to the average transverse strains across the specimen section. From the analysis of samples with different distances between transverse dots, we estimate that ε11 is underestimated (too negative) by about 6% in extreme configurations. The last point concerns the assumption of transverse isotropy adopted in this work. In reality, precise transverse measurements at the neck of stretched samples show that the reduction in thickness is slightly higher than the reduction in width. Quantitatively, we found in a typical case that ε22 was smaller than ε11 by about 6%. Since the volume strain is the sum of the strains in the principal directions, εv = ε11 + ε22 + ε33, and considering the respective errors committed on each component, one finds that εv is systematically underestimated. In other terms, the actual compaction is certainly smaller than that measured via the present VidéoTraction© procedure. At this stage of the investigation, we estimate that the error on the determination of εv may attain 100% in extreme cases, so that the actual value lies between zero and the experimental value. Consequently, although the transient compaction for PE measured in the intermediate strain range is probably overestimated, the microstructural investigation confirms that it is not merely an experimental artifact. Potential methods to improve the volume strain determination are in progress in this laboratory.
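A rough sign check of this error budget, with hypothetical measured strains and the percentage biases estimated above (the exact numbers and the way the biases are applied are illustrative only):

```python
# Measured values (hypothetical) and the biases estimated in the text.
eps33_m, eps11_m = 0.33, -0.18
eps_v_measured = 2 * eps11_m + eps33_m    # assumes eps22 == eps11

eps33 = eps33_m * 1.025                   # axial strain underestimated by ~2.5%
eps11 = eps11_m * 0.94                    # eps11 too negative by ~6%
eps22 = eps11 * 1.06                      # thickness reduction ~6% larger than width
eps_v_actual = eps11 + eps22 + eps33

print(f"measured eps_v = {eps_v_measured:+.3f}, corrected eps_v = {eps_v_actual:+.3f}")
# -> roughly -0.030 vs -0.010: the apparent compaction shrinks once the
#    biases are removed, consistent with the discussion above.
```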
Microstructural Compaction Mechanisms
Although some authors (e.g., Powers and Caddell [3]) simply ignore the causes of the compaction they measured, many of them tried to relate this phenomenon to specific microstructural mechanisms. Strain-induced crystallization was the first process invoked to explain compaction, based on the higher density of the crystalline phase. In the case of PP, Tang et al. [7] explained their results by the above argument, without providing any experimental evidence for this claim. Overall, except for minor strain-induced crystallization effects reported in some papers (e.g., Wade Adams et al. [27] for PE), most authors (including us) have demonstrated that stretching semi-crystalline polymers globally leads to a decrease of crystallinity [28,29]. Active fragmentation of the crystallites occurs while the fibrillar microstructure is progressively formed, and crystallized chains are readily transferred into amorphous clusters during this destruction process. Consequently, interpreting compaction in terms of strain-induced crystallization is not relevant. Another approach, based on the concept of amorphous phase orientation, should rather be considered. From their results on LDPE-co-butene, Gaucher-Miri et al. [15] invoked strain-induced reorganization of the amorphous phase in sheared interlamellar zones. This process is confirmed by dynamic mechanical analysis, which reveals an important immobilization of amorphous chains in specimens stretched up to the extension for which compaction is at its maximum. This interpretation is supported by Bartczak et al. [22], who consider that chain alignment toward the stretching direction in the amorphous phase tends to form a close-packed array with a pseudo-hexagonal symmetry. It is thus probable that the decrease of interchain distances in the amorphous phase (Fig. 8) is the key process that controls the transient plastic compaction observed in PE upon stretching.

Dilatation Mechanisms
Concerning the ultimate dilatation phenomenon, it has been documented in detail by many authors [10,12,16], who ascribed it both to the destruction of crystalline order and to the development of voids. Since we found that the progressive loss of crystallinity on stretching is of the order of 30% (see Fig. 8), and considering that the density of the amorphous phase is about 15% less than that of the crystalline phase, a positive volume strain of about 0.04 is expected at the end of the test. Although significant, this value is small with respect to the total dilatation undergone by the material during the plastic stage. Consequently, plasticity-induced dilatation is mainly due to the cavitation process that has been observed in HDPE spherulites [30-35]. Voiding initially develops in equatorial regions, subsequently in diagonal regions and finally in polar regions. Crazes can also appear between spherulites, due to the presence of macromolecular defects in these regions [33]. Cavitation phenomena appear to be less active at higher temperatures and lower strain rates, because chain mobility is enhanced under such conditions [12].
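A back-of-the-envelope version of the crystallinity-loss estimate above (treating the drop from 73 to 53 wt% as the mass fraction transferred to an amorphous phase whose density is ~15% lower; the exact weighting is a simplification):

```latex
\varepsilon_v \;\approx\; \Delta X_c \,\frac{\rho_c - \rho_a}{\rho_a}
\;\approx\; 0.2 \times (0.15\text{--}0.18) \;\approx\; 0.03\text{--}0.04 .
```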
CONCLUSIONS
In high-density polyethylene, plastic compaction and dilatation compete for the control of volume strain during stretching and creep tests. Compaction is recorded as a negative volume variation during a transient stage at moderate extension ratios. It is amplified when the amorphous chains have more mobility and time to accommodate the macroscopic deformation, i.e., at higher temperature and lower strain rate. Large compaction is observed at T = 40°C for a strain rate of 10^-3 s^-1 (εv = -0.026 at ε33 = 0.33). Due to experimental errors, the volume strain in the compaction range is somewhat underestimated, so that further improvements will be necessary to characterize this process with better precision. However, the errors do not contradict the pertinence of the compaction phenomenon, which results from the strain-induced orientation of amorphous chains toward the drawing direction. Dilatation, for its part, is observed at larger strains. Large dilatation is recorded at T = 23°C for a strain rate of 10^-3 s^-1 (εv = 0.59 at ε33 = 1.50). It is due partly to progressive crystallite destruction and partly to cavitation. The same competition between compaction and dilatation is observed during creep tests under constant true stresses. Quantitative modeling of the volume strain effect for various deformation paths and histories is now in progress.

(Figure 1 caption: Determination of true axial strain in the representative volume element.)
(Figure 2 caption: Mechanical behavior of HDPE under tensile testing.)
(Figure 3 caption: True stress-time and volume strain-time curves of HDPE during a monotonous load until the strain ε33 = 1.0 (a-b), unloading stage (b-c) and 3-hour recovery process (c-d).)
5,671.4
2006-11-01T00:00:00.000
[ "Materials Science" ]
The Reality Code: Interpreting Aggregate Larp Rules as Code that Runs on Humans

Popular abstract: Aggregate larp rules are a type of code that runs on humans. Code can be thought of as a linguistic form that is both declarative and imperative; it is both truth and command (Buswell 2009). In aggregate larp, elements of the game's diegesis are rendered codic, or playable, allowing players a degree of autonomy from game staff. Through the methodologies of Critical Code Studies (Marino 2006)-the reading of code (code as text) and the annotation of code (code as manuscript)-the interpretation of larp rules as "code that runs on humans" takes form, allowing us to read game encounters as programs, players and staff as programmers, rulebooks as programming languages, and rule structures as platforms. In larp code, a DBMS-style relational model lends the code depth and specificity. Aggregate larp rules descend from tabletop RPGs, which emerged in tandem with the workplace proliferation of DBMS in the 1970s. With larp code and computer code, the social practice of standardization plays a role in shaping the code. With both types of code we also see the emergence of proprietary code. Social apparatuses (Althusser 1970) ensure that larp code maintains its integrity as truth-command. The repeated reinforcement of social apparatuses leads players to experience a process of rules reification, leading the larp code eventually to take on a type of psychological reality. This phenomenon may have a neurological origin. The study of larp code provides a framework to approach "real world" reified power structures such as "gender," "race," and "capital."

In autumn of 2003, I traveled eighty miles through the evergreen forests of Western Washington to a summer camp that had been overtaken for the weekend by dozens of people who called themselves live-action role-players, or "larpers." Specifically, this was the Seattle Chapter of the New England Role Playing Organization (NERO), now known as Alliance Larp. It was a sight to behold: the rubber elf ears, the "magic circle" of Christmas lights, the cafeteria they called "the tavern" where players lingered between battles with duct-tape-wrapped tubes they called "swords." Needless to say, I was a bit confused by the aesthetics. Since 1996, I had taken part in gatherings such as cosplay, Renaissance Faires, SCA events, and historical reenactment: spaces in which people used lavish costumes and settings to help us imagine ourselves into the worlds we had learned to long for while watching television and playing video games. Mark Duffett theorizes that media fans rely upon a shared inner territory of emotional certainty, or "knowing field," to shape the phenomenology of participation within our fannish communities (Duffett 2013). I had grown accustomed to fan communities that used aesthetics to evoke our shared "knowing field." At this larp event, however, the phenomenology of participation was not centered around simulating the clothing or mannerisms of a time period or media genre. These larpers were using a very different methodology to approach the ideological. They had created an augmented sociality, a space in which people shouted commands at each other, and, if executed properly, those commands were obeyed unwaveringly as if they were part of the universe's laws. One larper might pelt another with a beanbag while shouting "I call forth a Dragon's Breath!", to which the victim would respond by flopping onto the forest floor, to which the victim's friend might react by tapping her on the shoulder with a beanbag and saying "I call upon the Earth to Cure Wounds," to which the victim jumps up and rejoins the fray. To facilitate interactions like this, players must memorize a lengthy book of rules. The rules provide the framework to allow a myriad of "un-real" or undesirable activities to become a fluid part of the game's sociality without people having to actually enact them-activities like casting love spells, being maimed, and forging magic weapons. The rules provide a logistical model of command and consequence, allowing larpers to swiftly resolve the occurrence of "un-real" events without any debate over "what just happened," and to do so with relative autonomy from the game's staff. As I delved more deeply into the rules, first as a player then as a staff member, I came to understand that the rules were something far more complex than they appeared: that aggregate larp rules are a type of code that runs on humans.
Code is a linguistic form that is both declarative and imperative; it is simultaneously truth and command (Buswell 2009). It is a specific form of language in which the declaration of a statement simultaneously makes it true. The statement "I do" during a marriage ceremony may be thought of as a type of code. Codic languages, such as larp code and computer languages, are quite rare and emerge only in situations with a specific type of captive audience. Many varieties of larp have emerged globally in the last three decades, falling under loose categories such as campaign larp, freeform larp, secrets and powers larp, and pervasive larp. The one consistent thing about this mode of leisure labor is that players interact with and within a story, and strive to physically enact as much of that story as is feasible/desirable. The story in a larp-which is to say, the series of events happening in the imaginary reality that players interact with and co-create as they play-can be called the game's diegesis (Montola 2013). In aggregate larps such as Alliance, elements of the diegesis have been rendered codic, or playable. In this type of larp, players use a diegetic language-the signs of which include beanbags, specific phrases, foam-covered tubes, codic slips of paper, and smears of face paint-to simultaneously declare the game's diegesis while exacting their will upon it. The diegesis includes all things that are said to be "happening in the game world," such as a character blasting someone with a glowing ball of ice magic, or someone ingesting a love potion and becoming twitterpated with the next character they see. The diegesis does not include the things used to represent those things: the beanbag and phrase "30 elemental ice," or the slip of paper representing the Love Potion code.
Many types of larp do not have a codic system to allow players to deploy elements of the game's diegesis, and thus rely on an authorial (rather than aggregate) power structure to resolve the game's "un-real" events (Steele 2016b). Rather than relying on code, authorial larps offer models of diegetic deployment rooted in authorship, evoking a play dynamic rooted in performativity and subjection (Butler 1998). While most aggregate larps also contain authorial encounters with game staff (for example, a staff member declaring, "fire is now raining from the sky," which then becomes diegetic fact), all aggregate larps contain a codic rule structure that facilitates a decentralized (Baran 1964) deployment of elements of the game's diegesis. Through the methodologies of Critical Code Studies (Marino 2006, 2016)-the reading of code (code as text) and the annotation of code (code as manuscript)-the interpretation of larp rules as code that runs on humans takes form, allowing us to read game encounters as programs, players and staff as programmers, rulebooks as programming languages, and rule structures as platforms. Interpreting aggregate larp rules as code facilitates cross-larp, cross-platform, and cross-disciplinary study of larp code, which can be thought of both as a means to achieve an interactive collective diegesis and as an art form containing subtle flourishes unique to the code. The human coding that happens in larp is improvisational and takes place in real time, akin to a live-coding musical performance, only rather than shaping sound, these larp coders shape an invisible, mutually agreed-upon reality. A video of this type of coding, then, should not be considered the program, but rather a record of a program being run in the past. To facilitate a closer reading of larp code, I have annotated a troll battle that was coded in the Alliance Larp rule set around 2009 (Steele 2016a), as this is the system and version I am most fluent in and thus most prepared to annotate. At a cursory glance we see that, with the exception of the Spell Shield, the diegetic effect of each of these codic commands is pending: players do not know if their code took effect until a few seconds later, because other code exists that may allow the target to nullify or redirect the code. This creates moments of lag between the codic and diegetic aspects of the game. In my annotation, I have underlined the subsystems of the code. Looking at the first line of code, we see that the seemingly small gesture of uttering the phrase "5 Silver" while hitting someone with a foam weapon evokes at least five separate rule subsystems: (1) how and when to utter the phrase; (2) how and where to swing the weapon; (3) how to ensure that a weapon has been correctly constructed and received approval from game authorities; (4) what it means for something to be a valid target; and (5) a "Body Point" subsystem that is used to determine various factors, such as whether a player can continue to stand up. Both the deployer and the receiver must have precise knowledge of all of these subsystems for this line of code to operate. Other code in this system is not as simple, as can be seen with the multistep process that underlies the deployment of a single Spell Shield.
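To make this reading concrete, here is a toy model of the command-and-consequence loop sketched above. It is an illustration of the paper's framing, not the actual Alliance rule set: the class design, damage values, and the simplified Spell Shield logic are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Character:
    """A receiver of codic commands; body_points is the 'Body Point' subsystem."""
    name: str
    body_points: int
    spell_shield: bool = False  # pending defensive code

    def receive(self, verbal: str, amount: int) -> str:
        # The diegetic effect of incoming code is pending: defensive code
        # (here, a Spell Shield against anything delivered as a Spell)
        # may nullify it before it becomes diegetic fact.
        if self.spell_shield and verbal.endswith("Spell"):
            self.spell_shield = False
            return f'{self.name}: "Spell Shield!" (incoming code nullified)'
        self.body_points -= amount
        if self.body_points <= 0:
            return f"{self.name} flops onto the forest floor."
        return f"{self.name} takes the hit ({self.body_points} body points left)."

troll = Character("Troll", body_points=12, spell_shield=True)
print(troll.receive("10 Flame Spell", 10))  # beanbag throw + verbal
print(troll.receive("5 Silver", 5))         # weapon swing + verbal
print(troll.receive("5 Silver", 5))
print(troll.receive("5 Silver", 5))
```

Each call bundles the verbal and the physical delivery into one "line of code"; everything the real rule set adds (legal swings, weapon construction, valid targets) would be further subsystems checked inside receive().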
Another way to discuss the rule subsets in this human coding language is to say that this type of language relies upon a type of relational meaning, mirroring the practice of relational DBMS, or Database Management Systems, in which attributes are sub-categorized under entities (Codd 1970). Perhaps it is no surprise that larp rules descend from those of tabletop RPGs (a similar type of story-based play in which the diegesis is likewise rendered codic, but tabletops lack the requirement for physical enactment), and that tabletop RPGs developed in conjunction with the workforce proliferation of DBMS in the 1970s. Humans are in a reflexive relationship with our technology; as we interact with our machines, our machines likewise interact with and influence our culture (Hayles 1999). Neurologists might argue that the DBMS-style of relating to larp code only occurs at the superficial level (i.e., this model does not necessarily mirror the biological structure of the brain), but those who dabble in computer science might counter that DBMS likewise don't have anything to do with a computer's hardware: they are rather an abstraction that allows humans to interact with computers on our terms, not theirs. The use of DBMS-style relational systems in our interactions with larp code lets us fluidly organize and clarify what we mean with each deployment of the code, allowing us to set parameters that increase safety and mitigate out-of-game disadvantages. Relational DBMS gave us the Software Revolution, and the RPG revolution as well. As players develop fluency in the game's code, they increase their agency within the game's sociality. The rules can therefore be thought of as a mode of power deployment; they are a means through which to make your will more effective than/upon others. As you develop fluency in the rules, the code gradually loses its novelty and becomes a tool to create experiences. A seasoned larper designates her Character Stats like a banker, which is to say, like a software engineer. While a financial instrument interacts with and lends shape to the market, and whereas a line of code interacts with and lends shape to the behavior of a computer, Character Stats lend shape to the diegesis of the game. A player's Character Stats Sheet might be thought of as an artist's palette, but rather than paint, it contains the specific code they are able to deploy during gameplay. Like the creative constraints of the Oulipo writers, Character Stats provide limits to the code an individual larper can draw from during a game, facilitating the development of strategy and teamwork, while sometimes laying the groundwork for an undesirable type of hierarchy rooted in the accumulation of codic abilities. To the larp designer, genre often plays a major role in influencing the choice over which elements of the game's sociality to render codic, and genre also influences the types of signifiers to be used as signs for those truth-commands. In a Tolkien-esque action genre larp like Alliance, the bulk of the truth-commands have been written to signify the things of epic fantasy battle, while the signs designated to represent those truth-commands include hand-held items and projectiles that are swung or thrown in gestures resembling combat.
Alternatively, in drama genre larps such as Vampire: The Masquerade (Rein*Hagen 2000), the bulk of the rules are dedicated to rendering supernatural and social abilities codic, with the signs that signify them often resembling dramatic theatre gestures, evoking codic deployment that takes on a theatrical tempo. The social practice of code standardization is the process by which one or more individuals dictate how code is to operate. The form this practice takes ultimately affects the code's usability. The Alliance Larp standardization process parallels that of the C programming language in the 1980s: a process spanning many years, during which representatives of that language's community of speakers discussed and re-crafted the code based on their perception of its usage (Buswell 2010), a nebulous process which led to bulky code that can take new speakers years to gain fluency in. As the third generation of computer and larp coders emerged in the mid-2000s, we saw the rise of code designed for simplicity and rapid acquisition. In software during this time, design paradigms like "convention over configuration" guided the creation of Ruby on Rails, a platform that dramatically reduces the number of decisions a developer must make, laying the foundation for Web 2.0 and the rise of social media. In larp, we saw design paradigms like "separate the core rules from setting," which led to the creation of dramatically shorter rulebooks that facilitate faster language acquisition and more fluid experiences deploying the game code, as explained by Devia developer Bryan Gregory in a phone conversation on May 2, 2016. Proprietary code-in which social models facilitate a system of leasing the code for profit-has emerged in both larp code and computer code. For example, in the latest edition of the Alliance Rulebook, we find a passage that resembles the end-user licensing agreements that accompany proprietary software (see Figure 2). Within our economic system, code itself becomes a type of commodity to be leased to others, allowing them to use it to program while it continues to generate profit for those invested in its creation and maintenance. Since they are dealing with inanimate objects, computer programmers have an easy time keeping the truth-commands of their code veridical. Those who manipulate the market depend upon others (hopefully) to create and uphold the state and legal apparatuses that ensure that the codic tools of finance hold their form. When we run code on humans in larp, we must build and reinforce our own social systems to ensure that the code doesn't break down. Larpers have developed a variety of social apparatuses (Althusser 1970) to reproduce the conditions of play. These social apparatuses include Ideological Game Apparatuses (IGAs), which entail the player-to-player positive reinforcement of the rules, and Repressive Game Apparatuses (RGAs), the disciplinary actions that occur out-of-game, often via referees or "Rules Marshals," when a failure to follow the rules has occurred. The IGAs might be thought of as the culture surrounding the rules, both during game play and also when interacting with the code outside of the game, such as a group of larpers hanging out in a coffee shop talking about the modifications they plan to make to their Character Sheets. RGAs include those awkward twenty minutes when the game has been paused for a Rules Marshal to adjudicate a contested bit of code deployment.
The ultimate punishment for breaking the rules is exclusion, either for a period of time or permanently, from the game. When a game's rules are diegetic code, breaking them threatens the integrity of the story and the world of the game. As seasoned larpers become fluent in their game's codic language, they come to understand and interpret the language's signs as story-elements occurring in real time, which is to say, the signs become reified. Reification is a process by which social constructs come to be mistaken for facts of nature (Lukács 1923). The reification of larp code is upheld through the repeated social reinforcement of IGAs and RGAs. Seasoned players can tell you about the uncanny moment when the rules finally "clicked": the beanbag starts to feel like a fireball, the game money takes on a kind of weight. You know it shouldn't be, but while the game is running, it is. Perhaps this psychological sensation can be explained by the evolutionary history of the human brain. Our species' capacities for language and tool making are believed to have developed simultaneously in Broca's Area of the brain (Uomini and Meyer 2013) through a gene-culture co-evolutionary dynamic (Morgan et al. 2014). Perhaps in reification, this neurological ubiquity between language and tool making creates a type of psychological optical illusion, a "toolification" of socially-reinforced fantasies that have been codified as language, lending them that canny sense of being "real." In the world outside of the game, fantasies such as "capital," "gender," and "race" also undergo the social process of reification, allowing slips of paper to be mistaken for congealed labor, arbitrary assessments of one's genitals at birth to be mistaken for consent to a pre-determined set of lifelong activities, and persistent split-second assessments of the amount of melanin in one's skin and/or the shape of features limited to their face to evoke a fantasy that someone is either an equal or needs to be punished/saved/appropriated/excluded. The process of reifying these "real world" fantasies is no different than that which makes a beanbag into a fireball in larp, but they are lacking a "game off" mechanism. Reified social values contain their limits within their origin, leading to ad-hoc systems of infinitely expanding modes of reiterating the reified without contributing any new value outside of the reification system's own self-containment. Could it be that larp represents a new cultural-evolutionary advancement: reification with an "off" switch? The fireball gets to become a beanbag again. Or was there always already an off-switch, and this is all really about power? Histories of oppression are actively being held in place, eclipsed by the smooth surface of "race," "capital," and "gender." Does the fireball only get to become a beanbag again because no one has power invested in keeping it that way?
Returning to Duffett, we can turn the analysis of fandom back towards the "real world" and say that the phenomenology of participation in the "reality" of Late Capitalism is shaped by a shared "knowing field" rooted in oppressive fantasies like "race," "gender," and "capital." Drawing from the Catarealist art movement, which posits itself as "below and against the real" (Trigger 2015), we find larp code operating within a type of revolutionary potentiality, as below and apart from what is "real." Larp's revolutionary potentiality does not, however, prevent the reified social fantasies of the out-of-game world from creeping into a larp's diegesis. Many larp rule systems, for example, contain "Racial Abilities" that reinforce out-of-game essentialist fantasies about "race." In Alliance, if a character has green skin, it changes the way their character sheet works, no matter what their backstory may be. This creates a type of race that is more than race-the fallacy of bio-essentializing assumptions about culture has been written into the code that governs the universe of the game, making it impossible for the society within the game to ever dismantle "race." Should the rules then be changed, perhaps splitting base personhood and culture into separate subsystems? Or perhaps the term "race" could be replaced with something more apt in larp rulebooks, like "species"? Or perhaps such things should be done away with? These are questions for the next generation of larp designers as they contemplate their craft.
4,933.6
2016-12-02T00:00:00.000
[ "Computer Science", "Linguistics" ]
Controlled positioning of nanoparticles on a micrometer scale

For many applications it is desirable to have nanoparticles positioned on top of a given substrate, well separated from each other and arranged in arrays of a certain geometry. For this purpose, a method is introduced that combines the bottom-up self-organization of precursor-loaded micelles, providing Au nanoparticles (NPs), with top-down electron-beam lithography. As an example, 13 nm Au NPs are arranged in a square array with interparticle distances >1 µm on top of Si substrates. By using these NPs as masks for a subsequent reactive ion etching, the square pattern is transferred into Si as a corresponding array of nanopillars.

Introduction
Nanoparticles (NPs) still play a major role in nanoscience, from both an application and a fundamental point of view. Common to both aspects is the interest in possible new properties when reducing the sample size of a material down to the nanoscale. Quite generally, all material properties display such size effects in practice, although not all of them are advantageous for applications. An example of the latter case is provided by magnetic NPs, which for smaller and smaller particle volumes start exhibiting strong directional fluctuations of their magnetization and thus render their use for magnetic storage impossible at ambient temperature. On the other hand, this superparamagnetism poses the experimental challenge of testing new materials, alternative arrangements and novel concepts on the nanoscale to enable high-density magnetic data storage [1-4]. In this context, percolating magnetic media or "race track" arrangements may be mentioned, both relying on well-defined and well-positioned pinning sites for magnetic domain walls [5,6]. In a magnetic thin film, such pinning could be realized by local holes ("antidots"), leading immediately to quite a different application of NPs: using them as masks for subsequent etching procedures to transfer the NP pattern into the supporting substrate. In this respect, the notion of a nanoparticle should also include colloids and micelles, since their use for patterning is more widespread [7-12]. Of course, in addition to their magnetic behavior, NPs offer attractive optical [13,14] or electrical [15,16] properties. In these cases, NPs fabricated from the complete spectrum of materials, i.e., insulators, semiconductors and metals, are required. As a consequence, preparational progress in this field is still of utmost importance [17,18]. Assuming that a fabrication recipe has been developed for NPs of a desired material, there is, however, for many applications still another demanding requirement: positioning the NPs at predesigned locations, either with respect to geometry, such as forming squares or triangles, or, at least, with respect to interparticle distances, or even both. Restricting these distances to the nanoscale as well, some self-organization approaches exist that exploit hierarchical structure formation, allowing at least partial fulfillment of the above requirements [19-22]. For interparticle distances of some tens of nanometers, creative ideas have been realized, based even on three-dimensional DNA spacers linked to Au NPs [23]. Somewhat more flexible with respect to the type of NPs is positioning that exploits the wettability contrast of a substrate previously prepared by, e.g., microcontact printing [24,25] or improved direct nanoscale embossing [26].
Though, in this case, the interparticle distances can be largely enhanced, the difficulty here is to avoid obtaining more than one particle at a given location. For interparticle distances of some hundred nanometers, colloidal approaches have been successfully demonstrated. Though related to two-dimensional non-close-packed colloidal crystals [11], and thus primarily leading to the formation of hexagonal arrays of NPs, the method is novel in that it applies colloids carrying metal precursors. Once the colloidal carriers form a self-assembled ordered array, plasma processes are exploited to remove the organic matrix and to reduce the precursors into metallic NPs [10,12]. Though this technique appears quite versatile with respect to the type of NPs, it still has restrictions related to geometries other than hexagonal symmetry and to distances well above 1 µm. It is exactly this problem of combining the nano- with the micro-scale that is the focus of the present contribution. In the following approach, NPs prepared by exploiting the self-organization of precursor-loaded micelles formed from diblock-copolymers play a major role as a starting point. Thus, the genuine symmetry of their original arrangement will again be hexagonal. However, as will be demonstrated below, combining the micellar method with conventional electron-beam lithography not only extends interparticle distances from typically 100 nm into the micrometer range, but additionally allows a broad variation of the geometries of the finally arranged NPs.

Preparation of Au nanoparticles (NPs)
The starting point of the present approach is the fabrication of hexagonally arranged Au NPs, applying a previously reported recipe based on the self-organization of precursor-loaded micelles [7,8,21]. In short, commercially available diblock-copolymers [polystyrene-block-poly-2-vinylpyridine (PS-b-P2VP) from Polymer Source Inc., Canada], forming spherical reverse micelles in an apolar solvent such as toluene, are loaded with HAuCl4 salt as precursor. After optimized dip coating of the substrate (presently n-doped, (001)-oriented Si wafers; in general, however, any reasonably flat substrate material is suitable), one single layer of hexagonally ordered micelles is obtained. By exposing such micellar layers to a hydrogen plasma, the organic species can be completely removed and the precursor can be reduced to metallic Au NPs. The most attractive features of this approach are the control over the size of the NPs (determined by the amount of added precursor) as well as over the interparticle distance (determined by the total length of the diblock-copolymer and the substrate velocity during dip coating [8]). Furthermore, and most important for the present work, the final position of the Au NPs mirrors the self-assembled hexagonal array of the micellar carriers. This is demonstrated by the SEM image given in Figure 1, showing a typical array of Au NPs on top of a Si substrate. The high degree of hexagonal order is clearly visible, although deviations from perfect order are obvious as well. In the present work, exclusively Au NPs with average diameters of 13 ± 1.6 nm were used. Smaller Au NPs, however, with diameters down to 2 nm, would be easily available. Also, the interparticle distance was fixed at an average value of 102 ± 3 nm, for reasons to be discussed further below.

Selecting Au nanoparticles on the micrometer scale
The basic idea behind selecting individual Au NPs on the micrometer scale is outlined by the schematics presented in Figure 2.
A negative resist (AR-N7500-18, Allresist, 6000 rpm, thickness approximately 300 nm) is spin-coated above the primarily deposited Au NPs. Prior to this step, it is important to give the Si substrate with the NPs a short HF dip (2% HF, 10 s), which significantly enhances the adhesion of the resist. After a standard prebake of the resist (60 s at 85°C on a hot plate), a square arrangement of circles is written into the resist by an electron beam (20 kV, 15 pA). The diameter of these circles has to be adjusted with respect to the interparticle distance of the Au NPs, since each written resist disk should cover just one single NP. For the presently used mutual particle distance of 100 nm, a diameter of the resist disks of also 100 nm was chosen. This choice is the appropriate compromise to avoid having either no Au NP covered by the circular resist island or more than one (cf. the illustrative sketch below). By writing various square arrays of disks, the optimum electron dose is determined; the resist is then developed (developer: AR300-47, 140-160 s, with water as stopper), followed by a postbake (80 s at 120°C on a hot plate) of the exposed disks. The situation after this development step is illustrated by the SEM image shown in Figure 3. The four resist disks arranged in a square are clearly visible by their darker contrast, while the bright dots image the residual Au NPs. Obviously, due to the development process, the original hexagonal order of the NPs (Figure 1) is almost completely destroyed, and some of the original Au NPs are even removed together with the unexposed resist. (Figure 3 caption: SEM image after development, cf. the schematics in Figure 2. The bright dots image the still-present residual Au NPs, which have completely lost their hexagonal order during removal of the unexposed resist.) Next, the residual uncovered Au NPs are removed by dipping the substrate into an I/KI solution for 30 s, followed by the final stripping of the resist (1-2 min acetone, 20 s IPA). In principle, this last step finalizes the process, delivering 13 nm Au NPs arranged in a square lattice with mutual distances in the micrometer range. However, to enhance the visibility of these NPs in an overview SEM image, the particles are used as a mask during a subsequent reactive ion etching (RIE) of the Si substrate, transforming the NPs into nanopillars. The result is demonstrated in Figure 4. (Figure 4 caption: square array of Si nanopillars, cf. the schematics in Figure 2. Distance between pillars: 1.8 μm. Inset: magnified SEM image (tilted by 30°) of one nanopillar with the residual Au mask as cap.) Further squares of correspondingly prepared nanopillars can be visualized by reducing the interparticle distance (Figure 5; distance between pillars: 1.2 μm).

Problems and compromises
Though the SEM images presented in Figure 4 and Figure 5 successfully deliver a proof of principle for the presently suggested positioning procedure, some problems should be addressed as well. The first point is related to the absolute precision of positioning the Au NPs. Writing any pattern, such as the square array of disks, by the electron beam is performed relative to a predetermined rectangular coordinate system fixed within the sample surface. When restricting the patterning to a 100 μm × 100 μm area, no mechanical movement of the sample holder is necessary; rather, all programmed positions are approached by steering the electron beam. During the writing process, however, one observes a time-dependent drift, which in the present case of 100 nm disks arranged in squares added up to approximately 50 nm. Added to this error is the uncertainty of the exact position of the Au NP within any disk.
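Both the disk-diameter compromise and this residual positional uncertainty can be illustrated with a minimal Monte Carlo sketch (hypothetical parameters: an ideal hexagonal lattice with 102 nm spacing plus 3 nm jitter, and 100 nm disks dropped at random positions):

```python
import numpy as np

rng = np.random.default_rng(0)
a, r_disk = 102.0, 50.0  # mean interparticle distance and disk radius (nm)

# Patch of a hexagonal NP lattice with mild positional disorder (+/- 3 nm).
pts = np.array([(a * (i + 0.5 * (j % 2)), a * np.sqrt(3) / 2 * j)
                for i in range(-6, 7) for j in range(-6, 7)])
pts += rng.normal(0.0, 3.0, pts.shape)

# Drop resist disks at random positions; count covered NPs and, when exactly
# one NP is covered, record its offset from the disk center.
counts, offsets = [], []
for _ in range(20000):
    c = rng.uniform(-a, a, 2)
    d = np.linalg.norm(pts - c, axis=1)
    inside = d < r_disk
    counts.append(inside.sum())
    if inside.sum() == 1:
        offsets.append(d[inside][0])

counts = np.array(counts)
for k in (0, 1, 2):
    print(f"P({k} NPs covered) ~ {np.mean(counts == k):.2f}")
print(f"mean offset of the single covered NP ~ {np.mean(offsets):.0f} nm")
```

With these numbers, the majority of disks cover exactly one NP, and the covered NP sits on average a few tens of nanometers off the disk center, consistent with the conservative <150 nm estimate that follows.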
Due to the finite hexagonal order, over larger areas this position can be assumed to be random within the disk area. Thus, a very conservative estimate of the deviation of the NP location from its ideal square position is <150 nm, i.e., on the order of 10% in the present examples. For many applications, however, positional precision of the NPs is not the primary goal. Rather, the NPs should be well separated from each other and individually identifiable against the background. Two classes of applications may illustrate these requirements. The first example is spectroscopy, applied either directly to nanoparticles or, indirectly, to, e.g., molecules specifically ligated to the NPs, such as those bonding to Au NPs through a thiol group. To suppress interactions between nanoparticles or the molecules bound to them, interparticle distances of 50 nm are usually sufficient (for a recent study on near-field effects around a single dot see [27]). To guarantee single-particle/molecule spectroscopy, significantly larger distances are necessary, as provided by the present method, depending in detail on the wavelength of the exciting radiation or the achievable focus size. In a second class of experiments, metallic NPs may be used as electrical contacts connected to the backside of the substrate by vias (vertical interconnect accesses), which, in turn, are further connected to pads on the micrometer scale. An example would be contacting a biological cell with typical lateral extensions of more than 10 µm at well-defined positions, e.g., 1 µm apart. Though the presently obtained lateral precision of the particle positioning is sufficient for the just-mentioned applications, further improvements appear possible. A necessary prerequisite for this would be a better long-range order of the starting NPs. For this, changing to self-assembled precursor-loaded colloids rather than micelles is promising [10-12]. In the ideal case, positioning of the resist disks would no longer be purely statistical, but would instead conform to multiples of the lattice parameter of the underlying hexagonal colloid lattice. To exploit the high long-range colloidal order, however, a sample holder with laser-interference-controlled translations becomes a must. In this way, positioning with a precision of better than 50 nm appears possible.

Conclusion
A general procedure is introduced to position nanoparticles on the micrometer scale on top of a given substrate. The method is demonstrated for Au NPs (diameter 13 nm) on Si wafers in a square lattice with interparticle distances above 1 µm. The underlying idea is to combine the self-organization of precursor-loaded micelles formed from diblock-copolymers in toluene, a bottom-up process providing the nanoparticles, with top-down electron-beam lithography. As a first simple application, the resulting array of Au NPs is used as a mask for a subsequent reactive-ion-etching process, delivering correspondingly arranged Si nanopillars.
2,945.6
2012-11-20T00:00:00.000
[ "Physics" ]
Transcriptomic analysis of human sensory neurons in painful diabetic neuropathy reveals inflammation and neuronal loss
Pathological sensations caused by peripheral painful neuropathy occurring in Type 2 diabetes mellitus (T2DM) are often described as 'sharp' and 'burning' and are commonly spontaneous in origin. Proposed etiologies implicate dysfunction of nociceptive sensory neurons in dorsal root ganglia (DRG) induced by generation of reactive oxygen species, microvascular defects, and ongoing axonal degeneration and regeneration. To investigate the molecular mechanisms contributing to diabetic pain, DRGs were acquired postmortem from patients who had been experiencing painful diabetic peripheral neuropathy (DPN) and subjected to transcriptome analyses to identify genes contributing to pathological processes and neuropathic pain. DPN occurs in distal extremities, resulting in the characteristic "glove and stocking" pattern. Accordingly, the L4 and L5 DRGs, which contain the perikarya of primary afferent neurons innervating the foot, were analyzed from five DPN patients and compared with seven controls. Transcriptome analyses identified 844 differentially expressed genes. We observed increases in levels of inflammation-associated transcripts from macrophages in DPN patients that may contribute to pain hypersensitivity and, conversely, there were frequent decreases in neuronally related genes. The elevated inflammatory gene profile and the accompanying downregulation of multiple neuronal genes provide new insights into intraganglionic pathology and mechanisms causing neuropathic pain in DPN patients with T2DM. Supplemental Figure 1 Nonsignificant differences between group demographics of interest: No significant difference was found in A) age (Mann-Whitney test, p = 0.0530), B) BMI (Mann-Whitney test, p = 0.4596), or C) sex distribution (Fisher's exact test, p = 0.9999) between diabetic and non-diabetic groups. Supplemental Figure 2 Principal component analysis (PCA) of human DPN transcriptome data: PCA plot A) before and B) after including sex as a covariate as part of the DESEQ2 analysis. PC1 is the first principal component direction, along which the most variance occurs, and PC2 is the direction with the second-most variance, orthogonal to PC1. With sex as a covariate, the DPN donors and the controls separate into two independent groups. Supplemental Figure 3 Ingenuity pathway analysis (IPA) of transcriptomic data: The IPA report using all 844 dysregulated genes predominantly centered on the immunological functions occurring in the DRG of the DPN individuals. Accordingly, further assessment of the DEGs was separated into A) the upregulated, largely inflammatory gene responses and B) the downregulated gene expression changes, where synaptogenesis appears to be affected by decreases in gene expression. Supplemental Figure 4 Interaction network of neuronally related genes: 62 genes out of our DEG list were considered to perform as a cellular component of a neuron (GO:0097458). Most genes were downregulated (n=51) while a few were upregulated (n=11) in the DPN donors. To further determine how these dysregulated genes might impact neuronal function, all 62 genes were separately evaluated using STRING (https://string-db.org/) for additional enrichment analysis. A) About 66% of the neuronal genes (n=41) were synaptically related (blue: GO:0045202), while 39% (n=24) were associated with the neuron cell body (green: GO:0043025).
B) In terms of biological function, a few genes (red) were qualified as being associated with neurotransmitter secretion (GO:0007269). Supplemental Figure 5 Interaction network of immune responses: 89 genes from our DEG list were registered as immune response related genes (GO:0006955). The immune response genes were subsequently reentered into STRING to identify the nature of the inflammatory reactions and to determine possible protein network interactions. Gene changes related to both adaptive and innate immune responses were recognized in our gene list. 38 genes (red) are considered part of an innate immune response (GO:0045087), while 17 genes (blue) are associated with a humoral immune response (GO:0006959) and 12 genes (green) are involved in T cell activation (GO:0042110). Supplemental Table 1 DRGs used in this study were acquired post-mortem from cadaveric donors. Because the donors were organ donors, information about their medical history was provided through an extensive interview with a family member conducted by a trained interviewer. Included in the table is a list of medications taken by the donors and available data on the duration of DPN. Supplemental Table 2 Hematoxylin and eosin-stained slides from 5 controls and 5 DPN donors were scored by a pathologist in a blinded fashion on a scale of 0-3 (0 being no ganglionic cell loss/within normal limits and 3 being severe cell loss). Supplemental Table 3 Significant genes (adjusted p-value cutoff of 0.05 by the Benjamini-Hochberg false discovery rate). Supplemental Table 4 A list of all genes, including base mean, log2 fold change, and adjusted p-values (padj). Supplemental Table 5 Normalized data (DESEQ2 normalized counts). Supplemental Table 6 Table of 71 dysregulated immunoglobulin genes including IGHG1-4, IGHA1-2, and IGHM. Supplemental Table 7 Further gene enrichment was conducted using ToppGene Suite (https://toppgene.cchmc.org). In the DisGeNET database of gene-disease associations, 79 dysregulated genes were listed as involved in pain (C0030193).
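As context for the "adjusted p-value cutoff of 0.05 by the Benjamini-Hochberg false discovery rate" quoted in Supplemental Table 3, here is a minimal, generic sketch of that adjustment in Python; the toy p-values are invented for illustration and have nothing to do with the study's data:

import numpy as np

def bh_adjust(pvals):
    # Benjamini-Hochberg adjusted p-values for a 1-D array of raw p-values.
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                            # ascending raw p-values
    ranked = p[order] * n / np.arange(1, n + 1)      # p * n / rank
    adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(n)
    out[order] = adj
    return out

raw = np.array([0.0001, 0.003, 0.012, 0.035, 0.048, 0.21, 0.60])  # toy values
print(bh_adjust(raw))
print(bh_adjust(raw) <= 0.05)   # which entries survive the 0.05 FDR cutoff

DESeq2 reports Benjamini-Hochberg-adjusted p-values (padj) by default, which is what the supplemental tables list; the sketch only illustrates the definition of that correction.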
1,260
2021-07-25T00:00:00.000
[ "Biology", "Medicine" ]
Quantum Field Theory with Electric-Magnetic Duality and Spin-Mass Duality but Without Grand Unification and Supersymmetry
I present a generalization of quantum electrodynamics which includes Dirac magnetic monopoles and the Salam magnetic photon. This quantum electromagnetodynamics has many attractive features. (1) It explains the quantization of electric charge. (2) It describes symmetrized Maxwell equations. (3) It is manifestly covariant. (4) It describes local four-potentials. (5) It avoids the unphysical Dirac string. (6) It predicts a second kind of electromagnetic radiation which can be verified by a tabletop experiment. An effect of this radiation may have been observed by August Kundt in 1885. Furthermore I discuss a generalization of General Relativity which includes Cartan's torsion. I discuss the mathematical definition, concrete description, and physical meaning of Cartan's torsion. I argue that the electric-magnetic duality of quantum electromagnetodynamics is analogous to the spin-mass duality of Einstein-Cartan theory. A quantum version of this theory requires that the torsion tensor corresponds to a spin-3 boson called the tordion, which is shown to have a rest mass close to the Planck mass. Moreover I present an empirically satisfied fundamental equation of unified field theory which includes the fundamental constants of electromagnetism and gravity. I conclude with the remark that the concepts presented here require neither Grand Unification nor supersymmetry.
The Model
The quantization of electric charge has been well known since the discovery of the proton in 1919 [1]. This remarkable observation remained unexplained within the framework of quantum electrodynamics [2]. Further quantized charges have been established. The group SU(2) of the weak interaction explains the quantization of isospin [3], and the group SU(3) of the strong interaction explains the quantization of colour charge [4]. For this reason we propose the analogy postulate: the quantization of electric charge results from the underlying group structure of the electromagnetic interaction. Hence, we will require neither quantum gravity (electric charge as a topological quantum number [5]), nor spontaneous symmetry breaking (monopoles of soliton type [6]), nor unification with other forces (charge quantization resulting from the group structure underlying grand unified theories [7]). The electromagnetic angular momentum generated by the Lorentz force in a system consisting of a magnetic monopole and an electric charge is independent of their separation [8]. Angular momentum is quantized in units of ℏ/2, where ℏ = h/2π denotes the reduced Planck constant. This condition can be satisfied only if both electric and magnetic charge are quantized [9]. This is the famous Dirac quantization condition eg = h, where e and g denote the unit electric and the unit magnetic charge. Magnetic monopoles were discussed long before this finding. The motivation was to describe electric and magnetic fields equivalently by symmetrized Maxwell equations. We will elevate this to the symmetry postulate: the fundamental equations of the electromagnetic interaction describe electric and magnetic charges, electric and magnetic field strengths, and electric and magnetic potentials equivalently.
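Before turning to the symmetrized equations, it is worth noting how large the magnetic coupling implied by eg = h is. The following Python sketch is my illustration, not part of the paper; it sets ℏ = c = 1 and uses the paper's convention h = 2π:

import math

alpha_E = 1.0 / 137.036                 # fine-structure constant, e^2/(4*pi)
e = math.sqrt(4.0 * math.pi * alpha_E)  # unit electric charge
g = 2.0 * math.pi / e                   # unit magnetic charge from e*g = h = 2*pi
alpha_M = g**2 / (4.0 * math.pi)        # magnetic coupling constant
print(f"alpha_M = {alpha_M:.2f}")       # ~34.3, i.e. alpha_M = 1/(4*alpha_E)
# Other common forms of the Dirac condition (e.g. e*g = 2*pi*n, n integer)
# change the numerical factor; the qualitative point is only that alpha_M >> 1.

This strong monopole coupling is addressed again under point (6) of the consequences below.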
Dirac [9] was the first to write down these symmetrized Maxwell equations. Let $J^\mu = (\rho, \mathbf{J})$ denote the electric four-current and $j^\mu = (\varrho, \mathbf{j})$ the magnetic four-current. The well-known four-potential of the electric photon is $A^\mu = (\Phi, \mathbf{A})$. The four-potential of the magnetic photon is $a^\mu = (\varphi, \mathbf{a})$. Expressed in three-vectors, the symmetrized Maxwell equations read
$\nabla\cdot\mathbf{E} = \rho$, $\nabla\times\mathbf{B} - \partial_t\mathbf{E} = \mathbf{J}$, $\nabla\cdot\mathbf{B} = \varrho$, $-\nabla\times\mathbf{E} - \partial_t\mathbf{B} = \mathbf{j}$,
and the relations between field strengths and potentials are
$\mathbf{E} = -\nabla\Phi - \partial_t\mathbf{A} - \nabla\times\mathbf{a}$, $\mathbf{B} = -\nabla\varphi - \partial_t\mathbf{a} + \nabla\times\mathbf{A}$.
The second four-potential is required not only by the symmetry postulate, but also by the proven impossibility of constructing a manifestly covariant one-potential model of quantum electromagnetodynamics. Although only one of the suggested two-potential models explicitly states the possibility of the existence of a magnetic photon [10], the other two-potential models were eventually considered as two-photon models [11]. Any viable two-photon concept of magnetic monopoles has to satisfy the following conditions. (i) In the absence of both magnetic charges and the magnetic photon field, the model has to regain the U(1) gauge symmetry of quantum electrodynamics. (ii) In the absence of both electric charges and the photon field, the symmetry postulate requires the model to yield the U(1) gauge symmetry of quantum magnetodynamics. (iii) The gauge group has to be Abelian, because the photon carries neither electric nor magnetic charge; because of the symmetry postulate, the magnetic photon has to be neutral as well. (iv) The gauge group may not be simple, because quantum electromagnetodynamics includes the two coupling constants $\alpha_E = e^2/4\pi$ and $\alpha_M = g^2/4\pi$. The only gauge group that satisfies these four conditions is U(1) × U(1). A two-photon model has already been suggested by Salam [10]. According to his model, the photon couples via vector coupling with leptons and hadrons, but not with monopoles. The magnetic photon couples via vector coupling with monopoles and via tensor coupling with hadrons, but not with leptons. This model came under severe criticism. Although positron and proton have the same electric charge and no magnetic charge, the model discriminates between them (i.e. between leptons and hadrons). For this reason Salam's model does not generate the Lorentz force between electric charge and monopole. As a consequence, it does not satisfy the powerful Dirac quantization condition. For this reason Salam's model was rejected by Taylor [11]. The problem raised by Taylor can be overcome by the following argumentation. Salam considered the tensor coupling of the hadron-monopole system as derivative coupling. This kind of coupling is well known from meson theory, where vector mesons are able to interact with baryons via both vector and tensor coupling. However, derivative coupling is possible only where the particles are composite. Hence, Salam's model includes no interaction between lepton and magnetic photon. (We emphasize the correctness of the interpretation of tensor coupling as derivative coupling in meson theory.) To generate the Lorentz force between electric and magnetic charges, we have to introduce a new kind of tensor coupling. This is required also because here we have two kinds of interacting charges (electric and magnetic). The Coulomb force between two (unit) electric charges is $e^2/4\pi r^2$. Because of the symmetry postulate, the magnetic force between two (unit) magnetic charges is $g^2/4\pi r^2$, and the Lorentz force between (unit) electric and (unit) magnetic charge is $egv/4\pi r^2$, where v denotes the relative velocity of the two charges.
This suggests the introduction of velocity coupling: (i) The photon couples via vector coupling with electric charges. (ii) The magnetic photon couples via vector coupling with magnetic charges. (iii) The photon couples via tensor coupling with magnetic charges. In contrast to meson theory, however, the $u_\nu$ of the tensor coupling $\sigma^{\mu\nu}u_\nu$ has to be interpreted as a four-velocity (velocity coupling). (iv) The magnetic photon couples via tensor coupling (interpreted as velocity coupling instead of derivative coupling) with electric charges. In the case of the interacting monopole-electric charge system, the exchanged boson (either photon or magnetic photon) is virtual and the four-velocity of the velocity coupling is the relative four-velocity between the charges. Charged quanta are required to emit and absorb the same bosons as real (on-mass-shell) particles as the virtual (off-mass-shell) bosons through which they interact with other charged quanta. This is because the Feynman rules are symmetric with respect to virtual and real particles. In the case of emission and absorption reactions of real bosons, $u^\mu$ cannot be interpreted as a relative four-velocity between charged quanta in the initial state, as there is only one charged quantum present. As a consequence, $u^\mu$ has to be interpreted as the absolute four-velocity of the initial charged quantum. In contrast to general belief, an absolute rest frame is not forbidden. Instead, a number of reasons support its existence (see below). The aether drift of the Sun has been discovered and measured to be 370 km/s.
Formalism
The Lagrangian for a spin-1/2 fermion field Ψ of rest mass $m_0$, electric charge Q, and magnetic charge q within an electromagnetic field can be constructed from the field-strength tensors introduced above. From this Lagrangian of the Dirac fermion within the electromagnetic field, the Euler-Lagrange equations yield the Dirac equation; introducing the four-currents, the Euler-Lagrange equations yield the two Maxwell equations. Evidently, the two Maxwell equations are invariant under the U(1) × U(1) gauge transformations. Furthermore, the four-currents satisfy the continuity equations, and the electric and magnetic fields are related to the tensors above in the usual way. Finally, one obtains the Lorentz force, where $\varepsilon^{\mu\nu\sigma}$ denotes the totally antisymmetric tensor. This formula for the Lorentz force is rather trivial for the classical theory. Non-trivial is that the formula can be applied to the quantum field theory. This becomes possible because of the introduction of the velocity coupling, which includes a velocity operator and allows the definition of a force operator.
Suggested Experiment
This model does not contain any free parameters. Hence, it allows clear and decisive predictions for its verification. The electric-magnetic duality is:
electric charge - magnetic charge
electric current - magnetic current
electric conductivity - magnetic conductivity
electric field strength - magnetic field strength
electric four-potential - magnetic four-potential
electric photon - magnetic photon
electric field constant - magnetic field constant
dielectricity number - magnetic permeability
The absolute frame predicted above gives rise to local physical effects. In a terrestrial laboratory, the interaction cross-section of a free magnetic photon (with conventional matter in the terrestrial rest frame) is predicted to be smaller than the one of a free electric photon (= conventional or Einstein photon) of the same energy. The suppression factor is the square of the absolute speed of the laboratory in units of the speed of light. Hence, each reaction that generates electric photons generates also magnetic photons. Magnetic photons are harder to create, to shield, and to absorb than electric photons of the same energy. The refractive index of an insulator is the square root of the product of the dielectricity number and the magnetic permeability. Therefore it is invariant under a dual transformation. This means that electric and magnetic photon rays are reflected and refracted by insulators in the same way. Optical lenses cannot distinguish between electric and magnetic photon rays. By contrast, electric and magnetic photon rays are reflected and refracted in a different way by metals. This is because electric conductivity and magnetic conductivity determine the reflection of light, and they are not identical. The electric conductivity of a metal is several orders of magnitude larger than the magnetic conductivity. Light in metal behaves wave-like (polariton, more or less a combination of light wave and sound wave). The interpretation of the basic equations of quantum electromagnetodynamics is the following. Electric charges can couple to both the four-potential of the electric photon (via vector coupling) and the four-potential of the magnetic photon (via tensor coupling). So an electric charge generates both an electric four-current density (the vector part of the electric four-current density above) and a magnetic four-current density (the tensor part of the magnetic four-current density above). According to the Lagrangian, the four-potential of the electric photon can couple only to the electric four-current density, and the four-potential of the magnetic photon can couple only to the magnetic four-current density. The main difference between the vector part and the tensor part of the four-current density is the appearance of the four-velocity. For emission and absorption processes I interpret this velocity as the absolute velocity of the laboratory (for a terrestrial laboratory: 10⁻³ in units of the speed of light). So the magnetic current density is 10⁻³ times the electric current density. According to Ohm's law, current density is equal to conductivity times the electromagnetic field. Therefore the magnetic conductivity is 10⁻³ times (where ε₀ = 1) the electric conductivity of a given conductor in a terrestrial laboratory.
Within a conductor, the penetration depth of light of a given frequency is proportional to the square root of the reciprocal value of the conductivity (for a more precise formula see the following subsection). So I predict that the penetration depth of magnetic photon light is greater than that of electric photon light of the same frequency. The result would be that in iron (August Kundt experiment, see below) the penetration depth for red light is 7 nm for electric photon light and 472 nm for magnetic photon light. In aluminium the penetration depth for green light (λ = 532 nm) is 3.35 nm for electric photon light and 152 nm for magnetic photon light. Note that electric conductivity and magnetic conductivity determine the reflection of electric and magnetic photon light, respectively (see the equations below). The electric conductivity of a metal is predicted to be larger than the magnetic conductivity. This results in a stronger reflection of electric photon light than magnetic photon light. To give an example: I predict that silver reflects 94% of the electric photon light, but only 13% of the magnetic photon light, if green light of the wavelength 532 nm is used. Therefore the use of mirrors (for reflection) should be avoided in the search for the magnetic photon light.
How to Verify the Magnetic Photon Rays
The easiest test to verify or falsify the magnetic photon is to illuminate a metal foil of thickness 100…1000 nm by a laser beam (or any other bright light source) and to place a detector (avalanche diode or photomultiplier tube) behind the foil. If a single foil is used, then the expected reflection losses are less than 1%. If a laser beam of visible light is used, then the absorption losses are less than 15%. My model predicts the detected intensity of the radiation to be a factor f times the intensity that would be detected if the metal foil were removed and the laser beam directly illuminated the detector; this factor is governed by the absolute velocity of the laboratory. The absolute velocity of the Sun as measured by the dipole anisotropy of the cosmic microwave background radiation is about 370 km/s. The mean velocity of the Earth around the Sun is about 30 km/s, and the rotation velocity of the Earth is about 0.46 km/s; together with the latitude of the dipole with respect to the ecliptic, its declination with respect to the equator, and the latitude of the laboratory (ϕ ≈ 48° for Strassbourg and Vienna, ϕ = 43° for Madison), these determine the time dependence of the laboratory's absolute velocity, with the sidereal year and the sidereal day setting the periods of the resulting modulation. The zero point of the time, t = 0, is reached on December 9 at 0:00 local time. The speed of light is denoted by c. The factor for losses by reflection and absorption of magnetic photon rays of the visible light for a metal foil of thickness 100…1000 nm is close to unity. To conclude, quantum electromagnetodynamics predicts the value f ∼ 10⁻¹².
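The quoted f ∼ 10⁻¹² is consistent with reading the prediction as a double suppression, (v/c)² at creation in the source and (v/c)² again at absorption in the detector; this reading is my inference from the numbers, not an equation stated in the text:

c = 299_792.458   # speed of light, km/s
v_sun = 370.0     # km/s, absolute velocity from the CMB dipole anisotropy
f = (v_sun / c) ** 4          # (v/c)^2 at creation times (v/c)^2 at detection
print(f"f ~ {f:.1e}")         # ~2.3e-12, of the order 1e-12 as stated
# The ~30 km/s orbital and ~0.46 km/s rotational velocities of the Earth
# modulate v, and hence f, over the sidereal year and the sidereal day.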
Possible Observation of Magnetic Photon Rays
In Strassbourg in 1885, August Kundt [12] passed sunlight through red glass, a polarizing Nicol, and platinized glass which was covered by an iron layer. The entire experimental setup was placed within a magnetic field. With the naked eye, Kundt measured the Faraday rotation of the polarization plane generated by the transmission of the sunlight through the iron layer. His result was a constant maximum rotation of the polarization plane per length of 418,000°/cm, or 1° per 23.9 nm. He verified this result for thicknesses of up to 210 nm and rotations of up to 9°. In one case, on a very clear day, he observed the penetrating sunlight for rotations of up to 12°. Unfortunately, he did not give the thickness of this particular iron layer. But if his result of a constant maximum rotation per length can be applied, then the corresponding layer thickness was ∼290 nm. Let us recapitulate some classical electrodynamics to determine the behavior of light within iron. (The following equations are nearly identical for electric photon light and magnetic photon light. The only difference is that the electric conductivity has to be replaced by the magnetic conductivity, which is 10⁻³ times the electric conductivity in a terrestrial laboratory. There is no interaction between electric current and magnetic current, because in the absence of magnetic charges the vector part of the electric four-current couples only to the four-potential of the electric photon, and the tensor part of the magnetic four-current couples only to the four-potential of the magnetic photon.) The penetration depth of light in a conductor is $\delta = \lambda/(2\pi\kappa)$, where the wavelength in vacuum can be expressed by its frequency according to $\lambda = 1/(\nu\sqrt{\varepsilon_0\mu_0})$. The extinction coefficient is $\kappa = \operatorname{Im} n$, where the refractive index is $n = \sqrt{\varepsilon_r\mu_r}$. For metals we get the very good approximation $\kappa \approx \sqrt{\mu_r\sigma/(4\pi\varepsilon_0\nu)}$, so that $\delta \approx \sqrt{\rho/(\pi\nu\mu_0\mu_r)}$. The specific resistance of iron is approximately 10⁻⁷ Ωm, its permeability is µ_r ≥ 1. For red light of λ = 630 nm and ν = 4.8 × 10¹⁴ Hz we get the penetration depth δ = 6.9 nm. Only a small fraction of the sunlight can enter the iron layer. Three effects have to be considered. (i) The red glass allows the penetration of only about ε₁ ∼ 50% of the sunlight. (ii) Only ε₂ = 2/π ≈ 64% of the sunlight can penetrate the polarization filter. (iii) Reflection losses at the surface of the iron layer have to be considered. For metals the refractive index for electric photon light is, to a very good approximation, n ≈ κ, and the fraction of the sunlight which is not reflected is $\varepsilon_3 = 4n/((n+1)^2 + \kappa^2)$, and therefore ε₃ ≈ 0.13 for the system considered. Taken together, the three effects allow only ε₁ε₂ε₃ ∼ 4% of the sunlight to enter the iron layer.
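The 6.9 nm figure can be checked against the classical skin-depth expression derived above; the resistivity below is a handbook value assumed for the check, not a number taken from the text:

import math

mu0 = 4.0e-7 * math.pi   # vacuum permeability, V*s/(A*m)
rho_iron = 9.7e-8        # Ohm*m, assumed room-temperature resistivity of iron
nu_red = 4.8e14          # Hz, red light as used in the text
mu_r = 1.0               # the lower bound quoted for iron

delta = math.sqrt(rho_iron / (math.pi * nu_red * mu0 * mu_r))
print(f"penetration depth: {delta * 1e9:.1f} nm")   # ~7.2 nm vs. 6.9 nm quoted
# For the hypothetical magnetic photon the text substitutes the (much smaller)
# magnetic conductivity; a conductivity reduced by a factor s stretches the
# penetration depth by 1/sqrt(s).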
The detection limit of the naked eye is 10⁻¹³ times the brightness of sunlight, provided the light source is pointlike. For an extended source the detection limit depends on the integral and the surface brightness. The detection limit for a source as extended as the Sun (0.5° diameter) is l_d ∼ 10⁻¹² times the brightness of sunlight. If sunlight is passed through an iron layer (or foil, respectively), then it is detectable with the naked eye only if it has passed not more than about 170 nm of iron (from 0.04 · exp(−d/δ) = 10⁻¹² with δ = 6.9 nm). Reflection losses by haze in the atmosphere further reduce this value. Kundt's observation can hardly be explained with classical electrodynamics. Air bubbles within the metal layers cannot explain Kundt's observation, because air does not generate such a large rotation. Impurities, such as glass, which do generate an additional rotation, cannot completely be ruled out as the explanation. However, impurities are not a likely explanation, because Kundt was able to reproduce his observation by using several layers which he examined at various places. Quantum effects cannot explain the observation, because they decrease the penetration depth, whereas an increment would be required. The observation may become understandable if Kundt observed a second kind of electromagnetic radiation, the magnetic photon rays. To learn whether Kundt indeed observed magnetic photon rays, his experiment has to be repeated.
Consequences
The observation of magnetic photon rays would be a multi-dimensional revolution in physics. Its implications would be far-reaching. (1) The experiment would provide evidence of a second kind of electromagnetic radiation. The penetration depth of these magnetic photon rays is roughly one million times greater than that of electric photon light of the same wavelength. Hence, these new rays may find applications in medicine where X-ray and ultrasonic diagnostics are not useful. X-ray examinations carry a high risk of radiation damage, because the examination of teeth requires high intensities of X-rays and genitals are too sensitive to radiation damage. Examinations of bones and the brain may also become possible. (2) The experiment would confirm the existence of a new vector gauge boson, Salam's magnetic photon from 1966 [10]. It has the same quantum numbers as Einstein's electric photon [13], i.e. spin of one, negative parity, zero rest mass, and zero charge. The vanishing rest mass for both the electric and the magnetic photon is required to satisfy the Dirac quantization condition of electric and magnetic charge. (3) A positive result would provide evidence of an extension of quantum electrodynamics which includes a symmetrization of Maxwell's equations from 1873 [14]. (4) The experiment would provide indirect evidence of Dirac's magnetic monopoles from 1931 and the explanation of the quantization of electric charge [9]. This quantization has been known since Rutherford's discovery of the proton in 1919 [1]. (5) My model describes both an electric current and a magnetic current, even in experimental situations which do not include magnetic charges. This new magnetic current has a larger specific resistance in conductors than the electric current. It may find applications in electronics.
(6) Dirac noticed in 1931 that the coupling constant of magnetic monopoles is much greater than unity [9]. This raises new questions concerning the perturbation theory, the renormalizability, and the unitarity of quantum field theories. (7) The intensity of the magnetic photon rays should depend on the absolute velocity of the laboratory. The existence of the absolute velocity would violate Einstein's relativity principle of Special Relativity from 1905 [15]. It would be interesting to learn whether there exist further effects of absolute motion. (8) The supposed non-existence of an absolute rest frame was the only argument against the existence of a luminiferous aether [15]. If the absolute velocity does exist, we have to ask whether the aether exists and what its nature is. (9) When in 1925 Heisenberg introduced quantum mechanics, he argued that motion does not exist in this theory [16]. This view is taken also in the Copenhagen interpretation of quantum mechanics formulated in 1927/1928 by Heisenberg and Bohr [17]. The appearance of a velocity operator in my model challenges this Copenhagen interpretation. Mathematically, the introduction of a velocity (and force) operator means that quantum mechanics has to be described not only by partial but also by ordinary differential equations. (10) Magnetic photon rays may contribute to our understanding of several astrophysical and high-energy particle physics phenomena where relativistic absolute velocities appear and where electric and magnetic photon rays are expected to be created in comparable intensities. (11) Finally, the other interactions may show similar dualities. The new dual partners of the known gauge bosons would be the magnetic photon, the isomagnetic W- and Z-bosons, and the chromomagnetic gluons. In 1999 I argued that the dual partner of the graviton would be the tordion [18]. This boson has a spin of three and is required by Cartan's torsion theory from 1922 [19], which is an extension of Einstein's general relativity from 1915 [20].
2 Absolute Space and Time
Space and Time Before General Relativity
According to Aristotle, the Earth was resting in the centre of the universe. He considered the terrestrial frame as a preferred frame and all motion relative to the Earth as absolute motion. Space and time were absolute [21]. In the days of Galileo the heliocentric model of Copernicus [22] was valid. The Sun was thought to be resting within the centre of the universe and defining a preferred frame. Galileo argued that only relative motion was observed, but not absolute motion. However, to fix motion he considered it necessary to have not only relative motion, but also absolute motion [23]. Newton introduced the mathematical description of Galileo's kinematics. His equations described only relative motion. Absolute motion did not appear in his equations [24]. This inspired Leibniz to suggest that absolute motion is not required by the classical mechanics introduced by Galileo and Newton [25]. Huyghens introduced the wave theory of light. According to his theory, light waves propagate via oscillations of a new medium which consists of very tiny particles, which he named aether particles. He considered the rest frame of the luminiferous aether as a preferred frame [26].
The aether concept reappeared in Maxwell's theory of classical electrodynamics [14]. Faraday [27] unified Coulomb's theory of electricity [28] with Ampère's theory of magnetism [29]. Maxwell unified Faraday's theory with Huyghens' wave theory of light, where in Maxwell's theory light is considered as an oscillating electromagnetic wave which propagates through the luminiferous aether of Huyghens. We all know that classical kinematics was replaced by Einstein's Special Relativity [15]. Less known is that Special Relativity is not able to answer several problems that were explained by classical mechanics. According to the relativity principle of Special Relativity, all inertial frames are equivalent; there is no preferred frame. Absolute motion is not required, only the relative motion between the inertial frames is needed. The postulated absence of an absolute frame prohibits the existence of an aether [15]. According to Special Relativity, each inertial frame has its own relative time. One can infer via the Lorentz transformations [30] the time of the other inertial frames. Absolute space and time do not exist. Furthermore, space is homogeneous and isotropic; there does not exist any rotational axis of the universe. It is often believed that the Michelson-Morley experiment [31] confirmed the relativity principle and refuted the existence of a preferred frame. This belief is not correct. In fact, the result of the Michelson-Morley experiment disproved the existence of a preferred frame only if Galilei invariance is assumed. The experiment can be completely explained by using Lorentz invariance alone; the relativity principle is not required. By the way, the relativity principle is not a phenomenon that belongs solely to Special Relativity. According to Leibniz it can be applied also to classical mechanics. Einstein's theory of Special Relativity has three problems. (i) The space of Special Relativity is empty. There are no entities apart from the observers and the observed objects in the inertial frames. By contrast, the space of classical mechanics can be filled with, say, radiation or turbulent fluids. (ii) Without the concept of an aether, Special Relativity can only describe but not explain why electric and magnetic fields oscillate in propagating light waves. (iii) Special Relativity does not satisfy the equivalence principle [32] of General Relativity, according to which inertial mass and gravitational mass are identical. Special Relativity considers only inertial mass. Special Relativity is a valid approximation of reality which is appropriate for the description of most of the physical phenomena examined until the beginning of the twenty-first century. However, the macroscopic properties of space and time are better described by General Relativity.
General Relativity: Absolute Space and Time
In 1915 Einstein presented the field equations of General Relativity, and in 1916 he presented the first comprehensive article on his theory [20]. In a later work he showed an analogy between Maxwell's theory and General Relativity. The solutions of the free Maxwell equations are electromagnetic waves, while the solutions of the free Einstein field equations are gravitational waves which propagate on an oscillating metric [33]. As a consequence, Einstein called space the aether of General Relativity [34]. However, even within the framework of General Relativity, electromagnetic waves do not propagate through a luminiferous aether.
Einstein applied the field equations of General Relativity to the entire universe [35]. He presented a solution of a homogeneous, isotropic, and static universe, where space has a positive curvature. This model became known as the Einstein universe. However, de Sitter showed that the Einstein universe is not stable against density fluctuations [36]. This problem was solved by Friedmann and Lemaître, who suggested a homogeneous and isotropic expanding universe where space is curved [37]. Robertson and Walker presented a metric for a homogeneous and isotropic universe [38]. According to Gödel, this metric requires an absolute time [39]. In any homogeneous and isotropic cosmology the Hubble constant [40] and its inverse, the Hubble age of the universe, are absolute and not relative quantities. In the Friedmann-Lemaître universe there exists a relation between the actual age of the universe and the Hubble age. According to Bondi and Gold, a preferred motion is given at each point of space by cosmological observations, namely the redshift-distance relation generated by the Hubble effect. It appears isotropic only for a unique rest frame [41]. I argued that the Friedmann-Lemaître universe has a finite age and therefore a finite light cone. The centre-of-mass frame of this Hubble sphere can be regarded as a preferred frame [42]. After the discovery of the cosmic microwave background radiation by Penzias and Wilson [43], it was predicted that it should have a dipole anisotropy generated by the Doppler effect of the Earth's motion. This dipole anisotropy was predicted in accordance with Lorentz invariance [44] and later discovered experimentally [45]. Peebles called these experiments aether drift experiments [46]. The preferred frames defined by the Robertson-Walker metric, the Hubble effect, and the cosmic microwave background radiation are probably identical. In this case the absolute motion of the Sun was determined by the dipole anisotropy experiments of the cosmic microwave background radiation to be (371 ± 1) km/s.
General Relativity: Rotating Universe and Time Travel
It is well known that planets, stars, and galaxies rotate. So Lanczos and Gamow speculated that the entire universe may rotate and that the rotating universe might have generated the rotation of the galaxies [47]. Gödel was the first to show that a rotating universe is a strict solution of Einstein's field equations for a homogeneous and anisotropic universe. He considered a non-expanding universe and showed that it allows closed time-like curves, i.e. time travel. He predicted that the original order of the rotation axes of galaxies was parallel to the universal rotation axis [39]. Raychaudhuri presented a model for an expanding and rotating universe which is a generalization of both the Friedmann-Lemaître universe and the Gödel universe. This cosmology, too, includes closed time-like curves [48]. Possibly, the Raychaudhuri universe did not start from a singularity (big bang), but from a closed time-like curve, i.e. from a time machine.
Gregory, Thompson, and Tifft discovered that the distribution of the rotation axes for both the spiral and ellipsoidal galaxies of the filament-like Perseus-Pisces supercluster is bimodal. One of the peaks is roughly aligned with the major axis of the supercluster, while the second peak is roughly 90° from the first [49]. This anisotropic distribution cannot be explained by conventional models of galaxy formation. Therefore I suggested that this might be a remnant of the original aligned distribution of galactic rotation axes generated by a rotating universe [50]. A rotating universe with both vorticity and shear would generate an anisotropy of the cosmic microwave background radiation. Collins and Hawking were able to set tight bounds on this effect [51]. However, Korotky and Obukhov showed that the generation of this anisotropy is an effect of shear and not of vorticity alone. So the observed isotropy of the cosmic microwave background radiation does not contradict the idea of a rotating universe, where the rotation period could be as long as the Hubble age of the universe [52]. There is some discussion whether General Relativity could allow local time machines. Carter has shown that the Kerr metric [53] of rotating spherical bodies can generate closed time-like curves [54]. This inspired Tipler to investigate a rapidly rotating cylinder with 100 km length, 15 km radius, 10¹⁴ g/cm³ density, and a rotational speed of 70% of the speed of light. This object yielded closed time-like curves [55]. However, until now it has not been proved that an observer outside the gravitational field would also see time travel. To conclude, General Relativity requires a cosmology which includes a preferred frame, absolute space and time, and which may include a rotating universe and time travel. Such a universe may have originated not from a singularity (big bang), but from a closed time-like curve (time machine).
The Model
The torsion tensor can be viewed as the translational field strength. It represents a closure failure of infinitesimal displacements: infinitesimal parallelograms do not close in a world with torsion. This concept is required by most gauge theories of gravity. The connection used by general relativity [20] is symmetric. After Eddington [56] suggested generalizing general relativity by introducing an asymmetric connection, Cartan [19] associated angular momentum with the antisymmetric part (= torsion) of an asymmetric connection. The introduction of quantum mechanics [16] required a quantum theory of gravity whose quantities are no longer classical, but operators. After Yang and Mills [57] suggested describing quantum field theories by gauge theories, Kibble [58] and Sciama [59] attempted to describe gravity by a gauge theory, where they associated intrinsic spin [60] with Cartan's torsion. The successful description of the quantum field theory of the electroweak interaction by a spontaneously broken gauge theory [3] and the subsequent proof that gauge theories are renormalizable [61] inspired an increasing number of theorists to further develop gauge theories of gravity (for a review see [62]). We will briefly review the arguments for the need for a gauge theory of gravity and the need for a torsion field, which we will show to be massive.
Classical electrodynamics and general relativity have well-known analogies. Resting electric charges are the sources of the static Coulomb field, and rotating electric charges generate an extra magnetic field and an associated Lorentz force. The field equations of classical electrodynamics are the Maxwell equations, where the matter-free equations describe electromagnetic waves. By analogy, resting masses are the sources of the static gravitational field, and rotating masses generate an extra gravitational field associated with the recently discovered [63] Lense-Thirring effect [64]. The field equations of general relativity are the Einstein field equations, where the linearized matter-free equations describe gravitational waves. But there are also well-known differences. Electrodynamics can be quantized, and the Maxwell equations remain the field equations of quantum electrodynamics. Quantization and renormalization are possible because (in rationalized units) the Lagrangian has dimension −4 and the coupling constant dimension zero. By contrast, general relativity cannot easily be quantized, because the Lagrangian has dimension −2 and the coupling constant (Newton's constant) has dimension 2. Hence, a quantum version of general relativity is not renormalizable. The aim is to find a quantum theory of gravity. Quantum field theories have to yield finite results for all orders of perturbation theory. Infinite contributions have to cancel one another via renormalization. The only quantum field theories yet known to be renormalizable are gauge theories [61]. Hence, the aim is to find a (quantum) gauge field theory of gravity. The first step is to find the appropriate gauge group. The group underlying special relativity is the Poincaré group. Since general relativity is locally Lorentz invariant, the Poincaré group is a candidate for the gauge group underlying the gauge theory of gravity [62]. The translational part of the Poincaré group is associated with the energy-momentum tensor and therefore with mass. As the metric tensor is of rank two, the gauge boson (graviton) associated with mass has intrinsic spin two. The Einstein field equations are symmetric and can describe only spinless matter. This is because intrinsic spin is antisymmetric. The description of a Dirac field (which has spin ℏ/2) requires the introduction of torsion (which is antisymmetric) [62]. The need for torsion and its association with angular momentum can be seen as follows. The Maxwell equations do not describe electricity and magnetism equivalently. An equivalent description requires the introduction of magnetic charges, where the U(1) group of quantum electrodynamics is extended to the U(1) × U(1) group. The associated gauge bosons are the Einstein electric photon and the Salam magnetic photon. By analogy, general relativity does not describe the translational part and the rotational part of the Poincaré group equivalently. An equivalent description requires the introduction of torsion (in analogy to magnetic charge). Furthermore, from the analogy between the Lense-Thirring effect and the Lorentz force we can infer the analogy between angular momentum and magnetic charge. Hence, both torsion and angular momentum are analogous to magnetic charge and therefore associated with one another. The effects of orbital angular momentum are already described by general relativity (Lense-Thirring effect, Kerr metric [53]). Hence, only intrinsic spin can be connected with torsion.
The analogy with isospin suggests that spin is not simply a quantum number, but also the source of a gauge field. Like spin, isospin is described by the group SU(2) [65]. When Heisenberg [65] introduced isospin, he supposed the (weak) nuclear force is an exchange interaction, analogous to the spin exchange interaction with which he and Bethe were able to explain ferromagnetism and antiferromagnetism [66]. Later, the Weinberg-Salam theory [3] showed that isospin is not simply a quantum number, but also the source of the weak nuclear interaction. The presented arguments suggest a gauge theory of gravity which requires a gauge boson of spin three that is associated with both torsion and intrinsic spin. Various gauge theories of gravity which include either massless or massive torsion fields have been suggested (for a review and detailed references see [62]). We will now argue for a non-zero rest mass of the tordion. (i) According to gauge theories, charge is conserved if and only if the rest mass of the associated gauge boson is exactly zero. In contrast to total angular momentum, which is the sum of intrinsic spin and orbital angular momentum, intrinsic spin alone is not conserved. Hence, the tordion has to be massive. (ii) Accelerated charges radiate. In rationalized units the spin ℏ/2 of an electron is greater than its electric charge e. If the tordion were massless, then the torsional part of the synchrotron radiation emitted by the electron would be stronger than its electromagnetic part. This would result in a significant difference between the theoretical (according to the standard model) and the actual energy of electrons after acceleration. Such a difference, were it real, is unlikely to have escaped discovery in particle accelerators. (iii) According to Dirac [9], the electric-magnetic duality (i.e. the introduction of magnetic charges) yields quantized electric and magnetic charges. This result, however, is correct if and only if the electromagnetic field (i.e. both photon and magnetic photon) is massless. By contrast, the spin-mass duality introduced by Kibble [58] and Sciama [59] does not yield quantized charges: gravitational mass is not quantized. In the linearized approximation of general relativity, a massive graviton would change the deflection of light by the Sun to 3/4 of its Einstein (and observed) value [67]. Hence, to reconcile the spin-mass duality and a massless graviton with non-quantized mass, we have to assume that the tordion is the massive gauge boson. (iv) In rationalized units both Fermi's constant [68] of V−A theory [69] and Newton's constant have dimension two. In Weinberg-Salam theory [3], Fermi's constant turns out to be, up to a constant of order unity, the dimensionless coupling constant times the square of the inverse W-boson rest mass. By contrast, Newton's constant is equal to the square of the inverse Planck mass, which, however, is not the rest mass of the (massless) graviton. A possibility is to interpret the Planck mass as the rest mass of the second gauge boson of gravity, the tordion. To conclude, the quantum field theory of gravity is presumably a gauge theory whose underlying group is the Poincaré group. This theory is supposed to include a massive torsion (and associated intrinsic spin) field which breaks the gauge invariance (spontaneously?). The Lagrangian is expected to have dimension −4 and the coupling constant should be dimensionless. Finally, the classical, low-energy limit has to regain general relativity.
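Point (iv) is easy to check numerically: the mass scale set by Newton's constant is the Planck mass, which the text interprets as the tordion rest mass. A minimal sketch using standard constants (not values from the paper):

import math

hbar = 1.054_571_8e-34   # J*s
c = 2.997_924_58e8       # m/s
G = 6.674_30e-11         # m^3/(kg*s^2)

m_planck = math.sqrt(hbar * c / G)                    # kg
E_planck_GeV = m_planck * c**2 / 1.602_176_634e-10    # 1 GeV = 1.602e-10 J
print(f"m_Planck = {m_planck:.3e} kg = {E_planck_GeV:.2e} GeV")
# ~2.18e-8 kg, i.e. ~1.22e19 GeV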
What is Cartan's Torsion?
When a four-vector $C^k$ is parallelly displaced from the four-position $x^k$ to $x^k + dx^k$, it changes according to the prescription $dC^k = -\Gamma^k_{ij}\,C^i\,dx^j$. This is the definition of the position-dependent affine connection $\Gamma^k_{ij}$. According to general relativity [20], it has only a symmetric part, which is called the "Christoffel symbol". The anti-symmetric part of the affine connection is called "Cartan's torsion" [19], $S_{ij}{}^k = \tfrac{1}{2}(\Gamma^k_{ij} - \Gamma^k_{ji})$. According to general relativity, the torsion tensor is zero. The introduction of a non-zero torsion tensor therefore means an extension of general relativity. Quite remarkably, the torsion tensor transforms as a tensor under local Lorentz transformations [70], whereas the Christoffel symbol does not. The torsion tensor can be viewed as the translational field strength. It represents a closure failure of infinitesimal displacements: in spacetimes which include torsion, infinitesimal parallelograms do not close. We know from Einstein's general relativity [20] that gravitational mass is connected with curvature via $G_{ij} = \kappa\,\Sigma_{ij}$, where $G_{ij} = R_{ij} - \tfrac{1}{2}g_{ij}R^k{}_k$ is the Einstein tensor, $\Sigma_{ij}$ is the stress-energy (energy-momentum) tensor, $R_{ij}$ is the Ricci tensor, $g_{ij}$ is the metric tensor, $R^k{}_k$ is the Ricci scalar, and $\kappa = -8\pi G/c^4$ is the Einstein constant. Analogously, intrinsic spin is connected with Cartan's torsion via $S_{ijk} = \kappa\,\tau_{ijk}$, where $\tau_{ijk}$ is the spin tensor [62]. These equations show the analogy between the duality of mass and spin and the duality of curvature and torsion, respectively. Directly from the definition of the affine connection one obtains the differential equation of autoparallel curves, $\frac{d^2x^k}{ds^2} + \Gamma^k_{ij}\frac{dx^i}{ds}\frac{dx^j}{ds} = 0$, where the infinitesimal interval ds between $x^k$ and $x^k + dx^k$ is given by $ds^2 = g_{ij}\,dx^i\,dx^j$. Quite remarkably, only the symmetric part of the metric tensor contributes to the square of the infinitesimal interval. Readers who would like to learn more about the formalism of torsion are invited to read the excellent review, Ref. [62].
Why Do We Need Torsion?
The energy-momentum tensor $\Sigma_{ij}$ of a Dirac field Ψ (spin-1/2 field [9]) is anti-symmetric [71], where $\nabla_i$ denotes the covariant derivative. By contrast, the energy-momentum tensor of general relativity [20] is symmetric. In order to couple a spinor field (Dirac field) to a gravitational field, one has to use an energy-momentum tensor which includes anti-symmetric parts. Therefore general relativity has to be generalized by the introduction of Cartan's torsion [58]. I have shown that the duality between mass and spin is analogous to the duality between electric charge and magnetic charge [18]. The electric-magnetic duality is expressed by $\partial_j F^{ij} = J^i$ and $\partial_j f^{ij} = j^i$, where $J^i$ is the electric four-current, $j^i$ is the magnetic four-current, and the field strength tensors are given by $F^{ij} = \partial^i A^j - \partial^j A^i$ and $f^{ij} = \partial^i a^j - \partial^j a^i$, where $A^j$ is the electric four-potential, which corresponds to Einstein's electric photon [13], and $a^j$ is the magnetic four-potential, which corresponds to Salam's magnetic photon [10]. Comparison of the equations above demonstrates the analogy between the electric-magnetic duality and the mass-spin duality. The electric-magnetic duality is required to explain the quantization of electric charge [9]. I argued above that magnetic photon radiation may have already been observed by August Kundt in 1885 [12].
Is There Observational Evidence for Torsion?
The rotation axes of the galaxies of the Perseus-Pisces supercluster are aligned. This alignment exists over a distance of at least 40 Mpc (130 million light years) [49]. Such a large-scale alignment cannot be explained within the framework of conventional models of galaxy formation. Therefore I suggested [50] that this alignment is either a topological defect (torsion wall) or a remnant of the original aligned distribution of galactic rotation axes generated by a rotating universe [39].
4 Do We Need Grand Unification and Supersymmetry?
Between 1971 and 1974 supersymmetry was suggested by several researchers independently [72]. In 1976 researchers suggested a local supersymmetry called supergravity [73]. In 1981 Edward Witten showed that supersymmetry can solve several shortcomings of Grand Unified Theories [74]. In 1984 Michael Green and John Schwarz showed that string theory and supersymmetry can be combined; this is the superstring theory [75]. In 1995 Edward Witten showed that the membrane concept can reconcile the 11-dimensional supergravity with the 10-dimensional superstring theory; both theories are limit cases of an 11-dimensional M-theory [76]. Supersymmetric theories predicted that the elementary particles of the standard theory of particle physics (leptons, quarks, photon, gluons, W- and Z-bosons, Higgs boson) have supersymmetric partners. These supersymmetric particles (called neutralinos, photinos, gluinos, winos, zinos, squarks, and sleptons) were all predicted to have rest masses between 50 and 300 GeV. Now the ATLAS Collaboration of the LHC (Large Hadron Collider) has presented data [77] which do not confirm the gluino: it would have been detected if its rest mass were less than 700 GeV. I am not so surprised that signs of light supersymmetric particles have not been detected. I predict that supersymmetry will not be confirmed. My arguments are the following.
(1) The main reason for supersymmetry is that it can explain some shortcomings of minimal Grand Unified Theories, i.e. the mass-hierarchy problem (the fact that the W- and Z-bosons do not have rest masses of 10¹⁵ GeV, although they should have eaten, i.e. coupled to, the Higgs bosons of Grand Unification) and the non-observation of the proton decay (lower limit: mean proton lifetime of 10³³ years). But this argument requires that there is Grand Unification. In 1997 I suggested a generalization of quantum electrodynamics, called quantum electromagnetodynamics [42]. This theory is based on the gauge group U(1) × U(1). In contrast to quantum electrodynamics, it describes electricity and magnetism as symmetrically as possible. Moreover, it explains the quantization of electric charge. It includes electric and magnetic charges (Dirac magnetic monopoles) and two kinds of photon, the conventional Einstein electric photon and the hypothetical Salam magnetic photon. The electric-magnetic duality of this theory reads:
electric charge - magnetic charge
electric current - magnetic current
electric conductivity - magnetic conductivity
electric field strength - magnetic field strength
electric four-potential - magnetic four-potential
electric photon - magnetic photon
electric field constant - magnetic field constant
dielectricity number - magnetic permeability
Because of the U(1) × U(1) group structure and the Dirac quantization condition eg = h (unit electric charge times unit magnetic charge is equal to the Planck constant), this theory is hard to reconcile with Grand Unification, although a group such as SU(5) × SU(5) is in principle not impossible.
(2) Another reason for supersymmetry is that it can explain the existence of (anti-symmetrical) fermions in an otherwise symmetrical theory (such as Special Relativity and General Relativity). However, it has long been known that a generalization of General Relativity which includes anti-symmetry is Einstein-Cartan theory. The affine connection of this theory includes not only the non-Lorentz-invariant symmetrical Christoffel symbol but also the Lorentz-invariant anti-symmetrical torsion tensor. Within the framework of a quantum field theory, the torsion tensor corresponds to a spin-three boson called the tordion, which was introduced in 1976 by F. W. Hehl et al. [62]. In 1999 I discussed the properties of the tordion [18]. Moreover, I suggested that the electric-magnetic duality is analogous to the mass-spin duality. This analogy reads:
• electric charge is to magnetic charge as mass is to spin
• the electric field constant is to the magnetic field constant as the gravitational constant is to the reduced Planck constant
• the electric four-potential is to the magnetic four-potential as the metric tensor is to the torsion tensor
• the electric photon is to the magnetic photon as the graviton is to the tordion
10,431.4
2011-03-01T00:00:00.000
[ "Physics" ]
Unambiguous Entropic Evaluation of the Efficiency of Complicated Technologies of Complex Processing of Natural Resources
State-of-the-art processing of natural resources is characterized by constantly increasing volumes of the mining industry. On the one hand, this leads to the involvement of ever-growing volumes of depleted natural resources in industry, since rich sources have been practically exhausted. On the other hand, ecological requirements on the processing industry are ever growing. These two circumstances make it necessary to advance processing technologies towards the maximal usage of all valuable components of natural raw materials. As examples of such enterprises, we can mention the processing of multicomponent ores of nonferrous metals or the production of various mineral and even metal materials from the Dead Sea water. It is as yet impossible to evaluate unambiguously the total efficiency of such combined industries, which makes it difficult to manage and optimize them. This situation requires the development of a method allowing an unambiguous estimation of the completeness of the complex usage of raw materials at all stages of the technology, which is sometimes rather branched. The criterion of this kind proposed here is based on the properties of entropy, which is a principal invariant of modern natural science. This parameter is perceived ambiguously and is permanently discussed in the technical literature. The physical nature of this parameter is substantiated in detail by the author in [1] [2] [3], where its universality for the analysis of complicated systems during their variation is demonstrated. In the present paper, the development of such a criterion for a complicated technology of complex raw material processing is considered. However, such an approach can also be used for the analysis of complicated technological projects in other fields of human activity. This article represents a continuation of the author's developments [1].
Introduction
The justification of such a criterion can be illustrated best of all on the example of plants producing various products from the same raw material, for example, "Norilsk Nickel" (Russia) or the Dead Sea Works (Israel). The first of these enterprises produces, side by side with nickel, a number of other nonferrous metals. As for the Dead Sea Works, they produce around five or six different products from sea water, each of them being the final product of an individual plant. Many similar examples of multiple usage of natural resources can be given for other countries. To develop the mentioned method, it is necessary, first of all, to solve the problem of the numerical evaluation of the state of a complicated system. Here some explanations are needed. The main problem is reduced to the development of a methodology for the unambiguous numerical estimation of the state of a system of any complexity. As a rule, complicated systems consist of several components, whose number can differ. It is very important to have a notion of the relationships between these components in the system. Rather often, if the number of components is small, it is sufficient to determine the ratio between them. But the most complete idea is provided by the usual percentage reduced to 100%. The estimation can also be reduced to fractions of unity, which correlates it with probability. The probability is determined as $P_i = x_i/\sum_j x_j$. Here $x_i$ can have any dimension (tons, dollars, kilograms, percentage, pieces, etc.).
Entropic Estimations of Complex Systems If a system consists of two components, a specified content of one component automatically defines the content of the other, since the sum of their contents is unity. Hence, for a binary system, a single-valued estimation can be obtained by specifying the content of one of the components. The situation is different if a system consists of more than two components. In this case, the content of one component does not define those of the others. If the contents of all components of the system are specified simultaneously, it gives a multiple (and not a single-valued) estimation. Therefore, in this case we use another characteristic instead of the probability: a measure of uncertainty introduced by Hartley in 1929 [4] [5] [6] [7] and then used by Shannon in 1948 when developing the theory of information. The notion of the measure of uncertainty can be clarified by the following elementary example. We assume that a random value x has k equiprobable outcomes (when tossing a coin, k = 2; when casting a die, k = 6). According to the probability definition, p = 1/k. The uncertainty is a function of the number of outcomes, and it can be denoted by H(x). Shannon [3] has shown that the only suitable function of the number of outcomes is a quantity proportional to the logarithm of the number of outcomes: H(x) = A log k, where A is the proportionality coefficient, H(x) is the uncertainty of a random value, and log k is a quantity determined with accuracy up to a constant, because the base of the logarithm is not determined yet. According to [1], we can write the following for the entropy of a binary system: H = -(P_1 log P_1 + P_2 log P_2), where P_1, P_2 are the probabilities of the components' contents. For a multi-component system, the composition entropy is determined by the expression [1]: H = -Σ_i P_i log P_i. As for the bases of the logarithms, they can be arbitrary; any assumed values give results differing by a constant factor. In the comparison of successive computations in the process of the system's change, the influence of the logarithm base is levelled out. Therefore, for greater convenience in practical computations, we can recommend using decimal logarithms, and for theoretical derivations, natural logarithms. There are no distinctions in kind between them. Having clarified all these nuances, we can pass to the consideration of a general situation in the analysis of the state of the production of many products on the basis of complex usage of a raw material. Evaluation of Complicated Technology Perfection on the Basis of the Entropy Criterion Our analysis is based on the plants realizing multiple usage of natural raw materials mentioned in the introduction. First of all, it is necessary to have data on the number of tons of ore or the number of cubic meters of sea water annually used by the respective plants. Besides, complete data on the material composition of the ore or sea water are necessary. Denote the reference quantity of these parameters by F tons/year. If we denote the fraction of useful components in the raw material by k, then their total amount coming into production is kF tons/year. Besides useful components, there is waste rock in this raw material, whose fraction is denoted by m. Clearly, k + m = 1. The waste rock can contain various substances, sometimes very valuable, but at present they are not the target product.
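For concreteness, here is a minimal sketch (ours, not the author's; the function name and the sample figures are made up) of how the composition entropy defined above can be computed from raw component amounts, using decimal logarithms as recommended for practical computations:

```python
import math

def composition_entropy(amounts, log_base=10):
    """H = -sum(p_i * log(p_i)) for a composition given as raw
    component amounts (tons, dollars, percentages, ...); the
    amounts are first reduced to fractions of unity."""
    total = sum(amounts)
    probs = [x / total for x in amounts]
    # Components with zero content contribute nothing (p log p -> 0).
    return -sum(p * math.log(p, log_base) for p in probs if p > 0)

# A binary system: specifying one content fixes the other.
print(composition_entropy([0.3, 0.7]))
# A hypothetical five-component ore composition, percentage-wise.
print(composition_entropy([40, 25, 15, 12, 8]))
```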
Determine the complexity of the composition of useful components that can be obtained purely theoretically at their ideal extraction. Denote the relative content of each component in this idealized balance by x_i; then their sum is Σ_i x_i = 1. The components can be calculated in fractions of unity or percentage-wise. The entropy of this initial composition is determined, by definition, as H_0 = -Σ_i x_i log x_i. It is also clear that the actually obtained content of each component is calculated as a fraction of the ideal initial composition, and the entropy of the obtained total production is computed by the same expression over these actual fractions. The efficiency of production using a complicated technology can then be unambiguously determined as the ratio of the entropy of the obtained production to the entropy of the ideal initial composition. Besides the total technological evaluation of the entire production complex, the analysis of obtaining each product separately can be performed in a similar way using the analogous relation for a single component. Moreover, it also visually shows the simple percentage extraction of each component separately. This makes it possible to analyze the perfection level of the technology of obtaining each product. It is especially important that using Equation (11) we can analyze the level of the entire combined production. Such analysis allows revealing technological reserves and deciding where and to what extent the technology must be modernized in order to increase the general effect. Such analysis, confirmed by financial calculations, can sometimes even allow a decrease in the output of one product at the expense of increasing that of another, in order to reach an increase in the total effect both technologically and financially. For the sake of simplicity and clearness, we present a concrete example of the application of the suggested method. Usually the number of products made from the same raw material is not very high: five or six at maximum. The calculations are presented in Table 1. To clarify the total technological efficiency of all departments of the enterprise, it is necessary to sum up the initial entropy of the raw material in line 2 and the obtained entropies of manufactures in line 4 and to compose a ratio of the obtained numbers. Besides, this shows the presence of unemployed reserves in the total production. At the same time, the obtained result shows that the general usage of the raw material is satisfactory. Conclusions The possibility of a unique evaluation of a complicated technological process based on the entropic parameter is demonstrated. This parameter is determined for any complicated system that can be interpreted by a certain estimate within the bounds of probability theory or mathematical combinatorics. Both the production process as a whole and its separate steps can be analyzed on the basis of the entropic criterion, and its poorly performing sectors can be revealed. This will make it possible to intensify the cumulative effect at the expense of greater usage of complex raw material components, which determines the assignment of financial means. The unequivocal evaluation gives the production managers a powerful argument for optimal management.
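As a closing illustration, the sketch below (our own, with made-up numbers rather than the paper's Table 1) evaluates the efficiency criterion as the ratio of the production entropy to the entropy of the ideal initial composition; note that, on our reading, the obtained fractions are used as given, without renormalization:

```python
import math

def H(fracs, base=10):
    # -sum(p * log(p)); fractions are used as given (not renormalized),
    # which is our assumption about how the obtained entropy is formed.
    return -sum(p * math.log(p, base) for p in fracs if p > 0)

ideal = [0.50, 0.25, 0.15, 0.10]      # ideal extraction, sums to 1
obtained = [0.45, 0.20, 0.10, 0.05]   # actual fractions of the ideal total

efficiency = H(obtained) / H(ideal)
print(f"E = {efficiency:.3f}")        # E = 1 would mean ideal extraction
```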
2,150.4
2017-02-21T00:00:00.000
[ "Environmental Science", "Engineering" ]
Resources Allocation and Failures in Step Topology under Distributed Computing System In the past years, distributed computing has been gaining popularity due to the reduction in execution time and the low cost involved. On this basis, the use of Mobile Adhoc Networks (MANET) is also increasing worldwide, with the major advantage that no wires are involved and data transfer can be done over virtual paths if the existing path is congested. In the present work, a MANET is considered in the form of a step topology which consists of a heterogeneous collection of devices. The work demonstrates the resource allocation for the execution of tasks, and it covers the selection of the right path in the case of link failures and bypass-link failures. It also covers resource management over the newly proposed step topology. The entire work is modeled with the help of the well-known modeling language known as the Unified Modeling Language (UML), and the model demonstrates the resource allocation for the execution of the tasks. Introduction In the current scenario, many computing labs have shifted from centralized computing systems to distributed computing systems due to the several advantages of distributed systems. In a distributed system, communication among the nodes, called devices, is done by message-passing techniques for the sharing of resources, data, etc. Let us first describe the research work available on the execution of tasks in the critical section under distributed systems. Under mutual exclusion, if one process is inside the critical section then no other process or task is allowed to use the critical section. The mutual exclusion of tasks in distributed systems is well explained in [1]. Under mutual exclusion, management of the resources is a very tough task for the operating system. In this connection, configuration management, fault management, performance management, security management as well as management of resources are well explained in [2]. A well-known reference, Tanenbaum [3], has explained the conditions for mutual exclusion, i.e. only one process can enter the critical section at a time, and only that process may utilize the resources; no other process is allowed to use them. Hwang [4] has explained the handling of tasks under a distributed computing environment along with the data dependencies during the execution of the tasks. In [5], the mutual exclusion of tasks is well described by Milenkovi.
Lamport [6] has described the time and ordering of events according to timestamps for executing processes under mutual exclusion in distributed computing systems. Later, Lamport's mutual exclusion algorithm was modified by Ricart and Agrawala [7]. The problem of mutual exclusion was solved by Maekawa [8] by using sets, proposing a distributed algorithm for the symmetric execution of processes. The performance analysis of network topologies in an agent-based connectivity architecture for decision support systems has been explained in [9]. Since in the present work a new kind of Mobile Adhoc Network (MANET) step topology is considered, it is necessary to explain some of the important references related to mobile adhoc networks. Cheng and Zhaung [10] have proposed the concept of Downward Vertical Handoff (DVH), which changes the mobile connection to a better network. A routing scheme for adhoc wireless networks using a shortest-path technique has been proposed in [11]. Johnson and Maltz [12] have proposed dynamic source routing for adhoc wireless networks. A lightweight mechanism used to perform effective congestion control was explained by Johnson et al. [13]. Later, the performance of the Dynamic Source Routing (DSR) protocol was improved by Das et al. [14]. Yu and Li [15] have developed an analytical model for the analysis and evaluation of routing algorithms, which helped in studying the performance and characteristics of routing algorithms. Ahuja et al. [16] have explained the swarm intelligence technique, which is helpful for finding global solutions to network problems. In the present work the well-known Unified Modeling Language (UML) is used to model the resource allocation, and many researchers have used this platform-independent modeling language for distributed computing systems. Performance metrics for distributed and parallel applications using UML profiles were first suggested by Pllana and Fahringer [17,18]. The well-known researcher H. Gomma [19] has used UML in various fields; the author has used UML for distributed, concurrent and real-time application concepts. The various aspects and versions of UML are well explained by the Object Management Group (OMG) and are available in [20,21]. Recently, Saxena and Zaidi [22] have proposed a static topology for the static interconnection of distributed systems by taking variations in cable segments. A routing protocol for adhoc wireless networks using backpressure restoration has been proposed by Singh et al. [23]. Krunakaran and Thangaraj [24] proposed a cluster-based congestion control protocol for mobile adhoc networks. In the present work, UML modeling is used for the allocation of resources to the computer systems attached under the distributed environment of the MANET. The systems are arranged in a new kind of topology called the step topology. Different kinds of link failures are also observed for the step topology. The different tasks are executed inside the critical section by taking the resources, while following the concept of mutual exclusion under a distributed computing environment. The selection of the path is done with the help of the dynamic source routing protocol. UML class and sequence diagrams are also presented for the completion of the execution of the process.
Distributed System A distributed system consists of multiple computers which are interconnected by message-passing techniques. Each node connected in the network consists of a process and local memory. The message-passing technique allows point-to-point static connections among nodes. A distributed system provides resource sharing, improved performance and reliability, and requires low installation cost. Communication is carried out by message passing among the nodes connected through the network. A sample diagram of the distributed network is shown in Figure 1. In this distributed network system, all devices along with resources are attached through the step topology. The devices may be computer systems, laptops, hand-held devices, or mobile devices, and the resources are loaded on the computer systems. Resources Allocation and Routing Under a distributed computing system, resources can be shared by a number of devices attached through the adhoc network represented in the form of a step topology. When a number of systems are connected around the globe under a distributed computing system, they can share the resources which are flowing in the adhoc network. The categories of resources may be file sharing, data sharing, video sharing, audio sharing, etc. In this system, if we want to execute a task on a device allocated at a very far distance, then the task can be executed on this device by taking it on remote. This is called remote access of the devices and sharing of the resources of that device. Under the distributed system, the devices are shared as represented in the figure above. From the complete step topology as defined in [22], the authors have taken a segment of five nodes as shown in Figure 2. For each node a routing table is defined according to the number of hops as represented in the figure. The five nodes can share the resources according to the path selection method. If any link fails, the desired node cannot share the resources for the execution of a process. In the adhoc network the selection of the path is based on Dynamic Source Routing (DSR), which determines the destination path according to a virtual link if an intermediate link fails or is congested in the distributed network. DSR Path Selection Method Dynamic Source Routing is designed for multihop wireless adhoc networks, and in this technique the sender determines the path from the source to the destination node. It is used for small distances, between 5 and 10 hops; it is also based on a link-state algorithm, as each node is capable of saving the best way to reach the destination. If any change appears in the network topology then flooding occurs in the network, which is controlled by DSR by selecting a virtual route. In the network, the delay can also be computed as Delay = Number of Packets Received / Simulation Time (2). By Pass Link Failure In this case, if the bypass route BY in the network fails, then A cannot transmit data via link Y and the route ABYDE does not come into existence; a virtual bypass link ADE then comes into existence, the other route ABCE also comes into existence, and the data is transferred to the destination device. This is shown in Figure 5.
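To make the path-selection behaviour concrete, here is a small sketch (our own illustration; the adjacency list is a hypothetical stand-in for the paper's five-node step topology with bypass node Y, and plain breadth-first search stands in for the full DSR protocol):

```python
from collections import deque

# Toy step topology with bypass node Y (hypothetical adjacency list).
links = {
    "A": ["B", "C"],
    "B": ["A", "C", "Y"],
    "C": ["A", "B", "E"],
    "D": ["E", "Y"],
    "E": ["C", "D"],
    "Y": ["B", "D"],
}

def shortest_path(src, dst, failed=frozenset()):
    """Fewest-hops path by BFS, skipping failed nodes and failed links
    (a failed link is given as a frozenset of its two endpoints)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links[node]:
            if nxt in seen or nxt in failed:
                continue
            if frozenset((node, nxt)) in failed:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # destination unreachable

print(shortest_path("A", "E"))                # ['A', 'C', 'E'], the primary path
print(shortest_path("A", "E", failed={"C"}))  # bypass ['A', 'B', 'Y', 'D', 'E']
print(shortest_path("A", "E", failed={frozenset(("B", "Y"))}))  # B-Y link down
```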
UML Class Diagram A class diagram shows the static representation of the research problem, in which a number of classes are arranged for interaction among them by using association, aggregation, inheritance, etc. A UML class model is designed for the allocation of the resources and is shown in Figure 6. From Figures 4 and 5, the transfer of data depends upon the speed of the network measured as throughput, which is the number of useful bits per unit of time. The delivery ratio is given by the ratio of the number of packets received to the number of packets sent, and the delay can be computed as in Equation (2) above. In the class model, only one process can enter into the critical section, and requests for the grant of the resources are handled by the class named Resources. After executing the task, the output is transferred to the Process via the Memory and Register classes. In this interpretation the task/process may be treated as a subroutine, subprogram, macro, or a segment of a program for the transferring of data. UML Sequence Diagram A UML sequence diagram shows the dynamic behavior of the system, and it shows the working of the system according to the clock of the device, which is always moving forward. The vertical lines show the lifelines of the objects, which are represented at the top. The objects are initialized, and after use they are automatically destroyed in the dynamic modeling. The difference between the end of the object and the start of the object is the lifeline of the object. The execution time of the process can be represented as the lifeline of the object assigned to the process. From the literature, it is observed that many researchers use the UML sequence diagram for representing the dynamic aspects of research problems; therefore, a UML sequence diagram is designed for the allocation of the resources to the process, as represented in Figure 7. In the sequence diagram, a Process object requests the threads. If a thread is available then it is assigned to the process; if not available, the process will wait for the thread assignment. When the thread is assigned, it will send a request to the processors, which are attached under the distributed computing system, for assigning the critical section. If a processor is available then it will search for the critical section, and as per the availability of the critical section the process is executed after getting the resources from the Resource object; finally the output data is transferred into the memory, and the Memory object sends the output to the Process object. After the completion of execution, the Process object is terminated. The entire working of the dynamic modeling is shown in Figure 7. Conclusion and Future Work From the above work, it is observed that UML is a powerful modeling language used to represent the static and dynamic behavior of a research problem. In the above, link failures in the MANET arranged in the new kind of step topology developed by the authors, and the procedure for resource allocation, are described through the modeling. The present work can be extended by assigning loads to each node attached through the step topology, and the speed of data transfer and the delay in the packets can be measured. In place of a MANET, the step topology can also be used for the static interconnection of devices.
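As a minimal sketch of the mutual-exclusion behaviour that the class and sequence diagrams describe (our illustration; the task names and the "printer" resource are made up), a lock can stand in for the critical section:

```python
import threading

critical_section = threading.Lock()  # at most one process may hold it
outputs = []                         # stands in for the Memory class

def process(task_id, resource):
    # A process requests the critical section; mutual exclusion
    # guarantees that only one task executes inside it at a time.
    with critical_section:
        outputs.append(f"task-{task_id} used {resource}")

threads = [threading.Thread(target=process, args=(i, "printer"))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(outputs)
```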
The different paths from source (N1) to destination (N5) are shown in Figure 3. Figure 1: the five nodes are arranged for sharing of the resources, and after the execution of a process acknowledgements are transferred to the desired node. Figure 4: when node C fails, the primary path ACE is removed, which shows that A cannot transmit data via link C; a virtual path is then created by the bypass method via route ABYDE, which becomes the primary path to transmit the data. As per Figure 2, the five computer systems are arranged in the step topology, transferring data from the source node to the destination node as per the path selection list represented in Figure 3. The UML class diagram represented in Figure 6 is designed for N processors in general and consists of nine UML classes, namely Server, Thread, Process, CIN, Processor, Critical_Section, … Figure 6: UML class model for resources allocation. Figure 7: UML sequence diagram for resources allocation.
2,945.6
2013-01-30T00:00:00.000
[ "Computer Science" ]
Selecting effective siRNA sequences by using radial basis function network and decision tree learning Background Although short interfering RNA (siRNA) has been widely used for studying gene functions in mammalian cells, its gene silencing efficacy varies markedly and there are only a few consistencies among the recently reported design rules/guidelines for selecting siRNA sequences effective for mammalian genes. Another shortcoming of the previously reported methods is that they cannot estimate the probability that a candidate sequence will silence the target gene. Results We propose two prediction methods for selecting effective siRNA target sequences from many possible candidate sequences, one based on the supervised learning of a radial basis function (RBF) network and the other based on decision tree learning. They are quite different from the previous score-based siRNA design techniques and can predict the probability that a candidate siRNA sequence will be effective. The proposed methods were evaluated by applying them to recently reported effective and ineffective siRNA sequences for various genes (15 genes, 196 siRNA sequences). We also propose a combined prediction method of the RBF network and decision tree learning. As the average prediction probabilities of gene silencing for the effective and ineffective siRNA sequences of the reported genes by the proposed three methods were respectively 65% and 32%, 56.6% and 38.1%, and 68.5% and 28.1%, the methods imply high estimation accuracy for selecting candidate siRNA sequences. Conclusion New prediction methods were presented for selecting effective siRNA sequences. As the proposed methods indicated high estimation accuracy for selecting candidate siRNA sequences, they should be useful for many other genes. Background Although RNA interference (RNAi) has been successfully used for studying gene functions in both plants and invertebrates, many practical obstacles need to be overcome before it becomes an established tool for use in mammalian systems [1][2][3][4][5][6]. One of the important problems is designing effective siRNA sequences for target genes. The short interfering RNA (siRNA) responsible for RNA interference varies markedly in its gene silencing efficacy in mammalian genes, where the gene silencing effectiveness depends very much on the target sequence positions (sites) selected from the target gene [7,8]. Since different siRNAs synthesized for various positions induce different levels of gene silencing, the selection of the target sequence is critical to the effectiveness of the siRNA. We therefore need useful criteria for gene silencing efficacy when we are designing siRNA sequences [9,10]. Zamore et al. and Jayasena et al. showed that the 5' end of the antisense strand that was incorporated into the RNA-induced silencing complex (RISC) more efficiently was less tightly paired to its complement and began with an A-T pair, whereas the strand incorporated less efficiently had a G-C terminus [11,12]. Other factors reported to be related to gene silencing efficacy are GC content, sequence features, specific motif sequences and secondary structures of mRNA. Several siRNA design rules/guidelines using efficacy-related factors have been reported [13][14][15][16][17]. Although sequence characteristics seem to be the most important factor determining effective siRNA sequences, there are few consistencies among the reported rules/guidelines [18][19][20][21][22].
This implies that these rules/guidelines might result in the generation of many candidates and thus make it difficult to extract a few for synthesizing siRNAs. Furthermore, there is in RNAi a risk of off-target regulation: a possibility that the siRNA will silence other genes whose sequences are similar to that of the target gene. When we use gene silencing for studying gene functions, we have to first somehow select high-potential siRNA candidate sequences and then eliminate possible off-target ones [23]. Here we therefore focus on identifying high-potential siRNA sequences from many possible candidates and propose prediction methods for selecting effective siRNA target sequences by using the radial basis function (RBF) technique and decision tree learning on a large number of known effective and ineffective siRNAs [24][25][26]. We also propose a combined prediction method of the RBF network and decision tree learning. The effectiveness of the proposed methods was confirmed by using them to evaluate siRNA sequences recently reported to effectively or ineffectively suppress the expression of various genes (see Methods). As the average prediction probabilities of gene silencing for the effective and ineffective siRNA sequences of the reported genes by the proposed three methods were respectively 65% and 32%, 56.6% and 38.1%, and 68.5% and 28.1%, the methods imply high estimation accuracy for selecting candidate siRNA sequences. Although the proposed methods are different from the previous scoring methods and are therefore difficult to compare with them, the evaluation results indicate that the proposed methods would be useful for many other genes. They will therefore be useful for selecting siRNA sequences for mammalian genes. Results and Discussion We propose two prediction methods for selecting effective siRNA sequences from many possible candidate sequences, one based on the supervised learning of an RBF network and the other based on decision tree learning. Learning based on the RBF network and the decision tree A radial basis function (RBF) network is a type of artificial neural network for application to problems of supervised learning, such as regression, classification and time series prediction. As RBF networks are nonparametric models, there is no a priori knowledge about the function that is to be used to fit the training set [24,25]. RBF networks are supervised learning models with a single middle layer of units. They are similar to back-propagation neural networks but usually faster to train, because the RBFs used in the units mean that fewer weight adjustments are needed. Also, RBF networks tend to be more resistant to noisy data than back-propagation networks. Decision tree learning is one of the most widely used and practical methods for inductive inference. A decision tree is a tree in which each branch node represents a choice between a number of alternatives, and each leaf node represents a classification or decision [26]. The proposed algorithms of the RBF network and the decision tree learning for selecting effective siRNA sequences are described in Methods.
Verification of the proposed methods After carrying out the learning of the RBF network and decision tree using 860 effective and 860 ineffective sequences, we obtained the eight clusters (C1 to C8) listed in Table 1. Prediction analysis by the RBF network The average prediction probability of gene silencing for the MG1 effective siRNA sequences was 66.3% with a standard deviation of 23.2%, whereas the average probability for the ineffective siRNA sequences was 33.6% with a standard deviation of 17.2%. As there is a clear difference between the prediction probabilities of the effective and ineffective siRNA sequences, the predicted probabilities correspond to the effectiveness indication of the proposed method. The average prediction probabilities of effective siRNA sequences for MG2, MG3, MG4 and MG5 were respectively 66% (standard deviation: 17.4%), 57.4% (21.9%), 78.3% (16.7%) and 57.9% (16.7%), whereas the average prediction probabilities of the corresponding ineffective siRNA sequences were 25.5% (19.7%), 40.7% (21.4%), 20.7% (6.2%) and 30.1% (15.4%). As there are also clear differences between the averages of the effective and ineffective siRNA sequences for these genes, the individual predicted probabilities indicate the effectiveness of the proposed method. Relations between the average prediction probabilities of the effective and ineffective siRNA sequences for the recently reported siRNAs are shown in Figure 4. With regard to gene classes, MG1, MG2 and MG5 indicate distinctions between the effective and ineffective siRNAs more clearly than MG3 does, and MG4 indicates distinctions remarkably clearly. These results therefore imply that there are some differences in the individual nucleotide frequencies at each position of the siRNAs effective for these gene classes. Although MG3 indicates differences between the effective and ineffective siRNAs, the ratios of the effective to ineffective ones are less than 20%. This implies that there is no big difference between the individual nucleotide frequencies of the siRNAs effective and ineffective for silencing this class of genes. The entire average of 103 effective sequences for these genes was 65% (20.5%), whereas that of 93 ineffective ones was 32% (19.1%). Prediction analysis by the decision tree learning We also computed the average prediction probabilities for MG1 to MG5 by using the decision tree learning. Relations between the average prediction probabilities of the effective and ineffective siRNA sequences are shown in Figure 5. Comparing Figure 4 with Figure 5, we can understand the differences between the average prediction probabilities of the RBF and decision tree methods. Although the average prediction probability for MG1 effective siRNA sequences was 53% (20%) by the decision tree learning, the corresponding probability by the RBF network was 66.3% (23%). This is 13% higher than that of the decision tree learning. There are similar relations among the average prediction probabilities for MG2 to MG5. The entire average prediction probability of 103 effective siRNA sequences for these genes was 56.6% (18.9%), whereas that of 93 ineffective siRNA sequences was 38.1% (16.3%). Although the method of the RBF network might be superior to that of the decision tree learning, the different results imply that the two methods have their own prediction criteria. Combined method of the RBF network and decision tree learning Since there were different prediction features in the two methods, we combined both methods to increase prediction capability.
That is, if a candidate sequence is predicted as a high prediction probability one in either method, it can be inferred to be a high prediction probability one. For example, if some sequence among the MG2 effective siRNAs were predicted as 50% gene silencing by the RBF network and the same sequence were predicted as 65% by the decision tree learning, it would be considered as 65% gene silencing in the combined method. The average prediction probabilities of gene silencing for various genes by using the combined method are shown in Figure 6. It is clear that the combined method indicates better prediction probabilities for MG1 to MG5 than those of the RBF network and decision tree learning alone. The average prediction probabilities for the total effective and ineffective siRNA sequences are respectively 68.5% (17.7%) and 28.1% (17.1%). Comparison with other reported methods The proposed methods use the supervised learning techniques of the RBF network and decision tree for selecting effective siRNA candidates, whereas most of the previous methods use scoring techniques [27]. Although the proposed methods can estimate the probability of gene silencing in the range from 0 to 1, the scoring methods cannot indicate this probability. A scoring method basically sets score values for candidate siRNA sequences according to the designated design rules. Consequently, if an siRNA candidate for a specific gene completely satisfies the required design rules, it is expected to get a high score. Even though a high-potential siRNA would be obtained, however, it is difficult to estimate the probability that this siRNA would actually accomplish the expected gene degradation. In addition, as the previous scoring methods are dependent on their designated rules, the obtained scores vary depending on the individual rules. It is therefore quite difficult to compare these different scoring methods with the proposed methods. As the important role of the scoring methods is to show the priority of the siRNA candidates, it is necessary to be clear as to the score differences between effective and ineffective siRNAs. That is, the scores of the effective siRNAs should be indicated by a set of high values, whereas those of the ineffective ones should be indicated by a set of low or negative values. From this point of view, we examined scores of the siRNAs effective and ineffective for MG1 to MG5 by using the previously reported scoring methods [27]. As a result, it was clear that the previous methods do not always clearly distinguish between effective and ineffective siRNA sequences (Fig. 7); in some cases the order is even reversed, that is, the scores of the ineffective siRNAs are larger than those of the effective ones. Figure: decision tree diagram for the known 860 effective and 860 ineffective siRNA sequences. The top of a branch node indicates the position and nucleotide attribute, e.g., "4, T or A" represents cDNA position 4 with the nucleotide T or A; "X" indicates an arbitrary nucleotide, i.e., A, G, C or T. The bottom of a branch node shows yes (Y) and no (N). A leaf node indicates the number of effective siRNA sequences and its percentage, e.g., "299, 74%" means that the number of effective siRNA sequences is 299 and its percentage is 74% (= 299/404). In addition, although the methods of Ui-Tei et al.
and Amarzguioui and Prydz provide correspondences between the individual average scores and the siRNAs effective and ineffective for MG1 to MG5, the relative score differences between the effective and ineffective siRNAs are not large (Fig. 7). In the case of using the method of Ui-Tei et al., for example, the average scores of the siRNAs effective and ineffective for MG1, MG3, and MG4 are respectively 0.8 and -1, 0.86 and -0.4, and 0.86 and 0.29. These results imply that this method might result in producing many same-score siRNA candidates because of the difficulty of setting the candidate priorities. The proposed method, on the other hand, by estimating the gene silencing probability of the siRNA candidates can, as shown in Figure 6, clearly indicate differences between effective and ineffective siRNAs. This therefore implies that the proposed method can easily be used for selecting high-potential siRNA sequences. Conclusion We proposed two prediction methods for selecting effective siRNA target sequences from many possible candidate sequences by using a radial basis function (RBF) network and decision tree learning. They are quite different from the previous score-based siRNA design techniques and can predict the probability that a candidate siRNA sequence will be effective. The proposed methods were evaluated by applying them to recently reported effective and ineffective siRNA sequences for various genes. In addition, we also proposed the combined method of the RBF network and decision tree learning. As the average prediction probabilities of gene silencing for the effective and ineffective siRNA sequences of the recently reported genes by the proposed three methods were respectively 65% and 32%, 56.6% and 38.1%, and 68.5% and 28.1%, the methods imply high estimation accuracy for selecting candidate siRNA sequences. The evaluation results indicated that the proposed methods would be useful for many other genes. They should therefore be useful for selecting siRNA sequences for mammalian genes. Figure: prediction probability distributions of siRNA sequences effective and ineffective for MG1 to MG5 by the proposed RBF method. Supervised learning for effective siRNA classifications by using the RBF network Preparation To use an RBF network for selecting effective siRNA sequences, we need to represent the individual nucleotides (A, G, C and T) as numerical data. We therefore transform the symbols A, G, C and T into the following numerical representations: A = 1, G = 2, C = 3 and T = 4. Other numerical data representations for the individual nucleotides are, of course, also possible. The RBF network can be constructed by adding the hidden and output layers as shown in Figure 8. To carry out the supervised learning for effective siRNA classifications by using the RBF network, we partitioned the data (known effective and ineffective siRNAs) into two sets, one of training data and the other of validation data. The processes of the classifications are carried out in two phases: training and validation. Training phase The training of the RBF network proceeds in two steps. First the hidden layer parameters are determined as a function of the input data (vectors), and then the weights between the hidden and output layers are determined by comparing the target data and the output of the hidden layer. The hidden layer parameters to be determined are the parameters of hyperellipsoids that partition the input data (vectors) into discrete clusters or regions.
The parameters locate the center (i.e., the mean) of each ellipsoid's (region's or cluster's) basis function and describe the extent or spread of the region (i.e., the variance or standard deviation). The centers of the individual clusters are determined as follows: (1) Randomly choose m vectors from the input data set to be the centers of m basis functions. (2) For each vector i in the input dataset compute the Euclidean distance D_i,m to each of the m basis function centers, D_i,m = ||X_i - M_m||, where i is the input vector number, i = 1, 2, ..., TN (TN being the maximum number of vectors in the set of training data), X_i is the i-th input vector, X_i = (x_i,1, x_i,2, ..., x_i,19), and M_m is the location vector or center of the basis function for hidden node m, M_m = (μ_m,1, μ_m,2, ..., μ_m,19). (3) Determine for each input data vector the closest basis function center: C_bf,i = Min{D_i,1, D_i,2, ..., D_i,m} for i = 1, 2, ..., TN, (2) where C_bf,i is the closest basis function for the input vector i. Figure: prediction probability distributions of siRNA sequences effective and ineffective for MG1 to MG5 by the proposed decision tree method. The radial basis function GR(i, m) for the hidden unit m output of the input vector i is defined as a Gaussian function: GR(i, m) = exp(-D_i,m^2 / (2σ_m^2)), where σ_m^2 is a measure of the size of the cluster m (i.e., the variance or the square of the standard deviation). Then all that remains is to find the linear combination of weights that produces the desired output (target) values for each input vector. Since this is a linear problem, convergence is guaranteed and computation proceeds rapidly. This task can be accomplished with an iterative technique based on the perceptron training rule or with various other numerical techniques. Technically, the problem is a matrix inversion problem, T = BW, where T is the target vector, W is the to-be-determined weighting vector and B is the matrix of output values from each hidden unit in response to the input data (calculated from the basis functions, e.g., equation (4)). The matrix is usually not square, so a pseudo-inverse may be used to give a minimum least-squares solution. In the case of the supervised learning, we have already obtained gene silencing results for all input vectors, i = 1, ..., TN. Figure 4: comparison of the average prediction probabilities of the effective ("Effect") and ineffective ("Ineffect") sequences for MG1 to MG5 by the RBF method. Figure 5: comparison of the average prediction probabilities of the effective ("Effect") and ineffective ("Ineffect") sequences for MG1 to MG5 by the decision tree learning method. Therefore, w_1, w_2, ..., w_m are determined by solving the above linear equations. After determining the weighting variables, we can compute the percentages of effective and ineffective siRNAs in the individual clusters. Validation phase To evaluate whether the RBF network carried out appropriate (not overtrained) classifications, we verified the individual clusters in the classifications by using the validation data. The differences between the percentages of effective and ineffective siRNAs for the training and validation data are computed for the individual clusters.
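Stepping back, the training phase just described amounts to: pick m input vectors as centers, turn distances into Gaussian features, and solve a linear least-squares problem for the output weights. The NumPy sketch below is our illustration of that pipeline (the data, m and the per-center sigma heuristic are made up; the paper's own cluster-size estimates are not reproduced); the validation phase is discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: TN encoded 19-nt siRNA vectors (A=1, G=2, C=3, T=4)
# and binary targets (1 = effective, 0 = ineffective).
TN, m = 200, 8
X = rng.integers(1, 5, size=(TN, 19)).astype(float)
T = rng.integers(0, 2, size=TN).astype(float)

# (1) Randomly choose m input vectors as basis-function centers M_m.
centers = X[rng.choice(TN, size=m, replace=False)]

# (2) Euclidean distances D_{i,m} from each input vector to each center.
D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

# Gaussian basis outputs GR(i, m) = exp(-D^2 / (2 sigma_m^2)); a single
# heuristic sigma per center is our assumption, not the paper's choice.
sigma = D.mean(axis=0)
B = np.exp(-(D ** 2) / (2 * sigma ** 2))

# Solve T = B W for the weights via the pseudo-inverse, as in the text.
W = np.linalg.pinv(B) @ T

# Predicted gene-silencing probabilities, clipped to [0, 1].
probs = np.clip(B @ W, 0.0, 1.0)
print(probs[:5])
```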
Returning to the validation of the RBF classifications: if there are few differences between the percentages of effective and ineffective siRNAs for the training and validation data in some classification, we can infer that the classification was carried out appropriately. If, on the other hand, there are large differences between them, we must conclude that the classification was not appropriate. The differences therefore indicate the effectiveness of the individual classifications by the RBF network. The summation of the differences, the entire error of this partition (cluster) number m, is used to compare the error of this partition with the errors of other partitions (clusters). Determination of the number m of clusters The number m of basis functions corresponds to the number of partitions (clusters) and is determined on the basis of the minimum error of the individual clusters by using the validation data. That is, after carrying out several classifications while changing the number m of clusters, the errors of the individual clusters are checked, and the number of clusters yielding the minimum error is the desired number, i.e., the optimal classification. Figure 6: average prediction probabilities for MG1 to MG5 in this study. Effective ("Effect") and ineffective ("Ineffect") siRNAs are predicted using decision tree learning (DT), the RBF network (RBF) and the combined method (DT+RBF). To carry out the supervised learning for effective siRNA classifications by using the decision tree learning, we partitioned the training instances into two sets, one for the growth of the decision tree (training data) and the other for the decision tree pruning (validation data). The processes of the classifications are carried out in two phases: the growth and the pruning of the decision tree. The growth of the decision tree The algorithm, in outline, is as follows: (1) if all the instances belong to a single class, there is nothing to do (except create a leaf node labeled with the name of that class); (2) otherwise, for each attribute that has not already been used, calculate the information gain that would be obtained by using that attribute on the particular set of instances classified to this branch node. The information gain can be computed in the following way [28]: the class information of a node is I(p, n) = -p/(p+n) log2(p/(p+n)) - n/(p+n) log2(n/(p+n)), where p is the number of effective siRNA sequences for this attribute and n is the number of ineffective siRNA sequences; the information gain of an attribute is then obtained by subtracting from I(p, n) the weighted average of the same quantity over the branches induced by that attribute; (3) use the attribute (position) with the greatest information gain as a branch node; (4) if the information gain becomes less than the specified criterion, stop the growth of the decision tree and create leaf nodes. Decision tree pruning Working backwards from the bottom of the tree, the subtree starting at each nonterminal node is examined. If the error (misclassification) rate on the validation data improves by pruning it, the subtree is removed. The process continues until no improvement can be made by pruning a subtree. Training, validation and evaluation data of the proposed methods Training and validation data As effective data, we collected 860 effective siRNA sequences (more than 80% gene silencing at the protein level) from 503 different cDNAs reported in references in the PubMed database. We also randomly generated 860 ineffective siRNA sequences as ineffective data. This is because we know from empirical knowledge that randomly generated siRNA sequences are less effective in gene silencing.
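For concreteness, a small sketch of the I(p, n) entropy and information-gain computation used to grow the tree (our illustration; the split counts below are hypothetical, loosely echoing the "299, 74%" leaf mentioned in the figure caption):

```python
import math

def info(p, n):
    """I(p, n): class entropy of a node holding p effective and
    n ineffective siRNA sequences (0 log 0 is taken as 0)."""
    total = p + n
    return -sum((c / total) * math.log2(c / total)
                for c in (p, n) if c > 0)

def information_gain(p, n, splits):
    """Gain of an attribute splitting (p, n) into child nodes with
    the given (p_v, n_v) counts, weighted by child size."""
    total = p + n
    remainder = sum((pv + nv) / total * info(pv, nv) for pv, nv in splits)
    return info(p, n) - remainder

# Hypothetical attribute, e.g. "position 4 is T or A", on the 860/860 data:
print(information_gain(860, 860, [(299, 105), (561, 755)]))
```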
These effective and ineffective siRNAs were used as the training and validation data under the partitioning described above. Figure: RBF network representation of the relations between effective and ineffective siRNA sequences.
5,236
2006-12-18T00:00:00.000
[ "Biology", "Computer Science" ]
Determining Thermal Conductivity of Small Molecule Amorphous Drugs with Modulated Differential Scanning Calorimetry and Vacuum Molding Sample Preparation Thermal conductivity is a material-specific property which influences many aspects of pharmaceutical development, such as processing, modelling, analysis, and the development of novel formulation approaches. We present a method to measure the thermal conductivity of small molecule organic glasses, based on a vacuum molding sample preparation technique combined with modulated differential scanning calorimetry. The method is applied to the two amorphous model compounds indomethacin and celecoxib. The measured values of below 0.2 W/m °C indicate very low thermal conductivity of the amorphous compounds, within the range of organic liquids and low-conducting polymers. Introduction The ability of a specific sample to conduct heat, measured as the material's thermal conductivity k, is an important parameter for many applications and theoretical considerations. Whilst the determination of the thermal conductivity of large molecule samples, such as polymers, is a long-established procedure in many fields of materials science [1], there is a considerable literature gap for small molecule organic glasses, especially active pharmaceutical ingredients (API). However, converting an initially crystalline API to its amorphous form is a promising strategy to counteract the drug development challenge of low aqueous solubility, especially if oral drug delivery is preferred [2,3]. Determining thermal conductivity is thus becoming more interesting and a necessity in several methods related to the research and development of amorphous APIs. Thermal properties are particularly important in drug processing. Milling, for example, can induce a mechanically activated disordering of the crystalline API, and thermal conductivity strongly influences the temperature increase needed for complete amorphisation [4]. Furthermore, developing novel formulation approaches such as in situ amorphisation (for example, the conversion of a drug to its amorphous form prior to administration to the patient with the help of microwave heating [5]) would benefit from having established thermal conductivity values. Moreover, thermal conductivity is an important input parameter in many solid-state models, including models for process flow, e.g., in hot melt extrusion [6,7], or models that help understand material responses to recently developed analytical techniques [8]. Thermal conductivity and diffusivity meters, which often utilize the flash or guarded heat flow method, are rarely part of general physicochemical characterization equipment, especially in pharmaceutical laboratories. Small-scale sample availability, sample geometry or the accessible temperature range can further limit the use of these instruments. Therefore, it is the aim of this communication to offer a practical guide to a more readily available alternative for heat conductivity measurements of small molecule API glasses. The method is based on combining vacuum molding sample preparation with regular modulated differential scanning calorimetry (mDSC), utilizing a previously reported method [9]. Sample preparation tools using vacuum compression molding are increasingly becoming part of the standard analytical equipment within pharmaceutical research and development [10][11][12], and DSC is a regular part of sample characterization, in particular for amorphous compounds.
Several methods and a few application notes published by thermal equipment manufacturers using DSC to measure thermal conductivity are described in the literature [9,[13][14][15][16][17]]. However, all of these studies focus on well-known polymeric samples. Only one study, which employed a DSC method similar to this communication, targeted small molecule samples such as active pharmaceutical ingredients. The aim of that study was not the preparation and measurement of small molecule organic glasses but rather a thermal conductivity estimation of a compacted powder specimen with an added correction for sample porosity [18]. Consequently, in this study, we demonstrate the feasibility of vacuum molding sample preparation and DSC for thermal conductivity measurements of the two amorphous model compounds indomethacin (IND) and celecoxib (CCX). The applied measurement technique is based on the ability of mDSC to directly measure heat capacity. In a regular mDSC run, accurate heat capacity results are obtained when the experimental conditions facilitate maximum temperature uniformity across the sample specimen. Therefore, standard mDSC runs are performed with long modulation periods and a thin specimen encapsulated in a pan of high thermal conductivity (see also the "thin sample" measurement below). If these conditions are not met, the measured heat capacity decreases, mainly because the thermal conductivity of the sample is preventing temperature uniformity [9]. To maximize this effect, a thick sample can be measured without encapsulation (see also the "thick sample" measurement below), and the sample's thermal conductivity can be estimated from the difference between the two obtained heat capacity values. Sample Preparation The sample preparation process (and measurement) for the small molecule samples is presented in Figure 1. The sample was first converted from its crystalline state to an amorphous form by standard quench cooling. This comprised covering an aluminium pan with sample powder, heating it 10 °C above its respective melting point in an oven for 5 min and quickly cooling the molten sample afterwards by transferring the pan onto a cold surface such as a metal bench. This initial amorphisation was necessary because many small molecule samples possess a very low melt viscosity (0.05 and 0.07 Pa·s for CCX and IND, respectively [19]) for vacuum molding, leading to non-uniform samples. By pre-quenching (i.e., amorphising) the sample, vacuum molding can be performed at temperatures above the glass transition (Tg) but below the crystallization temperature (Tc). The viscosity at temperatures in the supercooled melt of most APIs is sufficiently high to produce uniform samples [19]. Vacuum molding to obtain cylindrical samples was performed with the MeltPrep® system (MeltPrep® GmbH, Graz, Austria). In this study, the crushed glass was transferred into the molding tool (5 mm diameter disc tool) and kept at 10-30 °C above the glass transition for 12 min (termed: "thick sample") or 10 min (termed: "thin sample"). Glass transition temperature values were obtained beforehand by a single measurement on a standard DSC (see the supporting information for DSC thermograms, Figure S1). To obtain the "thick samples", 50 to 70 mg of crushed glass was used, while 5-15 mg was used for the "thin samples". Cooling was performed on the implemented cooling device without active water cooling. Samples were confirmed to be amorphous by X-ray powder diffraction (XRPD).
The polymeric samples (PS and PMMA) were directly filled into the molding tool without any sample pretreatment and heated 30 to 40 °C above their respective glass transition temperatures for 10 min. Modulated Differential Scanning Calorimetry Measurements on a Discovery DSC (TA Instruments, New Castle, DE, USA) were performed in triplicate, and the overall DSC method was adapted from the standard test method E1952-17 [20], which is related to references [9,13]. The reader is referred to the standard test method for an overview, including performance criteria alongside a study on precision and bias. The DSC in the modulated mode was calibrated beforehand for heat capacity measurements with a TA Instruments sapphire calibration disc in a temperature-dependent calibration [21]. The method is summarized in the following steps (see also Figure 1): 1. The heat capacity of the "thin sample" (Cp,s) was measured in a standard run with the sample inside a DSC pan and an empty pan on the reference side. 2. The "thick sample" was weighed and its length and diameter were measured with a caliper. The apparent heat capacity of the "thick sample" (Cp,app) was measured by placing the sample on the sample side of the DSC cell. A piece of aluminium foil with a small amount of silicone oil (applied with a wetted cotton swab) was placed in between the sample and the cell. A similar foil was placed on the reference side. The mass of the "thick sample" was entered in the DSC software as the sample mass. 3. The thermal conductivity was calculated with the help of Cp,s and Cp,app, as well as the mass, length and diameter of the "thick sample". The equations that were used are supplied in Section 2.4. The DSC method (for the estimation of both Cp,s and Cp,app) consisted of an equilibration step at the measurement temperature followed by a 5 min isothermal step. Afterwards, data was collected over another 5 min isothermal interval. A modulation amplitude of 0.5 °C and a period of 80 s were used for all measurements in this study. The measurement procedure was first performed with a sample of known thermal conductivity (a polystyrene reference from the thermal conductivity kit supplied by TA Instruments, P/N 915064.901) to obtain the calibration factor D. Every sample measurement was subsequently corrected by this factor. To test the preparation method and measurement performance, the two polymeric samples PMMA and PS (obtained as granules) were vacuum molded and measured as described above. The PS sample was measured over a broader temperature range to test and compare the method and its accuracy to E1952-17 and to validate the calibration factor at the measurement temperatures. A single-point measurement of PMMA (a compound with well reported literature values for thermal conductivity [22,23]) was used to further qualify the method performance. Temperatures were kept well below the respective Tg values to avoid contamination of the DSC cell by the "thick samples" due to liquefaction of the small molecule API samples. Equations Used to Calculate the Sample's Thermal Conductivity The sample's thermal conductivity K_S (W/m °C) was calculated from Cp,s, Cp,app and the dimensions of the "thick sample", with the calibration factor D determined by Equation (1) from the measured conductivity K_m of a sample of known thermal conductivity K_r (the reference value, W/m °C); the equations themselves follow the standard test method E1952-17 [20]. X-ray Powder Diffraction (XRPD) The sample discs were crushed and ground prior to the XRPD measurements.
The measurements were performed on a PANalytical X'Pert PRO diffractometer (PW3040/60, Almelo, The Netherlands) equipped with a Cu Kα anode (current: 30 mA, voltage: 45 kV) in the range of 4-34° 2θ. Results After preparation, the small molecule API samples were fully X-ray amorphous. The reader is referred to the supporting information for example diffractograms with crystalline references (Figure S2). Samples produced by the vacuum molding process were without visible air voids and well defined in geometry (see Figure 2), and therefore allowed precise thermal conductivity measurements. Table 1 lists the Tg values, molding temperatures and thermal conductivity values for all samples in this study. As can be seen from the PS and PMMA samples, the method produced thermal conductivity values which were in agreement with the literature, within the precision limits reported earlier [9]. The measured thermal conductivity alongside the measured heat capacity of the small molecule organic amorphous pharmaceuticals is further presented in Figure 3a,b. As seen in the figure insets, the specific heat capacity of both compounds increased with temperature, and where available, the absolute values were in agreement with the literature [25]. In the measured temperature range, the thermal conductivity values obtained for IND and CCX did not indicate a clear temperature dependence. A small increase in thermal conductivity with temperature was visible for both compounds, similar to many low-conducting glasses and polymers [26]. However, a clear interpretation of these minute changes in an already low-conducting material was outside of the method's precision limits. Discussion With values below 0.2 W/m °C, the thermal conductivities of the small molecule samples were comparable to other disordered materials like low-conducting polymers, as well as common organic liquids [26,27]. After the literature review, and to the best of the authors' knowledge, there were no thermal conductivity reference values available for the two amorphous drugs tested in this study. A further discussion of the absolute values is therefore limited. The local maximum in the thermal conductivity of IND at 16 °C was most likely due to a small drop in the heat capacity of the measured small specimen at this temperature (see also Figure 3a, inset). Since thermal conductivity can be an important parameter in pharmaceutical manufacturing but is rather difficult to measure without specific instrumentation, estimates are often used for models describing specific processes. For example, an approximation of 0.18 W/m °C is made in a hot-melt extrusion numerical simulation using a model-based melt viscosity [7]. That study investigated binary amorphous solid dispersions of small molecule organic drugs with vinylpyrrolidone-vinyl acetate copolymer. While not having performed measurements on the described solid dispersions, our study indicated that values between 0.15-0.2 W/m °C are indeed fitting estimates for these systems. Furthermore, since the absolute thermal conductivity values of the small molecule APIs were comparable to amorphous polymers, at lower API concentrations reasonable estimates might be obtained from the polymer's thermal conductivity alone. With special DSC tools available, lab bench molding equipment can provide a more practical sample preparation approach than other methods, such as specimen cutting from quarter-inch extruded or molded rods [9].
While polymeric samples are easily formed without further pretreatment, small molecule samples are more challenging due to the possible need for pre-quenching and because the obtained glasses can be very fragile.

Conclusions
In this study, we demonstrated a method of preparing and analysing small molecule organic glasses for thermal conductivity measurements with mDSC. Vacuum molding was used to obtain well-defined samples, and with the help of a previously described mDSC method we were able to obtain thermal conductivity values for the two amorphous model compounds indomethacin and celecoxib. The values fell within the range of low-conductivity disordered materials. Our study highlights the feasibility of vacuum molding and mDSC in providing thermal conductivity values of small molecule drug glasses for practical and theoretical considerations. The described approach could also be extended to drug-polymer binary glass solutions.
GC–IMS facilitates identification of carbapenem-resistant Klebsiella pneumoniae in simulated blood cultures

This study aimed to identify carbapenem-resistant Klebsiella pneumoniae (CRKP) based on changes in the levels of its volatile organic compounds (VOCs) in simulated blood cultures (BCs) using the gas chromatography–ion mobility spectrometry (GC–IMS) technique. A comprehensive analysis of volatile metabolites produced by Klebsiella pneumoniae (K. pneumoniae) in BC bottles was conducted using GC–IMS. Subsequently, the released VOCs were analyzed to examine differences in VOC release between CRKP and carbapenem-susceptible Klebsiella pneumoniae (CSKP). A total of 54 VOCs were detected, of which 18 (6 VOCs found in both monomer and dimer forms) were successfully identified. The VOCs produced by K. pneumoniae in BC bottles (BacT/ALERT® SA) were primarily composed of organic acids, alcohols, esters, and ketones. The content of certain VOCs was significantly different between CRKP and CSKP after the addition of imipenem (IPM). Moreover, the inclusion of carbapenemase inhibitors facilitated the identification of carbapenemase-producing K. pneumoniae based on the variations in VOCs. This study demonstrates the utility of GC–IMS technology in identifying CRKP, and reveals that changes in VOCs are closely related to the growth and metabolism of K. pneumoniae, indicating that they can be leveraged to promote early identification of CRKP bacteremia. However, further in-depth studies and experiments are needed to validate our findings.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13568-024-01708-1.

Introduction
Bloodstream infection (BSI) is a systemic infectious disease that threatens human life and health. It has been shown to induce bacteremia, septicemia and sepsis. In some cases, it causes shock, disseminated intravascular coagulation (DIC), multiple organ failure, and even death (Kern and Rieg 2020; Tabah et al. 2023; Timsit et al. 2020). Epidemiologic data in China have demonstrated that Gram-negative bacteria, such as Escherichia coli and Klebsiella pneumoniae (K. pneumoniae), are the most commonly isolated pathogens associated with BSI, accounting for over 50% of BSI diseases (Chen et al. 2023). The widespread use of antibiotics and the prevalence of carbapenem-resistant Enterobacteriaceae (CRE) have made it difficult to treat these infections (Fang et al. 2023; Liu et al. 2021). Carbapenem-resistant Klebsiella pneumoniae (CRKP) is the most prevalent CRE associated with BSI. It is prevalent in developed and developing countries, with high drug resistance and mortality rates (Fang et al. 2023; Liu et al. 2021). Notably, CRKP accounts for 60-90% of all CRE isolates in China (Hu et al. 2022). In 2021, an annual report of the Blood Bacterial Resistance Investigation Collaborative System (BRICS) indicated that the isolation rate of CRKP was 15.8% (328/2076) (Chen et al. 2023). Generally, the management of CRKP BSI is challenging because of the rapid spread of multidrug-resistant strains, the high mortality rate, and the lack of antimicrobial agents. Therefore, it is important to identify CRKP BSIs and develop appropriate antimicrobial agents to improve patient treatment.
Currently, CRKP is primarily identified using conventional antimicrobial drug susceptibility testing methods. Although new methods, such as the widely employed carbapenem inactivation method (CIM), have increased the accuracy of detecting the specific CRKP carbapenemase (CBPM) type, these supplementary tests are time-consuming and often necessitate overnight culture, which delays the initiation of clinical treatments (Luo et al. 2023; Yu et al. 2022). Therefore, investigating novel approaches for the rapid detection of CRKP is imperative.

Volatile organic compounds (VOCs) are a diverse collection of low-molecular-weight, low-boiling-point, high-vapor-pressure metabolites generated by bacteria (Kai et al. 2009). Accumulating evidence indicates that VOCs are potential biomarkers for bacterial identification (Chen et al. 2017b; Drees et al. 2019; Lu et al. 2022). Research conducted by our team on bacterial metabolomics has yielded significant findings with regard to the application of rapid mass spectrometry-based detection of bacteremia pathogens based on microbial VOC fingerprints (Chingin et al. 2015, 2016; Hang et al. 2017; Hu et al. 2016; Zhong et al. 2019).

To date, most studies have primarily investigated the involvement of VOCs in pathogenic bacteria identification. Only a handful of studies have identified strains by assessing alterations in VOCs based on the drug susceptibility and antibiotic resistance mechanisms of these pathogenic bacteria. In our previous study, we achieved early identification of CBPM-producing CRKP by examining changes in 3-methyl-1-butanol levels in trypticase soy broth (TSB) cultures (Luo et al. 2023). Furthermore, another study reported that carbapenem-susceptible Klebsiella pneumoniae (CSKP) can be differentiated from CRKP based on VOC changes in TSB cultures (Filipiak et al. 2022). However, these studies employed gas chromatography-mass spectrometry (GC-MS), which has multiple pre-processing steps and a long MS analysis time, making it unsuitable for widespread adoption. This calls for the development of more streamlined and rapid detection methods to facilitate early identification of CRKP.

Gas chromatography-ion mobility spectrometry (GC-IMS), also referred to as gas electrophoresis (GEP) or plasma chromatography (PEC), is a gas-phase technique often used for the identification and characterization of VOCs. This technique leverages the differences in gas-phase ion mobility under a weak electric field, seamlessly merging the exceptional separation power of gas chromatography (GC) with the unmatched resolution, sensitivity, and accuracy of ion mobility spectrometry (IMS) (Drees et al. 2019). It has been widely investigated regarding its ability to identify bacterial infections (Drees et al. 2019; Lacey et al. 2020; Lu et al. 2022).

In this study, we aimed to investigate differential VOCs between CSKP and CRKP using the GC-IMS method. VOCs were analyzed in simulated blood cultures (BCs) of both CSKP and CRKP standard strains, under in vitro conditions, to identify metabolites associated with carbapenem resistance. The effects of imipenem (IPM) and CBPM inhibitors on VOCs were also examined.
Meanwhile, 69 K. pneumoniae isolates collected at the Second Affiliated Hospital of Nanchang University from January 1, 2016 to December 31, 2022 were analyzed. The isolates consisted of 25 CSKP isolates and 44 CRKP isolates (20 KPC-positive strains, 15 NDM-positive strains, 4 IMP-positive strains, and 5 CBPM-negative strains). The experiments involving these clinical strains were performed in triplicate for each strain. All strains (standard and clinical strains) were kept in glycerol broth (15% glycerol) (Solarbio, China) in a -80 °C freezer for further testing.

The identification of K. pneumoniae was conducted using the VITEK® 2 system (bioMérieux, Inc., France) or a matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/MS) system (bioMérieux). Antibiotic susceptibility testing of K. pneumoniae was performed using a Kirby-Bauer (KB) test and the VITEK® 2 compact system, and the minimum inhibitory concentrations of ertapenem (ERT) and IPM were determined following the criteria set by the Clinical and Laboratory Standards Institute (CLSI 2022). The assessment of CBPM production was conducted using two methodologies: the modified carbapenem inactivation method (mCIM) and the ethylenediaminetetraacetic acid (EDTA)-modified carbapenem inactivation method (eCIM). Moreover, the carbapenem resistance genes were characterized by polymerase chain reaction (PCR) amplification and sequencing.

Schematic diagram of GC-IMS
VOCs in each sample group were detected using commercial GC-IMS equipment (FlavourSpec®; G.A.S., Dortmund, Germany). The integration of GC and IMS in GC-IMS enables the simultaneous achievement of high-resolution pre-separation (GC) and high-sensitivity detection (IMS). As described in Additional file 3: Fig. S1, molecules from the sample are initially separated by the GC component based on their interactions with the stationary phase coating on the chromatographic column wall. Subsequent to the separation of molecules by GC, ionization typically occurs through a tritium source. The ionized molecules are then propelled along the drift tube under the influence of an electric field. Highly pure nitrogen gas (N2) introduced into the drift tube from the opposite direction causes molecules with varying mass and charge to exhibit different travel times to reach the Faraday plate. Finally, the combination of the retention time in the GC portion and the drift time in the IMS section aids in the identification of compounds (Drees et al. 2019; Lu et al. 2022).

Culture conditions and sample preparation
The culture solutions utilized for experimentation were acquired from BacT/ALERT® SA (Ref. 259789; bioMérieux, Nürtingen, Germany) BC bottles. The medium consisted of pancreatic digest of casein (1.7% w/v), papain digest of legume-based food (0.3% w/v), sodium polyanethole sulfonate (0.035% w/v), pyridoxine hydrochloride (0.001% w/v), and a combination of various amino acids, as well as hydrocarbon digests in purified water. As shown in Fig. 1, our preliminary findings demonstrated that the growth rate of K. pneumoniae exhibited its highest velocity at approximately 3 h after being subjected to the specified culture conditions (total volume: 6 mL; microbial concentration: 10^7 colony forming units (CFU)/mL; culture medium: BacT/ALERT® SA; temperature: 37 °C; agitation: 200 rpm), subsequently transitioning into the end of the exponential growth phase around the 5 h mark. The sample preparation process is presented in Additional file 3: Fig. S2.
The experimental strains were inoculated on Columbia blood agar plates and incubated overnight at 37 °C. Subsequently, the bacterial suspensions were transferred to test tubes containing the culture medium (BacT/ALERT® SA), wherein the bacterial concentration was 10^7 CFU/mL, with a total volume of 6 mL, and incubation was continued at 37 °C with 200 rpm agitation. About 500 µL of bacterial culture fluid was taken for GC-IMS analysis after incubation for 3 (T0), 4 (T1), 5 (T2), 6 (T3), and 7 (T4) h. A blank sterile medium served as a control.

To investigate the effect of IPM on CSKP and CRKP, a solution of IPM (Solarbio, China) was added to the bacterial suspension at the T0 time point, at a final concentration of 0.25 mg/mL (Luo et al. 2023). The incubation process was continued and all other conditions were maintained (Additional file 3: Fig. S2). Furthermore, subsequent experiments were undertaken to examine the influence of CBPM inhibitors on K. pneumoniae strains that produce CBPM. Similarly, the bacterial suspensions were simultaneously mixed with IPM and CBPM inhibitors [avibactam sodium or pyridine-2,6-dicarboxylic acid (DPA)] at T0, with the avibactam sodium (Solarbio, China) concentration set at 1 mg/L (Luo et al. 2023) and the DPA (Solarbio, China) concentration at 100 mg/L (Chen et al. 2017a).

GC-IMS measurements
GC-IMS equipped with an MXT-WAX column (a high-polarity column, 15 m × 0.53 mm, 0.1 μm; RESTEK, Bellefonte, PA, USA) was used. The experimental parameters utilized in the GC-IMS analysis are displayed in Additional file 1: Table S1, and a schematic representation of the experimental workflow is depicted in Additional file 3: Fig. S3. Briefly, a headspace vial containing the test culture medium (500 µL) was positioned within an autosampler and agitated at 500 rpm for 3 min at 60 °C. Subsequently, 1 mL of headspace gas was extracted from the vial, and the specimen under examination was introduced into the GC-IMS detection apparatus for 10 min.

Data analysis
GC-IMS data were analyzed using the VOCal software, version 0.1.3 (G.A.S. mbH, Dortmund, Germany), employing C4-C9 ketones (2-butanone, 2-pentanone, 2-hexanone, 2-heptanone, 2-octanone, 2-nonanone) as reference standards. After data calibration, the VOCs were identified based on the retention index (RI) and drift time (relative to the reactant ion peak (RIP)) found in the GC-IMS library (NIST and IMS libraries) (Euler et al. 2022). Subsequently, the relative peak volume values (integrating peak intensities within specific regions after comparing calibrations) of the VOCs were extracted from the software for statistical analysis. Given the limited number of VOCs (n = 54), potential VOCs exhibiting fold change (FC) values > 1.20 and a P-value < 0.05 were categorized as increased, whereas those with FC values < 0.83 and a P-value < 0.05 were categorized as decreased, in the context of inter-group comparisons (Mann-Whitney U test). Data processing and statistical analysis were conducted using R (version 4.2.1), while the online tools ChiPlot (https://www.chiplot.online/) and OmicStudio (https://www.omicstudio.cn/tool), as well as Microsoft Office PowerPoint 2021 (Microsoft, Seattle, WA), were utilized for data visualization.
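The screening rule just described (FC > 1.20 or FC < 0.83, with P < 0.05 from a Mann-Whitney U test) maps directly onto a few lines of code. The sketch below is our own minimal Python illustration with invented peak-volume numbers; it is not the authors' R pipeline, and the function name is ours.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def classify_voc(group_a, group_b, fc_up=1.20, fc_down=0.83, alpha=0.05):
    """Label one VOC as increased/decreased/unchanged in group_b vs group_a,
    based on relative peak volumes extracted from the GC-IMS software."""
    fc = np.mean(group_b) / np.mean(group_a)                    # fold change
    p = mannwhitneyu(group_a, group_b, alternative="two-sided").pvalue
    if p < alpha and fc > fc_up:
        return "increased"
    if p < alpha and fc < fc_down:
        return "decreased"
    return "unchanged"

# Invented relative peak volumes (arbitrary units), six replicates per group.
cskp = np.array([105.0, 98.0, 101.0, 99.0, 103.0, 100.0])
crkp = np.array([150.0, 160.0, 148.0, 155.0, 152.0, 158.0])
print(classify_voc(cskp, crkp))  # -> "increased"
```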
Analysis of K. pneumoniae (standard strains) metabolites in the matrix components of BC bottles
A total of 54 VOCs (6 VOCs occurring as both monomers and dimers), including 4 organic acids, 3 alcohols, 3 esters, 4 ketones, 2 pyrazines, 2 benzene derivatives, and 30 unidentified compounds, were detected by GC-IMS (Table 1). Changes in the 54 VOCs over time (T0-T4) were significant (Additional file 2: Table S2). K. pneumoniae reached the end of its exponential growth phase at approximately 5 h (T2), and correspondingly, alterations in VOCs were no longer evident beyond T2 (Fig. 1). Hence, T2 was used for subsequent analysis.

Changes in VOCs for each of the standard strains (ATCC BAA-1706, ATCC BAA-1705, ATCC BAA-2146, and ATCC BAA-2524) compared with the blank control group at T2 are shown in Fig. 2 and Additional file 2: Table S3. Compared with the blank control group, 26 VOCs were increased whereas 7 were decreased in ATCC BAA-1706. For K. pneumoniae ATCC BAA-1705, 30 VOCs were increased whereas 8 were decreased. For K. pneumoniae ATCC BAA-2146, 30 VOCs were increased and 10 were decreased, and 30 VOCs were increased and 10 were decreased in K. pneumoniae ATCC BAA-2524 (Fig. 3A).

To investigate the potential of VOCs in distinguishing CRKP, a comprehensive analysis was conducted to examine the disparities in VOCs between CSKP and CRKP. Remarkably, 5 VOCs differed significantly in the CRKP group compared with the CSKP group (unidentified-3, unidentified-21, unidentified-28, and butan-1-ol were increased, whereas unidentified-30 was decreased) (Fig. 3B and Additional file 3: Fig. S4A). Subsequently, principal component analysis (PCA) showed that these 5 VOCs were effective in distinguishing between CSKP and CRKP at the T2 time point (Fig. 3C). Intriguingly, these differences disappeared with time (Additional file 3: Fig. S4B).

Differential VOCs between CSKP and CRKP (standard strains) after the addition of IPM
To assess the impact of IPM on VOC emission from K. pneumoniae, IPM was added to the test tubes at T0 to a concentration of 0.25 mg/mL (Additional file 3: Fig. S2). Subsequent growth curve analyses demonstrated that the introduction of IPM after a 3 h incubation period exclusively affected the growth of the CBPM-negative strain (the bacteria were killed completely), while the CBPM-positive strains exhibited a consistent growth pattern comparable to that observed in the absence of IPM supplementation (Figs. 1 and 4). It was found that the introduction of IPM did not yield any new VOCs. However, notable alterations were observed in the content of certain VOCs of K. pneumoniae ATCC BAA-1706 at the T2 time point (2 h after the addition of IPM), which were maintained until the end of the study (Additional file 2: Table S4). Concurrently, the VOCs of K. pneumoniae ATCC BAA-1705, ATCC BAA-2146, and ATCC BAA-2524 showed temporal change trends similar to those without IPM addition (Additional file 2: Table S4).
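PCA on the relative peak volumes of the differential VOCs recurs throughout the analyses above and below. As a purely illustrative sketch with synthetic stand-in data (using scikit-learn rather than the R/ChiPlot tooling of the study):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in data: rows are replicate cultures at T2, columns are
# relative peak volumes of the 5 differential VOCs.
rng = np.random.default_rng(0)
cskp = rng.normal(100.0, 5.0, size=(6, 5))
crkp = rng.normal(130.0, 5.0, size=(6, 5))
X = np.vstack([cskp, crkp])

# Standardize each VOC, then project onto the first two principal components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
scores = PCA(n_components=2).fit_transform(Xs)
print("CSKP centroid:", scores[:6].mean(axis=0))
print("CRKP centroid:", scores[6:].mean(axis=0))  # groups separate along PC1
```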
Further analysis revealed that, compared with K. pneumoniae ATCC BAA-1706, K. pneumoniae ATCC BAA-1705 displayed a significant divergence of 26 VOCs (21 increased and 5 decreased), K. pneumoniae ATCC BAA-2146 showed a substantial variation of 24 VOCs (19 increased and 5 decreased), and K. pneumoniae ATCC BAA-2524 exhibited a marked distinction of 23 VOCs (19 increased and 4 decreased) at the T2 time point (Figs. 5 and 6A and Additional file 2: Table S5).

After a comprehensive analysis of these VOCs, the contents of 3-methylbutanoic acid (monomer and dimer), 2-methylpropanoic acid (monomer and dimer), acetic acid, butane-2,3-dione (diacetyl), and 3-hydroxy-2-butanone (acetoin) were significantly higher in the CRKP group than in the CSKP group, while the contents of benzaldehyde-D and butan-2-one were significantly lower. Meanwhile, 11 VOCs exhibiting an upward trend and 2 VOCs displaying a downward trend were not accurately identified (Fig. 6B). As shown in Additional file 3: Fig. S5, significant correlations were observed among the 22 increased VOCs and 6 decreased VOCs in the CRKP group. PCA further revealed that these differential VOCs were effective in discriminating between CRKP and CSKP (Fig. 6C). Interestingly, these differences persisted from T2 to T4 (Additional file 2: Table S4 and Additional file 3: Fig. S6).

Potential utility of VOCs in identifying CBPM-producing K. pneumoniae based on standard strains
To more comprehensively evaluate the significance of VOCs in the identification of CBPM-producing K. pneumoniae and determine its phenotypes, both IPM (final concentration, 0.25 mg/mL) and CBPM inhibitors (final concentration: avibactam sodium, 1 mg/L; or DPA, 100 mg/L) were added into the test tubes after a 3-h incubation period (T0). Moreover, changes in VOCs were assessed at the T2 time point before and after the addition of the enzyme inhibitors.

Potential application of VOCs in the identification of CRKP in clinical strains after the addition of IPM
The growth time curves of the clinically isolated strains are shown in Additional file 3: Fig. S8, and were in line with the growth curves of the standard strains. Using the T2 time point as the focal point, we further examined the disparity in VOCs between CSKP and CRKP. The follow-up study encompassed 69 K. pneumoniae clinical isolates, comprising 25 CSKP strains and 44 CRKP strains. Experiments were performed in triplicate for each clinical isolate at the T2 time point utilizing GC-IMS. The 44 CRKP strains included 20 KPC-positive strains, 15 NDM-positive strains, 4 IMP-positive strains, and 5 CBPM-negative strains. Given that CBPM-negative strains could not withstand a high concentration of IPM (0.25 mg/mL), the final concentration of IPM after a 3-h incubation period (T0) in the matrix components of the BC bottles was adjusted to 16 µg/mL.

Changes in the VOCs emitted by CRKP and CSKP at T2 are shown in Additional file 2: Table S7. Compared with the CSKP group, KPC-positive strains exhibited a notable disparity of 22 VOCs (15 increased and 7 decreased), NDM-positive strains displayed a significant divergence of 27 VOCs (16 increased and 11 decreased), IMP-positive strains demonstrated a substantial variation of 26 VOCs (14 increased and 12 decreased), and CBPM-negative strains had a marked distinction of 24 VOCs (5 increased and 19 decreased) (Fig. 9A and Additional file 3: Fig. S9).

Fig. 6 The VOCs emitted by K. pneumoniae (standard strains, with IPM added). A Volcano plots of differential VOCs between the CSKP group and CRKP strains. B Heatmap of the differentially expressed VOCs generated using the volcano plots (compared with the CSKP group, after binarization). C PCA of the various sample groups (treated with IPM).
Further analysis of the PCA results indicated that the differential VOCs with the addition of IPM were effective in identifying CBPM-producing Klebsiella pneumoniae (KPC-positive, NDM-positive, and IMP-positive strains); however, the differential VOCs could not effectively distinguish between CSKP and CBPM-negative strains (Fig. 9B).

Potential utility of VOCs in identifying CBPM-producing K. pneumoniae based on clinical strains
Next, we investigated the importance of VOCs in identifying CBPM-producing K. pneumoniae and its phenotypes in clinical strains, following the same manipulation procedures as those applied to the standard strains (CBPM-negative strains: IPM, final concentration, 16 µg/mL; avibactam sodium, 1 mg/L; DPA, 100 mg/L). Experiments were performed in triplicate for each clinical isolate at the T2 time point utilizing GC-IMS. Furthermore, considering that previous studies have shown that differential VOCs may differ between CSKP and CBPM-producing K. pneumoniae, changes in VOCs emitted by CSKP strains following the addition of CBPM inhibitors were not explored further.

Changes in the VOCs emitted by CRKP strains after the addition of CBPM inhibitors are shown in Additional file 2: Table S8. After the addition of avibactam sodium, only KPC-positive strains showed changes in the content of some VOCs, including 4 increased VOCs (benzaldehyde-D, 2,5-dimethylpyrazine-D, 2-methylpyrazine-D, and butan-2-one) and 8 decreased VOCs (3-hydroxy-2-butanone (acetoin), isovalerone, and 6 unidentified VOCs) (Fig. 10A). Moreover, a separation trend was observed in a three-dimensional PCA score plot after including the 12 potential biomarker VOCs, which were effective in identifying class A CBPM-producing K. pneumoniae (KPC-positive strains) after the addition of avibactam sodium (Fig. 10B). Subsequently, after the addition of DPA, there were significant changes in some VOCs, including 8 increased VOCs and 11 decreased VOCs in the NDM-positive strains. IMP-positive strains, however, displayed changes comparable to those of the NDM strains, characterized by an increase in 1,2-ethanediol and a decrease in 3-hydroxy-2-butanone (acetoin) and 4 unidentified VOCs (Fig. 11A). Finally, a three-dimensional PCA score plot revealed a separation trend after including these VOCs, which significantly facilitated the recognition of class B CBPM-producing K. pneumoniae strains, especially through the VOCs released by NDM-positive and IMP-positive strains (Fig. 11B).

Fig. 7 The VOCs emitted by K. pneumoniae after the addition of avibactam sodium (standard strains, with IPM added). A Volcano plots displaying the differential VOCs between the pre- and post-inhibitor (avibactam sodium) addition stages for each strain. B PCA of the various sample groups (avibactam sodium unspiked and avibactam sodium spiked).

Discussion
The rapid and accurate identification of BSIs caused by CRKP and the characterization of CBPMs are crucial in addressing the threat posed by the rapid global epidemic of multidrug-resistant bacteria, considering the escalating detection rate of CRKP in such infections (Chang et al. 2021; Chen et al. 2022). This study has the following main findings: (1) the identification of K. pneumoniae can be facilitated through the relative composition of VOCs (VOC fingerprints) in BC bottles (BacT/ALERT® SA).
(2) The disparity in the VOCs emitted by CSKP and CRKP following the addition of IPM was further substantiated, leading to the discovery of potential indicators for the discernment of CRKP in BC bottles. (3) The inclusion of CBPM inhibitors (avibactam sodium and DPA) resulted in discernible alterations in the composition of specific VOCs in the corresponding strains, thereby offering a novel approach to the detection and characterization of CBPM phenotypes.

In this study, simulated BCs were conducted without peripheral blood, due to the significant variability in blood metabolism across environmental factors and individuals (Kim et al. 2014; Nicholson et al. 2011). Meanwhile, compared with fresh medium without blood, a previous study (Rees et al. 2016) confirmed a 20% increase in the total number of K. pneumoniae-associated volatiles in blood-containing media, but the VOCs already produced in the media without blood were unaffected; these findings, however, warrant further validation. By focusing solely on the metabolism of K. pneumoniae without the confounding influence of blood, we provide a reference point for future studies. Further, to mitigate the influence of adsorbent beads (BacT/ALERT FA Plus, bioMérieux, Nürtingen, Germany), and limited by the inability to establish an anaerobic environment in vitro, we ultimately opted for the BacT/ALERT® SA culture medium. In addition, utilizing the alterations in the VOCs of the standard strains as a benchmark and those of the clinical strains as a corroborative measure undoubtedly confers an additional significant advantage on the current study.

In recent years, VOCs have been increasingly utilized for strain characterization (Drees et al. 2019; Kunze et al. 2013; Lu et al. 2022). In addition, it is possible to differentiate CRKP and CSKP in TSB media using changes in VOCs (Filipiak et al. 2022). Nevertheless, we note that there are few studies employing the VOCs emitted in commercial BC bottles to identify CRKP and determine its CBPM phenotype, further emphasizing the strengths of the present study.

Fifty-four VOCs were isolated in the current study, of which 18 were successfully identified using the GC-IMS technique (6 VOCs existed as both monomers and dimers). Although 30 VOCs remained unidentified, valuable information regarding these compounds was obtained (Table 1). It is anticipated that the continuous advancement of technology and further research will result in the identification of additional VOCs.

Previous studies have demonstrated a correlation between the growth metabolism of K. pneumoniae and VOCs, including propionic acid (BCs) (Julak et al. 2000), acetic acid (BCs/LB) (Julak et al. 2000), 3-methyl-1-butanol (sheep blood agar/TSB/LB) (Filipiak et al. 2022; Junger et al. 2012; Luo et al. 2023), butane-2,3-dione (diacetyl) (TSB) (Filipiak et al. 2022), and 3-hydroxy-2-butanone (malt extract agar and dichloran glycerol agar/TSB supplemented with 5% hemolyzed human blood) (Kiviranta et al. 1998; Rees et al. 2016). These findings align with the outcomes of the present study. Other identified VOCs released by K. pneumoniae, including 3-methylbutanoic acid, 2-methylpropanoic acid, 1,2-ethanediol, isovalerone, and 2-methylpyrazine-M, yielded different outcomes in our study than in other studies, possibly due to the dissimilar nutrient composition of commercial BC bottles and other media. Notably, variations in the employed detection methods have also resulted in differences in the identified VOCs in the samples.
Additionally, VOCs that were readily absorbed by K. pneumoniae and exhibited high concentrations within the blank medium undergo metabolic processes, leading to their transformation into different compounds; these include benzaldehyde (TSB) (Filipiak et al. 2022), butan-2-one, and 2-methylpyrazine-D. However, it is noteworthy that the variations of butane-2,3-dione (diacetyl) and butan-2-one in the present study differed from those in previous studies (Boots et al. 2014; Rees et al. 2017). The results demonstrated that both substances were present at high levels in the blank medium, and, due to the relatively small sample size, the results of these differential studies warrant further exploration. Interestingly, consistent with a previous study (Filipiak et al. 2022), the alterations observed in the VOCs (Additional file 2: Table S2) were generally congruent with the growth trend of K. pneumoniae (Fig. 1), as evidenced by the stabilization of VOC changes after 5 h of bacterial growth (T2), further supporting our decision to prioritize the examination of VOC changes at the T2 time point.

The catabolism of leucine by K. pneumoniae through the Ehrlich pathway was shown to generate 3-methyl-1-butanol (Luo et al. 2023; Smart et al. 2019). Similarly, the observed elevation in 1,2-ethanediol levels could potentially be attributed to the metabolic processes associated with fatty acids. Meanwhile, the synthesis of 2,3-butanedione occurs through the action of bacterial decarboxylases, which convert (2S)-2-hydroxy-2-methyl-3-oxobutanoic acid derived from pyruvate metabolism into 2,3-butanedione (Whiteson et al. 2014). The release of 3-hydroxy-2-butanone, another significant VOC, has been linked to the glycolysis process (Chen et al. 2014). In the metabolism of pyruvate, the final product of glycolysis and the carbon source for the citric acid cycle, acetolactate is produced and subsequently transformed into 3-hydroxy-2-butanone through the action of α-acetolactate decarboxylase (Chen et al. 2014). In addition, the absorption of benzaldehyde may be affected by the enzymatic reduction of benzaldehyde by benzaldehyde dehydrogenase, which catalyzes the production of nicotinamide adenine dinucleotide phosphate (NADPH) during the growth of K. pneumoniae (Filipiak et al. 2022). However, due to the lack of precise knowledge regarding the specific constituents of the media in BC bottles, coupled with the intricate nature of the biochemical reactions, we did not determine the possible mechanisms underlying the remaining variations in differential VOCs.

In our study, we found that CRKP and CSKP can be differentiated without the inclusion of IPM, but these differences disappear over time (Additional file 3: Fig. S4B), which is consistent with the previous literature (Filipiak et al. 2022; Żuchowska and Filipiak 2023). In addition, 4 of these VOCs could not be identified, and thus their significance is unknown. The bactericidal effects of IPM have been widely documented. For instance, several studies have demonstrated that it inhibits the biosynthesis of bacterial cell walls, leading to the lysis and subsequent death of bacteria (Pai Mangalore et al. 2022). Moreover, the incorporation of IPM promoted the differentiation of CSKP from CRKP, with CRKP identified based on the differential VOCs arising from the killing of the susceptible cells, as reported in a previous study (Filipiak et al. 2022).
Notably, the killing of sensitive bacteria by IPM caused a significant decrease in the amount of released VOCs and an increase in the number of absorbed VOCs compared with CRKP, and this difference remained unchanged over time (Fig. 4 and Additional file 3: Fig. S6). In addition, with the incorporation of IPM, the standard and clinical strains exhibited different metabolic profiles, characterized by different VOCs compared with the CSKP group, yet both strain sets allowed effective differentiation between CRKP and CSKP based on specific differential VOCs. This may result from the different origins of the two strain sets: clinical strains are susceptible to antibiotic-induced mutation and have metabolic profiles different from those of the standard strains (Filipiak et al. 2022). However, this study and our previous one (Luo et al. 2023) have consistently demonstrated the challenging nature of identifying carbapenemase-negative CRKP, which calls for further refinement of our experimental protocols.

The presence of VOCs in CRKP strains following the addition of IPM (4 h) has been documented (Filipiak et al. 2022). Although we did not obtain similar results, probably due to the limited duration of IPM exposure and the limitations of the detection methodology, that work provides valuable insights. Our findings suggest that alterations in VOCs may serve as a potential means to identify CRKP, opening new avenues for diagnosis and treatment options. Initially, we found that the inclusion of the enzyme inhibitor 3-aminophenylboronic acid (dissolved in dimethyl sulfoxide) increased the difficulty of identifying VOCs. Moreover, the alkaline nature (pH = 8.0) of EDTA also interfered with the outcomes. Consequently, avibactam sodium and DPA were selected as alternative components in the subsequent experiments. It has been demonstrated that avibactam is an effective inhibitor of class A, class C, and certain class D enzymes (Coleman 2011). In our study, the combination of avibactam sodium and IPM effectively eradicated K. pneumoniae strains that generate class A enzymes (standard and clinical strains). Furthermore, this treatment altered the levels of VOCs in class A enzyme-producing K. pneumoniae strains, which provided a reliable method for detecting them. However, our findings demonstrate that the addition of avibactam sodium was not effective against K. pneumoniae strains producing class D enzymes, which may be attributed to an inadequate avibactam sodium concentration. Pyridine-2,6-dicarboxylic acid (DPA), an important type of metallo-β-lactamase (MBL) inhibitor, can inhibit MBLs by chelating, stripping, and binding the Zn2+ in the active center of the MBL (Chen et al. 2017a; Wang et al. 2020). Similar to the effects of avibactam sodium, DPA showed the capacity to selectively alter the VOCs of K. pneumoniae strains which produce class B enzymes, providing a novel approach for the identification of class B enzymes.
Although the findings presented in this preliminary study are interesting, several limitations should be acknowledged. First, the study mainly relied on clinical strains obtained from a single hospital, which restricts the generalizability of the results to some extent. This limitation needs to be considered when interpreting the findings and their implications. In the future, we aim to expand the samples to detect more carbapenemase types of CRKP. Secondly, the experiments conducted to simulate blood cultures did not account for the impact of blood samples on the detection of VOCs. Studies have shown that the incorporation of blood in these experiments increases the production of VOCs (Rees et al. 2016), and this needs to be further investigated. Thirdly, the use of a single detection technique also limited the significance of our results. Although we successfully detected 54 VOCs, the identification was limited to 18 VOCs, of which 6 existed as both monomers and dimers. Therefore, it is imperative to incorporate additional detection methods to comprehensively analyze the fluctuations in VOCs. Lastly, the experimental designs and methods need to be further revised and improved. This will be the focus of our future investigations.

In conclusion, using GC-IMS technology, we identified CRKP strains by analyzing changes in VOCs. In addition, we investigated the potential application of VOCs in the detection of phenotypes within CRKP strains. Moreover, we developed VOC fingerprints to facilitate the identification of relevant strains. Nevertheless, we acknowledge that these findings need to be further validated through large-scale, multi-center, prospective experiments.

Table S4. Changes in VOCs measured by GC-IMS (with imipenem added, standard strains). Table S5. The change of VOCs at the T2 time point compared with the CSKP group (with imipenem added, standard strains, mean ± SD). Table S6. The change of VOCs at the T2 time point after adding the enzyme inhibitor treatment (relative volume of VOC, mean ± SD, standard strains). Table S7. The change of VOCs at the T2 time point compared with the CSKP group (with imipenem added, clinical isolates, mean ± SD). Table S8. The change of VOCs at the T2 time point after adding the enzyme inhibitor treatment (relative volume of VOC, mean ± SD, clinical isolates).

Fig. 2 The heatmap of the VOC expression profile of each sample (after min-max normalization).
Fig. 8 The VOCs emitted by K. pneumoniae after the addition of DPA (standard strains, with IPM added). A Volcano plots illustrating the differential VOCs between the pre- and post-inhibitor (DPA) addition stages for each strain. B PCA of the sample groups (DPA unspiked and DPA spiked).
Fig. 9 The VOCs produced by K. pneumoniae (clinical strains, with IPM added). A Volcano plots of differential VOCs between the CSKP group and CRKP strains. B PCA of the sample groups (with IPM added).
Fig. 10 The VOCs produced by K. pneumoniae after the addition of avibactam sodium (clinical strains, with IPM added). A Volcano plots illustrating the differential VOCs between the pre- and post-inhibitor (avibactam sodium) addition stages for each strain. B PCA of the sample groups (avibactam sodium unspiked and avibactam sodium spiked).
Fig. 11 The VOCs produced by K. pneumoniae after the addition of DPA (clinical strains, with IPM added). A Volcano plots illustrating the differential VOCs between the pre- and post-inhibitor (DPA) addition stages for each strain. B PCA of the sample groups (DPA unspiked and DPA spiked).

Additional file 3: Figure S1. A schematic diagram of GC-IMS. Figure S2. The flow chart of the sample preparation; some images (blood agar plate and liquid transfer gun) in Fig. S2 were free and adapted from Servier Medical ART (https://smart.servier.com). Figure S3. The flow chart of the GC-IMS workflow. Figure S4. The differential VOCs (CSKP vs. CRKP) produced by K. pneumoniae (standard strains, without IPM added): A comparison of the differential VOCs in the indicated groups; B temporal changes in CRKP-characteristic VOCs (T0-T4). Figure S8. The growth curve of K. pneumoniae (clinical strains). Figure S9. The heatmap of the differentially expressed VOCs constructed using the volcano plots (compared with the CSKP group, after binarization).

Table 1. Information on the specific VOCs detected by GC-IMS. GC-IMS, gas chromatography-ion mobility spectrometry; VOCs, volatile organic compounds; CAS#, Chemical Abstracts Service Registry Number; MW, molecular weight; RI, retention index; Rt, retention time; Dt, drift time. Footnote a: the substance has two distinct peak positions in the GC-IMS system, with the shorter drift time corresponding to the monomer and the longer drift time to the dimer; "*" indicates that the formula of the VOC is unknown.

Table 2. Volatile metabolic profiles of Klebsiella pneumoniae (standard strains) in the matrix components of BC bottles (at the T2 time point). "-" indicates no statistically significant variance compared with the blank control group in terms of absorption or release by Klebsiella pneumoniae, based on our established criteria; "*" indicates that the CAS# of the VOC is unknown.
Numerically Enhanced Stimulated Emission Depletion Microscopy with Adaptive Optics for Deep-Tissue Super-Resolved Imaging

In stimulated emission depletion (STED) nanoscopy, the major origin of decreased signal-to-noise ratio within images can be attributed to sample photobleaching and strong optical aberrations. This is because STED utilizes a high-power depletion laser (increasing the risk of photodamage), while the depletion beam is very sensitive to sample-induced aberrations. Here we demonstrate a custom-built STED microscope with automated aberration correction that is capable of 3D super-resolution imaging through thick, highly aberrating tissue. We introduce and investigate a state-of-the-art image denoising method based on block-matching and collaborative filtering (BM3D) to numerically enhance fine object details otherwise mixed with noise and to further enhance the image quality. Numerical denoising provides a 31% increase in the final effective resolution of the STED imaging, as measured with the well-established Fourier ring correlation metric. Results achieved through the combination of aberration correction and tailored image processing are experimentally validated through super-resolved 3D imaging of axons in differentiated induced pluripotent stem cells growing under an 80 µm thick layer of tissue, with lateral and axial resolutions of 204 nm and 310 nm, respectively.

Depletion beam path
The depletion beam originates at the 766 nm laser (PicoQuant VisIR 765 STED). It can be set manually to work at 80 MHz, 40 MHz, 20 MHz, 10 MHz, 5 MHz, or 2.5 MHz repetition rates, and its pulse duration is approximately 0.5 ns. It then passes through the half-wave plate λ/2 (Thorlabs AQWP05M-600) and is expanded through lenses f1-f2 to match the size of the aperture of the liquid-crystal-on-silicon spatial light modulator SLM1 (Hamamatsu X10468-02). The λ/2 plate rotates the depletion laser polarization so that it matches the orientation of the liquid crystal molecules of SLM1. SLM1 shapes the depletion beam into either a 2D STED (Laguerre-Gaussian phase mask) or a 3D STED (top-hat phase mask) beam. The pair of lenses f1'-f2' images the SLM1 plane onto the dichroic mirror DM2 plane. A Glan laser polarizer (GLP) blocks any polarizations not corresponding to the orientation of the liquid crystals in SLM1. The depletion beam is reflected into the main beam path through the dichroic mirror DM1 (Semrock FF720-SDi01), and at DM2 (Chroma ZT647rdc-UF3) it is superimposed with the excitation beam. A λ/2 plate is used to rotate the polarization of the depletion beam. From DM2 onwards, the depletion and excitation beams co-propagate.

Excitation beam path
The excitation beam is generated by a 637 nm pulsed diode laser (PicoQuant LDH-P-C 640B). It operates at the same repetition rates as the depletion laser and has a pulse width of 90 ps. The excitation beam is coupled into the polarization-maintaining single-mode fiber PMF using the coupling lens FC. The output of the excitation beam has an elliptical shape, and before coupling it has to be magnified along one axis using an anamorphic prism pair; this way, a coupling efficiency of 75% can be obtained. A λ/2 plate rotates the polarization of the excitation beam, lens f3 is used as a collimator, and the beam is then reflected from SLM2 (Boulder Nonlinear Systems 512x512), which is used for correcting aberrations of the excitation beam. The lens pair f4-f5 images the SLM2 plane onto the DM2 plane. DM2 reflects the excitation beam onto the main beam path and superimposes the excitation beam with the depletion beam.
Imaging beam paths
There are three imaging modes: a brightfield transmission mode, a quasi-widefield reflection mode, and a confocal fluorescence mode. For imaging in the brightfield mode, the sample is illuminated from the top using a white-light diode. The transmitted light is directed onto the CCD camera using the 92:8 pellicle beamsplitter BS1 and focused using the tube lens f12. The reflected light, used for imaging reflective objects such as gold beads, is de-scanned using the RM and GM mirrors and then reflected using the 92:8 pellicle beamsplitter BS2. Lens f13 then focuses the reflected light through the widefield pinhole onto the Multi-Pixel Photon Counter MPPC (Hamamatsu C13366). The fluorescent light is de-scanned using the RM and GM mirrors, separated from the excitation beam by DM2, and then separated from the depletion beam by DM1. Lens f14 focuses the fluorescent signal into the multi-mode fiber MMF, which acts as a confocal pinhole. The size of the MMF is set to 0.8 Airy units. A bandpass emission filter EF (Semrock 676/37) rejects any remaining non-fluorescent signal.

Pulse delay scheme
The system uses pulsed diode lasers with the ability to trigger one another. In order to achieve the most efficient depletion, the depletion laser pulse needs to arrive right after the excitation laser pulse. We have experimentally estimated that the most efficient depletion was achieved when the depletion laser pulse arrived at the sample approx. 160 ps after the excitation laser pulse. For easy pulse alignment we used an electronic picosecond pulse delayer (Picosecond Delayer, MPD, Bolzano, Italy), which allowed the pulse delay to be set precisely with 10 ps resolution.

Image processing
For image processing we used Matlab and the widely available scripts for Fourier ring correlation (code available in the supplementary information of Nieuwenhuizen et al. [1]) and BM3D (code available at the website of the Department of Signal Processing of Tampere University of Technology, uploaded by the authors of the original BM3D [2]). For the estimation of the FRC resolution we need two identical images that differ only by the noise distribution. Since the raw images collected by our STED microscope system already contain two mirrored images (due to the use of a resonant mirror for sample scanning), we only need to find the correct translation between them. For this, we use the Fourier cross-correlation algorithm [3], which finds the translation between two images with subpixel resolution. This step is very important, as the FRC highly depends on a lack of drift between images [4]. After removing all drift, the FRC is calculated. BM3D denoising was performed on the raw images using the σ shown in the main text and the normal profile setting.

Lenses f6 and f7 image the DM2 plane onto the plane located between the closely aligned pair of galvanometric mirrors GM (ScanLab Dynaxis XS), and lenses f8 and f9 image the GM plane onto the 16 kHz resonant mirror RM (Electro-Optical Products Corp. SC-30) plane. The combination of GM and RM is used for scanning the sample: RM scans the fast axis, GM scans the slow axis. Lenses f10 and f11 image the RM plane onto the back focal plane of the microscope objective MO (Nikon CFI Plan Apo Lambda 100X Oil NA 1.45). MO is placed on the focusing piezo stage (Piezoconcept HS1.70), while the sample is mounted on the motorized XY stage (ASI Imaging ASI S31121010FT).
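The drift-removal and FRC steps described above can be sketched compactly. The snippet below is our own Python illustration (using scikit-image's phase correlation in place of the cited Matlab scripts), not the authors' code:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_and_frc(img_a, img_b, upsample=20):
    """Register img_b onto img_a with subpixel precision, then return the
    Fourier ring correlation (FRC) curve of the aligned image pair."""
    drift, _, _ = phase_cross_correlation(img_a, img_b, upsample_factor=upsample)
    img_b = nd_shift(img_b, drift)  # remove residual drift before the FRC

    fa = np.fft.fftshift(np.fft.fft2(img_a))
    fb = np.fft.fftshift(np.fft.fft2(img_b))
    ny, nx = img_a.shape
    y, x = np.indices((ny, nx))
    rings = np.hypot(y - ny // 2, x - nx // 2).astype(int).ravel()

    # Correlate the two spectra ring by ring in spatial frequency.
    num = np.bincount(rings, weights=(fa * np.conj(fb)).real.ravel())
    den = np.sqrt(np.bincount(rings, weights=np.abs(fa).ravel() ** 2) *
                  np.bincount(rings, weights=np.abs(fb).ravel() ** 2))
    return num / np.maximum(den, 1e-12)

# The effective resolution is read off where the curve drops below the
# commonly used 1/7 threshold.
```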
GIFFT: A Fast Solver for Modeling Sources in a Metamaterial Environment of Finite Size

The GIFFT (Green's function interpolation and FFT) algorithm is one of a class of fast solvers for large periodic structures. The GIFFT algorithm is a modification of the adaptive integral method (AIM), a technique based on the projection of subdomain basis functions onto a rectangular grid. This paper extends the GIFFT algorithm to allow for a complete numerical analysis of a periodic structure excited by a dipole source. In addition to reducing the computational burden associated with large periodic structures, GIFFT now permits modeling these structures with source and defect elements. It is important to note that, although a metamaterial layer with a dipole-antenna excitation is considered here, the extended GIFFT algorithm can handle defect elements as well.

INTRODUCTION
Due to the recent explosion of interest in studying the electromagnetic behavior of large (truncated) periodic structures such as phased arrays, frequency-selective surfaces, and metamaterials, there has been a renewed interest in efficiently modeling such structures. Since straightforward numerical analyses of large, finite structures (i.e., explicitly meshing and computing interactions between all mesh elements of the entire structure) involve significant memory storage and computation times, much effort is currently being expended on developing techniques that minimize the high demand on computer resources. One such technique that belongs to the class of fast solvers for large periodic structures is the GIFFT algorithm (Green's function interpolation and FFT), which is first discussed in [1]. This method is a modification of the adaptive integral method (AIM) [2], a technique based on the projection of subdomain basis functions onto a rectangular grid. Like the methods presented in [3]-[4], the GIFFT algorithm is an extension of the AIM method in that it uses basis-function projections onto a rectangular grid through Lagrange interpolating polynomials. The use of a rectangular grid results in a matrix-vector product that is convolutional in form and can thus be evaluated using FFTs. Although our method differs from [3]-[6] in various respects, the primary differences between the AIM approach [2] and the GIFFT method [1] are the latter's use of interpolation to represent the Green's function (GF) and its specialization to periodic structures by taking into account the reusability properties of matrices that arise from interactions between identical cell elements.

The present work extends the GIFFT algorithm to allow for a complete numerical analysis of a periodic structure excited by a dipole source, as shown in Fig. 1. Although GIFFT [1] was originally developed to handle strictly periodic structures, the technique has now been extended to efficiently handle a small number of distinct element types. Thus, in addition to reducing the computational burden associated with large periodic structures, GIFFT now permits modeling these structures with source and defect elements. Relaxing the restriction to strictly identical periodic elements is, of course, useful for practical applications where, for example, a dipole excitation may be of interest or, as is often the case for metamaterials, defective elements are introduced in the structure's fabrication process.
The main extensions of the GIFFT method compared to [1] are the following: 1) both periodic "background" and "source" or "defect" elements are now separately defined in translatable unit cells so that, in the algorithm, mutual electromagnetic interactions can be computed; 2) the near-interaction block matrix must allow for the possibility of "background-to-source" or "background-to-defect" cell interactions; 3) matrices representing projections of both "background and source" or "background and defect" subdomain bases onto the interpolation polynomials must be defined and appropriately selected in forming the matrix-vector product. It is important to note that, although here we consider a metamaterial layer with a dipole-antenna excitation, as per the extended GIFFT algorithm, "defect" elements could be considered as well.

A. Background: The GIFFT Method for Periodic Structures
In [1] the GIFFT method is applied to periodic structures (arrays, in particular) with polygonal boundaries. Only one element of the array is meshed and provided as input, while all other array elements are accounted for by taking advantage of the reusability properties of periodic structures comprising identical elements. The GIFFT method begins by setting out a regular grid of Green's function interpolation points across the entire array. The points are typically chosen so there are four to six points per half-wavelength array cell (Fig. 1(b)). The points are used as equi-spaced interpolation nodes for Lagrange interpolating polynomials that approximate the Green's function as

G(r, r') ≈ Σ_i Σ_i' L_i(r) G(r_i, r_i') L_i'(r'),

where i', i are double indices representing interpolation point locations overlaying the observation and source cells, respectively, and L_i denotes the Lagrange interpolating polynomial associated with the interpolation point r_i. The Green's function is sampled once for each unique value of the difference index i - i' representing the separation between source and observation interpolation points. It can be seen from the above that the Green's function approximation is of convolutional form, and a matrix-vector product involving it may utilize an FFT. After the Green's function is sampled, the basis functions are projected onto the interpolating polynomials. A correction is performed for neighboring elements by accounting for the interaction of a periodic cell with its neighbors via an accurate numerical integration. An iterative solver is then used that employs the FFT to perform the discrete convolution associated with the computation of matrix-vector products.

B. Modeling Sources
The GIFFT method requires that only distinct cell geometries be meshed and provided as input to the electromagnetic solver code. For very large structures this has the advantage of condensing the input data and reduces the chance of introducing mesh errors for complex structures. Thus, to model a dipole source over a finite metamaterial layer, we provide GIFFT the geometry of two distinct structures. The first consists of the mesh geometry for the unit cells making up the "background" metamaterial layer. For the structure shown in Fig. 1, the "background" unit cell can be taken as two split-ring resonators (SRRs) oriented along the z-direction. The second geometry description is that of the unit cell containing a single dipole plus two SRR elements (the "source" element). For the GIFFT technique, only the explicit meshing of these two distinct unit cells is required, with a replication of these "mother" cells automatically occurring in the computational part of the algorithm. (For the structure shown in Fig. 1, we have one "source" element and twenty-four "background" elements.)
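To make the convolutional structure explicit, the following Python sketch (our illustration, not the authors' solver) shows how a far-interaction matrix-vector product with a grid-sampled Green's function reduces to a zero-padded FFT convolution; the toy kernel merely stands in for whatever GF the problem requires:

```python
import numpy as np

def far_matvec(g_samples, q):
    """Evaluate v[i] = sum_j G(i - j) q[j] on a regular N x N grid via FFTs.
    g_samples: GF sampled once per difference index, shape (2N-1, 2N-1)
    q:         basis-function amplitudes projected onto the grid, shape (N, N)"""
    n = q.shape[0]
    full = g_samples.shape[0] + n - 1           # linear-convolution extent
    pad = 1 << (full - 1).bit_length()          # zero-pad to a power of 2
    v = np.fft.ifft2(np.fft.fft2(g_samples, (pad, pad)) *
                     np.fft.fft2(q, (pad, pad)))
    return v[n - 1:2 * n - 1, n - 1:2 * n - 1]  # lags matching the grid points

# Toy scalar free-space-like kernel on an 8 x 8 grid of interpolation points.
N = 8
d = np.arange(-(N - 1), N)
R = np.hypot(*np.meshgrid(d, d))
G = np.where(R > 0, np.exp(-1j * R) / np.maximum(R, 1e-9), 1.0 + 0j)
q = np.random.default_rng(1).standard_normal((N, N))
print(far_matvec(G, q).shape)  # (8, 8): one output per observation point
```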
It is significant to note that, for this implementation, the GF sampling grid has to be large enough to include both "background" and "source" elements (independently of their location) to form a large brick volume (only the y-z plane is shown in Fig. 1(b)) where the FFT algorithm is then applied.

ANALYSIS OF A DIPOLE OVER A HIGH-IMPEDANCE SURFACE
A finite-sized periodic material made of a two-layer array of capacitively-loaded split-ring resonators (SRRs) is studied here, with a short strip dipole placed above the metamaterial at a height h above the top of the upper SRR, as shown in Fig. 1. The flat strip dipole is placed in the x-y plane (zero thickness along z) and is of width W = 0.4 mm in the x-direction and length L = 2.4 mm along the y-direction. It is fed by a delta-gap voltage generator at its center and meshed with three basis functions along its length. The metamaterial layer has the dimensions W = 4.06, L = 2.54, T = 0.457, U = 1.65, S = 1.245, G1 = 1.02, G2 = 0.508 (all in mm), as shown in Fig. 2. As mentioned previously, throughout this study the basic unit cell (considered the "background" unit cell in the finite analysis) consists of two SRRs with the capacitive gaps facing the z-direction. A similar SRR-based metamaterial block, of both infinite and finite extent, has been studied by Erentok et al. in [5] and shown to provide an artificial-magnetic-conductor performance, with agreement between experiment and simulations being demonstrated. While in [5] these SRR elements were embedded in a duroid substrate of ε_r = 2.2, for a preliminary application of the GIFFT method an air substrate is considered here, since it permits use of the FFT in all three dimensions. The periodicities along the x- and y-directions of the metamaterial layer are taken to be a = 1.57 mm and b = W + G1 = 5.08 mm, respectively.

An analysis of memory requirements for the standard MoM method for the problem of a metamaterial layer comprising 9 × 7 periodic elements, with 52 degrees of freedom per element, shows that a Toeplitz storage format for the MoM impedance matrix requires 0.77 × 10^6 entries. In principle GIFFT requires the storage of only 84 GF samples per cell when we choose a 3 × 7 × 7 points-per-cell interpolation scheme, so that a total of only 2.4 × 10^4 GF samples are stored. In practice, the memory requirement is slightly higher because of the zero padding to the nearest power of 2 needed to apply the FFT.

Figure 3(a) shows how the size of the SRR substrate affects the input impedance of the short dipole located at a height h = 2.5 mm. Two cases have been considered: a small metamaterial substrate made of 7 × 3 periodic elements and a large one made of 33 × 11 periodic elements. For the large substrate case we have also considered the dipole height h = 2 mm. The trends of the real parts of the input impedance are not changed by the size, though the exact values do vary. The radiation patterns are shown in Fig. 3(b) for the small (7 × 3 periodic elements) and large (33 × 11) substrate cases, with the dipole located at h = 2.5 mm at a frequency f = 13.73 GHz.
Design of the future circular hadron collider beam vacuum chamber

EuroCirCol is a conceptual design study of a post-LHC, Future Circular Hadron Collider (FCC-hh) which aims to expand the current energy and luminosity frontiers. The vacuum chamber of this 100 TeV, 100 km collider will have to cope with unprecedented levels of synchrotron radiation linear power for proton colliders, 160 times higher than in the LHC for baseline parameters, consequently releasing much larger amounts of gas into the system. At the same time, it will be dealing with a tighter magnet aperture. In order to reach a good vacuum level, it has been necessary to find solutions beyond the particle colliders' state of the art. This paper proposes a design of a novel beam screen, the element responsible for absorbing the emitted power. It is intended to overcome the drawbacks derived from the stronger synchrotron radiation while at the same time allowing a good beam quality.

I. INTRODUCTION
The Future Circular Hadron Collider (FCC-hh) is a study aiming to propose a 100 km long accelerator as a successor of the 27 km long Large Hadron Collider (LHC) [1,2]. In the FCC-hh two counter-rotating proton beams would achieve an energy of 50 TeV, leading to collisions at 100 TeV at the center of mass. Such energies require superconducting bending magnets providing up to 16 T, an ambitious step forward with respect to the current 8.3 T of the LHC dipole magnets, which are needed to steer a 7 TeV beam. This rise in beam energy results in a dramatic increase of the emitted synchrotron radiation (SR), attaining linear power density levels of 35.4 W/m, around 160 times higher than in the LHC, whose maximum is 0.22 W/m (see Table I). As in the LHC, the proposed FCC-hh magnets are based on a two-in-one design, where the two beam pipes are incorporated into a common yoke cooled by superfluid He at 1.9 K. Superfluid He allows an easier and more effective cooling of the magnet. In addition, at such low temperatures all gas species except He condense on a surface, with saturated vapor pressures lower than 10^-12 mbar. To avoid excessive beam-induced heat load transfer to the 1.9 K surfaces, beam screens are inserted in the magnet cold bores, aiming to intercept the SR power at higher temperatures. In this way, the cooling efficiency is increased [4].

This paper proposes a novel beam screen (BS) design for the FCC-hh, intended to meet the requirements of such a challenging collider while coping with the detrimental effects arising from the unprecedentedly high beam energy. The main challenges the FCC-hh BS has to overcome are: (i) the need for a higher pumping speed, to counter the higher gas load in the chamber (derived from the much higher SR power emission); (ii) the higher photoelectron generation (also derived from the higher SR), which may lead to an electron cloud (e⁻ cloud) build-up; (iii) the strong Lorentz forces generated during a magnet quench, derived from the huge dipole magnetic field; and (iv) the heat management. These topics and the solutions adopted to address them are covered in this paper, paying special attention to the SR generation. The study of the gas generation and the vacuum level in the beam chamber is covered in another publication [5], owing to the otherwise unaffordable increase in length and complexity of the resulting paper.

II. VACUUM SPECIFICATIONS
Ultrahigh vacuum (UHV) is generally needed in particle accelerators to reduce beam-gas interaction to the required level.
When the vessels of a vacuum system are held at different temperatures, as is typical for a set of chambers kept at cryogenic and room temperatures, the residual gas in the beam pipe is quantified by gas density rather than pressure. As in the LHC, the BS of the FCC-hh is perforated so that gas molecules can migrate to the coldest surface, the cold bore, and be cryopumped, providing enough pumping speed to keep the gas density in the beam chamber below the maximum allowed value. The maximum gas density is defined by two constraints: (i) the nuclear scattering beam lifetime (tau_bg) has to be longer than 100 h; (ii) the thermal load on the cold mass of the magnets (composed of all the elements held at 1.9 K that are directly cooled by the cryogenic system, such as the coils, collars, iron yoke or the cold bore) attributed to nuclear scattering (P_n) has to be lower than 0.2 W/m on average [6]. These constraints can be expressed with Eqs. (1) and (2), reconstructed here from the definitions that follow:

tau_bg = 1 / (n sigma c),   (1)

P_n = (I/e) n sigma E k_a (1 - k_b).   (2)

The gas density specification that fulfills both expressions (approximated by default) is the same as that of the LHC [3,7], i.e., less than 1 x 10^15 H2 eq/m^3, where "H2 eq" means the equivalent pure H2 density once all the different nuclear scattering cross sections for other gas species have been taken into account. For this value, tau_bg results in 107.2 h and P_n in 0.178 W/m. Here sigma is the nuclear scattering cross section, 86.4 mb (taken from FLUKA [8,9]), E the beam energy (in eV), I the beam current, n the gas density, and k_a the fraction of the total scattered power in the arcs absorbed by the cold mass. For the latest design of the FCC-hh vacuum chamber, k_a has been found to be approximately 0.86 on average in the arc cell [10]. The fraction of power deposited in the BS is only 0.05; the remaining power is deposited in the tunnel walls or escapes. k_b is defined as the fraction of protons whose interactions with the residual gas do not result in any energy deposition in the accelerator elements and which continue around the ring, i.e., approximately 0.042 in the FCC-hh [11].

III. SYNCHROTRON RADIATION IN THE FCC-hh

Even though it is designed for slightly lower beam currents than the LHC, the high beam energy of the FCC-hh results in a dramatic increase of the SR power (P) and critical energy (eps_c). To allow a rapid comparison, these two quantities are plotted for both colliders in Figs. 1 and 2 as a function of the beam energy, using Eqs. (4) and (5), derived from the expressions found in [12,13]. Compared to the LHC, the linear SR power density in the FCC-hh is about 160 times higher, i.e., of the order of magnitude of modern synchrotron radiation sources. However, in the range of energies of the LHC (0.45-7 TeV), both the FCC-hh SR power and eps_c are lower, due to the larger radius of magnetic curvature (rho), around 10.45 km in the FCC versus 2.8 km in the LHC. A comparison of the SR spectra generated by the two colliders can be found in Fig. 3. At maximum beam energy, most of the photon flux in the LHC is generated in the infrared-UV region (1.24 x 10^-3 to 100 eV, around 95%), and a marginal part in the soft x-ray region (>100 eV, only around 2% of the total emitted flux). In the FCC-hh around 66% of the photons are emitted in the soft and hard x-ray regions. One of the hypotheses present in the literature to explain photon stimulated desorption (PSD) describes a mechanism in which photoelectrons are the source of the gas generation [14].
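The key figures of Secs. II and III can be checked from first principles. A minimal sketch follows (Python), assuming the standard expressions for the linear SR power density, critical energy and photon flux of a proton beam in a bending magnet, plus the reconstructed Eqs. (1)-(2) above; the LHC beam current (0.58 A) is an assumption, as it is not quoted in this paper.

import math

e     = 1.602176634e-19              # elementary charge [C]
c     = 2.99792458e8                 # speed of light [m/s]
eps0  = 8.8541878128e-12             # vacuum permittivity [F/m]
hbarc = 197.3269804e-15 * 1e6 * e    # hbar*c = 197.327 MeV*fm, in J*m
mp_c2 = 938.27208816e6 * e           # proton rest energy [J]
alpha = 1.0 / 137.035999

def sr_properties(E_eV, I_A, rho_m):
    gamma = E_eV * e / mp_c2
    # linear SR power density in the bends [W/m]
    P_lin = e * gamma**4 * I_A / (6.0 * math.pi * eps0 * rho_m**2)
    # critical energy [eV]
    eps_c = 1.5 * hbarc * gamma**3 / rho_m / e
    # photon flux per meter of bend [ph/(m s)]
    flux = (I_A / e) * 5.0 * alpha * gamma / (2.0 * math.sqrt(3.0) * rho_m)
    return P_lin, eps_c, flux

for name, E, I, rho in [("FCC-hh", 50e12, 0.50, 10450.0),
                        ("LHC",     7e12, 0.58,  2804.0)]:  # LHC current assumed
    P, ec, f = sr_properties(E, I, rho)
    print(f"{name}: P = {P:.2f} W/m, eps_c = {ec/1e3:.2f} keV, flux = {f:.2e} ph/(m s)")

# Vacuum constraints of Sec. II, as reconstructed in Eqs. (1)-(2):
n, sigma = 1e15, 86.4e-31    # H2-eq gas density [1/m^3], cross section [m^2]
ka, kb   = 0.86, 0.042
tau_bg = 1.0 / (n * sigma * c) / 3600.0                          # lifetime [h]
P_n    = (0.50 / e) * n * sigma * (50e12 * e) * ka * (1.0 - kb)  # [W/m]
print(f"tau_bg = {tau_bg:.1f} h, P_n = {P_n:.3f} W/m")           # ~107.2 h, ~0.178 W/m

These reproduce the values quoted in the text (35.4 and 0.22 W/m, 107.2 h, 0.178 W/m), which supports the reconstructed form of Eqs. (1)-(2).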
The extraction of photoelectrons from the chamber wall needs photon energies higher than 4-5 eV, i.e., the work functions of the metals usually employed in UHV. Therefore, photons below this energy will not contribute substantially to the increase of the gas density inside the vacuum chamber. In the LHC, for design parameters, the photon flux is 1 x 10^17 ph/(m s) [see Eq. (3)], with 48% of this amount above 4 eV. In the FCC-hh, the photon flux is 1.7 x 10^17 ph/(m s), with 88% of the photon energies above 4 eV. On the assumption that photoelectrons are the source of PSD, this would mean that in the FCC-hh there are around 3 times more photons emitted per meter capable of increasing the gas load in the beam chamber.

IV. THE BEAM SCREEN

The BS serves several purposes [15]. Among them, the most relevant one is the reduction of the SR power arriving at the cold bore [3], by directly absorbing it at higher temperatures. The removal of 1 W at 1.9 K requires nearly 1 kW of electric power, which would translate into around 2.3 GW of cooling power for the whole FCC-hh in the absence of a BS, making the machine totally unfeasible. From the vacuum point of view, its most important function is to screen the gas condensed on the cold bore from direct SR impact, avoiding the desorption of the accumulated gas back into the system [16] and the consequent drastic reduction of pumping speed. In addition, the BS is also responsible for mitigating the e- cloud effect generated by the beam's presence and for ensuring a sufficiently low beam impedance. At the same time, the BS must preserve the magnetic field quality and the minimum clearance for the beam, has to respect the tight aperture of the magnet bore, and has to ensure its structural integrity during magnet quenches. The latest FCC-hh BS design for dipole magnets is shown in Fig. 4. The BS elements and their main purposes are presented hereunder.

A. Primary chamber

The primary or inner chamber is the innermost part of the beam screen. Its volume is delimited by two 1.3 mm thick copper-colaminated P506 [17] stainless steel (SS) sheets. The P506 SS, 1 mm thick, is used to achieve a high stiffness while yielding a low relative magnetic permeability (<1.005). The OFE copper layer is 0.3 mm thick (in the LHC it was 0.075 mm thick [18]) and has an RRR of at least 100; it is used to achieve low impedance values. Based on machine optics considerations, the inner chamber has to guarantee a clearance containing a 15.5 sigma beam aperture [19] while yielding a low beam impedance. The inner copper surface is kept as cold as possible to minimize the copper's resistivity [20]. Provided that the e- cloud effect is effectively suppressed, the primary chamber does not receive any significant heat load besides that of the image currents: the SR reflected back from the secondary chamber is minimal, and only the outer angular extremes of the SR beam hit it directly, carrying a negligible amount of power. Its temperature is thus directly determined by that of the BS coolant (supercritical helium), with less than 0.5 K of difference. The central slot in the inner chamber, which leads to the secondary one, has an aperture of 7.5 mm. It is optimized to transfer 99.9% of the generated SR power to the secondary chamber even for the worst case of 2 mm vertical misalignment (see Fig. 5), whilst covering the inner area of the secondary chamber as much as possible.
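The 99.9% figure can be illustrated with a small sketch that estimates the fraction of the SR fan passing the 7.5 mm slot under misalignment, assuming a Gaussian vertical power profile; the sigma of 0.5 mm is an assumption inferred from the roughly 2 mm full vertical fan size quoted in the next section, not a value given in the text.

import math

def slot_transmission(half_aperture_mm, misalign_mm, sigma_mm):
    # fraction of a vertically Gaussian SR fan passing a centered slot
    # when the fan is offset vertically by misalign_mm
    s2 = sigma_mm * math.sqrt(2.0)
    hi = ( half_aperture_mm - misalign_mm) / s2
    lo = (-half_aperture_mm - misalign_mm) / s2
    return 0.5 * (math.erf(hi) - math.erf(lo))

# 7.5 mm slot (half aperture 3.75 mm), 2 mm worst-case misalignment,
# sigma = 0.5 mm assumed for the fan's vertical profile
print(f"{slot_transmission(3.75, 2.0, 0.5):.4f}")   # ~0.9998, i.e. ~99.9%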
During beam injection at 3.3 TeV, the vertical misalignment could go up to 4 mm for a short time [21], during which the SR beam would hit the wall of the primary chamber directly. Nevertheless, at such a low beam energy the SR power and eps_c are very low, resulting in negligible temperature variation and gas desorption rate. The edges of the colaminated P506 SS inner chamber sheets, which mark the boundaries of the 7.5 mm slot, are coated with 100 um of copper to keep the impedance within the requirements. Although the SS surface exposed to the beam's sight would be very small if left uncoated, SS is three orders of magnitude more resistive than copper at cryogenic temperatures and would exceed the allocated impedance budget, making it necessary to cover it. The chosen coating solution has to guarantee an electrical conductivity of at least 6.5 x 10^8 S/m at 50 K [22]. Cold spray and electrodeposition are the initially envisaged options, both compatible with the thermomechanical behavior of the BS. Additional studies are required to fully assess the different technological options and the features needed to produce this copper layer on the edge in a reliable and cost-effective way. To mitigate the e- cloud effect, it is proposed to treat part of the inner chamber surface with Laser Ablation Surface Engineering (LASE) [23-26] or to coat it with amorphous carbon (a-C) [27,28]. These treatments are able to lower the secondary electron yield (SEY) below 1 for electron energies of 0-1000 eV. From the manufacturing point of view, LASE is preferred over a-C since it can be applied during series production under atmospheric pressure, considerably lowering the manufacturing costs when scaled up to the 100 km twin-bore machine. The drawback of LASE, however, is a worse surface resistance owing to its high aspect ratio. That said, its resistance can be minimized if the ablation ratio, and thus the SEY reduction, are low [29], and/or if the treatment is applied parallel to the beam's direction, achieving at cryogenic temperatures surface resistance values quite similar to a-C ones, even for high ablation rates [30].

B. Secondary chambers

Two lateral baffles, symmetrically assembled, close horizontally the annular space between the primary and secondary chambers. These baffles are composed of a 1 mm thick P506 SS sheet and a 75 um copper layer, which acts as a heat carrier. The thickness of this layer, the same value as in the LHC's BS, has been optimized to minimize at the same time the forces generated during a magnet quench and the temperature increase on the irradiated baffle (less copper means less force but also less heat transfer). The SR fan directly hits one of the baffles of the secondary chambers with about 29 W/m on average and with an approximate vertical size of 2 mm (see Fig. 5). The average grazing angle of incidence of the SR on the BS is 0.10 deg (1.8 mrad), higher than the angular offset of 0.077 deg due to the long travel path of the photons, which means that the SR emitted at the end of each bending magnet (MB) misses the following magnet and impacts the second one in the line, with a doubled angular offset. A sawtooth surface finishing is present on the irradiated baffle, as in the LHC. This finishing, applied by means of a roller with a jagged relief on its surface, leaves triangular teeth perpendicular to the SR trajectory, bringing the incidence angle close to normal (90 deg).
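The 0.10 deg figure can be reproduced with simple geometry, sketched below: a photon emitted tangentially to the curved orbit meets a wall at horizontal distance d with grazing angle approximately sqrt(2d/rho). The distance d of about 16 mm from beam to baffle is an assumed value, not quoted in the text.

import math

rho = 10450.0   # bending radius [m], from the text
d   = 0.016     # horizontal beam-to-baffle distance [m], assumed

s     = math.sqrt(2.0 * d * rho)   # straight photon path before impact [m]
theta = math.sqrt(2.0 * d / rho)   # grazing angle at impact [rad]
print(f"s = {s:.1f} m, theta = {theta*1e3:.2f} mrad = {math.degrees(theta):.3f} deg")
# s ~ 18 m exceeds the 14.069 m dipole magnetic length, so photons emitted near
# the end of one dipole miss the next magnet, as described in the text.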
The teeth are rolled into the soft copper layer; given the hardness of the P506 SS, the copper layer is also needed for this reason. Owing to the reflectivity properties of x-rays, the near-perpendicular incidence increases the SR absorption on the impact area. Since the average impact grazing angle in the FCC-hh is much lower than in the LHC, the LHC's sawtooth structure has been adapted by making the teeth two times longer. This minimizes the amount of SR hitting the rounded tips of the teeth (present due to manufacturing limits), which increase the residual SR scattering. In order to properly model these rounded areas in the computer tools, an LHC BS sample was measured with an optical profilometer; results are shown in Fig. 6. A dedicated experimental plan led by LNF-INFN (Frascati, Italy) was also arranged with the objective of measuring the reflectivity and photoelectron yield of the sawtooth surface and other materials used for the beam screen, in the optics beamline of the BESSY-II light source [31]. With the obtained data [32-34], the simulations were improved and validated, and an equivalent, simple model of the sawtooth surface was created in order to save computing resources. As a conservative approach, the area of the measured surface has been multiplied by a standard factor of two, enhancing the resulting reflectivity. Figure 7 displays the simulated reflectivity of an ideal sawtooth surface compared with a nonideal, pessimistic one without perfectly sharp teeth; the theoretical reflectivity of an untreated copper surface is also shown. It can be noticed that this treatment is highly efficient in absorbing high energy photons. For the proposed sawtooth profile, the performed simulations show an absorption of around 98% of the total incident SR power and more than 80% of the total photon flux at 50 TeV. If no sawtooth finishing were present, the absorption would be around 46% for the power and 20% for the flux. With this surface finishing the gas load attributed to PSD is lowered, since the total irradiated area is smaller and the SR incidence is perpendicular. The number of photons reflected back to the primary chamber is also diminished, lowering the generation rate of e- seeds for the e- cloud effect (N_e). In case an improvement of the SR absorption were required, there is the possibility of increasing the roughness of the rounded areas by treating them with LASE, with the initially envisaged drawbacks of increased manufacturing costs and surface resistance. Thanks to its high surface aspect ratio, LASE provides an exceptionally high absorption rate, as found in the performed experiments [32-34]. Furthermore, using LASE on the sawtooth would also result in a further reduction of the gas load, due to its low PSD molecular desorption yield [36] and to the lower N_e in the inner chamber (see Sec. V). Studying this strategy in the future is encouraged. In order to allow the gas to reach the cold bore, each baffle has two rows of pumping holes designed to maximize the pumping speed while minimizing the SR leaked to the 1.9 K cold bore and guaranteeing enough mechanical stiffness. The pumping holes are placed behind the inner chamber, as far as possible from the SR impact area [see Fig. 4(a)], so that they are protected from direct SR irradiation and from e- cloud impingement, since electrons generated on the sawtooth surface are forced to follow the magnetic field lines (see Fig. 11), and the baffle's curvature prevents any vertical leakage.
Electrons generated close to the pumping holes do not receive any significant kick from the beam's positive space charge, preventing their multiplication in the secondary chamber. Thanks to this double-chamber layout, the electron shields present in the LHC's BS [37] are no longer necessary. In addition, since the beam has no direct sight of the pumping holes, their contribution to the BS impedance is negligible. With impedance no longer a constraint, and being protected from direct SR irradiation, the pumping holes can be much larger than in the LHC, enhancing the pumping speed. Without the double-chamber layout, their dimensions would be unaffordable [38].

C. Cooling channels

Two P506 SS cooling channels are placed at the top and bottom of the BS. They are welded to the inner chamber sheets and to the lateral baffles. Supercritical He flows through them, cooling half a cell in a row (approximately 107 m). At nominal current and beam energy, the He is at 40 K at the inlet and 57 K at the outlet [39]. Compared with the LHC cooling channels, a considerable increase of the cross section area was necessary to dissipate the higher SR power.

D. Cold bore

The cold bore is an SS 316LN pipe, 1.5 mm thick and with an inner diameter of 44 mm. It is kept at 1.9 K and is the only means of pumping in the machine during normal operation. It separates the superfluid He surrounding the magnet coils from the vacuum chamber. The BS is supported inside the cold bore by means of periodic P506 SS spring sets every 750 mm, designed to minimize the heat conduction to the cold bore and to ease the insertion of the BS inside it (see Figs. 20 and 23). The solution used in the LHC, short bi-metallic rings [41,42], has been discarded for the FCC-hh: even if cheaper, they are not as efficient at thermally isolating the BS from the cold bore, an effect which would be exacerbated in this new BS due to its higher temperature.

E. Interconnects

The continuity of the beam screen in the arcs is broken by the magnet interconnects, in which SS bellows and rf fingers absorb the offset angle between the magnets, thermal displacements and mechanical tolerances. In order to protect these areas from direct irradiation, a copper absorber, shown in Fig. 8, is proposed to be placed at the end of each magnet, stopping a maximum of 41 W of SR power and delivering a shadow of around 1.2 m downstream, following the beam direction. The absorber slope should be treated with LASE to minimize the SR scattering and photoelectron generation, as it is difficult to apply a sawtooth finishing to this area. As shown in Table IV, the resulting power on the copper transition pieces, rf fingers and bellows of the interconnect is less than 0.2 W, effectively removing the need for active cooling in this area and minimizing the related outgassing. Other solutions to avoid the irradiation of the rf fingers are also feasible: the absorber can be shortened or even removed, as long as the diameter of the rf fingers and their adjacent transition elements is larger than that of the BS, so that a very small amount of SR is absorbed on the last copper transition, which should also include LASE.

F. General remarks

The presented BS is intended to minimize the beam impedance as much as possible. The impedance calculation, however, has proven to be challenging due to the limited maturity of the studies carried out on LASE technology.
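Returning to the cooling channels of Sec. IV.C, a rough sizing sketch follows. The heat loads and temperatures come from the text, while the specific heat of supercritical He (about 5.2 J/(g K)) and the assumption that one loop takes the full linear load are simplifications for illustration, not design values.

length      = 107.0             # half-cell cooled by one loop [m], from the text
q_lin       = 35.4 + 3.0 + 0.1  # W/m: SR + image currents + e- cloud budgets
T_in, T_out = 40.0, 57.0        # He inlet/outlet temperatures [K]
cp          = 5.2               # J/(g K), assumed for supercritical He

Q    = q_lin * length               # total heat per loop [W]
mdot = Q / (cp * (T_out - T_in))    # required He mass flow [g/s]
print(f"Q = {Q/1e3:.2f} kW, mdot = {mdot:.0f} g/s")   # ~4.1 kW, ~47 g/s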
The resulting pumping speed is considerably high, surpassing the LHC's even at the same normalized temperature, and is sufficient to guarantee the gas density requirement within a reasonable conditioning time [5]. The calculated values are shown in Table II. The calculation with the outgassing (Q) applied on the sawtooth represents the case closest to reality, where PSD dominates the gas load. When Q is applied on the inner chamber, it represents a pessimistic calculation where all the gas desorption happens on the inner copper layer (caused either by electrons or by reflected photons), this value being the lowest attainable. Even if the complexity of the FCC-hh BS is much higher than that of the LHC, it is compatible with large-scale production technologies [43] and affordable from the economic point of view, representing a very small fraction of the collider's cost [44].

V. ELECTRON CLOUD

The secondary electron emission of the vacuum chamber surfaces can drive an avalanche multiplication effect, filling the beam chamber with a cloud of electrons. The interaction of the proton beam with the e- cloud can lead to a series of detrimental effects on the collider's performance, such as emittance growth, transverse instabilities, heat load on the surfaces bombarded by electrons, and a deterioration of the vacuum quality owing to electron stimulated desorption. The BS therefore has to comply with a series of design constraints in order to achieve a low electron density and minimize its impact on the collider's performance. The e- cloud build-up depends on the SEY of the chamber surfaces, on the chamber geometry, the beam current, the bunch spacing, and the photoelectron generation rate. Among the parameters depending on the BS design, the SEY has the highest influence on the electron density. A series of SEY constraints have therefore been defined. As a first step, these requirements are expressed through the multipacting threshold, namely the maximum value of the SEY curve above which the exponential electron multiplication starts independently of the number of photoelectron seeds. They have been estimated with simulation studies of e- cloud build-up with the PyECLOUD code [47,48], using a secondary emission model [49-51] based on measurements on samples of the LHC copper co-laminated beam screens [52-54]. The calculated SEY requirements can be found in Table III. The requirements have been calculated for each bunch spacing option, for dipoles, quadrupoles and drift spaces (without magnetic field), and for nominal and injection energies. 12.5 ns turns out to be the most demanding option, whilst 25 ns, the FCC-hh's nominal value, is the least demanding one. Conditioned copper can reach SEY values of around 1.2-1.4, as displayed in Fig. 9. Consequently, it is necessary to use an SEY mitigation solution for all the quadrupole magnets, which require a lower SEY, of at most 1.1 even in the best scenario. For the dipole magnets, in case the 12.5 ns bunch spacing option were discarded, raw, untreated copper could be used if conditioned; otherwise, since the 12.5 ns option requires an SEY < 1.1, an SEY mitigation solution should be applied to them as well. As for the drift spaces, the calculation is indicative, but they do not present any strong requirement. For the common range of electron energies in the beam chamber, LASE can reach SEY values below unity even without beam conditioning, and well below one after high doses (see Fig. 9).
Nevertheless, it is relevant to point out that LASE can present different properties and SEY values depending, among other factors, on the surface ablation level [24], a feature which increases the surface aspect ratio and apparent blackness. The improvement in SEY is proportional to this feature, but it also affects the surface resistance [25]. Therefore, as future work, in case LASE is finally accepted as the chosen solution for the FCC-hh, it is important to adjust the properties of this treatment to optimally match the SEY requirements while minimizing its impact on the impedance. More information on the FCC-hh's baseline LASE parameters can be found in [55]. Additionally, even if the SEY is below the requirements (see Table III), transverse instabilities can occur if the electron density in the chamber is high enough. The maximum allowable electron density (rho_e,th) can be estimated using Eqs. (6)-(9) [56,57], where omega_e is the electron oscillation frequency in the beam potential and K and Q are derived from it; in particular,

Q = min(omega_e sigma_z / c, 7),   (9)

where r_p and r_e are the classical proton and electron radii, nu_s is the synchrotron tune, lambda_p is the bunch line density, sigma_x,y,z are the RMS transverse beam sizes and bunch length, beta_x,y are the machine beta functions and L is the length of the machine over which the e- cloud extends. Using FCC-hh specifications, the threshold electron density results in 6 x 10^10 e-/m^3 at 3.3 TeV (injection) and 3.6 x 10^11 e-/m^3 at 50 TeV (physics). For SEY and electron generation rate (N_e) values below the multipacting threshold, the electron density is approximately proportional to N_e on the areas where the e- cloud occurs, as seen in Fig. 10. To prevent surpassing the threshold and to keep the gas density low, it is advisable to keep N_e in the inner chamber well below 1 x 10^12 e-/(cm^2 s). N_e depends on the SR flux arriving on the surface, and can be found with Eq. (10):

N_e = integral from E_min to E_max of Gamma_ph(E) Y_ph(E) dE,   (10)

where E_max and E_min are the maximum and minimum energies of the SR spectrum arriving at the studied area, Gamma_ph is the photon flux associated with each energy value, and Y_ph the photoelectron yield, i.e., the number of electrons released by the surface per impinging photon. The Y_ph for LHC copper and LASE was found in BESSY's experimental runs [32-34], for a range of photon energies of 35-1800 eV and angles between 0.25 deg and 1 deg; Y_ph values for 4-50 eV were linearly extrapolated. The SR spectrum and flux arriving at the studied regions are given by the ray tracing simulations (see Sec. VI). Thanks to the sawtooth finishing, the energy of the reflected photons which reach the main chamber is very low, as shown in Fig. 15. The maximum N_e values calculated during physics are 2.3 x 10^10 e-/(cm^2 s) for the dipole critical build-up areas (top and bottom flat areas in the primary chamber) and 1.6 x 10^11 e-/(cm^2 s) for the quadrupole ones, if LASE is used. If using raw copper, the corresponding values are 1 x 10^11 e-/(cm^2 s) and 6 x 10^11 e-/(cm^2 s), respectively. In all cases N_e has an associated electron density under the instability threshold (see Fig. 10 for an example of the dipole with the 12.5 ns beam, with the instability threshold drawn), mainly thanks to the high SR absorption of the sawtooth finishing, which considerably lowers the number of photons reaching the critical areas.
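Since the bodies of Eqs. (6)-(8) did not survive in this copy, the sketch below assumes the standard Ohmi-Zimmermann single-bunch threshold form consistent with the variable list above and with refs. [56,57]; this assumed form, and the beam parameters used (the sigmas, nu_s, beta, L), are placeholders rather than values quoted in this paper.

import math

c   = 2.99792458e8
r_e = 2.8179403262e-15    # classical electron radius [m]
r_p = 1.53469825e-18      # classical proton radius [m]

def rho_e_threshold(gamma, nu_s, lam_p, sig_x, sig_y, sig_z, beta, L):
    # electron oscillation frequency in the beam potential (assumed Eq. (7))
    omega_e = c * math.sqrt(lam_p * r_e / (sig_y * (sig_x + sig_y)))
    K = omega_e * sig_z / c            # enhancement factor (assumed Eq. (8))
    Q = min(omega_e * sig_z / c, 7.0)  # Eq. (9), as given in the text
    return (2.0 * gamma * nu_s * omega_e * sig_z / c
            / (math.sqrt(3.0) * K * Q * r_p * beta * L))

# Placeholder 50 TeV parameters (illustrative only):
rho_th = rho_e_threshold(gamma=53290.0, nu_s=2e-3, lam_p=5e11,
                         sig_x=9e-5, sig_y=9e-5, sig_z=0.08,
                         beta=200.0, L=1e5)
print(f"rho_e,th ~ {rho_th:.1e} e-/m^3")  # same order as the quoted 3.6e11 e-/m^3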
During beam injection, the photon flux is 15 times lower [see Eq. (3)] and eps_c is 1.23 eV, much lower than copper's work function; both entail negligible N_e values compared with the physics mode, rendering the latter the only concern. Treating the rounded areas of the sawtooth profile with LASE, and/or having a sawtooth finishing not only on the irradiated baffle but also on the other one, would further lower the photon flux Gamma_ph reflected toward the build-up areas and thus N_e, meaning in turn a lower gas load ascribed to the electron stimulated desorption effect. Figure 11 shows the BS electron density map in dipoles and quadrupoles for an early version of the BS. It can be seen how the electrons are confined along the magnetic field lines, impacting the top and bottom flat areas of the BS in the case of the dipoles and the corners in the case of the quadrupoles. The production of photoelectrons in the secondary chamber does not contribute significantly to the density around the beam, thanks to the magnetic confinement. In contrast to Fig. 4(b), where a LASE layout for dipoles is displayed, in the quadrupole case LASE shall be applied only on the corners of the inner chamber, every 90 deg, where the e- cloud impacts. The use of LASE in the drift spaces between magnets, with a much lower magnetic field, is still under study. An SEM image of the LASE sample whose reflectivity and Y_ph were analysed is shown in Fig. 12. The sample was provided by STFC (Science and Technology Facilities Council, UK) according to the baseline specifications. The high aspect ratio exhibited by the surface can be observed. This feature is thought to be the main reason for the SEY and Y_ph reductions: the electrons become trapped inside the complex morphology, and the light incidence becomes nearly perpendicular to the roughness peaks. e- cloud mitigation based on LASE has recently been demonstrated in an accelerator for the first time [58], with positive results.

VI. SYNCHROTRON RADIATION RAY TRACING

In order to check the BS mechanical stability and to know the pressure levels in the vacuum chamber, several Monte Carlo and finite element simulations have been carried out. These studies need a complete map of both the photon flux and the power absorbed along the vacuum chamber, found with photon ray tracing simulations. The ray tracing has been performed with SYNRAD+ [35,59], a Monte Carlo code which allows coupled vacuum simulations when used along with MOLFLOW+. SYNRAD+ includes a predefined library of reflectivity data. The results of the power distribution map for the FCC-hh BS in an arc dipole with 14.069 m of magnetic length [2] are shown in Fig. 13. The curvature of the proton beam can be noticed, as well as the photon trajectories originating tangentially from it, represented in green. The simulation has been carried out with a nonideal 50 TeV, 500 mA beam: beta has been set to 355 m, the momentum offset dp/p to 0.06% and the normalized emittance eps_N to 2.2 um [2]. For copper and steel a general roughness ratio tau = S_q/T = 0.006 has been assigned, where S_q is the RMS surface roughness and T the autocorrelation length. The physical interpretation of T is that it expresses the minimal distance between two uncorrelated profile points, giving information about the spatial complexity of the surface. tau has been conservatively chosen according to a series of metrology studies performed on LHC BS samples at CSEM (Swiss Center for Electronics and Microtechnology).
LASE areas have been set as perfectly absorbing surfaces in order to obtain pessimistic estimates of the photoelectron generation and of the power deposited on the inner chamber. Looking at the color scale of Fig. 13 and the summary in Table IV, it can be noticed that most of the power is absorbed in the first impact region of the SR beam, the sawtooth area on the left baffle. Owing to its high SR absorption, the cold bore and other areas receive a minimal amount of SR power, fulfilling the beam screen's main purpose. The Gaussian-like SR power distribution emitted by the proton beam can be recognized on the sawtooth region, as previously shown in Fig. 5. The linear power density along the BS of the MB is displayed in Fig. 14. The highest value is found at the beginning of the BS (approximately 40 W/m), after the initial region of shadow produced by the SR absorber, and decreases progressively along the BS following the beam direction, with the exception of one small jump after 5 m owing to the change of magnetic region of origin. The linear power decay is ascribed to the progressive beam curvature in the previous MB: the beam curvature decreases the SR angle of incidence on the wall, causing a larger spread of the photon fan and lowering its intrinsic power density. The average power received by the sawtooth is around 29 W/m. The SR ray tracing allows the determination of the SR energy spectrum arriving at each region. Figure 15 shows an example, representing the spectrum above 1 eV of the SR hitting the horizontal faces of the inner chamber (namely, the areas between which the electron multipacting effect takes place in dipoles) and of the SR arriving at the cold bore. It can be seen that most of the photon flux arriving at these regions carries an energy below the work functions of copper and SS, effectively keeping the gas desorption and the e- cloud effect under control.

VII. MECHANICAL ANALYSIS

The BS has been designed to ensure an elastic behavior after a magnet quench. Eddy currents are induced in the beam screen along its beam axis and, therefore, Lorentz forces squeeze the BS, as seen in Fig. 18. The numerical model used for the magnet quench study is based on the reduced field formulation, by means of which no magnet coil needs to be considered. The induced resistive losses affecting the material properties are taken into account. The detailed description of the model can be found in [60]. The specific Lorentz force for a dipole magnetic field can be expressed (reconstructing the equation lost in this copy from the definitions that follow) as

f_x = sigma(T) x B_y (dB_y/dt),

where f_x is the volumetric force, B_y the magnetic field, x the horizontal distance from the center of the beam screen and sigma(T) the electrical conductivity as a function of temperature. One quarter of the periodic unit (17 mm long, see Fig. 18) has been modeled to study the mechanical response to the magnet quench in a time-dependent study.

A. Magnet quench behavior

The evolution of the magnetic field decay [61] and of the forces induced in each half of the beam screen is shown in Fig. 16. The forces attain around 135 kgf/cm on the inner chamber and 44 kgf/cm on the outer one along the axial direction. The displacement and the von Mises stress map of the beam screen when the Lorentz forces are highest, i.e., at 55 ms, are shown in Figs. 17 and 18, respectively. The highest stresses are located around the aperture of the secondary chamber. Even if the maximum von Mises stress reaches values as high as 1100 MPa, it is very localized and below the yield strength of the P506 SS, i.e., 1180 MPa at 77 K [17].
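A crude order-of-magnitude check of the reconstructed force expression above is sketched here; the field decay time constant (about 100 ms), the copper conductivity under field (about 1e9 S/m) and the 10 mm lever arm are all assumed values for illustration only, not design inputs.

import math

def f_x(sigma_T, x, B_y, dBy_dt):
    # volumetric eddy-current Lorentz force density [N/m^3], reconstructed form
    return sigma_T * x * B_y * dBy_dt

# Assumed: 16 T field with a 100 ms exponential decay constant, evaluated
# 10 mm from the screen centre at t = 55 ms (the time of peak force in Fig. 16).
B0, tau, t = 16.0, 0.1, 0.055
B    = B0 * math.exp(-t / tau)
dBdt = -B / tau
print(f"f_x ~ {abs(f_x(1e9, 0.01, B, dBdt)):.1e} N/m^3")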
The maximum horizontal displacement during a quench is 0.64 mm, less than the 1.5 mm gap between the BS and the cold bore.

B. Supporting system

The supporting system consists of two concentric rings on top of which five V-shaped elastic fingers are welded, see Figs. 19 and 20. Only one ring is welded to the beam screen while the other is free to slide, thereby allowing an adequate insertion and alignment of the BS within the cold bore. The elastic fingers on the horizontal plane have to withstand the expansion of the BS during a magnet quench without any significant plastic deformation. A radial prestress, due to an imposed radial displacement of 0.1 mm, is applied on each elastic ring to keep the beam screen well positioned with respect to the cold bore. The weight of the beam screen, 2.16 kg/m, causes a vertical displacement of -32 um. During a quench, the most loaded elastic fingers are the ones on the horizontal plane: they are squeezed toward the cold bore by 0.64 mm. After a quench, their residual deformation turns out to be 20 um, five times lower than the prestress, and it is therefore deemed negligible. The residual von Mises stress in the horizontal fingers attains values up to around 500 MPa, see Fig. 20; however, these values are very localized and not detrimental.

VIII. THERMAL ANALYSIS

The temperature behavior of the beam screen has been simulated by means of the Heat Transfer in Solids and the Heat Transfer with Surface-to-Surface Radiation modules of COMSOL Multiphysics [62]. A specific geometry has been developed for the thermal analysis to take into account the various welds and the thermal contacts between interfacing components. To this purpose, the colaminated copper layers are considered fully bonded. The welds between the secondary chamber and the cooling channels have been modeled by taking into account the actual spot welding pattern: to reflect it, an array of 500 x 500 x 75 um^3 bonding elements placed every 1 mm has been implemented. The weld between the cooling channel and the inner chamber has been modeled by considering a bonded surface 500 um wide along the external edges of the channel and no contact along the remaining portion. The contact surface between the rings of the supporting system and the beam screen is considered fully bonded. The contact area between the elastic finger and the cold bore has been calculated according to the Hertzian theory of nonadhesive elastic contact [63]. Such an area turns out to be 0.01 mm^2 and it has been used to dimension a cylindrical element 0.1 um high bonding the cold bore to the elastic spring. By adopting this modeling trick, the exact contact area of the thermal transfer is guaranteed. All the thermal contacts have been conservatively considered fully bonded. The material properties have been assigned as functions of temperature: the heat capacity at constant pressure is taken from [64] for the P506 and from [65] for the copper, while the thermal conductivity is taken from [66] for the P506 SS and from [67] for the copper. All the internal surfaces involved in the thermal radiation have been considered gray surfaces, while an insulation condition has been applied on the external surfaces of the cold bore. The surface emissivity of the copper in the BS, considered to be at 77 K, has been set to 0.12 and that of the P506 to 0.34. For the 1.9 K cold bore the emissivity is set to 0.12 [68].
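With the emissivities just quoted, the gray-body radiative load from the screen to the cold bore can be estimated with the classic concentric-cylinder formula, as sketched below; the screen outer radius is an assumed value. The few-mW/m result is consistent with the "minor role" attributed to thermal radiation in the next section.

import math

SB = 5.670374419e-8   # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiative_load_per_m(T_bs, T_cb, eps_bs, eps_cb, r_bs, r_cb):
    # gray-body exchange between long concentric cylinders, per unit length
    eps_eff = 1.0 / (1.0 / eps_bs + (r_bs / r_cb) * (1.0 / eps_cb - 1.0))
    return eps_eff * SB * 2.0 * math.pi * r_bs * (T_bs**4 - T_cb**4)

# 57 K copper screen (eps 0.12) inside the 44 mm i.d., 1.9 K cold bore
# (eps 0.12, as in the text); the 20 mm screen outer radius is an assumption.
print(f"{radiative_load_per_m(57.0, 1.9, 0.12, 0.12, 0.020, 0.022)*1e3:.1f} mW/m")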
To simplify the meshing of the beam screen, the area of impact of the SR has been divided into seven longitudinal regions of equal area, vertically aligned, and an absolute heat load has been applied to each one of them, emulating the typical Gaussian distribution of the SR (see Fig. 5) as a 7-step function. The temperature map of a short model has been compared with that of the exact Gaussian heat load profile, resulting in a good match, thanks to the high thermal conductivity of copper and to the fact that the total heat load of the simplified model is the same. Each periodic portion of the beam screen has been discretized with 395,479 tetrahedral elements, resulting in an average element quality of 0.57. The analysis has been performed in stationary conditions for the lowest and highest temperatures of the coolant, 40 K and 57 K.

A. Temperature of the beam screen in nominal conditions

The modeling of a periodic unit of the beam screen (17 mm long) is sufficient to determine the temperature distribution in nominal conditions. The main source of heat is the SR, around 40 W/m at the highest point, as represented in Fig. 14. Other minor loads are the image currents and the e- cloud, with budgets of around 3 W/m [4] and 0.1 W/m, respectively. The heat load intercepted in the secondary chamber is transferred through the copper layer and, ultimately, through the welding points joining the secondary chamber to the cooling channels. The heat transfer is limited by the P506 SS, as in this temperature range its thermal conductivity is around a factor 100 lower than copper's. The temperature map of the BS, considering the maximum SR power load and He inlet temperatures of 40 K and 57 K, is shown in Figs. 21 and 22, respectively. As the inner chamber is thermally decoupled from the secondary one, where the SR is intercepted, the temperature of the inner chamber increases by only 0.3 K above the base temperature of the cooling channel. Such a temperature therefore remains within the 40-60 K range needed to keep the beam impedance low.

B. Expected heat loads on the 1.9 K cold mass

For each magnet, 40% of its thermal budget is allocated to the heat loads from inside the cold bore, i.e., 0.3 W/m per aperture [6]. Nuclear scattering accounts for most of the total heat load; its average contribution has been calculated with Eq. (2). A beam screen assembly 750 mm long has been modeled to determine the heat losses to the 1.9 K cold bore from the supporting system, as it is placed every 750 mm. Its maximum heat loss is estimated to be around 45 mW/m for a base temperature of 40 K and 67.7 mW/m for a base temperature of 57 K, see Fig. 23. The other considered heat sources are the thermal radiation produced by the 40-60 K BS and the leaked SR, both of them playing a minor role. All the heat sources are displayed in Table V along with their share of the total. On average, the total heat load sits well within the budget. Nevertheless, it is expected to be surpassed at some points owing to the high variability of the nuclear scattering power deposition along the cell elements: considering only this source, the cold mass of the most impacted dipole can receive up to 278 mW/m [10].

IX. CONCLUSIONS

A new beam vacuum chamber design for the FCC-hh has been presented. It is intended to overcome the challenges derived from pushing the state of the art in beam energy, from the 7 TeV of the LHC up to the 50 TeV of the FCC-hh, which raises the linear SR power density from 0.22 W/m up to 35.4 W/m.
The design aims to minimize the electron cloud build-up, the outgassing triggered by the synchrotron radiation and the heat leakage to the cold mass, while maximizing the beam screen's pumping efficiency. The performed calculations have shown a pumping speed more than three times higher than the LHC's, a heat transfer to the cold mass within the heat budget, and an e- cloud density below the instability limits. The e- cloud would be effectively suppressed thanks to the new SEY mitigation features, not present in the LHC, and to the low reflectivity of the sawtooth finishing. In spite of an SR linear power density around 160 times higher than the LHC's, the FCC-hh beam screen is able to keep the copper surfaces surrounding the beam cold, keeping their resistivity low. All the stresses generated during magnet quenches are also well withstood. The resulting design complexity is much higher than that of the LHC's, but still economically affordable in a large-scale production. Open points remain: the precise calculation of LASE's impact on the impedance, and the determination of the exact manufacturing features that optimally match the collider requirements.
10,434.2
2020-03-06T00:00:00.000
[ "Physics" ]
An account of new developmentalism and its structuralist macroeconomics

This is a personal account of the definition of "new developmentalism" — a national development strategy alternative to the Washington consensus — and of a "structuralist development macroeconomics": the set of models that theoretically justifies that strategy. It is a personal account of collective work involving Keynesian, institutionalist and structuralist economists who are forming a new school of thought in Brazil: a Keynesian-structuralist school. It is Keynesian because it emphasizes the demand side, or the investment opportunities side, of economic growth. It is institutionalist because institutions obviously matter in achieving growth and stability. It is structuralist because it defines economic development as a structural change from low to high value added per capita industries, and because it is based on two structural tendencies that limit investment opportunities: the tendency of wages to grow below productivity and the tendency toward the cyclical overvaluation of the exchange rate.

Over the past ten years, in cooperation with a skilled group of Keynesian and structuralist economists, I have been developing a structuralist macroeconomics of development, that is, a demand-side theory of development based on structural tendencies that constrain investment opportunities and limit the rate of growth of developing countries. On the other hand, based on the Latin American experience with national developmentalism and on the past 20 years' growth experience of dynamic Asian countries, we have been drafting a national development strategy: new developmentalism. Several economists in various countries are developing the new ideas, but I will limit myself to Brazil. I exposed them in a systematic manner in the book Mondialisation et Compétition (2009).

Structuralist development macroeconomics and new developmentalism both concern middle-income countries that have already undergone their national and capitalist revolutions. New developmentalism is a third discourse: an alternative, on one side, to the Washington consensus, for which the solution to all problems lies in reducing the public deficit, and, on the other side, to the populist approach that views fiscal expansion as such a magic solution and is irresponsible in exchange rate terms insofar as it proposes growth with foreign savings. Instead, new developmentalism proposes a strategy based on fiscal responsibility and, principally, foreign exchange responsibility.
Structuralist development macroeconomics, in its turn, is the new Keynesian-structuralist theory that provides the foundation for new developmentalism. It is based on two structural tendencies that limit investment opportunities: the tendency of wages to grow below productivity and the tendency toward the cyclical overvaluation of the exchange rate. With this second tendency and the two models behind it - the Dutch disease model and the critique of growth with foreign savings - the exchange rate is viewed as the key macroeconomic price for development economics. While structuralist economics focused on the critique of the law of comparative advantages, structuralist development macroeconomics sees a chronically overvalued currency as the major impediment to growth. While neoclassical economics sees the exchange rate as fluctuating smoothly around the current account equilibrium, and Keynesian economics sees it as fluctuating with high volatility around such equilibrium, structuralist development macroeconomics sees it as going from currency crisis to currency crisis, due to the Dutch disease and to the capital inflows caused by the growth cum foreign savings policy, the adoption of exchange rate anchors to control inflation, and exchange rate populism. The exchange rate plays the role of a "light switch" that connects, or disconnects, local manufacturing business enterprises using state-of-the-art technology from foreign markets, depending on whether or not the exchange rate is in equilibrium, or competitive. Insofar as the exchange rate is overvalued - does not correspond to the "industrial equilibrium" - local entrepreneurs are denied profitable export-oriented investment opportunities, and the country fails to profit from its major advantage in catching up: its low labor costs.

THE CRITIQUE OF GROWTH WITH FOREIGN SAVINGS

My first two attempts toward a structuralist development macroeconomics were the paper that I wrote in 1999 while in Oxford, just after leaving the Brazilian administration, "Latin America's quasi-stagnation", and a short article where I drafted the critique of growth with foreign savings, "A fragilidade que nasce da dependência da poupança externa" [The fragility that springs from dependency on foreign savings] (2001). Upon my return to Brazil, I wrote "Uma estratégia de desenvolvimento com estabilidade" [A strategy of development with stability] (2002) with Yoshiaki Nakano, my partner in many academic battles. That paper carried out our first systematic criticism of the Central Bank of Brazil's high interest rate policy, and showed that this rate did not correspond to Brazil's sovereign risk, but to a policy of high interest rates that Brazilian society had come to accept insofar as it was persuaded that it was a condition for keeping inflation under control. Present in this paper was an idea that came to be known as the "Bresser-Nakano interest rate hypothesis": beyond a certain threshold, the causal link between sovereign risk and interest rates is reversed, and high interest rates become a determinant of the risk of default. The paper caused, for the first time in many years, an intense debate involving orthodox and heterodox economists.
I was convinced, however, that in addition to criticizing the interest rate policy, there was also a need to reevaluate the role of the exchange rate in economic growth. I had long known that a "relatively depreciated" foreign exchange rate was crucial to economic development. In 2001, while attending a meeting of the National Forum organized by João Paulo dos Reis Velloso, it suddenly became clear to me that the foreign exchange rate was kept chronically appreciated as a result of the policy of growth with foreign savings, that is, of growth with current account deficits. I first inverted the usual connection between the foreign exchange rate and the current account deficit, arguing that the policy of growth with foreign finance or current account deficits caused the overappreciation of the exchange rate: the policy was the independent variable and the foreign exchange rate the dependent one. Secondly, I established a connection between foreign exchange rates and growth. On one hand, the overappreciated exchange rate stimulated consumption, as it artificially increased real wages; on the other hand, it reduced export-oriented investment opportunities, making the investment and growth rates smaller than they otherwise would be. The foreign exchange rate is a demand-side factor of economic development: with a competitive foreign exchange rate, business enterprises using modern technology have access to the entire foreign demand; with an appreciated rate, this access is barred. In 2001 I began my critique of the growth cum foreign savings strategy in the short article already referred to. In the next year, I invited Yoshiaki Nakano to co-write "Economic growth with foreign savings?" (2003). Also in 2002, I applied the new ideas, including the critique of the opening of the capital account (which was not part of the first but of the second Washington consensus), in the paper "Financiamento para o subdesenvolvimento: o Brasil e o Segundo Consenso de Washington" [Financing underdevelopment: Brazil and the second Washington consensus]. In 2007, in a paper with Paulo Gala, "Why foreign savings do not cause growth", the model explaining why foreign savings do not cause growth was formalized, and in the next year, again with Gala, in "Foreign savings, insufficiency of demand, and low growth" (2008), the key relation between exchange rate overvaluation and demand was explained.

It was clear to me that a new theory and a new set of economic policy proposals were emerging. In early 2003, Nakano and I had already assembled a set of ideas that justified a specific name for the proposals we were making. I asked him what we might call them, and immediately accepted his suggestion: "new developmentalism". I was then writing the fifth edition of Development and Crisis in Brazil (2003) and, in its final chapter, "Resuming the national revolution and the new developmentalism", I used this expression for the first time. The new developmentalism was based on a strategic role for the state, on growth with domestic savings, on fiscal balance, on a competitive foreign exchange rate, and on the development of a domestic mass consumer market.
At the same time, I was attempting to gather around the new ideas younger, competent macroeconomists, either Keynesian, such as Fernando Cardim de Carvalho, Luiz Fernando de Paula, José Luiz Oreiro, Fernando Ferrari and João Sicsú, or structuralist, like Ricardo Carneiro, Daniela Prates and Franklin Serrano. The annual meetings of the Political Economy Society were helpful to this end. An important step was taken in 2005 with the publication of Novo-desenvolvimentismo [New developmentalism], a book edited by João Sicsú, Luiz Fernando de Paula and Renaut Michel. In the introduction they defined new developmentalism as being characterized by the following guidelines (2005, p. xxxv): "(1) there is no strong market without a strong state; (2) there will not be sustained growth [...] without strengthening [...] the state and the market and without the implementation of appropriate macroeconomic policies; (3) a strong market and a strong state can only be built by a national development project that aligns growth [...] and social equity; and (4) it is not possible to [reduce] inequality without economic growth at high and sustained rates". In this book I made my first attempt to present a model of the Brazilian economy along these lines: "Macroeconomia pós-Plano Real: as relações básicas" [Post-Real Plan macroeconomics: the basic relations].

In 2006, I wrote my first systematic paper on the new developmentalism, "The new developmentalism and conventional orthodoxy" (2006), in which I argued that from 1930 to 1980 Latin American countries, mainly Brazil and Mexico, had experienced strong growth based on structuralist ideas and on a national developmentalist strategy, but fell into the foreign debt crisis in the 1980s and, since the end of that decade, bowed to the Washington consensus. I then compared the new developmentalism with the old national developmentalism and with conventional orthodoxy, arguing that the new developmentalist policies are better founded theoretically and more responsible than their neoliberal counterparts. In that same year, Luiz Fernando de Paula published "Repensando o desenvolvimentismo" [Rethinking developmentalism] (2006) and, the next year, Sicsú, Paula and Michel (2007) expanded upon the introduction of the book they had previously edited: "Por que novo-desenvolvimentismo?" [Why new developmentalism?]. In 2006, my student Paulo Gala concluded an excellent PhD dissertation: Política Cambial e a Macroeconomia do Desenvolvimento [Foreign Exchange Policy and the Macroeconomics of Development]. In the next year, another student, Lauro González, presented his doctoral dissertation, Crises Financeiras Recentes: Revisitando as Experiências da América Latina e da Ásia [Recent Financial Crises: Revisiting the Latin American and Asian Experiences] (2007), in which he shows that the several financial crises that developing countries faced in the 1990s were caused not by excessive public deficits, but by current account deficits, that is, by the policy of growth with foreign savings.
THE DUTCH DISEASE

At the same time, I was working on another model relating foreign exchange and economic development: the problem of the Dutch disease. In 2005, in a short article in the Folha de S. Paulo, I raised the question, and a broad discussion on whether or not the Dutch disease occurred in Brazil followed. Realizing that I had genuinely new ideas on the subject - a possible advance in relation to the classical paper of Corden and Neary (1982) - I decided to write the theoretical paper "The Dutch disease and its neutralization: a Ricardian approach" (2008), while coauthoring with Nelson Marconi a first study on the Dutch disease in Brazil (2007). I had, however, a problem in concluding my model: if Brazil had always been a case of Dutch disease, how had the country industrialized so successfully between 1930 and 1980 without acknowledging and deliberately fighting the obstacle? I posed the question to Gabriel Palma, who promptly answered: "but Luiz Carlos, we, in Latin America, did nothing other in that period than neutralize the Dutch disease". He didn't have to say anything else. I immediately remembered the Brazilian controls on the foreign exchange rate and the taxation of coffee exports known as "confisco cambial" ("foreign exchange expropriation"), and went on writing my paper. Writing it was a theoretical adventure, a succession of discoveries. I defined the Dutch disease as the long-term overappreciation of the exchange rate due to Ricardian rents associated with one or more commodities that can be exported with a profit at a more appreciated foreign exchange rate than the one required by manufacturing industries using world state-of-the-art technology, because their cost of production is substantially smaller than their international price. Another way of defining the Dutch disease is to say that it is characterized by two equilibrium exchange rates: the "current equilibrium", which balances intertemporally the country's current account, and the "industrial equilibrium" - the one required by efficient manufacturing industries. I showed that the Dutch disease is a permanent market failure, since the country fails to industrialize while keeping its foreign accounts in balance. I showed that its gravity varies according to the size of the Ricardian rents, or the gap between the industrial and the current equilibrium exchange rates. I showed that countries endowed with cheap labor and a wide range of wages, such as China, also need to neutralize their Dutch disease. I showed that its neutralization occurs mainly through the imposition of a tax on the exports of the commodity originating the disease, because the tax shifts up the supply curve of the commodity in relation to the nominal exchange rate (not to the international price, which remains constant). Such a neutralization policy is strengthened by the creation of a sovereign fund, so that the proceeds of the tax do not imply capital inflows. I showed that the countries that did neutralize it had current account surpluses and, in theory, fiscal surpluses as well. I rejected the distinction between the Dutch disease and the "natural resources curse" - a distinction that allows its advocates to "forget" the overappreciated foreign exchange rate and blame the country's low growth rates on the rent-seeking or corruption that the export tax (usually insufficient) instigates among local politicians and bureaucrats. Although this ethical problem does exist, it must not be used to dismiss the economic problem that lies in the overappreciation.
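To make the neutralization arithmetic concrete, here is a hypothetical illustration (the exchange rates are invented numbers, not from the text): the export tax that shifts the commodity supply curve from the current to the industrial equilibrium is roughly the gap between the two rates as a share of the industrial rate.

# Hypothetical numbers: "current equilibrium" at 2.00 R$/US$ and "industrial
# equilibrium" at 2.60 R$/US$. An export tax t makes commodity producers
# viable only at the industrial rate: e_industrial * (1 - t) = e_current.
e_current, e_industrial = 2.00, 2.60
tax_rate = (e_industrial - e_current) / e_industrial
print(f"required export tax ~ {tax_rate:.1%}")   # ~23.1%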
After writing this article, I stopped saying that economic development requires a "relatively depreciated" foreign exchange rate; instead, what it needs is a competitive exchange rate, i.e., an exchange rate kept at the industrial equilibrium. My book Macroeconomia da Estagnação (2007), translated into English as Developing Brazil - Overcoming the Failure of the Washington Consensus (2009), applied these models to the Brazilian economy.

THE TWO TENDENCIES

But the configuration of a structuralist macroeconomics of development was only completed in the following year, when I defined the two structural tendencies that characterize developing countries: the tendency toward the cyclical overappreciation of the exchange rate and the tendency of wages to grow less than productivity. The two tendencies reduce demand - foreign in the case of the former and domestic in the case of the latter - and consequently reduce investments and savings. On the first tendency I wrote "A tendência à sobreapreciação da taxa de câmbio" [The tendency to the overappreciation of the exchange rate] (2009), while La Découverte published, in French, Globalization and Competition - a book that in 2010 also appeared in English, Portuguese and Spanish. In this book, although not mentioning the constitution of a structuralist development macroeconomics, I for the first time summed up the new ideas. The French and Portuguese editions included a foreword by Robert Boyer in which he mentions that a school of thought is emerging in Brazil. Although domestic demand is fundamental to economic development, I did not invest my time in the discussion of the tendency of wages to grow below the productivity rate, because many other economists, and mainly Celso Furtado, had already discussed the matter sufficiently, and because the book focused on the foreign exchange rate - on the theoretical claim that fast economic development depends crucially on a competitive exchange rate.
In that book I attempted to lay the groundwork for a macroeconomics of development, but it only became clear to me that it was a structuralist macroeconomics in 2009, after the book's publication in France. Earlier that year, José Antonio Ocampo had invited me to write a paper on the new developmentalism for the Handbook of Latin American Economics he was editing with Jaime Ros, and I did write it. Soon afterwards, however, Osvaldo Sunkel asked me to write a paper, again about the new developmentalism, for the Revista de la CEPAL. That was when, in a conversation with Paulo Gala, I realized that the new ideas that had been emerging might stand as a second moment in the structuralist theory of development. The first one covered the 1940s-1960s and became exhausted in the 1970s under misguided criticism from the "theory of dependency" and, later, beginning in the 1980s, under criticism from the prevalent neoclassical orthodoxy. Now, however, a body of thought was emerging that might supplement and update structuralist thinking - not only the Latin American structuralist thinking, but the entire system of thought of development economics, which, as noted by Albert Hirschman (1981), had also fallen into a crisis in the 1970s. According to Osvaldo Sunkel's 2009 invitation, the paper should have been published in issue #100 of the Revista de la CEPAL, but I was about 20 days late delivering it; it was eventually slated for the next issue and, finally, was translated and ready for publication in issue #102. Meanwhile, in Brazil, the October 2010 issue of the Revista de Economia Política published its version in Portuguese, "Structuralist macroeconomics of development". For this formal reason, and despite Osvaldo Sunkel's disagreement, ECLAC's bureaucracy refused publication. I am now (2011) preparing a new paper, "Structuralist macroeconomics and new developmentalism", that will summarize the new ideas in English.
TEN THESES ON NEW DEVELOPMENTALISM In the meantime, and notwithstanding ECLAC's bureaucracy, new developmentalism and structuralist development macroeconomics continued to gain ground. In 2009, José Luís Oreiro and Luiz Fernando de Paula circulated the paper "O novo desenvolvimentismo e a agenda de reformas macroeconômicas para crescimento sustentado com estabilidade de preços e equidade social" [New developmentalism and the agenda of macroeconomic reforms for sustained growth with price stability and social equity], in which they state that "the new macroeconomic model for Brazil should be based on the pillars: a flexible inflation-targeting regime, a fiscal regime based on the generation of government current account surpluses, and foreign exchange rate management, thereby creating the conditions for a lower interest rate and a more competitive foreign exchange rate." In September 2010, at the third international meeting of the Brazilian Keynesian Association, in São Paulo, I presented the basic ideas of the structuralist development macroeconomics. That same year, at the University of Brasília, José Luís Oreiro organized a research group on the "Structuralist macroeconomics of development" and created a blog for this group, with contributions from the already quoted economists plus Carmen Feijó, Frederico Gonzaga, Jennifer Hermann, Marco Flavio Resende, Maria de Lourdes Mollo and Rogério Sobreira. In 2011 I published in the Brazilian Journal of Political Economy "Uma escola de pensamento keynesiano-estruturalista no Brasil?" [A Keynesian-structuralist school of thought in Brazil?], in which I listed the 54 propositions that form structuralist development macroeconomics and new developmentalism. In May 2010, with support from the Ford Foundation, I organized an international workshop in São Paulo on the Ten Theses on New Developmentalism - a clear alternative to the Washington Consensus. Approved and underwritten in the months that followed by a large number of acknowledged economists and political scientists around the world, the document now has its own website, and the Ten Theses are published in various languages to allow other economists and interested citizens to underwrite them. In this way, new developmentalism became an institution. Now, in early 2011, structuralist development macroeconomics is open to additional contributions from Keynesian-structuralist economists who refuse orthodoxy in any shape, because orthodoxy is ever an arrogant refusal of thinking and criticism.
cally and more responsible than their neoliberal counterparts. In that same year, Luiz Fernando de Paula published "Repensando o desenvolvimentismo" [Rethinking developmentalism] (2006) and, the next year, Sicsú, Paula and Michel (2007) expanded upon the introduction of the book they had previously edited: "Por que novo-desenvolvimentismo?" [Why new developmentalism?]. In 2006, my student Paulo Gala concluded an excellent PhD dissertation, Política Cambial e a Macroeconomia do Desenvolvimento [Foreign Exchange Policy and the Macroeconomics of Development]. The next year, another student, Lauro González, presented his doctoral dissertation, Crises Financeiras Recentes: Revisitando as Experiências da América Latina e da Ásia [Recent Financial Crises: Revisiting the Latin American and Asian Experiences] (
4,587.8
2011-09-01T00:00:00.000
[ "Economics" ]
Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification Distinguishing between paradigmatic relations such as synonymy, antonymy and hypernymy is an important prerequisite in a range of NLP applications. In this paper, we explore discourse relations as an alternative set of features to lexico-syntactic patterns. We demonstrate that statistics over discourse relations, collected via explicit discourse markers as proxies, can be utilized as salient indicators for paradigmatic relations in multiple languages, outperforming patterns in terms of recall and F1-score. In addition, we observe that markers and patterns provide complementary information, leading to significant classification improvements when applied in combination. Introduction Paradigmatic relations (such as synonymy, antonymy and hypernymy; cf. Murphy, 2003) are notoriously difficult to distinguish automatically, as first-order co-occurrences of the related words tend to be very similar across the relations. For example, in The boy/girl/person loves/hates the cat, the nominal co-hyponyms boy, girl and their hypernym person as well as the verbal antonyms love and hate occur in identical contexts, respectively. Vector space models, which represent words by frequencies of co-occurring words to enable comparisons in terms of distributional similarity (Schütze, 1992; Turney and Pantel, 2010), hence perform below their potential when inferring the type of relation that holds between two words. This distinction is crucial, however, in a range of tasks: in sentiment analysis, for example, words of the same and opposing polarity need to be distinguished; in textual entailment, systems further need to identify hypernymy because of directional inference requirements. Accordingly, while there is a rich tradition on identifying word pairs of a single paradigmatic relation, there is little work that has addressed the distinction between two or more paradigmatic relations (cf. Section 2 for details). In more general terms, previous approaches to distinguishing between several semantic relations have predominantly relied on manually created knowledge sources, or on lexico-syntactic patterns that can be automatically extracted from text. Each option comes with its own shortcomings: knowledge bases, on the one hand, are typically developed for a single language or domain, meaning that they might not generalize well; word patterns, on the other hand, are noisy and can be sparse for infrequent word pairs. In this paper, we propose to strike a balance between availability and restrictedness by making use of discourse markers. This approach has several advantages: markers are frequently found across genres (Webber, 2009), they exist in many languages (Jucker and Ziv, 1998), and they capture various semantic properties (Hutchinson, 2004). We implement discourse markers within a vector space model that aims to distinguish between the three paradigmatic relations synonymy, antonymy and hypernymy in German and in English, across the three word classes of nouns, verbs and adjectives. We examine the performance of discourse markers as vector space dimensions in isolation and also explore their contribution in combination with lexical patterns. Related Work As mentioned above, there is a rich tradition of research on identifying a single paradigmatic relation.
Work on synonyms includes Edmonds and Hirst (2002), who employed a co-occurrence network and second-order co-occurrence, and Curran (2003), who explored word-based and syntax-based co-occurrence for thesaurus construction. Van der Plas and Tiedemann (2006) compared a standard distributional approach against cross-lingual alignment; Erk and Padó (2008) defined a vector space model to identify synonyms and the substitutability of verbs. Most computational work on hypernyms was performed for nouns, cf. the lexico-syntactic patterns by Hearst (1992) and an extension of the patterns by dependency paths (Snow et al., 2004). Weeds et al. (2004), Lenci and Benotto (2012) and Santus et al. (2014) identified hypernyms in distributional spaces. Computational work on antonyms includes approaches that tested the co-occurrence hypothesis (Charles and Miller, 1989; Fellbaum, 1995), and approaches driven by text understanding efforts and contradiction frameworks (Harabagiu et al., 2006; Mohammad et al., 2008; de Marneffe et al., 2008). Among the few approaches that distinguished between paradigmatic semantic relations, Lin et al. (2003) used patterns and bilingual dictionaries to retrieve distributionally similar words, and relied on clear antonym patterns such as 'either X or Y' in a post-processing step to distinguish synonyms from antonyms. The study by Mohammad et al. (2013) on the identification and ranking of opposites also included synonym/antonym distinction. Yih et al. (2012) developed an LSA approach incorporating a thesaurus to distinguish the same two relations. Chang et al. (2013) extended this approach to induce vector representations that can capture multiple relations. Whereas the above-mentioned approaches rely on additional knowledge sources, Turney (2006) developed a corpus-based approach to model relational similarity, addressing (among other tasks) the distinction between synonyms and antonyms. More recently, Schulte im Walde and Köper (2013) proposed to distinguish between the three relations antonymy, synonymy and hyponymy based on automatically acquired word patterns. Regarding pattern-based approaches to identify and distinguish lexical semantic relations in more general terms, Hearst (1992) was the first to propose lexico-syntactic patterns as empirical pointers towards relation instances, focusing on hyponymy. Girju et al. (2003) applied a single pattern to distinguish pairs of nouns that are in a causal relationship from those that are not, and Girju et al. (2006) extended the work towards part-whole relations, applying a supervised, knowledge-intensive approach. Chklovski and Pantel (2004) were the first to apply pattern-based relation extraction to verbs, distinguishing five non-disjoint relations (similarity, strength, antonymy, enablement, happens-before). Pantel and Pennacchiotti (2006) developed Espresso, a weakly-supervised system that exploits patterns in large-scale web data to distinguish between five noun-noun relations (hypernymy, meronymy, succession, reaction, production). Similarly to Girju et al. (2006), they used generic patterns, but relied on a bootstrapping cycle combined with reliability measures, rather than manual resources. Whereas each of the aforementioned approaches considers only one word class and clearly disjoint categories, we distinguish between paradigmatic relations that can be distributionally very similar, and we propose a unified framework for nouns, verbs and adjectives.
Baseline Model and Data Set The task addressed in this work is to distinguish between synonymy, antonymy and hypernymy. As a starting point, we build on the approach and data set used by Schulte im Walde and Köper (2013, henceforth just S&K). In their work, frequency statistics over automatically acquired co-occurrence patterns were found to be good indicators for the paradigmatic relation that holds between two given words of the same word class. They further experimented with refinements of the vector space model, for example, by only considering patterns of a specific length, weighting by pointwise mutual information and applying thresholds based on frequency and reliability. Baseline Model. We re-implemented the best model from S&K with the same setup: word pairs are represented by vectors, with each entry corresponding to one out of almost 100,000 patterns of lemmatized word forms (e.g., X affect how you Y). Each value is calculated as the log frequency of the corresponding pattern occurring between the word pair in a corpus, based on exact match. For English, we use the ukWaC corpus (Baroni et al., 2009); for German, we rely on the COW corpus instead of deWaC, as it is larger and better balanced (Schäfer and Bildhauer, 2012). Data Set. The evaluation data set by S&K is a collection of target and response words in German that has been collected via Amazon Mechanical Turk. The data contains a balanced amount of instances across word categories and relations, also taking into account corpus frequency, degree of ambiguity and semantic classes. In total, the data set consists of 692 pairs of instances, distributed over three word classes (nouns, verbs, adjectives) and three paradigmatic relations (synonymy, antonymy, hypernymy). Intermediate Evaluation. We compare our reimplementation to the model by S&K using their 80% training and 20% test split, focusing on 2-way classifications involving synonymy. The results, summarized in Table 1 (comparing S&K (2013) and our reimplementation; all numbers in percent), confirm that our reimplementation achieves similar results. Observed differences are probably an effect of the distinct corpora applied to induce patterns and counts. We notice that the performance of both models strongly depends on the affected pair of relations and word category. For example, precision varies in the 2-way classification between synonymy and antonymy from 70.6% to 94.1%. Given the small amount of test data, some of the 80/20 splits might be better suited for the model than others. To avoid resulting bias effects, we perform our final evaluation using 5-fold cross-validation on a merged set of all training and test instances. To illustrate the performance of models in multiple languages, we further conduct experiments on a data set for English relation pairs that has been collected by Giulia Benotto and Alessandro Lenci, following the same methodology as the German collection. The English data set consists of 648 pairs of instances, also distributed over nouns, verbs, adjectives, and covering synonymy, antonymy, hypernymy.
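As a reference point, the baseline representation can be sketched in a few lines. The corpus interface, the pattern inventory and all names below are illustrative assumptions, not the authors' implementation; the add-one inside the logarithm is likewise our own smoothing choice.

```python
import math
from collections import Counter

def pattern_vector(x, y, sentences, patterns):
    """Sketch of the S&K-style baseline: the pair (x, y) is mapped to
    log-frequencies of the inventory patterns observed between x and y,
    using exact match on the intervening tokens."""
    counts = Counter()
    for tokens in sentences:               # lemmatized corpus sentences
        if x in tokens and y in tokens:
            i, j = tokens.index(x), tokens.index(y)
            if i < j:
                infix = " ".join(tokens[i + 1:j])
                if infix in patterns:      # exact match, as in the baseline
                    counts[infix] += 1
    # add-one inside the log keeps single occurrences above zero (our choice)
    return {p: math.log(1 + c) for p, c in counts.items()}

vec = pattern_vector("boy", "person",
                     [["the", "boy", "is", "a", "person"]],
                     {"is a"})             # {'is a': 0.693...}
```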
Markers for Relation Classification The aim of this work is to establish corpus statistics over discourse relations as a salient source of information to distinguish between paradigmatic relations. Table 2 gives examples of discourse relations and their markers (CONTRAST: but, although, rather, . . . ; RESTATEMENT: indeed, specifically, . . . ; INSTANTIATION: (for) example, (for) instance, . . . ). Our approach is motivated by linguistic studies that indicated a connection between discourse relations and lexical relations of words occurring in the respective discourse segments: Murphy et al. (2009) have shown, for example, that antonyms frequently serve as indicators for contrast relations in English and Swedish. More generally, pairs of word tokens have been identified as strong features for classifying discourse relations when no explicit discourse markers are available (Pitler et al., 2009; Biran and McKeown, 2013). Whereas word pairs have frequently been used as features for disambiguating discourse relations, to the best of our knowledge, our approach is novel in that we are the first to apply discourse relations as features for classifying lexical relations. One reason for this might be that discourse relations in general are only available in manually annotated corpora. Previous work has shown, however, that such relations can be classified reliably given the presence of explicit discourse markers. We hence rely on such markers as proxies for discourse relations (for examples, cf. Table 2). Model and Hypothesis We propose a vector space model that represents pairs of words using as features the discourse markers that occur between them. The underlying hypothesis of this model is as follows: if two phrases frequently co-occur with a specific discourse marker, then the discourse relation expressed by the corresponding marker should also indicate the relation between the words in the affected phrases. Following this hypothesis, contrast relations might indicate antonymy, whereas elaborations may indicate synonymy or hyponymy. Although such relations will not hold between every pair of words in two connected discourse segments, we hypothesize that correct instances (of all considered word classes) can be identified based on high relative frequency. In our model, frequency statistics are computed over sentence-internal co-occurrences of word pairs and discourse markers. Since discourse relations are typically directed, we take into consideration whether a word occurs to the left or to the right of the respective marker. Accordingly, the features of our model are special cases of single-word patterns with an arbitrary number of wild card tokens (e.g., the marker feature 'though' corresponds to the pattern "X * though * Y"). Yet, our specific choice of features has several advantages: whereas strict and potentially long patterns can be rare in text, discourse markers such as "however", "for example" and "additionally" are frequently found across genres (Webber, 2009). Although combinations of tokens could also be replaced by wild cards in any automatically acquired pattern, this would generally lead to an exponentially growing feature space. In contrast, the set of discourse markers in our work is fixed: for English, we use 61 markers annotated in the Penn Discourse TreeBank 2.0 (Prasad et al., 2008); for German, we use 155 one-word translations of the English markers, as obtained from an online dictionary. Taking directionality into account, our vector space model consists of 2 × 61 and 2 × 155 features, respectively.
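The feature extraction implied by this hypothesis can be sketched as follows. The marker list shown is a tiny illustrative subset, and single-token matching is a simplification (multiword markers such as "for example" would need span matching):

```python
from collections import Counter

MARKERS = ["but", "although", "however", "indeed", "specifically"]  # subset

def marker_features(x, y, sentences, markers=MARKERS):
    """Directional marker counts for the pair (x, y): a sentence-internal
    co-occurrence increments either the left-of-marker or the
    right-of-marker feature, so each marker yields two dimensions,
    mirroring the 2 x 61 / 2 x 155 feature spaces above."""
    feats = Counter()
    for tokens in sentences:
        if x not in tokens or y not in tokens:
            continue
        i, j = tokens.index(x), tokens.index(y)
        for k, tok in enumerate(tokens):
            if tok in markers:
                if i < k < j:
                    feats[(tok, "X..m..Y")] += 1   # x left of the marker
                elif j < k < i:
                    feats[(tok, "Y..m..X")] += 1   # x right of the marker
    return feats
```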
Development Set and Hyperparameters We select the hyperparameters of our model using an independent development set, which we extract from the lexical resource GermaNet (Hamp and Feldweg, 1997). For each considered word category, we extract instances of synonymy, antonymy and hypernymy. In total, 1502 instances are identified, with 64 of them overlapping with the evaluation data set described in Section 3. Note though that the development set is not used for evaluation but only to select the following hyperparameters. We experimented with different vector values (absolute frequency, log frequency, pointwise mutual information (PMI)), distance measures (cosine, euclidean) and normalization schemes. In contrast to S&K, who did not observe any improvements using PMI, we found it to perform best, combined with euclidean distance and no additional normalization. This finding might be an immediate effect of discourse markers being generally more frequent than strict word patterns, which also leads to more reliable PMI values. Evaluation In our evaluation, we assess the performance of the marker-based model and demonstrate the benefits of incorporating discourse markers into a pattern-based model, which we apply as a baseline. We evaluate on several data sets: the collection of target-response pairs in German from previous work, and a similar data set that was collected for English target words (cf. Section 3); for comparison reasons, we also apply our models to the balanced data set of related and unrelated noun pairs by Yap and Baldwin (2009). We perform 3-way and 2-way relation classification experiments, using 5-fold cross-validation and a nearest centroid classifier (as applied by S&K). Results. The 3-way classification results of the baseline and our marker-based model are summarized in Table 3, with best results for each setting marked in bold. On the German data set, our model always outperforms a random baseline (33% F1-score). The results on the English data set are overall a bit lower, possibly due to corpus size. In almost all classification tasks, our marker-based model achieves a higher recall and F1-score than the pattern-based approach. The precision results of the marker-based model are overall below the pattern-based model. This drop in performance does not come as a surprise though, considering that the model only makes use of 122 and 310 features, in comparison to tens of thousands of features in the pattern approach. A randomized significance test over classified instances (cf. Yeh, 2000) revealed that only two differences in results are significant. We hypothesize that one reason for this outcome might be that both models cover complementary sets of instances. To verify this hypothesis, we apply a combined model, which is based on a weighted linear combination of distances computed by the two individual models. As displayed in Table 3, the combined model achieves gains in recall and F1-score, leading to the best 3-way classification results. All gains in recall are significant, confirming that the single models indeed contribute complementary information. For example, only the pattern-based model classifies "intentional"-"accidental" as antonyms, and only the marker-based model predicts the correct relation for "double"-"multiple" (hypernymy). The combined model classifies both pairs correctly. A final experiment is performed on the data set by Yap and Baldwin (2009) to see whether our models can also distinguish word pairs of individual relations from unrelated pairs of words. The results, listed in Table 5, show that the marker-based model cannot perform this task as well as the pattern-based model. The combined model, however, outperforms both individual models in 2 out of 3 cases.
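A compact sketch of the classification pipeline follows: PMI reweighting of a count matrix, euclidean distances to class centroids, and the weighted linear combination of the two models' distances. Zeroing non-finite PMI cells and the alpha value are our own illustrative choices; the paper does not specify them.

```python
import numpy as np

def pmi(M):
    """PMI reweighting of a (pairs x features) count matrix M."""
    total = M.sum()
    p_row = M.sum(axis=1, keepdims=True) / total
    p_col = M.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        W = np.log((M / total) / (p_row * p_col))
    W[~np.isfinite(W)] = 0.0      # zero never-observed cells (our assumption)
    return W

def centroid_distances(X_train, y_train, X_test):
    """Euclidean distance of each test vector to each class centroid; the
    nearest centroid classifier predicts the argmin per row."""
    classes = sorted(set(y_train))
    y = np.asarray(y_train)
    centroids = np.stack([X_train[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes, d             # labels: [classes[k] for k in d.argmin(axis=1)]

def combined(d_patterns, d_markers, alpha=0.5):
    # weighted linear combination of the two models' distances;
    # alpha = 0.5 is purely illustrative, not the tuned weight
    return alpha * d_patterns + (1.0 - alpha) * d_markers
```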
Despite their simplicity, our models achieve results close to the F1-scores reported by Yap and Baldwin (2009), who employed syntactic pre-processing and an SVM-based classifier, and experimented with different corpora. Conclusions In this paper, we proposed to use discourse markers as indicators for paradigmatic relations between words and demonstrated that a small set of such markers can achieve higher recall than a pattern-based model with tens of thousands of features. Combining patterns and markers can further improve results, leading to significant gains in recall and F1-score. As our new model only relies on a raw corpus and a fixed list of discourse markers, it can easily be extended to other languages.
3,731
2014-06-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Tackling Area Coverage Problems in a Reconfigurable Floor Cleaning Robot Based on Polyomino Tiling Theory Whilst Polyomino tiling theory has been extensively studied as a branch of research in mathematics, its application has been largely confined to the multimedia, graphics and gaming domains. In this paper, we present a novel application of Tromino tiling theory, a class of Polyomino with three cells, in the context of a reconfigurable floor cleaning robot, hTromo. The developed robot platform is able to automatically generate the global tiling set required to cover a defined space by leveraging Tromino tiling theory. Specifically, we validated the application of five Tromino tiling theorems with our hTromo robot. The experiments performed clearly demonstrate the efficacy of the proposed approach, resulting in very high levels of area coverage performance in all considered experimental cases. This paper also presents the system architecture of our hTromo robot and a detailed description of the five tiling theorems applied in this study. Introduction Floor cleaning, be it commercial or domestic, is commonly considered to be boring, repetitive, and tiresome. Over the course of the last two decades, considerable work has been conducted in seeking to develop automated cleaning robots. Such work has led to a new generation of robot cleaners, and subsequent improvements in quality of life and personal productivity. Robotic cleaners are predicted to become commonplace in the years to come, with estimated sales of such devices forecast to reach $2.5 billion by 2020 [1]. Currently, market leaders include Samsung, Neato, iRobot, and Dyson, with their floor-cleaning devices commonly adopting a circular, or half-circular, shape. Such devices use a network of internal sensors to navigate a specific floorspace independently. A significant amount of robotics research has studied cleaning robots and their human interaction, functionality, design, independence, benchmarking, and mechanics. Such literature has led to the production of multiple new robotic devices. Gao et al., for example, proposed an innovative robot cleaner for busy locations, such as transport hubs, that utilized Swedish wheel technology to navigate such floorspaces effectively [2]. Another example of such innovation can be seen in the work of Jason Yan, who created a wheelset mechanism to allow robotic devices to navigate irregular floorspaces, safe from grounding or flooring damage [3]. In terms of autonomous operation, the work presented in [4] outlined a Simultaneous Localization And Mapping (SLAM)-based methodology for floor cleaning, blending magnetism and odometry to navigate set areas and ensure optimum coverage independently. Another work [5] proposes a neural-network-based architecture for robotic floor cleaning, allowing autonomous devices to plan routes and circumnavigate obstacles in unpredictable surroundings. With a concentration on the relationship between human users and their robotic devices, Fink et al.
[6] conducted a six-month ethnographic investigation into aspects of social activity, user perception, and usage. The work presented in [7] proposes a specific gesture-reliant technology, utilizing ceiling-mounted cameras, whereby users could interact with, and operate, a cleaning device. In another work, Panagiota Tsarouchi et al. conducted a study on a Human-Robot Collaboration (HRC) framework for the execution of collaborative tasks in a hybrid assembly cell [8]. In this study, the proposed framework facilitates the placement of sequential tasks assigned to robots and humans in a distinct workspace. Sotiris Makris et al. proposed similar human-robot collaborative work, wherein they demonstrated flexible assembly tasks between a dual-arm robot and humans in an automotive industry case [9]. Much study has also been conducted into cleaning robots in a multi-robot environment. Luo and Yang, for example, created a neural-network approach for a community of cleaning robots [10]. Further to this, Janchiv et al. proposed a cellular decomposition approach, employing two internal cleaning robots for optimum surface-area coverage [11]. In terms of benchmarking for floor coverage, critical performance indicators of independent movement, noise, and dust collection were highlighted by Rhim et al. [12]. Further work, by Wong et al., proposed a computer vision methodology for benchmarking a cleaning robot's floor coverage [13]. Despite their advantages being repeatedly underlined by existing literature, traditional technologies in the cleaning robot field are still limited, and their performance is often unreliable. Coverage presents a common issue, owing to the fixed morphologies of existing devices. There is a significant gap in the market for cleaning robots with reconfigurable morphologies, reacting to their surroundings to provide optimum floor coverage. Over the past thirty years, significant interest has been paid to the field of reconfigurable robotics. Robotic devices are commonly split into three primary groups [14]: inter-reconfigurable, nested reconfigurable, and intra-reconfigurable. Devices in the intra-reconfigurable group can adapt their own morphologies. Scorpio, for example, was a robot developed by Tan et al.
[15] that, with an intra-reconfigurable design, could utilize separate morphologies by climbing, crawling, or rolling across surfaces. Further examples of such intra-reconfigurable devices are an anthropomorphic robot hand developed in [16] that altered its palm to create different topologies, and a reconfigurable under-actuated legged platform [17] that could create unique movement gaits. Jason Spiliotopoulos et al. presented their version of an intra-reconfigurable robot, developing a high-speed multi-fingered reconfigurable gripper; in this work, they performed preliminary grasping experiments highlighting its potential in robotic handling applications [18]. Generally, inter-reconfigurable devices combine various technologies to create an overarching morphology, combining and separating to take on alternative ones. Sambot [19] provided an example of this, with its ability to swarm with, and separate from, other robots in order to adopt new morphologies, along with a sub-aqueous platform that can dissect itself into separate modules and move in an eel-like fashion through water [20]. Nested reconfigurable robots involve platforms that are capable of both intra-reconfiguration and inter-reconfiguration. Tan et al. presented a nested reconfigurable robot, Hinged-Tetro [21], that is capable of transforming between morphologies as a single unit, as well as assembling and disassembling with a set of fellow robots to generate global morphologies. In our earlier work, we put forward and validated a Tetris-inspired intra-reconfigurable floor cleaning robot, hTetro, capable of changing its morphology to any of the seven one-sided tetromino forms towards maximizing floor coverage area [22]. We also benchmarked the area coverage performance of our hTetro robot against a commercially available fixed-morphology robot. The experimental design involved a human user reactively switching the morphology of the hTetro robot in relation to the perceived set of obstacles in the environment towards maximizing coverage performance, with no global planning involved. In this paper, we significantly extend our earlier work by leveraging Tromino tiling theory to automatically generate a global tiling set that enables our hTromo robot to cover a given area. In particular, we demonstrate the application of five Tromino tiling theorems in the context of an area coverage task with the hTromo robot. Polyomino tiling theory concerns the use of pre-determined polyomino types in order to cover a surface, and has been the subject of much academic study since the 1950s. Craig S. Kaplan [23] used polyomino tiling theory to develop a mathematical and algorithmic methodology for computer graphics software, and Ostromoukhov et al. [24] used it to propose a method for fast hierarchical importance sampling with blue noise properties. This development was then applied to high-quality graphical video, providing rapid sampling and greater visual quality. A similar algorithm was applied by Takefuji and Lee to tile polyominoes [25], with it being subsequently verified as a method for the insertion of components or cells in Very Large Scale Integration (VLSI) design, creating printed circuitry and tackling 2D and 3D packaging challenges.
The employment of polyomino tiling theory in gaming software has been widely detailed in the literature. In [26], for example, Jho and Lee created a new polyomino re-tiling system whereby a combination of polyomino pieces was interchanged with another set. The suggested algorithm was applied to a puzzle containing various polyomino copies; by using this algorithm, new game stages could be created without the need for added system memory. Further examples of its employment are shown by the authors of [27], creating a unique 3D tiling method for a 3D puzzle, and a photominoes synthesizer, detailed in study [28], which applied digital images to the creation of polyomino pieces for a jigsaw. Although numerous research works have addressed the development and application of polyomino tiling theory, they are largely limited to the gaming and graphics domains. Moreover, none of the previous works involving tiling theory has applied it to a robotic system for solving the area coverage problem, which opens much opportunity for research and development. In this paper, we present a novel application of Tromino tiling theory, a class of Polyomino with three cells, in the context of a reconfigurable floor cleaning robot, hTromo. The developed robot platform is able to automatically generate the global tiling set required to cover a defined space by leveraging Tromino tiling theory. The main challenges in the proposed approach include the design of the reconfiguring mechanism, the inclusion of cleaning features, and the non-trivial process of implementing theoretical Tromino tiling designs, generated analytically, in physical mechanisms. All these aspects are detailed in this paper, concluding with experimental results using the prototype hTromo robot that validate the proposed approach. The application of Tromino tiling theory herein presented is a critical effort towards designing a self-reconfigurable robot that is capable of autonomously generating a global tiling set for any given space, identifying associated local and global optimal trajectories, and generating appropriate motor primitives based on inverse kinematics and dynamics models. The rest of this article is organized as follows: Section "Polyomino Tiling Theory" introduces the concept of polyominoes and the specific Tromino tiling theorems implemented in this paper. Section "hTromo: Robot Architecture" presents a discussion on the realization of a Tromino-inspired floor cleaning robot; this section covers details of the core component modules of the developed robot, namely the reconfigurable base, mobility unit, cleaning module and Android application interface. Section "Experiments and Results" details the experimental design involving our hTromo robot and the application of Tromino tiling theory, the test setup, and an analysis of the results. Finally, the Section "Conclusion" concludes this study and discusses future work.
Polyomino Tiling Theory Polyominoes are plane geometrical structures formed by the edgewise coupling of congruent squares [29]. Based on spatial orientation, geometrical transformation and chirality, each polyomino can be categorized into free polyominoes, one-sided polyominoes, and fixed polyominoes. For instance, the domino, formed by the combination of two congruent squares, has a single one-sided form, a single free form and two fixed forms as its subsets. Correspondingly, trominoes (3-ominoes) exist as two free, two one-sided, and six fixed trominoes [29]. The tetromino, which contains four constituent squares, can form five free, seven one-sided, and nineteen fixed tetrominoes. In this paper, we used a Tromino-inspired reconfigurable robot, hTromo, capable of switching between one of the three forms. Polyominoes Tiling Theory The polyomino tiling theory deals with the problem of partitioning or filling a geometrical region using identical or multiple sub-regions. The literature offers numerous works that discuss tiling theorems, with proofs, for distinct polyomino sets. With the Tromino forming the inspiration for our hTromo robot, this paper presents our first attempt at applying Tromino tiling theory to the coverage problem for a floor cleaning robot. Specifically, we apply five theorems proposed in [30][31][32]. Theorem 1. An a × b rectangle can be tiled with L- and I-trominoes if and only if the area of that rectangle is divisible by 3. Figure 1 shows tiles τ2 and τ4, right-oriented L-tromino pieces, and tiles τ1 and τ3, left-oriented L-tromino pieces. In the experiments performed, we used a pre-defined testbed space that was either a complete rectangle or a modified rectangle, depending on the theorem considered, in which the robot was able to achieve complete coverage. Figure 2 presents a sample case for Theorem 1 involving tiling using I- and L-tromino configurations.
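The decision side of Theorem 1 reduces to a divisibility test. A minimal sketch, assuming integer cell counts a and b for the discretized floor area:

```python
def tileable_L_and_I(a: int, b: int) -> bool:
    # Theorem 1: an a x b rectangle can be tiled with L- and I-trominoes
    # if and only if a * b is divisible by 3 (3 being prime, this means
    # at least one side is itself divisible by 3).
    return (a * b) % 3 == 0

assert tileable_L_and_I(3, 4) and tileable_L_and_I(2, 3)
assert not tileable_L_and_I(4, 4)   # area 16 is not a multiple of 3
```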
'T' is a set of four L-trominoes. Since the tromino consists of three unit squares, a rectangle can only be covered by tromino pieces if its area is a multiple of 3. This notion leads to the formulation of Condition 2 of Theorem 2. Let a and b be the dimensions of the rectangle to be tiled. Since the area of the rectangle has to be a multiple of 3, (a × b) = 3n, where n ∈ {1, 2, 3, …}. This implies that either a or b must be divisible by 3. However, according to Condition 2, a cannot be 3. For example, rectangles with dimensions 6 × 5, 4 × 9 and 12 × 6 can be tiled using the set of 'T' trominoes. Condition 1 of Theorem 2 excludes certain rectangles that cannot be tiled using 'T' trominoes: specifically, it is impossible to tile rectangles of dimensions 3 × 5, 3 × 7 and 3 × 9 using any arrangement of 'T' trominoes. Lemmas 1 and 2, detailed below, validate Theorem 2. Lemma 1. Let a = 3 and b ∈ {2, 4, 6}; then a 3 × b rectangle can be tiled using the arrangement of 'T' trominoes shown in Figure 3a. Hence the smallest rectangle that satisfies Condition 1 of Theorem 2 is 3 × 2. Let b > 6 be even and c ∈ {2, 4, 6}; then b can be written as 2n + c for a positive integer n, which allows a 3 × b rectangle to be split into n (3 × 2) rectangles and one (3 × c) rectangle, as in Figure 3b. This implies that if b ≥ 2 is even, then a 3 × b rectangle can be tiled using a set of 'T' trominoes. Lemma 2. According to Condition 2 of Theorem 2, the smallest rectangle that can be tiled using 'T' trominoes is 2 × 3. Let a = 4; then, according to Condition 2, the possible b values must be divisible by 3. Assume the rectangle has dimension 4 × 6 (Figure 4a); then it can be decomposed into four (2 × 3) rectangles. Hence, if a rectangle has dimensions with a > 3 and b a multiple of 3, it can be decomposed into n (2 × 3) rectangles, as in Figure 4b, each of which is tileable with 'T' trominoes. However, rectangles with dimensions 5 × 3, 7 × 3 and 9 × 3 cannot be tiled using any arrangement of 'T' trominoes.
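Theorem 2 itself is quoted here only through its two conditions and examples; the predicate below is a sketch of the decision rule they imply (area divisible by 3, both sides at least 2, and no side of length 3 paired with an odd side), checked against the cases given in the text:

```python
def tileable_T(a: int, b: int) -> bool:
    """Decision rule implied by the conditions of Theorem 2 (a sketch):
    an a x b rectangle is tileable by the 'T' set of L-trominoes iff its
    area is divisible by 3, both sides are at least 2, and a side of
    length 3 is not paired with an odd side."""
    a, b = sorted((a, b))
    if a < 2 or (a * b) % 3 != 0:
        return False
    return not (a == 3 and b % 2 == 1)

# Cases quoted in the text:
assert all(tileable_T(*r) for r in [(6, 5), (4, 9), (12, 6), (3, 2), (3, 4)])
assert not any(tileable_T(*r) for r in [(3, 5), (3, 7), (3, 9)])
```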
Theorem 3. A deficient n × n board can be tiled with the set of 'T' trominoes if, and only if, either: (1) n is odd, n > 5, and n² − 1 is divisible by 3; or (2) n is even, n > 1, and n² − 1 is divisible by 3. With Theorem 3 we again utilize the set of 'T' tromino pieces in order to tile the given area. The term "deficient" in the theorem describes a square grid with a single cell truncated. According to Theorem 3, if the side of the square is odd, it must be greater than 5, and the square of the side minus 1 must be divisible by 3, for the space to be tileable using 'T' trominoes. To tile an even-sided deficient square with 'T' trominoes, the side must be greater than 1 and, again, the square of the side minus 1 must be divisible by 3. These observations correspond to Conditions 1 and 2 of Theorem 3. Lemmas 3 and 4 validate Theorem 3 by proving Conditions 1 and 2. Lemma 3. Let M(a, b) be the modified rectangle, segmented into an (i, j) grid. Consider an M(7 × 7) grid with the single cell (1, 1) removed (Figure 5a, left). As shown in Figure 5a (right), the considered rectangle can be decomposed into a 2 × 3 rectangle, a 3 × 2 rectangle, a 5 × 5 square, and three separate L-trominoes. According to Lemma 1, the 2 × 3 and 3 × 2 rectangles are easily tileable with the 'T' tromino set. The 5 × 5 square is tileable with 'T' trominoes only if a corner cell is removed, as shown in Figure 5b. Hence, the results show that an M(7 × 7) rectangle with a single cell removed is tileable with 'T' trominoes. It is also possible to tile an M(7 × 7) rectangle when the (1, 4), (2, 3), (2, 4), or (4, 4) cell is deleted. Lemma 4. Similarly, with the help of Lemmas 1 and 2, we know that 6 × 3 and 3 × 6 rectangles can also be tiled with 'T' trominoes. For the 4 × 4 case, it is proven that a deficient 2^k × 2^k square with k ≥ 1 can be tiled with 'T' trominoes, as shown in Figure 6a. Hence, it is proved that a deficient even-sided square can be tiled using the set of 'T' tromino pieces.
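Theorem 3 is likewise a pure arithmetic test on the side length; a sketch (note that n² − 1 is divisible by 3 exactly when n is not a multiple of 3):

```python
def deficient_square_tileable(n: int) -> bool:
    # Theorem 3: a deficient n x n board (one cell removed) is tileable by
    # 'T' trominoes iff n^2 - 1 is divisible by 3 and n > 5 for odd n,
    # or n > 1 for even n.
    if (n * n - 1) % 3 != 0:
        return False
    return n > 5 if n % 2 == 1 else n > 1

assert deficient_square_tileable(7) and deficient_square_tileable(4)
assert not deficient_square_tileable(5)   # odd but not greater than 5
assert not deficient_square_tileable(6)   # 35 is not divisible by 3
```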
Theorem 4. An Aztec diamond AZ(n) can be tiled using the 'T' set of tromino tiling pieces if, and only if, n(n + 1) ≡ 0 (mod 3), where n is a positive integer. Theorem 5. A deficient Aztec diamond AZ(k) can be tiled using the 'T' set of tromino tiling pieces if, and only if, k = 3n − 2, where n is a positive integer. An Aztec diamond of order n is the region obtained from staircase shapes of height n by gluing them together along their straight edges. According to Theorem 4, a non-deficient Aztec diamond can be tiled using the 'T' set of tromino pieces if and only if n(n + 1) ≡ 0 (mod 3); that is, the order n must make n(n + 1) divisible by 3. Similarly, by Theorem 5, a deficient Aztec diamond (with one square removed) can be tiled using the 'T' tromino set only if its order equals 3n − 2 for some integer n. Lemmas 5 and 6 below support the conditions of Theorems 4 and 5. Lemma 5. Note that the only values for which n(n + 1) ≡ 0 (mod 3) holds are n = 3k or n = 3k − 1 for some positive integer k. Thus, the statement is equivalent to saying that for all positive integers k there is a tiling for AZ(3k) and AZ(3k − 1), as shown in Figure 7, but there is no tiling for AZ(3k − 2). The tiling of the Aztec diamond can be achieved by tiling the edges of the considered space. We used the stairs concept in order to tile the edges of the Aztec diamond: a stair is a polyomino made up of tromino pieces whose 180° rotations are connected as steps, as shown in Figure 8. The height of the stair was computed under the formulation 3k + 2, for a positive integer k, and is equal to the order of the Aztec diamond. If the order of the Aztec diamond is n ≤ 4, then AZ(n) can be tiled using 'T' trominoes by tiling partial or half stairs, as shown in Figure 7 (top right and left). If the order is n ≥ 5, then AZ(n) can be tiled using a k-stair of 'T' tromino pieces. The image argument for order n = 5 is shown in Figure 7 (bottom), where the green tromino pieces tile the edges using a 1-stair.
Lemma 6. To tile AZ(3k − 2) with one defect, a fringe is used, as shown in Figure 9 (left). It is easy to check that if a fringe has exactly one defect, then it can be covered with 'T' trominoes; in particular, a possible rectangle with a fringe that can be tiled by 'T' trominoes is a 2 × 2 square. As in Lemma 5, the stair patterns were used to tile the edges of the deficient Aztec diamond. The space where the fringe is placed is considered as a 2 × 2 square plot and can be tiled using 'T' trominoes (Figure 9, left); the remaining space of the deficient Aztec diamond was tiled similarly to Lemma 5. In another approach, the tiling pattern of Figure 8 was used, wherein the k-stairs were laid above and below the fringe in order to tile the Aztec diamond with 'T' trominoes, as shown in Figure 9 (right).
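The Aztec diamond conditions reduce to residues modulo 3, which makes the complementarity of Theorems 4 and 5 easy to see; a sketch:

```python
def aztec_tileable(n: int) -> bool:
    # Theorem 4: AZ(n) is tileable by 'T' trominoes iff n(n + 1) ≡ 0 (mod 3),
    # i.e. iff n = 3k or n = 3k - 1 for some positive integer k.
    return (n * (n + 1)) % 3 == 0

def deficient_aztec_tileable(k: int) -> bool:
    # Theorem 5: a deficient AZ(k) (one square removed) is tileable iff
    # k = 3n - 2 for some positive integer n, i.e. k ≡ 1 (mod 3).
    return k % 3 == 1

# The two theorems partition the orders: AZ(n) itself is tileable when
# n ≡ 0 or 2 (mod 3); exactly the remaining orders become tileable once
# one cell is removed.
assert all(aztec_tileable(n) for n in (2, 3, 5, 6))
assert not aztec_tileable(4) and deficient_aztec_tileable(4)
```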
hTromo: Robot Architecture The experiments presented in this paper involves a novel area coverage technique with Tromino tiling theory as a basis.The proposed approach was validated on the hTromo; a Tromino inspired reconfigurable floor cleaning robot that was developed based on the theory of "hinged dissection of polyominoes".Hinged dissection is a geometric analysis wherein a planar structure dissected into finite pieces connected by "hinged" points, such that the formation of one structure to another can be accomplished by continuously swinging the hinged points without breaking the chain [33].Several studies targeting the concept of hinged dissection have been reported.Pertinent efforts in this field have included the remodeling of an equilateral triangle into a polygon [34], combining several rigid duplicates of the same polyhedron [35], the creation of unique patterns through rearrangement of shapes from one to another [36], and 3D hinged dissection being used for the formation of 3D polyhedra.In robotics, the work presented in [21] studies the hinged dissection principle, with a view to creating a nested re-configurable robot module named 'hinged-tetro', and to demonstrating that Left Left Right (LLR) or Left Left Left (LLL) hinged dissection applies to the creation of all one-sided tetromino forms.The LLR hinged dissection method was utilized in the creation of the hTetro cleaning device [22], in order to achieve the transformational capability.Since hTromo robot only consists of three blocks and requires only two hinged points; we utilized Left Left (LL) hinged dissection configuration to achieve the shape-shifting ability outlined in Table 1. hTromo: Robot Architecture The experiments presented in this paper involves a novel area coverage technique with Tromino tiling theory as a basis.The proposed approach was validated on the hTromo; a Tromino inspired reconfigurable floor cleaning robot that was developed based on the theory of "hinged dissection of polyominoes".Hinged dissection is a geometric analysis wherein a planar structure dissected into finite pieces connected by "hinged" points, such that the formation of one structure to another can be accomplished by continuously swinging the hinged points without breaking the chain [33].Several studies targeting the concept of hinged dissection have been reported.Pertinent efforts in this field have included the remodeling of an equilateral triangle into a polygon [34], combining several rigid duplicates of the same polyhedron [35], the creation of unique patterns through rearrangement of shapes from one to another [36], and 3D hinged dissection being used for the formation of 3D polyhedra.In robotics, the work presented in [21] studies the hinged dissection principle, with a view to creating a nested re-configurable robot module named 'hinged-tetro', and to demonstrating that Left Left Right (LLR) or Left Left Left (LLL) hinged dissection applies to the creation of all one-sided tetromino forms.The LLR hinged dissection method was utilized in the creation of the hTetro cleaning device [22], in order to achieve the transformational capability.Since hTromo robot only consists of three blocks and requires only two hinged points; we utilized Left Left (LL) hinged dissection configuration to achieve the shape-shifting ability outlined in Table 1. 
Mechanism Design

The hTromo robot comprises three squares and two (LL) hinged points to facilitate transformation. Figure 10 shows the intermediate form of the hTromo robot while transforming from one configuration to another. Figure 11 shows the detailed component list for the device. Block 1 contains the requisite components for movement, block 2 the electronic peripherals, and block 3 the cleaning components and functionality. A single hTromo block measures 140 × 140 × 75 mm, with 4 mm thick honeycomb walls formed from PLA, providing tensile strength at minimal weight. The device houses six DC motors: four mounted on block 1 provide fundamental mobility, and two further DC motors sit on block 3 to smooth the robot's motion. For ease of navigation within an area, the robot has omnidirectional movement capabilities. Furthermore, caster wheels were attached to blocks 2 and 3 to maintain the position of those blocks. Block 3 also houses a tailor-made vacuum module for cleaning and the collection of dirt; the suction chamber and duct were specifically designed to minimize loss of suction and dust spillage during operation. For the creation of the three available one-sided Tromino shapes, the robot houses two smart servos that sit at the hinged points. Both servos are attached to, and anchored on, block 2, driving blocks 1 and 3. Every motor holds its position with its stall torque, allowing it to continually support the robot's morphology for the duration of its operation.
For low-level control, block 2 houses an Arduino Mega microcontroller to manage movement and the transformation gait. Action commands are sent by a Raspberry Pi, also mounted on block 2, with the microcontroller executing these commands and sending Pulse Width Modulation (PWM) motor primitives to the driver unit. The integrated electronics are powered by a LiPo battery providing 7.4 V through a toggle switch. In block 2, a DC step-down converter maintains a 5 V input voltage for the Raspberry Pi. The vacuum module is operated through the activation of a relay switch. User commands reach the Raspberry Pi over a wireless Bluetooth interface. Figure 12 shows an intuitive Android app that was developed for the control of the hTromo robot. To maneuver and transform the robot, and operate its cleaning functions, users select commands from a list of pre-set options. The app provides arrow buttons for directional commands, seven Tetris buttons for shape reconfiguration, pause and play buttons to freeze or commence actions, and on and off buttons for vacuuming operations.
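To make this command pipeline concrete, the following is a minimal sketch of the kind of relay script the Raspberry Pi could run: a pre-set app command is translated into a primitive and forwarded to the Arduino over a serial link. The port name, baud rate, and command vocabulary are illustrative assumptions, not the actual hTromo firmware protocol.

```python
# Hypothetical Raspberry Pi relay: forwards app commands to the Arduino Mega.
# Port name, baud rate, and the command vocabulary are illustrative assumptions.
import serial

COMMANDS = {
    "FWD": b"M:F\n",      # drive forward
    "BACK": b"M:B\n",     # drive backward
    "I2L": b"T:IL\n",     # transform I-tromino -> L-tromino
    "VAC_ON": b"V:1\n",   # switch vacuum relay on
    "VAC_OFF": b"V:0\n",  # switch vacuum relay off
}

def relay(user_command: str, port: str = "/dev/ttyACM0") -> None:
    """Translate a pre-set app command into a motor/servo primitive."""
    primitive = COMMANDS.get(user_command)
    if primitive is None:
        raise ValueError(f"unknown command: {user_command}")
    with serial.Serial(port, 115200, timeout=1) as link:
        link.write(primitive)   # the Arduino parses this and emits PWM
        ack = link.readline()   # optional acknowledgement from the firmware
        print(ack.decode(errors="replace").strip())

if __name__ == "__main__":
    relay("FWD")
```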
Experiments and Results

In this paper, we present the application of Tromino tiling theory, a class of polyominoes with three cells, in the context of a reconfigurable floor cleaning robot, hTromo. The developed robot platform is able to automatically generate the global tiling set required to cover a defined space while leveraging Tromino tiling theory. There are five sets of theorems that this work sought to validate, as denoted in Section 2. The first set of experiments validated the application of Theorem 1 using a rectangular surface of 140 × 126 cm, split into 10 × 9 squares. Figure 13a presents the universal tiling set that was auto-generated by our path planning algorithm to cover the given area using only L- and T-trominoes. In order to validate the application of Theorem 2, Lemma 1, a rectangular area measuring 126 cm × 112 cm was utilized and further sub-divided into a 9 × 8 square grid. Figure 13b illustrates the corresponding universal tiling set auto-generated by our path planning algorithm, which tiles the second test area with only T-set trominoes. The third set of experiments validates Theorem 2, Lemma 2. This was done in a rectangular area of 154 cm × 126 cm, split into an 11 × 9 square grid; Figure 13c shows the corresponding tiling set generated based on Theorem 2, Lemma 2. In the fourth set of experiments, obstacles were inserted within the test area in order to modify the area based on Theorem 3, Lemma 3. We utilized a square area of 154 cm × 154 cm, subdivided into an 11 × 11 square grid. According to the assertion of Lemma 3, this modified area can be tiled using a 'T' set of tromino pieces. Figure 13d illustrates the universal tiling set that was auto-generated based on Theorem 3, Lemma 3. The fifth set of experiments focused on the validation of Theorem 3, Lemma 4. The test was performed within a square area of 140 cm × 140 cm, further subdivided into a 10 × 10 square grid. To meet the requirements of Lemma 4 towards realizing a modified rectangle, obstacles were inserted into the middle of the test area. The global tiling set that was auto-generated based on Lemma 4 can be seen in Figure 13e. For Lemma 5 of Theorem 4, we used a square plot as a test arena with a dimension of 168 cm × 168 cm and segmented it into a 12 × 12 square grid. Since the concerned theorem deals with the Aztec diamond space, we modified the defined area into a 6th-order diamond space by placing obstacles. Figure 13f shows the tiling set generated with 'T' trominoes according to Lemma 5; the orange shaded areas are filled with obstacles. Similarly, for Lemma 6, we used a 112 cm × 112 cm square plot, segmented into an 8 × 8 square grid. The considered area was filled with obstacles in order to convert the square area into a 4th-order Aztec diamond, and we placed a separate obstacle in the (4,4) cell to modify the area into a deficient diamond. Figure 13g shows the tiling set generated with 'T' trominoes according to Lemma 6.
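The paper does not spell out the internals of the tiling-set generator. As a flavour of how tromino tilings can be constructed programmatically, the classic divide-and-conquer construction that covers a 2^n × 2^n board with one missing cell using L-trominoes is sketched below; this is a well-known textbook construction, not the authors' path planning algorithm, and it handles only this special board shape.

```python
# Classic divide-and-conquer tiling of a deficient 2^n x 2^n board with
# L-trominoes. Illustrative only: this is not the hTromo path planner,
# which additionally handles rectangles, 'T' sets, and Aztec diamond regions.
def tile_deficient_board(n, hole):
    size = 2 ** n
    board = [[0] * size for _ in range(size)]
    board[hole[0]][hole[1]] = -1           # -1 marks the missing cell
    counter = [0]                          # tromino ids, shared via closure

    def solve(top, left, size, hr, hc):
        if size == 1:
            return
        counter[0] += 1
        tid = counter[0]
        half = size // 2
        # The cell of each quadrant that is adjacent to the centre of the
        # current square; the three of them in hole-free quadrants form one L.
        centres = [(top + half - 1, left + half - 1),  # upper-left quadrant
                   (top + half - 1, left + half),      # upper-right quadrant
                   (top + half,     left + half - 1),  # lower-left quadrant
                   (top + half,     left + half)]      # lower-right quadrant
        quads = [(top, left), (top, left + half),
                 (top + half, left), (top + half, left + half)]
        for q, (qr, qc) in enumerate(quads):
            if qr <= hr < qr + half and qc <= hc < qc + half:
                solve(qr, qc, half, hr, hc)            # quadrant with the hole
            else:
                r, c = centres[q]
                board[r][c] = tid                      # place one leg of the L
                solve(qr, qc, half, r, c)              # treat it as a new hole

    solve(0, 0, size, hole[0], hole[1])
    return board

# Example: an 8 x 8 board (n = 3) with the cell (4, 4) removed, echoing the
# deficient-diamond obstacle placement used for Lemma 6.
for row in tile_deficient_board(3, (4, 4)):
    print(" ".join(f"{cell:3d}" for cell in row))
```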
Experimental Testbed

When establishing the test environment, a pre-determined floor area was split into squares congruent with the hTromo robot blocks, and an overhead support frame was erected to accommodate a camera. Image data captured by this camera was post-processed to evaluate the percentage of the test area covered by hTromo during each experiment. The complete area used for all experiments measured 196 × 196 cm. The limits of the specified test area were adapted using an extendable metal framework, according to the assertions of each theorem. The test area was further split, using white tape, into a 14 × 14 square grid. As shown in Figure 14, every square within this grid mirrored the measurements of a single hTromo robot block.
A shake-resistant parallelepiped frame was created using aluminum extrusion profiles, with a camera attached at the center of the top of the structure. Furthermore, the image plane was verified as being parallel to the floor, to avoid perspective projection complications in the area calculation. As such, when the camera was attached to the structure, a spirit level was used to ensure it sat parallel to the ground. The camera set-up and corresponding test area can be seen in Figure 15. Auto-focus was turned off, and a fixed focal length was employed when recording robot tiling. The raw video recordings were subsequently post-processed in order to assess robot performance.

To generate a tracking map of the hTromo's movement, an image-processing procedure was employed, comprising three key stages. The first stage concerned the storage of a reference image, from which the track map would be created. Secondly, the location and form of the robot were identified in each frame: after multichannel color thresholding, the algorithm recognized the robot as three spots, the center of each spot corresponding to the center of a red-colored marker. Finally, a track map was created by marking green squares on the reference image in accordance with the detected spots. Once a track map was ready, a percentage calculation was conducted to assess the total area coverage achieved, using Equation (1) for obstacle-free areas and Equation (2) when obstacles were present:

%Coverage Area = (pixel area covered by the robot / total pixel area of the testing field) × 100 (1)

%Coverage Area = (pixel area covered by the robot / (total pixel area of the testing field − total pixel area of the obstacles)) × 100 (2)

Results and Analysis

Each experiment was started after placing the hTromo robot at a predefined position inside the area. We recorded the robot in action from the beginning to the end of each experiment. Once the tests were completed, the recorded videos were post-processed using the image processing algorithm detailed in Section 4 in order to generate the track map. Figure 16 presents the track map images of the hTromo robot that were generated following the first set of experiments, which sought to validate the application of Theorem 1. The green shading represents the area covered by our hTromo robot. The percentage of area covered was computed using Equation (1) and is displayed on top of the tracked images. Figure 17 presents the tiled area during different stages of our first set of experiments. The figure indicates the actual position of the robot at a specific time point, and the associated track map at that instant overlaid with the completed tiling set. The robot path was executed according to the global tiling set specified by basic tromino theory. The results clearly show that our hTromo robot covered more than 95% of the test area in the first set of experiments, validating the underlying theory of tromino tiling. In the second set of experiments, involving Lemma 1 of Theorem 2, the area covered by our hTromo robot was also found to be over 95%. Figure 16 presents the tiled area during different stages of our second set of experiments. We then extended the testing area by adjusting the metal frames, as mentioned in Section 4, in order to validate Lemma 2 of Theorem 2. The same process was followed to generate the track map and compute the total area covered. The results show that hTromo covered more than 91% of the defined testing area, thereby validating the application of Lemma 2, as shown in Figure 18.
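As a concrete illustration of Equations (1) and (2), the sketch below computes the coverage percentage by counting pixels on a track map image. The file names and HSV thresholds are placeholder assumptions; the thresholds actually used in the experiments are not reported.

```python
# Minimal sketch of the coverage calculation in Equations (1) and (2).
# File names and HSV thresholds are illustrative assumptions, and the
# frame is assumed to be cropped to the testing field.
import cv2
import numpy as np

def coverage_percentage(track_map_path, obstacle_mask_path=None):
    frame = cv2.imread(track_map_path)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Green squares mark the grid cells visited by the robot on the track map.
    covered = cv2.inRange(hsv, np.array((40, 60, 60)), np.array((80, 255, 255)))
    covered_px = cv2.countNonZero(covered)

    total_px = frame.shape[0] * frame.shape[1]
    if obstacle_mask_path is None:
        return 100.0 * covered_px / total_px                    # Equation (1)

    # Obstacle mask assumed binary: nonzero pixels are obstacle cells.
    obstacles = cv2.imread(obstacle_mask_path, cv2.IMREAD_GRAYSCALE)
    obstacle_px = cv2.countNonZero(obstacles)
    return 100.0 * covered_px / (total_px - obstacle_px)        # Equation (2)

print(f"coverage: {coverage_percentage('track_map.png'):.1f}%")
```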
In the fourth and fifth sets of experiments, obstacles were placed inside the testing area as a means of modifying the test space as required by the respective lemmas. The boundaries of the testing area were adjusted according to the arguments of Lemmas 3 and 4 of Theorem 3, and we computed the total area covered by excluding the pixels associated with the obstacles, utilizing Equation (2). Figure 19 depicts the area coverage process during the validation of Lemma 3. The results clearly show that the hTromo robot covered in excess of 97% of the total test area, thereby validating the application of Lemma 3 of Theorem 3. Similarly, in the experiments involving Lemma 4, the hTromo robot covered over 95% of the total test area. Figure 20 presents the tiled area during the different stages of the fifth set of experiments. Furthermore, for the sixth set of experiments, we changed the test bed size to 168 × 168 cm by adjusting the metal frames and placed obstacles inside the test area in order to turn the square space into an Aztec diamond. We again followed the same experimental procedure to generate the track maps and compute the total area covered; since obstacles were placed inside the test arena, we used Equation (2). Figure 21 shows the area coverage process during the validation of Lemma 5. The results show that the hTromo robot covers more than 97% of the defined area, thereby validating the application of Lemma 5 of Theorem 4. The experiments involving Lemma 6 of Theorem 5 were conducted in a 112 cm × 112 cm area, with the area coverage of the hTromo robot again computed using Equation (2).
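To make the obstacle placement for these experiments concrete, the sketch below computes which cells of a 2n × 2n grid belong to an order-n Aztec diamond under the standard definition (cells whose centres satisfy |x| + |y| ≤ n relative to the grid centre); the remaining cells are the ones filled with obstacles. This is an assumed standard construction, not code from the paper.

```python
# Cells of an order-n Aztec diamond inside a 2n x 2n grid: a unit cell
# belongs to the diamond iff its centre (x, y), measured from the grid
# centre, satisfies |x| + |y| <= n. Standard definition; illustrative only.
def aztec_diamond_mask(n):
    size = 2 * n
    mask = []
    for r in range(size):
        row = []
        for c in range(size):
            x = c - n + 0.5          # cell-centre coordinates relative
            y = r - n + 0.5          # to the centre of the grid
            row.append(abs(x) + abs(y) <= n)
        mask.append(row)
    return mask

# Order 4 in an 8 x 8 grid, as used for Lemma 6: the diamond has
# 2n(n+1) = 40 cells, so 64 - 40 = 24 grid cells are filled with obstacles.
mask = aztec_diamond_mask(4)
print(sum(cell for row in mask for cell in row))   # -> 40
for row in mask:
    print("".join(".#"[cell] for cell in row))     # '#' = diamond cell
```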
Figure 22 shows the coverage process of the hTromo robot during the validation of Lemma 6. The results show that the hTromo robot achieved a coverage area of more than 92%, thereby validating the application of Lemma 6 of Theorem 5. The experimental results clearly indicate that there are significant untapped research and development opportunities related to the application of polyomino tiling theory to the area coverage problem.
Conclusions

In this paper, we proposed a novel area coverage approach for a reconfigurable floor cleaning robot, hTromo, using Tromino tiling theory. Specifically, we validated the application of five tromino tiling theorems with our hTromo robot. The experiments performed clearly demonstrate the efficacy of the proposed approach, resulting in very high levels of area coverage performance in all considered experimental cases. By automating the process of generating a global tiling set for tackling area coverage in our hTromo robot, we hope to greatly simplify the path planning problem. This paper also introduced the system architecture of our hTromo robot and the details of the experimental design and testbed. Future research will focus on: (1) integration of infrared, ultrasonic, and bump sensors for obstacle avoidance functions; (2) optimizing the robot's path planning by implementing the decision-making framework for human-robot collaborative workplace generation proposed by Panagiota Tsarouchi et al. [37]; (3) experimenting with the proposed approach in a larger space by increasing the floor area and computing the total area covered within a given time period; and (4) exploring global and local path planning by utilizing the images captured from the overhead camera.

Figure 10. hTromo robot's transformation from one configuration to another. (a) Transformation from I-tromino to L-tromino; (b) transformation from L-tromino to I-tromino.

Theorem 2. Let a, b be integers such that 2 ≤ a ≤ b. An a × b rectangle can be tiled with set 'T' trominoes if and only if one of the following conditions holds: 1. a = 3 and b is even; 2. a ≠ 3 and ab is divisible by 3.

Lemma 1. Let a = 3 and b ∈ {2, 4, 6}; then a 3 × b rectangle can be tiled using the arrangement of 'T' trominoes shown in Figure 3a. Hence the smallest rectangle satisfying condition 1 of Theorem 2 is 3 × 2. Let b > 6 be even and c ∈ {2, 4, 6}; then b = 2n + c, where n is a positive even integer. As such, a (3 × b) rectangle can be split into n (3 × 2) rectangles and one (3 × c) rectangle, as in Figure 3b. This implies that if b ≥ 2 is even, then a (3 × b) rectangle can be tiled using a set of 'T' trominoes.

Figure 3. Image argument for Lemma 1. (a) a = 3 and b is even; (b) decomposing an a × b rectangle into sub-rectangles.
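The two conditions of Theorem 2 translate directly into a small predicate. The following is a minimal sketch; the function name and the example calls are ours, not the paper's.

```python
# Encoding of Theorem 2 as stated above: an a x b rectangle (2 <= a <= b)
# is tileable by the 'T' set of trominoes iff
# (a == 3 and b is even) or (a != 3 and ab is divisible by 3).
def t_set_tileable(a: int, b: int) -> bool:
    if not 2 <= a <= b:
        raise ValueError("theorem requires 2 <= a <= b")
    return (a == 3 and b % 2 == 0) or (a != 3 and (a * b) % 3 == 0)

# The 9 x 8 grid of the Lemma 1 experiment and the 11 x 9 grid of the
# Lemma 2 experiment both satisfy the divisibility condition.
print(t_set_tileable(8, 9))    # True: 8 != 3 and 72 % 3 == 0
print(t_set_tileable(9, 11))   # True: 9 != 3 and 99 % 3 == 0
print(t_set_tileable(3, 5))    # False: a == 3 but b is odd
```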
Figure 14. Defined test area with segmented square grids.

Figure 15. Test area with parallelepiped structure with provision for camera mount.

Table 1. LL hinged points of hTromo robot. LL: Left Left.
17,295.6
2018-02-28T00:00:00.000
[ "Computer Science", "Engineering" ]
How Has the Adoption of Business Intelligence Impacted Performance of Higher Education Institutions: Empirical Evidence from Malaysia

Higher Education Institutions (HEIs) are lagging behind in the adoption of Business Intelligence (BI): although the level of BI adoption is high in large organizations, it remains low in HEIs, and there have been limited studies that look at the impact of BI adoption in developing countries. This study examines how BI adoption impacts the performance of HEIs in Malaysia, applying resource-based theory to explore the relationship between BI adoption and performance. Data was collected through a web-form survey of 162 HEIs in Malaysia listed with the Malaysia Qualification Agency (MQA). Partial least squares (PLS) structural equation modelling was used to analyse the data. The results showed that there is a significant impact on the performance of HEIs depending on their level of BI adoption. These research findings will hopefully help to encourage BI adoption among HEIs in Malaysia.

Introduction

Business Intelligence (BI) has embraced the massive quantity of information collected, combined, accessed, and analysed by many organizations in their activities (Olszak, 2016). A recent survey from Gartner (2019) shows that BI is ranked as a top differentiating technology for organizations and is considered the most strategic technology area. Dresner Advisory Service (2018) reported the top four BI objectives as: (1) making better decisions; (2) improving operational efficiency; (3) growing revenues; and (4) increasing competitive advantage. Further BI objectives include enhanced customer service, higher degrees of compliance, and improved risk management. BI is usually thought of as a set of tools used only by for-profit and large corporations. However, Higher Education Institutions (HEIs) are underrated candidates that are often overlooked in the pursuit of greater BI adoption. According to Dresner Advisory Service (2018), higher education has shown low penetration of BI adoption compared to other industries, whereas the insurance industry leads the field in BI adoption, followed by the technology industry, with 40 per cent of technology organizations having adoption rates of 41 per cent or higher. PricewaterhouseCoopers (2017) reported that many HEIs are looking at adopting technological practices used by business corporations to address emerging challenges such as business sustainability. BI is an increasingly vital tool for the higher education environment and has made great inroads thus far (DELL, 2013). BI adoption can enable HEIs to develop plans for improvement and take action to improve efficiency in their operations (EDUCAUSE, 2017). As the higher education system in Malaysia grows, HEIs are becoming more regulated to guarantee a higher quality of education. In Malaysia, the national quality assurance and accreditation body for education is the Malaysia Qualification Agency (MQA), which was established to ensure greater oversight of HEIs, especially regarding quality and performance. There are several reasons for the importance and relevance of this study of BI adoption in Malaysian HEIs. First, BI adoption is increasingly popular among Malaysian organizations. For example, Gartner forecasted revenue for BI projects in Malaysia to reach RM114.5 million (USD37 million) in 2013, an improvement of 9 per cent from 2012.
This is as opposed to global revenue projections of USD13.8 billion, a 7 per cent increase (Gartner, 2013). Gartner further estimated that Malaysia would continue as the second-largest business intelligence market in ASEAN after Singapore, reaching USD30.4 million by 2017, while the BI market in the Asia Pacific was expected to grow 7.4 per cent to reach almost USD1.4 billion in revenue in 2014 and more than USD1.6 billion by 2017 (Gartner, 2017). Thus, the potential of BI is certainly vivid within the Malaysian context, hence the significance of BI adoption research in the HEI setting. Second, the higher education sector has undergone several rounds of reforms to further improve the quality of education provided (Malaysian Ministry of Education, 2015). These reforms have led to a flow of international students to HEIs in Malaysia. Available statistics indicate that in 2015 a total of 74,748 international students from over 150 nations registered to study in Malaysian HEIs: 26,405 of those international students were in public HEIs, while the remaining 48,343 were in private HEIs. Given the rise in international student enrolment in Malaysian HEIs from a variety of backgrounds, the adoption of BI remains critical to the efficient management of student data and other HEI operations. The Graduate Tracer Study by the Ministry of Education Malaysia (MoE) (2015) indicated that about sixty per cent of the unemployed are below age 24. Every year, one out of five fresh graduates fails to secure employment six months after graduation. To put that number in context, Malaysia produces more than 250,000 graduates a year. Among these fresh graduates, about 26 per cent of first-degree holders are unemployed, and 52 per cent of these unemployed graduates are from arts and social science study backgrounds. Most unemployed fresh graduates come from public universities (50 per cent), with 47 per cent from private universities (Ministry of Education Malaysia, 2015). Therefore, HEIs need to rely strongly on student information when making critical and strategic choices (Wong et al., 2018). HEIs collect and track more student information than ever before, from student entry to departure, including application data, course registration information, attendance information, online learning information, performance information, extracurricular information, internship information, and employability information (Ong, 2016). HEI operations generally cover five enterprise areas: (1) student affairs; (2) academic staff affairs; (3) finance matters; (4) research and development affairs; and (5) infrastructure and development affairs (Rahmat, Ahmad, & Ta'a, 2016). Each business area needs to be integrated and to make use of application systems to help with daily tasks. The data from each application generate useful information that can be accessed by multiple departments, such as HEI senior management, faculty members, administrative staff, scientists, and other relevant parties. Administering HEIs is complicated and generates large volumes of data across departments, all while striving for academic excellence. There is a diversity of explanations for the relatively low BI adoption rate among HEIs in developing countries.
In the context of BI adoption in developing countries, many researchers have claimed that the level of BI adoption in Malaysia is lagging compared to other countries such as Singapore, the Philippines, and Thailand (Boonsiritomachai et al., 2016; Hatta, Miskon, & Abdullah, 2017). The typical reasons for relatively low BI adoption include technical complexity issues, the inflexibility of the software tools, a lack of senior management focus, and difficulty in assessing the benefits provided to the organization. Thus, this study intends to investigate how BI adoption impacts the organizational performance of HEIs in Malaysia.

Theoretical Foundation: Resource-Based Theory (RBT)

According to Bhanu and Magiswary (2010), RBT resource concepts and taxonomies are still tricky for researchers due to unclear concepts of organizational resources. The effect of technology adoption on organizational performance remains a topic of discussion, although many researchers have claimed that IT adoption can drive organizational performance and enable organizations to achieve a competitive advantage (Bhanu & Magiswary, 2010). Organizations can be viewed as an extensive set of assets, which are the main drivers of organizational performance. Barney (1991) indicated that to attain competitive advantage, organizations need to position themselves strategically based on their valuable, rare, inimitable, and non-substitutable resources, rather than on the goods and services obtained from those assets. Mahoney and Pandian (1992) studied organizational performance based on RBT and found that there are differences between organizations within the same industry, as well as within the narrower boundaries of groups within industries. Wieder and Ossimitz (2015) argued that when it comes to BI adoption, a strong sense of purpose and strategy, strong implementation, and support of BI have a positive effect on data quality, information quality, and the scope of BI. This positive effect, in combination with other factors, translates to a positive effect on the quality of the decision-making process. In particular, BI adoption can play a pivotal role in the decision-making process by collecting high-value data and information. This makes sense because, when organizations manage their BI use with a clear strategy of why, how, and where BI will be implemented and maintained, BI will be able to collect high-quality information that is relevant, transparent, and trustworthy. Rezaie, Ansarinejad, Haeri, and Nazari-Shirkouhi (2011) asserted that BI reduces the time used in, and increases the efficiency of, the decision-making process by enabling users to analyse and obtain information and knowledge from vast amounts of data. This benefit is consistent with Wieder and Ossimitz's (2015) argument. Such high-quality data and information have a beneficial impact on the performance of the decision-making system if the user has access to large amounts of data and the authority to manage it through an insightful and purposeful use of BI. Organizational performance, according to Gavrea, Ilies, and Stegerean (2011), has been identified as one of the most vital factors in management studies. Georgopoulos and Tannenbaum (1957) defined organizational performance as the extent to which organizations, viewed as social systems, achieved their goals, with results assessed at the level of the job, the individual, and the organizational structure.
In the early 1960s and 1970s, it was defined as the ability of an organization to exploit its environment for the acquisition and use of scarce resources (Seashore & Yuchtman, 1967). In the 1980s and 1990s, organizational performance was understood as an organization using a minimum of resources (efficiency) to achieve its goals (effectiveness). This concept of performance resulted in profit becoming one of many performance indices (Boonsiritomachai et al., 2016; Gavrea et al., 2011). More recently, however, Lebas and Euske (2007) outlined a set of definitions to explain the concept of organizational performance. The first definition treats performance as a collection of financial and non-financial factors comprising data on the extent to which goals and outcomes are achieved. This study hypothesizes that BI adoption has a significant positive relationship with organizational performance.

Methodology

Sample
The HEIs listed by the Malaysian Ministry of Education are the selected population for this research. There are 769 HEIs in Malaysia according to the Malaysia Qualification Agency (MQA). This number is obtained from the total number of public and private universities, polytechnics, private university colleges, private colleges, and public community colleges in Malaysia. Table 1 shows the types and numbers of HEIs in Malaysia as accredited by the Malaysia Qualification Agency (MQA). The respondent for this study is either the Chief Information Officer (CIO), IT Director, or IT Manager, someone actively participating in IT management.

Data Collection
An email with an attached link to the questionnaire was sent out to the 769 institutions. The questionnaire used an online data collection approach (Google Form): respondents could immediately access the online form by clicking on the attached link. The researcher initiated multiple follow-up attempts (i.e., sending out an email invitation to participate in the study to the respondents' direct email addresses) to maximize the responses. Significant challenges were faced, as some respondents did not respond or were unwilling to participate, despite being agreeable when contacted by the researcher via calls or WhatsApp messages. Hence, even though 769 emails were initially sent out, only a total of 162 responses had been received by the end of the data collection period (January to June 2019). The response rate was 21.06 per cent from the online data collection approach, as shown in Table 2.

Measures of Constructs
The questionnaire had four sections with a total of 25 items: screening questions, BI adoption, organizational performance, and demographic variables. The purpose of the screening is to obtain the required feedback as accurately as possible. As the study focuses on investigating BI adoption among HEIs in Malaysia and the unit of analysis is the organizational level, a qualifying question was placed at the beginning of the questionnaire to be answered by potential respondents. This screening ensured that only those who are actively participating in the BI adoption process in their HEI took part in the study. The qualifying question asked was, "Do you involve in business intelligence adoption in your institutions? Yes/No". Only those who answered yes to the qualifying question could proceed with the rest of the questionnaire. The independent variable in the research model is BI adoption.
It is a categorical variable comprising five stages: operate, consolidate, integrate, optimize, and innovate. The constructs and measures for classifying these levels of BI adoption and information use in organizations are adapted from Davis, Miller, and Russell (2006) and Sacu and Spruit (2010). As the model in this study categorizes organizations into five levels of BI adoption based on the five dimensions of infrastructure, knowledge process, human capital, culture, and application, the questionnaire was designed to pose five questions representing those dimensions. The researcher used thirteen items, adapted from Owusu (2017), to measure the organizational performance construct, using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).

Data Analysis
Data collected were processed and analysed using a form of structural equation modelling (SEM). SEM is a second-generation multivariate data analysis technique that analyses and explains a research structure with multiple variables (Hair et al., 2017). Theory testing and causal modelling are not complete without SEM, especially in applied multivariate analysis. Table 3 illustrates the demographic profile of the HEIs which participated in this survey. The majority of the HEIs are private colleges (27.8 per cent), followed by private universities (24.7 per cent). Regarding geographical location, most HEIs are in Selangor (25.9 per cent) and Kuala Lumpur (13.0 per cent). Based on Table 4, for the infrastructure dimension, more than one-third of the respondents indicated that their institutions were at the consolidate stage. More than a quarter indicated that their organizations were at the operate stage, followed by 19.8 per cent at the integrate stage. Only 9.3 per cent of respondents indicated that their organization's infrastructure was at the optimize stage, with only a few respondents (3.1 per cent) selecting the innovate stage. For the knowledge process dimension, nearly one-third of the respondents indicated that the knowledge process of their organizations was at the operate stage, followed by 25.9 per cent at the consolidate stage and 24.7 per cent at the integrate stage. Only 19.8 per cent of respondents indicated that their organization's knowledge process was at the optimize stage, and none at the innovate stage. For the human capital dimension, a plurality of the respondents (41.4 per cent) indicated that their organizations' human capital was at the consolidate stage, followed by 35.2 per cent at the integrate stage, 17.3 per cent at the operate stage, 6.2 per cent at the optimize stage, and none at the innovate stage. For the culture dimension, a plurality of the respondents (42.0 per cent) indicated that their organizations' culture was at the consolidate stage, followed by 25.3 per cent at the integrate stage, 20.4 per cent at the optimize stage, 12.3 per cent at the operate stage, and none at the innovate stage. Finally, for the application dimension, a plurality of the respondents (37.0 per cent) indicated that their organizations' applications were at the consolidate stage, followed by 25.3 per cent at the integrate stage, 22.8 per cent at the operate stage, 4.9 per cent at the optimize stage, and 9.9 per cent at the innovate stage.
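The study reports the five dimensions separately before assigning each institution a single overall adoption level, but the aggregation rule is not spelled out in this excerpt. The sketch below shows one hypothetical roll-up, taking the modal stage across dimensions; the rule and function name are purely illustrative assumptions.

```python
# Hypothetical roll-up of the five dimension responses (infrastructure,
# knowledge process, human capital, culture, application) into a single
# BI adoption level. The paper does not specify its aggregation rule;
# the modal stage used here is an illustrative assumption.
from statistics import mode

STAGES = ["operate", "consolidate", "integrate", "optimize", "innovate"]

def adoption_level(dimension_stages):
    """dimension_stages: one stage name per dimension, e.g. five answers."""
    ranks = [STAGES.index(s) for s in dimension_stages]
    return STAGES[mode(ranks)]   # most frequent stage across the dimensions

print(adoption_level(
    ["consolidate", "operate", "consolidate", "consolidate", "integrate"]
))  # -> 'consolidate'
```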
As summarized in Table 5, a plurality of the HEIs (37.65 per cent) indicated that their institution's level of BI adoption was at the integrate stage, followed by 37.04 per cent at the consolidate stage, 19.14 per cent at the optimize stage, 6.17 per cent at the operate stage, and none at the innovate stage. After categorizing the level of BI adoption in the participating HEIs into four stages, each stage was profiled based on descriptive statistics in terms of frequencies and percentages (Table 6). The result allowed a more detailed description of the characteristics of BI adoption at each stage; the comparison is only made between different levels of BI adoption. Table 7 shows the descriptive statistics of the responding HEIs across the levels of BI adoption.

The Q² values, including that of Organizational Performance (Q² = 0.048), were greater than 0, indicating that the research model had adequate predictive relevance. BI Adoption explained 9.5 per cent of the variance in organizational performance. The R² value of 0.627 was above the 0.26 value suggested by Cohen (1988), which indicated a substantial model. Furthermore, the R² value of 0.109 was above the 0.10 threshold suggested by Falk and Miller (1992), who recommended that the R² value be equal to or greater than 0.10 for the variance explained of an endogenous construct to be deemed adequate. Therefore, the R² for the research model constructs was comparable to recent findings in the literature (Hair et al., 2017).

The validity and reliability of the measurement items were analysed to ensure accuracy. This can be done by looking at factors such as the individual indicator loadings, internal composite reliability, and discriminant validity. Two types of validity tests can be performed to assess the validity of the reflective measurement model: the convergent validity test and the discriminant validity test; the researcher assessed both. First, the reflective measurement model was evaluated for convergent validity, based on indicator loadings, composite reliability (CR), and average variance extracted (AVE) (Hair et al., 2017). The results are presented in Table 8. Table 9 indicates that all constructs exhibited satisfactory discriminant validity, where the square root of the AVE (diagonal) was larger than the correlations (off-diagonal) for all reflective constructs.
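For readers less familiar with these convergent validity checks, composite reliability and AVE can be computed directly from standardized indicator loadings, as sketched below. The loadings shown are placeholder values, not the study's actual results.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized indicator loadings, as used in PLS-SEM convergent validity
# checks. The loadings here are placeholders, not the study's results.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2                     # indicator error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

loadings = [0.82, 0.78, 0.85, 0.74]             # hypothetical construct
print(f"CR  = {composite_reliability(loadings):.3f}")       # should exceed 0.7
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # should exceed 0.5
```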
In other words, indicators should load more strongly on their own constructs than on other constructs in the research model, and the average variance shared between each construct and its measures should be higher than the variance shared between the construct and other constructs (Hair et al., 2017). The HTMT criterion developed by Henseler et al. (2015) was also used to assess discriminant validity. Table 9 illustrates that all values fulfilled both the HTMT.90 and HTMT.85 criteria. Discriminant validity is therefore established by the analysis. Structural Model Assessment Based on the evaluation shown in Table 10, the hypothesized relationship has a t-value greater than 2.33, which is significant at the 0.01 level of significance. Findings Business Intelligence Adoption among Higher Education Institutions in Malaysia Data analysis found that none of the participating HEIs was at the highest level of BI adoption, the innovate stage. A study by Owusu et al. (2017) found that most private universities in Malaysia are currently at level 2 of BI adoption. Only 6.17 per cent of the participating HEIs are at the lowest level of BI adoption, which is the operate stage. This is followed by the optimize stage (19.14 per cent), which is the fourth level of BI adoption. Most of the participating HEIs are at the consolidate stage (37.04 per cent), which is the second level of BI adoption, and the integrate stage (37.65 per cent), which is the third level. Given the large number of HEIs categorized in the second and third levels of BI adoption, HEIs in Malaysia are at a moderate level of BI adoption. The operate stage is the starting point of the BI adoption journey, because BI adoption among HEIs at this level is neither complex nor resource-intensive. More resources are necessary for HEIs to proceed to the consolidate and integrate stages, which place greater emphasis on analytical processes. These stages require HEIs to overhaul their technology infrastructures, create a culture of sharing, improve the institution's human talent and fine-tune knowledge processes. However, as such resources are not readily available to HEIs in Malaysia, only a small number of HEIs were categorized at the upper level of BI adoption, the optimize stage. Based on these findings, there are opportunities to raise BI adoption in Malaysia's HEIs to greater levels. HEI administrations wishing to encourage BI adoption need to consider the factors which significantly influence BI adoption among HEIs in Malaysia. Outcome of BI Adoption The hypothesis suggested that BI adoption would be significantly related to organizational performance. The result of the hypothesis test showed that BI adoption (β = -0.304, p < 0.05) did influence organizational performance; the hypothesis is therefore accepted based on the collected data. The results of the analysis show that the organizational performance of HEIs is significantly impacted by BI adoption. The finding indicates that BI adoption among HEIs helps in value creation for students, monitors how HEIs deliver services and highlights opportunities for removing operating inefficiencies. Kaplan and Norton (1996) noted that, through the continuous improvement attributed to BI adoption, HEIs can determine the processes and competencies which are most critical and specify measures, including cycle time, quality, employee skills, and productivity, to track them. 
These can lead HEI managements to improve the focus of their priorities for different business areas. Implication of the Study The implications of this study are as follows. Firstly, the findings can guide HEI administrations, especially those trying to adopt BI in their operations. Since the findings point to moderate levels of BI adoption among HEIs in Malaysia, there are still opportunities to raise BI adoption among HEIs to higher levels. To encourage higher BI adoption, HEI administrators must be made aware of, and understand, the advantages of BI adoption. HEIs can create policies to boost BI adoption, for example by initiating awareness campaigns to convince all associated departments of its prospective benefits. Secondly, the findings of this study can act as a starting point for the government and BI providers to determine the current state of BI adoption among HEIs. By treating these institutions as a distinct group, they can create strategies and customize offerings that are better adapted to the needs of HEIs, which would hopefully accelerate BI acceptance among HEIs. Government and BI providers will then be equipped with the necessary knowledge to guide their policies and allocate resources more effectively to address the challenges surrounding BI adoption among HEIs. From the perspective of BI providers, trial periods can be offered so that HEIs can try BI systems before fully adopting them. Implementing such trials would build understanding and demonstrate the advantages of sophisticated BI for HEIs. Additionally, trials can allow the BI provider to better assist HEIs in selecting the appropriate BI models that reflect the HEIs' needs. Thirdly, from the HEI's point of view, the significant effect of BI's compatibility demonstrates that change management for BI adoption is a vital issue to be resolved before BI procurement. A compatibility assessment before embracing BI is therefore advisable, so that resistance to change can be monitored and reduced to a minimum. Only when such issues and problems are minimized will BI adoption generate its presumed maximum advantage; otherwise, the extra time required to tackle them may defeat the goal of deploying new IT innovations. The decision to embark on BI adoption should depend on a thorough cost-benefit analysis. Fourthly, this study indicates that BI adoption could affect organizational performance in both financial and non-financial aspects. The results showed that HEI administrators should holistically assess the benefits of BI adoption, both tangible and intangible. It is suggested that the benefit of having greater control of finances could encourage HEIs to adopt BI in their operations. Conclusion In the last two decades, BI has become an increasingly essential component of the organizational decision-making process. BI adoption has reached a mature phase in large organizations, while HEIs are still slow to adopt BI. BI adoption is expected to assist organizations in attaining competitive advantages and improving organizational performance by turning the operational data gathered into assets that drive strategic decisions. 
Given these findings, researchers, government bodies, and IT service providers should recognize the potential of BI as a tool and emphasize the need for BI adoption among HEIs in Malaysia to boost organizational performance. The researcher also expects that the empirical findings from the validated research model will provide further knowledge of the benefits of BI adoption among HEIs in Malaysia, and hopes that, in the context of HEIs, the model used in this study can be applied to examine the impact of adopting other forms of technological innovation. Research Contributions The theoretical contribution of this study is to add to the literature on technology innovation, enriching detailed knowledge and understanding of the process of organizational IT adoption. It also contributes to theory by evaluating the applicability of Resource-Based Theory (RBT) when applied in developing countries such as Malaysia. The practical contribution of this study is to extend knowledge of analytical instruments in the enterprise, filling the knowledge gap in BI adoption and giving HEI administrators a stronger understanding that helps develop favourable attitudes towards BI adoption. HEI administrators will also be motivated to become more proactive in BI adoption, improving their likelihood of success in decision-making by enhancing productivity and increasing competitiveness.
6,186.6
2021-01-29T00:00:00.000
[ "Business", "Education", "Computer Science" ]
AST-GIN: Attribute-Augmented Spatiotemporal Graph Informer Network for Electric Vehicle Charging Station Availability Forecasting Electric Vehicle (EV) charging demand and charging station availability forecasting is one of the challenges in the intelligent transportation system. With accurate EV station availability prediction, suitable charging behaviors can be scheduled in advance to relieve range anxiety. Many existing deep learning methods have been proposed to address this issue; however, due to the complex road network structure and complex external factors, such as points of interest (POIs) and weather effects, many commonly used algorithms can only extract the historical usage information and do not consider the comprehensive influence of external factors. To enhance the prediction accuracy and interpretability, the Attribute-Augmented Spatiotemporal Graph Informer (AST-GIN) structure is proposed in this study by combining the Graph Convolutional Network (GCN) layer and the Informer layer to extract both the external and internal spatiotemporal dependence of relevant transportation data. The external factors are modeled as dynamic attributes by the attribute-augmented encoder for training. The AST-GIN model was tested on the data collected in Dundee City, and the experimental results showed the effectiveness of our model considering external factors' influence on various horizon settings compared with other baselines. Introduction Traffic information forecasting plays an important role in smart city management. Generally speaking, traffic information contains the link speed, traffic flow, vehicle density, traveling time, facility usage condition, and so on [1]. With the rapid development of EV technologies, the proportion of EVs is growing annually [2], and Figure 1 shows the worldwide EV sales statistics. However, limited endurance, limited charging stations and a much longer charging time compared with the short refueling time of petrol cars cause serious mileage anxiety for EV drivers [3]. As one of the most significant infrastructures of the EV system, EV charging stations have attracted more attention recently. Some studies have shown that EV charging behavior has obvious periodicity [4]; thus an accurate EV charging station usage condition forecasting system can effectively alleviate range anxiety and improve road efficiency [5]. Benefiting from the huge number of smart sensors, real-time station-level monitoring has been realized [6]. Most canonical facility usage condition prediction methods are dependent on past traffic features to make predictions. However, EV charging station availability is much more complex than other time series forecasting issues, because the future availability not only depends on the historical values, but is also influenced by the topological relationship and complex external influences [7]. For example, within a campus or Central Business District (CBD) road section, the usage of the charging station will be highly affected by the commute time. An obvious rise of availability can be observed around the off-duty time, whereas the reverse holds inside a residential area, even though the two road structures are similar [8]. Another example is that bad weather, such as heavy rain, can increase and delay people's commute and further affect charging station usage [9]. It is quite a challenge to take into consideration the randomness caused by these external factors [10]. Figure 1. EV sales statistics and EV charging station example. 
Global EV sales increased 108%, from a 4.2% market share in 2020 to an 8.3% market share in 2021 [2]. With the development of deep learning technologies, several forecasting methods have been proposed to solve this issue [11], such as the Auto-Regressive Integrated Moving Average (ARIMA) method [12], the Convolutional Neural Network (CNN) method, the Long Short-Term Memory (LSTM) method, the GCN method [13], and the Transformer-based method [14]. Each algorithm has its own strengths and limitations. However, most of the models do not have the capability to obtain the augmented attributes during the data processing; correspondingly, their perception of external factors is poor. In the next section, a detailed introduction to the related work is given. Compared to the recent related works, we built a novel neural network extracting both spatiotemporal information and external influences to predict the charging station usage condition. The contributions can be summarized as follows: • As far as we know, our study is one of the few research works on deep learning approaches for the EV charging station availability forecasting problem. • The AST-GIN structure is proposed for the first time to deal with the EV charging station availability forecasting problem, combining the Attribute Augmentation Unit (A2Unit), the GCN, and the Informer network. • The proposed AST-GIN model was verified and tested on real-world data. The comparison results showed that the AST-GIN has better prediction capability over different horizons and metrics. The rest of the work is arranged as follows: The second section describes the related research on deep learning approaches for traffic facility usage forecasting and external factors' influence during time series prediction. The third section illustrates the problem statement and proposed model structure. The fourth section shows the detailed experiments with an analysis. The final section summarizes the contributions and possible future plans. EV Charging Issue Recent research has shown that charging is a challenge for the operation of a fleet of EVs, since frequent charging sessions are needed [15]. Alleviating charging station congestion has become significant to improve the efficiency of charging infrastructure management [16,17]. Two main research directions for the EV charging problem have been studied recently. One direction focuses on modeling individual EV charging loads and charging stations. The objective is to predict the parameters of the charging load profiles for a smart charging management system [18]. Existing studies mainly apply statistical models [19], such as Gaussian mixture models [18], and deep learning approaches [20], such as a hybrid LSTM neural network [21,22], to forecast charging loads at EV charging stations. In [23], the authors reviewed the most popular techniques for EV load modeling, including deterministic and probabilistic methods. From short-term to long-term perspectives, researchers have proposed several forecasting methods. Utilizing the advantages of Internet of Things (IoT) technology, a real-time interactional view of charging stations and a server-based forecasting application have been realized [24]. In [25], the authors proposed a daily joint adversarial generation interval-forecasting method for EV charging load distribution, considering the influence of the spatial correlation and characterizing the randomness. 
Some researchers even presented a mid- and long-term systematic method to predict the additional loads of EV charging by considering the EV charging profiles and future EV ownership [26]. The other direction analyzes modeling and predicting the charging occupancy profile at the chargers, which is quite similar to the parking availability prediction problem [27,28]. The purpose is to design scheduling algorithms that allocate EVs among eligible chargers to realize a globally or locally optimal charging waiting plan [29,30]. Canonical Forecasting Model For the traffic forecasting issue, the approaches have undergone several stages, and the methods can generally be divided into two types: canonical models and deep-learning-based models [31]. Canonical forecasting models usually build mathematical models and treat traffic behavior as a conditional process. There are many well-known models, such as the Historical Average (HA) model, the K-nearest neighbor model, the ARIMA model, and the Support Vector Regression (SVR) model [32]. Most of them consider the trend of the data and make the strong assumption that the time series data are stationary, which makes it difficult for them to respond to rapid changes in the inputs [33]. Deep Learning Forecasting Model Recently, deep-learning-based forecasting methods have been widely applied to solve time series prediction problems [34]. Benefiting from their capability to extract nonlinear relationships across an input sequence, the Recurrent Neural Network (RNN) model, the Stacked Autoencoding Neural Network (SAE), the Gated Recurrent Unit (GRU) [35], LSTM [36], the Transformer [37], and their variants have been verified to be much more efficient at extracting temporal information than canonical forecasting models. To adaptively predict comprehensive traffic conditions, some works have improved the results further, for example by integrating a GCN to extract the spatial dependencies [13,38]. External Factors in Forecasting As mentioned above, external factors have an influence on the future usage conditions of EV stations. To integrate the information of a variety of external inputs, such as surrounding POIs [39] and weather conditions [40], previous studies have made great efforts, leveraging multi-source data to specifically design the model structure. In [41], the authors proposed an LSTM-based structure integrating an encoder to aggregate external information, treating multi-source data as sequential inputs. In [35], the authors applied feature fusion technology to process the input weather data for traffic prediction. In conclusion, existing methods can be further improved by considering external information's influence. Therefore, motivated by the related works and the challenges, the AST-GIN network for EV charging station availability forecasting, which integrates both spatiotemporal and external factors as the input to enhance the model's perception capability during prediction, is proposed. In the next section, the architecture and principles of the proposed model are illustrated. Definition of EV Charging Station Availability In this section, we first give the mathematical definition of EV charging station availability. Availability represents the occupancy status of EV charging facilities, such as charging connectors. If all connectors in a charging station are occupied for charging, the availability is regarded as 0. On the other hand, if all connectors are in the unused status, the availability is regarded as 1. 
Based on the definition, the availability of the charging station can be calculated as $x_{i,t} = 1 - M_{i,\mathrm{used}}/M_{i,\mathrm{all}}$, where $M_{i,\mathrm{used}}$ is the number of charging connectors being used at station i at time t and $M_{i,\mathrm{all}}$ is the total number of charging connectors at station i at time t. Herein, the ultimate target variable is x, where $x_{i,t}$ is the availability of charging station i at time t; note that $x_{i,t} \in [0, 1]$. Before introducing the model for predicting the availability of the EV charging stations, we subsequently describe the attributes that exert an impact on our target variable x, i.e., the charging station availability. Weather Condition Attribute The purpose of this study was to predict future EV stations' usage condition based on historical states and associated information. Based on the prior knowledge introduced, the demand of EVs has a strong periodicity, and external factors have a high correlation with the usage of EVs. As shown in Figure 2, the weather is classified into 5 types: sunny, cloudy, foggy, light rain, and heavy rain, labelled 1 to 5. To better present the relationship, the weather data were normalized to the range 0 to 1. In Figure 3, the Y-axis for the availability data refers to the availability of the chargers, while the Y-axis for the weather data refers to the different kinds of weather: the higher the value, the worse the weather. The graph shows a general pattern by which the availability becomes higher when the weather becomes better, and vice versa. The time index in the figure refers to the recording time. In this work, the weather factor is organized as a matrix W, where $W_{i,t} \in [0, 1]$ is the weather condition of location i at time t. Road Network and POI Attributes In this paper, the road network was treated as a weighted undirected graph G = {V, E}. EV stations work as nodes inside the graph, denoted by |V| = N, where N is the number of stations, and E is the edge set representing the stations' connectivity. The corresponding adjacency matrix $A \in \mathbb{R}^{N \times N}$ can be constructed from the node and edge information. With the road map, based on the latitude and longitude of the charging points, the road distance between EV stations can be estimated. The adjacency matrix elements are calculated using the Gaussian kernel weighting function [42]: $A_{ab} = \exp\left(-\mathrm{dist}(v_a, v_b)^2 / \sigma^2\right)$, with small kernel values filtered out by the cutoff κ, where $\mathrm{dist}(v_a, v_b)$ represents the distance between station $v_a$ and station $v_b$ and σ is the standard deviation of the distances. Besides, we also integrated the POIs' distribution information as an external factor, denoted as $\mathrm{POI} = [poi_1, poi_2, \ldots, poi_N]$, where $poi_i$ is the POI category score for location i. Assuming we have k POI categories, $poi_i \in \{1, 2, \ldots, k\}$. Problem Formulation At time t, the EV charging station availability matrix $X_t \in \mathbb{R}^{N \times C}$ contains the high-dimensional information of EV station availability, where C is a manually defined hyperparameter. Thus, the known L steps' historical usage data are defined as $X = [X_{t-L}, X_{t-L+1}, \ldots, X_t]$ and used as partial inputs to predict the next M steps' states $\hat{Y}_{t+1}, \ldots, \hat{Y}_{t+M}$. Further, the influence of external factors is regarded as the affiliated attribute matrix F. These factors construct an attribute matrix $[F_1, F_2, \ldots, F_l]$, where l is the dimension of the attribute information. At time t, the set of the j-th affiliated information is $F_j = [F_{j,t-L}, F_{j,t-L+1}, \ldots, F_{j,t}]$. 
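To make the preceding definitions concrete, the sketch below computes the availability variable and the thresholded Gaussian-kernel adjacency matrix in NumPy. It is a minimal reading of the formulas above, not code from the paper: in particular, the exact role of the cutoff κ is ambiguous in the extracted text, and it is treated here as a distance cutoff beyond which edges are dropped.

```python
import numpy as np

def availability(used_connectors, total_connectors):
    """Availability x_{i,t} = 1 - M_used / M_all, so x lies in [0, 1]."""
    return 1.0 - used_connectors / total_connectors

def gaussian_adjacency(dist, kappa):
    """Thresholded Gaussian kernel over a pairwise road-distance matrix.

    dist  : (N, N) array of road distances between stations
    kappa : cutoff; station pairs farther apart than kappa get weight 0
            (an assumption -- the paper only says kappa filters the kernel)
    """
    sigma = dist.std()                       # std of the distances, as in the paper
    A = np.exp(-(dist ** 2) / (sigma ** 2))  # Gaussian kernel weights
    A[dist > kappa] = 0.0                    # sparsify the graph
    np.fill_diagonal(A, 0.0)                 # no self-loops (A + I is added later)
    return A

# Example: a station with 3 of its 4 connectors in use has availability 0.25.
print(availability(np.array([3.0]), np.array([4.0])))
```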
In conclusion, the issue of EV charging station availability forecasting considering external factors is refined to finding a relationship function f that maps the historical usage data X, the attribute matrix F, and the road graph structure G to the future usage values: $[\hat{Y}_{t+1}, \ldots, \hat{Y}_{t+M}] = f(X, F, G)$, where $\hat{Y}_{t+m}$ is the estimated value of $X_{t+m}$ at the future time t + m, m = 1, 2, ..., M, and M is the prediction horizon. AST-GIN Architecture In this subsection, the principle of the AST-GIN model is introduced in detail. The AST-GIN model contains three layers: the A2Unit, which integrates the external information, the GCN layer, and the Informer layer. The historical time series data and external data are first fed into the A2Unit for attribute augmentation. Then, the processed information is fed into the GCN layer for spatial information extraction. Finally, the Informer layer takes the outputs from the GCN layer to extract the temporal dependencies. An overview of the architecture of the AST-GIN is illustrated in Figure 4. A2Unit As mentioned, both historical data and external factors affect the EV charging conditions. Thus, unlike traditional time series deep learning forecasting models, an additional structure aggregating external factors is needed [41]. To comprehensively take external factors' influence into consideration, dynamic attributes and static attributes are selected, respectively, for the objective region. The EV stations' historical availability tensor X, the road structure G, and the two types of attribute matrices are integrated in the A2Unit for augmentation. We use $\alpha \in \mathbb{R}^{N \times p}$ to represent the static attribute matrix containing p categories' attributes; α is time-invariant. Similarly, $\beta \in \mathbb{R}^{N \times (w \cdot t)}$ represents w different dynamic attributes with cumulative effects, which change over time. To aggregate the cumulative influence of the dynamic attributes, a historical window of length L is selected. Thus, the final augmented matrix produced by the A2Unit at time t is the feature-wise concatenation $E_t = [\alpha, X_t, \beta_{t-L}, \ldots, \beta_t]$, where $E_t \in \mathbb{R}^{N \times (p+1+w(L+1))}$, and the same processing procedure is applied for every time stamp inside the traffic feature matrix X. GCN Layer The distance between the vehicle location and the target charging station obviously influences the decision of the drivers [43]. To enhance the understanding of the EV charging behavior pattern, the spatial dependencies among charging stations were taken into consideration. Some related works have proposed CNN-based neural networks to deal with the spatial prediction issue [44,45]. However, the distribution of EV charging stations connected by the non-Euclidean road network cannot be processed well by a CNN. Thus, here, the GCN was selected to extract the spatial dependencies of the input data, while still retaining the convolutional functionality [46]. The framework of the GCN is shown in Figure 5. In principle, graph convolutional neural networks perform the convolution over the nodes of the graph to capture the spatial information, similar to image processing by convolutional neural networks. The general convolution theorem states that $f * g = \mathcal{F}^{-1}\{\mathcal{F}(f) \cdot \mathcal{F}(g)\}$, where g is the kernel operating on the function f. When performing the convolution in the spectral domain, the graph convolution formula can be expressed as $g_\theta \star x = U g_\theta(\Lambda) U^{\top} x$, where $g_\theta$ is the graph convolutional kernel, U is the eigenvector matrix (of the graph Laplacian), and Λ is the diagonal matrix of its eigenvalues. To simplify the computation, the first-order Chebyshev approximation is usually applied [46]. 
The graph convolution formula is then reorganized as $g_\theta \star x \approx \theta\,\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}x$, where D is the diagonal degree matrix with $D_{ii} = \sum_j A_{ij}$, $\tilde{A} = A + I_n$, and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$. Furthermore, the layer-wise GCN convolutional formula can be rewritten as $H^{(l+1)} = \gamma\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} H^{(l)} W^{(l)}\right)$, where γ is the activation function, $H^{(l)}$ is the output of the l-th layer, and $W^{(l)}$ is the layer's weight matrix. Informer Layer It is significant to obtain the global temporal dependency while forecasting. With the rapid development of Transformer-like neural networks, which employ an encoder-decoder architecture, the time series prediction ability has been improved greatly by the attention mechanism, compared to some canonical deep learning methods [47] such as the GRU and LSTM. Thus, one of the latest variants of the Transformer, Informer [48], was applied here as the temporal extraction layer to understand the global sequence. The structure of Informer is shown in Figure 6. Loss Function During model training, the objective was to eliminate the gap between the ground truth and the predicted value. Thus, the loss function can be written as $L = \|Y_t - \hat{Y}_t\| + \lambda L_{reg}$, where $Y_t$ and $\hat{Y}_t$ represent the recorded value and the predicted value, respectively, and $\lambda L_{reg}$ [49] is the L2 regularization term to avoid overfitting during training. Empirical Analysis To evaluate the AST-GIN model's performance, the necessary experiments were performed on the EV charging station availability dataset. We chose five efficient time series forecasting baseline models for comparison. During the experiment, the performance of the AST-GIN model with a static external factor only, the model with a dynamic external factor only, and the model with both static and dynamic factors were evaluated separately. EV Charging Station Data Dundee EV charging dataset: This dataset is a record of the EV charging behaviors in Dundee, Scotland, available at https://data.dundeecity.gov.uk/dataset/ev-charging-data (accessed on 10 October 2022). There are 57 charging points in Dundee, which can be divided into three types: slow chargers, fast chargers, and rapid chargers. In total, three valid datasets recorded during three different time periods, 01/09/17 to 01/12/17, 02/12/17 to 02/03/18, and 05/03/18 to 05/06/18, are accessible. The geographical locations of all the charging points are also provided; they are shown in Figure 7. In the present study, the dataset we used was recorded during 05/03/18 to 05/06/18. There were 16773 charging sessions recorded in total, and each charging session record contains the charging point ID, charging connector ID, starting and ending charging time, total consumed electric power, geographical location, and type of charging point. There were 40 slow chargers with 5894 charging sessions recorded, 8 fast chargers with 1416 charging sessions recorded, and 9 rapid chargers with 9463 charging sessions recorded. Moreover, Dundee's weather data are available, recorded every hour; each record includes a general description of the weather, the temperature, wind, humidity, and barometric pressure. Static External Factors We classified the surroundings of the charging points into eight types: transportation services, catering services, shopping services, education services, accommodation, medical services, living services, and other. The category of the surroundings with the largest proportion was labeled as the POI value of a charging point, based on its geographical location. 
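Pulling the pieces of the architecture together, the following sketch implements the three core computations described above: the A2Unit concatenation (its ordering is an assumption; only the output dimension N × (p + 1 + w(L+1)) is given in the text), one GCN propagation step, and the regularized squared-error loss. It is illustrative NumPy under those assumptions, not the authors' implementation.

```python
import numpy as np

def a2unit(X_t, alpha, beta, t, L):
    """Attribute augmentation at time t.
    X_t: (N, 1) availability; alpha: (N, p) static attributes;
    beta: (N, w, T) dynamic attributes. Output: (N, p + 1 + w*(L+1))."""
    window = beta[:, :, t - L : t + 1].reshape(beta.shape[0], -1)
    return np.concatenate([alpha, X_t, window], axis=1)

def gcn_layer(H, A, W, gamma=np.tanh):
    """One propagation step: H' = gamma(D~^{-1/2} A~ D~^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return gamma(d_inv_sqrt @ A_tilde @ d_inv_sqrt @ H @ W)

def loss(Y, Y_hat, weights, lam=1.5e-3):
    """Squared prediction error plus an L2 penalty on the weight matrices
    (the value of lambda here is a placeholder)."""
    return np.sum((Y - Y_hat) ** 2) + lam * sum(np.sum(w ** 2) for w in weights)
```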
Dynamic External Factors The weather for Dundee was divided into five types: sunny, cloudy, foggy, light rain, and heavy rain, with labels from 1 to 5. Since the time interval of the weather data from the source dataset was 1 h, the weather within each covered period was regarded as the same: if the weather at 17:50 is recorded as sunny, the weather at 18:20 would also be labeled as sunny. Evaluation Metrics In this study, five commonly used metrics were selected to evaluate the model's forecasting performance: the Root-Mean-Squared Error (RMSE), the Mean Absolute Error (MAE), the Accuracy, the Coefficient of Determination (R²), and the Explained Variance Score (EVS). They are defined as follows: $\mathrm{RMSE} = \sqrt{\tfrac{1}{MN}\sum_{j=1}^{M}\sum_{i=1}^{N}(y_{ij}-\hat{y}_{ij})^2}$; $\mathrm{MAE} = \tfrac{1}{MN}\sum_{j=1}^{M}\sum_{i=1}^{N}|y_{ij}-\hat{y}_{ij}|$; $\mathrm{Accuracy} = 1 - \|Y-\hat{Y}\|_F / \|Y\|_F$; $R^2 = 1 - \sum_{i,j}(y_{ij}-\hat{y}_{ij})^2 / \sum_{i,j}(y_{ij}-\bar{Y})^2$; $\mathrm{EVS} = 1 - \mathrm{Var}(Y-\hat{Y}) / \mathrm{Var}(Y)$, where $y_{ij}$ and $\hat{y}_{ij}$ represent the ground-truth availability and the predicted one for the i-th charging station at time j, M is the number of time instants, N is the number of charging stations, Y and $\hat{Y}$ are the sets of $y_{ij}$ and $\hat{y}_{ij}$, respectively, and $\bar{Y}$ is the average of Y. Baseline Settings As far as we know, there is no directly published model for this specific problem; hence, typical models were selected for comparison. By comparing the proposed model with the typical sequence models of the GRU, LSTM, Transformer, and the Informer involved in our model, we can verify the importance of incorporating the spatial dependencies captured by the GCN; in particular, the comparison with Informer isolates the contribution of the GCN in the architecture. The baselines are described as follows: • GRU: The commonly used time series model, which has been proven effective in traffic prediction problems and can alleviate the problem of gradient explosion and vanishing. • LSTM: Together with the GRU, they are two popular variants of the RNN. LSTM has a more complex structure than the GRU. • Transformer: The classic Transformer model with the self-attention mechanism [37]. • Informer: A new Transformer variant proposed to process the long-sequence prediction issue, without spatial dependencies' extraction. • STTN: A newly proposed framework utilizing two Transformer blocks to capture both spatial and long-range bidirectional temporal dependencies across multiple time steps [50]. Hyperparameters In this study, a three-layer GCN structure was used. For each Informer block, the number of encoder layers was 2, while the number of decoder layers was 3. During the experiment, the data were randomly divided into 50% for training, 33% for evaluation, and the remaining 17% for testing. The network was optimized using the Adam optimizer. The dropout rate was set to 0.05. The learning rate started from $10^{-4}$ and decayed by a factor of 10 every two epochs. The total number of epochs was 50, with an early stopping criterion. The batch size was chosen as 32, and the whole network was trained on an RTX 3060 GPU. Experimental Results We used five state-of-the-art baselines to compare the performance of our proposed AST-GIN model: the GRU, LSTM, Transformer, Informer, and STTN. Based on the 30-minute time interval of the EV charging availability dataset, we deployed the selected models to predict the availability over the next 30 min, 60 min, 90 min, and 120 min horizons. The numerical results are shown in Table 1. For both short-term (30 and 60 min) and long-term (90 and 120 min) EV charging station availability forecasting, the AST-GIN model effectively captured the temporal dependence of the data and outperformed the baseline models, as measured by all the metrics. 
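For concreteness, the five evaluation metrics defined above can be computed as in the sketch below. These are the standard forms of the RMSE, MAE, Accuracy, R², and EVS; the paper's exact aggregation over stations and time steps is assumed to be the usual one.

```python
import numpy as np

def evaluate(Y, Y_hat):
    """Y, Y_hat: (M, N) arrays of true and predicted availability."""
    err = Y - Y_hat
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        # Accuracy = 1 - ||Y - Y_hat||_F / ||Y||_F
        "Accuracy": 1.0 - np.linalg.norm(err) / np.linalg.norm(Y),
        "R2": 1.0 - np.sum(err ** 2) / np.sum((Y - Y.mean()) ** 2),
        "EVS": 1.0 - np.var(err) / np.var(Y),
    }
```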
In the 30 min forecasting, the AST-GIN model with the dynamic external factor achieved an accuracy of 0.8388, while the best baseline model, the STTN, achieved an accuracy of 0.7521; the AST-GIN thus outperformed the STTN in the 30 min horizon with an 11.54% higher accuracy. In the long-term forecasting, 120 min for example, the AST-GIN still had the best performance, with the highest accuracy compared to the other baseline models: the accuracy of the AST-GIN, 0.7517, was 8.97% higher than that of the best baseline model, the STTN. Among the three kinds of external factor inputs used, the static factors only, the dynamic factors only, and the combination of both, the combination led to a better performance in general for the prediction horizons of 30 min and 60 min. The combination of both static and dynamic factors thus led to higher prediction accuracy in the relatively short-term horizons, while the difference among the static factors only, the dynamic factors only, and the combination became negligible in the relatively long-term horizons. For a better comparison, we plot the prediction accuracy of all models over all prediction horizons in Figure 7. Measured by the explicitly interpretable accuracy metric, it can be observed that the overall forecasting capability of the proposed model was better than the baselines over all prediction horizons. For the prediction horizons of 90 min and 120 min, the proposed AST-GIN performed better by incorporating either the POI feature or the weather feature than when incorporating both factors; the external factors' combination should instead be considered for short-term prediction. Such results show that the attribute inputs can be chosen according to whether short-term or long-term prediction is required. Results' Analysis To analyze the prediction results of the proposed model and the benchmarks, we plot the cumulative distribution function (CDF) of the absolute prediction errors in Figure 8, where the absolute error was calculated as $|y_{ij} - \hat{y}_{ij}|$. Note that, as shown in Table 1, the prediction results of Informer were always better than those of Transformer; hence, in this part, we excluded the results of Transformer from Figure 8. From the CDF of the errors, we can see that the error distribution of the proposed AST-GIN model rose earlier and faster than those of the three baselines when the prediction horizons were 90 min and 120 min. Long-term prediction performance is of great significance for EV charging station availability estimation, because drivers can then know sufficiently far in advance which station will be available. In addition, noise introduced during data collection, such as sensor error and system delay, may inevitably undermine the model's forecasting capability. To verify the noise immunity of the AST-GIN model, its robustness was tested via perturbation experiments: normalized random noise obeying a Gaussian distribution was added to the data, and the resulting fluctuation in performance was small. Conclusions In this paper, a deep learning model, the AST-GIN, was proposed and verified for EV charging station availability forecasting considering external factors' influence. The model contains an A2Unit layer, a GCN layer, and an Informer layer to augment the time series traffic features and extract the spatiotemporal dependencies of EV charging station usage conditions. 
The AST-GIN and the baselines were tested on the data collected in Dundee City. The experiments showed that the AST-GIN has better forecasting capability over various horizons and metrics. To summarize, the AST-GIN can effectively and comprehensively consider the external attribute influences and predict the EV charging station usage condition. Work on robustness is ongoing to further improve the prediction system. Funding: This study is supported under the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s), and A*STAR under its Industry Alignment Fund (LOA Award I1901E0046). Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The dataset used in this paper can be found at https://data.dundeecity.gov.uk/dataset/ev-charging-data (accessed on 10 October 2022). Conflicts of Interest: The authors declare no conflict of interest.
6,284.6
2023-02-01T00:00:00.000
[ "Computer Science" ]
Analysis of Ultra-Deep Pyrosequencing and Cloning Based Sequencing of the Basic Core Promoter/Precore/Core Region of Hepatitis B Virus Using Newly Developed Bioinformatics Tools Aims The aims of this study were to develop bioinformatics tools to explore ultra-deep pyrosequencing (UDPS) data, to test these tools, and to use them to determine the optimum error threshold, and to compare results from UDPS and cloning based sequencing (CBS). Methods Four serum samples, infected with either genotype D or E, from HBeAg-positive and HBeAg-negative patients were randomly selected. UDPS and CBS were used to sequence the basic core promoter/precore region of HBV. Two online bioinformatics tools, the "Deep Threshold Tool" and the "Rosetta Tool" (http://hvdr.bioinf.wits.ac.za/tools/), were built to test and analyze the generated data. Results A total of 10952 reads were generated by UDPS on the 454 GS Junior platform. In the four samples, substitutions, detected at the 0.5% threshold or above, were identified at 39 unique positions, 25 of which were non-synonymous mutations. Sample #2 (HBeAg-negative, genotype D) had substitutions at 26 positions, followed by sample #1 (HBeAg-negative, genotype E) at 12 positions, sample #3 (HBeAg-positive, genotype D) at 7 positions and sample #4 (HBeAg-positive, genotype E) at only four positions. The ratio of nucleotide substitutions between isolates from HBeAg-negative and HBeAg-positive patients was 3.5:1. Compared to genotype E isolates, genotype D isolates showed greater variation in the X, basic core promoter/precore and core regions. Only 18 of the 39 positions identified by UDPS were detected by CBS, which detected 14 of the 25 non-synonymous mutations detected by UDPS. Conclusion UDPS data should be approached with caution. Appropriate curation of read data is required prior to analysis, in order to clean the data and eliminate artefacts. CBS detected fewer than 50% of the substitutions detected by UDPS. Furthermore, it is important that the appropriate consensus (reference) sequence is used in order to identify variants correctly. Introduction The continued improvement of DNA sequencing technologies has led to the development of next generation sequencing (NGS) methods, including ultra-deep pyrosequencing (UDPS), which are capable of sequencing many thousands of nucleotides quickly and at a low cost per nucleotide. These technologies have overcome the disadvantages of the traditional dye-terminating DNA sequencing technology developed by Frederick Sanger [1]. These disadvantages include the relatively high cost per nucleotide, in terms of money and time, and the fact that Sanger sequencing is only capable of detecting sequence variants that are present in 20% or more of a quasispecies population [2,3]. Moreover, NGS methods also overcome several of the drawbacks of cloning based sequencing (CBS), such as the time, money and expertise required to prepare samples, especially when a large number of clones is required [4]. NGS methods are used primarily for de novo or "shotgun" sequencing of new or known genomes. This produces a very large number of short reads, which are then assembled to produce a complete sequence. Several algorithms and tools exist to process these short reads [5]. In addition to producing short reads, the pyrosequencing platform can be used for amplicon re-sequencing (UDPS). These longer reads are typically an amplicon covering a genomic region of interest. 
At present, the GS Titanium UDPS chemistry produces reads of approximately 400 bases in length. Few bioinformatic tools, which are affordable and accessible to resource-constrained environments, are currently available to assist with the processing and analysis of amplicon re-sequencing data. The Roche AVA software (http://www.454.com/products/analysis-software/#amplicon-tabbing), although free of charge, can only be installed on a computer running a particular GNU/Linux distribution, and a number of commercial software packages cost several thousand US dollars for a single license. Alignment and visualization tools, which are used routinely for smaller datasets, are not suitable for datasets containing hundreds or thousands of reads. Additionally, many of these software solutions require a level of technical expertise, which many biological researchers may not possess. Pyrosequencing is an error-prone technique [6]. Distinguishing between a true biological variant and an error (artefact) is a vital step in analysing pyrosequencing data. Although a number of studies discuss error correction in pyrosequencing data [6,7], there is currently no consensus regarding the error threshold, which should be applied. Knowledge of well-characterized regions of a genome is important in order to develop tools to examine pyrosequencing data and to distinguish between artefacts and true variations. Hepatitis B virus (HBV) displays remarkable sequence heterogeneity, with 9 genotypes (named A to I) currently recognized [8,9]. The precore/core (PC/C) open reading frame (ORF) of HBV encodes for both the hepatitis B e antigen (HBeAg) and the core protein (HBcAg). This region is preceded by the basic core promoter (BCP) region, which controls transcription of both the PC/C mRNA and the pregenomic RNA (pgRNA) during the replication cycle [10]. The BCP/PC ORF overlaps the X ORF. HBeAg is a soluble, non-particulate protein that is secreted in the serum or expressed on the surface of the hepatocyte [11,12]. Conventionally, HBeAg expression is an indicator of active HBV infection and on-going viral replication [12]. However, HBeAg expression may be reduced or completely suppressed by various viral mutations, even in the presence of viral replication. Mutations in two regions may affect HBeAg expression: precore mutations (for example, G1896A) [13] and BCP mutations (for example, A1762T/G1764A) [14]. The viral capsid is composed of HBcAg [15]. Mutations may occur more frequently in the N-terminal or central region of the core protein, which does not overlap other reading frames [16]. Using a segment of this well-characterized BCP/PC/C region of HBV as a model, the objectives of this study were to: (i) develop bioinformatics tools to explore UDPS data; (ii) test and use them to determine the optimum error threshold; and (iii) compare results between UDPS and CBS using HBeAg-positive and -negative sera infected with either genotype D or E. Sample Selection Written informed consent was obtained from all participants and the consent procedure was approved by the Sudanese Ministry of Health, which gave permission for the sera to be used for research purposes. The Human Ethics Committees of the University of the Witwatersrand and the University of Khartoum approved the study. Four serum samples were selected from our previous study on HBV from monoinfected individuals, where the HBV genotype was determined using phylogenetic analysis [17]. 
Sample #1 was HBeAg-negative and infected with genotype E of HBV (GenBank, KF170783), sample #2 was HBeAg-negative, genotype D (KF170739), sample #3 was HBeAg-positive, genotype D (KF170740) and sample #4 was HBeAg-positive, genotype E (KF170788). Wet Laboratory Work Ultra-Deep Pyrosequencing (UDPS). A region of the HBV genome (1653-1959 from the EcoR1 restriction site) was amplified using a slight modification of a previously described method [18]. Primers 1606 (+) and 1974 (−) were used for the first round PCR, and 1653 (+) and 1959 (−) for the second round PCR. The first round PCR was followed by gel-purification using the Zymoclean Gel DNA Recovery Kit (Zymo Research Corp, Irvine, CA, USA). For the second round PCR, modified primers, ligated to adaptors and tags, were used (Table 1). Following the second round PCR, the amplicons were gel-purified and subjected to UDPS in the forward direction on the Roche 454 GS Junior platform (454 Life Sciences, a Roche company, Switzerland), which provided reads covering the region of interest (coordinates 1653-1959). The UDPS sequencing data have been submitted to the GenBank SRA database as BioProject accession PRJNA239442, with the following BioSample accessions: SAMN02664575, SAMN02664576, SAMN02664577, SAMN02664578. Dry Laboratory Work Data pre-processing. UDPS data for three sequencing runs, for each of the four samples, were processed and analyzed as shown in the flow diagram (Figure 1). The data from each run, for each sample, were processed individually. Separate binary standard flowgram format (SFF) files were opened in the R statistical programming language [19], using the "raw" clip-mode parameter (which does not perform any clipping or trimming) of the "rSFFreader" library [20]. Sequence data were searched for the forward and reverse primer sequences and the adaptor sequence for verification. Sequence lengths in each file were plotted and examined statistically (data not shown). The distribution of all sequence lengths was examined and a length range was selected, which excluded reads with very low counts. Several Linux command-line BASH scripts and Python programming language scripts (available on request) were written to include only reads within a specified length range (330 to 360 nucleotides) for further processing. A genotype D reference sequence (GU456684) was then added to each dataset, and the file was aligned with the Muscle program [21]. Each alignment was then processed by a Python script, which scanned the reference sequence in the alignment and removed any reads with an insertion (a residue aligned with a gap in the reference sequence). In the remaining alignment (excluding reads with insertions), positions (columns) containing only gaps were collapsed, and this alignment was "Dataset 1". The repeated runs of all "Dataset 1" sequences for each sample were then combined into one dataset, the final "Dataset 1". The file containing reads with insertions was "Dataset 2" for each run; these were processed individually because of the variable read lengths resulting from insertions at different positions in the reads. Development of the deep threshold tool. For pyrosequencing data of human immunodeficiency virus (HIV), a probability of error ranging from 0.5% to 1% has been used [6]. 
In the present study, using HBV data, a web-based tool (the "Deep Threshold Tool") (http://hvdr.bioinf.wits.ac.za/tools/) was developed to examine the number of errors at each position (column) in an alignment, depending on the probability of error value. In order to examine the number of errors, the tool requires an input alignment in FASTA format, the lower and upper bounds of the probability of error, and an increment value (Figure 2A). A nucleotide mapping offset can be specified, so that the resulting output coordinates reflect the correct position of the sequence in the entire genome. Potentially untidy ends of reads (such as the reverse primer region) can be excluded from the analysis by specifying a length shorter than the sequence length. Statistical calculation of the threshold. A nucleotide was considered an "error" if its frequency in a column of the alignment was less than the threshold, which was determined as follows. An expected frequency of E = probability of error × read depth (R) was used. A Pearson's χ² test statistic was calculated as $M = (O - E)^2 / E$, with O being the observed value, starting at 1. If M was less than the critical value of the χ² distribution (with α = 0.05 and one degree of freedom), then O was incremented by a value of one and the test was repeated. The value of O at which the χ² critical value was exceeded was considered the threshold value (count). This threshold was calculated for each position in the alignment. Any nucleotide with a frequency below this threshold was considered an error or artefact. Figure 2. The input pages of the bioinformatics tools. (A) "Deep Threshold Tool". The first field specifies the input FASTA file. Fields are available for the user to specify the nucleotide offset mapping of the first position in the input file, the number of nucleotides (length) to process, the starting and ending probabilities of error to examine, and the probability of error increment (step) to use. (B) "Rosetta Tool". The first field specifies the input FASTA file. Fields are available for the user to specify the nucleotide offset mapping of the first position in the input file, the position of the first in-frame nucleotide of the coding region of interest, the last in-frame nucleotide of the coding region of interest, the amino acid offset of the first amino acid in the coding region of interest, and the probability of error to use. doi:10.1371/journal.pone.0095377.g002 Development of the rosetta tool. Amino acid data were examined using the newly developed "Rosetta Tool". This tool requires the same input file as the "Deep Threshold Tool". It also requires a nucleotide offset mapping and the start and end positions of a protein region. This does not have to include the position of the start or stop codon; any region of a protein can be processed, as long as the number of nucleotides specified by the range is a multiple of three. The probability of error at which the data must be analyzed is also required (Figure 2B). Results A total of 10952 reads were generated on the 454 GS Junior platform for the three runs for all four samples. Of these, 9738 reads (88.9%) were included in the study (2002, 3049, 1955 and 2732 reads for samples 1, 2, 3 and 4, respectively) and 1214 reads (11.1%), which were considered either too short or too long, were excluded. These 9738 reads were split into Dataset 1 (8967 reads, 92.1%) and Dataset 2 (771 reads, 7.9%) (Figure 1). 
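The per-column threshold calculation described above can be sketched as follows. The published description leaves the lower tail ambiguous (at typical depths, an observed count of 1 already exceeds the χ² critical value from below), so this sketch takes the upper-tail reading as an assumption: the threshold is the smallest count above the expected error count E at which the statistic exceeds the critical value.

```python
from scipy.stats import chi2

def column_threshold(read_depth, p_error=0.005, alpha=0.05):
    """Smallest observed count O > E whose Pearson statistic (O - E)^2 / E
    exceeds the chi-squared critical value with one degree of freedom."""
    E = p_error * read_depth          # expected number of errors in the column
    crit = chi2.ppf(1 - alpha, df=1)  # ~3.84 for alpha = 0.05
    O = 1
    while O <= E or (O - E) ** 2 / E <= crit:
        O += 1
    return O

# At a depth of 2000 reads and a 0.5% probability of error (E = 10),
# counts of 17 or more would be treated as legitimate variation.
print(column_threshold(2000))
```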
Ninety-two clones were generated for the four samples: 23 clones for sample #1, 22 for sample #2, 20 for sample #3 and 27 for sample #4. Deep Threshold Tool Output The output page generated by the Deep Threshold Tool includes a table for each increment of the probability of error (Figure 3), which shows the distribution of nucleotides at all columns at which at least one base can be considered an "error". Because a nucleotide was considered an "error" if its frequency in a column of the alignment was less than the threshold, any variation above the threshold was considered a legitimate variant for that probability of error (Figure 4). Figure 5 summarizes the results graphically. This summary table was consulted and the lowest probability of error at which established, well-characterized variants are still detected was selected; in the present study, this was 0.5% (see below). Rosetta Tool Output Alignments generated from direct sequencing, UDPS or CBS can also be submitted to the Rosetta Tool. This would typically be done in order to make use of the nucleotide/amino acid alignment viewer component of the tool. The tool produces a number of output tables (Figures 6-8). Figure 6 is an alignment showing each codon followed by the amino acid. Amino acids are colour-coded according to six different categories: Aliphatic (Glycine, Alanine, Valine, Leucine and Isoleucine), Hydroxyl (Serine, Cysteine, Threonine and Methionine), Cyclic (Proline), Aromatic (Phenylalanine, Tyrosine and Tryptophan), Basic (Histidine, Lysine and Arginine) and Acidic (Aspartate, Glutamate, Asparagine and Glutamine). The display of nucleotides or amino acids can be toggled on or off for ease of reference. Figure 7 shows the distribution of each residue at each position at which at least one residue is considered an error. Such error residue counts are highlighted with a black background for reference. Figure 8 contains separate tables for each codon at which at least one residue is an "error", and shows the distribution of codons and amino acids at this position. Synonymous and non-synonymous mutations can be differentiated. Rows containing substitutions occurring below the threshold ("error" nucleotides) are highlighted with a black background. In order to analyze the data downstream, the Rosetta Tool produces a "masked" data file, which is generated by replacing all "error" residues in the nucleotide alignment with an "X" character. This alignment is then translated into amino acids, with an amino acid of "X" used whenever at least one "X" character occurs in a codon. Both the nucleotide and amino acid masked files can be downloaded in FASTA format. Using the selected probability of error of 0.5%, masked files were generated and the UDPS data were then analyzed using the two newly developed tools and the Mutation Reporter Tool [22]. Analysis of Pyrosequencing Reads Each sample in Dataset 1 was then analysed using the newly developed "Deep Threshold Tool", and a probability of error of 0.5% was selected, because this was the lowest probability of error at which all three well-characterized mutations (T1753G/C, T1773C and G1896A) were present. The resulting threshold (count) value will differ depending on the number of reads (depth) in each file, for a given probability of error. 
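The masking step performed by the Rosetta Tool can be illustrated with a short sketch: every residue whose column count falls below that column's threshold is replaced with "X" before translation. The function is illustrative only (the actual tool is the web service cited above), and the per-column thresholds could come from a calculation such as column_threshold in the earlier sketch.

```python
from collections import Counter

def mask_errors(reads, thresholds):
    """reads: equal-length aligned read strings; thresholds: per-column
    minimum counts. Returns the reads with sub-threshold residues
    replaced by 'X'."""
    n_cols = len(reads[0])
    # Count each residue per alignment column
    columns = [Counter(read[i] for read in reads) for i in range(n_cols)]
    return ["".join(base if columns[i][base] >= thresholds[i] else "X"
                    for i, base in enumerate(read))
            for read in reads]
```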
For each sample, the output of the "Deep Threshold Tool" lists the loci detected above the threshold value, and these were then analyzed using the Mutation Reporter Tool, with the reference motif being the corresponding consensus sequence for each genotype or subgenotype. The distribution of substitutions at the nucleotide level in the BCP/PC/C region varied between samples, depending on the HBV genotype and HBeAg status (Figure 9). At the 0.5% probability of error or above, substitutions were identified at 39 unique positions in the four samples: 31 in the X region (1674 to 1838 from the EcoR1 site; 165 nucleotides), three in the PC region (1814 to 1900; 87 nucleotides) and five in the core region (1901 to 1939; 39 nucleotides) (Figure 9). Ten of the 39 positions were present in at least two samples. Based on the fact that direct sequencing is capable of detecting substitutions occurring in ≥20% of the quasispecies population, substitutions were classified as high frequency (≥20%) or low frequency (<20%). High frequency substitutions were found at 11 positions and low frequency substitutions at 28 positions. A consensus of genotype E was used to identify substitutions in genotype E (samples #1 and #4). The T1741C substitution was detected in both samples at a high frequency, regardless of the HBeAg status, while the following substitutions: A1757G, A1762T, G1764A, G1896A, G1937A/T and A1938C, were found at a high frequency in HBV from an HBeAg-negative patient (sample #1) (Figure 9). Substitutions A1735G, G1742A, A1747C, T1753C and T1909C were found at a low frequency in sample #1, and T1707C was found at a low frequency in sample #4 (Figure 9). Similarly, when the genotype D sequences (samples #2 and #3) were compared to their corresponding consensus sequence, 1678T was found in sample #2 and 1678C in sample #3. The consensus of genotype D has 1678T. From the phylogenetic analysis carried out in our previous study, HBV from sample #2 belongs to subgenotype D1 and that from sample #3 to subgenotype D6 [17]. The consensus of subgenotype D1 has T at 1678 and that of subgenotype D6 has C. Therefore, when sample #3 was compared to the consensus of subgenotype D6, only low frequency substitutions were detected (T1696C, G1733A, G1745A, G1748, G1751A, G1756A and T1765C) (Figure 9). When the reference sequence was changed from the genotype D consensus to the D1 consensus, the mutation pattern of sample #2 (subgenotype D1) changed (Figure 9). Using either reference sequence, D or D1, the following substitutions were detected at high frequency: A1727G, C1730A, A1761C, G1764A, A1775G and G1896A, whereas the frequencies of 1773T and 1912T decreased when using D1 instead of D as the reference sequence (Figure 9). The following substitutions relative to D1 occurred in sample #2 at low frequency: T1678C, A1680C, C1706T, T1724C, A1725C, G1728A, G1736A, G1739C/T, T1741C, G1745A, G1748A, G1751, T1753G, A1772T, T1773C, T1842C, T1909C, T1912C and C1913G. Summarizing the above, substitutions were identified at 39 unique positions in the four samples. Sample #2 (HBeAg-negative, genotype D) had substitutions at 26 positions, followed by sample #1 (HBeAg-negative, genotype E) at 12 positions, sample #3 (HBeAg-positive, genotype D) at 7 positions and sample #4 (HBeAg-positive, genotype E) at only four positions. The ratio of nucleotide substitutions between isolates from HBeAg-negative and HBeAg-positive patients was 3.5:1. 
Moreover, genotype D isolates showed greater variation in the X, PC and core regions compared to genotype E isolates, with the two genotype D samples having 33 substitutions compared to the 16 detected in the genotype E samples. The ''Rosetta Tool'', which was developed as part of this study, was used to analyze sequence data at the amino acid level. Substitutions identified at the nucleotide level were translated into amino acids and classified as synonymous or non-synonymous. Fourteen substitutions, 12 in the X region and two in the C region, were synonymous. Twenty-five substitutions, 19 in the X region and three each in the PC and C regions, were non-synonymous. All non-synonymous mutations occurred within single, non-overlapping reading frames (1653 to 1814, and 1839 to 1939 from the EcoRI restriction site), and the region between the start of the PC and the end of the X (1814 to 1838) was completely conserved in all ultra-deep pyrosequences. Most of the 21 insertions found in Dataset 2 occurred within homopolymeric regions and were therefore considered to be PCR or pyrosequencing artefacts [23]. Analysis of CBS and Comparison to UDPS At least 20 clones were generated per sample. The BCP/PC region sequenced is relatively short and does not differentiate genotypes D and E following phylogenetic analysis. Both identical and multiple clones were generated, with HBV from HBeAg-negative sera showing greater divergence (Figure 10). CBS data were analyzed at the 39 loci previously recognized by UDPS, using the Mutation Reporter Tool and a consensus sequence for each genotype/subgenotype as the reference sequence. In the four samples, substitutions at 18 of the 39 positions (46.2%) were detected by CBS (Table 2) (Figure 11). CBS detected all high-frequency substitutions but only 25% (7/28) of the low-frequency substitutions (Table 2). Moreover, a number of nucleotide substitutions were detected in different samples by either UDPS or CBS. Discussion The aims of this study were to build bioinformatics tools to assist in determining the threshold at which pyrosequencing data should be analyzed, and to compare quasispecies distributions obtained using UDPS and CBS in HBeAg-positive and -negative sera infected with either genotype D or E. Direct (Sanger) sequencing produces a single ''read'' for each sample. After curating the sequence and resolving ambiguous bases, the sequence is ready for further downstream processing. Whilst UDPS, which generates several thousand reads per sample, is a powerful technology, the analysis of the read data before downstream processing is critical. The depth of coverage provided by UDPS is also one of its shortcomings, as the data need to be carefully curated for errors (artefacts), which may have been introduced by the PCR amplification and/or the sequencing process [2,23]. The sensitivity that allows the platform to detect minor variants across thousands of reads also means that such artefacts may themselves be reported as variants. A probability of error of between 0.5% and 1% for UDPS has been used previously for HIV samples [6]. Subsequent studies on HBV sequence data have either used the same probability of error, or have not reported details of this component of the analysis [2,3,24]. The probability of error that is used will influence the downstream detection of variants. As such, selecting an appropriate probability of error is an essential step in the analysis.
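As an illustration of the amino acid-level step described above, the sketch below (ours; it assumes Biopython for codon translation and uses the Rosetta Tool's ''X''-masking convention described earlier) classifies a substitution as synonymous, non-synonymous or masked:

# Sketch: classify a substitution at the codon level, honouring the
# masking convention above (any codon containing an "X" translates to
# amino acid "X"). Requires Biopython; function names are ours, not the tool's.
from Bio.Seq import Seq

def translate_codon(codon):
    return 'X' if 'X' in codon else str(Seq(codon).translate())

def classify(ref_codon, alt_codon):
    ref_aa, alt_aa = translate_codon(ref_codon), translate_codon(alt_codon)
    if 'X' in (ref_aa, alt_aa):
        return 'masked'
    return 'synonymous' if ref_aa == alt_aa else 'non-synonymous'

print(classify('TGG', 'TAG'))   # non-synonymous (Trp -> stop, as for G1896A)
print(classify('CTG', 'TTG'))   # synonymous (both Leu)
print(classify('CXG', 'CAG'))   # masked

The first example mirrors the G1896A case discussed below, in which a tryptophan codon becomes a stop codon at precore codon 28.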
In response to the lack of consensus in selecting a probability of error and determining a threshold, we developed an online bioinformatics tool to explore this aspect of the analysis. The ''Deep Threshold Tool'' provides the researcher with detailed output of variation at different probabilities of error. The analysis is objective and repeatable, and the selected probability of error can be reported and defended. Data for a project can be processed by the tool, so that a probability of error can be selected for that specific project, organism or assay. Using a fixed, predetermined probability of error for the UDPS platform as a whole is overly broad and too general, as it is not possible to indicate how a particular probability of error would be applicable to a different organism, genomic region or investigation. Using the ''Deep Threshold Tool'' developed in the present study, a probability of error of 0.5% was selected for the BCP/PC/C region of HBV, which agrees with previous reports for HIV [6]. The output must be interpreted in light of existing biological knowledge of the variation known to occur in the sequenced region. The tool is objective and outputs results for different probabilities of error ''blindly''. There is no ''right answer'' or absolute correct threshold, as we cannot possess complete knowledge of all the stochastic processes, from the sample to the PCR to the sequencing platform to the sequence results. Variation may be introduced at the various PCR stages, rather than by the sequencing hardware itself [23]. What we can do, however, is to interrogate these data at different probabilities of error, and make an informed decision on which value to select. It is important that the method used to process and curate the UDPS data, as well as any numerical values used (such as probability of error or threshold), be reported in all UDPS studies. Failure to provide this level of detail makes it difficult to accurately assess and relate any results reported. The emergence of the G1896A mutation in the PC region is known to be associated with HBeAg seroconversion [13]. The presence of wild-type (G) at 1896 in sample #1 and sample #2, which were isolated from HBeAg-negative patients, confirms the ability of UDPS to detect minor populations, which may not be detected by Sanger sequencing [25,26]. Similar results have been reported in more recent HBV studies. The HBV population from HBeAg-positive sera showed a high percentage of stop codon mutations in the precore region, while isolates from HBeAg-negative carriers had a low percentage of wild-type residues at codon 28 [24]. Although the selection of genotype D samples was random, we later discovered that sample #3 belonged to subgenotype D6, while sample #2 belonged to subgenotype D1. As illustrated in Figures 9 and 10, knowledge of the genotype and subgenotype of HBV is important when determining the presence of mutations. Depending on the reference or consensus sequence used, the variant at a particular position may either represent the signature of a particular subgenotype or be a legitimate mutation.
Figure 9. Graphs showing the mutation distribution of the UDPS data at the nucleotide level using either the genotype E or D consensus sequence as the reference. A star indicates a non-synonymous mutation. The graphs were built using the Mutation Reporter Tool [22]. doi:10.1371/journal.pone.0095377.g009
Therefore, where possible, a consensus sequence of the genotype or subgenotype should be used, to ensure that variants are examined in the appropriate context. Six mutations (A1757G, A1762T, G1764A, G1896A, G1937A/T and A1938C) were found at high frequency (>20%) in sample #1, genotype E isolated from an HBeAg-negative patient. The G1896A mutation is known to create the stop codon at amino acid 28 and to abrogate HBeAg expression [13], while the double mutation A1762T/G1764A is known to down-regulate the transcription of the precore mRNA that is translated into HBeAg [14]. Although A1757G is a synonymous mutation and thus has no effect on the protein sequence, it overlaps cis-regulatory elements within the basic core promoter. In the present study, 1757G was found to be associated with A1762T/G1764A. This association has also been shown by others, who found that chronic hepatitis patients infected with HBV carrying 1757G/1762T/1764A had higher HBV DNA levels compared to patients infected with HBV carrying the wild-type 1757A (1757A/1762T/1764A) [27]. Moreover, A1757G has been found in HCC patients infected by genotype C [28]. Non-synonymous mutations G1937A/T and A1938C within the core region occurred at a high frequency (Figure 9). These mutations are located within a T-cell epitope, which is an important component of the host's immune response to HBV infection [29]. These two mutations have recently been reported in strains of HBV genotype B isolated from Taiwanese patients [30]. Other substitutions (T1707C, A1735G, A1747C and T1909C) were found at low frequencies (<20%) and have not been reported in previous studies. In sample #2 (genotype D isolated from an HBeAg-negative patient), mutations A1727G, C1730A, A1761C, G1764A, A1775G and G1896A were detected at high frequency. A1727G and C1730A are located in the Enhancer II region, have been detected in cirrhotic patients [28] and are associated with reduced HBcAg expression and HBV DNA levels in the liver [31]. A1761C has previously been detected within a mutational motif (1761-1766) in isolates from patients with cirrhosis and chronic hepatitis [32]. A1775G is associated with loss of HBeAg in Taiwanese children [33]. T1678C, G1753A and T1773C, which were found in the minority of the quasispecies population, have previously been associated with severity of HBV infection and progression to HCC [28,34].
Figure 10. A rooted phylogenetic tree of 92 cloned BCP/PC sequences (positions 1653 to 1939 from the EcoRI site) from four serum samples. Sample #1 was HBeAg-negative and infected with genotype E of HBV, sample #2 was HBeAg-negative, genotype D, sample #3 was HBeAg-positive, genotype D and sample #4 was HBeAg-positive, genotype E. Bootstrap statistical analysis was performed using 1000 datasets, indicated as percentages on the nodes. The letters D and E represent the genotypes. doi:10.1371/journal.pone.0095377.g010
Figure 11. Graphs showing the mutation distribution of the CBS data at the nucleotide level using either the genotype E or D consensus sequence as the reference. A star indicates a non-synonymous mutation. The graphs were built using the Mutation Reporter Tool [22]. doi:10.1371/journal.pone.0095377.g011
The following substitutions were found as minor populations and have not previously been documented.
In HBV from HBeAg-negative samples: A1735G, G1742A, A1747C and T1909C in genotype E, and A1680C, C1706T, T1724C, A1725C, G1728A, G1736A, G1739C/T, G1751A, A1772T, T1842C, T1909C, T1912C and C1913G in genotype D (Figure 9); and in HBeAg-positive samples: T1696C, G1733A and G1751A in genotype D, and T1707C in genotype E. Mutations G1745A and G1748A were found in both the HBeAg-negative and HBeAg-positive genotype D samples. It is possible that these have not previously been detected because direct (Sanger) sequencing can only detect variation that occurs in 20% or more of the population. More extensive studies may reveal the relevance of these minor variants. The genotype E isolates were found to harbour fewer mutations in the X, PC and core regions compared to genotype D, which is in agreement with previous studies showing low genetic diversity of genotype E [35,36]. Furthermore, a greater number of mutations were found in HBeAg-negative samples of both genotype D and E compared to HBeAg-positive samples. It has been reported that the frequency of HBV mutations is higher in HBeAg-negative patients, as a result of the immune response of the host against the virus before the loss of HBeAg [37]. However, because only four samples, belonging to the two genotypes from HBeAg-positive and HBeAg-negative sera, were analyzed, additional samples would be required before any firm conclusions can be reached about the differences in nucleotide divergence between these genotypes from HBeAg-positive and -negative sera. In this study, in which 9738 sequence reads were generated by UDPS, 39 unique positions were detected by UDPS, while only 18 (46.2%) of these positions were detected by CBS. High-frequency substitutions were found at 11 positions and were all detected by CBS, whereas only 7/28 (25%) of the low-frequency substitutions were detected by CBS (p<0.05) (Figures 9 and 11). Although the testing of the tools was done on a small sample set and the findings cannot be generalized, it is evident that the data generated by the increased read-depth provided by UDPS should be approached with caution. Appropriate curation and examination of the reads are required to ensure that artefacts are not interpreted as variants. Moreover, identification of variants must be performed against a suitable reference or consensus sequence, as a ''mutation'' of interest may simply be a known signature or variant when examined in the correct genotypic or subgenotypic context. UDPS detected a greater number of substitutions than CBS. Relative to CBS, UDPS is also cheaper to undertake, in terms of both time and expense. However, without rigorous and careful examination and interpretation of read data, the results generated by UDPS may be misleading. As illustrated in the present study, a thorough knowledge of the genome of interest and its known variants is essential in order to accurately and reliably interpret the high-resolution read data generated by UDPS.
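A back-of-envelope calculation (ours, not part of the study) makes the sampling limitation of CBS explicit: if clones are independent draws from the quasispecies, a variant at population frequency f appears in at least one of n clones with probability 1 − (1 − f)^n.

# Probability that a variant at frequency f is seen in at least one of n
# sequenced clones, assuming independent draws from the quasispecies.
def p_detect(f, n):
    return 1 - (1 - f) ** n

for f in (0.01, 0.05, 0.20):
    # 20-27 clones per sample were sequenced in this study
    print(f"{f:.0%} variant, 20 clones: P(seen) = {p_detect(f, 20):.2f}")
# 1% variant, 20 clones:  P(seen) = 0.18
# 5% variant, 20 clones:  P(seen) = 0.64
# 20% variant, 20 clones: P(seen) = 0.99

Under this simple model, a 5% variant has roughly a one-in-three chance of being entirely absent from a 20-clone set, which is consistent with CBS missing most of the low-frequency substitutions found by UDPS.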
TIRF Microscopy with Ultra-short Penetration Depth Total internal reflection fluorescence microscopy (TIRF), in both commercial and custom-built configurations, is widely used for high signal-to-noise ratio imaging. The imaging depth of traditional TIRF is sensitive to the incident angle of the laser, and is normally limited to around 100 nm. In our paper, using a high refractive index material and the evanescent waves of various waveguide modes, we propose a compact and tunable ultra-short decay length TIRF system, which can reach decay lengths as short as 19 nm, and demonstrate its application for imaging fluorescent dye-labeled F-actin in HeLa cells. Introduction Since its invention centuries ago, optical microscopy has become an irreplaceable tool in a wide variety of fields. Over the years, many advances have been developed to expand its limitations, particularly for biological applications and research. Some of these developments, such as dark field microscopy and phase contrast microscopy [1,2], seek to increase the image contrast of an object. Others, like confocal microscopy [3,4], STORM [5,6], STED [7,8], SIM [9,10], and PSIM [11], aim to increase the effective resolution of the image. Of particular note are the techniques targeting fluorescence-labeled specimens [12], which have become incredibly useful ways to image biological samples [13][14][15][16][17]. Standard epi-fluorescence microscopy, where the entire sample is flooded with the excitation light, is commonly used for the imaging of tagged biological samples. Another fluorescence microscopy technique is Total Internal Reflection Fluorescence (TIRF) microscopy [18,19], which uses the evanescent field produced by total internal reflection to achieve extremely thin optical sectioning as well as high signal-to-noise ratios.
Normally, total internal reflection occurs when light from a region with a high index of refraction is incident on a region of lower index and its angle of incidence is higher than the critical angle θ_c = sin⁻¹(n₁/n₂), where n₁ and n₂ are respectively the lower and higher indices of refraction at the boundary. Although no light is radiated into the lower-index medium, there exists an exponentially decaying evanescent wave on the far side of the interface. The power of this evanescent wave, which decays as P = P₀ e^(−2kz), where z is the distance away from the surface and k = (ω/c)√((n₂ sin θᵢ)² − n₁²), with θᵢ the angle of incidence, can excite any fluorophores that are exposed to this field. For example, a commercially available Carl Zeiss TIRF objective (Plan-Fluar 100×/1.45 Oil) can reach a 75 nm penetration depth (532 nm laser at the maximum angle of 72 degrees) at the glass-water interface. As a result, TIRF microscopy can achieve very high contrast images near the interface [20][21][22][23]. This capability makes TIRF a very useful imaging method, and there have recently been many proposed modifications for expanding its capabilities [24][25][26]. Here, we propose and experimentally demonstrate a thin-film waveguide based compact TIRF method for achieving an ultra-short tunable decay length ranging from 19 nm to 39 nm. Working principles Fig. 1. Schematic of the proposed TIRF system. The TiO2 layer acts as a waveguide capable of supporting a variety of propagating modes, each of which has a different wavelength. The gratings are all designed to couple to different modes when illuminated with a normally incident laser. Our TIRF setup consists of a TiO2 layer on a 22 by 22 mm glass coverslip, with several gratings fabricated 5 mm away from the central sample region (Fig. 1). The TiO2 layer is able to support several waveguide modes, each with a unique decay length at the imaging surface. Although the evanescent fields from total internal reflection do not radiate into free space, they will excite the fluorescent particles near the surface. The gratings are used to selectively couple an incident laser into a specific waveguide mode. The waveguide modes from the gratings propagate to the sample, which fluoresces when exposed to the evanescent field. As a result, this configuration allows for the tuning of the decay length through the selection of the illuminated grating. Additionally, TiO2 has a higher index of refraction (n = 2.5) compared to glass (n = 1.47), which results in a shortened decay length when compared to traditional TIRF experiments. In order to design the parameters of the waveguide, we used a commercially available finite element solver (COMSOL Multiphysics 4.3a) to identify the propagating modes for a given thickness of the waveguide layer, which should follow the relationship d = Nλ/(2n₂), where λ is the wavelength of the incident laser (532 nm for this experiment), N is an integer and d is the thickness of the TiO2 layer (Figs. 2(a) and 2(b)). The parameter of interest is α = k_x/k₀, which sets the penetration depth of the electric field through the formula z = 1/(2k₀√(α² − 1)), where k₀ = 2π/λ. After selecting a TiO2 thickness of 1 μm, we then identified three modes with different values of k_x/k₀ (highlighted in Fig. 2(a)), and designed our three gratings with different periods so that their first diffraction orders were matched with these respective modes.
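The quoted depths can be checked numerically from the two formulas above. In this sketch (ours), the water index of 1.33 and the immersion oil/glass index of 1.518 for the commercial objective are assumptions, since the text only quotes the resulting 75 nm depth:

# Numerical check of the depths quoted above. Penetration depth is defined
# here as the distance over which the evanescent POWER falls by 1/e,
# i.e. depth = 1/(2k) with P = P0 * exp(-2*k*z).
import math

def penetration_depth_classical(wavelength_nm, n1, n2, theta_deg):
    k0 = 2 * math.pi / wavelength_nm
    k = k0 * math.sqrt((n2 * math.sin(math.radians(theta_deg))) ** 2 - n1 ** 2)
    return 1.0 / (2.0 * k)

def decay_length_waveguide(wavelength_nm, alpha):
    # z = 1/(2*k0*sqrt(alpha^2 - 1)) from the text, with alpha = kx/k0.
    k0 = 2 * math.pi / wavelength_nm
    return 1.0 / (2.0 * k0 * math.sqrt(alpha ** 2 - 1))

# Classical TIRF example (water n1 = 1.33; oil/glass n2 = 1.518 assumed):
print(f"{penetration_depth_classical(532, 1.33, 1.518, 72):.0f} nm")        # ~75 nm
# The three selected waveguide modes:
for alpha in (2.49, 1.97, 1.48):
    print(f"alpha = {alpha}: {decay_length_waveguide(532, alpha):.1f} nm")  # ~19, 25, 39 nm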
The coupling efficiency of the first-order diffraction is dependent on the grating duty cycle, which was kept the same for all three gratings. As a result, we now have a method to selectively excite waveguide modes with decay lengths of 19 nm, 25 nm, and 39 nm. Sample fabrication and characterization The sample was fabricated by conventional clean-room processes, including electron-beam (e-beam) lithography and thin-film deposition. A glass wafer was first cleaned with Piranha solution and deionized (DI) water, then spin-coated with a 200 nm thick PMMA layer (495 A2). We then used e-beam lithography to generate the three grating patterns with periods of 214 nm, 270 nm, and 359 nm at the same duty cycle, which match the supported modes at k_x/k₀ = 2.49, 1.97 and 1.48 respectively. Afterwards, the patterns were transferred to the glass by reactive-ion etching (RIE, Plus 80, Oxford Instruments) to a depth of about 50 nm. After that, we used e-beam deposition to fill the concave gratings with 50 nm of Au and lift off the photoresist (Fig. 3(a)). The final 1 μm thick TiO2 waveguide layer was deposited by magnetron sputtering (AJA, RF). To characterize the decay lengths of the TIRF setup, we used fluorescent beads deposited on the imaging area. The 40 nm fluorescent beads, which had excitation and emission wavelengths of 540 nm and 560 nm (F-8792, Invitrogen), were diluted to a concentration of about 3.4e3 particles/mL in DI water. We then dropped 2 μL of the nano-bead solution directly on the sample and let it dry naturally. Afterwards, the sample with fluorescent beads was rinsed with DI water, which washes away impurities and suspended fluorophores and leaves only stably attached beads, and the sample was dried again under N2. After the preparation described above, we placed the sample face up on the stage of an upright microscope with a Carl Zeiss objective (50×, N.A. 0.55) and a Newton CCD detector (iXon3 897, Andor Corp., 16×16 μm pixel size). Due to the total internal reflection, no excitation light will be transmitted through the collection objective, and therefore a fluorescence filter is unnecessary. The excitation laser (532 nm CW, BOSL-532-3, Brighten Optics) is positioned on the bottom side of the sample and illuminates the different periodic coupling gratings in succession at a normally incident angle via a series of movable mirrors and stages. The beam has a power of 50 mW at the grating interface, and with a coupling efficiency of 17.6% (calculated using the Fourier transform), we estimate that 8.8 mW is coupled into the waveguide. At the same time, we focus on one single bead and capture its image on the CCD detector (Fig. 3(b) inset). The cross-sectional intensities of a single nano-bead are shown in Fig. 3(b). The dots are the normalized intensities taken from the image cross-section at different decay lengths. All three coupling gratings were fabricated with the same duty cycle in order to keep the coupling efficiency constant. The response of the fluorescence will be linearly dependent on the local optical power, and therefore changes in the decay length of the waveguide can be estimated from the cross-sectional intensity of the diffraction-limited single-particle fluorescence image.
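As a consistency check (ours) on the fabricated gratings: at normal incidence a grating of period Λ supplies in-plane momentum 2π/Λ in first order, so coupling to a mode with k_x = αk₀ requires Λ = λ/α.

# Grating periods implied by first-order coupling at normal incidence:
# period = wavelength / alpha, with alpha = kx/k0.
WAVELENGTH_NM = 532.0
for alpha in (2.49, 1.97, 1.48):
    print(f"alpha = {alpha}: period = {WAVELENGTH_NM / alpha:.0f} nm")
# -> 214 nm, 270 nm, 359 nm, matching the fabricated gratings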
In order to generate the predicted intensity of the fluorescent beads (Fig. 3(b)), we use the following estimation method. The electric field intensity above the interface of total internal reflection can be expressed as I(z) = I₀ exp(−2k₀z√(α² − 1)). We then expect a captured image of a fluorescent particle to follow the form Image(x, y) = ∫ [I(z)·φ(x, y, z)] * P(x, y) dz, where φ(x, y, z) is the function that describes the volume of the fluorescent particle, and P(x, y) is the point spread function (PSF) of the imaging system. For a diffraction-limited optical system, the PSF of a microscope objective can be modeled as the diffraction pattern of a circular aperture (or Airy disk) following the form I₀(2J₁(r)/r)², where J₁ is the first-order Bessel function of the first kind, and r is the distance from the center of the Airy disk, scaled such that the first-order diffraction ring lies at the distance d = 0.61λ/NA, where λ is the wavelength of the captured light and NA is the numerical aperture of the microscope objective. Finally, all the intensity curves were normalized to the highest value of the image with the 39 nm decay length, to compare them to experimental results. In Fig. 3(b), these calculated intensities are plotted against experimental measurements. Here, we can see that the measured fluorescence profiles for the beads match the calculated profiles for the different decay lengths, and these calculated profiles correlate with the experimentally observed particles, which suggests that the waveguide couplers are behaving as predicted. After confirming that we were able to adjust the decay length through grating selection, we wanted to demonstrate the imaging performance on a biological sample. For this, we chose to use HeLa cells, which were cultured on the TiO2-covered substrate in Dulbecco's Modified Eagle's Medium (Sigma) supplemented with 10% fetal bovine serum and incubated in 10% CO2 at 37 °C. Cells were fixed with 2% paraformaldehyde (in PBS) for 15 min, washed three times with PBS, and then permeabilized for 30 min with block solution (PBS containing 0.1% Triton X-100, 3% bovine serum albumin, 2% donkey serum, and 0.05 M glycine). F-actin was labeled using Molecular Probes rhodamine phalloidin (Ex/Em: 540/565). The substrate with fixed cells was directly immersed in 10 mL of PBS solution mixed with 250 μL (6.6 μM) of rhodamine phalloidin in methanolic stock solution. After 30 min at room temperature, the substrate was quickly washed with PBS three times, and then the cells were dried with N2. The imaging process is similar to the fluorescent nanoparticle experiment, in which the green laser successively hits the different gratings, and the resulting evanescent waves illuminate the cell attached on top of the TiO2 layer. Figure 4(a) shows the original green LED (M530L2, Thorlabs) image in reflection mode through a dichroic beam splitter (FF562-Di03-25x36, Semrock), excitation band-pass filter (FF01-543/22-25, Semrock) and emission band-pass filter (FF01-593/40-25, Semrock). Unlike Fig. 4(a), which displays most of the fluorescent information throughout the cell, Figs. 4(b)-4(d) selectively excite a TIRF image at various layers, with penetration depths of 19 nm, 25 nm and 39 nm. In Fig. 4(b), there are very few visible structures illuminated by the short decay length. The highlighted inset clearly shows some individual fluorescent dye particles close to the surface. In Fig. 4(c), some of the fluorescent dye suspended in the cell as well as the blurred edges of the cell become visible when the decay length increases to 25 nm. For emphasis, we highlight some clusters with higher intensities (inset figure). By contrast, in Fig. 4(d) both the edge of the cell and the F-actin in the cell can be easily distinguished, as they are farther away from the sample surface. It is important to note that the purpose of these images was to demonstrate depth control in our system, so we did not attempt to identify the visible structures. These results demonstrate that the ability to control the decay length can be used to discern structures with very high resolution in the z direction.
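A minimal numerical sketch (ours) of the intensity-profile estimation described above; the bead is crudely treated as a uniform 40 nm high column, and emission is taken at 560 nm with the NA 0.55 objective quoted in the text:

# Bead image model: depth-weighted excitation times an Airy-disk PSF.
# I(z) = exp(-2*k0*z*sqrt(alpha^2 - 1)); PSF(r) = (2*J1(r)/r)^2.
import numpy as np
from scipy.special import j1

def airy_psf(r_nm, wavelength_nm=560.0, na=0.55):
    # Scale r so the first Airy minimum (J1 zero at 3.8317) lies at 0.61*lambda/NA.
    x = np.where(r_nm == 0, 1e-9, r_nm) * 3.8317 / (0.61 * wavelength_nm / na)
    return (2 * j1(x) / x) ** 2

def bead_excitation(alpha, height_nm=40.0, wavelength_nm=532.0, steps=400):
    # Integrate the evanescent decay over the bead height, a crude
    # stand-in for the volume function phi(x, y, z).
    k0 = 2 * np.pi / wavelength_nm
    z = np.linspace(0.0, height_nm, steps)
    return np.exp(-2 * k0 * z * np.sqrt(alpha ** 2 - 1)).sum() * (z[1] - z[0])

weights = {a: bead_excitation(a) for a in (2.49, 1.97, 1.48)}
ref = weights[1.48]                  # normalise to the 39 nm decay length, as in Fig. 3(b)
r = np.array([0.0, 200.0, 400.0])    # nm from the bead centre in the image plane
for a, w in weights.items():
    print(f"alpha = {a}: cross-section at 0/200/400 nm =", np.round(w / ref * airy_psf(r), 2))

The shorter the decay length, the smaller the integrated excitation of the same bead, which is why the normalized cross-sections in Fig. 3(b) shrink with decreasing penetration depth.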
Conclusions In summary, by using multiple diffraction gratings coupled to a high-index TiO2 layer, we were able to achieve a tunable decay length between 20 and 40 nanometers in an internally reflected waveguide mode. Fluorescent nano-beads were illuminated with the various grating couplers, and their fluorescence profiles were compared to simulated profiles in order to estimate their decay lengths. Finally, we demonstrated the application of our TIRF technique on an F-actin-labeled HeLa cell, and showed that the variation of decay length makes it possible to distinguish unique structures at various distances from the surface. By contrast, conventional TIRF microscopy is limited to a single decay length, and is not able to access this additional z-direction information. Therefore, the capabilities of our compact TIRF technique will be valuable for various systems and lab-on-a-chip applications, such as stretched DNA and protein detection, where information exists in multiple depth planes close to the surface.
Fig. 2. A) The waveguide modes supported by different thicknesses of TiO2 on glass. The connecting lines trace the evolution of the modes as the thickness is varied. B) The intensity distribution inside and outside the TiO2 layer for the three highlighted modes in Fig. 2(a). C) Intensity decay curves in the z direction (perpendicular to the surface of the TiO2 layer). The three calculated decay lengths (defined by the distance where the power decays by a factor of e⁻¹) are 19 nm, 25 nm and 39 nm.
Fig. 3. A) An SEM image of the gold grating embedded in the glass layer. The scale bar is 300 nm. The bottom-left inset shows a schematic of the grating cross-section. B) Cross-section intensities of a fluorescent nanoparticle (40 nm diameter), illuminated by evanescent waves with decay lengths of 39 nm, 25 nm and 19 nm. The corresponding experimental images are shown on the right, from top to bottom. The dots display the experimental results, with the solid lines showing the simulation fitting. The whole cross-section (red line) is 4160 nm.
Fig. 4. Images of rhodamine phalloidin stained F-actin in HeLa cells cultured on the TiO2 substrate. A) Direct LED illumination. B), C) and D) Evanescent wave illumination, with decay lengths of 19 nm, 25 nm and 39 nm respectively. The scale bar is 4 μm.
HPV16 Down-Regulates the Insulin-Like Growth Factor Binding Protein 2 to Promote Epithelial Invasion in Organotypic Cultures Cervical cancer is a multi-stage disease caused by human papillomavirus (HPV) infection of cervical epithelial cells, but the mechanisms regulating disease progression are not clearly defined. Using 3-dimensional organotypic cultures, we demonstrate that the HPV16 E6 and E7 proteins alter the secretome of primary human keratinocytes, resulting in local epithelial invasion. Mechanistically, absence of the IGF-binding protein 2 (IGFBP2) caused increases in IGFI/II signalling and, through crosstalk with KGF/FGFR2b/AKT, cell invasion. Repression of IGFBP2 is mediated by histone deacetylation at the IGFBP2 promoter and was reversed by treatment with histone deacetylase (HDAC) inhibitors. Our in vitro findings were confirmed in 50 invasive cancers and 79 cervical intra-epithelial neoplastic lesions caused by HPV16 infection, where IGFBP2 levels were reduced with increasing disease severity. In summary, the loss of IGFBP2 is associated with progression of premalignant disease, sensitises cells to pro-invasive IGF signalling and, together with stromal-derived factors, promotes epithelial invasion. Introduction Metastasis involves multiple steps, so defining the processes which regulate cancer cell invasion is crucial for understanding the initiation of the metastatic process. In particular, it will be important to monitor the molecular events that occur in the transition from a hyper-proliferative epithelium to an invasive epithelium and determine their functions. High-risk human papillomavirus (HPV) types are responsible for the transformation of the cervical epithelium and subsequent cervical cancer. Expression of the 'early' HPV genes E6 and E7 has been identified to be sufficient to immortalise primary human keratinocytes [1,2] and is required for continued proliferation of infected cells; however, whether this is sufficient to transform cells into a malignant form is still disputed [2][3][4]. E6 and E7 proteins immortalize epithelial cells through their ability to inactivate the cell cycle checkpoints regulated by the retinoblastoma protein (pRb) and p53, resulting in enhanced proliferation and loss of differentiation [5][6][7]. If not cleared, the HPV infection can persist, resulting in progression to invasive disease [8]. However, not all HPV infections of the cervix lead to progressive disease, and so knowledge of the alterations during the transition from low-grade disease, CIN 1, to high-grade disease, CIN 3, and eventual invasive disease may yield novel molecular biomarkers that distinguish lesions with a propensity to progress to invasive disease from lesions that will remain premalignant [9].
During the development of cervical cancer, numerous molecular events have been described, including altered viral gene expression [10,11], regulation of the immune response [12], activation of proliferative signalling pathways [13][14][15], modification of chromatin [16][17][18][19], and regulation of pro-invasive genes, such as matrix metalloproteases (MMPs) [11,20]. In the present study, we have investigated the factors and mechanisms which influence the invasive behaviour of the epithelium. We have examined the ability of the high-risk HPV16 E6 and E7 genes to transform primary human foreskin keratinocytes (HFKs) into an invasive epithelium and have identified a crucial role for the IGF (insulin-like growth factor) signalling pathway in the progression to invasive growth. The invasive potential of E6/7-expressing keratinocytes is enhanced following dramatic down-regulation of insulin-like growth factor binding protein 2 (IGFBP2), resulting from enhanced histone deacetylase 3 activity at the IGFBP2 promoter. IGFBP2 has been shown to have both pro-tumourigenic properties and tumour-suppressive functions, although the former tend to be independent of IGF/IGF receptor signalling [21]. In this study, we have found that IGFBP2 acts to suppress IGFI/II stimulation of the IGF receptor 1 (IGF1R), but in its absence, IGFI/II signalling, in conjunction with the stromal-derived growth factor, keratinocyte growth factor (KGF), stimulates the AKT pathway, leading to invasion. Significantly, we have observed that IGFBP2 expression is inhibited in high-grade pre-malignant cervical lesions infected with HPV16 and propose that this down-regulation is a required step in the initiation of the invasion process. Results The high-risk HPVs are a causal factor for cervical cancer, and infection is observed in a proportion of head and neck cancers. Whilst E6- and E7-expressing keratinocytes are immortalized, we have observed in three-dimensional organotypic cultures that they do not possess the ability to invade into the stroma [1,2]. The stromal compartment also regulates the invasive behaviour of the epithelium [22][23][24], and we have recently demonstrated that pRb-depleted human foreskin fibroblasts (HFFs) promote epithelial invasion, i.e. breakdown of the basement membrane and growth into the underlying collagen layer (S1 Fig). This invasion is driven through altered secretion of the keratinocyte growth factor [22]. Organotypic cultures generated using early-passage (passage 3-10) HPV16 E6/7-expressing HFKs are refractory to the pro-invasive signals from pRb-depleted HFFs. However, with continued passage (late passage, i.e. >passage 14), these cells acquire the ability to invade (Fig 1A). We hypothesized that, following extended passage, E6/7-HFKs may secrete growth factors or cytokines that could alter the invasive behaviour of the epithelium.
To test this hypothesis, conditioned medium (CM) from monolayer cultures of invasive cells, normal HFKs and early-passage E6/7-HFKs was transferred daily to organotypic cultures containing non-invasive early-passage E6/7-HFKs. Medium taken from late-passage E6/7-HFKs was sufficient to induce invasion of the previously non-invasive cells, although not to the same level as late-passage E6/7-HFKs (Fig 1B and 1C), suggesting that invasive late-passage E6/7-HFKs were generating a pro-invasive environment. Subsequently, conditioned medium from normalised numbers of early- and late-passage E6/7-HFKs was subjected to growth factor array analysis, which measured the levels of 41 growth factors. Surprisingly, the pro-invasive late-passage E6/7-HFKs did not secrete additional growth factors/cytokines but produced significantly lower levels of the insulin-like growth factor binding protein, IGFBP2, and the granulocyte-macrophage colony-stimulating factor (GM-CSF), at the protein (Fig 1D and 1E) and mRNA levels (Fig 1F and 1G) in cell extracts. This result suggested that an inhibitor of invasion was lost as cells acquired invasive behaviour; indeed, when medium from non-invasive, early-passage E6/7-HFKs was transferred to invasive late-passage E6/7-HFKs, we observed a complete inhibition of invasion (Fig 1H and 1I). Since IGFBP2 expression was the most dramatically altered factor (>90% loss, p<0.001), we wanted to investigate its role in the invasive phenotype of late-passage E6/7-HFKs. Western blot and real-time analysis from three independently generated E6/7-HFK lines confirmed that late-passage E6/7-HFKs produced very low levels of IGFBP2 in comparison to early-passage E6/7-HFKs as well as primary HFKs (Fig 2A and 2B). Further examination of the IGFBP family identified that there was a modest increase in IGFBP3, but only IGFBP2 was significantly regulated following continued passage of E6/7-HFKs (Fig 2C). These results implied that IGFBP2 may be down-regulated either as a consequence of long-term culture or of prolonged HPV16 E6/7 expression. The expression of IGFBP2 was therefore monitored in primary human foreskin keratinocytes and immortalised keratinocytes (immortalised either by hTERT or through co-culture with J2-3T3 mouse fibroblasts and the Rock inhibitor Y27632 [23]). IGFBP2 was expressed at high levels in immortalized cells relative to late-passage E6/7 keratinocytes (S2C and S2D Fig). Furthermore, we established that HPV16 E6/7 were able to mediate down-regulation of IGFBP2 when introduced into these immortalised cells, whereas control cells transfected with vector only did not show this effect (S2E and S2F Fig). This implied that the down-regulation of IGFBP2 was a result of prolonged HPV16 E6/7 expression. To test this, we targeted E6 and E7 in late-passage E6/7-HFKs with siRNA, which resulted in re-expression of p53, pRb and IGFBP2, and concomitantly inhibited invasion in organotypic cultures (S3A-S3D Fig). There is also a correlation between IGFBP2 levels and HPV in publicly available microarray datasets from various cervical cancer cell lines [24], where HPV-positive cervical cancer cell lines were shown to have substantially reduced IGFBP2 expression compared to HPV-negative cervical cell lines (Fig 2D). We independently confirmed this at the protein and RNA levels in C33a, Caski and HeLa cell lines (Fig 2E and 2F).
To establish a role for IGFBP2 in the invasion process, recombinant IGFBP2 was added to organotypic cultures containing invasive late-passage E6/7-HFKs. Addition of physiological quantities of IGFBP2 to the cultures resulted in inhibition of epithelial invasion in a dose-dependent manner (Fig 3A and 3B, and S4B and S4C Fig). As IGFBP3 was found to be elevated in late-passage cultures and is thought to promote invasion, we also assessed the effect of IGFBP3 (S4 Fig), where IGFBP2, but not IGFBP3, significantly inhibited invasion of the epithelial cells. To further assess the effects of IGFBP2 loss on the invasive potential of the epithelium, IGFBP2 levels were stably depleted in early-passage non-invasive E6/7-HFKs using two different shRNA molecules. IGFBP2 knockdown was confirmed by Western blot and real-time PCR analysis (Fig 3C and 3D), and resulted in enhanced invasion (Fig 3E and 3F). IGFBP2 was also depleted in primary HFKs (Fig 3G); however, this did not result in the generation of an invasive epithelium (Fig 3E and 3F), suggesting that IGFBP2 acts as a brake to pro-invasive signalling mediated by E6 and E7 and that, following continued cell expansion, this brake is lost. IGFBP2 expression is regulated by a variety of factors, including the IGF system itself [25] and insulin [26]; however, in E6/7-expressing keratinocytes, we found this was not the case (S4H Fig). Epigenetic mechanisms have been associated with the regulation of IGFBP2 expression [27][28][29], so next we wanted to determine if one or more of these mechanisms played a role in IGFBP2 regulation and if manipulation of the epigenetic factors would restore IGFBP2 levels in late-passage E6/7-HFKs. We have identified that, as E6/7-immortalised keratinocytes are passaged, there is acquisition of methylation marks at CpG islands close to the transcriptional start of IGFBP2 in late-passage cells only (S5A and S5B Fig). However, addition of 5-aza-C was unable to restore expression of IGFBP2 in the invasive cells (S5C and S5D Fig), suggesting DNA methylation may not be the critical regulator of IGFBP2. HDAC inhibitors have been shown to elevate expression of IGFBP2 in cells which readily express the protein [28,29], so to determine if this was the case in late-passage cells, which exhibit low levels of IGFBP2, we added the pan-HDAC inhibitors sodium butyrate (SB) or trichostatin A (TSA) to invasive E6/7-expressing keratinocytes and HeLa cells. The inhibitors restored both IGFBP2 mRNA and protein (Fig 4A and 4B) in late E6/E7 keratinocytes, as well as in HeLa cells (S6 Fig). Since these results suggest that there is increased histone deacetylation in the invasive keratinocytes resulting in reduced expression of IGFBP2, we determined the expression of HDACs during the transition to an invasive epithelium. HDACs 1, 2, 3, 5 and 6 were elevated in the invasive epithelial cells (Fig 4C), and cell survival assays suggested that these invasive cells are sensitive to HDAC inhibition (S7 Fig). Interestingly, addition of low doses (IC25) of the HDAC inhibitors TSA, SAHA (both pan inhibitors) and romidepsin (Romi, a class I inhibitor, which inhibits HDACs 1, 2 and 3 but also HDAC4 and 6), which do not inhibit proliferation, was sufficient to reduce the invasive frequency of the late-passage E6/7-HFKs (Fig 4D and 4E). To further evaluate the mechanism through which IGFBP2 expression is regulated by histone modifications, selective HDAC inhibitors (proprietary inhibitors of HDAC6 (HDAC6i), HDAC1 and 2 (HDAC1/2i) and HDAC3 (HDAC3i)) and entinostat, which is another class I inhibitor, were employed. Inhibitors of HDAC3, the class I inhibitor entinostat and, to a lesser extent, the HDAC1/2 inhibitor were sufficient to restore IGFBP2 RNA and protein expression (Fig 5A and 5B). Using a commercially available HDAC3-selective inhibitor, RGFP966, IGFBP2 expression was also restored (S8 Fig). To further confirm a specific role for HDAC3 in the regulation of IGFBP2 expression, HDAC1-3 were individually depleted using siRNA, and the results showed that depletion of HDAC3 was sufficient to restore both mRNA and protein expression of IGFBP2 (Fig 5C and 5D).
To assess the function of HDAC3 in regulating IGFBP2 expression, we utilised publicly available data which examined histone modifications around the IGFBP2 locus in primary keratinocytes [30] (Fig 5E). Three histone 3 lysine 9 (H3K9) acetylation sites, which can be altered by HDAC3 [31], were identified. Using ChIP-qPCR with ChIP-tested anti-H3K9Ac antibodies [30], we observed that the acetylation at these three sites is lost in invasive cells, which do not express IGFBP2 (Fig 5F), suggesting a mechanism for the loss of expression of IGFBP2 in late-passage cells. In addition, the IGFBP2 promoter is bivalent, containing both active and repressive histone modifications within the promoter region, with the activating modifications proposed to predominate over the repressive elements [32]. Our results suggest that the loss of the activating marks allows the repressive elements to predominate, leading to repression of gene expression, as previously suggested [33]. To support this suggestion, we showed, using ChIP-qPCR, that there is an increase in the presence of HDAC3 at the transcriptional start site, but no significant enrichment of HDAC3 at other sites within the IGFBP2 locus (Fig 5G). Furthermore, HDAC3 resides in a repressive complex with NcoR1 and NcoR2 (SMRT) [34], resulting in an active repressive complex [35]. Analysis of NcoR1/2 at the protein and transcriptional levels showed marked elevation in invasive cells (Fig 5H and 5I) and also an enrichment at the transcriptional start site (TSS) of IGFBP2 (Fig 5J). Having established that IGFBP2 expression is lost in late-passage E6/7-HFKs, we next wanted to identify the significance of this loss and how it affected downstream signalling in the invasive cells. Since IGFBP2 has been shown to function through preventing IGF1 and IGF2 from binding to the IGF1 receptor [36][37][38], we treated E6/7-HFKs with IGFBP2 prior to IGF1/2 treatment and established that IGFBP2 acts to block IGF1/2-induced AKT and ERK activation (Fig 6A). However, primary human foreskin keratinocytes (HFKs) and early-passage E6/7-HFKs were unresponsive to IGF1 and IGF2 treatment (Fig 6B). These results implied that IGF signalling is enhanced in the invasive cells as a result of the loss of IGFBP2. The expression of the IGF receptors 1 and 2 (IGF1R and IGF2R) in invasive versus non-invasive E6/7-HFKs was assessed by real-time and Western blot analysis and shown to be elevated in invasive cells, whereas the related insulin receptor was unaltered (Fig 6C and 6D). This also mirrors observations that IGF1R expression is elevated in CIN3 cervical lesions [13,39].
Previous work has demonstrated that invasion of the E6/7-HFKs relies on the secretion of keratinocyte growth factor (KGF or FGF7) from stromal fibroblasts acting on the fibroblast growth factor receptor 2b (KGFR/FGFR2b) on epithelial cells [22]. Therefore, we next investigated whether IGFBP2 could alter these effects. We treated late-passage E6/7-HFKs with KGF in the presence or absence of IGFBP2 and showed that IGFBP2 was sufficient to inhibit KGF induction of ETS2 and MMP1, known modulators of the invasive process (Fig 6E) [40,41]. This implied a crucial role for the IGF pathway in the regulation of invasion. To test this hypothesis, IGF1R levels were depleted by siRNA in invasive late-passage E6/7-HFKs and the depletion was confirmed at the protein and mRNA levels (Fig 6F and 6G). Depletion of IGF1R in the epithelium significantly reduced the frequency of invasions in organotypic rafts, suggesting that the IGF signalling pathway is pro-invasive (Fig 6H and 6I). We also tested whether IGF1R-depleted cells respond to the KGF pro-invasive stimulus and, similar to IGFBP2 treatment of these cells, KGF was unable to activate ETS2 and MMP1 in the absence of IGF1R (Fig 6J). The ability of IGF signalling to alter the pro-invasive signalling of KGF implied that the two pathways were connected, and it has recently been reported that KGF functions in a protease-dependent manner, specifically activating A Disintegrin And Metalloprotease 17 (ADAM17) [42]. We confirmed in our cells that KGF activates downstream signalling events in a protease-dependent manner, using the protease inhibitor GM6001 (Fig 7A). Furthermore, we found that depleting IGF1R in the late-passage E6/7-HFKs inhibited the activation of AKT following KGF treatment, suggesting that KGF-induced activation of AKT is both a protease-dependent and an IGF1R-dependent process (Fig 7B). Late-passage E6/7-HFKs were compared to early-passage cells in terms of their expression of the ADAM family of proteins, and ADAM17 expression was found to be elevated in these cells (Fig 7C). We then tested whether KGF-induced activation of AKT is ADAM17-dependent using siRNA. ADAM17 was efficiently depleted (Fig 7D and 7E) and this prevented activation of the AKT pathway (Fig 7F and 7G). ADAM17 has been shown to shed various growth factors from cells, including IGF [43], so we determined whether IGF is secreted from E6/7-HFKs following KGF treatment. IGF was found to be secreted from invasive E6/7-HFKs on KGF treatment; however, following knockdown of ADAM17 this secretion was inhibited (Fig 7H), implying that KGF-induced ADAM17 activation leads to enhanced shedding of IGF from invasive cells and drives activation of the AKT pathway through activation of IGF1R. In order to establish the clinical relevance of our findings, the expression of IGFBP2 was assessed in pre-malignant cervical intraepithelial neoplasias (CIN) by immunohistochemistry and dual immunofluorescence, utilising p16INK4A (p16) staining to distinguish premalignant HPV-infected regions [44]. IGFBP2 was found to be down-regulated in 43% of CIN1 lesions, whilst in CIN3 lesions 85% of the samples showed down-regulation (Fig 8B and 8C). There was a significant difference in the proportion of samples with IGFBP2 loss when comparing CIN1 and CIN3 lesions using Fisher's exact test, the Chi-square test and the Z-test (p<0.001).
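To illustrate the reported comparison, the following sketch (ours) runs Fisher's exact test on a 2×2 table reconstructed from the stated percentages (43% of 40 CIN1 and 85% of 39 CIN3 lesions with reduced IGFBP2); the counts are therefore approximate, not the authors' raw data.

# Illustrative check of the CIN1 vs CIN3 comparison. The counts below are
# back-calculated from the reported percentages and sample sizes, so they
# are approximations, not the original scoring data.
from scipy.stats import fisher_exact

#                reduced IGFBP2   unaltered
table = [[17, 23],    # CIN1 (n = 40, ~43% reduced)
         [33,  6]]    # CIN3 (n = 39, ~85% reduced)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.2e}")   # p << 0.001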
IGFBP2 was also found to be down-regulated in invasive disease (Fig 8B and 8C), although there was no difference in the proportions of tumours with reduced IGFBP2 at the various stages examined (Fig 8D). We had follow-up data for 13 patients with CIN1 where IGFBP2 was reduced. From this group, 10 patients progressed to CIN3; the other three patients either regressed (one case) or remained CIN1 (two cases) (Fig 8E). The results imply that IGFBP2 is commonly down-regulated at advanced stages of infection, correlating with the effects of prolonged HPV16 E6 and E7 expression, and that reduced levels of IGFBP2 in CIN1 disease may indicate a propensity to progress to a high grade. We have also investigated whether IGFBP2 expression is regulated in other cancers associated with HPV infection. HPV infection has been observed in head and neck cancers, where it is associated with between 30-60% of oro-pharyngeal cancers [45]. We have utilised publicly available gene array datasets to assess the expression of IGFBP2 in oro-pharyngeal cancers [46]. In these studies, HPV infection was detected by immunohistochemistry of the surrogate marker p16; the samples were sub-divided into p16-positive and p16-negative groups and the microarray datasets were analysed for IGFBP2 and p16 expression. In p16-positive cancers, IGFBP2 expression was significantly reduced (Fig 8F), suggesting that down-regulation of IGFBP2 expression is likely associated with HPV infection in oro-pharyngeal cancers. In summary, we have shown that IGFBP2 is reduced in an E6/7-dependent manner over the passage of human keratinocytes, leading to invasion in our 3-dimensional model system. Our in vivo studies with cervical premalignancies show that IGFBP2 expression is reduced with severity of disease from CIN1 to CIN3. The reduced IGFBP2 expression leads to activation of the IGF/IGFR pathways, which, through crosstalk with the KGF/FGFR2b complex, can drive invasion (Fig 9). The results suggest that the IGF/IGFR/IGFBP2 axis would make a logical target for further investigation for the potential treatment of cervical cancers. Discussion Invasion of the hyper-proliferative cervical epithelium into the surrounding stroma is an important event in progressive disease, and here we have identified a crucial role for IGFBP2 in controlling this invasion process. Our results suggest that the prolonged expression of E6/7 proteins can generate an invasive epithelium through depletion of IGFBP2 expression, which in turn leads to signalling through IGF1R in cross-talk with FGFR2b. These results are also in keeping with previous results which demonstrated that HPV16 E7 can transform fibroblasts in an IGF1R-dependent manner [47]. We propose that IGFBP2 functions as a brake preventing the HPV-infected epithelium from invading into the underlying stroma.
Fig 8. IGFBP2 is frequently down-regulated in HPV16-infected CIN3 lesions. A) Immunohistochemical (IHC) and immunofluorescence staining of IGFBP2 localises the protein in the cytoplasm of epithelial cells of the cervix, with regions of low IGFBP2 observed. In order to assess whether these were HPV16-infected regions, the same sample was co-stained with IGFBP2 and p16, the latter a surrogate marker of HPV infection. IGFBP2 was found to be commonly down-regulated in HPV-infected regions (CIN3). HPV-infected epithelium is indicated with a red arrow, and neighbouring normal tissue with a white arrow. Epi = epithelium; str = stroma. B) To further address the role of IGFBP2 in the progression of cervical cancer, 40 CIN1 lesions, 39 CIN3 lesions and 50 invasive carcinomas were assessed for IGFBP2 staining intensity and scored as -, +, ++ or +++ for normal and p16-positive regions. If IGFBP2 scores decreased by a factor of 2 or more between p16-positive and normal regions, then samples were identified as 'reduced IGFBP2'. Shown are representative images of CIN1, where IGFBP2 expression was unaltered, and invasive disease, where IGFBP2 is reduced in p16-positive regions compared to neighbouring normal epithelium. *indicates non-specific staining of red blood cells. Arrows are as stated above. C) Quantification across all samples. D) We had data for 13 patients who had low levels of IGFBP2 in the original CIN1 biopsy. 10 patients progressed, whilst 1 patient regressed and 2 had persistent CIN1 lesions. E) Analysis of publicly available microarray datasets for IGFBP2 in oro-pharyngeal cancers demonstrated that in tumours where p16 is readily detected by IHC, IGFBP2 is reduced compared to tumours with no p16 staining. p16 expression in these datasets was also assessed by mRNA expression and was used as a surrogate of HPV infection. doi:10.1371/journal.ppat.1004988.g008
Our in vitro findings are mirrored in cervical cancer specimens, where IGFBP2 expression is commonly lost in 85% of CIN3 lesions, which progress to invasive disease with high incidence if left untreated [48], while CIN1 lesions do not. We did, however, observe that 43% of CIN1 lesions have reduced IGFBP2, and our preliminary data indicate that a significant proportion of patients with CIN1 disease who later progressed to a higher-grade lesion had reduced levels of IGFBP2 in the HPV-infected epithelium of the original CIN1 biopsy. As this has only been examined in a limited number of cases, future studies are required to confirm that IGFBP2 levels may indicate patients at risk of progression. If the reduction of IGFBP2 levels identifies a sub-group of CIN1 lesions that have the propensity to progress, this could be useful clinically, as these patients could be monitored more closely to detect disease progression. There is a possibility that the CIN1 lesions where IGFBP2 was down-regulated were mis-classified; however, grading of lesions was conducted by two pathologists with overall agreement in each case.
Fig 9. In the presence of IGFBP2, IGF1/2 cannot be released from the cell surface and therefore cannot activate the IGF1 receptor. However, when IGFBP2 is lost, KGF activation of ADAM17 leads to cleavage of the unprotected IGF1, leading to IGF1R activation and subsequent AKT activation, which we have previously demonstrated leads to expression of MMP1 [22]. doi:10.1371/journal.ppat.1004988.g009
Mechanistically, our results demonstrate a critical role for IGF signalling in driving the invasive process. The IGF pathway is well known to be modulated in cancer and is known to promote neoplastic growth [13,39]. The IGF receptors are expressed in a variety of cancers, and in vivo studies have demonstrated that cancer cells have a dependency on IGF1. Following prolonged expression of HPV16 E6 and E7, the IGF pathway becomes activated i) through loss of IGFBP2 and ii) through enhanced expression of the IGF receptors. Here we demonstrate that re-addition of IGFBP2, or its re-expression through HDAC inhibitor treatment, blocks IGF signalling and is sufficient to inhibit epithelial invasion, while reciprocal knockdown of IGFBP2 in non-invasive E6/7-HFKs resulted in enhanced invasion, demonstrating a critical role for the pathway in the invasion process. We have further demonstrated that the loss of IGFBP2 allows pro-invasive signals derived from the stroma to enhance epithelial invasion, and this occurs via IGF1R. It has previously been demonstrated that the keratinocyte growth factor functions via activating the metalloprotease ADAM17 [42], and here we show this is also the case, with preferential activation of the AKT pathway. We have previously demonstrated that the AKT pathway is activated in cervical cancer specimens [6] and have demonstrated it to be a key component of epithelial invasion [22]. Here we show that the activation of AKT by KGF was dependent on IGF1R (Fig 9), and that this can be modulated by IGFBP2 and ADAM17. Signalling via the IGF1R pathway has been proposed to be an important determinant of cervical cancer progression, since elevated expression of IGF1R has been observed in cervical specimens, which positively correlated with the stage of the CIN lesions [13,39]; this further highlights the importance of the IGF axis in HPV infection and the incidence of CIN lesions [49]. This, together with our data, suggests that an activated IGF1R pathway promotes a pro-invasive phenotype in the cervical epithelium. An important caveat is that our in vitro model utilises fibroblasts which promote epithelial invasion [22], and reduction of IGFBP2 in the epithelium alone may not be sufficient to drive invasion in situ. As CIN lesions take a number of years to progress to invasive disease, during this time the stroma may become 'activated', which in combination with loss of IGFBP2 can drive epithelial invasion. In line with this hypothesis, detection of cancer-associated myofibroblasts has been observed in the stroma of cervical cancers and is correlated with poor prognosis [50].
IGFBP2 has been demonstrated to have both tumour suppressive and oncogenic properties in different cancer types. Our data show IGFBP2 as an inhibitor of the invasion process in our 3-D model of cervical pre-cancer; in the main, the tumour suppressive functions of IGFBP2 are those which antagonise IGF signalling [38,51,52], although IGF-independent pro-apoptotic functions of IGFBP2 have also been described [53]. In examples where IGFBP2 functions in an oncogenic manner, these functions appear to be independent of IGF and are mediated via integrin alpha 5 [54,55], which leads to inhibition of PTEN and ultimately activates the AKT pathway [56]. These oncogenic functions of IGFBP2 were not observed in our studies, which may be due to functions of HPV E6 and E7, which have been shown to down-regulate various integrins, including alpha 5 [57]. Whilst the HPV vaccine will ultimately reduce the incidence of cervical cancer if administered universally, there still remains a generation of women who will require intervention. Since IGFBP2 is itself a potential target for therapeutic intervention [58], and has been shown to inhibit the growth of breast cancer cells in vivo [59], we propose that, since HDAC3 is involved in reducing expression of IGFBP2, HDAC inhibitors may be a useful tool to treat patients with progressive disease.

Fig 2. IGFBP2 is transcriptionally repressed in late-passage E6/7-HFKs and HPV positive cervical cancer cell lines. A) Continued expression of E6/7 in HFKs leads to dramatic down-regulation of IGFBP2 mRNA levels, n = 4, error bars represent SEM; this is also observed at the protein level (B). C) Real-time PCR analysis of the IGFBP family of proteins demonstrated a specific down-regulation of IGFBP2 expression. n = 3, error bars represent SEM. Note IGFBP1 is not expressed by HFKs but was readily detected in human fibroblasts (S4A Fig). D) Gene rank analysis of IGFBP2 expression in cervical cell lines identified that IGFBP2 expression is low in HPV positive cell lines in comparison to HPV negative cervical cell lines (C33A and HT-3). E) The results in D were confirmed by western blot (E) and real-time PCR analysis (F) for the C33A, CaSki and HeLa cervical cell lines. n = 3, error bars represent SEM. doi:10.1371/journal.ppat.1004988.g002

(Fig), where IGFBP2, but not IGFBP3, significantly inhibited invasion of the epithelial cells. To further assess the effects of IGFBP2 loss on the invasive potential of the epithelium, IGFBP2 levels were stably depleted in early-passage non-invasive E6/7-HFKs using two different shRNA molecules. IGFBP2 knockdown was confirmed by Western blot and real-time PCR analysis (Fig 3C and 3D), and resulted in enhanced invasion (Fig 3E and 3F). IGFBP2 was also depleted in primary HFKs (Fig 3G); however, this did not result in the generation of an invasive epithelium (Fig 3E and 3F), suggesting that IGFBP2 acts as a brake on pro-invasive signalling mediated by E6 and E7, and that following continued cell expansion this brake is lost.

Fig 3. IGFBP2 is a regulator of the invasion process. A) Recombinant IGFBP2 was added to organotypic cultures at the indicated concentrations over a 14 day period. Addition of IGFBP2 inhibited the invasive behaviour of late-passage E6/7-HFKs and the invasive frequencies are quantified in B).
n = 3, error bars represent SEM. C) IGFBP2 levels were depleted from early-passage E6/7-HFKs, as demonstrated by western blot (C) and real-time analysis (D). n = 3, error bars represent SEM. E) Depletion of IGFBP2 from early-passage E6/7-HFKs resulted in enhanced invasive potential, as quantified in F). However, this was not observed when IGFBP2 was depleted in primary HFKs (G, also E and F). n = 3, error bars represent SEM. Scale bars represent 100 μm. doi:10.1371/journal.ppat.1004988.g003

Fig 4. IGFBP2 expression is repressed through enhanced HDAC function. A) Addition of 1 μM Trichostatin A (TSA) or 5 mM sodium butyrate (SB) restores expression of IGFBP2 in late passage E6/7-HFKs to levels similar to primary keratinocytes, as demonstrated by Western blot and real-time PCR (B). n = 3, error bars represent SEM. C) The expression levels of histone deacetylases were assessed by real-time PCR in control primary keratinocytes (pBabe) and early and late passage E6/7-HFKs. n = 3, representative experiment shown, error bars represent standard deviation (SD). D) Addition of HDAC inhibitors to organotypic cultures significantly reduced the invasive frequency of the epithelium, as quantified in (E). n = 3, error bars represent SEM. doi:10.1371/journal.ppat.1004988.g004

Fig 5. HDAC3 is a critical regulator of IGFBP2 expression. A) Using selective HDAC inhibitors (HDAC6i, HDAC1/2i and HDAC3i) and a class I inhibitor (entinostat), we were able to restore IGFBP2 expression in late passage E6/7-HFKs at both the protein and transcriptional level (B). n = 3, representative experiment shown, error bars represent SD. C) Following 48 hours of treatment with siRNA targeting HDAC1, 2 or 3, western blot analysis confirmed the specific depletion of individual HDACs and the restoration of IGFBP2 following HDAC3 depletion. D) This was also confirmed at the transcriptional level. n = 3, error bars represent SEM. E) ENCODE histone 3 lysine 9 acetylation (H3K9Ac) ChIP-seq from normal human epidermal keratinocytes (NHEK) at the IGFBP2 locus. Blocks represent primer locations used to assess the levels of H3K9Ac binding at these sites. F) H3K9Ac ChIP-qPCR at the three sites identified in E demonstrates the loss of the modification in late passage E6/7-HFKs, which lack IGFBP2 expression. n = 3, error bars represent SEM. G) HDAC3 ChIP-qPCR was conducted at locations throughout the IGFBP2 locus, predicted from previous HDAC3 ChIP-seq experiments in human and mouse derived cells. HDAC3 binding was enriched at the transcriptional start site (TSS) and was enhanced in late passage E6/7-HFKs. n = 3, error bars represent SEM. H) Western blot of NcoR1 and NcoR2 in HFKs and early and late passage E6/7-HFKs. I) The co-repressors NcoR1 and NcoR2 are elevated in late passage E6/7 cells at the protein and transcriptional level. n = 3, error bars represent SEM. J) ChIP-qPCR analysis also demonstrated enhanced binding of both NcoR1 and NcoR2 at the TSS. Average of two experiments shown, error bars represent SD. doi:10.1371/journal.ppat.1004988.g005
Fig 6. IGFBP2 blocks pro-invasive IGF1R signalling. A) Late-passage E6/7-HFKs were pre-treated with 10 ng/mL IGFBP2 for 1 hour prior to 10 minutes of treatment with 10 ng/mL IGF1/2, as described in the Methods section. IGF1/2 induced activation of the AKT and ERK pathways, which was inhibited by IGFBP2. B) Similar treatment of primary HFKs and early-passage E6/7-HFKs showed that these cells do not activate AKT in response to IGF1/2 treatment. C) The expression of the IGF and insulin receptors (IGF1R/2R and INSR-A/B, respectively) was assessed by real-time PCR and demonstrated significant increases in IGF1R and IGF2R mRNA. n = 3, error bars represent SEM. This was also detected at the protein level for IGF1R in cycling cultures of late passage E6/7-HFKs (D). E) IGFBP2 blocks signalling pathways mediated by KGF, including those that result in activation of Ets1 and MMP1. F) The role of IGF1R signalling in regulating invasion was assessed by depletion of IGF1R using siRNA (siIGF1R), with knockdown confirmed by western blotting (F) and real-time PCR (G). n = 3, error bars represent SEM. H) siIGF1R treatment inhibited the invasion process, as quantified in I). Average of three experiments, error bars represent SEM. J) IGF1R knockdown inhibited the KGF-dependent activation of Ets2 and MMP1. Scale bars represent 100 μm. doi:10.1371/journal.ppat.1004988.g006

Fig 9. Proposed mechanism of IGFBP2 function. In the presence of IGFBP2, IGF1/2 cannot be released from the cell surface and therefore cannot activate the IGF1 receptor. However, when IGFBP2 is lost, KGF activation of ADAM17 leads to cleavage of the unprotected IGF1, leading to IGF1R activation and subsequent AKT activation, which we have previously demonstrated leads to expression of MMP1 [22]. doi:10.1371/journal.ppat.1004988.g009
Convergent synthesis and evaluation of 18F-labeled azulenic COX2 probes for cancer imaging

The overall objectives of this research are to (i) develop azulene-based positron emission tomography (PET) probes and (ii) image COX2 as a potential biomarker of breast cancer. Several lines of research have demonstrated that COX2 is overexpressed in breast cancer and that its presence correlates with poor prognosis. While other studies have reported that COX2 inhibition can be modulated and used beneficially as a chemopreventive strategy in cancer, no viable mechanism for achieving that approach has yet been developed. This shortfall could be circumvented through in vivo imaging of COX2 activity, particularly using sensitive imaging techniques such as PET. Toward that goal, our laboratory focuses on the development of novel 18F-labeled COX2 probes. We began the synthesis of the probes by transforming tropolone into a lactone, which was subjected to an [8 + 2] cycloaddition reaction to yield 2-methylazulene as the core ring of the probe. After exploring numerous synthetic routes, the final target molecule and precursor PET compounds were prepared successfully using convergent synthesis. Conventional 18F labeling methods caused precursor decomposition, which prompted us to hypothesize that the acidic protons of the methylene moiety between the azulene and thiazole rings were readily abstracted by a strong base such as potassium carbonate, ultimately causing the precursors to disintegrate. This hypothesis was supported when an 18F labeling strategy employing a much milder phosphate buffer proved successful. The 18F-labeled COX2 probe was tested in a breast cancer xenograft mouse model. The data obtained via successive whole-body PET/CT scans indicated probe accumulation and retention in the tumor. Overall, the probe was stable in vivo and no defluorination was observed. A biodistribution study and Western blot analysis corroborate the imaging data. In conclusion, this novel COX2 PET probe was shown to be a promising agent for cancer imaging and deserves further investigation.

INTRODUCTION

Prostaglandin endoperoxide synthase, known more commonly as cyclooxygenase (COX), is the key enzyme required for the conversion of arachidonic acid to the biological mediators known as prostanoids, which include prostaglandins, prostacyclin, and thromboxane (Moore and Simmons, 2000). The two COX isoforms, COX1 and COX2, are expressed in different tissues to varying degrees (Dubois et al., 1998). While COX1 is expressed under basal conditions in almost all tissues and is particularly important to the maintenance of gastric mucosal integrity, renal function, and hemostasis, COX2 is undetectable in most normal tissues (van Ryn et al., 2000). COX2 is highly inducible in cells involved in inflammation and cancer (Rouzer and Marnett, 2009). In addition to the role it plays in inflammation, several lines of research suggest that COX2 is involved in the early stage of tumorigenesis (Yokota et al., 1986; Xie et al., 1991). Notably, COX2 not only continues to be expressed during tumor progression, but its expression also indicates an aggressive tumor phenotype that behaves more invasively (Fujita et al., 1998) and thus predicts a poor prognosis (Sobolewski et al., 2010).
COX2 overexpression has been well documented in several human carcinomas including colon (Nasir et al., 2011), stomach (Murata et al., 1999), lung (Hida et al., 1998), breast (Glynn et al., 2010; Singh et al., 2011), head and neck (Chan et al., 1998), bladder (Shimada et al., 2011), and pancreas (Hill et al., 2012). The relationship between cancers and increased COX2 activity provides a rationale for the use of COX2 as a prognostic marker and as a quantifiable indicator of tumor progression and treatment efficacy. This approach could be achieved through in vivo imaging of COX2 activity, especially when using a sensitive imaging technique such as positron emission tomography (PET). A number of research initiatives have reported the development of COX2 probes with which to visualize cancer-related inflammation, including probes for optical (Uddin et al., 2010) and PET imaging (McCarthy et al., 2002; Prabhakaran et al., 2005; Uddin et al., 2011). Our laboratory has focused on the development of azulene-based COX2 probes owing to the nanomolar affinity and high selectivity toward the COX2 enzyme reported previously (Tomiyama et al., 1999). Azulene has a structural backbone similar to indomethacin and sulindac, two of the most common non-steroidal anti-inflammatory drugs (NSAIDs). However, the difference between such NSAIDs and this non-benzenoid aromatic hydrocarbon is the existence of a 7-membered ring. According to Tomiyama et al. (1999), azulene is suitable for COX2 probe development since the larger ring fits well within the larger binding pocket of COX2 compared to COX1, which enhances COX2 selectivity. Herein, we describe a novel chemistry approach that uses a convergent synthesis methodology to develop azulene-based COX2 PET probes. Of note, we synthesized the main azulene ring using the procedure we reported previously (Pham et al., 2002; Nolting et al., 2009). The two other ring structures were assembled onto the azulene ring using commercially available analogs. To retain the biological activity reported by Tomiyama et al. (1999), we designed the precursors specifically with 18F fluoride labeling in mind. We prefer this isotope not only for its relatively long half-life, but also because replacing a hydrogen atom with fluorine is unlikely to affect biological activity, since the two atoms are sterically very similar (Jalilian et al., 2000; Mueller et al., 2007). We also report herein, for the first time to our knowledge, a modified labeling condition that uses dipotassium phosphate (K2HPO4) for this family of compounds, which we found to be unstable under the conventional PET labeling process. Overall, the chemical yield of this 7-step synthesis of the nitro precursor 12 (Figure 1) is 25%. The biodistribution results and small animal PET imaging demonstrate the potential use of the 18F-COX2 probe in breast cancer imaging.

CHEMICALS AND CHARACTERIZATION

We synthesized 2-methylazulene 2 and reported that outcome in previous publications (Pham et al., 2002; Nolting et al., 2009). All reagents were obtained through commercial sources such as Sigma-Aldrich, Acros, or Tokyo Chemical Industry (TCI) and were used without further purification. Solvents were purified using the PureSolv MD purification system. All reactions were conducted in argon-flushed, rubber septum-sealed flasks, and the reagents were introduced via gas-tight syringes. Reaction progress was monitored by thin layer chromatography (TLC) on pre-coated silica gel plates.
Visualization was accomplished by the naked eye and by 254 nm UV light. Flash chromatography separations were performed using Biotage and Teledyne systems. HPLC analysis and purification were performed using diode array Hitachi LaChrome Elite® systems. 1H NMR and 13C NMR spectra were recorded on a Bruker 400 MHz spectrometer in CDCl3 using tetramethylsilane (TMS) as the internal standard. All chemical shifts are reported in ppm.

4-Methyl-2-((2-methylazulen-1-yl)methyl)thiazole (compound 6)

Triethylsilane (178 μL, 1.11 mmol) was added slowly to 2 mL of trifluoroacetic acid (TFA) at room temperature. The mixture was cooled to 0 °C and mixed for 30 min. A fresh solution of (2-methylazulen-1-yl)(4-methylthiazol-2-yl)methanol 5 (100 mg, 0.371 mmol) in dichloromethane was then added slowly to the mixture being stirred at 0 °C. The reaction was kept at 0 °C for 2 h and then warmed to room temperature. Afterward, the mixture was poured into cold 20% KOH to quench the reaction. The organic layer was extracted into diethyl ether and washed with water and brine, dried with MgSO4 and purified using flash chromatography (ethyl acetate and hexane).

(2-Methyl-3-((4-methylthiazol-2-yl)methyl)azulen-1-yl)(4-nitrophenyl)methanone (compound 12)

AlCl3 (101 mg, 0.757 mmol) was weighed quickly into an argon-flushed vial. While the vial was being purged with argon, dichloroethane was added slowly. The ensuing mixture was syringed quickly into a round-bottom flask and cooled to 0 °C. A solution of 4-nitrobenzoyl chloride (70 mg, 0.377 mmol) in dichloroethane was added slowly into the suspension of AlCl3 at 0 °C. This mixture was stirred at 0 °C for 30 min, after which a fresh solution of compound 6 (64 mg, 0.253 mmol) in dichloroethane was added slowly to the reaction mixture being stirred at 0 °C. After the reaction was stirred at 0 °C for 30 min, it was brought to room temperature and then stirred for another 30 min. The reaction was quenched by adding ice-cold water slowly. The organic layer was extracted into dichloromethane and washed with water and brine, dried with MgSO4 and purified using flash chromatography (ethyl acetate and hexane). The purified material was dried down into a brown/orange solid.

The [18F]F− was then eluted with an acetonitrile/water mixture containing 20 mg of Kryptofix 222 and 5.0 mg of dipotassium phosphate trihydrate (K2HPO4·3H2O) into a conically shaped reaction vial previously purged with helium. The [18F]F− solution was evaporated under a small stream of helium at 100 °C, after which the residue was dried by azeotropic evaporation with anhydrous acetonitrile to ensure anhydrous reaction conditions were maintained for 18F labeling. After precursor 10 or 12 (2-3 mg each) was added to the reaction vial, the resultant mixture was heated to 110 °C for 15 min. After cooling to 30 °C, the reaction mixture was diluted with 4.4 mL of mobile phase (60% EtOH/H2O) and loaded onto a C-18 semipreparative column (Macherey-Nagel C-18, 250 × 10 mm). The flow rate was increased from 0 to 6 mL/min over a 3 min period. The 6 mL/min flow rate was maintained for 35 min, during which the radioactive product was collected (28-31 min). The contents corresponding to the radioactive peak were diluted with 100 mL of distilled water and loaded onto a C-18 Sep-Pak® pre-conditioned with ethanol and water. The Sep-Pak was eluted by hand with 1 mL of 200 proof ethanol followed by 9 mL of saline.
Quality control of the radioactive product was performed using radio-HPLC (C-18 column, Varian Dynamax, 4.6 × 250 mm, 30-75% gradient of water to acetonitrile over 35 min, flow rate 1 mL/min) to confirm [18F]fluoride incorporation. The retention time was compared to that of the "cold" standard compound 11 (retention time = 20.4 min). The experimental protocol for animal imaging was approved by the Vanderbilt Medical Center Institutional Animal Care and Use Committee. Nude mice 6-8 weeks of age (n = 8, from Jackson Laboratory, Bar Harbor, ME, USA) were implanted subcutaneously under anesthesia (isoflurane mixed with 2% oxygen) with 1.0 × 10^6 C57MG cells in the mammary fat pad. The progress of tumor growth was monitored via every-other-day measurement of tumor size and animal weight. When the tumors reached approximately 4 mm in diameter, in vivo PET imaging was performed.

IC50 ASSAY

Various concentrations of the 19F-COX2 compound ranging from 0.1 μM to 0.3 nM were dispensed into designated wells within a 96-well microtiter plate at a final volume of 220 μL per well. Each well contained an assay buffer, heme, and ovine COX2 provided in Cayman's colorimetric COX inhibitor screening assay kit. In addition to the tested probe, the assay included background control wells and 100% initial activity wells. Five minutes after incubation of all assay components at 25 °C, an arachidonic acid substrate at a final concentration of 100 μM and the colorimetric co-substrate N,N,N′,N′-tetramethyl-p-phenylenediamine were added to each well. The plate was then incubated at 25 °C for an additional 5 min before reading the absorbance at 590 nm using a plate reader. The absorbance of the duplicate assay of each well was averaged and subtracted from the 100% initial activity sample, after which it was divided by the 100% initial activity sample and multiplied by 100 to arrive at the percentage of inhibition.

POSITRON EMISSION TOMOGRAPHY

Positron emission tomography imaging was performed using the microPET Focus 220 (Siemens Pre-clinical, Knoxville, TN, USA) in a static acquisition mode for 30 min at 60, 120, and 150 min after injection of 18F-COX2 probe 13 (150-200 μCi, 100-130 μL) into awake, non-fasted mice (n = 8) via the tail vein. To obtain whole-body scans, mice were placed in a supine position. The data were acquired in 3-D mode with an axial span of approximately 8 cm. During the scanning, the animals were anesthetized using isoflurane and the temperature inside the scanner was maintained at 30 °C using a pad connected to a circulating warm water bath. After PET imaging, a CT image was acquired using the microCAT II (Siemens Preclinical, Knoxville, TN, USA) using the same animal holder, with the subjects maintained under anesthesia throughout, and the mice were immediately euthanized upon completion of the CT scan. PET images were reconstructed using the iterative MAP reconstruction algorithm with 18 iterations and a beta smoothing value of 0.001 into 128 × 128 × 95 slices with a voxel size of 0.475 mm × 0.475 mm × 0.796 mm. The PET and CT images were co-registered using the imaging tool AMIDE (Loening and Gambhir, 2003).

BIODISTRIBUTION

After the imaging session, the mice were euthanized and hearts, muscles, blood, livers, spleens, kidneys, stomachs, brains, intestines, tumors, and lungs were retrieved. The tissues were weighed and assessed for 18F radioactivity using a gamma counter (CRC-15W, Capintec, Ramsey, NJ, USA).
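The percent-inhibition arithmetic described for the IC50 assay above is straightforward; the following is a minimal sketch with hypothetical absorbance values, not data from the assay:

```python
# Sketch of the percent-inhibition calculation described for the COX inhibitor
# screening assay. Absorbance values below are hypothetical placeholders.
def percent_inhibition(sample_absorbances, initial_activity, background=0.0):
    """Average duplicate wells, correct for background, and express the
    reduction relative to the 100% initial-activity wells as a percentage."""
    avg = sum(sample_absorbances) / len(sample_absorbances)
    corrected_initial = initial_activity - background
    corrected_sample = avg - background
    return (corrected_initial - corrected_sample) / corrected_initial * 100.0

# e.g. duplicate wells at 0.42 and 0.44, 100% activity at 0.90, background 0.05
print(percent_inhibition([0.42, 0.44], initial_activity=0.90, background=0.05))
```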
WESTERN BLOT

Cells were washed twice with PBS and lysed in ice-cold lysis buffer (50 mM Tris-HCl, pH 7.4, 0.5% Triton X-100, 0.25% NP-40, 0.25% Na deoxycholate, 0.1% SDS, 150 mM NaCl, 1 mM EDTA) supplemented with complete anti-protease cocktail (Sigma). After removing nuclear and insoluble debris at 16,000 × g for 20 min, the supernatant, designated as whole cell lysate (WCL), was saved. Protein concentrations were determined with the Bradford method (Bio-Rad assay, Bio-Rad, Hercules, CA, USA). Thirty micrograms of WCL proteins were separated by 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto polyvinylidene difluoride membranes (PVDF, Bio-Rad). Membranes were blocked with 5% dry milk in Tris-buffered saline with 0.1% Tween-20 (TBST) and immunoblotted overnight at 4 °C with primary antibodies against COX2. β-Tubulin antibody (Santa Cruz) was used to blot the same membrane as a loading control. After washing with TBST three times, horseradish peroxidase (HRP) conjugated secondary antibodies were added for a 1 h incubation. After washing twice with TBST and once with TBS, the protein bands were detected with enhanced chemiluminescence (Pierce, Rockford, IL, USA) by exposure to film (Kodak) for 30 s. Band intensity was quantified using NIH ImageJ software.

REAL-TIME PCR

Total RNA was isolated and purified from cultured cells using the Qiagen RNeasy kit. RNA (2 mg) was reverse transcribed by Superscript II (Invitrogen) with oligo-(dT) as primer to generate single-stranded cDNA following the manufacturer's recommended protocols. Quantitation of mRNA (cDNA) levels for COX2 was carried out by real-time PCR using S16P as the internal control. Real-time PCR primers were designed with the web-based OligoPerfect™ Designer (Invitrogen). The primer pair used in PCR was forward 5′-CAGGAGAGAAGGAAATGGC-3′ and reverse 5′-TGAGGAGAACAGATGGGATT-3′, yielding a 184-nt product. Real-time PCR was carried out with the SYBR-green mixture from Bio-Rad in a final volume of 25 μL, with initial denaturation at 94 °C for 3 min, followed by 45 cycles of denaturation at 94 °C for 10 s and annealing and extension at 65 °C for 1 min. PCR products were verified by acrylamide gel electrophoresis and melting curve analysis.

CHEMICAL SYNTHESIS AND CONFIRMATION OF THE 18F-COX2 PROBE

Starting with tropolone 1, we synthesized three analogs of the precursor as shown in Figure 1. The advantage of working with azulene is that reaction progress can be monitored via color changes. For example, conversion of compound 2 to the aldehyde 3 using the Vilsmeier-Haack reaction resulted in a color change from blue to red. The thiazole ring was incorporated onto azulene in two steps. This included a hydrogenolysis reaction using triethylsilane in the presence of TFA to yield compound 6, which is blue. Finally, we used Friedel-Crafts acylation to attach an aromatic ring to position 3 of the azulene. This reaction was completed using dichloroethane at room temperature for 30 min, yielding the final product, which is brown. Under reaction conditions similar to those used with p-nitrobenzoyl chloride, 4-bromobenzoyl chloride provided an average yield of only 17% for the resultant Friedel-Crafts acylation product. The seemingly low yield can be attributed to bromine being a weaker electron-withdrawing group.
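Relative expression from real-time PCR of this kind is commonly computed with the 2^(-ΔΔCt) method against the internal control; the paper does not state its exact quantitation formula, so the following is only an illustrative sketch with hypothetical Ct values:

```python
# Illustrative 2^(-ΔΔCt) relative-quantitation sketch. The study normalizes
# COX2 to the S16P internal control; the Ct values below are hypothetical.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_sample = ct_target_sample - ct_ref_sample      # ΔCt in cells of interest
    delta_control = ct_target_control - ct_ref_control   # ΔCt in comparison cells
    return 2.0 ** -(delta_sample - delta_control)        # 2^(-ΔΔCt)

# e.g. C57MG vs 67NR with made-up Ct values giving a ~32-fold difference
print(fold_change(22.0, 17.0, 27.0, 17.0))  # 32.0
```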
From an electronic perspective, we noted that the fluoro moiety is a much more favorable alternative than its bromo counterpart, as the fluoro derivative possesses greater electronegativity and is thus better suited to generating reactive electrophilic acylium ions. Notably, it is important to perform the Friedel-Crafts reaction as the last step, since the nitro precursors would be reduced to amino groups under the reduction conditions. Compounds 10 and 12 were designed for [18F]fluoride labeling, while compound 11 was used as a control to confirm the radiolabeled product and for specific activity analysis. All of the intermediates and products were characterized fully by 1H NMR, 13C NMR and mass spectrometry.

18F-COX2 probe

We found this family of azulene compounds to be unstable under conventional 18F labeling conditions. After exhaustively analyzing every single reagent, solvent, and temperature involved in the labeling experiment, including Kryptofix, dimethyl sulfoxide (DMSO), DMF, acetonitrile, and potassium carbonate, we found by HPLC analysis that potassium carbonate was decomposing precursors 10 and 12 instantaneously at room temperature. This undesired chemical transformation was easily visualized, since the color changed from brown to black when the precursors came into contact with potassium carbonate. Although we did not analyze the intermediates, this undesired reaction could be attributed to the acidic methylene protons between the azulene and thiazole rings, which may be sensitive to potassium carbonate. To overcome this problem, we decided to use a milder buffer, dipotassium phosphate, which works perfectly for this purpose. Although there was no sign of decomposition after we optimized the labeling conditions, the labeling of the bromo precursor 10 was sluggish. In contrast, we successfully labeled the nitro derivative 12, albeit in low yield (3%, decay corrected) at EOS, with >99% chemical and radiochemical purity and a specific activity of 733 Ci/mmol.

THE SPECIFICITY OF THE 19F-COX2 COMPOUND FOR THE COX2 ENZYME

In addition to being synthesized to facilitate confirmation of the 18F-labeled product, the cold compound 11 was also used to assess the IC50 value. The assay was performed using 10 duplicate concentrations in a range comparable to that of DuP697, a known COX2 inhibitor. As shown in Figure 2, the Hill slopes of the curves that represent 19F-COX2 and DuP697 are −0.62 and −1.0, respectively, suggesting the specificity of the synthesized PET probe for COX2. After taking the background signal into account, the IC50 value of the 19F-COX2 compound was 661 nM.

COX2 IS OVEREXPRESSED IN C57MG BREAST CANCER CELLS

To confirm and quantify COX2 expression in the C57MG cell line, we selected two other murine breast cancer cell lines, 4T1 and 67NR. It has been demonstrated previously that 4T1 (Harmey et al., 2002) and 67NR cells (Nagler et al., 2011) are positive and negative, respectively, for COX2. As shown in Figure 3, Western blot analysis of cell lysates indicated a very low level of COX2 in 67NR cells. In contrast, C57MG possesses a high constitutive level of COX2. Furthermore, real-time PCR data demonstrated that COX2 was expressed at a level approximately 31-fold higher in C57MG cells compared to 67NR.
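Decay-corrected yields like the 3% figure above relate the activity measured at end of synthesis back to the starting activity using the physical half-life of 18F (about 109.8 min). A minimal sketch of that standard correction follows; the activities and elapsed time in the example are hypothetical, not values from this work:

```python
# Standard physical-decay correction for an 18F radiochemical yield.
F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def decay_corrected_yield(product_activity_mci, start_activity_mci, elapsed_min):
    """Radiochemical yield corrected for physical decay of 18F:
    scale the product activity back to the start of synthesis."""
    correction = 2.0 ** (elapsed_min / F18_HALF_LIFE_MIN)
    return product_activity_mci * correction / start_activity_mci * 100.0

# Hypothetical example: 20 mCi of product from 1000 mCi of [18F]fluoride,
# 90 minutes after start of synthesis -> ~3.5% decay-corrected yield
print(decay_corrected_yield(20.0, 1000.0, 90.0))
```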
IN VIVO IMAGING OF COX2 IN A TUMOR MOUSE MODEL AND BIODISTRIBUTION

To assess the specificity of the probe for the detection of COX2 expression, we performed in vivo PET imaging of non-fasted mice in which C57MG tumors had been implanted in the mammary fat pads. We monitored the distribution of the probe in breast cancer at several times after the intravenous bolus injection. The optimal emission data were collected during a static, whole-body scan 150 min after administration of the probe. PET and PET/CT images showed accumulation and retention of the 18F-COX2 probe 13; significant accumulation in the tumor resulted in high signal intensity compared to the background (p < 0.05; Figure 4). The PET data corroborate the Western blot and RT-PCR analyses. We also observed predominant hepatic uptake of the probe. That outcome is reasonably understandable, since lipophilic compounds tend to possess a strong affinity for the liver. In addition, the high liver-bowel activity observed in this study suggests the possibility of hepatobiliary excretion. The probe exhibited negligible signal in the bone, thus eliminating the notion of in vivo defluorination. Figure 5 shows the probe's biodistribution in non-fasted tumor-bearing mice (n = 3) at 150 min post injection. The data show that the probe accumulated in the tumor; however, the highest uptake was detected in the liver, followed by the intestine. It is very likely that the high activity observed in the intestine can be attributed partially to residual stool.

FIGURE 3 | Analysis of COX2 expression and quantity in murine breast cancer cells. Western blot analysis was performed to verify the presence and relative intensity of COX2 in C57MG cells compared to other cells. β-actin served as a loading control (left). RT-PCR data were used to quantify the level of COX2 expression after normalization.

STATISTICS

Student's t-test was used to evaluate statistical differences between samples. Differences were considered significant at p < 0.05.

DISCUSSION

The goal of this work was to design, synthesize and test a novel class of azulene-based probes with which to image COX2 in cancer. Although synthesis of this class of COX2 inhibitors has been reported in the past (Tomiyama et al., 1999), conversion from an inhibitor to a contrast agent requires an entirely different chemistry. This is because the chemistry used originally is unsuitable for producing the nitro precursor 12. Conversion of a nitrile derivative into a thioamide using hydrogen sulfide, as shown by Tomiyama et al. (1999), concomitantly reduces the nitro group to an amine. Another disadvantage of constructing the thioamide directly from the azulene ring is that the process requires many synthetic steps, and failure at any single step affects the whole scheme. In this project, we utilized a convergent synthesis strategy wherein the three major rings of the compound were either synthesized or acquired commercially from a diverse library of analogs. These were then assembled into the desired product using simple chemistry. Thanks to this approach, we shortened the synthesis by three or four steps. In addition, the approach enables the potential generation of a library of compounds with novel functional groups that offer untapped bioisosteres. Another innovative aspect of this work lies in the 18F labeling process. To our knowledge, there are currently no reported data showing an alternative buffer to the conventional use of potassium carbonate.
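Gamma-counter biodistribution data of this kind are conventionally expressed as percent injected dose per gram of tissue (%ID/g); the paper does not give its formula, so this is only an illustrative sketch with hypothetical counts and masses:

```python
# Illustrative %ID/g calculation for gamma-counter biodistribution data.
# All numeric values are hypothetical; decay correction is assumed to have
# been applied by the counter or beforehand.
def percent_id_per_gram(tissue_counts, tissue_mass_g, injected_dose_counts):
    """Fraction of the injected dose found in the tissue, per gram."""
    return tissue_counts / injected_dose_counts / tissue_mass_g * 100.0

organs = {"tumor": (5.2e4, 0.11), "liver": (4.1e5, 1.30), "muscle": (6.0e3, 0.25)}
injected = 2.5e6  # counts equivalent of the injected dose
for organ, (counts, mass) in organs.items():
    print(f"{organ}: {percent_id_per_gram(counts, mass, injected):.2f} %ID/g")
```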
We hypothesized that the role played by potassium is that of a counter-ion for the [18F]fluoride, and as such it can be displaced by a similar cation. For these precursors, or any base-sensitive compounds, weaker bases such as dipotassium phosphate should be used as an alternative, since their pH is nearly neutral. Since we have not performed this sort of experiment on other types of compounds, we cannot extrapolate the reason why the specific activity of the final product is low. More work is in progress to improve the specific activity of compound 13. One approach in that direction is to use high-grade dipotassium phosphate to ensure the elimination of trace fluoride in the labeling process. Nevertheless, in view of our recent findings and in light of the high number of base-sensitive precursors that have failed in PET labeling, it is reasonable to hope that this finding will have far-reaching applications for other compounds. In vivo PET imaging demonstrated that there was no defluorination of the probe in vivo even 2.5 h post injection of the radioligand. To our knowledge, this is the first COX2 PET radioligand demonstrating such high stability in vivo. However, as the scope of this article was to report the chemical development of the probe, future studies will be needed to fully characterize this radioligand in vivo, including blood sampling and kinetic modeling as well as displacement studies. In addition, other important issues still need further evaluation. For example, we do not have information regarding tumor uptake in fasted versus non-fasted mice. Although there is no systematic study or mechanism that explains the difference between these two groups, Fueger et al. (2006) reported that in fasted mice, tumor uptake increased fourfold while tumor-to-organ ratios increased up to 17-fold compared to the non-fasted counterparts. Currently, work is in progress in our group to address this issue. Furthermore, in vivo blocking studies using cold compound 11 or COX2 inhibitors would be ideal to further confirm the specificity of this PET probe. Data obtained in this work suggest that this probe not only has the potential to detect inflammation, but can also be used to detect the early onset of cancer. Furthermore, this targeted imaging approach is applicable to the assessment of tumor response during chemotherapy. Another application for the in vivo imaging of COX2 lies in cell therapy. Muthuswamy et al. (2010) showed that COX2 impairs the ability of dendritic cells (DCs) to attract naïve T cells. One of the mechanisms involved is that COX2 inhibits the ability of DCs to produce CCL19. In another study, Harizi et al. (2002) showed that COX2-induced PGE2 enhances the production of endogenous IL-10, which downregulates DC functions. By using COX2 inhibitors to attenuate the expression of IL-10 with concomitant restoration of IL-12 production by DCs, Stolina et al. (2000) demonstrated that COX2 inhibitors can be used beneficially as an adjuvant strategy in cancer therapy. Altogether, we believe that non-invasive imaging of COX2 with this probe in breast cancer will provide valuable insight into the tumor microenvironment. In conclusion, we have demonstrated an innovative synthetic approach to the development of a novel class of 18F-COX2 contrast agents. In addition, we report optimized labeling conditions that can be applied to any base-sensitive PET precursor.
The chemistry we utilized is reproducible and scalable, and each step of the syntheses described in this work has been repeated and characterized more than 30 times by NMR and mass spectrometry. Most importantly, the small animal PET imaging data suggest the specificity of the probe for COX2. In general, it seems reasonably certain that this class of azulene-based agents deserves further evaluation, as in vivo imaging of COX2 will offer significant insights into the implication of this enzyme in the inflammation-dysplasia-cancer matrix.

ACKNOWLEDGMENTS

We thank Meiying Zhu and Breia Jefferson for their contributions to the project. This work was supported by NIH K01AG026366 (Pham), R01CA160700 (Pham), the VICC Cancer Center Support Grant (Pham) and P50CA128323 (Gore).
Macroeconomic dynamics and the IS puzzle

The authors solve the IS puzzle for the G7 countries. They find that five of the G7 countries have the expected significant negative relationship between the output gap and the real-rate gap; the time series of the remaining two show material deviation from expected IS-curve behavior. The authors show that the observed time dependence of the interaction between the output and real-rate gaps can be represented in a parsimonious and practical manner using the theory of anelasticity, which unifies partial-adjustment specifications of the IS curve. JEL C22 E3 E32 E52 E61

Introduction

New Keynesian macroeconomic dynamics is based on three major elements: a central bank that seeks to keep economic output as close to the economy's highest level of sustainable output and inflation as close to a target level as possible, a Phillips curve that expresses how a deviation of output from potential drives changes in inflation, and an IS curve that expresses economic output as inversely related to the level of the real interest rate. 1 The latter of these, the relationship between output and the real interest rate, is the basis for the use of interest rates as the primary tool of monetary policy by central banks in advanced economies. Despite the centrality of the IS curve to monetary policy since the late 1970s, the small empirical literature dedicated to establishing that an increase in the real rate does in fact have a negative impact on output has come to be characterized by the "IS puzzle": a statistically significant IS curve is not found in some studies of the United States and is not seen in any of the other G7 countries. 2 The goal of this paper is to solve the IS puzzle.

The IS Puzzle

The IS puzzle began with empirical results that supported the existence (i.e., statistical significance) of the IS curve in the U.S. (Rudebusch and Svensson, 1999; Peersman and Smets, 1999) and the EU5 (Peersman and Smets, 1999). The IS puzzle emerged (and was coined) in subsequent work by Nelson (2001, 2002a) on U.S. and U.K. data, which showed that the existence of the IS curve depended on the time frame of the data used in the analysis. A subsequent empirical reaffirmation of the IS curve in the U.S. by Fuhrer and Rudebusch (2004) was followed, shortly thereafter, by another finding of the IS puzzle, this time by Goodhart and Hofmann (2005a,b) in their examination of the G7. The IS puzzle was also found by Angeloni and Ehrmann (2007) in the euro area, by Hafer et al. (2007) in the U.S., by Hafer and Jones (2008) in the U.S. and other countries, and by Stracca (2017) in their examination of the IS curve, its puzzle, and a similar phenomenon in the analysis of the classic consumption Euler equation. Our approach to solving the IS puzzle is motivated by the work of Rudebusch and Svensson (1999, 2002) and of Goodhart and Hofmann (2005a,b) that illustrates the IS puzzle in the United States. These studies employed the specification:

y_{t+1} = β_{y1} y_t + β_{y2} y_{t-1} + β_r [ (1/4) Σ_{j=0}^{3} (i_{t-j} − π_{t-j}) − r̄ ] + ε_{t+1},  (1)

1 See, for example, Goodfriend and King (1997), Clarida et al. (1999), and Woodford (2003). The IS curve is also known as the intertemporal Euler equation or the output Euler equation.
2 These results are established in Rudebusch and Svensson (1999, 2002), Peersman and Smets (1999), Nelson (2001, 2002a), Fuhrer and Rudebusch (2004), Goodhart and Hofmann (2005a,b), Angeloni and Ehrmann (2007), Hafer et al. (2007), Hafer and Jones (2008), and Stracca (2017).
where y_t denotes the output gap at time t, where i_t and π_t are the nominal interest rate and the inflation rate, respectively, and where r̄ is the natural-rate component of the real-rate gap. 3 Using the data shown in Table 1, Rudebusch and Svensson (1999) and Goodhart and Hofmann (2005a,b) obtained the coefficients for Eq. (1) shown under the column heading "Puzzle" in Table 2. Rudebusch and Svensson (1999) obtained a statistically significant value for β_r while Goodhart and Hofmann (2005a,b) did not; this is the IS puzzle in the United States. While the work of Rudebusch and Svensson (1999) considered only the United States, Goodhart and Hofmann (2005a,b) also found the IS puzzle to hold across the G7; in their work no G7 country was found to have a statistically significant value for β_r. The heterogeneity of the entries in Table 1 (the time range, the input for the GDP-gap calculation, and the input for the real-rate calculation all differ) suggests that in these differences may lie the source of the IS puzzle. We localized the sources of the IS puzzle by first reproducing the results of Rudebusch and Svensson (1999) and of Goodhart and Hofmann (2005a,b) and then, having established the reproducibility of these results, varying the input data as illustrated in Table 2. As mentioned above, the IS puzzle of the United States is shown in the column labeled "Puzzle". In the upper panel of the column labeled "Puzzle", the IS-curve coefficients (the βs) and associated t-statistics of Rudebusch and Svensson (1999) are shown. All betas are statistically significant and the sign of β_r = −0.10 is negative, all as expected for the IS curve. By contrast, Goodhart and Hofmann (2005a,b) find a statistically insignificant β_r = −0.021, shown in the lower panel; not as expected for the IS curve. Our reproduction of the Rudebusch and Svensson (1999) result is shown in the upper panel of Table 2 in the column labeled CWPI. Our coefficient values are consistent with those of Rudebusch and Svensson (1999) and we therefore conclude that their results for the IS curve can be reproduced. Similarly, our reproduction of the Goodhart and Hofmann (2005a,b) results is shown in the lower panel of Table 2 in the column labeled CPI. Our results are consistent with those of Goodhart and Hofmann (2005a,b), both for the statistically significant coefficients and for the identity of the statistically insignificant coefficient. We thus conclude that the results of Goodhart and Hofmann (2005a,b) can also be reproduced and this, together with our reproduction of Rudebusch and Svensson (1999), is our reproduction of the IS puzzle.

3 The data used by Rudebusch and Svensson (1999, 2002) are (i) the Congressional Budget Office (CBO) estimates of potential GDP and (ii) the GDP chain-weighted price index (CWPI) for inflation. Those used by Goodhart and Hofmann (2005a,b) are (i) a Hodrick-Prescott filter calculation of potential GDP and (ii) the consumer price index (CPI) for inflation.

Table 2: Coefficients and associated t-statistics (in parentheses) for the IS curve in the United States given by Eq. (1). Asterisks indicate significance at the 1 percent (***) and 5 percent (**) levels; (†) indicates significance at least at the 10 percent level as reported by Goodhart and Hofmann (2005a,b), 1982-1998.

The origins of the IS puzzle are revealed in the remaining entries of Table 2. 4 We begin by examining the impact of changing the inflation measure on the coefficients. As mentioned above, Rudebusch and Svensson (1999) used the CWPI measure while Goodhart and Hofmann (2005a,b) used the CPI.
The impact of the CPI measure on the Rudebusch and Svensson (1999) result can be seen by comparing the coefficients in the columns labeled CWPI and CPI in the upper panel of Table 2. Moving from CWPI to CPI eliminates the statistical significance of the real-rate coefficient. For completeness we also show the result for the personal-consumption expenditure (PCE) deflator; use of this inflation measure also creates an IS puzzle. Variation of the inflation measure for the time range of Goodhart and Hofmann (2005a,b), shown in the columns labeled CWPI and PCE in the lower panel of Table 2, reveals that the lack of statistical significance for this time range is indeed robust; changing the inflation measure does not restore statistical significance to β_r. Thus we conclude that one source of the IS puzzle is the choice of inflation measure. Next, comparing the coefficients in the upper and lower panels of the column labeled CWPI, we find a second source of the IS puzzle: changing the time range of the input data can eliminate the statistical significance of the real-rate coefficient. Examination of the CPI and PCE columns demonstrates that this change in time range does not restore the statistical significance of the real rate in either case. Thus, we conclude that a second source of the IS puzzle is the choice of time range of the input data. These origins of the IS puzzle (changes in the inflation measure and changes in the time range of the input data) can be seen in Figure 1, where the data used to generate the coefficients in Table 2 are shown. In panel (a) we see the output gap and the Fed Funds rate as a function of time, with the grey bars indicating recessions. The three inflation measures discussed above are shown as a function of time in panel (b). The reason the time range matters is likely associated with the dramatic movement in both the output gap and the real interest rate seen before 1982. To begin the analysis in 1982 is to exclude the period during which each of these variables experienced their greatest dynamic range and to focus the analysis on a period during which movement in the real rate is somewhat less correlated with the output gap. Indeed, it appears that the late 1970s and early 1980s are a sort of natural experiment regarding the IS curve, designed to highlight the rate dependence of output. Similarly, the choice of inflation measure matters because the time series of the CPI inflation measure differs significantly from that of the PCE and IPD inflation measures during some time intervals. Of particular importance to our study is the comparatively dramatic rise and fall of CPI inflation in the late 1970s and early 1980s. This sensitivity to inflation measure, however, brings into focus the implicit manner by which the natural-rate component 5 of the real-rate gap in Eq. (1) has been calculated: either as the mean rate of a demeaned time series or as a non-zero constant term in the regression. Furthermore, Eq. (1) expresses the rather strong assumption that, in contrast to the output gap, all lags of the real rate have the same coefficient. This, together with the somewhat ad hoc origin of Eq. (1), suggests a reexamination of the specification of the IS curve, and it is to this that we now turn.

Time-Dependence of the IS Curve

In equilibrium the IS curve is given by

y = J r,  (2)

where y is the output gap, r is the real interest-rate gap (the rate gap), and J is a constant that should have a negative sign.
Implicit in this version of the IS curve is (i) a unique equilibrium output gap for each level of the rate gap, (ii) instantaneous achievement of the equilibrium response, and (iii) linearity of the response. The validity of these assumptions can be assessed by comparing empirical time series for the rate gap and the output gap. If valid, the temporal variation of these time series should, modulo small fluctuations, be the same. The output and rate gaps 6 for the United States are shown in Figure 2 where, contrary to the instantaneous proportionality implicit in Eq. (2), we see the dynamic monetary-policy interplay between the rate gap and the output gap, with the rate gap rising in response to an increase in the output gap and falling in response to a decline in the output gap. 7 The lead-lag nature of the output-rate dynamics clearly illustrated in Figure 2 indicates that the assumption of an instantaneous response of the rate gap to a change in the output gap is not supported empirically. Relaxing this assumption to allow for time-dependence in the response, while maintaining linearity and the long-run equilibrium described by Eq. (2), is the basis of anelasticity and the notion of a standard anelastic economy, in which the IS curve becomes 8

τ_r dy/dt + y = τ_r J_U dr/dt + J_R r,  (3)

where τ_r is the relaxation time at constant rate gap and the proportionality constant J is now represented by two terms: J_U, which represents any instantaneous response, and J_R, which represents the equilibrium proportionality. The difference J_R − J_U is the time-dependent component of the response. In the equilibrium limit we recover the IS curve of Eq. (2) as y = J_R r. Some intuition for this dynamic form of the IS curve can be had from a consideration of a simple rate shock. If the rate gap is shocked and held at a constant level r, the time-dependent output gap becomes

y(t) = [J_U + δJ (1 − e^{−t/τ_r})] r,  (4)

where δJ ≡ (J_R − J_U). The output-gap response to this step change in the rate gap is illustrated in Figure 3. At t = 0 the rate gap increases by 1% and in response to this the output gap has a very rapid response of J_U = −0.50% followed by a slower relaxation to J_R = −2.0%.

5 The natural rate of interest is "the real short-term interest rate consistent with output equaling its natural rate and constant inflation." See, for example, Holston et al. (2017).
6 The rate gap for the PCE inflation measure is shown in Figure 2. The rate gaps for the other inflation measures are essentially identical to that of PCE and, in the interest of clarity, not shown.
7 To calculate the natural-rate component of the rate gap we employed the Hodrick-Prescott filter. This approach is consistent with current econometric practice for evaluating long-wavelength components of time series, provides methodological coherence across our long-wavelength calculations since we are using it to calculate the potential output in the output gap, introduces a relatively small computational deviation from the detrending approaches used by Rudebusch and Svensson (1999) and Goodhart and Hofmann (2005a,b), and is consistent with our goal of identifying the source of the IS puzzle within a formal framework as close to that of Eq. (1) as possible. Other more formally and computationally complex approaches to determining the natural rate (e.g., Laubach and Williams (2003), Garnier and Wilhelmsen (2009), Holston et al. (2017) and references therein) hold promise for future work on this issue.
8 See Hawkins (2015) and references therein.
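To make the step-response intuition concrete, the following is a minimal numerical sketch of the Figure 3 scenario for Eq. (3) (J_U = −0.5, J_R = −2.0). The relaxation time τ_r below is an illustrative value assumed here, since the figure's τ_r is not restated in the text:

```python
import numpy as np

# Minimal sketch: output-gap response of the anelastic IS curve, Eq. (3),
# to a 1% step in the rate gap at t = 0 (the Figure 3 scenario).
J_U, J_R = -0.5, -2.0   # instantaneous and equilibrium responses (% per %)
tau_r = 4.0             # relaxation time; illustrative value (e.g. quarters)
r_step = 1.0            # 1% step in the rate gap

t = np.linspace(0.0, 20.0, 201)
# Closed-form step response, Eq. (4): y(t) = [J_U + (J_R - J_U)(1 - exp(-t/tau_r))] r
y = (J_U + (J_R - J_U) * (1.0 - np.exp(-t / tau_r))) * r_step

print(f"y(0)  = {y[0]:.2f}%")   # -0.50%: rapid initial response J_U * r
print(f"y(20) = {y[-1]:.2f}%")  # approaches the equilibrium J_R * r = -2.0%
```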
The partial-adjustment form of the IS curve obtained by discretizing Eq. (3) has an output-gap lag structure similar to that of Eq. (1), but a rate-gap lag structure that differs from Eq. (1) in that each lag has a different coefficient. 9 A further simplification can be had by noting that with a quarterly measurement frequency the instantaneous component of the response J_U will not be observed; consequently we set J_U = 0 and used the resulting form of Eq. (5), denoted Eq. (6), in our analysis. The coefficients for Eq. (3), obtained by fitting Eq. (6) using ordinary least squares via the lm function in R over the period studied by Rudebusch and Svensson (1999, 2002), are shown in Table 3. This analysis demonstrates that this specification is more robust to the choice of inflation measure. The proportionality constant J_R is rather close to two in magnitude in the two cases with statistically significant values for 1/τ_r, indicating that y ≈ −2r in equilibrium. The success of our analysis in the other G7 countries is contingent on the existence of the lead-lag relationship between the output gap and the rate gap found in the United States, an indication of active rate-based monetary policy. As the time series in Figure 4 illustrate, however, this appears to be the case for only a portion of the G7 countries. The output-rate relationship of the United Kingdom and Germany is similar to that of the United States, with changes in output generally leading changes in rate, and this relationship is described well by Eq. (3), as indicated by the coefficients in Table 4. Canada, Italy, and France did not have statistically significant coefficients for 1/τ_r. Since this suggested that the equilibrium IS curve was a more appropriate specification, we ran regressions using Eq. (2), which yielded the coefficients J shown in Table 4. With this equilibrium specification, Canada and France are seen to have statistically significant IS curves. Italy, by contrast, does not have a statistically significant IS relationship, although this consistency with the results of Goodhart and Hofmann (2005a,b) could be due to the lack of data before 1980. Finally, the data for Japan successfully resisted our attempts at parameterization. Inspection of the time series for Japan reveals interactions, such as the real-rate gap change leading the output-gap change in 1980, that are inconsistent with the IS curve generally and suggest that a richer dynamical model is needed to describe the IS relationship in Japan. Of particular interest is that in all G7 countries except Italy and Japan a statistically significant negative relationship between the output gap and the real-rate gap is observed, and thus it can be said that for these countries the IS puzzle is solved. Note: The values of J_R for the United Kingdom and for Germany are the ratios of J_R/τ_r and 1/τ_r. The retail price index (RPI) was used to calculate inflation in the United Kingdom; the consumer price index (CPI) was used for the other G7 countries.

Discussion and Summary

As one of the three components of New Keynesian economic dynamics, the IS curve is of considerable importance to academic economists, central bankers, and other policy makers. In particular, the negative dependence of the output gap on the real interest rate is fundamental to the use of interest rates as macroeconomic policy tools. Thus, the IS puzzle, the observation that the output gap is not dependent on the real rate in a statistically significant manner, is a major concern to those for whom interest rates are a key monetary policy tool.
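The regression itself is a one-line OLS once the discretization is written out. Since Eq. (6) is not reproduced above, the sketch below assumes a simple first-order (Euler) discretization of Eq. (3) with J_U = 0, y_t = (1 − 1/τ_r) y_{t−1} + (J_R/τ_r) r_{t−1} + ε_t, which may differ in detail from the authors' Eq. (6); the series are simulated, not the paper's data:

```python
import numpy as np

# Sketch: OLS estimation of the anelastic IS curve, assuming the Euler
# discretization y_t = (1 - 1/tau_r) * y_{t-1} + (J_R / tau_r) * r_{t-1} + e_t.
rng = np.random.default_rng(0)
T, tau_true, JR_true = 200, 4.0, -2.0
r_gap = rng.normal(0.0, 1.0, T)          # hypothetical quarterly rate-gap series
y_gap = np.zeros(T)
for t in range(1, T):                    # simulate the assumed data-generating process
    y_gap[t] = (1 - 1/tau_true) * y_gap[t-1] + (JR_true/tau_true) * r_gap[t-1] \
               + rng.normal(0.0, 0.1)

# Regress y_t on y_{t-1} and r_{t-1}; recover 1/tau_r and J_R from the betas.
X = np.column_stack([y_gap[:-1], r_gap[:-1]])
beta, *_ = np.linalg.lstsq(X, y_gap[1:], rcond=None)
inv_tau = 1.0 - beta[0]
J_R = beta[1] / inv_tau
print(f"1/tau_r = {inv_tau:.2f}, J_R = {J_R:.2f}")  # ~0.25 and ~-2.0
```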
We have found that the IS puzzle has two primary sources. The first is the choice of time range over which the relationship between the output gap and the rate gap is studied. This is perhaps the most consistent feature of the IS puzzle, having figured prominently in its discovery by Nelson (2001, 2002a) and in all studies thereafter. If a material change in the rate gap is not present in the data, estimation of the dependence of the output gap on the rate gap will be complicated significantly by the presence of other factors that bear on the value of the output gap; an estimation challenge that has bested most attempts to control for other factors. Consequently, focusing on (or at least including) a period of time during which the functional relationship one is attempting to estimate is the primary (in terms of magnitude) effect seen in the data provides as close to a controlled experiment as one is likely to encounter in this area of macroeconomics. Indeed, one is reminded of Fisher's (1925) comment on his choice of time frame when studying the relationship between inflation and output which, adapted to our examination of the IS curve, reads: "[t]his period seemed the most suitable for the purpose (namely to obtain the best estimate of the true influence of [rate changes] on [output]) chiefly because, during this period the [rate] changes were so great." The second primary source is the lack of robustness of the IS curve as specified by Eq. (1). We found that the sensitivity of the IS curve with respect to inflation measure was reduced significantly by using an anelastic specification of the IS curve that relaxes the treatment of the real rate given by Eq. (1) to one in which the temporal evolution of the rate gap is on the same footing as that of the output gap. The anelastic specification of the IS curve has a natural extension to other variables 10 and the application of this specification to the variables proposed by Nelson (2001, 2002a), Goodhart and Hofmann (2005a,b), and Hafer and Jones (2008) represents an interesting opportunity for future work. Another promising opportunity along this line lies in the enhancements to the estimation of the real rate developed by Laubach and Williams (2003) and furthered by Garnier and Wilhelmsen (2009) and Holston et al. (2017). In summary, we have shown that the IS puzzle is a result of both the chosen time range of the data used to study the IS curve and the specification of the IS curve. With a time range chosen to include a materially dynamic range for the real-rate gap, and a specification of the IS curve that includes a time-dependent natural rate in the real-rate gap, the IS puzzle in the G7 countries can be solved.
Atmospheric Chemistry and Physics

Modeling organic aerosols in a megacity: comparison of simple and complex representations of the volatility basis set approach

A multi-model study of the long-range transport of ozone and its precursors from major anthropogenic source regions was coordinated by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP) under the Convention on Long-range Transboundary Air Pollution (LRTAP). Vertical profiles of ozone at 12-h intervals from 2001 are available from twelve of the models contributing to this study and are compared here with observed profiles from ozonesondes. The contributions from each major source region are analysed for selected sondes, and this analysis is supplemented by retroplume calculations using the FLEXPART Lagrangian particle dispersion model to provide insight into the origin of ozone transport events and the cause of differences between the models and observations. The closest agreement between the model calculations and ozonesonde measurements is seen in the winter and autumn months. Following the increase in photochemical activity in the spring and summer months, the spread in model results increases, and the agreement between ozonesonde measurements and the individual models deteriorates further. At selected sites, calculated contributions to ozone levels in the free troposphere from intercontinental transport are shown. Intercontinental transport is identified based on differences in model calculations with unperturbed emissions and emissions reduced by 20% by region. Intercontinental transport of ozone is finally determined based on differences in model ensemble calculations. With emissions perturbed by 20% per region, calculated intercontinental contributions to ozone in the free troposphere range from less than 1 ppb to 3 ppb, with small contributions in winter. The results are corroborated by the retroplume calculations. At several locations the seasonal contributions to ozone in the free troposphere from intercontinental transport differ from what was shown earlier at the surface using the same dataset. The large spread in model results points to a need for further evaluation of the chemical and physical processes in order to improve the credibility of global model results.

Introduction

Organic aerosol (OA) comprises a large fraction (20 to 90%) of submicron particulate matter in the atmosphere, affecting radiative climate forcing and human health (Murphy et al., 2006; Zhang et al., 2007). Accurate representation of OA in models requires a good understanding of the processes leading to formation and removal of OA in the atmosphere. OA is composed of directly emitted primary organic aerosols (POA) and photochemically produced secondary organic aerosols (SOA). POA is emitted from a variety of sources such as fossil fuel and biomass burning. POA has traditionally been considered as non-volatile and non-reactive in air quality models. However, Robinson et al. (2007) recently showed that instead of a static fixed non-volatile mass, POA is a dynamic system formed due to gas-particle mass transfer of a multi-component mixture of semi-volatile organic species evolving as a function of atmospheric variables such as dilution, temperature, and pre-existing OA, as predicted by absorptive partitioning theory (Shrivastava et al., 2006). Thus, the conceptual model of Robinson et al.
(2007) emits organic precursors which are lumped into nine surrogate volatility species separated by a factor of 10 at 298 K (the volatility basis set, or VBS), classified as: (1) semi-volatile organic compounds (SVOC), with effective saturation concentrations (C*) ranging from 10^-2 to 10^3 µg m^-3, and (2) intermediate volatility organic compounds (IVOC), with C* ranging from 10^4 to 10^6 µg m^-3. A substantial portion of SVOC mass will partition to POA in the atmosphere, while in the absence of photochemistry the IVOC species remain as organic vapors under most atmospheric conditions. This multi-component mixture of SVOC and IVOC (S/IVOC) species is assumed to undergo gas-phase photochemical oxidation by OH radicals, resulting in the formation of successively lower volatility species, which may condense to form SOA (Robinson et al., 2007; Shrivastava et al., 2006).

SOA formation also occurs through gas-phase oxidation of volatile organic compounds (VOCs, with C* greater than 10^7 µg m^-3), such as biogenic VOCs (e.g., terpenes and isoprene) and traditional anthropogenic VOCs (e.g., aromatics and higher-MW alkanes and olefins) (Tsimpidi et al., 2010). However, SOA formation through oxidation of S/IVOC precursors is thought to be more efficient than through VOC precursors, as S/IVOC species have lower volatility, favoring partitioning to the particle phase after oxidation (Donahue et al., 2006). SOA formed by photochemical oxidation of S/IVOC precursors is named "SI-SOA", while oxidation of biogenic/traditional anthropogenic VOCs forms "V-SOA".

Aerosol Mass Spectrometer (AMS) measurements and subsequent analysis with Positive Matrix Factorization (PMF) classify total OA as hydrocarbon-like OA (HOA, representing fresh primary OA) and oxygenated OA (OOA, representing OA formed after chemical oxidation in the atmosphere) (Ulbrich et al., 2009). HOA and OOA have been shown to be good surrogates of urban POA and total SOA, respectively, in the atmosphere (Zhang et al., 2007). Recent results show that SOA accounts for a large fraction of the OA burden throughout the atmosphere, with its fraction of total OA increasing from urban to remote continental locations (Zhang et al., 2007). Previous "bottom up" chemical transport models based on parameterizations derived from laboratory experiments severely under-predicted the magnitude and evolution of SOA in polluted regions (de Gouw et al., 2005; Goldstein and Galbally, 2007; Hallquist et al., 2009; Heald et al., 2005; Volkamer et al., 2006), while predictions in unpolluted, biogenically-dominated regions do not show a similar under-prediction (Slowik et al., 2010; Tunved et al., 2006). Recent modeling efforts have significantly increased the amount of SOA modeled in polluted regions, bringing model predictions closer to measurements (Dzepina et al., 2009; Hodzic et al., 2010). Using a box model and data from the MCMA-2003 campaign, Dzepina et al. (2009) combined different modeling approaches to close the gap between model and measurements for SOA. Dzepina et al. (2009) found that SI-SOA accounted for about half of the observed SOA mass. However, large uncertainties remain in terms of various model parameters and other SOA formation pathways and yields. Recently, some models have been proposed which "age" semi-volatiles formed in V-SOA mechanisms by gas-phase reaction, as in e.g. Tsimpidi et al. (2010). Dzepina et al. (2011) recently reported that the Tsimpidi et al.
(2010) A-V-SOA mechanism produces enough SOA to match the regional observations, and that a large SOA over-prediction is observed when SI-SOA is also implemented.

In addition to mass, aerosol hygroscopicity is an important parameter affecting the direct and indirect radiative forcing of climate. The hygroscopicity parameter κ was recently shown to be directly related to the elemental oxygen-to-carbon molar ratio of OA (O:C ratio) for ambient aerosols in urban, remote and forest locations (Jimenez et al., 2009). Most large-scale chemical transport models are not designed to represent the O:C ratio of OA due to the complexity of the processes involved. Prediction of O:C ratios also requires separate tracers for the carbon and oxygen content of both freshly emitted and oxidized organic species in the atmosphere, with each class of organics (such as fresh or oxidized) being represented by a separate VBS of 8 or 9 volatility intervals (Hodzic et al., 2010; Shrivastava et al., 2008). In chemical transport models, advection of this large set of organic species requires more computational time than chemistry and gas-particle partitioning combined. Models running online meteorology, such as the Weather Research and Forecasting model coupled to chemistry (WRF-Chem) (Grell et al., 2005), are especially susceptible to this large computational burden as compared to offline representations of meteorology in chemical transport models such as CHIMERE (Hodzic et al., 2009) and CMAQ (Carlton et al., 2010), because the advection time step is similar for meteorology and chemistry. An important advantage of online models is that they permit aerosol-radiation-cloud-chemistry interaction processes and the associated feedback effects on meteorology to be simulated, whereas offline models cannot study these processes.

The objectives of this work are to: (1) implement a detailed OA mechanism in WRF-Chem based on a 9-species VBS that includes SOA formation from S/IVOC precursors (Robinson et al., 2007) and traditional anthropogenic/biogenic VOCs; (2) modify the ROB mechanism in terms of oxygen added per generation of oxidation and test predictions of O:C ratios; (3) develop a highly condensed 2-species SOA mechanism and evaluate it in terms of performance and computational speed compared to the more detailed VBS mechanism; and (4) evaluate the OA mechanisms using field measurements of organic aerosols collected during the 2006 MILAGRO field campaign in the vicinity of Mexico City. We will show that it is possible to develop highly condensed OA mechanisms that give very similar results to the detailed VBS mechanisms and are more suitable for real-time forecasting and climate model applications. It is also extremely important to test simplified organic aerosol mechanisms using a model configuration that can resolve much of the temporal and spatial variation of observed organic aerosols before these mechanisms are routinely used in global models with coarse spatial resolution, which are difficult to evaluate using point measurements. The terminology used for the various classes of organic species in this study is summarized in Table 1 for reference:

- OA, Organic Aerosol: includes both primary and secondary mass components (POA + SOA, defined below).
- POA, Primary Organic Aerosol: organic aerosol either directly emitted or formed by condensation of organic vapors before photochemical oxidation in the atmosphere.
- SOA, Secondary Organic Aerosol: organic aerosol formed after photochemical oxidation and condensation of organic vapors (V-SOA + SI-SOA) in the atmosphere.
- V-SOA: component of SOA formed by photochemical oxidation of all VOC precursors (A-V-SOA + B-V-SOA, described below).
- A-V-SOA: component of SOA formed by photochemical oxidation of traditional anthropogenic VOC precursors (e.g. ARO1, ARO2, ALK4, etc.).
- B-V-SOA: component of SOA formed by photochemical oxidation of biogenic VOC precursors.
- SI-SOA: component of SOA formed by photochemical oxidation of S/IVOC precursors (A-SI-SOA and BB-SI-SOA for the anthropogenic and biomass burning sectors, respectively).

The model uses the MOSAIC aerosol module (Zaveri et al., 2008). Aerosol species in MOSAIC include sulfate, nitrate, ammonium, sodium, chloride, calcium, carbonate, other inorganics (i.e. dust), methanesulfonate, elemental carbon, organic matter, and aerosol water; however, until now organic matter has been treated as non-volatile POA. Additional details of the WRF-Chem model as applied in this study have been described previously (Fast et al., 2009). Here, only processes and modifications to WRF-Chem relevant to simulating organic aerosol components are described in detail. The 9-species VBS mechanism for organic aerosols implemented in WRF-Chem is described first, followed by a discussion of the assumptions needed to develop a condensed 2-species mechanism.

Detailed 9-species VBS mechanism for OA

The 9-species VBS mechanism for POA and non-traditional SOA implemented in WRF-Chem is similar to that described by Robinson et al. (2007) and Shrivastava et al. (2008), with modifications for the globally non-volatile fraction and the amount of oxygen added per generation of oxidation, as described later. Previous studies have already implemented versions of this mechanism using offline meteorological models (Hodzic et al., 2010; Tsimpidi et al., 2010). The mechanism treats POA as 9 surrogate species with C* values (at 298 K and 1 atm) of 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4, 10^5, and 10^6 µg m^-3. For each surrogate species, we treat both the aerosol-phase species (in 4 size bins for this study) and the gas-phase species. The bin boundaries for the size bins are 0.0391, 0.156, 0.625, 2.50, and 10.0 µm (dry diameter). Aerosol-phase species of higher volatility (>10^4 µg m^-3) could be neglected with little effect on OA predictions, but were included for completeness. In future applications, aerosol species of higher volatility would be excluded from WRF-Chem to save computational time. The POA species are segregated by two emissions sectors: biomass burning and anthropogenic (predominantly fossil fuel). To allow calculating O:C ratios for the modeled OA, separate model species are used for the oxygen and non-oxygen (C, H, N) components of each species. This gives the following POA species:

- POA(a)_{i,e,x,n} = aerosol-phase POA, where i is the volatility species (1-9), e is either the biomass burning or anthropogenic emission sector, x is either the oxygen or non-oxygen component, and n is the size bin (1-4).
- POA(g)_{i,e,x} = corresponding gas-phase POA species.

Partitioning between the gas- and aerosol-phase species is calculated using absorptive partitioning theory, assuming a thermodynamic equilibrium approach as described by Donahue et al. (2006).
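For concreteness, the equilibrium calculation can be sketched as follows. This is a minimal stand-alone illustration of Donahue-style VBS partitioning, not the actual MOSAIC/WRF-Chem implementation; the bin concentrations are hypothetical, and pre-existing absorbing mass is neglected.

```python
import numpy as np

def vbs_partition(c_star, c_total, tol=1e-8, max_iter=200):
    """Equilibrium VBS partitioning (after Donahue et al., 2006).
    c_star:  saturation concentrations C* per volatility bin (ug/m3)
    c_total: total (gas + aerosol) concentration per bin (ug/m3)
    Solves C_OA = sum_i C_i / (1 + C*_i / C_OA) by fixed-point iteration
    and returns the aerosol-phase concentration in each bin."""
    c_oa = max(0.5 * c_total.sum(), 1e-12)      # initial guess
    for _ in range(max_iter):
        xi = 1.0 / (1.0 + c_star / c_oa)        # particle-phase fraction per bin
        c_oa_new = np.sum(c_total * xi)
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return c_total * xi

# the 9 VBS bins used here: C* = 1e-2 ... 1e6 ug/m3 at 298 K
c_star = np.logspace(-2, 6, 9)
c_tot = np.full(9, 1.0)                          # hypothetical 1 ug/m3 per bin
print(vbs_partition(c_star, c_tot))              # low-C* bins stay in the particle phase
```

The fixed-point form makes the key behavior visible: bins with C* well below the total OA loading partition almost entirely to the particle phase, while high-C* bins remain as vapor.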
The O:C ratio of bulk particulate OA evolves as a function of emissions of fresh primary organic material; oxidation of organic vapors, with addition of oxygen mass after each generation of oxidation (Robinson et al., 2007); gas-particle partitioning, which varies with ambient factors such as temperature, dilution and OA concentration; and removal of OA and gas-phase semi-volatiles by dry and wet deposition. An OM/OC (organic mass to organic carbon) ratio of 1.57 and 1.25 for biomass burning and anthropogenic emissions, respectively, is assumed when converting OM emission rates to OC. Elemental O:C ratios of 0.3 and 0.06 are assumed for fresh biomass burning and anthropogenic emissions for calculating the oxygen fraction of each species. In addition, a non-oxygen (carbon, hydrogen and nitrogen) to carbon ratio of 1.17 is assumed for all species. These assumptions are consistent with PMF analysis of ambient AMS data by Aiken et al. (2008) in Mexico City. The sum of the oxygen and non-oxygen parts of each species equals the total OM input to WRF-Chem, so all OM mass is accounted for in the gas-particle partitioning calculations. The gas-phase POA species react with OH to produce more-oxygenated and lower-volatility SI-SOA species, as described in Sect. 2.1.2.2. These SI-SOA species are represented in the mechanism by SI-SOA(a)_{i,e,x,n} and SI-SOA(g)_{i,e,x}, for i = 1-8, representing 8 oxidized volatility species as described by Shrivastava et al. (2008).

In addition to the 9-species VBS mechanism for POA and non-traditional SOA, we include a 4-species VBS treatment of traditional SOA (referred to as V-SOA) produced by oxidation of biogenic and traditional anthropogenic VOCs. C* for describing V-SOA ranges from 1 to 10^4 µg m^-3. We segregate the V-SOA species by the parent-VOC emissions sector (biogenic and traditional anthropogenic). This gives the following V-SOA species:

- V-SOA(a)_{i,e,n} = aerosol-phase V-SOA, where i is the volatility species (1-4), e is either the biogenic or traditional anthropogenic emission sector, and n is the size bin (1-4).
- V-SOA(g)_{i,e} = corresponding gas-phase V-SOA species.

The mechanism does not treat further oxidation of the V-SOA gas-phase species, so separate model species for the oxygen and non-oxygen components are not required for calculating O:C ratios. Instead, we assume a fixed OM:OC (mass) ratio of 1.90 and an O:C (elemental) ratio of 0.4 for V-SOA (Aiken et al., 2008). Overall, there are 180 POA species (36 gas, 144 aerosol), 160 SI-SOA species (32 gas, 128 aerosol), and 40 V-SOA species (8 gas, 32 aerosol) in the mechanism. The total SOA formed at any time within the modeling domain is the sum of SI-SOA and V-SOA after the gas-particle partitioning calculations.

Emissions

An updated anthropogenic emissions inventory is used for MILAGRO 2006 from the work of Song et al. (2010). The anthropogenic emissions inventory includes traffic emissions and municipal trash burning, as well as emissions from a wide range of point and area sources. Municipal trash burning emissions are estimated to be comparable in magnitude to traffic emissions, but most of the trash burning sources are located outside the city. Also, municipal trash burning is expected to have an OA spectrum similar to that of fresh vehicular emissions, dominated by hydrocarbon-like OA (HOA), with some similarities to BBOA (Mohr et al., 2009). In this work, OA emissions from municipal trash burning and vehicular emissions are lumped together. BBOA represents primary biomass burning OA emissions.
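Each OM emission input is split into oxygen and non-oxygen model species using the bookkeeping described in Sect. 2.1. The sketch below, with hypothetical emission rates, shows that arithmetic; note that 1.17 + (16/12) x O:C reproduces the assumed OM/OC ratios exactly, so the two parts always sum back to the input OM.

```python
def split_om_emissions(om_rate, om_to_oc, o_to_c, noc_to_c=1.17):
    """Split an OM emission rate into oxygen / non-oxygen model species.
    om_to_oc: OM/OC mass ratio; o_to_c: elemental O:C ratio (converted to a
    mass basis via 16/12); noc_to_c: non-oxygen (C+H+N) to carbon mass ratio."""
    oc = om_rate / om_to_oc                 # organic-carbon mass
    oxygen = oc * o_to_c * (16.0 / 12.0)    # oxygen mass
    non_oxygen = oc * noc_to_c              # carbon + hydrogen + nitrogen mass
    return oxygen, non_oxygen

# biomass burning: OM/OC = 1.57, O:C = 0.30; anthropogenic: 1.25, 0.06
print(split_om_emissions(10.0, 1.57, 0.30))   # hypothetical 10 ug m-3 h-1 input
print(split_om_emissions(10.0, 1.25, 0.06))
```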
Biomass burning estimates are derived from satellite remote sensing data. Emissions of gases and particles from open burning were calculated using the Fire Inventory from NCAR version 1 (http://bai.acd.ucar.edu/Data/fire/). This method is based on the estimation framework described by Wiedinmyer et al. (2006). Fire counts (MODIS Data Processing System, MODAPS) were provided by the University of Maryland (Giglio et al., 2003; MODIS Rapid Response Project). Land cover was determined with the MODIS Land Cover Type product (Friedl et al., 2010), and fuel loadings were taken from Hoelzemann et al. (2004). Emission factors were taken from multiple sources (Akagi et al., 2011; Andreae and Merlet, 2001). The non-methane organic compounds were speciated to the SAPRC-99 mechanism based on species-specific emission factors and the ecosystem type in which the fire burned.

Representation of biomass burning within models is uncertain due to errors arising from calculations of plume rise and horizontal mixing of point sources, and also because several small fires may not be captured by remote sensing data (Fast et al., 2009). In this work, fire emission injection heights are treated as described in Fast et al. (2009), Sect. 3.3. Emissions from fires are distributed uniformly within ∼300 m of the ground, since insufficient information (e.g. fire temperature) was available to compute plume rise. Visual observations from aircraft also suggested low plume rise for these fires (R. Yokelson, personal communication, 2008). SVOC and IVOC emissions corresponding to both anthropogenic and biomass burning emissions are derived as follows. Total SVOC emissions (organic vapors with C* of 0.01-10^4 µg m^-3) are estimated as 3 times POA emissions for both anthropogenic and biomass burning emissions in Mexico City, following Hodzic et al. (2010) and Tsimpidi et al. (2010). As discussed by Tsimpidi et al. (2010), POA emissions in the Mexico City Metropolitan Area (MCMA) were derived for ambient conditions and reflect the aerosol fraction remaining after evaporation of the associated semi-volatile vapors. Since SVOC emissions were not measured in Mexico City, the ratio of SVOC to POA (assumed to be 3 in this work) is poorly constrained. In the future, it is recommended to measure and include SVOC in emission inventories to better constrain OA predictions in models.

The total SVOC emissions are then distributed among the different volatility species using the mass fractions suggested by Robinson et al. (2007), with one modification. Recently, Cappa and Jimenez (2010) found that a significant fraction of POA emissions in Mexico City was globally non-volatile, i.e. it would remain in the particle phase under all ambient conditions. Using the default volatility distribution from Robinson et al. (2007) in the CHIMERE model, Hodzic et al. (2010) found that POA was too volatile downwind of Mexico City. In this work, using the globally non-volatile fraction suggested by Cappa and Jimenez (2010), 9 % of POA emissions from biomass burning and 22 % of POA emissions from anthropogenic sources are represented by the lowest volatility species, with C* of 0.01 µg m^-3, thus rendering this fraction non-volatile under all relevant ambient conditions in and around Mexico City. It should be noted that this assumption did not change the volatility distribution for biomass burning emissions used by Hodzic et al.
(2010). Also, the volatility distribution for anthropogenic carbon used in this work is similar to that for biomass carbon, except for the use of 23 % of POA emissions for the species with C* of 0.01 µg m^-3, in comparison to the 9 % used by Hodzic et al. (2010) for anthropogenic carbon. The IVOC emissions (organic vapors with C* of 10^4-10^6 µg m^-3, as shown in Table 1) are estimated as 1.5 times SVOC emissions (4.5 times traditional POA emissions) for both biomass burning and anthropogenic emission sources, consistent with Robinson et al. (2007). Thus, the sum of all SVOC and IVOC precursors in the inventory is 7.5 times the mass of the traditional POA emissions inventory. The addition of this large pool of S/IVOC precursors to the inventory is supported by an observed gap between measured OH reactivity and calculated OH reactivity based on known VOC precursors in Mexico City (Dzepina et al., 2009). Table 2 shows the mass factors (f_i) used to calculate S/IVOC emissions from POA emissions (converted to OC) in each category. The non-oxygen part of OM is calculated as 1.17 times the carbon part, while the oxygen part is derived from the O:C ratios, as described in Sect. 2.1.

The MEGAN (Model of Emissions of Gases and Aerosols from Nature, http://bai.acd.ucar.edu) model (Guenther et al., 2006) is used to generate biogenic emissions in the modeling domain within and around Mexico City. The 138 biogenic species from MEGAN are lumped into 3 biogenic VOC classes: isoprene (ISOP), terpenes (TERP) and sesquiterpenes (SESQ). In addition, anthropogenic VOC emissions, including lumped classes corresponding to alkanes (ALK4 and ALK5), olefins (OLE1 and OLE2), and aromatics (ARO1 and ARO2), are included in the inventory corresponding to the SAPRC-99 mechanism, as described by Tsimpidi et al. (2010). Isoprene and terpene emissions calculated by the NEI emissions inventory (http://mexiconei.blogspot.com/2007/01/national-emissions-inventory-now.html) for the Mexico domain are removed to avoid double counting of biogenic emissions (already calculated by the MEGAN model within the domain). This also removes the anthropogenic isoprene emissions within the modeling domain. Hodzic et al. (2009) showed that the contribution of SOA formed by oxidation of anthropogenic isoprene precursors was low, ∼0.2 µg m^-3 on average, during March 2006 at the T0 site in Mexico City. As compared to Hodzic et al. (2009), this study predicts significantly lower V-SOA, as discussed in the Supplement. This difference is chiefly due to lower yields for biogenic SOA precursors, including isoprene, in this study, as compared to the parameterizations used by Hodzic et al. (2009), which were based on yields from Henze and Seinfeld (2006) and Pun et al. (2006). However, V-SOA is expected to be a minor contributor to total SOA in the Mexico City region (Hodzic et al., 2010). Also, neglecting anthropogenic isoprene emissions does not affect the main objective of this paper (the intercomparison of different OA formulations), as all the formulations use the same isoprene emissions.
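The S/IVOC emission scaling described at the start of this subsection can be summarized in a short sketch. The 3x/1.5x scalings and the globally non-volatile fractions follow the text above, but the per-bin mass fractions below are illustrative placeholders only, not the published Robinson et al. (2007) values.

```python
import numpy as np

def sivoc_emissions(poa_oc, frac_nonvolatile, f_svoc, f_ivoc):
    """Distribute S/IVOC emissions (as OC) over the 9 VBS bins,
    C* = 1e-2 ... 1e6 ug/m3. SVOC = 3 x POA (lower bins), IVOC = 1.5 x SVOC
    (bins with C* = 1e4..1e6); the globally non-volatile POA fraction
    (Cappa and Jimenez, 2010) is forced into the lowest bin (C* = 0.01)."""
    svoc, ivoc = 3.0 * poa_oc, 4.5 * poa_oc
    emis = np.zeros(9)
    emis[0] = frac_nonvolatile * poa_oc        # globally non-volatile part
    emis[1:6] = (svoc - emis[0]) * f_svoc      # remaining SVOC mass
    emis[6:9] = ivoc * f_ivoc                  # IVOC bins
    return emis

f_svoc = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # placeholders, sum to 1
f_ivoc = np.array([0.30, 0.30, 0.40])               # placeholders, sum to 1
emis = sivoc_emissions(poa_oc=1.0, frac_nonvolatile=0.22, f_svoc=f_svoc, f_ivoc=f_ivoc)
print(emis, emis.sum())   # total S/IVOC pool = 7.5 x POA, as stated in the text
```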
Gas phase chemistry

All gas-phase chemistry equations leading to ozone and SOA formation are included within the Kinetic Pre-Processor (KPP) in WRF-Chem (Damian et al., 2002). The SAPRC-99 mechanism includes 211 reactions of 56 gases and 18 free radicals. This mechanism is updated to include gas-phase oxidation of the various S/IVOC precursors forming SOA (A-SI-SOA and BB-SI-SOA for anthropogenic and biomass burning SI-SOA, respectively), and SOA formed by oxidation of VOC precursors from traditional anthropogenic and biogenic emissions (A-V-SOA and B-V-SOA, respectively). The detailed treatment of OA and the inorganic MOSAIC aerosol module in WRF-Chem constitute a comprehensive representation of the processes leading to organic and inorganic aerosol formation in the atmosphere. In the present version, the gas-particle partitioning of SOA species is treated as an instantaneous equilibrium process, while the gas-particle mass transfer of inorganic species is treated as a particle size-dependent dynamic process. Also, the organic and inorganic species are not allowed to interact with each other in the particle-phase state and water uptake calculations. These assumptions will be relaxed in the future, as experimentally derived parameterizations of the complex physicochemical interactions between organic and inorganic species, as well as dynamic condensation, evaporation, and reactive uptake of organic gases, are implemented in the MOSAIC aerosol module.

SI-SOA formation

Observations suggest continued SOA production as pollutants leave the Mexico City basin during low biomass burning periods (DeCarlo et al., 2008; Kleinman et al., 2008). Thus, multi-generational SOA chemistry and/or SOA formation from longer-lived precursors are consistent with ambient observations. SI-SOA formation from multi-generational gas-phase oxidation of S/IVOC precursors is calculated using the oxidation parameters proposed by Robinson et al. (2007), with one modification. The mass of the parent SVOC or IVOC species is assumed to increase by 15 % for each generation of oxidation, to account for added oxygen mass or functionalization. This is equivalent to assuming that 2 atoms of oxygen are added to an equivalent C15H32 precursor per generation of oxidation. In comparison, Robinson et al. (2007) assumed an addition of 7.5 % mass due to added oxygen. The oxidation mechanism proposed by Robinson et al. (2007) was not designed to predict the oxidation state of OA in the atmosphere, and use of 7.5 % added oxygen mass has been shown to severely under-predict O:C ratios in the atmosphere (Hodzic et al., 2010). Jimenez et al. (2009) suggested that 1 to 3 oxygen atoms could be added per generation of oxidation, and Grieshop et al. (2009) used a 40 % increase in mass due to addition of oxygen per generation of oxidation for wood smoke. Thus, the addition of 2 oxygen atoms is a fairly conservative assumption, improving O:C ratio predictions as compared to the 1 oxygen atom added by Robinson et al. (2007). An OH reaction rate constant of 4×10^-11 cm^3 molecule^-1 s^-1 is assumed for all SVOC and IVOC species.
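Numerically, one aging step of this scheme amounts to a first-order transfer of mass down the volatility ladder with a 15 % mass gain. A minimal single-tracer sketch follows, ignoring the separate oxygen/non-oxygen bookkeeping and the partitioning step; the bin concentrations and OH level are hypothetical.

```python
import numpy as np

K_OH = 4.0e-11   # cm3 molecule-1 s-1, OH rate constant assumed for all S/IVOC

def age_vbs(gas, oh_conc, dt):
    """One time step of multi-generational S/IVOC aging.
    gas[i]: gas-phase mass in volatility bin i (C* = 1e-2 ... 1e6 ug/m3);
    the lowest bin (index 0) is non-reactive, following Robinson et al. (2007).
    Each oxidized parcel drops one decade in C* and gains 15 % oxygen mass."""
    loss = gas * (1.0 - np.exp(-K_OH * oh_conc * dt))   # mass oxidized this step
    loss[0] = 0.0                                       # bin 0 does not react
    aged = gas - loss
    aged[:-1] += 1.15 * loss[1:]                        # move down one bin, +15 %
    return aged

gas = np.ones(9)                                        # hypothetical 1 ug/m3 per bin
print(age_vbs(gas, oh_conc=2.0e6, dt=600.0))            # 10-min step, daytime OH
```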
The equations governing the oxidation of S/IVOC precursors are written within the KPP module of WRF-Chem schematically as follows:

POA(g)_{i,e,c} + OH → SI-SOA(g)_{i-1,e,c} + 0.15 SI-SOA(g)_{i-1,e,o}   (1)
POA(g)_{i,e,o} + OH → SI-SOA(g)_{i-1,e,o} + OH   (2)
SI-SOA(g)_{i,e,c} + OH → SI-SOA(g)_{i-1,e,c} + 0.15 SI-SOA(g)_{i-1,e,o}   (3)
SI-SOA(g)_{i,e,o} + OH → SI-SOA(g)_{i-1,e,o} + OH   (4)

where i denotes any given volatility species except the lowest volatility, i - 1 denotes the species with C* equal to one tenth that of species i, e denotes the source type, and the subscripts c and o represent the non-oxygen and oxygen parts of a given species, respectively. As shown by Eqs. (1) and (3), oxidation of the non-oxygen part of the SI-SOA precursor i results in formation of the non-oxygen and oxygen parts (15 % by mass for 2 oxygen atoms added) of SI-SOA with the successively lower volatility i - 1. Since the molecular weights of all VBS species are assumed to be 250 g mol^-1, mass yields are the same as molar yields, so 0.15 is used as the oxygen yield per oxidation step within KPP. Equations (2) and (4) account for the movement of the oxygen part of each precursor to lower volatility. Thus, at any time, both the non-oxygen and oxygen parts of any given species move to successively lower volatility species due to oxidation, satisfying mass conservation. The lowest volatility species (C* equal to 0.01 µg m^-3) was assumed to be non-reactive, neglecting fragmentation reactions, following Robinson et al. (2007). In Eqs. (2) and (4), OH was added to both sides of the equations to make sure that OH loss is not double counted by oxidation of the non-oxygen and oxygen parts of the same species.

V-SOA formation

SOA formation from biogenic and traditional anthropogenic VOCs (V-SOA) is represented using fixed yields with a 4-product VBS, following Tsimpidi et al. (2010). For alkane, olefin, isoprene, terpene and sesquiterpene SOA species, mass yields are similar to Tsimpidi et al. (2010). For aromatic species, yields from Hildebrandt et al. (2009) are implemented. Aging of VOCs in the gas phase can be represented by the following equations:

VOC(g) + OH → Σ_i a_i V-SOA(g)_i   (5)
a_i = B a_{i,high} + (1 - B) a_{i,low}   (6)

where i is the volatility species, a_i is the overall NOx-dependent molar yield calculated from Eq. (6), a_{i,high} and a_{i,low} are the molar yields under high- and low-NOx conditions respectively, B is the NOx branching ratio as defined by Lane et al. (2008), and V-SOA(g)_i is the gas-phase V-SOA precursor concentration. The reaction rates of the various VOC species with OH radicals in Eq. (5) are already present within the SAPRC-99 mechanism, as a part of the gas-phase chemistry, with the exception of sesquiterpenes. In this work, it is assumed that sesquiterpenes have the same OH reaction rate as the terpene species in SAPRC-99. While sesquiterpenes may react much faster with OH radicals than terpenes, their emissions are significantly lower; sesquiterpene concentrations within the modeling domain are at least an order of magnitude lower than terpenes. Formation of V-SOA(a)_i is represented by gas-particle partitioning of V-SOA(g)_i as defined by absorptive partitioning theory, as discussed by Donahue et al. (2006). However, in contrast to Tsimpidi et al. (2010), no further aging of the V-SOA(g)_i species is implemented in WRF-Chem, as including it leads to a larger regional over-prediction of SOA (Dzepina et al., 2011). Aging parameterizations based on smog chamber measurements are very uncertain, as they try to predict SOA formation over longer time scales (photochemical ages) than have so far been accessible in chambers (Ng et al., 2010). Smog chamber measurements need to be carried out to much longer time scales (over several days) and OH exposures in order to quantify and parameterize multi-generational V-SOA formation from both biogenic and traditional anthropogenic precursors.
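Equation (6) is a simple linear blend of the two chamber-derived yield sets. A minimal sketch, with hypothetical yield values; the definition of B as the fraction of RO2 reacting with NO follows Lane et al. (2008):

```python
def nox_dependent_yield(a_high, a_low, branching):
    """Overall molar yield for one V-SOA volatility species (Eq. 6).
    branching: NOx branching ratio B, i.e. the fraction of peroxy radicals
    reacting via the high-NOx (RO2 + NO) pathway (Lane et al., 2008)."""
    return branching * a_high + (1.0 - branching) * a_low

# hypothetical high-/low-NOx yields for one bin, in a polluted (B = 0.8) air mass
print(nox_dependent_yield(a_high=0.05, a_low=0.15, branching=0.8))  # -> 0.07
```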
In this work, V-SOA yields are NOx dependent, as described by Tsimpidi et al. (2010). Table S1 in the Supplement lists the mass yields of the various V-SOA precursors represented by the 4-product VBS species V-SOA(g)_i. Molar yields are required, as the SAPRC-99 mechanism lists equations and reaction rates in molecular units within the Kinetic Pre-Processor (KPP) in WRF-Chem. The mass yields listed in Table S1 are converted to molar yields by multiplying them by the ratio of the molecular weights of the V-SOA(g)_i species (assumed to be 250 g mol^-1) and the corresponding VOC(g) precursors, taken from the CAMx User's Guide for the SAPRC 1999 mechanism (CAMx v5.10 User's Guide, 2009). The assumed enthalpy of vaporization H_vap for the V-SOA(g)_i species is set equal to that of the same-volatility SI-SOA(g)_i species shown in Table 1.

Condensed 2-species OA mechanism

As discussed earlier, the addition of a large number of species represents a huge computational burden in terms of advection alone. Development of a condensed 2-species mechanism is therefore attractive for computational efficiency in large-scale global simulations. In this section, the assumptions and parameterizations used to develop the condensed 2-species mechanism within WRF-Chem are discussed.

The condensed mechanism represents POA by two volatility species with C* values (at 298 K and 1 atm) of 10^-2 and 10^5 µg m^-3, respectively. Separate POA species are used to represent the two emissions sectors and the oxygen and non-oxygen (C, H, N) components of each species, as described in Sect. 2.1. This gives the following:

- POA(a)_{i,e,x,n} = aerosol-phase POA, where i is the volatility species, e is either the biomass burning or anthropogenic emission sector, x is either the oxygen or non-oxygen component, and n is the size bin (1-4), as described in Sect. 2.1.

The gas-phase POA(g)_{i=2,e,x} species (C* of 10^5 µg m^-3) reacts with OH to produce SI-SOA(a)_{i=1,e,x,n} and SI-SOA(g)_{i=1,e,x} (C* of 10^-2 µg m^-3). Note that the material represented by species 2 would remain almost entirely in the gas phase under most atmospheric conditions due to its high volatility.

In addition to the 2-species VBS mechanism for POA and non-traditional SI-SOA, we include a 1-species treatment of traditional SOA (referred to as V-SOA) produced by oxidation of biogenic and traditional anthropogenic VOCs. The V-SOA C* is assumed to be equal to 1 µg m^-3, corresponding to the lowest volatility species in the 4-product VBS for V-SOA in Sect. 2.1.2. We segregate the V-SOA species by the parent-VOC emissions sector (biogenic and traditional anthropogenic), giving V-SOA(a)_{i=1,e,n} and V-SOA(g)_{i=1,e}, as described in Sect. 2.1.2.
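A quick check shows why these two C* values are convenient: at any plausible OA loading, each species sits firmly in one phase, so the equilibrium partitioning is nearly binary. The 10 µg m^-3 loading below is an assumption for illustration.

```python
import numpy as np

c_star = np.array([1e-2, 1e5])      # the two condensed volatility bins (ug/m3)
c_oa = 10.0                         # assumed typical urban OA loading (ug/m3)
xi = 1.0 / (1.0 + c_star / c_oa)    # equilibrium particle-phase fraction per bin
print(xi)                           # -> [~0.999, ~1e-4]: all-particle vs. all-gas
```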
S/IVOC emissions

The condensed 2-species approach represents POA and SOA as the first volatility species, with C* of 10^-2 µg m^-3. This approach assumes that both POA and SOA in the model are non-volatile under most atmospherically relevant conditions. POA emissions are assumed to be one third of the SVOC emissions in the 9-species VBS approach discussed in Sect. 2.1.1, allowing two thirds of the SVOC emissions to have already evaporated relative to the 9-species VBS approach, thus implicitly accounting for gas-particle partitioning. IVOC emissions, which are 6.5 times POA and include the two thirds of SVOC emissions that have evaporated, are represented as the 2nd species, with C* of 10^5 µg m^-3. The 2nd species represents all gas-phase S/IVOC emissions within the modeling domain; hence its C* is chosen to be in the IVOC range to ensure all material remains in the gas phase under most atmospheric conditions. The total SVOC and IVOC emissions in the 2-species VBS approach are equal to those of the 9-species VBS. Table 2 shows the factors (f_i) used to calculate S/IVOC emissions from POA. As discussed earlier, POA emissions are divided by 1.57 and 1.25 for biomass burning and anthropogenic emissions, respectively, to convert OM to OC prior to the application of the factors f_i in Table 2. The enthalpy of vaporization H_vap is assumed to be 83 kJ mol^-1, as in Pye and Seinfeld (2010), but the model is not very sensitive to H_vap for the two volatility species used in the condensed mechanism, as the material represented by either species is firmly in one phase and far from the region where substantial fractions are in both phases.

The SAPRC-99 gas-phase chemistry leading to ozone formation in the condensed 2-species VBS is exactly the same as in the detailed 9-species VBS. However, the reactions and SOA yields leading to SI-SOA and V-SOA formation are different and are discussed in the following section.

SI-SOA

SI-SOA is formed by gas-phase oxidation of the S/IVOC vapors represented by the 2nd volatility species (C* of 10^5 µg m^-3), with each generation of oxidation moving material to the 1st volatility species, thus representing a 7-order-of-magnitude reduction in volatility. For a given reaction rate and S/IVOC emissions, this mechanism will be much faster in producing SOA than the 9-species VBS. In order to align SOA predictions from the 2-species VBS with the 9-species VBS, the reaction rate with the OH radical is reduced by a factor of 7 compared to the 9-species VBS approach (OH reaction rate of 0.57×10^-11 cm^3 molecule^-1 s^-1). An addition of 50 % oxygen mass is also assumed for the one generation of oxidation (instead of 15 % in the 9-species VBS approach), following the discussion by Pye and Seinfeld (2010).

Equations (1)-(4) are repeated within KPP for the 2-species VBS approach, but these equations are written only once, with oxidation of the species-2 S/IVOC on the left-hand side forming SI-SOA represented by species 1 on the right-hand side. The large addition of oxygen and reduction of volatility in one oxidation step is not meant to represent a physical process, but rather to parameterize the average effect of the more complex real processes; the 7 times slower OH reaction rate makes up for the large changes, bringing predictions of SI-SOA in the 2-species VBS closer to the 9-species VBS, as shown later.

V-SOA

V-SOA formation is represented using fixed 1-product yields of these species. In the 4-product basis set for V-SOA as described by Tsimpidi et al.
(2010), the lowest volatility species has a C* of 1 µg m^-3. For consistency, the volatility of the 1-product V-SOA is assumed to have a C* of 1 µg m^-3 at 298 K. The NOx-dependent 1-product mass yields for traditional anthropogenic and biogenic V-SOA precursors are given in Table S2 in the Supplement. The SOA yields for olefin species are chosen to be equal to the yields corresponding to the species with C* of 1 µg m^-3 in the 4-product VBS from Tsimpidi et al. (2010). For alkane species, since the SOA yields corresponding to the species with C* of 1 µg m^-3 are zero in Tsimpidi et al. (2010), yields from the next higher volatility species (C* of 10 µg m^-3) are assigned to the lowest volatility species. Yields for ARO1 and ARO2 are assumed to be equal to toluene and m-xylene SOA yields, respectively, following Ng et al. (2007b); these yields are chosen to be higher than the respective ARO1 and ARO2 yields corresponding to C* of 1 µg m^-3 in Table 3. Yields for the TERP and SESQ species are assumed to be equal to those of α-pinene and aromadendrene, respectively (Ng et al., 2007a). All yields are chosen at the lowest M_0 values measured during the experiments, which are closer to ambient SOA concentrations.

Model predictions are compared to AMS PMF factors using the mean bias,

Bias = (1/N) Σ_{i=1}^{N} (M_i - O_i),

where N is the number of samples, O_i are the AMS PMF factors, and M_i are the WRF-Chem predictions. The traditional A-V-SOA and B-V-SOA predicted from the 4-species formulation shown in Table S1 were found to be similar to the 1-species formulation shown in Table S2. All SOA formation in this work is assumed to result from photochemical reaction with OH radicals. Reactions with O3 and NO3 radicals may also be important for SOA formation under certain conditions (e.g. Capouet et al., 2008), and they will be incorporated into WRF-Chem in the future.

Dry and wet deposition

Dry deposition for all gas-phase SOA precursor species is calculated using the resistance model of Wesely (1989), assuming a Henry's law constant of 2700 M atm^-1, which is used for species such as cresol and condensable organic gases as documented in the CAMx User's Guide (Environ, 2009). Dry deposition of OA is treated within MOSAIC similarly to inorganic aerosols. Wet deposition is neglected in the present work. Cloud-aerosol interactions, including wet removal, are not accounted for because the first two weeks of the MILAGRO campaign were mostly cloud free (Fast et al., 2007). Periods of afternoon convection and scattered precipitation did occur during the last week of the field campaign, but previous simulations using WRF-Chem by Fast et al. (2009) found that the amount of aerosol removed by wet deposition during that period was relatively small. Also, the computational burden of handling the cloud processes would be excessive in the 9-species VBS formulation due to the need to transport both interstitial and cloud-borne copies of each aerosol species; this would almost double the cost of the simulation. The condensed 2-species VBS formulation developed in this work is better suited to simulations that include complex cloud-aerosol interactions in WRF-Chem.
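The model-measurement statistics reported later (Table 3) follow the mean-bias definition given above, together with the Pearson correlation coefficient. A minimal sketch, assuming paired, time-matched model and observation samples:

```python
import numpy as np

def evaluate(model, obs):
    """Mean bias and Pearson correlation between WRF-Chem predictions M_i
    and AMS PMF factors O_i (cf. the Bias definition above and Table 3)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)                 # (1/N) * sum(M_i - O_i)
    r = np.corrcoef(model, obs)[0, 1]           # Pearson correlation coefficient
    return bias, r

# hypothetical hourly HOA series (ug/m3)
print(evaluate([2.1, 3.5, 1.2, 4.0], [3.0, 4.1, 2.2, 5.5]))
```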
Modeling runs

A nested grid configuration, with an outer grid using 12 × 12 km^2 grid spacing and an inner grid using 3 × 3 km^2 grid spacing, is used to model the Mexico City region. For both the detailed and condensed mechanisms, all OA species, including freshly emitted POA and SOA, are assumed to form an ideal solution. The total OA within the modeling domain is calculated as the sum of POA, SOA, and a small amount (0.1-0.3 µg m^-3) of background OA coming from boundary conditions obtained from MOZART global simulations of trace gases and aerosols (Emmons et al., 2010). Initial and boundary conditions for all newly added VBS OA species are assumed to be zero. Three modeling cases were carried out. The 9-species VBS was run for two anthropogenic S/IVOC emissions cases: (1) default emissions from the 2006 inventory, and (2) twice the default S/IVOC emissions of case (1). S/IVOC emissions from biomass burning are assumed to be identical in both cases. The reasoning behind these runs is that the default emissions of case (1), using the 2006 MCMA inventory, significantly under-predicted HOA as compared to AMS measurements; using twice the amount of S/IVOC emissions allows us to study the sensitivity of HOA and SOA to anthropogenic emissions. In the third model case, the condensed 2-species VBS mechanism was run with S/IVOC emissions equal to Case 2 above. Thus, comparison of Cases 2 and 3 enables us to evaluate the condensed mechanism against the detailed 9-species VBS mechanism. The condensed mechanism predicts the same information as the 9-species VBS, including source-resolved POA and SOA mass concentrations and the evolution of O:C ratios, using the 4-bin sectional representation for aerosols. For comparison with measurements, model predictions are spatially and temporally interpolated to the location of the measurement, for both aircraft flights and ground-site data, using the Aerosol Modeling Testbed Toolkit developed for WRF (Fast et al., 2011). Ground measurements are compared with the lowest level in the model (z ∼ 25 m). Comparisons between measurements and model predictions are done at local ambient conditions of pressure and temperature for both ground sites and aircraft flights. Also, all WRF-Chem results are extracted for the inner grid, using 3 × 3 km^2 grid spacing. Spatial maps shown in the present study (e.g. Fig. 1) represent the part of the modeling domain where the nested grid configuration was used; the larger 12 × 12 km^2 grid spacing modeling domain is shown in previous studies, e.g. Fast et al. (2007). In WRF-Chem, the predicted OA in particle size bins 1-3 (0.039-2.5 µm dry diameter) is compared to AMS measurements. The fraction of mass in the third size bin (between 0.625 µm and 2.5 µm) was often less than 15 % of the total PM2.5 OA mass.

Results and discussion

In this section, the three modeling cases are evaluated with highly time-resolved AMS measurements at two surface sites (T0 and T1) and several aircraft flights (8 G-1 flight tracks and 2 C-130 flight tracks) within and around Mexico City. Predicted organic aerosols are evaluated with respect to the mass of total OA, HOA, OOA and BBOA, and O:C ratio measurements. This rigorous evaluation with the comprehensive measurements in the Mexico City region is essential to evaluate the emissions inventory and the OA mechanism, and also to establish the utility of the simpler 2-species VBS mechanism in predicting OA concentrations for future regional and global simulations.
Spatial distributions of SOA

Figure 1 shows spatial distributions of 24-day average total SOA surface concentrations during 6-30 March 2006, as predicted by the three modeling cases. SOA predictions from the 9-species VBS and condensed 2-species VBS cases (Cases 2 and 3) are very similar throughout the modeling domain. In comparison, Case 1, with half the anthropogenic S/IVOC emissions, predicts lower SOA formation, as expected, due to the smaller amounts of S/IVOC precursors.

Figure 2 shows the 24-day average contributions of the various SOA components as a percentage of OA for Case 2. As shown in Fig. 2a, A-SI-SOA contributes 20-30 % to OA at the T0 site, located within Mexico City. As the S/IVOC precursors move downwind and undergo multiple generations of oxidation chemistry, A-SI-SOA becomes dominant and contributes 50-70 % to OA. BB-SI-SOA forms the second major component, contributing 10-30 % to OA, as shown in Fig. 2b. The upper-right corner of Fig. 2b shows a dominant BB-SI-SOA contribution in a part of the Gulf of Mexico, but 24-day average absolute concentrations of BB-SI-SOA ranged 0.6-0.8 µg m^-3 in that region and were less than 1 µg m^-3 over the entire Gulf of Mexico. In comparison, higher BB-SI-SOA concentrations ranging 1.7-2.0 µg m^-3 are observed over land and areas surrounding Mexico City (at the T0 and T1 sites, not shown here). Figure 2c and d show that both traditional SOA components (B-V-SOA and A-V-SOA) contribute a much lower fraction (2-5 %) to total OA. B-V-SOA is higher in areas where biogenic emissions are higher, whereas A-V-SOA is highest within the city and decreases downwind. The decrease in A-V-SOA is in contrast to the increase in A-SI-SOA downwind from Mexico City. This decrease is due to the fact that, in the present formulation, A-V-SOA is formed only by first-generation products of V-SOA precursors emitted close to the city and partially evaporates with dilution downwind of the city (Dzepina et al., 2011), while A-SI-SOA formation continues downwind due to multiple generations of chemistry.

Evaluation of OA components at T0 site

The T0 site is situated within the center of Mexico City, representing an area dominated by urban emissions. In WRF-Chem, oxidation by OH radicals leads to SOA formation, so accurate representation of OH radical concentrations is necessary. As shown in the Supplement Fig. S7, WRF-Chem under-predicts the daytime OH concentration peak by a factor of 2 as compared to observations at the T0 site (Dusanter et al., 2009), but reproduces the timing of the OH peak. Also, WRF-Chem predicts near-zero OH concentrations during nighttime, while measurements show higher concentrations; but the effects of the low OH concentrations observed during nighttime (an order of magnitude lower than daytime) on SOA formation are expected to be small. Model POA(a)_{i,anthropogenic,x,n} is compared to the PMF HOA factor, SI-SOA(a)_{i,e,x,n} + V-SOA(a)_{i,e,n} is compared to PMF OOA, POA(a)_{i,biomass-burning,x,n} is compared to PMF BBOA, and total simulated PM2.5 OA is compared to measured total OA from PMF. Figure 3 compares observed and simulated HOA and OOA at the T0 site.

HOA

The models reproduce the observed diurnal variations of HOA, peaking in the early morning rush hour period due to traffic emissions, as shown in Fig. 3a. However, all 3 modeling cases under-predict the magnitude of the observed peak in HOA.
Figure 3c shows that, on average across all days, the default emissions (Case 1) under-predict the morning HOA peak by a factor of 3, implying problems with the 2006 emissions inventory. Case 2 (9-species VBS) and Case 3 (2-species VBS), with twice the anthropogenic S/IVOC emissions, better represent traffic emissions. Table 3 shows the bias and correlation coefficients of HOA, OOA and BBOA, comparing WRF-Chem predictions to AMS PMF data. Table 3 shows a higher negative HOA bias of -2.97 µg m^-3 for Case 1 compared to the other two cases. Also, since the spatial and temporal variation of POA emissions is similar for all three cases, the Pearson correlation coefficient for HOA is the same for the three cases. The HOA peak from the 9-species VBS is 35 % higher than the 2-species VBS during early morning. The difference is related to the volatility distribution of SVOC emissions: the 9-species VBS allows dynamic gas-particle partitioning, using a globally non-volatile fraction of 22 % as described earlier, while the 2-species VBS assumes that a constant fraction (one third) of the SVOC emissions is HOA at all times. This is also reflected in the small positive HOA bias of 0.5 µg m^-3 for Case 2, as compared to a small negative bias of -0.3 µg m^-3 for Case 3, when comparing against the AMS data (Table 3).

OOA

Figure 3b shows that after 18 March, OOA is under-predicted by all modeling cases. The diurnal average plot in Fig. 3d shows that PMF OOA increases by a factor of 3 during the afternoon as compared to nighttime, due to photochemistry. But Case 1 and Case 2 predict almost constant SOA concentrations throughout the day, due to the compensating effects of dilution by growth of the boundary layer and photochemistry as the day progresses. In addition, Table 3 shows a significantly higher OOA negative bias (-2.6 µg m^-3) for Case 1 compared to the other two cases. Among the 3 modeling cases, the 2-species VBS (Case 3) is closest to PMF OOA, predicting a two times higher afternoon peak as compared to morning, as shown in Fig. 3d. Also, Table 3 shows significantly higher correlation of SOA predictions from Case 3 (0.42) as compared to the other two cases (0.23 and 0.19, respectively). The differences between the 9-species and 2-species VBS are due to the volatility distribution of SOA: the 2-species VBS causes all SOA formed to be almost non-volatile at ambient conditions (C* of 10^-2 µg m^-3), while the 9-species VBS allows evaporation of SOA with dilution as the boundary layer grows. The higher correlation of 2-species VBS SOA predictions with AMS PMF OOA implies that SOA may be less volatile than the volatility distribution in the 9-species VBS mechanism, as shown by Cappa and Jimenez (2010) and Dzepina et al. (2009, 2011). Consistent with this, Vaden et al. (2011) recently showed that both laboratory-generated and ambient SOA particles do not evaporate at room temperature for hours, even under extremely dilute, vapor-free conditions. Also, within the existing modeling framework, the under-predictions of HOA and OOA using default emissions (Case 1) are caused by the lower emissions. Dzepina et al. (2009), in a box modeling study, derived HOA and S/IVOC from observations rather than from the emissions inventory, and observed better closure between modeled SOA and OOA observations. Aiken et al.
(2009) concluded that total primary PM (not the same as POA) was underestimated by about a factor of 4 with respect to the 2006 emissions inventory; therefore, it is possible that an underestimation of urban POA emissions remains in Case 2, leading to the observed discrepancy. Also, the 2006 biomass burning emissions inventory for Mexico City of Wiedinmyer et al. (2006) under-predicts BBOA as compared to PMF BBOA. Table 3 shows consistently high BBOA negative biases, on the order of -2 µg m^-3, for all three modeling cases. Missing SOA from biomass burning precursors may also be responsible for model-measurement differences in OOA, although the BBOA under-prediction is stronger during the early morning (Fig. 4c), and the low levels of the biomass burning tracer acetonitrile at T0 during afternoons (Aiken et al., 2010) make this possibility less likely.

Vertical profile, surface concentration and column burden

In addition to surface concentrations, it is also useful to look at vertical concentration profiles and total column integrated burdens of the various OA components. Surface concentrations of pollutants are monitored for their health impacts, while the vertical concentration profile and total column burden are important for climate effects. Figure 4a shows the vertical distribution of HOA, SOA, BBOA and total OA concentrations, while Fig. 4b shows the ratio of OA components to total OA with height above ground level (a.g.l.) at the T0 site. HOA concentrations are maximum at the surface (48 % of total OA, as shown in Fig. 4b) and decrease with increasing height a.g.l. SOA concentration is comparable to HOA at the surface, but decreases much more slowly than HOA at higher levels. Figure 4b shows that the ratio of SOA to total OA increases from 0.5 near the surface to 0.75 at 1-4 km a.g.l. Continued photochemical oxidation of SOA precursors in the atmosphere causes SOA to be the dominant component of OA above the surface, even over the highly urbanized T0 site, as shown in Fig. 4b. Thus, model predictions imply that SOA is the most important component of OA in the atmosphere affecting both human health and climate. The ubiquity and dominance of SOA in the atmosphere is also implied by PMF analysis of AMS measurements (Zhang et al., 2007). The fractional importance of BBOA increases with height, from 4 % near the surface to 12 % at 3 km a.g.l., as shown in Fig. 4b. The vertical distribution of BBOA emissions in WRF-Chem is based on the fire emission locations in the hills and mountains surrounding Mexico City, as well as mixing of smoke in the boundary layer before it is transported into Mexico City. Previous aircraft measurements have also seen increasing BBOA with height in Mexico City (Aiken et al., 2010; Crounse et al., 2009).

Figure 4d shows the diurnal variation of the total column integrated burden (mg m^-2) of OA components in the atmosphere. Solid lines in the figure represent results from Case 2 in the present study, while dashed lines represent estimations from previous work by Aiken et al. (2010). The SOA burden decreases during nighttime, but increases due to photochemistry during the day, peaking at 16:00-17:00 LT. The magnitude and timing of the daytime peak in SOA burden is comparable to the previous estimates by Aiken et al. (2010). The estimates of Aiken et al. (2010) in the nighttime and early morning, until 08:00 LT, are a factor of 4-9 lower than the present study, as shown in Fig.
4d. There is good agreement for the middle of the day, when the convective boundary layer is deep, but the column burden is strongly underestimated using the method of Aiken et al. (2010) in the nighttime and early morning, when the boundary layer is shallow. This underestimation points to a very important conclusion: there is often substantial OA in the residual layer above the boundary layer in the nighttime and early morning, and it should not be neglected in model calculations.

It is also instructive to compare the diurnal variations of the total column burden vs. the surface concentration of the various OA components, as shown in Fig. 4c and d. The surface concentration shown in Fig. 4c changes due to the evolution of the boundary layer as the day progresses, but the total column integrated burden (shown in Fig. 4d) is not influenced by vertical dilution due to the changing boundary layer. Model surface SOA concentrations do not indicate significant diurnal variation, due to the opposing effects of dilution and photochemistry, but the total SOA burden shows a strong diurnal variation due to photochemistry. Also, the HOA surface concentration peaks at 07:00 LT, but the column burden of HOA is nearly constant throughout the day. BBOA surface concentrations and column burden both show similar diurnal variations, with peaks at 18:00 LT, due to the relatively uniform vertical distribution of BBOA shown in Fig. 4a and b. Total OA surface concentrations and column burden follow the corresponding diurnal variations of SOA.

O:C ratio

O:C ratios provide another measure with which to verify the performance of SOA treatments, supplementing evaluations using OA mass and its components. The evolution of the carbon and oxygen parts of SI-SOA is explicitly tracked in WRF-Chem, making it possible to calculate temporal and spatial variations in elemental O:C ratios in the aerosol phase. In Fig. 5, we evaluate modeled O:C ratio predictions in terms of 20-day diurnal variations at the T0 site, time-averaged spatial variation across the modeling domain, and variation along two C-130 flight transects in the atmosphere. Figure 5a compares the measured and simulated temporal variation of O:C ratios at the T0 site, while Fig. 5b looks at the 24-day average spatial variation of elemental O:C ratios predicted by Case 2. In this study, two oxygen atoms are added per generation of oxidation of S/IVOC precursors, as discussed earlier. Figure 5a shows that all three modeling cases reproduce the temporal variations of O:C ratios, but the magnitude is under-predicted at the T0 site. Case 1 is closest to the AMS measurements, predicting higher elemental O:C ratios than Cases 2 and 3. O:C ratios decrease as fresh, reduced primary organic emissions are added every hour in the model, but increase as photochemistry causes SOA formation with addition of oxygen. Case 1 has half of the fresh, reduced anthropogenic emissions of Case 2, resulting in higher O:C ratios. Case 3 (2-species VBS) predicts very similar O:C ratios to Case 2 at the T0 site, as shown in Fig. 5a. The agreement in O:C ratios between Case 2 and Case 3 over the city is very interesting: Case 2 represents the 9-species VBS with 15 % added oxygen mass per generation of oxidation, while Case 3 represents the 2-species VBS with 50 % added oxygen and 7 times slower chemistry as compared to Case 2.
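Because the oxygen and non-oxygen parts are carried as separate tracers, the bulk elemental O:C ratio follows directly from their masses. A minimal sketch with hypothetical component masses (the non-oxygen-to-carbon ratio of 1.17 is the value assumed in Sect. 2.1):

```python
def oc_ratio(oxygen_mass, non_oxygen_mass, noc_to_c=1.17):
    """Elemental (molar) O:C ratio from the tracked OA components.
    Carbon mass is recovered from the non-oxygen (C+H+N) part via the
    fixed non-oxygen-to-carbon mass ratio."""
    carbon_mass = non_oxygen_mass / noc_to_c
    return (oxygen_mass / 16.0) / (carbon_mass / 12.0)

print(oc_ratio(4.0, 11.7))   # hypothetical masses (ug/m3) -> O:C = 0.3
```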
Figure 5b shows that the predicted 24-day average O:C ratios vary spatially, ranging from 0.3 at the T0 site, increasing to 0.6 further downwind, and reaching as high as 0.7 over the Gulf of Mexico, representing highly oxygenated organic material. In comparison, Hodzic et al. (2010) and Dzepina et al. (2011) predicted much smaller O:C ratios, ranging 0.14-0.24 over the Mexico City domain, using the ROB approach of adding one oxygen atom per generation of oxidation following Robinson et al. (2007). These studies better predicted the O:C ratios both within the city and downwind using the GRI approach, in which the added oxygen mass was 40 % and the chemistry was 2 times slower than in our simulation.

Figure 5c and d compare AMS O:C ratios to WRF-Chem simulations along the C-130 flight tracks on 10 March (a high biomass burning day) and 29 March (a low biomass burning day), respectively. This comparison along the flight transects allows a more comprehensive evaluation of O:C ratios than ground-site locations alone. A portion of the C-130 flight transects was located further downwind of Mexico City, as discussed later. All 3 model cases reproduce the variations in measured AMS O:C ratios on 10 March reasonably well, but the simulations under-predict the peaks in O:C ratios, especially at downwind locations. The two large peaks in O:C ratios predicted by Case 1 and Case 2 in Fig. 5c occur during the lowest OA prediction events, chiefly dominated by SOA. On 29 March, the WRF-Chem simulations consistently under-predict O:C ratios as compared to measurements (shown in Fig. 5d): most of the simulated values vary around 0.5, while the AMS measured O:C ratios as high as 0.8 downwind of Mexico City on this day. It is important to note that the 2-species VBS (Case 3) predicts lower O:C ratios than the 9-species VBS (Case 2) over downwind locations on this day; however, the differences between these cases are less than 25 %. All 3 modeling cases show very similar temporal variations in O:C ratios, which is expected, since temporal variations in emissions, deposition, meteorology, and chemistry are similar within WRF-Chem for all runs.

Results from both this study and previous studies show a strong sensitivity of O:C ratios to the assumed oxygen added per generation of oxidation, and point towards a need for additional experimental validation. Also, fragmentation reactions, which could cause an increase in O:C ratios (Kroll et al.), are not included in this study. Improving emission estimates, e.g. increasing biomass burning emissions, would also help to increase O:C ratio predictions, bringing them closer to measurements. Accurate predictions of O:C ratios are important to better understand the resulting effects on the direct and indirect radiative forcing of climate, by relating aerosol optical properties and CCN activation to the chemical processing of OA in the atmosphere (Jimenez et al., 2009).

Evaluation of OA components at T1 site

The T1 site is located at the northern edge of the city. As discussed in Fast et al. (2009), the present WRF-Chem setup uses 3 × 3 km^2 grid spacing, which may not be enough to represent the strong spatial gradients of emissions in this region. Figure 6a-d compare the average diurnal variations of AMS total OA, HOA, OOA and BBOA with the corresponding WRF-Chem predictions. As shown in Fig. 6b, Case 1 with default emissions under-predicts the early morning HOA peak by a factor of 2, while the other two modeling cases reproduce both the magnitude and timing of the early morning HOA peak.
Figure 6c shows that PMF OOA peaks during late afternoon, due to photochemical production of SOA. The timing of the late afternoon PMF OOA peak is best reproduced by Case 3, but Case 3 over-predicts the magnitude of this peak by 40 %. In comparison, both Case 1 and Case 2 show SOA peaking later in the day than PMF OOA. As shown in Table 3, while the OOA bias is lowest for Case 1, Case 3 shows the highest correlation with PMF OOA at the T1 site. Some of this over-prediction in SOA may also be due to uncertainties in the chemistry parameterization producing too much SI-SOA downwind of the city center. Figure 6d shows two peaks in AMS PMF BBOA, during the early morning and late afternoon hours, respectively. All model predictions show significantly lower BBOA and do not capture the timing and magnitude of the measured PMF BBOA, pointing to limitations in the biomass burning emissions inventory. Table 3 shows consistent negative biases in BBOA for all three modeling cases.

Overall, the default emissions inventory (Case 1) under-predicts surface-level HOA at both the T0 and T1 sites. Also, Table 3 shows significantly lower correlation of predicted and observed HOA at the T1 site as compared to the T0 site (0.27 for T1 vs. 0.50 for T0), suggesting that the spatial and temporal distribution of POA emissions needs to be revised in the 2006 emissions inventory. The diurnal variation of OOA is not well captured by any of the three modeling cases at either the T0 or T1 site. Also, both the 9-species and 2-species VBS schemes show significantly greater bias and lower correlations for OOA as compared to HOA at the T0 and T1 sites, as shown in Table 3. But Case 3, with the 2-species VBS, predicts OOA concentrations better at both the T0 and T1 sites than the other two modeling cases, reflected in the significantly higher correlation coefficients of Case 3 predictions with AMS OOA. Since SI-SOA contributes the major fraction of SOA within the modeling domain, these results suggest that both the volatility distribution and the chemistry parameterizations of SI-SOA are poorly constrained. Consistent with the trends in HOA and OOA, total OA is under-predicted by Case 1 at both the T1 and T0 (not shown) sites as compared to observations.

Evaluation of OA components aloft

AMS measurements aloft are available from G-1 (Kleinman et al., 2008) and C-130 (DeCarlo et al., 2008) aircraft flight transects. The two aircraft made several transects on different days, flying above the center of Mexico City and downwind. These high-time-resolution AMS data are valuable for studying the time evolution and growth of organic aerosols as they move from the city to further downwind locations. Figures 7 and 8 compare WRF-Chem outputs to the highly time-resolved AMS data for OA components along the C-130 transect on 10 March and the G-1 transect on 15 March, respectively. On 10 March, MODIS detected several large fires within 60 km of Mexico City; thus, it was a high biomass burning day. March 15 was a day with relatively low biomass burning. Both WRF-Chem and AMS data are averaged to 1-min time intervals to reduce high-frequency variability and ease visual comparison.

C-130 flight on 10 March

On the morning of 10 March 2006, the C-130 aircraft encountered a large number of biomass burning fires, as detected by MODIS fire counts (Fast et al., 2009), as it flew from the Gulf of Mexico towards Mexico City. The aircraft sampled several downwind locations between Mexico City and Veracruz at 11:00-14:00 LT.
Figure 7a shows that there may be small transport and dilution errors close to 12:30-13:00 LT, indicated by the difference between measured and simulated CO concentrations. When the aircraft flew close to the city over the T0 and T1 sites (14:30-16:00 LT), Fig. 7a shows that CO is somewhat over-predicted by WRF-Chem as compared to observations; however, the temporal variations look consistent. The C-130 then flew back to Veracruz late in the afternoon. The high temporal variability in CO and OA is caused by the aircraft flying within and outside the boundary layer. Figure 7b shows that variations in total OA over the city region are reasonably simulated by all WRF-Chem modeling cases. WRF-Chem slightly under-predicts total OA over the city as compared to the AMS data. However, when the aircraft flew over downwind locations earlier during the day, WRF-Chem over-predicts total OA by more than a factor of 2 as compared to the AMS measurements. Figure 7c shows that HOA is under-predicted by all modeling cases over the city. Also, at downwind locations, OOA is over-predicted by up to a factor of 5 by all three modeling cases, as shown in Fig. 7d. The overestimation in OOA points to missing processes, such as fragmentation reactions, which are not included in this study. Figure 7e shows that BBOA is consistently under-predicted both over the city and at downwind locations, pointing to problems in the biomass burning emissions inventory. Figure 7f quantifies the different SOA source contributions in this study along the flight track using Case 2. These include V-SOA from traditional anthropogenic and biogenic precursors (A-V-SOA and B-V-SOA) and SI-SOA from S/IVOC precursor emissions related to anthropogenic and biomass burning emissions (A-SI-SOA and BB-SI-SOA, respectively). Both over the city and at downwind locations, A-SI-SOA and BB-SI-SOA are equally important. Traditional A-V-SOA and B-V-SOA contribute a much smaller fraction of SOA both over the city and at downwind locations, and biomass burning and anthropogenic emissions (predominantly traffic emissions) are the two major SOA precursor sources within and around the Mexico City region. Total simulated OA in Fig. 7b looks better than the individual derived OA components due to compensating errors from simulated HOA and BBOA that are too low and OOA that is too high.

G-1 flight on 15 March

Another example is shown in terms of the G-1 flight transects on the morning of 15 March in Fig. 8. This was a low biomass burning day as compared to 10 March. As shown in Fig. 8a, the spatial variations of predicted CO are qualitatively similar to observations. The largest scatter between observed and simulated CO occurs over the city, due to errors in the timing and location of the simulated plume (Fast et al., 2009). Figure 8b shows that variations in total OA are reasonably simulated by WRF-Chem over the city and nearby downwind locations. OA is over-predicted at farther downwind locations, mainly due to over-prediction of OOA. As shown in Fig. 8c, simulated HOA is under-predicted over the city as compared to AMS HOA (during 11:00-11:30 LT). Case 1 predicts lower HOA than the other two cases due to lower anthropogenic S/IVOC emissions, as discussed earlier. The consistent under-prediction of HOA suggests possible errors in the SVOC emissions or in the volatility distribution of emissions, e.g. a potentially higher fraction of SVOC emissions being non-volatile than assumed in the current model.
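The role of the assumed volatility distribution can be made concrete with the absorptive-partitioning relation underlying the VBS: at equilibrium, the particle-phase fraction of volatility bin i with saturation concentration C*_i is xi_i = (1 + C*_i/C_OA)^(-1), where the total OA mass C_OA must be found self-consistently. A minimal sketch; the bin layout and loadings are illustrative, not the configuration used in this study.

```python
import numpy as np

def partition_vbs(c_star, totals, c_background=0.0, tol=1e-10):
    """Self-consistent equilibrium partitioning for a volatility basis set.

    c_star : saturation concentrations of the bins (ug/m3)
    totals : total (gas + particle) mass in each bin (ug/m3)
    Returns the particle-phase mass in each bin.
    """
    c_oa = max(c_background + 0.5 * totals.sum(), 1e-6)  # initial guess
    for _ in range(200):
        xi = 1.0 / (1.0 + c_star / c_oa)          # particle fraction per bin
        c_oa_new = c_background + (totals * xi).sum()
        if abs(c_oa_new - c_oa) < tol:            # fixed point reached
            break
        c_oa = c_oa_new
    return totals * xi

# 9 bins spanning C* = 0.01 ... 1e6 ug/m3, an illustrative 9-bin VBS layout
c_star = np.logspace(-2, 6, 9)
totals = np.full(9, 1.0)          # 1 ug/m3 of material per bin (illustrative)
particle = partition_vbs(c_star, totals)
print("particle-phase OA = %.2f ug/m3" % particle.sum())
```

Shifting mass toward lower-volatility bins (e.g. a larger non-volatile fraction of SVOC emissions, as suggested above) directly raises the predicted particle-phase HOA in such a calculation.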
Between 10:00-10:30 LT and 11:30-12:30 LT, the aircraft flew downwind over T1 and farther downwind between Mexico City and Veracruz. Figure 8d shows that OOA is over-predicted both over the city and downwind by all modeling cases, with much higher over-prediction downwind than over the city. Figure 8e confirms the conclusions from the C-130 flight transect shown in Fig. 7f. As expected, traditional A-V-SOA is higher over the city, where anthropogenic emission sources are higher, as compared to downwind locations. Also, both traditional A-V-SOA and B-V-SOA precursors contribute relatively little SOA mass as compared to the S/IVOC precursors from anthropogenic and biomass burning emissions. Figure 9 shows that WRF-Chem predictions of HOA are lower than those derived from PMF, while WRF-Chem over-predicts SOA on most flight days, consistent with all discussions so far. PMF HOA also shows significantly greater variability on any given day as compared to the predictions. Part of the variability in PMF HOA may arise from the fact that PMF of the unit-resolution data used for the G-1 aircraft has difficulty separating the contributions of HOA and BBOA (Aiken et al., 2009). The C-130 comparisons utilize high-resolution AMS data, which are much better able to separate the HOA and BBOA components. But, as discussed earlier, significant spatial and temporal variation of emissions also causes higher variability in PMF HOA, which is not captured by the 2006 MCMA emissions inventory used in this work. Almost the opposite trend is seen for OOA in Fig. 9b. On most days, the model over-predicts SOA as compared to the corresponding PMF OOA. In addition, the model predictions show greater variability as compared to the PMF observations. The greatest predicted variability in the simulations is seen for the C-130 flight on 10 March. 10 March was a high biomass burning day with several large fires within 60 km of Mexico City. Biomass burning contributions to SI-SOA on this day were similar to the anthropogenic SI-SOA contributions (as shown in Fig. 7f). Figure 9c shows that the predicted BBOA across most flight tracks (especially evident on high biomass burning days, such as 10 March) is too low, pointing to significant uncertainties and missing biomass burning smoke emissions in the inventory used here. Most importantly, Fig. 9 shows that the magnitude and variability of HOA, SOA and BBOA predicted by the 9-species VBS and 2-species VBS mechanisms are very similar. This comprehensive evaluation using aircraft measurements over both the city and downwind locations provides evidence of the utility of the condensed 2-species VBS mechanism to represent OA in climate models. WRF-Chem under-predicts total OA across most G-1 flight transects, as shown in Fig. 9d, consistent with the under-prediction of HOA and BBOA over the city. Also, WRF-Chem over-predicts total OA at farther downwind locations, as shown by the two C-130 flights on 10 and 29 March, due to the significant over-prediction of SOA downwind. Section S3.0 in the supporting online information compares predictions from WRF-Chem using the 9-species VBS (Case 2) against the CHIMERE model predictions described by Hodzic et al. (2010). The main differences include the following: (1) WRF-Chem uses online meteorology and CHIMERE uses offline meteorology, as discussed earlier, (2) the CHIMERE model includes the wet deposition of aerosols, whereas WRF-Chem does not, and (3) the emissions of POA are based on two different inventories. Differences in the coupling of processes between online and offline air quality modeling, as discussed by Grell et al.
(2004), likely contribute to some of the differences in the chemical fields between WRF-Chem and CHIMERE. Additional differences in gas-phase chemistry, dry deposition, and the implementation of the VBS between the two models are discussed in Sect. S3.0. Section S3.0 shows that the predictions of WRF-Chem and CHIMERE are comparable at the T0 site, while WRF-Chem predicts on average 50 % higher (and closer to observed) SOA than CHIMERE at the T1 site.

Discussion

The VBS approach formulated by Robinson et al. (2007) is useful for representing the varying gas-particle partitioning and multi-generational photochemical aging of a complex mixture of thousands of organic species in air-quality models. In this work, it is shown that HOA is better simulated by the VBS approach than OOA. Model simulations with all 3 cases show significantly high bias and low correlations with AMS PMF OOA at the T0 and T1 sites (Table 3). In addition, the VBS approach significantly over-predicts OOA downwind of the city, as shown by the flight transects. These results show limitations in the representation of the processes contributing to the formation and evolution of secondary organic aerosols in models. As shown in this study and in several recent studies using this approach for Mexico City (Dzepina et al., 2009, 2011; Hodzic et al., 2010; Shrivastava et al., 2008; Tsimpidi et al., 2010), experiments constraining the various parameterizations related to emissions of S/IVOC precursors, the volatility distribution, and the chemistry, including functionalization, fragmentation and oligomerization reactions, are needed to improve predictions of both the mass and the oxidation state of OA in the atmosphere. In addition, Vaden et al. (2011) recently showed that SOA particles in both laboratory and ambient environments evaporate much more slowly than predicted by kinetic models based on absorptive partitioning theory, challenging the fundamental assumptions of instantaneous reversible equilibrium and liquid-like SOA behavior in SOA models. The implications of the Vaden et al. (2011) study for SOA predictions need to be evaluated in the future. More research is also needed to make sure that total OA is accurately predicted for the right reasons: i.e. all the components of OA, including HOA, OOA, and BBOA, need to be right as well. In addition, models also need to capture the evolution of the O:C ratios of OA. AMS measurements during field experiments involving both ground sites and aircraft flights are valuable for constraining the parameters of the VBS approach. For example, we showed that biomass burning emissions are consistently under-predicted by all model cases at both ground and aircraft locations, pointing to a continued need to revise biomass burning emissions in and around the Mexico City region. In addition, the consistent under-prediction of HOA within the city center and on most aircraft flights aloft suggests that either primary anthropogenic SVOC emissions need to be increased or SVOC emissions have a higher non-volatile fraction than currently assumed. Also, Fig. S2 in the Supplement shows higher scatter in PMF HOA when plotted against CO as compared to the WRF-Chem predictions. This suggests that the spatial and temporal variation of emissions in the 2006 MCMA inventory needs to be revised. In addition, the effect of loss mechanisms such as the dry deposition of S/IVOC vapors downwind needs to be quantified experimentally. Karl et al.
(2010) recently showed that the dry deposition of oxygenated VOCs is substantially larger than previously assumed for deciduous ecosystems. Models need to account for changing dry deposition as a function of the photochemical aging of organics in the atmosphere. Accurate representation of all the physical and chemical processes affecting OA is necessary to get the right answers for the right reasons in climate models. OA and organic vapor measurements also need substantial improvements. Uncertainties in AMS measurements and the subsequent PMF analysis also need to be better quantified. These uncertainties will vary spatially and temporally due to two factors. First, there were possible variations in collection efficiency among the different G-1 flights (Kleinman et al., 2008). These variations are inconsistent, however, with the good intercomparisons observed for the C-130 aircraft (DeCarlo et al., 2008) at the T0 site and with the uncertainty analysis of collection efficiency (CE) in several campaigns by Middlebrook et al. (2011), which highlights the difficulty of determining AMS CE from single-instrument intercomparisons in the field. Second, PMF will have difficulty separating the contributions of different OA factors at locations and times when the markers for different factors such as HOA, OOA and BBOA co-vary in the atmosphere (which often occurred during MILAGRO), especially when unit-resolution data are used, as for the G-1 aircraft and the T1 site (Aiken et al., 2009). Comparing PMF results to predictions from a source-oriented modeling approach such as WRF-Chem helps to identify the range of uncertainties in both the source-oriented and the PMF-based approaches. At the same time, information from both types of approaches needs to be combined in a consistent manner to improve OA predictions in the atmosphere. In addition to improvements of the AMS and its analysis techniques, additional real-time instruments for OA characterization need to be developed and deployed, especially those that may have more detailed chemical markers for OA from different sources. Importantly, a measurement of total gas-phase species with some volatility and/or chemical resolution such as O:C (analogous to the AMS) is critically needed, as otherwise no comparison of predicted vs. measured bulk gas-phase species is possible, as in this study. Finally, resolving the discrepancies between different non-fossil carbon measurements and performing those measurements with higher time resolution is important for better constraining model results, as discussed in Sect. S2.0 in the Supplement. In addition, several other processes, such as the formation of SOA from volatile species such as glyoxal, the increase of biogenic SOA yields in the presence of anthropogenic pollution (e.g. the formation of organo-sulfates and organo-nitrates and esterification processes), and SOA formation in clouds, need to be studied and included in models.

Conclusions

The WRF-Chem community model has been revised to include SOA formation coupled to the inorganic MOSAIC code for the first time. Traditional V-SOA formation using NOx-dependent yields from both traditional anthropogenic and biogenic VOC precursors is included. Non-traditional SI-SOA formation from S/IVOC precursors is also included.
Fig. 1. Spatial distribution of 24-day average PM2.5 SOA concentrations (µg m−3) over the Mexico City basin as predicted by the 3 modeling cases discussed in the text. Also indicated are the locations of the T0 site within the city, the T1 and T2 sites at the edge of the city, and the Altzomoni (Alt) site. Case 1 represents the 9-species VBS with default emissions, while Case 2 and Case 3 represent the 9-species and 2-species VBS, respectively, with twice the anthropogenic S/IVOC emissions, as discussed in the text.

Fig. 3. Observed and simulated OA components at the T0 site within Mexico City. (a) HOA time series, (b) OOA time series, (c) HOA diurnal average, (d) OOA diurnal average. Case 3 is not shown in panels (a) and (b) as it is very similar to Case 2.

Fig. 5. Elemental O:C ratios over the Mexico City region. (a) Time series of AMS and model results at the T0 site for 10-30 March, (b) 24-day average spatial distribution at surface level for Case 2, (c) AMS and model results along the C-130 flight track on the morning of 10 March, (d) results along the C-130 flight track on the morning of 29 March.

Fig. 6. Average diurnal observed and simulated OA components at the T1 site: (a) total OA, (b) HOA, (c) OOA, (d) BBOA.

Model-measurement comparisons across several flight tracks

So far we have looked at 2 flight transects in detail. Figure 9 compares WRF-Chem predictions to AMS PMF data for eight G-1 and two C-130 flights. Variations in both the WRF-Chem predictions and the AMS data are represented by box plots showing percentiles.

Fig. 7. Comparison of WRF-Chem predictions to measurements: (a) CO mixing ratios, and (b) total OA, (c) HOA, (d) OOA and (e) BBOA versus the corresponding positive matrix factorization (PMF) factors; (f) the various SOA components predicted by the WRF-Chem Case 2 simulation along the C-130 flight transect of 10 March, as discussed in the text. Case 1, Case 2 and Case 3 are the modeling cases discussed in the text. PMF data are averaged to 1-min time intervals to reduce visual clutter. In (f), blue and green denote the sum of semi-volatile and intermediate-volatility SOA from anthropogenic and biomass burning sources, respectively, and black and orange denote SOA from traditional anthropogenic and biogenic sources, respectively.

Fig. 8. Same as Fig. 7, but along the G-1 flight transect of 15 March.

Table 1. Terminology used for the various classes of organic species in this study.
Table 2. Factors (f_i) used to calculate S/IVOC carbon emissions from POA.
Table 3. Statistics comparing AMS PMF factors to the corresponding WRF-Chem species predictions for the three modeling cases discussed in this study at the urban T0 site and the suburban T1 site.
* Bias (µg m−3). The earlier column-burden estimates were calculated by multiplying surface concentrations by the boundary layer depth, assuming a constant concentration across the depth of the boundary layer, but neglecting species present above the boundary layer in the morning. In the present study, the column burden is calculated by integrating the vertical concentration profile of the OA components, adding the species present within and above the boundary layer up to the top of the modeling domain as predicted by WRF-Chem. SOA estimated by
17,532.2
2011-01-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Non-Aligned Multi-View Multi-Label Classification via Learning View-Specific Labels

In the multi-view multi-label (MVML) classification problem, multiple views are simultaneously associated with multiple semantic representations. Multi-view multi-label learning inevitably faces the problems of consistency, diversity, and non-alignment among views, and of correlation among labels. Most of the existing multi-view multi-label methods for non-aligned views assume that each view has a common or shared label set, but because a single view cannot contain the entire label information, they often learn suboptimal results. Based on this, this paper proposes a non-aligned multi-view multi-label classification method that learns view-specific labels (LVSL), aiming to explicitly mine the information of view-specific labels and low-rank label structures in non-aligned views within a unified model framework. Furthermore, to alleviate insufficient available label information, we thoroughly explore the global and local structural information among labels. Specifically, first, we assume that there is structural consistency between the view and the label space and then construct the view-specific label model in turn. Second, to enrich the original label space information, we mine the consistent information of multiple views and the low-rank correlation information hidden among multiple labels. Finally, in the decision-making stage, the contribution weight of each view is combined to learn the complementary information among the views, and the model is extended to handle nonlinear data. The results of the proposed method, compared with existing state-of-the-art algorithms on several datasets, validate its effectiveness.

I. INTRODUCTION

MVML is used to describe multi-semantic problems of multi-source heterogeneous data objects [1], [2], [3]. In Fig. 1, a given natural scene image can be represented by multiple view structures (LBP, HOG, HSV) with multiple labels (blue sky, white clouds, desert). Multi-view multi-label is
a learning framework for handling high-dimensional, heterogeneous, multi-semantic data classification problems. Multi-view learning [4], [5], [6], [7] can describe data objects more comprehensively and accurately than single-view learning. For example, a video labeled as "Sports," "National Basketball Association," and "Basketball Stars" is represented simultaneously by diverse data forms, such as text, image, and audio. In addition, there are learning paradigms with different perspectives under the same modality. For example, we can use various feature forms to describe image data (texture description, shape description, color, etc.). With the emergence of Big Data and the rapid development of data collection technology, people are bound to face data classification problems in more complex and changeable real-world scenarios. In the past few decades, multi-view and multi-label learning [8], [9], [10] have been extensively studied as two separate research fields. A fundamental assumption of conventional single-label learning is that the relationships among labels are mutually exclusive. In multi-label learning, the semantic information of the labels is rich, and there is mutual dependence among the labels, which is a theoretical conflict with single-label learning. To solve more complex data classification problems in real-world scenarios, the MVML framework has emerged. The existing methods have the following problems that urgently need to be solved: 1) There are two major principles in multi-view learning: consistency and diversity in multi-source heterogeneous data [11], [12]. The consistency principle asserts that it is necessary to preserve the consistent information of multiple views as much as possible in multi-view learning. The diversity principle advocates that each view should learn complementary information from the other views while completing its specific knowledge discovery task. 2) The label correlation learning problem [13], [14]. The correlation among labels in multi-view learning is one of the critical factors for improving multi-label classification performance.
3) The non-aligned multi-view learning problem [15]. Most multi-view learning methods explicitly or implicitly assume that the view samples are uniformly aligned, but in reality it is often difficult to obtain fully consistent multi-view information. For example, in video recommendation, label data are obtained from different video software, but due to the principle of user privacy protection, we cannot consistently match and align these data with the same user [16]. In the field of face recognition, when face landmark detection fails, multi-view faces cannot be aligned, which harms facial expression recognition [17]. In general, there is a great deal of non-aligned multi-view data in the real world, and a single view cannot contain all the label information; otherwise, multi-view learning would lose its meaning. Therefore, we naturally face the following challenges: one is how to solve these three problems simultaneously, and the other is how to handle the linear inseparability of the given data. According to the solutions adopted, we divide the existing strategies into two types, feature fusion and classification fusion [18], [19]: The feature fusion strategy usually transforms the problem into a multi-view shared-subspace information extraction problem and reduces the fused multi-view heterogeneous feature information to a multi-label learning problem [20], [21], [22], [23]. The matrix factorization method [24] is often used to obtain the shared subspace information of the multi-view data; the shared information among the views and the label information of the labeled samples are then used to learn a discriminant predictor. The effectiveness of subspace learning relies on the accurate acquisition of consensus representations, but low-dimensional consensus representation learning becomes more difficult as the number of views increases. The classification fusion strategy divides the problem into multiple multi-label learning problems and then predicts the label set of an unknown example by assigning a weight to each view's classifier [18], [19], [25], [26]. Because a unified predictor needs to be learned for each view, the classification fusion strategy forces each view to learn the common sample label information, so as to learn the consistent information across multiple views and multiple labels, and assigns each view a weight with which to learn that view's complementary information. Such methods can effectively learn view diversity information, and these individual models can also improve the robustness of the predictor. Clearly, the individual models rely heavily on the performance of each individual classifier. Since it is impossible in reality to label each view separately, the label information learned by this type of method is often only the general label information. Most of the existing methods focus on the first two challenges. For the third problem, the literature [15] gives a mitigation scheme: although the samples among views are not aligned, they can still be implicitly connected through common or shared labels so as to be learned complementarily. However, this strategy is suboptimal because it assumes that all views have a uniform label set. In practice, a view may be inconsistent with its corresponding labels [27]. The intuitive explanation is that each view only observes a part of the corresponding label information, so different views have specific label sets. For example, in Fig.
1, we observe that in subgraphs (a), (b), and (c), all three different views can only obtain a part of the complete label information. Although subspace learning can avoid the effect of labels that are inconsistent across views, it does not address the problem of non-aligned multi-views. To the best of our knowledge, no existing method jointly learns view-specific labels and multi-label structures. Additionally, the data of each view have a complex nonlinear structure, so linear models are no longer sufficient for current needs. This paper proposes an MVML method for jointly learning view-specific labels and multi-label structural information. Specifically, first, a view-specific label matrix is learned based on the structural assumption of similarity between multi-view features and labels. Then, global label structure and local structure correlation are introduced to enrich the view-specific label information. Finally, the joint learning model is extended to a nonlinear model. We design the model to establish the final optimization goal and study the above problems jointly. Fig. 2 illustrates the model framework of the proposed method. The most significant difference between our method and existing multi-view learning methods is that the latter ignore the misalignment of multi-source heterogeneous features and label space. Our experiments prove that this view-specific label learning structure plays an indispensable role. Our main contributions in this paper are as follows: 1) We propose a novel MVML method that combines view-specific labels and label structure learning. 2) Our method mines view-specific label information for multi-view consistency and complementary information learning. 3) We extend the linear model to a nonlinear model to handle scenarios where the given data may not be linearly separable. The rest of this article is organized as follows. In Section II, we briefly summarize the related work on multi-view multi-label learning. Section III proposes our method, and Section IV proposes an effective alternating iterative optimization method to solve it. A large number of experimental results and analyses are reported in Section V. Section VI summarizes the research directions of this article.

II. RELATED WORK

The previous section divided existing approaches into two different strategies, depending on the solution. In this section, we outline the latest research that is closely related to our approach, based on the above taxonomy.

A. Multi-View Multi-Label Learning

Direct feature fusion is a method that concatenates the features of all views for classification.
Fig. 2. The framework of the proposed LVSL method. High-order label correlation information is used to augment and complete the shared label set. View inconsistency is handled by view-specific label learning, and label consistency is guided by view-label alignment learning. LVSL combines multi-view feature data with the consistent alignment of views and labels for non-aligned multi-view multi-label classification tasks.

For example, RLM-MCML [26] merges multi-view features through a simple concatenation strategy. Meanwhile, the structural relationships among labels are learned based on low-rank labels and a sample local-smoothness assumption. This degenerate merging method ignores the unique physical meaning of each view. Simultaneously, the high-dimensional heterogeneous features obtained by the merging strategy may lead to the curse of dimensionality and overfitting. Subspace learning methods assume that all views have a latent common representation on which to build a classification model; this is a feature fusion strategy. For example, in lrMMC [28], the first stage captures a low-dimensional common representation of all views, restricts it to a low-rank matrix, and then assigns a specific weight to each view to explore the complementarity between different views. In the second stage, the consensus matrix is embedded in matrix completion for classification. The difference between TMV-LE [22] and lrMMC is that tensor factorization technology is added to learn the high-order relationships between different views when using subspace learning to mine the common representation. In addition, a label enhancement method is used when performing multi-label classification. GLMVML [29] learns a consensus multi-view representation through matrix factorization and encodes complementary information from different views. In addition, it also learns global and local label structural information. iMvWL [20] attempts to capture a discriminative shared subspace from incomplete views through nonnegative matrix factorization and local label structure learning, thereby constructing a robust weak-label classifier. LSA-MML [23] uses subspace learning to force the alignment of undiscovered latent patterns to obtain a common representation, revealing the latent semantic patterns in the data. ICM2L [21] utilizes nonnegative matrix factorization to learn the individual and common information of different views, thereby improving the recognition ability of the classifier on rare labels. MLMVL-MM [30] uses multi-label correlation information to merge multiple feature views and perform maximum-margin classification simultaneously. However, with the subspace method, as the number of views increases, it becomes more challenging to learn an effective latent low-dimensional consistent representation, which leads to a decrease in the performance of the algorithm.
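The shared-subspace idea behind lrMMC and related methods can be sketched as a joint factorization X^v ≈ H W_v with a common low-dimensional representation H, solved by alternating least squares with closed-form updates. A toy sketch, without the low-rank, nonnegativity, or weighting constraints that the cited methods add:

```python
import numpy as np

def shared_subspace(views, k=5, n_iter=50, seed=0):
    """Alternating least squares for min sum_v ||X_v - H @ W_v||_F^2."""
    rng = np.random.default_rng(seed)
    N = views[0].shape[0]
    H = rng.normal(size=(N, k))                       # shared representation
    for _ in range(n_iter):
        # W_v update: ordinary least squares given H
        Ws = [np.linalg.lstsq(H, Xv, rcond=None)[0] for Xv in views]
        # H update: stack all views and solve jointly
        A = np.hstack(Ws)                             # k x sum(d_v)
        X = np.hstack(views)                          # N x sum(d_v)
        H = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T
    return H, Ws

rng = np.random.default_rng(1)
true_H = rng.normal(size=(100, 3))
views = [true_H @ rng.normal(size=(3, d)) for d in (8, 12)]   # consistent views
H, Ws = shared_subspace(views, k=3)
print(sum(np.linalg.norm(Xv - H @ W) for Xv, W in zip(views, Ws)))  # near zero
```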
Classification fusion: multiple views are fused to perform multi-label classification in the prediction stage. For example, VLSF [31] leverages pairwise label correlations and view contributions to learn view label-specific features in multi-view multi-label learning, addressing the issues of view consistency and complementarity. GRADIS [32] adopts a two-stage label disambiguation method to solve the multi-view partial multi-label problem. First, the candidate labels are disambiguated based on a fused similarity graph, and the ground-truth labels of the training samples are estimated; then, disambiguation-guided clustering analysis is used to generate a prediction model for learning label-specific features. NAIM3L [15] uses a classification fusion strategy that describes the global and local structures among labels as high-rank and low-rank, respectively, to alleviate the problem of insufficient available labels; it simultaneously solves the learning problems of missing labels, incomplete views, and non-aligned views. F2L21F [33] proposes a sparse framework for image classification. MLSO [3] builds an SVM classifier on each data view and jointly learns the multi-source multi-label learning tasks under a unified optimization framework. Multi-label classification results are obtained by a weighted combination of the decisions from multiple sources. The classification fusion methods generally consider that although the various views are not explicitly aligned, they can still be implicitly connected through public or shared labels [15]. Nevertheless, intuitively, each view has only a subset of the corresponding labels, meaning that each view can only capture a subset of the common or shared label data. Therefore, there are obvious shortcomings in the premises of the above classification-fusion-based methods. In addition, the existing multi-view multi-label learning methods have achieved certain results, but most of them are based on linear models. When a given dataset is linearly inseparable, we may not achieve the expected classification effect. For this reason, scholars add nonlinear mappings to the model. For example, TM3L [18] is a two-step learning strategy. The first step learns a common representation of multiple views with complementarity and consistency through subspaces, and the second step combines label correlation to build a nonlinear multi-label classifier model. MVLE [34] utilizes a low-dimensional latent semantic space to connect the labels and features of different views and further uses the Hilbert-Schmidt independence criterion (HSIC) [35] to mine the consistent information among different views. SIMM [36] proposes a neural-network MVML method, which combines shared-subspace learning and view-specific information identification. On this basis, MML-DAN [37] adopts a self-attention mechanism to model the interaction information of label-specific views to explore consistent label correlations. CDMM [19] utilizes multiple multi-label models to jointly learn view consistency information and introduces HSIC theory to extract the differing information among views.
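The HSIC criterion invoked by MVLE and CDMM has a simple empirical form, HSIC(X, Y) = (n - 1)^(-2) tr(K H L H), where K and L are kernel matrices of the two views and H is the centering matrix. A minimal sketch with Gaussian kernels; the bandwidth is an illustrative choice:

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Empirical Hilbert-Schmidt independence criterion between two views."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
    K, L = gaussian_kernel(X, sigma), gaussian_kernel(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
print(hsic(A, A))                            # strongly dependent: large value
print(hsic(A, rng.normal(size=(100, 3))))    # independent: near zero
```

Maximizing HSIC between view representations encourages consistency; minimizing it encourages diversity, which is how the two principles above are typically operationalized.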
B. Label Correlation Learning

Unlike traditional single-label learning tasks, multi-label learning aims to assign multiple category labels to a sample, and it has gained increasing attention in different machine learning tasks. From an intuitive point of view, samples with similar labels are more likely to have strong correlations [38]. Therefore, the existing multi-label methods are divided into three categories according to the label correlations used [9]. First-order strategies consider that there is no inherent correlation among labels and that labels are independent of each other [39], [40]. Second-order strategies consider that label correlation exists in pairs and use distance measurement methods to evaluate the correlation of label pairs [31], [41]. High-order strategies consider that label correlation in complex scenarios is multifaceted and semantically related [42], [43]. Theoretical research on label propagation dependencies shows that label correlations can reconstruct and enrich the original label information [44]. In addition, most previous label correlation studies considered the global structural information of labels, but more studies have confirmed that a correlation among labels may only be shared by a subset of samples [38]. Therefore, there are weak or absent correlations among samples with different labels, reflecting the local structural relationships within multiple labels [45]. ML-LRC [46] uses a low-rank structure to capture the complex associations among labels and jointly learns the label correlations and the multi-label classifier; GLOCAL [47] combines multiple label regularizers in a multi-label classifier to model both the global and the local structural relationships among labels. As mentioned above, most of the existing MVML methods consider that all views share one label set, but in practical applications there is the problem of inconsistent view-label information. Moreover, this problem, caused by non-aligned view learning, has not been directly investigated in previous studies. We propose an MVML method that learns view-specific labels to address this issue. First, view-specific label learning addresses the view-label inconsistency of non-aligned views. Then, effective global and local structural regularizers for label correlations are introduced into view-specific label learning. Finally, the complementary information among views is learned by a weighted combination of the views, and the model is extended nonlinearly. The effectiveness of our method is verified on multiple benchmark multi-view multi-label data sets.

A. Problem Settings

Let X = {X^v}_{v=1}^{m} denote a multi-view multi-label data set with m views, where X^v is the complete feature space of the v-th view and N represents the number of training samples. Y = [y_1, y_2, ..., y_N] ∈ R^{N×l} represents the label space corresponding to the feature set, where y_i ∈ {0, 1}^l is the label vector of x_i and l represents the number of labels.
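To fix ideas before the formulation, here is a toy construction of a non-aligned MVML data set in the above notation: each view's rows are independently permuted (so sample correspondence across views is unknown) and each view observes only a subset of the l labels. All sizes and choices are illustrative, not taken from the benchmark datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
N, l = 6, 5
Y = (rng.random((N, l)) < 0.4).astype(int)       # full label matrix Y in {0,1}^{N x l}

views, view_labels, observed_sets = [], [], []
for d in (4, 7, 3):                              # m = 3 views with different feature dims
    perm = rng.permutation(N)                    # unknown row order: views are non-aligned
    views.append(rng.normal(size=(N, d)))        # feature matrix X^v of this view
    observed = np.sort(rng.choice(l, size=3, replace=False))  # labels this view observes
    Yv = np.zeros_like(Y)
    Yv[:, observed] = Y[perm][:, observed]       # view-specific partial label matrix
    view_labels.append(Yv)
    observed_sets.append(observed)

print([Xv.shape for Xv in views])
print("labels observed per view:", [o.tolist() for o in observed_sets])
```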
B. Problem Formulation

In the initial prediction model of multi-view multi-label classification, label classification learning is a typical regression problem. The base model advocates that different views predict the same label result, so as to use the consistent information between different views. Furthermore, the different contribution weights of each view are considered in the base model to learn the complementary information among views. The objective function can be formally defined as follows: (1) The variable θ_v is used to measure the contribution of each view. There are two main problems currently faced: 1) We need to learn non-aligned views in a common label space. 2) The introduction of multi-label structural learning into multi-label learning helps to improve the classification performance of the algorithm. Therefore, how to combine these two attributes more effectively and make our model more discriminative is the main issue considered below. Eq. (1) assumes that the samples among views share a common label set, which is an implicit solution to view alignment consistency. However, a large amount of real-world data has no such explicitly or implicitly aligned view samples, because the labels that each view can observe in the real world may be only part of the entire information; it is therefore necessary to learn a non-aligned multi-view method that resolves the inconsistency of the observable information in each view. For the first question, we propose an explicit view non-alignment method, introducing the concept of view-specific labels. Then, we have the following equation: where P^v represents the view-specific label matrix. The second term of Eq. (3) introduces the topological structure of each view in the feature space, which ensures that the local geometric structure between the feature space and the semantic matrix of different views is consistent; L^v is the graph Laplacian matrix, and S_ij measures the similarity between instances x_i and x_j. In our work, the local geometric structure is constructed from the nearest-neighbour graph on the feature space X^v. In addition, the similarity between two instances of the v-th view is calculated as follows: where N_p(x) is the set of p nearest neighbors of instance x.
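The similarity formula itself did not survive extraction; a common concrete choice consistent with the description (assumed here, not confirmed by the source) is a heat-kernel weight S_ij = exp(-||x_i - x_j||²/σ²) for j in N_p(i) and 0 otherwise, symmetrized, with L = D - S. A sketch:

```python
import numpy as np

def knn_laplacian(X, p=5, sigma=1.0):
    """Graph Laplacian L = D - S from a symmetrized p-NN heat-kernel graph."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    S = np.zeros_like(sq)
    for i in range(X.shape[0]):
        nn = np.argsort(sq[i])[1:p + 1]          # p nearest neighbours, excluding i
        S[i, nn] = np.exp(-sq[i, nn] / sigma ** 2)
    S = np.maximum(S, S.T)                       # symmetrize the graph
    return np.diag(S.sum(1)) - S                 # degree matrix minus similarity

X = np.random.default_rng(0).normal(size=(50, 4))
L = knn_laplacian(X)
print(L.sum(1)[:3])   # rows of a Laplacian sum to 0
```

The smoothness regularizer then takes the usual form tr(Pᵀ L P), which penalizes view-specific labels that vary sharply between neighbouring instances.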
For the second problem, we introduce a structural learning method for label correlation. Most existing multi-label correlation learning methods have two limitations: 1) label correlation is usually treated as prior knowledge and cannot correctly describe the true dependency relationships among labels; 2) the local structure of the label relationships in the label space is ignored. For the first limitation, we use the idea of label propagation to build a joint learning model of view-specific labels and label correlations. Specifically, we believe that, in addition to keeping the structure consistent with the different view features, the view-specific labels should also take into account the impact of label correlation as a supplement to the original label space. Therefore, we introduce label correlation to supplement the original label matrix: (5) Regarding the second limitation, we believe that in addition to focusing on the global features of multi-labels, we also need to capture some local structural information. For example, there is usually a group of labels such that the labels within the group have strong correlations with each other and are independent of the other labels. Therefore, we use ‖·‖* to denote the nuclear norm and restrict the label correlation matrix C to a low-rank structure. Finally, we obtain the objective function as follows: (6) Based on the above problems, we jointly learn the non-aligned multi-view and multi-label semantic structures. Furthermore, because Eq. (6) is a linear model, it cannot handle linearly inseparable data. At present, some existing multi-label learning algorithms (such as [14], [34], and [48]) use nonlinear models to achieve good performance. We use a feature map φ(·) to map the feature space X to a higher-dimensional (possibly infinite-dimensional) Hilbert space. According to the representer theorem [40], we re-express the linear combination of input variables W as W = φ(x)^T A. Suppose K is the kernel matrix, where κ(·, ·) is the kernel function used (the Gaussian kernel is used in this paper). Then, Eq. (3) and Eq. (6) can be rewritten as: In the next section, we will solve problem (8) by alternating iterative optimization.

A. Model Optimization

The optimization problem in (8) is convex, and it can be solved by the following alternating optimization procedure. Fix P^v, C and θ; optimize A^v. Taking the derivative of L(A^v) w.r.t. A^v and setting the derivative to 0 yields a closed-form solution for A^v. Fix A^v, C and θ; optimize P^v. Taking the derivative of L(P^v) w.r.t. P^v and setting the derivative to 0 yields a closed-form solution for P^v. Compared with the variables A^v and P^v, for which closed-form solutions can be obtained directly, it is difficult to optimize C directly because of the nonsmooth regularization term in (8). To make the objective function (8) separable, we introduce an auxiliary variable Z to replace C; an equivalent objective function can then be expressed as: We use augmented Lagrangian multipliers (ALMs) to solve this problem and reformulate the objective function (13) as: Then, the inexact ALM (IALM) method is used to iteratively solve for each variable in (14) by the block coordinate descent method. μ and Λ denote a nonnegative penalty factor and the Lagrangian multiplier, respectively. According to the optimization strategy of IALM [49], we divide (14) into the following subproblems: Update multiplier Λ.
Fix A^v, P^v and C; optimize θ. In summary, we introduce a kernel model to generate the predicted label vector Y_t: where η is a given threshold obtained by cross-validation.

B. Complexity Analysis

In this section, we mainly analyze the complexity of the optimization steps listed in Algorithm 1. The time complexity of LVSL is mainly controlled by step 4. The complexity of updating A^v in each iteration is O(N^3 + N^2 l); with t the number of iterations, the model typically reaches its optimum within about ten iterations, converging quickly.

A. Experimental Settings

We performed experiments on 7 benchmark multi-view multi-label data sets, which can be downloaded from Mulan [51]. Pascal07, Corel5k, ESPgame, Iaprtc12, and Mirflickr are five widely used image datasets from [52], [53]. The details of the datasets are summarized in Table I. To verify the effectiveness of the proposed method, we compare our method with the following seven competing methods. Two of these methods use a concatenation strategy, building a multi-label learning model on each data view and combining the weighted output results to make the final prediction. The other methods are multi-view multi-label learning methods.
- ICM2L [21]: an individual-view and commonality-view mining MVML classification method. Parameters are configured according to the suggestions given in the paper.
- iMvWL [20]: incomplete multi-view weak-label learning. In our experiments, the complete view information is available. Parameters are configured according to the suggestions given in the paper.
Code: https://github.com/zhaodwahu/LVSL. For all the above methods, the parameters are tuned by grid search to achieve the best performance.

B. Evaluation Metrics

We use five evaluation metrics that are widely used in multi-label learning to measure the performance of each algorithm: average precision (AP), coverage (CV), Hamming loss (HL), one error (OE), and ranking loss (RL). The larger the value of AP is, the better; the smaller the values of the other evaluation metrics are, the better. The detailed metric definitions can be found in [9], [10].

C. Experimental Results

We performed fivefold cross-validation on each dataset, and each algorithm repeated the experiment 5 times. The average and standard deviation of each metric value on each dataset are reported in Tables II to VI. We show the best results in red and the second-best results in blue. The Friedman test [55] is a common strategy for testing whether multiple algorithms have the same performance. From Table VII, we know that the F_F statistics of all metrics are greater than the critical value. All metrics therefore reject the null hypothesis, so we need a post-hoc test to identify the significant differences among the approaches. In this article, we choose the Nemenyi test [39], [56], [57] as the post-hoc test. In Fig. 3, algorithm performance is sorted from left to right, with the best algorithm on the far right. Specifically, if the average ranking difference between comparison algorithms is within one CD value, they are connected with a solid red line. From the reports in Tables II to VI and Fig. 3(a) to (e), the following conclusions can be drawn:
- Among the 35 configurations (7 datasets and 5 evaluation metrics), ours ranked first and second in 71.4% and 14.3% of cases, respectively.
- Fig.
3 shows that LVSL is significantly better than the other methods in 40% of cases, followed by CDMM and SIMM in 20% of cases. It is worth noting that our method is always better than CDMM.
- Encouragingly, by observing Tables II to VI, we find that our method achieves better performance on all metrics of Emotions and Yeast. The overall CV performance of LVSL is not as good as that of SIMM, but it is not far from the best results.
Further analysis of the experimental results is as follows:
- Compared with LSML and ML-kNN, it can be seen that traditional multi-label methods applied via concatenation are at a disadvantage against the multi-view multi-label learning approaches, mainly because they ignore the mining of multi-view consistency and complementary information and the physical interpretation of the characteristics of different views.
- The comparison of LVSL with iMvWL, ICM2L, and TM3L shows the benefit of our view-specific label learning method.
- LVSL, SIMM, TM3L, and CDMM use nonlinear mappings to address the linear inseparability problem. In general, LVSL is better than the other three methods. View-specific labels and multi-label structural learning can effectively improve classification performance. In addition, SIMM ignores the impact of label correlation, which contributes to its poorer overall performance.
- LVSL performs worse than SIMM on the AP and CV metrics on the Pascal07 and Mirflickr datasets, for two main reasons. (1) LVSL uses a single kernel function for the kernel mapping of multiple views (a minimal per-view kernel construction is sketched after this list), and the performance of kernel methods often depends on the choice of the kernel function. Because the nonlinear relationships in the data of each view may differ, the optimal kernel function for one view may not be suitable for another view [58]; this provides a new direction for our future research. SIMM does not need to consider this problem. (2) SIMM develops the shared subspace based on the information shared among the views. In our work, because of the non-aligned view problem, the information among views cannot be communicated directly, which affects the performance of LVSL to a certain extent.
Additionally, there are two main reasons why our method has an advantage over deep learning methods:
- The current multi-view multi-label learning tasks cannot be trained end-to-end directly by deep learning and require solutions that benefit from traditional feature extraction techniques. Therefore, the feature representation capability of deep learning is limited in this task; thanks to its powerful nonlinear data processing capability, our kernel-trick-based method can also achieve this purpose [48].
- The training data in this paper are relatively limited, and deep learning may overfit the training data, resulting in insufficient model generalization ability. The traditional method has good generalization ability, interpretability, transparency, and universality [59]. Therefore, to some extent, traditional methods are more suitable for the complex tasks addressed in this paper.
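As referenced above, here is a minimal sketch of the per-view Gaussian kernel construction, using the median-distance bandwidth heuristic as an illustrative (assumed, not the paper's) way of setting the single width shared within a view:

```python
import numpy as np

def gaussian_kernel_median(X):
    """Gaussian kernel matrix with the median-distance bandwidth heuristic."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(sq[sq > 0])          # one shared width for this view
    return np.exp(-sq / (2 * sigma2))

rng = np.random.default_rng(0)
views = [rng.normal(size=(80, d)) for d in (5, 20)]
kernels = [gaussian_kernel_median(Xv) for Xv in views]
print([K.shape for K in kernels])           # each view becomes an 80 x 80 kernel matrix
```

Because the best bandwidth (or kernel family) can differ across views, a single shared kernel is a real limitation, which is what motivates the multi-kernel direction mentioned in the conclusion.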
D. Ablation Analysis

In this section, to further verify the effectiveness of each component of LVSL, we conducted additional ablation experiments and report the values of the five evaluation metrics in Table VIII. LVSL-I, LVSL-II, and LVSL-III are variants of LVSL that exclude the influence of view-specific labels, label correlations, and view contributions, respectively. Comparing the results of LVSL-I and LVSL in Table VIII, we find that the overall performance improves significantly after adding view-specific labels, which confirms our motivation of using view-specific label learning to solve the non-aligned view problem. Comparing LVSL-II and LVSL, we find that LVSL is better than LVSL-II in most cases, which proves the necessity of capturing label structure information and verifies the effectiveness of using the label correlation matrix C to complement the original label matrix Y. In some cases, LVSL-III and LVSL have the same performance, showing that our contribution measurement method has room for further improvement.

E. Parameter Sensitivity Analysis

The hyperparameter λ1 controls the complexity of the model coefficients and adjusts the balance between overfitting and underfitting. When λ1 is too small, the model overfits; when λ1 is too large, it underfits. The hyperparameter λ2 controls the contributions of the different views. The hyperparameter λ3 controls the structural diversity among different views. The hyperparameter λ4 controls the global consistency of information between the view-specific labels and the real labels. The hyperparameter λ5 controls the effect of local label correlation. Fig. 4 shows that the parameter λ1 performs best at intermediate values; intuitively, an intermediate value balances the model fit. The effect is better when the parameter λ2 reaches 10^5. A larger value means that the influence of the contribution weight of each view is ignored, while a smaller value is too sensitive to the view contribution parameters and ignores the complementary information between views. The parameters λ3 and λ5 tend to take smaller values, but values that are too small would ignore the contribution of the corresponding regularization terms, so we generally choose a median value. The performance is better when the parameter λ4 takes a larger value. A larger value can fully learn the view consistency information of multiple views, but an excessively large value also leads to insufficient complementary learning of the view-specific labels. Our parameter sensitivity analysis results on the other datasets are similar, and similar conclusions can be drawn.

F. Further Analysis

We report the efficiency analysis of LVSL in this section. Fig. 5 shows the iterative behaviour of our method on two datasets: the value of the objective function decreases significantly during the initial iterations and gradually converges as the optimization proceeds. LVSL tends to converge within 10 iterations on both datasets, showing that it converges quickly. Our convergence results on the other datasets are similar.
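Convergence monitoring of the kind shown in Fig. 5 amounts to recording the objective value after each alternating-optimization sweep and stopping when the change falls below a tolerance. A generic sketch; the step and objective here are stand-ins, not the LVSL updates:

```python
import numpy as np

def run_until_converged(step, objective, x0, tol=1e-6, max_iter=100):
    """Iterate x <- step(x), recording objective values, until the change < tol."""
    x, history = x0, [objective(x0)]
    for _ in range(max_iter):
        x = step(x)
        history.append(objective(x))
        if abs(history[-1] - history[-2]) <= tol:   # objective has stabilized
            break
    return x, history

# Stand-in problem: contraction steps on f(x) = ||x||^2 converge in ~a dozen sweeps
x, hist = run_until_converged(lambda x: 0.5 * x, lambda x: float(x @ x), np.ones(3))
print(len(hist) - 1, "iterations; final objective:", hist[-1])
```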
VI. CONCLUSION

This paper proposes a novel multi-view multi-label classification method that jointly learns view-specific labels and label structures. LVSL differs from existing multi-view multi-label classification work, which implicitly connects views through common or shared labels, in that it assigns a specific label set to each view to solve the problem of view-label inconsistency in non-aligned views. When constructing the view-specific labels, the consistency and diversity information among the views in multi-view learning is learned, and the label correlation information from multi-label learning is also incorporated. A large number of experiments show that the proposed non-aligned view learning method is a promising solution for multi-view multi-label classification based on view-specific labels. This method is of great significance for future research on the multi-view multi-label classification of non-aligned views. Future work will be devoted to proposing new methods for studying view-specific label learning problems via multi-kernel learning.

Fig. 3. The performance comparison results of LVSL and the other comparison methods using the Nemenyi test (CD = 3.9685 at the 0.05 significance level) under five evaluation metrics.

Fig. 4. Parameter sensitivity analysis of the LVSL algorithm on the Corel5k dataset. (a) Effect of λ1 with the other parameters fixed. (b) Effect of λ2 with the other parameters fixed. (c) Effect of λ3 with the other parameters fixed. (d) Effect of λ4 with the other parameters fixed. (e) Effect of λ5 with the other parameters fixed.

Table V. Experimental results (mean ± std) on one error (↓).

Table VII. The F_F statistic of each evaluation metric and the critical value under the Friedman test. Table VII summarizes the Friedman statistical F_F value of each evaluation metric and the critical value at the 0.05 significance level. Observing Table

Table VIII. Comparison results of LVSL-I, LVSL-II, LVSL-III and LVSL. LVSL-I is without the view-specific label structure, LVSL-II is without label correlation, and LVSL-III uses the same contribution weight for all views.
7,896
2023-01-01T00:00:00.000
[ "Computer Science" ]
The complex Langevin analysis of spontaneous symmetry breaking induced by complex fermion determinant In many interesting physical systems, the determinant which appears from integrating out fermions becomes complex, and its phase plays a crucial role in the determination of the vacuum. An example of this is QCD at low temperature and high density, where various exotic fermion condensates are conjectured to form. Another example is the Euclidean version of the type IIB matrix model for 10d superstring theory, where spontaneous breaking of the SO(10) rotational symmetry down to SO(4) is expected to occur. When one applies the complex Langevin method to these systems, one encounters the singular-drift problem associated with the appearance of nearly zero eigenvalues of the Dirac operator. Here we propose to avoid this problem by deforming the action with a fermion bilinear term. The results for the original system are obtained by extrapolations with respect to the deformation parameter. We demonstrate the power of this approach by applying it to a simple matrix model, in which spontaneous symmetry breaking from SO(4) to SO(2) is expected to occur due to the phase of the complex fermion determinant. Unlike previous work based on a reweighting-type method, we are able to determine the true vacuum by calculating the order parameters, which agree with the prediction by the Gaussian expansion method. Introduction The sign problem is a notorious technical problem that occurs in applying Monte Carlo methods to a system with a complex action S. The importance sampling cannot be applied as it is since the integrand exp (−S) of the partition function cannot be regarded as a Boltzmann weight. If one uses the absolute value | exp (−S) | for generating configurations and treats the phase factor as a part of the observable, huge cancellations occur among configurations, and the required statistics grows exponentially with the system size. This problem occurs in various interesting systems in particle physics such as finite density QCD, gauge theories with a theta term or a Chern-Simons term, chiral gauge theories and supersymmetric theories. The complex Langevin method (CLM) [1,2] is a promising approach to such complex-action systems, which may be regarded as an extension of the stochastic quantization based on the Langevin equation. The dynamical variables of the original system are naturally complexified, and the observables as well as the drift term are extended holomorphically by analytic continuation. It is known that the CLM works beautifully in highly nontrivial cases [3][4][5][6], while it gives simply wrong results in other cases [7][8][9][10]. In the past several years, significant progress has been made in the theoretical understanding of the method and the conditions for justifying the CLM. First, it was realized that the probability distribution of the complexified dynamical variables has to fall off fast enough in the imaginary directions of the configuration space [11,12]. In order to satisfy this condition, a new technique called gauge cooling [13] was proposed. Using the gauge cooling, the CLM has been successfully applied to finite density QCD either with heavy quarks [13] or at high temperature [20]. An explicit justification of the gauge cooling has been provided recently [21] extending the argument for justification of the CLM without gauge cooling [11,12].
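For orientation, the CLM in its simplest setting: for a one-variable model with complex action S(x), the variable is complexified (x → z) and evolved by the discretized complex Langevin equation z_{n+1} = z_n − (∂S/∂z)Δt + √(2Δt) η_n with real Gaussian noise η_n. A toy sketch for the Gaussian action S = σz²/2 with complex σ (Re σ > 0), for which ⟨z²⟩ = 1/σ is known exactly; the step size, trajectory length, and σ are illustrative, and agreement is up to percent-level statistical error:

```python
import numpy as np

def complex_langevin_gaussian(sigma, dt=0.01, n_steps=200_000, seed=0):
    """Complex Langevin for S(z) = sigma * z**2 / 2; the drift is -dS/dz = -sigma*z."""
    rng = np.random.default_rng(seed)
    z = 0.0 + 0.0j
    acc, n_meas = 0.0 + 0.0j, 0
    for n in range(n_steps):
        z += -sigma * z * dt + np.sqrt(2 * dt) * rng.normal()  # real noise only
        if n > n_steps // 10:          # discard thermalization
            acc += z * z
            n_meas += 1
    return acc / n_meas

sigma = 1.0 + 0.5j
print("CLM   <z^2> =", complex_langevin_gaussian(sigma))
print("exact <z^2> =", 1 / sigma)
```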
It was known for some time that the CLM gives wrong results also when the determinant that appears from integrating out fermions takes values close to zero during the complex Langevin simulation. This was first realized in the Random Matrix Theory for finite density QCD [22,23] and confirmed also in effective Polyakov line models [24]. In these papers, it was speculated that the problem occurs due to the ambiguity associated with the branch cut in the logarithm of the complex fermion determinant, which appears in the effective action. On the other hand, ref. [25] pointed out that the singular drift term one obtains from the fermion determinant breaks holomorphy, which plays a crucial role in justifying the method. A theoretical understanding of this problem and a possible cure have been given recently. First it was pointed out in ref. [26] that the branch cut cannot be the cause of the problem since the CLM can be formulated solely in terms of the weight w = exp(−S) without ever having to refer to the action S. Indeed it was found that a similar problem can occur when the action has pole singularities instead of logarithmic singularities. In the same paper, it was shown that the probability distribution of the complexified variables has to fall off fast enough near the singularities of the drift term, based on the argument for justification in ref. [11,12]. It was then proposed [27,28] that the gauge cooling can be used to satisfy this condition as well with an appropriate choice of the complexified gauge transformation. A test in the Random Matrix Theory shows that the gauge cooling indeed solves the singular-drift problem unless the quark mass becomes too small. In ref. [29], the argument for justification with or without gauge cooling was revisited. In particular, it was pointed out that the expectation values of time-evolved observables, which play a crucial role in the argument, can be ill-defined. Taking this into account, it was shown that the CLM can be justified if the probability distribution of the drift term falls off exponentially or faster at large magnitude. This condition serves as a useful criterion, which tells us clearly whether the results obtained by the CLM are trustable or not. In this paper, we focus on the singular-drift problem that occurs in a system with a complex fermion determinant. In many such systems, the phase of the fermion determinant is expected to play a crucial role in the determination of the vacuum. An example of this is finite density QCD at low temperature and high density, where various exotic fermion condensates are conjectured to form (See ref. [30], for instance.). Another example is the Euclidean version of the type IIB matrix model [31] for 10d superstring theory, where the SO(10) rotational symmetry is conjectured to be spontaneously broken [32][33][34][35]. When one applies the CLM to these systems, the singular-drift problem occurs due to the appearance of eigenvalues of the Dirac operator close to zero. We propose to avoid this problem by deforming the action with a fermion bilinear term and extrapolating its coefficient to zero. The fermion bilinear term should be chosen in such a way that the nearly zero eigenvalues of the Dirac operator are avoided and yet the vacuum of the system is minimally affected. We test this idea in an SO(4)-symmetric matrix model with a Gaussian action and a complex fermion determinant, in which spontaneous breaking of SO(4) symmetry is expected to occur due to the phase of the determinant [36]. 
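The origin of the singular drift can be illustrated with a deliberately simple numerical sketch (my own toy construction, not taken from the cited works): when the weight contains det D, the drift acquires a contribution of the form tr(D⁻¹ ∂D), which generically blows up as the smallest eigenvalue of D approaches zero:

```python
import numpy as np

# Toy illustration: the fermionic drift ~ tr(D^{-1} dD) generically blows up
# as an eigenvalue of D approaches zero. B and dD are invented stand-ins,
# not the Dirac operator of any model discussed in the text.
rng = np.random.default_rng(1)
n = 20
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
dD = rng.normal(size=(n, n))            # a fixed, arbitrary derivative direction

lam = np.linalg.eigvals(B)
lam0 = lam[np.argmin(np.abs(lam))]      # eigenvalue of B closest to zero

for m in [1.0, 0.1, 0.01, 0.001]:
    D = B - lam0 * np.eye(n) + m * np.eye(n)   # one eigenvalue shifted to m
    drift = np.trace(np.linalg.solve(D, dD))
    print(f"m = {m:7.3f}   |tr(D^-1 dD)| = {abs(drift):.3e}")
```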
This model was studied previously by the Gaussian expansion method (GEM) [37], and the spontaneous breaking of the SO(4) symmetry down to SO(2) was suggested by comparing the free energy of the SO(2)-symmetric vacuum and the SO(3)-symmetric vacuum. The same model was studied also by Monte Carlo simulation using the factorization method, and the order parameters obtained by the GEM were reproduced for both the SO(2)-symmetric vacuum and the SO(3)-symmetric vacuum [38,39]. However, the comparison of the free energy for the two vacua suffered from too much uncertainty to allow a definite conclusion on the true vacuum by this approach. When one applies the CLM to this system, the singular-drift problem is actually severe because the fermionic part of the model is essentially an exactly "massless" system. Indeed, it turns out that the gauge cooling proposed in refs. [27,28] is not sufficient to solve this problem in the case at hand. Following the idea described above, we therefore add a fermion bilinear term, which breaks the SO(4) symmetry minimally, down to SO(3). The results of the CLM show that the SO(3) symmetry of the deformed model is broken spontaneously to SO(2). Extrapolating the deformation parameter to zero, we find that the SO(4) symmetry of the original matrix model is broken spontaneously to SO(2) and that the order parameters thus obtained agree well with the prediction obtained by the GEM. We also try another type of fermion bilinear term for the deformation and show that the final results obtained after the extrapolations remain the same, which supports the validity of our analysis. Note that we are able to determine the true vacuum directly, without having to compare the free energy for vacua preserving different amounts of rotational symmetry. In order to probe the spontaneous symmetry breaking (SSB), we need to introduce an O(ε) symmetry breaking term in the action, on top of the deformation described above, and send ε to zero after taking the large-N limit. The singular-drift problem occurs at small ε even for the deformed model. Here, the criterion for correct convergence proposed recently [29] turns out to be useful since it tells us which data are free from the singular-drift problem and hence can be trusted. Indeed, we find that the data points in the reliable region can be fitted nicely by the expected asymptotic behavior, while the data points in the unreliable region deviate from the fitting curve. We hope that our strategy to overcome the singular-drift problem enables the application of the CLM to the type IIB matrix model and to finite density QCD at low temperature and high density. The rest of this paper is organized as follows. In section 2, we define the SO(4)-symmetric matrix model and briefly review the results obtained by the previous approaches. In section 3, we explain how we apply the CLM to the SO(4)-symmetric matrix model. In particular, we deform the action with a fermion bilinear term, which enables us to investigate the SSB without suffering from the singular-drift problem. In section 4, we present the results of our analysis. In particular, we extrapolate the deformation parameter to zero, and confirm that the SSB from SO(4) to SO(2) indeed occurs in this model. The order parameters thus obtained are in good agreement with the prediction of the GEM. Section 5 is devoted to a summary and discussions. In appendix A we give the details of how we determine the region of validity of the CLM, which is useful in making the ε → 0 extrapolations.
In appendix B, we present the results obtained by deforming the action with another type of fermion bilinear term, which turn out to be consistent with the ones obtained in section 4.
Brief review of the SO(4)-symmetric matrix model
The SO(4)-symmetric matrix model investigated in this paper is defined by the partition function (2.1) [36]. The bosonic part and the fermionic part of the action are built, respectively, from N × N Hermitian matrices X_μ (μ = 1, . . . , 4) and from N_f copies of N-dimensional column vectors ψ^f (together with their conjugates ψ̄^f), which couple to the X_μ through the Pauli matrices σ_i (i = 1, 2, 3). The model has an SO(4) symmetry, under which X_μ transforms as a vector, whereas ψ_α and ψ̄_α transform as Weyl spinors. Also, the model has an SU(N) symmetry, under which the dynamical variables transform as X_μ → g X_μ g^{-1}, ψ^f → g ψ^f, ψ̄^f → ψ̄^f g^{-1} with g ∈ SU(N). Integrating out the fermionic variables for each f, one obtains the determinant det D of the Dirac operator, which is complex in general. Thus, the partition function (2.1) can be rewritten as an integral over X_μ alone, weighted by (det D)^{N_f} e^{−S_b}. It was speculated that the SO(4) rotational symmetry of the model is spontaneously broken in the large-N limit with fixed r = N_f/N > 0 due to the effect of the phase of the determinant [36]. In the phase-quenched model, which is defined by omitting the phase of the fermion determinant, the SSB was shown not to occur by Monte Carlo simulation [39]. We may therefore say that the SSB, if it really occurs, should be induced by the phase of the fermion determinant. Throughout this paper, we consider the r = 1 case, which corresponds to N_f = N. In order to see the SSB, we introduce an SO(4)-breaking mass term (2.7) in the action, with coefficients ε m_μ ordered as in (2.8), and define the order parameters for the SSB by the expectation values of λ_μ = (1/N) tr (X_μ)², where no sum over μ is taken. Due to the ordering (2.8), the expectation values obey ⟨λ_1⟩ ≥ ⟨λ_2⟩ ≥ ⟨λ_3⟩ ≥ ⟨λ_4⟩ at finite ε. Taking the large-N limit and then sending ε to zero afterwards, the expectation values λ_μ (μ = 1, · · · , 4) may not take the same value. In that case, we can conclude that the SSB occurs. Explicit calculations based on the GEM were carried out assuming that the SO(4) symmetry is broken down either to SO(2) or to SO(3) [37]. For r = 1, the order parameters obtained in each vacuum are given in (2.11) and (2.12). The free energy was calculated in each vacuum, and the SO(2)-symmetric vacuum was found to have a lower value. Monte Carlo simulation of this model is difficult due to the sign problem caused by the complex fermion determinant. Among various reweighting-type methods, the factorization method [35] turned out to be particularly useful in the present case. Assuming that the SO(4) symmetry is spontaneously broken down either to SO(2) or to SO(3), the results of the GEM (2.11) and (2.12) were reproduced [38,39]. However, the calculation of the free energy difference had large uncertainties, and it was not possible to determine which vacuum is actually realized using this approach.
Application of the CLM to the SO(4)-symmetric matrix model
In this section, we explain how we apply the CLM to the SO(4)-symmetric matrix model (2.1). Including the symmetry breaking term (2.7), we can write the partition function in the form (3.1). The drift term v_μ that appears in the Langevin equation, given in (3.3) as the derivative of the effective action with respect to X_μ, is first defined as a function of the Hermitian matrices X_μ. Note that the second term in (3.3), which comes from the fermion determinant, is not Hermitian in general, corresponding to the fact that the fermion determinant is complex.
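The meaning of the order parameters and of the ordering inequality can be checked in the trivially solvable bosonic (phase-quenched, free) limit, where each X_μ is Gaussian. The sketch below assumes the mass term enters the Gaussian weight as (N/2)(1 + ε m_μ) tr X_μ², which is my reading of (2.7); ε and the m_μ values are arbitrary:

```python
import numpy as np

# Check of <lambda_mu> = 1/(1 + eps*m_mu) in the free bosonic model where
# each X_mu is Gaussian with weight exp(-(N/2)(1 + eps*m_mu) tr X_mu^2).
rng = np.random.default_rng(2)
N, eps, nsamp = 32, 0.2, 400
m = np.array([1.0, 2.0, 4.0, 8.0])      # ordered coefficients m_1 < ... < m_4

def sample_X(c):
    # Hermitian matrix distributed as exp(-(N*c/2) tr X^2)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2.0
    return H / np.sqrt(N * c)

for mu, m_mu in enumerate(m, start=1):
    c = 1.0 + eps * m_mu
    lam = [np.trace((X := sample_X(c)) @ X).real / N for _ in range(nsamp)]
    print(f"<lambda_{mu}> = {np.mean(lam):.4f}   (exact: {1.0 / c:.4f})")
```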
Thus, the application of the idea of stochastic quantization naturally leads us to complexifying the dynamical variables, which amounts to regarding the Hermitian matrices X_μ as general complex matrices X_μ. Accordingly, the definition of the drift term (3.3) is extended to general complex matrices X_μ by analytic continuation. Then we consider the fictitious-time evolution of the general complex matrices X_μ described by the discretized version of the complex Langevin equation, X_μ(t + Δt) = X_μ(t) + Δt v_μ(X(t)) + √Δt η_μ(t), where η_μ(t) is an N × N Hermitian matrix generated with the probability proportional to e^{−(1/4) Σ_μ tr η_μ(t)²}. Expectation values of observables are then obtained as averages over the fictitious time within t_0 ≤ t ≤ t_0 + T, where t_0 represents the time required for thermalization and T should be large enough to achieve good statistics. In order to justify the CLM, the probability distribution of the drift term (3.3) measured during the complex Langevin simulation should fall off exponentially or faster at large magnitude [29]. In the present model, this condition can be violated for two reasons. First, the first term in (3.3) can be large when the configuration X_μ makes a long excursion in the anti-Hermitian directions. Second, the second term in (3.3) can become singular when the Dirac operator develops eigenvalues close to zero. In order to avoid the first problem, we use the gauge cooling [13]. Note that the original theory (3.1) has the symmetry X_μ → g X_μ g^{−1} with g ∈ SU(N), under which the drift term (3.3) transforms covariantly as v_μ → g v_μ g^{−1} and the observables (2.9) are invariant. Upon complexifying the variables, the symmetry property of the drift term and the observables enhances to X_μ → g X_μ g^{−1} with g ∈ SL(N, C). Using this fact, we can implement the gauge cooling procedure [13] in the Langevin process as the transformation X_μ → g X_μ g^{−1} (3.6) applied after each Langevin step, where the transformation matrix g ∈ SL(N, C) is chosen appropriately as a function of the configuration X_μ. For that purpose we define the Hermiticity norm (3.8), N_H = (1/N) Σ_μ tr[(X_μ − X_μ†)†(X_μ − X_μ†)], which measures the deviation of X_μ from a Hermitian configuration, and choose the SL(N, C) transformation g in (3.6) in such a way that the norm is minimized. In practice, this is done by using the steepest descent method as follows. Let us consider an infinitesimal SL(N, C) transformation g = e^{Σ_a ε_a t_a} (3.9), where the N × N traceless Hermitian matrices t_a are the generators of SU(N) normalized as tr (t_a t_b) = δ_ab. Since the norm (3.8) is invariant under SU(N), we restrict the infinitesimal parameters ε_a to be real. Under the infinitesimal transformation, the Hermiticity norm (3.8) changes at first order in the ε_a, from which the gradient f_a = ∂N_H/∂ε_a is obtained. Using this f_a, we consider a finite SL(N, C) transformation g = e^{−α Σ_a f_a t_a}, where the real positive parameter α is chosen in such a way that the Hermiticity norm (3.8) is approximately minimized. We repeat this procedure until the norm (3.8) stops decreasing within a certain accuracy. The Langevin step-size is chosen as Δt = 2.0 × 10^{−4} unless stated otherwise. We find that the gauge cooling keeps the Hermiticity norm well under control. Next we turn to the second problem, which is associated with the eigenvalues of the Dirac operator D close to zero. In Fig. 2, we plot the eigenvalue distribution of the Dirac operator obtained during the complex Langevin simulation for ε = 0.1 (Left) and ε = 0.5 (Right) with N = 32. We find that there are many eigenvalues close to zero for ε = 0.1, but not for ε = 0.5. This suggests that there is some critical ε, below which the results of the CLM cannot be trusted because of the singular-drift problem. It turns out that the extrapolation to ε = 0 is rather difficult in this situation. In order to avoid this problem, we add a fermion bilinear term to the action (2.3).
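A compact sketch of one gauge-cooling iteration as just described: steepest descent on the Hermiticity norm with g ∈ SL(N, C). The concrete normalization of the norm and the packaging of the gradient into a single matrix G are assumptions of this illustration:

```python
import numpy as np
from scipy.linalg import expm

def hermiticity_norm(Xs):
    # (1/N) sum_mu tr[(X - X^dag)^dag (X - X^dag)]; normalization assumed.
    return sum(np.linalg.norm(X - X.conj().T) ** 2 for X in Xs) / Xs[0].shape[0]

def gauge_cooling_step(Xs, alpha=0.005):
    # Steepest descent: X_mu -> g X_mu g^{-1} with g = exp(-alpha * G), where
    # G = sum_mu [X_mu - X_mu^dag, X_mu + X_mu^dag] packages the gradient of
    # the norm as a traceless Hermitian matrix, so det g = 1 and g in SL(N,C).
    G = sum((X - X.conj().T) @ (X + X.conj().T)
            - (X + X.conj().T) @ (X - X.conj().T) for X in Xs)
    g, ginv = expm(-alpha * G), expm(alpha * G)
    return [g @ X @ ginv for X in Xs]

def rand_herm(n, rng):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2.0

rng = np.random.default_rng(5)
n = 8
# Stand-ins for complexified configurations: Hermitian plus a small
# anti-Hermitian part (invented test matrices, not simulation data).
Xs = [rand_herm(n, rng) + 0.3j * rand_herm(n, rng) for _ in range(4)]
for step in range(5):
    print(f"step {step}: Hermiticity norm = {hermiticity_norm(Xs):.5f}")
    Xs = gauge_cooling_step(Xs)
```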
The partition function of the deformed model is defined by adding a fermion bilinear term to the fermionic action, which amounts to replacing the Dirac operator by the deformed operator (3.16), characterized by parameters M_μ and an overall coefficient m_f. Note that the extra fermion bilinear term explicitly breaks the SO(4) symmetry of the original model (2.1). Here we choose the parameters M_μ as in (3.17), in such a way that the SO(4) symmetry is broken minimally, down to SO(3). We can then ask whether the SO(3) symmetry of this deformed model is spontaneously broken in the large-N limit. In Fig. 3, we plot the eigenvalue distribution of the Dirac operator (3.16) obtained during the complex Langevin simulation of the deformed model for ε = 0.1 (Left) and ε = 0.5 (Right) with m_f = 1.0 and N = 32. We find that the distribution is shifted in the real direction. This is understandable since, at large m_f, the eigenvalues of the Dirac operator would be distributed around m_f. As a result, the distribution avoids the singularity even for ε = 0.1, in contrast to the undeformed (m_f = 0) case. Therefore, at finite m_f we can extrapolate ε to zero using data obtained at smaller ε. Eventually, we extrapolate the deformation parameter m_f to zero, and compare the results with the prediction (2.11) obtained by the GEM for the original model.
Results of our analysis
In this section, we present our results obtained by the CLM as described in the previous section. Let us recall that we have introduced an O(ε) mass term (2.7) for the bosonic matrices, which breaks the SO(4) symmetry explicitly. In order to probe the SSB, we need to take the large-N limit with fixed ε, and then make an extrapolation to ε = 0. In Fig. 4, the expectation values ⟨λ_μ⟩ (μ = 1, 2, 3, 4) obtained for N = 16, 32, 48 with ε = 0.1 and m_f = 1.0 are plotted against 1/N; the data can be fitted nicely to straight lines. Thus we can extrapolate the expectation values to N = ∞ for each ε and m_f. In what follows, we assume that the large-N limit is already taken in this way. Next we would like to make an extrapolation to ε = 0. For that purpose, it is convenient to consider the ratio ρ_μ(ε, m_f) defined in (4.1), obtained by dividing each ⟨λ_μ⟩ by the average of the four. This is motivated by the fact that the mass term (2.7) tends to make all the expectation values ⟨λ_μ⟩ smaller than the value to be obtained in the ε → 0 limit. By taking the ratio (4.1), the finite-ε effects are largely canceled by the denominator, and the extrapolation to ε = 0 becomes easier. Since ε is a parameter in the action (2.7), the expectation values ⟨λ_μ⟩, and hence the ratios (4.1), can be expanded in a power series with respect to ε. By taking the ratios, the coefficients of the higher order terms become smaller, and the truncation of the series becomes valid for a wider range of ε. In Fig. 5, we plot the ratio (4.1) against ε for m_f = 1.0 (Top-Left), 0.8 (Top-Right), 0.6 (Bottom-Left) and 0.4 (Bottom-Right). The data obtained at small ε suffer from the singular-drift problem, and hence cannot be trusted. Here the condition for justifying the CLM proposed recently in ref. [29] turns out to be useful, since it enables us to determine the range of validity, as we explain in appendix A. Taking this into account, we fit the data in Fig. 5 to a quadratic form in ε using the fitting range given in Table 1, where we also present the extrapolated values. We find for each value of m_f that ρ_1(ε, m_f) and ρ_2(ε, m_f) approach the same value in the ε → 0 limit, while the others approach smaller values. This implies that the SSB from SO(3) to SO(2) occurs in the deformed model. In Fig. 6, we plot the extrapolated values lim_{ε→0} ρ_μ(ε, m_f) obtained in this way against m_f².
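The effect of the deformation on the spectrum can be mimicked with a generic non-Hermitian random matrix (a stand-in, not the model's Dirac operator): adding a mass-like shift moves the eigenvalue cloud away from the origin, which is exactly what defuses the singular drift:

```python
import numpy as np

# A mass-like shift moves the eigenvalue cloud of a generic non-Hermitian
# random matrix away from the origin. Illustrative stand-in only.
rng = np.random.default_rng(3)
n = 200
D0 = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)

for mf in [0.0, 0.5, 2.0]:
    ev = np.linalg.eigvals(D0 + mf * np.eye(n))
    print(f"m_f = {mf}:  min |eigenvalue| = {np.min(np.abs(ev)):.3f}")
```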
We find that our results within 0.4 ≤ m_f ≤ 1.0 can be nicely fitted to a quadratic behavior in m_f², which is motivated by a power series expansion of the expectation values with respect to m_f²; the values extrapolated to m_f = 0 agree well with the results (2.11) obtained by the GEM. Here we emphasize that in the GEM, the true vacuum was determined by comparing the free energy obtained for the SO(2) vacuum and the SO(3) vacuum. In contrast, the CLM enables us to determine the true vacuum directly, without having to compare the free energy for different vacua. As a further consistency check, we repeat the same analysis with a different choice of the deformation parameter, M_μ = (0, 0, m_f, 0) in (3.16) instead of (3.17). We find that the results obtained after the extrapolation m_f → 0 turn out to be consistent with the ones obtained above. See appendix B for the details.
Table 1. The fitting range used in Fig. 5 for the ε → 0 extrapolations, listed together with the extrapolated values obtained by the fits.
Summary and discussion
In this paper, we have shown that the CLM can be successfully applied to a matrix model, in which the SSB of SO(4) is expected to occur due to the phase of the complex fermion determinant. The SSB does not occur if the phase is quenched, which implies that it is extremely hard to investigate this phenomenon by reweighting-based Monte Carlo methods. In the factorization method, for instance, one introduces a constraint with some parameters and extremizes the free energy with respect to these parameters. While this has been done successfully in refs. [38,39], the comparison of the free energy for the SO(2) and SO(3) vacua turns out to be subtle, and a definite conclusion on the true vacuum was not reached. In contrast, we have shown by the CLM that the SSB from SO(4) down to SO(2) occurs as predicted by the GEM. For the success of the CLM, it was crucial to overcome the singular-drift problem associated with the appearance of nearly zero eigenvalues of the Dirac operator. The gauge cooling was used to suppress the excursions in the imaginary directions, but the singular-drift problem in the present case was too severe to be solved by the gauge cooling. This is understandable because the fermionic variables are exactly "massless" in the present case. Our strategy to overcome the singular-drift problem was to deform the Dirac operator in such a way that the singular-drift problem is avoided while maintaining the qualitative features of the vacuum as much as possible. On top of this, we have to introduce an O(ε) symmetry breaking term to probe the SSB, which should be removed after taking the large-N limit. In making the ε → 0 extrapolations, the criterion for correct convergence proposed in ref. [29] turns out to be useful, since it tells us the range of parameters for which the CLM is free from the singular-drift problem and the results are trustable. The order parameters obtained after extrapolating the deformation parameter to zero turn out to be consistent with the prediction by the GEM. We have actually tried two types of deformation to avoid the singular-drift problem and confirmed that the extrapolated results agree with each other within fitting errors. While this confirms the validity of the extrapolations to some extent, we cannot exclude the possibility that something dramatic happens when the deformation parameter approaches zero. Let us recall, however, that the singular-drift problem can occur at some point in the parameter space even if the system itself does not undergo any dramatic change.
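The two-step extrapolation just outlined (quadratic in ε on the reliable range, then quadratic in m_f²) reduces to two polynomial fits; a sketch with invented placeholder numbers:

```python
import numpy as np

# Step 1: quadratic fit of rho(eps) on the reliable range; eps -> 0 intercept.
# Step 2: quadratic fit of the intercepts in m_f^2; m_f -> 0 value.
# All numbers are invented placeholders, not data from the paper.
eps = np.array([0.2, 0.3, 0.4, 0.5])
rho = np.array([1.52, 1.48, 1.43, 1.37])          # one mu, one m_f
c2, c1, c0 = np.polyfit(eps, rho, 2)              # highest degree first
print("eps -> 0 intercept:", c0)

mf2 = np.array([0.16, 0.36, 0.64, 1.00])          # m_f^2 grid
intercepts = np.array([1.60, 1.57, 1.52, 1.45])   # eps -> 0 intercepts
b2, b1, b0 = np.polyfit(mf2, intercepts, 2)
print("m_f -> 0 extrapolated value:", b0)
```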
For instance, in QCD at finite density, the singular-drift problem is anticipated to occur at quark chemical potential μ ≳ m_π/2, where m_π is the pion mass, but the first order transition to the phase of nuclear matter occurs at μ ∼ m_N/3, where m_N is the nucleon mass. Nothing really happens in the wide parameter range 0 ≲ μ ≲ m_N/3. This example clearly shows that the singular-drift problem has more to do with the methodology than with the physics of the system to be investigated. The CLM with the proposed strategy can be directly applied to the type IIB matrix model, which is conjectured to be a nonperturbative formulation of type IIB superstring theory in ten dimensions [31]. While the SO(10) symmetry of the model is expected to be spontaneously broken down to SO(4) for consistency with our 4d space-time, the GEM predicts that it is spontaneously broken down to SO(3) rather than SO(4) [40]. It would be interesting to investigate this issue using the CLM, extending the present work. We consider that the same strategy would be useful also in applying the CLM to finite density QCD at low temperature and high density, where various exotic condensates are speculated to form [30] due to the complex fermion determinant. In this case, one can deform the Dirac operator by switching on the corresponding fermion bilinear term without disturbing the vacuum significantly. Now that we have a useful criterion [29] for justifying the CLM, we can try possible deformations and see whether any of them allows us to extrapolate the deformation parameter to zero within the region of validity.
A How to determine the region of validity
In this appendix, we explain how to determine the region of validity of the CLM. When the symmetry breaking parameter ε becomes small, the singular-drift problem occurs and the results obtained by the CLM can no longer be trusted. In order to make ε → 0 extrapolations, it is important to determine the value of ε below which the results become unreliable. Here we use the criterion based on the argument for justifying the CLM [29]. For that, we calculate the magnitude of the drift term for each configuration and obtain its probability distribution. If the tail of the distribution falls off exponentially or faster, we can trust the results obtained with those simulation parameters. We find that the finite step-size effects can modify the tail of the distribution significantly without changing the expectation values ⟨λ_μ⟩. In order to make the plots in this section, we therefore have to decrease the step-size when it turns out to be necessary. Let us define the magnitude u of the drift term as the norm of v_μ, where v_μ is the drift term defined by (3.3). Then, we define the probability distribution p(u) with the normalization ∫_0^∞ du p(u) = 1. In Fig. 7, we plot p(u) against u in log scale for various ε with m_f = 1.0 and N = 48. We find that p(u) falls off exponentially or faster for all the ε. Thus, we can trust the results obtained in this region. In Fig. 8, we show a log-log plot (Left) and a semi-log plot (Right) of the distribution p(u) for various ε with m_f = 0.8 and N = 48. Since the drift term can become fairly large for ε = 0.1, we decrease the Langevin step-size to Δt = 2.0 × 10^{−6} in order to probe the tail of the distribution correctly. We find that the distribution falls off exponentially or faster for ε ≥ 0.2, but a power-law tail develops for ε = 0.1. Therefore, we can trust the data for ε ≥ 0.2, but not the ones at ε = 0.1.
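The validity check of appendix A amounts to histogramming the drift magnitudes and inspecting the tail of p(u); a sketch, with a synthetic stand-in for the measured drift history:

```python
import numpy as np

# Histogram the drift magnitudes collected along a run and inspect the tail
# of p(u): on semi-log axes an exponential tail is a straight line, while a
# power law bends. 'drift_history' is a synthetic stand-in with an exp tail.
rng = np.random.default_rng(4)
drift_history = rng.exponential(scale=1.0, size=100_000)

counts, edges = np.histogram(drift_history, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
tail = (centers > np.quantile(drift_history, 0.9)) & (counts > 0)

slope, _ = np.polyfit(centers[tail], np.log(counts[tail]), 1)
print(f"semi-log tail slope: {slope:.2f}  (clearly negative => exponential)")
```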
In Fig. 9, we show a log-log plot of p(u) for various ε with m_f = 0.6 and N = 48. Here the drift term tends to become even larger than in the m_f = 0.8 case, and we have to investigate the tail of the distribution more carefully. We therefore present the results obtained for two Langevin step-sizes, Δt = 2.0 × 10^{−4} (Left) and 2.0 × 10^{−6} (Right). Indeed, we find that the behavior of the tail seems to change qualitatively upon decreasing the step-size. In Fig. 10, we show a semi-log plot for Δt = 2.0 × 10^{−6}, which suggests that the tail of the distribution falls off exponentially for ε ≥ 0.3, but not for ε = 0.1. The result for ε = 0.2 is marginal. We may therefore trust the results for ε ≥ 0.3. In Fig. 11, we show a log-log plot of p(u) for various ε with m_f = 0.4 and N = 48. Here we have decreased the Langevin step-size to Δt = 2.0 × 10^{−8}, but the tail of the distribution still follows a power law for all values of ε within the region. However, the comparison of the two plots in Fig. 9 suggests the possibility that the step-size Δt should be decreased further to see the behavior of the tail correctly. Thus, for the m_f = 0.4 case alone, we had to determine the lower end of the fitting range empirically from the plausibility of the fit to the quadratic behavior. Even if we omit the m_f = 0.4 point in the m_f → 0 extrapolation, the extrapolated values remain consistent within the fitting errors.
B Results for another type of the fermion bilinear term
In this appendix, we present the results obtained by choosing the deformation parameters in (3.16) as M_μ = (0, 0, m_f, 0) instead of (3.17). Taking into account the ordering (2.10), we can preserve only an SO(2) symmetry with this choice. In Fig. 12, we plot the eigenvalue distribution of the Dirac operator (3.16) for ε = 0.1 (Left) and ε = 0.5 (Right) with m_f = 0.6 and N = 32. We find that the distribution is separated in the imaginary direction. This is understandable since, at large m_f, the eigenvalues of the Dirac operator would be distributed around ±i m_f. As a result, the singularity at the origin can be avoided for even smaller ε than in the case of (3.17). This enables us to extrapolate ε to zero using the data obtained in the large-N limit at finite m_f. In Fig. 13, we plot the ratios (4.1) obtained after taking the large-N limit against ε for m_f = 0.6 (Top-Left), 0.5 (Top-Right), 0.4 (Middle-Left), 0.3 (Middle-Right) and 0.2 (Bottom). The data obtained for small ε cannot be trusted because of the singular-drift problem. We fit the data in Fig. 13 to a quadratic form in ε using the fitting range given in Table 2, where we also present the extrapolated values. We find for each value of m_f that ρ_1(ε, m_f) and ρ_2(ε, m_f) approach the same value in the ε → 0 limit, while the others approach smaller values.
7,509.6
2016-09-15T00:00:00.000
[ "Physics" ]
Ageing in India: Some Social Challenges to Elderly Care
Introduction
Ageing in India is increasing rapidly due to the impressive gains that society has made in terms of increased life expectancy. With the rise in the elderly population, the demand for holistic care tends to grow. By 2025, the geriatric population is expected to be 840 million in the developing countries [1]. It is projected that the proportion of Indians aged 60 and older will rise from 7.5% in 2010 to 11.1% in 2025 [2]. In 2010, India had more than 91.6 million elderly, and the number of elderly in India is projected to reach 158.7 million in 2025 [2]. An aging population puts an increased burden on the resources of a country and has raised concerns at many levels for the government in India. The aging population is both a medical and a sociological problem. The elderly population suffers high rates of morbidity and mortality due to infectious diseases. The demographic transition in India shows unevenness and complexities across different states, which has been attributed to different levels of socio-economic development, cultural norms, and political contexts. Hence it will be a herculean task for policy makers to design geriatric care that takes all these determinants into account. Care for the elderly is fast emerging as a critical element of both public and private concern. The apparent success of medical science is invariably accompanied by several social, economic and psychological problems in older persons, in addition to the medical problems. It needs to be understood that many of these problems require lifelong drug therapy, physical therapy and long-term rehabilitation [3]. The elderly tend to be cared for in a variety of settings: home, nursing home, day-care centre, geriatric out-patient department, medical units or intensive care unit, depending on the nature of the clinical problem. Care of the elderly necessitates addressing several social issues. The needs and problems of the elderly vary significantly according to their age, socioeconomic status, health, living status and other such background characteristics. Their social rights are neglected, and they suffer abuse that largely goes unreported.
Lack of Infrastructure
With increasing longevity and debilitating chronic diseases, many elderly citizens will need better access to physical infrastructure in the coming years, both in their own homes and in public spaces. The lack of such infrastructure is a major deterrent to providing comfort to the aged. Unattended chronic disease, unaffordable medicines and treatment, and malnutrition are part of old-age life in India, as there is no system of affordable health care. Emphasis on geriatrics in the public health system is limited, with few dedicated geriatric services. Other issues of the public health system are lack of infrastructure, limited manpower, poor quality of care and overcrowding of facilities due to insufficient focus on elderly care [4].
Changing Family Structure
The traditional Indian society, with its age-old joint family system, has been instrumental in safeguarding the social and economic security of elderly people. The traditional norms and values of Indian society also laid stress on showing respect and providing care for the elderly. However, with the growing prevalence of nuclear family set-ups in recent years, the elderly are likely to be exposed to emotional, physical and financial insecurity in the years to come. The share of elderly staying alone or with a spouse only has risen from 9.0% in 1992 to 18.7% in 2006 [5]. Family care of the elderly seems likely to decrease further with the economic development and modernization of the nation.
Lack of Social Support
The elderly in India are much more vulnerable because of low government spending on the social security system. The elderly in urban areas rely primarily on hired domestic help to meet their basic needs in increasingly chaotic and crowded cities. Social isolation and loneliness have increased [6]. Insurance cover that is sensitive to the elderly is virtually non-existent in India; in addition, pre-existing illnesses are usually not covered, making insurance policies unviable for elders. Pension and social security are restricted to those who have worked in the public sector or the organized sector of industry. In a study by Lena et al. [7], almost half of the respondents felt neglected and sad and felt that people had an indifferent attitude towards the elderly. It was also found that 47% felt unhappy in life and 36.2% felt they were a burden to the family.
Social Inequality
The elderly are a heterogeneous section with an urban-rural divide. They are less vulnerable in rural areas than their urban counterparts, owing to the still-prevailing values of the joint family system. The elderly are not all seen in the same light: their needs and problems are to a large extent overlooked, as the government classifies these people based on caste and other socio-cultural dimensions. In a case study, it was found that a major proportion of elderly women were poorer; received the lowest income per person; had the greatest percentage of primary-level education; recorded the highest negative affective psychological conditions; were the least likely to have health insurance coverage; and recorded the lowest consumption expenditure [8].
Availability, Accessibility and Affordability of Health Care
An increasing share of the burden of care falls on children who find themselves responsible for their parents' well-being.
Managing home care for the elderly is a massive challenge, as the multiple service providers - nursing agencies, physiotherapists and medical suppliers - are small, unorganized players who extend sub-optimal care. In India, health insurance coverage is essentially limited to hospitalization. The concept of geriatric care has remained a neglected area of medicine in the country. Despite an aging population, geriatric care is relatively new in many developing countries like India, with many practicing physicians having little knowledge of the clinical and functional implications of aging [9][10][11]. Not many institutes offer courses in geriatrics, and even these have few takers. Most government facilities such as day-care centres, old-age residential homes, counselling and recreational facilities are urban based. Geriatric outpatient department services are mostly available at tertiary care hospitals [12]. Reaching the 75% of the elderly who reside in rural areas with geriatric care will be challenging. Dhar [13] has pointed out the relative neglect in the provision of facilities for patient care as well as training and development in geriatrics in the Indian context. As pointed out by Dey et al. [14], the key challenges to access and affordability for the elderly population include reduced mobility, social and structural barriers, wage loss, familial dependencies, and declining social engagement. The stigma of aging is another social barrier to access to health care, in addition to the health and social conditions the elderly commonly face, such as dementia, depression, incontinence and widowhood [15].
Economic Dependency
As per the 52nd round of the National Sample Survey Organisation, nearly half of the elderly are fully dependent on others, while another 20 percent are partially dependent for their economic needs [16]. About 85% of the aged had to depend on others for their day-to-day maintenance; the situation was even worse for elderly females [17]. Elders living with their families are largely contingent on the economic capacity of the family unit for their economic security and well-being. The elderly often do not have financial protection such as a sufficient pension or other forms of social security in India. The single most pressing challenge to the welfare of older persons is poverty, which is a multiplier of risk for abuse [18]. Moreover, owing to their financial dependence, elderly persons give low priority to their own health, even though they are the most vulnerable to infections. Migration of the younger generation, lack of proper care in the family, insufficient housing, economic hardship and the break-up of the joint family have made old-age homes seem more relevant even in the Indian context [19]. It is important to understand the social aspects concerning the aged in the country as they go through the process of ageing. Increased life expectancy, rapid urbanization and lifestyle changes have led to the emergence of varied problems for the elderly in India. It must be remembered that comprehensive care for the elderly is possible only with the involvement and collaboration of family, community and the Government. India should prepare to meet the growing challenge of caring for its elderly population. All social service institutions in the country need to address the social challenges to elderly care in order to improve quality of life. There is a need to initiate requisite and more appropriate social welfare programmes to ensure a life with dignity for the elderly.
In addition, there is a need to develop an integrated and responsive system to meet the care needs and challenges of the elderly in India.
2,347.8
2016-02-02T00:00:00.000
[ "Economics" ]
Effect of Applying Techniques and Polymer Content on Strength and Drying Shrinkage of Glass Fiber Reinforced Concrete
The purposes of this study were to evaluate the compressive strength, flexural strength, and drying shrinkage of Glass Fiber Reinforced Concrete (GFRC) produced with different application techniques and varying polymer content. Two groups of specimens were classified by application technique: the sprayed and premixed methods. AR-glass was used with a fiber content of 3 to 4% by volume. GFRC was mixed and applied by the two techniques with Styrene Butadiene Rubber (SBR) contents of 0%, 3%, 6%, and 9% by weight of cement. Compressive and flexural strength tests were performed at 1 and 28 days. Drying shrinkage was measured up to 98 days. The results showed that increasing the SBR content lowered the compressive strength of GFRC for both the sprayed and premixed techniques. On the other hand, the 28-day flexural strength of GFRC for both techniques was found to increase with increasing SBR content. The GFRC mixes using the sprayed technique exhibited higher flexural strength than the corresponding premixed mixes, because of the two-dimensional layered fiber alignment obtained by spraying. Increasing the SBR content also lowered the drying shrinkage of GFRC: at the age of 98 days, the drying shrinkage of GFRC with 9% SBR content was about 40% lower than that of GFRC with 0% SBR content.
Introduction
Glass Fiber-Reinforced Concrete (GFRC) is a cementitious composite material made up of a mortar matrix and chopped glass fibers. GFRC is mainly used in exterior building facade panels and architectural precast concrete. The matrix can be made from cement, sand, and additives. The benefits of using GFRC are toughness, ductility, and the reduction of large cracks, when compared to normal reinforced concrete. Generally, alkali-resistant glass (AR-glass) is selected for making GFRC to reduce the degradation of the glass fiber due to the alkaline content of cement [1][2][3][4][5]. A polymer latex admixture is commonly used to improve the flexural strength, workability, and toughness of GFRC [6]. Two GFRC fabrication methods were reviewed, namely the premixed and sprayed techniques. The sprayed technique, using up to 5% fiber content, is commonly used to make GFRC; increasing the fiber content can improve the flexural strength of GFRC [7]. The production cost of GFRC using the sprayed technique is higher than that of GFRC using the premixed technique. Nonetheless, the premixed technique was found to require a higher water demand to obtain the specified workability, when compared to the sprayed technique [8]. However, limited research was found that compared the engineering performance of GFRC produced by the sprayed and premixed techniques. Therefore, the main aims of the study were to evaluate the effects of the application technique and polymer content on the compressive strength, flexural strength, and drying shrinkage of GFRC.
Materials
White Portland cement conforming to ASTM C150 was used in this research. The specific gravity of the cement was 3.15. Silica sand was selected with a maximum nominal size of 0.84 mm (No. 20). Glass fiber, conforming to ASTM D578, was of the AR-glass type obtained from Nippon Electric Glass. The composition of the glass fiber was 61% silicon oxides, 15% sodium oxides, and 20.8% zirconia oxides. The density of the AR-glass was 2.74 g/cm³.
The polymer admixture was a Styrene Butadiene Rubber (SBR). The infrared transmittance of the SBR obtained by Fourier Transform Infrared Spectroscopy (FTIR) is shown in Fig 1. The density of the SBR was 1.02 kg/l, and its water content was 54.5%.
Mixing proportions and specimen preparation
The summary of mix proportions for GFRC using the sprayed and premixed techniques is given in Table 1. For each mix, six cylindrical specimens of ø100×200 mm, six prismatic specimens of 350×150×25 mm, and three prismatic specimens of 300×75×75 mm were prepared. The compositions of GFRC using the premixed technique consisted of cement, sand, water, chopped glass fiber, and SBR. The GFRC samples were prepared by combining silica sand and white Portland cement in a mixer for 1 minute. Water and SBR were then added and mixed for a further 1 minute. Thereafter, the properly blended silica sand and white Portland cement were added into the mixer and mixed for a further 5 minutes. Glass fiber at 3.5% by volume was then added and mixed for 2 minutes. The mixture was cast into the specimen moulds mentioned previously and vibrated using a vibrating table. It should be noted that the premixed technique could not produce a GFRC mixture without SBR, due to the high water demand of the glass fiber filaments leading to very low workability of the GFRC mixes. The GFRC specimens using the sprayed technique were prepared by mixing silica sand and white Portland cement for 1 minute. Water and SBR were then added and mixed for a further 1 minute. Thereafter, the blended silica sand and white Portland cement were added into the mixer and mixed for a further 5 minutes. The mixture was transferred to the container of the spraying machine and then continuously pumped through the spray nozzle. The fiber roving was cut to nominal lengths of between 38 and 50 mm and continuously fed into the mixture at the spray nozzle. The air pressure valve and pumping speed of the machine were adjusted to control the fiber content of the GFRC mixture within the range of 3-4%. The GFRC mixture was then sprayed into the specimen moulds mentioned previously and consolidated using a tamping rod and vibrating table. All of the prepared specimens were covered with plastic sheets. After 24 hours, the specimens were demolded and kept in a moist-curing room controlled at a temperature of 23°C and a relative humidity of 95% until the age of 28 days.
Procedure and testing
The flexural strength test, modified from ASTM C1018 [9], was performed at the testing ages of 1 and 28 days on the prismatic specimens with dimensions of 350×150×25 mm; the specimen thickness was adjusted from the ASTM standard to 25 mm. The specimens were placed on the supports of the universal testing machine and loaded at their mid-point span. The displacement rate of the test was controlled at 1 mm/min using the "Merlin" software of the universal testing machine, and the loads and displacements were monitored. The flexural strength was calculated as the average of three specimens for each testing age. The compressive strength test conforming to ASTM C39 [10] was performed at the testing ages of 1 and 28 days on the cylindrical specimens with dimensions of ø100×200 mm. The specimens were loaded at a constant rate of 5.3 kilonewtons per second. The compressive strength was calculated as the average of three specimens for each testing age.
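For a prism loaded at mid-span, the peak flexural stress (modulus of rupture) follows from elementary beam theory, σ = 3PL/(2bd²); a small helper, where the 300 mm support span is an assumption since the text specifies only the 350 mm specimen length:

```python
# Flexural strength (modulus of rupture) of a mid-point-loaded prism:
# sigma = 3*P*L / (2*b*d^2). The 300 mm support span is an assumption;
# the text specifies only the 350x150x25 mm specimen size, not the span.
def flexural_strength_mpa(peak_load_n: float,
                          span_mm: float = 300.0,
                          width_mm: float = 150.0,
                          depth_mm: float = 25.0) -> float:
    return 3.0 * peak_load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# A hypothetical 2.5 kN peak load would correspond to 12 MPa:
print(flexural_strength_mpa(2500.0))
```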
All of the specimens with dimensions of 300×75×75 mm for the drying shrinkage test were kept in a temperature- and humidity-controlled room at 23°C and 50% relative humidity. The length changes of three prismatic specimens for each GFRC mix were measured from 1 day to 98 days, and the shrinkage strains of the specimens were calculated.
Compressive strength of GFRC using sprayed and premixed techniques
Fig 2 shows the compressive strength of GFRC for varying SBR content using the sprayed and premixed techniques. The compressive strength of GFRC was found to decrease with increasing SBR content for both techniques. The premixed technique exhibited slightly higher compressive strength than the sprayed technique, because entrapped air voids formed in the cylindrical specimens more readily with the sprayed technique than with the premixed technique. The results also showed that GFRC specimens containing higher SBR content exhibited lower 1-day flexural strength than those containing lower SBR content. However, the 28-day flexural strength of GFRC was found to increase with increasing SBR content, because the polymerization of the SBR developed at the later ages. In addition, GFRC using the sprayed technique had slightly higher 28-day flexural strength than the corresponding premixed GFRC, because the fiber alignment obtained by spraying was mostly a two-dimensional layer, in contrast to the premixed technique, as shown in Fig 4. The toughness value of GFRC using the sprayed technique with 9% SBR content was lower than that of GFRC with 6% SBR content, although the flexural strength of GFRC with 9% SBR content was higher than that of GFRC with 6% SBR content, as shown in Fig 6. The toughness values of GFRC varying with SBR content using the premixed technique are shown in Fig 7; they were found to increase with increasing SBR content. In addition, GFRC with 9% SBR content using the premixed technique was found to exhibit the highest toughness values on the I20 toughness index. The shrinkage strains of GFRC were found to decrease with increases in SBR content. At the testing age of 98 days, the shrinkage strain of GFRC containing 9% SBR content was about 40% lower than that of GFRC with 0% SBR content. No significant difference in drying shrinkage between GFRC using the premixed and sprayed techniques could be observed.
- The 28-day flexural strength of GFRC was found to increase with increasing SBR content for both the premixed and sprayed techniques.
- The toughness values of GFRC using the sprayed technique were higher than those of the corresponding GFRC using the premixed technique.
- The shrinkage strains of GFRC were found to decrease with increasing SBR content.
- The optimum content of SBR latex was in the range of 6% to 9% for GFRC using the sprayed technique.
Fig 2. Compressive strength of GFRC using the sprayed and premixed techniques.
Fig 8. Drying shrinkage of GFRC with and without SBR for the two application techniques.
Table 1. Mix proportions of GFRC used in this research.
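The shrinkage strain itself is simply the measured length change divided by the gauge length; a one-line helper, where the 300 mm gauge length is an assumption based on the specimen size:

```python
# Drying shrinkage strain from a comparator reading: the length change
# divided by the gauge length. The 300 mm gauge length is an assumption
# based on the 300x75x75 mm specimen size.
def shrinkage_microstrain(delta_length_mm: float,
                          gauge_length_mm: float = 300.0) -> float:
    return delta_length_mm / gauge_length_mm * 1e6

# A hypothetical shortening of 0.150 mm at 98 days gives -500 microstrain:
print(shrinkage_microstrain(-0.150))
```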
2,233
2017-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Searches for Dark Matter with Superheated Liquid Techniques
Introduction
One of the most celebrated detectors operating at accelerators is the bubble chamber [1]; very important discoveries were made employing this technology during the sixties and seventies. Bubble chambers were divided into two categories, hydrogen and heavy-liquid bubble chambers. The former (like the "80 cm," BEBC, the 15-foot Fermilab chamber, the Argonne 30-inch, etc.) had the advantage that the target was well defined and static; the latter (Gargamelle, BP3, the 15-foot Bubble Chamber, SKAT, etc.) had a bigger stopping power and were particularly suited to identifying the nature of the secondary produced particles, such as electrons, gamma rays, and pion and kaon decays. Many discoveries were made with bubble chambers: several resonances, the neutral currents (leptonic and semileptonic), the Ω⁻, and so forth. Their use decreased with the birth of the "electronic detectors," capable of performing automatic event selection and scanning and of collecting and analysing many more events. However, the expansion of the bubble chamber was linked to the beam passage: the switching off of the acceleration of the primary particles was used to command it.
So the bubble chambers were commanded by the beam passage, and the liquid used reached a metastable equilibrium state, which occurs when the pressure of the liquid is lowered adiabatically: the substance remains in the liquid state despite the vapor pressure or the boiling-point temperature. The metastability of the liquid makes it possible to detect charged particles. When the liquid is brought to a temperature and pressure where, according to its phase diagram, it should be gaseous but maintains the liquid phase, it is said to be "superheated." The difference between the vapor pressure and the operating pressure of a bubble chamber is known as the "degree of superheat." The higher this degree is, the less stable the liquid is; at a high degree of superheat the bubble chamber becomes more sensitive to lower-energy particles that can interact with the nuclei giving lower-energy recoils, and it becomes sensitive to electrons, γ rays, high-energy muons, and so forth. These particles are an important background for the search for dark matter, so, in order to exploit superheated detectors for the direct detection of dark matter, the operation technique had to be changed [2][3][4].
Bubble Nucleation in Superheated Liquids
The phenomena describing the formation of a bubble in a superheated liquid are the nucleation and the growth of the bubble. Both are described by the theory of Seitz [5][6][7], which is briefly summarized in the following. A charged particle loses energy along its trajectory through a superheated liquid via ionization, collision, and radiation. Thus the primary particle leads to a temporary thermal excitation along its track; the temperature of the gas created is hotter than that of the surrounding liquid. The Seitz model is accordingly named the "hot spike" model of bubble nucleation. If the pressure of the hot gas is sufficient, a protobubble will overcome the surface tension and the bubble grows. Its growth is driven by its internal pressure, which is the vapor pressure p_V of the liquid at the current temperature (this pressure is greater than the pressure p outside the bubble, by definition of a superheated liquid); in this case the bubble becomes visible. To reach this condition the radius of the protobubble must satisfy r > r_c = 2σ/Δp, where Δp = p_V − p and σ is the surface tension of the liquid. Furthermore the stopping power must satisfy dE/dx > E_c/(2 r_c), where the critical energy E_c accounts for the heat of vaporization of the critical bubble and involves ρ_V, the saturated vapor density, h, the latent heat of vaporization per mole, and M, the molecular mass [8]. If r < r_c and the stopping-power condition is not fulfilled, the protobubble created is smaller than the critical radius; it will collapse and disappear.
Application of Superheated Devices to WIMP Searches
To be useful as dark matter detectors, bubble-formation devices needed several changes to fulfil three important constraints: (i) to be more stable than traditional high-energy-physics bubble chambers (reaching a quasi-continuously sensitive operation); (ii) to be triggered when a dark matter particle crosses the detector and interacts with it; (iii) to have a strong rejection of the principal backgrounds that can simulate a dark matter interaction with ordinary matter. The rarity of the interactions also changes the nature of the bubble devices from that of a tracking device (full of multiple tracks of small bubbles from different particles crossing the detector) to that of a counting device.
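Plugging representative numbers into the Seitz criterion gives a feel for the scales involved; the property values below are illustrative placeholders, not tabulated constants of any specific target liquid:

```python
import numpy as np

# Seitz critical radius r_c = 2*sigma/delta_p and the surface-energy scale
# of the critical protobubble. All property values are placeholders.
sigma = 5e-3          # surface tension [N/m], assumed
p_vapor = 7e5         # saturated vapour pressure at operating T [Pa], assumed
p_operating = 1e5     # operating pressure [Pa], assumed

delta_p = p_vapor - p_operating            # degree of superheat [Pa]
r_c = 2.0 * sigma / delta_p                # critical radius [m]
surface_energy_eV = 4.0 * np.pi * r_c**2 * sigma / 1.602e-19

print(f"r_c = {r_c * 1e9:.1f} nm")
print(f"surface energy of the critical bubble ~ {surface_energy_eV:.0f} eV")
```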
This different way of using a bubble detector, as a counting detector, led the interested physicists in three directions: (1) new types of bubble chambers; (2) SDD, i.e. superheated droplet detectors; (3) the Geyser detectors. In this paper I concentrate on these three directions and summarize the most relevant results and the proposals for the future. In the following, I will focus on the weakly interacting massive particle (WIMP) as the most plausible candidate for dark matter. WIMPs interact not only with gravitational fields but also weakly; in this case the search for their direct interaction is not without hope. Many experimental methods have been studied and realized to detect WIMPs directly. They include the use of NaI scintillators [9], liquid argon [10], xenon chambers [11], cryogenic semiconductors [12], and detectors based on the nucleation of bubbles [2][3][4]. The results obtained with these detectors are in some cases in contradiction and need further work to clarify the situation; the development of alternative and complementary techniques is thus particularly motivated.
Bubble Chamber
The experiments with bubble chambers are concentrated on the work of the COUPP Collaboration (Chicagoland Observatory for Underground Particle Physics).
2 kg Chamber (1 L) Filled with CF3I (Experiment T945). The first spin-dependent (SD) dark matter limits [13,14] produced by COUPP were achieved with a 2 kg (1 L) prototype, which produced the best SD proton limits at the time over a significant mass range. This chamber was built at the University of Chicago and tested at the Laboratory for Astrophysics and Space Research (LASR) at a depth of six m.w.e.; the results are reported in Figure 1.
Modified 2 kg Chamber (1 L). Due to the very high background from radon penetrating through an O-ring, the first version of the chamber was modified: substitution of the O-ring and replacement of the quartz jar with a new, acid-etched, precision-cleaned jar; data were taken in NUMI (NeUtrinos at the Main Injector) at Fermilab (Figure 2 and refs. [13,14]). This chamber worked in two phases, the first (a) filled with CF3I: excellent sensitivity was obtained for low-energy recoils (3 keV) at SNOLAB [15,16], but this phase is in progress and no definitive results have been reported up to now [17]. The COUPP-60 chamber is working in SNOLAB, filled at the moment with 37 kg of CF3I; the installation was completed in June 2013; a run collecting 50000 kg-days of data is foreseen in the future, with a possible increase of the detector's mass.
The Big Bubble Chamber (30 L = 60 kg)
No results have yet been reported; the sensitivity of this chamber is shown in Figure 4. The merged collaboration of COUPP and PICASSO plans to build a new bubble chamber on the scale of tons [18]. The conceptual design is well developed. If the results from COUPP-4 and COUPP-60 are scaled up, the expected sensitivities are as reported in Figures 5 and 6 for a filling of C3F8.
Superheated Droplet Detectors
Superheated droplet detectors (SDD) are also based on the technique of superheated bubble formation. In contrast to the bubble chambers used in high energy physics, which are based on the same principle, SDD are basically continuously sensitive, since one droplet at a time undergoes the phase transition. Only occasionally, for instance every few days, is the detector medium set under pressure in order to transform the gas bubbles back into liquid droplets.
The rupture of metastability by radiation has long been used as a method of particle detection; the most important application was the bubble chamber. Apfel [19] extended this concept in the form of the SDD, in which small drops of the superheated liquid are uniformly dispersed in a gel or viscoelastic medium: this isolates the fragile metastable system from the vibrations and convection currents that occur in bubble chambers; in Figure 7 a sketch of a detector exposed to a neutron flux is shown. The lifetime of the superheated state becomes very long, allowing applications of the SDD as neutron dosimeters and as detectors for dark matter. Two experiments have used SDD to search for the direct interaction of WIMPs with ordinary matter: SIMPLE (Superheated Instrument for Massive ParticLe Experiments) and PICASSO (Project In CAnada to Search for Supersymmetric Objects). SIMPLE obtained the first important results (see Figure 8); the limit curve as a function of the WIMP mass is shown in [8,20]. For PICASSO, see Figure 9 for the technical procedures and [21] for the results. In comparison with the bubble chamber, the SDD technique has at least the following advantages: (1) stability for much longer times; (2) lower cost (0.19 k$/kg); (3) much less impurities (α, β, and γ emitters, due to the avoided contact with the wall of the vessel and with the buffer liquid); and the following disadvantage: (1) a very small quantity of sensitive matter (at most 3% of the gel), which makes a competition with the ton-scale proposals for different detectors impossible (see Table 1). Trigger of SDD and Bubble Chambers R&D on SDD detectors brought an interesting feature to light: the sound emitted at bubble formation [22] is different if the bubble is due to a recoiling nucleus (as happens for an interaction of a neutron or a WIMP) or if the bubble is induced by an α decay [23]. Energetic charged particles traversing liquids or solids produce acoustic waves during their passage (see the ANTARES and ICECUBE [24,25] experiments, in the PeV range of energy). However, for the processes relevant to dark matter (in the range 10-100 keV), the sound predicted by the thermoacoustic effect is not detectable. Nevertheless, particle interactions in stressed or superheated liquids produce a detectable acoustic signal that is characteristic of the nature or the extension of the primary event. This suggests that superheated liquids provide an intrinsic amplification mechanism with a gain of 10^5. In Figure 10 [23] and Figure 11 typical spectra are reported for recoils induced by neutrons from an Am-Be source. Ions from nuclear recoils indeed have ranges of sub-μm length; on the contrary an α emitter (inside the superheated liquid) can provide two sources of ionization (the α itself, with a track length of about 40 μm for an energy of 5 MeV, and the daughter nucleus). In Figure 11 [23] such an effect is shown. The sound signal must be transformed into electronic signals by transducers accompanying the detector, studied with a Fourier analysis, and described by an acoustic energy parameter. The success of this possible separation of the background quickly stimulated COUPP, and this technique was applied to the bubble chambers; the level of rejection of this background is now <10^-3, while keeping an acceptance for recoiling nuclei of 98% [21].
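As an illustration of how an acoustic energy parameter of this kind can be constructed, the sketch below integrates the transducer power spectrum over a frequency band and takes its logarithm. The band edges, window, and toy waveform are assumptions for illustration only; the actual PICASSO and COUPP definitions are more elaborate.

```python
# Generic sketch of an "acoustic energy parameter": the logarithm of the
# transducer signal power integrated over a band around the bubble-formation
# transient. Band edges and window are assumptions, not the values used by
# PICASSO or COUPP.

import numpy as np

def acoustic_parameter(waveform, fs, f_lo=20e3, f_hi=200e3):
    """waveform: transducer samples; fs: sampling rate [Hz]."""
    spectrum = np.fft.rfft(waveform * np.hanning(len(waveform)))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    power = np.sum(np.abs(spectrum[band]) ** 2)
    return np.log10(power)

# Toy usage: a damped-oscillation "bubble" pulse plus noise.
fs = 2e6
t = np.arange(4096) / fs
pulse = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 80e3 * t)
noise = 0.05 * np.random.randn(t.size)
print("acoustic parameter:", acoustic_parameter(pulse + noise, fs))
```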
The Geyser or Condensation Chamber Two groups are interested in a new technique called the "Geyser": (1) the Milano-Bicocca group MOSCAB [26,27]; (2) the PICO group [28]. This technique is a variant of the superheated liquid technique of extreme simplicity. The main volume of the target liquid (C3F8, e.g.) is kept in a thermal bath at a constant temperature T. The vapour above the liquid is kept at a temperature T0 < T by cooling the top of the vessel with a circulating liquid (water). The equilibrium vapour pressure above the liquid is therefore p0, so the liquid is in a state of underpressure, with a superheat of Δp = p − p0, where p = pSat(T) and p0 = pSat(T0). A local release of energy due to, for instance, a recoiling ion induced by a WIMP interaction can produce a vapour bubble, which can grow (if over a threshold in energy) to visible size. This vapour bubble rises in the liquid and pushes up part of the liquid in the neck (this is the reason for the name Geyser). When the equilibrium pressure is reached, the hot vapour in the top of the vessel recondenses and the liquid is recovered into the main volume. The original metastable state is recovered in a few seconds and the system is ready to record a new event. The system does not require external intervention or recompression. In Figure 12 a drawing of the principal parts of the MOSCAB Geyser is shown. The figure represents a vertical section of a cylinder, so the coils used as sources of heat are represented by small circles. In the top part of the same figure the pressure equalizers are shown; they are constituted by two elastic membranes that push the external water when the pressure of the freon gas increases, and act also in the reverse sense. In Figure 13 there is a picture of the apparatus built in Milano. In the bottom the liquid freon is shown; the buffer liquid (glycol) that separates the liquid freon from the vapour is also shown. The degree of superheat applied must exclude the detection of minimum ionizing particles (electrons and γ rays) and, on the contrary, must allow the detection with high efficiency of the recoiling ions. The principal advantages of the Geyser (and of the bubble chamber) are the following: (1) the strong rejection of particles at minimum ionization (electrons and γ); (2) the simplicity of the mechanical construction, also for large-size detectors, and therefore the low cost; (3) the very interesting possibility of counting multiple neutron interactions and hence subtracting the neutron background (the interaction length of a neutron is of the order of 6-20 cm in liquid freon): double or triple interactions in the same frame can be used statistically to evaluate the number of events with a single interaction due to neutrons; (4) the possibility of distinguishing the spin-dependent interaction of WIMPs from the spin-independent one by changing the liquid used; (5) only for the Geyser, the automatic reset of the detector, with a very short dead time (a few seconds). A prototype of the Geyser with a mass of 0.5 kg has been constructed at Milano-Bicocca University and INFN [26,27]. With reference to Figure 12, the quartz vessel of 0.33 liters is immersed in a water bath and is surrounded by Cu coils with internal circulating water at the two fixed temperatures. It contains freon C3F8 at around 25 °C and a pressure of about 6 bar.
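For orientation, the degree of superheat implied by the two bath temperatures can be estimated from the saturation curve. The sketch below uses a Clausius–Clapeyron approximation with placeholder constants (a generic reference point and latent heat, not tabulated C3F8 data):

```python
# Sketch: degree of superheat Delta_p = p_sat(T) - p_sat(T0) from a
# Clausius-Clapeyron approximation. The reference point and latent heat
# below are rough placeholders, not tabulated C3F8 values.

import math

R     = 8.314      # gas constant [J/(mol K)]
L_mol = 15.0e3     # molar latent heat of vaporization [J/mol] (assumed)
T_ref = 298.15     # reference temperature [K] (assumed)
p_ref = 8.0e5      # saturation pressure at T_ref [Pa] (assumed)

def p_sat(T_kelvin):
    """Clausius-Clapeyron estimate of the saturation pressure."""
    return p_ref * math.exp(-(L_mol / R) * (1.0 / T_kelvin - 1.0 / T_ref))

T_liquid = 25.0 + 273.15   # bath temperature T of the liquid
T_vapour = 18.0 + 273.15   # temperature T0 of the cold vapour region

delta_p = p_sat(T_liquid) - p_sat(T_vapour)   # degree of superheat [Pa]
print(f"degree of superheat: {delta_p / 1e5:.2f} bar")
```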
The hot freon is separated from the cold freon vapour by the neck of the vessel, filled with a buffer liquid (glycol) with a thermal capacity greater than that of the water. In fact, in the original Geyser of Hahn and Reist [29] no buffer liquid was used, but we found that it greatly improves the stability of the device. The temperature of the two regions of water is kept fixed by two thermostats with a precision of 0.1 degrees, and the two regions are separated by a loosely fitting rubber washer. The temperature of the cold vapour was varied between 15 °C and 21 °C. Everything is surrounded by a cylindrical plexiglass vessel, 1.5 cm thick, filled with a water/glycol mixture. So that the flask undergoes only a small overpressure with respect to the water, an automatic pressure equalizer using rubber membranes is employed. The freon is illuminated by diffuse light coming from LEDs. To summarize, the Geyser is essentially a vessel constituted by a "flask" containing the overheated liquid (e.g., some kind of freon) and a "neck" (containing partly a separation liquid and partly the freon vapour). The scattered ions, after an interaction with a neutral particle like a neutron or a WIMP, deposit their energy in very small regions (of size of the order of 0.05-0.1 μm). In these conditions a bubble can grow and reach a few mm of radius (well visible). Figure 11: Distribution of the acoustic energy parameter obtained with α decays as a function of temperature; the dotted red histograms indicate the location of recoil events obtained with a neutron source; at low temperature the two peaks coincide, while at higher temperature a second peak appears on the high side for the α. Two professional digital cameras continuously monitor the volume in the freon vessel at 50 frames per second (fps). Some pixels undergo a change of luminosity when a bubble is generated. At this point a trigger is launched and a stream of pictures is registered (between −50 and +50 frames from the trigger); in Figure 14 the evolution of a typical bubble observed in our detector is shown. The time sequence (period = 20 ms) starts at the bottom of this figure (right hand side), where it is possible to see a small bubble; the sequence continues toward the left and passes to the third line (right); the bubble increases its volume and reaches the surface of the liquid freon (second line); here it produces a small Geyser (left side of the second line); in the first line the passage of the bubble in the lower layers of glycol is shown. A small prototype (0.3 L) of this kind of detector has been realized at Milano-Bicocca University and INFN [26,27].
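The luminosity-change trigger described above can be sketched in a few lines. Thresholds, buffer sizes, and the frame interface are illustrative assumptions, not the MOSCAB implementation:

```python
# Sketch of a luminosity-change trigger with a frame ring buffer, in the
# spirit of the optical readout described above. Thresholds, buffer sizes,
# and the frame interface are illustrative assumptions, not the MOSCAB code.

import itertools
from collections import deque
import numpy as np

PRE, POST = 50, 50       # frames kept before/after the trigger (as in the text)
DIFF_THRESHOLD = 30      # per-pixel luminosity change, 8-bit scale (assumed)
MIN_PIXELS = 20          # changed pixels needed to fire the trigger (assumed)

def capture_event(frames):
    """frames: iterable of 2-D uint8 camera frames.
    Returns the PRE buffered frames, the triggering frame, and POST later ones."""
    frames = iter(frames)
    buffer = deque(maxlen=PRE)   # ring buffer of the most recent frames
    previous = None
    for frame in frames:
        if previous is not None:
            diff = np.abs(frame.astype(int) - previous.astype(int))
            if np.count_nonzero(diff > DIFF_THRESHOLD) >= MIN_PIXELS:
                event = list(buffer) + [frame]
                event += list(itertools.islice(frames, POST))  # post-trigger frames
                return event
        buffer.append(frame)
        previous = frame
    return None
```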
For the future, the MOSCAB group wants to construct a bigger detector (27 L), which could be competitive with other experiments and other techniques, and operate it in the Gran Sasso Laboratory (INFN). The transition from a small detector to a very big one requires a lot of new technology: the mechanical support, the thermostats, the quartz vessel, the trigger, the dramatic reduction of the impurities contained in the materials surrounding the detector, and so forth. Information on the spin-dependent interaction with protons is, in particular, still poor; fluorine-based detectors, on the contrary, offer excellent opportunities in this field. The PICASSO and now PICO groups have produced a detailed proposal for the construction of a big detector (0.5 tons) [28]. The expected sensitivity for a one-year run (0.5 ton × year) with an assumed zero background is shown in Figures 5 and 6. The future of this kind of detector depends on the possibility of extrapolating the data from small prototypes to a very big detector and on the residual background existing in the LNGS and SNOLAB underground laboratories. Conclusion In this short review I have shown that detectors based on the superheated liquid technique have played a relevant role in the search for dark matter. The new generation of ton-scale detectors is complementary to and competitive with projects employing other techniques. Hopefully, in a few years the mystery of dark matter will be revealed. Figure 8: SIMPLE 2000: 95% C.L. limits from only 0.19 kg-days of SDD exposure compared with other experiments; the red lines indicate the expected sensitivity after an exposure of 25 kg-days. Figure 9: The PICASSO technique: superheated droplets of C4F10 dispersed in polymerised gel; droplets superheated at ambient T and P (T_b = −1.7 °C); bubble explosions recorded by piezoelectric transducers; repressurization (6 bar) returns bubbles into droplets; the operating temperature determines the energy threshold. Figure 10: Distribution of the acoustic energy parameter recorded in calibrations with neutrons from an Am-Be source; neutron recoils show up as an ion peak; this peak is well separated from acoustic and electronic noise (a) and shifts with increasing temperature to larger signal intensities (b). Figure 12: Sketch of a vertical section of the Geyser. Figure 14: Evolution of a bubble. Table 1: Comparison between different techniques.
5,334
2014-06-12T00:00:00.000
[ "Physics" ]
A polarity dependent fluorescence "switch" in live cells The spectroscopic properties, ultrafast kinetics and utilization of a photochromic molecule as a bi-stable fluorescing sensor of polarity in live cells are described. This molecule is a photochromic fulgimide, 2,3-dialkylidenesuccinimide, which emits fluorescence that can be switched optically on and off. The fluorescence intensity is a function of the polarity of the molecular environment, namely it fluoresces strongly when the molecule is in its polar isomeric structural form. We demonstrate that this molecule enters live cells without inducing damage, it binds primarily to internal membranous organelles (mitochondria) and its fluorescence can be switched optically "on" and "off" repeatedly while inside the living cell. A possible use as a bi-stable, on/off sensor is discussed. © 2004 Published by Elsevier B.V. Introduction Photochromic materials have found potential applications in high capacity optical storage, optical molecular switches, optical limiters and as non-linear media. In particular, photochromic fulgimides and fulgides are being studied widely because they exhibit excellent photochromic behavior, and their two isomeric forms are thermally stable and photoreversible, which makes them suitable for many electronic applications [1,2]. Even though a number of these molecules have been studied previously, additional materials have been developed because of the continuously expanding need for new photochromic molecules with optimal physical and chemical properties. This has been necessary in order to satisfy high density computer storage requirements as well as the needs of other applications. The unique property of photochromism is the reversible photoinduced conversion of the molecule between two isomeric forms. In some cases, only one of the forms emits fluorescence, and that is utilized for 3D optical storage. The fulgimide utilized in the present studies fluoresces strongly only in its polar, closed-ring form, see Fig. 1, while the non-polar open-ring form does not fluoresce. Therefore, the inter-conversion from one form to the other is the operational mechanism for switching the fluorescence "on" and "off". This bi-stability, and the fact that the fluorescence intensity is a function of polarity, suggest that this molecule could function as a sensor for changes in the polarity of species within the environment of biological and chemical systems. In this paper, we report the spectroscopic properties of this photochromic molecule as well as its use as an "on-off" fluorescence switch in live cells. The fulgimide described here is the photochromic component of a recently described composite molecule [3], which under specific excitation conditions exhibits both photochromism and fluorescence. In addition it is also bi-stable, namely it can reside in either the "on" fluorescent state or the "off" non-fluorescent state. This "on-off" switching can be optically induced by changing the structure of the molecule from its polar to non-polar form (Fig. 1). This switching can be performed repeatedly over time in liquid solvents, solid polymer matrices, and individual living cells by excitation at the absorption wavelengths of the two forms. Imaging of cells All cellular imaging experiments were performed with a Zeiss LSM 410 (Zeiss Inc., Thornwood, NJ, USA) confocal laser scanning microscope. The 488 nm light of the Ar ion laser induced the fluorescence.
The fluorescence was observed through a long-pass 610 nm filter. The second channel was used to observe the cells under phase contrast. A Zeiss Neofluar 100× Ph3 1.3 NA oil immersion objective was used in all experiments. Spectroscopic measurements All in situ spectroscopic measurements were performed with a Zeiss LSM 410 microscope, fiber-optically coupled to a Spectra-Pro 150 spectrograph with a 300 grooves/mm grating blazed at 500 nm (Acton Research Corp., Acton, MA, USA), interfaced to a high dynamic range TE-CCD spectrograph and camera (Princeton Instruments, Princeton, NJ, USA). The CCD temperature was maintained at −40 °C. For in situ live cell measurements, switching between the closed-ring polar fluorescent form and the open-ring non-polar non-fluorescent form was achieved by irradiation of the sample on the microscope stage with a 100 W halogen lamp. The polar form of the molecule was induced by 1 min irradiation through a short-pass 450 nm filter. The non-polar form of the molecule was generated by 1 min irradiation through a long-pass 520 nm filter. Fluorescence spectra were acquired immediately after irradiation. The excitation light from a 100 W Hg lamp was filtered through a narrow band-pass filter centered at 550 nm. The fluorescence emission was separated from the excitation light by a long-pass 610 nm filter. The emission acquisition time was 500 ms. The fluorescence intensity was measured for both the polar and non-polar forms at 630 nm using spectra that were corrected by background subtraction. The absorption and emission spectra in solution were obtained by means of a Shimadzu UV-1601 spectrophotometer and a Shimadzu RF-5301 PC spectrofluorophotometer, respectively (Shimadzu Scientific Instruments, Inc., Columbia, MD, USA). The time-resolved spectra and kinetics were measured with the 130 fs, 10 MHz laser system (Tsunami 3941-MIS, Spectra-Physics Lasers, Mountain View, CA, USA) described previously [4]. The fulgimide was synthesized by the previously described procedure [2]. Cell culture PTK 2 (Potorous tridactylis, American Type Tissue Culture Collection, Washington, DC, USA, #CCL 56) cells were cultured in minimum essential growth medium (GIBCO, Grand Island, NY, USA) supplemented with 2 mM L-glutamine, penicillin (100 mg/ml), streptomycin (100 mg/ml) and 10% heat-inactivated fetal bovine serum. All culture reagents were purchased from Invitrogen (Carlsbad, CA, USA). Cells were maintained at 37 °C in a 7.5% CO2 incubator. For the experiments, cells were seeded in imaging dishes at a density of 50% confluence. Dye preparation A stock solution of the fulgimide molecule was prepared in DMSO (dimethylsulfoxide). For the imaging study, cells were labeled with the fulgimide at a concentration of 2.4 × 10⁻⁶ M prepared in growth medium without phenol red, with the pH value adjusted to 6.5. Spectroscopy and kinetics The open-ring form, Fig. 1, has a light yellow color which upon irradiation with 386 nm light is transformed into the closed-ring structure, whose absorption spectrum is shifted to the 485-550 nm range. The wavelength of the absorption maximum of the closed form is affected by the polarity of the solvent. This observation may be due to the large dipole moment of the fulgimide in the excited state, which is stabilized by the polar solvent. The photochemical mechanism of the conversion between the polar and non-polar structures and the resulting fluorescence are shown in the energy level diagram depicted in Fig. 1.
The structure of this molecule and the absorption spectra of its two photochromic non-polar and polar forms, with maxima located at 386 and 539 nm, respectively, are shown in Fig. 1 and Fig. 2(A). The fluorescence shown in Fig. 2(B) is emitted strongly when the molecule is in the polar form, while the non-polar form is practically void of any emission. The polar form is formed by excitation of the non-polar isomer with 400 nm light, while 539 nm light converts the polar form to the non-polar form. The molecule is found to be stable between −55 and +55 °C in both isomeric forms. The intermediate spectra and kinetics of the transformation from the polar to the non-polar form were measured by means of ultrafast time-resolved spectroscopy. The experimental system was composed of an amplified Ti:Sapphire laser emitting 100 fs pulses with up to 10 mJ/pulse, which has been described previously [4]. The fundamental pulses were converted by second harmonic generation and used at a rate of 50 pps. The excited state spectra formed at various times after excitation are shown in Fig. 3, from 1.3 to 63 ps. The transient spectrum at 1.3 ps has a shape and maximum absorption that are different from the spectra recorded at later times, owing to index dispersion [3]. A plot of intensity at 520 nm versus time (Fig. 4) gives the rate of formation; the fast component in Fig. 4 corresponds to vibrational decay and the growth of the v = 0 level of the first excited singlet electronic state. The rate of transformation from the open non-polar to the closed polar form in acetonitrile was ~5 × 10¹¹ s⁻¹ and the fluorescence lifetime was 5 × 10⁻⁸ s. These data suggest that the on-off switching is on the picosecond time scale, rather than the much slower diffusion-controlled rates of most chemical and biological reactions. Photoreaction quantum yield The quantum yield of the photochromic reaction that induces the ring closure, which generates the polar closed-ring structure, Fig. 1, has been measured, as has the quantum yield of the reverse reaction. The non-polar open-ring form was converted, almost quantitatively, to the polar form by excitation with 390 nm light. However, because both the polar and non-polar forms absorb in the 390 nm region, photoexcitation at 390 nm leads to the formation of a photoequilibrium mixture between the two forms. The colored form can be converted back to the open-ring form by excitation with 530 nm light. The quantum yields of these photoreactions in various solvents are listed in Table 1. The quantum yield of the ring-opening process of the fulgimide was found to be 0.08 in acetonitrile. In non-polar hexane, the quantum yield of the ring-opening process is about two times larger than in acetonitrile. The low quantum yields in polar solvents may be due to the strong interaction between the polar excited state of the fulgimide polar form and the polar solvent, which may raise the activation energy of the ring-opening process and consequently decrease the transformation efficiency. Fluorescence quantum yield In contrast to the previously investigated fulgimides [5,6], which do not emit fluorescence, the closed-ring form of the fulgimide we synthesized emits intense fluorescence. The fluorescence spectrum of the closed structure form in acetonitrile shows a broad emission band with a maximum at 650 nm, Fig. 2(B).
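As an aside on the kinetic analysis above, a minimal sketch of how such a formation rate can be extracted from the 520 nm transient is given below; the trace is synthetic (standing in for Fig. 4), and the only number taken from the text is the ~5 × 10¹¹ s⁻¹ rate.

```python
# Sketch: fitting a single-exponential rise to the 520 nm transient, as one
# would to extract the formation rate from Fig. 4. The trace below is
# synthetic; only the ~5e11 1/s rate is taken from the text.

import numpy as np
from scipy.optimize import curve_fit

def rise(t, amplitude, k):
    """Single-exponential growth of the closed-form population."""
    return amplitude * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 20e-12, 200)           # 0-20 ps delay axis
true_k = 5e11                               # 1/s, value quoted in the text
data = rise(t, 1.0, true_k) + 0.02 * np.random.randn(t.size)

popt, _ = curve_fit(rise, t, data, p0=(1.0, 1e11))
print(f"fitted rate: {popt[1]:.2e} 1/s (rise time {1e12 / popt[1]:.1f} ps)")
```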
To confirm that the closed-ring form of this fulgimide emits the observed fluorescence, as opposed to impurities or other species, the excitation spectra and the changes in fluorescence emission intensity as a function of open/close cycles were measured. The results show that the fluorescence intensity and excitation spectra of the closed-ring form decrease proportionally with the concentration of the fulgimide closed-ring form (see Fig. 2(B)). When the solution was completely bleached, i.e., the absorption band of the closed form completely disappeared, no fluorescence was detected. When the bleached solutions were illuminated with 390 nm light and converted to the polar form, the molecule emitted again. The fluorescence, non-polar to polar, and polar to non-polar photoconversion quantum yields are shown in Table 1. Live cell studies The change in the polarity of the fulgimide is achieved by illumination of the polar fulgimide molecule with 530 nm light, which excites it to an upper electronic state; this is followed by interconversion to the ground state non-polar form. If such a compound maintains its polar-to-non-polar switching characteristics over prolonged time periods in living cells, it could be used as an intracellular chemical/molecular sensor. Localized changes such as pH, viscosity, and sub-cellular chemistry are very difficult to measure in live cells using existing methods. The experimental observations (Fig. 5) demonstrate that this compound (1) does enter the live cell, (2) appears to associate with internal membranous organelles, especially the mitochondria, and (3) does not enter the interphase nucleus (at least in its fluorescent polar state). The compound does not seem to bind to chromosomes in mitotic cells, and it is generally excluded from the mitotic spindle (Fig. 5(e)-(g)). Within the live cell, the molecule was found to emit at 630 nm after excitation at 550 nm. This demonstrates that the polar "on" form of the molecule is present in live cells. Additionally, as the fluorescing molecules are converted to the non-polar "off" form by illumination with 500 nm light, the fluorescence intensity decreases proportionally (Fig. 5(a)-(d)). The molecule can then be driven back into its polar "on" state by exposure to 400 nm light and re-excited to the non-polar form with 500 nm light. The cycling, polar to non-polar form, was repeated more than seven times in the same cell over a 53 min time period. The repeated on-off fluorescence switching demonstrates that the photochromic properties of this molecule persist within the live cell. To our knowledge, fluorescence bi-stable switching in live cells has not been described previously. Such a system provides the additional capability of simultaneously using several different sensors that may fluoresce even in the same region without interference from each other, because all but one sensor can be switched off at any time. Such sensors could provide a means for monitoring several cellular properties and reactions simultaneously. Another significant property of this system is the fact that the polarity-dependent fluorescence intensity and the polarity of the molecule may be switched on and off by simply illuminating the molecule with either 530 or 400 nm light. This bi-stable switching should permit the detection of polar and non-polar species within specific regions of the live cell.
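A toy calculation of the repeated on/off cycling is sketched below. The per-step conversion probabilities are assumptions chosen only to display the behaviour; the ring-opening quantum yield of 0.08 (acetonitrile, Table 1) is the one number borrowed from the text.

```python
# Toy model of repeated on/off photoswitching: the fraction f of molecules
# in the polar (fluorescent) closed-ring form under alternating illumination.
# Per-step conversion probabilities are illustrative; only the ring-opening
# quantum yield of 0.08 (acetonitrile, Table 1) is taken from the text.

closing_prob = 0.05          # per step under 390 nm light (assumed)
opening_prob = 0.08 * 0.5    # yield x assumed absorption prob. under 530 nm

f = 0.0  # start fully in the non-polar open form
for cycle in range(3):
    for _ in range(100):     # "on" phase: 390 nm illumination
        f += closing_prob * (1.0 - f)
    print(f"cycle {cycle}: after 390 nm, polar fraction = {f:.2f}")
    for _ in range(100):     # "off" phase: 530 nm illumination
        f -= opening_prob * f
    print(f"cycle {cycle}: after 530 nm, polar fraction = {f:.2f}")
```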
The system described in this paper differs in concept and operation from the two-photon systems used for conventional multiphoton imaging [7] and for ablation/manipulation of subcellular structures [8,9]. The bi-polar system described here is not dependent on the high photon flux generated by short-pulse lasers, because the fluorescence may be induced by one-photon processes or stepwise two-photon processes, which have much higher absorption cross-sections than two-photon virtual transition processes. This method is non-destructive to both the cell and its organelles because photon pulse intensities that are orders of magnitude lower are utilized. It should be possible to develop "on-off" bi-polar biosensor molecules that are specific to particular environmental, physical and chemical properties, and that may be composed of specific groups targeted to particular structures and reactive centers in cells and tissues. In addition, the polar structure of these molecules renders them suitable for attachment to polar groups within the live cell, thus affording the opportunity to monitor local changes that reflect the changing chemical and physiological states of the cell. Using these specific types of molecular bi-polar probes it should be possible to detect and monitor local chemical changes that affect charge transfer and polarity within the live cell. Detection may be achieved within the timeframe of the polar-to-non-polar conversion (~5 × 10¹¹ s⁻¹) and the fluorescence lifetime (5 × 10⁻⁸ s). Owing to the fact that fluorescence is emitted only when the structure of the molecule and its environment are polar, it becomes a means for the identification of polar moieties in the cell. Conversely, the absence of fluorescence is indicative of a non-polar environment. For example, because the fluorescence intensity increases with the polarity of the environment, it should be possible to measure intracellular pH non-invasively. Preliminary experiments suggest that a similar effect occurs as a function of viscosity. The use of optically driven bi-polar molecules to measure the chemical and physical state of cells in real time should prove very useful in biology and applied biotechnology. F49620-03-1-0087; NIH Grant RR 14892, and the LAMMP NIH RR11092 Biotechnology Resource.
3,480.4
2004-07-19T00:00:00.000
[ "Chemistry", "Biology" ]
Electronic polarization-division demultiplexing based on digital signal processing in intensity-modulation direct-detection optical communication systems We propose a novel configuration of optical receivers for intensity-modulation direct-detection (IM·DD) systems, which can cope with dual-polarization (DP) optical signals electrically. Using a Stokes analyzer and a newly-developed digital signal-processing (DSP) algorithm, we can achieve polarization tracking and demultiplexing in the digital domain after direct detection. Simulation results show that the power penalty stemming from digital polarization manipulations is negligibly small. © 2014 Optical Society of America OCIS codes: (060.2330) Fiber optics communications; (060.4080) Modulation. References and links 1. E. Yamazaki, S. Yamanaka, Y. Kisaka, T. Nakagawa, K. Murata, E. Yoshida, T. Sakano, M. Tomizawa, Y. Miyamoto, S. Matsuoka, J. Matsui, A. Shibayama, J. Abe, Y. Nakamura, H. Noguchi, K. Fukuchi, H. Onaka, K. Fukumitsu, K. Komaki, O. Takeuchi, Y. Sakamoto, H. Nakashima, T. Mizuochi, K. Kubo, Y. Miyata, H. Nishimoto, S. Hirano, and K. Onohara, "Fast optical channel recovery in field demonstration of 100-Gbit/s Ethernet over OTN using real-time DSP," Opt. Express 19, 13139–13184 (2011). 2. K. Kikuchi, "Performance analyses of polarization demultiplexing based on constant-modulus algorithm in digital coherent optical receivers," Opt. Express 19, 9868–9880 (2011). 3. K. Kikuchi, "Digital coherent optical communication systems: Fundamentals and future prospects," IEICE Electron. Express 8, 1642–1662 (2011). 4. C. Brosseau, Fundamentals of Polarized Light (John Wiley & Sons, Inc., 1998). 5. T. Okoshi and K. Kikuchi, Coherent Optical Communication Systems (KTK/Kluwer, 1988), Chap. 6. 6. K. Kikuchi, "Characterization of semiconductor-laser phase noise and estimation of bit-error rate performance with low-speed offline digital coherent receivers," Opt. Express 20, 5291–5302 (2012). 7. P. M. Krummrich and K. Kotten, "Extremely fast (microsecond timescale) polarization changes in high speed long haul WDM transmission systems," in 2004 OSA Technical Digest of Optical Fiber Communication Conference (Optical Society of America, 2004), FI3. Introduction The dual-polarization (DP) transmission scheme has been introduced into practical optical communication systems for the first time by using recently-developed digital coherent receivers [1]. Controlling the state of polarization (SOP) of the DP signal in the digital domain, such receivers can demultiplex the two polarization tributaries in an adaptive manner [2]. The efficient SOP control based on digital signal processing (DSP) relies on the phase information of the DP signal, which is obtained from coherent detection employing phase and polarization diversities [3]. On the other hand, it has been believed that in conventional intensity-modulation direct-detection (IM·DD) systems, we cannot manipulate the signal SOP even by using DSP, because the phase information of the DP signal is entirely lost after direct detection; therefore, in order to demultiplex the DP signal, we need to rely on bulky and slow optical polarization controllers, which prohibit practical implementation of DP-IM·DD systems. Contrary to such a common belief, this paper proposes a novel configuration of direct-detection receivers, which enables polarization-division demultiplexing of the DP-IM signal in the digital domain without using optical polarization controllers.
Implementing the Stokes analyzer and low-complexity DSP in the receiver, we can achieve tracking of SOP fluctuations and polarization-division demultiplexing of the DP-IM signal in the digital domain. Simulation results show that the power penalty stemming from digital polarization manipulations is negligibly small even under very fast SOP fluctuations. This technique may be useful for 100-Gbit/s short-reach optical transmission systems based on the IM·DD scheme, because the 50-GS/s sampling rate of analog-to-digital converters (ADCs) is currently available [1] and the bit rate can easily be doubled with our proposed method. The organization of the paper is as follows: Section 2 discusses the SOP of the DP-IM signal. Section 3 deals with the configuration of the proposed direct-detection receiver composed of the Stokes analyzer. In Sec. 4, we discuss the polarization-tracking and polarization-demultiplexing algorithm used in the DSP circuit. Simulation results on the bit-error rate (BER) performance of the proposed receiver are described in Sec. 5, and the effectiveness of the proposed algorithm is validated. Finally, Sec. 6 concludes this paper. Figure 1 shows the configuration of the DP-IM transmitter. We assume that two independent laser diodes, LD 1 and LD 2, are intensity-modulated with the same clock using either the direct-modulation method or the external-modulation method. The two signals are polarization-multiplexed with a half-wave plate (λ/2) and a polarization beam combiner (PBC). The tributary 1 has the linear x polarization at the transmitter, whereas the tributary 2 has the linear y polarization. The intensity of the lasers is modulated in a binary manner. In the low logic level, the intensity of each tributary is zero. In the following analysis, we assume that the intensity of each tributary in the high logic level is two, so that the average intensity is normalized to unity when both logic levels occur with the same probability of 1/2. SOP of the DP signal The total intensity of the DP-IM signal is classified into the three cases shown in Table 1. In the case (I), the logic levels of both polarization tributaries are low and the total signal intensity is zero. In the case (II), one tributary is in the high level and the other in the low level; therefore, the total intensity is two. In the case (III), both tributaries are in the high level, and the total intensity is four. Corresponding to these cases, the SOP of the DP-IM signal is classified as follows: In the case (I), we have no signal. The SOP in the case (II) at the transmitter is determined either by the linear x polarization ((II)(a)) or by the linear y polarization ((II)(b)). On the other hand, in the case (III), the DP signal never has a fixed SOP, because the phases of the two tributaries are not correlated. Noting that the intensities of the x-polarization and y-polarization components of the DP signal are the same, we find that the S_1 component of the Stokes vector of the DP signal is zero, whereas S_2 and S_3 fluctuate at the speed of the laser linewidth under the condition that S_2^2 + S_3^2 = 4. Thus, we have the relation among the three Stokes vectors shown in Fig. 2. Receiver configuration Figure 3 shows the schematic diagram of our proposed receiver. The incoming DP-IM signal is equally split into four branches after optical pre-amplification and optical filtering if necessary. In the first branch, we measure the signal intensity I_t.
Inserting a polarizer (0° Pol), whose transmission axis is the x axis, we measure the intensity of the x-polarization component I_x in the second branch. Using a polarizer (45° Pol), whose transmission axis is rotated by 45° with respect to the positive x axis, we detect the intensity of the 45° linearly-polarized component I_45 in the third branch. With a quarter-wave plate (λ/4), whose fast axis is aligned to the x axis, and a 45°-rotated polarizer (45° Pol), we measure I_R, which is the intensity of the right-circularly-polarized component, in the fourth branch. This configuration is known as the Stokes analyzer, which determines the Stokes parameters from I_t, I_x, I_45, and I_R [4] as S_0 = I_t (1), S_1 = 2I_x − I_t (2), S_2 = 2I_45 − I_t (3), and S_3 = 2I_R − I_t (4). The four outputs of the photodiodes (PDs) in Fig. 3 are converted to digital data using four-channel ADCs. The clock (CLK) extracted from the first branch of the Stokes analyzer controls the sampling instances of the ADCs. The sampling rate is one sample/bit. The sampled data are sent to the DSP circuit. DSP circuit In the DSP circuit shown by Fig. 4, after calculation of the Stokes parameters using Eqs. (1)-(4), polarization tracking and demultiplexing are done by the algorithm given in the following. First, in the intensity discriminator, we separate the case (I) from the cases (II) and (III) (see Table 1) by intensity discrimination of the measured S_0(n) with the threshold S_th, where n denotes the number of samples. When S_0(n) ≤ S_th, both tributaries are decided to be in the low level. Next, in the Stokes-vector amplitude discriminator and the reference Stokes-vector updater, we process the cases (II) and (III). Let the reference Stokes vector v(n) be a noise-free unit vector expressing the SOP of the tributary 1 at the receiver (see Fig. 2). Note that the SOP of the tributary 2 is then given as −v(n). Provided that v(n) is known, we can calculate the inner product between the received normalized Stokes vector S(n)/S_0(n) and the reference Stokes vector v(n) as u(n) = S(n) · v(n)/S_0(n) (5), which is the normalized Stokes-vector amplitude along the direction of the reference Stokes vector. Then, Fig. 2 shows that we can separate the three cases (II)(a), (II)(b), and (III) by discriminating the distribution of u(n) into three regions. Let the discrimination thresholds for u(n) be u_th (> 0) and −u_th. When u(n) ≥ u_th, we decide that the measured sample belongs to the tributary 1. In such a case, the tributary 1 is in the high level, whereas the tributary 2 is in the low level. The reference Stokes vector is then updated as v(n+1) = v(n) + με(n) (6), where μ is the step-size parameter. Equation (6) shows that the reference vector v(n) is modified by using the error signal ε(n) = S(n)/S_0(n) − v(n) and tracks the SOP of the tributary 1 even when it fluctuates on the Poincaré sphere due to the random change in fiber birefringence. A smaller value of μ improves the signal-to-noise ratio of v(n) but reduces the SOP tracking speed; therefore, we need to choose an optimum value of μ, depending on the SOP fluctuation speed. On the other hand, when u(n) ≤ −u_th, we decide that the measured sample belongs to the tributary 2. In such a case, the tributary 1 is in the low level, whereas the tributary 2 is in the high level. Reversing the sign of the normalized Stokes vector in Eq. (6), we have the update formula for v(n) given as v(n+1) = v(n) + με(n) (7), where the error signal ε(n) = −S(n)/S_0(n) − v(n) controls the reference Stokes vector. When |u(n)| < u_th, both tributaries are in the high level; in this case we do not update the reference Stokes vector, because the SOP is not fixed.
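A compact sketch of this per-sample tracking and demultiplexing loop is given below. The thresholds and step size anticipate the values found in Sec. 5 (S_th = 0.6, u_th = 0.55, μ = 1/2^7); the explicit renormalization of v is an added safeguard not spelled out in the text, and nothing here is the authors' actual implementation.

```python
# Compact sketch of the per-sample tracking/demultiplexing loop. Threshold
# and step-size values anticipate Sec. 5 (S_th = 0.6, u_th = 0.55,
# mu = 1/2**7); the explicit renormalization of v is an added safeguard not
# spelled out in the text. This is an illustration, not the authors' code.

import numpy as np

S_TH, U_TH, MU = 0.6, 0.55, 1.0 / 2**7

def demultiplex(stokes, v0=(1.0, 0.0, 0.0)):
    """stokes: (N, 4) array of measured (S0, S1, S2, S3) samples.
    Returns the decided bit sequences of the two tributaries."""
    v = np.asarray(v0, dtype=float)
    bits1, bits2 = [], []
    for s0, s1, s2, s3 in stokes:
        if s0 <= S_TH:                       # case (I): both tributaries low
            bits1.append(0); bits2.append(0)
            continue
        s_hat = np.array([s1, s2, s3]) / s0  # normalized Stokes vector
        u = s_hat @ v                        # inner product, Eq. (5)
        if u >= U_TH:                        # case (II)(a): tributary 1 high
            bits1.append(1); bits2.append(0)
            v = v + MU * (s_hat - v)         # update, Eq. (6)
        elif u <= -U_TH:                     # case (II)(b): tributary 2 high
            bits1.append(0); bits2.append(1)
            v = v + MU * (-s_hat - v)        # update, Eq. (7)
        else:                                # case (III): both high, no update
            bits1.append(1); bits2.append(1)
        v = v / np.linalg.norm(v)            # keep v a unit vector (safeguard)
    return np.array(bits1), np.array(bits2)
```

Started blind with an arbitrary v0, the loop may converge with the tributaries exchanged, exactly as noted in the text.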
It should be noted that in the cases (I) and (III) we do not update the reference Stokes vector and keep the one defined in the nearest preceding case of (II); however, since the fluctuation speed of the reference Stokes vector is much slower than the bit rate, such thinned-out operation of the update process never degrades the BER performance, as shown in Sec. 5.3. Although we have assumed that v(n) is known, the update process using Eqs. (6) and (7) can start from an arbitrary reference vector in the blind mode. However, depending on the initial choice of the reference Stokes vector, the tributaries 1 and 2 may be exchanged. After a sufficient number of iterations with a proper choice of μ, the initial tracking process converges and we can find an accurate estimate of v(n) even under very fast SOP fluctuations. Thus, we can discriminate the four cases (I), (II)(a), (II)(b), and (III). Finally, we complete the demodulation process by aligning the bit sequences of the tributaries. Simulation model In computer simulations, we generate IM signals having binary random bit patterns for the two polarization tributaries. The number of bits for each tributary is N = 2^20. The Jones vector of the DP signal at the transmitter is written as E_in(n) = [E_in,x(n), E_in,y(n)]^T (8). The complex amplitudes of the electric fields E_in,x(n) and E_in,y(n) are given as E_in,x(n) = s_x(n) exp[iφ_x(n)] + n_x(n) (9) and E_in,y(n) = s_y(n) exp[iφ_y(n)] + n_y(n) (10), where the laser phases evolve as φ_x(n) = φ_x(n−1) + Δφ_x(n) (11) and φ_y(n) = φ_y(n−1) + Δφ_y(n) (12). In these equations, s_x(n) and s_y(n) are signal amplitudes, which are either √2 (high level) or 0 (low level), so that the average intensity of each polarization is unity. Parameters n_x(n) and n_y(n) are complex-valued Gaussian noises. The variance of the real part of n_x,y(n) and that of the imaginary part of n_x,y(n) are represented as σ_s^2. Then, the carrier-to-noise ratio (CNR) of each polarization is expressed as CNR/pol = 1/(2σ_s^2) (13). The value of CNR/pol is controlled by the amount of Gaussian noise, while the average signal intensity is kept at unity for each tributary. We do not take the CNR reduction by branching of the signal into account; this is valid for the optically pre-amplified signal [5]. The phase increments Δφ_x(n) and Δφ_y(n) are real-valued Gaussian noises and their variance σ_p^2 is given [6] as σ_p^2 = 2π δf T (14), where δf is the 3-dB spectral width of the lasers and T the bit duration. After suffering from the random change in fiber birefringence, the Jones vector of the DP signal at the receiver is given as E_out(n) = J(n) E_in(n) (15), where J(n) is the Jones matrix of the fiber used for transmission. From E_out(n), the Stokes parameters of the received signal are obtained [4] as S_0(n) = |E_out,x(n)|^2 + |E_out,y(n)|^2 (16), S_1(n) = |E_out,x(n)|^2 − |E_out,y(n)|^2 (17), S_2(n) = 2|E_out,x(n)||E_out,y(n)| cos δ(n) (18), and S_3(n) = 2|E_out,x(n)||E_out,y(n)| sin δ(n) (19), where δ(n) = arg[E_out,y(n)/E_out,x(n)]. Equations (16)-(19) are equivalent to Eqs. (1)-(4). When we scramble the SOP of the signal to emulate random fluctuations of the fiber birefringence, the Jones matrix J(n) is expressed as a unitary rotation determined by an azimuthal angle φ_r(n) and a polar angle θ_r(n) (20); through φ_r(n) and θ_r(n), the SOP randomly fluctuates on the Poincaré sphere in a bit-by-bit manner. The parameters φ_r(n) and θ_r(n), including the fluctuations, obey the following equations: φ_r(n) = φ_r(n−1) + Δφ_r(n) (21) and θ_r(n) = θ_r(n−1) + Δθ_r(n) (22), where Δφ_r(n) and Δθ_r(n) are real-valued Gaussian noises having the variance given as σ_r^2 = A T (23). The parameter A, with the dimension of s^-1, is a constant under a specific condition of the fiber used for transmission. Determination of discrimination thresholds In Sec. 5.2, we determine the optimum threshold S_th for discriminating S_0(n) and u_th for discriminating u(n) through computer simulations. Ignoring the laser phase noise, we assume that δf = 0 in Eq. (14).
The fluctuation of the received SOP is also neglected throughout Sec. 5.2; then, we assume that J = 1 in Eq. (15). Figure 5 shows the simulation result for the probability-density function of the intensity S_0(n) when CNR/pol = 10, 12, and 14 dB. The cases (I) and (II) are clearly separated, and we can decide that both logic levels of the tributaries are low when the measured intensity S_0 is smaller than the threshold S_th = 0.6 shown by the solid line. On the other hand, the discrimination ability between (II) and (III) is so poor that the intensity discrimination shown by the broken line cannot be applied to separate (II) and (III). We can understand the noise distribution shown in Fig. 5 as follows [5]: When we consider that the Gaussian noise originates from the amplified spontaneous emission (ASE) of optical preamplifiers, the noise distribution in the case (I) is determined by the spontaneous-spontaneous beat-noise process. On the other hand, since the signal-spontaneous beat noise is predominant in the case (II), the probability distribution in the case (II) is broader than that in the case (I). In the case (III), two stochastically independent signal-spontaneous beat noises for the orthogonal polarizations are added together; therefore, the variance of the noise in the case (III) is twice as large as that in the case (II). Figure 6 shows the simulation result for the probability-density function of the inner product u(n) given by Eq. (5), when CNR/pol = 10, 12, and 14 dB. Three peaks clearly appear in this function, and we can optimally discriminate (II)(a), (II)(b), and (III) when u_th = 0.55, as shown by the solid lines. BER performance In the BER calculations, we scramble the SOP of the signal, assuming that the parameter A in Eq. (23) is 10^5 s^-1. The variance σ_f(N)^2 of φ_r(n) and θ_r(n) at the N-th bit is written as σ_f(N)^2 = N A T (24). Therefore, if we assume the 25-Gbit/s/pol system (T = 40 ps), the standard deviation is 2 rad in a 40-μs time span for N = 2^20 bits. This value is much larger than the SOP fluctuations observed in real systems [7]. The step-size parameter μ is set at 1/2^7 to track the SOP fluctuation most accurately. We also include the effect of the laser linewidth δf, assuming that δf·T = 1 × 10^-3, which means δf = 25 MHz at the bit rate of 25 Gbit/s/pol. Figure 7 shows the typical convergence property of the error magnitude ε controlling the reference Stokes vector, where we use a moving average with a span of 21 samples. Within 1,000-sample periods, the SOP tracking process is stabilized; bit errors are then counted after the convergence of the error magnitude. Figure 8 shows the BERs calculated as a function of CNR/pol. The red curve is the BER of each polarization tributary of the DP-IM signal demodulated with our proposed method. The black curve represents the BER performance of the single-polarization (SP) IM signal for comparison. In the DP-IM scheme, we find that both polarization tributaries have almost the same BER characteristics and the power penalty with respect to the SP-IM scheme is negligible. Thus, the digital polarization-manipulation process does not generate any harmful effect even under very fast SOP fluctuations. The effect of the chromatic dispersion of the link is also examined. We include the dispersion value of β_2 L/T^2 = 0.125, where β_2 denotes the dispersion parameter and L the fiber length.
This value corresponds to a 10-km-long standard single-mode fiber (SMF) at the bit rate of 25 Gbit/s/pol and at the wavelength of 1.55 μm. The red curves in Fig. 9 show the BER characteristics of the proposed DP-IM scheme with and without chromatic dispersion, whereas the black curves show those of the SP-IM signal. We find that the dispersion effect is more severe in the proposed DP-IM scheme than in the conventional SP-IM scheme; however, the difference in the receiver-sensitivity degradation due to chromatic dispersion is not so significant between the two cases. Conclusions We have proposed a novel configuration of IM·DD receivers, which enables polarization-division demultiplexing in the digital domain after direct detection. Simulation results show that the power penalty stemming from digital polarization-division demultiplexing is negligibly small even under very fast SOP fluctuations. The proposed method is useful for 100-Gbit/s short-reach optical transmission systems based on the IM·DD scheme, because the bit rate can easily be doubled.
4,310.6
2014-01-27T00:00:00.000
[ "Physics", "Computer Science" ]
Advances in Neuro-Oncological Imaging: An Update on Diagnostic Approach to Brain Tumors Simple Summary In the realm of neurology, advanced imaging tools play a crucial role as critical endpoints in clinical trials. While magnetic resonance imaging (MRI) serves as a primary diagnostic tool, it exhibits limitations in specific scenarios. Ongoing research in neuro-oncological imaging aims to address these limitations. Our review explores the latest advancements in imaging modalities for neuro-oncology, highlighting the accuracy and competence of each modality. These include PET tracers and radiolabeled amino acids, PET/MRI, radiomics, deep learning, MR perfusion imaging, MR fingerprinting, MR spectroscopy imaging, MR elastography, and intra-operative ultrasound techniques. The focus is on the potency of these modalities in diagnosis, cancer staging, prognosis, and post-treatment evaluation, ultimately enhancing accuracy and effectiveness in managing brain tumors. Abstract This study delineates the pivotal role of imaging within the field of neurology, emphasizing its significance in the diagnosis, prognostication, and evaluation of treatment responses for central nervous system (CNS) tumors. A comprehensive understanding of both the capabilities and limitations inherent in emerging imaging technologies is imperative for delivering a heightened level of personalized care to individuals with neuro-oncological conditions. Ongoing research in neuro-oncological imaging endeavors to rectify some limitations of radiological modalities, aiming to augment accuracy and efficacy in the management of brain tumors. This review is dedicated to the comparison and critical examination of the latest advancements in diverse imaging modalities employed in neuro-oncology. The objective is to investigate their respective impacts on diagnosis, cancer staging, prognosis, and post-treatment monitoring. By providing a comprehensive analysis of these modalities, this review aims to contribute to the collective knowledge in the field, fostering an informed approach to neuro-oncological care. In conclusion, the outlook for neuro-oncological imaging appears promising, and sustained exploration in this domain is anticipated to yield further breakthroughs, ultimately enhancing outcomes for individuals grappling with CNS tumors. Introduction In the field of neurology, imaging plays a central role in diagnosis, predicting prognosis, and assessing treatment response for central nervous system (CNS) tumors. Evaluation through imaging may also serve as a crucial substitute for endpoints in clinical trials. The continuous evaluation and discovery of new therapeutic agents, including immunotherapy, underscores the central objective of neuro-oncologic imaging, which is the accurate evaluation of disease progression and the identification of treatment-related changes [1].
Malignant brain tumors can be categorized into two broad groups: metastatic tumors, which arise from locations outside the brain, and primary tumors, which originate within the brain tissue itself and its surrounding meninges. Metastatic brain tumors most commonly originate from the lungs, breasts, and skin, particularly melanoma [2]. Over 100 distinct primary CNS tumor cell types contribute to different histopathologies, with each demonstrating a unique set of clinical presentations, treatment options, and potential outcomes. In addition to histology and immunohistochemistry, substantial advancement in molecular diagnostics has allowed for histogenetic classification of various types and subtypes of these tumors, as described in the recent fifth edition of the WHO classification of brain tumors. In a study spanning 2016-2020, the average age-adjusted incidence of all malignant and non-malignant CNS tumors was 24.83 per 100,000 people. In that study, roughly 27.9% of all CNS tumors were found to be malignant and 72.1% were categorized as non-malignant or benign. Gliomas constituted 26.3% of all tumors. Among the primary malignant tumor histopathologies, glioblastoma (GBM) was the most frequently occurring, constituting 14.2% of all tumors and 50.9% of all malignant tumors. Conversely, meningioma (Figure 1) was the most common non-malignant tumor, accounting for 40.8% of all tumors and 56.2% of all non-malignant tumors [3].
Figure 1. A forty-eight-year-old male presented with a solid homogeneously enhancing left frontal lesion with dural tail sign and significant perilesional vasogenic edema, in keeping with WHO grade I meningioma ((A,B): T1 post contrast, (C): FLAIR). The prognosis for patients with brain tumors, especially high-grade neoplasms, remains poor despite conventional treatments like surgery, radiotherapy, and chemotherapy. The complex and diverse nature of these tumors, along with frequent recurrence near the primary site, complicates their management [2]. To better facilitate accurate diagnosis and effective treatment planning, it is valuable to differentiate malignant and benign CNS tumors. Magnetic resonance imaging (MRI) serves as the main imaging modality for diagnosis and follow-up monitoring in patients with CNS tumors. However, conventional structural MRI remains limited in certain capacities and situations, including an inability to discern the full extent of infiltrative tumors (such as gliomas) and difficulty discriminating between neoplastic and non-neoplastic processes, particularly in the post-treatment setting (such as radiation necrosis after radiotherapy) [4]. Accordingly, neuro-oncologic imaging research has focused on addressing these shortcomings. Here, our objective is to review the latest advancements in various imaging modalities utilized in neuro-oncology and to delve into their influences on diagnosis, cancer staging, prognosis, and post-treatment evaluation.
PET Tracers and Radiolabeled Amino Acids Although structural imaging with MRI and computed tomography (CT) provides excellent image resolution and anatomical localization of brain tumors, supplemental molecular imaging using positron emission tomography (PET) with radiotracers can provide vital details about the metabolic and proliferative activity of various cancers. Significant advancements have been made in the field of radiotracers and their utilization in clinical settings. PET radiotracers have become an increasingly popular form of imaging due to their capacity to reveal tumor activity that structural imaging alone cannot show. One of the most famous and widely used radiotracers is 18F-fluorodeoxyglucose (18F-FDG), a glucose analog. This radiotracer is widely used due to its proven efficacy in crossing the blood-brain barrier (BBB) with ease and its ability to tag highly metabolic areas, including tumors [5]. Although FDG is incredibly beneficial for tumor identification throughout the body, it remains particularly limited in the brain, especially due to the high level of glucose uptake in normal brain tissue, making it difficult to distinguish between normal and pathologic tissue [6]. Furthermore, PET radiotracer limitations become more pronounced when imaging patients throughout various treatment stages. Since treatment for brain cancer may impact tissue surrounding the tumor itself, radiotracers can sometimes tag these areas, rendering it difficult for radiologists to distinguish between the progression of cancer and treatment-related changes in brain tissue. To address this, various other radiotracers utilize tagged amino acids, rather than glucose, to achieve a more specific uptake pattern on the PET scan. One common amino acid radiotracer is [18F]-fluoroethyltyrosine ([18F]FET), which demonstrates decreased uptake by normal brain tissue when compared to 18F-FDG, thereby providing a greater distinction between normal and cancerous brain tissue [7]. Research regarding new and advanced radiotracers has emerged, further proving the substantial utility of this technology. Recently, new protein markers have demonstrated an increased specificity for brain cancer, as well as an increased ability to cross the BBB. One such tracer is [18F]PARPi, which targets a protein overexpressed in cancer cell nuclei. A substantial advantage of this radiotracer when compared to FDG is that its uptake is completely independent of glucose metabolism. This decreases the likelihood of its uptake by other healthy, highly metabolic tissue in the brain [6]. Another tracer, known as fibroblast activation protein inhibitor (FAPI), binds fibroblast activation protein, which is known to be upregulated in some cancers. Early research has shown that although this protein is not upregulated in diffuse astrocytomas, it is upregulated and traceable in isocitrate dehydrogenase (IDH)-wildtype GBM (Figure 2) and high-grade IDH-mutant astrocytomas (Figures 3 and 4) [8]. As new treatments emerge for cancer patients, new imaging tools must be used to better differentiate between cancerous tissue and recovering tissue. One such field is stereotactic radiosurgery, wherein surgeons irradiate specific brain tissues in a targeted manner, avoiding injury to surrounding healthy tissue [9]. Various studies assessing post-treatment tumor recurrence have been conducted, and one emerging radiotracer that has proven successful is the amino acid radiotracer known as [11C]methionine, which is discussed further in the following section [10].
Figure 2. A thirty-year-old male was found to have a heterogeneously enhancing left parietal mass with perilesional vasogenic edema, resulting in significant compressive effect on the left lateral ventricle and shift of midline to the right. The lesion was diagnosed as wildtype GBM, WHO grade IV.

PET and PET/MR in Neuro-Oncology

PET and MRI can serve as complementary imaging modalities, each with their own strengths. Conventional MRI is known for its ability to provide high-resolution structural images of the brain, offering exceptional tissue contrast [11]. As such, it is an invaluable imaging modality for many non-traumatic anatomical neurological conditions, including epilepsy and tumors [12]. A particularly valuable aspect of MRI is diffusion-weighted imaging, which can be utilized to evaluate cell density, estimate tumor grade and extent, guide surgical resection and radiotherapy treatments, and assist in forecasting mortality outcomes [13].
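As a concrete illustration of the diffusion measurement underlying this technique, the apparent diffusion coefficient (ADC) can be estimated from two diffusion-weighted acquisitions. Below is a minimal numpy sketch assuming two hypothetical signal images acquired at b = 0 and b = 1000 s/mm²; the signal values are made up for illustration.

```python
import numpy as np

def adc_map(s0, sb, b=1000.0, eps=1e-6):
    """Estimate an apparent diffusion coefficient (ADC) map from two
    diffusion-weighted images using S_b = S_0 * exp(-b * ADC).
    s0, sb: arrays of signal intensities at b=0 and b=b.
    Returns ADC in mm^2/s (normal brain is roughly ~0.7e-3)."""
    s0 = np.maximum(np.asarray(s0, dtype=float), eps)
    sb = np.maximum(np.asarray(sb, dtype=float), eps)
    return np.log(s0 / sb) / b

# Toy example: densely cellular tumor tissue restricts diffusion, so its
# signal decays less with b and its ADC comes out lower.
s0 = np.array([1000.0, 1000.0])
sb = np.array([500.0, 800.0])   # [normal-like voxel, restricted voxel]
print(adc_map(s0, sb))          # lower ADC in the second voxel
```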
On the other hand, PET focuses on delivering physiological data, offering insights into brain metabolism and functional processes. In oncological applications, PET serves multiple roles, from initially differentiating high-grade from low-grade tumors to guiding biopsy site selection and the extent of resection and radiation therapy at diagnosis. Post-treatment, it aids in assessing either recurrence or the potential transformation to higher-grade malignancy [14][15][16]. PET imaging can employ different tracers, including FDG or amino acid tracers, each with distinct advantages. FDG, a glucose analog, allows for the detection of differences in glucose metabolism between malignant and physiological cells [13]. However, FDG-PET's ability to assess tumor margins can be limited due to high uptake in normal brain parenchyma. In contrast, amino acid PET provides better visualization of tumor borders because normal brain tissue does not exhibit increased amino acid uptake [14,17]. Nonetheless, the combined use of PET and MRI can mitigate the limitations inherent to each individual modality.

When employed in tandem, PET/MRI offers a number of compelling advantages, including enhanced soft tissue contrast and a reduction in ionizing radiation exposure [11,12]. Head movement during PET scanning can disrupt proper attenuation correction or result in incorrect alignment of PET information with MR images. To address this, motion tracking based on MR imaging can be employed to reposition the PET data accurately [11,18]. The decrease in radiation is particularly beneficial for the pediatric population, where CNS cancer is a leading cause of death. Utilizing PET/MRI significantly reduces the cumulative radiation dose for these vulnerable patients [14]. Overall, the combination of PET and MRI technologies not only facilitates an effective initial characterization of disease but also allows for meticulous monitoring of disease progression and the evaluation of treatment effectiveness. Together, PET and MRI provide a comprehensive, multidimensional view of the brain, encompassing both its structural intricacies and dynamic activities [18].

PET/MRI can provide vital information in the challenging landscape of neuro-oncology, such as in the diagnosis and management of gliomas. Gliomas represent approximately 80% of malignant brain tumors and are notorious for their high rates of recurrence and poor survival outcomes [19,20]. Hence, distinguishing between recurrence and post-treatment changes is critical. Conventional MRI often faces challenges in this distinction due to the similar appearance of tumor recurrence and radiation necrosis [19,21]. PET/MRI, particularly with the use of C11-methionine as a tracer, outperforms both MRI and CT alone in this regard [19]. Studies have found that the diagnostic accuracy, sensitivity, and specificity of hybrid C11-MET-PET/MRI are superior to those of MRI alone [22][23][24]. In fact, combined PET/MRI achieved an impressive diagnostic accuracy rate of 95%, compared to 63% for PET and 82% for MRI [25]. Additionally, Deuschl et al. [22] reported a sensitivity of 97.14% and a specificity of 93.33% for 11C-MET-PET/MRI, further supported by Pauleit et al.
[26] with a reported sensitivity of 93% and specificity of 94% for dual MRI/FET PET. Adding further weight to these findings, the integration of PET/MRI with parallel MRI (pMRI) delivered a remarkable 100% diagnostic sensitivity and specificity in differentiating between tumor progression and radiation necrosis post-treatment [21].

While gliomas are the predominant concern in malignant brain tumors, primary CNS lymphomas (PCNSLs) present a different set of challenges. PCNSLs constitute 1-5% of all brain tumors and are more commonly observed in immunocompromised patients [27]. Early diagnosis is crucial for initiating chemotherapy, highlighting the vital role of imaging in the management of these patients. In a study of patients over 60 years old with PCNSLs, baseline cerebellar metabolism and metabolic tumor volume (sumMTV) assessed via [18F] FDG PET/MRI were significant predictors of chemotherapy response. Additionally, larger tumor volumes at diagnosis were associated with poorer overall survival and early death [28].

Emphasizing the crucial role of PET/MRI in the post-treatment management of PCNSLs, [18F] FDG PET/MRI proves valuable in distinguishing between gliomas and PCNSLs, thereby aiding in the selection of appropriate treatment. Despite differences in their MRI appearances, there can be significant overlap in imaging appearance, which can make diagnosis challenging. A multiparametric approach that utilizes 18F-FDG PET/MRI has the potential to differentiate high-grade gliomas (HGGs) from PCNSLs [29].

Dual PET/MRI's expanding applications encompass other types of CNS tumors, including the diagnosis and treatment of meningiomas. MRI is the current diagnostic gold standard for meningioma, although it has limitations, particularly in post-surgical and post-radiotherapy settings [30]. Recent advancements suggest that MRI combined with [68Ga]-DOTATATE PET can enhance meningioma diagnosis, treatment planning, and post-treatment evaluation. The combined approach demonstrates superior differentiation of meningioma from healthy tissue and post-surgical changes [30,31]. [68Ga]-DOTATATE PET achieved a sensitivity of 97.6%, with a standardized uptake value (SUV) threshold of 2.3, and a specificity of 86.1%, with an SUV ratio referencing the pituitary gland (SUVRpit) threshold of 0.3 [30]. This suggests that the technique could be an important tool for enhancing diagnostic accuracy in the management of meningiomas. MRI is often less effective in detecting smaller meningiomas, with a sensitivity of 74% for lesions < 0.5 cm³ [30,32], a gap that [68Ga]-DOTATATE PET/MRI can help to fill [32].

Lastly, the brain is a common site for metastasis of extracranial tumors (Figures 5 and 6), and many lesions are treated with radiosurgery [33]. However, distinguishing between recurrent brain metastases and radiation necrosis is again a challenge for conventional MRI. This issue can be addressed through the use of PET/MRI combined with radiomics. A study by Lohmann et al. demonstrated that the diagnostic accuracy for discerning recurrent brain metastases from radiation necrosis could be elevated to nearly 90% by integrating textural features from both CE-MRI and static FET PET scans [34]. Figure 7 shows an example of recurrent brain metastasis in a renal cell cancer patient after radiation. Also, Johannessen et al. [33] demonstrated that 18F-FACBC could be a valuable tool for the early detection of easily overlooked brain metastases.
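To make the [68Ga]-DOTATATE operating points quoted above for meningioma concrete, the following minimal Python sketch applies the published SUV and pituitary-referenced SUVR cut-offs to hypothetical lesion measurements; the example values and function name are illustrative, not taken from the cited study.

```python
# Published operating points for [68Ga]-DOTATATE PET in meningioma [30]:
SUV_CUTOFF = 2.3       # lesion SUV threshold (sensitivity ~97.6%)
SUVR_PIT_CUTOFF = 0.3  # lesion-to-pituitary SUV ratio (specificity ~86.1%)

def classify_lesion(lesion_suv, pituitary_suv):
    """Toy rule-based call: flag a lesion as meningioma-like if it
    exceeds both the absolute SUV cut-off and the pituitary-referenced
    ratio cut-off reported in [30]."""
    suvr_pit = lesion_suv / pituitary_suv
    positive = (lesion_suv >= SUV_CUTOFF) and (suvr_pit >= SUVR_PIT_CUTOFF)
    return positive, suvr_pit

# Hypothetical measurements (not from the cited study):
for suv, pit in [(5.1, 9.0), (1.8, 9.0)]:
    call, ratio = classify_lesion(suv, pit)
    print(f"SUV={suv:.1f}, SUVRpit={ratio:.2f} -> positive={call}")
```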
Ultimately, PET/MRI presents multiple benefits across various phases of neuro-oncological conditions. Brendle et al. [35] reported an 85% diagnostic accuracy for brain tumors, along with a sensitivity of 78% and a specificity of 89%. The study also highlighted its value in tracking disease progression, noting a nearly 100% positive predictive value, 93% sensitivity, and 95% specificity. Based on current research, PET/MRI has the potential to substantially impact patient care by clarifying unclear treatment outcomes. In their cohort, clinical management was re-evaluated in 53% of cases upon detecting signs of disease progression [35].

Nonetheless, the use of PET/MRI comes with its own set of challenges, such as high costs and restricted availability, in addition to the likelihood of false positive results in cases with inflammation, infection, or post-surgical changes [20]. Acquiring and interpreting PET/MR imaging studies also implicitly necessitates either two separate teams for PET and MR imaging or a single specialized team with training in both modalities [18].
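For readers less familiar with how the accuracy, sensitivity, specificity, and positive predictive value figures quoted above relate to one another, here is a small self-contained Python helper computing them from a 2x2 confusion matrix; the counts used are made up purely for illustration.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics, as used when reporting
    PET/MRI diagnostic performance (all inputs are raw counts)."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
    }

# Hypothetical counts, chosen only to illustrate the calculation:
print(diagnostic_metrics(tp=42, fp=5, tn=40, fn=13))
```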
Radiomics and Deep Learning

Radiomics involves the extraction of subvisual, quantitative data from routine medical images, such as MRI or PET, to form a 3D tumor phenotype. The radiomics workflow includes data acquisition, image pre-processing, tumor segmentation, feature extraction and selection, and model generation [36]. Closely related to this concept is radiogenomics, which correlates genetic mutation status with radiologic features. Deep learning methods such as convolutional neural networks (CNNs) are a form of machine learning that imitates human cognition and is often used in the radiomics pipeline for feature selection and modeling using various classifiers [37].

In neuro-oncology, radiomics and deep learning have shown potential to aid in diagnosis, prognostication, treatment response monitoring, and determining tumor biomarkers and genomics. Although numerous radiomics and deep learning studies have shown promising results, these methods are not commonly used in clinical trials and have yet to be adopted in clinical practice. Obstacles to clinical adoption include the lack of biologic correlation of radiomics features as well as the lack of generalizability and reproducibility between different sites and scanners. The future of radiomics and machine learning in neuro-oncology depends on overcoming these issues [38].

Thus far, radiomics features have successfully been used to differentiate GBM from solitary metastasis [39,40] and GBM from PCNSL [41]. As most radiomics studies focused on two-class classification, Priya et al. [42] demonstrated that a three-class classification radiomics model that can differentiate GBM, metastasis, and PCNSL is also possible. In a recent study, Bathla et al. [43] compared the performance of machine learning and deep learning pipelines in three-class classification, with the highest-performing deep learning pipeline having an area under the curve (AUC) of 0.854 on external validation. Three-class classification is more likely to have clinical utility and generalizability [42].

More recently, Stadbauer et al. [44] developed a radiomics and deep CNN model that differentiated GBM and brain metastasis based on oxygen metabolism data extracted from MRI. Using the parameters of cerebral metabolic rate of oxygen (CMRO2) and tissue oxygen saturation (mitoPO2), these diagnoses were differentiated more accurately than those made by radiologists. Malik et al. [45] found radiomics features that accurately differentiated low-grade gliomas (LGGs) from the peritumoral region (PTR) of GBM, which are often difficult to distinguish by visual inspection. Differentiating LGGs from GBM PTR could potentially aid in reducing the tumor volume that undergoes radiation treatment [45].

Another utility of radiomics is in determining the primary source of different types of metastases [46]. Ortiz-Ramon et al.
[47] demonstrated that a radiomics model using 3D texture features differentiated lung cancer metastasis from breast cancer metastasis (AUC = 0.963) and lung cancer metastasis from melanoma metastasis (AUC = 0.936) with high accuracy. Differentiating primary tumors from brain metastases can help prevent delays in diagnosis and treatment.

Deep learning has also been demonstrated to be beneficial for real-time intra-operative diagnosis. Shen et al. [48] used near-infrared fluorescence imaging combined with a deep CNN (FL-CNN) to diagnose gliomas during surgery and compared the results to histologic examination, the current standard of practice. At high levels of specificity (>80%), the FL-CNN had higher sensitivity and also corrected over 70% of the neurosurgeons' errors. The study demonstrates the potential for deep learning models to improve neurosurgery outcomes by enhancing intra-operative diagnosis [48].

Furthermore, radiomics, deep learning, and radiogenomics have shown great potential in improving survival prediction, grading, and determining the genetic status of gliomas [49][50][51][52]. Although stereotactic brain biopsy is the current gold standard for diagnosis and classification, it does not always capture the heterogeneous nature of gliomas. Therefore, radiomics and radiogenomics have the potential to non-invasively determine genetic status and prognosis via a more complete, "virtual biopsy", which would also aid in more selective chemotherapy and immunotherapy.

Multiple studies have used MRI-derived radiomics features to accurately predict overall survival (OS) [49,53]. Kickingereder et al. [53] designed and created an artificial neural network (ANN) that better predicted overall survival than the criteria for assessing response in neuro-oncology, known as the Response Assessment in Neuro-Oncology (RANO).

Radiogenomics also provides prognostic value. Radiomics features with deep learning models have accurately predicted the genetic status of low-grade and high-grade gliomas, including IDH status, PTEN status, 1p19q-codeletion status, and the status of MGMT promoter methylation [54][55][56][57][58][59]. Choi et al. [60] developed a potentially generalizable combined deep learning and radiomics model that accurately predicted IDH mutation status in gliomas with multiple datasets. Yogananda et al. [61] used a deep learning model for T2WI MRI that predicted MGMT promoter methylation status with a 94.73% mean cross-validation accuracy. Wang et al. [62] used DCE-MRI and DWI radiomics features to forecast IDH mutation status and VEGF expression in gliomas, achieving AUCs of 0.909, 0.880, and 0.842 in external validation groups. Recently, Liu et al. [63] used a radiogenomics model that used radiomics features to predict immune cell infiltration (ICI), a tumor microenvironment biomarker, in GBM and provide additional prognostication value. Eleven radiomics features were used to differentiate tumors with varying ICI scores, which aided in prognostication. Lastly, van der Voort et al. [64] developed a CNN that simultaneously predicted IDH mutation status (AUC 0.90), 1p/19q co-deletion status (AUC 0.85), tumor grade (AUC 0.81), and tumor segmentation (Dice score 0.84) for gliomas. This represents a unique deep learning method that can answer multiple important clinical questions at once.

Radiomics features and machine learning methods have been demonstrated to accurately differentiate between tumor progression and treatment-related changes or pseudoprogression, which has long been a challenge for radiologists [65][66][67]. Kim et al. [68] developed a multiparametric radiomics model (AUC 0.90) that included data from T1WI post-contrast, FLAIR, ADC, and cerebral blood volume. This model performed significantly better than radiomics models that only used conventional MRI (AUC 0.76) or ADC alone (AUC 0.78) and also performed superiorly in external validation (AUC 0.85).

Recently, Müller et al. [69] used radiomics features in conjunction with FET PET parameters, specifically TBRmean (tumor-to-background ratio, mean) and TBRmax (tumor-to-background ratio, maximum), to differentiate between tumor progression and treatment-related changes with high accuracy (AUC 0.92). Prasanna et al. [70] also used COLLAGE features, a type of radiomics feature, to differentiate between radiation necrosis and tumor recurrence in primary and metastatic brain tumors using T1WI contrast-enhanced imaging.
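As a minimal illustration of the FET PET parameters used by Müller et al. above, the sketch below computes TBRmean and TBRmax from a PET uptake volume given a tumor mask and a normal-brain background mask; the volume and masks are hypothetical stand-ins, not data from the cited study.

```python
import numpy as np

def tumor_to_background_ratios(pet, tumor_mask, background_mask):
    """Compute TBRmean and TBRmax: tumor uptake divided by the mean
    uptake of a normal-brain background region (e.g., contralateral
    hemisphere). `pet` is a 3D uptake volume; masks are boolean arrays."""
    background = pet[background_mask].mean()
    tumor = pet[tumor_mask]
    return tumor.mean() / background, tumor.max() / background

# Toy 3D volume with a hot "lesion" in one corner:
rng = np.random.default_rng(0)
pet = rng.normal(1.0, 0.05, size=(8, 8, 8))
pet[:2, :2, :2] += 1.5                       # hypothetical lesion uptake
tumor_mask = np.zeros_like(pet, dtype=bool)
tumor_mask[:2, :2, :2] = True
tbr_mean, tbr_max = tumor_to_background_ratios(pet, tumor_mask, ~tumor_mask)
print(f"TBRmean={tbr_mean:.2f}, TBRmax={tbr_max:.2f}")
```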
Zhang et al. [71] used a radiomics model based on multiparametric MRI, including DWI and arterial spin labeling, which effectively distinguished between the recurrence of glioma and radiation necrosis with an AUC of 0.96 and performed better than the conventional MRI model (AUC 0.88).

Radiomics models have also shown utility in predicting the response to various treatments, including immunotherapies and anti-angiogenic therapies. Li et al. [72] recently developed a radiomics model that evaluated the response to a combination therapy consisting of anlotinib, an anti-angiogenic drug, and temozolomide for recurrent gliomas. Being able to differentiate between patients with a good response to treatment and those with a poor response can help prevent delays in targeted treatments. George et al. [73] sought to create a radiomics model with the aim of predicting both progression-free survival (PFS) and overall survival in glioma patients treated with durvalumab, a PD-L1 inhibitor. They found that the pre-treatment MRI features did not accurately predict PFS and OS; however, the first post-treatment MRI features had a high predictive value for both PFS and OS. Jiang et al. [74] constructed a radiomics model to forecast the responsiveness of brain metastases from lung cancer to gamma knife radiosurgery, achieving an AUC of 0.93 in the primary dataset and 0.85 in external validation.

Despite numerous studies demonstrating the efficacy and potential of radiomics and deep learning models in enhancing the field of neuro-oncology, they have yet to be used in clinical practice. Some of the major barriers to clinical adoption include a lack of generalizability and reproducibility between sites and scanners and a lack of correlation of radiomics features with underlying biological features [75]. Recently, efforts have been made to standardize radiomics features [76][77][78][79], including by Zwanenburg et al. and the Image Biomarker Standardization Initiative [79], which accomplished the standardization of 169 radiomics features for PET, MRI, and CT. Delineating the biological etiologies of radiomics features [80,81] remains an obstacle that requires further exploration. The successful application of radiomics and deep learning in clinical practice hinges on effectively addressing these challenges.
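To ground the modeling stage of the radiomics workflow described in this section, here is a compact scikit-learn sketch of the feature-selection and classification steps, evaluated with cross-validated AUC; the feature matrix is randomly generated and all names and numbers are illustrative, not from any cited study.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical radiomics table: 120 lesions x 200 extracted features
# (in practice these would come from segmented MRI/PET volumes).
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)   # e.g., progression vs. necrosis
X[y == 1, :10] += 0.8              # plant weak signal in 10 features

model = Pipeline([
    ("scale", StandardScaler()),               # harmonize feature ranges
    ("select", SelectKBest(f_classif, k=20)),  # univariate feature selection
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validation gives an internal estimate of AUC; external
# validation on another site's data would still be required.
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {aucs.mean():.2f}")
```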
MR Perfusion Imaging

Blood perfusion is crucial for supplying oxygen and nutrients to tissues and is closely linked to tissue function. Therefore, disorders affecting perfusion are recognized as significant contributors to medical mortality and morbidity [82]. Evaluation of cerebral blood volume (CBV) has been extensively employed in neuro-oncological contexts, such as determining the grade of brain tumors, guiding biopsies, informing targeted therapy, and assessing disease progression and treatment response [83]. Elevated CBV is linked to heightened malignancy and proves beneficial in the grading of gliomas and prognostic assessment [84]. The connection between increased tumor aggressiveness and neovascularization has been extensively documented in the literature on brain tumor perfusion. CT and MR perfusion methods have consistently shown a correlation, indicating that higher CBVs and permeability are associated with high-grade tumors [83,85]. Previous studies found that high-grade tumors showed significantly higher mean values than low-grade tumors [85,86]. The differentiation of low- and high-grade tumors, employing a relative cerebral blood volume (rCBV) threshold of 1.75, demonstrated a sensitivity of 95% and a specificity of 57.5% [87]. In addition, permeability values obtained through a T2-weighted technique were markedly greater for high-grade tumors than for their low-grade counterparts [88]. The rCBV at one month could discriminate pseudoprogression from recurrent, progressive tumors with a specificity of 86% and a sensitivity of 77% [89]. Pseudoprogression showed a lower median rCBV and permeability [90]. Similarly, the most routinely used parameter for distinguishing between tumor progression and delayed radiation necrosis is rCBV, which is elevated in recurrent tumors and reduced in the vicinity of radiation necrosis [91].

For glioma patients without distinctive high-grade anatomical imaging characteristics, international recommendations from the European Society of Neuroradiology endorse the use of perfusion MR imaging before tissue diagnosis [92]. Additionally, MR perfusion imaging can serve in the differential diagnosis of brain tumors. Primary CNS lymphoma has shown low vascularization compared to malignant glioma, so intra-tumoral CBV is not increased or only moderately increased [93]. Metastases are typically easily distinguishable from normal brain tissue, whereas glioma and lymphoma exhibit infiltrative growth patterns [94]. An elevation of CBV beyond the enhancing tumor regions indicates the infiltration zone of gliomas and lymphomas, serving as evidence against metastases [95].

Perfusion imaging is a technique used to evaluate blood flow at the tissue level [96]. MR perfusion is performed through three primary techniques: dynamic susceptibility contrast enhancement (DSC), dynamic contrast enhancement (DCE), and arterial spin labeling (ASL). MRI contrast is administered and dynamically monitored in DCE, utilizing a T1-weighted acquisition, and in DSC, utilizing a T2*-weighted acquisition. Even though the approaches for quantifying cerebral perfusion differ, they both involve monitoring the concentration of a contrast agent over time to estimate permeability and blood volume [97]. In contrast, evaluating perfusion with ASL is accomplished without contrast, relying instead upon magnetically labeled arterial blood, with water acting as a freely diffusing tracer.
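The following numpy sketch illustrates, under strong simplifying assumptions (a single echo time, no contrast-leakage correction, and a synthetic signal), how a DSC signal-time curve is converted to a relaxivity-change curve and integrated to obtain CBV, which is then normalized to a reference region to give the rCBV values discussed above.

```python
import numpy as np

def rcbv_from_dsc(signal, baseline_pts, te, reference_cbv):
    """Simplified DSC analysis: convert signal S(t) to Delta-R2*(t) via
    Delta-R2*(t) = -ln(S(t)/S0)/TE, then integrate over the bolus for a
    CBV estimate (arbitrary units) and normalize by a reference region.
    No leakage correction is applied here."""
    s0 = signal[:baseline_pts].mean()   # pre-bolus baseline signal
    delta_r2s = -np.log(np.clip(signal / s0, 1e-6, None)) / te
    return np.trapz(delta_r2s) / reference_cbv

# Synthetic bolus passage: transient signal drop below baseline.
t = np.arange(60, dtype=float)
tumor = 1000.0 * (1 - 0.4 * np.exp(-0.5 * (t - 25) ** 2 / 9))
nawm = 1000.0 * (1 - 0.15 * np.exp(-0.5 * (t - 25) ** 2 / 9))

te = 0.030  # 30 ms echo time (illustrative)
nawm_cbv = np.trapz(-np.log(np.clip(nawm / nawm[:10].mean(),
                                    1e-6, None)) / te)
print(f"rCBV = {rcbv_from_dsc(tumor, 10, te, nawm_cbv):.2f}")
```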
A meta-analysis and systematic review of twenty-eight studies investigated the diagnostic performance of both DCE and DSC in evaluating glioma after treatment. The accuracy of distinguishing treatment-induced changes from tumor recurrence is affirmed by the high sensitivity and specificity of the DSC and DCE techniques: 90% and 88% for DSC, and 89% and 85% for DCE, respectively [98]. DSC perfusion is applicable for evaluating the response to treatment because it provides information on neoangiogenesis and microvascular density [99]. Permeability metrics like Ktrans (volume transfer constant), Vp (plasma volume), and Ve (extravascular extracellular space volume) obtained from DCE perfusion have been linked to microvascular leakage and vascular density. Consequently, they are employed with some success in assessing treatment response [97].

However, contrast agent leakage represents a pitfall to accurate analysis, and correction methods are essential for correctly evaluating CBV in brain tumors [100]. While DSC evaluation of rCBV is accurate, it may be affected by T1-weighted contrast leakage resulting from blood-brain barrier disruption. This can potentially lead to the underestimation or overestimation of rCBV values within the tumor [101]. Accordingly, some clinical trials have been performed to address this issue [102,103]. Variations in imaging acquisition, extracted parameters, processing software, and analysis methods have produced a range of thresholds for distinguishing tumors; e.g., rCBV thresholds of 0.9 to 2.15 have been employed in the diagnosis of tumor recurrence [98].

ASL represents a non-invasive approach for measuring cerebral blood flow (CBF) by utilizing labeled endogenous blood, producing a normalized CBF map as the main parameter for observation [82]. ASL has the potential to be beneficial in the extended monitoring of glioma after radiation, including in patients with renal dysfunction [104]. It was observed that the normalized CBF ratio was greater in cases of glioma recurrence than in post-treatment radiation injury. Moreover, a strong linear correlation was identified between the DSC and ASL approaches, with a linear regression coefficient of R = 0.85 and a significance level of p = 0.005. This correlation aids in differentiating recurrent glioma from radiation-related injury [105].

MR perfusion imaging could be an excellent diagnostic and follow-up modality in the neuro-oncology field; however, further investigations are required regarding the various imaging techniques and extracted parameters.

Magnetic Resonance Fingerprinting

Magnetic resonance fingerprinting (MRF) has surfaced as a promising imaging method in the field of neuro-oncology, offering quantitative insights into tissue properties. MRF employs a unique single-sequence, pseudorandomized approach to generate T1 and T2 values, providing rapid quantification and tissue identification potential in neuro-oncology. It offers advantages such as accurate tumor margin delineation, distinguishing between primary and metastatic brain tumors, and discerning high-grade from low-grade gliomas. However, its efficacy in tracking longitudinal tumor progression through treatment remains unproven.
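MRF reconstruction is conventionally a dictionary search: each voxel's measured signal evolution ("fingerprint") is matched against simulated evolutions for candidate (T1, T2) pairs. Below is a schematic numpy sketch of that matching step using normalized inner products; a toy exponential signal model stands in for a real Bloch simulation of the pseudorandomized acquisition.

```python
import numpy as np

def simulate_fingerprint(t1, t2, times):
    # Toy stand-in for a Bloch simulation of the MRF sequence.
    return np.exp(-times / t2) * (1 - np.exp(-times / t1))

times = np.linspace(0.01, 3.0, 200)    # seconds
t1_grid = np.linspace(0.5, 3.0, 26)    # candidate T1 values (s)
t2_grid = np.linspace(0.02, 0.4, 20)   # candidate T2 values (s)

# Build the dictionary and normalize each entry to unit norm.
entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid]
D = np.array([simulate_fingerprint(t1, t2, times) for t1, t2 in entries])
D /= np.linalg.norm(D, axis=1, keepdims=True)

# A noisy measured fingerprint from a voxel with unknown (T1, T2):
rng = np.random.default_rng(1)
measured = simulate_fingerprint(1.4, 0.11, times)
measured += rng.normal(0, 0.01, size=times.size)

# Match = dictionary entry with the largest normalized inner product.
best = np.argmax(D @ (measured / np.linalg.norm(measured)))
print("estimated (T1, T2):", entries[best])   # close to (1.4, 0.11)
```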
Three distinct studies were conducted to investigate MRF's effectiveness in defining areas within solid tumors (STs), peritumoral white matter (PWM), contralateral white matter (CWM), and perilesional edema. MRF successfully distinguished solid tumor regions from CWM with T1 and T2 across three studies [106][107][108], and one study of 19 patients was able to distinguish PWM from CWM [107]. Another study found similar success in distinguishing PWM from CWM in GBM multiforme specifically [106]. However, when statistical analysis was conducted on the subset of patients with LGGs, only T1 differences were significant, with T2 trending towards significance [107]. These findings slightly differed from those of another study, which revealed no noteworthy distinctions in either T1 or T2 values between the PWM and CWM of LGGs. The same study also found that PWM and CWM regions did not show statistically notable variations in T1 and T2 values in metastatic brain tumors following correction for multiple comparisons [106]. MRF also successfully used T1 values to separate the ST and PWM regions in LGGs. In IDH-wildtype tumors, MRF T2 and ADC values within the peritumoral edema ≤1 cm away from the ST were significantly higher than those in the ST; however, peritumoral edema >1 cm away from ST margins was not discernable. Conversely, in IDH-mutant gliomas, MRF T1, T2, and ADC values in the ST were markedly elevated compared to the peritumoral edema [108].

MRF has been tested for characterizing neoplasms in three different ways: high- vs. low-grade gliomas, primary vs. metastatic brain tumors, and IDH-mutant vs. wildtype gliomas. MRF displayed mixed results in distinguishing LGGs from HGGs. Two independent studies achieved successful differentiation with both T1 and T2 [107][108][109]; however, one of the two studies only showed significant T1 differences when the sample was confined to pathologically diagnosed tumors, with T2 values approaching significance [107]. Another limitation was that limited differences in solid tumor parameters were observed between GBM multiforme and LGG, except for T2 skewness, which was significant. Significant T1 and T2 variations were also observed in the PWM of GBM vs. LGG. MRF has proven to be a promising tool in identifying primary vs. metastatic brain tumors. MRF mean T2 values were shown to distinguish between the solid tumor of low-grade gliomas and metastases. In examining GBM multiforme versus metastases, analysis of the ST and PWM regions indicated variances in T1 and T2 parameters solely prior to Bonferroni correction [106]. MRF proved effective in identifying genetic mutations, particularly differentiating IDH-mutant from wildtype gliomas. Significantly higher T1 and T2 relaxation times were observed in IDH mutants for regions of interest, including solid tumor and peritumoral edema within 1 cm of solid tumor margins [108].

MRF's non-invasive nature, devoid of radiation and wait time, makes it ideal for pediatric imaging [110]. Pediatric T1 and T2 values significantly differ across the solid tumor and peritumoral regions and CWM. MRF T1 values were able to differentiate between low-grade and high-grade tumors while T2 values were not, paralleling MRF's ability to characterize adult tumors [107].
Despite MRF's diagnostic potential, limitations were noted in monitoring treatment effects. A cross-sectional study demonstrated no significant changes in T1 or T2 values between treated and untreated low-grade glioma groups. Similarly, a longitudinal assessment showed no differences before and after treatment, with a median interval of 262 days [107]. That said, CEST-MRF has recently been combined with a deep reconstruction network (DRONE) to yield much faster brain scans that are also sensitive to lower metabolite concentrations. The six-parameter DRONE reconstruction was able to produce a 256 × 256 voxel image in ~100 ms, compared to the 4 h process using dictionary matching. Even under limited conditions, DRONE provided tissue maps that were less noisy than dictionary matching and was able to find significantly different T1 and T2 values between metastatic solid tumors and contralateral tissue [111].

Magnetic resonance fingerprinting holds great promise in neuro-oncology, offering valuable insights into tumor characterization, grading, genetic mutation identification, and pediatric imaging. Its diagnostic ability would allow physicians to quickly and non-invasively provide patients with accurate treatment plans and prognoses. However, its effectiveness in monitoring treatment responses remains inconclusive, emphasizing the need for further research. Continued exploration of MRF's potential is essential for advancing neuro-oncological diagnostics and patient management.

Magnetic Resonance Spectroscopic Imaging

Magnetic resonance spectroscopy (MRS) is a metabolic imaging method that detects signals generated by the spins of MR-active nuclei. In clinical practice, the MRS signal mainly originates from hydrogen (1H; proton MRS), since hydrogen is among the most abundant nuclei in the human brain, contained in water and lipid molecules. MRS demonstrates excellent potential for evaluating brain neoplasms by supplying chemical information about different metabolites to characterize brain tumors [112]. For instance, the combination of MR spectroscopy and perfusion imaging achieved a specificity of 92% and a sensitivity of 72% in discerning between neoplasms and non-neoplastic lesions [113].

MRS is a non-invasive method for evaluating metabolic function, enabling the measurement of distinct metabolites within a specific tissue volume. In clinical evaluations using proton 1H-MRS, key measurable metabolites include N-acetyl aspartate (NAA), creatine (Cr), and choline (Cho). This established technique is widely recognized for aiding in the diagnosis and monitoring of various brain lesions [114,115]. Common metabolite changes in brain tumors include an increase in Cho, lipids, and lactate, and a decrease in Cr and NAA. Other studies have also demonstrated the use of MRS as a powerful method for discerning metabolic changes linked to tumor grading and progression. In particular, a depression of NAA together with an elevation in Cho is suggested as a reliable marker for tumor characterization [87,116]. In neuro-oncology, achieving complete tumor excision is the main therapeutic objective. Hence, it is essential to accurately identify the precise boundaries of the tumor. Proton MRS guides the surgeon in the evaluation of regions with high metabolic activity (low NAA levels and elevated Cho levels) for biopsy [117,118].
The increase in Cho is due to proliferation and cell membrane turnover [119,120]. Cho levels differ markedly based on cellular density, tumor grade, and necrosis. Cho resonance is particularly prominent in areas characterized by elevated neoplastic density and is noticeably lower in moderate- to low-grade tumors [121,122]. NAA serves as a neuronal indicator, and its concentration diminishes as a result of neuronal damage, as observed in conditions such as extensive lesions, hypoxia, dementia, or multiple sclerosis. The connection between the decrease in NAA concentration and the increase in glioma grade, related to a reduction in neuronal density, makes NAA a potentially substantial diagnostic marker for glioma [123]. Several studies have shown that MRS can potentially evaluate cerebral glioma grading accurately [124][125][126]. The most prevalent primary tumor of the central nervous system originating from glial cells is glioma. In classical histological analyses, gliomas can be categorized into high-grade and low-grade through atypia, anaplasia, mitosis, necrosis, and microvascular proliferation. The role of 1H-MRS imaging in forecasting the survival rate of GBM patients has been evaluated in brain tumor populations [127,128].

Histopathological findings together with MRS data in patients with recurrent or new glioma clarified that decreased NAA and increased Cho were more associated with tumors than with normal brain parenchyma and necrosis [129]. Quantitative or qualitative detection of elevated Cho/NAA peak height ratios serves as a predictive factor in diagnosing high-grade glioma [130,131]. Furthermore, lipid/lactate in untreated glioma indicates the diagnosis of a necrotic grade IV tumor [131,132]. In another study, the ability to differentiate biopsy samples containing glial tumors from non-tumoral regions containing a combination of normal, gliotic, edematous, and necrotic tissue exhibited a sensitivity of 90% and a specificity of 86% when employing a Cho-NAA index (CNI) threshold of 2.5 [133]. Proton MRS has been utilized to differentiate between tumor recurrence and radiation-induced tissue damage following radiation and gamma knife radiosurgery. Elevated Cho signal, Cho/Cr, or Cho/NAA ratios are indicative of recurrence, whereas diminished Cho and Cr levels suggest radiation-induced necrosis [134]. After radiotherapy or gamma knife radiosurgery, a decrease in Cho levels may signify partial remission, whereas stability or an increase in Cho suggests disease progression [135]. Combining short and long TE MRS gives a diagnostic validity of 98% for the main pediatric brain tumor types, such as medulloblastoma, ependymoma, and pilocytic astrocytoma [136]. Moreover, the percentage alteration in the Cho/NAA ratio detected through proton MR spectroscopic imaging proved beneficial in predicting tumor advancement in pediatric brain tumor cases [137]. Furthermore, an elevated Cho/NAA ratio was linked to reduced survival rates in children experiencing recurrent glioma [138].
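As a schematic illustration of the Cho/NAA-based decision rule above, the sketch below computes a simple choline-to-NAA index from hypothetical metabolite amplitudes and applies the CNI threshold of 2.5 reported in [133]; note that clinical CNI calculations use a regression across control voxels, which is omitted here for brevity.

```python
CNI_THRESHOLD = 2.5  # Cho-NAA index cut-off reported in [133]

def cho_naa_index(cho, naa, cho_ref, naa_ref):
    """Simplified Cho-NAA index: the voxel's Cho/NAA ratio normalized
    by the same ratio in reference (normal-appearing) tissue."""
    return (cho / naa) / (cho_ref / naa_ref)

# Hypothetical peak amplitudes (arbitrary units):
voxels = {
    "lesion core":    dict(cho=3.2, naa=0.9),
    "normal-looking": dict(cho=1.1, naa=2.0),
}
ref = dict(cho_ref=1.0, naa_ref=2.1)

for name, amps in voxels.items():
    cni = cho_naa_index(**amps, **ref)
    flag = "tumor-like" if cni > CNI_THRESHOLD else "non-tumoral"
    print(f"{name}: CNI={cni:.2f} -> {flag}")
```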
Myo-inositol (MI) is a cellular osmotic regulator which is detectable within the brain via short TE MRS. Its concentration fluctuates within brain tumors. High-grade tumors such as GBM have lower levels, resulting from disruptions of the blood-brain barrier that may disturb the osmotic equilibrium [139,140]. MI normalized by contralateral creatine (MI/c-Cr) values could serve as an indicator aiding in the prediction of responses to anti-angiogenic treatment and differentiation between individuals with short-term and long-term survival [141,142]. Reduced MI/c-Cr levels in intra-tumoral, contralateral, and peritumoral volumes before initiating treatment of recurrent GBM may indicate a prognosis of poor survival and a lack of response to anti-angiogenic therapy [141,143]. A recent study showed that MI/c-Cr has the capability to differentiate between pseudo- and true progression, highlighting the significance of this short-echo-time MRS metabolite [144].

In the 2021 WHO tumor classification, the existence of the isocitrate dehydrogenase (IDH 1/2) enzyme mutation is what distinguishes astrocytoma from GBM, further highlighting the clinical role of 2-hydroxyglutarate (2-HG) MRS [145]. IDH mutations, predominantly observed in oligodendroglial and astrocytic tumors, have been identified as a marker for low-grade glioma. Gliomas with IDH mutations exhibit improved treatment responses and longer survival durations compared to IDH-wildtype tumors [146]. Mutations in the IDH 1/2 enzyme, commonly found in grade II and grade III gliomas, cause the accumulation of 2-HG in brain tumor cells [147]. Accordingly, 2-HG can be a valuable biomarker and onco-metabolite for diagnosing and observing therapy responses in IDH-mutated gliomas. MRS can detect this metabolite at high field strength [148][149][150]. In one study, 1H-MRS with a short echo time accurately identified the presence of an IDH mutation with an accuracy of 88.39%, a sensitivity of 76.92%, and a specificity of 94.52% [151].

Utilizing MRS imaging along with conventional MRI can reveal essential information concerning the biological traits of tumors to assist effective treatment of recurrent GBM [141]. In patients undergoing chemotherapy, proton MRS could offer insights into the functional response regarding tumor chemosensitivity and allow early treatment modification to prevent unnecessary toxicity [152]. Non-invasive accurate diagnosis of glioma and recurrent glioma is vital, as the prognosis and therapeutic plans mainly rely on the histopathological grade of the tumor. Proton MRS imaging, along with other combined imaging approaches, can provide valuable data and assist the surgeon in acquiring representative cancer samples for histological examination and resection by pinpointing active tumor regions. As elucidated earlier, MRS offers valuable potential information for targeted radiotherapy and selecting the optimal patient treatment.
Magnetic Resonance Elastography

Magnetic resonance elastography (MRE) is a non-invasive method for measuring the mechanical characteristics of tissues. Brain tumor cells and their extracellular matrix demonstrate altered tissue mechanics, which manifests as varied tissue stiffness. Prior knowledge of the visco-elastic properties of brain tumors may guide neurosurgeons in the pre-operative planning of optimal surgical techniques and the therapeutic stratification of patients. Studies in GBM have also shown that MRE may provide information on the WHO grade and IDH status of the tumor: higher-grade gliomas and IDH-wildtype tumors were softer than lower-grade and IDH-mutant tumors. MRE may therefore significantly contribute to the growing field of "mechanogenomics" [153].

Intra-Operative Ultrasound

While other modalities used to characterize neuro-oncological pathology are mostly pre-operative in nature, the intra-operative setting provides its own set of challenges. For example, as the brain is a non-fixed structure, "brain shift" often occurs and can be due to a variety of factors including surgical hardware manipulation, gravity, and fluid loss [154]. This can cause incongruity between pre-operative imaging and actual surgical visualization, making accurate surgical margins difficult to appreciate. Ultrasound, as a real-time intra-operative imaging modality, has been utilized to address these issues. With the improvement in probe technology and the development of advanced software, intra-operative ultrasound (ioUS) is increasingly being utilized in neuro-oncological surgeries. Most literature on intra-operative ultrasound has emerged in the past decade, particularly in the past few years, including review articles by Dixon et al. [155] and Moiyadi [156], a clinical trial by Incekara et al. [157], and a textbook written by Prada et al. [158].

The primary benefit of ioUS lies in its capability to offer real-time imaging during surgical resection for defining surgical borders, compared with pre-operative imaging, which is often fused with ultrasound imaging [159]. While MRI can also be utilized intra-operatively, only ioUS provides real-time imaging. Intra-operative MRI (iMRI) has significant disadvantages, such as cost and increased operative time, that ioUS generally does not have [160][161][162][163]. Oftentimes, iMRI and ioUS are used in conjunction through fusion imaging; in a study involving 58 patients, this combination successfully corrected for brain shift in 42 cases [162]. On the other hand, ioUS used alone can provide similar results to iMRI. Studies involving pediatric patients reported a high concordance between ioUS and post-operative MRI and an equal efficacy of iMRI and ioUS in determining the extent of brain tumor resection [164].

While ioUS has its advantages, its constraints still limit its widespread adoption in the intra-operative setting [155]. Ultrasound artifacts, such as acoustic shadowing and posterior wall acoustic enhancement, limit evaluation. In addition, the field of view is confined to the craniotomy site, as ultrasound cannot penetrate the nearby intact calvarium. Another limitation is that ultrasound remains operator-dependent, with variations in technique, and is significantly more difficult to standardize in imaging, interpretation, and teaching [165].
Specific advanced ultrasound modalities, e.g., contrast-enhanced ultrasound and elastography, have been described for neuro-oncologic purposes. In contrast-enhanced ultrasound, microbubbles allow for the visualization of surrounding arteries and veins, characterize tumor microvascularization, and allow for better definition of tumor borders, especially in tumors with ill-defined boundaries on B-mode [166]. Similar to liver elastography in the evaluation of liver stiffness in cirrhosis, ultrasound can be used intra-operatively to determine certain tumor characteristics based on stiffness and to detect residual tumor tissue [158,167].

While ioUS is spatially inferior to CT and MRI, with lower resolution, it has its own set of advantages that make it a valuable tool in the operating room. This intra-operative modality allows for the real-time visualization of tumor margins, surrounding structures, and nearby vasculature, allowing for safer resections and more accurate planning. In addition, there are significant cost- and time-saving benefits. However, its utilization is still relatively new, and its main limitations are the lack of standardization in training and imaging techniques and the dependency on the user. Generally, high-grade gliomas have been found to be more echogenic than low-grade gliomas; however, the sonographic appearance of different brain tumors is highly variable due to a variety of factors, requiring correlation with pre-operative imaging [155]. Newer literature has attempted to standardize ioUS [155]. For example, studies have attempted to identify the pre-operative parameters that would indicate the need for ioUS [168]. Still, with improvements in ultrasound technology and increasing utilization, ioUS holds substantial promise as a tool that will be increasingly implemented in the future.

Conclusions

In conclusion, the field of neuro-oncological imaging has made significant progress in recent years, revolutionizing our approach to the diagnosis, staging, management, and monitoring of brain and CNS tumors. With the advent of cutting-edge imaging modalities and techniques, we are incrementally achieving a deeper understanding of the complicated nature of these diseases and their response to treatment.

These advancements have not only improved the accuracy of tumor diagnosis but have also addressed challenging clinical scenarios, including the evaluation of treatment-related changes, responses to novel therapies like immunotherapy, and the early detection of disease progression. Knowledge of both the capabilities and limitations of these emerging imaging technologies is essential for providing a higher level of personalized care to patients with neuro-oncological conditions.
Figure 5. A 60-year-old male with a history of colon cancer presented with a heterogeneously enhancing mass involving the right parietal lobe ((A): T1 post contrast, (B): FLAIR). There is significant perilesional vasogenic edema and mass effect on the right lateral ventricle, resulting in midline shift to the left. The findings are consistent with cerebral metastasis.

Figure 6. A 65-year-old female with a history of papillary thyroid cancer was found to have multiple small enhancing cerebral metastases (A) with perilesional vasogenic edema (FLAIR (B)) and a hemorrhagic component (SWI (C)).

Figure 7. A 55-year-old male with a history of renal cell carcinoma presented with a heterogeneously enhancing mass (T1 post contrast (A)). The post-treatment perfusion map (B) demonstrates two components: blue areas are suggestive of post-treatment changes/radiation necrosis and red areas are indicative of minimal residual tumor. The lesion demonstrates internal diffusion restriction (C,D).
12,742.8
2024-01-30T00:00:00.000
[ "Medicine", "Engineering" ]
Filtering genetic variants and placing informative priors based on putative biological function

High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses the diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling the false discovery rate (FDR). Indeed, power improved when gene expression data were used for FDR-controlled informative weighting of association test p values of genes. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for jointly testing rare and common variants, which was observed to depend on linkage disequilibrium structure.

Background

With the availability of very dense genetic marker data sets, such as sequence data, even large association studies can become underpowered. This raises the need to filter, prioritize, or jointly test genetic variants. Filters or priors on genes may be derived from methylation or expression data if available in the same individuals. Alternatively, one may use external information. Recently, multiple annotation tools have become available using several databases and algorithms that predict functional effects of genetic variants. Commonly used are, for example, ANNOVAR (Annotate Variation) [1], VariantTools [2], PolyPhen [3], SIFT (Sorting Intolerant From Tolerant) [4], ENCODE (Encyclopedia of DNA Elements) [5], RegulomeDB [6], CADD (Combined Annotation-Dependent Depletion) [7], or Gerp++ [8]. Tools like ANNOVAR additionally provide variant annotation to genes and to regions such as conserved regions among species, predicted transcription factor binding sites, and segmental duplication regions. Many of the above-listed tools also provide information on regulatory elements that control gene activity. This article demonstrates that functional scores can contribute to the success of association studies. At the same time, functional scores may differ substantially between databases and prediction tools, as they can be based on different functional aspects. Additionally, variant annotations to chromosomal positions continue to be updated, with the National Center for Biotechnology Information (NCBI) [9] human genome build as the standard. Furthermore, variants can be annotated to genes based on different sources, such as ENSEMBL [10], Vega [11], GENCODE [12], and many more.
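To illustrate how such annotations are typically consumed downstream, here is a small pandas sketch that filters a variant table on hypothetical PolyPhen2, SIFT, and CADD score columns; the column names, thresholds, and data are illustrative assumptions, not the tools' actual output formats.

```python
import pandas as pd

# Hypothetical annotated variant table (column names are illustrative;
# real ANNOVAR/PolyPhen2/SIFT/CADD outputs differ in format).
variants = pd.DataFrame({
    "snv":        ["rs1", "rs2", "rs3", "rs4"],
    "maf":        [0.002, 0.150, 0.008, 0.030],
    "polyphen2":  [0.95, 0.10, 0.88, 0.40],   # higher = more damaging
    "sift":       [0.01, 0.60, 0.03, 0.20],   # lower = more damaging
    "cadd_phred": [25.0, 3.2, 18.0, 9.5],     # higher = more deleterious
})

# Example filter: rare variants predicted damaging by at least one tool.
# The thresholds below are common illustrative choices, not standards.
damaging = (
    (variants["polyphen2"] >= 0.85)
    | (variants["sift"] <= 0.05)
    | (variants["cadd_phred"] >= 15)
)
rare = variants["maf"] < 0.01
print(variants.loc[rare & damaging, "snv"].tolist())   # ['rs1', 'rs3']
```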
Researchers also use a variety of definitions of flanking regions. Finally, genes may be grouped by function or biological pathway, again with substantial variability between databases such as KEGG [13], Biocarta [14], or the Pathway Interaction Database [15]. This article discusses approaches that filtered or prioritized genetic variants, regions, or genes. Pathway-based approaches, although also incorporating filters or priors, are discussed separately by Kent [16]. Many researchers filter genetic variants. The simplest forms of filters are minor allele frequency (MAF), candidate genes or variants, or considering only the exome. Filters and statistical models are chosen to increase the power under a hypothetical disease model. The advent of sequencing renewed interest in disease mechanisms involving variants that are less frequent but more penetrant than the common single nucleotide polymorphisms (SNPs) of genome-wide association studies (GWAS). This led, for example, to screening for recessive variants by examining runs of homozygosity [17,18]. When multiple rare causal variants cluster within a gene, identity-by-descent (IBD) mapping may be more powerful than single-locus association testing [19]. IBD mapping can be used in 2-step approaches. For example, Balliu et al [20] identified regions where hypertension cases shared more IBD segments than controls in one part of the sample. They then modeled aggregate effects of each of these regions on blood pressure (BP) in the remainder of the sample. Aggregation tests are used especially for testing rare single-nucleotide variants (SNVs). Aggregation tests are burden tests, variance-component tests, or a combination of both, such as SKAT-O (optimal unified sequence kernel association test) (see, eg, Lee et al [21] for a review). Kernel-based approaches (see Schaid [22] for a review) such as the sequence kernel association test (SKAT) [23] are variance-component tests. Examples of genetic burden tests are T5, combined multivariate collapsing (CMC) [24], or C-α [25]; see also Santorico et al [26]. Aggregation tests can prioritize SNVs by weighting minor allele dosages in the test statistic. Typical weights account for MAF, but may also incorporate the putative functional relevance of SNVs [27,28]. Moreover, weights may be used to combine aggregation test statistics [21,29,30], and one may weight p values while controlling the false discovery rate (FDR) [31,32]. For example, GWAS p values may be weighted based on functional annotations. For aggregation tests on genes, p value weights can be utilized to integrate gene expression or other omics data [33]. This article summarizes the contributions of the Genetic Analysis Workshop (GAW) 19 group on filtering variants and placing informative priors (Tables 1 and 2). These investigations found that improving SNV grouping or selection can noticeably increase power. Moreover, including functional scores or gene expression data as filters or weights on variants or genes, or when combining test statistics, assisted in detecting associations. Some contributions also exploited SNV correlations to increase power or improved the multiple-testing adjusted significance threshold by accounting for SNV correlations. Materials Analyzed data were provided by GAW 19 and included a family sample (n = 959) with extended pedigrees of Mexican Americans from the San Antonio Family Heart Study (SAFHS) and the San Antonio Family Diabetes/Gallbladder Study (SAFDS/SAFGS) [34]. The family sample also contained 103 unrelated sequenced subjects; 259 subjects had gene expression data.
This study was designed to identify low-frequency or rare variants influencing susceptibility to type 2 diabetes (T2D) as part of the T2D Genetic Exploration by Next-generation sequencing in Ethnic Samples (T2D-GENES) Consortium. Phenotypes included real and simulated longitudinal systolic (SBP) and diastolic (DBP) blood pressure and hypertension (HT) status. Available were sequence data for 464 pedigree members and GWAS SNPs for all 959 subjects. Additionally, all subjects were imputed to sequence based on original genotypes and familial relationships [34]. Approaches described herein mostly analyzed imputed dosages to avoid missing genotypes and to maximize sample size. Zhang et al [28] analyzed the GAW19 sample of 1943 independent Hispanic subjects with whole exome sequence. This sample had been ascertained by T2D status. However, GAW19 provided real and simulated cross-sectional BP traits instead [35], using the same trait-simulation model as for the family study. All approaches described herein are nonlongitudinal analyses of BP traits (SBP, DBP, or HT) in relation to minor allele dosages of sequence SNVs or genome-wide SNPs. Methods Statistical methods employed by this group (see Table 1) to incorporate filters or informative priors are mostly based on regression models [27,30,33,36,37]; one is also based on counting methods [28]. Analyses of family data adjusted for familial dependence based on the kinship matrix. They either included the familial covariance in a linear mixed model [27,30,36] or transformed the trait to a conditionally independent surrogate variable [33]. Analyses of independent subjects accounted for population structure (cryptic relatedness and admixture) [37] by using the programs Eigensoft [38] and Admixture [39]. Annotating genetic variants for location and function A variety of freely available genetic databases and highly developed software tools support the annotation of the location and biological function of SNVs. In our group, SNV locations were obtained by ANNOVAR [28,36] or determined based on reference data, for example, from the Genome Reference Consortium [40] or the International Haplotype Map (HapMap) Consortium [41] [30,37]. Reference data were also used to determine linkage disequilibrium (LD) blocks [30] with Haploview [42]. Kim and Wei [27] and Almeida et al [36] used functional annotations from ENCODE, PolyPhen or PolyPhen2, and SIFT, while Liu et al [37] used CADD. In contrast, Zhang et al [28] annotated putative protein binding sites based on 2 different algorithms using random forest classifiers [43]. Filtering genetic variants Not all areas of the genome were studied. Some researchers filtered the data prior to analyses. Zhang et al [28] investigated exome sequence, and Almeida et al [36] investigated molecularly functional nonsynonymous SNVs predicted by PolyPhen and SIFT. Liu et al [37] examined IBD sharing regions on chromosome 3. Malzahn et al [30] considered gene-containing LD blocks for selected candidate genes. Ho et al [33] analyzed rare SNV burden in genes containing more than 1 and less than 50 rare SNVs (MAF <0.01). Accounting for correlations between genetic variants An important difference between methods is that variant correlations can either be a nuisance or may be used to increase power. For example, IBD mapping exploits variant correlations. IBD mapping can be more powerful than single-locus association testing when multiple causal rare variants cluster within a gene [19].
Therefore, Liu et al [37] tested the relationship between IBD sharing status and trait differences and sums for pairs of individuals. Moreover, the power of kernel methods such as SKAT may be increased through the exploitation of variant correlations [44]. This ability can be utilized fully by analyzing LD blocks [30]. On the other hand, single-locus methods need to account for variant correlations to appropriately correct the significance level for multiple testing. Hence, Almeida et al [36] determined the effective number of independent tests by extreme value theory, based on replicates of a simulated unassociated trait. Correcting the significance level for the number of independent tests The significance level used with multiple testing is always an issue, as too conservative a correction will cause false negatives and not correcting enough will cause false positives. Almeida et al [36] adjusted the significance level for single-locus analyses by estimating the number of independent tests [45]. A total of 1000 replicates of a quantitative phenotype with no genetic effects were simulated and tested on whole genome sequence data, using linear mixed models in SOLAR (Sequential Oligogenic Linkage Analysis Routines) [46]. The smallest p value per simulation run was extracted. The density of these 1000 extremely small p values was fitted to a theoretical beta distribution beta(1, n_e), where n_e is the effective number of independent tests [47], yielding the adjusted significance level α* = 0.05/n_e. This procedure was applied to both whole genome sequence and functional nonsynonymous SNVs.
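To make this correction concrete, here is a minimal sketch, in Python, of fitting beta(1, n_e) to per-replicate minimal p values by maximum likelihood and deriving α* = 0.05/n_e; the replicate count and the "true" effective number of tests below are invented for illustration and are not values from Almeida et al [36].

```python
import numpy as np

def effective_tests(min_pvalues, alpha=0.05):
    """Fit beta(1, n_e) to per-replicate minimal p values by maximum
    likelihood and return (n_e, adjusted significance level).
    For beta(1, b), the MLE is b = -N / sum(log(1 - p))."""
    p = np.asarray(min_pvalues, dtype=float)
    n_e = -len(p) / np.sum(np.log1p(-p))  # log1p(-p) = log(1 - p)
    return n_e, alpha / n_e

# Hypothetical illustration: minimal p values from 1000 null-trait genome
# scans; under H0 the minimum of n_e independent tests is beta(1, n_e).
rng = np.random.default_rng(1)
n_eff_true = 1_500                               # assumed value, not from the study
p_min = rng.beta(1.0, n_eff_true, size=1000)
n_e, alpha_star = effective_tests(p_min)
print(f"n_e ~ {n_e:.0f}, adjusted alpha ~ {alpha_star:.2e}")
```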
Identity-by-descent mapping IBD mapping aims to detect loci sharing ancestral segments in unrelated individuals. In particular, unrelated subject pairs with smaller trait differences are expected to share significantly more rare causative variants than pairs with larger trait differences. Liu et al [37] estimated IBD sharing segments with BEAGLE [48]. The squared trait difference (D) and the squared trait sum (S) for trait DBP between pairs of unrelated subjects were regressed on IBD sharing status. This yielded parameter estimates for the slopes (β̂_S, β̂_D) and their variances (σ_S^2, σ_D^2), which were combined into an overall slope β̂. Linkage was tested with the test statistic t = β̂/SE(β̂) under the null hypothesis of an overall slope of zero [37]. The significance threshold for nonindependent pairs was estimated by a permutation procedure. Priors on genes and variants Genetic priors can be incorporated via variant weights in aggregation tests such as burden tests or SKAT [21]. Burden tests collapse the minor allele dosages x_ik of a set of i = 1, …, m variants into a burden score s_k = Σ_{i=1}^{m} ω_i x_ik per individual k, using a priori specified variant weights ω_i. One then tests trait association with the genetic burden s_k. Although burden tests are powerful when causal SNVs have the same effect direction, SKAT is more powerful when effect directions differ or if many noncausal SNVs are included in testing [21,49]. SKAT is based on an underlying Bayesian model that estimates a random effect per SNV [50]. Specified is a kernel matrix of genetic between-subject similarity, and this kernel constitutes a prior on genetic model space [51]. (Abbreviations used in Tables 1 and 2: CADD, combined annotation-dependent depletion; DBP, diastolic blood pressure; DOMINO, database of domain-peptide interactions; DSSP, define secondary structure of proteins; ENCODE, Encyclopedia of DNA Elements; GO, gene ontology; IBD, identity-by-descent; LD, linkage disequilibrium; MAF, minor allele frequency; PSAIA, protein structure and interaction analyzer; SBP, systolic blood pressure; SIFT, sorting intolerant from tolerant; SKAT, sequence kernel association test; SNV, single nucleotide variant; WGS, whole genome sequence.) SNV weights are incorporated in the kernel (see, eg, Malzahn et al [30]). Typically, rarer SNVs are assigned more weight to counterbalance their reduced power compared to more frequent SNVs. Used are, for example, the weights ω_j = b(MAF_j) [23], where b is the probability density function of a beta(1, 25) random variable. Malzahn et al [30] compared the power of SKAT when using different SNV weights and different kernel functions that either allow or do not allow for SNV interactions in the genetic model. Alternatively, SNV weights may be based on regulatory importance [27] or on protein binding effects [28]. Incorporating functional information into variant weights Kim and Wei [27] categorized SNVs according to RegulomeDB and PolyPhen2 functional relevance scores. SNV weights were defined based on f(s) = s^2, where s equaled the reverse order of the categories, namely s = 6, 5, 4, 3, 2, 1 for category 1 ("most likely affecting binding and expression") to category 6 ("not functionally relevant"). Kim and Wei [27] tested rare SNVs jointly, in sets defined by sliding windows of 4 kb size, for association with SBP. They compared the power of SNV weighting schemes in SKAT (ω_j = f(s_j) versus ω_j = b(MAF_j)) and in the burden test T5 (ω_j = f(s_j) versus unweighted dosages). Zhang et al [28] used an informatively weighted likelihood ratio test (LRT) [53] to test if the proportion of subjects with an informatively weighted minor allele burden exceeding a given threshold differed between HT cases and controls. P values were obtained by a permutation procedure. SNV weights ω_i accounted for the putative effect direction and distinguished between functional SNVs in binding sites (|ω_i| = 10), functional SNVs not in binding sites (|ω_i| = 5), and nonfunctional SNVs (|ω_i| = 1). The informatively weighted LRT was compared with the C-α and CMC burden tests. Optimal joint testing of rare and common variants When not filtering for rare or common SNVs, optimal joint testing of both becomes an issue. Suppose one computed 2 SKAT statistics, Q_rare and Q_common, separately on rare SNVs and common SNVs, in the same region of interest, for the same trait, based on the same genetic null model. As SKAT is a variance-component test, Q_rare and Q_common can be combined into a weighted sum [29]: Q_ws = (1 − λ)Q_rare + λQ_common. (1) Q_ws weights the rare SNV variance component by the overall a priori weight (1 − λ) relative to the common SNV variance component (see Ionita-Laza et al [29] and Malzahn et al [30] for choices of λ).
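As a concrete illustration of these weighting schemes, the following sketch computes beta(1, 25) density weights from MAFs, the weighted burden score s_k defined above, and the weighted-sum combination of equation (1); the genotype matrix, MAF range, λ, and the two Q values are hypothetical placeholders rather than quantities from the cited contributions.

```python
import numpy as np
from scipy.stats import beta

# geno: (subjects x variants) minor allele dosages; maf: per-variant MAF.
rng = np.random.default_rng(7)
maf = rng.uniform(0.001, 0.05, size=20)          # hypothetical rare-variant MAFs
geno = rng.binomial(2, maf, size=(100, 20)).astype(float)

# SKAT-style weights: omega_j = b(MAF_j), with b the beta(1, 25) density [23].
omega = beta.pdf(maf, 1, 25)

# Burden score per subject k: s_k = sum_i omega_i * x_ik.
s = geno @ omega

# Weighted-sum combination of separate rare/common SKAT statistics, eq. (1):
# Q_ws = (1 - lam) * Q_rare + lam * Q_common.
lam = 0.5                                        # an assumed choice of lambda
Q_rare, Q_common = 12.3, 4.2                     # hypothetical SKAT statistics
Q_ws = (1 - lam) * Q_rare + lam * Q_common
print(s[:5], Q_ws)
```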
The weighted sum test (1) is another way of structuring a prior in SKAT. Note that Q_rare and Q_common may use different kernel functions or different SNV weights. Malzahn et al [30] compared this form of joint testing of rare and common SNVs with the default choice of entering all SNVs, with appropriate weights, into a single kernel. Exact p values for SKAT and for the weighted sum test (1) were obtained by Davies' method [54]. Another investigated alternative was Fisher pooling of the correlated p values resulting from the separate rare SNV and common SNV SKAT statistics. Fisher pooling accounted for correlations by Satterthwaite approximation and Brown's method ([55]; see also [29,30]). Note that, analogously to equation (1), SKAT-O combines SKAT and burden tests with the statistic Q = (1 − ρ)Q_SKAT + ρQ_burden, where 0 ≤ ρ ≤ 1 [56]. Informed p value weighting for genes Ho et al [33] obtained gene-wise p values, p_g, for the association of average BP T with rare SNV burden s_g in genes g that had more than 1 and less than 50 rare SNVs (MAF <0.01). Restricting the number of rare SNVs avoids collapsing too many null variants. Ho et al [33] used the sequential sum test [57], which data-adaptively assigned SNV weights ω_i = 0, 1, −1. Earlier, Genovese et al [31] and Roeder and Wasserman [32] had proven that informative weighting of p values to p_g/ν_g, with weights ν_g > 0 that average to 1, maintains proper FDR control, where p_g/ν_g ≤ α_FDR means significance. Ho et al [33] determined such weights ν_g as follows. They tested if the rare minor allele burden s*_g (with SNV weights ω_i = 1, for simplicity) also associated with gene expression E_g, and further if gene expression E_g associated with trait value T. Association tests (2) to (4) were made conditionally independent by adjusting test (3) for trait value T and test (4) for rare minor allele burden s*_g (Fig. 1). The weight of gene g was then obtained as ν_g = ν*_g/ν̄*, where the maximum defining ν*_g was taken over all of the gene's expression measurements and ν̄* was the average of all ν*_g.
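A minimal sketch of this informative p value weighting with FDR control, assuming the standard Benjamini-Hochberg step-up is applied to the weighted p values p_g/ν_g; the gene-wise p values and weights below are invented for illustration.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Benjamini-Hochberg FDR control on informatively weighted p values
    p_g / nu_g, with weights nu_g > 0 that average to 1 [31, 32]."""
    nu = np.asarray(weights, float)
    nu = nu / nu.mean()                  # enforce mean-1 weights
    pw = np.asarray(pvals, float) / nu   # weighted p values
    order = np.argsort(pw)
    m = len(pw)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = pw[order] <= thresh
    # step-up: reject the k smallest, where k is the largest index passing
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, bool)
    rejected[order[:k]] = True
    return rejected

# Hypothetical use: genes with expression support get up-weighted.
p_genes = np.array([0.0004, 0.02, 0.8, 0.03, 0.5])
nu = np.array([2.0, 2.0, 0.5, 2.0, 0.5])   # assumed informative weights
print(weighted_bh(p_genes, nu))
```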
Results and discussion The results for this GAW19 working group varied widely as a result of the different objectives of each contributor. Table 2 provides a brief summary of specific results. Under H_0, extreme p values follow a beta distribution [47]. Almeida et al [36] reported that the beta distribution provided an excellent fit for determining the effective number of independent tests n_e for n single-locus tests. For whole genome sequence, n_e/n = 15%; that is, accounting for LD reduced the multiple-testing burden by 85%. However, significant associations could only be found when LD-correcting the significance level after a priori reducing the sequence data based on functional annotations. Then 2 SNPs were detected: rs218966 in gene PHF14, associated with SBP, and rs9836027 in MAP4, associated with DBP. Liu et al [37] scanned chromosome 3 (GWAS data) for IBD sharing segments that associated with DBP. No genome-wide significance was found. However, several risk variants were detected in the region of gene ZPLD1 by using CADD functional scores and sequence data for the most promising region at 3q12.3. In the GAW19 trait simulation model, SNV effect sizes were based on PolyPhen2 functional prediction scores (Fig. 2) [35]. In Figs. 2 and 3, the displayed SNV effects, PolyPhen2 scores, and the assignment to positions and genes (NCBI build 37, human genome build 19) came from the simulation answers. To illustrate differences between functional annotations, SIFT scores (and rs numbers) were added by annotating the sequence (variant call format [vcf] files) with ANNOVAR and merging the vcf files and simulation answers by chromosome and position. RegulomeDB scores were merged by dbSNP138 rs identifier. Furthermore, the functional scores were transformed to have the same directionality (Fig. 3). Different functional annotations focus on different information about SNVs and only annotate selected SNVs. PolyPhen2 and SIFT both annotate nonsynonymous coding SNVs by a metric score that can be categorized to distinguish benign mutations from damaging ones affecting protein function. Nevertheless, PolyPhen2 and SIFT scores differ to a substantial extent in value and category (Fig. 3a). RegulomeDB annotates regulatory SNVs by an ordinal score ranging from the highest evidence (eQTL, expression quantitative trait locus) to the lowest. Figure 3c illustrates that some SNVs were rated to affect gene expression and transcription factor binding (RegulomeDB scores 1 to 5) but not protein function (scored "benign" by PolyPhen2). For simulated BP, SIFT and RegulomeDB annotations yield mismatched filters or priors whenever they deviate from the PolyPhen2 score used to simulate the SNV effects. For example, SIFT annotated some SNVs with large effects in gene TNN as benign mutations (Fig. 3b), and only a few SNVs in associated genes were rated to be of regulatory importance (Fig. 3d). Nevertheless, for real SBP, several multiple-testing adjusted significant windows (2 with SKAT, 4 with burden test T5) were only found when including RegulomeDB scores as variant weights for rare SNV analysis [27]. One of these regions contained SNUPN [27], a novel finding not previously reported to associate with BP. T5 and SKAT maintained the nominal significance level on the simulated unassociated trait Q1, also when incorporating RegulomeDB scores into the variant weights [27]. Kim and Wei [27] and Zhang et al [28] both recommended using relatively large differences in the SNV weights to distinguish functional from nonfunctional SNVs. Zhang et al [28] observed that different burden tests with functionally informative SNV weights yielded different top-ranked genes. Although no gene was significant, many of them had been reported in the BP literature before. For SKAT, Malzahn et al [30] found that variant weights, but not kernel choice, had a strong influence on power, for rare as well as common SNVs. Kernel methods may gain power by exploiting SNV correlations. This can be utilized fully by analyzing LD blocks [30]. LD structure also influenced which strategy yielded the best joint test of rare and common SNVs with SKAT [30]. When using gene expression data to informatively weight the gene-wise p values for association of rare SNV burden with BP [33], 153 genes (out of 6118) reached nominal significance (weighted p ≤ 0.05). The p value weights were determined such that evidence for phenotype-associated gene expression lowered the burden test p values. As no gene reached multiple-testing adjusted significance, Ho et al [33] used gene set enrichment analysis as an aggregation test to relate the 153 top genes to biological pathways. Conclusions All analyses presented herein used a cross-sectional design by analyzing trait data of the first examination, the first available examination, or longitudinally averaged traits. This mainly contributed to differences in sample size and trait variability.
Furthermore, analyzing trait values at different time points may affect the marginal effect of genes that interact with age. Including biological knowledge increased the power of the association studies performed in our GAW group, especially filtering variants based on putative functional relevance. Prior weights can be included at different stages of the testing procedure: they can be incorporated into the test statistic of SKAT or burden tests, used when combining test statistics, or applied to association test p values. Selecting variant sets should also take genetic structure into consideration, such as LD or IBD sharing. Moreover, the effective number of independent tests can be determined relatively easily by extreme value theory. This enables appropriate adjustment of the significance level for multiple testing and avoids an overly conservative approach. Ideally, variant grouping and selection, inclusion of biological information, and significance level adjustment are applied simultaneously. Strategies like these are useful for increasing power in analyses of highly dense genetic data sets. Filtering variants clearly boosted power in the discussed studies. However, filtering may also lose information. Functional scores such as PolyPhen2, SIFT, CADD, or RegulomeDB differ as they focus on different information about SNVs. Moreover, the appropriateness of a functional score for a considered trait is a priori unknown. Hence, one is well advised to use and combine multiple functional annotations into a single filter or prior. This is feasible as functional annotations yield strong filters that greatly reduce the SNV space.
Global Journal of Engineering and Technology Advances . Particle Swarm Optimization (PSO) is one of the concepts of swarm intelligence inspired by studies in neurosciences, cognitive psychology, social ethology and behavioural sciences, introduced in the domain of computing and artificial intelligence as an innovative collective and distributed intelligent paradigm for solving problems, mostly in the domain of optimization, without centralized control or the provision of a global model. The PSO method has roots in genetic algorithms and evolution strategies and shares many similarities with evolutionary computing, such as the random generation of populations at system initialization or the updating of generations during the search for optima. This paper presents an extensive literature review of the concept of PSO, its application to different systems including electric power systems, modifications of the basic PSO to mitigate its premature convergence, and its combination with other intelligent algorithms to improve search capacity and reduce the time spent escaping local optima. Introduction The particle swarm optimization (PSO) method is one of the concepts of swarm intelligence [1] inspired by studies in neurosciences, cognitive psychology, social ethology and behavioural sciences, introduced in the domain of computing and artificial intelligence [2] as an innovative, collective and distributed intelligent paradigm for solving problems, mostly in the domain of optimization, without centralized control or the provision of a global model [3,4]. In the utilization of PSO for multivariable optimization problems, the swarm takes a specified size corresponding to the variables of the objective function(s). The particles are individually placed at random initial locations, with zero velocity, in the multidimensional design space. The particles of the swarm represent possible solutions in the search space, each possessing a position and a velocity [5]. In this arrangement, each particle keeps track of its positions in the search space, and its behaviour depends on the best position it has discovered and on the best overall position that any member of the swarm has achieved so far. With this behavioural arrangement, the swarm collectively explores and optimizes over the design space. The PSO method works by considering the parametric optimization as an unconstrained D-dimensional minimization problem, min f(X), X = (x_1, …, x_D), where the candidate solutions X_1, …, X_n are the particles to be optimized, each in the form of a D-dimensional vector X_i(t) [6]. As already outlined, each particle has a position and a velocity at any time t, so that x_id(t) and v_id(t) are, respectively, the position and velocity of the i-th particle in the d-th dimension of the space. As the swarm meanders in the search space, the individual positions and velocities are updated as follows [5,7,8]: v_id(t + Δt) = w v_id(t) + c_1 r_1 (p_id − x_id(t)) + c_2 r_2 (p_gd − x_id(t)), x_id(t + Δt) = x_id(t) + v_id(t + Δt) Δt, where p_id and p_gd represent the best position of the i-th particle and the overall best position of the swarm so far. Δt is the time interval or step between iterations; c_1 and c_2 are the acceleration constants representing the cognitive and social learning rates; r_1 and r_2 are randomly generated numbers. The inertia weight necessary to balance the global and local search abilities is represented by w.
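The update rule above can be sketched as a short program; the swarm size, iteration count, coefficient values (w = 0.7, c_1 = c_2 = 1.5), the common convention Δt = 1, and the sphere test function are illustrative choices rather than prescriptions from the reviewed literature.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimizer implementing the velocity/position updates
    above (with the common convention dt = 1)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # random initial positions
    v = np.zeros_like(x)                         # zero initial velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()       # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val                # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()   # update global best
    return g, f(g)

best, val = pso(lambda z: np.sum(z**2))          # sphere function as a test
print(best, val)
```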
PSO algorithm combined with other intelligent algorithms While iteratively searching for optimal particles, premature convergence is one of the limitations of the PSO algorithm. To reach an optimum value, particle swarm optimization depends on the interaction between particles. If this interaction is restrained, the algorithm's searching capacity will be limited, thereby requiring a long time to come out of local optima [9]. Many developments, including combinations of PSO with other intelligent algorithms, have been proposed to solve this problem. An improved algorithm for Particle Swarm Optimization (PSO) named Elite Particle Swarm Optimization with Mutation (EPSOM) was proposed by [10]. The EPSOM algorithm improves the individual quality of the swarm and accelerates convergence. Meanwhile, the mutation operation employed in this new algorithm guarantees the diversity of the swarm and decreases the risk of plunging into a local optimum (a generic sketch of such a mutation step is given below).
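The mutation idea can be illustrated with a generic sketch; the mutation rate and step size are arbitrary illustrative values, and the function is a schematic of the general diversity-preserving mechanism, not the specific operator of EPSOM [10].

```python
import numpy as np

def mutate_swarm(x, rate=0.05, sigma=0.5, rng=None):
    """Generic Gaussian mutation step: perturb a small fraction of particle
    coordinates to restore swarm diversity and reduce the risk of premature
    convergence. Parameter values are illustrative only."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(x.shape) < rate            # coordinates chosen for mutation
    return x + mask * rng.normal(0.0, sigma, x.shape)

# Usage inside a PSO loop (see the sketch above): x = mutate_swarm(x)
```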
A novel particle swarm optimization algorithm was proposed in [11]: the Multi-Swarm and Multi-Best particle swarm optimization algorithm. In order to make full use of the searching information, the novel algorithm updates the population's positions and velocities by following multiple pbest and gbest positions instead of a single pbest and a single gbest. Accordingly, the population is not easily trapped by a local optimum position. To solve the premature convergence problem of the PSO, [20] proposed a novel particle swarm optimization based on the equilibrium distribution of the swarm particles. A new particle which can measure the degree of equilibrium of the swarm's distribution was proposed, to effectively avoid particle clustering within a sub-area of the search space. The modified particle swarm optimization (PSO) algorithm of [12] integrates the particle swarm optimization with the simulated annealing algorithm. It can solve the problem of local minima of the particle swarm optimization and continually narrow the field of search, so it has a higher search efficiency. The algorithm was applied to the function optimization problem, and simulation shows that the algorithm is effective. Another hybrid particle swarm optimization (OPSO) algorithm, which combines the advantages of the Nelder-Mead simplex method (SM) and the particle swarm optimization (PSO) algorithm to solve systems of nonlinear equations, was proposed by [13]. Numerical computation results show that the approach is highly robust, with a high convergence rate and precision, and that it can give satisfactory solutions of systems of nonlinear equations. A hybrid approach incorporating an enhanced Nelder-Mead simplex search scheme into a particle swarm optimization (PSO) with the use of a center particle in the swarm, for effectively solving multi-dimensional optimization problems, was proposed by [14]. To show the effectiveness of the proposed approach, 18 benchmark functions were adopted for optimization via the proposed approach in comparison to existing methods. A new variation of the particle swarm optimization algorithm based on group decision (GDPSO) was also proposed by [15]. The algorithm takes each particle as an individual decision-maker and uses basic particle information, such as the individual historical positions and fitness values, to decide a new position, which is then used in place of the global best position. In this way, the searched space is expanded and the population diversity is increased. The GDPSO algorithm can improve the convergence speed and the capacity for global searching, as well as the avoidance of premature convergence. The simultaneous perturbation particle swarm optimization, which is a combination of the particle swarm optimization and the simultaneous perturbation optimization method, was proposed by [16]. The method has the global search capability of the particle swarm optimization and the local search capability of a gradient method through the simultaneous perturbation. Comparisons between these methods and the ordinary particle swarm optimization were shown through five test functions and a learning problem of neural networks. A new algorithm for multimodal function optimization based on the particle swarm optimization (PSO) was developed by [17]. This method, called the multi-grouped particle swarm optimization (MGPSO), keeps the basic concepts of the PSO and thus shows a more straightforward convergence compared to conventional hybrid-type approaches. The usefulness of the proposed algorithm was verified by application to various case studies, including a practical electromagnetic optimization problem. Due to the slowness and the locality of convergence of simple Particle Swarm Optimization (PSO) in solving complex system optimization, [30] proposed a Two Stage Composite Particle Swarm Optimization (TS-CPSO) as an improved PSO with a strategy of gradual range contraction. The design ideas and the implementation of TS-CPSO were given, and the convergence was analysed by simulation. All the results indicate that the new type of algorithm is able to converge to the global optimal solution and can efficiently avoid the premature phenomenon. Particle swarm optimization based on chaotic neighbourhood search (PSOCNS) was proposed by [38]. The algorithm avoids premature convergence by searching, via chaotic search, each small area defined by all particles, and then jumping out of local optima. The experimental results demonstrate that the proposed PSOCNS is better than the basic particle swarm optimization algorithm in the aspects of convergence and stability. Similarly, to prevent the particles of the PSO from easily falling into local optimum points in the optimization of high-dimensional and complex functions, [18] proposed a novel two sub-swarms exchange particle swarm optimization based on multi-phases (TSEM-PSO). The particle swarm is divided into two identical sub-swarms, with the first adopting the standard PSO model and the second adopting the proposed model. When the two sub-swarms have independently evolved to steady states, particles are exchanged between them; the number of exchanged particles differs between searching phases and gradually decreases, which can increase the information exchange between the particles, improve the diversity of the population and ameliorate the convergence of the algorithm. Experimental results show that the TSEM-PSO is superior to the standard PSO and the TSE-PSO algorithm. PSO applied to electric power systems The PSO method has roots in genetic algorithms and evolution strategies; therefore, it shares many similarities with evolutionary computing, such as the random generation of populations at system initialization or the updating of generations during the search for optima. It differs, however, in not using evolution operators such as crossover and mutation, and in that each particle owns a memory. Because of these similarities, PSO has many of the preferable properties of GA and has been used successfully in many fields [6]. An expanded PSO method for reactive power and voltage control considering voltage security assessment was proposed by [19]. Hybrid PSO, evolutionary PSO, and constriction-factor PSO were used by [21] to find the optimal mapping between unit load demand and pressure set point in a fossil fuel power unit. A GA hybridized with PSO was applied by [22] to find the optimal design of a plate-fin heat exchanger.
Based on the literature, PSO has been found to be robust, flexible, and stable. It is insensitive to local optima or saddle points and is suitable for solving complex optimization problems with many parameters. PSO is fast in solving nonlinear, non-differentiable, multi-modal problems and, just like GA, it does not require gradient computation [1]. PSO applied to miscellaneous problems The PSO method has been applied to the optimal scheduling of hydro systems. Here, [24] proposed an enhanced particle swarm optimization algorithm (EPSO) to solve the optimal daily hydro generation scheduling problem. The feasibility and effectiveness of the proposed EPSO method were demonstrated for the optimal daily generation scheduling of a hydro system, and the test results were compared with those of other methods in terms of solution quality and convergence properties. The simulation results showed that the proposed method was able to obtain good solutions. The calculation of critical depth, an essential parameter in hydraulic engineering, in horseshoe cross-section open channels, based on the PSO algorithm, was presented by [25]. The consistency of the model was checked through certain examples; the numerical examples demonstrated the capacity, accuracy and simplicity of the present PSO model. The results showed that their algorithm may also be used for solving other similar hydraulic engineering problems and equations, such as the normal depth. A method for seismic wave impedance inversion, aimed at improving the fine structure inversion ability for igneous rocks in the exploration of underlying strata and based on particle swarm optimization (PSO), was developed by [26]. The results showed that the PSO-based inversion gives better results for this igneous area. An application of particle swarm optimization for optimizing the process parameters in the turning of PEEK CF30 composites was presented by [27]. The PSO program gives the minimum values of the considered criteria and the corresponding optimal cutting conditions. A random dimension velocity updated PSO was proposed by [40]. Simulations were used to test the proposed PSO on several benchmark functions, and the proposed PSO was then applied to the dynamic economic dispatch problems of power systems. All the simulations prove the efficiency of the proposed method. Modified PSO applications A novel multi-objective endocrine particle swarm optimization algorithm (MOEPSO), based on the regulation of the endocrine system, was proposed by [29]. The results indicate that the designed method is efficient for some multi-objective optimization problems. A multi-objective optimized support vector machine (SVM) algorithm, which proved effective for binary-class fingerprint classification, was developed by [39]. The results showed that the algorithm can reduce the manual work of testing for suitable SVM parameters. A new correlated modified particle swarm optimization (COM-PSO) was developed by [31]. The results of the simulations and the convergence performance were compared with those of the original PSO. The improvement of the results, the convergence speed, and the ability to simulate correlated phenomena with the proposed COM-PSO were discussed on the basis of the experimental results. A new particle swarm optimization (PSO) that incorporates a hybrid mutation strategy was proposed by [32]. They used the Monte Carlo method to investigate the behaviour of the particles in PSO.
The results revealed the essence of the particle's trajectory during execution and the reasons why PSO has relatively poor global searching ability, especially in the last stage of evolution. A new particle swarm optimization algorithm (NPSO) for dealing with the portfolio model of the stock market, in which the optimal and sub-optimal positions of each particle are considered in the iteration process and a crossover operation is used to avoid prematurity, was presented by [33]. The results showed that NPSO outperforms PSO in the algorithm tests, as it prevents premature convergence and has better convergence ability. An adapted Particle Swarm Optimization (PSO) algorithm was proposed by [34] for the inverse kinematic solution of a robot designed for fracture reduction treatment with an external fixator; it was applied to this robot, which has six degrees of freedom. The proposed method was tested on a robot with 3 linear and 3 rotational axes, and much better results were obtained in comparison with the classical PSO. A hybrid improved particle swarm optimization (IPSO) algorithm for the optimization of hydroelectric power scheduling in multi-reservoir systems was proposed by [35]. The scheduling results of the IPSO algorithm were found to outperform PSO and to be comparable with the results of the dynamic programming successive approximation algorithm. The immune algorithm-based particle swarm optimization (IA-PSO), which incorporates the immune information processing mechanism into the original particle swarm optimization algorithm, was proposed and developed by [36]. The results demonstrate that IA-PSO can achieve both a superior load distribution scheme and a higher convergence precision as compared to PSO, and it will hopefully be applied to solving more extensive optimization problems. Conclusion Particle swarm optimization is an innovative, collective and distributed intelligent paradigm for solving problems, mostly in the domain of optimization, without centralized control or the provision of a global model. Several of its applications, together with an apt insight into the algorithm's operation, have been presented. Issues related to premature convergence around local minima have been addressed in the light of the reviewed literature. Furthermore, improvements on the traditional PSO have been reviewed using appropriate literature, to give leverage to the algorithm's searching capacity. Acknowledgments Our gratitude goes to the staff of the libraries of the Cross River University of Technology, Calabar, and Michael Okpara University of Agriculture, Umudike, Abia State, for allowing us to use the reserve, main and e-library sections of their libraries.
Recent Advances in Applied Electrochemistry: A Review . Applied electrochemistry (AE) plays an important role today in a wide range of fields, including energy conversion and storage, processes, environment, and (bio)analytical chemistry. Introduction Applied electrochemistry (AE) is a leading modern science that addresses societal challenges across diverse fields, including energy conversion and storage, processes, environment, (bio)analytical chemistry, and many others [1][2][3]. In the energy sector, electrochemical processes are used for energy conversion and storage. This enables the development of productive and sustainable technologies, such as batteries [4,5], fuel cells [6,7], and electrolyzers [8,9]. These advancements therefore help in the integration of renewable energy sources and support the transition towards greener and more sustainable energy sources. In the environmental field, electrochemistry provides novel solutions for pollution control and water treatment. Electrochemical processes, such as direct and indirect electrochemical oxidation processes and advanced oxidation processes, are used for the degradation of organic pollutants [10], the removal of heavy metals [11][12][13], and the enhancement of effluents' biodegradability [14,15]. These technologies offer efficient and cost-effective approaches to address environmental challenges, protect ecosystems, and improve water quality. In the manufacturing sector, electrochemistry plays a vital role in electrosynthesis, materials manufacturing, and surface modification. Electrochemical techniques, such as electroplating [16], electroforming [17], and electrochemical machining [18], are used to produce functional and protective coatings and to enhance the performance of materials and components. These processes allow for the production of high-quality products with improved material properties and high durability. For example, the electrochemical treatments of material surfaces in metallurgical industries aim, first, to create porous materials that have a higher geometric surface area and, second, to enable the creation of a thick layer of metal oxide that protects and stabilizes the nanostructure of the metal or the semiconductor [19,20].
On the other hand, the European Union (EU) has set goals regarding climate neutrality [37] and adopted the European climate law in April 2021 [38]. This is central to making the EU's economy sustainable and reducing its environmental footprint [39,40]. It incorporates a wide range of initiatives aimed at transforming key sectors, such as energy [41,42], industry [43,44], transport [45], and agriculture [46], to reach climate neutrality and push towards a more sustainable economy. In this context, electrochemistry emerges as a keystone for industry decarbonization and for transitioning towards sustainable manufacturing processes with reduced carbon footprints [47,48]. This includes the use of clean, renewable, and more sustainable energy sources instead of thermal energy sources, via electrification powered by low-carbon electricity sources [49,50], and the use of fuel cells (for hydrogen production) [51,52]. Carbon dioxide capture offers a pathway for industries to mitigate CO2 emissions and simultaneously produce value-added products (e.g., methane production) [53,54]. This review article summarizes the recent advances in applied electrochemistry. It shows how this field has become an indispensable tool for innovation, progress, and problem-solving in the modern world and how it addresses societal challenges across diverse fields (Figure 1). Application Fields 2.1. Energy Conversion and Storage Energy surrounds us all the time, fueling our activities day and night. We often take for granted the convenience of accessing energy to power our gadgets, appliances, machines, and vehicles. However, it is crucial to think about how we store this energy for use [55]. Unlike fossil fuels, which can be easily stored and transported in their natural state, renewable energy sources, such as sunlight and wind, require an intermediary storage method due to their intermittent nature. As a result, batteries are considered to be a key solution for storing this energy so that it can be used when needed, providing a crucial bridge between energy generation and consumption [56,57]. There are two types of batteries, namely those storing energy for a single use, i.e., nonrechargeable batteries, and those for multiple uses, exemplified by rechargeable batteries. Our focus is on the latter, known for their ability to store and release energy repeatedly, leading to cost-effectiveness and eco-friendliness across various applications [58]. The batteries' applications fall into three primary categories: transportation and automotive, including electric vehicles (EVs); portable electronics; and stationary power storage, with each type demanding unique specifications. Table 1 summarizes the different types of commercial batteries used for energy storage and their main applications, advantages, and disadvantages.
In fact, the progress of EVs heavily depends on the improvement of battery technology, which encounters various obstacles, such as underdeveloped batteries and challenges in practical applications [59]. Lead acid batteries (Pb-A) have historically dominated the rechargeable battery market, particularly within the automotive sector, owing to their significant market share in terms of sales value and energy production. However, Pb-A batteries come with inherent limitations, including a relatively short cycle life, low energy density, susceptibility to acid stratification and leakage if damaged, and challenges in downsizing due to concerns related to lead production. Moreover, the environmental impacts associated with lead acid batteries are well documented [60,61], necessitating extensive recycling efforts to mitigate their adverse effects. Consequently, lithium-ion batteries (LIBs) have emerged as a promising alternative, gaining traction due to their numerous advantages, including a high storage efficiency of close to 100% and a diverse range of chemistries, making them suitable for various sustainable applications [62]. In the most common designs of LIBs, the cells consist of a negative electrode called the anode and a positive electrode called the cathode, separated by an isolating separator and surrounded by an electrolyte. During discharge, lithium ions are transported from the anode, through the separator, to the cathode and bind to the active material. Simultaneously, electrons are released and conducted via an external circuit from the anode to the cathode. When charging the LIB, the movements of lithium ions and electrons are reversed by a connected power supply [63]. In recent years, there has been a notable focus on advancing the development of cathodes to be more sustainable and safer. These efforts have resulted in the commercialization of different cathode materials in the EV market, including LiNixCoyAl1-x-yO2 (NCA), LiMn2O4 (LMO), LiNi0.5Mn1.5O4 (LNMO) [64], LiFePO4 (LFP), and LiNixMnyCo1-x-yO2 (NMC)-based batteries. Each of these materials offers its own set of advantages over the others [65]. NCA-, LFP-, and NMC-based batteries are prominently utilized in electric vehicles produced by companies like BMW, Chevrolet, Nissan, Tesla, etc. [66]. While Li-air and Li-S batteries have been manufactured, they are not yet ready for car applications. However, sodium-ion batteries are emerging as a potential alternative to lithium-ion batteries.
On the other hand, personal devices, including smartphones, laptops, tablets, cameras, and wearable technology, heavily depend on energy storage to function effectively within compact designs. Batteries serve as the primary power source for these devices, requiring relatively small storage capacities in limited volumes and lightweight formats. Among the battery types, LIBs, especially those utilizing LMO and LiCoO2 cathodes, stand out as particularly well suited for small-scale electronics. They serve as the primary power source in a variety of devices, from smartphones and computers to power tools. Moreover, the utility of LIBs extends beyond powering portable electronics and transportation. They now play vital roles in supporting the electricity grid. This expansion enables the integration of variable renewable energy sources, ultimately improving efficiency in transmission and distribution systems [67]. An alternative strategy on the verge of commercialization, in addition to cathode development, involves transitioning from graphite anodes to silicon anodes. Silicon, an abundant, non-toxic, and cost-effective material, offers a significantly higher storage capacity. Initial attempts to create anodes solely from pure silicon were unsuccessful. However, a more promising approach, now further developed by industry, involves incorporating porous or other carbon additives into the anode [75]. Currently, various manufacturers, such as Varta, Sila Nanotechnologies, Enovix Corporation, Gotion High-Tech, and others, are actively working on composite anodes with varying, yet increasing, proportions of silicon and/or SiOx or TiSi [76]. Substituting conventional electrodes like graphite or silicon with graphene can enhance battery stability and lifespan, while also providing a higher energy density at a lower cost. However, the structural constraints of graphene limit battery size, thereby restricting the energy-storage capacity primarily to small devices and rendering them unsuitable for large battery packs, including those for EVs [77,78].
Current advancements in nanotechnology focus on the miniaturization of electronic devices to provide power on demand. Lithium ion-based micro/nano-batteries are excellent candidates for this purpose: they feature small size, light weight, high capacity, and long cycle life, and they also offer stability and safety, making them suitable for energy storage in microdevices and wearable applications [79]. In addition to the development of metal-based batteries, organic batteries, also known as polymer-based batteries, feature several advantages over their common metal-based counterparts: they do not contain any toxic or rare heavy metals, their organic raw materials can potentially be obtained from renewable resources, and, at the end of the life cycle, they can be disposed of by incineration without toxic leftovers. In the early 2000s, different potential applications were explored for these more environmentally friendly batteries. Nevertheless, these systems have not found commercial applications until today, with Evonik Industries currently at the forefront in supplying materials for printable polymer-based batteries, which can be used in thin and flexible devices [69,80]. Biobatteries based on biofuel cells are also emerging as a net-zero CO2 emissions solution. While still in the development phase, these biobatteries provide clean, safe, durable, and efficient energy storage that aligns with future climate objectives. They use enzymes, organelles, or microorganisms as ecofriendly biocatalysts to convert chemical or biological energy into electrical energy, enabling sustainable power generation for portable, wearable, implantable, or ingestible devices, as well as offering long-term solutions for unattended environmental electronics [70,81]. The BeFC (Bioenzymatic Fuel Cells) company has developed the first cost-effective and efficient paper biofuel cells. These cells are metal-free, organic, recyclable, compostable, safe, and sustainable. With the expected rise in battery demand by 2050, Verkor, headquartered in France, has launched a battery gigafactory. Their objective is to manufacture low-carbon, high-performance electric batteries, primarily LIBs, to support sustainable mobility efforts. This initiative aims to reduce reliance on Chinese battery manufacturers and create more job opportunities.
Apart from employing batteries with zero net CO2 emissions, battery recycling is crucial [58]. As most battery materials are recyclable, investing in sustainable practices is essential to mitigate environmental impact and meet growing energy needs. Recognizing this necessity, the Swedish battery company NorthVolt and its subsidiary Revolt, which handles recycling, have been established. These facilities are dedicated to not only producing high-quality batteries but also implementing comprehensive recycling strategies. From the initial selection of components (batteries devoid of Li or Co) to the end-of-life recycling process, these companies play a pivotal role in minimizing resource depletion and reducing carbon emissions associated with battery production and disposal (Figure 2). Figure 2. Illustration of the fabrication of batteries using raw materials, such as Pb, Li, and Co, and their tendency for shortage in the long term (left-hand side), followed by their end-of-life phase after usage and improper disposal, which poses hazards (right-hand side). In the center, the trend among recent and future companies is to recycle batteries and manufacture new ones in a sustainable manner, utilizing renewable resources, such as solar energy, wind power, bacteria, and enzymes. Material Characterization Electrochemical characterization points to a range of techniques within the field of electrochemistry that are used to analyze and understand the properties and behaviors of materials or systems. These techniques include applying electrical stimuli and measuring the resulting electrical or electrochemical signals to achieve insights into various properties, such as conductivity, surface reactivity [82], corrosion resistance [83], catalytic activity [84], and ion transport [85]. Electrochemical characterization techniques are valuable tools for the analysis of both species in solution and solid states.
For species dissolved in a solution, common techniques, such as cyclic voltammetry, chronoamperometry, potentiometry, and impedance spectroscopy, are employed for the electrochemical characterization. However, for solid materials, cavity microelectrodes are typically used. To characterize the electroactive species, the solids are inserted into the cavity of the microelectrode, and the electrochemical characterization is performed. In this part, each technique is briefly described and examples are given. Cyclic voltammetry (CV) is a powerful and popular electrochemical method commonly used to explore the reduction and oxidation processes of molecular species. CV is also valuable for studying chemical reactions initiated by electron transfer, including catalysis, providing insights into catalytic processes and facilitating the understanding of redox mechanisms in various systems [27]. Cyclic voltammetry characterizes electrochemical systems by measuring the current response (i) as a function of the applied voltage (E) (i vs. E), providing information on redox reactions, electron transfer kinetics, and the stability of electroactive species [86]. The technique involves linearly and cyclically sweeping the voltage while monitoring the resulting current response, allowing peak potentials, peak currents, and other electrochemical parameters to be determined [87]. In CV, the Butler-Volmer equation (Equation (1)) is often used to model the kinetics of electron transfer at the electrode interface as follows: i = i_0 [exp(α n F η / (R T)) − exp(−(1 − α) n F η / (R T))] (1) where i is the current density, i_0 is the exchange current density (in A/m2), α is the charge transfer coefficient (dimensionless), n is the number of electrons involved in the electrode reaction, R and F are, respectively, the perfect gas and the Faraday constants, T is the temperature, and η is the overpotential, defined as the difference between the applied electrode potential and the equilibrium potential for the electrode reaction. On the other hand, chronoamperometry is an electrochemical technique that measures the electric current (i) as a function of time (i vs. t) when a constant electric potential (E) is applied to an electrode. Unlike cyclic voltammetry, which involves applying a potential that varies cyclically, chronoamperometry maintains a constant potential and measures the resulting electric current over time according to the Cottrell equation (Equation (2)). This technique is often used to study electrochemical reactions with specific electrode materials and to determine the kinetic constants of electrochemical reactions [88,89]: i = n F A C (D / (π t))^(1/2) (2) where i is the current, n is the number of electrons involved in the electrochemical reaction, F is Faraday's constant, A is the area of the electrode, D is the diffusion coefficient of the electroactive species, C is the concentration of the electroactive species, and t is the time (in seconds).
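A small numerical sketch of the two equations just given; the parameter values (i_0, α, electrode area, diffusion coefficient, concentration) are illustrative and not taken from the review.

```python
import numpy as np

F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)

def butler_volmer(eta, i0=1e-3, alpha=0.5, n=1, T=298.15):
    """Current density from Equation (1) for an overpotential eta (V)."""
    f = n * F / (R * T)
    return i0 * (np.exp(alpha * f * eta) - np.exp(-(1 - alpha) * f * eta))

def cottrell(t, n=1, A=1e-4, D=1e-9, C=1.0):
    """Diffusion-limited current from Equation (2): i = n F A C sqrt(D/(pi t)).
    Units: A in m^2, D in m^2/s, C in mol/m^3, t in s."""
    return n * F * A * C * np.sqrt(D / (np.pi * t))

print(butler_volmer(np.array([-0.1, 0.0, 0.1])))   # cathodic / zero / anodic
print(cottrell(np.array([0.1, 1.0, 10.0])))        # current decays as 1/sqrt(t)
```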
On the other hand, potentiometric measurement is a technique used to determine the concentration of ions in a solution by measuring the difference in electrical potential (∆E) between a reference electrode (typically Ag/AgCl) or a pseudo-reference electrode and a working electrode [91][92][93] as a function of time (∆E vs. t). This method is based on the Nernst equation (Equation (3)), which relates the measured potential to the concentration of ions in the solution:

∆E = E0 + (RT/nF) ln C  (3)

where R and F are, respectively, the ideal gas and the Faraday constants, n is the number of electrons, T is the temperature, C is the concentration (strictly, the activity) of the target ion, and E0 is the standard potential of the target analyte. The method is characterized by its simplicity and sensitivity; however, the presence of several different ions in solution leads to the measurement of a mixed potential, which complicates relating the reading to the concentration of a single ion.

In the battery field, the entropy coefficient (dUOC/dT, the variation of the open-circuit voltage with temperature) serves as a crucial parameter for predicting heat generation in lithium-ion batteries (LiBs), particularly under low-rate conditions. Although the potentiometric method is commonly used for entropy coefficient measurement, its accuracy comes at the expense of time. Consequently, there is a critical need for an efficient and accurate entropy measurement technique. A rapid and precise improved potentiometric method, known as positive adjustment (PA), has been proposed that reduces the relaxation time to 10 min. Commercial 18650 lithium-ion cells were used to validate the PA method. A comparison between the entropy profiles obtained by the PA method and the conventional potentiometric method (CPM) indicates comparable accuracy, with an average error of ±0.01 mV/K. Even when contrasted with recent alternative methods, the PA method demonstrates notable advantages in measurement efficiency [94].
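A minimal sketch of how Equation (3) is used in practice: the first function gives the ideal Nernstian response, and the second inverts it to recover a concentration from a potential reading. The pH-electrode usage at the end assumes ideal behavior at 25 °C; the reference potential E0 = 0 is an arbitrary placeholder.

```python
import numpy as np

F, R = 96485.0, 8.314

def nernst_potential(C, E0=0.0, n=1, T=298.15):
    """Electrode potential (V) from Equation (3) for ion concentration C (mol/L)."""
    return E0 + (R * T / (n * F)) * np.log(C)

def concentration_from_potential(E, E0=0.0, n=1, T=298.15):
    """Invert Equation (3) to recover the ion concentration from a potential reading."""
    return np.exp((E - E0) * n * F / (R * T))

# A pH electrode is the n = 1 case for H+: at 25 °C the Nernstian slope is
# (RT/F) * ln(10), about 59.2 mV per pH unit, and pH = -log10([H+]).
E = nernst_potential(1e-7)  # ideal response of neutral water
print(E, -np.log10(concentration_from_potential(E)))  # -> about -0.414 V, pH 7.0
```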
Conversely, impedance (Z) measurement is a technique used to characterize the electrical behavior of a system in response to an alternating electrical signal (over a fixed frequency range) [95]. Impedance is a measure of the opposition of an electrical circuit to alternating current (as per Ohm's law) and can include contributions from resistance, capacitance, and inductance. There are several impedance measurement techniques, such as electrochemical impedance spectroscopy (EIS), which is widely used to study electrochemical processes in batteries [96,97], sensors [98,99], and other electrochemical systems (Figure 3).

A time-domain measurement technique utilizing a preset equivalent circuit model (ECM), comprising numerous series-connected parallel resistor-capacitor elements, was explored as a means of measuring the electrochemical impedance spectrum of a lithium-ion battery (LIB) while excluding the apparent impedances resulting from open-circuit voltage changes in the low-frequency range. Initially, an extensive experimental investigation was conducted to determine the optimal conditions for the applied signal suitable for this technique. It was established that an impedance spectrum ranging from several tens of microhertz to several tens of millihertz could be accurately measured by selecting a suitable low-rate, long-duration constant-current charge or discharge as the applied signal. Subsequently, impedance spectra excluding these apparent impedances were measured under various conditions of state of charge (SOC) and battery temperature, and the fundamental characteristics of the impedances associated with the solid-state diffusion processes of lithium in the corresponding low-frequency range were examined. It was revealed that this low-frequency spectrum could be reasonably separated into two finite-length Warburg impedances, clearly characterized by differences in their diffusion time-constant values and by the SOC dependence of their diffusion resistances. Furthermore, an Arrhenius-type temperature dependence was confirmed for both diffusion resistances, while only minimal variations were observed in their time constants. These results showcase the potential of the technique for accurate impedance analysis in LIBs and lower the barrier to the use of electrochemical impedance spectroscopy analysis in materials chemistry.
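The preset equivalent-circuit model discussed above can be illustrated in a few lines of Python. The resistances and capacitances below are placeholders chosen only to give two well-separated time constants (0.1 s and about 10 s), not fitted battery parameters.

```python
import numpy as np

def ecm_impedance(freq_hz, R0, rc_pairs):
    """Complex impedance of a series resistance R0 plus series-connected
    parallel R||C elements, the preset equivalent-circuit model form."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Z = np.full_like(w, R0, dtype=complex)
    for R, C in rc_pairs:
        Z += R / (1 + 1j * w * R * C)  # impedance of one parallel RC element
    return Z

# Illustrative values: 20 mOhm ohmic resistance and two RC arcs with
# time constants tau = R*C of 0.1 s and ~10 s.
freqs = np.logspace(-4, 3, 8)  # from sub-millihertz up to kilohertz
Z = ecm_impedance(freqs, 0.020, [(0.010, 10.0), (0.030, 333.0)])
print(np.abs(Z))         # |Z| for a Bode plot
print(Z.real, -Z.imag)   # Nyquist plot coordinates
```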
Cavity microelectrodes (CMEs) are used in various areas of scientific research [100], including electrophysiology, neuroscience, materials science [101,102], analytical chemistry, and even batteries [103]. They are generally made of a conductive material, often metallic, with a small cavity at the end (tens of µm in depth or larger). These microelectrodes are used to measure electrical signals on a very small scale. The cavity at the end of the electrode allows better electrical contact between the electrode and the medium into which it is inserted, a feature that improves the sensitivity of electrophysiological or electrochemical measurements. When it comes to batteries, cavity microelectrodes are used to study the electrochemical processes that occur inside the battery. They make it possible to precisely measure electric currents and electrochemical potentials at very fine scales, thus contributing to the understanding of the performance and durability of batteries. Cavity microelectrodes in the field of batteries are characterized by their small size, high energy density, rapid response, stability, and durability, as well as their ability to enable precise control of electrochemical processes. They offer the potential for new battery architectures that can lead to improved performance and better energy efficiency.

CMEs also provide a valuable platform for evaluating the electrocatalytic performance of micro- and nanoparticle materials. The technical factors and physicochemical processes affecting the electrochemical response of CMEs need to be recognized, particularly the accessibility of redox species on the surface of the electrocatalyst. In one study, the voltammetric response of CMEs was explored using a combined experimental and theoretical approach, including a comparative examination of cyclic voltammetry and square-wave voltammetry (SWV). The results demonstrate a capacitive response distortion that increases with the powder surface area, but a Faradaic response analogous to that of embedded microdisks, indicating that electrochemical reactions occur primarily on the first layer of the powder filler. Furthermore, it was demonstrated that SWV is well suited to discriminating Faradaic processes at CMEs, and precise mathematical expressions were presented to describe them. These results provide guidelines for the design and analysis of voltammetric measurements with CMEs [104].

In summary, while cyclic voltammetry, impedance spectroscopy, potentiometry, and chronoamperometry are all electrochemical techniques used in battery research, each has distinct principles, applications, advantages, and limitations. The choice of technique depends on the specific aspects of the battery behavior being studied and the desired level of detail and complexity in the analysis.
Surface Modification

Surface modification by electrochemical methods takes various forms through different processes, such as electroplating, electrodeposition, anodization, surface etching, and electrochemical functionalization. This section briefly summarizes the concept of each of these processes and focuses on the main applications related to each method.

The terms electroplating and electrodeposition are sometimes used interchangeably; in technical contexts, however, they refer to different processes. Electroplating refers to the process of depositing a thin layer of metal onto a substrate surface using an electrochemical reaction [16,105]. It is widely used in metallurgical industries, such as automotive, electronics, and jewelry manufacturing (e.g., nickel coatings, platinum coatings [106], gold coatings [107], copper coatings [108], etc.), with the objective of producing protective coatings to preserve surfaces from aggressive conditions, thereby increasing the performance of materials and enhancing the physical properties of components (brightness, shape, etc.). Conversely, electrodeposition refers to the process of depositing any material onto a substrate surface using electrochemical methods. Electrodeposition enables the production of high-quality products with improved material properties and high durability, and it allows the deposition of alloys, composites, and non-metallic materials (e.g., glasses [109], silicone [110], and conductive polymers like polypyrrole [111], polyaniline [112,113], PEDOT [114], etc.). To summarize, electroplating is a type of electrodeposition focused on depositing metal coatings.
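Since electroplating is governed by Faraday's law of electrolysis, the expected coating thickness can be estimated from the charge passed; a minimal sketch, with illustrative nickel-plating numbers and an assumed current efficiency:

```python
F = 96485.0  # Faraday constant, C/mol

def plated_thickness_um(I_amps, t_s, M_g_mol, n, rho_g_cm3, area_cm2, efficiency=1.0):
    """Coating thickness (micrometers) from Faraday's law, m = Q*M/(n*F),
    spread over the plated area. `efficiency` is the current efficiency
    (the fraction of the charge that actually deposits metal)."""
    mass_g = efficiency * I_amps * t_s * M_g_mol / (n * F)
    thickness_cm = mass_g / (rho_g_cm3 * area_cm2)
    return thickness_cm * 1e4

# Illustrative numbers: nickel (M = 58.69 g/mol, n = 2, rho = 8.91 g/cm^3)
# plated at 1 A over 100 cm^2 for one hour at an assumed 95% efficiency.
print(plated_thickness_um(1.0, 3600, 58.69, 2, 8.91, 100.0, 0.95))  # ~11.7 um
```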
The anodization process, meanwhile, is commonly used to improve the corrosion resistance, hardness, and adhesion of the substrate [115]. This technique involves applying a positive potential to the substrate surface in an electrolyte solution, leading to the formation of a stable oxide layer [116,117]. It is primarily used on aluminum and its alloys to enhance surface properties and provide various benefits: durability; enhanced aesthetics, since anodizing affects the surface color; electrical insulation, required for some applications; and improved adhesion, as anodizing creates a porous surface that enhances the adhesion of paints and other coatings [118,119] (Figure 4).

Electrochemical etching or wet etching (also known as electrochemical machining) is a process that uses an electrical current and an electrolyte solution to selectively remove material from a conductive substrate (metal or semiconductor) [121,122]. This can be used to create surface textures, patterns, or microstructures for various applications; it is, in a sense, the reverse of electrodeposition, in which a thin layer of metal is coated onto the substrate. The process consists, in most cases, of immersing the metal (or semiconductor) in acid baths (the electrolyte) for a specific time to create, first, porous nanostructures on the substrate surface and, second, a thin layer of metal oxide (by anodization) that protects and stabilizes the nanostructures [19,20]. In this way, the surface area increases as the structure of the substrate is converted from a 2D surface to a 3D surface [123]. Conceptually, many parameters affect the process and should be controlled, e.g., the concentration of the acidic solutions, the temperature of the baths, the presence of a catalyst that plays an essential role in initiating material nucleation (e.g., copper ions), and, finally, the thickness of the substrate. On the other hand, etching may affect the mechanical properties of the etched substrate. For this reason, many physical properties are monitored during the process by routine testing of the substrate (e.g., a bursting strength tester for foil substrates, etc.). To obtain the best compromise between surface etching and the preservation of good, acceptable physical properties, the contact time (residency time of the substrate in the electrolytes) should be considered [124,125]. The surface etching can take various forms depending on the bath composition: it can create tunnels crossing the material surface [126,127], or it can create nanopores (also called pore nucleation) on the substrate surface, increasing the surface area [128].

The most widespread application of electrochemical etching is in the preparation of raw materials (anode and cathode components) used in the fabrication of electrochemical capacitors, also known as supercapacitors [129]. In this process, aluminum foils (high purity ≥ 99%) are treated by electrochemical etching and then formed. Depending on the capacitance measurements, sheets may be classified and used for different applications, e.g., high-voltage DC capacitors, AC motor-start capacitors, etc. Higher capacitance values can indicate certain desirable properties, such as increased surface area or improved electrode performance [130]. Therefore, they should be interpreted in conjunction with other characterization techniques and performance metrics to assess overall foil quality accurately.
Electroanalysis

Electrochemical analysis (or electroanalysis) relies on the use of electrochemical methods to quantitatively determine the concentration of an analyte in a sample. Typically, these methods include voltammetry [27], amperometry [28], potentiometry [29], and impedance spectroscopy [131] (the principles of these methods are detailed in § 2.2). They are employed to measure antioxidant activity [21,22], to analyze [24] and characterize various compounds and materials [25,26], and to develop reference methods [132]. Electrochemical sensors and biosensors are among the most common applications of electroanalysis. They are used for the detection of biomolecules [30,133], pollutants [31], and analytes in environmental samples [32,33], food [134], pharmaceuticals [34,35], and clinical and biochemical diagnostics [23,36,135]. These devices offer rapid, sensitive, and relatively selective measurements, making them valuable tools for research, quality control, and real-time process monitoring (Figure 5).

In this section, two examples of electroanalysis are examined, chosen for their importance and widespread application: the pH electrode and the glucose biosensor. pH monitoring is of high interest in many chemical and biochemical processes. The term pH stands for "potential of hydrogen". pH measurements express the acidity or alkalinity of solutions and vapors and help in controlling and optimizing industrial processes. Measuring pH with an electrode probe consists of potentiometrically determining the concentration of hydrogen ions (H+) in a solution based on the Nernst equation [136,137]. Recently, many miniaturized pH electrodes have been reported in the literature. These devices are designed to be small and compact and are essential in many applications that require small-scale systems for integration (e.g., lab-on-a-chip systems [23,30], point-of-care diagnostics, etc.) or that are intended for portable devices (for physiological monitoring or wearable health trackers) [138,139].

A glucose biosensor is a device that incorporates a biological recognition element as its sensing component (enzymes, e.g., glucose oxidase (GOx) or glucose dehydrogenase (GDH) with its cofactor). The enzymatic reaction byproducts diffuse through the sensing membrane and are then oxidized or reduced on the electrode surface at a fixed potential that depends on the electrode material [140,141]. Typically, in first-generation enzymatic glucose biosensors, the enzymatic oxidation of glucose by GOx leads to the production of hydrogen peroxide (H2O2), which is oxidized at a positive potential of +0.7 V vs. a Ag/AgCl reference electrode or reduced at negative potentials (between −0.2 and 0 V) using mediators that lower the detection potentials and prevent interference problems [142]. Glucose biosensors are commonly used to monitor blood glucose levels in diabetic patients and represent one of the most widely used biosensor applications worldwide [143].
Electrochemical Processes for Depollution and Water Remediation

Providing clean water for industrial or drinking purposes is challenging because of the wide range of chemicals generated and used on a regular basis that eventually contaminate water streams [144]. Huge pollutant fluxes from industrial and agricultural processes unintentionally affect the quality of water. In fact, it is well known that the world's largest polluters are the manufacturers of synthetic organic chemicals (>400 million tons annually), fertilizer use (~200 million tons annually), and pesticide use (~3 million tons annually) [145]. Water-pollution management is more important than ever, given that over 30% of Earth's usable freshwater is used for industrial processes, energy production, and agriculture.

Electrochemical processes offer many routes toward more sustainable solutions for water-treatment applications [146,147]. In most cases, electricity is used as the main energy source that drives the electrochemical reactions underlying these processes. Alongside well-developed electrochemical water-treatment processes, such as electrocoagulation, electroflotation, electrodialysis, and electrochemical oxidation and reduction, there are emerging processes that show good prerequisites for use in industrial-scale applications. These processes tend to increase the rate of pollutant removal, eliminate disadvantages, and expand the applicability of existing electrochemical water-treatment techniques to improve cost effectiveness. Emerging and combined electrochemical processes, such as electrodeionization [148], capacitive deionization [149], electro-Fenton [150], microbial fuel cell treatment [151], and photo- and sonoelectrocatalysis, are showing impressive results in water depollution, especially at lab scale [152]. However, the main concern with these processes remains the scale-up to industrial scale, which is a very challenging step in their application.
Electrodeionization is a combination of two desalination processes, electrodialysis and ion exchange, which results in even deeper demineralization. Capacitive deionization allows for the simultaneous desalination of water and recovery of electrical energy in an efficient manner. Electro-Fenton allows the generation of catalysts in situ, thereby reducing sludge formation. In addition to treating water, microbial fuel cells allow electrical energy to be recovered, thus reducing the maintenance costs of the process. The increased separation of charge carriers and the suppression of electron-hole recombination in an electrical field allow photoelectrocatalysis to achieve better removal efficiencies than a photocatalyst alone. Sonoelectrocatalysis increases the rate at which reactive substances are transferred, inhibits the polarization of electrodes, and plays an important role in hydroxyl radical production, improving treatment efficiency compared to electrolytic oxidation alone.

An example of one of the oldest industries with complex processes and high chemical consumption is the textile sector. Due to the ever-increasing demand for clothing from the world's population and changes in the fashion industry, it continues to grow in variety and volume [153]. This may contribute to the economic benefit of the population, but its negative impact on the environment is often reported, in particular because of the discharge of wastewater [154]. Due to its potential toxicity and risks to human health, textile wastewater is a concern and must be treated before being discharged into the water supply.

In the treatment of textile waste, conventional processes integrating chemical, biological, or physical methods are the most widely applied. Aluminum sulfate, ferrous sulfate, and polyaluminum chloride (PAC) are generally used as coagulants in textile wastewater treatment [155]. These coagulants readily agglomerate with pollutants to form fine particulate matter and flocs in the wastewater. Flocculants, in turn, improve the size and properties of the flocs, particularly their stability [156]. However, for dye wastewater, this process has a limited COD (chemical oxygen demand) removal rate of 10% and may lead to sludge production. To improve treatment performance, activated sludge, membrane bioreactors (MBR), and biofilters are usually used after coagulation-flocculation. However, the quality of the effluents often does not meet the required standards and thus needs a finishing process (sedimentation). Physical processes, such as adsorption, have been reported for their potential application as a finishing treatment; however, adsorption has a relatively short life span (less sustainable) and may produce spent adsorbent as a byproduct. The integration of chemical, biological, and physical processes generally requires more space, longer retention times (1-4 days), and higher operational costs, and it needs a sludge handling and disposal system. Biological processes are often used as a stand-alone technology or combined with aerobic processes. An integrated system of anaerobic and aerobic processes has become an option because of the large quantities of sludge generated in chemical processes [153]. The color-removal efficiency achieved in this process is, on the other hand, relatively small. In addition, printing textile wastewater is characterized by high ammonium content, color, and non-degradable COD. As a final step in removing persistent pollutants, including dyes, and improving the performance of biological wastewater-treatment
plants, most textile industries use a decolorization agent (DCA) [157]. DCA is a cationic organic polymer based on a dicyandiamide-formaldehyde resin, with high adsorption efficiency and settleability, the ability to neutralize the electric charge on particle surfaces, more stable flocs, and the ability to remove dissolved dyes, such as direct, reactive, disperse, and acid dyes [158]. However, DCA is often not the appropriate solution due to its high cost, low availability, and requirement for sludge disposal. It is also worth noting that the sludge produced as a byproduct in this case may have significant environmental impacts and pose a high risk to humans and the ecosystem.

Over the last decade, intensive research has been carried out on color removal by advanced oxidation processes. Oxidation can be driven by photocatalysts [159], ozonation, Fenton's reagent [160], and electrocatalytic processes [161]. The obvious advantages of photocatalytic oxidation are its low temperature and pressure requirements and the fact that the typical catalyst, TiO2, is biologically inert and insoluble in water, widely available, highly photoactive, less toxic, and environmentally friendly. However, major limitations remain in terms of application potential as regards TiO2 catalyst morphology and crystallinity, metal doping requirements, high selectivity toward specific classes of pollutants, and the need for close contact with light sources.

Electrocatalytic oxidation, commonly known as EAOP (electrochemical advanced oxidation process), uses electrical energy for pollutant degradation. These processes rely on the in situ formation of hydroxyl radicals (OH•), known to non-selectively attack and oxidize pollutants until biodegradability or total mineralization of the effluent is achieved. High performance in color and COD removal has been reported for dyeing textile wastewater and printing textile wastewater [162]. A new hybrid Fenton electrochemical system and reactor have been proposed as an effective post-treatment option for textile effluents and many others [10,163]. In terms of color, COD, and ammonium removal, this process is highly effective. Because they are cheaper than TiPtIr and more economically feasible than GDL (gas-diffusion layer) carbon [164], Ti/RuO2 mesh and graphite carbon rods have been used as electrodes.
Applied in the industrial sector, a unique system with a cylindrical stainless-steel reactor and pairs of electrodes (a Ti/RuO2 mesh cylinder anode and a graphite carbon rod cathode) can be proposed to degrade the complex pollutants of real textile wastewater. The internal circulation system provides considerable advantages for the operation of this system, e.g., volume loads increased by up to 400%. This method can be a promising alternative for textile wastewater post-treatment, as pollutants are degraded simultaneously by electro-Fenton oxidation. In a short time, the removal of color, COD, and ammonium was significant. Due to the longer contact between the generated reactive species and the pollutants in the wastewater, the circulating system was more advantageous than the non-circulating system. The removal efficiency of pollutants, color, and COD, the oxidability index, and the operational costs confirm and support this technique. Electrical consumption was about 3 kWh/m³ (Rp 4,200/m³) for the non-circulated system and was reduced significantly to 1.1 kWh/m³ (Rp 1,540/m³) for the circulated system. The hybrid electro-Fenton system thus shows high potential, and it could be investigated further through a scaled-up reactor integrated directly into an existing anaerobic-aerobic textile treatment unit [165].

Finally, a variety of treatment techniques, such as biological remediation, physicochemical treatment, and electrochemical techniques like electrochemical/electrocatalytic reduction, oxidation, and electrocoagulation, will be needed due to the diversity and complexity of chemical water pollutants. The electrocatalytic treatment of contaminated water is becoming more common due to the falling cost of renewable energy sources and the growing need to transform harmful substances into harmless or beneficial compounds. While electrocatalytic reduction can handle streams containing oxidized species like nitrate and nitrite produced by fertilizer runoff, electrocatalytic oxidation can handle a variety of industrial waste streams, including textile and food effluents. Because of these benefits and their suitability for small scales, electrocatalytic processes are well suited to decentralized water treatment. With the increasing availability of cheap electricity from renewable sources, electrocatalysis has the potential to make a sustainable impact by converting harmful water pollutants into valuable or harmless substances.

Electrosynthesis

Electrochemical organic synthesis (or electrosynthesis) has now become a tool for synthesizing new compounds via green chemistry [166]. Electrosynthetic reactions can run at room temperature and pressure and generally do not require auxiliary chemicals. Typically, the process of electrosynthesis relies on the use of an electrochemical reactor supplied by a power source [167]. Two parameters are key to obtaining high electrosynthesis efficiency. First, the reactor design is crucial for obtaining a high production yield. Second, the electrode characteristics (material, number, size, geometry, and surface area) are an essential consideration for the quantity of electrogenerated compounds, since the electrodes are the sites hosting the electrochemical reactions [168].
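Because the quantity of electrogenerated product scales with the charge passed, Faraday's law gives a quick way to see why total current, and hence electrode area at a given current density, governs production rate. A minimal sketch with illustrative values, using the two-electron route to H2O2 (discussed further below) as the example and an assumed current efficiency:

```python
F = 96485.0  # Faraday constant, C/mol

def production_rate_g_per_h(I_amps, M_g_mol, n_electrons, current_efficiency=1.0):
    """Hourly mass of product from Faraday's law: moles = I*t*CE/(n*F).
    The rate scales linearly with total current, which is why electrode
    sizing matters for the quantity of electrogenerated compounds."""
    moles_per_h = I_amps * 3600 * current_efficiency / (n_electrons * F)
    return moles_per_h * M_g_mol

# Illustrative: H2O2 (M = 34.01 g/mol, n = 2 for the 2e- ORR) at 10 A
# with an assumed 80% current efficiency -> about 5.1 g/h.
print(production_rate_g_per_h(10.0, 34.01, 2, 0.80))
```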
The use of an electrical current to activate organic molecules through the addition or removal of electrons has several advantages, such as the simplicity and high selectivity of the reactions and the availability of the synthetic materials (low cost, ease of use, no need for a separation method, etc.). In addition, electrosynthesis can be modular and scalable, allowing for flexibility in production capacity [169]. Electrooxidation and electrosynthesis methodologies have been developed for the selective functionalization of organic molecules, including C-H activation, C-C bond formation, and asymmetric synthesis. This field is important in many major industrial processes, such as chlorine generation [170], aluminum manufacturing [171], the production of decarbonized nitrogen-based fertilizers [172], drug synthesis [173], and many others [174].

This part presents a few examples dealing with the synthesis of new organic molecules by electrochemical pathways. The first example focuses on the synthesis of nitrogen-based fertilizers (NBFs) by electrochemical methods. The synthesis of NBFs, such as ammonia and urea, was successfully realized using waste compounds, such as carbon dioxide and nitrates [175]. The work largely focused on understanding the catalytically active sites for urea electrosynthesis; in fact, a dependence of selectivity and activity on the relative composition of the copper and zinc oxide catalyst was found and assigned to a synergetic electronic effect. In other work, amorphous nanomaterials were used as catalysts for the electrochemical synthesis of N-containing compounds from a variety of abundant N-containing small molecules (N2, NO, NO3−, etc.). The results show that these materials can facilitate C-N coupling, leading to the synthesis of urea [176].

Practically, the production of decarbonized nitrogen-based fertilizers was developed by CASFER technologies by bringing together nanotechnology and electrochemical science. In fact, they were able to produce precise, commercial-like NBFs from waste streams, using an organic synthetic approach (OSA) with predictable, reliable ingredients designed to stimulate plant growth [177].

Moreover, chlor-alkali electrolysis is one of the oldest and most widely implemented processes and represents one of the most significant industrial applications of electrochemistry. This process produces chlorine and sodium hydroxide (NaOH), commodity chemicals required by industry. Here, electrolysis has been identified as a green method for molecular transformation: the electrons in these electrochemical reactions are a waste-free reagent when generated from solar and wind energy.

The importance of electrosynthesis is also evident in the functionalization of molecules in a sustainable way, minimizing the use of toxic reagents and the generation of byproducts. This has been implemented through alkene difunctionalization, stereoselective heterocyclic synthesis, and carboxylation reactions [178].

In addition, a very recent study by Talebi and co-workers described the electrochemical synthesis of sulfonamide derivatives, which are among the most widely used antibiotics in the world [179]. The electrosynthesis conditions and reaction pathways were studied, and the optimal values of the operating parameters (pH, solvent, electrodes, etc.)
were evaluated. By optimizing the conditions, kinetic and thermodynamic control of the reaction via the electrode potential proved promising (Figure 6). In fact, this method showed selectivity in oxidizing or reducing a given compound, thereby preventing the oxidation/reduction of intermediates. This electrochemical route to sulfonamides has many advantages over classical methods:
- It allows the use of green oxidants, which reduces the use of toxic compounds/solvents and prevents environmental risks;
- It relies on electrification as the energy source, which decreases the amount and cost of the energy consumed;
- It avoids the use of additional catalysts;
- It relies on a less complicated setup, simplifying the technical procedure.

Hydrogen peroxide (H2O2) can also be synthesized using electrochemistry. In fact, the cathodic two-electron (2e−) oxygen reduction reaction (ORR) drives the electrosynthesis of H2O2. This reaction typically occurs under alkaline conditions and requires a suitable catalyst to enhance the efficiency of the process. Various catalysts, including metal complexes and metal oxides, have been investigated for this purpose. Some studies show the importance of a free-Fe-motif-based electrocatalyst for hydrogen peroxide synthesis [180]. The use of isolated Fe led to high activity, selectivity, and stability, owing to a high binding energy with the intermediates that breaks the peroxyl bond into H2O [181]. In conclusion, electrochemical synthesis has proven to be a promising pathway that avoids the main disadvantages of classical chemical synthesis, namely its high energy consumption and the large amount of pollution it generates.

Electrochemical Protection

Metal structures exposed to aggressive environments (hostile, corrosive, marine environments, etc.) require continuous preventive maintenance to ensure prolonged and safe operation [182]. Corrosion occurs above water, in the splash zone, and subsea. In this part, we discuss electrochemical corrosion protection to prevent corrosion in offshore gas platforms and underwater metal piping [183,184].

Electrochemical metal corrosion protection is the most widely applied approach and includes anodic and cathodic protection. Both methods rely on manipulating the electrochemical reactions at the metal's surface to prevent corrosion and extend the lifespans of steel structures in harsh marine environments [185].
The anodic protection (AP) system maintains the surface in an actively oxidizing state. This method involves applying an external electrical current to the metal, making it the anode of an electrochemical cell, which polarizes the surface to a more positive potential and thus inhibits corrosion reactions [186,187]. Another way to apply this method is by attaching sacrificial anodes made of more reactive metals with higher electrical potentials, such as zinc or magnesium, to the metal structure. The sacrificial anodes corrode preferentially, protecting the structure by sacrificing themselves. In some systems, both approaches are used together to maximize the level of protection [188].

Cathodic protection (CP) systems are another important method for reducing or arresting the corrosion of metal structures by lowering the metal potential with a cathodic current supplied by an anodic system [189,190]. Two main types of CP systems are widely used: sacrificial anode cathodic protection (SACP) and impressed current cathodic protection (ICCP). SACP is mainly used for smaller offshore structures or locations with low-to-moderate corrosion rates. In SACP systems, the structure to be protected becomes the cathode of an electrochemical cell, while sacrificial anodes made of more reactive metals, such as zinc or aluminum, are attached to the steel monopiles [191]. These sacrificial anodes corrode preferentially, effectively sacrificing themselves to protect the steel structure by providing a source of electrons that suppresses the oxidation of the steel. As a result, the sacrificial anodes need to be replaced periodically as they are consumed over time. ICCP systems, by contrast, are used for larger offshore structures or those located in areas with high corrosion rates. In these systems, inert anodes composed of mixed metal oxides or platinized titanium are connected to an external power source [192]. This power source applies a controlled electrical current to the steel structure, creating a protective cathodic potential that suppresses corrosion [193]. ICCP systems offer precise control over the cathodic protection process.

Tools for Electrochemical Modeling

The field of electrochemical modeling has seen continuous advances through improvements in computational techniques, software tools, and methodologies. Examples include COMSOL Multiphysics [194], MATLAB with Simulink [195], Battery Design Studio [196], Canton, DigiElch [197,198], Python [199], and others. These advances are crucial for the development of batteries, sensors and biosensors, fuel cells, and many other electrochemical applications [200]. In recent years, electrochemical modeling has seen increasing demand and has become a powerful tool for researchers and scientists. It allows complex systems to be understood and helps in the prediction and optimization of electrochemical devices under various conditions, thereby reducing study time and cost (Figure 7). In addition, electrochemical modeling enables the virtual prototyping of devices and systems, allowing engineers to simulate performance, troubleshoot potential issues, and iterate designs before physical fabrication [201,202]. Recently, to reduce animal testing, modeling of (bio)medical devices (including bioelectrochemical devices) has become mandatory for obtaining Food and Drug Administration (FDA) approval, which again highlights the importance of electrochemical modeling [203][204][205].
In this part, we give some examples of recent research involving electrochemical modeling, divided by application field. In the field of organic electrochemistry, Sigman et al. implemented statistical modeling tools for the design of redox-active organic molecules to be used as electrolytes in nonaqueous redox flow batteries [206]. In the field of electrochemical reactors, the modeling and simulation of electrochemical reactors (ECRs) by computational fluid dynamics (CFD) techniques have been of crucial importance due to their main applications: the electrosynthesis of chemicals and drugs, the electrowinning of metals, chlor-alkali processes, redox flow batteries, water treatment, and fuel cells [207]. In the field of nanoconfinement, Long et al. studied three nano-electrochemical techniques involving modeling to study multiphase chemistry under nanoconfinement: stochastic collision electrochemistry, single nanodroplet electrochemistry, and nanopore electrochemistry [208]. In the field of hydrogen production using a proton-exchange membrane (PEM) electrolyzer, a modeling tool based on the computational fluid dynamics (CFD) software ANSYS/Fluent was used [209]. In the field of fuel cells, which are a promising source of clean energy, COMSOL Multiphysics was used to incorporate a range of physical phenomena to simulate the performance of solid oxide fuel cells (SOFCs) [210]. In another study, a numerical simulation of a three-dimensional model with a single flow channel was constructed, providing a scientific basis for the control strategy and structural design of SOFCs [211]. In the field of batteries, a study shows how the current implementation of the Doyle model in COMSOL for Li-ion battery electrodes can capture the electrochemical dynamics of these batteries [212].
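To give a flavor of what such tools compute, the sketch below solves a minimal one-dimensional diffusion model of a potential-step experiment with explicit finite differences. It is a toy illustration with assumed parameters, not a substitute for the multiphysics packages cited above.

```python
import numpy as np

# Fick's second law, dc/dt = D * d2c/dx2, solved by explicit finite differences
# for a potential-step (chronoamperometry-like) boundary condition.
D = 1e-9              # diffusion coefficient, m^2/s (illustrative)
nx = 200              # number of grid points
dx = 1e-6             # grid spacing, m (200 um domain)
dt = 0.4 * dx**2 / D  # respects the explicit stability limit dt <= dx^2 / (2D)

c = np.ones(nx)       # concentration profile, normalized to the bulk value
for _ in range(2000):
    c[0] = 0.0        # electrode surface: species consumed instantly
    c[-1] = 1.0       # far boundary pinned at the bulk concentration
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])

flux = D * (c[1] - c[0]) / dx  # surface flux; current follows from i = n*F*A*flux
print(f"normalized surface flux after {2000 * dt:.2f} s: {flux:.3e} m/s")
```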
Modeling has also driven advances in many other electrochemical applications. In thermo-electrochemical cells, a TEC multiphysics model was constructed to provide a deeper understanding of the interplay between heat/mass transport and electrochemical reactions, with the objective of converting waste heat to electricity [213,214]. In gas-diffusion electrodes, modeling and numerical investigation were used to assess the performance of the electrochemical reduction of carbon dioxide to methanol: a model was built to investigate the role of Cu2O-/ZnO-based gas-diffusion electrodes in enhancing the reduction of carbon dioxide into methanol inside an electrochemical cell; it was simulated using COMSOL Multiphysics software and validated against experimental results [215]. In electrochemical machining (ECM), COMSOL Multiphysics software is used to optimize many process parameters [216].

Electrochemistry in Education

Electrochemistry's importance in different scientific and technological fields has led to its inclusion in educational curricula. The incorporation of electrochemistry into education has evolved to be more interdisciplinary and aligned with current scientific and technological advancements. Here are two examples involving electrochemistry and education.

Potentiostats are crucial for research development in electrochemistry, but their cost is the principal obstacle to their widespread use. With the aim of providing an affordable alternative for resource-constrained communities, a low-cost, portable electrochemical workstation that integrates an open-source potentiostat based on Arduino with a smartphone application was adopted in graduate teaching and research. It can perform the most commonly used electrochemical techniques: cyclic and linear voltammetry and chronoamperometry [217].

On the other hand, electrochemistry is difficult to learn due to its abstract concepts, which involve macroscopic, microscopic, and symbolic representation levels. Studies have shown that students can visualize chemistry and improve their understanding of it by using interactive computer animation and simulation. One study reports on the effect of an interactive computer animation and simulation module named "Interactive Electrolysis of Aqueous Solution" (IEAS), developed to aid students in learning electrolysis [218]. It concluded that IEAS enhances students' understanding of the electrolysis concept and makes students more motivated to learn electrochemistry.
Finally, hands-on experimentation with electrochemical cells, electrodes, electrolytes, and measurement devices remains the best way to provide students with practical experience and reinforce theoretical concepts. Today, many manufacturers can supply academic institutions with an "educational kit" containing electrodes, electrolytes, and instructional materials, making it easier for instructors to incorporate electrochemistry into their curriculum (see the following subsection for companies that may deliver such kits). These kits often include experiments with clear procedures that facilitate the integration of applied electrochemistry into the educational system. It is worth noting that the MENA region faces a lack of institutions providing higher education programs in electrochemistry, despite the availability of diverse academic offerings. This deficit is particularly concerning, given the growing demand in numerous sectors of the job market for professionals equipped with electrochemical knowledge. As industries increasingly recognize the importance of electrochemistry in various applications, the absence of relevant educational opportunities underscores a critical gap that needs to be addressed. In the coming years, integrating electrochemical expertise into educational curricula across the region will be imperative to meet the evolving demands of the job market and foster innovation and growth in key sectors.

Electrochemical Companies

Many electrochemical companies operate around the world; some of them are presented here by region. Europe's electrochemical industry is characterized by its focus on sustainability and the circular economy, with significant investments in clean energy, battery technology, and recycling processes. Key players and initiatives include Northvolt (Stockholm, Sweden), Umicore (Brussels, Belgium), BASF (Ludwigshafen, Germany), ITM Power (Sheffield, United Kingdom), SOLVAY (Brussels, Belgium), etc. The electrochemical sector in the USA is diverse, with a strong emphasis on innovation and technology development, particularly in the areas of batteries and renewable energy-storage solutions. Some key players and areas of focus include Tesla Inc. (Texas, United States), 3M (Maplewood, United States), Dow Chemical Company (Michigan, United States), General Electric (Boston, United States), Albemarle Corporation (North Carolina, United States), etc. The MENA region's participation in the electrochemical sector is growing, particularly in renewable energy and related technologies, like green hydrogen production. Some key players are ACWA Power (Riyadh, Saudi Arabia), MASDAR (Abu Dhabi, United Arab Emirates), SABIC (Riyadh, Saudi Arabia), OQ (Muscat, Oman), etc.
Conclusions

Applied electrochemistry plays an important role in advancing technology, promoting sustainability, and addressing societal challenges across diverse fields. Its versatility, efficiency, and reliability make it an indispensable tool for innovation, progress, and problem-solving in the modern world. Over the next several years, electrochemistry is expected to occupy a solid position in many sectors and to become a reference science for many researchers. This requires special attention to integrating electrochemical expertise into educational curricula worldwide to meet the evolving demands of the job market and foster innovation and growth in key sectors. The future of electrochemical companies also appears encouraging; however, companies should adapt to market changes and address emerging challenges effectively to keep pace with the evolution of electrochemical technology.

Figure 1. Main applications of electrochemistry.

Figure 2. Illustration of the fabrication of batteries using raw materials, such as Pb, Li, and Co, and their tendency for shortage in the long term (left-hand side), followed by their end-of-life phase after usage and improper disposal, which poses hazards (right-hand side). In the center, the trend among recent and future companies is to recycle batteries and manufacture new ones in a sustainable manner, utilizing renewable resources, such as solar energy, wind power, bacteria, and enzymes.

Figure 4. SEM images of the surface morphology of oxide layers anodized in phosphoric acid at 50 V in dependency on the dwell time and the bath temperature (reproduced with permission from the publisher under license number 5777000505982) [120].

Figure 5. Common applications of electroanalysis used for the detection of biomolecules, pollutants, and analytes in environmental samples, pharmaceuticals, and clinical and biochemical diagnostics.

Figure 7. (a) Difference in battery temperature and airflow streamlines between the coupled solution and the one-way solution after 2100 s; (b) electrochemical modeling of the corrosion protection of an oil platform using sacrificial anodes, showing the potential of the steel surface (the parts of the steel surface with the highest, most anodic, values in this plot are the least protected); and (c) corrosion rate in µm/day at four times. The maximum corrosion rate at the bottom of the pit is 2.3 times higher after 30 days than after 1 day, in line with the proton concentration being 2.2 times higher after 30 days than after 1 day (adapted from the COMSOL Multiphysics website: www.comsol.com, accessed on 16 April 2024).

Table 1. Summary of the energy-storage devices (batteries), with their applications, advantages, and disadvantages.
Constructing a Segregated Magnetic Graphene Network in Rubber Composites for Integrating Electromagnetic Interference Shielding Stability and Multi-Sensing Performance

A flexible, wearable electronic device composed of magnetic iron oxide (Fe3O4)/reduced graphene oxide/natural rubber (MGNR) composites with a segregated network was prepared by electrostatic self-assembly, latex mixing, and in situ reduction. The segregated network offers the composites higher electrical conductivity and more reliable sensing properties. Moreover, the addition of Fe3O4 provides the composites with better electromagnetic interference shielding effectiveness (EMI SE). The EMI shielding of MGNR composites is more stable under tensile deformation and long-term cycling conditions, and the composites have a higher sensitivity to stretch strain, compared with the same structure made from reduced graphene oxide/natural rubber (GNR) composites. The EMI SE value of MGNR composites decreases by no more than 2.9% under different tensile permanent deformation, cyclic stretching, and cyclic bending conditions, while that of GNR composites decreases by approximately 16% in the worst case. Additionally, the MGNR composites have better sensing performance and can maintain stable signals, even under cyclic stretching with a very small strain (0.05%). Furthermore, they can steadily monitor the changes in resistance signals in various human motions, such as finger bending, wrist bending, speaking, smiling, and blinking, indicating that the MGNR composites can be used in future wearable, flexible electronic devices.

Introduction

Rubber-based composite materials, especially graphene rubber composites, have excellent flexibility, thermoelectric and sensing properties, and controllable morphology. These materials are ideal for smart, flexible, wearable electronic devices and have been studied by many researchers [1][2][3][4][5]. However, a certain amount of electromagnetic interference (EMI) occurs between electronic products, which affects the performance of such devices. It is therefore of great interest to prepare flexible electronic materials with excellent EMI shielding performance [6][7][8]. Enhancing the electrical conductivity of the material, adding magnetic particles, and increasing the thickness of the material are effective methods for improving the EMI shielding properties of rubber materials [9][10][11][12][13][14][15][16]. For rubber matrix materials, the preparation of composites with a segregated or 3D network structure can increase the conductivity of the rubber material by several orders of magnitude, thereby improving the EMI shielding performance of the composites [17][18][19][20][21]. Jia et al. [18] prepared carbon nanotube/natural rubber composites with a flexible network structure. The prepared

Synthesis of MGNR and GNR Composites

The synthesis of the MGNR and GNR composites was described in our previous work [17]. Firstly, graphene oxide was dispersed in water (3 mg/mL) for 2 h in an ultrasonic bath to form a stable graphene oxide (GO) dispersion. Afterward, a mixed solution of FeCl3·6H2O and FeSO4·7H2O with a molar ratio of 2:1 was dispersed into the above ultrasonicated GO solution. The mass ratio of GO:FeCl3·6H2O:FeSO4·7H2O was 1:5:2.56. After ultrasonication for 30 min, an aqueous ammonium hydroxide solution was slowly injected into the mixed solution until the pH of the solution reached 12.
After 1 h, Fe3O4/GO hybrids were obtained and the NR latex was dispersed into the Fe3O4/GO solution by ultrasonication for 30 min to obtain NR/Fe3O4/GO latex. Hydrazine hydrate was injected into the NR/Fe3O4/GO latex and the mixture was reduced in situ under ultrasound at 60 °C for 2 h. The sulfur and other additives formed an aqueous suspension with a concentration of 4 mg·mL−1 (the content of NR was fixed at 100 phr; rGO was at 4, 6, 8, or 10 phr; zinc oxide at 5 phr; sulfur at 2.8 phr; antioxidant 4010NA at 3 phr; stearic acid at 3 phr; accelerator MBT at 0.1 phr; accelerator CBS at 1.4 phr; and emulsifier OP at 2 phr), which was dispersed into the Fe3O4/rGO/NR latex. Finally, the mixed latex was coagulated. After filtration, the solid mixture was dried in a vacuum oven at 65 °C for 4 h. The composites were compression molded and vulcanized at a temperature of 150 °C and a pressure of 10 MPa for 5 min. The obtained MGNR composites were designated MGNR-x, in which x is 4, 6, 8, or 10 depending on the rGO content. GNR composites were prepared by the same process. Characterization TEM images of MGNR and GNR composites were taken using an FEI Tecnai G2 F20 S-TWIN (FEI Inc., Hillsboro, OR, USA) transmission electron microscope. Electrical conductivity was measured with a Keithley 2400 source meter (Keithley Instruments Inc., Solon, OH, USA). The electromagnetic interference shielding properties of the composites were evaluated by a vector network analyzer (Agilent N5247A, Agilent Technologies Inc., Santa Clara, CA, USA) in transmission-reflection mode. The scattering parameters in the frequency range of 8.2-12.4 GHz (X-band) were recorded. For the tensile permanent deformation test, rectangular GNR-6 and MGNR-6 samples (45 × 15 × 0.6 mm³) were fixed at each end of a self-made permanent deformation machine at a distance of 15 mm, leaving a 15 mm length in the middle for testing the tensile permanent deformation of the composites. The GNR and MGNR samples were stretched by 25% (3.75 mm), 50% (7.5 mm), and 75% (11.25 mm) for 12 h. Finally, the samples were removed and left for 12 h; the enhanced (residual) length was then measured and the middle part of the samples was taken to test the electrical conductivity and EMI shielding properties. In order to better explain the changes in the segregated network before and after the tensile permanent deformation test, a GNR-6 sample (45 × 15 × 0.6 mm³) was subjected to tensile permanent deformation with 100% strain (15 mm stretched length). Additionally, the morphology of the GNR-6 composites before and after being subjected to tensile permanent deformation was characterized by TEM (FEI Tecnai G2 F20 S-TWIN, Hillsboro, OR, USA). For cyclic tensile sensing measurements, different rectangular GNR and MGNR samples (45 × 10 × 1 mm³) were fixed to an MTS CMT-4000 universal testing machine (MTS Corporation, Eden Prairie, MN, USA), reserving 25 mm in the middle, and the samples were stretched at a tensile speed of 10 mm/min. The electrical resistance of the samples during the tensile process was recorded with a Keithley 6485 picoammeter (Keithley Instruments Inc., Solon, OH, USA). The gauge factor (GF), which measures the sensing properties of the material, is obtained from Equation (1) [38,39]: GF = (ΔR/R0)/ε (1), where ΔR/R0 is the relative change in the resistance and ε denotes the applied stretching strain.
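As a quick illustration of Equation (1), the following minimal Python sketch (not the authors' code; the resistance values are hypothetical example data) computes the gauge factor from a measured resistance change:

```python
def gauge_factor(r: float, r0: float, strain: float) -> float:
    """Equation (1): GF = (dR/R0) / strain."""
    return ((r - r0) / r0) / strain

# Hypothetical example: resistance rises from 1.00 kOhm to 1.14 kOhm at 1% strain.
print(gauge_factor(1.14e3, 1.00e3, 0.01))  # -> 14.0
```

A larger GF at a given strain means a larger relative resistance change and hence a more sensitive strain sensor.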
For human sensing measurements, the samples were fixed onto different human body parts (fingers, arms, throat, mouth corners, and eyes) and the electrical resistance change under different human motion conditions (finger bending, wrist bending, talking, smiling, and blinking) was measured at least six times. Morphology of GNR and MGNR Composites In our previous study, it was proven that discrete, spherical Fe3O4 particles are homogeneously anchored on the surface of the flake-like rGO sheets, suggesting a strong interaction between the Fe3O4 nanoparticles and the rGO sheets [17]. Additionally, the morphology of GNR and MGNR composites was characterized by TEM in Figure 1. From Figure 1a,a', we can clearly see that the rGO flakes with anchored Fe3O4 particles coated the surface of the rubber matrix and connected to form a segregated conductive network in all MGNR composites. This gives the MGNR composites ferromagnetic properties, which are very important for enhancing their EMI shielding performance [17]. The GNR and MGNR composites were both made by ultrasonically assisted latex mixing and an in situ reduction process; it is worth noting that a similar segregated structure exists in both the MGNR and GNR composites, as can be seen in Figure 1. The segregated structure greatly enhances the electrical properties of the composites, improving the EMI shielding performance. Additionally, the structure of the segregated network can change during stretching or human motion, which alters the resistance in these situations [40]. As a result, the composites have reliable sensing properties.
The Stability of EMI Shielding and Electrical Conductivity Properties of MGNR and GNR Composites under Different Mechanical Deformation In our previous work, we proved that the addition of Fe3O4 particles decreases the electrical conductivity and increases the EMI shielding properties of the composites [17]. The specific data on the electrical conductivity and EMI shielding properties of the different GNR and MGNR composites are shown in the supporting information. Specifically, the EMI SE value of the MGNR-10 composites is 42.6 dB at 8.5 GHz, while that of the GNR-10 composites is only 32.4 dB at the same frequency, as shown in Figure S2a. This is because the Fe3O4 particles cause the composites to have more magnetic field interactions through natural resonance, exchange resonance, and eddy currents [41,42]. Additionally, the addition of Fe3O4 can enhance the interface polarization relaxation between the fillers and the rubber matrix, which increases the transmission path of electromagnetic waves within the composite, consequently increasing the probability of attenuation of incident waves [43][44][45]. In terms of the EMI shielding mechanism, it is the absorption efficiency, not the reflection efficiency, that contributes more to the EMI SE of the MGNR composites, which absorb most of the electromagnetic radiation and dissipate it in the form of heat [17]. Furthermore, the specific EMI SE (EMI SE divided by sample thickness) of the MGNR-10 composites was 21.3 dB·mm−1, which is competitive with the reported EMI shielding performance of polymer/rGO or polymer/Fe3O4/rGO composites [24,42,[46][47][48]. Apart from good EMI shielding properties, EMI shielding stability under cyclic stretching, cyclic bending, and tensile permanent deformation is also important for flexible shielding materials. First, we tested the stability of the EMI SE value under tensile permanent deformation. The rectangular GNR-6 and MGNR-6 samples (45 × 15 × 0.6 mm³) were held over a length of 15 mm in the middle of a self-made tensile permanent deformation machine and were stretched by 25% (3.75 mm), 50% (7.5 mm), and 75% (11.25 mm) of the original length. The specific experimental schematic can be seen in Figure 2. The tensile permanent deformation results are shown in Figure 3, in which it can be seen that the enhanced lengths of the GNR composites treated by permanent deformation under strains of 25%, 50%, and 75% are 1 mm, 2.3 mm, and 3.2 mm, respectively. Meanwhile, the enhanced lengths of the MGNR composites under strains of 25%, 50%, and 75% are 0.9 mm, 1.7 mm, and 2.9 mm. The enhanced permanent deformation length of the treated MGNR composites is slightly smaller than that of the GNR composites. This may be due to the fact that the addition of Fe3O4 particles enhances the stiffness of the composites, and therefore reduces the permanent deformation length.
From Figure 4, we can see that the average EMI SE value of the MGNR-6 composites treated under the different tensile permanent deformations decreased by only a small amount (less than 2.2%) at all frequencies, indicating that MGNR has good EMI shielding stability. Meanwhile, the average EMI SE value of the GNR-6 composites decreased markedly, by up to 16% in the worst case. For the change in the electrical conductivity of the MGNR and GNR composites, we observed opposite results in the two directions, as can be seen in Figure 5. In the stretching direction, the conductivity of the MGNR composites decreased more (higher R/R0 value) than that of the GNR composites, while the conductivity in the direction perpendicular to stretching barely changed. In Figure 1, we can see that the segregated network structure has an important influence on the electrical conductivity and EMI shielding performance of the GNR and MGNR composites. The tensile permanent deformation changes the segregated network along the stretched direction and affects the properties of the composites, which is demonstrated in Figure 6.
In order to better explain the changes in the segregated network before and after the tensile permanent deformation test, we took the GNR-6 sample as an example and compared the TEM images (Figure 7) of the original GNR-6 sample and a GNR-6 sample that had been subjected to permanent deformation with 100% strain (15 mm stretched length). From Figure 7, we can see that after tensile permanent deformation, the segregated network becomes longer in the stretching direction (the red arrow), which forces electrons to take a longer path through the rubber network (equivalently, there are fewer conductive rGO particles per unit length). This leads to worse electrical conductivity for the GNR and MGNR composites [49,50]. For the GNR composites, the electrical conductivity mainly determines the EMI shielding performance, so the decline in electrical conductivity greatly reduces their EMI shielding properties. However, for the MGNR composites, the outstanding EMI shielding performance is determined by both good electrical conductivity and excellent magnetic properties. Although the electrical conductivity is reduced, the addition of Fe3O4 gives the material higher magnetic permeability, and thus greater magnetic loss. As a result, the MGNR composites can efficiently absorb electromagnetic wave radiation, maintaining their excellent EMI shielding properties [48]. For the electrical conductivity in the stretching direction of MGNR composites treated by tensile permanent deformation, the lengthening of the segregated network (or the decreased number of conductive rGO particles over the same length of MGNR) and the addition of non-conductive Fe3O4 particles both increase the electronic transmission path in the conductive network and hinder electronic transmission between the conductive rGO nano-platelets. Therefore, the electrical conductivity of the MGNR composites decreases more (larger R/R0 value) than that of the GNR composites after tensile permanent deformation.
However, from another point of view, the MGNR composites show larger electrical resistance changes under the same tensile strain deformation, which means that, compared with the GNR composites, the MGNR composites have better sensing properties over the same strain range. In the perpendicular direction, the electrical conductivity of both GNR and MGNR composites did not change much, which further proves that the change in the segregated network along the stretching direction is the main driver affecting the electrical conductivity of the composites. We also studied the stability of the EMI SE values and the electrical conductivity (parallel direction) of GNR and MGNR composites under different cyclic stretching fatigue tests. The GNR and MGNR samples were cyclically stretched 250, 500, 1000, 1500, and 2000 times on a tensile fatigue testing machine (MTS810, MTS Corporation, Eden Prairie, MN, USA) at a strain of 25%. Some fractures occurred in the MGNR-6 samples after more than 1000 stretching cycles. This may be due to the addition of Fe3O4 particles, which decreases the mechanical properties of the composites. Therefore, the MGNR-6 composites were only cyclically stretched for 250 and 500 cycles. As shown in Figure 8a, MGNR composites have better EMI shielding stability than GNR composites (after 500 stretching cycles, the average EMI SE value of the MGNR composites at all frequencies decreases by only 1.4%, while that of the GNR composites decreases by approximately 12.5%). However, the electrical conductivity (the R/R0 value) of the MGNR composites changes more than that of the GNR composites, as can be seen in Figure 8b. This is again because cyclic stretching destroys the material's segregated network structure and the Fe3O4 particles hinder electronic transmission between the conductive rGO nano-platelets, the same result we observed in the previous tensile permanent deformation experiment. In conclusion, the addition of Fe3O4 particles offers the composites better stability of EMI shielding performance but worse electrical conductivity robustness. Additionally, the reliability of the EMI SE and electrical conductivity under cyclic bending is also important for a flexible material, so the MGNR-6 and GNR-6 samples underwent cyclic bending at a fixed bending frequency (3 Hz) and angle (Figure 9b) for 250, 500, 1000, 1500, and 2000 cycles. The results on the stability of the EMI SE and electrical conductivity under cyclic bending are shown in Figure 9, which indicates that the MGNR-6 composites maintain bendability as well as the GNR-6 composites do.
After hundreds of bending cycles, the EMI shielding performance of both MGNR-6 and GNR-6 composites decreased to a certain extent (Figure 9a). However, compared with the GNR composites (average decrease of approximately 9%), the MGNR composites (average decrease of approximately 2.9%) exhibit better EMI shielding stability based on our results. Figure 9b displays the electrical resistance change (R/R0) as a function of bending cycles. Interestingly, the R/R0 of both MGNR and GNR remains very stable, with no more than a 15% rise even after 2000 cycles. This indicates that bending does not damage the segregated network of the composites and that MGNR composites can be used in flexible bending electronics. Figure 9. (a) The EMI SE of the MGNR-6 and GNR-6 composites before and after bending for 2000 cycles; and (b) normalized electrical resistance as a function of bending cycles for MGNR-6 and GNR-6 composites (the thickness was 0.6 mm). Multiple Sensing Properties of the MGNR Composites Reliable and excellent sensing properties are very important for flexible, wearable electronic devices. Through the research in the previous section, we found that the MGNR composites have a larger electrical resistance change under the same tensile permanent deformation conditions, which provides a good foundation for the preparation of composites with good sensing properties. We first studied the sensing properties of the different GNR and MGNR composites under different cyclic tensile strains (0.05%, 0.5%, and 2%) and the results are shown in Figure 10. Among them, the resistance changes in GNR composites are not obvious under low strain, while an obvious resistance change occurs under high strain. However, MGNR-6 and MGNR-10 composites show stable and obvious resistance changes under low strain (0.05%), indicating their excellent sensing performance. Except for the MGNR-10 composites, the other GNR and MGNR composites all display small new peaks within the same cycle under high strain. This may be due to the segregated network structure in the composites, which cannot recover in time during the stretching and recovery process.
The delayed recovery of the segregated network leads to the appearance of such peaks [51][52][53]. From the above results, it can be seen that the MGNR-10 composites have the most stable and obvious resistance change, meaning they show the best sensing performance under cyclic tensile strain. The sensing performance of a material is usually characterized by its gauge factor (GF) [30,32]. Figure 11 shows the resistance changes in different GNR and MGNR composites under various tensile strains. From Equation (1), it can be seen that the GF of each material is the slope of the resistance change/strain curves in Figure 11. At low strain (<5%), the slope of the MGNR-6 composites is the greatest, indicating that the GF of MGNR-6 is the greatest in the small strain range. However, as the strain increases (5-10%), the GF of the MGNR-10 composites also increases. Within a certain strain range, the GF of the MGNR composites is larger than that of the GNR composites, indicating that the MGNR composites have better sensing performance than the GNR composites (when the strain is 10%, the GF of the MGNR-10 composites is 14.13, while the GF of the GNR-10 composites is only 6.21). It is worth noting that within a certain range of elongation, the relative resistance change rate of the MGNR composites shows a sudden increase. This is because the structure of the segregated network in the MGNR composites is damaged under tensile strain, causing a significant change in the resistance. Additionally, with an increase in the Fe3O4 content, the elongation that produces this change is greatly reduced. The significant sensing properties of MGNR make it suitable for skin-wearable sensors for real-time physiological and motion monitoring. Since the strain sensing properties of the MGNR-10 composites are the best in terms of sensing stability and GF value among all the composites, we selected the MGNR-10 composites for wearable applications to test the sensing of human motions such as finger bending, wrist bending, speaking, smiling, and blinking.
Figure 12 shows the relative resistance change during the aforementioned motions and it can be seen that the MGNR-10 materials have stable and repeatable resistance signal changes in different human motions (from small facial muscle changes to large wrist bending or joint movements) because the segregated network in the MGNR-10 composites is more or less changed under these motions. From the above results, it can be seen that the MGNR composites have the potential to be used as a new flexible electronic material to monitor various behaviors of the human body, including human physiological signals and different body motions. Conclusions Magnetic MGNR composites with a segregated network were prepared by electrostatic self-assembly of Fe3O4 and graphene oxide followed by mixing with natural rubber latex and in situ reduction by hydrazine hydrate. Fe3O4 nanoparticles were deposited on the rGO sheet layer, and the segregated network provides the MGNR composites with excellent EMI shielding properties.
Furthermore, the existence of Fe3O4 nanoparticles gives the MGNR composites outstanding EMI shielding stability under different tensile permanent deformation, cyclic stretching, and cyclic bending conditions (the EMI SE value is reduced by no more than 2.9%). However, the EMI shielding performance of the GNR composites shows a certain degree of decline after the same treatment, by approximately 16% in the worst case. The Fe3O4 anchored on the segregated graphene network gives the composites a larger resistance change under different tensile strains, which gives the MGNR composites better sensing performance, even in the case of cyclic stretching with a very low strain (0.05%). Resistance signal changes can also be stably and repeatedly monitored by the MGNR-10 composites when they are used to detect human motions such as finger bending, wrist bending, speaking, smiling, and blinking, indicating that the MGNR composites can be used in future flexible, wearable electronic devices. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13193277/s1, Figure S1. The effect of rGO content on the electrical conductivity of MGNR and GNR composites. Figure S2. (a) EMI SE of the MGNR and GNR composites with rGO content as a function of frequency. (b) Shielding by reflection, absorption, and total shielding of GNR nanocomposites. (c) Shielding by reflection, absorption, and total shielding of MGNR composites. (d) Effective absorbance of the MGNR and GNR composites. The thickness of the sample was 2 mm. Figure S3. Effect of the thickness on the EMI SE of MGNR-6 composites.
7,900.2
2021-09-26T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
A Novel Technique for Data Steganography In this paper, a novel stego-method will be introduced, which can be used to hide any secret message in any holding color image. The proposed method will be implemented and tested and the calculated parameters will be compared with the LSB method parameters. It will be shown that the proposed method provides a high security level by using two keys to extract the secret message from the holding image, making it very difficult to hack. Keywords-steganography; hiding time; extracting time; MSE; PSNR INTRODUCTION Steganography is a process of hiding data in covering data, such as a hidden text message in a color image [1,2]. Steganography is an important process and many applications utilize it. Hiding a text message in a color image can be performed as shown in Figure 1 by selecting a stego-system encoder to hide the message and a stego-system decoder to extract the message from the holding color image [3]. One of the most popular methods of secret message hiding is the least significant bit (LSB) method, and many methods are based on it [4][5][6]. LSB reserves 8 bytes of the covering image to hide one character of the secret message: the first bit of the binary version of the character is stored in the least significant bit of the first selected byte of the image, the second bit in the least significant bit of the second byte, and so on [4]. Table I shows how to hide the letter "A" (ASCII 65 in decimal, 01000001 in binary) in a sequence of 8 bytes in a color image. LSB is easy to implement and the changes to the image are not essential and cannot be observed with the naked eye; however, the hidden text message is also easy to discover, and therefore the method is not considered safe. So, in order to keep the advantages and discard the disadvantages of the LSB data hiding method, we modified it to improve the security level of data hiding [7]. The LSB method provides a small mean square error (MSE) and a high peak signal to noise ratio (PSNR) between the original and the holding images. These parameters are very important for analysis purposes [4] and they are calculated by MSE = (1/(m·n)) Σᵢ Σⱼ [f(i,j) − g(i,j)]² and PSNR = 10·log₁₀(MAX²/MSE), where f represents the matrix data of the original image, g represents the matrix data of the holding image in question, m represents the number of pixel rows of the images and i the index of the current row, n represents the number of pixel columns of the image and j the index of the current column, and MAX is the maximum signal value that exists in our original "known to be good" image. II. RELATED WORK LSB steganography is widely used to hide secret messages in color images due to its simplicity [15]. In [4], a method based on LSB was proposed, which creates a key of random positions to hide the message. This method is very secure, but the MSE increases when the message length increases. In [10], a technique was introduced for hiding a secret image in a cover image, where both images should have the same size. The technique first compresses the secret image using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm, and then the output of this compression is hidden in the covering image. In [11], the authors hid a secret message file in a covering image; the image should be colored and is transformed into 3 matrices (R, G, and B). The message is converted to binary and embedded bit by bit using OR and AND operations, with the channel order alternating sequentially (RGB, BGR, RGB, BGR, ...). The results showed better performance in terms of quality of the obtained stego-image.
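Before surveying further related work, the LSB baseline and the MSE/PSNR measures described above can be made concrete with a short Python sketch (ours, for illustration only; numpy and an 8-bit image array are assumed):

```python
import numpy as np

def hide_lsb(image: np.ndarray, message: str) -> np.ndarray:
    """Embed each bit of the message into the least significant bit of
    consecutive image bytes (8 bytes per character, as in Table I)."""
    flat = image.flatten()
    bits = "".join(format(ord(c), "08b") for c in message)
    assert len(bits) <= flat.size, "message too long for this cover image"
    for i, b in enumerate(bits):
        flat[i] = (int(flat[i]) & 0xFE) | int(b)  # overwrite only the LSB
    return flat.reshape(image.shape).astype(image.dtype)

def mse(f: np.ndarray, g: np.ndarray) -> float:
    """Mean square error between the original f and the holding image g."""
    return float(np.mean((f.astype(np.float64) - g.astype(np.float64)) ** 2))

def psnr(f: np.ndarray, g: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (undefined when the images are identical)."""
    return 10.0 * np.log10(max_val ** 2 / mse(f, g))
```

Because only the least significant bit of each affected byte changes, the per-pixel error is at most 1, which is why LSB embedding yields a small MSE and a high PSNR.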
Authors in [9] used LSB and DCT to perform steganography. The comparison gave good results in terms of PSNR values when compared with previous works, and the security was increased by using the DCT. In [8], the message was embedded by hiding each byte of the message in three pixels chosen by randomization in the cover image, using a Pseudo Random Number Generator (PRNG) for each pixel's value. This method achieved a very high maximum hiding capacity and higher visual quality, as indicated by the PSNR. In [16], the authors tried to overcome the disadvantage of the LSB method by embedding encrypted data in the image in place of plain textual data. To encrypt the data, the RSA and Diffie-Hellman algorithms were used. To check the efficacy of their proposed method, they counted the instructions executed at the sender and receiver sites, since the number of instructions executed is a measure of the time complexity of the process. The result showed that the use of encryption in stego-analysis does not affect the time complexity if the Diffie-Hellman algorithm is used instead of the RSA algorithm. In [15], a method was proposed that hides a secret text message by searching for identical bits between the secret message and the image pixel values. The proposed method was compared with the benchmark LSB method, which hides the secret message directly in the least two significant bits of the image pixels. The proposed method was more efficient, simple, appropriate, and accurate than the LSB method; the change in the image resolution was quite low and the secret message was made more secure. In [17], the authors used the Pixel-Value Differencing (PVD) method as an image steganography mechanism. They eliminated the overflow problem of pixel values in the stego image exceeding the range 0-255. Moreover, for more security, they used a different number of bits in different pixel components, making it very difficult to trace how many bits are embedded in a pixel of the stego image. The obtained results provided better visual quality of the stego-image compared to the PVD method. III. THE PROPOSED METHOD The proposed method for data hiding can be implemented by applying the following steps: • Select the original color image, and find the image size (n1: number of rows, n2: number of columns, and n3: number of colors). • Select the message to be hidden in the image and find the message length (n4). • Define an 8-digit number to be used as the private key (key1). • Divide key1 into 2 equal parts (part1 and part2, each of them a 4-digit number). • Reshape the original image from a 3D matrix to a 2D matrix of size (n1·n3) × n2. • Calculate the row and column indexes (where the hiding of the message starts) using a defined hash function; in our case we used the following functions: Row index = floor(rand(1)*(n1*n3-n4)); Column index = floor(rand(1)*(n2-n4)); • Apply the LSB method to hide the message using the indexes. • Reshape the holding image back to a 3D matrix. • Generate the second key (key2) using another hash function; we used the XOR function with the indexes and the two parts. • Save n4, key1 and key2 to be used to extract the message from the image. To extract the data, the proposed method requires the following steps to be implemented: • Select the holding image. • Reshape the image matrix from 3D to 2D. • XOR the first part of key1 with the first part of key2 to get the row index. • XOR the second part of key1 with the second part of key2 to get the column index.
• Use the indexes to retrieve the n4 characters from the image (a code sketch of the full key scheme is given at the end of this section). Figure 2 shows the block diagram of the proposed stego-system, while Figure 3 shows a simple example of how to perform some calculations using the proposed method. Using this method increases the security level, because one has to know key1 and key2 and the way they are calculated. This method also keeps the MSE very small, whatever the message length. Figures 4 and 5 show the original image and the holding image after hiding 100 characters; we can notice that there are no visible differences between the two images. A. Experiment 1: Hiding a Fixed Length Message The following message, with a length of 100 characters, was hidden in images of different types and sizes: "Steganography is the process of hiding of a secret message within an ordinary message and extracting at its destination" [20]. From the results shown in Table II we can conclude the following, which can be considered the advantages of the proposed stego-method: • Hiding and extracting times are significantly small. • Hiding time increases slowly with image size; in the experiment it fell in the range 0.093000-2.506000. • Extracting time increases slowly with image size; in the experiment it fell in the range 0.442000-0.469000. • The proposed method provides a significantly small MSE. • The proposed method provides a significantly high PSNR. B. Experiment 2: Hiding Various Messages in a Colored Image A tiff color image with size 516×600×3 was taken, and several messages with different lengths were hidden in the image and extracted from the holding image using the proposed method. The results of this experiment are shown in Table III.
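The sketch referenced above follows. It is our illustrative reconstruction of the two-key scheme of Sect. III, not the authors' implementation; the 4-digit split and the XOR hash follow the steps listed there, and the LSB embedding itself is as in the earlier sketch:

```python
import numpy as np

def make_keys(n1: int, n2: int, n3: int, n4: int, key1: int):
    """Derive the start indexes and key2 from the 8-digit private key1."""
    part1, part2 = divmod(key1, 10_000)                      # two 4-digit halves of key1
    row = int(np.floor(np.random.rand() * (n1 * n3 - n4)))   # where hiding starts
    col = int(np.floor(np.random.rand() * (n2 - n4)))
    key2 = (part1 ^ row, part2 ^ col)                        # "hash" the indexes with XOR
    return row, col, key2

def recover_indexes(key1: int, key2):
    """Invert the XOR hash at the receiver side."""
    part1, part2 = divmod(key1, 10_000)
    return part1 ^ key2[0], part2 ^ key2[1]
```

Because XOR is its own inverse, anyone holding both key1 and key2 can recover the row and column start indexes exactly, while either key alone reveals nothing about where the message is embedded.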
2,140.6
2019-12-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A Comparative Study on Numerical Solutions of Initial Value Problems (IVP) for Ordinary Differential Equations (ODE) with Euler and Runge Kutta Methods This paper mainly presents the Euler method and the fourth-order Runge Kutta method (RK4) for solving initial value problems (IVP) for ordinary differential equations (ODE). The two proposed methods are quite efficient and practically well suited for solving these problems. In order to verify the accuracy, we compare numerical solutions with the exact solutions. The numerical solutions are in good agreement with the exact solutions. Numerical comparisons between the Euler method and the Runge Kutta method have been presented. We also compare the performance and the computational effort of the methods. In order to achieve higher accuracy in the solution, the step size needs to be very small. Finally, we investigate and compute the errors of the two proposed methods for different step sizes to examine superiority. Several numerical examples are given to demonstrate the reliability and efficiency. Introduction Differential equations are commonly used for mathematical modeling in science and engineering. Many problems of mathematical physics can be stated in the form of differential equations. These equations also occur as reformulations of other mathematical problems such as ordinary differential equations and partial differential equations. In most real life situations, the differential equation that models the problem is too complicated to solve exactly, and one of two approaches is taken to approximate the solution. The first approach is to simplify the differential equation to one that can be solved exactly and then use the solution of the simplified equation to approximate the solution of the original equation. The other approach, which we examine in this paper, uses methods for approximating the solution of the original problem. This is the approach that is most commonly taken, since the approximation methods give more accurate results and realistic error information. Numerical methods are generally used for solving mathematical problems that are formulated in science and engineering where it is difficult or even impossible to obtain exact solutions. Only a limited number of differential equations can be solved analytically. There are many analytical methods for finding the solution of ordinary differential equations; even then, there exist a large number of ordinary differential equations whose solutions cannot be obtained in closed form by well-known analytical methods, where we have to use numerical methods to get the approximate solution under the prescribed initial condition or conditions. There are many types of practical numerical methods for solving initial value problems for ordinary differential equations. In this paper we present two standard numerical methods, Euler and Runge Kutta, for solving initial value problems of ordinary differential equations.
From the literature review we may realize that several works on numerical solutions of initial value problems using the Euler method and the Runge Kutta method have been carried out. Many authors have attempted to solve initial value problems (IVP) to obtain high accuracy rapidly by using numerous methods, such as the Euler method and the Runge Kutta method, as well as some other methods. In [1] the author discussed accuracy analysis of numerical solutions of initial value problems (IVP) for ordinary differential equations (ODE), and in [2] the author discussed accurate solutions of initial value problems for ordinary differential equations with the fourth-order Runge Kutta method. [3] studied some numerical methods for solving initial value problems in ordinary differential equations. [4]-[16] also studied numerical solutions of initial value problems for ordinary differential equations using various numerical methods. In this paper the Euler method and the Runge Kutta method are applied without any discretization, transformation, or restrictive assumptions for solving ordinary differential equations in initial value problems. The Euler method is traditionally the first numerical technique. It is very simple to understand and geometrically easy to articulate, but not very practical; the method has limited accuracy for more complicated functions. A more robust and intricate numerical technique is the Runge Kutta method. This method is the most widely used one, since it gives reliable starting values and is particularly suitable when the computation of higher derivatives is complicated. The numerical results are very encouraging. Finally, two examples of different kinds of ordinary differential equations are given to verify the proposed formulae. The results of each numerical example indicate that the convergence and error analysis discussed illustrate the efficiency of the methods. The use of the Euler method to solve a differential equation numerically is less efficient, since it requires h to be small to obtain reasonable accuracy. It is one of the oldest numerical methods for solving an ordinary initial value differential equation, where the solution is obtained as a set of tabulated values of the variables x and y. It is a simple, single-step but crude numerical method for solving first-order ODEs, particularly suitable for quick programming because of its great simplicity, although its accuracy is not high. In the Runge Kutta method, on the other hand, derivatives of higher order are not required, and the method is designed to give greater accuracy with the advantage of requiring only the functional values at some selected points on the sub-interval. The Runge Kutta method is a more general and improved method compared to the Euler method. We observe that in the Euler method only an excessively small step size converges to the analytical solution, so a large number of computations is needed. In contrast, the Runge Kutta method gives better results: it converges faster to the analytical solution and needs fewer iterations to reach an accurate solution. This paper is organized as follows: Section 2: problem formulation; Section 3: error analysis; Section 4: numerical examples; Section 5: discussion of results; and the last section: the conclusion of the paper.
Problem Formulation In this section we consider two numerical methods for finding the approximate solutions of the initial value problem (IVP) of a first-order ordinary differential equation, which has the form y' = f(x, y), y(x0) = y0, x0 <= x <= xn (1), where f(x, y) is a given function and y(x) is the solution of Equation (1). A continuous approximation to the solution y(x) will not be obtained; instead, approximations to y will be generated at various values, called mesh points, in the interval [x0, xn]. Numerical methods employ Equation (1) to obtain approximations to the values of the solution at the selected values x_n = x0 + nh, n = 0, 1, 2, 3, .... The parameter h is called the step size. The numerical solution of (1) is given by a set of points {(x_n, y_n)}, where each point (x_n, y_n) is an approximation to the corresponding point (x_n, y(x_n)) on the solution curve. Euler Method Euler's method is the simplest one-step method. It is the basic explicit method for numerical integration of ordinary differential equations. Euler proposed his method for initial value problems (IVP) in 1768. It is the first numerical method for solving IVPs and serves to illustrate the concepts involved in the advanced methods. It is important to study because its error analysis is easier to understand. The general formula for the Euler approximation is y_{n+1} = y_n + h f(x_n, y_n). Runge Kutta Method This method was devised by two German mathematicians, Runge around 1894, and extended by Kutta a few years later. The Runge Kutta method is most popular because it is quite accurate, stable, and easy to program. Runge Kutta methods are distinguished by their order, in the sense that they agree with the Taylor series solution up to terms of order h^r, where r is the order of the method. They do not demand prior computation of higher derivatives of y(x) as in Taylor's series method. The fourth-order Runge Kutta method (RK4) is widely used for solving initial value problems (IVP) for ordinary differential equations (ODE). The general formula for the Runge Kutta approximation is k1 = h f(x_n, y_n), k2 = h f(x_n + h/2, y_n + k1/2), k3 = h f(x_n + h/2, y_n + k2/2), k4 = h f(x_n + h, y_n + k3), and y_{n+1} = y_n + (k1 + 2 k2 + 2 k3 + k4)/6. Error Analysis There are two types of errors in the numerical solution of ordinary differential equations: round-off errors and truncation errors occur when ordinary differential equations are solved numerically. Rounding errors originate from the fact that computers can only represent numbers using a fixed and limited number of significant figures. Thus, many numbers cannot be represented exactly in computer memory, and the discrepancy introduced by this limitation is called round-off error. Truncation errors in numerical analysis arise when approximations are used to estimate some quantity. The accuracy of the solution will depend on how small we make the step size h. A numerical method is said to be convergent if lim_{h->0} max_n |y(x_n) - y_n| = 0, where y_n denotes the approximate solution and y(x_n) the exact solution. In this paper we consider two initial value problems to verify the accuracy of the proposed methods. The approximate solution is evaluated using Mathematica software for the two proposed numerical methods at different step sizes. The maximum error is defined by max_n |y(x_n) - y_n|. Numerical Examples In this section we consider two numerical examples to determine which numerical method converges faster to the analytical solution. Numerical results and errors are computed and the outcomes are represented graphically.
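The two formulas above translate directly into code. The following short Python sketch (ours, for illustration; the paper's computations were done in Mathematica) implements both schemes:

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge Kutta method (RK4)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        k1 = h * f(x0, y0)
        k2 = h * f(x0 + h / 2, y0 + k1 / 2)
        k3 = h * f(x0 + h / 2, y0 + k2 / 2)
        k4 = h * f(x0 + h, y0 + k3)
        y0 = y0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys
```

Each RK4 step costs four evaluations of f against Euler's one, but its local accuracy is higher by three orders in h, which is the trade-off examined in the next section.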
Discussion of Results The approximate solution is calculated with step sizes 0.1, 0.05, 0.025, and 0.0125, and the maximum errors are also calculated at each step size. From the tables for each method we say that a numerical solution converges to the exact solution if decreasing the step size leads to decreased errors, such that in the limit as the step size tends to zero the errors go to zero. We see that the Euler approximations using the step sizes 0.1 and 0.05 do not converge to the exact solution, but for step sizes 0.025 and 0.0125 they converge slowly to the exact solution. We also see that the Runge Kutta approximations for the same step sizes converge rapidly to the exact solution. This shows that a small step size provides a better approximation. The Runge Kutta method of order four requires four function evaluations per step, so it should give more accurate results than the Euler method with one-fourth the step size if it is to be superior. Finally, we observe that the fourth-order Runge Kutta method converges faster than the Euler method and is the most effective method for solving initial value problems for ordinary differential equations. Conclusion In this paper, the Euler method and the Runge Kutta method are used for solving ordinary differential equations (ODE) in initial value problems (IVP). Finding more accurate results requires a smaller step size for all methods. From the figures we can see the accuracy of the methods as the step size h decreases: the graph of the approximate solution approaches the graph of the exact solution. The numerical solutions obtained by the two proposed methods are in good agreement with the exact solutions. Comparing the results of the two methods under investigation, we observed that the rate of convergence of Euler's method is O(h) and the rate of convergence of the fourth-order Runge Kutta method is O(h^4). The Euler method was found to be less accurate due to the inaccurate numerical results obtained from the approximate solution in comparison to the exact solution. Example 1: we consider an initial value problem whose exact solution is known in closed form; the approximate results and maximum errors are shown in Tables 1(a)-(d) and the graphs of the numerical solutions are displayed in Figures 1-7. Example 2: we consider a second initial value problem; the obtained results are shown in Tables 2(a)-(d) and graphical representations are shown in Figures 8-14. Table 1. (a)-(d) Numerical approximations and maximum errors for step sizes h = 0.1, 0.05, 0.025, and 0.0125, and errors for the different step sizes using the Euler method. Table 2. (a)-(d) Numerical approximations and maximum errors for the same step sizes (Example 2). Figure 7. Error for different step sizes using the RK4 method.
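The stated convergence rates can be checked empirically with the euler and rk4 routines sketched above: halving h should roughly halve the maximum error for Euler and shrink it by a factor of about 16 for RK4. The test problem below is our assumption (y' = y, y(0) = 1, exact solution e^x), chosen only because its exact solution is known; it is not one of the paper's examples:

```python
import math

def max_error(method, h):
    """Maximum error over [0, 1] for the given one-step method and step size."""
    xs, ys = method(lambda x, y: y, 0.0, 1.0, h, int(round(1.0 / h)))
    return max(abs(math.exp(x) - y) for x, y in zip(xs, ys))

for h in (0.1, 0.05, 0.025, 0.0125):
    print(h, max_error(euler, h), max_error(rk4, h))
```

The ratios of successive errors estimate the observed order of each method, mirroring the O(h) and O(h^4) rates reported in the conclusion.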
2,713.2
2015-08-20T00:00:00.000
[ "Mathematics" ]
Selective Cysteine Protease Inhibition Contributes to Blood-feeding Success of the Tick Ixodes scapularis* Ixodes scapularis is the main vector of Lyme disease in the eastern and central United States. Tick salivary secretion has been shown to be important for both blood-meal completion and pathogen transmission. Here we report a duplication event of cystatin genes in its genome that results in a transcription-regulated boost of saliva inhibitory activity against a conserved and relatively limited number of vertebrate papain-like cysteine proteases during blood feeding. We further show that the polypeptide products of the two genes differ in their binding affinity for some enzyme targets, and they display different antigenicity. Moreover, our reverse genetic approach employing RNA interference uncovered a crucial mediation in tick-feeding success. Given the role of the targeted enzymes in vertebrate immunity, we also show that host immunomodulation is implicated in the deleterious phenotype of silenced ticks, making I. scapularis cystatins attractive targets for the development of antitick vaccines. Among the differences that make a relationship between two organisms parasitic rather than symbiotic is the lack of mutual benefit: the parasite manages to continuously receive valuable resources from the host without returning the favor and, in addition, sometimes triggers catastrophic conditions in the host such as disease transmission. Hard ticks can be considered another case of efficient ectoparasites; they are able to suck blood, a rich source of nutrients, from their vertebrate host(s) for several days (1). If the "blood donor" becomes aware of the tick attached to its skin/blood circulation then, given that a tick cannot fly, rejection could be the best scenario and death the worst for the arthropod. Consequently, ticks have developed a series of mechanisms to gain undisturbed access to their nutritious meal, including saliva injection at biting sites (2). Tick salivary glands regulate water and ion excretion by saliva secretion, which in addition reduces the volume of the blood bolus in the tick digestive tract as feeding progresses. Furthermore, they deliver a repertoire of pharmacologic compounds to the site of infestation that affects, among other things, hemostasis and host immunity, thus facilitating the completion of a good quality meal for the tick (3). Unluckily for the host, saliva has also been shown to enhance tick vector competence, e.g. its capability to transmit pathogens (4).
The black-legged tick Ixodes scapularis is among the most successful arthropod blood feeders; after hatching from the egg, larvae and nymphs normally feed on small rodents, whereas adults feed on larger animals (5). This species transmits the Lyme disease etiologic agent Borrelia burgdorferi as well as Anaplasma phagocytophilum and Babesia microti (causing the diseases anaplasmosis and babesiosis, respectively), and viruses within the tick-borne encephalitis complex (6). To enhance our knowledge about the saliva constituents that account for its unpleasant (for the host) biologic properties, two massive sequencing projects of salivary expressed sequence tags (ESTs) were completed in our laboratory (7,8). Analysis using bioinformatic tools revealed gene family expansions in the salivary gland secretome. This finding created, as is almost always the case, more questions than answers. Is the expansion due to polymorphisms or gene duplications, and if the latter is the case, are the duplications stably maintained in the genome to achieve antigenic variation, redundancy in biochemical pathways, or a combination of these? In an attempt to bridge this gap between genomics and the function of the secretome, we focused on characterizing two cystatins with high amino acid (aa) identity to each other that are secreted from the salivary glands of I. scapularis. Cystatins are present in vertebrates, invertebrates, plants, and protozoa, and all of them form tight, equimolar, and reversible inhibitory complexes with papain-like cysteine proteases. This holds true for the first cystatin we expressed, which we named sialostatin L because of its affinity for cathepsin L (9). We further showed that saliva indeed displays inhibitory activity against cathepsin L in vitro that could be partially attributed to the presence of sialostatin L. Consistent with the role of its target enzymes in immunity, we finally discovered an anti-inflammatory and immunosuppressive action of the protein on the vertebrate host (9). Here we report the characterization of the second cystatin, which we named sialostatin L2 to emphasize its redundant inhibitory activity against cathepsin L. Other than their high aa identity and similar affinity for cathepsin L, we show that the two cystatins are not equally potent in inhibiting other target enzymes and that they also differ in antigenicity. Furthermore, we report major differences in their transcript abundance during tick infestation; sialostatin L2 transcripts greatly accumulate in the salivary glands as feeding on the host progresses, whereas sialostatin L transcripts slightly decrease at the same time. Given this transcriptional induction of sialostatin L2 and the absence of classical genetic approaches for this non-model arthropod vector, we undertook a reverse genetic approach to silence sialostatin L2 by RNA interference (RNAi). This well established technique (10) led to a reduction of cystatin transcripts in the salivary glands, followed by feeding inhibition, reduced tick size, and a reduced number of eggs laid. Moreover, normal ticks, when exposed to a rabbit previously infested with silenced ticks, exhibited significant feeding impairment due to an enhanced host immune response. Because of its stringent and unique specificity, sialostatin L2 can be useful for studying the role of certain papain-like proteases in various biologic phenomena. In addition, it can provide a starting point for potent pharmaceutical interventions that target the key role of those enzymes in human diseases.
Besides their limited number of targets, we reveal the crucial mediation of I. scapularis cystatin salivary constituents in blood-meal uptake through control of their targets' proteolytic activity. Taking into account their role in the success of parasitism, they should be considered in the development of antiparasitic vaccines; they may be additional candidate ingredients in the mixture of antigens that will potentially lead to the achievement of this difficult goal. EXPERIMENTAL PROCEDURES Unless otherwise indicated, protocols followed standard procedures (11), and all experiments were performed at room temperature (25 ± 1 °C). All materials were obtained from Sigma, and the water used was of 18-megaohm quality, produced by a MilliQ apparatus (Millipore, Bedford, MA). Bioinformatic Tools-To obtain genomic information relative to the cystatin transcripts, raw trace FASTA files from shotgun genomic sequences of I. scapularis (found at ftp://ftp.ncbi.nih.gov/pub/TraceDB/ixodes_scapularis), representing nearly 24 million sequences, were downloaded and trimmed of vector and primer sequences using a homemade tool written in Visual Basic. Sequences with average quality values below 20 were excluded. Sialostatin L and L2 coding sequences (NCBI accession gi:22164282 and gi:67083499, respectively) were blasted against these genomic sequences using blastn with a word size of 80 (−W 80 switch). The resulting matches were assembled using the cap3 assembler (12), and the produced consensus sequences were in turn blasted against the two cystatin transcripts. All other sequence comparisons reported here were done using the BLAST server at the NCBI (http://www.ncbi.nlm.nih.gov/BLAST) and the ClustalW service at the European Bioinformatics Institute, whereas protein secretion signals were revealed with the SignalP 3.0 server of the Technical University of Denmark. Expression, Purification, and Sequence Verification of Sialostatin L2-We followed the same procedure as described before for sialostatin L (9), except that sialostatin L2 cDNA was PCR-amplified using high fidelity Taq polymerase from a TriplEx2 cDNA clone, described in our previous work (8), with gene-specific primers (forward, 5′-GCC CAT ATG GAA CTG GCA CTG CGT GGC GGT TAC CGC GAG CG-3′; reverse, 5′-GCC CTC GAG TTA TGC GGC CGC ACA CTC AAA GGA GCT-3′) designed for subcloning into the pET17b bacterial expression vector. Enzymatic Assays-Apparent inhibition constants of sialostatin L2 or L for various proteases were obtained as described earlier (9) by measuring the loss of enzymatic activity at increasing concentrations of inhibitor in the presence of a fluorogenic enzyme substrate in large excess. Production of Polyclonal Sera-Female Swiss Webster mice, 6–8 weeks old, were purchased from The Jackson Laboratory (Bar Harbor, ME) and maintained in the NIAID Animal Care Facility (Twinbrook 3 Bldg., NIH) under pathogen-free conditions in temperature-controlled rooms and received water and food ad libitum. Groups of six mice each received intradermal injections of 10 µg of pure recombinant protein in each ear, and four boosts followed at 2-week intervals. Preimmune sera were taken from each mouse before vaccination, whereas control groups received buffer (vehicle) vaccination in parallel. All treatments were performed in accordance with The Guide for Care and Use of Laboratory Animals (NIH).
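The quality screen in "Bioinformatic Tools" above is, at its core, a mean-quality filter over paired sequence and quality files. The following Python sketch illustrates that step, assuming FASTA/QUAL trace files as distributed by the NCBI Trace Archive; the file names and the simple parser are hypothetical stand-ins for the authors' Visual Basic tool, not a reconstruction of it, and the retained sequences would then proceed to blastn (word size 80) and cap3 assembly as described.

def read_records(path):
    """Parse a FASTA-style file into {header: list-of-tokens}."""
    records, header, body = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    records[header] = body
                header, body = line[1:].split()[0], []
            elif line:
                body.extend(line.split())
    if header is not None:
        records[header] = body
    return records

def filter_traces(fasta_path, qual_path, min_mean_qv=20):
    """Keep traces whose mean Phred quality value is at least min_mean_qv."""
    seqs = {h: "".join(toks) for h, toks in read_records(fasta_path).items()}
    quals = {h: [int(q) for q in toks] for h, toks in read_records(qual_path).items()}
    kept = {}
    for name, seq in seqs.items():
        qv = quals.get(name, [])
        if qv and sum(qv) / len(qv) >= min_mean_qv:
            kept[name] = seq
    return kept

# Example with hypothetical file names:
# good = filter_traces("ixodes_traces.fasta", "ixodes_traces.qual")
# print(len(good), "traces retained for blastn (-W 80) and cap3 assembly")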
Tick Rearing-For most experiments, ticks were harvested after detaching from mice (nymphs) or rabbits (adults). Engorged nymphs were maintained at 23 °C and >90% relative humidity under a 14-h light/10-h dark photoperiod until enough time elapsed for them to molt into the adult stage. In all feeding experiments involving adult ticks, we placed an equal number of female and male ticks on the ears of New Zealand White rabbits. Ears were covered with cotton ear bags, and an Elizabethan collar was placed around the neck of each rabbit to prevent grooming. Engorged adult ticks were held under similar conditions as nymphs until enough time elapsed for them to lay eggs. For harvesting tick tissues, partially fed females were dissected within 4 h of being removed from hosts. Harvesting Tick Tissues-Tick tissues (salivary glands and midguts) were dissected in ice-cold 100 mM MOPS buffer containing 20 mM EGTA, pH 6.8. After removal, glands were washed gently in the same ice-cold buffer. Dissected tissues were stored immediately after dissection in RNAlater (Ambion, Austin, TX) before isolating total RNA. Tissues were used immediately after dissection or stored at −70 °C in 0.5 M PIPES, pH 6.8, containing 20 mM EGTA and 1× Complete Mini protease inhibitor mixture (Roche Applied Science). All other manipulations were carried out at 4 °C. Synthesis of Tick Salivary Gland cDNA and Reverse Transcription (RT)-PCR-Total RNA was isolated using an RNAqueous total RNA isolation kit (Ambion) from dissected partially fed female salivary glands/midguts and unfed female adult salivary glands/midguts. The concentration of total RNA was determined spectrophotometrically, separated into aliquots, and stored at −70 °C before use. Total RNA was reverse-transcribed using Moloney murine leukemia virus reverse transcriptase according to the manufacturer's protocol. For each gene, cDNA was PCR-amplified using gene-specific primers: sialostatin L2, forward, 5′-CTA TGC GGC TTC CTC GAA GGG GCT-3′, and reverse, 5′-GGC TAC AGC GAG AGG GCG AAC CAC CAA-3′; tick salivary gland Isac, forward, 5′-AGC GAA GAC GGT CTC GAG CAA GAT-3′, and reverse, 5′-TCG GCA CAC GAT GCC TCA GGG AAT-3′; β-actin, forward, 5′-GAA GAT CTT GAG AAG ATG GCC CAG-3′, and reverse, 5′-CGG TAC CGT CGA TGG TCA CC-3′, as the control. The PCR program we used included the following cycles: 75 °C for 3 min, 94 °C for 2 min, and 22 cycles of 94 °C for 1 min, 49 °C for 1 min, and 72 °C for 1.20 min, followed by 10 min at 72 °C. Real-time Quantitative PCR-Real-time quantitative PCR was performed using the Mx4000 or Mx3005P Multiplex Quantitative PCR system and the Brilliant SYBR Green Single-Step QRT-PCR Master Mix kit (Stratagene, La Jolla, CA) according to the manufacturer's instructions. A standard curve (10^0–10^7 copies per reaction) was generated using purified sialostatin L and L2 PCR products as the template. The following primers were used for all reactions: sialostatin L, forward, 5′-TCG CGA TCG CTA GCA TCA CAC TT-3′, and reverse, 5′-AGC AGA AGG ACC AAA GCG AAG GTA-3′; sialostatin L2, forward, 5′-AAG TCC ATT AGC TCC TTC GAG TGT G-3′, and reverse, 5′-ATC ATT CCG CGA CGT ACA GTG AGA-3′. Reactions (25-µl final volume) contained 10 ng of total RNA and were run under the following conditions: 1 cycle of 50 °C for 30 min and 95 °C for 15 min, followed by 40 cycles of 95 °C for 30 s and 55 °C for 30 s. Fluorescence was measured every cycle at the end of the 55 °C step. Samples were run in triplicate as well as in the absence of reverse transcriptase or template as negative controls.
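The absolute quantification described above reduces to a standard-curve interpolation: the threshold cycle (Ct) is linear in log10 of the template copy number over the 10^0–10^7 standards, and unknowns are read off that line. The following is a minimal sketch of the equivalent calculation; the Ct values are made-up placeholders, since the instrument software performs this step internally.

import numpy as np

standard_copies = np.array([10.0**n for n in range(0, 8)])   # 10^0 .. 10^7
standard_ct = np.array([38.1, 34.8, 31.4, 28.0, 24.7, 21.3, 18.0, 14.6])  # placeholders

# Fit Ct = slope*log10(copies) + intercept over the standards.
slope, intercept = np.polyfit(np.log10(standard_copies), standard_ct, 1)
efficiency = 10.0**(-1.0 / slope) - 1.0   # amplification efficiency from the slope

def copies_from_ct(ct):
    """Invert the standard curve for an unknown sample."""
    return 10.0**((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct = 26.2 -> {copies_from_ct(26.2):.2e} copies")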
The copy number of sialostatin L and L2 mRNA in each sample was determined using the Mx4000 or Mx3005P data analysis software based on the standard curve. Double-stranded RNA (dsRNA) Synthesis, Tick Injections, and Feeding-The sialostatin L2 RT-PCR product was joined to the Block-iT T7 TOPO linker. This TOPO linking reaction was used in two PCR reactions with gene-specific and T7 PCR primers to produce sense and antisense linear DNA templates. These sense and antisense DNA templates were used to generate sense and antisense transcripts using the BLOCK-iT RNA TOPO transcription kit. The resulting dsRNA was analyzed by agarose gel electrophoresis to verify its size. Subsequently, unfed female ticks were injected with 0.5 µg of cystatin dsRNA or with 1 µl of TS.MOPS (vehicle) using a 35-gauge needle. After injection of dsRNA or buffer alone, ticks were kept at 37 °C overnight under high humidity to observe tick survival. Surviving ticks were exposed to a naïve (never tick-bitten) rabbit and allowed to blood-feed to repletion. Their feeding success was determined by total engorged weight, survival, and egg laying. The ears of the rabbits exposed to sialostatin L2 dsRNA-injected or water-injected ticks were cleaned by the end of the experiment; the animals were kept for 14 days and then re-exposed to normal unfed ticks, and feeding success was evaluated as described above. Statistics-All data are expressed as the mean ± S.E. Statistical significance was determined by Student's t test; differences in multiple comparisons among different experimental groups were determined by analysis of variance using the Tukey test. RESULTS The Two Cystatin Transcripts Are Encoded by Two Different Genes-Several I. scapularis transcripts were revealed to be of salivary origin during our most recent massive EST sequencing project (8), including a novel cystatin that shows 75% identity at the protein level to sialostatin L, a secreted cystatin previously characterized in our laboratory (9). When the secretion signal was removed from this polypeptide, multiple alignment with sialostatin L showed a clustering of aa substitutions in two regions of the protein; of a total of 27 aa substitutions throughout the 115-residue polypeptide, 12 were located in the first 22 amino-terminal residues, whereas another 12 substitutions gathered in the last 33 carboxyl-terminal aa of the protein (Fig. 1A). This raised the possibility that the two proteins could be allelic products of the same gene. To test this hypothesis, a bioinformatic approach was undertaken. cDNA sequences of both transcripts were compared by BLAST analysis to the publicly available shotgun genomic sequences from the I. scapularis genome project. The resulting matches were assembled into contigs that were in turn compared by BLAST to both cystatin transcripts. The result showed clearly that the two cystatins are encoded by two different genes (data not shown). The sialostatin L locus consists of three exons, whereas only two exons, coding for parts of the amino terminus and carboxyl terminus of the second cystatin, could be revealed (data not shown). Possibly the third exon was not detected due to the limited DNA sequence available. In both genes, intronic sequences were partial but unique; their high numbers of repeating sequences made their successful extension impossible due to the very large number of matches with repetitive sequences from intronic regions found in the shotgun genomic sequences.
The Polypeptide Products of the Two Genes Differ in Their Target Specificity and Display Different Antigenicity-We next proceeded to the expression and purification of the protein encoded by the novel transcript, which was subsequently used in inhibition assays of various commercially available purified proteases. Only four of seven cysteine proteases tested were affected by the presence of the protein in the assay, namely cathepsins L, V, S, and C (Fig. 1B). No inhibition was observed for the cysteine proteases cathepsin X/Z/P, B, or H (Table 1), the aspartic proteases cathepsin D and legumain, or the serine proteases cathepsin G and elastase (data not shown). We next compared this novel cystatin with sialostatin L for efficiency in inhibiting their overlapping target enzymes. The results are shown in Fig. 1C and are summarized in Table 1. Briefly, the two inhibitors are equally potent in inhibition of cathepsins L and V (Fig. 1C, upper panel) but displayed major differences in inhibition of cathepsins S and C (Fig. 1C, lower panel). To further evaluate those findings, we tested whether this novel cystatin is a tight inhibitor of cathepsin L, as is the case for sialostatin L (9). Indeed, when we used decreasing amounts of cathepsin L in our assays, less cystatin was necessary to achieve the same percentage of enzymatic inhibition (Fig. 2A), which is a typical characteristic of tight inhibition. The decrease in the concentration of the inhibitor at which 50% enzymatic inhibition (IC50) is achieved was actually proportional to the reduction of the amount of enzyme used in the assay (Fig. 2B). Because conventional Michaelis-Menten kinetics do not hold true for tight binding inhibition, we applied Morrison's equation (13) to obtain apparent dissociation constants (Ki*) in the presence of varying substrate concentrations. Fig. 2C shows the linear regression line (r^2 = 0.9918) when Ki* for several substrate concentrations was plotted against the substrate concentration, indicating a y intercept of 65.5 ± 23.1 pM, which is the inhibition constant (Ki) of this novel cystatin for cathepsin L. The sialostatin L Ki for the same enzyme is 95.3 ± 7.3 pM (9), demonstrating a similar affinity of the two inhibitors for cathepsin L. To emphasize this similarity, we assigned the name sialostatin L2 to this second salivary cystatin. Having in hand both pure and active cystatins, we then examined their antigenicity, i.e. their capability to induce production of specific polyclonal sera in a vertebrate host, in this case female Swiss Webster mice. Sialostatin L or L2 (20 µg) was administered to each mouse five times at 2-week intervals; 2 weeks after the last vaccination, their sera were tested by enzyme-linked immunosorbent assays for recognition of the vaccination antigen (sialostatin L or L2) and for potential cross-reaction with the second cystatin (sialostatin L2 or L, respectively). The results are shown in Fig. 3. Although both proteins were immunogenic, only sera from mice vaccinated with sialostatin L2 cross-reacted with sialostatin L. Going a step further, we estimated the mean antibody titer in the sera of the mice in both experimental groups using standard methods (11); for the sialostatin L-vaccinated mice, the mean antibody titer was 4100 ± 400 for sialostatin L and 200 ± 35 for sialostatin L2, whereas the mean antibody titer in the sera of the sialostatin L2-vaccinated mice was 4000 ± 450 for sialostatin L2 and 1070 ± 136 for sialostatin L.
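As a rough numerical illustration of the tight-binding analysis above, the sketch below fits Morrison's equation to an inhibitor titration to obtain an apparent Ki* at each substrate concentration and then extrapolates Ki* linearly to zero substrate; the y intercept is Ki. All concentrations and "observed" activities are synthetic placeholders, not the measured data, and the competitive-inhibition form Ki* = Ki(1 + [S]/Km) with an assumed Km is used only to generate them.

import numpy as np
from scipy.optimize import curve_fit

def morrison(i0, ki_app, e0):
    """Fractional residual activity for tight-binding inhibition."""
    term = e0 + i0 + ki_app
    return 1.0 - (term - np.sqrt(term**2 - 4.0 * e0 * i0)) / (2.0 * e0)

e0 = 50e-12                                           # enzyme, M (placeholder)
i0 = np.array([0, 25, 50, 100, 200, 400]) * 1e-12     # inhibitor titration, M
substrate = np.array([2e-6, 5e-6, 10e-6, 20e-6])      # [S], M (placeholders)

ki_apparent = []
for s in substrate:
    # Synthetic "observed" activities standing in for the fluorogenic assays,
    # generated with an assumed Ki = 65 pM and Km = 5 uM.
    activity = morrison(i0, 65e-12 * (1 + s / 5e-6), e0)
    popt, _ = curve_fit(lambda i, k: morrison(i, k, e0), i0, activity, p0=[1e-10])
    ki_apparent.append(popt[0])

slope, intercept = np.polyfit(substrate, ki_apparent, 1)
print(f"Ki (y intercept) = {intercept*1e12:.1f} pM")   # ~65 pM with these placeholders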
To sum up, the 27 amino acid differences between the two cystatin molecules apparently result in changes in their interaction interface with some of the targeted enzymes (and, therefore, in their binding affinity). These primary structure changes and, more interestingly, the observed different affinity of the two inhibitors for cathepsin S, a critical enzyme for antigen processing and presentation, can account for the observed differences in their recognition by the vertebrate immune system as well.

FIGURE 1 (caption, continued). B, the abscissa represents inhibitor concentration (M) on a log10 scale, and the ordinate shows the percentage of remaining enzymatic activity in the presence of sialostatin L2. Each experiment was performed in triplicate. Additional details can be found in Table 1. C, sialostatin L2 differs in affinity for two of their common enzymatic targets when compared with sialostatin L. The two inhibitors were allowed to interact with the same amount of enzyme under the same assay conditions. The resulting reduction of enzymatic activity was plotted against the corresponding inhibitor concentration. The abscissa represents inhibitor concentration (M) on a log10 scale, and the ordinate shows the percentage of remaining enzymatic activity in the presence of the inhibitor. Each experiment was performed in triplicate. Red, results for sialostatin L; black, results for sialostatin L2. Additional details can be found in Table 1.

Sialostatin L2 Transcription Increases as Feeding on the Host Progresses-To shed light on the transcriptional control of the two genes during I. scapularis feeding on the vertebrate host, we employed real-time quantitative RT-PCR using RNA isolated from unfed or partially fed adult female tick salivary glands or midguts. Expression levels were first normalized using the constitutively expressed actin transcript as a standard (14). Similar accumulation of sialostatin L transcripts was revealed in unfed salivary glands and midguts, 80 and 20 times higher, respectively, than the sialostatin L2 expression levels in the corresponding tissues. Furthermore, the change in transcript abundance for the two tick cystatins, both in the midgut and in the salivary glands, as feeding continues, relative to the corresponding transcript abundance in tissues from unfed ticks, was estimated and is presented in Table 2. Briefly, as feeding starts, sialostatin L transcript levels decrease in both the midgut and salivary glands. On the other hand, sialostatin L2 transcripts slightly fluctuate in the midgut but drastically accumulate in the salivary glands. Our bioinformatic approach uncovered that the 600 bp of the 5′-untranslated regions of the two genes do not show any similarity when compared with BLASTN (data not shown), indicating that the differences in the transcriptional regulation of the two genes can be partially or fully attributed to their different 5′-untranslated region nucleotide sequences.

FIGURE 2. Sialostatin L2 is a tight binding inhibitor of cathepsin L. A, a lower inhibitor concentration is necessary for the same percentage of cathepsin L inhibition to be achieved as the concentration of the enzyme used in the assays decreases from 75 to 12.5 pM. Each experiment was performed in triplicate. The abscissa represents sialostatin L2 concentration (M) on a log10 scale, and the ordinate represents the percentage of remaining cathepsin L activity in the presence of sialostatin L2. B, the reduction in the sialostatin L2 concentration at which 50% inhibition of cathepsin L activity is achieved (IC50) is proportional to the reduction of the cathepsin L concentration used in the assay. The abscissa represents IC50 ± S.E. of triplicates, and the ordinate represents cathepsin L concentration. C, relationship of the apparent dissociation constant Ki* to substrate concentration when reactions were initiated by the addition of cathepsin L. Values for Ki* were calculated as described under "Results." Linear regression of the data yields a Ki of 65.5 ± 23.1 pM (r^2 = 0.992). Each point in the graph is the mean Ki* ± S.E. of four independent experiments.

FIGURE 3 (caption, fragment). Each group consisted of six mice. The ordinate shows mean milliabsorbance units of the enzyme-linked immunosorbent assay read (λ = 405 nm) for each sample serum ± S.E. **, statistically significant difference (p < 0.001); *, statistically significant difference (p < 0.05) in the absorbance read when corresponding sera were tested.

TABLE 1. Sialostatin L2 affinity changes for proteolytic enzymes when compared with sialostatin L. Repertoire of cysteine proteases tested for inhibition by sialostatins L and L2 and the concentration of inhibitor at which 50% inhibition of the activity of the targeted proteolytic enzymes is achieved (IC50) ± S.E. The enzyme concentration used in the assays is also given for all targets. NI, no inhibition, i.e., inhibition of the enzyme was not observed in the presence of 10 µM inhibitor.

TABLE 2. Transcriptional regulation of sialostatins L and L2 in the midgut and salivary glands during the onset of tick blood feeding. The table shows the change in accumulation of transcripts for the two tick cystatins, both in the midgut and in the salivary glands, as feeding continues, relative to the corresponding transcript abundance in tissues from unfed ticks. Similar levels of sialostatin L transcripts were revealed in unfed salivary glands and midguts, 80 and 20 times higher than those of sialostatin L2 in the corresponding tissues.

Sialostatin L2 Is Essential for Tick Blood Feeding Success-Given this transcriptional induction of sialostatin L2 in tick salivary glands as feeding progresses, we decided to silence the gene using the RNAi technique. Adult unfed female ticks were injected with sialostatin L2 dsRNA and subsequently allowed to recover from the injections and feed on rabbits as described under "Experimental Procedures." Groups of 12 ticks each were pulled from the rabbit after 4 days of feeding, and their salivary glands were dissected and subsequently checked for gene silencing efficiency by RT-PCR. As shown in Fig. 4A, ticks injected with sialostatin L2 dsRNA showed an ~80% decrease in sialostatin L2 transcript levels when compared with water-injected controls. Moreover, sialostatin L was completely silenced (data not shown), whereas levels of β-actin and Isac (negative controls) remained unchanged in both experimental and control groups. When attached to a rabbit in vivo, ~40% of the silenced ticks were unable to feed and subsequently died (Fig. 4B), whereas in most cases apparently inflamed and swollen skin was revealed at the feeding sites of dead ticks (Fig. 4C). For the remaining ~60% of RNAi ticks that fed on the host to repletion, their average weight approximated 60 mg, much lower than the control average weight of 170 mg (Fig. 4, D and E).
Additionally, they showed ~70% egg-laying inhibition and became "stone hard" after detachment from the host (data not shown). The Phenotype of Silenced Ticks Can Be Attributed to an Enhanced Immune Reaction from the Vertebrate Host-Rabbits exposed multiple times to ticks eventually develop a strong anti-tick immunity (15). We hypothesized that the signs of inflammation at the feeding sites of dead ticks treated with cystatin dsRNA could indicate an accelerated immune response to tick salivary proteins because sialostatins are absent or decreased. Therefore, rabbits exposed to control and silenced ticks were kept and exposed to wild type (normal) adult female ticks 2 weeks after the first infestation. As shown in Fig. 5A, when ticks were attached to rabbits previously exposed to RNAi-treated ticks, they fed poorly and were unable to engorge, whereas a severe skin reaction could be seen at the tick attachment site (Fig. 5B). In contrast, when adult female ticks were attached to rabbits previously exposed to water-injected control ticks, they managed to feed and engorge (Fig. 5, A and C), although less efficiently (data not shown) than when attached to naïve rabbits (never exposed to ticks), in agreement with a previous report (15).

FIGURE 4 (caption, fragment). RT-PCR of control salivary glands (lanes 1, 3, and 5, respectively) or sialostatin L2 RNAi salivary glands (lanes 2, 4, and 6, respectively) using sialostatin L2 (lanes 1 and 2), Isac (lanes 3 and 4), and β-actin (lanes 5 and 6) gene-specific primers for transcript amplification. B, sialostatin L2 RNAi ticks were unable to feed successfully; three experiments were performed on different dates during the active adult tick feeding period using different batches of ticks and New Zealand rabbits. Each experiment was carried out with water-injected control (n = 50) and sialostatin L2 dsRNA-injected (n = 50) ticks. C, the percentage of feeding inhibition was calculated by counting dead ticks attached to the rabbit ear (arrows) during the first 24–48 h of infestation. D, partially fed female adult ticks were pulled from the rabbit on days 4, 5, and 7 (n = 10) and weighed during each experiment. The ordinate shows the average tick weight (in mg) of three replicate experiments; *, statistically significant difference (p < 0.05). E, fully engorged female adult ticks, representing the control and the experimental group, that dropped off and were kept for egg mass recovery.

FIGURE 5. Immunomodulatory role of sialostatin L2 during tick feeding on the vertebrate host. A, naïve adult female ticks (n = 50) were allowed to feed on rabbits previously exposed to sialostatin L2 RNAi and water-injected control ticks. Each experiment was carried out three times, and the percentage of dead, detached, or fed-to-repletion ticks was calculated for experimental and control groups; **, statistically significant difference (p < 0.001); *, statistically significant difference (p < 0.05). B, naïve female adult ticks that attached but died in the first 24 h of infestation on rabbits previously exposed to sialostatin L2 RNAi ticks, shown in this characteristic photo from the ear of such a rabbit. Asterisks indicate swollen skin; arrowheads point to dead attached ticks, with Inf indicating the site of a profound inflammation. C, photo of naïve female adult ticks that managed to feed to repletion when attached to rabbits previously infested with water-injected control ticks.

DISCUSSION In this report we identify two different loci in the tick genome encoding two I. scapularis cystatin transcripts.
Given the high similarity of the corresponding transcripts (nucleotide identity in the region 124–388 reaches 86%), it would be almost impossible to show that they are encoded by separate genes in the absence of the released genomic sequences. On the other hand, if EST sequences were not available, it would be equally difficult to assemble the corresponding genomic sequence into contigs due to the high number of repetitive sequences in the intronic regions. This is not the first demonstration of the value of cross-talk between a genome and ESTs, but taking into account our difficulties in assembling the intronic regions of the cystatin genes, we propose that all available EST sequences could serve as valuable additional scaffolds in the assembly of the I. scapularis genome, given that repetitive sequences are present at such high levels. Members of the cystatin superfamily have been isolated from tissues of animals and plants and a variety of microbes. They can be subdivided into three groups (16): family 1 cystatins (also known as stefins) are cytoplasmic and lack disulfide bonds, whereas family 2 cystatins are secreted and bear two disulfide bonds. Members of both groups display a low molecular mass (roughly 11–14 kDa), in contrast to the family 3 members (also known as kininogens), which are much larger molecules made of multiple cystatin modules. Structural studies of various cystatins show that they display a wedge-shaped interface that binds to the active site of their target proteases (17). This interface consists of three typical segments (18) (red in Fig. 1A): the amino-terminal domain located around a conserved G (PI segment), a hairpin loop located around the conserved sequence QXVXG (PII segment), and a second hairpin loop located around a conserved PW dipeptide (PIII segment). We have previously shown that secreted cystatins from ticks are divergent in their aa sequence from the other family 2 members from animals and lower eukaryotes (9). Now we further show that both I. scapularis salivary cystatins lack the two PW residues in the PIII segment, which are instead substituted with a conserved NL dipeptide. Single aa substitutions in the PW dipeptide have been shown to reduce cystatin affinity for cathepsins B and H (19). It is possible that sialostatins L and L2 recruited those two aa substitutions to rid I. scapularis of a potentially undesirable or unnecessary inhibitory activity of its salivary cystatins against vertebrate cathepsins B and H, diverging these salivary proteins in their target selectivity. Of interest, a recent paper (20) describes two secreted cystatins from the soft tick Ornithodoros moubata. Soft ticks feed rapidly, and their cystatins were shown in the same paper to play a role in midgut physiology rather than in the salivary glands. Both soft tick cystatins display the PW motif in their PIII segment and inhibit cathepsins B and H. The same holds true for a secreted cystatin from the hard tick Haemaphysalis longicornis that plays a role in tick midgut physiology/innate immunity (21) but not in the salivary glands. Although it is difficult to be conclusive, as there are several other aa differences throughout those proteins, another salivary cystatin, from the hard tick Amblyomma americanum, has the NL substitution in the PIII segment, and RNAi-silenced ticks displayed a reduced ability to feed successfully on rabbits (10).
Biochemical characterization of this protein and the transcriptional regulation of its gene are both still lacking, but it is tempting to speculate that the divergence of the sequence in the PIII segment of salivary hard tick cystatins, and the resulting lack of inhibition of cathepsins B and H, is a major contributor to the conserved role of those molecules in hard tick feeding success on the vertebrate host. The high identity of the two cystatins at the aa level suggests that the corresponding genes resulted from a relatively recent duplication event. The question arises of why such an event was fixed in the genome. Both inhibitors target the same proteases, namely cathepsins L, V, S, and C, but on the other hand, they differ in their affinity for cathepsins S and C. Additionally, antisera produced against the two proteins were not completely cross-reactive. Furthermore, we uncovered very large differences in their transcriptional regulation; sialostatin L2 transcripts rapidly and constantly accumulate as feeding progresses. Given this transcriptional induction of the sialostatin L2 gene, there is possibly an enhancement of the inhibitory activity of saliva against cathepsins L, V, C, and S as feeding on the host continues, assuming that transcript accumulation results in a corresponding increase of sialostatin L2 secretion from the salivary glands. Ticks can be considered clever pharmacologists (22), because adaptation to their natural vertebrate hosts has sculpted their saliva composition in such a way that the amount of each salivary constituent is sufficient to counteract any host action that would lead to tick rejection. What could be the reason for the salivary cystatins' target specificity? Cathepsins V, L, and S are efficient elastinolytic endopeptidases identified as secreted by macrophages during the onset of inflammation (23) and as major contributors to tissue damage under chronic inflammatory conditions (24). Elastic fibers are the key extracellular matrix components conferring elasticity to tissues such as blood vessels and skin. In the absence of salivary cystatins, proteolytic degradation of elastic fibers, resulting from the release of cathepsins in the initial steps of tick infestation, would destroy tissue elasticity and put the maintenance of the tick feeding cavity at high risk. This is the phenotype of the RNAi ticks: immediate rejection or failure to successfully accomplish a blood meal. This phenotype can be further explained by extensive work on the role of cathepsins L and S in antigen presentation/immunity (25,26). The absence of the immunosuppressive action of the cystatins during the first infestation (the genes were knocked down by RNAi) led to a much stronger primary immune response from the vertebrate host, as shown by the increase in the number of dead ticks and the signs of inflammation at their attachment sites. A subsequent boost of the same animal with a second tick infestation had detrimental consequences for tick feeding, as shown by almost immediate tick rejection and stronger inflammatory responses at the sites of infestation. Previous work has shown the importance of anticoagulation in I. scapularis feeding success using an RNA interference approach (27).
In this study, in addition to confirming the value of the technique for gene function analysis in this non-model organism, we combine bioinformatics/genomics, biochemistry, and molecular biology to shed light on the mechanism of action of another key mediator in the tick strategy of accessing the bloodstream for a long time without triggering host reactions: saliva cystatins target a limited number of vertebrate cysteine proteases that play a pivotal role in vertebrate immunity. Therefore, they should be considered in the process of developing antiparasitic vaccines using a mixture of vector antigens. But before attempting an antibody-mediated inhibition of their detrimental effect on the host, we may first need to resolve their structure and reveal the aa essential for their interaction with the target proteases. In the era of the human genome, it is now clear that the group of human papain-like cysteine proteases numbers 11 members (26). Our current study presents the most complete (to our knowledge) analysis of cystatins vis-à-vis their target specificity, making them useful tools in the study of their target enzymes in various biologic phenomena. Moreover, extensive work involving transgenic mice that lack the corresponding gene(s) has shown the implication of cathepsins L and S in various pathologic conditions, including atherosclerosis and cancer (28,29). It is the unique and stringent specificity of sialostatins L and L2 that could potentially provide a solid basis for future pharmaceutical applications against those diseases.
8,548.4
2007-10-05T00:00:00.000
[ "Biology" ]
What Remote PPG Oximetry Tells Us about Pulsatile Volume? While pulse oximetry using remote photoplethysmography (rPPG) is used in medicine and consumer health, sound theoretical foundations for this methodology are not established. Similarly to traditional pulse oximetry, rPPG oximetry uses two wavelengths to calculate the tissue oxygenation using the so-called ratio-of-ratios, R. However, the relationship between R and tissue oxygenation has not been derived analytically. As such, rPPG oximetry relies mostly on empirical methods. This article aimed to build theoretical foundations for pulse oximetry in rPPG geometry. Using the perturbation approach in the diffuse approximation for light propagation in tissues, we obtained an explicit expression for the AC/DC ratio of the rPPG signal. Based on this ratio, the explicit expression for the "ratio-of-ratios" was obtained. We have simulated the dependence of the "ratio-of-ratios" on arterial blood saturation across a wide range (SaO2 = 70–100%) for several commonly used R/IR light sources (660/780, 660/840, 660/880, and 660/940 nm) and found that the obtained relationship can be modeled by linear functions with an extremely good fit (R^2 = 0.98–0.99) for all considered R/IR pairs. Moreover, the location of the pulsatile volume can be extracted from rPPG data. From experimental data, we found that the depth of blood pulsations in the human forehead can be estimated as 0.6 mm on the arterial side, which points to a papillary dermis/subpapillary vascular plexus origin of the pulsatile volume. Introduction While pulse oximetry is ubiquitous in medicine and consumer health, strict foundations for this methodology are not well established. As such, it relies mostly on empirical methods, which is particularly true for remote photoplethysmography (rPPG). The typical approach in pulse oximetry is to use two wavelengths and calculate the oxygenation using the so-called ratio-of-ratios, R = (AC1/DC1)/(AC2/DC2). Here, AC and DC refer to the amplitudes of the AC and DC components of the measured signal, and subscripts 1 and 2 refer to the different wavelengths. Typically, the wavelengths are selected in the red and near-infrared (NIR) ranges of the spectrum, where oxyhemoglobin and deoxyhemoglobin have very different absorptions. For example, 660 nm and 940 nm are commonly used in commercial pulse oximeters [1]. The dependence of peripheral oxygen saturation (SpO2) on the "ratio-of-ratios" R in pulse oximetry is typically modeled using the linear relationship SpO2 = C1 − C2·R (see, for example, [1]). Here, C1 and C2 are factors determined using a calibration procedure. The same approach is used in rPPG pulse oximetry. For example, Humphrey et al. [2] observed pulse waveforms at 760 and 880 nm on 10 healthy volunteers and concluded that they could be used to extract SpO2 remotely. Shao et al. [3] found that the use of orange (611 nm) and NIR (880 nm) light provides the best SNR for remote PPG pulse oximetry. Verkruysse et al.
[4] showed that a single universal calibration curve with acceptable spread between individuals could be achieved by using 660 nm (red) and 840 nm (NIR) light. Much effort is dedicated to using only the visible spectrum range for remote pulse oximetry, as it allows for much simpler setups, like smartphone cameras. For example, Moco and Verkruysse [5], in addition to red (675 nm) and NIR (840 nm), used green (580 nm) light as a potential substitute for NIR light. They extracted the arterial blood oxygen saturation (SpO2) of 46 healthy adults. They found that SpO2 can be calibrated under controlled conditions with red and green light, but the accuracy is lower than that of SpO2 estimated in the usual red-NIR window. However, we have identified a critical gap in the current knowledge. Namely, there is no sound analytical justification for the ability to extract tissue oxygenation in rPPG geometry. In particular, while the use of the "ratio-of-ratios" is typically explained using the Beer-Lambert model, oxygenation cannot be derived directly from physical and physiological considerations of light absorption in oxy- and deoxyhemoglobin based on the Beer-Lambert law [6]. In particular, the ability to extract oxygenation rests on the primary assumption that the optical pathlengths for both wavelengths are identical. While this may be a reasonable assumption for traditional (transmissive mode) pulse oximeters [7], it is definitely not the case in remote PPG, where pathlengths depend linearly on the penetration depths [8], which are very different for red and infrared light [9,10]. Therefore, in practice, even transmissive mode commercial pulse oximeters use an empirical relationship, where the relationship between R and SaO2 is determined experimentally for each type of pulse oximeter sensor by calibration. This lack of accurate models can be partially attributed to the fact that even the origin of the observed pulsations is far from clear [4]. There are multiple contradictory theories, from traditional volumetric (absorption-based) to scattering-based ones, which assume rouleau formation and disintegration during the cardiac cycle [11]. This uncertainty is particularly obvious for remote PPG (rPPG), which has a very shallow sampling depth. In particular, while contact pulse oximetry uses spatially resolved measurements, which sample tissues at different depths, rPPG is an imaging geometry that samples primarily the epidermis and the papillary dermis. This paper aims to lay the foundations for a theoretical framework that explains the possibility of extracting tissue blood oxygenation in rPPG geometry. The primary contribution of the article is the model, which directly links the peripheral oxygen saturation with the experimentally measurable value (the ratio-of-ratios) in rPPG geometry. In addition, we explored the possibility of assessing the origin of the pulsatile volume from experimental data. Materials and Methods Photoplethysmographic techniques assess changes in the blood volume caused by pulse propagation. We will refer to the variable blood volume in the microcirculation during the cardiac cycle as a pulsatile volume. More specifically, we will denote the excess blood volume over the diastolic volume as a pulsatile volume V.
Generally, V can be expressed as a volume per vessel (µm^3). In this case, the distribution of the pulsatile volumes can be characterized by a surface density ρ (1/mm^2). Alternatively, V can be expressed as a volume per unit of skin surface area (µm^3/mm^2). Moreover, V depends on time, V(t). However, we will skip the time dependence for compactness. Analytical Model Let us consider the following volumetric model of blood pulsations in rPPG geometry: a single pulsatile volume, V, is located at a certain depth, Z. This pulsatile volume impacts light propagation. To account for this impact, we can consider a tissue with diastolic blood distribution as a base-case scenario approximated by a homogeneous blood distribution. In addition to that, pulsatile blood volumes are present. As these volumes are small, we can consider them perturbations and follow the perturbation approach developed by Saiko et al. [12]. This approach can be summarized as follows: firstly, we find the light distribution for the homogeneous semi-infinite space. Then, we consider a light-absorbing heterogeneity with excessive absorption coefficient δµa and volume V, buried at some depth Z, as a perturbation. We represent this heterogeneity as a negative point source and look for the light distribution caused by this negative light source. The overall light distribution will be the sum of the homogeneous and point-source-induced contributions. In particular, if the inhomogeneity is located at (0,0,Z), then the fluence rate at any point on the surface of the tissue (here, we assume cylindrical coordinates) in the presence of a mismatched boundary can be found as in [12]. Here, φ(Z) is the fluence rate for the homogeneous medium (unperturbed solution), r is the distance on the surface of the tissue from the projection of the defect to the surface, and r10 is the coefficient of reflection of diffuse light at the boundary of tissue and air. Unlike [12], which focused on detecting a single or double inhomogeneity, we are interested in the combined effect of multiple pulsatile volumes distributed homogeneously in the dermis. Thus, we can change perspective, select an observation point (which, for convenience, can be at the origin of coordinates), and sum the contributions from all heterogeneities in the tissue to the fluence rate at this point. Here, ρ is the 2D inhomogeneity density in the plane. After fairly straightforward integration, applying the substitutions u = r^2, a + u → u, and v = √u, we obtain Equation (5). As the pulsatile volume V changes in time, Equation (5) is directly associated with the AC component of the rPPG signal. As such, we can try to estimate the AC/DC ratio, which is routinely used in PPG measurements. We notice that the DC component will have a homogeneous fluence rate at the surface, φ(0). Thus, the AC/DC ratio can be found as Equation (6). Here, φ(0) and φ(Z) are the homogeneous fluence rates at the surface and at the pulsatile volume's depth, accordingly. A light-absorbing defect is characterized by its excessive absorption coefficient δµa, maximum pulsatile volume V_max, depth Z, and surface density ρ. Wide Beam Diffuse Illumination The homogeneous fluence rate within the tissue depends on the illumination. For example, the homogeneous fluence rate can be found using a diffuse approximation for collimated and diffuse illumination.
For the practical case of wide beam diffuse illumination, φ(Z)/φ(0) = exp(−µeff·Z) (see, for example, [13]). Thus, for certain practical applications, Equation (6) can be simplified into Equation (7). Consequently, the "ratio-of-ratios", which is used for blood oxygen saturation calculations, can be written as Equation (8), where subindexes 1 and 2 refer to the different wavelengths. In the linear approximation in µeff·h, we obtain a much simpler expression, Equation (9). Note that R in Equations (8) and (9) explicitly (and quite strongly) depends on Z. Thus, Equations (8) and (9) can potentially be used to extract the depth of pulsations from experimental data. Simulations To verify the model, we performed simulations using Equation (8). The following parameters were used. Absorption In the absence of melanin, the absorption of the bloodless tissue can be modeled as the background absorption of human flesh [14]: µa = µa,fl, where µa,fl = 7.84 × 10^7 λ^(−3.255) (mm^−1). Here, the wavelength λ is measured in nm. In the presence of blood (dermis), the absorption of the dermis can be modeled as a combination of background-, oxyhemoglobin-, and deoxyhemoglobin-related absorption (Equation (10)). Here, c is the blood volume fraction in the dermis, SO2 is the tissue blood oxygen saturation, and HbO2 and RHb refer to oxyhemoglobin and deoxyhemoglobin, respectively. The absorption coefficients for oxyhemoglobin and deoxyhemoglobin are well known [15]. Blood typically occupies around 0.4% of the physical volume of the papillary dermis [16]. However, the tissue blood oxygen saturation will be an average between the arterial and venous compartments. Thus, Equation (10) can be rewritten taking into account the relative volumes of the arterial and venous compartments, νa and νv (νa + νv = 1), where SaO2 and SvO2 are the arterial and venous compartment oxygenations, respectively. Assuming no accumulation of blood in the papillary plexus, the relative blood volume will be approximately equal between compartments, νa = νv = 1/2, and the dermal absorption can be written accordingly. Scattering The reduced scattering coefficient for the dermis and epidermis also follows a power law [17], µ′s ∝ λ^(−k), with k = 1.3. With this power law, we can set a reference value at a particular wavelength and simulate its dependence on the wavelength. In particular, we can assign values at 633 nm [18] for the living epidermis (µ′s = 9 mm^−1) and the reticular dermis (µ′s = 5 mm^−1), which represent the bulk of the tissue in healthy epidermis and dermis, respectively. Assuming normal skin, where the stratum corneum is thin, we can ignore the presence of the stratum corneum. Thus, we can write µ′s = 2.2 × 10^4 λ^(−1.3). Other Parameters The refractive index of the tissue depends on the wavelength. However, this dependence is small, and we will ignore it in our calculations. As such, the refractive index was set to n = 1.42. The depth of the pulsatile volume was set in the papillary dermis (Z = 0.5 mm). The model depends on two variables: SaO2 and SvO2. To simplify interpretation, the assumption was made that a constant fraction of the oxygen (0.3) is extracted during gas exchange in the capillaries irrespective of the initial oxygenation. Thus, SvO2 = (1 − 0.3)·SaO2 = 0.7·SaO2. As a result, all obtained results depend on a single variable, SaO2.
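Before turning to the test scenarios, the forward model assembled above can be coded in a few dozen lines. The sketch below combines the flesh and blood absorption terms, the scattering power law, µeff from diffusion theory (µeff = sqrt(3µa(µa + µ′s))), and the exponential form of the ratio-of-ratios quoted in Appendix A. The hemoglobin extinction coefficients are rough placeholders (of the order of commonly tabulated values), and the slab thickness h of the pulsatile layer is an assumption rather than a value stated in the paper; accordingly, the fitted C1 and C2 land in the right ballpark but should not be expected to reproduce the paper's exact figures.

import numpy as np

EPS = {  # molar extinction coefficients, cm^-1/M (rough placeholders)
    "HbO2": {660: 320.0, 840: 1000.0},
    "RHb":  {660: 3227.0, 840: 710.0},
}
C_HB = 150.0 / 64500.0          # whole-blood hemoglobin concentration, mol/L
c_blood = 0.004                 # dermal blood volume fraction (0.4%)
Z = 0.5                         # depth of the pulsatile volume, mm
h = 0.2                         # pulsatile-layer thickness, mm (assumed)

def mu_a_blood(lam, so2):
    """Whole-blood absorption coefficient, mm^-1, at oxygenation so2."""
    eps = so2 * EPS["HbO2"][lam] + (1.0 - so2) * EPS["RHb"][lam]
    return 0.1 * np.log(10.0) * eps * C_HB      # cm^-1 -> mm^-1

def mu_eff(lam, sa_o2):
    """Effective attenuation from diffusion theory for the bulk dermis."""
    so2_mean = 0.5 * (sa_o2 + 0.7 * sa_o2)      # nu_a = nu_v = 1/2, SvO2 = 0.7*SaO2
    mu_a = 7.84e7 * lam**-3.255 + c_blood * mu_a_blood(lam, so2_mean)
    mu_s = 2.2e4 * lam**-1.3                    # reduced scattering, mm^-1
    return np.sqrt(3.0 * mu_a * (mu_a + mu_s))

def ratio_of_ratios(sa_o2, lam1=660, lam2=840, arterial=True):
    so2_pulse = sa_o2 if arterial else 0.7 * sa_o2   # scenario (a) vs scenario (b)
    m1, m2 = mu_eff(lam1, sa_o2), mu_eff(lam2, sa_o2)
    delta = mu_a_blood(lam1, so2_pulse) / mu_a_blood(lam2, so2_pulse)
    slab = (1.0 - np.exp(-2.0 * m1 * h)) / (1.0 - np.exp(-2.0 * m2 * h))
    return delta * slab * np.exp(-2.0 * (m1 - m2) * Z)

sa = np.linspace(0.70, 1.00, 31)
R = np.array([ratio_of_ratios(s) for s in sa])
slope, intercept = np.polyfit(R, 100.0 * sa, 1)     # SpO2(%) = C1 - C2*R
print(f"C1 = {intercept:.1f}, C2 = {-slope:.1f} (660/840 nm, arterial side)")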
Test Scenarios While it is typically assumed that the pulsatile volume is located on the arterial side, there is no definitive justification for this. While the arterial side is characterized by larger changes in blood pressure, it is also characterized by much smaller compliance than the venous compartment. Thus, it is hypothetically possible that the pulsatile volume is located in the venous compartment. Therefore, to understand the origin of the photoplethysmographic signal, it is necessary to estimate the depth of the pulsations and the compartment where these pulsations occur: arterial or venous. We have thus simulated two scenarios: (a) the pulsatile volume is located on the arterial side, and (b) the pulsatile volume is located on the venous side. In the first scenario, the pulsatile volume is characterized by the arterial oxygenation SaO2 (Equation (15)). In the second scenario, the pulsatile volume is characterized by the venous oxygenation SvO2 (Equation (16)). Results We have simulated the ratio-of-ratios R as a function of tissue arterial blood oxygenation (Equation (13) with Equation (14) as a constraint) for several commonly used infrared light wavelengths (780, 840, 880, and 940 nm), assuming a constant red light wavelength (λ1 = 660 nm). As described in Section 2.2.4, we have simulated two scenarios: (a) the pulsatile volume is on the arterial side, and (b) the pulsatile volume is on the venous side. They are depicted in Figure 1A,B, respectively. Note that the variables in Figure 1 are transposed in line with the common way of displaying these data (SpO2 as a function of the ratio-of-ratios). We also performed a linear fit of SpO2 as a function of the ratio-of-ratios. The results are also displayed in Figure 1. As we found that the linear function fits the simulated data well (R^2 = 0.98–0.99 for all considered R/IR pairs), we obtained explicit expressions for the parameters of this linear model, C1 and C2. Their derivation is provided in Appendix A. Discussion We derived an explicit analytical relationship connecting blood oxygenation with the ratio-of-ratios in rPPG geometry. We also fitted the obtained function with a linear function. Our first finding was that a linear function provides an extremely good fit (R^2 = 0.98–0.99) in the considered oxygenation range (SaO2 = 70–100%) for all considered R/IR pairs. We can compare our results with experimental data from the literature. In particular, Verkruysse et al. [4] calibrated rPPG oximetry with illumination at 660 and 840 nm by video recordings of the foreheads of 41 healthy adults subjected to normoxic, hypoxic, and low-temperature conditions. They obtained a fitting function SO2 = 118 − 45.9R. These values are very close to our out-of-the-box predictions for C1 and C2 for the 660/840 pair, 119.1 and 41.8, respectively (see Figure 1A). We have analyzed the sensitivity of C1 and C2 to the model parameters. The model is insensitive to changes in the volume blood fraction c and the refractive index n. However, it is very sensitive to the depth of the pulsatile volume Z. In particular, for Z = 0.6 mm, we obtained 119.2 and 45.2 for C1 and C2, respectively, which is very close to the values estimated by Verkruysse et al. [4]. Depending on the epidermis thickness, this depth can be in the papillary dermis, the subpapillary vascular plexus, or the upper portion of the reticular dermis. However, we can estimate the location more accurately, as the measurements were taken from the forehead. For example, Jeong et al. [19] found that the epidermal thickness on the forehead is 0.334 ± 0.157 mm. As the thickness of the papillary dermis is approximately 300–400 µm [20], we can conclude that the pulsatile volume is located in the lower part of the papillary dermis or the subpapillary vascular plexus. Another conclusion from this experimental comparison is that the pulsatile volume most likely resides on the arterial side. In particular, a venous origin would be characterized by a much smaller coefficient C2 (see Figure 1B vs. Figure 1A). Based on these estimations, the pulsatile volume is located in the arterial compartment of the lower part of the papillary dermis or the subpapillary vascular plexus. Potentially, it can be an upper portion of the ascending arterioles. However, a subpapillary vascular plexus origin is much more likely, as it most likely has the largest compliance.
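The depth estimate just described can be reproduced with a back-of-envelope inversion. Since R carries the factor exp(−2(µeff,1 − µeff,2)Z), the calibration slope scales as C2(Z) = C2(Z0)·exp(2∆µeff(Z − Z0)). Using only the two model points reported above (C2 = 41.8 at Z = 0.5 mm and C2 = 45.2 at Z = 0.6 mm) to fix ∆µeff, the experimental slope of Verkruysse et al. can be inverted for the depth. This is a consistency check under the stated exponential assumption, not a substitute for the full model.

import math

z0, c2_z0 = 0.5, 41.8           # model point: C2 at Z = 0.5 mm (from the text)
z1, c2_z1 = 0.6, 45.2           # model point: C2 at Z = 0.6 mm (from the text)
d_mu = math.log(c2_z1 / c2_z0) / (2.0 * (z1 - z0))   # mu_eff,1 - mu_eff,2, mm^-1

def depth_from_slope(c2_exp):
    """Invert C2(Z) = C2(Z0)*exp(2*d_mu*(Z - Z0)) for the pulsation depth, mm."""
    return z0 + math.log(c2_exp / c2_z0) / (2.0 * d_mu)

print(f"d_mu = {d_mu:.2f} mm^-1")
print(f"Z for C2 = 45.9: {depth_from_slope(45.9):.2f} mm")   # ~0.6 mm, as in the text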
The shortcoming of the proposed model is that it does not account for the impact of the stratum corneum and skin tone. In particular, there is growing evidence that skin tone may impact the accuracy of pulse oximetry measurements [21]. Similarly, the stratum corneum thickness may impact the accuracy of pulse oximetry measurements. In particular, while the thickness of the dermis is quite uniform across all body parts, the epidermis thickness varies between glabrous and non-glabrous skin [22], primarily due to the differences in stratum corneum thickness. Moreover, the stratum corneum thickness can be even higher in corns and calluses. However, we excluded epidermis effects (skin tone and stratum corneum thickness) from consideration, as a semi-quantitative analysis shows that their impact is immaterial. In particular, we can write the ratio-of-ratios as R = (∆I1/I1)/(∆I2/I2). Here, I1 and I2 refer to the intensities of the red and infrared light on the sensor, and the operator ∆ refers to changes between systole and diastole. Then, using the Beer-Lambert law for multilayer tissue (I = I0·exp(−∑i µa,i Li); here, Li is the pathlength in the i-th layer), we can write, to first order, R = (∑i ∆µa,1,i L1,i + ∑i µa,1,i ∆L1,i)/(∑i ∆µa,2,i L2,i + ∑i µa,2,i ∆L2,i) (Equation (17)). Here, µa,1,i, µa,2,i and ∆µa,1,i, ∆µa,2,i are the absorption coefficients for the red and IR wavelengths in the i-th layer and the changes in the absorption coefficient between diastole and systole for the red and IR wavelengths in the i-th layer, accordingly. Similarly, L1,i, L2,i and ∆L1,i, ∆L2,i are the mean optical paths for the red and IR wavelengths in the i-th layer and the changes in the mean optical path between diastole and systole for the red and IR wavelengths in the i-th layer, accordingly. Thus, the mean optical paths (MOP) are MOP1 = ∑L1,i and MOP2 = ∑L2,i. From Equation (17), we can see two types of contributions. In the first term in the numerator and the denominator, one can expect that only tissue layers where ∆µa,1,i and ∆µa,2,i are non-zero (the pulsatile volume) contribute to R. The second term in the numerator and the denominator accounts for the change in the mean optical path between diastole and systole. As the surface layers (the stratum corneum and the living epidermis, where melanin is synthesized) do not contribute to the first terms and are unlikely to have a significant contribution to the second terms, one would expect that the surface layers should not provide a meaningful contribution to Equation (17). Thus, they should not have a noticeable impact on the ratio-of-ratios R. However, this does not hold true if the surface layer significantly impacts the passage of light to the dermis (either strong scattering in a thick stratum corneum or strong absorption in the melanin layer). In this case, the light does not sample the pulsatile volume, and pulse oxygenation cannot be estimated at all. Alternatively, the signal is so weak (and noisy) that accurate oxygenation estimation is impossible. For example, it was estimated that oxygenation cannot be accurately extracted for calluses thicker than 1.5 mm [23]. Another effect of the thickening of the epidermis layer is an increase in the pulsatile volume depth, Z, and a corresponding increase in the C2 coefficient (see Equation (A6) for details). Thus, once calibrated, rPPG oxygenation measurements should be taken from skin areas with similar epidermal thicknesses. Considering these corner cases, we can conclude that the epidermis should not impact our analysis for light skin tones and a not very thick stratum corneum (e.g., non-glabrous skin). However, in the general case, a thorough analysis is required.
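The layer-by-layer argument above can be checked numerically with the linearized form of Equation (17): only layers whose absorption changes between systole and diastole enter the first terms, so a static melanin-bearing layer drops out of R. In the sketch below, all absorption coefficients, pathlengths, and the aggregate path-change terms are invented placeholders chosen only to exercise the formula.

layers = {   # per layer: (mu_a_red, mu_a_ir, d_mu_red, d_mu_ir, L_red, L_ir)
    "stratum_corneum": (0.02, 0.015, 0.0,   0.0,   0.3, 0.3),
    "epidermis":       (0.50, 0.30,  0.0,   0.0,   0.4, 0.4),  # melanin, static
    "dermis":          (0.06, 0.03,  0.004, 0.002, 3.0, 4.0),  # pulsatile layer
}

def ratio_of_ratios(tissue, dpath_red=0.0, dpath_ir=0.0):
    """Linearized Equation (17); dpath_* stand in for the sum of mu_a*dL terms."""
    num = sum(dm1 * L1 for (_, _, dm1, _, L1, _) in tissue.values()) + dpath_red
    den = sum(dm2 * L2 for (_, _, _, dm2, _, L2) in tissue.values()) + dpath_ir
    return num / den

base = ratio_of_ratios(layers)
layers["epidermis"] = (5.0, 3.0, 0.0, 0.0, 0.4, 0.4)   # 10x more melanin, still static
print(base, ratio_of_ratios(layers))   # identical: static layers cancel out of R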
In summary, we can conclude that our simple estimation gave quite realistic values. However, further analysis of the model's sensitivity to the different model parameters is required. In future work, we also plan to validate our results with Monte Carlo simulations, which are the de facto gold standard in tissue optics.

Funding: G.S. is thankful to the NSERC I2I (I2IPJ 586883-23) and NSERC Discovery (RGPIN-2023-03933) grants for financial support. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The original contributions presented in the study are included in the article. Conflicts of Interest: The authors declare no conflicts of interest.

Appendix A

We can explicitly obtain the model parameters C1 and C2 of the linear fit model (see Equation (2)) from the developed model. To do so, we insert the expressions for the excessive absorption (Equation (15)) into Equation (8), which yields a factor of the form

[(1 − exp(−2μ_eff,1 h₁)) / (1 − exp(−2μ_eff,2 h₂))] exp(−2(μ_eff,1 − μ_eff,2)Z);

subindexes 1 and 2, as usual, refer to the different wavelengths. Here, based on our findings, we assumed that the pulsatile volume resides on the arterial side.

Wavelengths in pulse oximetry are selected in a particular way. Namely, in the red range, the deoxyhemoglobin absorption is several orders of magnitude larger than that of oxyhemoglobin (and of the tissue). Similarly, in the infrared range, the oxyhemoglobin absorption is much higher than that of deoxyhemoglobin and the tissue. Thus, extracting the leading factors from the numerator and denominator gives R as an explicit function of SaO2. Note the strong dependence of C2 on Z. This dependence can be used to extract the pulsatile volume depth from experimental data.

Figure 1. Tissue arterial blood oxygenation as a function of the ratio-of-ratios R for a constant red light wavelength (λ₁ = 660 nm) and several infrared light wavelengths (780, 840, 880, and 940 nm). (A) The pulsatile volume is on the arterial side, and (B) the pulsatile volume is on the venous side. The fitting functions are displayed for each R/IR pair. Both oxygenation (SpO2) and the ratio-of-ratios (R) are dimensionless. Oxygenation (SpO2) ranges from 0 to 1 and can be converted to the typical clinical presentation (%) by multiplying by 100.
Dynamics and optimal control of a stochastic coronavirus (COVID-19) epidemic model with diffusion

In view of the facts of the infection and propagation of COVID-19, a stochastic reaction–diffusion epidemic model is presented to analyse and control this infectious disease. The stationary distribution and Turing instability of this model are discussed to derive sufficient criteria for the persistence and extinction of the disease. Furthermore, the amplitude equations are derived by using Taylor series expansion and weakly nonlinear analysis, and the selection of Turing patterns for this model can be determined. In addition, the optimal quarantine control problem for reducing control cost is studied, and the differences between the two models are compared. By applying optimal control theory, the existence and uniqueness of the optimal control and of the optimal solution are obtained. Finally, these results are verified and illustrated by numerical simulation.

Introduction

By the end of 2019, a new virus infection named COVID-19 (SARS-CoV-2) was recorded in China [1-3]. The symptoms of infected individuals are respiratory problems, fever, dry cough, etc., and it seriously affects the lungs. The incubation period of this infectious disease is 3-14 days or longer [4]; the asymptomatic period is on average 3 days [5]. The evolution that followed the outbreak indicated that the world's mechanisms for preventing the transmission of COVID-19 and quarantining cases were considerably limited, and almost all areas have suffered from its serious impact. Epidemic models are often used to forecast and control the spread of diseases as much as possible, so that the relevant government departments can prepare in advance and make the necessary decisions. As early as the eighteenth century, work on constructing mathematical models [6] of epidemiology had begun. From Bernoulli [7] at that time to Kermack and McKendrick [8] more than 100 years later, many changes have taken place in the models, and now most of the models used in epidemic research are based on the latter. These models, which constitute a set of nonlinear ordinary differential equations, are called compartment models. As is well known, classical differential equations are often used to analyse and study infectious diseases, such as the SI [9], SIS [10], SIR [11], SIRC [12], and SEIR [13] models. Since the discovery of COVID-19, many models have been constructed to describe and study its dynamics [14-16]. With the increase of real data and available information, the models for the COVID-19 pandemic have also developed; what followed were increasingly complex epidemiological models [17-20]. Sun et al. [21] proposed an SEIQR model in light of the influences of lockdown and medical resources on the propagation of COVID-19. Zhang et al. [22] considered the threshold of a stochastic SIQS epidemic model with varying total population and its corresponding deterministic epidemic model. Zhang et al. [23] constructed an SIQRS model on networks and investigated the related optimal control problems. Tang et al. [24] studied the effects of isolation and quarantine on the tendency of this novel coronavirus-caused pneumonia in China. The present novel coronavirus (SARS-CoV-2) infection has spread all over the world on a large scale, but there is no specific effective vaccine or anti-viral medicine for this infection. The UK, which has made rapid progress in COVID-19 vaccine trials, has only approved the use of the vaccine in emergency situations [25].
There are still many questions about the effect when it is promoted to millions of people. Thus, at present, the most effective method is still early detection and isolation treatment. This approach has been practiced forcefully in China. In the face of a sudden epidemic, China built Fangcang shelter hospitals [26], which are large-scale, extemporaneous hospitals, for the first time to cope with it in February 2020. Existing public places, for example exhibition centres and gymnasiums, were transformed into Fangcang shelter hospitals to quarantine patients with COVID-19 and prevent further infection. This measure reduced the incidence and maintained it at a very low level through strict social distancing and localized, targeted measures. China has generally controlled the propagation of COVID-19 by implementing these measures. Thus, quarantine treatment remains the most effective treatment until specific effective vaccines and drugs are developed. Although continuous control can effectively contain the epidemic of an infectious disease, it usually costs a lot of manpower and material resources. In order to achieve the control goal and reduce the control cost, optimal control is an effective method for better controlling the epidemic situation [27,28]. Research on COVID-19 is developing vigorously, and some meaningful results have been obtained in characterizing this infectious disease; for example, clinical studies have shown that the immune system's memory of the new coronavirus lingers for at least 6 months in most people [29]. Based on these facts and inspired by [30], a stochastic reaction–diffusion epidemic model is presented. Diffusion is introduced into it to better understand and analyse COVID-19; this model is elaborated in the next section.

The structure of this paper is as follows. In Sect. 2, the epidemic models studied in this paper are introduced in detail. In Sect. 3, the sufficient criteria for the persistence and extinction of the disease are derived. In Sect. 4, we treat this stochastic model by the given method and obtain the conditions under which the Turing instability arises. In Sect. 5, the amplitude equations for the Turing pattern are derived by using Taylor series expansion and weakly nonlinear analysis, and the stability of these equations is analysed, by which the selection of Turing patterns for this model can be determined. In Sect. 6, the optimal quarantine control problems of the stochastic model and of its corresponding deterministic model are studied; the existence and uniqueness of the optimal control and of the optimal solution are obtained by using optimal control theory. In Sect. 7, an approximation based on the solution of the deterministic model is used to solve the stochastic optimal control problem numerically, and these results are verified and illustrated by numerical simulations. Finally, some discussions and conclusions are given in Sect. 8.

The model

In 2020, Anwarud Din et al. [30] proposed a stochastic coronavirus (COVID-19) epidemic model, which consists of three stochastic differential equations. This model is based on stochastic theories and studies the transmission dynamics of the novel virus. A stochastic model is often better than a deterministic model at describing phenomena in nature, because a stochastic model has some inherent randomness, while a deterministic model is completely determined by the initial conditions and parameter values. In the past, most researchers have focused on the purely temporal development of epidemics.
However, many significant epidemiological behaviours are keenly impacted by space in the process of their transmission and development, due to the relevant characteristics of the transmission environment or other interactions. Spatial spread can also lead to strong spatial pattern changes, resulting in new phenomena. Subjects related to the spatial variation in disease risk or incidence have attracted increasing attention [31]. When the population is distributed over different spatial locations, diffusion terms should be taken into consideration to accord with the actual situation of infectious diseases. Hence, on the basis of [30], we propose novel susceptible–infected–quarantined epidemic models with a spatial diffusion term that are more in line with the actual characteristics. Such a model can better capture the spatial and temporal transmission laws of population epidemics and improve awareness of the epidemiological characteristics of the population. According to the characteristics of COVID-19, the following assumptions are made; the state variables and parameters in the model are nonnegative throughout this paper. (iii) The initially infected individuals move to the quarantined class. (iv) Once the infection is confirmed, the quarantined individuals are moved back to the infected class.

Based on the above assumptions (i)-(iv), we propose the following model, where S denotes the susceptible individuals, I the infected individuals, and Q the quarantined individuals, and the Rᵢ are Lipschitz-continuous functions. Depending on the problem studied, the specific forms of the Rᵢ differ. In the problems investigated in this paper, they take two forms: (H1) the environmental influence on the individuals, described by stochastic perturbations [32]; and (H2) stochastic perturbations around the equilibrium state, i.e., around the equilibrium point of (2.1) after removing the diffusion and stochastic terms [33]. The difference between (2.2) and the model from [30] is the form of the incidence rate: one is the bilinear incidence rate βSI, and the other is the standard incidence rate βSI/N. If η₁ = η₂ = η₃ = 0, then (2.2) reduces to the underlying deterministic model. With regard to this deterministic system, we give the following results according to [34]. When there is no disease, the population size N(t) approaches the carrying capacity A/μ₀. The solutions of the underlying deterministic model always remain within a region Γ₁ ⊂ R³₊. The basic reproduction number (or threshold) R₀ of the underlying deterministic model corresponding to (2.2) is determined by using the next-generation matrix method [35]. If R₀ ≤ 1, there exists a unique equilibrium point, namely the disease-free equilibrium (S**, I**, Q**) = (A/μ₀, 0, 0). If R₀ > 1, there are two equilibrium points, a disease-free equilibrium point and a positive endemic equilibrium point (S*, I*, Q*). Next, some related results from the theory of stochastic differential equations [36] are given. For some n ∈ N, some x₀ ∈ Rⁿ, and an n-dimensional Brownian motion B(t), the general n-dimensional SDE can be written in the standard form

dX(t) = f(X(t), t) dt + g(X(t), t) dB(t), X(0) = x₀. (2.4)

Define the differential operator L related to (2.4); acting on a function V ∈ C^{2,1}(Rⁿ × [t₀, ∞); R₊), it reads

LV(x, t) = V_t(x, t) + V_x(x, t) f(x, t) + (1/2) trace[gᵀ(x, t) V_xx(x, t) g(x, t)].

Then, the following two theorems show the existence of a unique positive global solution of the stochastic model (2.2) with forms H1 and H2.

Proof The essence of the proof of this theorem is the same as that of Theorem 2.1.
The specific proof process is omitted here; only the differences with respect to (2.8) in the proof of Theorem 2.1 are pointed out, and (2.9) then corresponds to an analogous formula. The remaining proof process is the same as for the previous theorem. Therefore, this theorem shows that the solution process of system (2.2) with form H2 is positive and global.

Remark 2.2 In the following sections, we mainly study two kinds of problems. One is the impact of the stochastic fluctuation of the environment on the existence and extinction of the disease, together with the related optimal quarantine control problems, i.e., the stochastic perturbations in (2.2) take form H1. The other is the influence of stochastic perturbations around the equilibrium state on pattern formation, i.e., the stochastic perturbations in (2.1) take form H2.

Extinction and stationary distribution criteria

In this section, we investigate the conditions for the extinction of this disease and for the existence of a stationary distribution; some notation is fixed first for convenience. In order to prove the extinction and stationary distribution criteria, the following lemmas are needed.

Lemma 3.2 Assume that (S(t), I(t), Q(t)) is a solution of (2.2) with form H1 with given initial values.

Proof From (2.2) with form H1, we obtain the corresponding integral identity; solving this equation, M(t) is obviously a continuous local martingale with M(0) = 0 for all t ≥ 0. It is clear that A(t) and Q(t) are continuous adapted increasing processes on t ≥ 0 with A(0) = Q(0). By Theorem 1.3.9 in [36], we obtain the required limit, and the stated results then follow from the quadratic variations [36,37].

Lemma 3.3 Suppose that X(t) is a regular Markov process in Rⁿ₊ whose dynamics and diffusion matrix are given as above. The Markov process X(t) has a unique ergodic stationary distribution π(·) if there exists a bounded domain D ⊂ R^d with regular boundary such that (i) there is a positive number M bounding the smallest eigenvalue of the diffusion matrix from below on D, and (ii) there is a nonnegative C²-function V such that LV is negative outside D; time averages of any function F(·) integrable with respect to the measure π then converge to its π-average.

In order to establish the extinction of the disease, the following result can be obtained: I(t) approaches zero exponentially a.s., i.e., the infection of COVID-19 will die out from the community with probability 1.

Proof We integrate (2.2) with form H1 directly and apply the Itô formula to the second equation of (2.2). Integrating relation (3.1) from 0 to t and applying the strong law of large numbers in [36], we take the limit superior on both sides of (3.2). Adding the two sides of each equation in (3.3), formula (3.5) is obtained by calculation. According to the third equation in (3.3), based on Lemma 3.2, and using (3.5) together with Lemma 3.2, the stated limits follow.

In order to establish the stationary distribution of the disease, we obtain the following result.

Proof This theorem is mainly proved by Lemma 3.3. Firstly, the diffusion matrix of (2.2) with form H1 is written down. Secondly, the key step is to construct a nonnegative C²-function V*: R³₊ → R₊. We first construct V₁, where c₁, c₂, c₃ are positive constants to be determined later. It can be shown by calculation that the Hessian matrix of V₁ at its stationary point is positive definite, so V₁(S, I, Q) attains a minimum there; hence V₁ has one unique minimum value inside R³₊.
Next, we construct the nonnegative C²-function V*. We consider V₂ by using (2.2) and (2.5), from which the corresponding formula is obtained. In the set R³₊ \ D, we can choose sufficiently small δᵢ > 0 (i = 1, ..., 6) such that the required conditions hold, where K₁ is a positive constant which can be determined at a later stage [37]. Next, we prove that LV*(S, I, Q) < 0 on R³₊ \ D: (i) if (S, I, Q) ∈ D₁, then through (3.10) the required bound is obtained; (ii) if (S, I, Q) ∈ D₂, then through (3.10), selecting c₃ > 0 large enough and the remaining constants small enough, the bound again follows. Thus, under the above suitable conditions, LV* < 0 for all (S, I, Q) ∈ R³₊ \ D, so condition (ii) in Lemma 3.3 holds. In conclusion, through Lemma 3.3, system (2.2) with form H1 is ergodic and has one and only one stationary distribution. This is further verified in the later numerical simulations (Figs. 2, 3).

Turing instability

We now consider the equilibrium point of the underlying deterministic model corresponding to (2.2), i.e., the form H2 of the stochastic perturbations mentioned above. This means we consider a white-noise stochastic perturbation around the equilibrium state of the underlying deterministic model corresponding to (2.2), using the relationship between the white noise ξ(t) and the Brownian motion, where σ₀ is the variance of the Gaussian distribution involved. When R₀ > 1, (2.1) has two equilibrium points, i.e., (S**, I**, Q**) and (S*, I*, Q*). In this section, the Turing instability [38] of the positive equilibrium of (2.1) with form H2 is studied, i.e., white-noise perturbations around the endemic equilibrium state. In the following, the positive equilibrium point (S*, I*, Q*) is denoted (S₀, I₀, Q₀), and zero-flux boundary conditions are imposed, where n is the outward normal, (x, y) ∈ ∂Ω and Ω is the spatial domain. Next, linearising (2.1) with form H2 around (S₀, I₀, Q₀), which depends on time and space [39], we expand the stochastic terms by a Taylor expansion and keep the linear terms. The system governing the dynamics of the perturbation P is then defined by its coefficient matrix. We assume P takes the form of plane waves, where k·k = k², k is the wave number, r = (x, y) is the spatial vector in two dimensions, and i is the imaginary unit, i² = −1; we then obtain the characteristic matrix and the corresponding characteristic equation. According to the Routh–Hurwitz criterion, all the eigenvalues have negative real parts if and only if the corresponding conditions hold. Turing instability occurs if the equilibrium point (S₀, I₀, Q₀) is stable without diffusion but is driven unstable by diffusion, i.e., for certain values of k (> 0). It is clear that (S₀, I₀, Q₀) is locally asymptotically stable without diffusion if and only if the three conditions in (4.8) hold. Therefore, if at least one of the three conditions in (4.8) fails in the presence of diffusion, Turing instability occurs. So, according to (4.7), we let

p₀(k²) = g₃k⁶ + g₂k⁴ + g₁k² + g₀,

where g₃ > 0 and g₀ > 0. If we want to find some real number k² (> 0) such that the value of p₀ is negative, then min p₀(k²) < 0 must hold, where the minimizing k²_c is real and positive if

g₁ < 0, or g₂ < 0 and 3g₁g₃ < g₂². (4.12)

Therefore, (4.13) follows. The conditions (4.12) and (4.14) are sufficient for the occurrence of Turing instability with noise, and conditions (4.9), (4.12) and (4.14) together represent the analytical Turing space in the parametric space of model (4.1).
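Numerically, the Turing-space check above amounts to asking whether the cubic p₀(k²) = g₃k⁶ + g₂k⁴ + g₁k² + g₀ dips below zero for some k² > 0. A minimal Python sketch of this check follows; the coefficient values passed in are hypothetical, since the actual gᵢ depend on the model parameters through (4.7).

```python
import numpy as np

def turing_unstable(g3, g2, g1, g0, k2_max=50.0, n=20000):
    """Check whether p0(k^2) = g3 k^6 + g2 k^4 + g1 k^2 + g0 turns negative
    for some k^2 > 0, i.e. whether diffusion destabilizes the equilibrium.
    Assumes g3 > 0 and g0 > 0, as required in the text."""
    k2 = np.linspace(0.0, k2_max, n)
    p0 = g3 * k2**3 + g2 * k2**2 + g1 * k2 + g0
    i = int(np.argmin(p0))
    return p0[i] < 0, k2[i], p0[i]

# Necessary condition from (4.12): g1 < 0, or g2 < 0 with 3*g1*g3 < g2**2.
unstable, k2_c, p0_min = turing_unstable(g3=1.0, g2=0.5, g1=-4.0, g0=2.0)  # hypothetical coefficients
print(f"Turing instability: {unstable}, critical k^2 ~ {k2_c:.2f}, min p0 = {p0_min:.2f}")
```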
Thus, from the perspective of diffusion, we must do our best to control the movement of infectious individuals to avoid another outbreak of COVID-19.

Amplitude equations for Turing patterns

The explicit expression of the amplitude equations for Turing patterns plays an important role in pattern selection theory. From the point of view of epidemiology, Turing patterns may give way to the homogeneous steady state in the spatial domain when the control parameter or the diffusion parameter is changed. In this section, we choose r₁ as the control parameter and use multiple-scale analysis to derive the amplitude equations. We utilise the Taylor series expansion to expand the stochastic terms of (2.1) with form H2 at (S₀, I₀, Q₀), truncating the expansion at third order, since higher orders have no effect on the amplitude equations. In order to obtain the amplitude equations, we write (2.1) with form H2 around the equilibrium point. In the following, we use multiple-scale analysis to derive the amplitude equations with wave vectors k_j (j = 1, 2, 3); these are the mode directions of the Turing patterns, pairwise separated by an angle of 2π/3, which satisfy |k_j| = k_c and k₁ + k₂ + k₃ = 0. Near the critical point, the solution of (5.1) can be expanded accordingly, where P is the eigenvector of the linearised operator, K_j are the amplitudes associated with the modes k_j (j = 1, 2, 3), and c.c. denotes the complex conjugate. We then rewrite (5.1) in operator form and expand around the Turing threshold constant r₁c, where K is the amplitude. The operator L at the point r₁ = r₁c can be expanded correspondingly. Substituting the above formulas into (5.2) and collecting terms of different orders, three equations are obtained. Solving the first formula of (5.11) gives W_j, termed the amplitude of the mode e^{ik_j·r}, whose form is determined by the perturbational terms of higher order. The Fredholm solvability condition is then used to determine whether the second formula of (5.11) has a nontrivial solution. Next, consider the operator L⁺_c, the adjoint operator of L_c, and the zero eigenvectors of L⁺_c. Substituting (5.12) into the second formula of (5.11), we obtain (5.14), where u₁ = −2βξ₁, u₂ = 2βξ₁, and j, l, m = 1, 2, 3 with j ≠ l, m and l ≠ m. According to the Fredholm solvability condition, the vector function on the right-hand side of (5.14) must be orthogonal to the zero eigenvectors of the operator L⁺_c to ensure the existence of a nontrivial solution of this equation; comparing the coefficients of e^{ik_j·r}, we obtain (5.16). Substituting (5.16) into the second formula of (5.11) yields (5.17) for j ≠ k; j, k = 1, 2, 3. Substituting (5.12), (5.16) and (5.17) into the third formula of (5.11), and then utilizing the Fredholm solvability condition again, we obtain the next relation. Therefore, K_j (j = 1, 2, 3) can be expanded as in (5.21). In summary, we obtain the amplitude equations (5.22) from (5.7). Next, the stability of (5.22) is analysed for the subsequent numerical simulation and analysis. Each amplitude in (5.22) can be decomposed into a mode ρ_j = |K_j| and a corresponding phase angle φ_j.
Then, substituting K_j = ρ_j e^{iφ_j} (j = 1, 2, 3) into (5.22) and separating the real and imaginary parts, with φ = φ₁ + φ₂ + φ₃, we obtain equations (5.24)-(5.25). For (5.24), the system is in a stationary state when φ = 0 or φ = π. For ρ_j ≥ 0, the solution with φ = 0 is stable when h₁ > 0, and the solution with φ = π is stable when h₁ < 0. If we only consider the stable solution, the equations (5.26) are obtained. By analysing (5.26), the following theorem is obtained.

Proof Substituting ρ_j = ρ̄_j + δρ_j (j = 1, 2, 3) into (5.26), ignoring higher-order terms, and relabelling the perturbations, we obtain the matrix (5.27). Therefore: (i) for the stationary state ρ₁ = ρ₂ = ρ₃ = 0, according to (5.27), the stationary solution is stable for r̃₁ < r̃₁₁ = 0, and unstable otherwise.

Optimal quarantine control

The spread of infectious diseases can be effectively suppressed under continuous, high-intensity quarantine control. However, in practice such control is often difficult to implement fully, owing to the costs of isolation, treatment and transportation, the allocation of medical resources, and even people's psychological state. Applying time-varying optimal control theory [28] to the epidemic can achieve the desired control objectives while reducing the related control costs to a certain extent. In this section, the optimal control problem for (2.2) with form H1 is studied. If η₁ = η₂ = η₃ = 0, then the model (2.2) with form H1 reduces to the underlying deterministic model. Let us first study the deterministic optimal control problem. Define the time-varying quarantine control variable r₁(·) ∈ U_ad = {ζ(t) : ζ(t) is measurable, 0 ≤ ζ(t) ≤ r̄₁, t ∈ [0, T]}, where 0 ≤ r̄₁ ≤ 1 and T > 0 is the terminal time corresponding to the actual needs. Then, we have the deterministic model (6.1).

Problem 1 In view of our control objectives, which are to decrease the prevalence of the epidemic and to balance the control strengths, the objective function is defined by (6.3), subject to (6.1) and S(0) = S₀ ≥ 0, I(0) = I₀ ≥ 0, Q(0) = Q₀ ≥ 0; under these conditions, minimize the objective function (6.3).

Next, Problem 1 is solved by applying Pontryagin's minimum principle [40]. We construct the Hamiltonian function H for this problem as in (6.4), where λ₁(t), λ₂(t) and λ₃(t) are the introduced Lagrange multipliers. Pontryagin's minimum principle transforms Problem 1 into minimizing the Hamiltonian with respect to the control at each time t, and the following result is obtained for the adjoint variables λᵢ (i = 1, 2, 3) and the optimal quarantine rate.

Proof From the convexity of the Hamiltonian with respect to r₁(t), the existence of a solution follows easily. The partial derivatives of the function H with respect to S, I and Q give the adjoint equations for λ₁(t), λ₂(t) and λ₃(t), which verifies (6.5). We now calculate the optimal quarantine rate r*₁(t). For a fixed value of t, on the basis of Pontryagin's minimum principle, r*₁(t) must satisfy the stationarity condition on the interval, and the optimal control r*₁(t) is thereby worked out, using the corresponding inequality for all r₁ ∈ [0, r̄₁].

Remark 6.1 In view of the Lipschitz structure of the above deterministic optimality system and the boundedness of the state and adjoint variables, we can determine the uniqueness of the solution.
The uniqueness of the optimal control can then also be guaranteed by the theory in Fister et al. [41]. Next, let us study the stochastic optimal control problem for (2.2); our objective is to seek an optimal quarantine rate r*₁ that minimizes the corresponding objective function, where x₀ is an initial state and the expectation is conditional on the initial state of the system, i.e., at time t = 0. As in the deterministic problem studied above, a fixed constant r̄₁ ≤ 1 with r₁(t) ≤ r̄₁ (a.s.) is assumed. Then, we define the admissible control set as

A = {r₁ : r₁ is adapted, and 0 ≤ r₁ ≤ r̄₁ a.s.}. (6.9)

We define the performance criterion (6.10) for this stochastic control problem, where the expectation depends on the state of the system, and we define the value function (6.11), determining J : A → R₊ given by (6.11). Next, the stochastic optimal control problem is posed and solved.

Theorem 6.2 Problem 2, concerning the optimal quarantine control, has a solution of the clipped form r*₁(t) = min[max(0, ·), r̄₁] given in (6.14).

Proof In order to determine (6.14) through the dynamic programming method, we need to calculate LU(t) by utilizing (2.5). According to Hamilton–Jacobi–Bellman theory [42], we need to solve (6.16). For this purpose, we proceed as in the argument for the corresponding deterministic problem; together with the bounds on r₁, the control r*₁(t) emerges.

Case study and numerical simulation

In this section, we present the relevant numerical simulations of the stochastic and deterministic models of the coronavirus. The numerical simulations in this section are divided into four parts. Firstly, the values of the relevant parameters in our proposed model are estimated by an indirect method based on the real COVID-19 data of China and the USA. Secondly, numerical simulations are used to show the difference between the stochastic system (2.2) and its corresponding deterministic system, and to verify the extinction and stationary distribution criteria. Thirdly, numerical simulations of the proposed spatial COVID-19 epidemic model with diffusion are made to test the stability conclusions. Lastly, numerical simulations are used to solve the optimality system numerically and to test the feasibility and effect of the proposed optimal control strategy.

To obtain the proposed model parameters based on the real COVID-19 data of China and the USA, we take some of the parameters from reports and the literature, and the rest of the parameters are obtained by fitting the model to epidemiological data using least-squares fitting, which provides minimized estimates of the needed parameters [43]. Here, we apply the least-squares method to the proposed model to obtain the best-fit parameters for China and the USA. The procedure looks for the set of initial guesses and pre-estimated parameters for the model whose solutions best fit or pass through all the data points [44], by reducing the sum of the squared differences between the observed data and the model solution. The Chinese authorities reported the new virus on January 4, 2020. From this period up to January 22, the statistics on the number of people contracting this disease were not comprehensive enough, and the relevant information is scarce. Since then, the infection has received more attention. We consider the real COVID-19 data of China from 22 January to 21 February 2020, obtained from worldometer [45]. According to [45,46], we consider the corresponding estimates and values.
In Fig. 1a, we fitted the proposed COVID-19 model to the epidemic data of China using least-squares fitting, and the relevant best-fit curve is shown. At the same time, unlike China with its stricter quarantine measures, some countries have weak or almost no quarantine control, which greatly increases the risk of infection. Take the USA as an example. According to the statistical data of the WHO [47], the outbreak in the USA occurred on March 4, 2020. According to [46,48], we consider the following estimates and values: N₂(0) = 331,000,000, S₂(0) = N₂(0) − 158, I₂(0) = 158, Q₂(0) = 16. In Fig. 1b, we fitted the proposed COVID-19 model to the epidemic data of the USA using least-squares fitting, and the relevant best-fit curve is shown. These fittings for China and the USA, i.e., Fig. 1a, b, show that our model fits the reported data points relatively well and captures the development tendency of the infected class. The resulting parameter estimates for China and the USA are given in Tables 1 and 2. From the fitting results, the development trends, and the estimated parameters in Tables 1 and 2, we can observe and summarize the differences between China and the USA in their response and treatment measures. The start time of the simulation and fitting in Fig. 1a, b is in each case the time of the outbreak or of the start of data recording in that country. It can be seen that, under the strong control and isolation measures in China, the number of infected people gradually stabilized after a period of time. In contrast, both the number of infected persons and the development trend reflected in Fig. 1b show a situation that is almost impossible to control. This is inseparable from the regulatory approach adopted by the government of the USA: apart from the isolation of patients in hospitals, people's travel, work and life were basically unrestricted, and protective measures such as wearing masks were not taken in daily activities. This greatly increases the probability of contact with susceptible persons during the incubation period and makes the incidence rate high.

In the rest of this section, we simulate and analyse the proposed model to compare the relevant quantities and to verify the results of the previous theoretical analysis. The simulations of the difference between the two systems and of the extinction and stationary distribution of the disease are obtained by the method in [51]; the biologically feasible parameters are set in two groups, corresponding to the extinction and the persistence of the disease, respectively. The first group is A = 0.3, β = 0.5, μ₀ = 0.2, μ₁ = 0.2, r₁ = 0.3, σ = 0.2, μ = 0.1. As is well known, the existence of a noise disturbance can change the evolution exhibited by the deterministic system. For this reason, we compare the two systems in Fig. 2a, b. We can observe from Fig. 2a, b that the random fluctuations can eradicate the infectious disease, i.e., the infection vanishes; but even in the case of extinction there will always be a susceptible population. In the deterministic system with isolation treatment measures, although the number of infected people is also reduced, the infection is not eliminated, and this takes longer. In Fig. 3a, b, it can be observed that susceptible, infected and quarantined individuals always persist. According to the given parameter values, we obtain R^S_0 ≈ 0.897 < 1 and R^S_1 ≈ 1.832 > 1, which satisfy the conditions for extinction and persistence, respectively. Thus, Figs. 2a and 3a verify the extinction and stationary distribution criteria.
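The following minimal Python sketch mirrors the least-squares procedure described above: a deterministic SIQ-type skeleton is integrated and its infected trajectory is matched to reported counts. The drift used here is an illustrative bilinear-incidence SIQ form assembled from the parameters named in the text (A, β, μ₀, μ₁, r₁, σ); it is an assumption rather than the paper's exact system, and the "observed" data are synthetic stand-ins for the worldometer counts.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def siq_rhs(y, t, A, beta, mu0, mu1, r1, sigma):
    # Illustrative deterministic SIQ drift with bilinear incidence; the paper's
    # exact right-hand side may differ, e.g. in the Q balance terms.
    S, I, Q = y
    dS = A - beta * S * I - mu0 * S
    dI = beta * S * I - (mu0 + mu1 + r1 + sigma) * I
    dQ = r1 * I - (mu0 + mu1) * Q
    return [dS, dI, dQ]

def residuals(theta, t_obs, I_obs, y0):
    sol = odeint(siq_rhs, y0, t_obs, args=tuple(theta))
    return sol[:, 1] - I_obs  # fit the infected trajectory to the case data

# Synthetic "reported" data standing in for the real case counts.
t_obs = np.linspace(0.0, 30.0, 31)
true = (0.3, 0.5, 0.2, 0.2, 0.3, 0.2)   # A, beta, mu0, mu1, r1, sigma
y0 = [0.7, 0.02, 0.0]
I_obs = odeint(siq_rhs, y0, t_obs, args=true)[:, 1]

fit = least_squares(residuals, x0=(0.2, 0.4, 0.1, 0.1, 0.2, 0.1),
                    args=(t_obs, I_obs, y0), bounds=(0.0, 1.0))
print("best-fit parameters:", np.round(fit.x, 3))
```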
Figure 4a shows that, within a reasonable range, no matter how I(0) changes, I(t) approaches zero exponentially. Now, we select the first set of parameters to illustrate the influence of the white-noise magnitude η₂ on this epidemic. For this purpose, we select η₂ = 0.25, 0.45, 0.75, with the other parameters unchanged. In Fig. 4b, we observe that the infected population decreases faster as the stochastic disturbance intensity increases, and all trajectories end up close to 0. Comparing the curves in Fig. 5a, b, we see that the quarantine measure decreases the number of infected individuals in both the stochastic system and the corresponding deterministic system, but in the stochastic system the effect is better because of the stochastic term. Next, we show how the quarantine rate r₁ and the noise intensity η₂ influence the thresholds R₀ and R^S_0. Figure 6 shows that R₀ decreases as the isolation rate r₁ increases, and there exists a critical value r⁰₁ ≈ 0.583: when r₁ > r⁰₁, R₀ < 1. In addition, Fig. 7 shows that R^S_0 decreases as the quarantine rate r₁ or the noise magnitude η₂ increases, and there is a critical noise magnitude η*₂: if η₂ > η*₂, then R^S_0 < 1. These numerical simulations show that a sufficiently large stochastic disturbance of the transmission rate can, to some extent, make this epidemic disease die out.

Next, we show the limit behaviour of system (2.2) with form H1 when S(t), I(t), Q(t) are perturbed by small noise. To this end, we assume that the magnitudes of the small noises are equal, i.e., all equal to ε (ε → 0). We consider the following groups of parameters, namely η₁ = η₂ = η₃ = 0.5, 0.3, 0.1, 0.05, 0.001, 0.0001, to show the intermediate cases of the transition from stochastic to deterministic, so as to analyse and exhibit the limit system. Not only representative parameters close to 0 are selected, but also some larger ones, which better reflect the asymptotic behaviour of the limit system. The other parameters used in this part of the simulations are those of the first group described earlier, i.e., A = 0.3, β = 0.5, μ₀ = 0.2, μ₁ = 0.2, r₁ = 0.3, σ = 0.2, μ = 0.1. In fact, the noise magnitude can reach three decimal places, which corresponds to ε → 0 in real life, i.e., only a very small noise disturbance or the limiting situation. Our smallest parameter is taken to four decimal places, so the numerical simulations are closer to the deterministic system, and this system can be used to study the limit properties more formally through ε → 0. As shown in Fig. 8, it is not difficult to find that as the values of ηᵢ (i = 1, 2, 3) tend to 0, the simulated trajectories of S(t), I(t), Q(t) approach those of the deterministic system shown in Fig. 2b. When 0.0001 is taken, the trend is basically the same as that of Fig. 2b, and the change of S(t), I(t), Q(t) with ηᵢ (i = 1, 2, 3) is also in line with the previous analysis and its practical significance.

Numerical simulations of the spatial COVID-19 epidemic model with diffusion are made to test the stability conclusions. We simulate the continuous spatial model with diffusion on a discrete region of M × N lattice points using the Euler method. Define the time step Δt and the lattice constant h between the lattice sites.
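A minimal sketch of one such lattice update is given below: the Laplacian is discretized on the M × N lattice with a five-point stencil, and the infected field is advanced by an explicit Euler step. The reaction term f_I is a hypothetical placeholder, and the diffusion coefficient is taken from the values quoted just below; the paper's full three-field system with noise would replace both.

```python
import numpy as np

def laplacian(U, h):
    # Five-point stencil on an M x N lattice with periodic wrap-around.
    return (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
            np.roll(U, 1, 1) + np.roll(U, -1, 1) - 4.0 * U) / h**2

def euler_step(I, dt, h, d, f_I):
    """One explicit Euler update of the infected field I:
    I <- I + dt * (d * Laplacian(I) + f_I(I))."""
    return I + dt * (d * laplacian(I, h) + f_I(I))

M = N = 200                        # lattice size used in the simulations
h, dt, d2 = 1.0, 0.01, 1.6         # d2 = one of the quoted diffusion coefficients
rng = np.random.default_rng(0)
I = 0.05 + 1e-3 * rng.standard_normal((M, N))   # small perturbation of the steady state

f_I = lambda I: 0.1 * I * (1.0 - I)             # hypothetical reaction term
for _ in range(1000):
    I = euler_step(I, dt, h, d2, f_I)
print("field range:", I.min(), I.max())
```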
Let r₁ be a varied parameter, d₁ = 4.8, d₂ = 1.6, d₃ = 0.8, M = N = 200, and let the other parameters be as above. We run the simulation until the characteristics and distribution of the simulated fields no longer appear to change, i.e., until a stable state is reached, and then stop and take the final image. In this section, the pattern formation of I is analysed by simulating the distribution of infected people. Figure 9a-d shows that the spatial distribution patterns of the infected class evolve from a small stochastic disturbance of the stationary solution in the spatially homogeneous state when the parameters lie in the analytical Turing space. As r₁ changes, the spatial pattern differs: the pattern transits from the hexagonal pattern (Fig. 9b) to the stripe pattern only (Fig. 9d), experiencing the coexistence of the two states in the process (Fig. 9c). When r₁ is moved into an appropriate range, stripe patterns prevail throughout the domain.

For the simulations of the optimal control of (2.2) and of its corresponding deterministic model, the parameters are set as A = 0.3, β = 0.5, μ₀ = 0.2, μ₁ = 0.2, σ = 0.2, μ = 0.1, c = 0.3, r̄₁ = 1, with S(0) = 0.7, I(0) = 0.02, Q(0) = 0 and terminal time T = 150. An iterative scheme based on the fourth-order Runge–Kutta method is utilized to solve the deterministic optimal control problem: beginning with an initial guess for the control based on the actual situation, we substitute it into the deterministic model (6.1) and solve for S, I, Q forward in time by the Runge–Kutta method. Then these variables and the current control are used to solve (6.5) with the transversality conditions backward in time by the same method, yielding a new control r₁. This process is repeated, and the algorithm terminates when the values of all related variables in the optimality system converge sufficiently [52,53]. Numerical simulations of the system, including the stochastic model corresponding to (6.1) coupled with the proxy adjoint system with transversality conditions and the characterization of the control variable r*₁(t) in Equation (6.14), are carried out using the forward-backward algorithm. The stochastic differential equations were first simulated using the fourth-order Runge–Kutta method, with noise introduced through the Euler–Maruyama method [54], and then the adjoint system (6.5) was simulated backward in time with the final conditions. In particular, we use a deterministic approximation as a proxy for (λ₂ − λ₃) in the calculation of r*₁(t) in this case. We note that this makes U(t) a stochastic variable because of the presence of I(t). To indicate the accuracy, effect and validity of the proposed optimal control strategy, the optimal control r*₁ and several constant controls are contrasted on the basis of the values of I(t) and of the objective function. Figures 10a and 11a show that the optimal control can keep the value of I(t) at a relatively low level compared to the seven constant controls (r₁ = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6). With the increase of quarantine control intensity, the overall level of the number of infected people, i.e., both the peak and the final value, decreases in both the deterministic and the stochastic system. Although low-intensity quarantine control makes the number of infected people reach a final stable trend and no longer increase, the number of infections is very high at this point, which would make COVID-19 persist in the long term.
On the contrary, high-intensity quarantine control delays the time needed to reach the final stable trend, but the final level of infection is relatively low, even achieving elimination. Figures 10b and 11b represent the control profiles of the optimal control for the corresponding models. It is further observed that the control profile of the optimal control for model (2.2) exhibits the same behaviour as the deterministic control profile. The optimal control should be kept as high as possible from the beginning of the control policy until the level of infection reaches a significantly stable low level; then the quarantine measures may be slightly relaxed. It is no surprise that the corresponding optimal control keeps its maximum value during almost the entire time period, after which the various restrictions can be reduced. It is worth noting that this approximation is desirable due to the expression for r*₁(t). Though the constant control r₁ = 0.5 seems to make the value of I(t) lower than the optimal control r*₁, what is more remarkable is that the optimal control minimizes the value of the objective function J, as Fig. 12 shows. Thus, the optimal control achieves the balance between the control objective and the control cost.

Conclusion and further suggestions

The novel COVID-19, which has broken out all over the world, is one of the most severe diseases today. In this paper, according to the facts of the infection and propagation of COVID-19, we have formulated a stochastic reaction–diffusion epidemic model to analyse and control this infectious disease. Through the analysis, the sufficient criteria for the persistence and extinction of the disease are derived. This stochastic model is handled skilfully, whereby the conditions under which the Turing instability arises have been obtained through a stability analysis of the local equilibrium. By using Taylor series expansion and weakly nonlinear analysis, the amplitude equations are derived. It is well known that noise can induce switching in a bistable system and regulate the relevant mechanism [54]. From this stochastic model and the results obtained, it can be seen that the white noise has a certain influence on pattern formation, and the system with noise effects has richer spatial dynamics. These questions, together with general processing methods for stochastic systems, are worthy of further study. Given the current situation around the world, the outbreak of COVID-19 not only seriously endangers personal health, but also greatly affects social and economic development. In terms of scope, it also puts strong pressure on the medical resources and systems of various countries. Developing specific and effective prevention and treatment as soon as possible remains the first priority. In today's situation, quarantine is the most common and effective method to control and eliminate the epidemic. Furthermore, the optimal quarantine control problem is studied: the optimal control strategies and solutions of the deterministic and stochastic problems are derived via Pontryagin's minimum principle. Owing to the difficulty of obtaining numerical results for the stochastic optimal system, we use the solution of the deterministic problem to approximate them. The results of the numerical simulations show that the current premise for effectively restraining the prevalence of the infectious disease is constant and intensive quarantine control, with the corresponding cost in materials and manpower.
The blockade orders issued by governments disrupt, to a certain extent, people's work and life, leading to significant and widespread socio-economic costs [55]. Conversely, it is the pressure on production and life caused by these restrictions that makes some enterprises and individuals gradually resist and lift the bans, which is what we do not want to happen. Thus, we cannot blindly pursue the minimum number of infected people regardless of the duration and intensity of quarantine. In order to promote the continuity of current and future development, we should seek a balance between the control target and the control cost. In view of the complexity and labour intensity of the formal method for the numerical simulation of stochastic optimal control problems, the method adopted in this paper is a feasible approximation method. All the above analytical results are supported by numerical simulations. In the absence of full vaccine coverage, controlling the flow of infectious individuals is still the top priority. According to the World Health Organization recommendations [56,57], 14 days of quarantine is one of the most effective means to guarantee safety, whether for those entering or leaving the country or for those who have just come into contact with a carrier. Let us take China and the USA, discussed in the first part of Sect. 7, as examples. As analysed above, due to the different attitudes and measures of the two countries in dealing with the epidemic situation, the relevant simulation and analysis results also differ. The most obvious difference lies in the fitting results for the real COVID-19 data of the two countries in Sect. 7. The start time of the fitting is the same kind of time node in each case, and the time span of the data selection is the same, but one result is that the number of infected people gradually stabilizes after a period of time, while the other is that the situation is difficult to control. From the analysis results of this paper, we can see that this is inseparable from the quarantine control strategies of the two countries. It can be seen from the numerical simulations that the number of infected people in China gradually stabilized about 60 days after the outbreak, which must be attributed in large part to the strong control and quarantine measures implemented in China. This is in sharp contrast to the spread of the epidemic in the USA during the same period. With the overall improvement of China's epidemic situation, not only has the domestic population flow increased, but the entry-exit population has also increased relatively, which gives rise to some unfavourable situations worthy of the attention of the relevant departments, namely those related to a rebound of the epidemic. How to deal with the coming situation is the top priority for the relevant departments in China, and even for other countries with better control of the epidemic facing similar situations. Based on our investigation and the results of our study, we put forward the following suggestions: (i) Strengthen the isolation and virus detection of the entry-exit population to prevent imports from overseas. (ii) Enhance the awareness of protection, do not take the situation lightly, maintain a certain social distance, and prevent a large-scale rebound within the territory. Especially during university holidays and at the beginning of the school term, travel should be carried out in batches and off-peak.
(iii) For areas that have rebounded or show a rebound trend, i.e., relatively high-risk areas, strict treatment and control measures should be taken immediately, and the source should be found as soon as possible. (iv) For some areas with low risk, normal work and life can be maintained to some extent, but the corresponding monitoring and control mechanisms should be further improved according to the local actual situation, with temperature checks and mask-wearing in and out of public places, and so on.

Fig. 10 The simulation of the path I(t) for the deterministic system corresponding to model (2.2) with respect to seven constant controls and the optimal control; control profile of the optimal control r₁(t).

Fig. 11 The simulation of the path I(t) for model (2.2) with respect to seven constant controls and the optimal control; control profile of the optimal control r₁(t).

Fig. 12 The simulation of the values of the objective function J with respect to five constant controls and the optimal control.

One of the original purposes of the analysis and simulation in our paper is to emphasize the strategic position and importance of quarantine in the outbreak of COVID-19 by comparing the different results caused by the different attitudes towards, and implementations of, quarantine in China and the USA during the same period. As for the USA, where the epidemic situation is still severe, the above suggestions for China may not be applicable in general, because its epidemic situation is quite different from that of China. According to the research results and our investigation, we also put forward the following trend suggestions, which are applicable not only to the USA but also to some western countries with similarly controversial implementations of quarantine measures. (i) The government should issue more accessible propaganda to make the public understand the importance of quarantine and of reducing direct contact in the current situation; understand what the people think, and issue effective quarantine measures on the premise of relaxing and appeasing policies. The process should not be too heavy-handed, to avoid the losses outweighing the gains; in line with the celebrity effect, more influential people could demonstrate compliance. (ii) It is true that isolation measures are difficult to implement, but previous studies have found that in some countries more people support quarantine than voluntary vaccination, owing to the risk of early vaccination; moreover, with one infected individual at each site producing less than one secondary infection, even under partial vaccination the infection can be stabilized or even reduced by quarantine measures. (iii) Even weak quarantine can promote the development of the situation, but it takes a long time, and the effect of strong quarantine over a short time is often unsatisfactory. One should not expect that even a short period of quarantine is sufficient to reduce the infection below its survival level. The above study found that the implementation of longer quarantine measures, such as 70-80 days or even longer, is needed to achieve decisive results. (iv) Keep as much quarantine as possible for most of the time in the controlled quarantine plan. Only when the infection reaches a low level can quarantine restrictions be gradually reduced. During any quarantine period, even at the end of a predetermined time interval, the control should not be zero.
Some protective measures, such as wearing masks, avoiding crowds and maintaining good personal hygiene habits, should be continued and encouraged in society, even after strict isolation ends. The findings of this paper, especially the section on optimal quarantine control, show that it is far from enough to deal with the corresponding epidemic prevention and control problems through a single measure or a simple superposition of several measures. For example, the results obtained in Sect. 6 show that a single quarantine control measure alone is far from sufficient: although it has some effect, it takes a long time, and the unstable and uncontrollable factors are more numerous. In order to control the epidemic as quickly as possible, a variety of measures should be coordinated. We should be prepared to fight coronavirus infection over the long term, rather than only the current epidemic wave, so as to reduce the endemic burden and potentially eradicate the disease eventually. One can fit the model to short-term data, which enables us to understand the existing data more deeply and to make predictions when data are unavailable. Due to the prevalence and concomitants of infectious diseases, if the current situation changes, the results of the model can, after suitable modification, be applied to the characterization of mutated infectious diseases, or to any next disease. In the future, after verifying the safety, effectiveness and universality of vaccines in daily life, and combining more actual data, we can further extend the model by adding a vaccinated class to the stochastic system proposed in this paper. It is also a next step to further consider the effects of infectious disease treatment, vaccination, media publicity and other controls in the model on the related optimal control problems.
Representing mutations for predicting cancer drug response

Abstract

Motivation: Predicting cancer drug response requires a comprehensive assessment of many mutations present across a tumor genome. While current drug response models generally use a binary mutated/unmutated indicator for each gene, not all mutations in a gene are equivalent.

Results: Here, we construct and evaluate a series of predictive models based on leading methods for quantitative mutation scoring. Such methods include VEST4 and CADD, which score the impact of a mutation on gene function, and CHASMplus, which scores the likelihood a mutation drives cancer. The resulting predictive models capture cellular responses to dabrafenib, which targets BRAF-V600 mutations, whereas models based on binary mutation status do not. Performance improvements generalize to other drugs, extending genetic indications for PIK3CA, ERBB2, EGFR, PARP1, and ABL1 inhibitors. Introducing quantitative mutation features in drug response models increases performance and mechanistic understanding.

Availability and implementation: Code and example datasets are available at https://github.com/pgwall/qms.

Introduction

A basic mode of precision oncology is to scan the tumor genome for genetic alterations, typically activating mutations in oncogenes, that can be specifically recognized by targeted inhibitors. For example, dabrafenib competes with ATP for binding to the BRAF catalytic site and is thus indicated for BRAF V600+ melanoma (Maloney et al. 2021). Similarly, EGFR L858R mutations activate EGFR signaling, indicating targeted inhibitors like osimertinib (Gijtenbeek et al. 2023); PIK3CA mutations indicate the use of inhibitors like alpelisib (André et al. 2019); and so on. While such targeted therapeutics have been transformative, a substantial proportion of patients fail to respond despite having the supposed biomarkers of a successful response (Hu and Dignam 2019). This challenge of distinguishing responders from non-responders extends to non-targeted chemotherapeutics, where a precise set of molecular indications is often lacking.

Beyond activating mutations that are directly targeted, many predictive models have recently been introduced that integrate genetic alteration information across many, if not all, human genes (Fig. 1a; Supplementary Table S1) (Partin et al. 2023). The rationale for these expanded models is that genetic modulators of drug response can occur not only in the targeted protein, but also in proteins that physically or functionally interact with the target in the same or related molecular pathways. In expanding to the mutational states of many genes, these models have generally not attempted to resolve which individual nucleotides or amino acids are affected in each gene, or their functional effects. Rather, each gene is assigned either a 0 or 1 based on the absence or presence of non-synonymous coding mutations (Fig. 1b). Combining mutation values across genes creates a genomic profile of a tumor, which multi-gene models use to predict tumor behaviors. Examples of approaches using this type of binary encoding include GraphDRP, DrugCell, and PNet, among others (Fig. 1a; all models are listed in Supplementary Table S1).
Some of these models pre-select particular types of mutations (e.g. DeepDEP, DeepDR: presence/absence of any SNV/indel/splice-site/nonsense mutation) or those with prior functional associations, such as mutations to kinases only (DEERS), mutations already correlated with drug responses (QRF), or genes with high mutation frequency in cancer cell lines (DGSDRP).

While these approaches have shown promising successes, it remains the case that some gene mutations clearly impact a drug response more than others, and to varying degrees. The question then is how to generally assess the effects of mutations observed across a large set of genes/proteins, not just for the single protein specifically targeted by a drug. Relevant to this task is the growing collection of variant effect prediction algorithms, which estimate the likelihood of a genetic variant impacting its protein's function (Horne and Shukla 2022) (Fig. 1c). These algorithms have not yet been widely used in drug response models, however, and when they have been, it is only as a pre-filter to remove mutations with low expected effect (Koras et al. 2020).

Here, we evaluate the benefit of integrating variant effect prediction algorithms directly into drug response models. In the assessment that follows, we find that associating somatic mutations with predicted impact scores, which we refer to as quantitative mutation scoring (QMS), not only increases the information content of somatic mutation features (Fig. 1d) but also enhances model accuracy and mechanistic interpretability. Moreover, multiple mutation scores can be combined to create a multi-dimensional representation of gene mutations, providing richer and more expressive information than is captured by any single QMS method. By analogy, if binary gene mutation states (0/1) are akin to a black-and-white photograph (Fig. 1e, left), moving to QMS reveals a more nuanced grayscale image (Fig. 1e, middle), while combining continuous values across multiple dimensions yields a full-color picture (Fig. 1e, right).

2 Materials and methods

Tumor cell line datasets

Datasets were compiled from GDSC (Yang et al. 2012) and CTRP (Rees et al. 2016), resulting in response data for 1,244 cell lines and 26 anti-cancer agents. Drug response was reported as the area under the dose-response curve (AUDRC, continuous values: 0 = total cell death, 1 = no effect, >1 = cell growth). Repeated drug/cell tests were averaged. Cell line somatic mutations were accessed from the Cancer Cell Line Encyclopedia (CCLE) DepMap portal (23Q1 release). A set of 702 genes was constructed from the union of genes contained within the FoundationOne CDx (Frampton et al. 2013), Tempus xT (Beaubier et al. 2019), Project GENIE (Smyth et al. 2020), and PALOMA Trial (Lira et al. 2017) gene panels. Filtering by our panel genes and drug cell lines resulted in 61,284 somatic mutations used in this study.

Scoring mutations and gene features

Variant effect prediction algorithms were selected by their superior performance in five cancer benchmark tasks, as examined in a previous study (Chen et al. 2020):
2020): (i) identifying pathogenic mutations clustered in 3D; (ii) identifying known cancer driver mutations; (iii) identifying mutations impacting TP53 kinase activity; (iv) identifying mutations driving in vitro cell growth; and (v) identifying tumor-forming mutations in patient-derived xenograft models. From these data, we selected three high-performing algorithms that generate continuous-value scores for somatic mutations: CHASMplus and VEST4 predict a continuous value ∈ [0, 1) denoting the probability that a variant drives cancer or impacts protein function, respectively. CADD generates a value ∈ [0, 99] denoting the likelihood that a mutation is deleterious to protein function, which was normalized to [0, 0.99). Mutation scores were generated with OpenCRAVAT (Pagel et al. 2020), an open-source variant annotation platform. The algorithms generated at least one score for 54,757 mutations (89.3% of somatic mutations). Tumor cell lines were represented as a collection of gene states, wherein each gene was described by a single value. Binary mutation features assigned unmutated genes a 0 and genes with one or more somatic mutations a 1. QMS features assigned unmutated genes a 0 and genes with a somatic mutation the QMS value of the mutation (the maximum QMS value if a gene had multiple mutations). A binary "not scored" feature was created for mutations not scored by any of the three QMS algorithms, which was concatenated to the QMS input features during training.

Drug panel selection

Clinical indicators of drug responses were collected from OncoKB, a precision oncology database identifying drugs sensitive/resistant to specific genetic alterations based on varying levels of evidence. From these data, we identified a panel of 23 current cancer drugs having FDA-approved somatic mutation biomarkers. Three additional non-targeted chemotherapeutic agents were included in our panel based on the availability of relevant clinical genomics datasets.

Neural network architecture and training

For each drug, we created models to compute the AUDRC of a tumor from the somatic mutation features of its genes. For tumor t, F(X_t) = ŷ_t, where F is the drug response model and ŷ is the predicted drug response (i.e. AUDRC). X_t is a matrix of gene features of size (702 genes × number of gene features) (binary mutations: 1 feature; 1-, 2-, or 3-QMS configurations: 2, 3, or 4 features, respectively; i.e. the QMS features + the "not scored" feature). Each model was a feed-forward neural network with six layers, 1-6. Starting from layer 1, the layers contained 702 × x, 512, 512, 2048, 36, and 4 neurons, respectively, where x is the number of input features. Layer 1 neurons were partitioned by gene, such that each of the 702 panel genes was allocated x neurons. The activation states h of these gene embeddings were concatenated and passed forward to the next layer. The multi-dimensional activation state h for layer a was calculated by the transfer function T(x_a) = Batchnorm(tanh(Linear(Dropout(x_a)))) = h_a. T maps the inputs x_a ∈ R^c to R^d, where c is the number of input features and d is the number of layer neurons. Linear is a linear transformation parameterized by W^T x_a + b_a, with weight matrix W ∈ R^{c×d} and bias vector b. Dropout is the dropout function, Batchnorm is the batch normalization function, and tanh is the hyperbolic tangent activation function. We applied a linear transformation to the final layer (with four neurons) to predict ŷ.
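To make the architecture concrete, the following is a minimal PyTorch sketch of such a network. The layer sizes follow the description above; the gene-partitioned first layer is realized here with a grouped 1×1 convolution, which is one convenient implementation choice rather than necessarily the paper's, and the dropout rate is an illustrative placeholder.

```python
import torch
import torch.nn as nn

class QMSNet(nn.Module):
    """Sketch of the described six-layer drug response network.

    Layer 1 is partitioned by gene: each of the 702 panel genes gets its
    own block of `n_feats` neurons fed only by that gene's features.  A
    grouped Conv1d with groups=702 is one simple way to realize this
    partitioning (an implementation choice, not taken from the paper).
    """

    def __init__(self, n_genes=702, n_feats=4, dropout=0.1):
        super().__init__()
        self.gene_layer = nn.Conv1d(
            in_channels=n_genes * n_feats,
            out_channels=n_genes * n_feats,
            kernel_size=1,
            groups=n_genes,
        )
        # Layers 2-6, each built from T(x) = Batchnorm(tanh(Linear(Dropout(x))))
        sizes = [n_genes * n_feats, 512, 512, 2048, 36, 4]
        blocks = []
        for c, d in zip(sizes[:-1], sizes[1:]):
            blocks += [nn.Dropout(dropout), nn.Linear(c, d),
                       nn.Tanh(), nn.BatchNorm1d(d)]
        self.hidden = nn.Sequential(*blocks)
        self.out = nn.Linear(4, 1)  # final linear map to the predicted AUDRC

    def forward(self, x):            # x: (batch, n_genes * n_feats)
        h = torch.tanh(self.gene_layer(x.unsqueeze(-1)).squeeze(-1))
        return self.out(self.hidden(h)).squeeze(-1)
```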
Models were trained to minimize the mean squared error (MSE) of the real versus predicted AUDRC values (y versus ŷ, respectively), using the objective function

L = (1/N) Σ_t (y_t − ŷ_t)² + λ‖W‖₂²,

where the term on the right is the ℓ2-norm penalty for network weights W, parameterized by the weight decay parameter λ. Models were trained using mini-batch stochastic gradient descent with batch size = 32 and the AdamW optimizer. We used the PyTorch adjustable learning rate (torch.optim.lr_scheduler.ReduceLROnPlateau, starting at a learning rate of 0.014) to reduce the learning rate by 80% after every 10 epochs without validation set loss improvement. An early stop method terminated training if validation loss did not improve by at least ε = 1 × 10⁻⁴ after 20 epochs.

Control for overfitting

Model overfitting was controlled and assessed using several key measures. First, the model training procedure incorporated multiple regularization methods, including early stop, dropout, and weight decay functions (see previous section). Second, model performance was assessed by nested cross-validation, a conservative technique often used in machine learning to reliably evaluate the performance and generalizability of a model, especially when the dataset size is limited (Parvandeh et al. 2020). By apportioning samples (cell lines) into train/validate/test partitions, model optimization (fitting of parameters and hyperparameters) was fully insulated from the final performance assessment, which was conducted on tumor cell lines not seen during any earlier stage of model training or validation. Third, we observed that random shuffling of drug responses across cell lines broke model performance almost entirely (Supplementary Fig. S1), suggesting the models are not overfit. Finally, model generalizability was evaluated by the degree of transferability from cell lines to patient tumor biopsies, as well as the transferability from drugs used during training to alternate drugs against the same targets (Fig. 4; Supplementary Fig. S2).

Computational complexity of model training

QMS models encode gene features with a square matrix, O(|features|²). Roughly, the training complexity is O(Σ_i n_{i−1} n_i), where i is the model layer and n_i is the number of neurons of that layer. Models never exceed four input features (CHASMplus, VEST4, CADD, and the "not scored" binary mutation feature), so complexity is dominated by the deeper model layers. Each (1,244 cell lines × 702 genes) PyTorch float tensor requires approximately 5 MB of memory. Concatenating multiple QMS arrays into a single tensor object further reduces the memory requirements per extra QMS feature. Additional computational details are provided in Supplementary Table S2. The time complexity of model training is not significantly affected by multi-QMS configurations.

Alternate models for predicting drug responses

QMS model predictions were compared against DrugCell (Kuenzi et al. 2020) and DeepCDR (Liu et al. 2020), two previously published methods developed to predict responses of many tumor cell line-drug pairs. These models were retrained by 5-fold cross-validation on the same cell line drug response dataset used to train the QMS models, which ensures all models observed the drugs of our drug panel during training. Test sets consisted of held-out cell lines.
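Tying the training details above together, here is a hedged sketch of the optimization loop; the optimizer, scheduler, and early stop mirror the description, while the epoch budget and weight decay value are illustrative placeholders.

```python
import torch

def train(model, train_loader, val_loader, epochs=500,
          lr=0.014, weight_decay=1e-4, eps=1e-4, patience=20):
    """Minimal training loop matching the described procedure.

    AdamW applies the weight-decay (l2) penalty; ReduceLROnPlateau cuts
    the learning rate by 80% (factor=0.2) after 10 epochs without
    validation improvement; training stops early after `patience`
    epochs without an improvement of at least `eps`.
    """
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.2, patience=10)
    loss_fn = torch.nn.MSELoss()
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        sched.step(val)
        if val < best - eps:
            best, stale = val, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return model
```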
Interpretation and importance scores

Models were interpreted by gradient analysis. Gradients quantify the influence of a model feature on the final prediction; the size of the gradient can be considered the importance of the feature. For any model feature f in our network, we defined the gradient as the change in the model prediction ŷ with respect to the feature f. Thus, G_f = ∂ŷ/∂f, which was calculated via the chain rule and accessed using the torch.Tensor.register_hook() method. The importance of each model feature was calculated as the ℓ2 norm of the feature gradients. Importance scores were averaged across nested cross-validation models (test set predictions; unseen tumors).
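A minimal sketch of this gradient capture follows; the reduction over tumors and the variable names are illustrative assumptions.

```python
import torch

def feature_importance(model, X):
    """Gradient-based importance: G_f = d(y_hat)/d(f), captured with
    torch.Tensor.register_hook and reduced by the l2 norm per feature."""
    grads = []
    X = X.clone().requires_grad_(True)
    X.register_hook(lambda g: grads.append(g.detach()))
    model(X).sum().backward()          # populates `grads` via the hook
    G = grads[0]                       # shape: (tumors, features)
    return G.norm(p=2, dim=0)          # l2 norm per feature across tumors
```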
Predicting sensitivity to BRAF inhibitors in cutaneous melanoma

A clinical genomics dataset relevant to a cutaneous melanoma cohort (Van Allen et al. 2014) was accessed from cBioPortal (Cerami et al. 2012) (39 patients; the largest publicly available BRAF inhibitor dataset). Mutation profiles were constructed for each QMS model as well as for the binary mutation models. Models previously trained on dabrafenib cell-line data were used to predict drug responses (Fig. 2a; multi-QMS and binary models). Responses and importance scores were averaged from the five nested cross-validation models for each input configuration. Prediction performance was assessed by F1 score and precision-recall statistics. As model predictions were AUDRC values, patients were labeled as responsive if the predicted values were in the bottom 20th percentile of AUDRC over tumor cell lines; otherwise, patients were labeled as non-responsive.

Predicting sensitivity to EGFR inhibitors in lung cancer

A clinical genomics dataset relevant to non-small cell lung cancer (Choudhury et al. 2023) was accessed from cBioPortal (Cerami et al. 2012). Data were extracted for patients treated with osimertinib (n = 215) and included patient outcomes (overall survival, months) and exome somatic mutations from pre-treatment tumor biopsies. Somatic mutations were limited to those present in our gene panel, and mutation profiles were constructed for each QMS model (CHASMplus, VEST4, CADD, and a binary "not scored" feature set) as well as for binary mutation models. Models trained on cell-line osimertinib data (Fig. 2c) were used to predict patient responses. Patients were labeled as responsive or non-responsive as for cutaneous melanoma (above section). Stratification performance was assessed by hazard ratio and C-index (calculated with the Python library "lifelines").

3 Results

Tumor cell drug responses and mutation scoring

We accessed drug treatment data for 26 chemical agents measured across each of 1,244 tumor cell lines (Section 2). Each tumor cell drug response was summarized as the AUDRC. Exome-wide somatic mutations were accessed for each cell line, focusing on alterations in 702 genes commonly screened in clinical gene panels (Section 2). These somatic mutations were scored by three leading QMS algorithms: CHASMplus, a cancer-specific algorithm that assigns the likelihood that a mutation drives cancer (Tokheim and Karchin 2019); VEST4, an algorithm predicting the likelihood that a mutation alters protein function (Carter et al. 2013); and CADD, a complementary algorithm predicting altered protein function (Rentzsch et al. 2021). Given this information, cell lines were represented as a profile of gene mutation states, with each gene represented by the maximum QMS score of its (possibly multiple) somatic mutations. Scores from each algorithm were either treated separately, resulting in one score per gene (single-QMS), or concatenated together across multiple algorithms (multi-QMS). For comparative benchmarking, a third representation was constructed in which QMS values were replaced by binary (0/1) mutation indicators for each gene (binary values).

Improved accuracy of QMS models over binary gene mutations

An instructive test case for QMS prediction is the response of tumor cells to dabrafenib, a small molecule designed to specifically target V600X activating mutations in the BRAF kinase (X denotes any amino acid change). BRAF V600X mutations specifically elicit dabrafenib sensitivity, whereas other BRAF mutations generally do not (Maloney et al. 2021). Notably, all three QMS algorithms scored BRAF V600X mutations as particularly deleterious (CHASMplus = 0.996, VEST4 = 0.946, and CADD = 0.289, representing the 99th, 91st, and 87th percentiles of all mutation scores across cell lines). A key question then was whether drug response models based on these QMS features would recognize the V600X alteration as informative for prediction, or if these models would instead give preferential attention to mutation scores for the many other genes provided as input.

Accordingly, we constructed a series of deep neural network models that use gene mutation features (single-QMS, multi-QMS, or binary values; see previous section) to predict the response of each tumor cell line to dabrafenib treatment (Fig. 2a). To guard against data leakage and overfitting, models were trained and tested using a rigorous nested 5-fold cross-validation procedure, in which the collection of cell lines is partitioned into 70%/15%/15% splits for training, validation, and test phases (Section 2). Performance was estimated on the held-out lines using Pearson correlation. Analyzing these results, we found that use of the single CHASMplus feature for each gene significantly outperformed use of binary mutations (Fig. 2a) and yielded essentially the same performance as directly encoding knowledge of the BRAF V600X indicator. Combining multiple mutation scores (multi-QMS model) showed a further increase in performance above all other models, although this effect was not significant.

We further expanded our assessment to models constructed for each of 26 precision oncology therapies (Section 2). We found that at least one of the single-QMS models outperformed binary mutations for all 26 drugs (Fig. 2b). Multi-QMS configurations outperformed the binary model in 25/26 cases, and they usually, but not always, outperformed single-QMS configurations (20/26 drugs). One example of this was trametinib, a selective inhibitor of MEK1 downstream of BRAF. In this case, single-QMS models performed nearly equivalently to knowledge of BRAF V600X (Fig. 2c), as did binary mutations. However, combining multiple QMS significantly improved performance over all other models (Fig. 2c).

QMS models extend canonical biomarkers with additional mutations

To provide further insight into the dabrafenib drug response models, we next benchmarked their various feature sets against the BRAF V600X marker using precision-recall statistics. For this purpose, the collection of tumor cell lines was divided into dabrafenib-sensitive versus dabrafenib-resistant classes, depending on whether the AUDRC was in the top 20% of most sensitive or most resistant responses (Section 2). In this configuration, we saw that the BRAF V600X marker was very precise in predicting dabrafenib sensitivity (precision = 95%) with moderate recall of these sensitive samples (recall = 35%; Fig. 3a). We then examined the quantitative output of the multi-QMS model, which we also thresholded to sort tumor cell lines into predicted sensitive versus resistant classes; varying this sensitivity threshold traced a precision-recall curve (Fig. 3a, Section 2). From this curve, we noted that the multi-QMS model was able to maintain the precision of the BRAF V600X marker (95%) while substantially extending the recall of sensitive samples (from 35% to 53%). These results implied that QMS captures not only the effects of BRAF V600X but also other mutation features predictive of dabrafenib sensitivity.
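A minimal sketch of this thresholding analysis with scikit-learn follows, assuming predicted AUDRC values and binary sensitivity labels are available (the variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# audrc_pred: model-predicted AUDRC per cell line (lower = more sensitive)
# sensitive:  1 if the measured AUDRC is in the top 20% most sensitive, else 0
def pr_from_predictions(audrc_pred, sensitive):
    # Negate predictions so that higher scores mean "more sensitive",
    # as precision_recall_curve expects.
    precision, recall, _ = precision_recall_curve(sensitive, -np.asarray(audrc_pred))
    aupr = auc(recall, precision)
    # Recall achievable while holding precision at >= 95%, mirroring the
    # comparison against the BRAF V600X marker in the text.
    recall_at_95 = recall[precision >= 0.95].max()
    return precision, recall, aupr, recall_at_95
```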
To identify these other important features, we interpreted the multi-QMS dabrafenib neural network using a gradient-based methodology (Section 2) in combination with Sankey diagrams, which help visualize the flow of information through a network (Fig. 3b). QMS features of the BRAF gene ranked as the most important, primarily driven by the BRAF CHASMplus and BRAF CADD scores. The second-most important mutated gene was BCL2, an anti-apoptosis factor that facilitates cell death during BRAF inhibition (Sullivan et al. 2018). Other top mutated genes, including TP53 (Wang et al. 2023), PIK3CA (Candido et al. 2022), and MAPK1 (Long et al. 2014), are known modulators of BRAF activity and dabrafenib response, many of which are under clinical investigation for potential adjuvant targeting strategies in combination with BRAF/MEK inhibition. While these factors had each been recognized in previous (mostly separate) studies, they had yet to be integrated within a single precision oncology model to yield accurate drug response predictions.

QMS models generalize to patient cohorts

Finally, we evaluated how well QMS models translate from tumor cell lines to patients. First, we examined a clinical study of 39 cutaneous melanoma patients treated with either dabrafenib or vemurafenib, another targeted inhibitor of oncogenic BRAF V600X mutations (Van Allen et al. 2014) (Fig. 4a, Section 2). Treatment outcomes had been recorded using the RECIST classes of partial response (PR, reduction in tumor volume), stable disease (SD, no change in tumor volume), or progressive disease (PD, increase in tumor volume). Patients with PR were considered sensitive to BRAF inhibition, and patients with stable or progressive disease were considered resistant. Tumors were biopsied before treatment and subjected to whole-exome sequencing to call somatic mutations.

Previously constructed dabrafenib predictive models (Fig. 2a) were benchmarked against patient outcomes by F1 score (Fig. 4b, Section 2) and precision-recall statistics (Fig. 4c, Section 2). Multi-QMS mutations were more predictive of patient responses than binary mutations, as we had observed earlier for tumor cell lines (Fig. 2b). Notably, the dabrafenib QMS models extended to predicting outcomes of patients treated with vemurafenib (Fig. 4b). In contrast to cell lines, the BRAF V600X marker had very low precision, as it was found in nearly all (35/39) patients. These results implied that the better performance of the QMS models was due to other features. In particular, gradient-based interpretation of the QMS models showed that, beyond BRAF, high importance was assigned to TP53, PTEN, and SOX9, similar to our previous findings in cell lines (Fig. 3b), as well as to TAF1 and PTPN11, which have been implicated in dabrafenib sensitivity (Wang et al. 2014, Harigai et al. 2022). Thus, the QMS model generalized to predictions of patient outcomes, as well as to drugs with similar molecular mechanisms.

Generalizability was further assessed on a cohort of non-small cell lung cancer patients who received the EGFR inhibitor osimertinib (215 patients; Supplementary Fig. S2a). Exome sequences of tumor biopsies were obtained prior to treatment, and outcomes were recorded as overall survival (months; Section 2). We found that overall survival was significantly longer in patients predicted as responsive to osimertinib by cell-line QMS models than in patients predicted as non-responsive (Supplementary Fig. S2b, Section 2; Cox proportional hazards test; P < .05).
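A minimal sketch of this survival comparison with the lifelines library follows; the column names and responder labeling are illustrative assumptions, not taken from the paper's code.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# df columns (illustrative): "os_months" = overall survival in months,
# "event" = 1 if death was observed, "responder" = 1 if the model's
# predicted AUDRC fell in the bottom 20th percentile over cell lines.
def stratify(df: pd.DataFrame):
    cph = CoxPHFitter()
    cph.fit(df[["os_months", "event", "responder"]],
            duration_col="os_months", event_col="event")
    hazard_ratio = float(cph.hazard_ratios_["responder"])
    # C-index: concordance between the predicted group and survival times.
    cindex = concordance_index(df["os_months"], df["responder"], df["event"])
    return hazard_ratio, cindex
```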
Notably, while binary models successfully partitioned patient responses, actual patient outcomes were the inverse of what these models had predicted (Supplementary Fig. S2c). Thus, binary models recognized the importance of mutations but could not decipher their effects, whereas QMS models identified features useful for predicting sensitivity/resistance (C-index scores 0.56 versus 0.42, QMS versus binary).

Discussion

Here we have explored the utility of QMS, a class of approaches for representing genetic mutations in drug response models. Contrasting with previous models, which generally do not differentiate individual mutations, we evaluated each of three variant effect prediction algorithms: CADD and VEST4, which assess the likelihood of a mutation altering normal gene function, and CHASMplus, which evaluates the potential to drive cancer. First, we evaluated whether these QMS approaches distinguish mutations known to affect drug response, using the test case of dabrafenib, a targeted BRAF inhibitor that disproportionately kills tumors with BRAF V600X substitutions. QMS captured the effects of BRAF V600X mutations on dabrafenib sensitivity, whereas binary (yes/no) mutations did not. Expanding the analysis to evaluate models for 25 additional precision oncology drugs and chemotherapies revealed that QMS consistently outperformed binary mutation representations across drug responses. These models used QMS values to identify functionally significant mutations, pinpoint genes critical to drug response mechanisms, and uncover genetic indicators of drug responses beyond established biomarkers. QMS models generalized from tumor cell lines to melanoma patients treated with BRAF inhibitors, where these models not only outperformed binary mutations but also identified key molecular factors influencing BRAF inhibition.

In some cases, one QMS method was enough to capture functional variants, such as CHASMplus alone being sufficient to highlight BRAF V600X mutations (Figs 2a and 3b). In other cases, combining scores from multiple algorithms allowed models to find predictive features not available in single-QMS configurations (e.g. the multi-QMS trametinib and dabrafenib models; Figs 2a, 2c and 3b). On the other hand, algorithms that generate redundant annotations can inflate computational requirements and increase the likelihood of overfitting. Ideally, multi-QMS configurations should capture orthogonal, complementary, and biologically relevant information.
The models explored here implement a two-phase approach: individual amino acid changes are scored in a first phase, after which these scores are provided to a second modeling phase to predict response. One wonders whether better performance might be achieved by a model that directly translates amino acid changes into a prediction of tumor drug response. While such an approach should be investigated, tumors exhibit an enormous number of rare mutations; inferring the effects of each on drug responses may require a prohibitive number of observations. A two-phase approach, using QMS as an intermediate interpreter of amino acid changes, may help distinguish mutations with less training data. On the other hand, QMS does not consider drug-related effects and thus may not sufficiently resolve certain impactful mutations. Choosing a one- versus two-phase approach will also depend on computational resources. During training, QMS algorithms evaluate dozens of atomic, molecular, and biological features for each mutation (CHASMplus = 95 features, VEST4 = 86 features, CADD = 63 features). Two-stage models benefit from scores that reflect these features, without including the features themselves during training.

While QMS features generally increased model performance, for some drugs additional biological mechanisms likely influence drug responses in ways not captured by either binary or QMS mutations (Fig. 2b). For example, our models did not consider the effects of genes with multiple mutations, which might be evaluated by assigning genes with multiple mutations a 1 (and 0 otherwise), or simply by representing a gene with the number of its mutations. Indeed, other binary strategies are certainly possible, such as assigning genes a 1 for mutations that score above/below some threshold. These models also do not consider structural variants, which significantly impact drugs like ponatinib (also indicated by BCR-ABL1 fusions) (Luciano et al. 2020). Expanding the gene set to include additional classes of molecular biomarkers is of immediate interest moving forward.

Patient outcomes are recorded with categorical (e.g. RECIST classes) and continuous-value (e.g. overall survival) criteria, but drug response models are typically trained on large pre-clinical cancer drug screens that measure a killing effect (e.g. AUDRC). Tumor killing may not extend to longitudinal measures of sustained patient responses. Models aiming for translational applications might consider reconciling model predictions with relevant clinical outcomes. This analysis attempted to convert the AUDRC values predicted for patients into a label denoting whether the patient was likely or unlikely to respond to therapy. These cutoffs succeeded at stratifying patients, but additional readouts describing how confident a model is in its decision would be very useful.

Transitioning from binary to scored mutations not only resolved BRAF V600X from other BRAF mutations, but also found additional mutations highly predictive of dabrafenib responses in both cell lines and melanoma patients (Figs 3a, 3b, 4b and 4c). One additional notable marker of sensitivity was mutation of BCL2, a regulator of responses to drugs targeting MEK/ERK and PARP pathways in multiple tissue subtypes (Valentini et al. 2023). BCL2 inhibitors increase sensitivity to BRAF/MEK inhibition in tumors without BRAF V600X mutations (Mukherjee et al. 2020), suggesting BCL2 mutations might provide a biomarker of successful BRAF/MEK inhibition in patients lacking a BRAF V600X mutation.
In summary, we have evaluated the benefit of encoding somatic mutations by quantitative values in drug response models, without requiring these models to evaluate gene sequences directly. An individual mutation can assume many different values, making it possible to recognize particular variants by the magnitude of their effects on a gene. In this way, QMS compresses an altered gene sequence to a single continuous value, which is sufficiently expressive to capture the mutation-specific molecular state of the gene. By incorporating QMS values, cancer drug response models can integrate the molecular states of many genes, identify relevant mutations, and make better predictions.

Figure 1. From all-or-nothing representations to scored mutations. (a) Previous models for drug response prediction (see also Supplementary Table S1), arranged by time of publication. Strategies for representing mutations are organized by text color corresponding to those in panel (b). (b) Previous strategies for representing somatically mutated genes as an all-or-nothing mutation status indicator (mutated = 1, not mutated = 0), which treats all mutations as equal. Gene values are integrated to create a molecular gene profile of a tumor used in drug response prediction models. (c) Three variant prediction algorithms (CHASMplus, VEST4, CADD) are used to generate mutation-specific quantitative mutation score (QMS) values for a variety of mutation types. QMS are continuous values [0, 1) that predict how likely a particular mutation is to alter the function of a gene. In QMS profiles, mutated genes are represented by the QMS value corresponding to their mutation (One QMS), or as the set of QMS values from multiple QMS algorithms (Multiple QMS), with color scales representing larger (bright colors) or smaller (faded colors) values. (d) The information entropy (y-axis; Shannon entropy, log2 bits) of three strategies to represent mutations present in a drug response dataset (x-axis): mutations represented as all-or-nothing gene mutation status (left, black); mutations represented by continuous values of one QMS algorithm (middle, gray); mutations represented by continuous values of three QMS algorithms (right, red). (e) Image of the Mona Lisa displayed as one-dimensional binary values (black and white, left), one-dimensional continuous values (grayscale, middle), or three-dimensional continuous values (RGB color, right).
Figure 2. Evaluation of alternative feature sets in predicting tumor drug responses. (a) Pearson correlation of model predictions versus measured responses to dabrafenib (y-axis), with six alternative feature sets (x-axis) ordered by increasing predictive performance. Feature sets from left to right include: binary mutation status of clinical panel genes (unmutated = 0; any mutation in coding sequence = 1); continuous mutation scores of these genes using each of the three mutation scoring algorithms (CADD, VEST4, CHASMplus); binary mutation status of the BRAF 600 amino-acid residue only (V = 0; other = 1); and a combination of all three scored features. Test set predictions are from 5-fold nested cross-validation runs with 70%/15%/15% (train/validation/test) splits. Error bars show 95% confidence intervals. *P < .05 by Fisher's r-to-z transform. NS = not significant. (b) As for (a), showing average Pearson correlation of model predictions versus actual drug responses (y-axis) across a panel of 23 targeted precision oncology inhibitors and three chemotherapies (x-axis) using various feature set configurations (larger red circle = multi-QMS model, blue asterisk = CHASMplus, blue plus sign = VEST4, blue triangle = CADD, smaller green circle = binary mutations). (c) As for (a), but predicting cell line responses to trametinib, a MEK inhibitor also indicated for use by the presence of BRAF V600X mutations.

Figure 3. Scored mutations capture and extend gene alterations predictive of dabrafenib sensitivity. (a) Precision-recall curves as an assessment of model performance in identifying unseen tumor genotypes sensitive to dabrafenib, highlighting two feature sets: binary mutations (green) and multi-QMS scored mutations (red). Area under the precision-recall curve in parentheses. Precision/recall of the BRAF V600X biomarker is marked (mutation present in 50 of 715 test set cell lines). Recall values of BRAF V600X (vertical black dashed line) and the multi-QMS model (vertical red dashed line) are indicated, thresholded at the same 95% precision value (horizontal black dashed lines). (b) Sankey diagram illustrating how groups of QMS features (left: CHASMplus, VEST4, CADD) affect genes (right) embedded in a dabrafenib neural network model. Groups of features within each layer are represented by vertical rectangles, with height reflective of importance in model predictions. The thickness of the band connecting features denotes influence in model predictions (Section 2). Important but less influential genes are binned together and represented as the residual box (gray). Mutations identified by the model as predictive of dabrafenib response are shown (far left) next to the QMS method the model used to identify the mutation (mutations are repeated if the mutation was recognized by multiple QMS methods). Image generated by Plotly (version 5.13).

Figure 4. Stratification of a clinical cohort and genetic markers that affect dabrafenib sensitivity or resistance. (a) Tumor biopsies of skin cutaneous melanoma (n = 39 patients; 28 treated with vemurafenib, 11 treated with dabrafenib) were sequenced prior to monotherapy with a BRAF inhibitor (vemurafenib or dabrafenib). Patient outcomes were recorded according to RECIST criteria. Tumor somatic mutations were used to predict drug responses in models trained from cell line data (same dabrafenib models highlighted in Fig.
2a). (b) Performance of BRAF inhibitor response prediction by F1 score (the harmonic mean of precision and recall), assessed on all patients (left), patients who received dabrafenib (middle), or patients who received vemurafenib (right). (c) Performance of BRAF inhibitor response prediction using precision-recall curves. Similar to Fig. 3a, with additional genetic markers indicating sensitivity to BRAF inhibition in the melanoma cohort. Solid black circle: BRAF V600X mutations with mutations in TAF1 or PTPN11. Plus sign: BRAF V600X mutations with wild-type PTEN, TP53, and SOX9. (d) Similar to Fig. 3b, showing the flow of genetic information between input features and genes embedded in the dabrafenib model.
Polystyrene Based Silver Selective Electrodes

Silver(I) selective sensors have been fabricated from polystyrene matrix membranes containing the macrocycle Me6(14)diene·2HClO4 as ionophore. The best performance was exhibited by the membrane having a macrocycle:polystyrene composition of 15:1. This membrane worked well over the wide concentration range 5.0×10⁻⁶-1.0×10⁻¹ M of Ag⁺ with a near-Nernstian slope of 53.0 ± 1.0 mV per decade of Ag⁺ activity. The response time of the sensor is <15 s, and the membrane can be used over a period of four months with good reproducibility. The proposed electrode works well in a wide pH range of 2.5-9.0 and demonstrates good discriminating power over a number of mono-, di-, and trivalent cations. The sensor has also been used as an indicator electrode in the potentiometric titration of silver(I) ions against NaCl solution. The sensor can also be used in non-aqueous media with no significant change in the value of the slope or the working concentration range, for the estimation of Ag⁺ in solutions having up to 25% (v/v) non-aqueous fraction.

Introduction

The determination of heavy metal ions in water, soil, and effluents is important in view of their toxic nature above certain concentration levels. Elevated environmental levels of heavy metals come from a variety of sources. The average crustal abundance of silver has been estimated at 0.07 mg/kg, ranking it 69th among the elements. Silver is thus less common than metals such as cadmium and mercury but more abundant than selenium, gold, or platinum. The bulk of the silver produced in the world is used in photographic materials. Other major uses of silver are in the manufacture of sterling and plated ware, jewellery, coins, medallions, electrical and electronic products such as batteries, contacts and conductors, brazing alloys and solders, catalysts, mirrors, and dental and medical supplies. Whereas silver has a relatively low toxicity to humans and other higher life forms, to primitive life forms such as bacteria and viruses silver is as toxic as the most powerful chemical disinfectants. This gives the metal great potential as a disinfectant. Silver nitrate taken orally causes necrosis of the gastrointestinal tract. In the body, silver is precipitated by chloride ion or protein. Generalized argyria, localized argyria, and argyrosis (argyria of the eye, unless otherwise stated) are the most common effects of chronic exposure to silver. Argyria occurred almost exclusively among silver nitrate makers and in some workers involved in mirror plating, silver mixing, photographic plate making, and glass bead silvering, but it can also occur as a result of medicinal applications of silver.

Chemical sensors are increasingly used in the field of environmental analysis, as they enjoy a number of advantages over other methods of analysis. The most attractive features of this technique are the speed with which samples can be analyzed, the portability of the device, sample non-destruction, cost effectiveness, and a large measuring range, often spanning as many as six decades of ion concentration. Moreover, their fabrication in the laboratory is quite easy, and they may become commercially available soon after their development. During the last few decades, efforts have been made by many researchers in the field of ion-selective electrodes (ISEs) to develop sufficiently selective sensors for silver.
Unlike many other sulfide-based solid-state electrodes, the Ag2S electrode has a very high primary ion selectivity and responds only weakly to most transition metal ions. Only Hg²⁺ gives serious interference [1]. This may be one reason why interest in developing liquid-membrane ISEs for Ag⁺ has long been fairly small. However, there have recently been an increasing number of reports on carrier-based Ag⁺ selective electrodes [2]. These include membranes based on crown ethers, viz. 1,4-dithia-15-crown-5 and 1,4-dithia-12-crown-4 [3], dodecyl-16-crown-5 [4], and dibenzo-15-crown-4 [5]; and on calixarenes, viz. a thioether-functionalized calix[4]arene [6]. Recently, Chen et al. [7] reported polymeric membranes based on two calix[4]arene derivatives functionalized by two hydroxy and two benzothiazolyl-1-thioethoxy groups. These electrodes gave a Nernstian response in the activity range 5 µM to 100 mM, a detection limit of 0.8 µM, and high selectivity against alkali, alkaline earth, and some transition metal ions. A loss in selectivity towards various metal ions is observed when an aromatic carbon in the 2-position of benzene-1,3-bis(thioic) acid bis(S-propyl) ester is replaced by a nitrogen atom [8]. Chung et al. [9] used sulfur-containing podands with diisodecyl adipate in a PVC matrix to develop a Ag⁺ selective sensor. Katsu and Xu [10] reported an organoselenide as a novel ionophore for a silver selective membrane electrode. It gave a near-Nernstian response from 0.1 µM to 0.01 M with a slope of 52 mV per decade of activity. Higher selectivity for silver ions is obtained with the selenide compounds than with the corresponding sulfides.

Taking into consideration all the above facts, Me6(14)diene·2HClO4 has been studied as an electroactive phase in a polystyrene matrix for the fabrication of a Ag⁺-selective electrode, and the results are presented in this paper. The present electrode shows good selectivity over other cations and is superior to the existing electrodes in some respects.

Reagents

All reagents used in the investigations were of analytical reagent grade (BDH, UK). Doubly distilled water was used for preparing all aqueous solutions.

Preparation of Me6(14)diene·2HClO4

20 g of ethylenediamine was added to 500 mL of anhydrous acetone, followed by dropwise addition of 55.7 g of 60% perchloric acid from a dropping funnel with constant stirring of the solution. After addition of the acid, the solution was vigorously stirred and allowed to cool to room temperature. The fine crystalline compound was filtered, washed thoroughly with acetone, and dried in vacuum. The ligand was a white crystalline material that can be recrystallized from hot aqueous methanol. This compound shows a strong broad band at 3050 cm⁻¹ in the infrared spectrum due to the N-H vibration, and another weaker but broad band at 1530 cm⁻¹ due to the NH₂⁺ vibration. The C=N stretching mode occurs as a strong sharp band at 1650 cm⁻¹. The melting point of the ligand is 110 °C.

Apparatus

The potential measurements were carried out at 25 ± 0.1 °C on a PH 5652 digital pH meter/millivoltmeter (ECIL, Hyderabad, India) and a CVM 301 Century microvoltmeter (Century Instruments, Chandigarh, India). pH measurements were made on a digital pH meter (model PH 5652, ECIL, Hyderabad, India; glass electrode as pH electrode and calomel as reference electrode).
Membrane preparation

Heterogeneous membranes were prepared by taking different compositions of the ionophore and polystyrene; the mixture was heated to 80 °C under pressure (6500 to 7000 psi) in a die kept in a metallurgical specimen mount press. Membranes were fabricated under optimum conditions of temperature and pressure, which were established after extensive preliminary investigation. Membranes prepared in this way were quite stable and did not show any dispersion in water or in other electrolyte solutions.

The membranes were subjected to microscopic and electrochemical examination for cracks and homogeneity of the surface, and only those which had a smooth surface and generated reproducible potentials were chosen for subsequent investigations. Membrane-to-membrane (and batch-to-batch) reproducibility was assured by carefully controlling the conditions of fabrication.

Determination of Functional Properties of Polystyrene Based Membranes

The prerequisite for understanding the performance of a membrane is its complete physicochemical characterization, which involves the determination of all parameters that affect its electrochemical properties. These parameters are porosity, electrolyte absorption, water content, and swelling. A survey of the literature reveals that this particular aspect of membrane phenomena has received little attention.

The first major attempt to establish standard methods for membrane characterization was reported on dried collodion membranes by Michaelis [11]. Hale et al. [12] investigated the effect of resin content and the degree of cross-linking of the resin on the physical and electrochemical properties of membranes. Gregor [13] and Kawabe [14] studied the characterization of ion-exchange membranes in a number of different exchange states and correlated this information with their structure. Wyllie and Kannan [15] reported that if a rigid plastic such as polystyrene is used, the properties of the ion-exchange membrane may be modified. Lakshminarayaniah and Subhramaniyam [16] gave a direct method of measuring the membrane resistance, which gives better results than the methods followed by earlier workers. Another important parameter which plays an important role in the functioning of a membrane electrode is the surface. A detailed methodology of surface study and its effect on the membrane was reported by Marco [17] in 1990.

Porosity

The stability, response time, and selectivity of an electrode are influenced by the diameter and multiplicity of the membrane pores. Mizutani and Nishimura [18] gave a method for estimating the porosity of a membrane, calculated from water content data using the formula

porosity = (W_wet − W_dry) / (A L ρ_w),

where W_wet and W_dry are the weights of the wet and dry membrane, A is the area of the membrane, L is its thickness, and ρ_w is the density of water.

Electrolyte Absorption

The membrane, after attaining equilibrium in 1.0 M NaCl solution, was wiped free of adhering electrolyte and then dipped in 20 mL of distilled water. It was shaken intermittently and left to stand for a few hours. The solution was then transferred to a 100 mL measuring flask. The whole process was repeated 3-4 times and the entire solution was collected in the measuring flask. It was finally made up to the mark with distilled water, and the strength was measured conductometrically.
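Referring back to the porosity formula above, a short worked example in Python may help fix the units; all numerical values below are illustrative, not measurements from this work.

```python
# Illustrative porosity calculation from water-content data, following the
# Mizutani-Nishimura formula reconstructed above.  All numbers are made-up
# example values, not measurements from this paper.
def porosity(m_wet_g, m_dry_g, area_cm2, thickness_cm, rho_w=1.0):
    """Porosity = mass of absorbed water / (A * L * rho_w)."""
    return (m_wet_g - m_dry_g) / (area_cm2 * thickness_cm * rho_w)

print(porosity(m_wet_g=0.52, m_dry_g=0.47, area_cm2=4.9, thickness_cm=0.04))
# -> ~0.26, i.e. about 26% pore volume for these example numbers
```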
Water Content

The membrane was kept immersed in a 1.0 M solution of the electrolyte. It was then washed several times with distilled water, and the adhering liquid was wiped off with blotting paper. The membrane was weighed and then dried to a constant weight in a vacuum desiccator. The difference between the two weighings, divided by the weight of the wet membrane, was taken as the water content.

Swelling

After measuring the thickness of the dried membrane, it was immersed in a 1.0 M solution of NaCl for 24 hours and the thickness was measured again after wiping with blotting paper. The difference between the thicknesses of the dry and swollen membrane was taken as a measure of swelling.

Potential Measurements

The membranes were equilibrated for 3 days in 1.0 M Ag⁺ solution to generate noiseless and reproducible potentials. The conditions necessary for equilibration, i.e., the contact time and the concentration of the salt solutions of the cation, were decided by observing the performance of electrode systems equilibrated for different periods of time with solutions of varying concentrations.

The membranes were fixed to one end of a Pyrex glass tube with Araldite and equilibrated with silver nitrate solution. The glass tube, containing 0.1 M silver nitrate solution as the internal solution, was placed in test solutions of different concentrations. Potentials were measured by direct potentiometry at 25 ± 0.1 °C with the help of ceramic-junction calomel electrodes, and the cell setup was the same as reported earlier [19]. 1.0 × 10⁻¹ M silver nitrate was taken as the inner reference solution, and saturated calomel electrodes (SCE) were used as reference electrodes. All pH adjustments were made with the appropriate acid or base.

Membrane Characteristics

Functional properties of polystyrene-based membrane no. 3 of macrocycle Me6(14)diene·2HClO4 are given in Table 1. Potential studies on the membrane sensors were carried out with varying Ag⁺ concentration (1.0 × 10⁻⁶ to 1.0 × 10⁻¹ M). Table 2 depicts the results for the working concentration range, slope, and response time of each membrane. The membrane with macrocycle and polystyrene in the ratio 8:1 showed a long response time with a sub-Nernstian potential response and a narrow working concentration range. The membranes with macrocycle and polystyrene in the ratios 20:1 and 30:1 (w/w) showed a fast response time (20 s) and slopes of about 40 mV/decade, but they exhibited narrow working concentration ranges. The membranes with macrocycle and polystyrene in the ratios 12:1 and 15:1 (w/w) showed near-Nernstian slopes with response times of 20 and 15 s, respectively. Membrane sensor no. 3 exhibited a rectilinear potential response in the concentration range 5.0 × 10⁻⁶-1.0 × 10⁻¹ M with a near-Nernstian slope of 53 mV/decade of [Ag⁺]. Potentials generated with dummy membranes were insignificant (5-10 mV). As such, the potentials generated by the proposed sensor are ascribed to the uptake of silver ions by the macrocycle. Thus, it can be seen that membrane no. 3 (Fig. 1a) gave the best performance with regard to working concentration range, slope, and response time. The sensing behavior of the membranes did not change when the potentials were recorded from lower to higher concentrations or vice versa.
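For reference, the ideal Nernstian slope for a monovalent cation at 25 °C can be computed directly from standard constants, which puts the observed 53 mV/decade at roughly 90% of the theoretical value. A quick check (standard constants; not code from the paper):

```python
import math

R = 8.314462618   # gas constant, J mol^-1 K^-1
F = 96485.33212   # Faraday constant, C mol^-1
T = 298.15        # 25 degrees C in kelvin
n = 1             # charge of Ag+

# Nernstian slope: 2.303*R*T/(n*F) volts per decade of activity
slope_mV = 1000 * math.log(10) * R * T / (n * F)
print(f"theoretical slope: {slope_mV:.1f} mV/decade")   # ~59.2
print(f"observed / ideal:  {53.0 / slope_mV:.0%}")      # ~90%
```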
Reference Solution

In order to observe the effect of the reference solution concentration on the functioning of the membrane sensor, measurements were also made with reference solutions of lower concentrations (5.0 × 10⁻² and 1.0 × 10⁻² M Ag⁺) (Fig. 1b). It was found that the membrane sensor exhibited optimum performance with a 1.0 × 10⁻¹ M concentration of Ag⁺ ions as the internal solution, while at the other concentrations the magnitude of the potential fell and the working concentration range also narrowed with decreasing concentration of the reference solution. Therefore, all subsequent investigations were performed with a 1.0 × 10⁻¹ M concentration of Ag⁺ as the reference solution.

Response and Lifetime

The static response time (the time in which the membrane sensor generates a constant potential) was observed at various determinand ion concentrations and found to be <15 s at all dilutions. Besides this, the potentials stayed constant for more than 3 min, after which a slow divergence was recorded. Potentials were repeatedly monitored at fixed concentrations, and the standard deviation of twenty identical potential measurements was 0.2 mV. The membrane sensor could be used for four months at a stretch without any change in response time or slope; thereafter, a slight change in slope and response time was observed, which could be corrected by equilibrating the membrane again with 1.0 M Ag⁺ solution for 10 h (less time than is required for the initial equilibration). With this treatment, the assembly could again be used for two months, after which it was replaced by a fresh membrane.

pH and Solvent Effect

The potentiometric response of the silver electrode was found to be sensitive to pH changes. Thus, the pH dependence of the electrode was tested by measuring the potential response of solutions containing 1.0 × 10⁻³ and 1.0 × 10⁻² M silver ions in the pH range 1.0-11.0. The pH was adjusted using nitric acid or ammonium hydroxide. As seen from Fig. 2, the potential remained constant from pH 2.5 to 9.0, which can be taken as the working pH range of the assembly. Beyond this pH range, a drift in potentials was observed. The observed drift at higher pH values could be due to the formation of hydroxy complexes of Ag⁺ in solution. At low pH, there could be protonation of the macrocycle in the membrane, which results in a loss of its complexing ability with the metal ion.

The performance of the membrane (no. 3) was also investigated in partially non-aqueous media using methanol-water, ethanol-water, and acetone-water mixtures. The membrane worked satisfactorily in solutions having a maximum of 25% (v/v) non-aqueous content, as in these mixtures the working concentration range and slope remained unaffected (Fig. 3). However, above 25% non-aqueous content, the slope and working concentration range were reduced and the potentials showed drift. It is worth mentioning that the lifetime of the membranes did not alter in non-aqueous solutions.
Potentiometric Selectivity

The selectivity coefficients (K^Pot_{A,B}) were evaluated by a modified form of the fixed interference method [16], as suggested by Sáez de Viteri and Diamond, at a 1.0 × 10⁻² M interfering ion concentration and varying concentrations of Ag⁺ solution (Table 3). The selectivity pattern indicates sufficiently low values (~10⁻³) for monovalent cations and quite low values (~10⁻⁴) for bivalent and trivalent ions. As such, these cations are not expected to interfere even at this relatively high concentration (1.0 × 10⁻² M) of the interfering ions. Heavy metals such as Cu²⁺, Cd²⁺, Hg²⁺, and Pb²⁺ (common interferents) also do not disturb the functioning of the membrane sensor.

Analytical Application

The practical applicability of the electrode was tested by using it as an indicator electrode to determine the end point in the potentiometric titration of Ag⁺ with NaCl solution. 20 mL of 1.0 × 10⁻³ M Ag⁺ solution was titrated against 4.0 mL of 1.0 × 10⁻³ M NaCl solution. The potential data are plotted against the volume of NaCl (Fig. 4). Although the changes observed in the potentials are not large, the end point is quite sharp and a perfect stoichiometry is observed. The removal of silver ions results in a decrease in the membrane potential; beyond the end point, the potentials stay almost constant and the change is nominal.

Conclusion

The polystyrene-based membrane incorporating Me6(14)diene·2HClO4 as an ionophore can be used to determine Ag⁺ in the concentration range 5.0 × 10⁻⁶-1.0 × 10⁻¹ M with a slope of 53.0 mV/decade of activity. The sensor works in a wide pH range of 2.5-9.0 with a response time of 15 s. The selectivity of the electrode towards Ag⁺ is quite good over other cations, and the lifetime of the assembly is four months in aqueous and non-aqueous media. In addition, the membrane sensor can be used as an indicator electrode in potentiometric titrations of silver(I) ions against NaCl.

Figure 3. Variation of cell potential with Ag⁺ concentration in (a) ethanol-water, (b) methanol-water, and (c) acetone-water mixtures.

Table 2. Composition of polystyrene-based membranes of Me6(14)diene·2HClO4 and performance characteristics of Ag⁺-selective electrodes based on them.

Table 3. Selectivity coefficient values (K^Pot_{A,B}) for the proposed Ag⁺-selective electrode.
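To illustrate the fixed interference method used in the selectivity study above: the interfering ion activity a_B is held constant while the primary ion activity a_A is varied, and the selectivity coefficient follows from the primary-ion activity at which the extrapolated Nernstian and interference-limited segments of the calibration curve intersect. A minimal sketch, with illustrative values only:

```python
def k_pot_fixed_interference(a_A, a_B, z_A=1, z_B=1):
    """Selectivity coefficient K^pot_{A,B} by the fixed interference method.

    a_A: primary ion activity at the intersection of the extrapolated
         linear segments of the calibration curve (mol/L)
    a_B: fixed activity of the interfering ion (mol/L)
    """
    return a_A / (a_B ** (z_A / z_B))

# Example: intersection at a_Ag = 2e-5 M with 1e-2 M of a divalent interferent
print(f"{k_pot_fixed_interference(2e-5, 1e-2, z_A=1, z_B=2):.1e}")  # -> 2.0e-04
```

A value of ~10⁻⁴ as in this example corresponds to the magnitude reported above for bivalent ions, i.e. negligible interference.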
Nonlinear Extended State Observer-Based Distributed Formation Control of Multiple Vessels with Finite-Time Prescribed Performance

In the presence of unmeasurable velocities and system uncertainties, the distributed formation control problem is investigated in this paper for multiple vessels. A robust formation controller is proposed by incorporating an extended state observer (ESO) and a finite-time prescribed performance function (FTPPF). Firstly, a nonlinear ESO is designed to estimate the unmeasurable velocities and system uncertainties. Subsequently, a novel FTPPF is designed to improve the dynamic performance of multi-vessel formation systems, where the upper bound of the convergence time and the constraint bounds can be set in advance based on the actual circumstances. Then, the proposed ESO and FTPPF are applied to the distributed formation controller design for multiple vessels. The proposed formation control scheme can maintain the multiple vessels in an expected formation with high tracking accuracy, a fast convergence speed, and small fluctuations. Finally, the performance of the proposed control method is verified by theoretical analysis and simulations.

Introduction

Distributed formation control of multiple vessels has emerged as an active research area over the past decade [1-3]. Recently, various control schemes have been proposed for distributed formation systems of multiple vessels [3-7]. Accurate model parameters of a vessel are very hard to obtain. In addition, a vessel is unavoidably subject to unknown environmental disturbances [8]. A number of estimation methods have been proposed to eliminate the influence of environmental disturbances and modeling uncertainties, such as neural networks [9-11], uncertainty and disturbance estimators [12,13], and fuzzy systems [14,15].

Note that the aforementioned schemes require the measured velocity, which is hard to obtain accurately in practice. To obtain the velocity information of vessels, many successful applications of the extended state observer (ESO) technique can be seen in motion control systems for vessels [16-19]. A finite-time ESO-based control scheme is developed with high estimation accuracy in [20]. For each vessel in [21], a control system is proposed with an echo state network-based observer. A reduced-order ESO is employed in an under-actuated marine surface vehicle control system to obtain the vehicle sideslip angle caused by time-varying ocean disturbances [4]. However, the transient performance of the above control systems cannot be guaranteed, which is still an open issue.

Recently, the prescribed performance function (PPF) [22] has had a number of successful applications in nonlinear control systems [23-26]. PPF-based control technology is able to make the tracking error converge to any desired small residual set with a moderate convergence rate and a smaller overshoot, and can improve the transient performance of multi-vessel formation systems. In [27], an observer-based neuro-adaptive control problem using a PPF-based idea is investigated to computationally simplify the developed scheme. In [28], PPF-based control technology is considered for an adaptive fault-tolerant attitude-tracking control system. A conventional PPF-based control algorithm is applied to the cooperative learning formation control problem with guaranteed transient performance in [29]. It should be noted that the aforementioned PPFs are asymptotically convergent, which may cause infinite convergence times.
Owing to its fast convergence rate and high precision, finite-time stability theory has become a hot topic [30]. Recently, the finite-time prescribed performance function (FTPPF) has been used in various nonlinear control systems [31-37]. Ref. [31] integrates a new FTPPF, as a transformation of the output error, into the position control of a pneumatic servo system, which is capable of improving the nominal controller. Ref. [32] develops a control strategy for high-order nonlinear systems by incorporating FTPPF-based technology and an adaptive fuzzy control scheme. For a stochastic system considering an FTPPF [33], semiglobal uniform ultimate boundedness can be ensured for the residual error, which is closely related to the boundary of the FTPPF. For a 6-DOF attitude-orbit synchronous control system, a time-varying PPF-based control scheme is proposed to make the tracking errors converge in finite time to a tiny area containing the equilibrium [34]. The FTPPF is considered in strict-feedback nonlinear systems [35]. Ref. [36] investigates a trajectory tracking control problem with full-state constraints by designing an appointed-time performance function. In [37], at the kinematic level, a finite-time time-varying guidance law is proposed based on an FTPPF-based error transformation. The upper bounds of the convergence time in [32-37] are determined by the initial states and the designed parameters, which limits their practicality. This limits the application of FTPPF-based control technology in the field of distributed formation control of multiple vessels.

Inspired by the aforementioned discussions, a nonlinear ESO-based distributed formation control scheme with a novel FTPPF is designed for multiple vessels under unmeasurable velocities and system uncertainties. In the multi-vessel system, a nonlinear ESO is proposed to estimate the unmeasurable velocities and system uncertainties. Subsequently, a robust controller is designed by incorporating the proposed FTPPF and the dynamic surface control method. To a certain degree, the presented method can improve transient performance and guarantee finite-time convergence. The main features are as follows:

• For the aforementioned ESO-based control strategies [4,16-19,21], the convergence time may be infinite. Our proposed nonlinear ESO-based distributed formation controller can guarantee finite-time stability with appropriate parameters of the designed FTPPF.

The remaining sections are organized as follows. Section 2 provides the preliminaries on graph theory, the vessel model, the nonlinear ESO, and prescribed performance. Section 3 presents the proposed control algorithm and stability analysis. Then, the simulations are conducted and analyzed in Section 4. Finally, the conclusions are stated in Section 5.

Basic Concepts of Graph Theory

Define an undirected connected weighted graph ℘ = ℘(ν, ε), where ν = {1, 2, ..., n} represents the set of vessels and ε ⊆ ν × ν is the set of edges; ε_ij = (i, j) represents that node j can obtain the information of node i. (j, i) ∈ ε expresses that node i is a neighbor of node j, and N_i = {j ∈ ν | (j, i) ∈ ε} represents the set of neighbors of node i. A = [a_ij] ∈ R^{n×n} is the weighted adjacency matrix. The degree matrix D = [d_ij] ∈ R^{n×n} is defined by d_ii = Σ_{j∈N_i} a_ij and d_ij = 0 for i ≠ j. In the same way, the Laplacian matrix L = [l_ij] ∈ R^{n×n} is defined as L = D − A; for an undirected connected graph, the vector of ones 1_n is an eigenvector of L associated with its zero eigenvalue.
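As a concrete illustration of these graph quantities, here is a short NumPy sketch; the adjacency values are illustrative, not the paper's topology.

```python
import numpy as np

# Illustrative weighted adjacency matrix for n = 3 vessels (undirected,
# so A is symmetric); a_ij > 0 when vessels i and j communicate.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

# For an undirected connected graph, L @ 1_n = 0:
print(L @ np.ones(3))        # -> [0. 0. 0.]

# Leader access matrix: here only vessel 1 receives the leader's signal.
B = np.diag([1., 0., 0.])
```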
Let B = diag{b_1, b_2, ..., b_n} be the leader adjacency matrix, where diag(·) denotes a diagonal matrix; b_i > 0 means that the ith vessel has access to the leader, and b_i = 0 represents the other case [38].

System Modeling

The following mathematical model of multi-vessel motion [39] is presented based on the earth-fixed (O-NED) and body-fixed (B-XYZ) frames shown in Figure 1:

η̇_i = R(ψ_i)v_i,
M_i v̇_i + C_i(v_i)v_i + D_i(v_i)v_i + g(v_i) = τ_i + τ_wi,   (1)

where n is the number of vessels in the formation system and i denotes the ith vessel; M_i represents the system inertia matrix, which can be obtained in practical engineering applications; C_i(v_i) and D_i(v_i) represent the centripetal force matrix and the damping coefficient matrix, respectively; g(v_i) is the unmodeled dynamics; η_i and v_i denote the position and velocity vectors; τ_i is the control force vector; τ_wi is the time-varying environmental disturbance vector; and R(ψ_i) represents the conversion matrix between the two coordinate frames of Figure 1,

R(ψ_i) = [cos ψ_i, −sin ψ_i, 0; sin ψ_i, cos ψ_i, 0; 0, 0, 1].

A new vector μ_i = η̇_i is defined for (1), which yields the transformed model (3), where W_i collects the lumped system uncertainties (unmodeled dynamics and environmental disturbances) expressed in the earth-fixed frame.

Assumption 1. The lumped uncertainties W_i are bounded, with bounded time derivatives Ẇ_i.

Remark 1 ([40]). Since the energies of the vessels and the ocean environment are finite, the system uncertainties W_i should be considered bounded with a finite rate of change. External disturbances contain low- and high-frequency components. The high-frequency disturbances do not contribute to the vessel's movement; based on the wave-filtering technique, they can be discarded when designing the formation controller. Thus, the disturbances can be considered low-frequency, which means the disturbances are differentiable. Therefore, Assumption 1 is reasonable.

Assumption 2 ([1]). The desired trajectory of the virtual leader η_d is bounded and differentiable, with η̇_d and η̈_d bounded.

Nonlinear Extended State Observer

A nonlinear ESO (4) is designed to estimate the system uncertainties and the velocities of the vessels [20], where η̂_i and μ̂_i are the observed position and velocity vectors in the O-NED frame of the vessel, Ŵ_i are the observed system uncertainties, and |·| denotes the absolute value of a scalar. Then, define two new vectors z_2i = γ(μ_i − μ̂_i) and z_3i = W_i − Ŵ_i. The estimation error system can then be written as (6).

Theorem 1. For System (3) with the proposed nonlinear ESO (4), under Assumptions 1 and 2 and for any given initial η_i and v_i, the observation errors remain bounded, where P_i is an arbitrary positive-definite symmetric matrix, ‖·‖ is the Euclidean norm of a vector, and λ_min(·) and λ_max(·) denote the minimum and maximum eigenvalues of a matrix.

Proof of Theorem 1. Stack the estimation errors into a vector Z_i; based on (6), its dynamics follow directly. Define a Lyapunov function V_1(Z_i) for (6). Based on the theory of homogeneity, (6) is homogeneous of an appropriate degree, from which the bound (15) can be obtained. Setting the design parameter ς = (1 + θ_1)/2 simplifies (15); choosing appropriate parameters for γ, β_j, and P_i then yields V̇_1(Z_i) < 0, so the error system is stable, and according to (20) the observation errors converge to the bounded set stated in Theorem 1. Hence, the proof of Theorem 1 is completed.
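The paper's exact observer equations are given in (4); as a rough illustration of the idea, here is a generic nonlinear ESO sketch for one vessel's position/velocity/uncertainty channel, using a fal-type gain common in ESO designs. The structure and nonlinearity are our own illustrative assumptions; only the β gain values echo those later used in the simulations of Section 4.

```python
import numpy as np

def fal(e, alpha, delta):
    """Common ESO nonlinearity: linear near zero, |e|^alpha outside."""
    e = np.asarray(e, dtype=float)
    small = np.abs(e) <= delta
    return np.where(small, e / delta**(1 - alpha),
                    np.sign(e) * np.abs(e)**alpha)

def eso_step(eta_hat, mu_hat, w_hat, eta_meas, tau, M_inv,
             beta=(5.0, 0.5, 0.1), alpha=0.5, delta=0.05, dt=0.01):
    """One Euler step of a generic third-order nonlinear ESO.

    eta_hat, mu_hat, w_hat: estimated position, velocity, lumped uncertainty
    eta_meas:               measured position (the only measurement used)
    tau, M_inv:             control force and inverse inertia (known part)
    """
    e = eta_meas - eta_hat
    eta_hat = eta_hat + dt * (mu_hat + beta[0] * fal(e, alpha, delta))
    mu_hat = mu_hat + dt * (w_hat + M_inv @ tau + beta[1] * fal(e, alpha, delta))
    w_hat = w_hat + dt * beta[2] * fal(e, alpha, delta)
    return eta_hat, mu_hat, w_hat
```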
It should be noted that the conventional PPF is asymptotically convergent, which may result in an infinite convergence time. Since finite-time stability can drive the system states to equilibrium in finite time, an FTPPF is designed to overcome this drawback of the conventional PPF. Firstly, a definition of the FTPPF is given as follows. Definition 1 ([33]). If a continuous function ρ(t) is positive and non-increasing and attains an arbitrarily small constant ρ_{T_f} at the set time T_f (remaining equal to ρ_{T_f} thereafter), then ρ(t) is called an FTPPF. Subsequently, an FTPPF is designed according to (22) and Definition 1, expressed as (23), where ρ₀, ρ_{T_f}, and k are positive constants. The designed FTPPF has two benefits: it achieves finite-time convergence of the tracking errors within the prescribed stability regions, which is practical, and T_f can be set by users in advance, which is easier to achieve than with the conventional PPF.

Design and Analysis First, a distributed formation error is designed in Section 3.1. Then, an error transformation is proposed for the formation error. Subsequently, the distributed formation controller is designed using dynamic surface control technology. The stability of the distributed formation system is proved in Section 3.2.

Controller Design A distributed formation controller incorporating the proposed FTPPF and the dynamic surface control method is designed for multi-vessel formation control under multiple constraints. Based on the adjacency rule, the first formation error ξ_{i1} is defined in (25), where η_d is the desired position of the vessels; its derivative is given in (26), in which l_ii is the corresponding diagonal element of the Laplacian matrix. Based on (22), an error transformation is constructed to facilitate the controller design, where T(s_i) is strictly monotonically increasing and s_i is a logarithmic function of the normalized error. A virtual control law based on the backstepping method is then designed, with a gain parameter to be chosen. To avoid differential explosion, dynamic surface control technology is introduced, where the time constant T_d is positive and α_di is the guidance law for the velocities. Define a new error α_{ξi} = α_di − α_i; considering that the initial states of the system are bounded, the corresponding filter error is bounded as well. The velocity tracking error ξ_{i2} is defined in (33). Based on (31) and (33), the derivative of ξ_{i2} is obtained, where the estimate of the system uncertainties comes from the nonlinear ESO. Therefore, the formation control law is designed as in (35), with a second gain parameter to be chosen.

Stability Analysis Theorem 2. Consider the multiple-vessel system (3) with unmeasurable velocities and system uncertainties, combined with the nonlinear ESO and the proposed FTPPF. Under Assumption 2, for any given constant V_M > 0 and based on the estimation of the nonlinear ESO and (35), if the initial states of the system satisfy the corresponding bound, then all signals of System (3) are bounded and ξ_{i1} converges to a small-enough FTPPF-based set within the set time T_f, which means that the formation errors of the multiple vessels can approach zero.

Proof of Theorem 2. Construct the Lyapunov function V₃. Taking the derivative of V₃ based on (15) and (41), and selecting appropriate parameters so that the relevant design constants are positive, (43) can be rewritten using (18), which leads to (44) and (45).
Therefore, the errors O, E₂, and α_ξ are bounded with appropriate parameters. Given (25) and (30)-(32), η_i, α_i, α_di, α_{ξi}, and ξ_{i2} are bounded. Since O is bounded, it is further concluded from (22) and (28) that ξ_{i1} converges to a small-enough FTPPF-based set Ξ = {ξ_{i1} | −δ₁ρ(t) < ξ_{i1} < δ₂ρ(t)} within the set time T_f, which means that the formation errors of the multiple vessels can approach zero. Hence, the proof of Theorem 2 is completed.

Simulation Results and Comparative Analysis To show the performance of the designed control method, simulations are conducted on a computer running Windows 11 and MATLAB 2022a. In the simulations, five vessels and one virtual leader are considered. The sizes of the five vessels are identical (the length is 44.79 m and the width is 6.2 m). The unmodeled dynamics and the related main particulars of the vessels are given in (46), with the environmental disturbances modeled by the first-order Markov process ḣ = −T⁻¹h + Aw [43,44], where h ∈ ℝ³ represents the bias forces and moment, T ∈ ℝ^{3×3} is the time-constant matrix, w ∈ ℝ³ is zero-mean Gaussian white noise, and A ∈ ℝ^{3×3} scales the amplitude of w. The initial velocities are u_i(0) = v_i(0) = 0 m/s and r_i(0) = 0 rad/s. The structure vectors in the formation are l₁ = [0, 100, 0]ᵀ, l₂ = [0, 50, 0]ᵀ, l₃ = [0, 0, 0]ᵀ, l₄ = [0, −50, 0]ᵀ, and l₅ = [0, −100, 0]ᵀ. Through trial and error, the observer gains are set as β₁ = 5, β₂ = 0.5, and β₃ = 0.1 according to the estimation performance of the ESO, and the control gains are tuned accordingly. The expected trajectory η_d(t) is specified for the simulations. The communication topology of the multiple vessels is shown in Figure 3: it is undirected and connected, and only the first vessel can obtain the information of the expected trajectory. Due to confidentiality requirements, more details about the vessels cannot be provided. Nevertheless, it can be stated that the communication devices on the vessels meet the requirements of two-way communication; therefore, the undirected connected configuration is selected in this paper. Moreover, a conventional PPF and the corresponding error transformation are constructed for comparison, to show the superiority of our proposed FTPPF-based method; to ensure comparability, their parameters are kept consistent with those of the proposed FTPPF. Then, to intuitively show the performance of the nonlinear ESO and the proposed FTPPF-based formation controller, the simulation results for the estimation errors, the trajectories of the vessels, the formation errors, and the control forces are provided below.
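Before turning to the results, the disturbance model above lends itself to a short simulation sketch. The Euler-Maruyama discretization below of the first-order Markov process ḣ = −T⁻¹h + Aw is only illustrative: the time constants in T and the noise amplitudes in A are placeholders, since the paper's values are not given in the text.

```python
import numpy as np

# Illustrative simulation of the slowly varying bias forces/moment h(t)
# driven by zero-mean Gaussian white noise w, following h_dot = -T^{-1}h + Aw.
rng = np.random.default_rng(0)
dt = 0.1                                  # integration step [s]
Tm = np.diag([1000.0, 1000.0, 1000.0])    # time-constant matrix T (placeholder)
Aw = np.diag([1.0, 1.0, 0.5])             # noise amplitude matrix A (placeholder)
h = np.zeros(3)                           # bias surge/sway forces and yaw moment
history = []
for _ in range(6000):                     # 600 s of simulated disturbance
    w = rng.standard_normal(3) / np.sqrt(dt)         # discretized white noise
    h = h + dt * (-np.linalg.solve(Tm, h) + Aw @ w)  # Euler-Maruyama step
    history.append(h.copy())              # store the disturbance trajectory
```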
Figure 4 shows that the estimation errors of the nonlinear ESO approach zero after a period of time. This indicates that the observed values can approach the true velocities and system uncertainties, which satisfies the design requirements of the subsequent formation controller. The trajectories of the vessels are shown in Figure 5. It can be seen in Figure 3 that only the first vessel can obtain the information of the expected trajectory, yet all vessels follow the desired trajectories with the expected formation. After a stable formation is attained, it can be seen in the zoomed-in regions that all the deviations between the actual and expected trajectories are less than 0.2 m, which meets the requirements of tracking accuracy. Figure 6a-c show the formation errors under the proposed FTPPF-based controller, a conventional PPF-based controller, and a controller without PPF, respectively. It can be seen in Figure 6a-c that all the formation errors finally converge to near zero under the three different control methods. In addition, it can be seen in Figure 6a that the convergence time under the proposed FTPPF-based controller is within the preset time T_f = 150 s. Furthermore, in the zoomed-in areas in Figure 6a-c, the convergence time under the proposed FTPPF-based controller is approximately 100 s, which is smaller than the convergence times under the other two controllers (almost 110 s and 150 s, respectively). Moreover, compared with the zoomed-in areas in Figure 6b,c, the fluctuations of the formation errors are significantly reduced, thanks to the smaller preset constraint bounds of our proposed FTPPF-based controller. Therefore, the superiority of our proposed FTPPF-based method is verified in Figure 6a-c. Figure 7 shows the surge forces, sway forces, and yaw moments of the vessels. As shown in the zoomed-in area in Figure 7, the control forces and moments are large at the beginning, due to the initially large formation errors shown in Figure 6a, but remain relatively small after a stable formation is maintained. Therefore, the proposed FTPPF-based formation controller can maintain multiple vessels in an expected formation with high tracking accuracy. In addition, our proposed FTPPF-based controller has a faster convergence speed and smaller fluctuations than a conventional PPF-based controller and a controller without PPF.

Conclusions This paper presents a nonlinear ESO-based distributed formation control scheme with an FTPPF for multiple vessels, subject to unmeasurable velocities and system uncertainties. Initially, a nonlinear ESO is constructed to estimate the unmeasurable velocities and system uncertainties. Subsequently, a novel FTPPF is designed to improve the transient performance of the system, where the upper bound of the convergence time and the constraint bounds can be flexibly preset without depending on the initial states and designed parameters. Then, a robust formation control scheme is presented based on the designed ESO and FTPPF. Boundedness is guaranteed for all signals of the closed-loop system, and the formation errors approach zero within the preset time. Finally, simulations and comparisons show that our proposed FTPPF-based controller can maintain multiple vessels in an expected formation with high tracking accuracy, a faster convergence speed, and smaller fluctuations. However, collision avoidance is not considered in our proposed method, which limits its application in practice. Hence, collision avoidance will be the focus of future research on the design of a distributed formation controller.

The curves of ρ(t) under different T_f are shown in Figure 2.
As shown in Figure 2, tuning the design parameter T_f leads to different forms of the constraint boundary, and ρ(t) conforms with Definition 1 in all cases. The errors can meet the preset transient and steady-state performance with a proper selection of these parameters.

Figure 3. The communication topology of multiple vessels.
Figure 4. The estimation errors of the nonlinear ESO.
Figure 5. The trajectories of the vessels under the proposed FTPPF-based controller.
Figure 6. (a) The formation errors under the proposed FTPPF-based controller. (b) The formation errors under a conventional PPF-based controller. (c) The formation errors under a controller without PPF.
Figure 7. The surge forces, sway forces, and yaw moments for the vessels under the proposed FTPPF-based controller.
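Since Figure 2 itself cannot be reproduced here, the following sketch generates such ρ(t) curves. The closed form of the designed FTPPF (23) is not shown in the text, so the polynomial-decay expression in the code is an assumed form consistent with Definition 1 and with the stated parameters ρ₀, ρ_{T_f}, k and the user-set terminal time T_f; swapping in the paper's exact expression would only change the function body.

```python
import numpy as np

# Assumed FTPPF shape: decays from rho_0 to rho_Tf exactly at t = T_f and
# stays constant afterwards, so it is positive, non-increasing, and reaches
# the terminal value at the preset time, as Definition 1 requires.
def ftppf(t, rho0=2.0, rho_Tf=0.05, k=2.0, Tf=150.0):
    t = np.asarray(t, dtype=float)
    decaying = (rho0 - rho_Tf) * np.clip(1.0 - t / Tf, 0.0, None) ** k + rho_Tf
    return np.where(t < Tf, decaying, rho_Tf)

t = np.linspace(0.0, 300.0, 601)
for Tf in (100.0, 150.0, 200.0):      # different preset times, as in Figure 2
    rho = ftppf(t, Tf=Tf)
    assert rho.min() > 0 and np.all(np.diff(rho) <= 1e-12)
    assert np.isclose(ftppf(Tf, Tf=Tf), 0.05)   # terminal value reached at T_f
```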
4,740.4
2023-02-02T00:00:00.000
[ "Engineering" ]
A study on time-varying dependence between energy markets and linked assets based on the Russia-Ukraine conflict The energy industry is acutely sensitive to geopolitical shifts: the Russia-Ukraine conflict has caused sustained disturbances in global energy markets, reshaping global energy supply dynamics and significantly influencing global trade patterns. Utilizing static and dynamic GARCH-Copula models, this study elucidates the dependence between energy markets and related assets. Compared with the multivariate GARCH model, the Copula function demonstrates distinct advantages, notably in delineating joint asset distributions, capturing the nonlinear traits of market dependence, and highlighting robust tail correlation structures. Beyond the average inter-market dependence, its tail correlation offers a vital perspective on market risk. This research delves into the temporal and structural variations in the interdependence between energy markets and related assets. It probes potential structural breakpoints in dynamic interdependence and pinpoints their occurrences. By focusing on the Russia-Ukraine conflict, this study offers a holistic view of the changing interplay between the energy market and other asset categories, providing pivotal insights for investor portfolio optimization, regulatory oversight, and risk mitigation. Moreover, employing wavelet analysis, this study examines the frequency-domain traits of the interdependence between energy markets and associated assets. As frequency wanes, market price fluctuations become less pronounced. The continuous wavelet power spectrum indicates that price variations are predominantly mid-to-high frequency. Cross-wavelet transform results suggest that correlations between energy markets and related assets are more influenced by short-term perturbations than by enduring shifts.

Preface Russia stands as a predominant energy producer globally, supplying significant proportions of Europe's energy needs (30% of its crude oil and 35% of its natural gas). Consequently, Europe exhibits a pronounced reliance on Russian energy resources. The geopolitical tensions stemming from Russia's military activities in Ukraine prompted stringent economic sanctions from the United States and the European Union. This, in turn, pushed Russia to recalibrate its energy strategies. Simultaneously, European nations have been fervently exploring alternative energy sources, leading to substantial shifts in the global energy paradigm. These geopolitical disturbances have escalated the risk of synchronized volatility in the energy financial markets. Notable indices, such as the Shanghai Stock Exchange Composite and the Hang Seng Index, experienced pronounced declines; the stock markets of the UK, France, and Germany also registered significant drops. This study illuminates the temporal characteristics of the interdependence between energy markets and related assets, considering both the degree and the structure of this interrelation. Utilizing structural breakpoint analysis, it pinpoints key crisis events that induce alterations in interdependence structures. The findings underscore a positive, time-varying dependence between the energy market and its associated assets, save for the money market. Additionally, an asymmetric tail dependence in the energy market contrasts with the symmetric tail dependence observed in other markets.
Construction of the Copula model Copula functions, first introduced by Sklar in 1959 and often referred to as connecting functions, are renowned for their proficiency in describing the correlation structure of random variables. These functions not only capture the nonlinear correlation structure between markets but also reveal the tail correlation in extreme market scenarios, positioning them a notch above the traditional multivariate GARCH model.

Sklar's theorem posits that any joint distribution can be decomposed into its marginal distributions and a Copula function that binds them. Assume there exists a Copula function C satisfying F_{XY}(x, y) = C(F_X(x), F_Y(y)) (1), where F_X and F_Y represent the marginal distributions. For continuous F_X and F_Y, the Copula is unique and can be written as C(u, v) = F_{XY}(F_X⁻¹(u), F_Y⁻¹(v)), where u = F_X(x) and v = F_Y(y) (obtained via the probability integral transform) are uniformly distributed on [0, 1], and F_X⁻¹ and F_Y⁻¹ signify the generalized inverse distribution functions of the marginals F_X and F_Y, respectively.

In terms of dependence, the Copula function presents three pivotal advantages. First, it permits the individual modeling of marginal distributions, correlating their quantiles via the Copula function; this gives greater latitude in specifying marginal distributions and correlations. Second, for non-elliptical joint distributions, the Copula function renders a more precise dependence metric, as the conventional dependence metric based on the linear correlation coefficient fails to authentically capture the dependence structure. Third, it embraces tail dependence, quantifying the likelihood of simultaneous extreme rises or falls in two variables. Joe (1997) defined the tail dependence coefficients as τ_U = lim_{q→1⁻} Pr(Y > F_Y⁻¹(q) | X > F_X⁻¹(q)) = lim_{q→1⁻} [1 − 2q + C(q, q)]/(1 − q) for the upper tail, and τ_L = lim_{q→0⁺} Pr(Y ≤ F_Y⁻¹(q) | X ≤ F_X⁻¹(q)) = lim_{q→0⁺} C(q, q)/q for the lower tail, where τ_U, τ_L ∈ [0, 1] denote upper and lower tail dependence. A value of τ_U = 0 (or τ_L = 0) signifies the absence of upper (or lower) tail dependence.

Copula functions manifest in an array of forms. To meticulously delineate the dependence structure between the energy market and interlinked assets, this study employs common Archimedean and elliptical Copula functions, each exhibiting distinct tail characteristics. The optimal-fit connecting function model is then harnessed to further gauge the CoVaR.

The two-stage stepwise estimation method, known as the inference functions for margins (IFM) method, is employed in this study to estimate the relevant parameters. The steps are detailed below. Initially, all parameters θ_i in the marginal distribution functions of the energy market and linked-asset return series, fitted via the ARMA(p, q)-GARCH(1,1) model, are estimated by maximum likelihood, θ̂_i = arg max_{θ_i} Σ_{t=1}^{T} ln f_{i,t}(r_{i,t}; θ_i), where T represents the sample size, f_{i,t}(·) represents the density function of the marginal distribution, i = GB denotes the energy market, and i = S denotes the associated assets.
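As a sketch of this first stage, the code below fits an AR(1)-GARCH(1,1) marginal with GED innovations to each return series using the Python arch package and maps the filtered residuals to uniform pseudo-observations via an empirical probability integral transform. The file name and column labels are placeholders, not the paper's data.

```python
import pandas as pd
from arch import arch_model
from scipy.stats import rankdata

returns = pd.read_csv("returns.csv", index_col=0)  # hypothetical data file

def fit_marginal(r):
    # AR(1) mean, GARCH(1,1) variance, generalized error distribution
    model = arch_model(100 * r, mean="AR", lags=1, vol="GARCH",
                       p=1, q=1, dist="ged")
    return model.fit(disp="off")

u = {}
for col in ["GB", "Brent"]:                 # placeholder series names
    res = fit_marginal(returns[col].dropna())
    z = res.std_resid.dropna()
    # Empirical probability integral transform of the standardized residuals;
    # a parametric GED CDF could be used instead.
    u[col] = rankdata(z) / (len(z) + 1)
```

These pseudo-observations u, v are the inputs to the Copula stage described next.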
Similarly, all parameters θ_C in the Copula model can be estimated by maximizing the Copula log-likelihood, θ̂_C = arg max_{θ_C} Σ_{t=1}^{T} ln c_t(u_t, v_t; θ_C), where c_t(·) signifies the density function of the Copula model. Finally, in this paper, Akaike's information criterion (AIC) is used to compare the assortment of models, AIC = 2k − 2 ln(L̂), where k represents the total number of model parameters and L̂ the maximized likelihood. A smaller AIC value indicates superior model accuracy; thus, the time-varying Copula model with the minimal AIC value is distinguished as the optimal one.

The marginal distribution models of the energy market returns R_t(GB) and the correlated asset returns R_t(S) are identified as ARMA processes with generalized autoregressive conditional heteroskedasticity errors. Specifically, these models are ARMA(p, q)-GARCH(1,1) variants driven by a generalized error distribution (GED), which astutely captures salient features of the return series such as pronounced tails, leverage effects, and conditional heteroskedasticity. The mean equation is the ARMA(p, q) process R_t = c + Σ_{i=1}^{p} φ_i R_{t−i} + Σ_{j=1}^{q} ψ_j ε_{t−j} + ε_t, where p and q are non-negative integers and ε_t^i = σ_t^i z_t^i, with z_t^i following an independent GED distribution with ν degrees of freedom and σ_t^i the conditional standard deviation given by the variance equation (σ_t^i)² = ω + a₁(ε_{t−1}^i)² + β₁(σ_{t−1}^i)². The parameters ω, a₁, β₁, and ν require estimation, and for series stationarity a₁ and β₁ must satisfy a₁ + β₁ < 1. Leveraging the maximum likelihood estimation method, the marginal return distribution function for both the energy market and the associated assets is obtained, where GED_ν(·) represents the distribution function of the GED.

Structural break detection In contemporary research, two prominent methods exist for pinpointing structural breaks. The first is subjective inference: visual analysis of time-series plots to discern inflection points, which are then associated with significant events occurring near them so that researchers can hypothesize potential structural breaks. However, this method struggles to establish a direct sequence between inflection points and related events, potentially leading to flawed conclusions. The second relies on financial econometric models: techniques such as the Chow and CUSUM tests are employed to determine structural breaks. Nonetheless, these tests are not devoid of limitations: the selection of sample intervals carries an inherent subjectivity, which risks misidentifying breakpoints, and, more critically, tests like Chow and CUSUM can only evaluate a single breakpoint at a given time, restricting their applicability. Recognizing these limitations, this paper leverages the endogenous multiple-breakpoint detection method, known as the BP test, proposed by Bai and Perron (2003). This technique surmounts the single-breakpoint constraint of previous methods and currently stands out as a more precise and effective methodology. Under the BP test framework, the initial step is to structure a multivariate linear regression model accounting for the breakpoints, y_t = x_t′β + z_t′δ_j + u_t for t = T_{j−1} + 1, …, T_j and j = 1, …, m + 1, (13), where y_t is the dependent variable, x_t and z_t are the regressors, β and δ_j are the corresponding coefficient vectors, m signifies the number of structural changes, T₁, T₂, …, T_m denote the structural change points, and u_t is a white-noise residual term.
The structural breakpoints can be derived by minimizing the total sum of squared residuals, i.e. (T̂₁, …, T̂_m) = arg min_{T₁,…,T_m} Σ_{j=1}^{m+1} Σ_{t=T_{j−1}+1}^{T_j} [y_t − x_t′β − z_t′δ_j]². The procedure to determine structural breaks encompasses the following steps. Firstly, Fisher's algorithm (1958) is applied: equation (13) undergoes iterative processing to pinpoint the minimal value of the total residual sum of squares. Secondly, the supWald test is used to identify the number of breakpoints and pinpoint their respective occurrence times.

The multiple structural breakpoint test is instrumental in delving deeper into the intertwined relationship between the energy market index and its related assets. By testing for shifts in the time-varying correlation coefficient, this methodology evaluates whether the interplay between the energy market and its associated assets undergoes structural changes. Recognizing these breakpoints and juxtaposing them with real-world events illuminates the tangible factors and significant occurrences that sway the interconnectedness of the energy market and related sectors. By synthesizing qualitative and quantitative analyses, researchers can pinpoint the precise moments of time-varying correlation shifts. This, in turn, aids in identifying the deterministic external events that precipitate extreme risks, offering invaluable insights for adeptly navigating the perilous waters of the energy market.

Energy market index selection To track the performance of energy market returns, various indices denominated in different currencies have been developed. These are designed to encapsulate the price dynamics of energy commodities traded across different platforms. This study zeroes in on three global energy market indices: the Standard & Poor's Dow Jones Energy Market Index, the Bloomberg Barclays MSCI Energy Market Index (MSCI_GB), and the Solactive Energy Market Index. The objective is to juxtapose their price dynamics and volatility trajectories. Each of these indices employs a unique calculation methodology and has distinct components, and their usage is well documented in earlier research. Daily price data for these indices, spanning from 24 February 2022 to 24 December 2022, were sourced from the Bloomberg database. We derived the daily return series for these indices utilizing the percentage logarithmic difference approach. Table 1 delineates the descriptive statistics for the daily returns, noting an average return proximate to zero. Based on the Jarque-Bera test, the normality assumption for all three indices is rejected at the 1% significance level. These series exhibit analogous statistical attributes regarding mean, kurtosis, skewness, and normality. Furthermore, their Pearson correlation coefficients are elevated, especially between GB and MSCI_GB, where they approach unity. It is evident that these energy market index series convey congruent market insights. Hence, owing to its precedence in publication, the S&P Dow Jones Energy Market Index (GB) is chosen as the representative energy market index for this paper.
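Returning to the breakpoint methodology described in the previous subsection, the following sketch approximates the Bai-Perron least-squares search with the Python ruptures package: its dynamic-programming detector minimizes the same total residual sum of squares for a given number of breaks, although the supWald stage for selecting the number of breaks is not reproduced here. The input series is a placeholder for an estimated time-varying correlation.

```python
import numpy as np
import ruptures as rpt

# Placeholder for a time-varying correlation series estimated earlier.
rho_t = np.loadtxt("dynamic_correlation.txt")
signal = rho_t.reshape(-1, 1)

# min_size mirrors the 0.1 trimming fraction; up to 7 breaks are tried,
# matching the settings used later in the paper's BP test.
algo = rpt.Dynp(model="l2", min_size=int(0.1 * len(signal))).fit(signal)
for m in range(1, 8):
    bkps = algo.predict(n_bkps=m)
    print(m, bkps)    # candidate break dates, as sample indices
```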
Sample selection of linked assets To investigate the magnitude and structure of the interdependence between the energy market and various other markets, this study considers traditional asset classes, including fixed income, equities, and energy commodity markets. Additionally, energy equity indices across diverse sectors are taken into account. Specifically, various clean energy industry indices from the NASDAQ OMX Energy Economics series are evaluated to assess the financial dynamics of the global energy equity market at an industry-specific level. This index series captures a breadth of sectors, representing the behavior of distinct energy stock sub-sectors and thus offering an expansive view of market trends within the energy domain. To ensure data consistency between the energy market and the energy industry stocks, we primarily focus on indices from the NASDAQ OMX Energy Economics series that have significant relevance to energy market trends.

Therefore, guided by the work of Ferrer et al. (2021), this paper selects indices related to solar (GRNSOLAR), wind (GRNWIND), natural gas (GRNFUEL), energy efficiency (GRNENEF), clean transportation (GRNTRN), energy-centric buildings (GRNGB), pollution prevention (GRNPOL), and water management (GRNWATER). Among these, the GRNSOLAR and GRNWIND indices are curated to reflect the performance of enterprises engaged in energy production via solar and wind modalities, while the GRNFUEL index encompasses firms involved in natural gas-based energy production.

Table 2 presents the principal descriptive statistics concerning the daily returns of all time series throughout the sample period. The table indicates that the mean values of the daily returns of all variables approximate zero and are notably smaller than the corresponding standard deviations, highlighting significant variability in the pricing of the discussed asset classes. Notably, the standard deviation of crude oil returns is exceptionally elevated, surpassed solely by the natural gas equity sub-sector. This pronounced volatility in the oil market can be traced back to the major price fluctuations since the inception of the twenty-first century. Predictably, the mean returns and standard deviations of the energy market indices are more subdued than those of the traditional and energy-industry stock markets. Most return series exhibit negative skewness coefficients, suggesting left-skewed distributions. Moreover, every series registers kurtosis values exceeding 3, indicating distributions with tails heavier than those of a standard normal distribution. The Jarque-Bera (JB) test statistic consistently rejects the normality hypothesis across all series at the 1% significance level, underscoring distributional characteristics with pronounced peaks and heavy tails. Within the scope of this study, the ADF test reveals that all return series are stationary without unit roots, i.e. I(0), at the 1% significance level. Table 3 presents the Pearson correlation coefficients for each asset pair across the entire sample period. Notably, a strong positive correlation (0.72) exists between the energy market and traditional treasury bonds, underscoring a significant relationship between the energy market and high-credit traditional bonds. The Pearson correlations between the energy market and energy industry stocks appear both modest and varied, ranging from 0.12 for the natural gas sub-sector to 0.41 for the pollution prevention sub-sector. Such low correlations imply that, generally, the energy market and
energy stocks might be influenced by distinct factors. Indeed, the correlations between the indices of energy stock sectors and traditional stock markets are generally more marked than those observed for the energy market index, with correlations exceeding 0.5 for all sectors except the natural gas sector. Additionally, the energy market's correlation with crude oil prices is notably low, at 0.12. Energy stocks also display modest positive correlations with crude oil, with values between 0.17 and 0.32. It is important to highlight that energy equities correlate weakly with traditional bond assets, including Treasury and corporate bonds.

Estimation of time-varying Copula-GARCH models This section delves into the estimation of dynamic dependence and time-varying tail correlations between the energy market and other associated assets. The methodology involves: (a) estimating all parameters in the marginal distribution functions of the variables using the AR(1)-GARCH(1,1)-SkT model; (b) gauging both static and dynamic dependencies between the energy market and related assets, by estimating diverse static and dynamic Copula model parameters to elucidate these relationships; and (c) estimating the dynamic tail dependence between the energy market and the associated markets. The descriptive statistics reveal that all the time series exhibit pronounced tails and peaks: none adhere to a normal distribution, while all display stationarity and the ARCH effect. Consequently, this study adopts the AR(1)-GARCH(1,1)-SkT model to capture the marginal distribution of each market. Estimation results are delineated in Table 4. The results in Table 4 indicate that the coefficients of the mean equations, based on the ARMA model, are statistically significant. The GARCH effect on volatility is statistically significant for most of the time series. Furthermore, the sum of the estimated parameters α₁ and β₁, which measures the persistence of volatility, approaches 1 while still satisfying the stationarity condition α₁ + β₁ < 1. This suggests a pronounced volatility-clustering effect across the asset return indices. Additionally, for most series the skewness parameter λ indicates a substantial leverage effect, implying that negative return shocks induce greater uncertainty than positive return shocks of the same magnitude.

Copula model selection Upon determining the marginal distributions, we chose appropriate Copula functions to analyze the dependence structure between the two markets. Different Copula function models can yield varying results when interpreting this dependence structure. Generally, the AIC serves as the primary criterion for Copula function selection. In this research, we considered seven prevalent Copula models: Student-t, Clayton, Gumbel, rotated Gumbel, SJC, Gaussian, and BB7. These models were applied to the marginal distributions for correlation fitting, and the best-fitting Copula model was then chosen based on the lowest AIC value. The fitting efficacy of each Copula model, as depicted by its AIC value, can be found in Table 5.
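To make the selection step concrete, the sketch below fits one candidate, the static Gaussian copula, by maximum likelihood and computes its AIC; the Student-t, Clayton, Gumbel, rotated Gumbel, SJC, and BB7 candidates would be handled analogously, and the pseudo-observations here are random placeholders rather than the filtered series.

```python
import numpy as np
from scipy import optimize, stats

def gaussian_copula_negloglik(rho, u, v):
    # log-density of the bivariate Gaussian copula with correlation rho,
    # evaluated on uniform pseudo-observations mapped to normal scores.
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    log_c = (-0.5 * np.log(1 - rho**2)
             - (rho**2 * (x**2 + y**2) - 2 * rho * x * y)
             / (2 * (1 - rho**2)))
    return -np.sum(log_c)

rng = np.random.default_rng(1)
u, v = rng.uniform(size=500), rng.uniform(size=500)   # placeholder inputs

res = optimize.minimize_scalar(gaussian_copula_negloglik, args=(u, v),
                               bounds=(-0.99, 0.99), method="bounded")
aic = 2 * 1 + 2 * res.fun        # AIC = 2k - 2 ln(L), with k = 1 parameter
print(res.x, aic)
```

Comparing such AIC values across the seven candidates reproduces the logic behind Table 5.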
According to the AIC, the best fit between the returns of the energy market and the majority of traditional assets is the time-varying Student-t Copula model. This indicates a symmetric dependence between the energy market and most of the traditional financial and energy markets; under such a dependence structure, negative and positive international market return news affect the energy market similarly. However, the optimal Copula model describing the relationship between the energy market and the commodity, oil, and energy fuel markets is the rotated Gumbel. This model captures asymmetric tail dependence, indicating that negative news in one market influences the other far more profoundly than positive news does. Based on this Copula model selection, this study adopts the dynamic rotated Gumbel model to describe the interdependence and its structure between the energy market and these markets, while the dynamic Student-t Copula model is employed to delineate the interdependence and structure between the energy market and the other financial markets.

Correlation analysis using static and time-varying Copula models This study computes dynamic dependence through a 180-degree rotated Gumbel Copula model. The dynamic relationship between the energy market and the commodity, oil, and energy fuel markets is positive and irregular, fluctuating around the value of 1. Such findings suggest that surges in crude oil and commodity prices escalate production expenses; these increased costs subsequently stimulate sectors like renewable energy, augmenting the demand for energy investment capital. This amplification operates via three channels: the substitution effect, the aggregate demand effect, and the production disincentive effect.

For the remaining correlated assets, the dynamic correlation coefficients largely follow a similar trajectory. A notable rise in interdependence transpired during the Russia-Ukraine conflict in 2022. This heightened correlation receded from May 2022, only to surge again during the stock market downturn in June 2022, after which a fluctuating decline was observed. The study further underscores that dynamic interdependence is not perpetually positive, being contingent largely on the external economic climate. Specific peaks in the interdependence between energy and financial markets materialize at distinct junctures, primarily due to significant disruptive events. Financial crises, oil-centric conflicts, and shifts in socio-political landscapes can recalibrate market supply and demand dynamics, intensifying price volatility. This, in turn, amplifies risk contagion and market interdependence.
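For interpreting the selected rotated Gumbel model, the sketch below converts a Gumbel dependence parameter path into tail-dependence coefficients using the standard closed form τ_U = 2 − 2^{1/θ}, which becomes lower-tail dependence after the 180-degree rotation; the parameter path itself is a toy placeholder for the estimated dynamic path.

```python
import numpy as np

def gumbel_upper_tail(theta):
    # tau_U of the Gumbel copula; after 180-degree rotation (survival
    # copula) the same value measures lower-tail dependence, i.e. joint
    # extreme losses, matching the asymmetry discussed above.
    return 2.0 - 2.0 ** (1.0 / theta)

theta_t = 1.0 + np.abs(np.sin(np.linspace(0, 6, 200)))   # toy parameter path
tau_lower_rotated = gumbel_upper_tail(theta_t)           # time-varying tail dep.
print(tau_lower_rotated.min(), tau_lower_rotated.max())
```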
To summarize, the empirical scrutiny in this segment divulges two salient observations: (a) except for the money market, energy markets and related assets exhibit a predominantly positive time-varying dependence in return realizations; (b) a symmetric tail dependence exists between energy markets and the majority of financial and energy markets, whereas there is an asymmetric tail dependence with the commodity, oil, and energy fuel markets. This indicates that unfavorable outcomes in the energy market exert a more pronounced influence than positive ones. Such dependencies are considerably swayed by investor sentiment, thereby endorsing the "asymmetric price adjustment" theory between equity and debt markets. Moreover, considering the informational interplay across these markets and their historical volatility, it is inferred that the bond between energy markets and their allied assets intensifies during economic upheavals.

Test for structural breakpoints in dynamic dependence A "structural mutation" refers to a shift in the trajectory of a series at a point where the variables that mirror its attributes undergo significant changes, driven by events such as the Russia-Ukraine conflict, financial crises, currency upheavals, oil disruptions, or wars. Emerging markets frequently exhibit susceptibility to these structural alterations, often stemming from the adoption of pertinent economic policies, modifications in financial regulatory frameworks, or the repercussions of abrupt, significant economic or political crises, both domestic and international. Such disruptions can precipitate pronounced shifts in the interconnectedness of different markets. Therefore, a meticulous analysis of abrupt structural alterations can shed light on the intricate, evolving interrelationships between energy markets and other sectors, illustrating the multifaceted interdependence between energy markets and their associated assets.

In this study, we employ Bai and Perron's test for structural breakpoints to ascertain the exact positions of these shifts in the interconnectedness between energy markets and related assets. For this test, the trimming value is fixed at 0.1, the maximum number of breakpoints is set to 7, and the significance level is maintained at 5%. This approach detects the number and precise dates of structural breakpoints across all correlated assets; the findings are detailed in Table 6. As deduced from Table 6, there exist at least five structural breakpoints in the trajectory of the evolving dependencies between energy markets and linked assets. In this study, we synchronize the timing of these structural shifts with real-world events, focusing on events within the sample period to pinpoint the locations of the breakpoints. Alongside this, we identify major crisis incidents within the sample as primary triggers inducing shifts in market dependence.

Table 1. Descriptive statistics of energy market indices.

Table 2.
Descriptive statistics. Note: Treasury denotes the Bloomberg Barclays Global Treasury Total Return Index; Corporate is the Barclays Global Aggregate Corporate Index; High_Yield stands for the Bloomberg Barclays Global High Yield Total Return Index; Stock is indicative of the stock market price; Brent relates to the oil price; Commodity pertains to the commodity markets; GRNENEF is the Energy Efficiency Index; GRNFUEL the Natural Gas Index; GRNGB the Energy Buildings Equity Index; GRNPOL the Pollution Prevention Equity Index; GRNSOLAR the Solar Equity Index; GRNTRN the Clean Transportation Equity Index; GRNWATER the Water Resource Management Equity Index; and GRNWIND the wind energy stock index. JB represents the Jarque-Bera test statistic for normality; ADF indicates the unit root test; *** signifies statistical significance at the 1% level.

Note: GB represents the energy market; currency refers to the currency market.

Table 5. AIC values for selected common Copula models. Note: The best Copula fit is indicated by the minimum AIC values (in bold).
5,037.2
2024-02-08T00:00:00.000
[ "Economics", "Environmental Science" ]
The Effective Bootstrap We study the numerical bounds obtained using a conformal-bootstrap method - advocated in ref. [1] but never implemented so far - where different points in the plane of conformal cross ratios $z$ and $\bar z$ are sampled. In contrast to the most commonly used method, based on derivatives evaluated at the symmetric point $z=\bar z=1/2$, we can consistently "integrate out" higher-dimensional operators and get a reduced set of bootstrap equations that is simpler and faster to solve. We test this "effective" bootstrap by studying the 3D Ising and $O(n)$ vector models and bounds on generic 4D CFTs, for which extensive results are already available in the literature. We also determine the scaling dimensions of certain scalar operators in the $O(n)$ vector models, with $n=2,3,4$, which have not yet been computed using bootstrap techniques. Introduction There has recently been a great revival of interest in the conformal bootstrap program [2,3] after ref. [4] observed that its applicability extends to Conformal Field Theories (CFTs) in d > 2 dimensions. Since ref. [4], considerable progress has been achieved in understanding CFTs in d ≥ 2 dimensions, both numerically and analytically. Probably the most striking progress has been made in the numerical study of the 3D Ising model, where amazingly precise operator dimensions and OPE coefficients have been determined [5][6][7]. Essentially all numerical bootstrap studies so far have used the constraints imposed by crossing symmetry on 4-point correlators evaluated at a specific value of the conformal cross-ratios, u = v = 1/4, or equivalently in z-coordinates at z = z̄ = 1/2 [8]. This is the point of best convergence for the combined conformal block expansions in the s and t channels. Taking higher and higher derivatives of the bootstrap equations evaluated at this point has proven to be very effective and successful in obtaining increasingly better bounds. We will denote this method in the following as the "derivative method". A drawback of the derivative method - both in its linear [4,6,9] and semi-definite [10,11] programming incarnations - is the need to include a large number of operators in the bootstrap equations. This makes any, even limited, analytical understanding of the obtained results quite difficult. A possible approximation scheme is in fact available: ref. [12] has determined the rate of convergence of the Operator Product Expansion (OPE), on which the bootstrap equations are based. This allows us to extract the maximal error made when neglecting operators with dimensions larger than some cutoff ∆* in the bootstrap equations and thus to consistently truncate them. These truncated bootstrap equations can then be evaluated at different points in the z-plane. This method, which we denote as the "multipoint method", has been previously advocated by Hogervorst and Rychkov in ref. [1] but has not yet been numerically implemented. The aim of this note is to provide such an implementation and study the resulting bounds. It is important to emphasize that the method of ref. [1] combines what are in principle two independent ideas: i) multipoint bootstrap and ii) truncation of the bootstrap equations. One could study i) without ii), or try to analyze ii) without i). We will not consider these other possibilities here. We begin in section 2 with a brief review of the results of refs. [1,12,13] on the convergence of the OPE. We use generalized free theories as a toy laboratory to test some of the results obtained in ref. [12].
We then generalize the results of ref. [12] to CFTs with an O(n) global symmetry. We write the bootstrap equations and set the stage for our numerical computations in section 3. Our results are then presented in section 4. For concreteness, we study bounds on operator dimensions and the central charge in 3D and 4D CFTs, with and without an O(n) global symmetry (with no supersymmetry). For these bounds, extensive results are already available in the literature (see e.g. refs. [5-7, 10, 14-22]). In particular, we focus our attention on the regions where the 3D Ising and O(n) vector models have been identified. We show how the results depend on the number N of points in the z-plane at which we evaluate the bootstrap equations and on the cutoff ∆* on the dimension of operators in the bootstrap equations. Using values for the dimension of the operator φ in O(n) vector models available in the literature and a fit extrapolation procedure, we then determine the dimensions of the second-lowest O(n) singlet and symmetric-traceless operators S and T for n = 2, 3, 4. To our knowledge, these have not been obtained before using bootstrap techniques. Our results are consistent with those from analytical calculations using the ε-expansion [23,24], with a mild tension with the result of ref. [24] for the dimension of T in the O(2) model. We notice from our results that the "kink" in the bound on the dimension of the lowest scalar (singlet) operator in 3D Ising and O(n) vector models is already visible for relatively small ∆*, while the minimum in the central-charge bound is very sensitive to ∆*. For our numerical implementation, we discretize the spectrum and formulate the bootstrap equations as a linear program, which we solve using the optimizer CPLEX by IBM. Since we focus on the truncated bootstrap equations with relatively low cutoff ∆*, the double precision used by CPLEX is sufficient for our purposes. More refined implementations with higher numerical precision, possibly adapting the method and optimizer of refs. [6,9], are certainly possible. More details on the numerical implementation are given in section 5. We conclude in section 6.

Convergence of the OPE We begin with a brief review of the results of refs. [12,13] (see also ref. [1]) about the convergence of the OPE in a euclidean, reflection-positive CFT in any number of dimensions; for more details, see the original references. Consider the 4-point function of a scalar primary operator φ with scaling dimension ∆φ, ⟨φ(x₁)φ(x₂)φ(x₃)φ(x₄)⟩ = g(u, v)/(x₁₂^{2∆φ} x₃₄^{2∆φ}), (2.1) where u = x₁₂²x₃₄²/(x₁₃²x₂₄²) and v = x₁₄²x₂₃²/(x₁₃²x₂₄²) are the conformally invariant cross-ratios (x_ij ≡ x_i − x_j). Applying the OPE to the operator pairs φ(x₁)φ(x₂) and φ(x₃)φ(x₄) in the 4-point function, one can write g(u, v) = Σ_O λ_O² g_{∆,l}(z, z̄), (2.3) where u = zz̄, v = (1 − z)(1 − z̄), and the sum runs over all primary operators O that appear in the φ × φ OPE, with ∆ and l being respectively their dimension and spin. For each primary, the sum over all its descendants is encoded in the conformal block function g_{∆,l}(z, z̄). In a euclidean CFT, z̄ = z* and the conformal blocks are regular everywhere in the complex z-plane, with the exception of a branch-cut along the real line [1, +∞). (The branch-cut is best seen in Lorentzian signature, where z and z̄ are two independent variables: at fixed z̄ (z), g_{∆,l}(z, z̄) is a true analytic function in z (z̄) with a branch-cut along the line [1, +∞).) Thanks to reflection positivity, the OPE coefficients λ_O are real and thus λ_O² > 0. Crucial for our considerations will be a bound on the remainder of the sum in eq. (2.3) when it is truncated at some primary operator of dimension ∆ = ∆*. To determine this bound, one first uses the fact that the conformal blocks at complex z are bounded in absolute value by their values at real z = z̄, as follows e.g. from a representation of the conformal blocks in terms of Gegenbauer polynomials [1].
It is therefore sufficient to estimate the remainder for real z = z̄. As was found in ref. [12], the most stringent bound is obtained by using the coordinate ρ(z) = z/(1 + √(1 − z))². (Bounds on the OPE convergence are obtained in an alternative way, using crossing symmetry, in ref. [25]. Interestingly, ref. [25] sets bounds which are also valid for finite values of ∆* at z = z̄ = 1/2, though they are relative and not absolute bounds. It would be interesting to explore the approach followed in that paper further. We thank Slava Rychkov for having pointed out this reference to us.) The z-plane is mapped to the unit disk in ρ and the branch-cut is mapped to the boundary of the disk; the conformal blocks in ρ are then defined for |ρ| < 1. In the manifestly reflection-positive configuration with ρ̄ = ρ = r, the function g(u, v) in eq. (2.3) can be written as a power series in r, eq. (2.6), where c_n(∆, l) are positive coefficients whose explicit form is not important here and the sum over n takes into account the contributions from the descendants of each primary. (For simplicity, we use the same symbol to denote the functions g(u, v) and g̃(r) = g(u(r), v(r)) etc. here and below.) It is convenient to rewrite g(r) as a spectral sum, g(β) = Σ_k ρ_k e^{−β∆_k}. Here β ≡ −log r, k runs over all operators (primaries and their descendants) which are exchanged in the OPE, and f(∆) is a spectral density with positive coefficients ρ_k; again, their explicit form is not relevant for our considerations. The behaviour of g(β) in the limit β → 0 (corresponding to the OPE limit x₃ → x₂, in which case z → r → 1 and 1 − z → β²/4 → 0) is dominated by the exchange of the identity operator, and one finds g(β) ∼ (β²/4)^{−2∆φ}, where a ∼ b means that a/b → 1 in the considered limit. The key observation of ref. [12] is that, since the coefficients ρ_k are all positive (it is in fact sufficient that the coefficients are positive for operators with dimension larger than some fixed value ∆₀), this asymptotic behaviour determines the leading, large-∆ behaviour of the integrated spectral density by means of the Hardy-Littlewood tauberian theorem (see e.g. [26]), eq. (2.11). (This is true in general only in d > 2 dimensions. In d = 2, one has to be careful, since scalar operators can have arbitrarily small dimensions; see also the discussion after eq. (2.23).) The remainder (2.4) can then be bounded as follows. We first note that the truncated sum over primaries with ∆ > ∆* and their descendants, eq. (2.12), is bounded by the corresponding truncated spectral sum over all operators, since the r.h.s. contains contributions from all operators with dimension larger than ∆*, whereas on the l.h.s. only primaries with dimension larger than ∆* and their descendants contribute. Using eq. (2.11), the r.h.s. can in turn be bounded in terms of the incomplete Gamma function Γ(a, b). Clearly, this bound applies for parametrically large values of ∆*, where eq. (2.11) holds. Using eq. (2.5), we finally get the bound (2.16) on the remainder, which is valid in any number d > 2 of dimensions for 4-point functions with identical scalars. It was pointed out in ref. [13] that the conditions for the applicability of the Hardy-Littlewood tauberian theorem in both 3 and 4 dimensions are also fulfilled for the rescaled conformal blocks with γ = 1. Repeating the derivation reviewed above for a remainder involving the rescaled conformal blocks, it is straightforward to get the alternative bound (2.17). For −∆* log |ρ(z)| ≫ 1, eq.
(2.17) can be approximated by its leading exponential behaviour. We see that for |ρ(z)| not too close to 1 and ∆* ≫ 8∆φ, the bound is more stringent for γ = 1 than for γ = 0. It was furthermore shown in ref. [13] that in d = 3 dimensions γ = 1 is the maximal allowed value such that the Hardy-Littlewood tauberian theorem remains applicable, whereas it was conjectured without proof that the maximal allowed value in d = 4 dimensions is γ = 3/2. Correspondingly, we use eq. (2.17) with γ = 1 for the remainder both in 3 and 4 dimensions in our numerical implementation. (In order to make the bound more stringent, one could alternatively use the series representation in ref. [1], which includes the contributions from primary operators and their descendants separately. Using this series, truncated at contributions corresponding to dimension ∆*, instead of the full conformal blocks g_{∆,l}, would make the r.h.s. of the inequality (2.12) the actual remainder to be bounded, and would thus make eq. (2.16) with γ = 0 more stringent. Here, however, we choose not to follow this approach, because the representations of the full conformal blocks g_{∆,l} can be calculated considerably faster than (our implementation of) the truncated series representation of ref. [1].)

The above derivations were based on the existence of a configuration for which the function g(u, v) turns into a positive definite function of a single variable; the remainder is then estimated using the Hardy-Littlewood tauberian theorem. One cannot naively apply these arguments to arbitrary derivatives of g(u, v) w.r.t. u and v, unless the resulting functions remain positive definite and derivatives can be brought inside the absolute value on the l.h.s. of eq. (2.16). See the appendix of ref. [27] for a recent discussion of how to estimate the remainder for derivatives of g(u, v). It would be interesting to verify whether this also allows one to study truncated bootstrap equations with the derivative method.

Comparison with Generalized Free Theories and Asymptotics for z → 1 The results reviewed in the previous subsection are based on eq. (2.11), which holds in the limit ∆* → ∞. Of course, for any practical use we need to know the value of ∆* beyond which we can trust eq. (2.11) and thus the bound eq. (2.16). It is difficult to determine this value for a generic CFT, but we can get useful insights by considering exactly calculable CFTs, like generalized free theories (GFTs, sometimes called mean field theories), for which the CFT data are known and the function g(u, v) in eq. (2.1) takes a known closed form in any number of dimensions, eq. (2.19). In fig. 1, we show the ratio η, defined in eq. (2.20), as a function of ∆*, evaluated at the symmetric point z = z̄ = 1/2. Notice that at the point of best convergence the actual remainder is always significantly smaller than R, and that the ratio between the two grows as ∆* increases.

When z → 1, both the numerator and the denominator of η in eq. (2.20) blow up, since the OPE is not convergent at z = z̄ = 1. Operators with high scaling dimension are no longer suppressed and the remainder completely dominates the OPE. More precisely, one finds the universal limiting behaviour (2.22), independently of γ. Notice that this limit is universal for any CFT that includes in its spectrum a scalar operator with dimension ∆φ, because z = z̄ → 1 selects the universal identity contribution in the t-channel. This class of CFTs always includes a GFT for the operator φ itself. In this case, the universal nature of the limit is trivially checked using eq.
(2.19); in the last equality one uses that |1 − z| → (log |ρ(z)|)²/4 in this limit. It was found in refs. [28,29] that the spectrum of any Lorentzian CFT resembles that of a GFT for parametrically large spin operators. In particular, in ref. [28] this has been established by analyzing crossing symmetry in the limit z → 0 with z̄ fixed for d > 2, where large-twist operators are suppressed. The two-dimensional case is more subtle, because there is no longer a gap between the identity (which has the minimum twist, zero) and the other operators. Indeed, the results of refs. [28,29], and those of ref. [12] in the euclidean, do not straightforwardly apply for d = 2; in the euclidean, operators of any twist should be considered. However, given the results of refs. [28,29], it is natural to expect the leading behaviour (2.22) to come from operators with parametrically high dimension and high spin for any CFT, asymptotically approaching the GFT spectrum in this regime. It would be interesting to understand, within euclidean CFTs, where the twist does not play an obvious role, why this is so.

Remainder for CFTs with O(n) Symmetry The generalization of the OPE convergence estimate to CFTs with O(n) global symmetry is straightforward. For concreteness, let us consider scalars φ_i in the fundamental representation of O(n). The only non-trivial point is to identify a proper linear combination of 4-point functions that leads to a positive definite series expansion; otherwise the Hardy-Littlewood tauberian theorem does not apply. A possible choice is the combination A_η of 4-point functions given in eq. (2.25), where for simplicity we have omitted the x-dependence of the fields. The parameter η can in general take an arbitrary complex value, but it is enough for our purposes to consider η = ±1. For ρ̄ = ρ = r and any η, this correlator is manifestly positive definite, because it corresponds to the norm of the state φ₁|φ₁⟩ + η φ₂|φ₂⟩ (2.26). The leading term in a_η(u, v) for x₂ → x₃ is given by the exchange of the identity operator in the first two correlators and hence is independent of η. On the other hand, expanding in conformal blocks in the (12)-(34) channel, we have the decomposition (2.27) of A_η [19], where S and T denote operators in the singlet and rank-two symmetric representations of O(n), respectively; both sums run over even spins. We can now repeat essentially verbatim the derivation below eq. (2.6). For η = −1, this gives rise to the bound (2.28), where R is given in eq. (2.17). The factor 1/2 with respect to the non-symmetric case arises because the identity operator is exchanged in two correlators, while a factor 4 is present in the second term on the r.h.s. of eq. (2.27). For η = 1 we similarly get the bound (2.29). Another positive definite linear combination of correlators is B_η, eq. (2.30), corresponding to the norm of the state (2.31); again, we consider η = ±1. In the (12)-(34) channel, the correlator B_η can be written as in eq. (2.32), where A stands for operators in the rank-two antisymmetric representation of O(n); the first sum runs over even spins, whereas for the second the spins are odd. As before, the leading term in b_η(u, v) for x₂ → x₃ is given by the exchange of the identity operator in the first two correlators and is independent of η. For η = 1, eq. (2.32) gives rise to the same bound as in eq. (2.28), while for η = −1 we obtain the bound (2.33). It is straightforward to see that the bounds (2.28), (2.29) and (2.33) are the best that can be obtained. Indeed, in the free-theory limit one has λ_S² = λ²/n and λ_T² = λ_A² = λ²/2, with λ² being the OPE coefficients for a single free field (see e.g. eq.
(5.11) in ref. [20]). The above three bounds then reduce to eq. (2.16), which is known to give the best bound on the r.h.s. of eq. (2.12) [12]. Any potentially better bound for O(n) theories should in particular apply to the free theory, but would then be in contradiction with the results of ref. [12]. The above bounds will be used in the next section to bound the remainder of the bootstrap equations in CFTs with an O(n) global symmetry.

Bootstrapping with Multiple Points The bootstrap equation for a 4-point function of identical scalars φ with scaling dimension ∆φ in any number of dimensions is given by the sum rule (3.1) (see refs. [30,31] for pedagogical reviews). Splitting the sum into two parts, for dimensions smaller and larger than a cutoff ∆*, we can write the truncated version (3.2). Using eq. (2.16), the remainder E of the sum rule is bounded by E_max, eq. (3.3), where we have omitted the dependence on ∆*, ∆φ and γ. The truncated sum rule (3.2) still involves a generally unknown spectrum of operators up to dimension ∆*. In order to make it amenable to numerical analysis, we discretize the spectrum and make the ansatz (3.4) for the quantum numbers (spin, dimension) of the operators that can appear in the truncated sum rule. For each spin l, the dimension runs in steps of size ∆_step from the unitarity bound ∆^{d,l}_min ≡ l + (d − 2)/(1 + δ_{l0}) to the cutoff ∆* (or a value close to that, depending on ∆_step). Accordingly, l_max is the largest spin for which the unitarity bound is still below the cutoff, ∆^{d,l_max}_min < ∆*. In practice, we vary the step size ∆_step somewhat depending on the spin and dimension; this is discussed in more detail in sec. 5. We find that the bounds converge when going to smaller ∆_step, meaning that the discretization does not introduce any artifacts into our calculation. We similarly choose a finite number of points z_i in the z-plane where the sum rule is evaluated; the details of our choice for this distribution of points are discussed in sec. 3.1. Together with the discretization of operator dimensions, this turns eq. (3.2) into the matrix equation (3.5). The elements of the matrix M are the functions F_{∆φ,∆,l}(z, z̄) evaluated for the different quantum numbers in eq. (3.4) along the rows and for the different points z_i along the columns. Furthermore, the vector ρ consists of the squared OPE coefficients λ_O² of the operators corresponding to the quantum numbers in eq. (3.4). Using the bound (3.3), we then obtain the matrix inequality (3.7), in which the remainder vector is replaced componentwise by its bound E_max. This is the starting point for our numerical calculations. In order to determine bounds on OPE coefficients, we search for vectors ρ which satisfy eq. (3.7) and extremize the entry corresponding to that OPE coefficient. For bounds on the dimension of the lowest-lying scalar operator, on the other hand, we make an assumption on this dimension and drop all scalar operators with smaller dimension from our ansatz (3.4). This gap then allows for a consistent CFT only if there exists a vector ρ which satisfies eq. (3.7) with the reduced ansatz. By trying different assumptions, we can determine the maximal allowed gap. Both problems are linear programs which can be solved using fast numerical routines. An advantage of solving eq. (3.7) is that the vector ρ gives us the spectrum of operators and their OPE coefficients of a potential CFT living at the boundary of the allowed region; this has been used before in ref. [6]. We also consider CFTs with an O(n) global symmetry.
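Before turning to the O(n) case, the sketch below makes the linear-programming step concrete, setting up a toy version of the feasibility problem (3.7). It is only schematic: the conformal blocks are replaced by a placeholder function (computing the true g_{∆,l} requires the series representations discussed above), the sample points and the remainder bound E_max are arbitrary numbers, and scipy's LP solver stands in for CPLEX.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for the crossing functions F_{dphi,Delta,l}(z, zbar);
# NOT the true conformal blocks, just a positive placeholder.
def blocks(delta, ell, z):
    return np.real(z ** delta) * (1 + 0.1 * ell)

d, delta_star, delta_step = 3, 12.0, 0.25
# Discretized ansatz: for each even spin, dimensions from the unitarity
# bound l + (d-2)/(1 + delta_{l0}) up to the cutoff Delta*.
ops = [(dd, ll) for ll in range(0, 12, 2)
       for dd in np.arange(ll + (d - 2) / (1 + (ll == 0)), delta_star, delta_step)]
zs = np.array([0.5 + 0.02j * k for k in range(1, 41)])   # sample points z_i

M = np.array([[blocks(dd, ll, z) for (dd, ll) in ops] for z in zs])
E_max = 1e-3 * np.ones(len(zs))                          # remainder bound

# Feasibility LP: find rho >= 0 with |M rho - 1| <= E_max componentwise,
# i.e.  M rho <= 1 + E_max  and  -M rho <= -(1 - E_max).
A_ub = np.vstack([M, -M])
b_ub = np.concatenate([1 + E_max, -(1 - E_max)])
res = linprog(c=np.zeros(len(ops)), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(ops), method="highs")
print("feasible spectrum found:", res.status == 0)
```

Dimension bounds then amount to removing low-lying scalars from `ops` and checking when feasibility is lost.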
We also consider CFTs with an O(n) global symmetry. For an external scalar operator in the fundamental representation of O(n), the sum rule reads as in eq. (3.8) [19]; we have suppressed the arguments of the functions F and H. Splitting the sums in eq. (3.8) into two parts, for dimensions smaller and larger than a cutoff ∆_*, we can write the corresponding truncated sum rules. Using eqs. (2.28), (2.29) and (2.33), we obtain bounds on the remainders, with E_max defined as in eq. (3.3). Discretizing the space of operator dimensions as in eq. (3.4) and evaluating the sum rule at a finite set of points z_i, we again obtain a matrix inequality of the form (3.7). This is the starting point for our numerical calculations for CFTs with O(n) global symmetry.

Choice of Points

An important choice for the multipoint method is the distribution of points in the z-plane at which the bootstrap equations are evaluated. Using the symmetries z ↔ z̄ and z → 1 − z, z̄ → 1 − z̄ of the bootstrap equations, we can restrict these points to the region Re(z) ≥ 1/2 and Im(z) ≥ 0 of the z-plane. The remainder of the truncated sum rule is controlled by |ρ(z)| and |ρ(1 − z)| (cf. eqs. (2.18) and (3.3)). Guided by this, we introduce the measure λ(z) defined in eq. (3.11) and consider points with λ(z) ≤ λ_c for some constant λ_c. It is desirable to choose λ_c and the distribution of points within that region in such a way that the obtained bounds are as stringent as possible. We have performed extensive scans over different values of λ_c and over distributions with different density profiles, and have found that a flat profile leads to bounds as good as or better than those from more complicated profiles. We therefore choose the former and put points on a grid centered at z = 1/2. The grid spacing is chosen such that the desired number of points lies within the region λ(z) ≤ λ_c, Re(z) ≥ 1/2 and Im(z) ≥ 0. We have then found that

λ_c = 0.6 (3.12)

gives the best bounds for all cases that we have studied (see footnote 12). In fig. 2, we show the corresponding region in the z-plane and a sample distribution of 100 points. In order to test the influence of the choice of measure on the bounds, we have performed further scans with λ(z) ≡ max(|ρ(z)|, |ρ(1 − z)|), proposed in ref. [1], and with λ(z) ≡ |z − 1/2| (for the latter we have removed points at or close to the branch cuts). We have found that, once the optimal λ_c is chosen, the bounds obtained with these measures are indistinguishable from those obtained with eq. (3.11). This indicates that the precise form of the region within which points are sampled has only a marginal effect on the quality of the bounds.
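For concreteness, here is a small sketch of the point selection. The map ρ(z) = z/(1 + √(1 − z))^2 is the standard rho-coordinate; since eq. (3.11) itself is not reproduced here, the sketch uses λ(z) = max(|ρ(z)|, |ρ(1 − z)|), one of the variants quoted above, which the text reports gives indistinguishable bounds.

```python
# Sketch of the grid of evaluation points in the z-plane.
import numpy as np

def rho(z):
    return z / (1 + np.sqrt(1 - z)) ** 2

def lam(z):
    # assumed measure; the paper's eq. (3.11) is an equivalent alternative
    return max(abs(rho(z)), abs(rho(1 - z)))

def grid_points(n_points, lambda_c=0.6):
    """Flat grid centered at z = 1/2, restricted to Re(z) >= 1/2,
    Im(z) >= 0 and lambda(z) <= lambda_c. The spacing is shrunk until
    at least n_points sites fall inside the region."""
    spacing = 0.2
    while True:
        pts = [0.5 + spacing * i + 1j * spacing * j
               for i in range(0, 40) for j in range(0, 40)]
        pts = [z for z in pts if lam(z) <= lambda_c]
        if len(pts) >= n_points:
            return pts[:n_points]
        spacing *= 0.8

points = grid_points(100)
print(len(points), "points, e.g.", points[:3])
```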
Results

We now present the results of our numerical analysis. In subsection 4.1, we study bounds on the dimension of the lowest-dimensional scalar operator in the OPE and bounds on the central charge in 3D CFTs, focusing in particular on the regions where the 3D Ising and O(n) models have been identified. In subsection 4.2 we then study the same bounds for generic 4D CFTs. We analyze in particular how our results depend on the number N of points chosen in the z-plane and on the cutoff ∆_*. In subsection 4.3 we take a closer look at the spectrum of the 3D O(n) models and determine the operator dimensions of the first two scalar operators in the singlet and rank-two symmetric representations of O(n). Before presenting our results, it is important to emphasize a key difference between the multipoint and the derivative bootstrap methods. As mentioned in the introduction, in the latter we do not have a reliable way of truncating the OPE series defining the bootstrap equations at some intermediate dimension ∆_*, because we do not have a reliable estimate of the resulting error. We are therefore forced to take ∆_* as large as possible to minimize this error, and can only check a posteriori whether the chosen ∆_* was sufficient (see footnote 13). More than ∆_* (or its analogue), the key parameter that controls the accuracy of the method is the total number of derivatives N_D that are applied to the bootstrap equations. Of course, the larger N_D is, the better are the bounds. The accuracy is then limited by the largest N_D that allows the calculation to be performed within an acceptable amount of time with the available computing resources. In the multipoint method, on the other hand, we can reliably vary ∆_* thanks to the bound on the remainder of the truncation discussed in sec. 2. In addition, we can also vary the number N of points in the z-plane, which is the analogue of N_D in the derivative method. The parameter region for the multipoint method corresponding to the typical bootstrap analysis with the derivative method is then very large ∆_* and N as large as possible given the available computing resources. In this paper, on the other hand, we are mostly interested in the regime where ∆_* is not very large, with values of O(10)-O(20). We find that for this range of ∆_* the results converge for N ∼ O(100) and do not improve further if N is increased. This corresponds to the fact that the rank of the matrix M in the discretized bootstrap equation (3.5) is then O(100). Note that since CPLEX is limited to double precision, we also cannot take ∆_* arbitrarily large. Due to the excellent speed of CPLEX, on the other hand, we have found that taking N large enough for the bounds to converge is not a limiting factor.

Footnote 12: In more detail, we have considered bounds on the central charge and on the dimension of the lowest-dimensional scalar operator, in 3D and 4D, with O(n) and without symmetry, and with different choices for the number of points N and the cutoff ∆_*. It is remarkable that λ_c = 0.6 (within ±0.02, the resolution of our scan) comes out as the optimal choice for such a variety of cases.

Footnote 13: We are a bit sloppy here in order to keep the discussion simple and get to the point. For instance, in numerical methods based on semi-definite programming one is able to include all operator dimensions continuously up to infinity. The rough analogue of our ∆_* in that case is the maximum spin of the primary operators entering the OPE which are taken into account in the numerical implementation.

3D Ising and O(n) Models

The most remarkable numerical results from the conformal bootstrap have been obtained in 3D CFTs. One interesting bound to study is on the dimension of the lowest-dimensional scalar operator appearing in the OPE. We denote this operator by ε, and the operator that is used to derive the bootstrap equations by σ. It was noted in ref. [5] that the 3D Ising model sits at a special point, a kink, at the boundary of the allowed region of ∆_ε as a function of ∆_σ. The Ising model is similarly special with respect to the bound on the central charge c as a function of ∆_σ, sitting again at the boundary of the excluded region, at the point where c is minimized [5,6]. Note, however, that the theory minimizing c does not actually correspond to the 3D Ising model, but rather to some exotic theory with ∆_ε < 1.
Most likely this theory is unphysical (though we are not aware of a solid argument to dismiss it). In practice this theory is removed by assuming a gap in the operator spectrum such that ∆_ε > 1. Independently of the nature of this theory, the condition ∆_ε > 1 is satisfied by the Ising model and can be legitimately imposed if we are interested in this particular 3D CFT. In fig. 3, we show the bound on ∆_ε as a function of ∆_σ for N = 100 points and different values of ∆_*. Notice how the kink shows up already for ∆_* = 13 and converges quite quickly as ∆_* increases. In fig. 4, we show the bound on the central charge c (normalized to the central charge c_free of a free scalar theory) as a function of ∆_σ for N = 100 points and different values of ∆_*. The gap ∆_ε > 1.1 is assumed in the operator spectrum. A lower bound on c is obtained even for ∆_* = 10, but the convergence when going to larger ∆_* is now much slower than for the bound on ∆_ε. A minimum is visible starting from ∆_* = 16, but even at ∆_* = 22 it is somewhat shifted to the right with respect to its actual location; we have still not reached the asymptotic value in ∆_*. Unfortunately, we cannot get reliable results for much higher ∆_* because the numerical accuracy of CPLEX is limited to double precision. Nevertheless, it is clear from comparing figs. 3 and 4 that the lower bound on c is more "UV sensitive" than the bound on ∆_ε. In both figures, the crosses mark the location of the 3D Ising model, as determined in ref. [6]. In order to quantify the dependence of our results on the number N of points, we show in figs. 5 and 6 the bounds on ∆_ε and c, respectively, as a function of ∆_σ for different values of N at fixed ∆_* = 16. We see that in both cases the convergence in N is quite fast, with N = 40 for ∆_ε and N = 60 for c already being excellent approximations. Notice that for increasing N the bound on ∆_ε converges faster than the bound on c, similar to the dependence on ∆_*. We have studied the dependence on N also for different values of ∆_* and have found, as expected, that the value N_* beyond which no significant improvement in the bounds is observed increases with ∆_*. The dependence is however very mild for the central charge c and barely observable for ∆_ε. This is again a reflection of the different "UV sensitivities" of the two quantities. In all cases, N_* ≲ O(100) up to ∆_* = 24. Let us now turn to 3D CFTs with O(n) global symmetry. We consider a primary operator φ in the fundamental representation and denote the lowest-dimensional scalar singlet operator in the φ × φ OPE by S. It was found in refs. [14,16] that these CFTs have kinks in the bound on ∆_S as a function of ∆_φ, similar to that found for the Ising model. Moreover, the kinks coincide, for all values of n that have been studied, with the values of ∆_φ and ∆_S associated with the 3D O(n) models. On the other hand, a minimum in c no longer occurs for generic O(n) models, and the lower bound on c instead monotonically decreases for n > 3 (see ref. [14] for details). In figs. 7 and 8, we show the bounds on ∆_S and c, respectively (the latter normalized to the central charge n·c_free of n free scalars), as a function of ∆_φ for different O(n) symmetries, at fixed N = 80 and ∆_* = 16. For the central charge, gaps ∆_S > 1 and ∆_T > 1 in the spectrum of singlet operators S and rank-two symmetric-traceless operators T, respectively, are assumed as in ref. [14]. This assumption is satisfied by the O(n) models and leads to more stringent bounds.
The dashed line corresponds to the leading large-n prediction. All the qualitative behaviours found in ref. [14] are reproduced, though with milder bounds, as expected. In particular, the kinks in the (∆_φ, ∆_S) plane are not well visible at ∆_* = 16. In figs. 9 and 10, we show the same bounds on ∆_S and c as a function of ∆_φ at fixed N and n, for different values of ∆_*. We see the same qualitative behaviour regarding the "UV sensitivities" found for 3D CFTs with no global symmetry (the Ising model). In particular, in fig. 9 we see how the kink in the bound becomes well visible at ∆_* = 18 and does not significantly improve at ∆_* = 20. Its location is in very good agreement with that found in ref. [14]. On the other hand, the central-charge bound in fig. 10 is still monotonically decreasing at ∆_* = 18, and a minimum appears only at ∆_* = 20. There are no signs of convergence when comparing the bounds at ∆_* = 18 and 20, indicating the need to go to larger ∆_* to approach the optimal bound.

4D CFTs

All the above considerations can be repeated for 4D CFTs. There are no known non-supersymmetric CFTs at benchmark points, but it is still interesting to study general bounds on operator dimensions and OPE coefficients. See e.g. refs. [4,10,17-22,33], where bounds of this kind (and others) have been determined with the derivative method using both linear and semidefinite programming. In figs. 11 and 12, we show bounds respectively on the dimension ∆_{φ^2} of the lowest-dimensional scalar operator in the φ × φ OPE and on the central charge, as functions of ∆_φ. The analysis of 4D CFTs with O(n) global symmetry also closely resembles its 3D counterpart. We again take the external field φ to transform in the fundamental representation of O(n) and denote by S the lowest-dimensional singlet scalar operator that appears in the φ × φ OPE. For illustration, we report in fig. 13 the bound on ∆_S as a function of ∆_φ for CFTs with O(4) symmetry, at fixed N and for different values of ∆_*. By comparing figs. 11 and 13 we notice that the convergence in ∆_* of the operator-dimension bound in 4D CFTs with O(4) symmetry is slower than for its analogue with no global symmetry.

A Closer Look at the Spectrum of 3D O(n) Models

In the previous subsections, we have shown how previously determined bounds are reproduced using the multipoint method. Here we present some new results for the spectrum of the O(n) models. To this end we assume, as previous analyses indicate, that the 3D O(n) models sit precisely at the kink on the boundary of the excluded region in the (∆_φ, ∆_S) plane (∆_S-maximization). The vector ρ that we obtain from solving the linear program (3.7) then gives us the spectrum and OPE coefficients of the operators that are exchanged in the φφφφ correlator of the O(n) models. Here we report the scaling dimensions of the first two operators in the singlet and rank-two symmetric representations of O(n), respectively S, S′ and T, T′, for n = 2, 3, 4. Scalar operators with larger scaling dimensions are physically uninteresting, whereas S′ and T′ are important in determining the stability of the fixed points of the O(n) models (being marginal operators in the underlying UV 4D Landau-Ginzburg theory) [24] (see footnote 15).

Footnote 15: Actually, one additional operator should be considered, denoted as P_{4,4} in ref. [24], but it transforms in the rank-four representation of O(n) and hence cannot appear in the OPE of two scalar operators φ in the fundamental representation. Its dimension might be bounded (or computed) by considering a correlator involving, e.g., four T's.
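A sketch of this extraction step, continuing the hypothetical names (ansatz, res) of the earlier toy linear-program sketch: nonzero entries of ρ are read off as exchanged operators, and near-duplicate dimensions of the same spin are merged by an OPE-coefficient-weighted average, as done later in the text for the O(n)-model spectra. The threshold and merging window are illustrative choices.

```python
# Reading off a boundary-solution spectrum from the toy LP sketch above.
import numpy as np

def extract_spectrum(rho, ansatz, threshold=1e-8):
    return [(l, D, c) for (l, D), c in zip(ansatz, rho) if c > threshold]

def merge_duplicates(spectrum, window=0.05):
    out, by_spin = [], {}
    for l, D, c in sorted(spectrum):
        by_spin.setdefault(l, []).append((D, c))
    for l, ops in by_spin.items():
        group = [ops[0]]
        for D, c in ops[1:]:
            if D - group[-1][0] < window:       # same operator, nearby grid points
                group.append((D, c))
            else:
                Ds, cs = zip(*group)
                out.append((l, float(np.average(Ds, weights=cs)), sum(cs)))
                group = [(D, c)]
        Ds, cs = zip(*group)
        out.append((l, float(np.average(Ds, weights=cs)), sum(cs)))
    return out

if res.success:
    for l, D, c in sorted(merge_duplicates(extract_spectrum(res.x, ansatz)))[:10]:
        print(f"spin {l}: Delta = {D:.4f}, lambda^2 = {c:.3e}")
```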
As far as we know, the scaling dimensions of S′ and T′ have not previously been determined using the conformal bootstrap. The best determinations of these parameters have been made using a five-loop computation in the ε-expansion in refs. [23] and [24] (see footnote 16). In table 1, we report the values of ∆_φ, ∆_S, ∆_{S′}, ∆_T and ∆_{T′} determined in the literature, for n = 2, 3, 4. They should be compared with the values in table 2, which have been determined in this paper as follows. We take the values of ∆_φ for the O(n) models with n = 2, 3, 4 calculated in refs. [34-36] as input and determine the scaling dimensions ∆_S, ∆_{S′}, ∆_T and ∆_{T′} using ∆_S-maximization. We repeat this procedure for the lower, central and upper value of ∆_φ given in these references and for different values of the cutoff ∆_* ∈ [18, 23] and of the number of points N ∈ [60, 120] (see footnote 17). At fixed N and ∆_*, we then take the average over the scaling dimensions obtained with the different input values of ∆_φ. Sometimes the same operator appears twice in the spectrum, at two different but close values of the scaling dimension. In this case we take the average of these values, weighted by the size of the corresponding OPE coefficient. Let us denote the resulting scaling dimensions by ∆_O(N, ∆_*) for O = S, S′, T, T′. Each of these values is associated with an error, resulting from the averaging. The stepsize ∆_step of our discretization has been set to 10^{-4} in the region where the operators were expected to be found (the resulting uncertainty in the scaling dimensions is typically negligible compared to the other errors).

Footnote 16: …Monte Carlo simulations. On the other hand, since ∆_{T′} has been determined only using the ε-expansion, we have decided to omit the other results for ∆_{S′}. The interested reader can find them, e.g., in table I of ref. [24], where the coefficients y_{4,0} and y_{4,2} give ∆_{S′} = 3 − y_{4,0} and ∆_{T′} = 3 − y_{4,2}. For completeness, we also report the relations defining ∆_S and ∆_T in the notation of ref. [24]: ∆_S = 3 − 1/ν, ∆_T = 3 − y_{2,2}.

Footnote 17: Our numerical precision does not allow us to take higher values of ∆_* and N without running into issues with numerical stability.

We have then extrapolated to N = ∞ using a linear fit in 1/N, which seems to describe well the behaviour of ∆_O(N) as a function of 1/N. An example of this extrapolation fit is shown in fig. 15. We denote the resulting scaling dimensions as ∆_O ≡ ∆_O(∞) (see footnote 18).

Footnote 18: We do not have an analytic understanding of why the results should scale as 1/N for parametrically large ∆_*. We simply take it as a working hypothesis. We expect that possible deviations from the linear behaviour should be contained within the errors of our determination (cf. fig. 15).

Note that having N as large as possible is clearly important for high precision. However, at fixed ∆_* the bounds saturate for sufficiently high N, and there is no gain in taking N larger. We have noticed that, at least for n = 2, 3, 4, ∆_O(N, ∆_*) decreases as N and/or ∆_* increase (this is obvious for S, but not for the other operators). If we assume that this is true for any N and ∆_*, we may then set rigorous upper bounds without using any fit extrapolation. These bounds are reported in table 3. Comparing them with the results in table 2 gives an idea of the impact of the fit extrapolation on the final results. As can be seen, all the scaling dimensions that we have determined are compatible with previous results in the literature. The only exception is ∆_{T′} for the O(2) model, for which our result has an approximate 3σ tension with that of ref. [24].
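The extrapolation just described amounts to a one-parameter linear fit in 1/N; a minimal sketch (the data points below are invented placeholders, not the paper's measurements):

```python
# N -> infinity extrapolation: fit Delta_O(N) linearly in 1/N and
# read off the intercept.
import numpy as np

N_values = np.array([60.0, 80.0, 100.0, 120.0])
delta_O = np.array([1.5200, 1.5185, 1.5176, 1.5170])   # fake Delta_O(N)

slope, intercept = np.polyfit(1.0 / N_values, delta_O, deg=1)
print("Delta_O(N -> infinity) ~", intercept)           # extrapolated value
```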
Our accuracy in the determinations of ∆_S and ∆_T is comparable with that achieved in ref. [14], though it should be emphasized that the results there do not rely on extrapolations. Furthermore, our accuracy in the determinations of ∆_{S′} and ∆_{T′} is comparable with that achieved using the five-loop ε-expansion. This is an indication that a slightly more refined bootstrap analysis should be able to improve the determinations of these scaling dimensions. As we mentioned at the beginning of this subsection, ∆_S-maximization also allows us to determine the OPE coefficients λ_{φφO}. We have not performed a detailed analysis with fit extrapolations as above to determine the asymptotic values of λ_{φφO} as ∆_*, N → ∞. Instead we just report λ_{φφS} as determined with the highest values ∆_* = 22, 23 and N = 110, 120 used in our scans. We have not determined the error associated with these results and have instead rounded them to the last shown digit. The results for O(2) and O(3) are in agreement with the recent determination in ref. [7], whereas the result for O(4) is new as far as we know.

Details of the Implementation

For the conformal blocks in d = 4 dimensions, we use the closed-form expression from ref. [8], normalized as in ref. [19]. For d = 3 dimensions, on the other hand, we use the recursion relation for the conformal blocks found in ref. [14] (see footnote 19). To this end, we iterate the recursion relation up to some cutoff ∆_rec. We choose this cutoff large enough that the resulting error in the conformal blocks is smaller than the error from neglecting contributions of operators with dimensions larger than the truncation cutoff ∆_*. In practice, we find that ∆_rec = ∆_* + a few is sufficient to ensure this. For the ansatz (3.4) of discretized operator dimensions, we closely follow ref. [5]. We generate the discrete spectra T1 to T4 (the latter only for sufficiently large ∆_*) of their table 2, where we rescale the stepsizes δ by the factor ∆_step/(2 · 10^{-5}). We then remove duplicates from the combined spectrum and restrict to operator dimensions less than or equal to ∆_*. We have performed extensive scans using different stepsizes ∆_step and have found that the bounds converge for sufficiently small ∆_step. This is in particular satisfied for ∆_step = 2 · 10^{-3}, which we choose for all the plots in this paper. For the determination of the spectra in sec. 4.3, we add additional operators with stepsize ∆_step = 10^{-4} around the previously determined scaling dimensions of the operators S, S′, T and T′ in the O(n) models. Furthermore, for bounds on operator dimensions for which the plots extend to ∆_{φ^2} > 3 (the largest dimension of T1 of ref. [5]), we have included additional operators in the scalar sector so that the smallest stepsize ∆_step is used up to the largest bound on ∆_{φ^2} shown in that plot. We have also performed scans using different parametrizations of the ansatz (3.4) and have found that for sufficiently small ∆_step the bounds become indistinguishable from those obtained with the ansatz discussed above. This gives us confidence that the discretization does not introduce any artifacts into our calculations. We use Mathematica to evaluate the conformal blocks for the different operators that appear in the ansatz (3.4) and for the set of points in the z-plane. The linear program (3.7) is then set up by a program written in Python and is subsequently solved with the optimizer CPLEX by IBM, using the primal simplex algorithm.
Since this optimizer is limited to double precision, it is important to reduce the spread in size of the numerical values in the problem. To this end, note that we can rescale each row of the inequality (3.7) separately by a positive number. Denoting a given row by R, we rescale its elements by a suitably chosen positive factor. Similarly, we can rescale each column of the matrix M separately by a positive number if we redefine the corresponding (squared) OPE coefficient in the vector ρ. We again choose an analogous factor for each column, and rescale ρ correspondingly. This procedure is iterated three times in our Python code, using precision arithmetic with 120 digits to ensure that no significant rounding errors are introduced in the process (the conformal blocks have been calculated with the same precision). Since we perform our own rescaling, we switch off this option in CPLEX. We find that the above rescaling typically reduces the orders of magnitude in the ratio between the largest and smallest numerical value in eq. (3.7) by about half. Nevertheless, precision is a limiting factor and does not allow us to go to cutoffs ∆_* much larger than 20. The fact that double precision is sufficient for smaller cutoffs, on the other hand, makes our calculations (combined with the excellent speed of CPLEX) very fast.

Footnote 19: Alternatively, we can use the recursion relation also in d = 4 dimensions by setting d = 4 + ε (to avoid the double poles that appear at d = 4). However, Mathematica evaluates the closed-form expression faster than (our implementation of) the recursion relation, and we therefore choose the former.
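Since the rescaling factors are given by equations not reproduced above, the sketch below uses one common equilibration choice: dividing each row and column by the geometric mean of its largest and smallest nonzero absolute entries. This is an assumption for illustration, not necessarily the paper's exact formula, and it runs in plain double precision, whereas the paper iterates the rescaling with 120-digit arithmetic.

```python
# Illustrative row/column equilibration to shrink the dynamic range of M.
import numpy as np

def equilibrate(M, n_iter=3):
    M = M.astype(float).copy()
    row_scale = np.ones(M.shape[0])   # tracks the r.h.s. rescaling
    col_scale = np.ones(M.shape[1])   # absorbed into the OPE coefficients rho
    for _ in range(n_iter):
        for i, row in enumerate(M):
            a = np.abs(row[row != 0])
            if a.size:
                s = np.sqrt(a.max() * a.min())
                M[i] /= s
                row_scale[i] *= s
        for j in range(M.shape[1]):
            a = np.abs(M[M[:, j] != 0, j])
            if a.size:
                s = np.sqrt(a.max() * a.min())
                M[:, j] /= s
                col_scale[j] *= s
    return M, row_scale, col_scale

M = np.array([[1e8, 2.0], [3e-6, 4e2]])
M_eq, rs, cs = equilibrate(M)
print(np.abs(M_eq).max() / np.abs(M_eq).min())   # much smaller spread
```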
Conclusions

We have implemented the method proposed in ref. [1] to numerically study the bootstrap equations away from the symmetric point z = z̄ = 1/2. Using this method, we have qualitatively reproduced various results that have been determined in the bootstrap literature using the more common method of taking derivatives at the symmetric point. The main aim of our work was to show that bootstrapping with multiple points works and is a valid alternative to the standard derivative method. In particular, it can be useful at a preliminary stage, when one wants to qualitatively bound or approximately compute some quantities using the bootstrap. By choosing a sufficiently low cutoff ∆_*, one can get qualitatively good results within seconds of CPU time on a standard laptop! Since the optimizer CPLEX that we use is limited to double precision, we cannot achieve the high precision of refined bootstrap codes such as JuliBootS [9] or SDPB [11]. Nevertheless, we have shown how, using ∆-maximization, relatively precise results can be obtained for the scaling dimensions of operators (though we relied on an extrapolation procedure). In particular, for the O(n) models with n = 2, 3, 4 we have determined the scaling dimensions of the second-lowest-dimensional operators S′ and T′ in the singlet and symmetric-traceless representations, respectively. To our knowledge, these have not been determined before using bootstrap techniques. We believe that it should not be difficult to go to arbitrary precision and get rid of the discretization (and of the extrapolation procedure) by, for instance, adapting the algorithm developed in refs. [6,9] to multiple points. We do not exclude that bootstrapping with multiple points might then turn out to be comparable to (or better than) the derivative method for high-precision computations. From a conceptual point of view, the multipoint method is more rigorous, since the crossing equations are not truncated but bounded by an error. We have also discussed how the multipoint method is useful in understanding to what extent a given numerical result depends sensitively on the high-dimensional operators. In particular, we have noticed that bounds on operator dimensions are less sensitive in this respect than bounds on the central charge. Ideally, one might want to push the multipoint method to the extreme "IR limit", by choosing a cutoff ∆_* so low that an analytic approach may become possible. This is certainly a very interesting direction that should be explored. Among other things, it requires improving on the estimate of the OPE convergence given in ref. [12], which applies in the opposite regime of parametrically large ∆_*. Perhaps the results of ref. [25] might be useful in this respect. An important line of development in the numerical bootstrap is the analysis of mixed correlators, which so far are numerically accessible only using semi-definite programming [15]. It would be very interesting to implement mixed correlators in the multipoint bootstrap, either by adapting the semi-definite programming techniques or by extending the linear programming techniques.
Valdecoxib Protects against Cell Apoptosis Induced by Endoplasmic Reticulum Stress via the Inhibition of PERK-ATF4-CHOP Pathway in Experimental Glaucoma

The purpose of this study was to investigate the effects of valdecoxib on the retina in retinal ischemia-reperfusion injury (IRI) and on R28 cells following oxygen-glucose deprivation/recovery (OGD/R) injury, as well as the underlying mechanisms. Immunofluorescence and Cell Counting Kit-8 (CCK-8) analyses were used to identify the proper timepoint and concentration for valdecoxib's protective effect on R28 cells in the OGD/R model. Hematoxylin-eosin (HE) staining and immunofluorescence were used to explore valdecoxib's effect on the retina and retinal ganglion cells (RGCs) in IRI. Cell apoptosis was determined by a TUNEL Apoptosis Detection Kit and Annexin V-FITC/PI flow cytometry. The expression levels of p-PERK, activating transcription factor 4 (ATF4), GRP78, CHOP, cleaved caspase 3, bax and bcl-2 were measured by Western blot analyses. Valdecoxib protected the R28 cells from OGD/R injury by decreasing the cell apoptosis rate, and it exerted a protective effect on retinas in I/R injury by inhibiting RGC apoptosis. Valdecoxib pretreatment reversed the expression of p-PERK, ATF4, CHOP, GRP78, cleaved caspase 3 and bax induced by the glaucomatous model. Meanwhile, CCT020312 reversed valdecoxib's anti-apoptosis effect by activating PERK-ATF4-CHOP pathway-mediated endoplasmic reticulum (ER) stress. These findings suggest that valdecoxib protects against glaucomatous injury by inhibiting ER stress-induced apoptosis via the inhibition of the PERK-ATF4-CHOP pathway.

Introduction

Glaucoma is the leading cause of irreversible blindness worldwide, characterized by the progressive loss of the visual field and retinal ganglion cells, as well as optic nerve damage [1]. An estimated 57.5 million people worldwide are affected by primary open angle glaucoma (POAG), with a global prevalence of 2.2% [2]. Tham et al. predict an increase in the number of people aged 40-80 years who have glaucoma from 76 million in 2020 to 111.8 million by 2040 [3]. Elevated intraocular pressure (IOP), a significant risk factor for glaucoma, has been targeted by pharmacological and surgical therapies to slow glaucoma progression. However, a proportion of glaucoma patients still progress to blindness in spite of these IOP-controlling treatments. Therefore, it is of great significance to develop more potential novel approaches for glaucoma treatment. The pathogenesis of glaucoma is complicated and still under investigation, but evidence from both in vivo and in vitro models has suggested that ischemia-reperfusion injury, oxidative stress, inflammation, glutamate excitotoxicity, impaired microcirculation and dysfunctional immune responses may be involved in its onset [4-8]. Previous researchers have shown that, in both chronic and acute glaucoma models, an increase in endoplasmic reticulum (ER) stress proteins in the retinal ganglion cells (RGCs) was observed [9,10]. The ER is affected by environmental changes, such as cellular stresses, and controls cell function and survival [11]. Different pathological and physiological conditions, such as nutrient scarcity, changes in redox status and viral infection, can influence the ER's ability to facilitate protein folding, potentially resulting in unfolded or misfolded protein accumulation in the ER lumen and, consequently, increased ER stress.
Several studies have suggested that ER stress is associated with neuronal cell death in neurodegenerative diseases such as glaucoma [9,12,13]. When medicines that block ER stress are used, the damage can be reduced and the survival rate of RGCs improved [14]. CHOP knockout (KO) mice, a common ER stress blockage model, demonstrated a 24% increase in RGC survival after two weeks of optic nerve (ON) axotomy [15]. These studies demonstrated that ER stress plays an important role in the pathological process of glaucoma, and finding new therapies that target ER stress has been proposed as an effective route for glaucoma treatment. Valdecoxib is a selective COX-2 inhibitor which has been widely used in clinical practice for the treatment of osteoarthritis (OA) of the knees and hips [16-18], rheumatoid arthritis [18], analgesia in dysmenorrhea [19], and postoperative analgesia after hip arthroplasty [20] and orthopedic foot and oral surgery [21,22]. Kim et al. found that valdecoxib improves lipid-induced skeletal muscle insulin resistance via the simultaneous suppression of inflammation and endoplasmic reticulum stress, suggesting that valdecoxib is relevant to ER stress under certain conditions [23]. However, whether valdecoxib has an effect on ER stress in glaucomatous injury has not been studied. In this study, we investigated valdecoxib's effects on glaucomatous damage using an I/R rat model and an OGD/R cell model to elucidate the underlying molecular mechanisms.

Valdecoxib Protects R28 from OGD/R Injury by Inhibiting Apoptosis In Vitro

As a first step to explore OGD/R-mediated cell death, we detected the cell death rate at multiple time points in the OGD/R model using PI staining. PI-positive cells were identified as dead cells, and their proportions at the various time points were calculated. The highest PI-positive cell rate was observed at 2 h after OGD/R (Figure 1A,B). Next, to test whether valdecoxib could protect R28 cells from OGD/R-mediated cell death, a CCK-8 assay was performed to assess valdecoxib's effects on the R28 cells in the OGD/R model at different concentrations at 2 h post-OGD/R. The valdecoxib treatment significantly elevated the cell survival rate at the concentrations of 1 and 5 µmol/L (Figure 1C). The former was chosen for assessing valdecoxib's protective effect on the R28 cells in the OGD/R model in the subsequent experiments. No significant cell death was observed in control groups pretreated with different concentrations of valdecoxib (Supplementary Figure S1). PI staining was further used to confirm valdecoxib's protective effect (Figure 1D). To identify whether valdecoxib's effect involves an anti-apoptosis mechanism, flow cytometry was performed; the analysis indicated both that OGD/R induced cell apoptosis and that valdecoxib decreased the cell apoptosis rate (Figure 1E-H). Based on these results, we conclude that valdecoxib protects R28 cells from OGD/R injury by inhibiting apoptosis.

Valdecoxib Protects the Retina from Ischemia-Reperfusion Injury (IRI) by Inhibiting Apoptosis In Vivo

Eight-week-old SD rats were sacrificed and their eyeballs were removed at 1, 3 or 7 days post-IRI. HE staining was performed to detect the morphological changes in the retina. We found that, compared to the control group, the retinas from 3 and 7 days post-IRI were markedly thinner. Additionally, loss of RGCs or their disordered arrangement was observed at 3 and 7 days after IRI when compared with the control retina (Figure 2A,B).
We next determined valdecoxib's effect on the RGCs in the I/R model by HE staining and immunofluorescence. Valdecoxib significantly increased the retinal thickness and the RGC survival rate at 3 days post-injury (Figure 2C). A TUNEL assay was performed to clarify whether valdecoxib's protective effect involves an anti-apoptosis mechanism. RBPMS and TUNEL were used to label the RGCs and the apoptotic cells of the retina, respectively. We demonstrated that the TUNEL-positive RGCs increased after I/R and were reduced after valdecoxib treatment compared to the I/R group (Figure 2D). The above results indicate that valdecoxib can attenuate IRI-induced RGC loss and retinal injury by inhibiting apoptosis.

Figure 1 caption (fragment): The untreated control group is assigned a survival rate of 100%. Data are presented as the mean ± SD of three independent experiments. **** p < 0.0001, ** p < 0.01 vs. control group in Figure 1B. *** p < 0.001 vs. control group, ## p < 0.01, ### p < 0.001 vs. OGD/R group in Figure 1C. ** p < 0.01 vs. control group, # p < 0.05 vs. OGD/R group in Figure 1H.

Figure 2 caption (fragment): (A) Representative images of vertical sections obtained from retinas in the control, 1d-I/R, 3ds-I/R and 7ds-I/R groups, stained with hematoxylin (blue) and eosin (red). Scale bar = 50 µm. (B) Quantification of the mean total thickness of the retina in the control and 1-, 3- and 7-day post-I/R groups. The retinas of the 3ds-I/R and 7ds-I/R groups were significantly thinner compared to those of the control group. (C) Representative images of vertical sections obtained from retinas in the control, 3ds-I/R, 3ds-I/R+DMSO and 3ds-I/R+VAL groups, stained with hematoxylin (blue) and eosin (red). Scale bar = 50 µm. (D) Images obtained from retinas in the control, I/R, I/R+DMSO and I/R+VAL groups, stained with DAPI (blue), RBPMS (green) and TUNEL (red). Scale bar = 25 µm. RBPMS was used to label RGCs, and TUNEL was used to label apoptotic cells. The data are presented as the mean ± SD of three independent experiments. Each group was composed of five rats. ** p < 0.01, * p < 0.05 vs. sham group.

Valdecoxib Inhibits R28 Apoptosis by Alleviating PERK-ATF4-CHOP Pathway-Mediated ER Stress

Previous studies showed that OGD/R induces ER stress and increases activating transcription factor 4 (ATF4) and CHOP protein levels.
We further examined whether the protein kinase RNA-like endoplasmic reticulum kinase (PERK)-ATF4-CHOP pathway was inhibited in the OGD/R model after valdecoxib pretreatment. Western blotting (Figure 3A) and its densitometric analyses (Figure 3B-H) demonstrated that the GRP78, p-PERK, CHOP and ATF4 protein levels were markedly upregulated in the OGD/R and OGD/R+DMSO groups compared to the control group. Valdecoxib decreased the expression of these proteins during OGD/R injury. In addition, compared to the control group, the OGD/R and OGD/R+DMSO groups showed significantly elevated levels of the apoptosis-related proteins bax and cleaved caspase 3, while these proteins were decreased in the valdecoxib pretreatment group (Figure 3A). The expression of the anti-apoptosis protein bcl-2 was the opposite of that of bax and cleaved caspase 3 in each group. The expression levels of the pro-apoptosis proteins bax and cleaved caspase 3 were upregulated along with the activation of the PERK-ATF4-CHOP pathway, and decreased together with its inhibition. These data demonstrate that valdecoxib may inhibit R28 cell apoptosis by alleviating PERK-ATF4-CHOP pathway-mediated ER stress.

Figure 3 caption (fragment): Quantification of the expression levels of p-PERK, ATF4, GRP78, CHOP, cleaved caspase 3, bax and bcl-2 in the control, OGD/R, OGD/R+DMSO and OGD/R+VAL groups using the densitometric analyses of Western blotting.
The bar charts show the quantitative data (normalized by β-tubulin) for each protein relative to the control group (assigned a value of 1). Data are represented as the mean ± SD of three independent experiments. One-way ANOVA is used in B to H. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. control group. # p < 0.05, ## p < 0.01, ### p < 0.001 vs. OGD/R group. $ p < 0.05, $$ p < 0.01, $$$ p < 0.001 vs. OGD/R+DMSO group.

Valdecoxib Protects the Retina from Ischemia-Reperfusion Injury (IRI)-Mediated Apoptosis by Alleviating PERK-ATF4-CHOP Pathway-Mediated ER Stress

After valdecoxib's effect on the PERK-ATF4-CHOP pathway and on cell apoptosis in the OGD/R model was demonstrated, we examined whether valdecoxib exerts a protective effect on the retina in IRI through similar mechanisms. Retina lysates were subjected to Western blot analysis, which demonstrated that the expression levels of p-PERK, ATF4 and CHOP increased in the I/R and I/R+DMSO groups compared to the control retina, while these proteins were reduced in the retinas pretreated with valdecoxib during
the I/R. The expression of the ER stress protein GRP78 was consistent with that of the PERK-ATF4-CHOP pathway proteins (Figure 4A-H). The expression levels of the apoptosis-related proteins were examined in the control, I/R, I/R+DMSO and I/R+valdecoxib groups. As shown in Figure 4A, Western blotting revealed an increased expression of bax and cleaved caspase 3 in the retinas of the I/R and I/R+DMSO groups compared to the control. Valdecoxib decreased the expression of these proteins in the retina during I/R injury. The expression of the anti-apoptosis protein bcl-2 was the opposite of that of bax and cleaved caspase 3 in each group (Figure 4A-H). The expression levels of p-PERK, cleaved caspase 3 and GRP78 in RGCs were evaluated by immunofluorescence. The results obtained in the immunofluorescence analyses were in accordance with the corresponding Western blot results (Figure 4I-K). These results support the conclusion that valdecoxib protects the retina from IRI-mediated apoptosis by alleviating PERK-ATF4-CHOP pathway-mediated ER stress.

Figure 4 caption (fragment): Immunostaining was executed using a primary antibody against cleaved caspase 3 (green), and the nucleus (blue) is marked by DAPI. (K) Representative fluorescence images of GRP78 staining are shown (scale bar = 50 µm). Immunostaining was executed using a primary antibody against GRP78 (green), and the nucleus (blue) is marked by DAPI. The images demonstrate the increased expression levels of p-PERK, cleaved caspase 3 and GRP78 in the I/R and I/R+DMSO groups, and the decreased expression levels of those proteins in the I/R+VAL group, in the retinal ganglion cell layer. Data are represented as the mean ± SD of three independent experiments. Each group was composed of five rats. One-way ANOVA is used in B to H. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. sham group. ## p < 0.01, ### p < 0.001 vs. I/R group. $$ p < 0.01, $$$ p < 0.001 vs. I/R+DMSO group.

CCT020312 Reverses Valdecoxib's Anti-Apoptosis Effect by Activating PERK-ATF4-CHOP Pathway-Mediated ER Stress In Vitro

Recent studies have identified that CCT020312, a selective activator of PERK, can activate the PERK-ATF4-CHOP signaling pathway [24]. We examined whether activating the PERK-ATF4-CHOP pathway using CCT020312 can reverse the anti-apoptosis effect of valdecoxib in the OGD/R model. The R28 cells were pretreated with different concentrations of CCT020312, and no significant cell death was observed. CCT020312 activated p-PERK at the concentrations of 3 and 5 µmol/L (Supplementary Figure S2). The latter was chosen for assessing CCT020312's effect on the R28 cells in the OGD/R model in the subsequent experiments. The R28 cells were pretreated with valdecoxib prior to CCT020312 administration, after which they were subjected to the OGD/R model. The cell lysates collected from each group were subjected to a Western blot analysis of various markers of ER stress, apoptosis and the PERK-ATF4-CHOP pathway. A densitometric analysis confirmed that valdecoxib significantly reduced the OGD/R-induced GRP78, p-PERK, ATF4 and CHOP, as well as the expression of the apoptosis-related proteins bax and cleaved caspase 3. CCT020312 reversed valdecoxib's effects, increasing the expression of the markers of ER stress and of the PERK-ATF4-CHOP pathway (Figure 5A-H). We next determined whether the activation of the PERK-ATF4-CHOP pathway induced by CCT020312 increased the expression of the apoptosis-related proteins.
The Western blot results showed that the expression levels of the pro-apoptosis proteins, including bax and cleaved caspase 3, were consistent with those of the PERK-ATF4-CHOP pathway proteins in the CCT020312-pretreated group (Figure 5A-H). These findings support the hypothesis that CCT020312 reverses valdecoxib's anti-apoptosis effect by activating PERK-ATF4-CHOP pathway-mediated ER stress.

Figure 5 caption (fragment): Data are represented as the mean ± SD of three independent experiments. One-way ANOVA is used in B to H. * p < 0.05, ** p < 0.01, *** p < 0.001 vs. control group. # p < 0.05, ## p < 0.01, ### p < 0.001 vs. OGD/R+DMSO group. $ p < 0.05, $$ p < 0.01, $$$ p < 0.001 vs. OGD/R+VAL group.

Discussion

A wide variety of mechanisms are involved in the development and progression of glaucoma. ER stress has been recognized as playing a significant role in the pathology of glaucoma [12]. Valdecoxib's protective effect has been implicated in several neurodegenerative diseases; however, its role in glaucoma is largely unknown [25-27]. In this context, our study investigated whether valdecoxib can perform a rescue function in glaucomatous models, and whether the underlying molecular mechanisms involve ER stress. We explored valdecoxib's protective effect in glaucomatous models in vivo and in vitro.
We found that pretreatment with valdecoxib reversed the morphological changes of RGC loss and retinal thinning caused by IRI. In addition, the expression of pro-apoptosis proteins increased in the I/R group and decreased after valdecoxib treatment. These results suggest that valdecoxib can alleviate I/R-induced glaucoma-like damage through the inhibition of apoptosis in RGCs. Similar results were observed in a cell model of glaucoma, demonstrating the key role of apoptosis in glaucoma pathology. This is consistent with other studies showing that the apoptosis of RGCs is the final common pathway in both human and experimental models of glaucoma [28-31]. Inhibiting apoptosis has proven to be effective for glaucoma protection [32,33]. In our study, the anti-apoptosis effect of valdecoxib was revealed in the glaucomatous models. Therefore, we speculate that this may be a mechanism through which valdecoxib confers protection against glaucoma. However, previous studies suggested that valdecoxib has a pro-apoptosis effect in tumor cell lines [34,35]. Cyclooxygenase-2 (COX-2) is an inducible prostaglandin synthase enzyme whose gene is overexpressed in many tumors [36]. A study demonstrated that downregulating COX-2 induced cell apoptosis in hepatocellular carcinoma [37]. We can therefore infer that COX-2 may be the target through which valdecoxib induces cell apoptosis in tumors. In addition, other studies indicated that the effect of valdecoxib on apoptotic cell death is cell-line-dependent [35]. Our study revealed that valdecoxib exerts an anti-apoptosis effect in glaucomatous models. This may be for two reasons. First, valdecoxib may respond differently in various cell lines: it exerts pro-apoptosis effects on tumor cell lines but may inhibit apoptosis in other cell lines. Moreover, we speculate that valdecoxib's anti-apoptosis effect may be induced via COX-2-independent mechanisms in glaucomatous models. This is the first study to identify valdecoxib's anti-apoptosis mechanism in glaucoma protection. We further explored the underlying molecular mechanism of valdecoxib's anti-apoptosis effect in glaucomatous models. ER stress-induced apoptosis has been implicated in the pathogenesis of various diseases [38]. Inhibiting ER stress can protect cells against apoptosis [39,40]. Studies have revealed that increased ER stress appears to be one contributor to elevated IOP and the development of glaucoma [41]. We therefore speculated that ER stress may be the target through which valdecoxib suppresses apoptosis in glaucomatous models. PERK-ATF4-CHOP is one of the classical pathways of ER stress and is pro-apoptotic [42]. When ER stress is prolonged, the activation of PERK can promote the translation of ATF4, which increases the transcription of specific unfolded protein response (UPR) target genes, including CHOP [43]. In this study, we detected the expression of ER stress proteins in each group. We showed that valdecoxib suppressed the expression of p-PERK, ATF4, CHOP and GRP78 induced by I/R injury. The levels of the apoptosis-related proteins, including bax, bcl-2 and cleaved caspase 3, were further detected. The results showed that the activation of the PERK-ATF4-CHOP pathway positively altered the expression levels of the pro-apoptosis proteins and negatively altered those of the anti-apoptosis proteins. Pretreatment with valdecoxib inhibited the activation of the PERK-ATF4-CHOP pathway and improved the expression level of the anti-apoptosis protein. Similar results were obtained in the cell model of glaucoma.
These results indicated that valdecoxib attenuated ER stress-mediated apoptosis via the inhibition of the PERK-ATF4-CHOP pathway. Our study revealed that the PERK-ATF4-CHOP pathway can offer a potential target for glaucoma treatment. In addition, CCT020312 administration abolished valdecoxib's protective effect, activated the expression of p-PERK, ATF4 and CHOP, and aggravated ER stress-mediated apoptosis in the OGD/R model, indicating that the inhibition of the PERK-ATF4-CHOP pathway is required for valdecoxib's protective effect in glaucoma. This is the first study to explore valdecoxib's function in glaucomatous models, providing a potential treatment for glaucoma. In addition, our results provide clues as to how we can better understand the mechanism of ER stress in glaucoma. However, while the experiments were performed in vivo and in vitro to detect valdecoxib's function in glaucoma models, there were limitations to our study. We used the acute intraocular pressure elevation model (aHIOP) to mimic retinal I/R injury. Although this is one of the classic glaucoma models and has been chosen for a number of glaucoma research studies, the results we obtained in the I/R model have not yet been tested on other glaucoma models in vivo. It is uncertain whether the same results can be obtained in other glaucoma animal models. More glaucoma models need to be considered to test our outcomes. In conclusion, our study indicates that the PERK-ATF4-CHOP pathway plays a significant pathological role in glaucomatous damage. Valdecoxib protects against glaucomatous injury by inhibiting endoplasmic reticulum stress-induced apoptosis via the inhibition of the PERK-ATF4-CHOP pathway. These findings suggest a promising role for valdecoxib therapy in protecting individuals from glaucoma.

Animals

The I/R Injury Model (IRI)

We used the protocol described in our team's previous work to build the rat I/R model [44]. I/R injury was induced by elevating the intraocular pressure. Briefly, we anesthetized rats with 2% sodium pentobarbital by intraperitoneal injection. The rats were fixed on a stereotaxic instrument after eye drops comprising obucaine, levofloxacin and compound tropicamide were applied to the cornea. Next, a 31-G needle was inserted into the anterior chamber of the rat's eye, and the other end of the needle was connected to normal saline via an infusion-set tube. The intraocular pressure was gradually increased to 110 mmHg for 1 hour. We then removed the needle from the anterior chamber. Tobramycin-dexamethasone eye ointment was used after surgery to prevent eye infections.

Valdecoxib Treatment in the I/R Model

In vivo, an anesthetized rat with dilated pupils was placed under a stereomicroscope, and 3 µL of 5 µM valdecoxib was injected into the vitreous chamber using a 5-µL Hamilton syringe (Hamilton AG, Bonaduz, Switzerland) 30 min before the I/R model. Four concentrations (1, 5, 25 and 100 µM) were initially chosen based on the concentration of valdecoxib used in the OGD/R cell model and the volume of the vitreous chamber, and were confirmed empirically; 5 µM was identified as the proper concentration of valdecoxib in the glaucomatous animal model (Figure S3).

R28 Cell OGD/R Injury Model

For the R28 cells used to build the OGD/R model, we replaced the low-glucose DMEM with glucose-free DMEM and kept the cells under hypoxic conditions in a closed container at 37 °C for 2 h. Next, we replaced the glucose-free DMEM with low-glucose DMEM and returned the cells to the culture incubator at 37 °C and 5% CO2.
Hematoxylin and Eosin (HE) Staining The morphological changes to the retina induced by I/R were visualized through HE staining. We removed the rats' eyeballs in each group and fixed them in 4% paraformaldehyde for 48 h. The eye tissue was embedded in paraffin, and 4 µm-thick slices were prepared. We followed the protocol for HE staining that was described previously [45]. Terminal Deoxynucleotidyl Transferase-Mediated Nick End-Labeling (TUNEL) The eye tissue was embedded in paraffin and 4 µm-thick slices were prepared. We used a TUNEL BrightRed Apoptosis Detection Kit to detect apoptotic RGCs in retinal tissue. TUNEL-positive nuclei stained red. The slides were counterstained with DAPI to label all cell nuclei. Cell Counting Kit-8 (CCK-8) The Cell Counting Kit-8 assay was used to assess cell viability. A microplate reader was used to measure the absorbance values at 400 nm after the cells had been incubated with CCK-8 for 2 h. The cell survival rate was then calculated. Western Blot Assay Western blotting was performed following the routine protocol. Briefly, cells and retinal tissue were obtained after treatment under the indicated conditions. The cells and retinas were harvested and mixed with lysis buffer in tubes. The tubes were then placed on ice for 30 min. The supernatant of the total lysates was collected after centrifugation (12,000× g, 20 min, 4 °C). The protein concentration was determined using a commercial BCA Protein Assay Kit (CWBIO, Beijing, China). Next, 30 µg of protein was loaded and separated by 10% SDS-PAGE, then transferred to a nitrocellulose (NC) membrane. After blocking with 5% bovine serum albumin (BSA) at room temperature for 2 h, the membranes were incubated with the primary antibody at 4 °C overnight. The membranes were washed three times, each for 10 min, and then incubated with the secondary antibody for 1 h at room temperature. The membranes were washed again, and the bands were visualized using an imaging system and quantified by densitometry using ImageJ software. Annexin V-FITC/PI Flow Cytometry A flow cytometry assay was used for the detection of cell apoptosis. After treatment under the indicated conditions, cells were made into single-cell suspensions and stained with Annexin V-FITC and PI according to the product's instructions. The results were quantified and analyzed using a flow cytometer. Immunofluorescence Staining Assay The immunofluorescence staining followed the routine protocol. Briefly, paraffin-embedded sections were first deparaffinized using xylene, followed by rehydration in serial alcohol dilutions. Next, antigen retrieval with citrate buffer, permeabilization and blocking were performed. Subsequently, the sections were incubated with the primary antibody at 4 °C overnight, then washed three times, each for 10 min. The samples were incubated with the appropriate secondary antibody (goat anti-rabbit Alexa Fluor® 488, ab150077, Abcam, Cambridge, MA, 1:100; goat anti-guinea pig Alexa Fluor® 488, ab150185, Abcam, Cambridge, MA, 1:100) for 2 h. The samples were washed three times with PBS, counterstained with 1X PureBlu DAPI (BioRad, Hercules, CA, USA) for 5 min, and then mounted. The slides were viewed using a fluorescence microscope [46]. Propidium Iodide (PI) Staining PI was used to assess cell membrane permeability. Dead cells' nuclei stained red, whereas normal cells were PI-negative. We applied the protocol described in our team's previous work [44].
The numbers of PI-positive (red) cells and Hoechst-positive (blue) cells were counted using ImageProPlus software, and the proportion of PI-positive (red) cells was calculated. Statistical Analysis All data are presented as the mean ± standard deviation (SD). SPSS 22.0 software was used to analyze all the data, and statistical comparisons were performed using one-way ANOVA. p-values of <0.05 between datasets were considered statistically significant. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
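For the quantification and statistical steps described above, the following is a minimal sketch (not the authors' pipeline; the file name, column names and group labels are hypothetical, and scipy stands in for SPSS):

```python
# Minimal sketch of the quantification and statistics described above.
# Hypothetical input: per-image cell counts exported from ImageProPlus
# as counts.csv with columns: group, pi_positive, hoechst_positive.
import pandas as pd
from scipy import stats

df = pd.read_csv("counts.csv")
# Proportion of PI-positive (dead) cells per image.
df["pi_fraction"] = df["pi_positive"] / df["hoechst_positive"]

# One-way ANOVA across treatment groups (SPSS was used in the paper).
groups = [g["pi_fraction"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)

# Mean +/- SD per group, as reported in the paper.
print(df.groupby("group")["pi_fraction"].agg(["mean", "std"]))
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant
```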
On the Identification of Several Key Issues on OER Discovery for Smart Learning Environments: The best Open Educational Resources (OER) for each final user can be hard to find. OER come from many sources, in many different formats, conforming to diverse logical structures, and each user may have different objectives depending on their role in the teaching/learning process and their context. Previous attempts have focused on only one kind of technology. A new approach that embraces diversity may gain from the potential synergies of sharing resources in the development of the final recommendation system and the exploitation of the data. In this work, we aim to identify the main challenges facing the field of OER recommendation, for a potential architecture model. Introduction Open Educational Resources (OER) are those "teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions" [1]. Thanks to their open perspective, OER present an opportunity to make education more accessible in many different ways. For instructors, OER can be a great starting point, as the vast majority of teachers build on previous resources to prepare their own. For students in an educational institution, having access to a wider and more diverse pool of resources improves the chance of finding the proper resource to gain new knowledge. In addition, for those not in an educational institution but still interested in learning, such as lifelong learners or people without material access to education, OER may be a great substitute. To take advantage of the full potential of OER, some problems should be addressed. A revision of the literature concerning OER pointed to some open questions in the field [2]. One of them was referred to as the discovery problem, that is: what can we do to make the most relevant resources easily accessible to the final user? Finding the proper resource is challenging in multiple ways. First of all, as more and more OER become freely accessible, the discovery problem gets worse. For each user, the number of relevant resources is becoming more marginal compared to the total number of resources available. Comparing two resources to choose the more relevant is relatively easy; comparing between millions is much more complex. One of the most important advantages of OER, their vast number, is thus also one of the problems. Here is where recommendation technologies are a good starting point, and they have been widely tried in the OER field. Two major characteristics of the OER recommendation field are especially relevant. The first is that in other fields the recommendations are provided to a set of final users who share some goal, usually entertainment; in the OER field, different kinds of final users coexist and their goals are more diverse. The second is that it is not easy to decide which resource is best for each circumstance, as the number of relevant variables is much larger than in other fields. We will start with the former characteristic. A relevant part of any recommendation system is knowing and meeting the expectations of the users interested in it. Fulfilling these expectations, or even exceeding them, is the definition of success in this kind of system.
To fulfil expectations, we need to acknowledge the objectives of the user and give answers to them. In the education context, we mainly find two different roles: students and teachers. When we have two roles we have at least two different sets of objectives, and their needs may change over time. Students and teachers will expect different outputs from the recommendation system [3], such as annotation in context, finding good items, finding all good items, recommending a sequence, recommendations out of the box while the user is browsing, finding a credible recommender, finding novel resources, finding peers with similar interests, or recommendations of alternative learning paths through learning resources. We think that the quantity and type of variables that should be involved in an educational recommendation setting are also worth considering. Recommending an OER becomes harder as the number of variables relevant to the recommendation process increases; those variables are harder to quantify, and so they are harder for a computer system to process automatically. In the education field, we need to take into account different variables, such as the student's previous knowledge, social context (including, but not limited to, language), most effective ways of learning, the quality of the resource, personal taste, etc. Each of these variables is more abstract, and so more difficult to consider in a computational model. To address the discovery problem, many families of recommendation techniques exist. For example, [4] identify 11 different techniques. However, these previous attempts tend to have in common that they focus on a very specific task and on one family of techniques. We think that this extensive set of independent solutions for different objectives is not a complete solution for final users, as they would need to jump through multiple independent systems to find a proper resource. In the next section, we will review some of the most relevant related works in the area of OER recommendation and personalization, for any of their possible contexts. In Section 3, we will introduce and discuss the main challenges associated with the OER discovery problem. Next, in Section 4, a model to face the identified challenges is presented, with a brief discussion of the implementation possibilities and threats. Finally, we will present a discussion of the work in Section 5. Related Works Many different attempts to solve the discovery problem have already been explored. We may consider plain repositories of OER, such as MERLOT [5] or OER Commons [6], as the most basic and intuitive approach. These kinds of repositories gather resources, but they are quite limited in their recommendation or adaptation efforts, as the user still needs to expend much effort to find the right resources. The discovery problem has been an active field for some time. We can find works proposing solutions even before UNESCO defined the term. Just to mention a few, we can highlight a system based on social navigation on repositories [7] and an approach based on social tagging and recommendation of resources similar to a given one [8]. More recently, once the term was coined by UNESCO, which also called for solutions, even more works were developed. Many of these works have focused on just one repository, repositories sharing a common taxonomy, or repositories conforming to the principles of Linked Data.
An example of the latter is [9], which presents a recommendation framework based on Linked Open Data to recommend new OER. Other related works focus mainly on a very specific task. For example, [10] present a system that uses machine learning techniques to detect whether, for the understanding of a given resource, some prerequisites should be met, and whether other resources can be used to learn those prerequisites. By doing so, they hope to fill the knowledge gap a new resource may create for a student. Ref. [11] introduced CROERA, an aggregator of different OER repositories that was able to automatically aggregate resources even when the repositories did not share a common taxonomy for classifying them. They did so while avoiding the traditional approach of one-to-one matching, making it easier to search and to aggregate new repositories in the future. To perform the aggregation, CROERA uses natural language processing (NLP) to extract features from the resources and categorizes them using support vector machines (SVM). This approach solves part of the discovery problem, as it makes it easier to search multiple repositories; but no recommendation, personalization or adaptation of the resources is offered, just an aggregation. Challenges in OER Recommendations As previously stated, the field of OER discovery presents some challenges. The resources are not as uniform as in other fields and there are, at least, two clear roles with different objectives using the same resources: teachers and students. In this section, we will discuss the main challenges: the heterogeneity of sources, formats and logical structures, and the multiplicity of objectives. Most of the previous research and development has focused on some parts of the global problem. While a divide-and-conquer strategy may be viable, multiple subsystems should coexist in any proposal to allow synergies to emerge between them, which would not arise if they were separate systems. If the recommendation algorithms share information, effort is saved and new opportunities are created. Source Heterogeneity Plenty of different OER repositories exist: those managed by educational institutions (e.g., MIT OpenCourseWare or MERLOT), by non-governmental organizations (e.g., [5]) or by public institutions. In some repositories, only certain people or groups can upload material, while in others anyone can do it. Some repositories are supervised by editors or librarians who review all the content, and some operate without any kind of prior supervision. There are thematic repositories on many different topics. There are repositories associated with OER producers and there are repositories that are collectors. There are so many that repositories of repositories also exist. The vast majority of them allow automatic access to resources, and some even offer a public API to access them or their services, such as search engines. Ironically, a few, although they collect open resources, do not allow automatic access by third parties (an example is the OER Commons Platform Terms of Use [12]), so not all repositories can serve as a data source. Likewise, the way each OER is presented in each source is very diverse, making a direct and automatic comparison difficult. Usually, there is some type of listing, either as a result of a search or through a content organization scheme based on categories from which hang specific pages for each resource.
These sites tend to label the resources with a set of common fields (title, date, license…), but many other fields are unique to each site or differ substantially in ways that do not allow direct translation. For example, a field indicating the expected target audience is very common, but it may indicate age, educational level (which changes from one country to another) or some type of category that attempts to represent difficulty. What is common to all OER in all kinds of repositories or sources, regardless of any other consideration, is that every OER has a URL assigned to identify the resource, because if the resource did not have a URL in the first place, neither the recommender system nor the final users would be able to access it. Although this designation is not unequivocal, since a single resource can have several different URLs, more complex designations that solve this problem are not worth the effort. Format Heterogeneity One of the great opportunities offered by OER is that for each topic it is possible to find very diverse resources trying to explain the same thing, not only with different approaches but also in completely different formats. We can find videos, books, presentations, recorded talks, software simulations, etc. For example, in the MERLOT system, 22 different kinds of resources are distinguished [13]. This diversity is valuable: it is very rare for a teacher to have the material and time resources to create a course in which each resource is available in various formats to suit different students' preferences. Nevertheless, this diversity of formats complicates the recommendation a lot, since each format comes with its peculiarities. For example, a book-only recommender will create a model that includes information about the publisher, the date of edition or the number of pages, metrics that make no sense for videos, for instance. In addition, it is not only difficult when it comes to building the models; it is also complicated when working with the original data files. The software needed to process or analyze video files is very different from that needed to do the same with a web page. Diverse Logical Structures It is not uncommon for OER to be aggregated into courses, books or compilations, and those aggregates are equally valid OER. Some students will only need to read a chapter, but others the whole book. As such, some kind of variable hierarchical structure is necessary to store these multilevel aggregation relationships. The problem is that no standard regulates this in any way (and a new standard is not expected to be created and followed any time soon). In addition, by allowing the modification and redistribution of modified OER, it is very likely that any resource will be divided into new ones, thus creating new related OER by aggregation and division. Description of a Proposed Model After discussing the motivation and the challenges regarding the discovery problem in the OER field, we will now present a model. Our model is divided into five different stages. A diagram can be seen in Figure 1 to follow our explanation. Note that, as new OER are created or updated constantly and users may interact many times with the system, the model should be seen as a dynamic one, where the stages will be constantly producing results; it is not a static step-by-step process. Stage 0: Source Collection A recommendation system cannot recommend what it does not know.
So in this initial stage, it is necessary to discover what can be recommended. When we discussed the challenge posed by the heterogeneity of sources, we already anticipated that there are so many models of OER repositories that it is impossible to look for a common model that abstracts them all. The only assumption we can make about what a repository will contain is that there will be resources and that the resources will be identified by a URL. The URL of the resource itself may not always be enough to process it. Often, as with text files, the resource can be self-explanatory, but on other occasions, such as video or audio, resources may be accompanied by a page describing the resource, without which the possibilities of understanding it (or knowing the license under which it is distributed) decrease. For these cases, we have contemplated the possibility that each resource URL is accompanied by a description URL. This URL can point to an HTML page meant to be seen by humans or to linked data aimed at automatic processing. Be that as it may, it cannot be assumed that this field will contain information. Finally, there are the lists of ascendant and descendant resources. The founders of Google said in 1998 that "Crawling is the most fragile application since it involves interacting with hundreds of thousands of Web servers and various name servers which are all beyond the control of the system" [14], and this first stage is mainly about crawling. Any implementation should pay attention to crawler traps, the order of visits, server limitations and redundant content found in multiple repositories, and should remember that OER will change over time, being updated, so it will be necessary to consider strategies to track and maintain those changes. Stage 1: Transformation of the OER into a Common Representation The heterogeneity of formats discussed above, together with the diversity of objectives, forces us to seek a common representation of resources if we want to achieve a system with universal aspirations. The problem is that not all recommendation algorithms can work with the same starting representations. There are recommendation algorithms based on linked data, on the content of resources, on collaborative filtering, on models, etc., and as these representations are different, in the end any recommendation system must choose its approach from the beginning and is limited to recommendation algorithms based on it. But what if it were not necessary to choose? That is, we have the option to create multiple representations of the OER. In a single system we may have OER represented in linked data models or in a textual representation based on their content, and possibly many others. If we accept that we can have multiple recommendation systems, we necessarily have to accept that all these representations can coexist, and thus we accept that there are several possible representations for each resource. The question now is: if at a later stage we create as many models as necessary for each recommendation algorithm, what is the meaning of this previous common representation? Does not the later diversity of models pick up everything we need? The answer is no. That is, the resource models that are created in the next stage are strongly linked to the needs of specific recommendation algorithms.
The common representations created in this stage are not; they are conceived independently and abstractly from any specific algorithm, while being linked to a family of algorithms. The general rule to discern whether something should be a model or a common representation is whether it can be useful for more than one recommendation algorithm: if it can, it should be a common representation and therefore shared, because resource models are unique to each recommendation algorithm. This distinction is what justifies this step. Stage 2: Creation of the User and Resource Models Although this and the next stage are different, they have a strong mutual dependence, since resource and user models are only created to be used later by the recommendation algorithms of the next stage. The idea is that the systems implementing the model will start from a series of common representations of each resource (the result of the previous stage) and of each user (all the information collected by the system throughout their interaction), and instead of working on that primary model directly, each subsequent recommendation algorithm should create derived models or views of it to manipulate its information. These views allow models to store information without stepping on each other. It is important at this stage to pay special attention to avoiding duplicated information: firstly, because if something that a recommendation algorithm creates is useful for others, it should be a common representation, not a model of its own; secondly, because duplicated information leads to inconsistent states that should be avoided. Several recommendation algorithms sharing the same model of the original user can also be a great advantage in terms of the usability of the system, compared to several that replicate the same functionality. For example, if several recommendation algorithms take as input some data that they would have to ask the user for, sharing means a single question is enough, which avoids saturating the user. Even if the system wants to avoid asking the user and tries to infer the data, the more data available beforehand, the more possibilities there will be to find relationships that allow inference. Depending on the exact set of algorithms chosen, the system will have different models. For example, if algorithms from the collaborative filtering family are used, the user model will gather information about the evaluation of each resource made by the user, but this will not be gathered in the user models for content-based recommendations, as in those the user's evaluation of the resource plays no part. For each task defined in the system, more than one of these models may be used, as we explain in the next stage. Stage 3: Data Use The ultimate goal of the model is to make recommendations, and this is the stage where that goal is finally met. As we have already explained, in this stage multiple recommendation systems coexist. The input data can be shared (from the OER themselves, or from the user activities) and the output of some models may be the input of others, as many times as necessary, creating a living ecosystem that grows towards an increasingly complete and useful system. For example, the system may use a content-based algorithm to give results to a user query in a search bar.
That algorithm will give a sorted set of results, which is used as input to an algorithm that takes that sorted set and the user's affinity to certain types of resources (deduced from their interaction with previous resources) and provides a refinement of the original sorted set of results. This refined sorted set is again used as input for an algorithm that also takes into consideration some rules established by the course teacher and filters the refined sorted set, finally providing the results to the user (see the sketch below). These algorithms will rely on different user and resource models, may share some common representations, and do not need to be created at the same time, as each one can be reused as feature demands and programmers require. The number of possible recommendation algorithms here is immense. There are needs that will be covered by associating resources by thematic affinity, by diversity of format or because they form a progression. Other algorithms could associate resources with users, or even users with each other. The fundamental idea is that there are as many recommendation algorithms as can be useful for someone and that they carry as much information as possible. Stage 4: Feedback Once the recommendations are made, many models can take advantage of the feedback given by the users, and it is important to consider this in order to allow a gradual improvement of the overall process. The idea is very simple: once a recommendation system has created a recommendation, some other model should step in to analyze the explicit or implicit feedback generated by the user. It can be as simple as offering "I like"/"I do not like" buttons, or it can be more complex, such as analyzing user activity patterns collected in logs implemented for this purpose. Regardless of the method of obtaining feedback, all the previous stages can benefit from the conclusions derived from it, but there is a distinction between the first two and the last two. For stages 2 and 3, it makes sense to devote effort to automating the incorporation of the information delivered by this feedback process, but this is not the case for stages 0 and 1. The first two stages will be much more stable and will need fewer changes, so it seems more reasonable that these changes occur only after a human intervention, although always after analyzing the feedback. Discussion In this work, we have reviewed some of the challenges that make the recommendation of OER difficult. We have also presented a model for the automatic recommendation of OER through the integration of multiple components, whose purpose is to find the best-fit OER. The presented model focuses on the integration of different recommendation algorithms in a single system. By doing so, we are able to better address the diverse objectives of the users, given that some techniques will yield better results for some objectives than others. Moreover, we are able to take advantage of potential synergies arising from the sharing of resources for the development of a single system and the sharing of data. The model is divided into five stages: (1) source collection, (2) the transformation of the OER into a common representation, (3) the creation of the user and resource models, (4) the data use, and (5) feedback. This way, the model can integrate any existing or future recommendation technique.
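As a minimal sketch of the Stage 3 chaining described above (content-based retrieval, refinement by user affinity, then filtering by teacher rules), consider the following; all class and function names are hypothetical stand-ins for full recommendation algorithms:

```python
# Hypothetical sketch of chained recommenders (Stage 3): each step consumes
# the previous step's ranked list plus its own user/resource model.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OER:
    url: str                 # the one field every source guarantees
    fmt: str = "unknown"     # video, book, webpage, ...
    score: float = 0.0

def content_based_search(query: str, catalog: List[OER]) -> List[OER]:
    # Stand-in for a real content-based ranker over common representations;
    # here it simply returns the catalog sorted by a precomputed score.
    return sorted(catalog, key=lambda r: r.score, reverse=True)

def affinity_refine(ranked: List[OER], fmt_affinity: dict) -> List[OER]:
    # Re-weight each result by the user's inferred affinity to its format.
    for r in ranked:
        r.score *= fmt_affinity.get(r.fmt, 1.0)
    return sorted(ranked, key=lambda r: r.score, reverse=True)

def teacher_filter(ranked: List[OER], allowed: Callable[[OER], bool]) -> List[OER]:
    # Apply course rules set by the teacher (e.g., licence or level limits).
    return [r for r in ranked if allowed(r)]

# Usage: the output of each algorithm is the input of the next.
catalog = [OER("http://a", "video", 0.9), OER("http://b", "book", 0.8)]
results = teacher_filter(
    affinity_refine(content_based_search("limits", catalog),
                    fmt_affinity={"video": 1.2, "book": 0.7}),
    allowed=lambda r: r.fmt != "unknown",
)
print([r.url for r in results])
```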
The proposed model allows integrating multiple OER sources and making recommendations in very diverse educational settings, making it possible to respond to users with different objectives. With the presented model it is possible to treat all resource formats and all sources in the same way, thanks to the transformation of the resources into common representations. These common representations facilitate the introduction of new recommendation algorithms, since they can take advantage of previous efforts with greater ease and reduce maintenance effort. Based on these common representations, the next stage is a separate stage for the creation of the final models used by the recommendation algorithms. We distinguish these two stages in order to emphasize the importance of the recommendation algorithms sharing as much as possible, in order to eliminate duplicated effort, guarantee that there are no incongruent states and lay the foundations for synergies to emerge. It is difficult to support the idea that creating an OER recommendation system that meets all the needs and objectives at the first try is feasible. In any case, with this highly modular architecture we can build on previous models and efforts, and we hope to facilitate synergies. These synergies would be especially relevant to the possibilities that feedback from the users offers. Every recommendation system is improved by this feedback, even though it is not easy to gather, nor to share this information between systems. In consequence, we believe that using our model to create a multipurpose system will produce a system more interesting to the users, as they can achieve most of their objectives in a single place, and so it will be used in more situations, creating even more feedback information that can be used to improve the system, attracting more users, and so on. This should be, in the end, a positive feedback loop. Some weaknesses of the proposed architecture should be discussed too. We would like to begin by turning our attention to a question that has nothing to do with the proposed model itself. It has to do with the fact of trying to solve a problem in an area where many previous attempts already exist with yet another new proposal. If one of the problems for the discovery of resources is that there are multiple sources, will the situation really improve with the creation of a new source? It is true that this proposal aims to group as many previous efforts as possible while creating new possibilities but, certainly, it is not the first attempt with this vision. Looking at the proposed model itself, the idea of decoupling the stage of creation of common representations from the creation of models will make it easier for the recommendation algorithms to share data but, at the same time, there is a risk of the system becoming chaotic. Defining a clear set of rules to decide what should be part of that common representation and what should not will be a challenging task. Ideally, the common representations should be altered very little and be exclusively cumulative, that is, never eliminating content from the common representations, to avoid ending up with orphaned recommendation algorithms. After all, it is always easier to ignore data than to have to guess it because it is missing. But storage and maintenance costs should also be considered. As future work, plenty lies ahead. First of all, the model should be empirically validated.
The proposal should be effectively implemented and progressively evaluated to determine its effectiveness in different contexts. Likely, the progressive integration of all these recommendation strategies will be hard. It will also be important to pay special attention to the creation of the interface and user experience, in order to make it suitable for all the different user profiles that may benefit from it, always maintaining a balance between functionality and acceptable complexity. Once the system begins to take sufficient form, the model should be evaluated to check its effectiveness and limitations in real environments and to decide whether the effort is worthwhile. Given the highly integrative approach of the model, we expect that future work should not be limited to the integration of previous recommendation methods, but should also focus on building new and improved methods, taking advantage of the synergies that we hope will arise. Ideally, if tests with users work and can be extended to larger user communities, the data collected may itself be valuable and worthy of becoming an independent dataset released to the scientific community to encourage research on recommendation systems in educational contexts. The vast majority of publicly available datasets on which research is founded are of films or songs, and these are settings different from the educational one. Testing recommendation algorithms with thematically limited sets may be hiding problems in the recommendation algorithms, and these datasets can become a useful tool to discover and address them.
Direct Evidence of Drift‐Compressional Wave Generation in the Earth's Magnetosphere Detected by Arase We present the first direct evidence of an in situ excitation of drift-compressional waves driven by drift resonance with ring current protons in the magnetosphere. Compressional Pc4–5 waves with frequencies of 4–12 mHz were observed by the Arase satellite near the magnetic equator at L ∼ 6 in the evening sector on 19 November 2018. Estimated azimuthal wave numbers (m) ranged from −100 to −130. The observed frequency was consistent with that calculated using the drift-compressional mode theory, whereas the plasma anisotropy was too small to excite the drift-mirror mode. We discovered that the energy source of the wave was a drift resonance instability, which was generated by the negative radial gradient in the proton phase space density at 20–25 keV. This proton distribution is attributed to a temporal variation of the electric field, which formed the observed multiple-nose structures of ring current protons. DCM is excited via resonant interactions with particles energized through "bump-on-tail" distributions or gradient instability (Crabtree et al., 2003; Kostarev & Mager, 2017; Mager et al., 2013) or via coupling with the shear Alfvén mode (Mager et al., 2015; Mager & Klimushkin, 2017). Although a few theoretical studies have investigated the excitation of DCM, the most crucial excitation mechanism in a realistic situation remains to be elucidated, owing to a lack of in situ observations of the plasma distribution function in the magnetosphere. This study derived a unique perspective on compressional Pc5 waves using observational data provided by the Arase satellite. Both the oscillation mode and the generation mechanism of the waves were directly determined from the satellite data. Section 2 describes the data used in the study. Section 3 provides an overview of the wave properties and plasma environment. Section 4 presents a theoretical interpretation of the generation mechanism. Finally, Section 5 summarizes this study. Data In this study, we analyzed the data provided by the Arase satellite (Miyoshi, Shinohara, et al., 2018) and used 8-s spin-averaged magnetic field data (Matsuoka, Teramoto, Nomura, et al., 2018). The magnetic field vectors (B) were rotated into a mean field-aligned coordinate system. The parallel direction (∥) was determined from the 10-min moving average of the magnetic field, a window much longer than the wave period (<250 s). The azimuthal (a) and radial (r) directions are positive eastward and outward, respectively. We used the 1-min electron density data obtained from the upper hybrid resonance frequency (Kumamoto et al., 2018) and ion flux data in the energy range from 3.8 eV/q to 184.2 keV/q, which were obtained using the medium-energy particle experiment-ion (MEP-i; Yokota et al., 2017) and low-energy particle experiment-ion (LEP-i; Asamura, Kazama, et al., 2018; Asamura, Miyoshi, & Shinohara, 2018) mass analyzers. The time resolution of the mass analyzers was set to 8 s during the interval of interest. We calculated the proton thermal pressure by combining the MEP-i measurements in the range 9.6–184.2 keV/q with LEP-i measurements between 64 eV/q and 6.1 keV/q, following the approach used by Menz et al. (2017) and Imajo et al. (2019). The OMNI Web database provided 1-min solar wind and interplanetary magnetic field (IMF) data. The World Data Center for Geomagnetism, Kyoto provided the SYM-H, AE, AU, and AL indices, which measure geomagnetic activity.
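As an illustration of the mean field-aligned rotation described above, here is a minimal sketch (our own, not the authors' processing code; it assumes 8-s field vectors and spacecraft position vectors in a common geocentric frame):

```python
# Minimal sketch: rotate B into mean field-aligned coordinates
# (radial, azimuthal, parallel) using a 10-min moving average as the
# background field. Assumes an 8-s cadence (75 samples = 10 min).
import numpy as np

def mean_field_aligned(B, r_sc, window=75):
    """B: (N, 3) field vectors; r_sc: (N, 3) spacecraft position vectors."""
    kernel = np.ones(window) / window
    B0 = np.column_stack([np.convolve(B[:, i], kernel, mode="same")
                          for i in range(3)])
    e_par = B0 / np.linalg.norm(B0, axis=1, keepdims=True)   # parallel
    e_azi = np.cross(e_par, r_sc)                            # positive eastward
    e_azi /= np.linalg.norm(e_azi, axis=1, keepdims=True)
    e_rad = np.cross(e_azi, e_par)                           # positive outward
    return (np.sum(B * e_rad, axis=1),   # radial component
            np.sum(B * e_azi, axis=1),   # azimuthal component
            np.sum(B * e_par, axis=1))   # parallel component
```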
Observation During 03:30–04:30 UT on 19 November 2018, we found large-amplitude oscillations in the parallel magnetic field component (Figure 1c) in the evening sector (magnetic local time (MLT) ≈ 21 hr) near the geomagnetic equator and close to the apogee of Arase's orbit at L ≈ 6.2 (see Figures S1a and S1b in Supporting Information S1). The amplitude of the parallel component is the largest among the three components at all latitudes, which indicates that the observed waves are compressional waves. The compressional waves were preceded by transverse waves with a frequency (f) of approximately 4 mHz during 02:45–03:30 UT (Figures 1a and 1d). Both waves had comparable amplitudes but different dynamic spectra. As shown in Figures 1d and 1f, the compressional wave spectrum exhibits broadband oscillations in the wave-frequency range of 4–12 mHz, while the radial and azimuthal oscillations have narrow spectral peaks along the calculated eigenfrequencies of standing Alfvén waves (gray lines). Therefore, we believe that the compressional and transverse waves should be discussed separately. The excitation mechanism of the transverse waves will be reported in a separate paper. The geomagnetic condition was slightly disturbed by a few southward turnings of the IMF around 04:00 UT on 19 November, increasing the AE index up to approximately 190 nT. The AU index variations were more significant than those of the AL index from 02:00 UT to 07:00 UT on 19 November. This is not a typical signature of substorm activity. The solar wind velocity remained nearly constant at 330 km/s, while the proton density exceeded 20 cm⁻³. The SYM-H index fluctuated owing to magnetospheric compression resulting from an increase in solar wind dynamic pressure on 18 November; however, the index was approximately 0 nT during the wave observation. The solar wind parameters in this interval are shown in Figures S1c–S1g in Supporting Information S1. Figures 1g and 1h indicate that a "nose structure" of ring current protons (e.g., Ejiri et al., 1980) exhibited multiple energy bands at approximately 5 and 15 keV during 02:00–03:00 UT. The proton perpendicular pressure P⊥,H+ reached approximately 3 nPa, which is slightly higher than the proton pressure near the midnight sector during quiet times (Lui & Hamilton, 1992). The plasma β of the protons reached 1; it is expressed as β⊥,H+ = 2μ₀P⊥,H+/B², where μ₀ and B are the vacuum permeability and the magnetic field intensity, respectively.
The B∥ wave power and the plasma β peaked simultaneously around 03:50 UT. Subsequently, both parameters gradually decreased as the spacecraft moved to the apogee at approximately 04:20 UT and then left the equatorial region. The ion anisotropy parameter Γ was calculated as Γ = (Σ_s P⊥,s)/(Σ_s P∥,s) − 1, where P⊥,s and P∥,s are the perpendicular and parallel plasma pressures of the ion species s, respectively. Γ was below 0.5 at all times. The cold electron density near the apogee was approximately 150 cm⁻³ because of either plasmasphere expansion or the duskside plasmaspheric bulge during quiet time, causing a low ion temperature (40–140 eV) at L > 5. The ion pressure, the plasma β, the ion anisotropy parameter, the cold electron density, and the ion temperature are shown in Figure S2 in Supporting Information S1. We theoretically evaluated the plasma pressure fluctuations using the perturbed distribution functions of DMM and DCM (δP⊥,DMM and δP⊥,DCM), assuming a bi-Maxwellian distribution and a low wave frequency (Takahashi et al., 2022; Equations 2 and 3), where δB is the perturbation of the magnetic field intensity and T⊥ and T∥ are the perpendicular and parallel temperatures of the ion species s. δP⊥ and δB were obtained from the differences between the raw data and their 10-min moving averages. The radial distributions of the plasma pressure and the magnetic field were calculated from the radial motion of Arase on the outbound orbit. Their radial gradients at L = 6.1 were used in Equations 2 and 3. As evident from Figure 2a, DCM explains the observed pressure fluctuations (δP⊥,Obs) better than DMM. The proton fluxes around the pitch angle α = 90° are strongly modulated (Figure 2b). This pitch angle dependence is expected for wave-particle interactions with a compressional-mode wave confined around the magnetic equator. Figure 2c shows the residual proton fluxes at α = 90° (δJ/J₀, where J is the differential flux, δJ = J − J₀, and J₀ is the 10-min moving average). The compressional wave modulated proton fluxes at energies in the range of 10–40 keV. These protons correspond to the higher-energy region of the outer portion of the nose structure (Figures 1g and 1h). We analyzed the residual fluxes using the Morlet wavelet to examine the energy dependence of the flux modulation. The dynamic spectra were averaged over 4–12 mHz and 03:30–04:30 UT. Figure 3a shows the averaged power spectral density of the residual fluxes, and Figure 3b shows the coherence between the residual fluxes and δB∥ as a function of energy. Both the power and the coherence have maxima at 20–25 keV, which implies that the wave-particle interactions occurred in this energy range. We calculated the azimuthal wave number m corresponding to the drift resonance of protons at 20–25 keV under the drift-bounce resonance condition (Southwood et al., 1969):
ω − mω_d = Kω_b, where ω is the wave angular frequency, ω_d and ω_b are the bounce-averaged drift angular velocity and the bounce angular frequency, respectively (Hamlin et al., 1961; Oimatsu et al., 2018), and K is an integer. As per the drift resonance condition, which is represented by the black curve labeled K = 0 in Figure 3c, m values from −158 to −119 were obtained for the given energy range with high coherence. We also estimated m using the finite gyroradius effect (Su et al., 1977; Takahashi, Claudepierre, et al., 2018; Takahashi, Oimatsu, et al., 2018). In this analysis, we used proton fluxes at energies of 19.1 and 25.5 keV from LEP-i measurements and 17.9 and 22.1 keV from MEP-i measurements at α = 90° during 03:35–03:50 UT. The squares in Figure 3c show the estimated m values (blue and red squares indicate m values based on LEP-i and MEP-i measurements, respectively). The m values ranged from −130 to −104, with a standard deviation of approximately 50. This result is consistent with that obtained from the drift resonance theory. The linear relation between f and m based on the DCM dispersion relation (Rubtsov et al., 2018) predicts a wide range of m values because the compressional wave exhibited broadband spectra (4–12 mHz) during this event. Interpretation The observed compressional wave cannot be interpreted as a DMM wave because Γ is negative and δP⊥,DMM is not consistent with δP⊥,Obs. By contrast, the observations strongly suggest that the wave is a DCM wave. We calculated the DCM frequency via the gyrokinetic approach of Mager et al. (2013) for further validation. While the diamagnetic drift frequency provides an approximate value of the DCM frequency (e.g., Takahashi et al., 2022), our analysis provides the first comparison between an observed frequency and a DCM eigenfrequency calculated using the kinetic theory. We fitted Maxwellian functions to the observed proton distribution function to deduce the physical parameters describing the cold and hot populations of the protons. The first function is related to the main cold plasma population, whose peak was estimated to be located below the lower limit of the energy coverage of LEP-i. The second function describes the hot proton population (>50 eV), whose peak is located at approximately 20 keV. These two populations were well separated in the observations. Plasma pressure and plasma β are key parameters influencing the DCM frequency. Because the effect of cold protons on these parameters is insignificant, we considered only the hot proton population. We obtained the hot proton density N_H+ and perpendicular temperature T⊥ from the proton flux data for an energy range of 50 eV–180 keV. According to Mager et al. (2013), the DCM frequency is determined from the gyrokinetic dispersion relation (Equations 6 and 7), where z = ω₁/mω_d and ω₁ is the DCM principal harmonic eigenfrequency; ω*_T and ω*_N are the diamagnetic angular velocities for protons at α = 90° due to the temperature and density radial gradients, respectively; l_b is the length of the particle path over a bounce period; Λ₁ = 0.5/R is the principal harmonic eigenvalue given by Equation 19 in Mager et al. (2013); R is the field-line curvature radius; and Z is the plasma dispersion function. The real part of ω₁ was obtained by substituting the proton temperature and density measured by Arase into Equations 6 and 7.
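As a rough numerical cross-check of the K = 0 resonance condition, the following sketch (our own, using the dipole drift frequency of an equatorially mirroring proton rather than the bounce-averaged Hamlin et al. (1961) expression) reproduces the order of magnitude |m| ∼ 100 of the reported wave numbers:

```python
# Rough cross-check of |m| = omega / omega_d for K = 0 drift resonance,
# using the dipole drift frequency of an equatorially mirroring proton:
# omega_d = 3 L W / (q B0 RE^2). The observed waves have m < 0 (westward).
import numpy as np

q, B0, RE, L = 1.602e-19, 3.11e-5, 6.371e6, 6.2  # SI units

def omega_d(W_keV):
    W = W_keV * 1e3 * q                  # proton energy in joules
    return 3 * L * W / (q * B0 * RE**2)  # drift angular velocity, rad/s

for f_mHz in (4, 8, 12):
    omega = 2 * np.pi * f_mHz * 1e-3
    for W in (20, 25):
        print(f"f = {f_mHz} mHz, W = {W} keV -> |m| ~ {omega / omega_d(W):.0f}")
```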
Magnetic field curvature is the only prerequisite for the existence of a DCM wave, while either ∂T⊥/∂L > 0 and ∂N_H+/∂L < 0 must be satisfied or a "bump-on-tail" distribution must occur to trigger a drift-compressional instability (Crabtree & Chen, 2004; Crabtree et al., 2003). Accurate calculation of the ∂T⊥/∂L and ∂N_H+/∂L values near the apogee is difficult; hence, we used two approximations: (a) radial gradients obtained from the radial movement of the spacecraft, and (b) zero constant gradients. As shown in Figure 4a, the observed f (white dots) is close to or lies between the f₁ = ω₁/2π values (black and magenta dots) calculated using the two approximations. As z is inversely proportional to m, we estimated the time-varying m to obtain the temporal variation in f₁ during the observation. We used the finite gyroradius effect over several wave periods to determine m in the time domain. The finite flux data cadence and the broadband characteristics of the compressional wave spectra introduce an error in the time-varying m; hence, a wide range of m values was obtained during the observation. Owing to this error, f deviated by a few millihertz around the values indicated by the dots in Figure 4a. Figures 4b and 4c show the ∂T⊥/∂L and ∂N_H+/∂L values used for the first approximation. The values could not be obtained after 03:53 UT, when Arase approached the apogee. ∂T⊥/∂L may be overestimated once the spacecraft moves off the equator, because P⊥,H+ decreases with increasing magnetic latitude for a given L-shell (Imajo et al., 2019). The uncertainty in ∂T⊥/∂L obtained from the spacecraft motion also increases. This may cause the considerable discrepancy between f and the f₁ calculated with the first approximation (black dots) in the latter part of the interval. The dependence of ion fluxes on MLT and their temporal variation can also introduce errors in the evaluation of the radial gradient. The radial ion temperature gradient during 03:00–03:30 UT is clearly positive, which might cause drift-compressional instability (Figure S2d in Supporting Information S1), but the sign of the gradient around the spacecraft apogee is not stably positive (Figure 4b). Another possible mechanism for generating DCM waves is the instability caused by the drift resonance, a wave-particle interaction between drifting charged particles and wave fields. While the drift resonance is considered as an excitation mechanism of fundamental poloidal standing waves (e.g., Dai et al., 2013; Takahashi, Claudepierre, et al., 2018; Takahashi, Oimatsu, et al., 2018), some theoretical studies suggest that the gyrokinetic wave equation of DCM includes the effect of the drift resonance (e.g., Mager et al., 2013). The instability condition (Southwood et al., 1969; Equation 9) requires that the total derivative dF/dW of the distribution function, which combines an energy-gradient term and a radial-gradient term, be positive, where F, W, M_res, q, B_eq, and W_res are the distribution function, particle energy, magnetic moment of the resonant particles, elementary charge, magnitude of the magnetic field on the Earth's equatorial surface (∼29,400 nT), and resonance energy, respectively. To the best of our knowledge, no previous studies have examined the drift resonance instability condition as an excitation mechanism of DCM waves.
Figures 3e–3g show the results calculated at W_res = 19.2 keV and M_res = 0.20 keV/nT. The first term in Equation 9 is always negative because the distribution function decreases with energy, without any "bump-on-tail" signature around the resonance energy (Figure 3e). As the spacecraft was close to the apogee, we used the proton flux data along the inward and outward guiding center directions to calculate ∂F/∂L (e.g., Yamamoto et al., 2018). For m < 0, the radial gradient of the distribution function (the second term in Equation 9) causes a drift resonance instability if ∂F/∂L < 0, which was satisfied several times during the observation (Figure 3f). Previous studies have shown that the gradient of the distribution function of ring-current ions can generate poloidal Alfvén waves as well (O. V. Mager, 2021; Mikhailova et al., 2022; Rubtsov et al., 2021; Yamamoto et al., 2019). For a general review of wave-particle interactions, refer to Klimushkin et al. (2021). The periods corresponding to dF/dW > 0 roughly aligned with the occurrence of the wave packets (Figure 3g), indicating a strong correlation between the observed compressional wave and the destabilization condition of the drift resonance. The convective growth of the waves or the latitudinal inhomogeneity of the resonant protons can result in a misalignment between the wave amplification and the fulfillment of the destabilization condition. The protons generating the waves comprise a nose structure with multiple energy bands (Figure 1h). The formation of the multiple-band nose structure may be associated with the complicated radial distribution of energetic protons. Ebihara et al. (2004) and Ferradas et al. (2016) showed that the lower-energy protons of a multiple-band nose structure drift directly and rapidly from the source location, whereas the higher-energy protons drift around the Earth. Both populations of protons reach the same location under a time-varying convection electric field. In this study, a proton population, which probably belonged to the higher-energy band, served as the energy source of the observed wave. When we considered 19.2 keV protons and traced back their drift motion at L = 6.2, we discovered that these protons, launched at 20:12 UT on 18 November, encircled the Earth and reached the spacecraft position when the radial gradient of the distribution function became negative (03:50 UT on 19 November). Therefore, these protons were possibly injected during a short southward excursion of the IMF between 20:20 UT and 20:40 UT on 18 November (see Figure S1c in Supporting Information S1). Before the protons reached the spacecraft's position, the IMF remained continuously southward from 23:50 UT on 18 November. In this interval, the open-closed separatrix of the drift path of these protons may shrink, releasing some protons trapped in the higher L-shells. In this case, the radial gradient of the distribution function is likely to turn negative, thereby generating waves through drift resonance.
Conclusions Pc4–5 compressional ULF waves were observed by the Arase satellite near the magnetic equator in the evening sector (∼21 MLT) during slightly disturbed geomagnetic conditions. The observed waves had a broadband frequency spectrum of 4–12 mHz. Arase was located inside the plasmasphere (N_e ∼ 150 cm⁻³); however, the plasma β reached approximately 1. The anisotropy parameter Γ was below 0.5, implying that the drift mirror instability cannot occur. The wave properties were consistent with those theoretically estimated by Mager et al. (2013) and Takahashi et al. (2022). The eigenfrequency of the DCM, derived via the method of Mager et al. (2013), showed quantitative agreement with the observed wave frequency. The relationship between the magnetic field and proton pressure oscillations was confirmed using the theory proposed by Takahashi et al. (2022). These results led us to conclude that the observed wave was a DCM wave. Coherent proton flux oscillations occurred simultaneously at 20–25 keV, suggesting wave-particle interactions between the compressional wave and ring current ions. As no "bumps" were observed in the proton distribution function, the compressional waves can be attributed to a positive radial gradient of the ion temperature and to the drift resonance instability. The negative radial gradient of the distribution function at 19.2 keV was sufficiently large to cause instability. Assuming the resonance energy to be 20–25 keV, the azimuthal wave number m was found to be in the range of −160 to −120 using the drift resonance theory. These results are consistent with the estimate of m (∼ −130) derived from the finite gyroradius effect. This study comprehensively analyzed the energy and radial gradients of the distribution function and was the first to discover that the free energy for DCM excitation is provided by the drift resonance. We suggest that ring current ions related to the nose structure are the source population of the resonant particles. The nose structure observed in this event had two earthward-extending energy bands at approximately 5 and 15 keV, suggesting that the source population was exposed to temporal variations of the convection field in the magnetosphere. This may lead to the formation of an unstable spatial distribution; however, multipoint observations of the proton distribution function are required to validate the excitation scenario. Future multi-spacecraft missions targeting mesoscale physics are crucial for understanding the role of energetic ions in ULF wave excitation. Figure 1. (a–c) Magnetic field oscillations in the radial, azimuthal, and parallel components observed by the Arase satellite. For the parallel component, the high-pass filtered (>1.67 mHz) magnetic field is shown. (d–f) Wavelet power spectra of the radial, azimuthal, and parallel magnetic fields. Gray lines show the eigenfrequencies of poloidal and toroidal standing Alfvén modes up to the seventh harmonic. The eigenfrequencies were calculated using the Tsyganenko (1989) model and the MHD wave equation developed by Singer et al. (1981). A power-law distribution of the plasma density and proton plasma is assumed in the calculation. (g, h) Proton omni-directional differential number fluxes measured by the MEP-i and LEP-i mass analyzers, respectively. Figure 3.
Energy-dependent Morlet wavelet spectra of (a) the power of the residual flux oscillations and (b) the coherence between the residual flux and B∥, averaged over 4–12 mHz and 03:30–04:30 UT on 19 November 2018. The red and blue lines show MEP-i and LEP-i fluxes, respectively. (c) Resonance energy calculated from the drift-bounce resonance theory as a function of m. The gray region indicates resonance energies of 20–25 keV. Red and blue squares denote the values estimated from MEP-i and LEP-i data, respectively. The horizontal bar on each square shows the standard deviation. (d) Band-pass filtered (4–12 mHz) B∥. 1-min averaged gradients (e) ∂F/∂W, (f) ∂F/∂L, and (g) dF/dW of the proton distribution function at W_res = 19.2 keV and M_res = 0.20 keV/nT. The gray region corresponds to dF/dW > 0. Figure 4. DCM frequency calculation. (a) The wavelet amplitude function (WAF; Foster, 1996) of B∥ is color-coded. White dots are wave frequencies extracted from the spectrum. Black dots denote the DCM eigenfrequency calculated using the first approximation (radial gradients) and the time-varying m obtained using the finite gyroradius effect. Magenta dots denote the DCM eigenfrequency calculated for the time-varying m and zero constant gradients. (b) Hot proton (>50 eV) perpendicular temperature gradient. (c) Hot proton density (red) and its gradient (black).
5,089
2024-04-17T00:00:00.000
[ "Physics", "Environmental Science" ]
Turbulence Models Commonly Used in CFD

Here we provide an overview of some of the turbulence models most commonly used in current CFD modeling. We compare the governing equations, typical applications, and results of the models. Finally, we provide our own recommendations, based on more than two decades of collaborative research.

Introduction

The calculation of turbulent flows is one of the most challenging problems in all of science and mathematics. Exact solutions of turbulence have bedeviled researchers for many decades, and it is generally appreciated that there is no closed-form solution for any fluid flow problem except the simplest laminar situations. Despite this fact, there are ways to complete calculations with sufficient accuracy that engineering and design decisions can be made. The accuracy of turbulent calculations has gradually improved with more powerful computational resources and with improvements to numerical modeling. Here we discuss the most commonly used methods to simulate turbulent flow and the strengths and weaknesses of each approach. The authors believe that particular methods are more or less appropriate for a particular situation, depending on the characteristics of the system, the computational resources available, and the accuracy requirements. In this chapter, we pay particular attention to turbulence models that are most commonly used by scientists and researchers; we also provide guidance to researchers who are pondering different turbulence-modeling approaches.

Turbulence and CFD

The first problems handled by CFD were relatively simple, two-dimensional, incompressible, steady-state situations that were often limited to laminar flows. To our best knowledge, the first three-dimensional CFD simulation was not completed until 1967 [1]. Around the same time, the very first climate models were being constructed for modeling the circulation of fluids around the globe. Shortly thereafter, progress became much more rapid as both computational power and modeling approaches advanced. A key development was the incorporation of turbulence modeling into CFD solutions. The first turbulence models accounted for turbulence effects through a concept termed the "eddy viscosity". Essentially, the eddy viscosity (or turbulent viscosity) reflects an apparent increase in viscosity caused by small-scale chaotic motions in a fluid. The simulations do not attempt to actually capture the small-scale turbulent motions; rather, they approximate their effect with an increase in the fluid viscosity. As we will discuss, the concept of turbulent viscosity plays a central role in Reynolds-Averaged Navier-Stokes (RANS) models. As we will also show, other approaches do not rely extensively on the turbulent-viscosity concept.

RANS models

The first turbulent-viscosity ("eddy viscosity") models were developed in the 1960s and are classified as algebraic [2,3], one-equation [4], or two-equation [5][6][7]. The basis for two-equation models is the relationship between the turbulent viscosity and the local values of the turbulent kinetic energy k and the turbulent dissipation ε. Since this approach soon became the dominant method (even today), it is worthwhile to discuss it in some detail. In essence, this group of turbulence models neglects small-scale and rapid turbulent motions and uses an average flow field (timewise average values of the velocities and pressure) to estimate the effects of turbulence; a minimal numerical sketch of this decomposition is given below.
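To make the averaging idea concrete, here is a minimal sketch of the Reynolds decomposition applied to a synthetic velocity signal: the time-mean field is separated from the fluctuations, and the turbulent kinetic energy k is formed from the fluctuation variances. The signal, the dissipation value, and the fluid properties are made-up placeholders; only the decomposition itself is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic velocity components: a steady mean plus random fluctuations
u = 10.0 + rng.normal(0.0, 1.2, n)   # streamwise [m/s]
v = rng.normal(0.0, 0.8, n)          # wall-normal [m/s]
w = rng.normal(0.0, 0.8, n)          # spanwise [m/s]

# Reynolds decomposition: u = U + u', with U the time average
up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()

# Turbulent kinetic energy per unit mass: k = (u'^2 + v'^2 + w'^2)/2
k = 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))

# Standard k-eps eddy viscosity: mu_t = rho * C_mu * k^2 / eps
rho, C_mu = 1.2, 0.09                # air density, standard constant
eps = 2.0                            # dissipation [m^2/s^3], assumed here
mu_t = rho * C_mu * k**2 / eps
print(f"k = {k:.3f} m^2/s^2, mu_t = {mu_t:.4f} Pa s")
```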
k-ε models

The first major effort to simulate turbulence in the context of CFD was the so-called k-ε model [5,6]. This approach utilizes the fluctuating components u', v', and w' of the turbulent velocity in the three coordinate directions to obtain a turbulent kinetic energy (overbars denote time averages):

k = (1/2)( \overline{u'²} + \overline{v'²} + \overline{w'²} )

That is, k is the additional turbulent energy that results from the time-fluctuating turbulent motions. Accompanying the turbulent kinetic energy is a turbulent dissipation ε, which for flows in pipes with diameter D can be estimated as [7,8]

ε ≈ C_μ^(3/4) k^(3/2) / (0.07 D)

The connection between turbulent kinetic energy and turbulent dissipation is provided following the equations of motion. In essence, the governing equations of motion are conservation of mass, which under steady conditions is

∂(ρ ū_i)/∂x_i = 0

conservation of momentum, written in standard eddy-viscosity form as

∂(ρ ū_i ū_j)/∂x_j = −∂p̄/∂x_i + ∂/∂x_j [ (μ + μ_t)(∂ū_i/∂x_j + ∂ū_j/∂x_i) ]

and the closure equations for turbulence:

∂(ρ k ū_j)/∂x_j = ∂/∂x_j [ (μ + μ_t/σ_k) ∂k/∂x_j ] + P_k + P_b − ρε
∂(ρ ε ū_j)/∂x_j = ∂/∂x_j [ (μ + μ_t/σ_ε) ∂ε/∂x_j ] + C_1ε (ε/k)(P_k + C_3ε P_b) − C_2ε ρ ε²/k

The turbulent viscosity is calculated from

μ_t = ρ C_μ k²/ε

Here P_k is the production of turbulent kinetic energy from the shear strain rate and P_b is the production of turbulent kinetic energy from buoyancy effects. The production of turbulent kinetic energy is obtained from the time-averaged velocity field as

P_k = μ_t (∂ū_i/∂x_j + ∂ū_j/∂x_i) ∂ū_i/∂x_j

The σ terms are the corresponding Prandtl numbers for the transported variables. The values of the constants and turbulent Prandtl numbers are specific to a particular k-ε model.

The k-ε approach is likely the most widely used turbulence model, even today. It is generally sufficient for flows that are wall bounded, with limited adverse pressure gradients or separation. Traditionally, the near-wall elements are not used to capture the steep velocity and temperature gradients near the wall; rather, wall functions are employed to interpolate to the wall. Of course, the accuracy of this approach depends on the suitability of a particular wall function to a problem. For example, wall functions often fail when the flow experiences adverse pressure gradients and/or separation. On the other hand, when small elements are deployed near the wall and/or when damping equations are used to limit fluid motion in the boundary layer, integration can be performed up to the wall. In our experience, if integration is to be performed up to the wall (and wall-function interpolation is avoided), the near-wall element should have a size of y⁺ ≈ 1 for models that resolve the boundary layer. This guidance does not apply to models that use the law of the wall to interpolate to the wall.

A popular modification of the traditional k-ε model is the RNG (Renormalization Group) model. It was developed by [9] in an effort to handle small-scale flow phenomena. Multiple-scale motions are accounted for by modifying the production term of the turbulent dissipation equation. In our experience, it has somewhat better performance than the standard k-ε model, particularly for rotating flows. The differences between the RNG and standard models lie in the relationship between the turbulent kinetic energy, the turbulent dissipation, and the turbulent viscosity. With the RNG approach, the turbulent viscosity retains the form μ_t = ρ C_μ k²/ε (with a slightly different value of C_μ), while the turbulent dissipation transport equation acquires an additional strain-dependent contribution: the coefficient C_2ε is effectively replaced by

C_2ε + C_μ η³ (1 − η/η₀)/(1 + βη³),  with η = Sk/ε

where S is the magnitude of the strain rate and β and η₀ are model constants.

k-ω models

While the k-ε model has experienced success in computational modeling, it has deficiencies in some situations. In particular, the k-ε model performs suitably away from walls, in the main flow. However, it has issues in the boundary-layer zone, particularly at low Reynolds numbers.
Here, Reynolds numbers refer to local Reynolds numbers that decrease as one moves closer to the wall and the no-slip condition exerts its influence (rather than to the Reynolds number based on macroscopic dimensions such as pipe diameter or plate length). A significant development in CFD was the k-ω model, which replaced the transport equation for ε with one for a specific rate of turbulence dissipation, ω [10]. The new equations are

∂(ρ k ū_j)/∂x_j = P_k − β* ρ k ω + ∂/∂x_j [ (μ + σ_k μ_t) ∂k/∂x_j ]
∂(ρ ω ū_j)/∂x_j = α (ω/k) P_k − β ρ ω² + ∂/∂x_j [ (μ + σ_ω μ_t) ∂ω/∂x_j ]

with a turbulent viscosity calculated as

μ_t = ρ k/ω

Shear stress transport family of models

Recognizing that the k-ε and k-ω models each have strengths and weaknesses, a new model was proposed that uses both approaches in a way that harnesses their strengths [11]. This new approach, termed the Shear Stress Transport (SST) model, smoothly transitions from the k-ω model near the wall to the k-ε model in the main flow. With the SST model, the governing equation for turbulent dissipation is recast into an ω form. The governing equations are

∂(ρ k ū_j)/∂x_j = P_k − β* ρ k ω + ∂/∂x_j [ (μ + σ_k μ_t) ∂k/∂x_j ]   (16)
∂(ρ ω ū_j)/∂x_j = α (ω/k) P_k − β ρ ω² + ∂/∂x_j [ (μ + σ_ω μ_t) ∂ω/∂x_j ] + 2(1 − F₁) ρ σ_ω2 (1/ω)(∂k/∂x_j)(∂ω/∂x_j)   (17)

and the turbulent viscosity is found from

μ_t = ρ a₁ k / max(a₁ ω, S F₂)   (18)

As before, P_k is the production of turbulent kinetic energy and ω reflects the specific rate of turbulent destruction. As noted earlier, the σ terms are turbulent Prandtl numbers associated with their subscripts. The function F₁ is the aforementioned blending function that transfers the k-ω model near the wall into the k-ε model away from the wall. The S term is the magnitude of the shear strain rate. While, ostensibly, the SST model is intended for fully turbulent flows, it has shown the ability to capture both laminar and turbulent flow regimes [12]. However, in the next section we discuss a set of modifications to the SST model that are specifically designed to handle laminar/transitional/turbulent flow regimes and that are recommended.

SST transitional models

The turbulence models already discussed were largely developed based on correlations of canonical, fully turbulent flow situations (such as flows over flat plates, airfoils, Falkner-Skan flows, and flows in tubes and ducts). Of course, researchers and engineers often encounter situations where the flow is partially turbulent, or where the flow changes in time so that it is laminar for part of the period and turbulent for the rest. Consider, for example, pulsatile flow, wherein the fluid velocity changes sufficiently that different flow regimes occur during different parts of the flow period. There are a number of approaches to handle these situations, but with respect to RANS models the approaches generally utilize the concept of turbulent intermittency. Intermittency was originally defined as the percentage of time that a flow was turbulent. More recently, however, turbulent intermittency has been used as a multiplier on the rate of turbulent kinetic energy production [13][14][15]. Here we will set forth two current transitional models, both based on the SST turbulence approach. The first method involves two extra transport equations. One is for the intermittency γ, which is a multiplier to the turbulent production. The transport equation for turbulent intermittency is

∂(ργ)/∂t + ∂(ρ ū_j γ)/∂x_j = P_γ − E_γ + ∂/∂x_j [ (μ + μ_t/σ_γ) ∂γ/∂x_j ]   (19)

The P_γ and E_γ terms are, respectively, the production and dissipation of intermittency. An additional transport equation is required for the transitional momentum-thickness Reynolds number R̃e_θt. This added equation is

∂(ρ R̃e_θt)/∂t + ∂(ρ ū_j R̃e_θt)/∂x_j = P_θt + ∂/∂x_j [ σ_θt (μ + μ_t) ∂R̃e_θt/∂x_j ]   (20)

Together, the solutions to Eqs. (19) and (20) determine the local state of turbulence. They result in an intermittency that takes values between 0 and 1. For fully laminar flow, γ = 0 and the model reverts to a laminar solver.
When γ = 1, the flow is fully turbulent. The turbulent production is then multiplied by the local value of the intermittency γ. Interested readers are invited to review the development of this model, including its implementation for problems that involve heat transfer [16][17][18][19][20][21][22]. Recently, the above two-equation model was modified to reduce the two transitional transport equations to a single equation [23], and that approach was later adapted by [24] to accurately solve for situations in confined pipe/duct/tube flows. Essentially, Eqs. (19) and (20) are replaced by a single intermittency equation:

∂(ργ)/∂t + ∂(ρ ū_j γ)/∂x_j = P_γ − E_γ + ∂/∂x_j [ (μ + μ_t/σ_γ) ∂γ/∂x_j ]   (21)

As with the two-equation approach, the intermittency factor γ takes on values between 0 and 1. Also as before, the P_γ and E_γ terms represent, respectively, the production and destruction of the local value of intermittency. For these intermittency models, the onset of turbulence is calculated by a series of correlation functions. In particular, a local value of the critical momentum-thickness Reynolds number, determined from a correlation of the form

Re_θc = C_TU1 + C_TU2 exp(−C_TU3 Tu_L F_PG)   (22)

is used to identify the location of laminar-turbulent transition; it is based on the local value of the momentum-layer thickness. The C terms are correlation constants based on comparison of numerically simulated results with experimentation. An important term in Eq. (22) is the local value of the mid-boundary-layer turbulence intensity (Tu_L). This value is attained at the midpoint of the boundary layer as the output of an empirical formulation based on experimentation. The local production of intermittency is calculated from

P_γ = F_length ρ S γ (1 − γ) F_onset   (23)

As we have already noted, the term S is the shear strain rate. A new term that appears in Eq. (23) is the so-called transition onset term F_onset, which is built from the ratio of a local strain-rate (vorticity) Reynolds number to the critical Reynolds number of Eq. (22) and switches from 0 to 1 as transition is approached. Similarly, the local rate of destruction of intermittency is found from an expression of the form

E_γ = c_a2 ρ Ω γ F_turb (c_e2 γ − 1)

where Ω is the magnitude of the vorticity. We have already noted that these transitional turbulence models were initially developed for external boundary-layer flows (flat-plate boundary layers, airfoil flows, Falkner-Skan flows, etc.). Insofar as we have adopted them for internal flows, some modification was required. We recommend, at least for flows through pipes, tubes, and ducts, that the initial constants determined in [23] be replaced by the alternative values from [24]. While we recommend the above approach for solving transitional flow problems, this area of research is also heavily studied by other researchers who have provided alternative approaches to handle such flows. We cite them here for readers who are interested in those alternative but complementary viewpoints [25][26][27][28][29][30][31][32][33].

Reynolds-stress models

Reynolds stress models (RSM) are quite different from the eddy-viscosity RANS approaches just discussed. For RSMs, transport equations are solved for all components of the Reynolds stress tensor, and an eddy viscosity is not utilized. These models are expected to be superior for situations with non-isotropic turbulence and flows with significant components of transport in three directions. There are a number of RSM versions, some of which are discussed here. The so-called SSG-RSM model utilizes a momentum transport equation of the form

∂(ρ ū_i)/∂t + ∂(ρ ū_i ū_j)/∂x_j = −∂p'/∂x_i + ∂/∂x_j [ μ (∂ū_i/∂x_j + ∂ū_j/∂x_i) − ρ \overline{u_i' u_j'} ]

in which the term ρ \overline{u_i' u_j'} represents the Reynolds stresses. There is a pseudo-pressure term p' that is calculated from the local static pressure p and the local velocity gradient. The Reynolds stresses are calculated from a collection of six equations covering all directional possibilities.
The transport equations for the Reynolds stresses have the schematic form

∂(ρ \overline{u_i' u_j'})/∂t + ∂(ρ ū_k \overline{u_i' u_j'})/∂x_k = P_ij + Φ_ij + D_ij − (2/3) δ_ij ρε   (32)

where P_ij is the stress production, Φ_ij the pressure-strain correlation, and D_ij the diffusion term. We note that a turbulence dissipation term ε appears in Eq. (32), and it has to be solved from its own transport equation. We refer readers to [34,35] for more details. A modification of the above is realized in the Baseline RSM (BSL RSM) model. It differs from the SSG RSM in that the transport equation for ε is replaced by a transport equation for ω:

∂(ρω)/∂t + ∂(ρ ū_k ω)/∂x_k = α (ω/k) P_k − β ρ ω² + ∂/∂x_k [ (μ + σ_ω μ_t) ∂ω/∂x_k ] + 2(1 − F₁) ρ σ_ω2 (1/ω)(∂k/∂x_k)(∂ω/∂x_k)   (33)

This approach blends between two different model forms that are used near the wall and away from the wall, respectively. The blending is accomplished using a weighting function, similar to the SST:

ϕ = F₁ ϕ₁ + (1 − F₁) ϕ₂

where the symbols ϕ correspond to the model constants of any particular transported variable in the near-wall (ϕ₁) and far-wall (ϕ₂) regions; the various constants change their values between the two regions, and the two sets of values are tabulated in [34]. A minimal numerical sketch of this blending pattern is given below.

The last RSM version to be discussed is the Explicit Algebraic RSM (EARSM). This approach includes a non-linear relationship between the local values of the Reynolds stresses and the vorticity tensors. It is focused on flows with secondary motions and curvature [36]. The local values of the Reynolds stresses are calculated using an anisotropy tensor that is based on algebraic equations [36]. This is contrasted with RSM approaches that solve for the Reynolds stress components using differential transport equations. The approach uses higher-order terms for many of the flow phenomena. It was designed to handle secondary-flow situations and flows with extensive curvature and rotation. The governing equations are complex and lengthy, and for brevity's sake we refer interested readers to [36].

Scale adaptive models

So far, we have presented RANS-based models that perform conservation calculations at each grid element. If turbulence is present, its impact appears via the eddy viscosity. Traditionally, users either specify a priori that the flow is laminar (so no eddy viscosity is included) or that the flow is turbulent (in which case an eddy viscosity is determined and applied throughout the flow field). The recent development of transitional modeling frees the researcher from having to predict the level of turbulence a priori. With transitional modeling, the numerical code automatically reverts to laminar flow in areas with low Reynolds numbers and automatically becomes a turbulent model in areas where the Reynolds number is larger. Regardless of the method that is selected, the coupled equations are solved for each computational element and the turbulent viscosity is applied to the fluid in the element under consideration. In contrast to this approach, there is another major group of computational techniques termed "scale-adaptive models". These are models that resolve part of the turbulent motions but model flow features that are smaller than the element size. Since there is less modeling and more actual resolution of fluid motion, one might expect the scale-adaptive models to be more accurate than RANS, and there are cases where that is so (particularly for free shear flows, swirling flows, boundary-layer separation, and jets). However, the RANS approach can be more accurate than scale-adaptive methods in some situations, including wall-bounded flows. Also, RANS is less computationally expensive because the eddy viscosity provides the link between the time-averaged flow field and the local turbulence with a very simple calculation. In fact, even for problems of modest complexity, scale-adaptive models are more time consuming.
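Returning to the F₁-type blending used by both the SST and BSL RSM models, the pattern is simple enough to sketch directly. The constant pair and the shape of F₁ below are assumed placeholders (the real F₁ is a tanh function of wall distance, k, and ω, and the real constant sets are tabulated in the cited references); only the blending mechanics are illustrated.

```python
import numpy as np

def blend(phi_near_wall, phi_far_field, F1):
    """SST/BSL-style blending of model constants:
    phi = F1 * phi_1 + (1 - F1) * phi_2, with F1 -> 1 at the wall."""
    return F1 * phi_near_wall + (1.0 - F1) * phi_far_field

# Illustrative (assumed) constant pair for one transported variable
sigma_1, sigma_2 = 0.5, 1.0

# Mimic F1's qualitative behavior: ~1 at the wall, decaying smoothly
# with normalized wall distance y (not the actual SST definition).
y = np.linspace(0.0, 1.0, 6)
F1 = np.exp(-(y / 0.2) ** 4)

for yi, f in zip(y, F1):
    print(f"y = {yi:.2f}  F1 = {f:.3f}  sigma = {blend(sigma_1, sigma_2, f):.3f}")
```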
There are a number of established and new scale-adaptive models that are used in CFD simulations. We will not be exhaustive in this section by covering all the existing models; rather, we will focus on some of the models that we think are most useful and representative. Interested readers are directed to an excellent comprehensive discussion provided by [34,37].

Scale-adaptive SST models

One of the primary decisions that modelers face is whether to perform calculations in steady or unsteady mode. Typically, with numerical simulation, unsteadiness is driven either by timewise changes in the boundary conditions or by unsteady phenomena that occur in an otherwise steady scenario. A classic example is the Kármán vortex street that occurs in the wake region of a blunt object. Figure 1, shown below, illustrates this phenomenon. Researchers have often conjectured that if a RANS calculation is performed with sufficiently small elements and time steps, the unsteady features of the flow will naturally be resolved; in fact, this is not true. It is important to note that while steady-state calculations using RANS models will often provide very accurate information about averaged quantities (like drag), these simulations will miss details in the rapidly fluctuating downstream wake region. This issue was explored in depth in [35], where time-averaged results of drag obtained from unsteady RANS simulations were compared with calculations from steady RANS calculations (using the SST transitional model that was previously described). It was found that the steady-state calculations were able to accurately capture drag forces but were only partially adept at capturing vortex movement in the downstream wake region.

With this discussion as background, it is now time to turn attention to the governing equations of scale-adaptive RANS models. The model discussed here uses the SST approach for the underlying governing equations (in the literature it is often termed the SAS-SST model). The scale-adaptive approach modifies the ω transport equation based on [37]. In particular, the ω equation acquires an additional source term that incorporates the ratio of the turbulent length scale L to the von Kármán length scale

L_vK = κ |∂ū/∂y| / |∂²ū/∂y²|

which adapts the model to the locally resolved flow structures. Values of the various constants can be found in [34,37] and are not repeated here for brevity. The term involving L_vK, the von Kármán length scale, is the novel modification. Figures 2 and 3 are provided to show a comparison of the downstream wake regions for an unsteady RANS calculation using the SST model (Figure 2) and a simulation using the scale-adaptive SST modification (Figure 3). The results are obtained from [34]. It can be seen that the standard SST model does capture a periodic release of eddies from the downstream side of a circular cylinder (shown in blue). In both images, the flow is left to right. The color legend is keyed to the local values of the turbulent length scale. Clearly, the scale-adaptive approach provides a much wider range of turbulent eddy sizes.

LES WALE model

Another common approach to dealing with these types of problems is based on the so-called large eddy simulation (LES). To the best knowledge of the authors, the first articulation of an LES model was [38], and such models have been updated in the intervening decades. Here we focus on one popular and current LES method, the Wall-Adaptive Local Eddy (WALE) LES model. The general process of LES modeling is the same regardless of which variant is used: LES models involve the filtering of eddies that are smaller than the size of the computational elements.
The algorithm incorporates an eddy viscosity for the flow scales that are not resolved. For this model, the filtered (tensor-form) Navier-Stokes equations are

∂(ρ ū_i)/∂t + ∂(ρ ū_i ū_j)/∂x_j = −∂p̄/∂x_i + ∂/∂x_j [ μ (∂ū_i/∂x_j + ∂ū_j/∂x_i) − τ_ij ]

where τ_ij is the small-scale (subgrid) stress, defined as

τ_ij = ρ ( \overline{u_i u_j} − ū_i ū_j )

and S̄_ij denotes the strain-rate tensor of the large-scale motions. The small-scale eddy viscosity μ_sgs is found from

μ_sgs = ρ (C_w Δ)² (S^d_ij S^d_ij)^(3/2) / [ (S̄_ij S̄_ij)^(5/2) + (S^d_ij S^d_ij)^(5/4) ]

The term C_w is a constant and the symbol Δ = (element volume)^(1/3). The tensor S^d_ij is calculated from the strain-rate and vorticity tensors as

S^d_ij = S̄_ik S̄_kj + Ω̄_ik Ω̄_kj − (1/3) δ_ij ( S̄_mn S̄_mn − Ω̄_mn Ω̄_mn )

and the vorticity tensor Ω̄_ij is defined as

Ω̄_ij = (1/2)( ∂ū_i/∂x_j − ∂ū_j/∂x_i )

Results from various CFD model calculations

Now that the main CFD models have been presented, we turn attention to comparisons of the results from the different models. Comparisons are available in [7,8,34,35,37,[39][40][41][42][43][44][45][46]], and a very small subset of those comparisons is provided here. We have selected the classic problem of flow over a square blockage. This canonical problem has features that elucidate the strengths and weaknesses of the particular models. For instance, some important parameters relate to the time-averaged interactions between the fluid and the solid structure (drag force). Also, there are significant unsteady phenomena, particularly in the wake region, that provide a challenging test for the models. In addition, this is a problem with extensive experimental work that serves as the basis for evaluating the results. To begin, we refer to Figure 4, which shows the solution domain (similar to [35]). A number of computational meshes were used, and an example mesh is shown in Figure 5. The images are provided in a series of increasing magnification: image (a) is the most global view, part (b) is focused on the square obstruction, and image (c) reveals details of the elements in the near-wall region, near a corner of the cylinder. With this mesh, we present results for a large number of computational methods. We note here that, in reality, appropriate meshes may differ depending on the turbulence model that is used. For instance, a mesh that is suitable for a k-ω simulation may not be appropriate for SST, and vice versa. We recommend that mesh-independence studies be carried out for each turbulence model that is employed. The results, set forth in Figures 6 and 7, provide the drag coefficient on the square cylinder (large aspect ratio). Each model has its own color. Literature-based values from experiments are also included (shown as gray x symbols). In the above calculations, which were first set forth in [35], the SST and transitional-SST models were the most accurate (when compared with existing experiments) for calculating the drag coefficient. On the other hand, since these approaches are RANS, they lose some local detail and flow structure. For example, in Figure 8, provided below, we show velocity vectors overlaid atop a velocity contour image. It is evident from the upper part of the figure that there are the expected stagnation locations at the leading edge and in the wake region. There are also slow-moving recirculation zones above and below the cylinder that result from flow separation at the leading corners. However, the lower images show a close-up of the flow patterns at the leading edge. It is seen that with the SST RANS model, there are no small-scale eddies at this location. But for the LES model, there are two LES results, obtained at two different instances in time. These sequential images show the time-varying flow field.
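To make the WALE construction above concrete, here is a minimal single-cell sketch: given a resolved velocity-gradient tensor, it forms S̄_ij and S^d_ij and evaluates μ_sgs. The density, filter width, and C_w value are assumed placeholders, not values from this chapter.

```python
import numpy as np

def wale_viscosity(grad_u, rho=1.2, delta=1e-3, c_w=0.5):
    """WALE subgrid eddy viscosity for one cell.

    grad_u : 3x3 array with g[i, j] = du_i/dx_j (resolved gradient)
    delta  : filter width, (cell volume)**(1/3)
    c_w    : WALE constant (assumed value)
    """
    g = np.asarray(grad_u, dtype=float)
    S = 0.5 * (g + g.T)                        # resolved strain-rate tensor
    g2 = g @ g
    # Traceless symmetric part of the squared gradient tensor (S_ij^d)
    Sd = 0.5 * (g2 + g2.T) - np.eye(3) * np.trace(g2) / 3.0

    SS = np.tensordot(S, S)                    # S_ij S_ij
    SdSd = np.tensordot(Sd, Sd)                # Sd_ij Sd_ij
    denom = SS**2.5 + SdSd**1.25 + 1e-30       # guard against 0/0
    return rho * (c_w * delta) ** 2 * SdSd**1.5 / denom

# Pure shear (du/dy only): the WALE operator vanishes by construction,
# which is why the model behaves well near walls without damping functions.
shear = np.zeros((3, 3)); shear[0, 1] = 100.0
print(wale_viscosity(shear))                   # -> 0.0

# Plane strain: a nonzero subgrid viscosity is produced.
strain = np.diag([100.0, -100.0, 0.0])
print(f"{wale_viscosity(strain):.2e} Pa s")
```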
While a RANS model like the SST is excellent for full-body drag, it does not capture some small flow structures. Researchers thus need to consider their computational needs before selecting a CFD model. The last result to be presented is shown in Figure 9, where instantaneous results are displayed for the SST model. There, the unsteady nature of the flow in the downstream wake region is clearly evident. If the simulation of Figure 9 were carried out with a steady-state SST solver, there would still be timewise changes in the flow field, but they would have a different frequency than the unsteady calculations. In order to elucidate the iteration-by-iteration fluctuations in drag that result from a steady-state solver (compared to an unsteady simulation), Figure 10 was prepared. This figure shows the timewise (iteration-wise) fluctuations in drag force on the square cylinder, first with a steady-state SST solution and then with a truly unsteady solution. The steady-state results are calculated using a "false transient" approach, wherein the algorithm steps forward to new iterations using a non-physical time. The figure has two call-outs that provide focus on different parts of the graph. The important conclusion is that the average value of the unsteady fluctuations of drag obtained by the steady-state algorithm is an excellent match to that attained from the unsteady calculations. On the other hand, the period is very different between the two.

Concluding remarks

This chapter has presented a brief overview of a large number of turbulence models. While there is no "correct" turbulence model, there are models that are better suited to particular situations. For flows that are truly laminar, with no regions of intermittency or turbulence, a laminar solver can be used. However, if there is a potential for any turbulent flow, caution is warranted. For flows that are fully turbulent, particularly wall-bounded flows, the SST model is recommended. In our experience, it is more able to capture flow phenomena than other RANS models. It also has excellent performance for a wide range of thermal-transport situations. If regions of mixed flow (laminar/transitional/turbulent) are expected, or if the flows might change in time (pulsatile flows, for example), the SST transitional model is recommended. This newer approach is rapidly becoming more common in the CFD community and could replace fully turbulent models in the future. For situations where small-scale and short-lived flow structures must be captured, we recommend the scale-adaptive SST model or the LES model. They are more computationally expensive, but the scale adaptation enables small features to be calculated. We also direct readers to two further excellent resources [47,48] for more in-depth discussion.
6,024.2
2021-08-27T00:00:00.000
[ "Engineering", "Physics" ]
Trace Element Analysis of Aerosols in the Atmosphere of Lahore Using Radioanalytical Techniques

The perturbation of atmospheric processes by anthropogenic activities has been of great concern in recent years. The deposition of trace and major elements from the atmosphere to the ground is an important factor for animal and plant health, and it is of major consideration in studies on the cycling of elements that may function in the atmosphere as nutrients or potentially toxic pollutants. When assessing the input of materials into natural waters and land, the sources and composition of the atmosphere need to be determined. Geological and anthropogenic contributions to air pollution were monitored by analyzing aerosol particulates present in the atmosphere of Lahore. Various experiments were performed to study total suspended particulate matter (TSPs) using gravimetric techniques. The average value of TSPs was found to be 450 µg/m³ on working days and 240 µg/m³ on holidays. The size distribution and trace elemental composition of the particulates, and their wet removal through precipitation in the atmosphere of Lahore, were studied using scanning electron microscopy (SEM) and instrumental neutron activation analysis (INAA), respectively. Eighteen elements were analyzed. The presence of Yb, Cs, Sc, Rb, Co, Eu, La, Ba, Zn and Hf in the aerosol particulates was attributed to the geological nature of the land. The presence of Cr, Fe, Ce, Pb and Cd could be linked to anthropogenic activities. Their amounts were two times higher than the limits recommended by the U.S. Environmental Protection Agency for the urban environment, mostly during working days and at various day and night hours.

Introduction

Aerosol particulates in the atmosphere signify geological, environmental and anthropogenic activities. Thus, it is important to have a complete understanding of these three aspects for the sites studied within the megacity of Lahore in Pakistan (Colbeck et al., 2010). Lahore is the second largest city of Pakistan. It is 1305 km to the north-east of Karachi, at 74.3°E and 31.5°N on the global scale, at a height of 213 m above sea level. Its 2251 square kilometer metropolitan area has a population of nearly 10 million, with a population density of 2000 persons per square mile and an expansion rate of 4.5% per annum. The effect of atmospheric aerosols on human health strongly depends on their capability to penetrate the respiratory tract. Generally, the smaller particles penetrate the respiratory system more deeply. Coarse particles may deposit in the pharynx and larynx, causing dryness of the nose and throat, but have no effect on mucociliary clearance. Environmental protection agency emphasis is aimed at developing air quality standards based on the specific size fraction of particles that can reach the trachea (J. J. Chow et al., 1973; M. Kanakidou et al., 200). An upper cut-off limit has been proposed for inhalable particles of 10-15 µm diameter. The choice of the 15 µm cut-off point is based on the worst-case situation of mouth breathers, because in nose breathers particles larger than 10 µm are either rejected by the nose or restricted to the nasopharyngeal region.
Sampling Parameters and Strategy

Sampling was carried out on sunny days, and the sampling interval was kept at a minimum of 24 h. The days were randomly chosen for the sampling of the aerosol particulates. A high-volume portable dust sampler (Model L30 MK III, Rotheroe and Mitchell Company, U.K.) with a sampling capacity of 40 L/min was used throughout the work. For the determination of wind velocity and direction, a three-cup anemometer (type DEM 6, Tientsin Meteorological and Marine Instruments, China) was used. Before sampling, Whatman-41 filter papers were washed in de-ionized water, while Whatman microfiber glass filter papers were soaked in dilute HCl for half an hour, washed with an excess of de-ionized water and fired at 500 °C. Prior to their loading into the sampler, they were equilibrated to constant weight at 55% relative humidity and 25 °C to eliminate effects due to the hygroscopic nature of the filter papers. The filters, 6 cm in diameter (28 cm² exposed area), were loaded into the sampler equipped with a mass flow indicator and a sampling time counter. Duplicate, and sometimes triplicate, samplings were carried out. Five sampling stations were installed at the Institute of Chemistry, University of the Punjab (old and new campus), Ichra, Gulberg and Shahdara, under the sampling conditions described above. Sampling was also carried out during working days, holidays, and rainy days, after rainfall, and at various day and night hours. Each sampling was carried out for a duration of 24-144 h, during which 50-350 m³ of air was sampled. Sampling parameters, e.g., the weight of the blank filter paper, the sampling rate, climate, temperature, humidity, total sampling hours and total volume of air sampled, were also recorded during sampling (a minimal sketch of the gravimetric calculation from these quantities is given below). During the scavenging process, the sampler was positioned so that it was protected from direct exposure to rain while wind could freely reach it from all directions.
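From the recorded sampling parameters, the particulate concentration follows from a simple gravimetric calculation: the net mass gain of the filter divided by the total volume of air drawn through it. The sketch below uses made-up filter weights together with the stated 40 L/min flow rate; it is illustrative only.

```python
def tsp_concentration(m_loaded_mg, m_blank_mg, flow_l_per_min, hours):
    """Total suspended particulates [ug/m^3] from filter weights.

    m_loaded_mg    : filter weight after sampling [mg]
    m_blank_mg     : conditioned blank filter weight [mg]
    flow_l_per_min : sampler flow rate [L/min]
    hours          : total sampling duration [h]
    """
    net_ug = (m_loaded_mg - m_blank_mg) * 1e3        # mg -> ug
    volume_m3 = flow_l_per_min * 60.0 * hours / 1e3  # L -> m^3
    return net_ug / volume_m3

# Example: 24 h at 40 L/min samples 57.6 m^3 of air; a 26 mg gain on the
# filter then corresponds to ~450 ug/m^3, the stated working-day average.
print(f"{tsp_concentration(326.0, 300.0, 40.0, 24.0):.0f} ug/m^3")
```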
Determination of the TSPs, size distribution and morphological structure of the aerosol particulates

Total suspended particulate matter was measured using the standard gravimetric technique. A scanning electron microscope, model JSM-35F, was used for the evaluation of the size distribution and morphological structures of the aerosol particulates trapped on the surface of the filter media. The amount of TSPs and the size distribution pattern of the aerosols in the atmosphere of Lahore may be seen in Tables 1 and 2.

Activation and Analysis

In order to carry out neutron activation analysis, quartz ampoules of the air-filter samples and blank filters, along with ampoules of appropriate amounts of IAEA and NBS standard reference materials, e.g., Marine Sediment MS (IAEA/SD-D-1/2), Pond Sediment (PS) and Lake Sediment (SL-1 & SL-3), were cold-welded into aluminum capsules and irradiated for different durations in the periphery of the reactor core of the Pakistan Research Reactor (PARR-1) at a thermal neutron flux of 3-7×10¹³ n cm⁻² s⁻¹. Thermal neutron monitors (Au and Co foils) were inserted between the samples and reference materials to monitor fluctuations in the thermal neutron flux gradient; these were found to be insignificant. The irradiated samples and standards, after cooling for 1-14 days, were transferred to pre-weighed polyethylene capsules. The gamma-ray spectra were measured for times ranging from 1 h to 16 h with a coaxial 30 cm³ Ge(Li) detector, with a FWHM of 2.0 keV for the 1332.5 keV peak of 60Co and a peak-to-Compton ratio of 40:1, coupled to a 4K Series 85 Canberra multichannel analyzer. The multichannel analyzer (MCA) was calibrated with standard point sources of 60Co and 137Cs (from M/S Amersham) prior to sample and standard analysis. The amounts of the elements were calculated using the relative method. The precision, reproducibility and accuracy of this method were checked by analyzing the MS and SL-1 IAEA reference materials, as may be seen elsewhere (M. Z. Iqbal et al., 1992). About 11 trace elements were analyzed by the NAA technique; however, Pb and Cd were measured by standard AAS techniques. The details of the AAS techniques for the determination of Pb and Cd may be seen elsewhere (M. Z. Iqbal et al., 1990). The results thus obtained are given in Table 3.

Total suspended particulate matter and its size distribution

The amount of total suspended particulate matter (TSPs) on working days at Lahore was in the range 350-511 µg/m³, with an average of 450 µg/m³; the amount decreased to 160-300 µg/m³, with an average of 240 µg/m³, on non-working days. The amount of TSPs between 9:00 p.m. and 3:00 a.m. at Lahore was 215 µg/m³, which further decreased to 190 µg/m³ between 3:00 a.m. and 9:00 a.m. The amount of TSPs again increased, from 10:00 to 4:00 p.m., to about 470 µg/m³, which indicated high anthropogenic activity in the evening hours. The amount of TSPs at Lahore decreased to 80 µg/m³ during precipitation, and after 24 hours its amount approached 140 µg/m³. It is interesting to note the differences between the 24 h and 12 h sampling systems. Shorter sampling systems usually do not give accurate and reliable results as compared to long-term sampling systems (W. C. Achinger et al., 1968).
The TSPs at Lahore on working days were of the order of 450 µg/m³ on a 24 h sampling basis, whereas at day or night hours (12 h sampling) the amount of TSPs was 590 and 550 µg/m³, respectively. The higher amounts of TSPs at night hours may be explained by the peak traffic density from 5:00 p.m. to 10:00 p.m.; the traffic density decreases tremendously after midnight, as shown by the data given in Table 1. The size distribution of the aerosol particulates in the atmosphere of Lahore is given in Table 2.

Trace elemental composition of the aerosol particulates

The major source of Yb, Cs, Sc, Rb, Co, Eu, La, Ba and Hf in the atmosphere has been reported to be soil-derived aerosols (P. P. Parekh et al., 1987; P. P. Parekh, 1989). The amounts of Yb, Cs and Sc in the atmosphere of Lahore were of the order of 0.92±0.2, 32.7±5.7 and 2.75±0.71 ng/m³, respectively. Although the main source of Sc is soil-derived aerosols, some workers have also linked its presence to unrefined fuel (Y. Hashimoto et al., 1970). The amounts of Rb, Co and Eu in the atmosphere of Lahore were 24.8±5.1 and 2.5±0.27 ng/m³, while those of La and Ba were 10.3±1.15 and 11.1±2.3 ng/m³. Like other chalcophilic elements (Se, S, Cu, Pb and Cd), Zn is recognized as a chalcophilic element. The sources of Zn in the atmosphere are coal-burning plants, sulphide ore smelters, and refuse incineration (P. P. Parekh et al., 1987; P. P. Parekh, 1989). The amount of Zn in the atmosphere of Lahore was 16.1±1.36 ng/m³. The source of Se in the environment has been reported to be coal rather than oil (Y. Hashimoto et al., 1970; S. J. Tuncel et al., 1985; S. J. Tuncel, 1986). The amount of Se in the atmosphere of Lahore was found to be 11.5±3.21 ng/m³. The trace elemental composition of the aerosol particulates in the atmosphere of Lahore is given in Table 3.

Many researchers (P. L. Kalliomaki et al., 1984; P. G. J. Renzel et al., 1984; H. J. Raithel et al., 1988; Hansen et al., 1985; A. Zober et al., 1987; P. L. Kalliomaki et al., 1987; Fujiware et al.) have studied the effects of Cr-containing fumes on the lungs of human beings and on cells, and have pointed out that Cr may cause various diseases, including cancer. Potassium dichromate is used in air-conditioning and cooling towers and has been assigned as the source of Cr in Washington aerosols (U.S. Dept. of Health, Education and Welfare, 1966), but some workers have described steel mills as the main source of Cr, because V, Ni, Fe and Cr alloys are used in the manufacture of various sorts of steel. The amounts of Cr in the atmospheres of Karachi and Nilore (P. P. Parekh et al., 1987; P. P. Parekh, 1989; A. Rasheed et al., 1987) have been reported to be 24-26 and 7.5 ng/m³, respectively, while in the U.S.A. its amount was found to be in the range of 15-33 ng/m³. The amount of Cr in the atmosphere of Lahore was noted as 53±7.96 ng/m³. The amount of Fe in the atmosphere of Lahore was 14.3±2.3 ng/m³.

The sources of Ce in the atmosphere are expected to be almost entirely anthropogenic. Both Ce and S are mostly associated with fossil fuel combustion. The amount of Ce in the atmosphere of Karachi (P. P. Parekh et al., 1987) has been reported to be 6.5 ng/m³, while in the present work we observed its amount in the range of 16.5±2.9 ng/m³ at Lahore. The amount of Sb in the atmosphere of Lahore was 2.57±0.1 ng/m³.
P. P. Parekh et al. (1987) and P. P. Parekh (1989) found that the amount of Pb in the atmosphere of Karachi varied from 93-128 to 274 ng/m³, and stated that automotive exhaust was the most important source of airborne Pb in an urban atmosphere, especially that of Karachi, where leaded gasoline is extensively used. Marshall (D. T. Marshall et al., 1986), in 1985, found lead in aerosol particulates of the Atlanta area in the range of 25.8-1090 ng/m³, with an average of 278 ng/m³, using the proton-induced X-ray emission (PIXE) technique. D. George Thurston et al. (1985) reported a mean Pb level of 326 ng/m³ in metropolitan Boston, mainly due to fine and coarse motor-vehicle-derived aerosols. The amount of Pb in the atmosphere of Lahore was observed to be 549±25 ng/m³. Cadmium comes into the atmosphere due to its presence in vehicle tires (M. Z. Iqbal et al., 1990). The amount of Cd in the atmosphere of Lahore was 20.5±5 ng/m³.

Fig. 1. Morphological structures of the aerosol particulates in the atmosphere of Pakistan.

Table 2. Size Distribution of the Aerosol Particulates in the Atmosphere of Lahore.

Table 3. Trace Elemental Composition of the Aerosol Particulates in the Atmosphere of Lahore.

U.S. Dept. of Health, Education and Welfare, Air Quality Data from National Air Sampling Networks (1964-65), Public Health Division of Air Quality, Cincinnati, Ohio, 1966.
W. C. Achinger and R. T. Shigehara, J. Air Pollut.
3,080
2013-01-01T00:00:00.000
[ "Environmental Science", "Geology" ]
NDF and PSF Analysis in Inverse Source and Scattering Problems for Circumference Geometries: This paper aims at discussing the resolution achievable in the reconstruction of both circumference sources from their radiated far-field and circumference scatterers from their scattered far-field, observed in the 2D scalar case. The investigation is based on an inverse problem approach, requiring the analysis of the spectral decomposition of the pertinent linear operator by the Singular Value Decomposition (SVD). The attention is focused upon the evaluation of the Number of Degrees of Freedom (NDF), connected to the behavior of the singular values, and of the Point Spread Function (PSF), which accounts for the reconstruction of a point-like unknown and depends on both the NDF and the singular functions. A closed-form evaluation of the PSF relevant to the inverse source problem is first provided. In addition, an approximated closed-form evaluation is introduced and compared with the exact one. This is important for the subsequent evaluation of the PSF relevant to the inverse scattering problem, which is based on a similar approximation. In this case, the approximation accuracy of the PSF is verified by numerical simulation at least in its main-lobe region, since that region is the most critical one as far as the resolution discussion is concerned. The main result of the analysis is the space invariance of the PSF when the observation covers the full angle in the far-zone region, showing that the resolution remains unchanged over the entire source/investigation domain for the considered geometries. The paper also poses the problem of identifying the minimum number and the optimal directions of the impinging plane waves in the inverse scattering problem needed to achieve the full NDF; some numerical results about this are presented. Finally, a numerical application of the PSF concept is performed in inverse scattering, and its relevance in the presence of noisy data is outlined.

Introduction

Inverse problems have been widely studied by mathematicians, scientists, and engineers. Broadly speaking, the direct problem can be defined as: given the cause, find the effect; in the inverse problem, given the effect, the cause is to be determined. The inverse problem solution provides fruitful information valuable to many applications, such as inverse seismic methods in geophysics, ultrasonic methods in medical imaging, computed tomography [1], and inverse electromagnetic problems. The inverse electromagnetic problem includes inverse source and inverse scattering problems. The inverse source problem is a linear problem that entails reconstructing a current source using information about its radiated field in the frequency domain and some prior knowledge; for instance, the current distribution of an antenna can be evaluated from its radiation pattern. As a result, it is meaningful in a range of applications, including antenna design, testing, and diagnostics. When a known electromagnetic field illuminates a scatterer, the inverse scattering problem aims at reconstructing its features, such as its material, geometry (shape), and location, based on the sensing of the scattered field data. It is a non-linear problem and can be linearized under suitable approximations. The imaging system and the inversion algorithm determine the achievable resolution in reconstructions.
In particular, the Point Spread Function (PSF) represents the reconstruction of a point-like source/scatterer in the spatial domain, and the resolution can be defined in terms of the PSF of the system. The PSF is a suitable tool for understanding the efficiency of the system because it provides the minimum detail that can be reconstructed. The analysis of the PSF behavior is additionally connected to the Number of Degrees of Freedom (NDF) of the problem, i.e., the number of independent pieces of information that can be reconstructed faithfully by an imaging algorithm in the presence of noise on the data [2]. The NDF has been researched for a few decades [3][4][5] for use in optical imaging applications. The concept of the NDF in solving inverse source [6][7][8] and inverse scattering problems [9][10][11][12] has been widely proposed and turns out to be very helpful. The Singular Value Decomposition (SVD) of the relevant operator is the mathematical tool that provides the NDF; in fact, the NDF is related to the number of significant singular values and may be roughly interpreted as the number of independent point-like sources/scatterers that can be reconstructed reliably in the presence of noisy data. Therefore, it can be helpful in establishing the maximum achievable resolution. The most critical criterion for evaluating the efficiency of a radar system is its ability to differentiate between two close objects. The resolution [13] describes this criterion, and it can be measured using a numerical analysis based on the system function. The concept of the PSF has been studied in Refs. [14,15]. For instance, in Ref. [16], a new analytical PSF for 3D inverse source imaging based on integral analytical solutions has been proposed. Furthermore, a characterization of the PSF behavior of radially displaced point scatterers for circular synthetic aperture radar has been presented in Ref. [17]. In Ref. [18], an analytical estimation of the achievable resolution, linked to the configuration parameters, has been addressed by using the PSF for magnetic and electric strip sources. In Ref. [19], we addressed the PSF analysis of the inverse source and scattering problems for the strip geometry; moreover, an approximated PSF, the achievable resolution, and two applications of the PSF were provided. For the considered geometries, resolutions of λ/2 and λ/4 were found for the inverse source and scattering problems, respectively.

This paper aims at providing a PSF analysis for far-field observations to investigate the achievable resolution in imaging. To this end, we address circumference geometries for the investigation, as it is possible to find the NDF in closed form, at variance with Ref. [19]. Moreover, a valuable approximation of the relevant PSFs for both inverse source and inverse scattering problems in the 2D scalar geometry is again introduced. The relationship between the NDF, the PSF, and the resolution in reconstructions can be highlighted in this way. Some numerical simulations for each geometry are presented to validate the analytical NDF, the analytical evaluation of the PSFs, and their role in the resolution. In addition, we deal with the problem of investigating the minimum number of independent plane waves and their directions in the inverse scattering problem. The inversion of linear operators, such as the radiation operator connecting the source current to the far-field, depends not only on their kernel function but also on the geometry of both the domain and the codomain sets.
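Since the NDF is read off the singular-value spectrum of the discretized radiation operator, it can be checked numerically. The sketch below builds the far-field operator of a single circumference source of radius ρ (2D scalar case, kernel e^{jβρcos(θ−φ)} within inessential constants) and counts the singular values above a threshold; the theoretical count is 2[βρ]+1. The discretization sizes and the threshold are arbitrary choices, and for loose thresholds the count lands slightly above the theoretical value because the exponential cutoff has finite width at this electrical size.

```python
import numpy as np

wavelength = 1.0
beta = 2.0 * np.pi / wavelength      # wavenumber
rho = 4.0 * wavelength               # circumference radius

n_src, n_obs = 512, 512
phi = np.linspace(-np.pi, np.pi, n_src, endpoint=False)    # source angle
theta = np.linspace(-np.pi, np.pi, n_obs, endpoint=False)  # observation angle

# Far-field radiation operator for a circumference source (constants dropped)
L = np.exp(1j * beta * rho * np.cos(theta[:, None] - phi[None, :]))

s = np.linalg.svd(L, compute_uv=False)
ndf = int(np.sum(s > 1e-3 * s[0]))   # count significant singular values
print(f"NDF (numerical, loose threshold) = {ndf}")
print(f"theoretical knee: 2[beta*rho]+1 = {2 * round(beta * rho) + 1}")
```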
We have considered a circumference source geometry in different papers, mainly for a full-angle observation domain. In Ref. [20], for the first time, we examined the PSF for this geometry and compared the numerical results with an approximate evaluation for both the full and the limited observation-angle far-field cases. There, we observed a spatially variant resolution for the latter case of limited angular observation domains. In Ref. [21], we examined more general conic geometries and, for the first time, introduced a closed-form approximate evaluation of the PSF for the full observation-angle case. This leads to a uniform resolution in point-like source reconstructions, i.e., one that does not depend on the source position. In Ref. [22], we investigated the role of a limited observation domain in the source radiation by a similar inverse problem approach. An original, accurate closed-form evaluation of the pertinent PSF allowed us to establish its angularly variant behavior. This led to the introduction of a numerical procedure to define an optimal source discretization: the spacing of the array elements whose radiated field has the same Number of Degrees of Freedom as the continuous circumference source. A non-uniform spacing is derived, which means that the resolution in reconstructions of point-like sources depends on the source position.

In the present paper, we again consider a full-angle observation domain, and we not only recall the previous results of the inverse source problem but also extend the approximate evaluation of the PSF to the (linearized) inverse scattering operator. Again, a different, but uniform, resolution is achieved.

The paper is organized as follows. Section 2 is devoted to the NDF and PSF analysis in the inverse source problem. The same analysis is addressed in Section 3 for the inverse scattering problem. The discussion in Section 4 focuses on the optimal number of independent plane waves, and their directions in the inverse scattering problem are found for some examples. Finally, an example referring to the application of the present PSF in inverse scattering imaging is provided in Section 5. Conclusions follow in Section 6.

The Inverse Source Problem

This section aims at providing an analysis of the PSF in the inverse source problem for circumference geometries. In particular, we consider the case of a source domain composed of two concentric circumferences and evaluate the NDF and the PSF in closed form (the details are provided in Appendix A). Next, we introduce and validate an approximate evaluation of the PSF to establish a helpful approach for the following section. We start by demonstrating the relationship between the PSF definition and the NDF and by introducing its approximated evaluation. To this end, we investigate how to estimate them when the radiated field is observed over the whole observation domain in the far zone, over the angular observation sector (−π, π), because the whole discussion is strictly linked to the behavior of the singular values of the radiation operator. Let us consider two circumference sources, as shown in Figure 1. The geometry is a z-invariant electric current source J = [J₁(φ₁) J₂(φ₂)]^T, where T means transpose, defined on the two circumferences, where ρ₁ and ρ₂ are the radii of the inner and outer circumferences, respectively, and φ_i, i = 1, 2, are the angular variables on each circumference. The source is embedded in a homogeneous medium with the free-space dielectric permittivity ε₀ and magnetic permeability µ₀.
At a single frequency, the total electric far-field E(θ) over the angular observation sector (−π, π) is provided by a linear integral operator L. The wavenumber is β = ω√(ε₀µ₀) = 2π/λ, where ω and λ denote the angular frequency and the wavelength, respectively. Since the operator (1) is linear and compact, the SVD can be defined for each source geometry and consists of the triple {v_n(φ), σ_n, u_n(θ)} [23], where u_n and v_n are the singular functions and σ_n is the n-th singular value. In Appendix A, it is shown that the σ_n vanish exponentially when n > [βρ_MAX], where [·] stands for the nearest integer and ρ_MAX = max(ρ₁, ρ₂), because of the asymptotic behavior of the Bessel functions for orders much larger than their arguments. Consequently, the closed-form NDF can be taken as 2[βρ_MAX] + 1, the number of significant singular values for a stable inversion [24]. The adjoint operator of (1) can also be defined to find the solution to the inverse problem as L⁺ = [L₁⁺ L₂⁺]^T. The PSF is now considered to evaluate the performance of the reconstruction algorithm.
The PSF analysis is used in the inverse source problem to determine how the source geometry and the observation domain affect the resolution. Here, we intend to analyze only the influence of the source geometry. The final goal is to obtain an analytical estimation of the achievable resolution and to link it to the geometrical parameters. The PSF of interest is provided by the current distribution reconstructed in the source domain for a point source located at φ_0i. It is defined mathematically as the impulsive response of the system provided by the cascade operator L⁻¹L:

PSF_ij(φ_i, φ_0j) = ( L⁻¹ L δ(· − φ_0j) )(φ_i)   (5)

where L⁻¹ is the regularized inverse operator of L, δ is the Dirac delta function, and i and j can each be either 1 or 2. The PSF is given by the completeness relation truncated to the singular functions with non-zero singular values, because the minimum-norm solution of the inverse source problem is a projection of the actual source onto the singular functions v_n with non-zero singular values. Thus, the PSF depends on the number of retained singular values, i.e., on the NDF. Based on the SVD properties, the actual closed-form PSF function (see Appendix A) is analytically provided by (A9). Now the approximated PSF is introduced as follows. From the spectral theorem for compact self-adjoint operators applied to the cascade L_j⁺L_i, its kernel can be computed in closed form; the integration over the full observation circle yields

( L_j⁺L_i )(φ, φ') ∝ J₀( β √(ρ_i² + ρ_j² − 2 ρ_i ρ_j cos(φ − φ')) )   (8)

where J₀ is the zeroth-order Bessel function of the first kind. Now, a general strategy to build a good approximation of the PSF concerns the approximation of the inverse operator in Equation (5) by the adjoint one [19,25,26]. This is legitimate when the singular values of the pertinent operator exhibit a nearly constant behavior before the knee of their curve. As a result, we define the approximated PSF as

PSF_a ≈ L⁺ L δ(· − φ₀)   (9)

Therefore, Equation (8) provides the analytical evaluation of (9) for the geometry under consideration.

We now provide some numerical simulations to examine the results of the above-mentioned exact and analytical evaluations of the PSF. In this way, we can also investigate how adding an inner circumference inside an outer circumference affects the NDF, and whether it is possible to reconstruct the inner source or not. We addressed the same analysis to evaluate the NDF of two square sources in Ref. [27]. It was shown there that increasing the size of the inner square has no noticeable effect on the singular-value behavior and that, regardless of how small the inner square is, the behavior of the singular values of the outer square is similar to that of the two squares together. Thus, a comparison of different sizes of the inner source confirms that the inner source cannot add to the NDF. Let us consider the geometry of Figure 1 for ρ₁ = 4λ and different values of ρ₂. The upper bound of the NDF is 51. Figure 2 shows that changing the size of the inner source affects the behavior of the singular values only slightly; however, as expected from the result of Appendix A, the NDF does not change. It can be concluded that the contribution of the inner circumference is negligible, and it is possible to ignore it in achieving the whole NDF. Furthermore, it is difficult to understand whether the inner source is present or not, and it may be impossible to reconstruct it. This problem is still challenging in inverse source problems, and perhaps the inner source can be reconstructed by using more prior information about it.
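The truncated-SVD PSF can also be reproduced numerically from the discretized operator: the reconstruction of a point-like source is the sum Σ_n v_n(φ) v_n*(φ₀) over the retained singular functions. The sketch below does this for a single circumference of radius 4λ and estimates the FWHM; the grid sizes and the truncation index are illustrative choices.

```python
import numpy as np

wavelength, rho = 1.0, 4.0
beta = 2.0 * np.pi / wavelength
n = 1024
phi = np.linspace(-np.pi, np.pi, n, endpoint=False)
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)

# Discretized radiation operator and its SVD
L = np.exp(1j * beta * rho * np.cos(theta[:, None] - phi[None, :]))
U, s, Vh = np.linalg.svd(L)

ndf = 2 * round(beta * rho) + 1          # retain NDF singular functions
i0 = n // 2                              # point source at phi_0 = 0
# Truncated completeness relation: PSF(phi, phi0) = sum_n v_n(phi) v_n*(phi0)
psf = Vh[:ndf].conj().T @ Vh[:ndf, i0]
psf = np.abs(psf) / np.abs(psf).max()

# FWHM measured as arc length along the circumference, in wavelengths
above = phi[psf >= 0.5]
fwhm_arc = (above.max() - above.min()) * rho
print(f"FWHM ~ {fwhm_arc:.2f} wavelengths")  # on the order of lambda/2,
                                             # in line with the text
```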
Next, the behavior of the analytical PSF is verified. Since the PSF is a function only of φ − φ0, the point-source location in the source domain can be selected arbitrarily: the PSF is an angularly invariant function; that is, all point-like sources can be imaged in the same way, independently of their positions. Figure 3 shows the actual PSF of the outer circumference and its influence on the inner circumference for a point-like source located at φ0 = 0. Generally, the resolution is evaluated by the Full-Width Half Maximum (FWHM) criterion. The result confirms that the resolution is equal to λ/2 in the inverse source problem. In addition, the PSF of the outer circumference cannot significantly affect the inner one.

It can be observed that increasing the size of the inner source can reduce the amplitude of the PSF of the outer source; in addition, the amplitude of its influence on the inner one rises, but the resolution remains unchanged. This means that, when it is known in advance that the source is composed of two circumferences, it can be expected that, for a spacing smaller than λ, the resolution along the radius is about λ/2. Finally, a comparison between the approximated and the actual PSF is provided in Figure 4. The amplitudes of both PSFs are normalized to 1. It is confirmed that, in the main lobe region, the closed-form approximation is acceptable.
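The FWHM evaluation invoked above is easy to reproduce on the approximated PSF. A minimal sketch follows, assuming the chord-distance form PSF̃(Δφ) ∝ J0(2βρ sin(Δφ/2)) for a single circumference of radius 4λ; this form is consistent with the J0 dependence of Equation (8) but is an assumption here, not a formula copied from the paper.

```python
# Minimal sketch: FWHM of the assumed approximated PSF J0(2*beta*rho*sin(dphi/2)),
# converted to an arc length on the circumference (expected near lambda/2).
import numpy as np
from scipy.special import j0

lam = 1.0
beta = 2 * np.pi / lam
rho = 4.0 * lam

dphi = np.linspace(-0.5, 0.5, 20001)               # angular offset from the source
psf = np.abs(j0(2 * beta * rho * np.sin(dphi / 2)))

# |J0| side lobes stay below 0.5, so the half-maximum set is the main lobe only.
main = dphi[psf >= 0.5 * psf.max()]
fwhm = rho * (main.max() - main.min())             # arc length of the main lobe
print(f"FWHM ~ {fwhm / lam:.2f} lambda")           # ~0.5 lambda
```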
The Inverse Scattering Problem

In this section, we address the analysis of the PSF in the inverse scattering problem, as in Section 2; additionally, the NDF of an arc of circumference is provided analytically. Let us consider a dielectric scatterer, with εs(φ) as relative permittivity, belonging to a domain referred to as the investigation domain (ID), which is located in a homogeneous background with the free-space permittivity ε0 and illuminated by incident plane waves from different angles. Figure 5 depicts the general circumference geometry of the problem. The directions of the plane wave and of the observation are denoted by θi and θs, respectively. For each incident plane wave, the scattered field is observed in the far zone; α and γ are the extremal angles of the scatterer, and ρ is the radius of the arc of circumference. Hence, for the forward problem, the scattered far-field Es(θs) under the Born approximation is provided by Equation (10), where χ(φ) = 1 − εs(φ)/ε0 is the contrast function.

Now, since T is a linear and compact operator for the multi-view and single-frequency scattering configuration of our present interest, the SVD can be applied to evaluate the NDF in terms of the pertinent singular values and the PSF in terms of the corresponding singular functions. Since it is difficult to find an explicit expression for the SVD of (10), in order to provide some clues about it, we apply the method proposed in Ref. [28] for estimating the NDF of an arc source to the present scattering problem.
First, we rewrite Equation (10) in a more convenient form (Equation (11)). Then we expand the contrast function in a Fourier series (Equation (12)), in order to examine the contribution that each of the contrast harmonics can provide to the scattered field. Next, by exploiting the Jacobi-Anger expansion [29] and the orthogonality of the exponential functions over the interval [0, 2π], we compute the scattered field due to a single harmonic (Equation (13)), resulting in a double Fourier series, where φm = (γ + α)/2 and L = (γ − α)/2. Now, the expansion coefficients peak at the order s such that πs = L(n − m), because of the sinc function dependence, and, because of the exponential decay of the Bessel functions for orders much larger than their argument, the maximum order is max|m| = [βρ].

In an inverse scattering problem, the scattered fields for all the plane waves are observed, and the task is then to reconstruct the contrast function. In order to solve the inverse scattering problem, the adjoint operator of (10) can be helpful. From the spectral theorem for compact self-adjoint operators applied to T⁺T, it follows Equation (14), whose kernel (Equation (15)) is provided by the square of the zeroth-order Bessel function of the first kind. Finally, the substitution of Equation (15) into (14) completes the evaluation.

The PSF analysis is used in the inverse scattering problem to determine how the scatterer geometry and observation domain may influence the resolution. Only the effect of the scatterer geometry on the resolution is considered here. The reconstruction of a point-like scatterer is provided by the PSF. As pointed out in Section 2, where the PSFs for the inverse source problem were introduced, the adjoint operator T⁺ can be used to approximate the inverse operator T⁻¹; hence, an approximated PSF can be introduced as well, by replacing Li⁻¹ and Lj⁺ in Equations (5) and (9) with T⁻¹ and T⁺, respectively.

Two Circumferences

This subsection considers two circumferences to show the difference between this case and the geometry considered in the inverse source problem. As in Section 2, we consider different sizes of the inner circumference. First of all, we consider the NDF of this geometry and follow the same approach as in (1) to define a block operator accounting for more than one scattering object. Following the same reasoning of Refs. [27,30,31], if the distance between the two scatterers is sufficiently large, the kernel norms and, consequently, the operator norms of the diagonal contributions of the pertinent T⁺T operator are expected to be larger than those of the off-diagonal ones. Therefore, the T⁺T operator becomes a diagonal block operator, and its eigenvalues are the combination of those of each block. This result implies that the whole functional space of the scattered far fields can be approximately decomposed into two individual orthogonal subspaces. Thus, for an arbitrary number M of scatterers, the total NDF can be obtained approximately by summing the NDF_n of each scatterer:

NDF ≈ Σ_{n=1}^{M} NDF_n

The implication of the increase of the NDF in the inverse scattering problem compared to the inverse source problem for the same geometry concerns the possibility of reconstructing a point-like scatterer lying on the inner circumference. As mentioned before, inserting another circumference source inside the outer circumference source cannot increase the NDF; on the other hand, the NDF is increased in the inverse scattering problem. Furthermore, it is impossible to reconstruct the inner circumference source in the inverse source problem, whereas the inner circumference object can be reconstructed in the inverse scattering problem. To this end, some numerical simulations will be presented.
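Before the simulations, the block-operator argument can be illustrated numerically. The sketch below assumes a scalar far-zone Born kernel exp(−jβρ[cos(θs − φ) − cos(θi − φ)]) for a point at angle φ on an arc of radius ρ, with illustrative sampling and threshold; it only checks the approximate additivity of the NDF.

```python
# Minimal sketch: NDF additivity for a multi-view Born operator over two arcs.
# The kernel and all discretization parameters are illustrative assumptions.
import numpy as np

lam = 1.0
beta = 2 * np.pi / lam
arcs = [(3.0 * lam, np.pi / 6, 3 * np.pi / 4),       # (rho, alpha, gamma), arc 1
        (4.0 * lam, 5 * np.pi / 4, 11 * np.pi / 6)]  # arc 2

th = np.linspace(-np.pi, np.pi, 121)                 # observation angles
ti = np.linspace(-np.pi, np.pi, 60, endpoint=False)  # incident plane waves

def born_matrix(arc_list):
    cols = []
    for rho, a, g in arc_list:
        phi = np.linspace(a, g, 200)
        ks = np.cos(th[:, None, None] - phi[None, None, :])    # observation term
        ki = np.cos(ti[None, :, None] - phi[None, None, :])    # incidence term
        cols.append(np.exp(-1j * beta * rho * (ks - ki)).reshape(-1, phi.size))
    return np.hstack(cols)

def ndf(M, tol=1e-3):
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

n1, n2 = ndf(born_matrix(arcs[:1])), ndf(born_matrix(arcs[1:]))
print("NDF arc 1:", n1, " NDF arc 2:", n2, " sum:", n1 + n2)
print("NDF of the two arcs together:", ndf(born_matrix(arcs)))
```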
The behavior of the normalized singular values of the relevant operator is plotted in Figure 6. The characteristics of the two circumferences are α1,2 = −π, γ1,2 = π, ρ1 = 4λ, and different sizes of ρ2 = λ, 2λ, and 3λ, where the indices 1 and 2 indicate the outer and the inner circumference, respectively. It can be seen that the total NDF is achieved approximately by summing the NDFs of the two circumferences. In contrast with the inverse source problem, the NDF varies when changing the radius of the inner circumference. For reference, we choose a sufficiently large number of plane waves to ensure that all the predicted NDF is achieved, so the number of plane waves for this simulation is 60.

Figure 7 shows the actual PSF of the outer circumference and its influence on the inner one for a point scatterer located at φ0 = 0. The actual PSF is numerically computed by a custom numerical code run in the MATLAB environment by discretization of the relevant operator over a sufficiently fine grid. The achievable resolution is evaluated by the FWHM criterion. The result confirms that the achievable resolution is λ/4. As can be observed, the amplitude of the main lobe of the outer circumference does not change when the radius of the inner one varies. Furthermore, the PSF of the outer circumference cannot influence the inner one.

The actual PSF of the inner circumference and its influence on the outer circumference for a point-like scatterer located at φ0 = 0 is plotted in Figure 8. It can be observed that the amplitude of the main lobe of the inner circumference varies when the radius of the inner one changes.
The amplitude of the main lobes of the inner circumference is larger than the amplitude of the side lobes of the outer circumference in Figure 7. The PSF of the inner circumference cannot affect the outer one. Consequently, it is possible to reconstruct the inner circumference, and a point-like scatterer can be distinguished lying either on it or on the outer one.

Next, a comparison between the actual PSF and the approximated one is provided to appreciate the accuracy of the latter. Figure 9 confirms that the main lobes of the approximated PSF (dotted red curve) and the actual PSF (solid red curve) nearly overlap. This means that the approximation is very accurate at estimating the achievable resolution.

Two Arcs of Circumferences

In this subsection, in order to provide a more general example, we address the case of two arcs of two different circumferences. The geometry of this problem is depicted in Figure 5, with ρ1 = 3λ, α1 = π/6, γ1 = 3π/4 and ρ2 = 4λ, α2 = 5π/4, γ2 = 11π/6.
The aim is twofold: to confirm that the whole NDF is approximately equivalent to the summation of the NDF of each scatterer and to appreciate the validity of the approximation of the PSF for resolution purposes. Figure 10 shows the behavior of the normalized singular values of the relevant operator with 60 plane waves. The upper bound of the NDF of the two scatterers is 51. The result proves that the total NDF is approximately achieved by summing the NDFs of the two scatterers.

The next numerical simulation refers to the influence of the actual PSF of the arc of circumference 1 on the other one. The considered point positions are φ0 = 4π/3, 3π/2, and 5π/3, and the corresponding PSF results are shown in Figure 11. In this way, we can appreciate how the imaging performance remains constant when the position of the point scatterer changes. In fact, if the location of the point scatterer changes, the width of the main lobe does not change, which means that the resolution remains constant. Furthermore, the PSF of scatterer 2 cannot affect the other one.

A numerical simulation is presented to show the influence of the actual PSF of scatterer 2 on the other one. The positions of the considered points are φ0 = 4π/3, 3π/2, and 5π/3, and the corresponding PSF results are displayed in Figure 12. Again, we observe that the width of the main lobes does not vary when the position of the point scatterers changes.
It indicates that the PSF is space invariant, which means that the resolution of a point-like scatterer is independent of its location. The same simulation as in the previous subsection is provided here to compare the actual PSF and the approximated PSF for different positions of φ0. As demonstrated in Figure 13, the main lobes of the approximated PSF (dotted lines) approximately overlap with those of the actual PSF (solid lines).

It can be concluded that, in the inverse source and inverse scattering problems, when the observation domain is between −π and π, while the PSF maximum value may depend on the point source/scatterer, the width of its main lobe does not change and is independent of the position of the point source/scatterer. Since the resolution of a point-like source/scatterer is limited by the width of the PSF, it is the same over the whole source/investigation domain.

Optimal Number of Incident Plane Waves

In the previous section, we performed an NDF analysis of the field scattered by circumference geometries under plane wave incidence. It was assumed that a sufficiently high number of waves impinges on the object so that the NDF is achieved. In inverse scattering problems, the scattered field data can be acquired by changing either the observation direction or the direction of the impinging plane wave. The knowledge of the NDF of the linearized scattering operator cannot provide any clue about how to discretize the scattered field data optimally. Therefore, from the theoretical point of view, two questions arise when multiple plane waves are used to achieve the NDF. The first one is: how many independent plane waves are needed?
The second one is: which directions of the plane waves are the optimal ones, in the sense that they allow us to achieve the total NDF of the operator? In the absence of theoretical arguments, we choose to investigate numerically only the question of the minimum number of impinging plane waves that is significant in achieving the total NDF. An answer to this point can be valuable in decreasing the cost of imaging systems, for instance in radar imaging, since it allows us to reduce the number of transmitters (and receivers) to a minimum. In principle, increasing the number of independent plane waves can increase the NDF, namely the number of significant singular values of the operator; beyond that, it can only affect their behavior in the decaying region. Generally, finding the exact optimal number and the directions for each geometry is complicated, because they depend on the geometry of the problem, i.e., the number of scatterers, the distance between them, and their location, size, and shape. This means that it is difficult to introduce a general rule for every geometry, and it cannot easily be predicted how many independent plane waves are enough and which directions are the best.

To this end, the reciprocity theorem might be helpful, as the inverse scattering problem reduces to an inverse source problem for a single plane wave incidence. According to it, the far field scattered along a direction θ for a plane wave impinging along the direction θi is the same when we consider a plane wave impinging along θ and observe the far field along θi. Therefore, if we call NDFs the NDF of the corresponding inverse source problem, it would result in NDF = (NDFs + 1) × NDFs/2 for the pertinent inverse scattering problem, and NDFs plane waves would be needed, as each individual one might add pieces of information to the total scattered field. While this reasoning can provide an upper bound on the NDF and on the number and directions of the independent plane waves, the actual number is lower.

Likewise, in Ref. [31], we have analyzed the same problem for strip geometries in the θ and u variables. It has been shown that two incident plane waves for two strips and four incident plane waves for the cross-strip are adequate for achieving the whole NDF of these scattering geometries. Moreover, the optimum directions of the plane waves for each geometry were introduced. It was proved that the same NDF can be achieved independently of the observation variable, the only difference being the behavior of the singular values.

The purpose of this section is to investigate the minimum number of independent plane waves for the considered geometries to achieve the whole NDF, as well as to introduce the optimal directions of the plane waves. To this end, a comparison between different numbers of plane waves for each geometry is presented. Since, at the moment, it is difficult to introduce a closed-form rule defining the optimal number and the directions, we provide them numerically. We first consider an arc of a circumference whose radius is ρ = 3λ, with α = π/6 and γ = 3π/4. Figure 14 illustrates the behavior of the singular values of the relevant operator for different numbers of plane waves. For reference, we choose a sufficiently large number of plane waves to ensure that all the predicted NDF is achieved.
As can be observed in Figure 14, the NDF increases when increasing the number of plane waves; however, the NDF does not increase after two plane waves (purple line), and the behavior of the singular values can only change within the decay region. Overall, it can be seen that the NDF can be approximately achieved by considering only two plane waves, whose directions are 0 and π. It must be noticed that the optimal number depends on the size of the circumference, which means that, if its size increases, the optimal number of plane waves will increase. However, two plane waves are adequate for this geometry. This result agrees with Ref. [31], as this circumference geometry is not far from a rectilinear one.

The second numerical simulation is analogous to the first one and presents the behavior of the normalized singular values for two arcs of two different circumferences at various numbers of incident plane waves from different directions, as shown in Figure 15. The result shows again that the NDF increases when increasing the number of plane waves from one to two (purple line); beyond two, only the behavior of the singular values varies. The directions of the two plane waves are π/6 and π. It can be observed that, when we consider three plane waves with θi = 0, π/2, π, the total NDF is achieved as well. Consequently, compared with the previous simulation, the optimal directions of the plane waves depend on the scattering geometry.

The third simulation is devoted to one circumference whose radius is ρ = 4λ. Figure 16 shows the behavior of the singular values of the relevant operator for different numbers of plane waves. It can be seen that increasing the number of plane waves can increase the NDF; however, the NDF does not increase after four plane waves (red line), and only the behavior of the singular values can change. Overall, the NDF can be approximately achieved by considering only four plane waves, whose directions are 0, π/2, π, and 3π/2.
The fourth numerical simulation concerns the behavior of the normalized singular values for two different circumferences, whose outer and inner radii are ρ1 = 4λ and ρ2 = 2λ, respectively, as shown in Figure 17. As can be seen, the NDF increases when increasing the number of plane waves; however, the NDF does not increase after four plane waves (green line), and only the behavior of the singular values can change. Thus, four plane waves are approximately adequate, because the exponential decay of the singular values starts at approximately the same point as with six and 60 plane waves, the only difference being the behavior of the singular values. The directions of the four plane waves are θi = π/6, 3π/4, 5π/4, 7π/4.

In conclusion, the numerical simulations confirm that the optimal number of plane waves and their directions depend on the size, number, shape, and location of the scatterers. Therefore, it may be difficult to find a general rule.
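The sweeps behind Figures 14-17 amount to recomputing the singular values while growing the set of incidence directions. A minimal sketch for the single circumference of radius 4λ follows, with the same assumed Born kernel and illustrative threshold as in the earlier sketch:

```python
# Minimal sketch: significant-singular-value count versus number of plane waves
# for one circumference of radius 4*lam (illustrative kernel and threshold).
import numpy as np

lam = 1.0
beta = 2 * np.pi / lam
rho = 4.0 * lam
phi = np.linspace(-np.pi, np.pi, 300, endpoint=False)   # contrast samples
th = np.linspace(-np.pi, np.pi, 181)                    # observation angles

def ndf_for_waves(angles, tol=1e-3):
    rows = [np.exp(-1j * beta * rho * (np.cos(th[:, None] - phi[None, :])
                                       - np.cos(a - phi)[None, :]))
            for a in angles]
    s = np.linalg.svd(np.vstack(rows), compute_uv=False)
    return int(np.sum(s > tol * s[0]))

for n in (1, 2, 4, 6, 12):
    angles = np.arange(n) * 2 * np.pi / n               # uniformly spread directions
    print(f"{n:2d} plane waves -> NDF ~ {ndf_for_waves(angles)}")
```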
A Numerical Application in Inverse Scattering

This section aims at showing the practical relevance of the above theoretical discussions through the numerical reconstruction of a dielectric object located in free space. It consists of two dielectric semi-circumference strips with ρ1 = 4λ, ρ2 = 2λ and width 0.4λ, as illustrated in Figure 18, and the contrast is χ = 1. Then, we assume that two cracks (free-space condition, χ = 0) exist in the outer semi-circumference, and two cracks are present in the inner semi-circumference. In the outer semi-circumference, the width of each crack and the distance between them is λ/4; in the inner semi-circumference, the length of each crack and the distance between them is λ/6. All the simulation parameters are the same as in the previous section. The normalized singular values of the related operator are plotted in Figure 19 to compute the NDF. For this simulation, the number of plane waves is 60, and the expected NDF is 81.

It must be noted that the mathematical procedure of the reconstruction algorithm is significant for solving the problem. Since all actual scattered fields always include noise, the reconstructed image may also be noisy, and the cracks may not be detected. Therefore, it is crucial that the inversion algorithm can mitigate the noise effect. To this end, the Truncated SVD algorithm is adopted to reconstruct the contrast function, and we fix the truncation threshold to include the first NDF singular values. Then, additive white Gaussian noise is added to the simulated scattered field, with a noise level such that the Signal-to-Noise Ratio (SNR) is 10 dB; the results are shown in Figure 20.
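A minimal sketch of such a TSVD inversion under additive noise follows. The routine itself is generic; the toy operator and contrast profile in the usage lines are stand-ins, not the geometry of Figure 18.

```python
# Minimal sketch: TSVD inversion of noisy data, truncating at the first NDF
# singular values (generic routine; the toy operator below is only a stand-in).
import numpy as np

def tsvd_reconstruct(A, y_noisy, ndf):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :ndf].conj().T @ y_noisy) / s[:ndf]
    return Vh[:ndf].conj().T @ coeffs

def add_awgn(y, snr_db, rng):
    p_sig = np.mean(np.abs(y) ** 2)
    sigma = np.sqrt(p_sig / 10 ** (snr_db / 10) / 2)
    return y + sigma * (rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape))

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 80)) + 1j * rng.standard_normal((200, 80))
x = np.zeros(80); x[10:30] = 1.0; x[18:21] = 0.0        # a "strip" with a crack
y = add_awgn(A @ x, snr_db=10.0, rng=rng)
x_rec = tsvd_reconstruct(A, y, ndf=60)
print("relative reconstruction error:",
      round(float(np.linalg.norm(x_rec - x) / np.linalg.norm(x)), 3))
```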
As can be observed, the two cracks along the outer semi-circumference can be distinguished; on the contrary, it is impossible to differentiate between the two cracks in the inner semi-circumference. It is important to note that the two cracks separated by the distance λ/4 are resolved well even in the presence of noise on the data. Hence, the resolution analysis is important for the reliable reconstruction of defects in dielectric objects in inverse scattering problems.

Conclusions

We have investigated the role of the NDF and the PSF in the linear electromagnetic inverse source and inverse scattering problems to estimate the achievable resolution. Since the results may depend on the geometry, we have taken into consideration one different from Ref. [19], that is, the circumference geometry. The PSF can be computed by the numerical solution of the relevant inverse problem. Since the exact evaluation of the PSF is complicated and can only be performed numerically for most geometries, an approximate analytical evaluation was introduced, and its accuracy was assessed against the actual PSF for each geometry by numerical simulations. In addition, the closed-form evaluation of the PSF in the inverse source problem and the analysis of the closed-form NDF in the inverse scattering problem were provided. Two circumferences were addressed to prove that the inner source is negligible for achieving the NDF and cannot add to it; conversely, the inner circumference can increase the NDF in inverse scattering. We have demonstrated that, when the observation domain scans the full angle in the far-zone region, a space-invariant PSF is achieved, which means that the maximum resolution of the reconstruction of point-like sources/objects is obtained independently of their locations within the investigation domain, and the resolution remains unchanged. Specifically, the achievable resolutions for the considered geometries are λ/2 and λ/4 for the inverse source and scattering problems, respectively, as in Ref. [19].
Furthermore, we have shown that increasing the number of independent plane waves adds to the NDF up to the minimum number of plane waves needed to achieve the whole NDF; thus, the optimal number of independent plane waves and their directions were introduced numerically. The results illustrate that the optimal number and the directions depend on the characteristics of the scatterers, such as their size and number. In Section 5, a numerical example has shown the relevance of the approach to the microwave tomography of dielectric objects: even when noise is present, the two cracks separated by the distance λ/4 can be reconstructed well.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

In this appendix, we derive the singular system of the operator (1) to compute the closed form of the PSF in inverse source problems. For a circular source, the singular system is readily provided in closed form [29] by the following:

v_n(φ) = e^{jnφ}/√(2π),  σ_n = 2πρ J_n(βρ),  u_n(θ) = j^n e^{jnθ}/√(2π)   (A1)

where J_n(·) is the n-th Bessel function of the first kind. As is well known, the singular functions and the singular values of the operator L satisfy the equations L v_n = σ_n u_n and L⁺ u_n = σ_n v_n (see (2)). Therefore, the following two eigenvalue problems arise: L⁺L v_n = σ_n² v_n and L L⁺ u_n = σ_n² u_n. We start by considering the latter one and evaluate the left-hand side (Equation (A2)). By exploiting the Jacobi-Anger expansion [29] and the orthogonality of the exponentials in θ over the interval [0, 2π], (A2) becomes (A3), from which (A4) is apparent. It is possible to show that the σ_n² decay exponentially for n > [βρ_MAX], where [·] stands for the nearest integer and ρ_MAX = max(ρ1, ρ2). Now, we consider the equation L_j⁺ u_n = σ_n v_n^j (A5) to compute the singular functions. Then we get the following:

L_j⁺ u_n(θ) = ρ_j ∫_{−π}^{π} (e^{jnθ}/√(2π)) e^{−jβρ_j cos(θ−φ_j)} dθ = (ρ_j/√(2π)) ∫_{−π}^{π} e^{jnθ} Σ_m (−j)^m J_m(βρ_j) e^{−jm(θ−φ_j)} dθ = 2πρ_j (−j)^n (e^{jnφ_j}/√(2π)) J_n(βρ_j)   (A6)

By substituting Equation (A6) into (A5), it follows that:

σ_n v_n^j(φ_j) = 2πρ_j (−j)^n (e^{jnφ_j}/√(2π)) J_n(βρ_j)   (A7)

Then the singular functions are given by:

v_n^j(φ_j) = (−j)^n ρ_j J_n(βρ_j) e^{jnφ_j} / √(2π Σ_k ρ_k² J_n²(βρ_k))   (A8)

Finally, the closed-form PSF is given by:

PSF(φ_0i, φ_j) = Σ_n v_n^j(φ_j) v_n^{i*}(φ_0i) = Σ_n [ρ_i ρ_j J_n(βρ_i) J_n(βρ_j) / (2π Σ_k ρ_k² J_n²(βρ_k))] e^{jn(φ_j − φ_0i)}   (A9)

which holds both for i = j and i ≠ j.
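As a numerical companion to (A9), the sketch below evaluates the closed-form PSF for the two-circumference geometry with i = j = 1 (the outer circumference, ρ1 = 4λ, ρ2 = 2λ); the truncation at |n| ≤ [βρ_MAX] and the grid are illustrative choices.

```python
# Minimal sketch: closed-form PSF of Equation (A9) for i = j = 1 (outer circle),
# truncated at |n| <= [beta*rho_MAX] (illustrative truncation and grid).
import numpy as np
from scipy.special import jv

lam = 1.0
beta = 2 * np.pi / lam
rho = np.array([4.0, 2.0]) * lam                 # rho_1 (outer), rho_2 (inner)
N = round(beta * rho.max())

n = np.arange(-N, N + 1)
Jn = jv(np.abs(n)[:, None], beta * rho[None, :]) # only squares enter below
num = (rho[0] * Jn[:, 0]) ** 2                   # rho_i rho_j J_n(b r_i) J_n(b r_j), i=j=1
den = np.sum((rho[None, :] * Jn) ** 2, axis=1)   # sum_k rho_k^2 J_n^2(beta rho_k)

dphi = np.linspace(-np.pi, np.pi, 4001)          # phi_j - phi_0i
psf = (np.exp(1j * np.outer(n, dphi)) * (num / den)[:, None]).sum(axis=0) / (2 * np.pi)
psf = np.abs(psf) / np.abs(psf).max()

main = dphi[(psf >= 0.5) & (np.abs(dphi) < 0.5)] # main lobe at half maximum
print(f"FWHM ~ {rho[0] * (main.max() - main.min()) / lam:.2f} lambda")
```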
A new heat propagation velocity prevails over Brownian particle velocities in determining the thermal conductivities of nanofluids

An alternative insight is presented concerning heat propagation velocity scales in predicting the effective thermal conductivities of nanofluids. The widely applied Brownian particle velocities in the published literature are often found too slow to describe the relatively higher nanofluid conductivities. In contrast, the present model proposes a faster heat transfer velocity, on the same order as the speed of sound, rooted in a modified kinetic principle. In addition, this model accounts for both nanoparticle heat dissipation and coagulation effects. This novel model of the effective thermal conductivities of nanofluids agrees well with an extended range of experimental data.

A nanofluid [1] is defined as a mixture of nanosized particles suspended in a liquid as the base fluid. The nanofluid is perceived as an extended scope of earlier efforts to study the effective thermal conductivity of multiphase systems containing microscale particle-embedded solid materials [2-4] and a solid dispersion in liquid [5]. Since the first article on measurements of the enhanced thermal conductivity of nanofluids (suspensions of Al2O3 and CuO nanoparticles in either water or ethylene glycol) using the transient hot-wire technique was published in 1999 [6], a number of successive measurement studies have supplemented the original findings and extended the parametric variations affecting the level of conductivity enhancement [7-23]. These experimental examinations have revealed the parametric importance of thermal conductivity enhancement, including the volume concentration of nanoparticles and their sizes, clustering or aggregation effects, pH effects, surfactant effects, and the base fluid temperature. As a systematic approach, Chon et al. [24] have constructed an experimentally extrapolated equation that predicts the nanofluid conductivity in terms of the related parameters. Despite these advances, however, the published studies on theoretical predictions [25-30] of the thermal conductivity enhancement of nanofluids continue to be controversial and far from comprehensive.

Table 1 shows the chronological presentation of published theories predicting conductivities, either for particle-embedded solid materials or for nanofluids. The first attempt at mathematical modeling dates back to 1873 by Maxwell [2], who presented an effective thermal conductivity for a heterogeneous solid material consisting of spherical solid particles of thermal conductivity k_p embedded in a continuous solid phase with thermal conductivity k_BF. The volume concentration f of the embedded spheres is taken to be sufficiently small, such that the spheres do not interact thermally, and the effect of the particle size is assumed negligible. In 1962, Hamilton and Crosser [3] extended Maxwell's model and incorporated a modification for non-spherical particles through the empirical shape factor n. A number of alternative models have been proposed with the use of the Brownian motion-induced microconvection in a nanofluid. By adding a second term to the Maxwell model, Xuan et al. [25] proposed a model incorporating the Brownian motion of nanoparticles in 2003. A year later, Jang and Choi [26] introduced the Brownian-motion-driven convection model and attempted to describe the temperature dependency of nanofluid thermal conductivity.
They assumed the Nusselt number (Nu) to be the product of the squared Reynolds number (Re) and the squared Prandtl number (Pr), i.e., Nu = Re²Pr², based on the postulation of a Reynolds number of the order of unity. However, this assumption is invalid, because it is incorrect to neglect the first two terms, i.e., the lower-degree terms in Re·Pr, in the expression for the Nusselt number that Acrivos and Taylor [31] have derived for heat transfer from a spherical particle at low values of the Reynolds number. Kumar et al. [27] also attempted to incorporate the nanoparticle thermal conductivity based on the Brownian velocity. However, their model failed, as Keblinski et al. [32] asserted that "the Brownian motion mean free path of a nanoparticle in fluid (by Kumar et al.) is on the order of 1 cm, which is unphysical." In 2005, Prasher et al. [28] developed a model combining the Maxwell-Garnett model [33] with both the Kapitza resistance effect of particles with the surrounding medium and the effect of the Brownian motion-induced convection. Later, they expanded their theoretical prediction of nanofluid thermal conductivity by adding aggregation conductivity contributions to the convection enhancements [29]. However, they assumed a less justifiable Brownian velocity of nanoparticles, the root-mean-square velocity √(3k_bT/m_p) of a particle of mass m_p = πρ_p d_p³/6 based on the kinetic theory of gases (Boltzmann constant k_b = 1.3807 × 10⁻²³ J/K, base fluid temperature T, nanoparticle density ρ_p, and diameter d_p), which is valid just for fine particles suspended in a dilute gas but not quite valid for nanoparticles suspended in a liquid. Quite possibly because of this conflict, their model fits only a subset of the experimental data; e.g., it agrees fairly well with Al2O3 nanofluid data but fails to fit CuO nanofluid data. The effect of the Brownian motion-induced microconvection remains controversial among different research groups. Eapen et al. [34] strongly argued that microconvection around randomly moving nanoparticles does not influence the thermal conductivity of the nanofluid. In 2007, the Das group proposed a nanofluid thermal conductivity model based on a cell model [30]. Their cell model tried to explain the nonlinear dependence of the thermal conductivity of nanofluids on the particle volume fraction. However, their empirical constants were defined only to fit their experimental data. In fact, their model constants did not show consistency for an identical Al2O3 nanofluid.

The kinetic principle describes the thermal conductivity of a gas well, as the gas molecules are assumed to be freely moving due to their relatively dilute distribution [35]. For liquids, however, their stronger intermolecular forces, primarily because of the higher packing density, make it necessary to modify the kinetic theory. In addition, the molecular collision velocities of gases are too low to explain liquid thermal conductivities, which are at least one order of magnitude higher than the gas conductivities.

[Table 1: k_eff expressions of the models of Maxwell [2], Hamilton and Crosser [3], Xuan et al. [25], Jang and Choi [26], Kumar et al. [27], Prasher et al. [28,29], and Patel et al. [30]; the equations themselves are not recoverable here. f denotes the volume concentration, n is the empirical shape factor (n = 3 for a sphere), and C_a, C_b, C_g, C_δ and m are empirical constants. Suggested constants [26,27,29] are 18 × 10⁶ for C_a, 2.9 to 3.0 for C_b, 40000 for C_g, and 2.4 to 2.75 for m.]
Hence, the thermal conductivities of denser liquids are conjectured to be more properly expressed through the faster sound propagation in the case of liquids, and through the phonon velocity in the case of solids. In this article, a novel theoretical model describing the nanofluid thermal conductivities is proposed and examined for its validity against available experimental data; it considers all major effective parameters, including the size, density, and volume concentration of the nanoparticles, the fluid temperature and viscosity, and relevant thermal parameters such as the thermal conductivity of the base fluid and the heat capacity of the nanoparticles.

Introduction of heat propagation velocity

The enhanced thermal conductivity of a liquid suspension containing highly conductive metal or metal-oxide particles, such as nanofluids with Au, Al2O3, or CuO, is believed to be attributable to the interaction of the nanoparticles with the base fluid molecules. The thermal conductivity of a liquid is given by [36] (Equation 1), where ρ and c_v are the liquid density and specific heat, respectively, u is the sonic velocity in the liquid, and a is the molecular travel distance between two successive collisions. Likewise, the thermal conductivity enhancement of a nanofluid (Equation 2) can include the thermal properties of the nanoparticles (ρ_p, c_p), the heat propagation velocity V_ht, which substitutes the sonic velocity, the heat travel distance l_ht, which replaces the collision travel distance a, and additionally the volume fraction f of the nanoparticles [14,35]. Note that the combined term V_ht·l_ht relates to the increase of the thermal diffusivity of the nanofluid. The heat travel distance l_ht, which is defined as the freely traveled distance of heat energy during the interaction of base fluid molecules and nanoparticles, is shown to be equivalent to the root-mean-square displacement of the nanoparticles [25] (Equation 3), where μ is the dynamic viscosity of the base fluid and c1 is a dimensionless proportionality constant. In the case of a nanofluid, if l_ht is assumed to have the same order of magnitude as the mean free path of water molecules, one can estimate l_ht ~ 0.170 nm.

The heat propagation velocity can be estimated by examining the orders of magnitude of the parameters involved in Equation 2. For example, for 47-nm Al2O3 at 1 vol.% concentration (f·ρ_p·c_p ~ 3.2 × 10⁴ J/(m³·K)), the thermal conductivity enhancement Δk_enh is found to range from 0.025 to 0.100 W/(m·K) [24]. Thus, the heat propagation velocity V_ht is estimated to be on the order of 10³ m/s. While a more rigorous analysis to determine the heat transfer velocity is yet to be discussed, this estimate is consistent with conjectures that the characteristic heat propagation velocity is on the scale of the sound propagation velocity, of order 10³ m/s, both in a liquid medium [22] and in a colloidal medium [37,38].

The heat propagation velocity V_ht represents the heat propagation rate by the vibration of the base fluid molecules. In a stationary liquid, individual molecules are constantly moving, and their motions are largely confined within a "cage" formed by the closely packed neighboring molecules [36]. This virtual cage is conceived by the energy barrier of height G₀⁺/Ñ, where G₀⁺ represents the molar free energy of activation for escaping the cage and Ñ denotes the Avogadro number.
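The order-of-magnitude estimate above can be reproduced in a few lines; the product form Δk_enh = f·ρ_p·c_p·V_ht·l_ht is assumed here as the reading of Equation 2 (an assumption, since the equation itself did not survive extraction), with the values quoted in the text.

```python
# Minimal numeric check of the order-of-magnitude estimate in the text,
# assuming dk_enh = f * rho_p * c_p * V_ht * l_ht (assumed reading of Equation 2).
f_rho_c = 3.2e4            # f * rho_p * c_p for 1 vol.% of 47-nm Al2O3, J/(m^3 K)
l_ht = 0.170e-9            # heat travel distance, m

for dk in (0.025, 0.100):  # measured enhancement range, W/(m K)
    v_ht = dk / (f_rho_c * l_ht)
    print(f"dk_enh = {dk:.3f} W/(m K)  ->  V_ht ~ {v_ht:.1e} m/s")
# Both ends land at 1e3-1e4 m/s, i.e., the sonic scale rather than the Brownian one.
```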
The molecular vibrational frequency ν is given by Equation 4, where k_b denotes the Boltzmann constant, h and R are the Planck constant and the gas constant, respectively, and T is the fluid temperature. The free energy of activation, G₀⁺, is assumed to be constant for a specified fluid and is also assumed to be directly related to the internal energy of vaporization at the normal boiling point [39]; the internal energy is given from Trouton's rule [40]. The propagation length scale λ_ht is calculated based on the assumption that the base fluid molecules and nanoparticles are arranged in a cubic lattice, with a center-to-center spacing given by (Ṽ/Ñ)^(1/3), where Ṽ is the molar volume.

New model for nanofluid thermal conductivity

Substituting Equations 3 and 5 into Equation 2 gives an expression for the effective nanofluidic thermal conductivity k_eff (Equation 6). Two additional modifications of Equation 6 are implemented. First, the volume fraction f is modified to a reduced volume fraction f^a (a < 1) to account for the coagulation of nanoparticles, which effectively reduces the original volume fraction [38]. The coagulation becomes more severe, requiring a smaller exponent a, with increasing particle concentration, because of the decreased inter-particle distance. For example, the surface-to-surface distance of the nanoparticles is twice the particle size at 1 vol.%; however, it can decrease to half the particle size at 5 vol.%. Secondly, the effective thermal conductivity of Equation 6 is modified by multiplying it by the heat capacity ratio of the base fluid to the nanoparticles, c_BF/c_p. It is known that a shorter heat dissipation time from the nanoparticles into the base fluid enhances the effective thermal conductivity of the nanofluid [41,42]. The heat dissipation time decreases with increasing heat capacity of the base fluid and decreasing heat capacity of the nanoparticles. In other words, nanoparticles with a smaller heat capacity require a shorter heat dissipation time to the base fluid, and this results in greater thermal diffusion and a higher effective thermal conductivity. The effective conductivity increases consistently with the heat capacity ratio c_BF/c_p.

Therefore, after accommodating the above two modifications, the effective thermal conductivity of nanofluids of Equation 6 is given by Equation 7, where C is a modified constant and c_BF is the base fluid specific heat. The heat transfer length scale l_ht is difficult to calculate directly, but it may be determined by order analysis and merged into the constant C. The exponents a and b are empirical constants that represent the effects of nanoparticle coagulation and of nanoparticle heat dissipation, respectively. A regression analysis of published experimental data by the authors [24] provides a = 0.70, b = 1.5, and C = 3.58 × 10⁻¹⁴ m for the case of Al2O3 nanoparticles of three different sizes (11 nm, 47 nm, and 150 nm in diameter) suspended in water under various experimental conditions, covering a volume concentration range of 1 to 4 vol.% and a tested temperature range of 21 to 71°C.

Figure 1 compares different types of velocity scales that are considered relevant in describing nanofluid thermal conductivity models: (1) three differently defined Brownian velocities for 47-nm Al2O3 nanoparticles [26-28], (2) the Brownian velocity of the base fluid (water) molecules [26], (3) the heat propagation velocity based on the currently proposed model (Equation 5), (4) the sound velocity in water [43], and (5) phonon velocities for selected solid media, α-Fe and silicon [44,45].
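Before turning to Table 2, two of these scales can be compared in a few lines. The sketch assumes the rms Brownian velocity √(3k_bT/m_p) for a 47-nm Al2O3 particle and a handbook value for the sound speed of water near room temperature; both are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal comparison of velocity scales: rms Brownian velocity of a 47-nm Al2O3
# particle (assumed sqrt(3*k_b*T/m_p)) versus the sound speed in water (~1482 m/s
# at 20 C, handbook value). Both definitions are illustrative assumptions.
import numpy as np

k_b = 1.3807e-23           # Boltzmann constant, J/K
T = 294.0                  # K (about 21 C)
d_p = 47e-9                # particle diameter, m
rho_p = 3970.0             # Al2O3 density, kg/m^3
m_p = rho_p * np.pi * d_p ** 3 / 6.0

v_brown = np.sqrt(3 * k_b * T / m_p)
v_sound = 1482.0           # m/s
print(f"Brownian rms velocity: {v_brown:.3f} m/s")
print(f"Sound speed in water : {v_sound:.0f} m/s  (~{v_sound / v_brown:.0f}x faster)")
```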
The phonon velocities are expected to be faster than the heat propagation velocity in a liquid because of the relatively higher thermal conductivities of solid media. Table 2 shows the functional expressions of these velocities and their calculated magnitudes over the tested temperature range of 21 to 71 °C (over which, for example, the sound velocity in water ranges from about 1480 to 1555 m/s). Note that all previously reported nanofluid thermal conductivity models use Brownian velocities for the heat propagation velocity, whereas the present propagation velocity is comparable to the sonic velocity in the base fluid, which is several orders of magnitude larger than the Brownian velocities. The Brownian velocities based on the nanoparticles are too slow to be compatible with the relatively fast heat conduction phenomena in liquids; furthermore, the Brownian velocities of the base fluid water molecules are also too slow to properly model nanofluidic conductivities. Nevertheless, we do not mean that Brownian motion is unrelated to the thermal conductivity enhancement, nor that Brownian convection is insignificant. What we imply is that the assumption in [26], i.e., that the Nusselt number can be expressed as Nu = Re²Pr², is invalid, because it is incorrect to neglect the first two terms, i.e., the lower-degree terms in Re·Pr, in the expression for the Nusselt number derived by Acrivos and Taylor [31]. In addition, in order to have a significant convection effect from the long-wavelength mode of molecular motion, the bulk fluid needs externally imposed gradients such as pressure, gravity, or temperature; a nanofluid, however, is in a quiescent condition, which cannot support any convection [34,46]. The Brownian velocity, as shown in Figure 1, is several orders of magnitude lower than the velocity scale of 10^3 m/s required to model the nanofluid conductivity enhancement.
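To make this velocity-scale comparison concrete, the sketch below evaluates the equipartition (rms thermal) velocity of a 47-nm Al2O3 particle, v = sqrt(3 k_B T/m), against a representative ~1.5 × 10^3 m/s sound speed in water. The Brownian-velocity definitions of [26-28] differ in detail, so this is an illustrative order-of-magnitude check rather than a reproduction of Table 2.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant [J/K]
T   = 300.0               # temperature [K]

d_p   = 47e-9             # Al2O3 particle diameter [m]
rho_p = 3970.0            # Al2O3 density [kg/m^3]
m_p   = rho_p * math.pi * d_p**3 / 6.0   # particle mass [kg]

v_brownian = math.sqrt(3.0 * k_B * T / m_p)  # equipartition rms velocity
v_sound    = 1.5e3                           # sound speed in water [m/s]

print(f"particle mass     ~ {m_p:.2e} kg")          # ~2.2e-19 kg
print(f"Brownian velocity ~ {v_brownian:.2f} m/s")  # ~0.24 m/s
print(f"sound / Brownian  ~ {v_sound / v_brownian:.0f}x")  # ~6000x
```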
Figure 2a-c shows the present model for the thermal conductivities of water-based nanofluids, Equation 7, in comparison with five published models [25-30], for three different nanofluids: 47-nm Al2O3 at 1 and 4 vol.%, and 30-nm CuO at 1 vol.%. The symbols represent the corresponding experimental data for Al2O3 [24] and CuO [present work].

Figure 2. Comparison of the present model (solid curves) with published models [25-29] for the thermal conductivities of nanofluids. The symbols represent the presently (CuO nanofluids) and previously (Al2O3 nanofluids [23]) measured conductivities from the University of Tennessee laboratory: (a) 1 vol.% Al2O3 nanofluid.

For all three nanofluids, the model of Xuan et al. [25] overestimates relative to Maxwell's model [2] for nanofluids. Jang and Choi's model [26] shows closer agreement, while the model of Kumar et al. [27] wrongly postulates the mean free path of the base fluid, as pointed out by Keblinski et al. [32], and completely fails to predict nanofluidic thermal conductivities for all presently tested conditions. Prasher et al. [28,29] show fairly good agreement with the experiments for the Al2O3 nanofluid, as shown in Figure 2a,c; however, for the CuO nanofluid (Figure 2b), their model underestimates the corresponding experimental data [24]. When completely different model parameters are imposed for CuO than for Al2O3, the model agrees well with the data; however, the model then lacks comprehensiveness, because different model parameters must be determined for different types of nanofluids. Finally, the model of Patel et al. [30] agrees fairly well with the experimental data at higher concentrations (Figure 2c) but overestimates the thermal conductivities at low volume concentrations (Figure 2a,b). In contrast, the present model of Equation 7 shows consistent agreement with the experimental data, not only for both nanofluids but also for all tested conditions of temperature and volume concentration. Furthermore, Figure 3 demonstrates the comprehensiveness of the present model of Equation 7 in comparison with published experimental data for both Al2O3 and CuO nanofluids from different leading groups [6,10,24].

Concluding remarks

In order to resolve the controversy associated with the Brownian velocity of nanoparticles being too slow to describe the microconvection effect on the thermal conductivities of nanofluids, a new and faster heat transfer velocity is proposed, on the same order as the speed of sound and rooted in a modified kinetic principle. Furthermore, the new model for the effective thermal conductivities of nanofluids, which is based on this faster heat propagation velocity and accounts for both nanoparticle heat dissipation and coagulation, describes the effective thermal conductivities of nanofluids more accurately and comprehensively for different types (Al2O3 and CuO) and sizes of nanoparticles (ranging from 10 to 150 nm), and over a wider range of temperatures than the most common published range of up to 50 °C. As conceptually similar studies, the recent thermal-wave [47] and dual-phase-lagging [48] heat conduction models have attracted researchers' attention because both can explain high-rate heat flux at the microscale and can also be applied to the thermal conductivity of nanofluids. Thermal-wave and dual-phase-lagging heat conduction were developed analytically, whereas the new model is developed from physical considerations and incorporates more practical factors, such as the particle coagulation effect and the heat dissipation effect. Therefore, our new model can bridge the practical thermal conductivity enhancement of nanofluids and the theoretical concepts of high-rate heat flux in nanofluids, such as thermal-wave and dual-phase-lagging heat conduction.
Degradation and Breakdown of Polymer/Graphene Composites under Strong Electric Field

In this work, we study the effect of strong electric fields on a polymer/graphene composite and the resulting morphology upon its dielectric breakdown. Our model system was produced by compounding up to 0.25 wt% graphene nanoplatelets (GNP) into poly(ethylene-co-vinyl acetate) (EVA), a soft polymer with low melt viscosity. A strong electric field of up to 400 V_rms/mm was applied to the EVA/GNP composite in the melt, and the sample's resistance over the electric field application was simultaneously measured. Despite the low GNP loading, which was below the theoretical percolation threshold, the electrical conductivity of the composite during electric field application dramatically increased to >10^−6 S/cm over 5 min of electric field application before reaching the current limit of the experimental apparatus. The conductivity growth follows the same scaling relationship as the theoretical model that predicts the rotation and translation times of GNPs in a polymer melt as a function of electric field strength. Since no significant GNP alignment in the composite was observed under transmission electron microscopy (TEM), we hypothesized that the increase in electrical conductivity was due to local electrical treeing of the polymer matrix, which eventually leads to dielectric breakdown of the composite. Electrical treeing is likely initiated by local GNP agglomerates and propagated through conductive channels formed during progressive dielectric breakdown.

Introduction

Most polymers are good electrical insulators. Since macromolecular chains are held together by covalent bonds, electron transport through most polymers is poor. Adding hard filler particles to a host polymer matrix can further improve material properties such as toughness, hardness, and resistance to heat and chemicals. As such, polymer composites containing inorganic insulating fillers such as glass, silica, and alumina are widely used in industry for electrical insulation applications, from low-voltage home appliances to high-voltage power grids [1,2]. In order to design polymer composites that are safe and provide adequate dielectric strength for high-voltage applications, the phenomenon of electric-field-induced breakdown of particle-reinforced polymers is well-studied [3,4]. Conductive polymer composites, on the other hand, are a different class of materials, produced by loading electrically conductive particles into a host polymer. Conductive polymer composites are applied in electrostatic discharge protection and electromagnetic interference shielding. In these applications, the composites may be temporarily exposed to high instantaneous electric fields created by static electricity or the surrounding environment [5]. The resultant electrical current may lead to electrical breakdown and material failure, thereby posing significant safety risks. However, the effect of an electric field on the possible breakdown of conductive polymer composites containing electrically conductive fillers is not well understood. Therefore, studying the effect of the electric field on conductive polymer composites could offer insights into their degradation and failure mechanisms.

Materials

Poly(ethylene-co-vinyl acetate) (EVA, Elvax 40 W, 40 wt% vinyl acetate, density = 0.965 g/cm³) was obtained from the Dow Chemical Company. EVA was dried in a vacuum oven at 40 °C for at least 12 h prior to use.
Graphene nanoplatelets (GNPs, N002-PDR) were obtained from Angstron Materials (Dayton, OH, USA); their material characteristics are reported in our previous publications [14-17]. Probe sonication was used to disperse and exfoliate the GNPs in tetrahydrofuran (THF, reagent grade, Sigma-Aldrich, St. Louis, MO, USA). In a centrifuge tube, the desired amount of GNP was added into ~40 mL of THF. The resulting suspension was continuously probe-sonicated (Branson Digital Sonifier SFX 250, Danbury, CT, USA) using a 1/4-inch probe at 75 W for 1 h in an ice water bath. In a separate container, 4 g of EVA was dissolved in 40 mL of THF. The sonicated GNP suspension was then added to the EVA/THF solution and stirred at room temperature for 5 min, followed by coprecipitation into ~500 mL of methanol (reagent grade, Sigma-Aldrich). The resulting composite was filtered and dried in vacuo for at least 24 h to remove the THF, then pressed into 1-mm-thick sheets by compression molding using a hot press (Wabash Carver Press, Wabash, IN, USA) at 180 °C and ~9 MPa pressure. Lastly, sample swatches were cut using a 3-mm-diameter circular die punch.

Electric Field Application with Heating Stage

The electric field was applied to the polymer melt using a PC-controlled high-voltage sequencer (LabSmith HVS448-1500, Livermore, CA, USA) in conjunction with a heating stage (Figure 1a). LabSmith Sequence PC software was used to generate a sinusoidal voltage while simultaneously recording voltage and current; the rate of data sampling and recording was 16 Hz. The composite sample was inserted into a circular hole of 3 mm diameter at the center of a 1-mm insulating silicone rubber spacer, which was then sandwiched between two copper electrodes. An image of the sample is given in Figure 1b. Copper wires were soldered onto the electrodes, which were in turn connected to the sequencer. The entire sandwich was placed on top of a homemade stainless-steel heating stage, constructed using a flexible heating element (40 W, 10 W/in², Omega Engineering, Norwalk, CT, USA) and a PID temperature controller (Omega Engineering). The electric field was applied after the composite sample had been heated to 160 °C to reduce the viscosity of the polymer matrix, and it was maintained until the maximal allowable current (~3-10 mA) had been reached, at which point part of the sample was burnt through. Smoke was emitted from the sample, and the remaining sample always contained a hole (Figure 1c).
Characterization with Transmission Electron Microscopy

After electric field application, the sample was rapidly quenched to below the glass transition temperature using dry ice. The sample was then carefully removed from the sandwich and embedded in epoxy resin, which was cured at room temperature. Ultrathin (~90 nm) sections were obtained by cryomicrotomy (Leica UC6, Wetzlar, Germany) at −140 °C using a diamond knife. Sample sections were prepared in the plane parallel to the direction of electric field application. Bright-field transmission electron microscopy (TEM) images were obtained using an FEI Tecnai G2 Spirit BioTWIN microscope (Hillsboro, OR, USA) at an accelerating voltage of 120 kV. In the TEM images, the horizontal stripe features are knife marks (artifacts from cryomicrotomy) and are parallel to the direction of the applied electric field.

Direct Electrical Conductivity Measurement during Electric Field Application

The electrical conductivity of the EVA/GNP composite was measured continuously during electric field application. Figure 2a shows the time-resolved raw voltage and current signals measured by the high-voltage sequencer across three EVA/GNP_0.25 wt% samples in the field direction, at a field strength of E_0 = 400 V_rms/mm and ω_AC = 1 s^−1. Since the voltage and current signals were in phase with each other, the composite effectively acted as a resistor in the AC circuit. The conductivity of the composite (σ_z, in S/cm, measured in the axial (z) direction based on the setup in Figure 1) was calculated using Ohm's law:

σ_z = (I/V)·(d/A),   (1)

where d is the thickness (0.1 cm) and A the surface area (0.08 cm²) of the composite, and I and V are the current and voltage signals from the high-voltage sequencer, respectively. The time-resolved electrical conductivities of the three samples are shown in Figure 2b. The current resolution was ±10 µA, representing the background noise of the sequencer; on the basis of the sample geometry, this corresponds to a lower conductivity detection limit of ~2 × 10^−8 S/cm. The results in Figure 2b suggest that the electrical conductivity of the EVA/GNP composite increased exponentially as a function of time once the electric field was turned on. The increase in electrical conductivity then slowed once the conductivity had reached ~10^−6 S/cm. The sequencer reached its current limit (~5 mA) after 3-5 min of electric field application. When the electrical current through the sample exceeded 1 mA, the current signal became unstable, fluctuated widely, and triggered the internal circuit mechanism of the sequencer to shut off the applied electric field. Therefore, the maximal electrical conductivity of the composite measurable with the high-voltage sequencer lies between 10^−6 and 10^−5 S/cm. In comparison, the neat EVA polymer exhibited insulating behavior when an alternating electric field was applied in the molten state for over 20 min (Figure 3): the voltage and current signals were out of phase with each other, and the AC current fell completely within the range of the instrument background noise (<10 µA). Given that there were no current-conducting species, this behavior was as expected for a neat polymer matrix.
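A minimal sketch of the Ohm's-law conversion in Equation (1), using the stated geometry (d = 0.1 cm, A = 0.08 cm²); with the ±10 µA noise floor and the nominal 400 V_rms drive, it reproduces the quoted ~2 × 10^−8 S/cm detection limit to within a factor of order one.

```python
def conductivity_s_per_cm(current_a, voltage_v, d_cm=0.1, area_cm2=0.08):
    """Axial conductivity from Ohm's law: sigma_z = (I/V) * (d/A)."""
    return (current_a / voltage_v) * (d_cm / area_cm2)

# Detection limit: sequencer noise floor (~10 uA) at 400 V_rms across 1 mm.
v_rms = 400.0                                 # [V]
print(conductivity_s_per_cm(10e-6, v_rms))    # ~3e-8 S/cm (quoted: ~2e-8)

# Current limit of the sequencer (~5 mA) bounds the highest measurable value.
print(conductivity_s_per_cm(5e-3, v_rms))     # ~1.6e-5 S/cm
```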
The difference in the time-dependent electrical conductivity between the EVA/GNP composite and the neat EVA polymer during electric field application suggests that the electric field imparted microstructural changes to the conductive GNPs within the composite. Lastly, we measured the time-resolved conductivity change of the EVA/GNP composite while varying the electric field strength (Figure 4a), the temperature (Figure 4b), or the GNP concentration (Figure 4c). The results show that the EVA/GNP composites required a longer time until electrical breakdown when the electric field strength, temperature, or GNP concentration was lowered. For all samples shown in Figure 4 (except for the EVA/GNP_0.05 wt% sample in Figure 4c), the conductivity measurement was stopped at electrical breakdown, when smoke was emitted from the sample and a hole through the sample was created (Figure 1c). The onset of the electrical current increase showed sample-to-sample variations, likely arising from the inherent inhomogeneity in graphene dispersion between swatches. Nonetheless, the rate of current increase was comparable once the current reading was above the instrument's detection limit.

TEM Imaging of Composites after Electric Field Application

Next, TEM imaging of the EVA/GNP composites was performed to evaluate whether electric field application changed the microstructural morphology of the composite. Due to the softness of the EVA polymer and the black visual appearance of the EVA/GNP composite, characterization with the naked eye or optical microscopy was not possible.
On the other hand, TEM allows for the direct visualization of GNP dispersion and orientation within the blend. Cryomicrotomy was performed in the plane of the electric field application after a portion of the bulk sample had been embedded in an epoxy matrix. During cryomicrotomy, small defects on the diamond knife's cutting edge create two types of artifacts that help identify the direction of the electric field application (Figure A1). The first type is knife marks, visible as sparse patterns oriented perpendicular to the knife's cutting edge. The second type is chatter, which arises from sample vibration and irregular compression between the specimen cutting face and the knife edge; this produces dense patterns oriented parallel to the knife's cutting edge. Figure 5 shows representative TEM images of EVA/GNP_0.25 wt% samples before and after electric field application. No percolating GNP network was observed in either sample, because the GNP concentration was below the percolation threshold for a homopolymer matrix (~0.5 wt% based on our previous work [17]).

Discussion

Next, we discuss two possible hypotheses for the electrical conductivity increase in EVA/GNP composites under the applied electric field. First, we consider whether GNPs could become aligned under the electric field, leading to the increase in bulk electrical conductivity. Second, we consider whether localized electrical treeing during dielectric breakdown could induce the increase in electrical conductivity.

GNP Alignment-Induced Conductivity Increase

Small particles such as GNPs can be aligned under an electric field via dielectrophoresis. In order to evaluate whether an individual GNP would align under our experimental conditions, we applied the model proposed by Wu et al. [11], modeling the EVA/GNP composite as a dielectric matrix with conductive solid inclusions. The application of a sinusoidal alternating electric field can induce both rotational and translational movement of an individual GNP nanosheet. Assume that an individual GNP sheet is initially oriented at angle θ_0 relative to the electric field and placed at distance x_0 from another GNP sheet.
In response to the field, this GNP sheet first requires a rotation time t_r (Equation (2)) to become aligned to within 1° of the field direction; once aligned, it requires an additional translation time t_c (Equation (3)) to form an end-to-end connection with the adjacent GNP sheet. Both time scales vary as η/(ε_m E_0²), and the detailed derivations of these equations can be found in [11]. The following values relevant to our EVA/GNP system were used in our calculations: a and b refer to the lateral dimension (~1 µm) and thickness (~1 nm) of individual GNP sheets, respectively; the melt viscosity of the EVA matrix at 160 °C was η = 450 Pa·s (experimentally measured with an ARES-G2 rheometer); E_0 = 400 V_rms/mm is the strength of the electric field; ε_m = 6.7 ε_0 is the dielectric constant of the EVA matrix (experimentally measured using the dielectric rheology accessory at the processing temperature); and k_t is the translational friction coefficient of a GNP sheet (Equation (4)), whose form is given in [18]. Our calculation shows that applying 400 V_rms/mm for 5 min was sufficient to align GNP sheets to within 15° of the electric field direction (Figure 6). Once the GNPs become aligned, the end-to-end connection of two adjacent sheets separated by a few micrometers forms within seconds, which leads to the formation of a conducting network in the field direction (Figure 6).

Figure 6. Rotational time (in black, t_r) required for an individual GNP to become aligned with the electric field as a function of the initial degree of misalignment (bottom x-axis), and translational connection time (in red, t_c) for two aligned GNPs to form an end-to-end connection as a function of the initial separation distance (top x-axis). The melt viscosity and dielectric constant of the polymer matrix correspond to our EVA/GNP composite at 160 °C under the experimental conditions.

However, the TEM results in Figure 5 show that applying the electric field to the EVA/GNP melt did not result in global GNP alignment. Contrary to the model, which assumes a single GNP sheet in the polymer matrix, GNPs in the composite formed local agglomerates (highlighted as dashed circles). Even though the blends were prepared by extensive probe sonication and solution blending, the GNPs were not fully exfoliated, due to the strong π-π interactions between sheets. Therefore, the Wu model, which describes individual graphene alignment under the electric field, does not adequately describe the present system. The strong interparticle interaction causes GNP agglomeration, which reduces the polymer/particle interfacial volume and the particles' ability to improve the composite's dielectric properties.
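The η/(ε_m E_0²) scaling of t_r and t_c can be checked dimensionally: ε_m E_0² is an electric stress in pascals, so η/(ε_m E_0²) is a time. The sketch below evaluates this characteristic time for the stated parameters; the shape- and angle-dependent prefactors of the full Wu model are omitted, so this is an order-of-magnitude estimate only.

```python
# Order-of-magnitude dielectrophoretic alignment time, t ~ eta / (eps_m * E0^2).
# Prefactors from the full model of Wu et al. are omitted in this sketch.
eps0  = 8.854e-12          # vacuum permittivity [F/m]
eps_m = 6.7 * eps0         # EVA matrix permittivity at 160 C [F/m]
eta   = 450.0              # EVA melt viscosity at 160 C [Pa s]
E0    = 400e3              # field strength, 400 V_rms/mm [V/m]

electric_stress = eps_m * E0**2       # [Pa]
t_char = eta / electric_stress        # characteristic alignment time [s]
print(f"electric stress ~ {electric_stress:.1f} Pa")  # ~9.5 Pa
print(f"t ~ {t_char:.0f} s")  # ~47 s: minutes-scale, consistent with Fig. 6
```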
Modifying the nanoparticle-polymer interaction can lead to changes in the final composite's material properties, such as rheology, mechanical properties, particle dispersibility, and dielectric strength [19-22]. Increased particle-polymer interaction changes both the polymer morphology and the local charge distribution at the polymer-nanoparticle interface, thereby improving the composite's dielectric properties. Siddabattuni et al. demonstrated that the dielectric constants of TiO2/epoxy composites can be controlled by modifying TiO2 surfaces with self-assembled monolayers of organophosphate ligands with different chemical functionalities [22]. However, the chemical functionalization of neat graphene sheets is difficult, due to the strong sp² hybridization of the carbon structure, and can decrease the electrical conductivity of the graphene nanoparticles. We performed a statistical analysis of GNP sheet orientation in the TEM images of the EVA/GNP blends after electric field application (Figure 5c,d) and found no correlation between GNP orientation and the electric field direction. After electric field application, we extracted the orientation angle of each individual GNP sheet (150 sheets in total) within a single TEM image. The standard deviation of the orientation angle was ±52° relative to the mean orientation angle, indicating that the sheets were still randomly oriented over a wide distribution. Additionally, the mean GNP orientation angle differed from the electric field direction by 40°. This evidence suggests that the electric field had a much weaker aligning effect on GNP aggregates than it would have on individual GNP sheets; the effect of the electric field on the composite was therefore localized. Since TEM imaging provides only a limited field of view (on the order of 10 µm²), whereas the cross-sectional area of the test specimen is ~8 mm², it is possible that the actual conductive pathway induced by the electric field occurred in a different area. Nevertheless, had the electric field induced global effects on the composite, such as graphene orientation, these effects would have been observed throughout the composite regardless of the specific area examined.

Dielectric Breakdown Induced Conductivity Increase

The second hypothesis is that the observed increase in electrical conductivity during electric field application arises from the sample's dielectric breakdown. When the sample was removed from the electrodes after reaching the maximal allowed current of the sequencer, a cavity was found towards the edge of nearly all samples (Figure 1c). Additionally, upon reaching the maximum allowable current, smoke was emitted from the sample, indicating that the polymeric composite underwent electrical and thermal degradation, which ultimately formed soot particles [23]. These observations indicate that dielectric breakdown of the sample occurred due to the electric field. The dielectric breakdown of polymers by tree propagation under an external electric field can be described in three stages [24]. First, tree inception occurs at a point of high local electric field near the electrode. This process is usually enhanced by local defects in the composite, such as cavities in the dielectric medium, conductive fillers near the surface, roughness of the contacting electrode, and partial discharge activity [3,25].
Second, tree growth is initiated by partial discharge at the polymer surface, which leads to surface erosion and a decrease in material thickness. The loss of material can come from a number of processes related to electric field application, such as direct ion bombardment, localized heating by gases generated from the degradation of the polymer to CO2, and excitation and oxidation of surface molecules [4]. Eventually, electric-field-induced surface erosion creates small channels that penetrate the polymer matrix, forming the first conductive bridge across the two electrodes [25]. Third, once a conductive pathway is formed between the two electrodes, the small, branched channels widen, and the electrical conductivity continues to increase until dielectric breakdown. The early stages of electrical treeing and channel growth are often modeled as a stochastic process whose rate is proportional to the electric field strength [20,26]. Nonetheless, electrical treeing at high field strengths can be deterred by adding well-dispersed insulating inorganic particles (e.g., silica, titanium dioxide, aluminum oxide, silicon carbide) to the polymer matrix. This enhances the dielectric strength of the host polymer, creating functional materials for high-voltage insulation (>10³ V/mm) applications. When insulating nanoparticles are well-dispersed in a polymer matrix, the large interfacial area between the particles and the host matrix and the small interparticle distance both provide physical barriers that impede the flow of electric current between the two electrodes (Figure 7a). For conductive fillers, Han et al. found that the dispersion state strongly affects the rate of electrical treeing in polymer nanocomposites [27]. In graphene/silicone rubber systems, a small amount (~0.005 wt%) of well-dispersed GNPs can act as physical barriers that inhibit electrical treeing: during tree growth, the channels align and grow preferentially along the polymer/graphene interface, creating a "bush tree" pattern that propagates slowly within the matrix (Figure 7b). On the other hand, poorly dispersed GNPs at higher concentrations can create locally highly conductive regions with reduced interparticle distances and lower dielectric strengths; tree channels propagate through these regions rapidly, causing lower degradation resistance and faster dielectric breakdown (Figure 7c). While the EVA/GNP blends used in this study were prepared by probe sonication and solution blending, local GNP aggregates are still readily found throughout the blend (Figure 5). Hence, regions within the blend that contain clustered GNP agglomerates likely contribute to the electrical treeing and dielectric breakdown of the composite under electric field application. Since the electrical treeing phenomenon is localized and occurs at a random location within the bulk sample, it is difficult to observe treeing directly with imaging techniques such as TEM.
Figure 7. Schematic diagram of electric tree growth through (a) a polymer/insulating silica particle composite with good particle dispersion, and (b,c) a polymer/graphene composite in which graphene within a local area is (b) at low concentration and evenly dispersed or (c) at higher concentration and poorly dispersed. In (a), tree growth proceeds in the particle-free area but is hindered near the polymer/particle interface. The treeing process strongly depends on particle dispersion if conductive particles such as graphene are used. In (b), graphene is locally well-dispersed, and tree formation is inhibited by electric field distortion along the polymer/graphene interface. In (c), local graphene agglomerates create locally conductive regions (highlighted in red) with lower dielectric strength that accelerate electrical treeing. Figure (a) is adapted with permission from Ref. [3]. Copyright © 2009, IEEE. Figures (b,c) are adapted with permission from Ref. [27]. Copyright © 2019 by the authors.

Parameters That Could Influence Dielectric Breakdown under the Electric Field

Lastly, we measured the evolution of electrical conductivity as a function of time as EVA/GNP composites were subjected to an applied electric field up to the point of dielectric breakdown. We systematically varied the electric field strength (Figure 4a), the matrix viscosity (Figure 4b), and the GNP concentration (Figure 4c) in order to understand the factors that influence the rate of dielectric breakdown in these composites.
The growth in electrical conductivity of EVA/GNP composites under the external electric field undergoes three stages. Initially, the sample conductivity is low, because the GNP concentrations of all samples were well below the percolation threshold in EVA (~0.6 wt%); thus, the current signal was below the resolution of the high-voltage sequencer (~10 µA, corresponding to ~2 × 10^−8 S/cm). Next, as the electric field induced the rotation and translation of individual GNP sheets, there was a sharp insulator-to-conductor transition. Lastly, the current growth appeared to be self-limiting upon nearing the current limit of the sequencer (~1 mA, corresponding to ~3 × 10^−6 S/cm). At that stage, electrical treeing had presumably already created a conductive pathway across the specimen thickness, and dielectric breakdown was occurring. Here, we implement a simple logistic model to describe the progressive dielectric breakdown; similar models have been used to describe the breakdown of metal-oxide semiconductors [28]. The three-parameter logistic growth model of the composite's electrical conductivity σ(t) is constructed as follows:

σ(t) = σ_0 / (1 + exp[−(t − t_0)/τ]),   (5)

where σ_0 is the maximal conductivity prior to dielectric breakdown, t_0 is the critical time at which the rate of conductivity increase is highest, and τ is the characteristic duration over which the conductivity grows from 0.1σ_0 to 0.9σ_0 [28]. The fitting results are summarized in Table 1. Overall, the logistic model provides an adequate fit to the electrical conductivity growth of the EVA/GNP composites, especially after the onset of the sharp increase in electrical conductivity. During the early stage of electric field application, the measured electrical conductivity is subject to higher variability and uncertainty; possible reasons include sample-to-sample variability, such as sample surface roughness and defects, and imperfect contact with the Cu electrodes. These factors could all affect the probability of tree inception. Nonetheless, the simple logistic model offers important insights into the effects of field strength and filler concentration on the dielectric breakdown of polymer/graphene composites.
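A sketch of such a fit is given below; it assumes the standard three-parameter logistic form reconstructed in Equation (5), and the data arrays are hypothetical placeholders rather than the measured traces of Figure 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, sigma0, t0, tau):
    """Three-parameter logistic conductivity growth, Equation (5)."""
    return sigma0 / (1.0 + np.exp(-(t - t0) / tau))

# Hypothetical time/conductivity trace standing in for a Figure 4 dataset.
t_s = np.linspace(0, 300, 60)                 # time [s]
sigma = logistic(t_s, 3e-6, 180.0, 42.0)      # [S/cm]
sigma += 1e-8 * np.random.default_rng(1).normal(size=t_s.size)  # noise floor

popt, _ = curve_fit(logistic, t_s, sigma, p0=(1e-6, 150.0, 30.0))
sigma0, t0, tau = popt
print(f"sigma0 = {sigma0:.2e} S/cm, t0 = {t0:.0f} s, tau = {tau:.0f} s")
print(f"t0/tau = {t0 / tau:.1f}")  # the paper reports 4.1-4.3 for most samples
```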
Several general observations can be established from our results. When the GNP concentration was near or above the percolation threshold (~0.5 wt%), EVA/GNP underwent an instantaneous runaway increase in electrical conductivity when the field was turned on; as these samples broke down instantaneously, conductivity measurements could not be obtained. When the particle loading is above the percolation threshold, local conductive pathways already exist in the bulk sample, so electrical treeing proceeds through the conductive regions with lower dielectric strengths and leads to an early onset of dielectric breakdown. Additionally, t_0/τ, the ratio between the time of fastest conductivity growth and the characteristic duration, falls within the narrow range of 4.1-4.3. In the subsequent discussion, we use the parameter τ, the characteristic duration of conductivity growth, to describe the effect of the electric field leading towards the eventual dielectric breakdown of the composite. For the lowest-concentration 0.05 wt% GNP sample, a slightly lower σ_0 = 2.0 × 10^−7 S/cm and a higher t_0/τ ≈ 5.6 were found. First, we studied the effect of the electric field strength on the dielectric breakdown at a fixed GNP concentration of 0.25 wt%. According to Table 1, τ measured at E_0 = 250 V_rms/mm was ~2.7 times its value at E_0 = 400 V_rms/mm. Even though direct evidence of GNP alignment was not observed in TEM, the characteristic duration of conductivity growth scales in the same way as the rotation and translation times of individual GNP sheets under the electric field, which, on the basis of Equations (2) and (3), vary as t_r, t_c ∼ η/E_0². Given that the experiments were performed at the same temperature of 160 °C, the viscosity of the EVA was identical across all samples; the individual particle rotation and translation times for the two field strengths should therefore differ by a factor of [(250 V_rms/mm)/(400 V_rms/mm)]^−2 = 2.56 ≈ 2.7. Next, the effect of matrix viscosity on the dielectric breakdown was studied by varying the temperature of the composite melt during electric field application. In Table 2, we report the zero-shear melt viscosity of the EVA matrix at temperatures between 120 and 160 °C, based on small-amplitude oscillatory shear measurements. By fitting τ, obtained from the three-parameter logistic model of Equation (5), as a function of the different melt viscosities, we find the approximate relationship τ ∼ η (see Figure A2). This also agrees empirically with the theory for the rotation and translation of individual GNPs under the electric field, i.e., t_r, t_c ∼ η/E_0². The result therefore suggests that the rate of conductivity increase is related to the rate of motion of individual conductive particles within the composite: the electric field induces motion of the anisotropic conductive fillers to form a local conductive pathway, which in turn reduces the dielectric strength and causes dielectric breakdown. Differences in the onset time of the electrical conductivity increase between samples under similar processing conditions may arise from inherent sample-to-sample variations in GNP dispersion and in sample contact with the test electrodes. Lastly, we observed that τ decreased dramatically with increasing GNP concentration at a fixed electric field strength of 400 V_rms/mm. The rate of conductivity increase was much slower for blends with the lower GNP concentrations of 0.10 and 0.05 wt% (Figure 4c). Blends with higher GNP concentration exhibit a shorter average interparticle distance, whereas a lower concentration of the conductive filler reduces the probability of electrical tree growth through conductive regions. Accordingly, blends with 0.10 or 0.05 wt% GNP withstood eventual dielectric breakdown for a longer duration (~25-60 min), as opposed to ~2-3 min for the EVA/GNP_0.25 wt% blends. For GNP concentrations above 0.1 wt%, σ_0 reached at least 10^−7 S/cm despite the GNP concentration being well below the percolation threshold. This dramatic enhancement in the maximal electrical conductivity may indicate the formation of a conductive pathway by electrical treeing during electric field application. While the origin of the dielectric breakdown during electric field application is presumably dominated by local electrical tree formation, blends with higher GNP concentration are more likely to exhibit locally conductive regions that favor treeing propagation. The results in Table 1 show that doubling the GNP concentration leads to an approximately order-of-magnitude reduction in τ.
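The two scaling checks above can be made explicit in a few lines; the field-strength ratio uses only numbers quoted in the text, while the viscosity values below are placeholders standing in for the Table 2 entries, which are not reproduced here.

```python
# Scaling checks for tau ~ eta / E0^2 (from Equations (2) and (3)).

# 1) Field-strength ratio at fixed viscosity (160 C):
ratio_field = (250.0 / 400.0) ** -2
print(f"tau(250)/tau(400) predicted: {ratio_field:.2f}")  # 2.56, measured ~2.7

# 2) Viscosity ratio at fixed field strength. The values below are assumed
#    placeholders for the zero-shear viscosities listed in Table 2.
eta_160C, eta_120C = 450.0, 1800.0   # [Pa s]; eta_120C is hypothetical
print(f"tau(120C)/tau(160C) predicted: {eta_120C / eta_160C:.1f}")
```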
Conclusions

In this work, we studied the effect of an alternating electric field on the electrical conductivity of polymer/graphene composites. Applying an electric field of up to 400 V_rms/mm to EVA/GNP composites in the melt greatly increased their electrical conductivity, to ~10^−6 S/cm, despite filler concentrations below the percolation threshold. While theory predicts that an alternating electric field induces rotational and translational motion of anisotropic filler particles loaded in an insulating polymer matrix, we did not observe global particle alignment in the field direction by TEM imaging. Instead, visual inspection revealed that the samples underwent dielectric breakdown. We attribute the origin of the dielectric breakdown to the propagation of electrical trees: local GNP agglomerates increase the electrical conductivity and lower the dielectric strength in those regions, accelerating the channel formation of electrical trees during electric field application and ultimately leading to dielectric breakdown. We established a simple logistic growth model to describe the increase in electrical conductivity due to electrical tree formation as a function of time. Qualitatively, the electrical conductivity of the EVA/GNP blends grew more slowly when the applied electric field strength was lower or the composite viscosity was higher, scaling as predicted for field-induced graphene alignment. It is possible that the local alignment of a few random GNP sheets promotes electrical treeing of the surrounding polymer dielectric, which ultimately leads to the dielectric breakdown of the polymer composite. Additionally, the characteristic duration of conductivity growth decreased exponentially with increasing GNP concentration. A very low concentration of well-dispersed graphene could delay treeing by forming a tree bush around isolated particles; however, our blends with higher GNP concentration exhibited shorter interparticle distances and contained more locally conductive regions of GNP aggregates that favor electrical treeing propagation. Even though the mechanism of electrical conductivity growth within EVA/GNP composites was complicated by the nonuniform dispersion of GNPs, our model could be adapted to other polymer/GNP systems to establish scaling relationships between nanocomposite properties and the degradation and breakdown dynamics under an applied electric field.

Conflicts of Interest: The authors declare no conflict of interest.
Recognizing Foreign Object Debris (FOD): False Alarm Reduction Implementation

Received Jan 31, 2018; Revised Apr 6, 2018; Accepted Apr 20, 2018

Recognition of foreign object debris (FOD) on runways is mandatory to avert accidents and emergencies. Accurate and precise detection of FOD is very complex because of the intricate shapes and tiny sizes of debris, which are not easily visible. For the prompt removal of FOD from runways, a robust, accurate, and precise system is badly needed. Therefore, in our research we propose a robust system comprising an ultrasonic sensor and an infrared image-capturing device, combined with a false-alarm-reduction algorithm based on infrared image segmentation and morphological edge identification. After the segmentation and morphological processing, a unifying decision classifier was designed to identify the actual targets. Several approaches have been pursued for the detailed and rapid investigation of FOD. Testing and validation have shown that our proposed approach performs well compared to other techniques. In this research, the ultrasonic sensor results are integrated with the processed infrared images.

INTRODUCTION

Foreign object debris has been a core cause of aircraft infrastructure damage and loss of human lives [1]. Debris must be detected briskly and reported to the main station for its prompt removal [2]. An annual loss of around 3 to 4 billion US dollars has been attributed to inadequate foreign object debris identification systems [3]. Due to the tiny, enigmatic sizes and shapes of debris, it is very hard to discriminate between real targets and false alarms [3]. Most foreign object debris consists of metal items of very small size, such as nuts, bolts, and strips [3,4]. Tarsier 1100, FODetect, and FOD Finder are established FOD identification systems that use millimeter-wave radar in combination with a video camera [3,4]. Since the dreadful Concorde accident of July 25, 2000, which was caused by a metal strip, FOD analysis on runways and railway traction has been under deep examination [3-5]. Manual surveillance is practically unachievable due to the nonstop, unending sequence of landings and take-offs [3-5]. Birds, small nuts and bolts, and tiny falling chunks or scraps have various sizes and shapes; it is therefore difficult to observe FOD manually, and manual observation suffers high false alarm rates due to the limits of human analysis [4,5]. Several sensor-fusion algorithms have been developed to mitigate false alarms [6]. In a large field of view, an infrared system needs a high data-rate bandwidth for the images. The designed algorithm performed image segmentation of the infrared images by the OTSU procedure and identified edges by morphological processing [7]. A sensor-fusion algorithm permits the combination of data from various sensor types to achieve higher accuracy and precision. Various techniques and methods have been applied for the real-time investigation of FOD. We propose a novel, cost-effective solution with minimal false alarms: a prototype using primary and secondary transducers in combination with a camera, based on an image processing algorithm. This paper is further divided into sections as follows. Section II explains recent FOD detection trends, Section III describes the problem statement, and Section IV explains the methodology, which comprises the problem statement and the proposed technique.
Section V elaborates the experimental testing results, and Section VI discusses the conclusion.

RELATED EARLIER WORK

Efficient visualization of obstructions and debris is required to prevent hazards in advance [8]. Highly reliable imaging technology is essential for the dense surveillance of airport runways and railway traction systems; therefore, the carrier frequency was increased to a higher band such as the W-band (75-110 GHz) [8]. A sensory network system was connected by optical fiber [6-8], with radio-over-fiber distributing the frequencies to the radar systems [8]. It has been noticed empirically that an optical fiber link with a millimeter-wave radar system can be beneficial for the quick and reliable identification of foreign object debris (FOD) over the large field of aircraft runways [8,9]. Figure 1 shows that the signal can be transmitted from the central office to the millimeter-wave radar head using a fiber optic link; the transmission loss is lower than that of an RF coaxial cable [8]. Another unique approach was to design a model of foreign object debris (FOD) detection based on 2D algorithms in the W-band (75-110 GHz) [9]. Synthetic Aperture Radar (SAR), with its robust capabilities for weather monitoring and high-resolution imaging, plays a crucial role in the national economy, the military, and other fields [3]. SAR imaging theory states that, to resolve the two dimensions of an actual target, the relative position between the radar and the target must vary [3]. Basically, SAR combines many small antennas and modulates them into one large synthetic antenna [3]. SAR produces two-dimensional images of the target, which are delivered to the controller for further processing [3]. Two different curled reflectarray antennas combined with a cosecant-shaped design have been utilized to enhance the system's processing and coverage area [10]. A CFAR (constant false alarm rate) algorithm combined with moving-target-indication processing would be a better achievement [10]. Linear filtering approaches are performed by mathematical operations but produce blurred images, whereas non-linear filtering methods can preserve the edges of the image [11]. Since edges are usually linked with the outer contour of the image, edge analysis can also be performed for segmentation, and features may be extracted as well [12]. The main motive of the fuzzy-based morphological component was to obtain strongly connected edge characteristics of the image [12].

PROBLEM STATEMENT

Foreign object debris can be of any shape and size and may cause dreadful accidents if not detected early. Debris may harm pilots and passengers, as well as causing aircraft and infrastructure losses. Debris can be a tool, wire, strip, metallic hardware part, connector, nut, bolt, or nozzle. Much earlier research is available in which several methodologies and techniques have been designed and developed to recognize foreign object debris exactly and flawlessly. For the security of aircraft and to avoid further losses, an intelligent and smart system is strongly needed that is competent enough to detect foreign object debris (FOD) accurately and precisely. To address this issue, we propose a cost-effective solution based on transducers and an infrared-image processing algorithm for rigorous FOD investigation. A remaining problem was the existence of missing information, high false alarm rates, and errors in compressed images.
To mitigate this issue, a transmission protocol was needed in the application layer so that the compressed images are received properly, accurately, and in the right order.

METHODOLOGY

In a large field of view, an infrared system needs a high data-rate bandwidth for the images. Our proposed algorithm segments the images by OTSU segmentation (image thresholding) and detects the edges by a morphological technique. The recurring question is how to achieve accuracy and precision in detecting the real target objects so as to minimize false alarms.

OTSU and Edge Processing

Setting the threshold level is quite complex, due to the ambiguous surroundings and dissimilarities in the greyscale of infrared images. There are two types of filtering approaches: linear filtering is performed by mathematical operations but produces blurred images, whereas non-linear filtering can preserve the image edges, as edges are correlated with image boundaries. We used an iterative OTSU segmentation method, which can be considered a real-time image processing algorithm. Edge detection was done by morphology: the morphological gradient can be described as the difference between the maximum and minimum greyscale within a structuring element, and it has the capability to enhance the image. An IRST (infrared search and track) system captured real images of complicated, long, black-coloured runways, on which the method was tested and validated. OTSU segmentation and morphological processing were performed on the original infrared images, and debris false alarms were minimized by a fusion decision criterion algorithm.

Feature Extraction

After performing the OTSU segmentation and morphological processing, it is still hard to discriminate the actual targets from other false-alarm elements, as they have the same greyscale features. Several characteristics were therefore extracted to help distinguish actual debris targets from similar jamming elements; among them, the length-to-width ratio was analyzed.

SNR and Contrast Ratio

The SNR was calculated to investigate the target intensity level relative to the surroundings, using the following expression:

SNR = (μ_t − μ_b) / σ,   (1)

where μ_t and μ_b are the greyscale values of the actual target and the runway surroundings, respectively, and σ is the variance of the background greyscale. The local contrast ratio can be estimated as

C = μ_s / μ_b,   (2)

where μ_s represents the average value of the segmented region and μ_b the average value of the background.

Classifier

Moving debris commonly exhibits a consistent appearance across consecutive frames of the image sequence, whereas the patterns of jamming elements are haphazard and irregular; motion features are extracted on this basis. Figure 2 displays the infrared image processing results after the segmentation and morphological image processing. The feature extraction technique was also applied to minimize the false alarm rate and keep the identification ratio high. Figure 3 illustrates the main flow of the proposed research for foreign object debris identification. Ultrasonic sensors detect the presence of obstacles, objects, and debris, and the infrared images are transmitted through the communication protocol used in the conference paper related to this extended work [13]. Figure 5 shows the sensor status in the presence of foreign object debris as detected by the ultrasonic sensors.
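A compact sketch of this pipeline using OpenCV: Otsu thresholding, a morphological gradient for edge extraction, and the SNR and contrast features of Equations (1) and (2). The input frame is a synthetic placeholder, and the kernel size and 8-bit greyscale format are assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

# Synthetic stand-in for an 8-bit greyscale infrared runway frame:
# a dark background with one brighter debris-like blob.
rng = np.random.default_rng(0)
img = rng.normal(60, 3, size=(240, 320)).astype(np.uint8)
cv2.circle(img, (160, 120), 12, 180, thickness=-1)  # bright "debris" target

# 1) OTSU segmentation: threshold chosen automatically from the histogram.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) Morphological gradient: dilation minus erosion marks object edges.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
edges = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)

# 3) Region features for false-alarm rejection (Equations (1) and (2)).
target = img[mask > 0].astype(float)       # candidate target pixels
background = img[mask == 0].astype(float)  # runway surroundings
snr = (target.mean() - background.mean()) / background.std()
contrast = target.mean() / background.mean()
print(f"SNR = {snr:.2f}, local contrast = {contrast:.2f}")
```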
CONCLUSION

Experimental outcomes confirmed that our proposed approach performs as well as, or better than, the earlier related work. Our suggested approach, consisting of transducers and an image processing algorithm with few false alarms, is a cost-effective solution in contrast to other recent approaches; current radar-based, transducer-based, and drone-based procedures cannot be considered cost-effective solutions. We have tried our best to mitigate the false alarm rate, by up to 10%, using MSER in our previous paper [13], and here we have performed infrared image processing based on OTSU segmentation, morphological image and edge processing, and feature extraction. As future work, this research can be extended by applying an MLP (multi-layer perceptron) as a classifier to eliminate false alarms even more accurately and precisely.
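As a pointer to this proposed future work, the sketch below wires the extracted features (SNR, local contrast, length-to-width ratio) into a small MLP classifier; the feature statistics are synthetic placeholders, not measurements from the prototype.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative future-work sketch: an MLP over the extracted features
# (SNR, local contrast, length-to-width ratio). Data below are synthetic
# placeholders, not measurements from the FOD prototype.
rng = np.random.default_rng(42)
X_debris = rng.normal([6.0, 1.8, 3.0], 0.5, size=(50, 3))   # true FOD
X_clutter = rng.normal([2.0, 1.1, 1.2], 0.5, size=(50, 3))  # jamming elements
X = np.vstack([X_debris, X_clutter])
y = np.array([1] * 50 + [0] * 50)  # 1 = debris, 0 = false alarm

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```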
2,381.4
2018-07-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Content Development for a Virtual Social Engagement Intervention Abstract Video technology has the potential to provide older adults with socially and cognitively engaging activities for in-home participation. We are exploring use of OneClick.chat, a video technology platform, to present older adults with and without mild cognitive impairment opportunities for engagement. In collaboration with iN2L we have developed events that will facilitate conversations that do not rely on episodic memory, cover a range of topics, and represent different cultures and interests. We selected event topics that were positive, socially and cognitively engaging, and included a range of pictures based on our previous research. Events were carefully controlled for length of presentation, picture type, and readability. Discussion questions related to the events were designed to stimulate engaging conversations through open-ended questions and to not burden memory recall or enforce stereotypes. Our work highlights potential future avenues for researchers and home and community-based organizations to use technology to promote social engagement. how this intervention was designed to facilitate engagement. This will be followed by a presentation by X. Lin on the relationship between social media usage and well-being across the lifespan, and mediators of this relationship. The session will conclude with a presentation by W. Qin on predictors of older adults' use of telehealth technology to support health and wellbeing during the COVID-19 pandemic. VIDEO CHAT TECHNOLOGY TO SUPPORT HOME AND COMMUNITY-BASED ORGANIZATIONS Brielle Ross, 1 Allura Lothary, 2 Dillon Myer, 3 Raksha Mudar, 4 Wendy Rogers, 1 and Madina Khamzina, 1; 1. University of Illinois Urbana-Champaign, Urbana, Illinois, United States; 3. OneClick.chat, Philadelphia, Pennsylvania, United States; Champaign, Illinois, United States. Concerns about loneliness and social isolation for older adults were already evident but have been exacerbated during the pandemic. Home and Community Based Organizations (HCBOs) provide support for their older clients in the community and need to support their staff, who may be working remotely. We are exploring the potential of video chat technology to connect older adults with their friends, families, and other support. We review the technologies available to older adults in the community and staff working with older adults to promote social engagement. We are collaborating with OneClick.chat to identify the needs of the HCBOs through a literature review and qualitative interviews of staff members from different senior living environments. Their challenges and successes of engaging older adults through video chat technologies will provide guidance for design of an HCBO dashboard for OneClick.chat that will support diverse needs. 
THE RELATIONSHIP BETWEEN SOCIAL MEDIA USE AND WELL-BEING: THE MEDIATING ROLE OF SOCIAL SUPPORT Margie Lachman, 1 and Xin Yao Lin, 2; 1. Brandeis University, Massachusetts, United States; 2. Brandeis University, Waltham, Massachusetts, United States. Frequent social media usage can have negative effects on well-being, but the mechanisms involved are unclear. This study explored the mediating role of giving and receiving support. Using the Midlife in the United States Refresher eight-day daily diary study (N=782, age 25-75), multilevel structural equation modeling examined the hypothesized relationships at both the within-person (intraindividual) and between-person (interindividual) levels. Results showed that at the within-person level, days with more social media use were associated with a larger proportion of time giving support and worse well-being (less positive affect and more stress, negative affect, and loneliness). At the between-person level, more social media use was associated with worse well-being. Giving support, but not receiving support, mediated the relationship between social media use and well-being at the within-, but not the between-person level. Discussion focuses on ways to address the negative consequences of social media use related to social connections and well-being. ADOPTION OF TELEHEALTH AMONG OLDER ADULTS DURING THE COVID-19 PANDEMIC Weidi Qin, Case Western Reserve University, Cleveland, Ohio, United States. The COVID-19 pandemic has disrupted older adults' in-person healthcare services. Many individuals rely on remote communication with their healthcare providers for non-urgent health or mental health issues. The present study investigated the effects of technology learning and depressive symptoms on new adoption of telehealth (e.g. online messaging, video call) to communicate with healthcare providers during the COVID-19 pandemic. A sample of 1,500 Medicare beneficiaries aged 65 or older was selected from the National Health and Aging Trend Study. A series of logistic regressions were performed. Results showed that older adults who learned a new online technology during the COVID-19 outbreak were more likely to adopt telehealth. Also, older adults with a higher level of depressive symptoms were more likely to start using telehealth. The findings highlight the importance of technology training to help older adults go online. Telehealth can be an important coping tool for depressive symptoms during the pandemic. TRAUMATIC EVENTS AND HEALTH: AN ECOLOGICAL AND LIFE COURSE PERSPECTIVE Chair: XinQi Dong During the past decades, researchers have shown an increasing interest in the study of traumatic events among aging populations. The majority of studies on trauma focus on mental health, which overlooks the possibility that trauma may also have an adverse effect on other health outcomes, such as cognitive function. A number of studies focus on a single traumatic event. 
However, this approach may underestimate its health impact, as many people experience multiple forms of traumatic events. Indeed, the impact of traumatic events on health depends on the event itself (e.g., single or multiple forms, timing) as well as ecological factors. This symposium aims to address the above limitations. The first longitudinal study, An Ecological Model of Risk Factors in Elder Mistreatment (EM) Victims, tested different dimensions of the ecological model to prevent recurrence of EM. The second study, Polyvictimization and Cognitive Function in an Ethnic Minority Aging Population, explored whether exposure to multiple forms of EM affects cognitive function. The third study, Traumatic Events and Cognitive Function: Does Time Matter?, examined whether traumatic events that happened in childhood, adulthood, or old age influence late-life cognitive function. The fourth study, Face-saving and Help-seeking among Older Adults with EM, identified cultural determinants of help-seeking behaviors in EM victims. This symposium will advance knowledge of the health consequences of polyvictimization and of exposure to traumatic events in different life stages. It will also inform interventions to stop the recurrence of EM in immigrant families and enhance the help-seeking behaviors of ethnic minority older adults. Globally, around 1 in 6 older adults experienced some form of elder mistreatment in community settings. However, little is known about the prevalence of polyvictimization, or the experience of multiple forms of abuse, which may exacerbate negative outcomes beyond those of any one form of victimization in isolation. Data were drawn from the PINE study. Polyvictimization was defined as exposure to multiple forms of victimization, including psychological, physical, and sexual mistreatment, financial exploitation, and caregiver neglect. Cognitive function was evaluated by global cognition, episodic memory, executive function, working memory, and MMSE. Regression analyses were performed. Among 3153 participants, 128 experienced two forms of abuse while 12 experienced three or more forms of abuse. Polyvictimization was associated with lower global cognition (b=-0.05, SE=0.02, p<.05), episodic memory (b=-0.06, SE=0.03, p<.05), working memory (b=-0.14, SE=0.07, p<.05), and processing speed (b=-0.68, SE=0.33, p<.05). Interventions could target older adults with polyvictimization and protect their cognitive function. AN ECOLOGICAL MODEL OF RISK FACTORS IN OLDER ADULTS WITH REPEATED EXPOSURE TO ELDER MISTREATMENT Mengting Li, 1 XinQi Dong, 2 and Qun Le, 3 1. Rutgers, The State University of New Jersey, New Brunswick, New Jersey,
1,902.6
2021-12-01T00:00:00.000
[ "Psychology", "Medicine", "Computer Science" ]
Speeding Document Annotation with Topic Models Document classification and topic models are useful tools for managing and understanding large corpora. Topic models are used to uncover the underlying semantics and structure of document collections. Categorizing a large collection of documents requires hand-labeled training data, which is time consuming and needs human expertise. We believe engaging users in the process of document labeling helps reduce annotation time and address user needs. We present an interactive tool for document labeling. We use topic models to help users in this procedure. Our preliminary results show that users can more effectively and efficiently apply labels to documents using topic model information. Introduction Many fields depend on texts labeled by human experts; computational linguistics uses such annotation to determine word senses and sentiment (Kelly and Stone, 1975; Kim and Hovy, 2004); social science uses "coding" to scale up and systematize content analysis (Budge, 2001; Klingemann et al., 2006). In general, text classification is a standard tool for managing large document collections. However, these labeled data have to come from somewhere. The process of creating a broadly applicable, consistent, and generalizable label set and then applying it to the dataset is long and difficult, requiring expensive annotators to examine large swaths of the data. We present an interactive tool for document labeling that uses topic models to help users assign appropriate labels to documents (Section 2). In Section 3, we describe our user interface and experiments on the Congressional Bills data set. We also explain an evaluation metric to assess the quality of assigned document labels. In preliminary results, we show that annotators can more quickly label a document collection given a topic modeling overview. While engaging users in the process of content analysis has been studied before (as we discuss in Section 4), in Section 4 we also describe how our new framework allows for more flexibility and interactivity. Finally, in Section 5, we discuss the limitations of our framework and how we plan to extend it in the future. Interactive Document Labeling We propose an alternative framework for assigning labels to documents. We use topic models to give the user an overview of the document contents. Users can create a label set incrementally, see the content of documents, assign labels to documents, and classify documents. They can go back and forth between these steps and edit the label set or document labels and re-classify. Having labeled documents is necessary for automatic text classification. With a large collection of unstructured documents, labeling can be excruciating since it is essential to label enough documents under each label to obtain acceptable accuracy. Topic models are a solution to reduce this effort since they provide some information about the underlying themes of the corpus. Given a fixed number of topics, topic models associate each topic with a ranked list of words and each document with a distribution over topics. Topic words can be used to reveal the content of a topic and thus the content of documents with a high probability of that topic. Therefore, assuming the number of topics is chosen carefully, the top documents for each topic are similar in content and can be labeled appropriately. 
Thus, rather than showing an unstructured collection of documents to the user, providing the topic words and the documents highly relevant to each topic helps them in the process of document labeling, both in choosing appropriate label names and in choosing appropriate documents to assign labels to. Another way to think about this is that if the topics are perfect (neither too general nor too detailed), all labels associated with a topic's highly relevant documents can be viewed as subjects explaining that topic. Table 1 provides an example of how topic models can help a user craft document labels. Having a set of user-labeled documents, classification algorithms can be used to predict the labels of unseen documents. Next, the classification results are shown. Users can change document labels. They can also edit/delete the label set and re-run the classifier. This procedure can be repeated iteratively until satisfaction is achieved with the existing (document, label) pairs. Figure 1 shows this procedure. Experiments with Interactive Labeling Interface Data: In our experiments, we need a labeled corpus to be able to assess the quality of user-generated labels. We chose the US Congressional Bills corpus (Adler and Wilkerson, 2006). GovTrack provides bill texts along with the discussed congressional issues as labels. Examples of labels are "education", "agriculture", "health", and "defense". There are a total of 19 unique labels. We use the 112th Congress, which has 12274 documents. We remove bills with no assigned gold label or that are too short. We end up with 6528 documents. Topic Modeling: To generate topics, we use Mallet (McCallum, 2002) to apply lda to the data. A set of extra stop words is generated based on tf-idf scores to avoid displaying noninformative words to the user. Features and Classification: A crucial step for text classification is to extract useful features to represent documents. Some common features for text classification are n-grams, which make the dimensionality very high and classification slower. Since response time is very important in user-interactive systems, instead of n-grams, we use a lower-dimensional document representation. Interface: We start with the web-based interface of Hu et al. (2014) for interactive topic modeling. The existing interface starts by asking for user information, the corpus name, and the number of topics to explore. Then it displays the topic words and the most relevant documents for each topic. Also, the user can see the content of documents. Users can create new labels and/or edit/delete an existing label. When seeing a document, the user has 3 options: 1. Create a new label and assign that label to the document. 2. Choose an existing label for the document. 3. Skip the document. At any point, the user can run the classifier. After classification is finished, the predicted label along with its certainty is shown for each document. Users can edit/delete document labels and re-run the classifier as many times as they desire. We refer to this task as Topic Guided Annotation (TGA). Figure 2 shows a screenshot of the interface when choosing a label for a document. Evaluation We introduce an interactive framework for document labeling using topic models. In this section, we evaluate our system. Our goal is to measure whether showing users a topic modeling overview of the corpus helps them apply labels to documents more effectively and efficiently. 
Thus, we compare user-generated labels (considering labels assigned by the user and the classifier altogether) with the gold labels of the US Congressional Bills provided by GovTrack. Since user labels can be more specific than gold labels, we want each user label to be "pure" in gold labels. Thus, we use the purity score (Zhao and Karypis, 2001) to measure how many gold labels are associated with each user label. The purity score is purity(U, G) = (1/N) Σ_k max_j |U_k ∩ G_j|, where U = {U_1, U_2, ..., U_K} is the user clustering of documents, G = {G_1, G_2, ..., G_J} is the gold clustering of documents, and N is the total number of documents. Moreover, we interpret U_k and G_j as the sets of documents in user cluster U_k and gold cluster G_j. Figure 3 shows an example of the purity calculation for a clustering, given gold labels. Purity is an external metric for cluster evaluation. A very bad labeling has a purity score close to 0 and a perfect labeling has a purity score of 1. Figure 2: A screenshot of the interactive document labeling interface. The user sees the topic words and the most relevant documents for each topic. The user has created two labels, "Education" and "Health", and sees the content of a document. The user can create a new label and assign it to the document, choose one of the two existing labels to assign to the document, or skip the document and view the previous or next document. Figure 3: An example of computing purity: clusters correspond to user labels and different shapes correspond to different associated gold labels. The majority gold label counts for the three clusters are 4 (U_1), 3 (U_2), and 5 (U_3). Purity is (1/17) × (4 + 3 + 5) ≈ 0.71. The higher this score, the higher the quality of the user labels. To evaluate TGA, we did a study on two different users. For User 1, we chose 15 topics and for User 2, we chose 25 topics. They were asked to stop labeling whenever they were satisfied with the predicted document labels. We compare the user study results with a baseline. Our baseline ignores topic modeling information when choosing documents to label. It considers the scenario where users are given a large document collection and are asked to categorize the documents without any other information. Thus, we show randomly chosen documents to users and ask them to apply labels to them. All users can go back and edit or delete document labels, or refuse to label a document if they find it confusing. After each labeling step, we use the same features and classifier that we used for the user study with topic models to classify documents. Then we calculate purity for the user labels with respect to the gold labels. Figure 4 shows the purity score over different numbers of labeled documents for User 1, User 2, and the baseline. User 1 did the labeling in 6 rounds, whereas User 2 did a total of 7 rounds. User 1 ended with 116 labeled documents and User 2 had 42 labeled documents in the end. User 2 starts with a label set of size 9 and labels 11 documents. Two documents are labeled as "wildlife", two others are labeled as "tax", and all other documents have unique labels. This means that even with very few instances per label, the baseline is outperformed. This is evidence that topic models help in choosing informative documents to label. On the other hand, User 1 starts with a label set of size 7 and labels 36 documents and is significantly outperformed by the baseline. 
One reason for this is that assigning the same label to many documents relevant to one topic doesn't provide any new information to the classifier; thus the user could have achieved the same purity score with fewer labeled documents, which would have outperformed the baseline. User 1 slightly outperforms the baseline in the second round (8 labels and 50 labeled documents) and the third round (9 labels and 58 labeled documents). In the fourth round, the user creates more labels. With a total of 13 labels and 82 labeled documents, the gap between the user's purity score and the baseline gets larger. Both users outperform the baseline in the final round. To see how topic models help speed up the labeling process, we compare the number of user-labeled documents with the approximate number of labeled documents required to get the same purity score in the baseline. Table 2 shows the results for User 1 and User 2. User 1 starts with many labeled documents, and the baseline can achieve the same performance with one third of the labeled documents. As the user keeps labeling more documents, the performance improves, and the baseline needs more labeled documents to reach the same level of purity. For User 2, the baseline on average needs over two times as many labeled documents to achieve the same purity score as the user labels. These results indicate that topic models help users choose documents to assign labels to and achieve acceptable performance with fewer labeled documents. Related Work Topic models such as Latent Dirichlet Allocation (Blei et al., 2003, lda) are unsupervised learning algorithms and a useful tool for understanding the content of large collections of documents. The topics found by these models are sets of words that are observed together in many documents, and they introduce correlations among words. The top words in each topic explain the semantics of that topic. Moreover, each document is considered a mixture of topics. The top topics for each document explain the semantics of that document. When all documents are assigned a label, supervised topic models can be used. slda (Mcauliffe and Blei, 2008) is a supervised topic model that generates topics that give an overview of both document contents and assigned labels. Perotte et al. (2011) extend slda and introduce hslda, a model for large-scale multiply-labeled documents that takes advantage of the hierarchical structure of the label space. hslda is used for label prediction. In general, supervised topic models help users understand labeled document collections. Text classification predicts labels for documents and helps manage document collections. There are well-known classifiers as well as feature extraction methods for this task. However, providing an initial set of labeled documents for both text classification and supervised topic models still requires lots of time and human effort. Active learning (Settles, 2010) reduces the amount of required labeled data by having a learner that actively queries the labels of specific documents and collects a labeled training set. In a user-interactive system, the active learner queries document labels from users (Settles, 2010). In other words, the learner suggests some documents to the user and asks the user to assign labels to them. Settles (2011) discusses that having interactive users in the annotation process, along with active learning, reduces annotation time while still achieving acceptable performance. 
In more detail, they present an interactive learning framework to get user annotations and produce accurate classifiers in less time. The shortcoming of active learning is that it does not provide any overview information about the corpus, as topic model approaches do. Nevertheless, new methods in both analysis and evaluation are needed. Classification algorithms restrict document labels to a predefined label set. Grimmer and Stewart (2013) show that to be able to use the output of automatic text analysis in political science, we need careful validation methods. There has been some work on bringing users into this task to refine and evaluate existing methods. Hu et al. (2014) show that topic models are not perfect from the user's point of view and introduce a framework to interactively get user feedback and refine topic models. Chuang et al. (2013) present an interactive visualization for exploring documents by topic models to address user needs. We bring these tools together to speed up the annotation process. We believe having users engaged in content analysis not only reduces the amount of annotation time, but also helps to achieve user satisfaction. We propose an iterative and user-interactive procedure for document annotation. We use topic models to provide some high-level information about the corpus and guide users in this task. We show the top words and documents for each topic to the user and have them start labeling documents. Users can create/edit/delete labels. Then users can run a classifier to predict the labels of the unlabeled documents. They can change document labels and re-classify documents iteratively, until satisfaction is achieved. Future Work There are some obvious directions that will expand this ongoing research. First, we are planning to use active learning to better aid classification. We expect that active learning will reduce the number of required labeled documents while still getting a high purity score and user satisfaction. Second, we will use supervised topic models (Mcauliffe and Blei, 2008, slda) instead of lda after the first round to update topics based on document labels. slda uses labeled documents to find topics that explain both document content and the associated labels. We believe using slda instead of lda after the first round will give users more information about the overview of documents and help them further in applying labels to documents. Third, we want to allow the user to refine and correct labels further. Our existing interface allows the user to delete a label or edit a label. We believe it is also important for users to merge labels if they think the labels are too specific. In addition, we believe a crucially important step is to generate the label set. Giving the user some information about the range of documents can help them generate a better label set. One other option is to suggest labels to users based on topic models (Lau et al., 2010). Fourth, we will explore other corpora such as the European Parliament corpus (Koehn, 2005). To our knowledge, there are no gold labels for the Europarl corpus, and using our interactive tool can help users find the categorized information they need. Finally, for evaluating our method, in addition to using the correct labeling and the purity score, we will conduct a user experiment with more users involved. Since the task of labeling the Congress data set requires some political knowledge, we will choose annotators who have some political science background. 
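To make the purity computation described above concrete, here is a minimal Python sketch; the function name purity and the toy label lists are illustrative assumptions, not code from the paper.

    from collections import Counter

    def purity(user_labels, gold_labels):
        # purity(U, G) = (1/N) * sum over user clusters of the count of
        # the majority gold label inside each cluster.
        clusters = {}
        for u, g in zip(user_labels, gold_labels):
            clusters.setdefault(u, Counter())[g] += 1
        return sum(max(c.values()) for c in clusters.values()) / len(user_labels)

    # Toy check mirroring Figure 3: three user clusters whose majority
    # gold-label counts are 4, 3, and 5 out of 17 documents in total.
    labels_u = ["a"] * 6 + ["b"] * 5 + ["c"] * 6
    labels_g = ["x"] * 4 + ["y"] * 2 + ["y"] * 3 + ["z"] * 2 + ["z"] * 5 + ["x"]
    print(purity(labels_u, labels_g))  # (4 + 3 + 5) / 17 ≈ 0.71

Since purity only rewards cluster homogeneity, it stays high when user labels are more specific than gold labels, which matches the assumption above that user labels may subdivide gold categories.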
We also thank Alvin Grissom II for helping us in the user study. This work was supported by NSF Grant NCSE-1422492. Any opinions, findings, results, or recommendations expressed here are of the authors and do not necessarily reflect the view of the sponsor.
3,938.2
2015-06-01T00:00:00.000
[ "Computer Science" ]
Driver of Energetic Electron Precipitation in the Vicinity of Ganymede The driver of energetic electron precipitation into Ganymede's atmosphere has been an outstanding open problem. During the Juno flyby of Ganymede on 7 June 2021, Juno observed significant downward-going electron fluxes inside the bounce loss cone of Ganymede's polar magnetosphere. Concurrently, Juno detected intense whistler-mode waves, both in the quasi-parallel and highly oblique directions with respect to the magnetic field line. We use a quasi-linear model to quantify energetic electron precipitation driven by quasi-parallel and very oblique whistler-mode waves, respectively, in the vicinity of Ganymede. The data-model comparison indicates that in Ganymede's lower-latitude (higher-latitude) polar region, quasi-parallel whistler-mode waves play a dominant role in precipitating higher-energy electrons above ∼100s eV (∼1 keV), whereas highly oblique waves are important for precipitating lower-energy electrons below 100s eV (∼1 keV). Our result provides new evidence of whistler-mode waves as a potential primary driver of precipitating energetic electrons into Ganymede's atmosphere. It has been suggested that the convection time of a flux tube across Ganymede may explain the energy-dependent loss cone features (the level of filling in the downward direction decreases with increasing energy). Although this mechanism may operate for lower-energy electrons, additional mechanisms are needed to explain the partially full downward loss cone at higher electron energies, for which the bounce period is shorter than the convection time. Previous work attributed this feature to energy-dependent pitch angle scattering as electrons bounce between Ganymede and their near-Jupiter mirror point. Using general wave-particle scattering theory, pitch angle diffusion coefficients (potentially responsible for the partially or fully filled loss cone) and their energy dependence were further estimated, but the driver of this pitch angle scattering was not identified. Tripathi et al. (2014) evaluated pitch angle diffusion by parallel whistler-mode waves near Ganymede based on Galileo spacecraft measurements and found that a whistler-mode wave amplitude of ∼16 pT is required to match the observed and calculated pitch angle diffusion coefficients. However, the quantitative role of whistler-mode waves with various properties (e.g., parallel vs. oblique wave normal angle; low vs. high frequency) in precipitating electrons over a broad range of energies into Ganymede's atmosphere still needs further investigation. In the present paper, we focus on understanding the features of energetic electrons and plasma waves near Ganymede, as well as the underlying mechanisms causing energetic electron precipitation into Ganymede's atmosphere. Juno Observations Near the Vicinity of Ganymede On 7 June 2021, the Juno spacecraft (Bolton & Juno Science Team, 2010) traveled near Ganymede (Hansen et al., 2022) and crossed Ganymede's magnetosphere (with at least one magnetic footpoint on Ganymede) near the Jovian equatorial plane (see Figure S1 in Supporting Information S1). 
Figure 1 shows an overview of plasma waves observed by the Waves instrument (Kurth et al., 2017) and electron distributions measured by the Jupiter Energetic-particle Detector Instrument (JEDI; Mauk et al., 2017) and Jovian Auroral Distributions Experiment (JADE; McComas et al., 2017), as Juno moved from the Jovian closed magnetic field lines (green lines in Figure S1 in Supporting Information S1) to the regions where at least one magnetic footpoint is connected to Ganymede (red or blue lines in Figure S1 in Supporting Information S1). At ∼16:45 UT, a substantial change in electron distributions was detected with reduced electron fluxes at higher energies (Figure 1c) and increased fluxes at lower energies (Figure 1d), as well as a change in the shape of the local pitch angle distributions from field-aligned to more isotropic. These features indicate that Juno crossed the boundary between the Jovian closed magnetic field lines and Ganymede's magnetotail/wake region. Subsequently, Juno moved from the magnetotail/wake region (from ∼16:45 UT to 16:50 UT; Clark et al., 2022; Kurth et al., 2022) to Ganymede's magnetosphere at ∼16:50 UT (marked by the red vertical line) and remained in the magnetosphere until 17:00 UT (Clark et al., 2022; Kurth et al., 2022). Inside Ganymede's magnetosphere, electromagnetic whistler-mode emissions (below the electron cyclotron frequency) were intense during almost the entire interval from 16:50 to 17:00 UT (Figure 1b; Kurth et al., 2022), whereas strong electrostatic electron cyclotron harmonic waves (above the electron cyclotron frequency) were mostly observed from 16:50:00 to 16:51:30 UT. Moreover, sudden decreases in electron fluxes over a broad energy range measured by JEDI from 30 keV to ∼1 MeV (Figure 1c; Clark et al., 2022) and JADE from 100 eV to ∼30 keV (Figure 1d; Allegrini et al., 2022) were detected in the magnetosphere compared to those in the magnetotail/wake region. The electron local pitch angle distributions were field-aligned in the region dominated by the Jovian closed magnetic field (before ∼16:45 UT), exhibited a mixed distribution in the magnetotail/wake region (from 16:45 UT to 16:50 UT), and were mostly pancake but asymmetric in the region dominated by Ganymede's magnetic field lines (after ∼16:50 UT). In Figures 1e-1j, the white dashed lines with black dots represent the local bounce loss cone with the downward (upward) direction near 0° (180°). The bounce loss cone of Ganymede is the pitch angle value for the electrons that mirror at Ganymede's surface, estimated using a centered dipole magnetic field model of Ganymede's magnetosphere (with an equatorial magnetic field intensity of 719 nT) based on Kivelson et al. (2002), but neglecting the small tilt between the dipole moment and the anti-parallel direction from Ganymede's spin axis. Inside Ganymede's magnetosphere (marked by the green horizontal bar on the top part of Figure 1), the majority of electrons with pitch angles smaller than the local loss cone are lost to Ganymede's surface. However, outside Ganymede's magnetosphere (the regions except for the green horizontal bar), electrons with pitch angles smaller than the loss cone of Jupiter (400 km above the 1-bar level) are lost to Jupiter's atmosphere. From 16:50 UT to 16:56 UT (between the red and blue vertical lines), precipitating electron fluxes were relatively small at high energies (above tens of keV), with slightly higher downward-going electron fluxes than upward-going ones. 
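As a rough numerical illustration of the loss cone estimate described above, the following Python sketch computes the local bounce loss cone for electrons mirroring at Ganymede's surface in a centered dipole with a 719 nT equatorial field; the function names and the example L value are illustrative assumptions, and the Jovian background field and the small dipole tilt are neglected, as in the text.

    import numpy as np

    B0 = 719e-9  # equatorial surface field of Ganymede's dipole (T)

    def dipole_B(L, lam):
        # Field strength on the dipole field line L at magnetic latitude
        # lam (radians); r = L * cos^2(lam) in Ganymede radii.
        r = L * np.cos(lam) ** 2
        return B0 * np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2) / r ** 3

    def loss_cone_deg(L, lam):
        # Electrons with local pitch angle below this value mirror at or
        # below the surface (r = 1), which the line reaches at latitude
        # lam_s with cos^2(lam_s) = 1/L, and are lost to the surface.
        lam_s = np.arccos(np.sqrt(1.0 / L))
        return np.degrees(np.arcsin(np.sqrt(dipole_B(L, lam) / dipole_B(L, lam_s))))

    print(loss_cone_deg(L=2.0, lam=0.0))  # equatorial loss cone on an L = 2 line, ~16 deg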
It is noteworthy that due to the limited pitch angle coverage of the JADE measurements, it is difficult to obtain precipitating electron fluxes from ∼16:46 UT to 16:53 UT at energies below ∼30 keV. However, starting from ∼16:53 UT, when precipitating electron measurements were available from JADE, downward-going electron fluxes were evidently higher than the upward-going ones. Between 16:56 UT (blue vertical line) and 17:01 UT (green vertical line), a clear asymmetry in pitch angle distribution was observed, with an almost full downward loss cone compared to the upward one. In particular, after ∼16:56 UT the downward bounce loss cone remained almost full even for higher-energy electrons (above tens of keV), which may indicate energy-dependent pitch angle scattering (e.g., Tripathi et al., 2014). Figure 2 shows the plasma wave observations in more detail, as also described by Kurth et al. (2022). Inside Ganymede's magnetosphere (after ∼16:50 UT), intense whistler-mode waves were observed in both electric (Figure 2a) and magnetic wave power (Figure 2b). Interestingly, these whistler-mode waves exhibit two distinct modes, as can be inferred from the ratio between the wave electric and magnetic fields (E_w/cB_w; Figure 2c), where c is the speed of light, E_w is the y component of the wave electric field, and B_w is the z component of the wave magnetic field in spacecraft coordinates (Kurth et al., 2017). E_w/cB_w is expected to be close to or smaller than 1 for electromagnetic waves with small or intermediate wave normal angles, but large for electrostatic or highly oblique electromagnetic waves. The higher-frequency component (>∼1 kHz) was highly oblique, but the lower-frequency component (<∼1 kHz) was quasi-parallel. We calculated the magnetic amplitudes of these two components of whistler-mode waves based on the Juno observation (Figure 2d), which indicates large amplitudes (up to a few hundred pT) for the quasi-parallel waves and modest amplitudes (up to tens of pT) for the oblique waves. The total electron density was inferred from the upper hybrid resonance frequency line (Figure 2a) and is shown in Figure 2e (black line). It is interesting to note that the electron density smoothly increased (up to ∼15 cm−3) until 16:57 UT, after which it suddenly increased by a factor of ∼2 and remained elevated for ∼1 min, albeit with large fluctuations. This remarkable feature suggests that Juno crossed two different regions in Ganymede's magnetosphere (Kurth et al., 2022). Therefore, we mark the region over 16:57-17:00 UT with horizontal black dashed lines in the colored blocks shown in the top row of Figure 2. Based on the measured ambient magnetic field intensity, we further calculated the ratio between the electron plasma frequency and the electron cyclotron frequency (blue line in Figure 2e). It is noteworthy that the whistler-mode wave spectra changed at ∼16:57 UT, with quasi-parallel wave power extending to even lower frequencies (below tens of Hz). Simultaneously, the oblique wave intensity decreased after ∼16:57 UT. Therefore, we chose two time snapshots to model energetic electron precipitation driven by whistler-mode waves: Time 1 (2) before (after) the change in wave spectra in the lower (higher) latitude of Ganymede's polar region. Since the whistler-mode waves indicated two distinct components, we quantified the effects of quasi-parallel and highly oblique whistler-mode waves separately. 
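As a small illustration of how the E_w/cB_w ratio separates the two wave components described above, here is a Python sketch that flags spectral bins as quasi-parallel or highly oblique; the threshold of 1 and the example amplitudes are illustrative assumptions, not the instrument processing chain.

    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    def classify_whistler_bins(E_w, B_w, threshold=1.0):
        # E_w: wave electric field amplitudes (V/m); B_w: wave magnetic
        # field amplitudes (T). E_w/(c*B_w) near or below ~1 suggests small
        # or intermediate wave normal angles; much larger values suggest
        # electrostatic or highly oblique electromagnetic waves.
        ratio = E_w / (C * B_w)
        return np.where(ratio <= threshold, "quasi-parallel", "highly oblique")

    # Two hypothetical bins: one below and one above the threshold.
    print(classify_whistler_bins(np.array([3e-5, 3e-4]), np.array([2e-13, 2e-13])))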
It is important to note that several studies suggested that Juno did not cross the closed field lines during this flyby and remained in the open field line region with one footpoint on Ganymede and the other on Jupiter (Clark et al., 2022; Duling et al., 2022), although we may not completely exclude the possibility that Juno briefly crossed the closed field line region (e.g., Romanelli et al., 2022). In our modeling, we assume that Juno remained in the open field line region and electrons moved along the magnetic field line with one footpoint on Ganymede and the other on Jupiter. Modeling of Energetic Electron Precipitation Driven by Whistler-Mode Waves We model energetic electron precipitation due to interactions with whistler-mode waves based on the quasi-linear theory, which is commonly used in previous studies to evaluate the effects of plasma waves on energetic particles (e.g., Schulz & Lanzerotti, 1974; Thorne et al., 2010). The model used for Ganymede's magnetic field environment is a simple intrinsic dipole field (Kivelson et al., 2002) superimposed on the ambient Jovian magnetic field (JRM09+CON2020; Connerney et al., 2018, 2020). The total electron density used in the model is based on the Juno observation (Figure 2e) from the magnetic footpoint on Ganymede to the Jovian magnetic equator and on an empirical density model (Dougherty et al., 2017) from the Jovian magnetic equator to the magnetic footpoint on Jupiter. Whistler-mode waves are assumed to be present along the magnetic field line (Figure S1b in Supporting Information S1) from Ganymede's surface to 50° of magnetic latitude in the Jovian JRM09+CON2020 coordinate system. To model electron precipitation driven by whistler-mode waves through pitch angle scattering and acceleration, we solve the two-dimensional Fokker-Planck equation along the field line with one footpoint on Ganymede and the other on Jupiter. Note that since electrons bounce along a closed magnetic field line, the Fokker-Planck simulation approach is valid. A detailed description of the Fokker-Planck simulations is provided in Text S1 in Supporting Information S1. The modeled pitch angle distribution inside the loss cone is compared to the energy-dependent loss cone filling of electrons observed by Juno, as discussed below. Figure 3 shows the modeling results of energetic electron precipitation driven by whistler-mode waves, as well as the associated wave and plasma parameters that were used as model inputs. This modeling result is shown at Time 1, when Juno traveled through the lower latitude of Ganymede's polar region. The whistler-mode wave spectra (Figure 3a) indicate that the magnetic wave amplitude of quasi-parallel waves at several hundred Hz (∼73.3 pT) is larger than that of oblique waves (17.4 pT). Detailed information on the wave normal distributions of quasi-parallel and highly oblique whistler-mode waves is given in Text S2 in Supporting Information S1. Bounce-averaged diffusion coefficients in equatorial pitch angle (<D_αα> in Figure 3d), momentum (<D_pp> in Figure 3e), and mixed terms (|<D_αp>| in Figure 3f) were calculated for the sum of quasi-parallel and oblique waves. <D_αα> shows large values over a broad range of energies from tens of eV to several hundred keV, whereas large <D_pp> values are observed mostly below a few keV near the bounce loss cone. Figure 3b shows the bounce-averaged diffusion coefficients (<D_αα> and <D_pp>) at the bounce loss cone for the quasi-parallel (blue) and highly oblique whistler-mode waves (red) separately. 
Moreover, the total <D_αα> of quasi-parallel and highly oblique waves (black solid line) and the strong diffusion limit (D_SD; black dotted line) are overplotted for direct comparison. The strong diffusion limit is calculated as D_SD = 2α_LC²/τ_b, where α_LC is the equatorial pitch angle of the bounce loss cone and τ_b is the electron bounce period between Ganymede and the near-Jupiter mirror point in the northern hemisphere. <D_αα> is predominantly contributed by the quasi-parallel waves at electron energies above several hundred eV, whereas highly oblique waves play a dominant role in both <D_αα> and <D_pp> at energies below several hundred eV. It is interesting to note that the total <D_αα> exceeds the strong diffusion limit at energies below several keV, indicating efficient pitch angle scattering due to whistler-mode waves at lower energies. Moreover, for oblique waves <D_pp> is even larger than <D_αα> at energies below ∼100 eV, indicative of strong energy diffusion. Figure 3c shows the observed electron pitch angle distribution at Time 1, demonstrating larger electron fluxes in the downward direction than in the upward one. The observed electron pitch angle distribution is also shown with color-coded solid lines at various energies, overplotted with the modeled electron pitch angle distribution (dotted lines, only shown inside the downward loss cone) for the quasi-parallel waves (Figure 3g), oblique waves (Figure 3h), and full waves including both quasi-parallel and oblique waves (Figure 3i), respectively. The comparison between the observed and modeled electron pitch angle distributions indicates that quasi-parallel waves alone play an important role in scattering electrons between several hundred eV and tens of keV, leading to the observed electron precipitation (Figure 3g), although they play a minor role in pitch angle scattering of lower-energy electrons (below several hundred eV). However, the oblique waves are effective in scattering lower-energy electrons (below several hundred eV), leading to an almost flat pitch angle distribution inside the loss cone, due to the combination of efficient pitch angle scattering and energy diffusion (Figures 3b, 3d, and 3e). The full waves, including both quasi-parallel and oblique waves, lead to efficient electron precipitation over a broad energy range from tens of eV to tens of keV, most consistent with the observations. However, the modeled result still underestimates electron precipitation at higher energies (above tens of keV), suggesting that an additional mechanism is needed to explain this discrepancy. Figure 3 caption (Time 1; cf. Figures 1 and 2): (a) Magnetic wave spectra of the observed whistler-mode waves for the quasi-parallel (blue) and oblique components (red). (b) Bounce-averaged pitch angle diffusion coefficients at the bounce loss cone for the entire whistler-mode waves (black solid line), quasi-parallel waves (blue solid line), and oblique waves (red solid line); bounce-averaged momentum diffusion coefficients at the bounce loss cone for the quasi-parallel waves (blue dashed line) and oblique waves (red dashed line); and the strong diffusion limit (black dotted line). (c) Electron flux as a function of local pitch angle and energy, where the black dashed lines represent the bounce loss cone of Ganymede. (d) Bounce-averaged electron pitch angle diffusion coefficients (<D_αα>), (e) momentum diffusion coefficients (<D_pp>), and (f) mixed terms (|<D_αp>|) as a function of equatorial pitch angle and energy. 
(g) Observed electron pitch angle distribution (solid lines) and modeled electron pitch angle distribution (dotted lines, only shown inside the downward loss cone), color-coded for various energies of electrons which interact with quasi-parallel whistler-mode waves. (h) The same format as panel (g) but for oblique waves. (i) The same format as panel (g) but for the full waves (sum of quasi-parallel and oblique waves). Figure 4 shows the modeled electron precipitation in the same format as Figure 3 but during Time 2, when Juno was in Ganymede's higher-latitude polar region. As discussed earlier, the whistler-mode wave spectra suddenly changed near 16:57 UT with much stronger quasi-parallel waves (∼275 pT) at lower frequencies extending down to tens of Hz (Figure 4a), while the oblique wave amplitude (11.6 pT) was slightly weaker than that during Time 1 (Figure 3a). Bounce-averaged diffusion coefficients (Figures 4b and 4d-4f) show that <D_αα> due to quasi-parallel waves is large at energies above ∼1 keV, whereas both <D_αα> and <D_pp> due to oblique waves are large, exceeding the strong diffusion limit, for lower-energy electrons (∼tens of eV to 1 keV). The local electron pitch angle distribution demonstrates an almost flat profile inside the downward loss cone (Figures 4c and 4g-4i) over a broad range of energies from ∼30 eV to a few hundred keV, indicating stronger electron precipitation compared to that during Time 1. The comparison between the model and observation indicates that the quasi-parallel waves play a dominant role in precipitating higher-energy electrons above ∼1 keV (Figure 4g), whereas the oblique waves mostly contribute to precipitating lower-energy electrons below ∼1 keV (Figure 4h). It is interesting to note that the oblique waves were able to reproduce the observed overfilled loss cone feature, with stronger electron flux inside the downward loss cone (than outside of it) for lower-energy electrons below ∼1 keV. The combined effects of quasi-parallel and oblique whistler-mode waves lead to effective pitch angle scattering over a broad energy range (from 10s eV to a few hundred keV), remarkably consistent with the observation (Figure 4i). Summary and Discussion During the flyby of Ganymede on 7 June 2021, Juno crossed Ganymede's polar magnetosphere (with one magnetic footpoint on Ganymede and the other on Jupiter) and observed evident loss cone features in association with intense whistler-mode waves. Using the Juno observations and quasi-linear modeling, we performed a quantitative analysis to determine the primary driver of energetic electron precipitation into Ganymede's atmosphere. The principal findings are summarized below. 1. In the vicinity of Ganymede, the local electron pitch angle distribution exhibited an asymmetric distribution with the downward electron fluxes larger than the upward ones, indicative of an evident loss cone feature. The amount of filling in the downward direction was energy-dependent (decreasing with increasing energy). 2. In association with enhanced precipitating electron fluxes, whistler-mode waves were detected in two distinct modes: quasi-parallel waves at lower frequencies with amplitudes up to a few hundred pT, and very oblique waves at higher frequencies with amplitudes up to tens of pT. 3. 
In Ganymede's lower-latitude polar region (Time 1 in Figures 1 and 2), where energetic electron precipitation was modest, quasi-parallel whistler-mode waves play a dominant role in precipitating electrons from several hundred eV to tens of keV through pitch angle scattering, while oblique waves are important for precipitating electrons from tens of eV to hundreds of eV. However, the modeled result underestimates electron precipitation at energies above tens of keV, which needs an additional explanation. 4. In Ganymede's higher-latitude polar region (Time 2 in Figures 1 and 2), where energetic electron precipitation was strong, the associated quasi-parallel whistler-mode waves were stronger and the wave power extended to lower frequencies. The comparison between the observed and modeled electron distribution indicates that the quasi-parallel waves play a dominant role in precipitating higher-energy electrons above ∼1 keV, whereas the oblique waves mostly contribute to precipitation of lower-energy electrons below ∼1 keV. It is noteworthy that whistler-mode waves are assumed to be present with a constant wave amplitude along the magnetic field line from Ganymede's surface to 50° of Jovian magnetic latitude. Although it has been shown that whistler-mode wave amplitude significantly increases in the vicinity of Ganymede and Europa (Shprits et al., 2018), how the whistler-mode wave intensity varies along the field line connecting Jupiter and Ganymede is currently unknown. However, based on the statistical distribution of whistler-mode waves using Juno data (e.g., Li et al., 2020; Menietti et al., 2021), whistler-mode wave intensity tends to remain at a similar level from the equator to higher latitudes. Therefore, the assumption used in the present study may be fairly reasonable. Nevertheless, further investigations using more plasma wave data and ray tracing modeling are required to improve our understanding of the wave distribution along the magnetic field line connecting Ganymede and Jupiter. Regarding the underestimated electron precipitation at higher energies in Ganymede's lower-latitude polar region (at Time 1), it is possible that quasi-linear modeling underestimates energetic electron precipitation, especially for large-amplitude and/or oblique whistler-mode waves (e.g., Bortnik et al., 2008; Gan et al., 2022; Hsieh et al., 2022; Zhang et al., 2022), or that electrons are further scattered into the loss cone or accelerated when they bounce between Ganymede and their near-Jupiter mirror point through additional nonlinear waves or turbulence (e.g., Sulaiman et al., 2022). However, these effects are beyond the scope of the present paper and are left for further investigations. In summary, our study provides new direct evidence that whistler-mode waves potentially play a dominant role in energetic electron precipitation into Ganymede's atmosphere over a broad range of energies from 30 eV to several hundred keV. Since electron precipitation into the atmosphere is known to generate aurora (e.g., Li et al., 2017, 2021), we suggest that some of the diffuse aurora observed near Ganymede may be related to the pitch angle scattering driven by whistler-mode waves. However, the quantitative evaluation of Ganymede's aurora driven by whistler-mode waves is beyond the scope of the present study and is left as a future investigation. Data Availability Statement We
5,094.6
2022-12-24T00:00:00.000
[ "Physics" ]
An Intelligent Fault Diagnosis Method for Transformer Based on IPSO-gcForest Transformers are the main equipment for power system operation. Undiagnosed faults in the internal components of the transformer will increase the downtime during operation and cause significant economic losses. Efficient and accurate transformer fault diagnosis is an important part of power grid research, which plays a key role in the safe and stable operation of the power system. Existing traditional transformer fault diagnosis methods have the problems of low accuracy, difficulty in effectively processing fault characteristic information, and hyperparameters that adversely affect transformer fault diagnosis. In this paper, we propose a transformer fault diagnosis method based on improved particle swarm optimization (IPSO) and multigrained cascade forest (gcForest). Considering the correlation between the characteristic gases dissolved in oil and the type of fault, firstly, the noncode ratios of the characteristic gases dissolved in the oil are determined as the characteristic parameter of the model. Then, the IPSO algorithm is used to iteratively optimize the parameters of the gcForest model and obtain the optimal parameters with the highest diagnostic accuracy. Finally, the diagnosis effect of the IPSO-gcForest model under different characteristic parameters and sample sizes is analyzed by identification experiments and compared with that of various methods. The results show that the diagnostic effect of the model with noncode ratios as the characteristic parameter is better than with DGA data, IEC ratios, or Rogers ratios. And the IPSO-gcForest model can effectively improve the accuracy of transformer fault diagnosis, thus verifying the feasibility and effectiveness of the method. Introduction Transformer faults will endanger the safe and stable operation of the whole power system. Transformer fault diagnosis can analyze equipment status information to ensure reliable and efficient operation of transformer equipment. Therefore, accurate identification of transformer fault types and timely maintenance can provide an important guarantee for the normal operation of the power system [1,2]. Since the amount of dissolved gas in the oil inside the transformer tank is closely linked to the actual operating conditions of the transformer, it is necessary to use dissolved gas analysis (DGA) technology to evaluate the condition and monitor the early discharge, overheating, and other faults of the transformer. Dissolved gas analysis in oil is mainly used in online monitoring of oil-immersed transformers [3][4][5]. Based on correlation analysis of the DGA characteristic gas data, foreign researchers have proposed the IEC ratio method, the Rogers ratio method [6], the Dornenburg ratio method [7], and the electrical cooperative research method. However, the traditional DGA methods only give threshold discrimination boundaries for fault diagnosis, which cannot show the relationship between characteristic gases and fault types. They cannot meet the requirements of actual transformer operation [8,9]. With the advancement and development of artificial intelligence technology, the application of machine learning methods in transformer fault diagnosis has made remarkable achievements. Currently, expert systems [10], deep belief networks (DBN) [11][12][13], random forests (RF) [14], and support vector machines (SVM) [15,16] are commonly used in transformer fault diagnosis. 
Although these machine learning methods are widely used in transformer fault diagnosis, there are still certain drawbacks. For example, expert systems cannot learn autonomously, work at low efficiency, and find it hard to produce accurate diagnosis results. DBN has a strong self-learning ability, but it requires a large amount of sample data for training; the learning period of DBN is long, and it is prone to overfitting. RF is prone to overfitting when dealing with the multiclassification problems of transformer fault diagnosis. SVM has outstanding performance when processing small-sample data, but it is essentially a binary classifier, which is inefficient when dealing with multiclassification problems such as transformer fault diagnosis. The methods used in the above literature have improved the accuracy of transformer fault diagnosis. However, transformer faults are diverse and complex, and the use of a single intelligent fault diagnosis method has the problems of insufficient reasoning ability and low diagnostic accuracy, which makes it difficult to obtain satisfactory diagnosis results. With the continuous development of big data technology in power systems and the increase in transformer fault cases, the level of fault diagnosis needs to meet higher requirements. Multigrained cascade forest (gcForest) is a deep ensemble learning model based on decision trees proposed by Zhou Zhihua in 2017 [17,18]. The model has the advantages of high parallel learning efficiency and strong representation learning ability. It is widely used in hyperspectral image classification [19], complex machine processing status monitoring [20], intelligent turbine fault diagnosis [21], and other fields, with good results. The gcForest model consists of two parts: the multigrained scanning procedure and the cascade forest procedure. The multigrained scanning procedure mines the feature information of the original sample data, and the cascade forest then performs supervised learning layer by layer. Therefore, the generalization ability of the model is improved. Although gcForest models perform well in many applications, the rationality of the architecture and its optimization remain an unresolved problem. Another significant but rarely studied problem in machine-learning-based classification and regression tasks is hyperparameter optimization. Hyperparameter settings such as the multigrained scanning window size q and the maximum number of cascade levels l allowed by the cascade forest have a great impact on the model's diagnostic performance. Therefore, the problem of low diagnosis accuracy can be addressed by adjusting random parameters through optimization algorithms and iteratively searching for the optimal parameters of the model. There are several common optimization algorithms, such as the simulated annealing algorithm [22], the genetic algorithm [23], the Bayesian algorithm [24], and the particle swarm algorithm [25]. The particle swarm optimization (PSO) algorithm has become popular in the past few years. PSO is a swarm optimization algorithm that simulates the foraging behaviour of bird flocks. The PSO algorithm has few hyperparameters, and its parameter adjustment process is simple and easy to implement, which makes it suitable for optimization under dynamic and multiobjective conditions. But the PSO algorithm tends to fall into local optima during the optimization process, which may cause large errors. 
Therefore, an improved particle swarm optimization (IPSO) algorithm can be of considerable help in transformer fault diagnosis. The DGA-based transformer fault diagnosis method can analyze equipment status information and detect potential transformer risks in time, which is key to ensuring reliable and efficient operation of the equipment. We therefore propose a transformer fault diagnosis method in which the key parameters of the gcForest model are optimized by the IPSO algorithm to improve diagnostic accuracy. Firstly, the noncode ratios of the characteristic gases dissolved in oil are determined as the characteristic parameter of the model. Then, the IPSO algorithm iteratively optimizes the parameters q and l of the gcForest model: under the criterion of highest diagnostic accuracy, the optimal parameters are obtained through continuous iteration, and the IPSO-gcForest fault diagnosis model is established. Finally, the fault characteristic information of the transformer is extracted by multigrained scanning, the cascade forest performs supervised learning to diagnose the fault type, and an accurate diagnosis of the transformer fault type is obtained. The diagnostic performance of the IPSO-gcForest model under different characteristic parameters and sample sizes is analyzed through worked examples, and the effectiveness of the method is verified. The proposed transformer fault diagnosis method has also been applied in a transformer condition assessment system with good practical results. Our contributions in this paper are as follows: (1) Different strategies are used to update the inertia weight and acceleration factors of the traditional PSO algorithm in order to improve the convergence speed and search ability of the particles. (2) Under the criterion of highest diagnostic accuracy, the IPSO algorithm iterates automatically to find the optimal values of the gcForest parameters, overcoming the low accuracy caused by the traditional empirical selection of parameters. (3) Using noncode ratios as the characteristic parameter of the model is shown to significantly improve the accuracy of transformer fault diagnosis. (4) A new intelligent data-driven transformer fault diagnosis method is proposed. The multigrained scanning process of the gcForest model mines richer transformer fault feature information, and the cascade forest process integrates multiple classifiers trained in parallel layer by layer, ensuring that features remain distinguishable under different operating conditions and improving classification accuracy. The rest of the paper is organized as follows. In Section 2, the principle of the IPSO-gcForest model is described in detail, including the gcForest model, the PSO algorithm, and its improved variant. In Section 3, an intelligent transformer fault diagnosis model is built on top of IPSO-gcForest. In Section 4, the robustness of the fault diagnosis method is analyzed, and the parameter optimization of the gcForest model by the IPSO algorithm is discussed. Conclusions are presented in Section 5. PSO Algorithm. The PSO algorithm constantly adjusts each particle's velocity and position based on its own search experience and that of the other particles. Firstly, the state of each particle is initialized.
The local extreme value and the global extreme value are then searched iteratively according to the fitness function of the particles, and the swarm is updated over a set number of iterations. The coordinates of a particle change according to its search velocity at each iteration, which in turn depends on the inertia weight, the acceleration factors, and the local and global extreme values. The velocity and position of each particle are updated according to

$$v_{i,d}^{t+1} = w^{t} v_{i,d}^{t} + s_{1}^{t} r_{1}\left(P_{i,d} - x_{i,d}^{t}\right) + s_{2}^{t} r_{2}\left(G_{d} - x_{i,d}^{t}\right), \tag{1}$$

$$x_{i,d}^{t+1} = x_{i,d}^{t} + v_{i,d}^{t+1}, \tag{2}$$

where $x_{i,d}^{t}$ is the d-dimensional coordinate component of the i-th particle at iteration t; $v_{i,d}^{t}$ is the d-dimensional velocity component of the i-th particle at iteration t; $w^{t}$ is the inertia weight at iteration t; $s_{1}^{t}$ and $s_{2}^{t}$ are the two acceleration factors at iteration t; $r_{1}$ and $r_{2}$ are random values in [0, 1]; $P_{i,d}$ is the local extreme value of the d-dimensional component of the i-th particle; and $G_{d}$ is the global extreme value of the d-dimensional component. IPSO Algorithm. It can be seen from formula (1) that the main factors affecting the PSO update are three parameters: the inertia weight w and the acceleration factors s1 and s2. This paper puts forward two improvement strategies for the traditional PSO algorithm. First, according to the iterative process and the particle's current position, the inertia weight is varied in a nonlinear differential way to balance the overall speed of the particle search against the convergence velocity [26], as shown in equations (3) and (4). Second, the acceleration factors are dynamically adjusted by a cosine function to coordinate the global and local optimization capabilities of the particles and improve the algorithm's search ability [27], as shown in equations (5) and (6), where w_ini and w_fin are the initial and final values of the inertia weight, respectively; t is the current iteration number; T_max is the maximum number of iterations; and s_{1,ini}, s_{1,fin} and s_{2,ini}, s_{2,fin} are the initial and final values of the acceleration factors s1 and s2, respectively. gcForest Model. The gcForest model is composed of multigrained scanning and a cascade forest. The multigrained scanning stage extracts features from the original sample set. The cascade forest structure adaptively determines the number of cascade layers, carries out representation learning, and improves the generalization ability of the model. The completely random forests and random forests [17] in the gcForest model are ensembles of CART decision trees. Decision Tree. A decision tree performs classification and regression tasks from examples. In other words, it obtains classification rules by recursively analyzing the training portion of the original sample set, thereby generating a decision tree that processes the testing set. A decision tree is a hierarchical structure composed of nodes containing sample attributes and branches containing attribute test conditions. Starting from the root node, the attribute test conditions are applied to the training set, the appropriate branch is selected according to the test results, and the branch is followed to an internal node (where a new attribute test condition is applied) or to a leaf node. The structure of the decision tree is shown in Figure 1. The common decision tree algorithms are ID3, C4.5, and CART. The ID3 algorithm adopts a divide-and-conquer strategy and uses information gain as the attribute selection criterion,
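To make the update rule above concrete, the sketch below implements one IPSO iteration in Python. The velocity and position updates follow the canonical PSO equations (1) and (2); the nonlinear inertia-weight schedule and the cosine acceleration-factor schedules stand in for equations (3)-(6), whose exact expressions are not reproduced in the text, so those forms and the default values (w_ini = 0.9, w_fin = 0.4, and the s ranges) are our assumptions.

```python
import numpy as np

def inertia_weight(t, t_max, w_ini=0.9, w_fin=0.4):
    # Assumed nonlinear (quadratic) decrease from w_ini to w_fin over the run.
    return w_fin + (w_ini - w_fin) * (1.0 - t / t_max) ** 2

def accel_factors(t, t_max, s1_ini=2.5, s1_fin=0.5, s2_ini=0.5, s2_fin=2.5):
    # Assumed cosine adjustment: s1 shrinks (less self-exploration) while
    # s2 grows (more attraction to the global best) as iterations proceed.
    c = 0.5 * (1.0 + np.cos(np.pi * t / t_max))
    return (s1_fin + (s1_ini - s1_fin) * c,
            s2_fin + (s2_ini - s2_fin) * c)

def ipso_step(x, v, p_best, g_best, t, t_max, rng):
    # Canonical PSO velocity/position update, equations (1)-(2),
    # with the time-varying coefficients defined above.
    w = inertia_weight(t, t_max)
    s1, s2 = accel_factors(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + s1 * r1 * (p_best - x) + s2 * r2 * (g_best - x)
    return x + v, v
```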
so that all subsets contain only instances of the same class. The important improvement of the C4.5 algorithm over ID3 is the use of the information gain ratio to select attributes [28,29]. The CART algorithm is the basic decision tree algorithm of the completely random forest and the random forest, and it uses the Gini coefficient as the attribute selection criterion. The CART algorithm divides the training set of the original sample set into two subsets using a feature k and a threshold u_k, and then minimizes the cost function H(k, u_k) to generate the purest subsets. During the growth of the decision tree, the Gini coefficient is selected as the division metric for the root node and the internal nodes; the Gini coefficient and the cost function are then used to select the optimal attribute to split the training set. After the decision tree is built, the testing set is used to prune the tree, which improves its generalization ability. The Gini coefficient and the cost function are

$$G_j = 1 - \sum_{k} p_{j,k}^{2}, \qquad H(k, u_k) = \frac{y_{\mathrm{left}}}{y_{\mathrm{left}} + y_{\mathrm{right}}}\, G_{\mathrm{left}} + \frac{y_{\mathrm{right}}}{y_{\mathrm{left}} + y_{\mathrm{right}}}\, G_{\mathrm{right}},$$

where p_{j,k} is the proportion of training instances at node j that belong to category k, y_{left/right} is the number of instances in the left and right subsample sets, and G_{left/right} is the impurity measure of the left and right subsample sets. Multigrained Scanning. The multigrained scanning structure uses scanning windows of different sizes to scan the original input features, producing many feature instances of different dimensions. The feature instances corresponding to the original input features are then trained by a completely random forest and a random forest to generate class probability vectors. Finally, the feature vectors are obtained by splicing, which improves the representation learning ability of the model. The multigrained scanning process is shown in Figure 2. As shown in Figure 2, the multigrained scanning phase is divided into two processes: feature scanning and feature conversion. Assume that the original input feature is of m × m dimensions, the sliding window is of q × q dimensions, and the sliding step size is e. The scanning window extracts feature information by sliding over the original input features and generates

$$N = \left(\frac{m - q}{e} + 1\right)^{2}$$

q-dimensional feature instances. If each forest outputs c-dimensional class probability vectors, then after the completely random forest and random forest training, all class probability vectors are concatenated into an L-dimensional feature vector with

$$L = 2Nc.$$

The scale of the feature vector obtained by multigrained scanning is much higher than that of the original input feature vector, so more feature information can be extracted. Cascade Forest. The cascade forest is a deep ensemble based on decision trees. It achieves high accuracy when processing high-dimensional data and offers scalability and parallelism. The layer-by-layer supervised learning of the cascade forest improves the representation of the feature information. Each layer of the cascade forest contains two completely random forest classifiers and two random forest classifiers. Combining multiple different types of base classifiers allows the feature information of the input feature vector to be learned fully, thereby improving the overall recognition performance of the model. The cascade forest process is shown in Figure 3. The input feature vector of the cascade forest is the feature vector generated by the multigrained scanning process, and supervised learning is then carried out between the cascade layers.
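The dimension bookkeeping of the multigrained scanning stage is easy to check with a short sketch; the function below just evaluates the two reconstructed expressions for N and L, and the example sizes are hypothetical.

```python
def scan_dimensions(m: int, q: int, e: int, c: int):
    # N = ((m - q) / e + 1)^2 feature instances for an m x m input,
    # a q x q sliding window, and step size e.
    per_axis = (m - q) // e + 1
    n_instances = per_axis ** 2
    # Two forests (completely random + random), each emitting a
    # c-dimensional class vector per instance: L = 2 * N * c.
    l_dim = 2 * n_instances * c
    return n_instances, l_dim

# Hypothetical sizes: a 28 x 28 input, 4 x 4 window, step 1, 7 classes.
print(scan_dimensions(m=28, q=4, e=1, c=7))  # -> (625, 8750)
```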
The class vector outputs between the cascade-forest layers are not merged before the logistic regression. The generated class vectors are spliced together with the input feature vector to form the input of the next layer. After layer-by-layer training, the final class vector is generated by logistic regression over all class vectors in the final cascade layer, and the maximum value is taken to obtain the final classification of the original input features. In order to avoid overfitting during cascade forest training, each completely random forest and random forest is trained with 5-fold cross-validation to generate the class vectors. The number of cascade levels is adaptive, and the class vector of each cascade layer is dynamically updated. The performance of the whole cascade forest is evaluated on the testing set: if the gcForest model does not improve significantly over several consecutive layers during training, the cascade process is terminated automatically. This improves the accuracy of fault diagnosis and reduces the training time, and the dynamic adjustment of the cascade depth makes the gcForest model suitable for sample data of different sizes. When the sample data is small, the fault feature information is combined closely to enhance the representation learning ability of the original input features. When the sample data is large, the number of cascade layers is limited to accelerate the training of the cascade forest. Transformer Fault Diagnosis Model Based on IPSO-gcForest. For determining the operating conditions and failure causes of the transformer, the analysis of the gases dissolved in the oil is a vital part. Different faults in power transformers produce different characteristic gases, but the characteristic gas content in DGA data varies widely, which affects the diagnosis and testing of internal faults of oil-immersed transformers. Therefore, the input characteristic parameters of the model are determined by comparing the diagnostic accuracy obtained with DGA data, IEC ratios (CH4/H2, C2H4/C2H6, C2H2/C2H4), Rogers ratios (CH4/H2, C2H2/C2H4, C2H4/C2H6, C2H6/CH4), and noncode ratios (CH4/H2, C2H2/C2H4, C2H4/C2H6, CH4/(C1+C2), C2H2/(C1+C2), H2/(H2+C1+C2), C2H4/(C1+C2), C2H6/(C1+C2), (CH4+C2H4)/(C1+C2)) as the model's characteristic parameters, where C1 is CH4 and C2 is the sum of C2H2, C2H4, and C2H6. Since the dissolved gas content data in transformer oil is disturbed by the monitoring device, the ambient temperature, and personnel operations, the original data needs to be normalized. Normalizing the feature quantities reduces the impact of the raw data scales on the performance of the model and improves its training speed and diagnostic accuracy. To ensure that all feature quantities lie in the same value range, they are normalized according to

$$y^{*} = \frac{y - y_{\min}}{y_{\max} - y_{\min}},$$

where y* is the normalized data, y_min and y_max are the minimum and maximum of the given feature dimension, and y is the original data. IPSO-gcForest Diagnostic Model Technical Route. With its internal structure, the gcForest model can fully mine fault feature information and accurately diagnose transformer faults.
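A minimal sketch of the feature construction just described: the nine noncode ratios computed from the five gas concentrations, followed by the min-max normalization above. Variable names are ours, and in practice a small epsilon would be needed for gases with zero concentration.

```python
def noncode_ratios(h2, ch4, c2h2, c2h4, c2h6):
    # C1 = CH4; C2 = C2H2 + C2H4 + C2H6 (as defined in the text).
    c1 = ch4
    c2 = c2h2 + c2h4 + c2h6
    tot = c1 + c2
    return [
        ch4 / h2, c2h2 / c2h4, c2h4 / c2h6,          # three classic ratios
        ch4 / tot, c2h2 / tot, h2 / (h2 + tot),      # fractions of C1 + C2
        c2h4 / tot, c2h6 / tot, (ch4 + c2h4) / tot,  # remaining noncode ratios
    ]

def min_max_normalize(column):
    # y* = (y - y_min) / (y_max - y_min), applied per feature dimension.
    y_min, y_max = min(column), max(column)
    return [(y - y_min) / (y_max - y_min) for y in column]
```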
When the gcForest model is used to identify fault types, the key parameters of the model must otherwise be set from human experience or by controlled-variable experiments, which easily leads to poor diagnostic results. Thus, under the criterion of highest diagnostic accuracy, the IPSO algorithm obtains the optimal parameters of the gcForest model through continuous iteration, which improves the diagnostic accuracy. The fault types of the transformer are divided into seven states: normal (N), high-energy discharge (D1), low-energy discharge (D2), partial discharge (D3), high-temperature overheating (T1), medium-temperature overheating (T2), and low-temperature overheating (T3). Fault diagnosis based on the IPSO-gcForest model includes three main steps: data preprocessing, parameter optimization with the IPSO algorithm, and fault type identification. The whole process is shown in Figure 4, and the specific steps are as follows. Step 1: the noncode ratios of the characteristic gases dissolved in the oil are determined as the characteristic parameter of the model, and the characteristic parameter is normalized. According to the model testing requirements, the original sample is randomly divided into a training set and a testing set at a ratio of 8:2. Step 2: the population particles are initialized randomly, and the value ranges and search ranges of q and l are set. The number of particles and the maximum number of iterations are then determined. Step 3: the gcForest model is built from the initialized values of q and l. The training set and the testing set are used to train and test gcForest, respectively, and the diagnostic accuracy on the training set is used as the fitness value of the particles. Step 4: the local extreme values and global extreme values of the particles are determined from their initial fitness, and the velocity and position of the particles are updated using equations (1) and (6). The corresponding particle fitness values are calculated and compared with the local and global extreme values, and new local and global extreme values are determined to achieve the highest diagnostic accuracy. Step 5: when the particle fitness value stabilizes or the number of iterations reaches the preset value, the iterative optimization is stopped and the optimal parameters are obtained; otherwise, the procedure returns to Step 4. Step 6: the IPSO-gcForest fault diagnosis model is constructed from the optimal parameters obtained by the IPSO algorithm, and the diagnosis results are analyzed comprehensively with the evaluation indices. Model Evaluation. The diagnosis results are evaluated with indices such as precision and recall, computed from the numbers of true positives (TP), false positives (FP), and false negatives (FN), where FN is the number of false negatives. Transformer Fault Diagnosis Based on IPSO-gcForest. This paper collects fault sample data of transformers with voltage levels from 35 kV to 500 kV, drawn from the online monitoring data and historical fault data of China Southern Power Grid Corporation, transformer fault oil chromatographic data in published papers, the "Typical Cases of Application of Power Grid Equipment Detection Technology" published by the State Grid, and the IEC TC 10 database. These data samples comprise 1601 cases of transformer fault data. The training set and the testing set are divided at a ratio of 8:2: 1280 cases receive supervised training to adjust the parameters of the model and improve its fit.
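Steps 2-5 above amount to a compact search loop. The sketch below reuses ipso_step from the earlier IPSO sketch; gcforest_train_acc is a hypothetical callback (not a real library call) that trains a gcForest with window size q and maximum cascade number l and returns the training-set diagnostic accuracy, and the swarm size and search ranges are illustrative assumptions.

```python
import numpy as np

def optimize_q_l(gcforest_train_acc, n_particles=10, t_max=100,
                 q_range=(1, 8), l_range=(2, 10), seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([q_range[0], l_range[0]], dtype=float)
    hi = np.array([q_range[1], l_range[1]], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, 2))   # particle positions (q, l)
    v = np.zeros_like(x)
    fitness = lambda p: gcforest_train_acc(*np.rint(p).astype(int))
    p_best = x.copy()
    p_fit = np.array([fitness(p) for p in x])
    g_best = p_best[p_fit.argmax()].copy()
    for t in range(1, t_max + 1):
        x, v = ipso_step(x, v, p_best, g_best, t, t_max, rng)
        x = np.clip(x, lo, hi)                       # keep (q, l) in range
        fit = np.array([fitness(p) for p in x])
        improved = fit > p_fit                       # update local extremes
        p_best[improved], p_fit[improved] = x[improved], fit[improved]
        g_best = p_best[p_fit.argmax()].copy()       # update global extreme
    return np.rint(g_best).astype(int)               # optimal (q, l)
```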
The remaining 321 cases are used to evaluate the performance and generalization ability of the model. Thus, the transformer fault diagnosis is realized. The sample data distribution for each fault type is shown in Figure 5. IPSO-gcForest Model Parameter Selection and Optimization Results. After normalizing the data in Figure 5, the noncode ratios of the characteristic gases dissolved in the oil are determined as the characteristic parameter of the model. In the IPSO optimization of the gcForest parameters q and l, the diagnostic accuracy on the training set is taken as the particle fitness value. After adjusting the model parameters and comparing the diagnosis results, the model parameters are determined as follows: the number of decision trees in a random forest during multigrained scanning is 500, with the growth rule that the purity of the leaf nodes reaches the optimum or the depth reaches 50; the number of decision trees in a single random forest of the cascade layer is 101, with the same growth rule. The parameter settings used during the optimization process are shown in Table 1. The fitness change of the particles during the optimization process is shown in Figure 6. As can be seen from Figure 6, the parameters q and l of the gcForest model go through five rounds of 100 iterations each. The accuracy of transformer fault diagnosis reaches its best at iterations 68, 49, 54, 65, and 52, respectively. The IPSO optimization process improves the fitness from 93.15% or 93.46% to the optimal value of 94.70% within 3 to 4 steps. Finally, when q is 4 and l is 5, the particle fitness is best, reaching 94.70%. Comparison of Different Characteristic Parameters. According to the data distribution in Figure 5, the noncode ratios were used as the input characteristic parameter to test the IPSO-gcForest model. To verify the effectiveness of the proposed method, DGA data, IEC ratios, and Rogers ratios were used as input characteristic parameters and contrasted with the results obtained with the noncode ratios. To diagnose and analyze the transformer fault types, the four types of characteristic parameters were input into the RF model, the DBN model, the gcForest model, the PSO-gcForest model, and the IPSO-gcForest model, respectively. The RF model adopts the bootstrap resampling method; the number of subtrees is 100, and the number of split features is 7. The DBN model uses the sigmoid activation function, a learning rate of 0.001, a momentum of 0.9, and 3 hidden layers. The default parameter setting of the gcForest model is 500 decision trees per random forest during multigrained scanning with a window size q of 2, and 101 decision trees per random forest in the cascade layer with a maximum allowed number of cascade layers l of 7. The results are shown in Table 2. As can be seen from Table 2, for the same characteristic parameter, the diagnostic accuracy increases in the order of the RF model, DBN model, gcForest model, PSO-gcForest model, and IPSO-gcForest model. For the same method, the diagnostic accuracy improves across the characteristic parameters in the order of DGA data, IEC ratios, Rogers ratios, and noncode ratios. With noncode ratios as the characteristic parameter, IPSO-gcForest achieves the highest diagnostic accuracy, reaching 94.70%.
Compared with the diagnostic results of the RF model, DBN model, gcForest model, and PSO-gcForest model, the accuracy of IPSO-gcForest fault diagnosis improves by 10.90%, 9.03%, 5.91%, and 1.87%, respectively. Compared with the characteristic parameters of DGA data, IEC ratios, and Rogers ratios, the diagnostic accuracy of IPSO-gcForest improves by 10.59%, 7.16%, and 3.11%, respectively. This shows that the noncode ratios provide more characteristic information as the input characteristic parameter of the transformer fault diagnosis model. Comparison of Different Diagnostic Models. Due to the unbalanced distribution of the samples across fault types in the collected transformer fault data, the performance of the model cannot be verified effectively by the diagnostic accuracy alone. Therefore, the precision, the recall rate, and the receiver operating characteristic (ROC) curve are used to measure the generalization ability of the model. With the noncode ratios as the input characteristic parameter of the different diagnostic models, the diagnostic results are shown in Table 3. As can be seen from Table 3, the precision and recall of the IPSO-gcForest method are all above 84%, and the average precision and average recall are 94.00% and 92.77%, respectively. The results show that the IPSO-gcForest model has clear advantages in the classification performance for each fault type. When the RF model diagnoses the transformer fault types, the partial discharge fault diagnosis precision is the highest, reaching 88.24%, but the recall of the low-energy discharge fault is the lowest, at only 50.00%. The reason is that transformer fault types are related to each other, and different faults may be superimposed. The recall of the low-energy discharge fault identified by the IPSO-gcForest model is the highest, reaching 93.33%, which shows that it can effectively identify the actual fault types of the transformer. The ROC curve plots the true positive rate (vertical axis) against the false positive rate (horizontal axis) under different discrimination probability thresholds. The ROC curve can comprehensively evaluate the classification performance of fault diagnosis methods, especially for unbalanced samples. The area under the ROC curve measures how well the model learns the few cost-sensitive samples that require attention, and the classification performance and overall trend of the ROC curve can be evaluated intuitively. The ROC curves of the different diagnostic models are shown in Figure 7. As can be seen from Figure 7, the area under the ROC curve of the IPSO-gcForest diagnostic method is the highest, reaching 0.9873. Compared with the areas under the ROC curves of the other transformer fault diagnosis models, it is higher by 13.67%, 11.71%, 6.77%, and 4.63%, in turn. The results show that the proposed method has good classification ability for unbalanced samples. Comparison of Samples of Different Sizes. To further analyze the robustness of the IPSO-gcForest diagnostic model under samples of different sizes, the fault samples in Figure 5 are divided, at proportions of 25%, 50%, 75%, and 100%, into sample 1 (400 cases), sample 2 (800 cases), sample 3 (1201 cases), and sample 4 (1601 cases). Each sample is divided into a training set and a testing set at a ratio of 8:2, and the diagnostic accuracy is shown in Figure 8.
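Referring back to the ROC analysis of Figure 7: the macro-averaged, one-vs-rest AUC over the seven fault classes can be computed with scikit-learn from the class-probability outputs of a trained model; model, x_test, and y_test below are placeholders for a fitted classifier and the held-out split.

```python
from sklearn.metrics import roc_auc_score

def macro_auc(model, x_test, y_test):
    # predict_proba yields an (n_samples, 7) matrix of class probabilities;
    # one-vs-rest macro averaging handles the unbalanced fault classes.
    proba = model.predict_proba(x_test)
    return roc_auc_score(y_test, proba, multi_class="ovr", average="macro")
```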
As can be seen from Figure 8, the IPSO-gcForest model achieves high fault diagnosis accuracy under samples of different sizes. The results show that the performance of the IPSO-gcForest model is better than that of the other three fault diagnosis methods. Compared with sample 1, sample 2, and sample 3, the diagnostic accuracy of the IPSO-gcForest model on sample 4 increases by 9.51%, 5.88%, and 3.03%, respectively, indicating that the larger the sample size, the more feature information is extracted. When the sample size decreases, the diagnostic accuracy of each method decreases. However, the reduction in sample size has little effect on the fault diagnosis accuracy of the IPSO-gcForest model, indicating that it retains good performance and strong robustness under small samples. Table 4 shows the oil chromatographic data of an SFSZ9-50000/110 transformer in a substation after its failure on January 21, 2020. Case Study. With the noncode ratios selected as the input characteristic parameter of the IPSO-gcForest model, the oil chromatographic data is diagnosed and identified. The diagnosis result is high-energy discharge, with a probability of 87.63%. By contrast, the code determined by the three-ratio method is "202", for which no corresponding fault type can be determined. The maintenance personnel found that the A-phase low-voltage coil of the transformer was burned over a large area. From cakes 38 to 54 and 68 to 71 from the bottom, severe short circuits and interturn short circuits occurred between the cakes. The windings were melted and twisted in many places, and the upper coil had radial deformation. The whole coil sank by about 40 mm, and the cushion blocks were dislocated and had fallen off. There is a large amount of melted copper and carbonization marks of insulating material at the fault position, as shown in Figure 9. There are obvious discharge traces between the low-voltage coil and the iron core, and the silicon steel sheet is slightly deformed, as shown in Figure 10. The B-phase low-voltage coil of the transformer is obviously bulged, the insulating paper is damaged, and the axial height of the winding sinks by about 15 mm, as shown in Figure 11. There is deformation between the lower end of the C-phase low-voltage coil and the core, with slight loosening when pressed by hand, but no obvious change in the axial height of the winding, as shown in Figure 12. From the analysis of the field situation, a high-energy discharge problem exists in the A-phase low-voltage winding of the transformer, which is consistent with the diagnosis result of the transformer fault diagnosis method proposed in this paper. Conclusions This paper combines current artificial intelligence technology and machine learning algorithms and proposes a transformer fault diagnosis method based on the IPSO-gcForest model. The following conclusions are obtained from the example analysis: (1) by improving the update strategy of the traditional PSO algorithm, the key parameters of the gcForest model are optimized with the IPSO algorithm, which overcomes the random fluctuation of the gcForest output and gives the diagnosis model better generalization performance. (2) Compared with the RF, DBN, gcForest, and PSO-gcForest models, the IPSO-gcForest model has higher diagnostic accuracy with the noncode ratios, DGA data, IEC ratios, or Rogers ratios as the characteristic parameter.
Among them, the model with noncode ratios as the characteristic parameter has higher diagnostic accuracy than with the other three characteristic parameters. (3) The proposed IPSO-gcForest transformer fault diagnosis method has higher identification accuracy and higher recall than the other compared methods. Moreover, its AUC value is also the highest, which improves the classification of unbalanced sample data. (4) With increasing sample size, the IPSO-gcForest model achieves improved diagnostic accuracy and more stable diagnostic performance. In the future, more discharge and overheating mixed fault cases could be collected to verify the effectiveness of the proposed method, and further research on optimizing the model structure will be conducted. Data Availability The data were obtained from the transformer online monitoring data and historical fault data of China Southern Power Grid Corporation, transformer fault oil chromatographic data in published papers, the "Typical Cases of Application of Power Grid Equipment Detection Technology" published by the State Grid, and the IEC TC 10 database. Conflicts of Interest The authors declare that they have no conflicts of interest.
7,957.6
2021-02-10T00:00:00.000
[ "Engineering", "Computer Science" ]
Content-aware QoE optimization in MEC-assisted Mobile video streaming The traditional client-based HTTP adaptation strategies do not explicitly coordinate between the clients, servers, and cellular networks. A lack of coordination leads to a suboptimal user experience. In addition to optimizing Quality of Experience (QoE), other challenges in adapting HTTP adaptive streaming (HAS) to the cellular environment are overcoming unfair allocation of the video rate and inefficient utilization of the bandwidth under highly dynamic cellular links. Furthermore, the majority of adaptive strategies ignore important video content characteristics and HAS client information, such as segment duration, buffer size, and video duration, in the video quality selection process. In this paper, we present a content-aware hybrid multi-access edge computing (MEC)-assisted quality adaptation algorithm that takes advantage of the capabilities of edge cloud computing. The proposed algorithm exploits video content characteristics, HAS client settings, and application-layer information to jointly adapt the bitrates of multiple clients. We design separate strategies to optimize the performance of short and long duration videos. We then demonstrate the efficiency of our algorithm against client-based solutions as well as MEC-assisted algorithms. The proposed algorithm guarantees high QoE, equitably selects video rates for clients, and efficiently utilizes the bandwidth for both short and long duration videos. The results from our extensive experiments reveal that the proposed long video adaptation algorithm outperforms state-of-the-art algorithms, with improvements in average video rate, QoE, fairness, and bandwidth utilization of 0.4%–12.3%, 8%–65%, 3.3%–5.7%, and 60%–130%, respectively. Furthermore, when high bandwidth is available to competing clients, the proposed short video adaptation algorithm improves QoE by 11.1% compared to the long video adaptation algorithm. Introduction Multimedia content accounts for the majority of Internet traffic. According to the Cisco Visual Networking Index, 82% of global mobile data traffic will be video traffic by 2022 [8]. To handle the traffic demand related to multimedia, HTTP adaptive streaming (HAS) solutions are often used. These solutions include Apple's HTTP Live Streaming (HLS), Adobe's HTTP Dynamic Streaming (HDS), Microsoft's IIS Smooth Streaming, and Dynamic Adaptive Streaming over HTTP (DASH), developed under MPEG and standardized by the ISO/IEC. In HAS, video content is encoded at multiple video rates and stored on an HTTP server. The video content is fragmented into segments of fixed duration. The adaptive bitrate (ABR) algorithms run on the HTTP clients and adapt the video quality according to the network conditions. The HAS clients download the segments into the playback buffer before they are sent to the video player. The objective of the adaptation algorithms is to optimize the user experience by meeting conflicting video quality objectives. These objectives include selecting the highest feasible set of video bitrates, avoiding unnecessary video bitrate switches, and preserving the buffer level to avoid playback interruptions [10,11,24,29,39].
Traditionally, the ABR algorithms run on the HTTP client, and the clients are unaware of competing clients and radio channels. It has been shown that competing clients cannot achieve fair performance when the air interface is a bottleneck [6]. Furthermore, the unfairness increases as the number of competing clients increases. Similarly, competing clients cannot coordinate with each other to efficiently utilize the bandwidth. Recently, a multi-access edge computing (MEC) paradigm has emerged that offers computation capabilities at the edge of a mobile network by deploying servers within the radio access networks. In addition, MEC provides real-time access to application and radio access network (RAN) information. The computational capabilities of MEC allow for cell-wide central adaptation of multiple clients competing for bandwidth. In our previous work [36], we analyzed the performance of MEC-assisted and client-based rate-adaptation algorithms under varying client, server, dataset, and network settings. The existing algorithms developed fixed control rules to select the video quality based on the estimated throughput [18,42], the playback buffer level [14], or a combination of the two parameters [15,28,33,34]. The results in [36] revealed that the algorithms require significant tuning, and their performance fluctuates from one network setting to another, leading to inconsistent QoE in different environments. Video streaming services deploy segment durations differently: Microsoft's Smooth Streaming and Adobe's HTTP Dynamic Streaming offer segment durations of 2 seconds and 4 seconds, respectively [1,47]. With the shorter segment duration, the client has more opportunities to adapt the video rate than with the longer duration; in a highly unstable network, the client can adjust the video rate quickly by downloading smaller segments. Similarly, different video players offer different buffer sizes. The buffer-based algorithms adapt the video rates aggressively or conservatively based on the playback buffer level: as the buffer level increases, the algorithms select the video rate more aggressively. A smaller buffer fills up quickly compared to a larger buffer, which allows the adaptation algorithms to increase the video rate aggressively; however, a larger buffer decreases the risk of playback interruption in case of a mismatch between the selected video rate and the available bandwidth. The ABR algorithms should be able to guarantee QoE under different settings, yet the existing adaptation algorithms do not consider buffer sizes and segment durations when adapting the video quality [36]. In this work, in addition to the playback buffer level, we also consider the clients' buffer sizes and the segment duration to decide the video quality. The results in [36] also reveal that the existing algorithms give precedence to one specific video quality objective over the others. This trend is observed in both MEC-assisted and client-based algorithms. It is easier to meet just one of the conflicting video quality objectives. For example, the video can be streamed at the highest available video rate throughout the streaming session, which increases the risk of playback interruptions in an unstable environment. Similarly, the video can be streamed at the lowest available video rate to minimize the risk of playback interruption, which leads to poor video quality. The aim of this work is to propose an adaptive algorithm that optimizes the QoE by simultaneously maximizing all metrics.
Trends in video content have changed drastically since the advent of social media. The durations of video content on online video-sharing platforms such as YouTube have shrunk drastically over the years [2]. The average duration of the top 10 videos on Facebook was 128 s in 2018 [17]. Similarly, the average duration of movie trailers for over 20,000 movies released between 2000 and 2016 was 114.2 s [13]. In a mobile network, the bandwidth for HAS clients depends on multiple factors, including propagation distance, fading, interference, and user mobility, and the throughput may change drastically while downloading the segments. Therefore, the existing ABR algorithms strive to keep the buffer filled to a predefined threshold to minimize the risk of playback interruption [26,33,34], and they compromise on video quality to keep the buffer above that threshold. This strategy is understandable while streaming a long video, such as a complete movie. However, with a short video, such as a movie trailer, the user expects to watch the complete video at the most feasible video quality. Therefore, it makes sense to design different video rate selection strategies for short and long videos. To this end, we design separate quality adaptation algorithms for short and long duration videos. The understanding of MEC-assisted ABR strategies is still in its early stages. In this paper, we present a content-aware edge computing-assisted rate adaptation method for a single cell with multiple clients to centrally optimize the QoE of competing clients. The contributions of this research are as follows.
- We design an integer non-linear programming (INLP) optimization model that jointly optimizes the QoE, fairness, and bandwidth utilization of HAS clients in a cellular network with MEC capabilities.
- Due to the NP-hardness of the problem, we design content-aware greedy heuristic algorithms that solve the rate adaptation optimization problem for short and long duration videos. The algorithms consider video duration, segment duration, clients' playback buffer size, estimated throughput, and playback buffer level to jointly select the video rates for HAS clients.
- We conduct extensive experiments to evaluate the performance of the proposed algorithms with varied segment durations, playback buffer sizes, numbers of competing clients, clients' moving speeds, and video durations.
- The results from our extensive experiments show that the proposed algorithm guarantees QoE under varying client, server, dataset, and network settings, optimizing QoE by simultaneously enhancing all video quality metrics.
- The results reveal that the proposed long video adaptation algorithm outperforms state-of-the-art algorithms, with average improvements in video rate, QoE, fairness, and bandwidth utilization ranging from 7.3%–12.3%, 8%–28%, 3.3%–5.7%, and 60%–130%, respectively. Additionally, when high bandwidth is available to clients, the proposed short video algorithm downloads 6% higher-quality segments, experiences 45% fewer switches, and improves QoE by 11.1%, compared to the proposed long video adaptation algorithm.
Related work The ABR algorithms can be divided into three categories: 1) throughput-based, 2) buffer-based, and 3) hybrid. Throughput-based algorithms select the video quality based on the throughput observed while downloading segments [36] [5, 23, 27]. It has been shown that they cannot accurately estimate the bandwidth when multiple clients compete against a network bottleneck [22]. Therefore, some ABR algorithms observe only the playback buffer to select the video quality [18,41]. Multiple researchers have used a combination of the playback buffer and the estimated throughput to pick the video quality [13,14,34,42]. In [20], the authors used the segment size in addition to the throughput and the playback buffer for video rate adaptation. However, all these algorithms target client-side quality adaptation, and they do not target fair selection of the video rates in a multi-client environment. FESTIVE [6] improves HAS fairness by using a harmonic bandwidth estimator and randomizing the scheduling of the requested segments. Li et al. [10] presented an algorithm called Probe and Adapt (Panda) that probes for a fair bandwidth share and adapts the video rate accordingly. Although these algorithms improve fairness and stability in a wired network, they perform poorly under dynamic cellular links due to TCP unfairness. The Panda probing mechanism follows an additive-increase/multiplicative-decrease (AIMD) strategy. In a cellular network, a client close to the edge of the base station's coverage may observe low throughput due to the propagation distance; when that client moves closer to the base station, it will observe higher throughput. However, due to Panda's AIMD strategy, it takes multiple segments to increase the client's estimated throughput. In [9], the authors proposed a server-side scheme using feedback control theory to execute measurement and control at the HAS server, where the clients' video qualities are jointly adapted. However, the scheme is not specifically designed for the cellular environment and does not impose any constraints on radio resources, which might lead to overestimating or underestimating the video rates for adaptation. Petrangeli et al. [30] proposed a method to fairly utilize the bandwidth when multiple clients greedily compete for it. However, their proposed objective function and adaptation scheme do not consider the trade-off between the QoE of the clients and fairness. Other researchers [7,19,48] have proposed schemes that combine the designs of quality adaptation and resource allocation in a multi-client cellular environment. However, these schemes require modification of the standard cellular infrastructure. The concept of MEC has been proposed by the European Telecommunications Standards Institute (ETSI) to satisfy the requirements of 5G. Yang et al. [44] implemented a proof-of-concept for a MEC-assisted mobile video streaming service. Tran et al.
jointly utilized the processing capability of MEC along with edge caching to improve a streaming system [43,45]. However, the focus of these works was to reduce video delivery latency without considering the factors that impact the QoE of the clients. In [25], the authors proposed a MEC-assisted adaptation algorithm and a client-to-edge-server mapping strategy to quantify the benefits of a network-assisted solution. The authors compared the effect of network topology and interarrival time on the performance of the MEC-assisted algorithm and purely client-based adaptation algorithms. The results in [25] showed that the client-to-edge-server mapping mechanism led to clients achieving higher throughput, and the MEC-assisted algorithm utilized the higher available throughput to download higher quality segments compared to client-based algorithms. The MEC-assisted algorithm outperformed client-based algorithms in some of the video quality objectives when the achievable throughput was moderately high. However, the authors did not discuss the performance of the adaptation algorithms without the client-to-edge-server mapping strategy, and did not use QoE and bandwidth utilization metrics to compare the performance of the algorithms. They used a simple mathematical model to characterize the radio link as a function of the client's distance from the base station; in cellular networks, the radio link depends on multiple factors, including propagation distance, fading, shadowing, and interference, yet fading, shadowing, and interference are ignored in their mathematical model. Moreover, the authors ignored important content information in the design of the adaptation algorithm. In [46], the authors proposed an edge-assisted adaptive video streaming scheme based on a dueling deep Q-learning network, whose objective was to optimize QoE by jointly considering the physical layer transmission bandwidth and the playback buffer status. In our previous work [35], we presented a joint throughput estimation that assists an adaptation algorithm in fairly assigning video rates, as well as a MEC-assisted rate adaptation method to enhance the viewing experience. These works [3,21,25,35,43,45,46] focused on jointly optimizing the QoE in a cellular environment. However, they do not focus on efficient utilization of the bandwidth by a HAS client, and they do not take into account video content and HAS client information. The authors in [12] introduced an edge- and SDN-assisted video streaming framework that exploited the capabilities of Software Defined Networking (SDN) and Network Function Virtualization (NFV). This work focused on improving the user experience by improving the playback video rate and minimizing playback interruptions. In [36], we showed the impact of segment duration, client buffer size, the number of competing clients, and clients' arrival times on the performance of HAS algorithms. The results revealed that rate adaptation algorithms must consider these parameters in order to guarantee QoE under different settings. Different from existing works, the proposed algorithm investigates the impact of content-aware joint optimization of QoE, fairness, and bandwidth utilization for video streaming in MEC environments. The proposed algorithm jointly adapts video rates by exploiting cell-wide HAS client information, video content details, and device features. Furthermore, the proposed method uses different strategies to adapt video rates when streaming short and long duration videos.
Multi-access edge computing assisted streaming In this section, we describe the proposed system. Architecture overview Traditionally, the adaptation module runs on the HAS client. HAS clients are oblivious to the decisions made by other competing clients. Moreover, clients unfairly utilize the available bandwidth in the presence of competing clients [22]. HAS clients rely on the underlying TCP to fairly and accurately estimate throughput; however, the underlying TCP is inaccurate and unfair in a cellular environment. The edge cloud can access RAN information and is computationally far more powerful than the HAS clients. It is thus logical to shift the adaptation module from the client to the edge cloud, where it can exploit the channel knowledge of multiple streams to jointly adapt the video quality of the clients. Figure 1 illustrates the MEC HAS system for adaptive video streaming over a cellular network (the streaming architecture for multi-access edge computing-assisted video streaming). The HAS server stores video content encoded into a set of m video rates R = {R_1, R_2, R_3, ..., R_m}. Each representation of a video is split into multiple segments of fixed duration, τ. A set of N HAS clients subscribes to the HAS services, and each client is indexed by i, where i = 1, 2, ..., N. The edge cloud is deployed at the base station to enhance the mobile services, and cellular entities such as the cellular scheduler operate in the same way as in conventional cellular networks. The client initiates streaming by requesting information about the stored content from the HAS server. In response, the HAS server sends the media presentation description (MPD), so that the adaptation module at the edge cloud and the HAS client have information on the available video representations. Then, the HAS client requests a video segment based on the available application layer information. The request from the HAS client is treated as a suggestion by the adaptation module at the edge cloud. Under conventional client-side adaptation, the cellular network forwards the request to the HAS server. In MEC-assisted adaptation, the edge cloud intercepts the request, and the adaptation module overwrites the video rate suggested by the client, R_c, based on the cell-wide optimization of the clients. In addition to the information available in the MPD sent by the server, the clients' playback information for adaptation, such as the playback buffer level, device capabilities, observed throughput, and QoE status of the clients, is embedded in the feedback from the clients. This is feasible because the 3rd Generation Partnership Project standardized QoE reporting for clients using an HTTP POST request carrying XML-formatted metadata in the body [32]. The video rate adaptation results of the adaptation module at the edge cloud are then delivered to the HAS server for streaming the next segment. In this manner, the edge cloud can jointly optimize the user experience of the HAS clients without modifying the client or the server. The list of parameters and their descriptions is summarized in Table 1.
The downloaded segments are stored in the playback buffer, which contains the unviewed video. Let B(t) ∈ [0, B_max] be the buffer occupancy at time t. Different video players provide different buffer sizes, B_max, depending on the service provider and the storage limitations of the player. Figure 2 depicts the dynamics of the playback buffer. At time t_k, the client downloads the k-th segment encoded at the i-th video rate, R_i^k. The size of the segment is R_i^k × τ, and its download time is R_i^k × τ / T_k, where T_k is the throughput observed by the client during the download of the k-th segment. Once segment k is downloaded, the client waits Δt_k seconds before sending the request for segment k + 1; the client waits only when a newly downloaded segment would overflow the buffer, so the waiting time is given by

$$\Delta t_k = \left(B_k + \tau - B_{\max}\right)^{+}. \tag{2}$$

(Table 1 also defines, among others, the video duration; the waiting time Δt_k before the request for the next (k + 1)-th segment; the adjustable weighting parameters ρ, β, φ, and θ for the average bitrate, bitrate switching, fairness, and bandwidth inefficiency, respectively; and the threshold values δ_s, δ_IE, and δ_F for the switching level, inefficiency, and fairness.) The throughput T_k is the size of segment k divided by its download time,

$$T_k = \frac{R_i^k \times \tau}{t_{k+1} - t_k - \Delta t_k}. \tag{3}$$

Let B_k be the buffer level before the start of the download of the (k + 1)-th segment; then B_{k+1} is expressed as

$$B_{k+1} = \left(\left(B_k - \frac{\tau R_i^k}{T_k}\right)^{+} + \tau - \Delta t_k\right)^{+}, \tag{4}$$

where the notation (x)^+ = max(x, 0) ensures that the term is always positive. Eq. (4) shows that if B_k < τ R_i^k / T_k, the buffer will be empty before the video player completely downloads the k-th segment. Note that the segment duration plays an important role in the change of buffer occupancy during the download of a segment: a longer segment duration increases the risk of playback interruption in case of a mismatch between the selected video rate and the throughput. Channel Model We consider a cell consisting of N HAS clients that stream video content and are served by a base station. The spatial distributions of the base station and the HAS clients are mutually independent. Ignoring interference among clients, the signal-to-noise ratio (SNR) of a HAS client is

$$\mathrm{SNR} = \frac{P_r}{N_0 W}, \tag{5}$$

where P_r, N_0, and W denote the received power, the power spectral density of additive white Gaussian noise, and the client's bandwidth, respectively. The received power is related to the path loss and the transmitted power. The path loss, PL(d), is a function of the propagation distance [31], where d is the distance between the HAS client and the base station. Therefore, (5) can be expressed as

$$\mathrm{SNR} = \frac{P_t}{PL(d)\, N_0 W},$$

where P_t denotes the transmitted power. Assume the total bandwidth is BW, and the bandwidth allocated to the j-th client is ω_j. Ignoring interference among the clients, the j-th client's channel capacity, C_j, can be calculated according to Shannon's theorem [31]:

$$C_j = \omega_j \log_2\left(1 + \mathrm{SNR}_j\right).$$

The higher the client's channel capacity, C_j, the higher the throughput, T_j, observed by the j-th client during the download of a segment. Note that the channel capacity depends on the propagation distance. HAS clients may be located at different distances from the base station.
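A small sketch of the buffer recursion and the link capacity above. The waiting-time, throughput, and buffer expressions follow the reconstructions in the text, so they should be read as our interpretation rather than the authors' exact formulation.

```python
import math

def next_buffer(b_k, rate, throughput, tau, dt_k):
    # B_{k+1} = ((B_k - tau * R / T)^+ + tau - dt_k)^+  (equation (4))
    after_download = max(b_k - tau * rate / throughput, 0.0)
    return max(after_download + tau - dt_k, 0.0)

def waiting_time(b_k, tau, b_max):
    # dt_k = (B_k + tau - B_max)^+ : wait only if the buffer would overflow.
    return max(b_k + tau - b_max, 0.0)

def channel_capacity(bandwidth_hz, p_t, path_loss, n0):
    # C_j = w_j * log2(1 + SNR_j) with SNR = P_t / (PL(d) * N0 * W),
    # assuming received power P_r = P_t / PL(d).
    snr = p_t / (path_loss * n0 * bandwidth_hz)
    return bandwidth_hz * math.log2(1.0 + snr)
```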
A client close to the base station achieves higher throughput than a client located at the edge of the cell. On one hand, selecting the highest feasible video rate for both clients would decrease fairness. On the other hand, selecting a video rate higher than the available throughput for the user at the edge of the cell (in order to improve fairness) would lead to buffer underflow. Likewise, selecting a low video rate for the client closer to the base station would improve fairness but increase bandwidth inefficiency. Therefore, optimizing both bandwidth utilization and fairness for the clients in a cellular environment is a challenging task. Quality of Experience A comprehensive survey on QoE under HAS determined the factors that affect user experience [38]. These factors include selecting the highest feasible set of video bitrates, avoiding unnecessary video bitrate switches, and avoiding playback interruption. Playback interruptions and the selection of video bitrates affect the user experience the most [16]. There is a trade-off between selecting the highest feasible video rate and the risk of playback interruption. We aim to provide optimal QoE based on the abovementioned conflicting criteria. The average video bitrate over the segments downloaded by the j-th client is

$$\bar{R}_j = \frac{1}{S} \sum_{k=1}^{S} R_{ij}^{k},$$

where R_{ij}^k is the i-th video rate assigned to the j-th client, k is the segment index, and S is the total number of segments downloaded by the client. Frequent video rate switches adversely affect the user experience, and abrupt switching impairs QoE more than smooth switching [11]. The magnitude of the changes in quality from one segment to another is

$$SW_j = \sum_{k=2}^{S} \left| R_{ij}^{k} - R_{ij}^{k-1} \right|.$$

The client experiences playback interruptions if the download time (τ R_{ij}^k / T_k) is higher than the playback buffer occupancy level. The total interruption time, IR, is

$$IR_j = \sum_{k=1}^{S} \left( \frac{\tau R_{ij}^{k}}{T_k} - B_k \right)^{+}.$$

In this study, we use the same QoE metric as the authors in [45], defined as

$$QoE_j = \sum_{k=1}^{N} q\!\left(R_{ij}^{k}\right) - \mu\, IR_j - \sum_{k=1}^{N-1} \left| q\!\left(R_{ij}^{k+1}\right) - q\!\left(R_{ij}^{k}\right) \right|. \tag{11}$$

For a video fragmented into N segments, q(R_i^k) maps the video rate to the quality perceived by the viewer, IR_j is the total rebuffering time during the download of the video, and the final term discourages frequent changes in the video rate. The authors in [45] set μ = 3000, signifying that a playback interruption of 1 s receives the same penalty as reducing the bitrate of a segment by 3000 kbps. We consider the same value in our evaluation. In this study, we calculate the average QoE per segment, that is, the total QoE metric divided by the number of segments. In Section 6, we evaluate the QoE of the algorithms using Eq. (11). Fairness and bandwidth efficiency Rate adaptation algorithms are fairly effective when a client operates alone. When multiple HAS clients compete for the bandwidth, the clients utilize the bandwidth inefficiently and select low-quality video rates [36]. In order to utilize the bandwidth efficiently, we strive to select for the competing clients the most suitable video rates such that their sum has the least difference from the total available bandwidth at the base station. The bandwidth inefficiency at time t is calculated as

$$IE(t) = \frac{BW(t) - R_{ij}^{(t)} - \sum_{v \neq j} R_{iv}^{(t)}}{BW(t)},$$

where BW(t) is the available bandwidth at time t, R_{ij}^{(t)} is the video rate selected by the j-th client at time t, and Σ_{v≠j} R_{iv}^{(t)} is the sum of the video rates of the competing video clients at time t.
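The QoE metric of equation (11) is straightforward to compute once the per-segment rates and total rebuffering time are known. In the sketch below, q() defaults to the identity mapping, which is an assumption; the text only states that q maps a video rate to perceived quality.

```python
def qoe(rates, rebuffer_time, mu=3000.0, q=lambda r: r):
    # Sum of perceived qualities, minus the rebuffering penalty,
    # minus the total magnitude of quality switches (equation (11)).
    quality = sum(q(r) for r in rates)
    switching = sum(abs(q(b) - q(a)) for a, b in zip(rates[:-1], rates[1:]))
    return quality - mu * rebuffer_time - switching

# Average QoE per segment, as used in the evaluation:
# qoe(rates, rebuffer_time) / len(rates)
```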
To ensure that the video rates are allocated fairly among the clients, we select for each client the most feasible video rate that has the least difference from the average of the video rates allocated to the other competing clients. The fairness index at time t is calculated as

$$F(t) = \frac{\left| R_{ij}^{(t)} - R_{avg} \right|}{R_{\max} - R_{\min}},$$

where R_avg = (1/(N − 1)) Σ_{v≠j} R_{iv}^{(t)} is the average video rate of the other active streaming clients. Low values of inefficiency and fairness are desired: a low inefficiency value signifies that the client selects the highest feasible bitrate below the actual throughput, while a low fairness value signifies that the competing clients achieve equitable video rates. Joint optimization problem The ultimate goal of video quality adaptation is to enhance the QoE of video clients in order to achieve higher long-term user engagement [10]. With the abovementioned system, the utility maximization problem (jointly maximizing the QoE of individual HAS clients, ensuring fairness, and reducing bandwidth inefficiency) is formulated as the following integer non-linear programming (INLP) optimization model:

$$\max \sum_{j=1}^{N} \left[ \rho \sum_{k} q\!\left(R_{ij}^{k}\right) - \beta \sum_{k} \left| q\!\left(R_{ij}^{k+1}\right) - q\!\left(R_{ij}^{k}\right) \right| - \varphi\, F(t) - \theta\, IE(t) \right], \tag{14}$$

subject to the constraints described below. We define four weighting parameters, 0 ≤ ρ, β, φ, θ ≤ 1 (ρ + β + φ + θ = 1), to control the respective weights of the video rates, the video-rate switches, fairness, and bandwidth inefficiency. The decision variable x_ij defines the number of clients streaming the i-th video rate stored on the server. The only decision variables here are the integer variables x_ij and R_{ij}^k. Variable B_k is a dependent variable whose values depend on the values of the decision variables; the values of the remaining variables are known in advance. Objective function (14) aims to jointly optimize the QoE of the j-th client, the fairness, and the bandwidth efficiency, given the throughput trace {T_t, t ∈ [t_1, t_{k+1}]}. Constraint (17) specifies that a specific video rate can be streamed by multiple clients. Constraint (18) ensures that the total bandwidth allocated to the clients by the base station does not exceed the instantaneously available bandwidth at the base station. Constraint (20) guarantees that the clients do not experience any playback interruptions during the whole streaming duration. Constraint (21) ensures that the video rate selected for the j-th client at the MEC does not exceed the video rate, R_c, suggested by the client. Finally, constraint (22) specifies that the discrete video rate downloaded by the client from the server belongs to the set of available video rates. Proposed online algorithm In this section, we present the algorithms for solving the optimization problem described in Section 4. The algorithms are designed for online execution by the edge cloud. The presence of integer decision variables in the optimization problem of Section 4 makes it computationally intractable to solve with exhaustive search: the complexity of exhaustive search grows exponentially with the number of clients, making it impractical for DASH scheduling at scale. Moreover, deploying an offline solution is unfeasible, since information about the clients is unknown in advance. To reduce complexity, we designed a heuristic online algorithm that is executed using the client data available at the MEC.
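The two per-client indices entering the objective can be sketched directly; the normalization of the fairness index by (R_max - R_min) follows the reconstructed expression above and should be treated as our reading of it. Low values are better for both.

```python
def inefficiency(bw, my_rate, other_rates):
    # IE(t): gap between the available bandwidth and the sum of all
    # selected video rates, normalized by the available bandwidth.
    return (bw - my_rate - sum(other_rates)) / bw

def fairness(my_rate, other_rates, r_max, r_min):
    # F(t): distance of this client's rate from the average rate of the
    # other active clients, normalized by the video-rate spread.
    r_avg = sum(other_rates) / len(other_rates)
    return abs(my_rate - r_avg) / (r_max - r_min)
```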
The algorithm selects the i-th video rate from set R for the k-th segment, denoted as R_next. The video rate selected for segment k - 1 is denoted as R_prev. As explained in Section 3.1, the adaptation module at the MEC picks the video quality based on the video rate suggested by the client, R_c. The MEC-assisted algorithm is otherwise unaware of the client's capabilities; therefore, the clients share with the MEC the highest video rate they can play back, based on the observed throughput and buffer occupancy. Pseudo-code for the client-side adaptation algorithm is given in Algorithm 1, which first checks the current buffer occupancy level. If the buffer level is within the danger zone (B_k < B_min), the algorithm selects the video rate cautiously. The buffer threshold B_min is the minimum of the segment duration and a fraction of the buffer size, as follows:

B_min = min(τ, 0.2 × B_max).

As explained in Section 1, video streaming services offer segments of different durations. As the segment duration increases, the risk of buffer underflow in an unstable environment increases as well, as shown in Eq. (4). Therefore, the segment duration should be considered in the selection of B_min. However, with a long segment and a small buffer, it is not feasible to select B_min based only on the segment length. For example, if the segment duration is 10 s and the buffer size is 20 s, setting B_min equal to the segment duration means the client selects the video rate cautiously most of the time. Therefore, the buffer size should also be considered in the selection of B_min. To this end, we set B_min to the minimum of the segment duration and 20% of the buffer size. Once the buffer level rises above B_min, the client aggressively selects R_c while ensuring that the buffer occupancy does not enter the danger zone. Given the available throughput and segment duration, the client selects the highest video rate such that the predicted buffer occupancy upon download of the next segment does not fall below B_min. The client then shares the suggested video rate, R_c, with the MEC adaptation module to jointly adapt the video rates of the competing clients. Pseudo-code for the MEC-assisted algorithm is given in Algorithm 2.

Subroutine 1 Startup Phase

The algorithm enters the startup phase (Subroutine 1) when the playback buffer is empty. At the start of a streaming session, the MEC has no information about the throughput observed by the clients during segment downloads. For the first segment, the algorithm selects the highest available video rate for the first client (line 4). For the rest of the clients, the algorithm selects the highest video rate that is less than the average video rate of the competing clients (line 6), because as the number of streaming clients increases, the throughput available to each client decreases. Once the segment is downloaded, the client calculates the available throughput using Eq. (3). If the client enters the startup phase due to buffer underflow (line 9), the algorithm picks the highest video rate that is less than the available throughput for the next segment.
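The following is a minimal sketch of the client-side suggestion step (Algorithm 1) and the startup choice (Subroutine 1) described above. It assumes a simple buffer model in which downloading a τ-second segment at rate r with throughput T_k takes τ·r/T_k seconds, during which the buffer drains, and adds τ seconds of video on completion; the cautious choice in the danger zone is taken here to be the lowest sustainable rate, which the pseudo-code does not pin down. All names are illustrative.

```python
def suggest_rate(rates, T_k, buffer_k, tau, buffer_size):
    """Client-side suggestion of R_c for the next segment (Algorithm 1 sketch)."""
    B_min = min(tau, 0.2 * buffer_size)   # min of segment duration and 20% of buffer
    if buffer_k < B_min:
        # danger zone: cautiously pick a low rate the throughput can sustain
        feasible = [r for r in sorted(rates) if r < T_k]
        return feasible[0] if feasible else min(rates)
    # otherwise pick the highest rate whose predicted post-download buffer
    # occupancy stays above B_min
    for r in sorted(rates, reverse=True):
        predicted = buffer_k - tau * r / T_k + tau
        if predicted >= B_min:
            return r
    return min(rates)

def startup_rate(rates, competing_rates, throughput=None):
    """Startup phase (Subroutine 1 sketch)."""
    if throughput is not None:
        # re-entry after buffer underflow: highest rate below measured throughput
        return max((r for r in rates if r < throughput), default=min(rates))
    if not competing_rates:               # first client: highest available rate
        return max(rates)
    avg = sum(competing_rates) / len(competing_rates)
    # later clients: highest rate below the average of the competing clients
    return max((r for r in rates if r < avg), default=min(rates))
```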
Subroutine 2 Long Video Adaptation Algorithm

As explained in Section 1, the user expects to watch short videos, such as a movie trailer or sports highlights, at the highest feasible video rate [40]. However, the available throughput fluctuates over time in a cellular environment. While streaming a long video (such as a complete movie or a sports match) in an unstable environment, selecting high-quality video rates throughout the streaming session would increase the risk of buffer underflow. Therefore, separate approaches are required to select the video quality for short and long videos. The next question is how to differentiate between short and long videos in terms of duration; the answer is not available in the literature. Based on the average duration of movie trailers, the duration of most videos on Facebook, and the average duration of English Premier League (EPL) highlights, we set the maximum duration of a short video to 120 s [1,2,17]. If the video is longer than 120 s, the algorithm runs Subroutine 2 (Long Video Adaptation Algorithm); otherwise, it runs Subroutine 3 (Short Video Adaptation Algorithm).

Long video bitrate selection

In this section, we discuss the heuristic adaptation algorithm for long videos. The algorithm's objective is to simultaneously optimize QoE, fairness, and bandwidth utilization. Because throughput can fluctuate in a cellular network for many reasons, selecting the highest feasible video rate could lead to buffer underflow. Therefore, when the buffer level is in the danger zone (B_k < B_min), we ignore the switching, fairness, and bandwidth inefficiency conditions: the algorithm selects the most feasible video rate that is less than R_c with the highest achievable utility objective value as the video rate for the current segment (lines 5-9). When the buffer level rises above B_min, the proposed algorithm considers three known threshold values, δ_S, δ_F, and δ_IE, for the switching level, fairness, and the bandwidth inefficiency index, respectively. The switching threshold δ_S is computed as |max{r ∈ R : r < T_k} - max{r ∈ R : r < T_{k-1}}|, i.e., the difference between the highest video rates in set R below the current and the previous throughput. The switching index associated with a candidate rate r is computed as |r - R_prev|. The bandwidth inefficiency threshold δ_IE is computed as |max{r ∈ R : r < T_k} - T_k|, and the bandwidth inefficiency index is computed as |r - T_k|. The fairness index is computed as 1 - |r - R_avg| / (R_max - R_min), which takes a value between 0 and 1, where R_avg is the average video rate of the streaming clients. The utility objective value (14) is computed for all available video rates less than R_c that satisfy the video rate switching, fairness, and bandwidth inefficiency thresholds (lines 11-13). Among the video rates that satisfy these conditions, the video rate that maximizes the utility objective function is allocated to the client for the next segment (line 14). If no such video rate is available, we compromise on the switching condition: the utility objective function is evaluated for the set of video rates that satisfy the fairness and bandwidth inefficiency thresholds (lines 15-18), and the candidate video rate that maximizes the utility objective function is selected for the next segment (line 19). If no such video rate is available, we compromise on fairness as well, and the objective function is computed for the set of video rates that satisfy only the bandwidth inefficiency condition (lines 20-23). Similarly, the most suitable video rate that maximizes the utility value is streamed for the next segment. If none of the video rates satisfies even the bandwidth inefficiency threshold, the most feasible video rate with the highest achievable objective value is streamed for the next segment (lines 16-19).
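The cascade of progressively relaxed conditions can be sketched as follows. The direction of each test (index not exceeding its threshold, fairness index at least δ_F, whose value of 0.6 comes from Section 6) is an assumption, since the pseudo-code listing itself is not reproduced here; `utility` stands for evaluating objective (14) on a candidate rate.

```python
def select_long_video_rate(rates, R_c, R_prev, R_avg, R_max, R_min,
                           T_k, T_prev, utility, delta_F=0.6):
    best_below = lambda T: max((r for r in rates if r < T), default=min(rates))
    delta_S = abs(best_below(T_k) - best_below(T_prev))   # switching threshold
    delta_IE = abs(best_below(T_k) - T_k)                 # inefficiency threshold
    candidates = [r for r in rates if r < R_c]

    def passes(r, use_switch, use_fair):
        ok = abs(r - T_k) <= delta_IE                     # inefficiency index
        if use_switch:
            ok = ok and abs(r - R_prev) <= delta_S        # switching index
        if use_fair:
            fairness = 1 - abs(r - R_avg) / (R_max - R_min)
            ok = ok and fairness >= delta_F
        return ok

    # all three conditions, then drop switching, then drop fairness as well
    for use_switch, use_fair in [(True, True), (False, True), (False, False)]:
        pool = [r for r in candidates if passes(r, use_switch, use_fair)]
        if pool:
            return max(pool, key=utility)
    # final fallback: best achievable utility among rates below R_c
    return max(candidates, key=utility) if candidates else min(rates)
```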
After the video rate selection, the weighting parameters of the objective function are dynamically recomputed. A simplified flowchart of the dynamic tuning of the weighting parameters is shown in Fig. 3. At the start of the streaming session, the tuning parameters are set to their initial values. Once the streaming session starts, the weighting parameters of the quality factors (average bitrate (ρ), bitrate switching (β), fairness (φ), and bandwidth inefficiency (θ)) are recomputed at the download of each segment. The scheme monitors the effect of each video quality factor (video rate, switching magnitude, fairness, and bandwidth inefficiency) on the user experience of the j-th client, U_j, over the download of the five previous segments, where S′ is the index of the current segment. U_prev represents the value of U_j computed for the previous segment. If U_j ≥ U_prev, the weighting parameters are not changed; if U_j < U_prev, the weighting parameters are adjusted by the algorithm. The weighting parameters (ρ, β, φ, θ) are calculated based on how far the average of the video rates of the last five downloaded segments is from the most feasible bitrates. γ_Q, γ_QS, γ_F, and γ_IE in Eqs. (25a) to (25d) denote the differences between the selected video rates and the most feasible video rates for Q_j, QS_j, F_j, and IE_j, respectively. T̄ denotes the average throughput observed over the download of the last five segments. Next, the weighting parameters are selected according to Eqs. (26a) to (26d). After the parameters are updated, the algorithm returns the client's local utility, computed using (14).

Short video bitrate selection

In this section, we discuss the heuristic adaptation algorithm for short videos. For short videos, the objective is to optimize QoE while efficiently utilizing the available bandwidth. Because the throughput in a cellular environment depends on multiple parameters, including propagation distance, client speed, and interference, clients located in different regions of the cell observe different throughput. Therefore, equitably distributing video rates among competing clients would mean compromising video quality and/or QoE. To this end, fairness is ignored for short videos.
Subroutine 3 Short Video Adaptation Algorithm

The proposed algorithm considers two known threshold values, δ_S and δ_IE, for the switching level and the bandwidth inefficiency index, respectively. In decreasing order of video rates, the utility objective value (14) is computed for all available video rates less than R_c that satisfy both the switching and bandwidth inefficiency thresholds (lines 3-6). The most feasible video rate with the maximum utility value is then selected as the allocated video rate for the current segment of the client (lines 6-9). If no such video rate is available, the utility objective function is evaluated for the set of video rates less than R_c that satisfy only the bandwidth inefficiency threshold; here, we compromise on the switching threshold (lines 10-15). Similarly, the video rate that maximizes the utility objective value (14) is chosen as the video rate for the current segment. If none of the video rates satisfies even the bandwidth inefficiency threshold, the most feasible video rate with the highest achievable objective value is selected as the video rate for the current segment (lines 16-19). After video rate selection, the algorithm updates the weighting parameters as explained in Section 5.1 and returns the client's local utility, computed using (14).

Computational complexity

The computation of the estimated throughput during the download of each segment takes O(τ) time, where τ is the segment duration. Execution of the startup phase has complexity O(τ + |R|). The evaluation of the switching and inefficiency thresholds takes O(|R|) time. Similarly, the execution of the short/long video adaptation algorithms has complexity O(τ + |R|). Putting the above together for N competing clients gives an overall complexity of O(N·(τ + |R|)); hence, the heuristic algorithm runs in polynomial time.

Performance evaluation

In this section, we implement HTTP-based adaptive video streaming in a multi-access edge computing scenario (as shown in Fig. 1) to evaluate the performance of the proposed algorithm. We implemented the experiments using the ns-3 simulation software. The detailed configuration of the underlying LTE cellular network is shown in Table 2. To achieve adaptive streaming, the HTTP server offers the client 12 presentation levels to adapt video rates: 184, 380, 459, 693, 1270, 1545, 2000, 2530, 3750, 5379, 7861, and 11,321 kbps. We assume that all clients can play back the highest available video rate. We set the fairness threshold δ_F to 0.6. We adopted the algorithms proposed in MECA [46], ECAAS [21], AAA [13], DBT [34], DASH-Google [27], and QLSA [5] as benchmarks in order to demonstrate the efficiency of the proposed algorithm. Table 3 presents the properties of the HAS algorithms. The proposed, ECAAS, and MECA algorithms are MEC-assisted DASH systems for mobile video streaming; AAA and DBT are client-side buffer-based algorithms, whereas DASH-Google and QLSA are client-side throughput-based algorithms. In addition to throughput and buffer level, the proposed algorithm also takes the segment duration and the client's buffer size into consideration to select the video rate. The motivation is to maintain high QoE, equitably select video rates for video clients, and efficiently utilize bandwidth under different client and server settings. The remaining algorithms do not consider any client or content characteristics to pick the video quality. Since the client-based algorithms are oblivious to the decisions made by other competing clients, they do not target bandwidth efficiency and fairness. In addition, the proposed greedy heuristic algorithms solve the rate adaptation optimization problem for both short- and long-duration videos. To the best of our knowledge, this is the first work to design algorithms for both short- and long-duration videos.

Long videos

In this section, we evaluate the performance of the proposed long video adaptation algorithm.
In our previous work [36], we showed that existing algorithms struggle to meet conflicting QoE objectives under different client/server settings. The reason is that these algorithms employ fixed control laws, even though meeting different video quality objectives requires different strategies. In this work, we evaluated the algorithms under varying client speeds, network conditions, video durations, buffer sizes, and segment durations. A grid-based road topology is used to simulate mobility, and the clients remain within a single cell throughout the streaming session. We analyzed the algorithms for the settings listed in Table 4. The experiment was repeated 10 times for each setting, and the averages of the results are presented in this section. The average YouTube video in 2018 was 11.7 minutes long [4]; for this section's experiments, a video was therefore streamed for 12 minutes to evaluate the algorithms. As the experiments given in Table 4 were repeated 10 times and a 12-minute video was streamed during each run, the performance of each algorithm was analyzed over 120 minutes. The initial values of the tuning parameters of the objective function (14) were set to ρ = 0.4, β = 0.4, φ = 0.1, and θ = 0.1. In the following results, we use Jain's fairness index [37] to quantify fairness, and the bandwidth inefficiency at time t is calculated as defined in Section 3.

Effect of segment duration

Figure 4 displays the performance of the algorithms when the buffer size was set to 15 s and the segment duration was set to 2 s and 4 s, respectively. During both experiments, the client arrival times were uniformly distributed within the first 30 s of the streaming session. In the first experiment, the segment duration was set to 2 s. Figure 4a shows that the proposed algorithm achieved the highest average video rate among the compared algorithms, followed by ECAAS. Similarly, the proposed algorithm selected video rates fairly while efficiently utilizing the available bandwidth. Figure 5 shows that the proposed algorithm avoided unnecessary playback interruptions. The rebuffering-per-client metric represents the ratio of clients that experienced a playback interruption to the total number of clients, while the average interruptions metric is the number of times a client experienced a playback interruption. Figures 4 and 5 also reveal that the selection of high video rates and the avoidance of playback interruptions led to the highest QoE for the proposed algorithm. The DBT and AAA algorithms experienced fewer video rate switches. The reason is that these algorithms avoid switching video rates unless the buffer level increases above or decreases below predefined thresholds, irrespective of fluctuations in bandwidth. This mitigated unnecessary video rate switches, but also compromised video quality. On the other hand, the ECAAS algorithm downloaded high-quality segments at the expense of a high number of video rate switches.

Next, we increased the segment duration to 4 s.
Figure 4 shows that, as in the previous experiment, the proposed algorithm achieved the highest video rate. However, the proposed algorithm experienced a high number of video rate switches. Because a longer segment takes more time to download, the proposed algorithm reacted aggressively to reduce the risk of playback interruption, which led to a higher frequency of switches. The DBT algorithm achieved an average video rate similar to the proposed algorithm; however, it achieved the highest QoE due to a low frequency of video rate switches and its avoidance of playback buffer underflow. Figure 6 shows that the proposed, DBT, and DASH-Google algorithms were able to avoid playback interruptions. Furthermore, Fig. 4c and d show that the proposed algorithm achieved the highest fairness and the lowest bandwidth inefficiency values, because the proposed algorithm jointly optimizes fairness and ensures that bandwidth is efficiently utilized. The ECAAS and MECA algorithms achieved better fairness and bandwidth inefficiency than the client-based algorithms.

Effect of buffer size

In this section, we describe the effect of varying the buffer size on the performance of the algorithms. Figure 7 displays the performance of the algorithms when the segment was 4 s long and the buffer size was 15 s, 30 s, and 60 s, respectively. Figure 7 shows that the proposed algorithm streamed higher-quality video irrespective of the buffer size. The ECAAS algorithm also downloaded high-quality segments; however, it also experienced the highest number of video rate switches. The proposed algorithm equitably allocated video rates to the clients and efficiently utilized the bandwidth. Figure 7d shows that the proposed algorithm had the highest QoE value when the buffer size was increased to 30 s and 60 s. The MECA algorithm achieved a similar QoE when the buffer size was 30 s, but its QoE degraded when the buffer size was 60 s, because it downloaded low-quality segments. We also observe that the DBT algorithm achieved low QoE and inefficiently utilized the bandwidth as the buffer size increased. The reason is that the algorithm downloads low-quality segments when the buffer size increases; unlike the proposed algorithm, the DBT algorithm does not adapt the playback buffer thresholds as the buffer size changes. The QoE of the throughput-based algorithms fluctuated from one experimental setting to the other. Figures 6, 8, and 9 show that the proposed algorithm avoided playback interruptions in all the experiments. Figures 6, 8, and 9 also show that only the ECAAS, MECA, AAA, and QLSA algorithms experienced playback interruptions, with the ECAAS algorithm experiencing the most. The reason is that the ECAAS algorithm selects high-quality segments at the expense of depleting the playback buffer; in the case of a large drop in throughput in the middle of a segment download, this approach increases the risk of playback interruption. The AAA algorithm also experiences long interruption durations, because it waits for the buffer level to decrease below a predefined threshold before it adapts the video quality. Because the video rate cannot be adapted in the middle of a segment download, the algorithm failed to protect the buffer when there was a sudden drop in throughput.
Effect of client speed

Here, we compared the algorithms for the following scenarios: (1) the clients moved at vehicular speed (75 km/h), and (2) the clients moved at pedestrian speed (3 km/h). Figure 10 shows that when the clients moved at pedestrian speed, the proposed algorithm achieved the highest video rate and fairness values and the lowest inefficiency value. The figure also reveals that the proposed, ECAAS, and MECA algorithms guaranteed high QoE when the clients operated at pedestrian speed. However, the ECAAS algorithm had a lower value when the clients operated at vehicular speed. This result shows that the performance of the ECAAS algorithm degrades in the case of large fluctuations in throughput; under stable network conditions, the ECAAS algorithm performs better. Figure 10 indicates that the algorithms achieved higher QoE and fairness, and utilized the bandwidth more efficiently, at pedestrian speed than at vehicular speed. The figure also reveals that DBT had the best QoE among the compared algorithms when the clients moved at vehicular speed; however, it underutilized the bandwidth when the clients moved at pedestrian speed, whereas the proposed algorithm efficiently utilized the bandwidth and downloaded high-quality segments in both experiments. The proposed algorithm did not experience any rebuffering when the clients moved at pedestrian speed, as shown in Fig. 11. The DBT algorithm also avoided playback interruptions, at the expense of video quality. The MECA, ECAAS, and AAA algorithms experienced buffer underflow while streaming high-quality videos. If a higher weight were given to playback interruptions in Eq. (11), the QoE of the ECAAS, MECA, and AAA algorithms would decrease further. The DASH-Google and QLSA algorithms are throughput-based methods that are unaware of the client buffer levels and the competing clients; therefore, they reacted aggressively to any changes in throughput, resulting in more video rate switches and an unfair selection of video rates.

Effect of client arrival time

In this experiment, we compared the performance of the algorithms for the following scenarios: (1) all clients start streaming simultaneously, and (2) the client arrival times are uniformly distributed within the first 30 s of the streaming session. The results of scenarios (1) and (2) are shown in Fig. 12. As in the previous experiment, Fig. 12 shows that the proposed algorithm achieved the highest video rate and guaranteed the highest QoE. Furthermore, the proposed algorithm equitably selected video rates and efficiently utilized the bandwidth. Figure 12 also reveals that the algorithms achieved slightly higher video rates and fairness when the clients joined the streaming session randomly. The reason is that when all clients start streaming at the same time, there is a tug-of-war between greedy clients over the bandwidth share. The comparison also shows that the proposed algorithm achieved a similar QoE in both experiments, whereas the QoE of the remaining algorithms degraded when the clients started the streaming session simultaneously. Figure 13 shows that the proposed, DBT, and DASH-Google algorithms were able to avoid buffer underflow. The ECAAS and MECA algorithms downloaded higher-quality segments and utilized bandwidth more efficiently than the DBT algorithm, but they achieved low QoE due to a higher number of playback interruptions and a higher frequency of switches.
Short videos

In this section, we compare the performance of the algorithms for short and long video bitrate selection. The initial tuning parameters of objective function (14) were set to ρ = 0.6, β = 0.3, and θ = 0.1. As explained in Section 5.2, fairness was ignored. For short videos, we gave more weight to ρ in Eq. (14), since the objective is to select high quality throughout the streaming session. In a multi-client, bandwidth-constrained environment, a short video adaptation algorithm does not bring any notable advantage over a long video adaptation algorithm while streaming short-duration videos (less than 120 s); due to space limitations, we omit the comparison of the proposed short and long video adaptation algorithms in a bandwidth-constrained environment. Here, we compare the algorithms in a scenario where four clients compete through a bottleneck. The experiment settings used to evaluate the algorithms are given in Table 4. As each experiment was repeated 10 times and a 2-minute video was streamed during each run, the performance of each algorithm was analyzed over 20 minutes. Table 5 shows the initial tuning parameters used to evaluate the algorithms for the experiments given in Table 6. We compare the following three strategies: (1) the short video algorithm, (2) the long video algorithm with the same initial tuning parameters as the short video algorithm (long video (SP)), and (3) the long video algorithm with the initial tuning parameters used in Section 6.1.

Figure 14a displays the average video bitrate and switching ratio experienced by the clients while employing the algorithms and tuning parameters given in Table 5. Figure 14a shows that the short-duration adaptation algorithm outperformed the long video adaptation algorithm in both experiments, irrespective of the tuning parameters. Even when an adaptation algorithm optimizes QoE, it is important to understand the distributions of the underlying parameters given in Eq. (11). The short video algorithm achieved a higher video rate and experienced fewer video rate switches. Figure 14b shows that the short video algorithm achieved higher QoE when the achievable throughput was high, as it downloaded high-quality video segments while mitigating unnecessary video rate switches and playback interruptions. Figure 14b also shows that when the segment duration was set to 4 s, the long video (SP) algorithm performed worse than the long video algorithm despite assigning a higher weight to video quality. Because the playback buffer was only 15 s, the larger segment duration increased the risk of playback interruption in case of a mismatch between the selected video rate and the available bandwidth, which forced the algorithm to select video rates conservatively to avoid playback interruptions. Although the short video adaptation algorithm prioritizes higher video quality at the expense of fairness, Fig. 14c reveals that the short video algorithm achieved fairness and bandwidth inefficiency values similar to those of the long video algorithms in both experiments.
Summary

The results in Section 6.1 reveal that the proposed algorithm guaranteed high QoE irrespective of buffer size, segment duration, client speed, number of clients, and client arrival times. The performance of the other state-of-the-art algorithms, by contrast, varied from one setting to another. The reason is that these algorithms employ fixed control strategies, even though optimizing different QoE objectives and experiment settings requires different adaptive strategies. The ECAAS algorithm achieved high QoE under stable network conditions. The buffer-based algorithms, including DBT and AAA, waited for the buffer occupancy to increase above (or decrease below) predefined thresholds; this minimized the switching ratio, but compromised video quality and led to inefficient utilization of bandwidth. The throughput-based algorithms have no information on the playback buffers and therefore could not afford to react conservatively to changes in bandwidth.

In the following summary (Table 7), consecutive numbers represent the results from ECAAS, MECA, AAA, DBT, QLSA, and DASH-Google, in that order.

Conclusion

In this paper, we presented a context-aware hybrid MEC-assisted quality-adaptation algorithm that exploits video content characteristics, client-side settings, and application-layer information to achieve multiple objectives: 1) jointly optimize the user experience of multiple HAS clients in a cellular environment; 2) guarantee QoE under varying client, server, dataset, and network settings; and 3) simultaneously meet conflicting video-quality objectives to optimize QoE while fairly selecting video rates for competing clients and efficiently utilizing bandwidth. To achieve these objectives, we designed a content-aware MEC-assisted adaptation solution that considers the joint weighted maximization of QoE, bandwidth utilization, and fairness. Simulation results revealed that the proposed MEC-assisted algorithm outperformed state-of-the-art MEC-assisted and purely client-based algorithms. The results demonstrated that the proposed algorithm guaranteed improved user experience irrespective of the client playback buffer size, segment duration, number of competing clients, client movement speed, and client arrival times. The proposed algorithm, on average, improved video quality by over 11%, fairness by over 6%, bandwidth efficiency by over 57%, and QoE by over 22%. Moreover, we presented separate strategies for short- and long-duration video content based on user expectations. The results showed that the proposed short video adaptation strategy achieved higher QoE and utilized bandwidth more efficiently than the long video strategy when the achievable throughput was moderately high.
Fig. 3 Flowchart of the proposed weighting parameter selection scheme
Fig. 4 The effect of segment duration on the performance of the algorithms
Fig. 5 Comparison of (a) rebuffering per client and average number of interruptions, and (b) average buffering time of the algorithms when the buffer size was 15 s and the segment duration was 2 s
Fig. 8 Comparison of (a) rebuffering per client and average number of interruptions, and (b) average buffering time of the algorithms when the buffer size was 30 s and the segment duration was 4 s
Fig. 11 Comparison of (a) rebuffering per client and average number of interruptions, and (b) average buffering time of the algorithms when the clients moved at pedestrian speed
Fig. 12 The effect of client arrival time on the performance of the algorithms

Table 1 Notation
R_c: video rate suggested by the client to the edge cloud
R_max, R_min: maximum and minimum available video rates
B_k, B_max: buffer occupancy level at the download of the k-th segment, and the client's buffer size
(symbols lost in extraction): received power, and the power spectral density of additive white Gaussian noise
BW, W_j: total bandwidth, and the bandwidth allocated to the j-th client
PL: path loss
Q_j: average video bitrate over the segments downloaded by the j-th client
QS_j: average magnitude of the changes in quality from one segment to another
F_j: fairness contribution of allocating a video rate to client j during its streaming session
IE_j: bandwidth inefficiency of allocating video rates to client j during its streaming session
N, S: number of DASH clients, and total number of segments downloaded by the client
R, R_i^k: set of available discrete video rates, and the k-th segment encoded at the i-th video rate
IR: total interruption time
x_ij: decision variable defining the number of clients streaming the i-th video rate stored on the server
t_dur, Δt_k: (descriptions lost in extraction)

Table 2 Cellular network configuration
Table 4 Experiment settings used to evaluate the algorithms
Table 7 Average performance of the adaptive methods over all experiments
A Siamese ResNeXt network for predicting carotid intimal thickness of patients with T2DM from fundus images

Objective To develop and validate an artificial intelligence diagnostic model based on fundus images for predicting Carotid Intima-Media Thickness (CIMT) in individuals with Type 2 Diabetes Mellitus (T2DM).

Methods In total, 1,236 patients with T2DM who had both retinal fundus images and CIMT ultrasound records within a single hospital stay were enrolled. The data were divided into normal and thickened groups and fed to eight deep learning models, whose convolutional neural networks were all based on ResNet or ResNeXt. Their encoder and decoder modes differ, comprising the standard mode, the parallel learning mode, and the Siamese mode. In addition to the six unimodal networks, two multimodal networks based on ResNeXt, under the parallel learning mode or the Siamese mode, embedded age as an additional input. The performance of the eight models was compared via the confusion matrix, precision, recall, specificity, F1 score, and ROC curve, with recall regarded as the main indicator. In addition, Grad-CAM was used to visualize the decisions made by the Siamese ResNeXt network, which performed best.

Results The performance of the various models demonstrated the following points: 1) ResNeXt showed a notable improvement over ResNet; 2) the parallel networks, which extracted features from the two eyes independently, exhibited slight performance enhancements compared to the standard networks, while the Siamese networks resulted in significant improvements; 3) the classification performance declined when the age factor was embedded in the network. Taken together, the Siamese ResNeXt unimodal model performed best in terms of efficacy and robustness. This model achieved a recall rate of 88.0% and an AUC value of 90.88% in the validation subset. Additionally, heatmaps calculated by the Grad-CAM algorithm presented concentrated and orderly mappings around the optic disc and vascular areas in the normal CIMT group and dispersed, irregular patterns in the thickened CIMT group.

Conclusion We provide a Siamese ResNeXt neural network for predicting the carotid intimal thickness of patients with T2DM from fundus images and confirm the correlation between fundus microvascular lesions and CIMT.

Introduction

Over 500 million patients with Type 2 Diabetes Mellitus (T2DM) (1, 2) globally face a high risk of macrovascular complications involving the cardiac, cerebral, and peripheral vessels (3). These complications may significantly increase morbidity and mortality. Carotid Intima-Media Thickness (CIMT) is a pivotal biomarker for assessing macrovascular pathologies (4, 5). In patients with diabetes, a thickened CIMT signals the early onset of atherosclerosis, thereby elevating the risk of cardiovascular incidents, including heart disease and stroke (6). Early detection of CIMT thickening is therefore valuable for T2DM patients. However, conventional examination methods such as CT imaging evaluation and carotid artery ultrasound are expensive and cannot be performed routinely, especially in developing or underdeveloped regions, so many T2DM patients cannot receive early therapeutic intervention (7).
Fundus imaging is universally known as an indispensable routine screening modality for T2DM (8). Biologically, the ophthalmic artery is a branch of the internal carotid artery and the leading vascular supplier of the retina (9); variations in the hemodynamics of the internal carotid artery may therefore result in anomalies of the retinal microvasculature (9). Fundus images can thus serve as an indirect barometer of systemic disease (10-12). More notably, artificial intelligence (AI) prediction methods based on retinal images, which have significant advantages for multifactorial problems with high-dimensional data, have been widely applied in systemic disease diagnostics, such as cardiovascular diseases (13), cerebrovascular accidents (14), chronic renal disorders, Alzheimer's disease (15), and carotid artery stenosis (16).

Based on the fact that changes in the retinal microvasculature can reflect the state of the internal carotid artery (17-19), Junlong Qu (16) proposed a multimodal fusion prediction model based on fundus images and clinical indices, which can detect carotid artery stenosis automatically. Although the model's accuracy (74.82%) is not high, the research confirmed that predicting CIMT in patients with T2DM from fundus images using deep neural networks is a promising approach (16).

For the early detection of CIMT thickening in T2DM, which can benefit patients through the prevention of cardiovascular diseases via early intervention, this paper establishes a dedicated fundus image dataset and proposes a Siamese ResNeXt network for predicting CIMT. The accuracy of the Siamese ResNeXt is 88.0%, which further confirms the correlation between CIMT and retinal abnormalities and provides a valuable tool for the early detection of CIMT thickening in patients with T2DM.

2 Materials and methods

The diagnostic benchmarks for T2DM: patients with diabetes-specific symptoms, such as xerostomia, polydipsia, polyuria, and inexplicable weight reduction, whose random plasma glucose level is equal to or exceeds 11.1 mmol/L; a fasting plasma glucose level (after a fast of at least eight hours) equal to or exceeding 7.0 mmol/L; or a plasma glucose level two hours after a 75 g oral glucose load equal to or exceeding 11.1 mmol/L (20).

The diagnostic benchmarks for CIMT: a clinical threshold for CIMT has not yet been established, due to differences in ethnicity, age, and measuring equipment. Luca Saba and others (21), drawing on 107 global studies of the correlation between CIMT and vascular diseases, reported CIMT thresholds between 0.7 mm and 1.2 mm. In this paper, based on research on the Chinese population (22), patients are classified into the normal group if their CIMT is less than 0.9 mm and into the thickened group if their CIMT is equal to or greater than 0.9 mm.

Inclusion criteria

(1) Individuals must be 18 years or older, with no restrictions based on gender. (2) Participants must meet the diagnostic benchmarks for T2DM as stipulated by established guidelines. (3) Participants must have complete clinical records readily available for research evaluation.

Exclusion criteria

(1) Patients diagnosed with type 1 diabetes mellitus, gestational diabetes, or other specific diabetes variants are precluded from the study. (2) Patients with archived ophthalmic images of suboptimal quality, which precludes the extraction of valuable data for the study, are excluded. (3) Patients whose archived ultrasonographic assessments of the carotid arteries fail to detail the measurements of the CIMT are also excluded from participation.
Data collection

This retrospective case-control investigation systematically assessed a cohort of individuals diagnosed with T2DM who were hospitalized in the Second Affiliated Hospital of Anhui Medical University from January 2021 to November 2023. After excluding subjects with non-qualifying ophthalmic fundus images, the study encompassed a sample of 1,236 patients. The dataset was randomly divided into training, validation, and test groups: the validation group consists of 50 normal patients and 50 patients with thickening, the test group includes 30 normal patients and 30 patients with thickening, and the remaining data were allocated to the training group. The process of dataset collection is illustrated in Figure 1. The patients' clinical parameters (including sex, age, and hospital admission identifiers), high-resolution fundus photographs, and CIMT values determined by ultrasonography were acquired. This research received formal approval from the Ethics Committee of the aforementioned hospital (approval number YX2023-2011(F1)).

Fundus imagery was obtained using the Canon CR-2 PLUS AF non-mydriatic digital fundus photography apparatus, which captures images of the naturally dilated pupil at a 45-degree acquisition angle without the necessity of pharmacologic pupillary dilation, as shown in Figure 2.

The measurement of CIMT was conducted by professional sonographers in the hospital's ultrasound department using a Siemens ACUSON S2000 ultrasound diagnostic instrument, equipped with an L16 transducer with a frequency range of 5-12 MHz, while the patient was at rest with the head turned to the side. The detailed methodology for measuring the CIMT is as follows. Initially, the precise location for the measurement is identified, typically targeting the far wall of the common carotid artery (CCA), about 1-2 cm above the carotid bulb. This area is chosen because of its relatively flat surface and the distinct clarity of the interface between the intima and media layers, which enables the capture of high-quality images. Subsequent steps involve pinpointing the carotid artery and acquiring both transverse and longitudinal sectional images to accurately determine the measurement point. The actual CIMT measurement is conducted on the longitudinal section by calculating the distance between the intima-lumen interface and the intima-media interface, as shown in Figure 3.

Privacy protection

In the initial stages of data collection, rigorous measures were implemented to safeguard patient privacy. All clinical data that could contain identifiable markers were anonymized, securing the confidentiality of personal information. Furthermore, fundus images were cropped to excise any segments potentially comprising individual identification elements. Throughout the entire data collection and processing trajectory, the study conformed to predefined standard operating procedures, guaranteeing data uniformity and comparability and thereby maintaining the scientific rigor and ethical integrity of the research.
Data processing

In pursuit of robust generalization, a refined palette of geometric and photometric augmentation techniques was integrated to expand the dataset. Geometric augmentations consist of a stochastic rotation that applies a random orientation within a controlled angular spectrum of ±5 degrees, retaining the image's central fidelity, and bidirectional flipping, both horizontal and vertical, to enrich the model's interpretative versatility across variously oriented planes. Photometric augmentations, which fine-tune the imagery's luminosity, contrast, saturation, and hue, were executed randomly after the geometric augmentations, presenting diverse visual scenarios. Such a deliberate data augmentation strategy primed the model for consistent and reliable performance under different imaging environments, thereby solidifying its practicality and robustness in real-world clinical applications.

Diagnostic model

2.3.1 Network architecture

Typically, there are two types of AI diagnostic models for systemic disease. Feature-driven analytical models depend on a definite correlation between image characteristics and disease; for example, the VGGNet-16 network for assessing the risk of ischemic stroke (23) is based on the correlation between vascular caliber and cerebrovascular events (24). However, other, excluded features may be ignored. Although the feature-free model is weakly interpretable, it is efficient, especially when the classification characteristics are unclear: even though the details of the retinal vasculature were enhanced, explicit features were not specified in a prediction model of biological age based on the VGG-19 network (25).

In spite of the significant correlation between CIMT and retinal abnormalities (19), the relevant biological characteristics have not been pinpointed. Accordingly, feature-free deep learning algorithms based on two popular deep learning frameworks, ResNet (26) and ResNeXt (27), were used to predict CIMT. The encoder and decoder can affect the classification results. Considering that a group of fundus images comprises images from the left and right eyes, pairs of images should be trained simultaneously. Three different encoder and decoder modes were designed in this paper (Figure 4). In addition, based on the demographic correlation between age and CIMT thickening, multimodal modes embedding age were also trained. Therefore, eight prediction models were trained in the experiments.

Figure 3 Ultrasound images of the cross-sectional and longitudinal sections for measuring the thickness of the CIMT.

The raw images are RGB images with a resolution of 2,415 × 2,387. The deep neural algorithms in each mode are ResNet50 or ResNeXt50.

Mode A: Stitching and resizing images

The two raw images were stitched into one image (4,830 × 2,387 × 3). The stitched image was then resized to 256 × 256 × 3, and a center cropping operation was performed to obtain an image area of 224 × 224 × 3 before learning. An output feature vector of 2,048 × 1 was extracted from the deep neural network and fed into a fully connected layer for classification.
Mode B: Parallel learning

This is a structurally parallel network: the architectures of the two sub-networks are the same, but their parameters may differ. First, the two raw images (2,415 × 2,387 × 3) were independently resized to 256 × 256 × 3 and center cropped to 224 × 224 × 3. The processed images were then fed into the two deep neural networks independently for learning. The networks' two output feature vectors of 2,048 × 1 were concatenated into a single vector of 4,096 × 1, which was fed into a fully connected layer for classification.

Mode C: Siamese learning

This is a Siamese network: the architecture and the parameter values of the two sub-networks are identical. These networks follow the same path as Mode B during forward propagation.

Multimodal mode

The primary frameworks of the multimodal mode are similar to those of the unimodal modes. The significant difference is that a 128-dimensional age vector and the output feature vector of the deep neural networks were concatenated into one vector before being fed into a fully connected layer for classification.

Model optimization and loss function

A gradient-based optimizer, Adam, was adopted in this paper. The Adam optimizer computes adaptive weighted moving averages of both the gradients and their squared values, yielding the updates required for model training and convergence with pronounced efficiency (28). The formula for the Adam optimizer can be found in Equation 1 (28).

In light of the pronounced imbalance in sample sizes among the categories in our dataset, we adopted a weighted cross-entropy loss function for this classification task. This loss function assigns weights according to the number of samples in each class: categories with fewer samples are allocated higher weights. Such a method helps ensure that all categories are classified accurately, particularly the smaller ones. The calculation of these weights is specified in Equation 2. Within the equation, p(x_i) denotes the ground truth associated with the i-th label, while q(x_i) corresponds to the estimated predictive value; N denotes the total number of samples, n1 the number of samples in the normal group, and n2 the number of samples in the thickened group.

Assessment indicators

Common indicators: the confusion matrix, precision, recall, specificity, F1 score, and ROC curve are used to evaluate the performance of the different deep neural networks. The formulas for precision, recall, specificity, and F1 score are given in Equations 3-9, for example:

Recall_Normal = Num_NN / (Num_NN + Num_TN)   (7)
Specificity_Normal = Num_TT / (Num_TT + Num_NT)   (8)
F1 Score = 2 × Precision × Recall / (Precision + Recall)   (9)

Note: Num_TT is the number of thickened-group instances correctly identified as belonging to the thickened group; Num_TN is the number of normal-group instances incorrectly identified as belonging to the thickened group; Num_NT is the number of thickened-group instances incorrectly identified as belonging to the normal group; Num_NN is the number of normal-group instances correctly identified as belonging to the normal group. Precision is the ratio of correctly predicted positive observations to the total predicted positives. Recall is the ratio of correctly predicted positive observations to all observations in the actual class. Specificity measures the ability of the model to correctly identify negatives. The F1 score is the harmonic mean of precision and recall and therefore takes both false positives and false negatives into account.
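A minimal PyTorch sketch of the Siamese mode (Mode C) and the weighted loss follows: one ResNeXt-50 backbone is applied to both eyes with shared weights, and the two 2,048-dimensional features are concatenated and classified. The exact form of Equation 2 is not reproduced above, so the inverse-frequency weight choice below is an assumption that follows the stated principle (smaller classes receive larger weights); the pretrained weights, learning rate, and batch size match the training description.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseResNeXt(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnext50_32x4d(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()           # keep the 2048-d feature vector
        self.backbone = backbone              # shared by both eyes (Siamese)
        self.classifier = nn.Linear(2 * 2048, num_classes)

    def forward(self, left_eye, right_eye):   # each: (B, 3, 224, 224)
        f_left = self.backbone(left_eye)
        f_right = self.backbone(right_eye)    # same weights as for the left eye
        return self.classifier(torch.cat([f_left, f_right], dim=1))

# Inverse-frequency class weights (assumed form of Eq. 2); counts from Table 1
n1, n2 = 387, 849                             # normal / thickened sample counts
N = n1 + n2
class_weights = torch.tensor([N / (2 * n1), N / (2 * n2)])
criterion = nn.CrossEntropyLoss(weight=class_weights)

model = SiameseResNeXt()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # initial LR, batch size 32
```

For the parallel mode (Mode B), the only change would be instantiating two backbones with their own parameters instead of reusing one.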
Clinical indicators: it should be specifically noted that, in clinical application, a misdiagnosis in which a normal patient is classified into the carotid-intima thickened group produces no serious consequences beyond additional examinations. However, a missed diagnosis, in which a patient with thickening of the carotid intima is not flagged by screening, may delay early intervention. Therefore, the recall of the thickened group is as important as the overall accuracy of the classification model, whereas the recall of the normal group is subordinate to that of the thickened group and of the overall model.

Class activation map

Activation Maximization (AM) (29), Deconvolutional Neural Network Visualization (DeconvNet) (30), Class Activation Mapping (31), and other methods are employed to enhance the transparency and interpretability of black-box prediction models. This paper overlays heatmaps calculated with the Grad-CAM technique onto the input image to highlight the areas that contributed most to the network's decision. First, the gradients of the predicted class score with respect to the final convolutional layer's feature maps, obtained when the input image is passed through the classification network, are computed. Then, the average gradient value for each channel is calculated using global average pooling. Finally, the feature maps are linearly weighted by their corresponding gradient values, passed through a ReLU activation, and rescaled to the original input image size (32, 33).

Training

All eight models were trained under the same training strategy, with the entire process spanning 400 epochs divided into two main phases. In the foundational training phase, the models first load parameters pre-trained on ImageNet and are trained using the Adam optimizer. The initial learning rate is set to 0.001, decaying to 10% of its value every 10 epochs. The batch size is set to 32 for all model types, including the parallel, standard, and Siamese models. The best-performing model parameters on the validation set are saved during this phase. In the subsequent 300 epochs of advanced training, the models load the best-performing parameters from the foundational training and adjust the learning rate to either 0.0001 or 0.00005, with all other hyperparameters unchanged, to conduct in-depth advanced training. After completing this series of training, the models' capabilities are comprehensively evaluated on the test set.

In addition, Grad-CAM was used to highlight the relevant regions of an input image after the prediction model was trained. In this paper, the heatmaps were calculated only for the Siamese ResNeXt model, whose performance was the best.

Demographic information

In this retrospective analysis, the dataset encompassed 1,236 subjects, categorized into the CIMT-normal group with 387 individuals (31.31%) and the CIMT-thickened group with 849 individuals (68.69%). Subgroup analysis revealed a mean age of 37.33 ± 9.95 years for the CIMT-normal cohort, whereas the CIMT-thickened group exhibited an elevated mean age of 53.74 ± 9.99 years. Statistical evaluation revealed a statistically significant divergence in age distribution between the CIMT-normal and thickened cohorts (P < 0.001), indicating a pronounced correlation between age and the variation in CIMT measurements, as shown in Table 1.

Performance of prediction models

The names of the various predictive models are presented in Table 2. The performance of these models is illustrated in Table 3 and Figures 5 and 6.
Comparison via common indicators

Figure 5 shows that the Siamese ResNeXt network was the most efficient model, with robust and accurate performance across the four common indicators. The Siamese ResNeXt network exhibited the highest recall rate, reaching 88.0% (Figure 5A and Table 3). Conversely, the ResNet network was the least efficient, with a recall rate of 80.0%. As shown in Figure 5B and Table 3, the ResNet model exhibited a precision of 80.00% in the validation group and 79.97% in the test group, relatively lower than that of the other models. The precision of the parallel ResNeXt and Siamese ResNeXt models reached 88.20% and 88.00%, respectively, the best in the validation group. However, in the test group, the precision of the Siamese ResNeXt model, at 85.04%, was higher than that of the parallel ResNeXt, which reached only 78.36%. Figure 5C shows that the F1 score values of the Siamese ResNeXt model were the highest in both the validation and test groups, achieving 88.0% and 85.0%, respectively, while the standard ResNet model demonstrated the worst performance, with an F1 score of 79.97% in the validation group and 74.94% in the test group.

Comparison via clinical indicators

Overall, the recall rates for the test set were marginally lower than those for the validation set, as shown in Figure 5D. However, despite potential limitations in identifying normal CIMT states, most models exhibited superior performance in detecting thickened conditions. In the ResNet model series, the thickened group exhibited a notable enhancement in predictive recall rates in both the validation and test groups, with increments of 8.0%-18.0% and 10.0%-16.77%, respectively, compared to the normal group. Within the ResNeXt model series, except for the parallel ResNeXt model, where the outcomes were identical in both scenarios within the validation group, the thickened group consistently achieved a recall rate 4.0%-24.0% higher than the normal group across the various cases. In the test groups of the ResNeXt series, the thickened group in the multimodal and standard ResNeXt models demonstrated a recall rate 6.66%-23.0% higher than the normal group, while the parallel ResNeXt and Siamese ResNeXt models showed a recall rate 3.33% lower in the thickened group than in the normal group.

Comparison of different deep neural algorithms

From the perspective of the deep learning algorithm, the ResNeXt algorithm consistently outperformed the ResNet algorithm in Figure 5 and Table 3. Specifically, in the validation set, the ResNeXt algorithm surpassed the ResNet algorithm by 2% in the standard network architecture, 3% in the parallel network configuration, and 3% in the Siamese network setup. On the test set, the ResNeXt algorithm improved over the ResNet algorithm by 5% in the standard configuration; however, in the parallel configuration, the ResNeXt algorithm fell 3.34% below the ResNet algorithm, while in the Siamese configuration it showed a significant lead of 6.67%. Similarly, in both the validation set and the test set, the AUC of most ResNeXt models was higher than that of the ResNet models under the same encoder and decoder (Figures 5E, F).
Comparison of different encoder and decoder models

In both the validation and test groups, the Siamese network configuration demonstrated a consistently superior performance profile, irrespective of whether ResNet or ResNeXt was used for feature extraction (Figure 5D). In the validation group, when the ResNet framework was employed, the standard network architecture yielded the lowest recall rate of 80.0%, whereas the Siamese configuration exhibited the highest recall rate, reaching 85.0%. When the ResNeXt framework was applied, the recall rate of the standard architecture rose marginally to 82.0%, but the Siamese architecture still achieved the highest recall rate, at 88.0%. When the ResNet framework acted as the feature extractor in the test group, the standard architecture had the lowest recall rate, at 75.0%, while the parallel architecture achieved the highest recall rate, at 81.67%. When the ResNeXt framework was used for predicting CIMT, the recall rate of the standard architecture increased slightly to 78.33%, while the Siamese architecture again presented the highest recall rate, at 85.0%.

Confusion matrices and ROC curves of the Siamese ResNeXt

Figure 6A presents the confusion matrices for the Siamese ResNeXt model in both the validation and test groups. In the validation dataset, the model showed impressive accuracy, predicting 'Normal' cases with a true positive rate of 88.0% and achieving the same accuracy for 'Thickened' patients. The false positive and false negative rates were both 12.0%, indicating a balanced occurrence of Type I and Type II errors. In the test dataset, the results were slightly less precise: the model identified 'Normal' cases with 86.67% accuracy and 'Thickened' patients with 83.33% accuracy. There was a slight uptick in misclassification rates, with 'Normal' cases incorrectly labeled as 'Thickened' in 13.33% of instances and 'Thickened' cases erroneously identified as 'Normal' at a rate of 16.67%.

Figures 6B and C show the ROC curves of the Siamese ResNeXt model in the validation and test groups, respectively. In both datasets, the Siamese ResNeXt model recorded the highest AUC values, achieving 90.88% in the validation set and 88.92% in the test set. This performance highlights the model's exceptional robustness and efficacy in CIMT prediction.

Results of class activation map

Figure 7 presents the feature mapping of the Siamese ResNeXt network on retinal images. In Figure 7A, the Grad-CAM mapping displays the feature distribution for the normal CIMT group. In this group, the features concentrate mainly around the optic disc and vascular areas, exhibiting a centralized and regular pattern on the feature map. This centralization suggests that, in normal CIMT cases, the model focuses more on the optic disc and vascular regions, likely indicative of normal CIMT levels. In contrast, Figure 7B shows the Grad-CAM mapping for the thickened CIMT group. The feature map highlights elongated and various point-like circular shapes, with the features more dispersed across the map. This distinct pattern of feature distribution is attributable to the unique presentation of retinal lesions in the pathological CIMT state. The observable differences in retinal mapping between the normal and thickened CIMT groups potentially mirror key distinctions in retinal vascular characteristics associated with normal and pathologically altered CIMT states.
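The Grad-CAM computation described in Section 2 can be sketched as follows: gradients of the predicted class score with respect to the last convolutional feature map, global average pooling of the gradients per channel, a weighted sum over channels, ReLU, and upsampling to the input size. This is a generic sketch, not the authors' exact code; the model is assumed to be the two-input Siamese network above, and a natural choice of target layer would be model.backbone.layer4.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, left_eye, right_eye, class_idx):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    score = model(left_eye, right_eye)[0, class_idx]  # predicted class score
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum + ReLU
    cam = F.interpolate(cam.unsqueeze(1), size=left_eye.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze()  # heatmap aligned to the input image size

# e.g. heatmap = grad_cam(model, model.backbone.layer4, L, R, class_idx=1)
```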
Research contributions

We provided a Siamese ResNeXt neural network for predicting the CIMT of patients with T2DM from fundus images and confirmed the correlation between fundus microvascular lesions and CIMT.

Clinical significance

It is well documented that cardiovascular complications account for the demise of approximately 50% of individuals with T2DM (20), because the continuous chronic hyperglycemic state of patients with diabetes can cause vascular inflammatory responses and endothelial injury (34). CIMT is widely recognized as a precursory biomarker of cardiovascular morbidity, and there is evidence that incipient alterations in CIMT can be reversed or mitigated through precise pharmacological interventions (35). Therefore, the early detection of CIMT thickening is significant for effectively managing T2DM. Although carotid artery ultrasound is the standard method for CIMT examination, it is not a routine screening for T2DM, and many patients therefore miss early CIMT screening. A routine and rapid screening method for T2DM is necessary in the clinic.

Biological basis

The ophthalmic arteries, responsible for delivering critical sustenance to ocular components including the retina and crystalline lens (36), primarily arise from the internal carotid. Carotid stenosis induced by diabetes may increase the risk of thromboembolic phenomena and attenuated blood flow (37), consequently resulting in ischemic ocular diseases such as retinal artery occlusion and ischemic optic neuropathy (38). Analysis of microvascular changes on fundus images therefore provides valuable information on cardiovascular pathologies (39).

Researchers (40-43) have demonstrated a definite correlation between CIMT and retinal pathologies. Wang (40) demonstrated that the degree of retinal arteriolar hardening has a significant positive correlation with the severity of the carotid atherosclerotic burden, which is characterized by intimal thickening and luminal stenosis (40). The finding of Ichinohasama (41) that individuals with T2DM are at high risk of CIMT thickening demonstrated the potential of CIMT as an incipient marker for diabetic ocular alterations. Subsequent research elucidated a correlation between increasing CIMT and the progression of retinopathy severity among T2DM patients (42, 43). The inverse correlation between CIMT and retinal vascular blood flow and density (44, 45) was further confirmed by Lilla István and Lahme, utilizing sophisticated Optical Coherence Tomography (OCT). Pathophysiologically and physiologically, predicting the CIMT of patients with T2DM from fundus images is thus underpinned by a robust rationale and sufficient evidence.

Intelligent diagnosis technologies

Retinal images are complex, high-dimensional data. Without sufficient clinical experience, clinicians cannot make precise diagnoses from them. Moreover, conventional manual diagnostic methods cannot accurately capture the complex relationships between such multidimensional data and disease, which may restrict how widely these methods can be applied.
It is acknowledged that Artificial Intelligence (AI) techniques have been pivotal in advancing diagnostic acuity for various pathologies using retinography, especially in the diagnosis of ocular pathologies and prognostication of holistic health status (12, 13). The study by Wong (46) illuminates the potential of AI-based application of deep learning techniques in oculomics derived from retinal images for evaluating systemic health, especially predicting conditions such as sarcopenia. These investigations not only mark significant advancements in retinal image analysis for predictive, preventive, and personalized medicine but also open new avenues for future research and clinical practice (48). They showcase the substantial potential of AI and retinal imaging technologies in refining the accuracy of diagnosing ophthalmological pathologies and in the comprehensive assessment of patients' overall health conditions.

FIGURE 7 The characteristic heatmap of the Siamese ResNeXt using the Grad-CAM algorithm. (A, B) depict the raw images and the heatmaps for the normal and thickened groups, respectively. The heatmaps are the Grad-CAM projections overlaid on these fundus images.

The investigative collective at West China Hospital, under the aegis of Kang Zhang, has adeptly applied the AneNet architecture for the screening of anemia via retinal vessel Optical Coherence Tomography (OCT) imaging, culminating in an accuracy apex of 98.65% and an exemplary Area Under the Receiver Operating Characteristic Curve (AUC) of 99.83% (49). Meanwhile, this team has pioneered the prognostication of chronic kidney disease through fundoscopic examinations, yielding an AUC span of 0.87 to 0.92 (50), signifying a robust predictive capability.

In this paper, we provided eight models for predicting CIMT, based on ResNet and ResNeXt, using three encoders and decoders, under different data modalities (Table 2). The performance of these models was then compared. According to the results in Section 3.2.2, the Siamese ResNeXt showed the best overall performance, achieving the highest accuracy, up to 88.0%. The recall of the normal and thickened groups of the Siamese ResNeXt is not the highest but can satisfy application requirements, and its robustness is the best.

Although the performance reported here is not as high as that of diabetic retinopathy detection (an AUC of 99%), which can be attributed to various limiting factors, including the finite dataset size, single-center study design, and the unbalanced distribution of the sample, our accuracy advanced the accuracy previously reported by the consortium at Shenzhen Eye Hospital by an appreciable 14 percentage points (16).

Analysis of different models

4.2.1 Analysis of different network structures

ResNeXt represents an improvement over ResNet, aiming to enhance network representational capacity, computational efficiency, and parameter utilization. Both are classic residual neural networks that perform well in image classification tasks. In this paper, ResNet50 and ResNeXt50 were applied to the CIMT prediction task. Overall, ResNeXt50 performed better than ResNet50: its accuracy is about 2-3% higher in both the validation and test groups because of 'cardinality' (51), which breaks the width down into multiple dimensions. Through group convolution, the network can learn different features more richly.
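As a concrete illustration of the cardinality idea, the sketch below shows a ResNeXt-style bottleneck block in PyTorch, where the 3x3 convolution is split into parallel groups. The channel sizes and the cardinality of 32 follow the common ResNeXt50 (32x4d) layout and are assumptions for illustration; the paper's exact implementation details are not reproduced here.

```python
import torch
import torch.nn as nn

# Sketch of a ResNeXt-style bottleneck: 'cardinality' splits the 3x3
# convolution into 32 parallel groups via groups=32. Channel sizes follow
# the common ResNeXt50 (32x4d) layout (illustrative assumption).
class ResNeXtBottleneck(nn.Module):
    def __init__(self, in_ch=256, width=128, out_ch=256, cardinality=32):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size=1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            # Grouped convolution: 'cardinality' independent paths.
            nn.Conv2d(width, width, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))  # residual connection

x = torch.randn(1, 256, 56, 56)
print(ResNeXtBottleneck()(x).shape)  # torch.Size([1, 256, 56, 56])
```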
Analysis of encoding and decoding mode

The input data comprise two images, unlike in a typical object recognition task, whereas the input of a standard ResNet or ResNeXt has dimension 224×224×3. An appropriate encoder should therefore be designed for this specific task.

Regardless of the deep neural network, the standard encoder performed worst, probably because of the deformation caused by the 'resize' operation: some original features may be stretched, compressed, or distorted when the images are deformed.

The overall performance of the Siamese mode is best. The features of a pair of images can be learned simultaneously without deformation. In some tasks, redundant information in the high-dimensional feature space may not significantly contribute to the classification task; by reducing the dimensions, the model can focus more on crucial features despite losing some information.

The parallel mode performed best in the recall of the thickened group. Data imbalance between the normal and thickened groups may account for this performance. Meanwhile, overfitting to a specific category is common in Siamese network structures; because of overfitting, the recall of the thickened group of the Parallel ResNeXt is significantly lower than expected.

This study reveals that the Siamese ResNeXt network exhibits superior robustness in terms of both predictive accuracy and model performance. A universal and robust feature was extracted from all samples through the sharing of weight parameters (52), which is of great significance for predicting the thickness of the CIMT. Moreover, the attribute of shared weights enables the network to be effectively trained on smaller datasets by reducing the number of parameters that need to be learned, which in turn minimizes the risk of overfitting (53). This attribute may contribute to the Siamese ResNeXt network's heightened accuracy and robustness in predicting CIMT.

Analysis of data modality

Despite a statistically significant divergence in age distribution between the CIMT-normal and thickened cohorts in the retrospective analysis, the performance of the classification models is reduced when the age factor is embedded in the network, perhaps because the relationship between age and CIMT is not linear. Age should therefore be encoded more appropriately.

Analysis of class activation map

In this investigation, the strategic implementation of Grad-CAM technology on the Siamese ResNeXt network has yielded pivotal insights into the differential feature presentations within fundus imagery under the normal and thickened CIMT states. In instances of normal CIMT, the feature mappings prominently coalesce around the vascular environs of the optic disc, suggesting a heightened degree of focalization and structural order. This phenomenon ostensibly mirrors the inherent stability and uniformity of retinal vascular configurations in a salubrious state, implying a preservation of physiological integrity within these vascular zones. Consequently, these regions within the fundus imagery are algorithmically recognized as denoting a normative vascular state devoid of significant carotid arterial thickening.
Conversely, the feature mappings associated with thickened CIMT conditions are markedly disparate, characterized by dispersion and an absence of regular patterning, potentially signaling underlying pathological shifts. This dispersed mapping paradigm may directly correlate with pathological processes intrinsic to increased CIMT, culminating in the manifestation of irregular and heterogeneous vascular attributes within the fundus images. Such findings indicate potential alterations in the retinal vascular architecture consequent to CIMT augmentation, engendering a diverse array of morphological and structural retinal modifications. These revelations augment our comprehension of the intricate nexus between cardiovascular health and retinal vascular characteristics and significantly enhance the potential utility of fundus images as a sophisticated, non-invasive modality for cardiovascular risk assessment. This advancement holds substantial promise for enriching the armamentarium of clinical diagnostics and refining preventative strategies in cardiovascular medicine.

Conclusions

The predictive analysis of CIMT through fundoscopic imaging bears critical implications for the preemptive risk stratification of macrovascular complications among patients diagnosed with T2DM. In this research, a range of deep neural network structures were applied to forecast the thickening of CIMT in T2DM patients. The architectures included conventional neural networks, neural networks with parallel structures, Siamese neural networks, and multimodal neural networks integrating age factors. The Siamese ResNeXt model, in particular, showed exceptional efficacy in predicting CIMT thickening, recording a recall rate of 88.0% and an AUC of 90.88% on the validation set, and exhibited notable robustness in the testing phase. With future research focusing on enhancing interpretable machine learning features, alongside the enlargement of sample cohorts and the inclusion of multi-center studies, significant advancements in the precision of CIMT predictive models based on fundoscopic imaging are expected. This research delineates a foundational framework for the integration of ocular fundoscopic assessments into cardiovascular diagnostics and suggests expansive prospects for its application in clinical settings, enriching early cardiovascular disease intervention methodologies.

FIGURE 1 Flow chart of data collection.

Figures 5A-C clearly illustrate that, in the validation group, when the factor of age was embedded into the last fully connected layer, the performance of the model called the Parallel ResNeXt & Age or the Siamese ResNeXt & Age decreased by 3% or 5%, compared with the Parallel ResNeXt or the Siamese ResNeXt. However, the results of the test group differed from those of the validation group: the recall of the Parallel ResNeXt & Age model increased by 3.34% over the Parallel ResNeXt model, yet the Siamese ResNeXt & Age model decreased by 8.5% compared with the Siamese ResNeXt model.

FIGURE 5 Performance of Different Models for Predicting CIMT Thickness. (A-C) illustrate the comparative performance metrics of recall rate, precision, and F1 score for various CIMT prediction models. (D) displays the performance of models in terms of recall rate across average, thickened, and aggregate effects for predicting CIMT. (E, F) show the ROC curves and AUC of various deep learning models in the test and validation groups for CIMT prediction.
FIGURE 6 Performance Evaluation of the Siamese ResNeXt Model Using Fundus Images for the Prediction of Carotid Artery Thickness. (A) displays the confusion matrices for the Siamese ResNeXt model's prediction of CIMT in both validation and test groups. (B, C) show the ROC curves and AUC performance of the Siamese ResNeXt model in the validation and test groups.

TABLE 1 Demographic Characteristics by CIMT Category.

TABLE 2 Names of different predicting models. The prediction model that stitches and resizes the raw images is the standard ResNet/ResNeXt classification model.

TABLE 3 Performance of different predicting models. The validation set includes 50 normal patients and 50 patients with thickening, while the test set comprises 30 normal patients and 30 patients with thickening.
8,121.4
2024-03-14T00:00:00.000
[ "Medicine", "Computer Science" ]
Noise models for low counting rate coherent diffraction imaging

Abstract: Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high-quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often assume a noise model implicitly. For low counting rates, each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

Introduction

Coherent diffraction imaging (CDI) is a class of microscopy methods that circumvents the need for high-quality optics. It is based on the calculation of a numerical lens to retrieve the quantitative sample image from coherently diffracted intensity measurements. The information obtained contains both the amplitude and phase distributions of the exit-wave field. This quantity can be related to various structural parameters such as absorption, dispersion, magnetization state, crystalline structure, etc. [1]. Among the proposed CDI approaches, ptychography is particularly attractive since it allows the reconstruction of non-isolated objects, without a priori restrictions on the field of view [2] and without requiring any specific sample preparation. The ptychographic approach consists in scanning a sample across a finite-support beam and recording a diffraction intensity pattern for each probe position; assuming that the scan step is small enough, each point of the sample is encoded several times and in a different way. This redundancy ensures that the phase retrieval of the complex-valued diffracted field can be achieved. It is usually performed by iterative algorithms that combine the intensity patterns.

Successful results have been obtained with visible light [3,4], with soft [5] and hard [6-9] x-rays, and in electron microscopy [10-12]. Major advantages of the ptychography approach are linked to the absence of serious physical aberrations: the method is lens-less, does not require any reference beam or sample [13,14], and is robust to inaccurately known parameters that can be retrieved simultaneously with the object image. Examples of this last issue include the illumination function [7,15,16], the probe positions [16,17] and intensity fluctuations in the incoming beam [18].
However, as the approach is based on an iterative algorithm, it can face problems with convergence, uniqueness of the solution, etc. The successive iterations lead to a solution which is reached when the constraints resulting from the overlapping condition and the intensity measurements are satisfied simultaneously. In the presence of shot noise, such a solution does not exist, as the different intensity patterns are no longer mutually consistent. Low counting statistics are of key importance, for instance, in the study of radiation-sensitive objects (especially biological structures), when the object scatters weakly, or when one attempts to obtain very high-resolution images although only few photons are scattered at the needed high angles.

In this work, we address precisely the question of the degradation of the solution that is obtained in a phase retrieval approach in the presence of photon noise. While we study specifically the ptychographic scheme as an example, our methods and conclusions can be extended to other lens-less imaging microscopies based on iterative algorithms. We believe that the interested reader will find herein the material needed to adapt our approach to the case of support-based phase retrieval algorithms.

We begin by defining some common noise models in order to derive a fitting function by means of the maximum likelihood principle [19,20]. A noise-model-dependent reconstruction is thereby obtained by the minimization of the corresponding fitting functions. For this purpose, two different optimization strategies are examined, namely the ordered subset (OS) and the scaled gradient (SG). The former strategy is equivalent to the well-known ptychographical iterative engine (PIE) when the additional assumption of a Gaussian noise model is considered. It has the advantage of fast convergence in the early iterations, but its final convergence is precluded by the inconsistencies in the different diffraction patterns. In contrast, the latter is slower in the early iterations, but its asymptotic convergence remains in the presence of noise. For the different inversion schemes, a Monte-Carlo analysis is conducted for different noise levels, allowing a direct comparison of the solutions. The quantitative evaluation of each pair "noise model/optimization strategy" is done through quality indicators like the bias and standard deviation. Our results demonstrate the large variety of trade-offs resulting directly from the use of inversion schemes and from the implicit physical models. These are discussed in detail. The conclusions we reach have important implications for experimental applications of diffractive imaging.

The next section of this article presents the noise models that are considered for a CDI experiment. Section 3 gives the fitting functions that are derived from the maximum likelihood principle; then, two iterative strategies that can be used for retrieving the object from the chosen fitting function are described. Section 4 presents the main results of this study: first, the definition of some performance indicators together with the description of the numerical sample are given; second, the convergence properties of the iterative strategies are briefly discussed; finally, Monte-Carlo analyses of the reconstruction algorithms are considered with regard to the selected noise models.
Noise models for ptychographic data sets

The ptychography approach requires the description of the exit field as a function of the probe p(r) and the sample scattering function ρ(r), named the object in the following. In the multiplicative approximation, the exit field for the j-th illumination reads ψ_j(r) = ρ(r) p_j(r), where ρ is unknown and p_j(r) := p(r − r_j) is the probe function shifted to r_j. From a practical viewpoint, the reconstruction from ptychographic data requires the object and the probe to be discretised. In what follows, we denote by ρ := {ρ_n}_{n=1}^N the object to be retrieved; N is thus the number of pixels in the object plane. This object is illuminated by a support-limited probe p := {p_m}_{m=1}^M, where M is the number of pixels in the camera. This vector is converted into an M × N matrix P_j so that the exit field is expressed in vector form by ψ_j := P_j ρ, where the index j refers to the position of the probe. The corresponding far field Ψ_j := {Ψ_{m,j}}_{m=1}^M is computed from the exit field by Ψ_j = W ψ_j, where W is the discrete Fourier transform operator. Provided that the size of the camera pixel or detector is small enough, the expected number of photons in the m-th detector reads h_{m,j} = A |Ψ_{m,j}|² + b_{m,j}, where b_{m,j} is the expected number of background events and A is the area of the detector. Since A can be incorporated into the probe, one can set A = 1 without loss of generality, so that h_{m,j} = |Ψ_{m,j}|² + b_{m,j} is the expected number of events for the m-th measurement in the j-th illumination.

The above relations give a deterministic relationship between the object ρ and the expected (noise-free) data set {h_{m,j}} that is at the basis of any numerical reconstruction scheme. However, when realistic data are considered, the presence of photon noise results in a substantial degradation of the measured data set y := {y_{m,j}} relative to {h_{m,j}}. In order to take the noise issue in a ptychographic experiment into account, three distinct noise models are introduced in the following sections. Each of them leads to a specific criterion that links the unknown object to the measured data. We will show that these criteria are fitting functions that provide an estimate of the object, further obtained via a minimization algorithm.

Noise Model P: The standard photon counting model

The far-field intensity is a quantity with nonnegative real values; however, a detector collects a finite number of photons: this number takes integer values that can be considered as a random variable. The standard probability distribution function (PDF) considered for particle counting is the Poisson probability law. Assuming independent measurements y_{m,j}, the probability that the entire data set y is collected reads

f_P(y) = ∏_{m,j} exp(−h_{m,j}) h_{m,j}^{y_{m,j}} / y_{m,j}!   (2)

For experiments performed with a single-photon counting detector, like a cooled charge-coupled device camera [21] or a pixel camera (e.g. the Maxipix [22] or the Pilatus [23]), the main noise encountered during the measurement is indeed the Poisson shot noise.

The PDF given in Eq. (2) is standard in many applications dealing with low counting rates: for instance in transmission or emission tomography [24,25] or in astronomy [26]. Although Poisson shot noise is sometimes used in the CDI community for testing algorithms [15,27,28], the noise model given in Eq. (2) has only recently been introduced in a phase retrieval algorithm [18,29].
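As a quick aid to the reader, the following NumPy sketch implements the forward model h_{m,j} = |W P_j ρ|² + b and draws Poisson counts from it, as in noise model P. The object, probe shape and flux level are illustrative assumptions, not the test object used later in the paper.

```python
import numpy as np

# Sketch of the ptychographic forward model and noise model P: for probe
# position j, the exit field is the elementwise product of the shifted
# probe with the object patch, the far field is its 2D DFT, and the
# expected counts are h = |Psi|^2 + b. Sizes/probe are illustrative.
rng = np.random.default_rng(0)
N = 64                                            # object is N x N pixels
rho = np.exp(1j * rng.uniform(0, 1.7, (N, N)))    # unit-modulus phase object

yy, xx = np.mgrid[:32, :32]
probe = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 6.0 ** 2))

def expected_counts(obj_patch, probe, flux=1e3, background=0.0):
    """Expected counts h = |W (p * rho_j)|^2 + b, scaled to a total flux."""
    far_field = np.fft.fft2(probe * obj_patch)
    intensity = np.abs(far_field) ** 2
    return intensity * (flux / intensity.sum()) + background

h = expected_counts(rho[:32, :32], probe, flux=1e3)
y = rng.poisson(h)          # noise model P: Poisson-distributed counts
print(h.sum(), y.sum())
```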
Noise Model G: stabilizing the variance of the counting process

Even if one deals with counting statistics, it is often convenient to consider that the data are corrupted by an additive Gaussian (thermal) noise. Such a (standard) noise model is built with the following observation equation

√y_{m,j} = √h_{m,j} + ε_{m,j}   (3)

with ε_{m,j} an independent centered fluctuation drawn from a Gaussian random vector with constant variance σ², ∀(m,j). Under these hypotheses, it is deduced that the PDF of the transformed data set √y_{m,j} is also Gaussian and reads

f_G(y) ∝ ∏_{m,j} exp(−(√y_{m,j} − √h_{m,j})² / (2σ²))   (4)

With this model, the transformed measurement y_{m,j}^{1/2} has a standard deviation σ independent of its expected value h_{m,j}^{1/2}, while these two quantities should be linked for a photon counting process [30]. Therefore, it is clear that a model mismatch exists in the noise model f_G. In practice, however, this Gaussian approximation works well. The proof is given by the presence of several ptychographic reconstruction algorithms in the literature [16,31] which are related to this simple noise model, as shown in section 3.2. This results from the fact that the square-root transformation applied to the photon noise is known as a "variance stabilization" transform that allows, in a first-order approximation, the variance and the expected value of the transformed data to be independent parameters [32]. A proof of the variance stabilization of the photon noise by the square-root transform is provided in appendix A.

Noise Model Q: An approximation of the counting model

Finally, the following observation equation is considered

y_{m,j} = h_{m,j} + ε_{m,j}   (5)

where ε_{m,j} is an independent centred fluctuation drawn from a Gaussian random vector with variance σ²_{m,j}. As we are considering photon counting, the fluctuation variance σ²_{m,j} should be set to the unknown expected value h_{m,j}. This leads to the following PDF for the data set y

f_Q(y) ∝ ∏_{m,j} h_{m,j}^{−1/2} exp(−(y_{m,j} − h_{m,j})² / (2h_{m,j}))   (6)

Provided that the number of expected counts {h_{m,j}} is "large enough", the central limit theorem (Ref. [20], Sec. 8.47) ensures that the Gaussian PDF in Eq. (6) is a good approximation of its Poissonian counterpart given in Eq. (2). Hence, from the ptychographic image reconstruction viewpoint, Eq. (6) is a fair noise model that can be used for the design of a reconstruction algorithm. Note that data with no detected photon have been suppressed to avoid division by zero. Since the standard deviation then depends on the data, this last noise model is no longer Gaussian. It is used for imaging reconstruction issues with photon noise in e.g., Refs. [33-35].
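Before turning to the estimation problem, the fitting functions that follow from these PDFs can be written compactly, as in the sketch below (up to additive constants). The forms of L_P and L_G follow directly from Eqs. (2) and (4); the data-weighted form used for L_Q (variance set to the measured counts, empty pixels discarded) follows the discussion above, while the 1/(y+1)-type weight used for the modified variant that keeps the empty pixels (introduced as L_R in the next section) is an assumption made here for illustration only.

```python
import numpy as np

def L_P(y, h):
    """Poisson neg-log-likelihood, up to constants; assumes h > 0."""
    return np.sum(h - y * np.log(h))

def L_G(y, h):
    """Variance-stabilized Gaussian model (square-root transform)."""
    return np.sum((np.sqrt(y) - np.sqrt(h)) ** 2)

def L_Q(y, h):
    """Data-weighted Gaussian model; pixels with y = 0 are discarded."""
    m = y > 0
    return 0.5 * np.sum((y[m] - h[m]) ** 2 / y[m])

def L_R(y, h):
    """Modified variant keeping the empty pixels; the 1/(y+1) weight is
    an assumption for illustration, not necessarily the paper's exact form."""
    return 0.5 * np.sum((y - h) ** 2 / (y + 1.0))
```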
Ptychographic image reconstruction by the maximum likelihood principle

The estimation of the unknown object ρ from a noisy data set is now introduced. Following the standard statistical inference literature, the so-called maximum likelihood (ML) principle can be used to estimate the object. It derives directly from the noise model. In the case of the ptychographical reconstruction problem, the ML estimator for ρ is the quantity that maximizes (with respect to ρ) the PDF of the chosen noise model. In more formal terms, this ML estimate reads

ρ_• = arg min_ρ L_•(ρ)   (8)

where "•" stands for P, G or Q (i.e., the noise model under consideration), and with L_•(ρ) := −log f_•(y) the neg-log-likelihood [36], which is a fitting function adapted to the noise model f_P, f_G or f_Q; for more details concerning the ML principle the reader is referred to e.g., Ref. [20] (Chap. 18). For the noise models considered in this article, these fitting functions split as a sum over all the probe positions:

L_•(ρ) = Σ_j L_{•;j}(ρ)   (10a)

where L_{•;j} is given by (up to irrelevant constant terms)

L_{P;j}(ρ) = Σ_m [h_{m,j}(ρ) − y_{m,j} log h_{m,j}(ρ)]   (10b)
L_{G;j}(ρ) = Σ_m (√y_{m,j} − √h_{m,j}(ρ))²   (10c)
L_{Q;j}(ρ) = Σ_{m: y_{m,j}>0} (y_{m,j} − h_{m,j}(ρ))² / (2y_{m,j})   (10d)

where the dependencies with respect to (w.r.t.) the unknown object ρ are made explicit. From these expressions, one notes that the value y_{m,j} = 0 leads to a contribution h_{m,j}(ρ) in the summands of both Eq. (10b) and Eq. (10c). As a result, the fitting functions L_P and L_G are equivalent w.r.t. the camera pixels that do not detect any photon. On the contrary, zero-intensity camera pixels are discarded from L_Q (Eq. (10d)), which is consequently expected to lead to very noisy solutions, since these pixels are legitimate constraints for the final solution (see Sec. 4.5 for an example). This problem is clearly circumvented if Eq. (10d) is modified so that the empty pixels are accounted for; this modification defines a fourth fitting function, denoted L_R in the following. The accuracy of this approximation [37] w.r.t. the Poissonian fitting function L_P is studied in [33]. When the counting process is Poissonian, L_P is expected to be the "best" fitting function since it is perfectly adapted to the data fluctuations. With photon noise, the ML estimator drawn from L_P is attractive because it benefits from good asymptotic properties: for high counting rates, the ML estimator is free of systematic errors and presents the best estimation variance (Ref. [20], p. 56). For limited counting rates, however, the situation can be different and another fitting function may be more appropriate. We also stress that (by definition) the ML does not account for any additional a priori constraints concerning the electronic density to be retrieved (e.g., support constraint, positivity). If the oversampling is too low and/or the number of diffraction patterns is limited (possibly equal to one), the ML may perform poorly and such additional constraints may be desirable (or even mandatory). This situation appears in support-based phase retrieval problems. However, since the present study aims at evaluating noise models for diffraction-pattern information only, the addition of object constraints has to be avoided because it would most probably blur the analysis. Hence, the ML is the appropriate tool to be considered. For the sake of completeness, we also note that one can resort to the maximum a posteriori principle [38, p. 183] to introduce additional constraints within a statistical framework.

Finally, we note that the assumption h_{m,j} > 0, ∀(m,j), is mandatory in order to ensure that L_P given in Eq. (10b) is always defined. Indeed, the same condition ensures the existence of the L_P and L_G gradients, allowing the iterative algorithms introduced in the next section to be defined. For the sake of simplicity, we assume in the following that the assumption h_{m,j} > 0 holds [39].

Computing the ML estimate

From a practical viewpoint, computing a solution defined by Eq. (8) requires an iterative algorithm in order to minimize one fitting function among the ones given by Eq. (10). This computation reduces to an unconstrained optimization problem, the aim being to find a solution that makes the gradient of L_• vanish. As a result, gradient-based algorithms are natural candidates for the optimization of the chosen likelihood. The gradient of the likelihoods given in Eq. (10) splits accordingly as

∂_•(ρ) = Σ_j ∂_{•;j}(ρ)   (11a)

where, up to a multiplicative constant, the gradient for the j-th probe position is

∂_{•;j}(ρ) = P_j† W† (Ψ_j − Ψ_{•;j})   (11b)

with "†" the conjugate-transpose operator, and where Ψ_{•;j} := {Ψ_{•;m,j}}_{m=1}^M is a corrected far field that depends on the chosen fitting function; in particular, one obtains from Eqs. (10b) and (10c)

Ψ_{P;m,j} = (y_{m,j}/h_{m,j}) Ψ_{m,j},  Ψ_{G;m,j} = (y_{m,j}/h_{m,j})^{1/2} Ψ_{m,j}.   (12)
The functions given by Eq. (10) being not strictly convex, local minima may exist and can trap gradient algorithms. Moreover, it is well known that ambiguous solutions exist, so that a unique ML cannot be defined for the ptychographical problem [16,40]. From a computational viewpoint, the gradients given in Eq. (11) are the basic ingredients in the design of iterative reconstruction algorithms dedicated to the noise models. Two different classes of iterative algorithms are considered in the next subsections. In section 4.5 we also consider a hybrid algorithm that uses the best properties of both strategies.

Ordered-subset optimization strategies

Within ptychographic experiments, the successive acquisition of intensity patterns for different but overlapping illumination positions on the sample naturally defines a partitioning of the data set. Ordered-subset (OS) algorithms [41,44,45] rest upon such a partitioning in order to update the object in a process with two nested loops. Whereas the inner loop runs over the probe positions j = 1…J, updating consecutively the illuminated portion of the object, one full iteration k → k+1 occurs once the J probes are considered. Thus, for a given initial guess ρ^(0), the algorithm is defined for k = 0, 1, … by updates of the form

ρ^(k;j+1) = ρ^(k;j) − β D_j ∂_{•;j}(ρ^(k;j)),  j = 1…J   (13)

with D_j a diagonal scaling matrix and where β > 0 is the step length. One may note that the classical ptychographical iterative engine (PIE) is a special case of this generic OS strategy: a specific choice of D_j built from the identity matrix I is precisely [46] the version of the PIE introduced in Ref. [15] for the object update, stressing that the PIE is a reconstruction algorithm relying (implicitly) on the Gaussian noise model given in Sec. 2.2. In practice, OS strategies (like the PIE) have appealing properties for several reasons. Firstly, image reconstructions from large data sets are made computationally more compact because each object update (in the nested loop) uses a subset of the data set; this means also that the field of view can be changed dynamically in the case of a real-time reconstruction. Secondly, the 3D propagation and evolution of the probe as it passes through a thick object can be modelled and inverted easily at each position [47]. Finally, these algorithms usually benefit from fast convergence in the early iterations, hence providing an efficient means for the object estimate to "get into the right ball park" (see for instance Ref. [45]). However, some convergence issues exist for these algorithms. For instance, following [45], Godard et al. [28] show that the iteration (13) is not convergent to a local minimum of the fitting function L_• (Eq. (10)) if the scaling matrix D_j depends on the probe position. Moreover, even when D_j is constant over the probe positions (i.e., if D_j ≡ D, ∀j), the authors find that convergence toward a minimum of L_• occurs only with a noise-free data set. For noisy data, OS algorithms quickly find a relatively correct solution, but start to loop around after some iterations because the set of diffraction patterns is inconsistent as a consequence of the presence of noise (see Sec. 4.4). Hence, at a given probe position, the algorithm "undoes" what it just did at the preceding probe position, reintroducing fully the noise within the associated diffraction pattern. In the next subsection, an iterative strategy that solves this convergence problem is considered.
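As an illustration of Eq. (13), the sketch below performs one OS sweep for the Gaussian fitting function L_G, for which the corrected far field reduces (with b = 0) to the measured-modulus projection √y · Ψ/|Ψ|. The constant diagonal scaling used for D_j is an illustrative assumption, not the paper's exact PIE weighting, and the FFT normalization constant is absorbed into the step length.

```python
import numpy as np

def os_sweep(rho, probe, positions, data, beta=1.0, eps=1e-12):
    """One full OS iteration (cf. Eq. (13)): sequential updates, one probe
    position at a time. Positions are assumed to be the top-left corners of
    the illuminated k x k patches; 'data' holds the intensity patterns y_j."""
    k = probe.shape[0]
    for (r, c), y in zip(positions, data):
        patch = rho[r:r + k, c:c + k]
        Psi = np.fft.fft2(probe * patch)                    # far field W P_j rho
        Psi_corr = np.sqrt(y) * Psi / (np.abs(Psi) + eps)   # model G, b = 0
        # Gradient of L_G;j, up to the FFT normalization constant:
        grad_j = np.conj(probe) * np.fft.ifft2(Psi - Psi_corr)
        scale = beta / (np.abs(probe).max() ** 2 + eps)     # D_j ~ const * I
        rho[r:r + k, c:c + k] = patch - scale * grad_j
    return rho
```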
Scaled-gradient optimization strategies

Given an initial guess ρ^(0), the following scaled-gradient (SG) strategy is defined for k = 0, 1, …

ρ^(k+1) = ρ^(k) − β Λ ∂_•(ρ^(k))   (15)

where the gradient ∂_• given by Eq. (11a) accounts for all the probes, and Λ ∈ R^{N×N} is a diagonal scaling matrix. As underlined in Ref. [48], the iteration (15) is a natural extension of the Error Reduction algorithm to the ptychographical approach. Since β and Λ do not depend on the iteration number, the condition ρ^(k) → ρ^∞ implies ||∂_•(ρ^(k))|| → 0: the convergence toward a limit point implies that this point is a local optimum of L_•. In practice, the step length β > 0 is adjusted in order to generate a sequence converging toward a global or (at least) a local minimum of the fitting function. To the best of our knowledge, no result exists that gives admissible values of β ensuring the (local) convergence of Eq. (15). However, the tuning β ≈ 1 was found to ensure convergence in most cases investigated in the present study. Indeed, provided that β is properly tuned, the SG iteration was always found to be a convergent algorithm.

For OS algorithms, the order in which the subsets are treated is often critical. This is clearly not the case with the SG strategy, since the update (15) requires the full set of probe positions. The SG strategy is usually slower than the PIE in the early iterations because the latter performs J updates when the former performs only one. However, the SG strategy converges to a (local or global) minimum of L_•, even with a noisy data set. An illustration of these distinct convergence behaviors is given in Sec. 4.4.

Data inversion: a resolution vs. robustness trade-off

Clearly, the ML solution (Eq. (8)) is subject to fluctuations induced by the random nature of the measurements y. Because four distinct fitting functions are discussed here, it is appropriate to search for the "best" model for the reconstruction purpose. This task first requires defining how the estimators will be compared. In the statistical literature, the accuracy of estimation is evaluated via two standard indicators, the estimation bias and the estimation standard deviation. Let ⟨•⟩ be the expectation (i.e., average over several realizations of the noise) operator, and let ρ_{•;n} and ρ*_n denote the n-th component of the ML solution ρ_• and of the true object ρ*, respectively. The estimation bias then reads as in Eq. (17), where a global phase factor aims at compensating the global phase ambiguity of the reconstruction. For complex random variables, the standard deviation of the estimation is defined by Eq. (20). Note that Eq. (17) and Eq. (20) are intuitive quality indicators for the (noise-model-dependent) estimator ρ_•: whereas the bias (17) gives the systematic error, the variance (20) tells whether the estimator is robust w.r.t. the noise. A third indicator is interesting to introduce: the mean square error (MSE), which conveniently combines the preceding indicators. While a general closed-form expression for the bias and the standard deviation is not available, the computation of the averaged quantities given by Eq. (17) and Eq. (20) can however be achieved via Monte-Carlo simulations.

Some implicit effects induced by the noise models

This section aims at deriving typical features contained in the calculated solutions resulting from the choice of the noise model itself.

The asymptotic case of an arbitrarily large signal-to-noise ratio (SNR) is first investigated: since we are dealing with photon noise, this results in y_{m,j} → h_{m,j}(ρ*). Consequently, from Eq. (11), the gradient evaluated at ρ* vanishes whatever the noise model is. In this context, the true object ρ* minimizes the four fitting functions and the bias vanishes, i.e. the four noise models are equivalent. Therefore, the consideration of the four noise models is only relevant at low SNR.
In particular, Eq. (12) gives the relation Ψ_{P;m,j} = (y_{m,j}/h_{m,j})^{1/2} Ψ_{G;m,j} between the corrected exit fields drawn from the models P and G; it shows that the contribution to the final solution of a low-SNR measurement [49] y_{m,j} ∼ 1 is enhanced with the noise model P, because its typical expected value is then h_{m,j} < 1 ≤ y_{m,j}. Such measurements being spread over the borders of the intensity pattern, one expects that the noise model P enhances the spatial resolution (i.e. reduces the bias) w.r.t. the noise model G. However, this gain necessarily has a cost: because these low-SNR measurements are plagued by large fluctuations, the model P should also lead to a larger estimation variance. The opposite argument holds for the noise model R: the model R should lead to higher biases and to lower variances as compared to the noise model G. In summary, we can see that the specific behavior of each model is dominated by the set of pixels that collects the lowest number of photons. Finally, we also note that photon noise, in the low-SNR regime, produces very sparse intensity patterns. As these empty pixels are usually at the very edge, high-frequency components are missing in each intensity pattern and one can assume that the retrieved object is, more or less, a low-pass filtered version of the original object (with the loss in resolution being driven by the SNR). This result should hold whatever the considered noise model is.

A test-chart that highlights the predicted effects

To highlight these specific behaviors, a numerical test is now presented, which involves the evaluation of the estimation bias and standard deviation from Monte-Carlo simulations. The choice of the object is primarily motivated by its ability to illustrate the predicted "cut-off frequency" effect of each noise model explained above. For instance, a suitable object is a part of a Fresnel zone plate (see Fig. 1). The transmission coefficient of the object is set to one inside the object support, while it vanishes outside; the object extends over 100 × 100 pixels in a numerical window of 260 × 260 pixels. The phase shift encountered by the beam is 1.72 radians and the radial frequency varies from 0.07 to 0.3 pixel⁻¹. The ptychographical data set is composed of a total of 81 diffraction patterns, each one of size 100 × 100 pixels. The choice of a step size of about 20 pixels in both directions leads to an overlap ratio of 65%. In addition, two SNRs are considered in these simulations: the highest SNR provides a maximum of 10⁶ expected counts over the 2D camera; the lowest provides 10³ expected photons over the camera. The Monte-Carlo analysis presented below is based on a set of 100 noisy (photon noise) ptychographic data sets.

Some issues concerning the iterative strategy

It is clear from Sec. 3 that distinct iterative strategies can be derived for the minimization of the same fitting function. It is therefore appropriate to investigate the impact of the iterative strategy on the retrieved object. For that purpose, the inversion of a single ptychographic data set by either the OS or the SG strategy is now considered. For the sake of simplicity, the fitting function L_G is considered, but similar results are obtained with the other fitting functions.

When a noise-free data set is considered, Fig. 2(b) shows that the OS strategy converges toward a minimum of the fitting function, since the gradient norm decreases toward zero. With noisy data (i.e., corrupted by photon noise), however, the gradient norm starts to decrease before it reaches a stagnation, such that convergence does not occur. Furthermore, Fig. 2(b, c) shows that the OS strategy should be stopped early in the iteration process [50] in order to pick the best solution w.r.t. the relative error Err(ρ) in the object plane, defined by Eq. (25), where ρ* is the true object and where a global phase ambiguity is again compensated.
When the SG strategy is considered, Fig. 2(a, b) shows that the iterations converge toward a (local or global) minimum of L_•, even with a noisy data set. This minimum defines an estimate which is a global trade-off over the set of inconsistent diffraction patterns, leading to a lower relative error than the best relative error reached by the OS strategy (see Fig. 2(c) and the reconstructions shown in Fig. 2(d, e, f)).

Investigation of the figure of merit for each noise model

In an attempt to define the intrinsic merit of each noise model, the impact of the minimization method has to be as low as possible. In other words, the minima of L_• can only be compared w.r.t. the noise models if one ensures that the reconstruction quality is not affected by the way the data are handled along the iterative process. Therefore, the use of the (convergent) SG strategy is mandatory, with the additional condition of an initialization as close as possible to the minima. For that purpose, the true solution is chosen as initial guess, i.e., ρ^(0) = ρ*. The algorithms are stopped when the norm of the gradient reaches a conveniently small value.

For each fitting function, and for the lowest SNR, the numerical evaluations of the averaged solution (both modulus and phase) given in Eq. (18) and of the standard deviation given in Eq. (20) are provided in Fig. 3 and Fig. 4, respectively. The predicted cut-off frequency effect is clearly visible in Fig. 3: for L_R, the edges of the modulus are smoothed and the phase is damped, whereas they remain much more resolved (undamped) for L_P. For this low SNR, the modulus is contaminated by fluctuations that come from the object phase function. The relative amplitude of these fluctuations is 8, 10, 17 and 10% for L_P, L_G, L_Q and L_R, respectively. The fluctuations reduce when the SNR increases and become negligible (around 1%) for 10⁶ photons. As explained in Sec. 4.2, such artifacts appear because the retrieved object is essentially a low-pass filtered version of the original object (see Fig. 5). In the case of the present object, it results from a mixing between the real and imaginary parts of the object, leading to the observation of a phase-like structure in the modulus component. Finally, the standard deviation depicted in Fig. 4 confirms that L_R has the highest robustness w.r.t. the photon noise, whereas L_P has the lowest. For all the fitting functions, the standard deviation grows with the collected number of photons, which is a standard result when one deals with photon noise (Ref. [51], p. 181). The fitting function L_G reaches a trade-off between these two behaviors, as expected from Sec. 4.2. Quantitatively, the numerical evaluations of the quality indicators defined in Sec. 4.1 are reported in Table 1. For both SNRs, although the variance is lower with L_G or L_R, one notes that L_P nevertheless gives the best results w.r.t. the MSE and the error in the object plane. The fitting function L_Q being much less robust to the noise than the other three fitting functions (see Sec. 3), it is not considered as a valuable alternative for CDI in the low counting rate regime.
Starting from a coarse initial guess

In practice, the chosen initial guess is often a rough estimate, and the iterative strategy adds its own bias and variance. For this reason, it is appropriate to investigate how the reconstruction quality deteriorates when one uses a coarse initialization with either the OS or the SG strategy. We further assume that no a priori object information can be used, resulting in the choice of a free-space estimate for the initial guess. The quality indicators achieved with 10³ photons are reported in Table 2; as the OS strategy does not lead to converging iterations (see Sec. 3.2), the iteration that gives the best (i.e., the smallest) error in the object plane is selected for each data set.

To summarize, the behaviors exhibited in the preceding section are still valid here: the fitting function L_P offers the lowest bias but the worst variance, while the fitting function L_R has the opposite characteristics. Moreover, every criterion is improved when using the SG strategy, the gain being most clearly evidenced with the noise model P. For the sake of completeness, the difference-map (DM) iteration [52] for ptychographic image reconstruction, as described in Ref. [53], is also implemented and compared. In all the tests performed, the DM iteration and the OS strategy with L_G perform equivalently [54]. In Fig. 6, the results of the SG strategy obtained from a single noisy data set are shown for the various fitting functions; these illustrate what a typical reconstruction looks like for each noise model when the SG strategy is used. Finally, the algorithms presented in this work have been tested on several object classes: phase objects, absorption objects, objects with low or high contrast, etc. It is always the case that the Poissonian noise model P presents the least systematic errors, whereas the noise model R is the most robust, the Gaussian model G reaching a trade-off between the two others. It is also observed that the differences between all these algorithms tend to vanish when the SNR increases.

The minimization of L_P: a hybrid optimization strategy

It is clear from Tables 1 and 2 that L_P is the fitting function that undergoes the strongest degradation if the initialization is far from the final solution. On the contrary, the minimization of L_R is very robust w.r.t. the starting point. Moreover, the OS and the SG strategies are mostly equivalent for that fitting function. Hence, it is appropriate to search for a hybrid strategy that profits from both fitting functions. Therefore, we propose to use the OS strategy starting with the fitting function L_R in order to get quickly to a first estimate, which is subsequently introduced as an initial guess for the further minimization of the fitting function L_P. As an example, one can perform 1000 OS iterations with L_R followed by 1000 SG iterations with the fitting function L_P. The quality indicators obtained with this strategy are shown in Table 2. One notes that these indicators are improved: they reach values similar to the ones obtained with the true object as starting point (see Table 1). Figure 7 also shows the reconstructed phase obtained by either the "hybrid" strategy or by the SG strategy with the true object as an initial guess. These phases are similar, showing that the hybrid strategy is a valuable technique for the optimization of the Poissonian fitting function.
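A schematic driver for the hybrid strategy might look as follows. Here grad_R_j and grad_P are assumed helper functions returning the per-position gradient of L_R (with its D_j scaling absorbed) and the full gradient of L_P, both with the adjoint structure P_j†W†(Ψ_j − Ψ_{•;j}) given above, and Lambda is the diagonal SG scaling. The iteration counts mirror the 1000/1000 example quoted above.

```python
def hybrid(rho0, grad_R_j, grad_P, Lambda, J, n_os=1000, n_sg=1000, beta=1.0):
    """Stage 1: fast OS sweeps on the robust L_R; stage 2: convergent SG
    iterations on the Poissonian L_P, started from the OS estimate."""
    rho = rho0.copy()
    for _ in range(n_os):                       # OS: one sweep over all probes
        for j in range(J):
            rho = rho - beta * grad_R_j(rho, j)     # D_j absorbed in the helper
    for _ in range(n_sg):                       # SG: full gradient, cf. Eq. (15)
        rho = rho - beta * Lambda * grad_P(rho)
    return rho
```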
Conclusion

In summary, we have addressed the question of the choice of the iterative process for coherent diffraction imaging in the case of data corrupted by noise. Several noise models compatible with photon (or electron) shot noise have been presented and further used within two inversion strategies, the OS and the SG. We have shown that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of this iterative technique. Our analysis emphasizes that iterative reconstruction algorithms for CDI often assume implicitly a noise model that may be a more or less coarse approximation of the data fluctuations. While standard asymptotic results for photon noise foresee that high-SNR measurements should be handled in the same way by any model, each model has the ability to enhance or inhibit the weight of low-SNR measurements in the final reconstruction. From this viewpoint, the noise models presented in this paper reach their own resolution vs. robustness trade-off. The merit of each noise model may be user- and/or object-dependent and, from an experimental perspective, the impact of the intensity fluctuations w.r.t. the noise model has to be tested on numerical samples prior to the inversion. An efficient strategy to circumvent the problem in the case of experimental intensity analysis consists in building a set of data for a model sample, designed as close as possible to the available experimental data set (Fourier-space resolution, number of probe positions, SNR, etc.). This numerical data set can then be used to test the different noise-model approaches and to reveal the possible reconstruction artifacts. Whereas it is not a surprise that in the presence of shot noise the initial object guess has a strong impact on the final solution obtained with CDI, the employed optimization strategy (OS or SG) generates its own artifacts. Clearly, algorithms that reach the minimum of the fitting function defined by the noise model should be used. On the contrary, if non-converging algorithms are employed, some additional reconstruction degradation is expected. Finally, based on this detailed study, a hybrid strategy has been presented that improves the convergence towards the minimum of the Poissonian fitting function when a good initial guess is missing.

The ML principle adopted in this work does not rely on any prior model related to the unknown object. However, when such information is available, prior models provide additional constraints that may enhance the resolution and the robustness of the reconstructions. In this context, the maximum a posteriori would be the natural extension of the ML principle when prior models are accessible. It would lead to a penalized fitting function as discussed elsewhere in e.g., Refs. [9,18]. Projective algorithms (like ER or HIO) are also natural means to handle additional constraints concerning the unknown object. Another interesting perspective consists in the adaptation of these standard algorithms in order to cope with the various noise models presented in this article.

A. The variance stabilization transform

Let y be a random variable with mean ⟨y⟩ = μ and variance VAR(y) = σ², and suppose that σ and μ are related by σ = f(μ) for some function f. A variance-stabilization transform aims at constructing a function h such that the random variable h(y) has an almost constant variance, without losing any information (i.e., h has to be injective in the range of y).
The first-order Taylor expansion of h around μ is

h(y) − h(μ) = (y − μ) h′(μ) + R   (27)

where R stands for higher-order terms. One then has VAR[h(y)] = VAR[h(y) − h(μ)], where ⟨(y − μ) h′(μ)⟩ = 0 is used. Neglecting all the contributions from the terms higher than the first order gives

VAR[h(y)] ≈ [h′(μ)]² σ² = [h′(μ) f(μ)]²

Thus, within the first-order approximation, the variance of h(y) is independent of μ if a function h is exhibited such that h′(x)|_{x=μ} = b/f(μ) for a constant b in R. The obvious candidate h(x) = bx/f(μ) is of no interest, being a linear function. A suitable choice for the Poissonian case, in which f(x) = √x, is the function h(x) = √x; we then find b = 1/2, so that the stabilized variance is 1/4. This is the variance stabilization used in Section 2.2. Anscombe showed in [55] that the function h(x) = (x + 3/8)^{1/2} has a better variance-stabilization capability than the square-root transform.

Fig. 1. The test object is a (support-limited) quadrant of a Fresnel zone plate extending over 100×100 pixels within a 260×260 pixel image. The modulus (a) is 1 within the support of the object and the phase (b) ranges from 0 to 1.72 rad. The corresponding cross-sections are plotted along the 86th column of the image. A real probe function (c) is chosen so that it extends over 58×58 pixels (full width at half maximum) within a 100×100 pixel image, corresponding to an oversampling ratio of 1.7; the corresponding cross-section is plotted along the 50th column of the image.

Fig. 2. A ptychographical reconstruction illustrates the convergence behavior of the OS and the SG strategies. In these examples, the fitting function is L_G given in (10c), such that the OS strategy corresponds to the standard PIE. Top line: for the OS (dashed line) and the SG (solid line), (a) evolution of the fitting function L_G w.r.t. the iteration k for a noise-free data set (thick line) and an example of a noisy data set (thin line) with a maximum of 10³ photons on the camera; (b) idem for the gradient norm ||∂_G(ρ^(k))||; (c) idem for the error Err(ρ^(k)) defined by (25). Second and third lines: reconstruction from a noisy data set; with the OS strategy, one marker indicates the estimate that minimizes the error depicted in (c) and another the estimate obtained after k = 2000; with the SG strategy, (×) is the estimate after k = 2000. The shown results correspond to a 180 × 180 window centered around the object central pixel. The respective color scales are indicated on the figure.

Fig. 3. The average solution for each noise model evaluated over a series of 100 noisy data sets. The initial guess is the true object and the SG strategy is used for the optimization of the fitting function. For each noisy data set, no more than 10³ photons impinge on the detector. The grey-level scaling in each column shares the same linear scale. The shown results correspond to a 180 × 180 window centered around the object central pixel.

Fig. 6. Object reconstruction by means of the minimization of the fitting functions L_P, L_G and L_R given in Eq. (10). The optimization is performed with the SG strategy presented in Sec. 3.3 and an initial guess defined as a uniform object. The phase is set to zero outside of the object support for visualization purposes.
Table 1. Figure of merit of each noise model. The l₂-norms of the bias, standard deviation (STD) and mean-square error (MSE), as well as the error (Err) in the object plane, are given for the fitting functions defined in section 2. The SG strategy is used with the true object as initial guess.

Table 2. The l₂-norms of the bias, STD and MSE, as well as Err in the object plane, achieved by the fitting functions L_P, L_G and L_R when either the SG or the OS strategy is used with free space as initial guess. The results achieved by the DM and the hybrid method are also presented.
9,646
2012-11-05T00:00:00.000
[ "Physics" ]
Shale gas production: potential versus actual greenhouse gas emissions

Estimates of greenhouse gas (GHG) emissions from shale gas production and use are controversial. Here we assess the level of GHG emissions from shale gas well hydraulic fracturing operations in the United States during 2010. Data from each of the approximately 4000 horizontal shale gas wells brought online that year are used to show that about 900 Gg CH4 of potential fugitive emissions were generated by these operations, or 228 Mg CH4 per well—a figure inappropriately used in analyses of the GHG impact of shale gas. In fact, along with simply venting gas produced during the completion of shale gas wells, two additional techniques are widely used to handle these potential emissions: gas flaring and reduced emission 'green' completions. The use of flaring and reduced emission completions reduces the levels of actual fugitive emissions from shale well completion operations to about 216 Gg CH4, or 50 Mg CH4 per well, a release substantially lower than several widely quoted estimates. Although fugitive emissions from the overall natural gas sector are a proper concern, it is incorrect to suggest that shale gas-related hydraulic fracturing has substantially altered the overall GHG intensity of natural gas production.

Introduction

Over the past decade, economically recoverable shale gas has transformed the US natural gas industry, with some analysts characterizing it as a 'revolution' (Deutch 2011, Jacoby et al 2012). With shale-driven growth, the US has become the world's largest gas producer (IEA 2011). The low gas prices that have accompanied this production boom have led to a renewed growth in gas demand by industrial users, a recovery viewed as extremely unlikely just a decade ago. The rise of shale gas has not been without controversy, however, with important concerns raised regarding water pollution (Osborn et al 2011), greenhouse gas (GHG) emissions, particularly those related to hydraulic fracturing (Howarth et al 2011a, 2011b, 2012), and uncertainty in estimates of the resource scale (Jacoby et al 2012, Urbina 2011, MIT 2011, Lee and Sidle 2010). In this analysis we focus on the issue of fugitive GHG emissions associated with shale gas fracturing and provide estimates of potential and actual emissions.

Hydraulic fracturing and GHG emissions

The economic production of shale gas is only possible through the use of hydraulic fracturing to increase production rates from the extremely low-permeability shale formations. The hydraulic fracturing process has two main stages: injection and flowback. During injection, a slurry made up of a carrier fluid, typically water, and a proppant agent, typically sand, is forced into the well at pressures high enough to induce fractures in the reservoir rock. These propped fractures allow gas in the formation to flow from the well at economically acceptable rates. After the injection phase is completed, flowback takes place. Here some of the initially injected fluid returns to the surface over the course of a week or more. During flowback, the well also begins to produce gas.
It is the amount of this gas, and how it is handled, that has been central to the debate about the GHG intensity of shale development. In 2011, the EPA revised upwards its GHG inventories for the natural gas system (EPA 2011), and some have attributed this increase to the expanded production of shale gas and the associated increase in hydraulic fracturing. It has been argued that large amounts of gas are directly vented to the atmosphere during flowback, and that this means shale gas has a significantly higher GHG intensity than conventional gas production (Howarth et al 2011a, 2011b). In fact, with some specific assumptions about the global warming potential of gas, it has been suggested that the GHG impact of shale gas might be greater than that of coal on a lifetime basis (Howarth et al 2011a, 2011b). This perspective has been widely articulated via popular media (e.g., Soraghan 2011, McDonald 2011). The debate has been further fuelled by research published by NOAA scientists (Petron et al 2012) that studied methane and other fugitive GHG levels in air samples taken in Colorado's Denver-Julesburg oil and gas basin. Their results suggest that fugitive emissions in Colorado's Weld County during 2008 amounted to 3.8% of the county's total gas production that year. The study area, the Denver-Julesburg Basin, is a tight sandstone formation that produces appreciable amounts of both gas and oil. In 2008, the year of the study, there were 850 tight gas wells and 1583 oil/condensate wells drilled in the Denver-Julesburg area (HPDI 2012). An important point regarding the study is that it assessed fugitive emissions levels from the entire gas and oil production system in the basin, which includes many complex upstream and midstream systems widely known as fugitive emissions sources, including gathering pipelines, compressor stations and condensate tanks (EPA 2012a, 2011). Nevertheless, some have interpreted the NOAA analysis as a quantification of fugitive emissions resulting from hydraulic fracturing alone (Tollefson 2012). The conclusions of Howarth et al (2011b) have been questioned by some analysts (DOE 2011, Cathles et al 2012), and several groups working on the topic have come to different conclusions regarding the relative GHG impact of shale gas. Burnham et al (2011) conclude that the life-cycle GHG emissions from shale gas are slightly less than those of conventional gas, Weber and Clavin (2012) suggest they are approximately equal, while Jiang et al (2011) and Stephenson et al (2011) both conclude that shale gas has a lifetime GHG impact that is slightly higher than that of conventional gas. All of these groups do, however, conclude that the GHG impact of electricity generated using shale gas is significantly less than that of electricity generated with coal.

Analysis

Analysis of this controversy begins with quantification of potential emissions produced during well flowback. This requires knowledge of the duration of the flowback stage and the rate of gas production during that period. The EPA assumes that the flowback period lasts from 3 to 10 days (EPA 2011). A recent industry-sponsored survey suggests 3 to 8 days (ANGA 2012). The analysis of Howarth et al (2011a) assumes 9 days for the wells in the Barnett shale and 10 days in the Haynesville shale. Here we use a 9 day flowback period for wells in each of the major shale plays.
Although it is certain that flowback durations vary from well to well, our 9-day assumption is at the conservative end of the reported range. Measured data on the rate of flowback from Haynesville shale wells reported by Fan et al (2010) show that within 9-10 days, the level of fluid production falls by ∼75%, and this confirms that 9 days is a reasonable estimate. We assume that gas production during flowback from a given well can be modeled as ramping linearly from zero at flowback initiation to the peak recorded production rate for that well at flowback completion. This assumption is supported by data presented during a recent EPA workshop (EPA 2012b), and by both simulation results and recorded gas production rates during the flowback of shale wells reported by Fan et al (2010). Integrating this production profile over the 9 day flowback period yields the potential fugitive emissions estimate for each well. In this report we assess the level of fugitive GHG emissions resulting from the hydraulic fracturing of 3948 horizontally drilled shale gas wells brought online in the US during 2010 (HPDI 2012), assuming a number of gas handling scenarios, which involve different levels of venting, flaring and gas capture. Table 1 shows the potential emissions estimates assuming the mean well peak production rates in each shale play for 2010. The table also illustrates the substantial well-to-well variability in potential emission levels by showing the estimates for the 20th, 50th and 80th percentile peak production rates. The peak production rate data underlying the values reported in table 1 can be seen in table S1 of the supplementary materials (available at stacks.iop.org/ERL/7/044030/mmedia). The variation in initial well productivity within and between the shale plays is driven in large part by underlying geological, geo-mechanical, geochemical and petrophysical characteristics of the shale formations. Reservoir pressure, total organic content, thermal maturity, porosity and other factors can all differ within and between plays, and this in turn results in well-to-well variation in productivity (Jarvie et al). It is useful to compare the per-well potential emissions from table 1 to the estimated ultimate production from wells in each play. There is appreciable uncertainty regarding the level of ultimate recovery that can be expected from shale wells. Much of this is due to the limited production history of the shale resource and the, as yet, not well understood mechanisms of production in ultra-low permeability reservoirs (Anderson et al 2010, Lee and Sidle 2010). To account for this uncertainty we assume two well production lifetimes in this analysis: the commonly assumed 30 yr lifetime, and a more conservative 15 yr lifetime. It is important to acknowledge, though, that there is legitimate debate ongoing regarding whether the productive lifetimes of these wells may in fact be appreciably shorter than even our 15 yr case (Berman 2012, Hughes 2011). The results of the comparison between potential emissions produced during flowback and estimates of ultimate recovery based on 30 and 15 yr producing lifetimes are shown in table 3. The results indicate that in most shale plays, hydraulic fracturing-related potential fugitive emissions represent 0.4-0.6% of a well's estimated ultimate recovery.
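The per-well calculation described above reduces to the area of a triangle: with a linear ramp from zero to the peak rate over the flowback period, the integral is half of peak rate times duration. The Python sketch below illustrates this; the example peak rate is hypothetical, and the mass conversion assumes ~0.68 kg CH4 per m3, a density consistent with the volume/mass pairs quoted later in the text.

    def potential_emissions_m3(peak_rate_m3_per_day, flowback_days=9.0):
        """Gas produced during flowback, modeled as a linear ramp from zero
        at initiation to the well's peak recorded rate at completion.
        The integral of the ramp is the triangle area: 0.5 * peak * duration."""
        return 0.5 * peak_rate_m3_per_day * flowback_days

    # Hypothetical well peaking at 100,000 m^3/day:
    gas_m3 = potential_emissions_m3(100_000)   # 450,000 m^3 over 9 days
    ch4_mg = gas_m3 * 0.68 / 1000              # ~Mg CH4, assumed density
    print(gas_m3, ch4_mg)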
In the Haynesville, the ratio is higher at 0.8-1%, owing to the high initial production and production decline rates in that play, which are due to that particular shale's highly over-pressured reservoir (Baihly et al 2010). Should it become clear that shorter lifetimes are more representative, then the ratio of potential emissions to ultimate recovery will increase, though not proportionally, as shale wells tend to be most productive during their early lives. Details of actual production dynamics from the ensemble of shale wells drilled since 2005 can be found in section S2 of the supplementary materials (available at stacks.iop.org/ERL/7/044030/mmedia). The proportions of the potential fugitive emissions that are vented, flared, or captured and sold via a reduced emission 'green' completion determine the actual GHG intensity of shale gas-related hydraulic fracturing. In this analysis we use specific GHG intensities for venting, flaring and reduced emission completions of 13.438 kg CO2e, 1.714 kg CO2e and 1.344 kg CO2e, respectively, based upon a 100 yr Global Warming Potential (GWP) for CH4. Shindell et al (2009) argue that the use of a 100 yr integration period underestimates the actual warming impact of CH4 and suggest that a higher GWP factor, based on a 20 yr integration period, be used instead. Because the various GHGs have different lifetimes in the atmosphere (e.g., on the scale of decades for CH4, but centuries for CO2 and thousands of years for some other GHGs), the IPCC (2007) provides this factor for 20-, 100- and 500 yr integration periods and uses 100 yr GWPs. MIT (2011) argues that a 20 yr GWP would emphasize the near-term impact of methane but ignore serious longer-term risks of climate change from GHGs that will remain in the atmosphere for hundreds to thousands of years. For comparison, the specific GHG intensities of venting, flaring and reduced emission completions assuming a 20 yr GWP for CH4 are detailed in section S3 of the supplementary materials (available at stacks.iop.org/ERL/7/044030/mmedia). Considerable opacity surrounds real-world gas handling practices in the field, and what proportion of gas produced during well completions is subject to which handling techniques. Diverse opinions on this question exist even within the gas industry. Some analysts state that gas companies have had a policy of not investing in gas conservation measures due to the low rate of return (one referee of this paper pointed out an oral presentation given at the 2012 Goldschmidt International Geochemistry Conference in Montreal where gas insiders stressed this point and argued that venting of methane is a common practice, since flaring draws public attention). By contrast, an industry survey of unconventional gas producers has suggested that reduced emission completions are being used on more than 90% of shale well completions, and that in the case of those wells not subject to a reduced emission completion, the duration of flowback is rarely more than 3 days (ANGA 2012). Some of the contemporary analysis on shale gas-related fugitive emissions has not attempted to account for the impact of real-world gas handling field practice. For example, in Howarth et al (2011b) it is assumed that all potential fugitive emissions are vented. This is an unreasonable assumption, not least because some producing states have regulations requiring flaring as a minimum gas handling measure.
The EPA in its quantification of fugitive emissions does assume a certain proportion of gas is flared (EPA 2011, 2012a); however, it does not separate fugitive emissions from shale wells from those from tight and other unconventional gas sources. Furthermore, the EPA analysis does not adequately assess gas capture levels, particularly in regions where flaring is required. We assess several gas handling scenarios, ranging from the assumption that all potential emissions are vented (Howarth et al 2011b), to that suggested by a gas industry group in which 93% of potential fugitive emissions are captured (ANGA 2012). However, our main estimate of actual fugitive emissions is based on a 'current field practice' gas handling scenario, where 70% of potential fugitives are captured, 15% vented, and 15% flared. This we believe is a reasonable representation of current gas handling practices in the major shale plays (EPA 2012b). (Further discussion of gas handling scenarios is presented in section S3 of the supplementary materials, available at stacks.iop.org/ERL/7/044030/mmedia.) Table 4 contrasts the level of per-well actual fugitive emissions based upon the assumption of the 'current field practice' scenario and the 'all vented' scenario. Compared to the all-vented analysis (Howarth et al 2011b), which reports emissions from the Barnett as 252 Mg CH4/well (370,000 m3 CH4) and 4638 Mg CH4/well (6,800,000 m3 CH4) for the Haynesville, our mean estimates are 35.1 Mg CH4/well and 151.3 Mg CH4/well, respectively. Beyond regulation, the methods selected to handle gas during well completions in the field are driven by economics. In the case of conventional gas wells, the volumes of potential emissions produced during completion are very low. According to the EPA, on average, 1040 m3 CH4 (36.36 Mcf) are produced by a conventional well completion (EPA 2010). The economic value of this gas would certainly not justify the use of a reduced emission 'green' completion. By contrast, the level of potential emissions from shale wells is very large. In Howarth et al (2011b) it is stated that 3.2% of the estimated ultimate recovery from a Haynesville shale well is produced during flowback. In that case, 3.2% of estimated ultimate recovery amounts to 6,800,000 m3 CH4. This is a very considerable amount of gas, and assuming a conservative long-run wellhead gas price of $4.00/MMBtu (MIT 2011, NYMEX 2012, EIA 2012), simply venting, or indeed flaring, this gas would amount to a revenue loss of $1.2 million for the operators. Admittedly, this is an extreme example, since the performance of the particular Haynesville well in question is not representative of a typical Haynesville well; however, even when considering mean shale well performance data, the value of gas produced during flowback is substantial, and likely to warrant the cost of capture. Based on our mean estimates of potential emissions shown in table 1, the gross value of capturing this gas using a reduced emission completion ranges from $39,000 for a Barnett well to $166,000 for a Haynesville well. The aggregate gross value of the gas produced during flowback from the 3948 shale wells considered in this study amounts to $320 million.
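A minimal sketch of the scenario arithmetic follows, using the 100 yr GWP intensities and the 'current field practice' split (70% captured, 15% vented, 15% flared) stated above. The per-m3 basis of the intensities and the energy-content conversion (~0.0364 MMBtu per m3 of gas) are assumptions for illustration, as are the example volumes.

    # kg CO2e per m^3 of gas handled, per the 100 yr GWP figures in the text.
    INTENSITY = {"vent": 13.438, "flare": 1.714, "rec": 1.344}

    def scenario_co2e_kg(potential_m3, vented=0.15, flared=0.15, captured=0.70):
        """CO2e footprint of one well's potential emissions under a handling mix."""
        assert abs(vented + flared + captured - 1.0) < 1e-9
        return potential_m3 * (vented * INTENSITY["vent"]
                               + flared * INTENSITY["flare"]
                               + captured * INTENSITY["rec"])

    def gross_value_usd(captured_m3, price_usd_per_mmbtu=4.00):
        """Sale value of captured gas, assuming ~0.0364 MMBtu per m^3."""
        return captured_m3 * 0.0364 * price_usd_per_mmbtu

    potential = 450_000  # m^3, hypothetical well
    print(scenario_co2e_kg(potential) / 1000, "t CO2e")
    print(gross_value_usd(0.70 * potential), "USD")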
Capturing potential emissions is not without cost, of course, but these costs appear to be relatively modest (a detailed discussion of the variability in the gross value of gas produced during flowback, and of the costs associated with reduced emission completions, can be found in section S4 of the supplementary materials, available at stacks.iop.org/ERL/7/044030/mmedia). If the cost of a reduced emission completion is $1000 per day, as stated by Devon (2008), 95% of the 2010 Barnett wells yielded positive net revenues, i.e., operators added to the value of their wells by capturing the potential fugitive emissions. Even at twice this reported capture cost, $2000 per day, 83% of the 2010 Barnett wells would still yield positive net revenues, and this trend is repeated in all the other shale plays. The results of a sensitivity analysis exploring the impact of reduced emission completion costs and gas price variation on the 2010 Barnett shale well ensemble are shown in figures S5 and S6 of the supplementary materials (available at stacks.iop.org/ERL/7/044030/mmedia).

Conclusions

Taking actual field practice into account, we estimate that in 2010 the total fugitive GHG emissions from US shale gas-related hydraulic fracturing amounted to 216 Gg CH4. This represents 3.6% of the estimated 6002 Gg CH4 of fugitive emissions from all natural gas production-related sources in that year (EPA 2012a, 2012b). The entire natural gas value chain is estimated to have produced 10,259 Gg CH4 of fugitive emissions in 2010, or about 3.1% of the nation's total GHG inventory (EPA 2012a, 2012b). Thus, under a goal of GHG reduction, it is clear that increased efforts must be made to reduce fugitive losses from this system. However, it is also clear that the production of shale gas and, specifically, the associated hydraulic fracturing operations have not materially altered the total GHG emissions from the natural gas sector. Furthermore, for the vast majority of contemporary shale gas wells, the revenues gained from using reduced emission completions to capture the gas produced during a typical flowback cover the cost of executing such completions.
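The breakeven logic behind these percentages can be sketched as below; the $1000/day capture cost comes from the Devon (2008) figure cited above, while the example volume and the MMBtu conversion are assumptions.

    def rec_net_revenue_usd(captured_m3, flowback_days=9,
                            cost_per_day=1000.0, price_per_mmbtu=4.00,
                            mmbtu_per_m3=0.0364):
        """Net revenue of a reduced emission completion: sale value of the
        captured gas minus the assumed per-day completion cost."""
        value = captured_m3 * mmbtu_per_m3 * price_per_mmbtu
        cost = cost_per_day * flowback_days
        return value - cost

    # A hypothetical well capturing 300,000 m^3 nets ~$34,700 at $1000/day,
    # and stays positive even at the doubled $2000/day cost.
    print(rec_net_revenue_usd(300_000))
    print(rec_net_revenue_usd(300_000, cost_per_day=2000.0))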
4,268.6
2012-11-01T00:00:00.000
[ "Geology" ]
Finding a New Way to Increase Project Management Efficiency in Terms of Time Reduction

There are three basic constraints, called 'the golden triangle', in each project: time, budget and scope. Researchers and practitioners are trying to find a way to increase the efficacy of a project's outcomes in terms of shortening the project's duration, lowering budgetary costs and meeting the scope. Although several publications have been written on that topic, there is still no common solution in place. However, more in-depth research related to specific types of projects or industries is being conducted, and this paper seeks to incorporate additional knowledge into that contemporary field. Furthermore, this is of growing importance in modern, turbulent times, where the expectations of profiting from lessons learned are ever increasing. Following current market demands, in this article a new, innovative approach is proposed to manage the expectation that future projects will have a shorter duration. Therefore, the idea of creating specific roadmaps is proposed, which should help decision makers improve the efficacy of project management in the company, where the process of such improvement is measured using the maturity levels assessment concept. Based on world-wide quantitative studies in the construction, information technology and machinery industries, specific roadmaps for each industry were determined. The purpose of these roadmaps is to indicate the most effective investment sequence in the increase of project management maturity, which should result in a decrease of future projects' duration. Moreover, the limitations of such investments are discussed.

Introduction

Three of the world's most recognized Latin words, 'Citius, Altius, Fortius', meaning 'Faster, Higher, Stronger', have been the Olympic motto since 1894. At the time, Pierre de Coubertin proposed the motto, having borrowed it from his friend Henri Didon, a Dominican priest who taught sport close to Paris (IOC, 2007, p. 5). These three words encourage the athlete to give his or her best during competition. To better understand the motto, we can compare it with the Olympic creed: 'The most important thing in life is not the triumph, but the fight; the essential thing is not to have won, but to have fought well.' Together, the Olympic motto and the creed represent an ideal that Coubertin believed in and promoted as an important life lesson that could be gained from participation in sport and the Olympic Games: that giving one's best and striving for personal excellence was a worthwhile goal. It is a lesson that can still be applied equally today, not just to athletes but to each one of us (IOC, 2007, p. 5). Modern project management has its roots (Lenfle and Loch, 2010, p. 33) in the atomic bomb Manhattan project (Morris, 1994, p. 18) and the ballistic missile projects, Atlas and Polaris (Kerzner, 2013). The term 'modern project management' is used by some authors and relates mostly to the project management approach started in the 1950s and continuing through today (Chen et al., 2011; Hill, 2004; Lenfle & Loch, 2010; Shenhar, 2001). Remarkably, from the very beginning of contemporary project management, projects were, and continue to be, under similar time pressure (Campos Silva et al., 2012; Chen et al., 2012; Griffin, 1993, 1997; Herroelen & Leus, 2005; Omorede et al., 2013; Radziszewska-Zielina, 2010; Zavadskas et al., 2010). Kach and colleagues (2012, p.
377) state: 'The speed of technological change and shortened product life cycles have made the time-to-market requirements for developing new products increasingly stringent (Kessler and Chakrabarti; Langerak et al., 2010). Heightened competitive forces have motivated many firms to move their new products through the design and manufacturing pipeline at a faster rate, encouraging greater focus on accelerated development and compressed time lines (Prasnikar & Skerlj, 2006; Wright et al., 1995).' The above statement for NPD (new product development) projects can be applied to most projects, whatever their nature. In our current, turbulent times, the pressure of time seems to increase to an ever greater extent. Project managers and their teams experience pressure similar to athletes': to beat the record and to go faster and faster. The first word, 'Citius', of the Olympic hendiatris seems to dominate projects managed by companies today. (Hendiatris is a figure of speech in which three words are used to emphasize one idea, for example, 'Wine, women, and song'. Hendiatris is often used to create mottos for organizations; for example, the motto at West Point is 'Duty, Honor, Country'. See K. Wilson and J. Wauson, The AMA Handbook of Business Writing: The Ultimate Guide to Style, Grammar, Usage, Punctuation, Construction, and Formatting, New York: American Management Association, 2010, p. 216.) The majority of project stakeholders (executives, sponsors, clients, managers) expect new projects to be completed faster than ever. Moreover, companies are always careful about spending money. Therefore, they also want to know where to invest their typically limited funds. This issue also arises when deciding where to invest to shorten the time of future projects. This paper gives new insight into the dilemma of where and when to invest limited company funds to achieve the best approach to time reduction of future projects.

(R)evolution in Managing Projects

Project management has evolved throughout history. This evolution has occurred mostly due to the tools and techniques applied to single projects and the gradual improvement of human resource management (Avots, 1969). This situation lasted until the 1990s, when the number of projects executed by companies increased. Moreover, companies' operating environments became turbulent (Keil & Mahring, 2010). Furthermore, the influence of project outcomes on the success of an entire company increased (Baron & Hannan, 2002). As a result, companies placed a greater emphasis on project management. To speed up time-to-market, companies experienced increased pressure to reduce the duration of projects. The existing methods for reducing the time of ongoing single projects (e.g., the application of modern scheduling techniques supported by computer technology) (Brucker et al., 1999; Iacovou & Dexter, 2004) are significant; however, they are no longer sufficient. A new, more efficient and revolutionary approach to managing companies' portfolios of projects was needed (Cooper et al., 1999). This need generated new approaches for how to better manage projects to face the new challenges appearing in multi-project (Hofman, 2014), dynamic environments (Spalek, 2013). Accordingly, new concepts in project management were introduced, focusing on the project environment and knowledge management (del Cano and de la Cruz, 2002; Ethiraj et al., 2005; Neverauskas and Stankevicius, 2008; Pemsel & Wiewiora, 2013). Among these concepts is the idea of assessing the maturity level of project management in the company (Cooke-Davies, 2007; Fraser et al., 2003; Spalek, 2014; Tan et al., 2011).
Assessing Maturity: the Purpose

To gain competitive advantage in executing projects, companies wanted to know how well they manage projects, taking into consideration the different aspects influencing the effective execution of projects (Kerzner, 2013; Rudzianskaite-Kvaraciejiene et al., 2010). The assessment outcomes should indicate the areas for potential improvement (Ahmad et al., 2013). This expectation can be fulfilled by project management maturity assessment. There are several existing models of maturity assessment; however, their main purpose remains the same: to identify weak and strong areas in an organization (Belt et al., 2009). By knowing its strengths and weaknesses, a company can undertake actions to improve activities related to the management of projects, resulting in an increased maturity level and improved project outcomes. The majority of existing models assess project management maturity on a scale from 1 to 5, where 1 represents the lowest and 5 the highest level (Khoshgoftar & Osman, 2009). The assessment is performed in different areas related to project management. Therefore, the assessment results in a matrix with maturity scores and testing areas (Spalek, 2011).

Increasing Maturity: the Investment in the Future

It is critical to understand that an increase of maturity level will mostly benefit future projects. Increasing maturity by one level also has some, however limited, impact on existing projects. Because the planning phase in each project is crucial (Wyrozebski & Spalek, 2014), the biggest possible improvements are associated with future projects. There is even a well-known saying: 'Show me how your project starts and I can tell you how it will end.' If the level of maturity is increased, e.g., from level 1 to 2, the outcomes of that action will be beneficial in new projects. For existing projects, it will usually be too late to have an impact. Therefore, the decision to assess and then increase the project management maturity level in a company is an investment in future projects. Investment 'in maturity' is time consuming and money intensive; therefore, to achieve the highest possible time reduction of future projects, it is crucial to decide where and when (in which sequence) a company's investment should be placed. Moreover, a company's investment funds are often very limited, making the issue even more important. Therefore, it is essential for companies today to have a road map guiding decision makers on the following question: 'In which areas and in what sequence should limited funds be most effectively invested in my company?'

The Research Method

The prediction of the future is a complex issue (Glenn & Gordon, 2003), often involving various methods and techniques, and thus has a wide record in publications (Booth, 2006; Galbraith & Merrill, 1996; Lacher et al., 1995; Landeta, 2006; Onkal et al., 2013). To predict the reduction of the future time of projects, questionnaire-based cross-impact analysis was used, following the ideas presented by Fabiana Scapolo and Ian Miles (2006, pp. 680-681). The method included the following steps:
- choosing the object of the studies;
- selecting the subject to study;
- choosing the experts to participate;
- gathering the data using questionnaires;
- data analysis and conclusions.
Three Industries: Construction, Information Technology and Machinery

The research presented in this article was part of a larger effort supported by the National Science Centre, focusing on world-wide studies of maturity in project management in the chosen industries. The overall research was designed in two major steps. The first step was to conduct quantitative empirical studies on project management maturity levels in three types of industries: machinery, construction and information technology. Data from 447 global companies, mostly medium- and large-sized ones, was collected. Ninety-eight per cent of them earned over €2,000,000 per year, and 99.5% of them employed over 49 people. The second step, which is discussed in this study, was designed to investigate the relationship between the increase in maturity level in project management and the predicted duration of forthcoming projects.

The Assessment

For the purpose of this study, a model was used that assesses the company's project management maturity in four areas (Spalek, 2011):
- methods (M) (Gary et al., 2011; Ji & Sedano, 2011);
- human resources (HR) (Levin, 2010; McDonough, 2000);
- project environment (E) (Elbanna, 2013; Killen & Kjaer, 2012);
- knowledge management (KM) (Basu, 2014; Gasik, 2011).
The description of the maturity assessment areas is shown in table 1. In the applied project management maturity model, the results of the assessment are reported from level 1 to 5: LEVEL 1: Initial; LEVEL 2: Standardized; LEVEL 3: Appliance; LEVEL 4: System management; LEVEL 5: Self-improvement. The experts were asked to express their opinion on how an increase of maturity level by one increment (from 1 to 2, from 2 to 3, etc.) influences the time reduction of future projects in their companies. The possible impact was measured on a scale from 1 to 5, as shown in table 2. The assessment was made separately in each testing area. In our study, the experts were practitioners from the investigated companies. The invitation to participate in this study was sent to 308 chosen experts as a follow-up to the major global study on project management maturity. The experts were chosen based on the demographic information obtained in the first step of the overall research. They were experienced managers who had at least five years of experience and possessed a deep knowledge of the projects executed in their companies. Moreover, the invitations were sent to experts from companies that reported a maturity level of at least 2 in each of the testing areas. The response rate was 63%. Such a high rate was obtained because individually approached, named persons were invited who had expressed, in the first step of the overall research, their willingness to participate in the second step of the studies. Non-response bias was tested by comparing the demographic data of participating and non-participating experts. No significant difference was found, indicating that the survey respondents represent the overall sample accurately.

Results and Discussion

Data from 39 information technology, 48 construction and 107 machinery industry global companies was collected. The reliability of the data was checked using Cronbach's alpha, resulting in a value of over 0.9 in each testing area. Data analysis using mean, median and mode values was performed. The equality of variances was tested using Levene's test and the equality of means using a t-test; satisfactory results were obtained, as the significance for Levene's test was greater than 0.05.
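For readers who want to reproduce checks of this kind, the following Python sketch computes Cronbach's alpha and runs Levene's test and a t-test using scipy; the respondent counts and Likert scores are hypothetical stand-ins, not the study's data.

    import numpy as np
    from scipy import stats

    def cronbach_alpha(items):
        """items: (respondents, items) array of Likert scores for one area."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(1)
    group_a = rng.integers(1, 6, size=(48, 4))   # e.g., construction experts
    group_b = rng.integers(1, 6, size=(39, 4))   # e.g., IT experts

    print("alpha:", cronbach_alpha(group_a))
    # Equality of variances (Levene), then equality of means (t-test):
    print(stats.levene(group_a.mean(axis=1), group_b.mean(axis=1)))
    print(stats.ttest_ind(group_a.mean(axis=1), group_b.mean(axis=1),
                          equal_var=True))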
Additionally, Spearman's rho correlation coefficients and factor analysis using a rotated component matrix were calculated for further rigorous data analysis. The data analysis was performed using the Statistical Package for the Social Sciences (IBM SPSS build V21.0.0). The results revealed differences in the predicted impact of a change in maturity level on the duration of future projects. This level of impact depended on the specific change in the designated area of assessment (e.g., a change from level 1 to 2 in the knowledge management area) and varied between industries.

The Impact on Future Projects: by Maturity Area and Type of Industry

The data analysis revealed that the dispersion of the acquired data is low; consequently, the mean value was chosen to explain the results. Using the mean value allows the results to be presented more clearly, without going into deep statistical detail, and allows the development of a fuller, more comprehensive picture. The mean values of impact on future projects are shown in table 3 (The Impact on Future Projects by Industry (CONS, IND, IT) and Change of Maturity Level in the Areas of Methods (M), Human Resources (HR), Project Environment (E) and Knowledge Management (KM)). The impact on future projects depends on the type of industry. The highest impact levels (3.96) were observed for the change from level 1 to 2 and from 2 to 3 in the methods (M) and human resources (HR) areas in the construction industry (CONS). The lowest impact (1.71) also appeared in the construction industry; however, it was observed in the areas of environment (E) and knowledge management (KM) for the change of maturity from level 4 to 5. Note that in each of the studied industries, the potential impact on future projects is highest if the company is at the initial (1) or standardized (2) level of maturity in project management and wants to increase it by going one level up. This observation is true for each assessment area: methods, human resources, environment and knowledge management. When a company already reports the appliance (3) or system management (4) maturity level and wants to increase it by one level, the impact on future projects' duration subsequently decreases. However, this reduction is greater in the construction and machinery industries than in information technology companies. Therefore, which project management maturity area the company should first invest funds in to reduce future projects' duration depends on the type of industry. Moreover, the investment sequence depends on the current level of maturity in the company in each testing area, meaning that for each industry one should consider a different roadmap, showing where, and in which sequence, the investment in project management maturity should be placed.

Different Industries, Different Approach?

This study on the influence of the increase of project management maturity on future projects' time reduction was focused on three types of industries:
- Construction, one of the "world present" sectors, whose projects are, to some extent, inherited from its long history. The projects associated with this sector are described and considered by numerous authors (Davies et al., 2009; Dominguez et al., 2009).
- Information Technology (IT), the companies of which are spread throughout the globe. IT projects are described and discussed in many different forms, such as agile (Thomke and Reinertsen, 1998) or Internet projects (Mahadevan, 2000).
- The machinery industry, which operates in the background of the above two industries as well as many others, is present in many countries and is a backbone industry for the other sectors. Therefore, its importance for the global economy is very high. However, a limited amount of research on this particular sector can be found in the literature. The examples are mostly qualitative case studies describing new product development (NPD) practices (Ahmad et al., 2013; Huang et al., 2004; Matsui et al., 2008), which are crucial for machinery industry companies.

The Investment Road Map: The Idea

Based on the factor analysis (as shown in table 4), a road map is proposed showing the investment path in the maturity areas which should result in the biggest pay-offs in reduction of time in future projects. By analysing the groups of factors in the table, the proposed roadmaps for the companies were built. A road map operates in the areas of methods and techniques (M), human resources (HR), project environment (E) and project knowledge management (KM), together with the associated change in level of maturity (e.g., from 2 to 3), which is noted, for example, as follows: M 2-3 (the change in maturity from level 2 to 3 in the area of methods and techniques). In general, note that at the beginning of the road map the investment has the biggest impact on the duration of future projects, and the impact subsequently decreases along the way.

The Investment Road Map: Construction Industry

The construction sector is one of the largest industries traditionally associated with using project management. The proposed road map revealed that the biggest pay-off is associated with an increase of maturity level from 1 to 2 and from 2 to 3 in the methods (M 1-2, M 2-3) and human resources areas (HR 1-2, HR 2-3). The road map can be described in the following four steps:

Step One. Assuming that the company is at an initial (1) maturity level in all areas, the first step in investment should be an increase of maturity in the methods (M) and human resources (HR) areas until they reach level 3.

Step Two. Then, in the second step, the investment should be placed in gradually reaching level 3 in the environment and knowledge management areas (E 1-2, E 2-3, KM 1-2, KM 2-3), and the maturity in methods and human resources should continue to increase until they reach level 4 (HR 3-4, M 3-4).

Step Three. The third step should include activities improving the maturity in the environment and knowledge management areas until they reach level 4 (E 3-4, KM 3-4), and in the methods and human resources areas until level 5 (M 4-5, HR 4-5).

Step Four. Finally, in the fourth step, the increase should be from level 4 to 5 in the environment and knowledge management areas (E 4-5, KM 4-5). Figure 1 shows the investment road map for the construction industry.

Comparing Steps in Construction Companies: the Reduction of Impact

Remarkably, if one assumes that in the first step the impact on the reduction of time of future projects is 100, then in the second step it equals 81; in the third, 62; and in the fourth, 43. This information is useful for investments in a specific company, as companies in the same industry differ from one another. After performing the first step, the total investment and the detailed impact on projects' duration can be measured. Then, knowing the predicted reduction of impact, one can weigh the possible outcomes of the second step against the investment effort and decide whether it is worth proceeding. The same approach can be applied before steps three and four.
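To make the roadmap's structure concrete, the Python sketch below encodes the construction-industry steps and the relative impact figures quoted above (100, 81, 62, 43); the data structure and helper function are illustrative, not part of the study's method.

    # "M 1-2" means raising the methods area from maturity level 1 to 2, etc.
    CONSTRUCTION_ROADMAP = [
        {"step": 1, "moves": ["M 1-2", "M 2-3", "HR 1-2", "HR 2-3"],
         "impact": 100},
        {"step": 2, "moves": ["E 1-2", "E 2-3", "KM 1-2", "KM 2-3",
                              "M 3-4", "HR 3-4"], "impact": 81},
        {"step": 3, "moves": ["E 3-4", "KM 3-4", "M 4-5", "HR 4-5"],
         "impact": 62},
        {"step": 4, "moves": ["E 4-5", "KM 4-5"], "impact": 43},
    ]

    def next_step(current_step):
        """Return the moves and expected relative impact of the next step,
        supporting the 'reassess before proceeding' decision described above."""
        for s in CONSTRUCTION_ROADMAP:
            if s["step"] == current_step + 1:
                return s
        return None  # roadmap completed

    print(next_step(1))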
Following this advice means, of course, that time gaps are needed between successive steps to reassess the situation.

The Investment Road Map: Information Technology

After construction, the second industry traditionally associated with project management is information technology (IT). Both industries report a long history of project management application. The investment road map for information technology consists of six steps.

Step One. In step one, the investment in increasing project management maturity levels should be performed to go from level 1 to 2 and from 2 to 3 in the methods and human resources areas (M 1-2, M 2-3, HR 1-2, HR 2-3).

Step Two. After the first investments, the company should consider investments to increase the level of maturity from 1 to 2 and then from 2 to 3 in the remaining two areas: environment and knowledge management (E 1-2, E 2-3, KM 1-2, KM 2-3).

Step Three. In the third step, the project management maturity level should be increased from 3 to 4 in the methods and human resources areas (M 3-4, HR 3-4).

Step Four. In this step, as in step three, one goes up from level 3 to 4, here in the environment and knowledge management areas (E 3-4, KM 3-4).

Step Five. The fifth step is designated for improvement in the methods and human resources areas from level 4 to 5 (M 4-5, HR 4-5).

Step Six. In the last step, the investment should be placed to increase the maturity level from 4 to 5 in the areas of environment and knowledge management (E 4-5, KM 4-5). Figure 2 shows the investment road map for information technology companies.

Comparing Steps in IT Companies: The Reduction of Impact

The highest potential impact on future projects in the IT industry occurs when investing in the areas indicated in the first step, and it then gradually decreases. Assuming that the impact in the first step equals 100, the impact on future projects' duration of the investment in step two is 99. Accordingly, in step three it is 81; in step four, 80; in step five, 62; and, finally, in step six, 61. Despite the relatively greater number of steps in comparison to the construction industry, the differences between some pairs of steps (1 and 2, 3 and 4, 5 and 6) are small. Thus, the decision to continue the investment after each step is of a different nature than in the construction industry, where the differences are bigger.

The Investment Road Map: Machinery Industry

The products of the machinery industry are the machines, tools, parts and devices used by the other industries. This industry, like construction and IT, also has a wide representation among companies world-wide. However, a limited amount of project management related research exists for this sector. The investment in project management maturity in the machinery industry should be considered in the following four steps:

Step One. In the first step, the investment in an increase of maturity levels from 1 to 2 and, successively, from 2 to 3 should be considered in the areas of methods and human resources (M 1-2, M 2-3, HR 1-2, HR 2-3).

Step Two. The next investments should be focused on further increases in the methods and human resources areas up to level 4 and, in parallel, on increases in the areas of environment and knowledge management from level 1 to 3 (M 3-4, HR 3-4, E 1-2, E 2-3, KM 1-2, KM 2-3).

Step Three. In step three, an increase of project management maturity up to level 4 is advised in the environment and knowledge management areas (E 3-4, KM 3-4).
Additionally, investment in the methods and human resources areas is suggested until they reach level 5 (M 4-5, HR 4-5).

Step Four. This final step is dedicated to the environment and knowledge management areas, to reach level 5 of project management maturity (E 4-5, KM 4-5). Figure 3 shows the investment road map for the machinery industry.

Comparing Steps in Machinery Industry Companies: the Reduction of Impact

If one assumes that the possible impact on future projects' time reduction in the machinery industry equals 100 as a result of performing step 1, the execution of step 2 will result in an impact level of 81. For steps 3 and 4, there is a further reduction of impact to 64 and 44, respectively. After performing each step and comparing the effort undertaken to the reported project outcomes, this information can be used to decide whether to continue with further investments in increasing maturity levels.

The Traditional vs. Agile Approach to Managing Projects

The outcomes for the construction and machinery industries are alike, whereas the results of the study in information technology companies differ. This could be the result of similarities between the types of projects undertaken by both the construction and machinery industries and the different projects undertaken by IT companies. The construction and machinery industry companies execute projects in a more traditional approach, whereas IT projects have a greater tendency to migrate toward agility in project management. The concepts of the traditional vs. agile approach in project management are widely discussed (Fernandez & Fernandez, 2008; Shenhar & Dvir, 1996). That contrast would explain the major differences between the proposed investment roadmaps that are associated with either a more traditional or an agile approach to project management.

Construction and Machinery: the Traditional Approach

In the traditional approach, the general advice is to invest gradually in all four areas of maturity, with some advance investment in project management standards, tools and techniques and in the competencies of people involved in projects. The investment in company structures supporting project execution or in project knowledge management should be considered in direct relation to progress in the methods and human resources areas.

Information Technology: The Agile Approach

Agility, as defined by its founders, is less focused on strict methods and tools and more on knowledge flow, creating an appropriate working environment and team building processes (Dingsoyr et al., 2012), which explains why the investment in that type of project should be performed gradually and in parallel in each of the following areas: methods, human resources, environment and knowledge management. Although this study is limited to three industries, the suggested roadmaps can be used by other industries as well, especially by a wide range of manufacturing companies (NAICS, 2012). A company can choose a road map by assessing the similarity of the projects being executed to construction and machinery or IT projects. However, for some sectors, such as healthcare (Adler et al., 2003) or education (Palacios-Marques et al., 2013), more studies are needed to develop their specific road maps.

Continuous Improvement? Not Necessary!

The majority of academics and practitioners think that continuous improvement is vital for a company's operations and its survival in the turbulent market.
This approach is also noticeable in the concept of increasing project management maturity levels as a continuous process, which should result in reaching the highest maturity level in all assessment areas. However, based on existing studies of maturity (Becker et al., 2009; Grant & Pennypacker, 2006; Mullaly & Thomas, 2010; Pasian, 2011; Rohrbeck, 2010), the vast majority of companies today report, on average, the second or third level of maturity across different industries and assessment areas. Therefore, most effort is put into discussing how the company should proceed with maturity improvement to achieve 'the top'. However, some doubts exist as to whether 'the top' is even obtainable, as there is a risk that assessment criteria will change over time as project management develops further. The first signs of such an approach are visible in the model proposed by PMI (2008), in which no levels of maturity are defined. Instead, maturity is measured using a best practices list, which is continuously expanded by the PMI. This approach results in a 'never ending' continuous effort to increase project management maturity in the company. Continuous improvement, however appropriate in theory, cannot always be the best solution for a company. As our study revealed, the impact of the increase of maturity on the time reduction of future projects decreases over subsequent levels of maturity. The biggest impact occurs at the very beginning, when a company is making its first steps on the road map. Then, the impact decreases by more than 50% in the traditional approach to project management (represented by the construction and machinery industries) and by approximately 40% in the agile approach (represented by information technology companies). This brings into question whether continuous improvement in maturity is effective for the company in terms of invested funds and achieved outcomes. Therefore, there should be time breaks between the steps on the investment road map to measure, ex post, the real impact of the increase in project management maturity on the recorded projects' time reduction. (Ex post translates from Latin as 'after the fact'. The use of historical returns has traditionally been the most common way to predict the probability of incurring a loss on any given day. Ex post is the opposite of ex ante, which means 'before the event'. Source: http://www.investopedia.com/, retrieved November 2013.) Based on this study, one knows the size of the predicted decrease of impact in the next step for traditional and agile projects. Therefore, one can compare the possible benefits with the estimated effort needed to continue with the next step on the road map. As a result, one may conclude that investment in further progress in project management maturity does not pay off and that the company's limited investment funds can be spent on other, more promising and vital company activities.

Limitations and Future Directions

The research is limited to three types of industries, represented by 194 organisations, with 107 belonging to machinery, 48 to construction and 39 to IT companies. In quantitative analysis, the size of this sample, especially of the latter two industries, can be assumed to be rather small. Therefore, definitive conclusions and generalisations supporting the outcomes of our study should await a larger sample, especially of the IT and construction industries. The method of predicting the duration of forthcoming projects by investigating experts' opinions is constrained by the quality of the respondents' judgment, as is the case in all surveys of experts.
Generally, predicting future outcomes is an extremely difficult issue (Glenn & Gordon, 2003). However, it was a conscious decision to use this method, despite its limitations, to advance the current state of knowledge. Some of the limitations of the study point to directions for future research. The main direction would be to investigate, with the same method, other industries for which projects are a vital part of operations. Therefore, follow-up research could be dedicated to the automotive, aerospace or mining sectors. Moreover, a study on the relationship between different investment types and an increase in maturity level would be advised. It would also be desirable for a new study to investigate the direct influence of different types of investments on projects' outcomes. It may also be worth researching whether project management performance is geographically sensitive, as other recent studies in this area suggest. The results of this study advance the current state of knowledge in the project management area. However, the problem of linking investments in project management to outcomes for an entire company is complex. Hence, the considerations presented in the paper provide a better understanding of this complexity in the area of project duration and could be a trigger point for further studies of other types of project outcomes.

Acknowledgments

This work was supported by a National Science Centre grant.
7,299.6
2014-12-15T00:00:00.000
[ "Business", "Engineering" ]
PRC2 Is Dispensable in Vivo for β-Catenin-Mediated Repression of Chondrogenesis in the Mouse Embryonic Cranial Mesenchyme

A hallmark of craniofacial development is the differentiation of multiple cell lineages in close proximity to one another. The mouse skull bones and overlying dermis are derived from the cranial mesenchyme (CM). Cell fate selection of the embryonic cranial bone and dermis in the CM requires Wnt/β-catenin signaling, and loss of β-catenin leads to an ectopic chondrogenic cell fate switch. The mechanism by which Wnt/β-catenin activity suppresses the cartilage fate is unclear. Upon conditional deletion of β-catenin in the CM, several key determinants of the cartilage differentiation program, including Sox9, become differentially expressed. Many of these differentially expressed genes are known targets of the Polycomb Repressive Complex 2 (PRC2). Thus, we hypothesized that PRC2 is required for Wnt/β-catenin-mediated repression of chondrogenesis in the embryonic CM. We find that β-catenin can physically interact with PRC2 components in the CM in vivo. However, upon genetic deletion of Enhancer of Zeste homolog 2 (EZH2), the catalytic component of PRC2, chondrogenesis remains repressed and the bone and dermis cell fate is preserved in the CM. Furthermore, loss of β-catenin does not alter either the H3K27me3 enrichment levels genome-wide or on cartilage differentiation determinants, including Sox9. Our results indicate that EZH2 is not required to repress chondrogenesis in the CM downstream of Wnt/β-catenin signaling.

[…] Proteins, do not result in ectopic chondrogenesis (O'Rourke and Tam 2002; Fan et al. 2016). In craniofacial development, Wnt/β-catenin signaling seems to have a unique role in the repression of chondrogenesis in the CM. β-catenin is a central transducer of the canonical Wnt signaling pathway, where it acts as a transcriptional coactivator of context-specific target genes to regulate cell fate selection in many cell types during development (Bhanot et al. 1996; Korinek et al. 1998; Liu et al. 1999; Haegele et al. 2003; Verani et al. 2007). While β-catenin is typically known as a transcriptional activator, a stabilized or posttranslationally methylated form of β-catenin has been shown to function as a transcriptional repressor in vitro (Delmas et al. 2007; Hoffmeyer et al. 2017). However, the mechanism by which Wnt/β-catenin signaling in the CM prevents chondrogenesis, while ensuring proper cranial bone and dermal fibroblast cell fate selection in vivo, is unknown. Recent in vitro studies have suggested epigenetic histone modifications, by PRC2 specifically, as a possible mechanism by which Wnt/β-catenin signaling represses chondrogenesis. PRC2 is a multi-protein complex that is required for the repressive histone modification H3K27me3 (Jiang et al. 2002; Lund and Van Lohuizen 2004; Peng et al. 2009). In multiple cell types and organisms, numerous connections between the Wnt/β-catenin pathway and PRC2 have been demonstrated. First, like Wnt/β-catenin signaling, PRC2 is required for the regulation of cell fate selection (Lee et al. 2006; Sparmann and van Lohuizen 2006; Asp et al. 2011; Margueron and Reinberg 2011). Second, Sox9 and other chondrocyte differentiation determinants are known targets of PRC2 by H3K27me3 enrichment in multiple cell types, ranging from mouse embryonic stem cells (ESCs) to chick limb bud micromass cultures (Peng et al. 2009; Kumar and Lassar 2014; Tien et al. 2015).
Third, PRC2 regulates components of the Wnt/β-catenin pathway and vice versa (Wang et al. 2010; Zemke et al. 2015; Mirzamohammadi et al. 2016; Yi et al. 2016). Fourth, β-catenin can physically interact with PRC2 components (Shi et al. 2007; Li et al. 2009; Jung et al. 2013; Hoffmeyer et al. 2017). Fifth, β-catenin and PRC2 can cooperate with one another to enhance either Wnt signaling or PRC2 activity (Shi et al. 2007; Jung et al. 2013; Kumar and Lassar 2014; Hoffmeyer et al. 2017). It is important to note that these studies were all performed in cell culture models with one or more overexpressed proteins. Follow-up studies in vivo are therefore required. Understanding how Wnt/β-catenin signaling intersects with PRC2 to direct cell fate selection in vivo will provide new insights into the genetic mechanisms governing cranial bone and dermal development. Here, we test the hypothesis that repression of chondrogenesis in the CM by Wnt/β-catenin signaling requires PRC2-mediated epigenetic repression. In a conditional β-catenin loss-of-function model, among the genes dysregulated in both mutant CM and mutant dorsal mesenchyme, we found an overrepresentation of known targets of the PRC2 pathway. Conditional deletion of Ezh2 in the CM does not phenocopy the ectopic cartilage in the β-catenin mutants, nor do H3K27me3 levels change upon complete loss of β-catenin in the CM. Our results suggest that the repression of chondrogenesis in the CM is not reliant on PRC2, indicating that repressive mechanisms besides PRC2 are likely involved. We propose that the 'off' state of chondrogenic genes is not actively maintained by PRC2 and that β-catenin represses chondrogenesis by regulating an unidentified inhibitory pathway.

Mice and genotyping

The following strains were used in this study: Engrailed1Cre (En1Cre) (Kimmel et al. 2000), Rosa26 Reporter (R26R) (Soriano 1999), β-catenin null (β-catenin Δ) (Brault et al. 2001), conditional β-catenin floxed (β-catenin fl) (Haegel et al. 1995), Twist2Cre (Dermo1Cre) (Yu et al. 2003), and conditional Ezh2 floxed (Ezh2 fl) (Shen et al. 2008). Mice were maintained on mixed genetic backgrounds. For timed matings, En1Cre;β-catenin +/Δ males were crossed with R26R/R26R;β-catenin fl/fl females, and Dermo1Cre;Ezh2 fl/+ males were crossed with Ezh2 fl/fl females. Vaginal plugs were checked every morning and the morning of a plug was assigned as embryonic day (E) 0.5. For each experiment, a minimum of three mutants with litter-matched controls were studied unless otherwise noted. Animals of both sexes were randomly assigned to all the studies. The Case Western Reserve University (CWRU) Institutional Animal Care and Use Committee approved all animal procedures in accordance with AVMA guidelines (Protocol 2013-0156, approved November 21, 2014).

CM isolation

At E13.5, the CM was isolated by manual dissection. An incision was made around the circumference of the CM and the tissue covering the brain was manually dissociated. The CM samples were a mixed cell population comprised of the CNC- and PM-derived CM, which is En1Cre-positive, and also contained the overlying ectoderm, which is negative for En1Cre (CM+ectoderm). Each embryo yielded ~500,000 CM cells for the controls and 250,000-500,000 CM cells for the mutants. Individual embryos were kept separate and considered as single biological replicates.
The wild-type samples isolated for co-immunoprecipitation were dissociated by incubating the tissue in 0.25% Trypsin-EDTA (Thermo Fisher Scientific 25200056) at 37° for 5-7 min, and the CM was selectively enriched from the ectoderm using an Invitrogen FlowComp Flexi Kit (Invitrogen 11060D) and a PDGFRα antibody (5-10 µg/2.5 million cells) (R&D Systems AF1062) (Goodnough et al. 2016), according to the manufacturer's guidelines.

Immunofluorescence

Heads of E13.5 embryos were fixed in 4% paraformaldehyde for 30 min at 4° and cryopreserved as previously described (Atit et al. 2006). Rabbit polyclonal antibodies against H3K27me3 (1:1000; Cell Signaling 9733), LEF1 (1:100; Cell Signaling 2286), SP7/OSX (1:1000; Abcam ab94744), and SOX9 (1:1000; Millipore ab5535) were used for indirect immunofluorescence assays. Appropriate species-specific AlexaFluor 594 secondary antibodies were used (1:500; Invitrogen). Images were captured using an Olympus BX60 microscope and an Olympus DP70 digital camera using DC controller software. Confocal images were captured on a Leica TCS SP8 (Leica Biosystems) using Application Suite X software (Leica Biosystems). Images were processed using ImageJ/Fiji (Schindelin et al. 2012; Schneider et al. 2012) and Adobe Photoshop software. Images were prepared for cell counting in ImageJ/Fiji by subtracting the background and thresholding the signal across all replicates. The percentage of cells that were H3K27me3-positive relative to DAPI was determined using the 'analyze particles' feature in ImageJ/Fiji. Counting was performed on the supraorbital CM directly above the eye.

RNA sequencing

E13.5 CM+ectoderm was collected by manual dissection (described above). Total RNA was isolated from individual embryos as previously described (Hamburg-Shields et al. 2015). Libraries were prepared in the CWRU Genomics sequencing core using the Illumina TruSeq Stranded Total RNA kit with Ribo-Zero Gold. Paired-end sequencing was performed on an Illumina HiSeq 2500 v2 Rapid Run flow cell. The resulting 100 bp reads were aligned to the mouse mm9 assembly using TopHat (Trapnell et al. 2009; Kim and Salzberg 2011; Langmead and Salzberg 2012; D. Kim et al. 2013). Genomic assembly was completed using Cufflinks v1.3 (Trapnell et al. 2010; Roberts et al. 2011a). mm9_refFlat was used to annotate the data, with a maximum intron length of 20,000 bp and genomic bias correction. Cufflinks FPKMs < 0.3 were floored to 0.3. Differential gene expression was determined with CuffDiff using the default settings plus genomic bias correction. Gene ontology analysis examining all differentially expressed genes was performed using the Genomic Regions Enrichment of Annotations Tool (GREAT) by associating reads to the single nearest gene located within 5 kb (McLean et al. 2010).

ChIP-seq

E13.5 CM+ectoderm was manually dissected from three En1Cre;β-catenin fl/+ and four En1Cre;β-catenin fl/Δ embryos and pooled, and H3K27me3 immunoprecipitation and sequencing were performed by Active Motif (www.activemotif.com) (deposited in GEO, GSE96872). Next, 14 µg chromatin was immunoprecipitated with 4 µg rabbit anti-H3K27me3 (Millipore #07-449). Sequencing was performed on an Illumina NextSeq 500, producing 75-nucleotide, single-end reads. Drosophila DNA was 'spiked in'. The ratio of aligned Drosophila reads in the mutant vs. control samples (calculated to be 1.3) was used to normalize the number of reads in the mouse samples by downsampling the larger sample (the mutant, in this case).
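The spike-in normalization step reduces to a random subsampling of the larger library by the inverse of the spike-in ratio. A minimal Python sketch of that logic follows; the read identifiers and counts are hypothetical, and this is an illustration of the principle, not the pipeline Active Motif used.

    import random

    def downsample_reads(reads, keep_fraction, seed=0):
        """Randomly subsample aligned reads to normalize by spike-in ratio."""
        rng = random.Random(seed)
        return [r for r in reads if rng.random() < keep_fraction]

    # The Drosophila read ratio (mutant / control) was ~1.3, so the mutant
    # mouse reads are downsampled by 1/1.3, making both samples reflect
    # equal amounts of spike-in chromatin.
    ratio = 1.3
    mutant_mouse_reads = [f"read_{i}" for i in range(130_000)]  # hypothetical
    normalized = downsample_reads(mutant_mouse_reads, keep_fraction=1.0 / ratio)
    print(len(normalized))  # ~100,000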
Sequences were aligned and analyzed twice independently. The analysis was first performed using a custom pipeline consisting of Bowtie2 for alignment to the mouse mm9 genome and MACS 1.4 at default settings for peak calling (Zhang et al. 2008; Langmead and Salzberg 2012). To generate the windowed heat map from this analysis, the genome was divided into 40 equally sized windows spanning 5 kb up- and downstream of each H3K27me3 peak, either genome-wide or for peaks located within 1 kb of known promoters. The median peak signal in each window was then converted to a z-score and mapped using Java TreeView (Saldanha 2004). The analysis was performed a second time using the Strand NGS 2.8 pipeline (Strand NGS Manual, Version 2.8, Build 230243; Strand Life Sciences, Bangalore, India), aligning to the mm10 genome. Peaks were called using MACS 1.4 at default settings. Association of peaks with specific genes was performed using PAVIS (Huang et al. 2013). Specific H3K27me3 peaks were visualized using the Integrative Genomics Viewer (IGV) (Robinson et al. 2011; Thorvaldsdóttir et al. 2013). ngs.plot was used to generate the average fold enrichment of H3K27me3 across gene bodies (Shen et al. 2014).

Cell culture
The CM+ectoderm was manually isolated and dissociated by incubating the tissue in 0.25% Trypsin-EDTA (Thermo Fisher Scientific 25200056) at 37° for 5-7 min, and then plated in DMEM with 10% fetal bovine serum. Fibroblasts were allowed to adhere to the plate for 1-2 hr, after which the media was removed and fresh media was added. Chemical inhibition was performed at no later than passage 3. Next, 10% Wnt3a-conditioned media and a chemical inhibitor, UNC1999 (Sigma SML0778) or GSK126 (Cayman Chemical, CAS 1346574-57-9), were added simultaneously. The cells were incubated for the indicated amount of time. Following incubation, the cells were trypsinized and processed for protein or mRNA analysis.

Statistics
Graphs and statistical analyses were generated using Prism 6 (GraphPad Software). Data are presented as mean ± SEM in all graphs unless otherwise stated. All pairwise sample comparisons were performed using a Mann-Whitney test. The P-values for statistical tests in all figures are represented as *P < 0.05 and **P < 0.01.

Data availability
Strains are publicly available at the Jackson Laboratory. Sequencing data are available at GEO with the accession number GSE96872.

RESULTS
Genes dysregulated upon loss of β-catenin are enriched for the PRC2-associated H3K27me3 histone mark
In an effort to determine a functional link between β-catenin and PRC2 in vivo, we conditionally deleted β-catenin in the CM using Engrailed1Cre (En1Cre), manually dissected the CM along with the ectoderm (CM+ectoderm), and collected all the CNC- and PM-derived mesenchyme surrounding the brain (Figure 1A) (Kimmel et al. 2000; Tran et al. 2010). En1Cre is expressed in both the CNC- and PM-derived CM. In order to analyze in vivo tissues with minimal manipulation, the ectoderm was isolated with the CM. We then profiled the whole transcriptome of three litter-matched E13.5 En1Cre/+;R26R/+;β-catenin fl/+ controls and four En1Cre/+;R26R/+;β-catenin fl/Δ mutants using the RNA-seq approach (GSE96872). The analysis of the data revealed 521 genes that were differentially expressed by at least 1.4-fold between the two experimental groups (P < 0.05). Of the 521 differentially expressed genes, 322 were downregulated and 199 were upregulated in the mutants relative to the controls.
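The flooring and fold-change criteria above restate cleanly as a small filter. A minimal sketch, assuming per-gene FPKM values and a CuffDiff-style P-value are already in hand; the function name is illustrative, not the authors' pipeline.

def call_differential(control_fpkm, mutant_fpkm, p_value,
                      floor=0.3, min_fold=1.4, alpha=0.05):
    # Apply the filters from the text: FPKMs below 0.3 are floored to
    # 0.3 before computing fold change, and a gene is called
    # differentially expressed at >= 1.4-fold with P < 0.05.
    c = max(control_fpkm, floor)
    m = max(mutant_fpkm, floor)
    fold = max(c, m) / min(c, m)
    direction = "up" if m > c else "down"
    return fold >= min_fold and p_value < alpha, direction, fold

# A gene at 2.0 FPKM in controls and 0.9 in mutants with P = 0.01
# gives fold ~2.2 and is called "down" in the mutants.
print(call_differential(2.0, 0.9, 0.01))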
Validating the approach, changes in expression of known Wnt/β-catenin targets were observed despite the presence of ectodermal cells, in which canonical Wnt signaling is known to be active (Supplemental Material, Figure S1A in File S1 [all Supplemental legends are in File S2]) (Budnick et al. 2016). To ascertain the function of all 521 differentially expressed genes, we performed a gene ontology analysis using GREAT, which queries multiple ontology databases (McLean et al. 2010). As a comparison, we also analyzed RNA-seq data from E13.5 En1Cre/+;R26R/+;β-catenin fl/Δ dorsal dermal mesenchyme (GSE75944) (Budnick et al. 2016). The top five ontologies of the differentially expressed genes in both the mutant CM+ectoderm and the mutant dorsal dermal fibroblasts included the Wnt signaling pathway, along with Cadherin signaling, Integrin signaling, and ECM-receptor interactions (Figure S1, B and C in File S1) (Thomas et al. 2003). In the Molecular Signatures Database (MSigDB) Perturbations ontology, we also found that both data sets were highly enriched for genes regulated by PRC2 (Figure 1B and Figure S2A in File S1) (Subramanian et al. 2005). Interestingly, enrichment for targets of PRC2 was found among both the up- and downregulated genes. However, this enrichment is unique to the genes differentially expressed in our β-catenin mutants: GREAT analysis of all genes expressed in the CM+ectoderm (FPKM ≥ 1) did not result in enrichment for targets of PRC2 in the MSigDB Perturbations ontology (Figure S2B in File S1). Thus, the differential expression of PRC2 targets in the β-catenin mutant CM+ectoderm and dorsal mesenchyme reveals a potential functional link between the two pathways.

Chondrocyte fate genes are enriched for H3K27me3 in the embryonic CM
To establish a role for PRC2 in the repression of chondrogenesis in the CM in vivo, we queried for H3K27me3 enrichment at the loci of individual chondrocyte marker genes. We manually dissected the CM+ectoderm of E13.5 En1Cre/+;R26R/+;β-catenin fl/+ controls and performed ChIP using an antibody against H3K27me3 followed by massively parallel DNA sequencing (ChIP-seq). This assay allowed us to unbiasedly and comprehensively map the genome-wide distribution of the H3K27me3 modification (Active Motif Technology) (GSE96872). In the CM+ectoderm of E13.5 controls, the transcriptional start sites of multiple cartilage markers, such as Sox9, Col2a1, Col9a2, and Col11a2 (Figure 1C), were enriched for H3K27me3, indicating that they are targets of PRC2 in the CM.

Endogenous β-catenin and EZH2 may physically interact in the CM
Given the emerging connections made between the Wnt/β-catenin pathway and PRC2 in various systems in vitro (summarized in Table S1 in File S1), we set out to test the hypothesis that β-catenin and PRC2 components physically interact at native protein levels in mouse CM extracts. We manually dissected the CM, made a cell suspension, and used a PDGFRα antibody bound to magnetic beads to enrich for the CM population (Goodnough et al. 2016). We found comparable levels of mRNA for the mesenchyme progenitor markers Pdgfra and Twist2, and diminished levels of the ectoderm marker Keratin 14 (K14), in the purified sample, confirming enrichment for CM (Figure 1D). We then prepared cell extracts from sorted CM and used them in a co-immunoprecipitation assay for β-catenin and EZH2. EZH2 is the methyltransferase component of PRC2 and is required for the H3K27me3 modification (Margueron and Reinberg 2011).
In line with our hypothesis, β-catenin successfully co-immunoprecipitated with the EZH2 antibody. In addition, we also observed reciprocal co-immunoprecipitation of EZH2 and another major PRC2 component, SUZ12, by the β-catenin antibody (Figure 1E). These results suggest that PRC2 components and β-catenin may physically interact at wild-type expression levels in the CM. Thus, these data provide a potential molecular link between Wnt/β-catenin signaling and PRC2 in the mouse embryo.

β-catenin is not required for PRC2 component expression or bulk H3K27me3 levels
To determine if β-catenin is required for the formation of the PRC2 complex itself, we first examined the expression of the main PRC2 components: Ezh2, Suz12, and Eed. Based on FPKM values from our RNA-seq data set, we found no significant changes in the PRC2 component mRNA levels (Figure 2A). To validate this result, we manually dissected E13.5 En1Cre/+;R26R/+;β-catenin fl/+ control and En1Cre/+;R26R/+;β-catenin fl/Δ mutant CM+ectoderm (Figure 1A), and determined the mRNA levels of Ezh2, Suz12, and Eed by RT-qPCR. Consistent with the RNA-seq data set, the relative mRNA levels of the individual PRC2 components were comparable between control and mutant (Figure 2B). In comparison, the expected changes in mRNA levels were observed in the known β-catenin-responsive genes Axin2 and Sox9 (Figure 2, A and B) (Jho et al. 2002; Goodnough et al. 2012). Evaluation of the total H3K27me3 and EZH2 protein levels using western blot assays also revealed comparable protein levels between control and β-catenin mutant CM+ectoderm (Figure 2, C and D). To obtain spatial information and account for levels in the ectoderm between our controls and mutants, we performed indirect immunofluorescence for H3K27me3 on coronal sections of E13.5 En1Cre/+;R26R/+;β-catenin fl/+ controls and β-catenin mutants near the frontal bone primordia.
[Figure 2 legend excerpt: (G) Indirect immunofluorescence of SOX9, H3K27me3, and DAPI in the supraorbital mesenchyme (n = 2 controls; 3 mutants). Images were taken near the frontal bone primordia (plane I). Dashed lines indicate the brain and ectoderm boundaries. (*) indicates region of ectopic cartilage. Bar, 200 μm. CM, cranial mesenchyme; DAPI, 4',6-diamidino-2-phenylindole; FPKM, fragments per kilobase of transcript per million mapped reads; n.s., not significant; RT-qPCR, reverse transcriptase-quantitative polymerase chain reaction.]
While we consider the CM to include the entire CM surrounding the brain (Figure 1A), we focused our indirect immunofluorescence analysis on the region directly above the eye (supraorbital CM) (Figure 2E), owing to easily identified histological landmarks such as the eye and brain ventricles. Considering that knockout of β-catenin results in ectopic chondrogenesis throughout the CM, we expect the supraorbital CM to be representative of the entire CM. In the supraorbital CM, both the controls and the β-catenin mutants are positive for H3K27me3, demonstrating that PRC2 is still active without β-catenin. Furthermore, H3K27me3 can still be found in the expanded SOX9 domain in the β-catenin mutants. We concluded that β-catenin is not required cell-autonomously in the CM to regulate the relative mRNA levels of major PRC2 components, the EZH2 protein levels, or bulk H3K27me3 levels. However, these results leave open the possibility that it may be required to recruit PRC2 to specific loci on the genome.
Loss of Ezh2 does not lead to ectopic cell fate selection or chondrogenesis in the CM
We next determined if PRC2 is required for the repression of chondrogenesis in the CM in vivo. In order to remove PRC2 function in the CM, we conditionally deleted Ezh2 using a floxed allele (Shen et al. 2008). Surprisingly, conditional deletion of Ezh2 at E10.5 using En1Cre did not lead to the expected loss of H3K27me3 in the CM by indirect immunofluorescence (Figure S3 in File S1). We then conditionally deleted Ezh2 in the CM using Dermo1Cre, which is expressed in the CM by E10.0 (Yu et al. 2003; Goodnough et al. 2012). Loss of Ezh2 was sufficient to lead to an upregulation of Cdkn2a, a known target of PRC2 (Figure 3A) (Shen et al. 2008; Lui et al. 2016). We also found depletion of H3K27me3 in the supraorbital CM (Figure 3C) by indirect immunofluorescence in the Dermo1Cre;Ezh2 fl/fl mutants relative to Dermo1Cre;Ezh2 fl/+ controls (Figure 3, B and D). The H3K27me3 signal was maintained in both the ectoderm and the brain, where Dermo1Cre is not expressed. After confirming the absence of PRC2 activity, we then examined the protein levels of cell fate markers for bone, dermis, and cartilage progenitors by indirect immunofluorescence. Conditional deletion of Ezh2 in the supraorbital CM did not lead to changes in the location and size of the dermal domain, as indicated by LEF1, or the bone domain, as indicated by Osterix (OSX) (Figure 3, E and F). Consistently, we did not observe ectopic expression beyond the cartilage base of the key cartilage differentiation determinant SOX9 (Figure 2G). Based on these data, Ezh2 has little effect on the patterning of the tissue domains and minimal effect on the protein expression levels detectable by immunofluorescence. To further test the H3K27me3-dependent role of PRC2 in the repression of chondrogenesis, we chemically inhibited EZH2 function in primary E13.5 CM+ectoderm cells in vitro (Figure 4, A and E). Incubation with the small-molecule methyltransferase inhibitor GSK126, which is specific for EZH2, or with UNC1999, which inhibits both EZH2 and EZH1, led to a considerable reduction in bulk H3K27me3 protein levels (Figure 4, B and F). Upon treatment with GSK126 or UNC1999, the Sox9 and Col2a1 mRNA levels were not significantly increased (Figure 4, C, D, and G). Taken together, these data indicate that EZH2 and H3K27me3 are dispensable for regulating the mRNA levels of chondrocyte differentiation markers in the CM+ectoderm.
[Figure 4 legend: Chemical inhibition of EZH2 methyltransferase does not lead to an upregulation of early chondrocyte markers in CM+ectoderm. (A and E) Schematic demonstrating the isolation of primary CM+ectoderm fibroblasts. GSK126 is specific to EZH2, and UNC1999 to both EZH2 and EZH1; both inhibit EZH2's methyltransferase activity. (B and F) Western blots demonstrating reduction/loss of H3K27me3 levels following incubation with GSK126 (IC50 = 75-100 nM) or UNC1999 (IC50 < 10 nM for EZH2 and 45 nM for EZH1). (C, D, and G) qPCR analysis of the expression of Sox9 and Col2a1 following inhibition of EZH2. GSK126: n = 5 mutants and 6 controls for Sox9, and n = 3 mutants and controls for Col2a1. UNC1999: n = 7 mutants and 9 controls. CM, cranial mesenchyme; n.s., not significant; qPCR, quantitative polymerase chain reaction. *P < 0.05; **P < 0.01.]
Loss of β-catenin does not significantly alter H3K27me3 enrichment genome-wide
Next, we tested to what extent β-catenin is required for the recruitment of PRC2 to the genome in a site-specific manner. We performed ChIP-seq assays, as described in Figure 1, to map the genome-wide distribution of H3K27me3 in the CM in vivo in En1Cre/+;R26R/+;β-catenin fl/+ controls and En1Cre/+;R26R/+;β-catenin fl/Δ mutants (GSE96872). Sequencing of the CM+ectoderm revealed, by two independent analyses, 14,337 peaks in the control and 10,752 peaks in the mutant, i.e., 25% fewer peaks in the mutant. Surprisingly, genome-wide comparisons between individual mutant and control H3K27me3 peaks revealed only modest differences in fold enrichment between the two samples (Figure 5A). The differences in peak numbers between β-catenin controls and mutants were associated with changes in smaller H3K27me3 peaks (Figure 5, A', A''', B', and B'''). Furthermore, any gains and losses in the strength of H3K27me3 peaks were not associated with gene expression changes (Figure 5B). In addition, across all genes bound by H3K27me3, the signal intensity of the peaks was comparable between the mutant and the control over the gene body (Figure 5C). Next, we examined changes in H3K27me3 peak signal across the gene bodies of the differentially expressed genes identified in our RNA-seq data. In both the up- and downregulated genes, the H3K27me3 enrichment was comparable between β-catenin controls and mutants (Figure 5D). From our ChIP-seq data set, we observed variation in H3K27me3 enrichment throughout the genome, ranging from large peaks blanketing an entire gene body to smaller peaks located just on the promoter. To further investigate the connection between H3K27me3 peak enrichment strength and gene expression, we divided the H3K27me3 peaks into three categories based on the level of enrichment: strong (>20-fold enrichment), medium (10-20-fold enrichment), and weak (≤5-fold enrichment) (Figure S4 in File S1). Representative enrichment for strong, medium, and weak peaks can be found on the HoxA cluster, Sept9, and Lmtk3, respectively (Figure S4A in File S1). Analysis of the genomic location of each class of peak using GREAT revealed that the large majority of strong and medium peaks were within 5 kb of the transcription start site (TSS), while the weak peaks had a more even distribution spanning out to 500 kb from the TSS (Figure S4B in File S1). Between controls and β-catenin mutants, the numbers of strong and medium peaks were comparable, with most of the variation found in the weak peaks (Figure S4C in File S1). To further characterize each class of peak, we performed gene ontology analysis on the control H3K27me3 ChIP-seq data set. Gene ontology analysis revealed distinct functions for the strong peaks, such as DNA binding/transcription regulation and conserved homeobox sites, while the medium and weak peaks shared functions such as Wnt signaling and ion transport (Figure S5 in File S1). Furthermore, comparisons between each class of peak found near a TSS (±5 kb) and the genes identified in our RNA-seq data set revealed that 70% of strong peaks, 53% of medium peaks, and 47% of weak peaks were associated with transcriptional repression (<1 FPKM) (Figure S6A in File S1). These results indicate that the level of H3K27me3 enrichment may be predictive of its transcriptional repressive function.
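The three-way binning used above maps directly onto a small classifier. A sketch under the stated thresholds; note that peaks between 5- and 10-fold enrichment fall outside the text's named bins, so they are labelled separately here, and the function name is our own.

def classify_h3k27me3_peak(fold_enrichment, peak_start, peak_end, tss):
    # Bin a peak as in the text: strong (>20-fold), medium (10-20-fold),
    # weak (<=5-fold); also report whether it lies within 5 kb of the TSS.
    if fold_enrichment > 20:
        strength = "strong"
    elif fold_enrichment >= 10:
        strength = "medium"
    elif fold_enrichment <= 5:
        strength = "weak"
    else:
        strength = "unbinned"  # 5-10-fold peaks are not named in the text
    near_tss = (peak_start - 5_000) <= tss <= (peak_end + 5_000)
    return strength, near_tss

# Example: a 25-fold peak ending 800 bp upstream of a TSS at 1,000,000.
print(classify_h3k27me3_peak(25.0, 998_000, 999_200, 1_000_000))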
When we intersected genes bound by each class of H3K27me3 peak with the differentially expressed genes, we found that each class of peak had similar enrichment among the down- and upregulated genes, indicating that H3K27me3 enrichment does not predict transcriptional repression by β-catenin (Figure S6B in File S1).

H3K27me3 enrichment is not depleted on ectopically expressed chondrocytic gene determinants in β-catenin mutants
To determine if the loss of β-catenin resulted in depletion of H3K27me3 on chondrocyte differentiation determinants, we examined the enrichment of H3K27me3 on Sox9 and its downstream target Col2a1, which have higher mRNA levels in the β-catenin mutant CM (Figure 2) (Goodnough et al. 2012). Cdkn2a, HoxA, and T/Brachyury are known targets of PRC2, contain strong H3K27me3 peaks, and serve as controls. Upon deletion of β-catenin, we did not observe a change in H3K27me3 enrichment on known PRC2 target genes (Figure 5E and Figure S7, A and B in File S1). More importantly, H3K27me3 enrichment did not change on Sox9, Col2a1, Col9a2, or Col11a2 (Figure 5E and Figure S7, H and I in File S1). Furthermore, H3K27me3 enrichment was similar between β-catenin controls and mutants at the TSS of critical bone and dermal marker genes, such as Runx2, Twist1, Twist2, Axin2, and Lef1 (Figure S7, C-G in File S1). Mcm6 serves as a negative control and lacks H3K27me3 enrichment in either the control or the mutant (Figure S7J in File S1). It is worth noting that the Cdkn2a, HoxA, and T loci had strong H3K27me3 enrichment peaks, while the Sox9 and Col2a1 loci had medium enrichment peaks in both controls and β-catenin mutants (Figure 5E and Figure S7, A and B in File S1). Our results showed that H3K27me3 enrichment is not depleted from the TSS of chondrocyte differentiation determinants in β-catenin mutants and remains enriched on actively transcribed genes.

DISCUSSION
Based on in vivo evidence that β-catenin is required to repress chondrogenesis in the CM, and on emerging in vitro evidence connecting β-catenin and PRC2 in other processes, we tested the hypothesis that repression of chondrogenesis by Wnt/β-catenin signaling requires epigenetic repression by PRC2 in vivo. Consistent with the findings of previous studies, our results demonstrate that an in vivo loss of β-catenin in the CM and dorsal mesenchyme leads to the activation of chondrogenic marker genes such as Sox9, Col2a1, and Col11a2, as well as other known PRC2 target genes. Further, we find that β-catenin can physically interact with PRC2 components at native protein levels in CM-enriched protein extracts. In contrast to findings from in vitro studies, we observe that β-catenin is not required for the expression of major PRC2 components in vivo and that PRC2 is dispensable for the repression of chondrogenic marker genes in CM cells. Conditional deletion of β-catenin in the CM alters H3K27me3 enrichment neither around differentially expressed genes nor genome-wide in vivo. Our data in genetic mutants in vivo are consistent with a model whereby EZH2 and H3K27me3 are not required in the CM to guide cell fate selection. Interrogating mixed cell populations is unlikely to account for our major finding, given that our CM-restricted deletion of β-catenin did not lead to changes in H3K27me3 profiles, and Ezh2 mutants in vivo did not show changes in cell fate selection in the supraorbital CM.
[Figure 5 legend excerpt: H3K27me3 ChIP-sequencing signal strength was measured across all genes bound by H3K27me3 or genes identified as differentially expressed in β-catenin mutant CM+ectoderm. The x-axis demarcates the percent distance across a gene between the TSS and the TES.]
[Figure 5 legend excerpt, continued: (E) IGV representation of H3K27me3 signal peaks between En1Cre/+;R26R/+;β-catenin fl/+ control (n = 1) and En1Cre/+;R26R/+;β-catenin fl/Δ mutant (n = 1) CM+ectoderm. Cdkn2a is a known target of PRC2. Sox9 and Col2a1 are chondrocyte marker genes. ChIP, chromatin immunoprecipitation; CM, cranial mesenchyme; FPKM, fragments per kilobase of transcript per million mapped reads; n.s., not significant; RNA-seq, RNA-sequencing; TES, transcription end site; TSS, transcription start site.]
Considering that loss of β-catenin at E10.5 leads to ectopic chondrogenesis, but loss of Ezh2 at E10.5 did not phenocopy the β-catenin mutant, the function of the physical interaction between β-catenin and PRC2 remains unclear. A recent study in human colon cancer cells demonstrated that EZH2 alone, independent of H3K27me3, was sufficient to repress transcription (O'Geen et al. 2017). While we did not observe genome-wide changes in H3K27me3 enrichment upon loss of β-catenin, it is possible that β-catenin is required to recruit EZH2 itself to the genome. Alternatively, EZH2 was recently shown to bind to β-catenin in mouse ESCs and trimethylate lysine 49 (K49me3) on the β-catenin protein itself (β-catMe3) (Hoffmeyer et al. 2017). The β-catMe3 protein could then function as a transcriptional repressor at defined loci in ESCs to govern neuronal vs. mesoderm fate. However, loss of Ezh2 in the CM did not lead to alterations in cell fate selection, indicating that the K49me3 modification of β-catenin does not play a role in cell fate selection in the CM. Future studies examining DNA binding by EZH2 and β-catenin could provide a biological function for the physical interaction between β-catenin and EZH2. The lack of cell fate changes in the supraorbital CM of Ezh2 mutants could indicate that the role of EZH2, and by extension PRC2, is dependent on the developmental stage and cell type. Most studies linking PRC2 and cell fate selection were performed in ESCs in vitro. Differences in the role of EZH2 between in vivo CM and in vitro ESCs may indicate that the cell fate selection role of PRC2 is unique to ES cells or linked to specific cell types. In addition, previous studies deleting Ezh2 at similar developmental stages in the mouse embryo found varying craniofacial phenotypes and defects (Schwarz et al. 2014; Dudakovic et al. 2015). Deletion of Ezh2 in the premigratory cranial neural crest cells with Wnt1Cre by E8.5 resulted in severe reduction of facial and skull bones, and embryonic lethality (Schwarz et al. 2014). Conditional deletion of Ezh2 at E9.5 in posterior CM with Prx1Cre resulted predominantly in postnatal craniosynostosis. We did not find gross changes in embryonic craniofacial morphology upon deletion of Ezh2 in the CM at E10.0 with Dermo1Cre (data not shown). These results suggest that the role of PRC2 in embryonic development may be cell type- and developmental stage-specific. Future studies in vivo are required to tease out the timing and dynamics of developmental gene regulation by PRC2. Recent data from several groups are refining the role of PRC2 and H3K27me3 enrichment. According to the histone code hypothesis, H3K27me3 is often found on transcriptionally repressed genes and is widely considered to be a sign of transcriptional repression (Lee et al. 2006; Roh et al. 2006; Barski et al. 2007; Heintzman et al. 2007, 2009).
In early postmigratory mouse neural crest cells, H3K27me3 was shown to mark bivalent domains together with the activating mark H3K4me3, indicating transcriptional poising rather than repression (Minoux et al. 2017). Recently, the histone code model has been refined to show that H3K27me3 enrichment in mouse ESCs is not just predictive of transcriptional repression, but also indicative of a past transcriptionally repressive state (Riising et al. 2014; Comet et al. 2016). Furthermore, in human colon cancer cells, ectopic deposition of H3K27me3 with an EZH2-dCas9 fusion construct was not sufficient for transcriptional repression (O'Geen et al. 2017). In mouse rib chondrocytes, an intersection of RNA-seq data with H3K27me3 ChIP-seq data also suggested that H3K27me3 enrichment at the TSS was not sufficient for transcriptional repression: of the genes dysregulated upon knockout of the core PRC2 component EED, only 11% were enriched for H3K27me3, so the biological role of the remaining 89% of H3K27me3 peaks is unclear (Mirzamohammadi et al. 2016). Our data are entirely consistent with these recent findings. Intersecting our in vivo RNA-seq and ChIP-seq studies revealed legitimate H3K27me3 peaks on genes whose transcription levels did not correlate with the mark. We found that both expressed and repressed genes in control CM+ectoderm can be enriched for H3K27me3, demonstrating that H3K27me3 is not sufficient to indicate repression. The H3K27me3 marks remain at Sox9 and other cartilage marker genes in β-catenin mutants, suggesting that these marks may be carried over and reflective of a past transcriptional off state. If PRC2 is not the principal repressor of chondrogenesis in the CM, the question remains as to what factor exerts this function. We propose three other models that will require further testing. The first model calls for other epigenetic mechanisms, such as direct covalent modification of DNA (DNA methylation) or other histone modification-related mechanisms, such as G9a-associated H3K9me3 repression. A study in chick limb bud micromass cultures showed that the addition of exogenous Wnt3a led to an increase in DNA methylation by DNMT3a on the Sox9 promoter (Kumar and Lassar 2014). However, in our hands, the addition of DNMT inhibitors did not alter Sox9 and Col2a1 mRNA levels in primary CM+ectoderm cells cultured in vitro (data not shown). Further studies in vivo will be required to investigate this model. The second model postulates that β-catenin activates yet-to-be-identified signaling pathways or transcription factors that are directly involved in repression. For example, Twist1 is positively regulated by Wnt/β-catenin signaling, and conditional deletion of Twist1 partially phenocopies the ectopic chondrogenesis found in the En1Cre/+;R26R/+;β-catenin fl/Δ mutants (Komori et al. 1997; Goodnough et al. 2012). The retinoic acid (RA) signaling pathway can interact with Wnt/β-catenin signaling, and it can promote chondrocyte development and function in vitro (Yasuhara et al. 2010; Uchibe et al. 2017). RA signaling pathway components are robustly expressed in the control CM; their interaction with Wnt/β-catenin signaling and their role in the CM remain to be tested. A third model is that β-catenin does not directly control the transcription of cartilage determinants and marker genes, but may control the expression or activity of factors involved in the post-transcriptional regulation of cell fate determination and chondrocyte differentiation genes.
Overall, our data suggest a model whereby the repression of the chondrogenic fate by Wnt/β-catenin signaling does not rely on EZH2 and H3K27me3, but instead involves other, yet-to-be-identified transcriptional or post-transcriptional mechanisms.
The pd → 3He η π0 reaction at T_p = 1450 MeV
The cross section for the pd → 3He η π0 reaction has been measured at a beam energy of 1450 MeV using the WASA detector at the CELSIUS storage ring, detecting one 3He and four photons from the decays of the two mesons. The data indicate that the production mechanism involves the formation of the Δ(1232) isobar. Although the beam energy does not allow the full peak of this resonance to be seen, the invariant masses of all three pairs of final-state particles are well reproduced by a phase-space Monte Carlo simulation weighted with the p-wave factor of the square of the π0 momentum in the 3He π0 system.
The pd → 3He X0 reaction has long been used to study the production of neutral mesons or mesonic systems. Missing-mass experiments carried out near the production thresholds have clearly identified peaks corresponding to X0 = ω, η′, and φ [1,2]. Of particular interest are the data on the production of the η meson [3-6], which show a threshold enhancement that might indicate the formation of a quasi-bound η 3He nuclear state [7]. Evidence in favour of this hypothesis is also to be found in coherent η photoproduction from 3He, γ 3He → η 3He [8]. However, exclusive measurements of a production process often yield important additional information. The study of pd → 3He K+ K− showed that the φ mesons produced and decaying into K+ K− are strongly polarised with respect to the incident proton direction [9]. In contrast, the ω mesons detected through the measurement of pd → 3He π+ π− π0 have very low polarisation [10]. This difference is in marked contrast to the Okubo-Zweig-Iizuka rule [11], which would suggest rather that the polarisations of these two vector mesons should be similar. The most quoted data on the pd → 3He X0 reaction are connected with the ABC effect, where a strong and sharp enhancement of the missing-mass X0 spectrum is seen a little above the two-pion threshold [12]. The effect might be connected with the production of two Δ(1232) isobars or with the sequential decay of the Roper N*(1440) resonance. However, the full rich structure could only be made accessible through exclusive measurements, such as those carried out recently for pd → 3He π0 π0 and pd → 3He π+ π− [13]. It is interesting to see whether any similar ABC effect is to be found in the production of other pairs of pseudoscalar mesons, such as η π0. In this case an exclusive measurement would be required in order to identify the reaction against the much larger background arising from multipion production.
Many important results have appeared recently on the photoproduction of the π0 η system. The data from hydrogen [14] have been interpreted in terms of a dominant cascade decay of the D33 Δ(1700) isobar through the s-wave Δ(1700) → η Δ(1232), followed by the p-wave Δ(1232) → π0 p [15]. The evidence for the importance of the Δ(1232) is clear from the invariant mass distribution, though some signal of the interaction of the η with the observed proton through the N*(1535) is also apparent [14]. The coherent photoproduction of π0 η pairs in γd → η π0 d has also been observed [16]. The positive signal of the similar reaction on 3He raises the tantalising possibility of using the γ 3He → η π0 3He reaction to study also the final-state interaction (fsi) of the η with the 3He [17]. The competition between the η 3He and 3He π0 interactions would, of course, also be equally relevant if the system were produced in proton-deuteron collisions. Measurements of the pd → 3He η π0 reaction were carried out at the CELSIUS storage ring of The Svedberg Laboratory in Uppsala, Sweden, using the WASA detector [18]. The circulating proton beam of energy 1450 MeV was incident on a deuterium pellet target [19,20]. The 3He ejectiles were measured in the WASA forward detector (FD) [21], which covered laboratory polar angles from 3° to 18°. This corresponds to 92% of the 3He phase space for η π0 production at 1450 MeV. The lost events are those where the 3He are emitted at such small laboratory angles that they escape detection down the beam pipe. The forward detector consists of a sector-like window counter (FWC) for triggering, a proportional chamber for precise angular information (FPC), a hodoscope (FTH) for triggering and off-line particle identification, a range hodoscope (FRH) for energy measurements, particle identification and triggering, and a veto hodoscope (FVH) for triggering. The η and π0 mesons were identified via their decays into γγ pairs, with these photons being measured in the central detector (CD). Their energies and directions were determined using the information from the Scintillating Electromagnetic Calorimeter (SEC), which covers polar angles from 20° to 169°. The absence of a signal in the Plastic Scintillator Barrel (PSB) indicated that the photons arose from the decay of a neutral particle. A schematic overview of the WASA detector setup is shown in Fig. 1. The hardware 3He trigger selected events where there was a hit with a high energy deposit in the FWC and an overlapping hit in either the FTH or the FRH. The 3He were identified in the FD using the ΔE−E method, as described in detail in Refs. [22,23].
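The ΔE−E method separates ion species by where they fall in the plane of energy loss versus deposited energy. The following sketch is purely illustrative of that idea; the band parameterization and all numbers are our own assumptions, not the calibration of Refs. [22,23].

def is_he3_band(delta_e, e_total, band):
    # Illustrative Delta-E/E particle identification: an ion species
    # populates a characteristic band in the (E, Delta-E) plane, so a
    # 3He candidate is kept when its measured energy loss falls inside
    # the band expected at its total deposited energy. `band` maps a
    # total energy to (lower, upper) Delta-E limits and must come from
    # a detector calibration.
    lo, hi = band(e_total)
    return lo < delta_e < hi

# Crude placeholder band shaped like dE ~ c/E (units arbitrary):
band = lambda e: (0.8 * 2000.0 / e, 1.2 * 2000.0 / e)
print(is_he3_band(12.0, 180.0, band))  # True: inside (8.9, 13.3)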
In the data analysis we considered only those events where the η meson decayed into two photons (BR = 39.3%), and we therefore selected events with one 3He plus four photons. Furthermore, one γγ combination was required to have an invariant mass close to that of the π0, |IM(γγ) − m_π0| < 45 MeV/c^2. The two remaining photons were required to have an opening angle θ_γγ > 70°, motivated by Monte Carlo simulations of the reaction, and an invariant mass larger than 460 MeV/c^2. In addition, the overall missing mass had to be small, MM(3He 4γ) < 100 MeV/c^2. Finally, all events with two π0 candidates, i.e., where two γγ combinations satisfied |IM(γγ) − m_π0| < 45 MeV/c^2, were rejected. This reduced the background contribution from 2π0 production by almost an order of magnitude. These selection criteria, when applied to phase-space-produced pd → 3He η π0, η → γγ, lead to an acceptance of 11.1%. The above cuts reduce the acceptances for 2π0 and 3π0 production to ≈0.1% and ≈0.2%, respectively. However, since their cross sections are so much larger than that for the η π0 channel, and only 39.3% of the η mesons decay into γγ, a significant background from multipion production remains. The pd → 3He η π0 events are identified by the peak at the η position that appears in the 3He π0 missing-mass spectrum shown in Fig. 2. The points are experimental data that satisfy the selection criteria. Phase-space simulations are shown of pd → 3He 2π0 (dash-dotted line), pd → 3He 3π0 (dotted line), and pd → 3He η π0 (solid red line). The three contributions are normalised such that their sum (solid black line) gives the best fit to the experimental data. The 2π0 and 3π0 distributions are then roughly consistent with the cross sections obtained in Ref. [23]. The η π0 distribution normalised in this way contains 375 ± 35 events, where the quoted error is systematic, arising mainly from the ambiguity in the background subtraction. The number of η π0 candidates is corrected for acceptance, taking the η → γγ branching ratio into account, and then divided by the integrated luminosity, determined as described in Ref. [24], in order to obtain the total cross section. This procedure gave a value of σ_tot = 22.6 ± 1.5 ± 2.1, with an additional 14% normalisation uncertainty. The first error is statistical and the second systematic, coming from uncertainties in the number of η π0 events and the acceptance estimation. The normalisation uncertainty includes effects from both the luminosity (12%) and time-overlapping events (< 8%), added in quadrature. In the pd → 3He η π0 reaction there are potentially three important final-state interactions, which have been investigated by constructing the invariant mass distributions for the η π0, 3He π0, and 3He η systems. For this purpose we take all the events in Fig. 2 that lie within the interval 490 MeV/c^2 < MM(3He π0) < 580 MeV/c^2. This asymmetric choice is motivated by the fact that the η peak is shifted towards lower masses in both the Monte Carlo simulation and the experimental data.
Fig. 2. (Colour online) The missing mass of the 3He π0 system for all events fulfilling the selection criteria given in the text. The dash-dotted line represents simulated pd → 3He 2π0 events, the dotted line pd → 3He 3π0, and the red line pd → 3He η π0. The weights of these three contributions have been adjusted so that their sum (solid black line) reproduces well the experimental data (points).
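The selection criteria listed above translate naturally into an event filter. A minimal sketch follows; the kinematics helpers inv_mass and opening_angle, and the precomputed missing mass mm_3he_4g, are assumptions of this sketch (the paper does not specify its analysis code).

from itertools import combinations

M_PI0 = 134.98  # MeV/c^2

def passes_selection(photons, mm_3he_4g, inv_mass, opening_angle):
    # Apply the cuts quoted in the text to a 3He + 4-photon event.
    # `photons` holds the four photon four-momenta in whatever form the
    # user-supplied helpers accept; `mm_3he_4g` is MM(3He 4gamma).
    pi0_pairs = [pair for pair in combinations(range(4), 2)
                 if abs(inv_mass(photons[pair[0]], photons[pair[1]]) - M_PI0) < 45.0]
    if len(pi0_pairs) != 1:        # no pi0 candidate, or 2pi0 veto
        return False
    rest = [i for i in range(4) if i not in pi0_pairs[0]]
    g1, g2 = photons[rest[0]], photons[rest[1]]
    if opening_angle(g1, g2) <= 70.0:   # degrees; eta-candidate photons
        return False
    if inv_mass(g1, g2) <= 460.0:       # MeV/c^2; eta-candidate mass
        return False
    return mm_3he_4g < 100.0            # MeV/c^2; overall missing mass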
Within this mass interval there are ≈335 η π0 candidates, with a signal-to-background ratio of 1.7. Figure 3 shows the invariant mass of the 3He π0 system, where the background, obtained from simulated 2π0 and 3π0 data, has been subtracted from each bin. The remaining numbers of events have been corrected for acceptance, also estimated bin by bin. The results are shown as points with error bars that represent the statistical uncertainties. In addition to these there is a systematic uncertainty in each bin of less than 10% due to the background subtraction and acceptance estimation. The solid line shows a phase-space simulation of pd → 3He η π0 events. The experimental data peak slightly below 3100 MeV/c^2, which is approximately equal to 2m_p + M_Δ(1232) and points towards an involvement of the Δ(1232) isobar in the production process. At this energy the full Δ peak is not covered, and the data are primarily sensitive to the p-wave rise towards the resonance position. To simulate this effect, Monte Carlo events have been weighted with k^2, the square of the momentum of the π0 in the 3He π0 rest frame. The resulting distribution is shown in Fig. 3 by the dotted histogram, where the normalisation is to the total number of events. This model reproduces well the shape of the data. Instead of studying the invariant mass of the 3He η system, it is in practice more reliable to construct the missing mass of the π0. This is because the electromagnetic calorimeter was calibrated using the neutral pions decaying into γγ, so that it is more precise in this region than for η decay. The basic procedure for obtaining the distribution is similar to that for the 3He π0 invariant mass. After subtracting the background, the data were corrected for acceptance, and the result is shown in Fig. 4. Compared to the broadly semi-circular form of the phase-space distribution, the experimental data show a peaking towards low missing masses. At first sight this might be interpreted as being due to a 3He η final-state interaction, which is very strong and attractive near the kinematic threshold. However, the dotted histogram, again showing phase-space simulations weighted by the square of the π0 momentum in the 3He π0 rest frame, strongly suggests that this could also be an effect of the p-wave interaction between the π0 and the 3He. The best measurement of the η π0 invariant mass is obtained through the study of the 3He missing mass, because the nucleus is detected in the Forward Detector, which has a much better resolution than the electromagnetic calorimeter. The background-subtracted and acceptance-corrected results are shown in Fig. 5. The deviations from phase space are not so marked as in the cases that involved the 3He, but even here the small effects are fairly well reproduced by weighting the Monte Carlo simulation with the k^2 factor. It is, of course, not surprising that one sees no significant influence of the a0(980) scalar resonance, since at T_p = 1450 MeV the maximum η π0 invariant mass that is accessible is only about 850 MeV/c^2.
Fig. 4. The missing mass of the π0 for all events fulfilling the selection criteria given in the text and, in addition, 490 MeV/c^2 < MM(3He π0) < 580 MeV/c^2. This distribution is equivalent to that of the invariant mass of the 3He η system. The solid lines represent phase-space η 3He Monte Carlo data and the dotted ones the same but weighted by the square of the π0 momentum in the 3He π0 rest system.
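The k^2 weight used in these comparisons can be computed directly from the invariant mass of the 3He π0 pair, since for a two-body subsystem the momentum in its rest frame follows from the breakup-momentum formula k^2 = [s − (m1 + m2)^2][s − (m1 − m2)^2]/(4s). A minimal sketch, assuming four-momenta as (E, px, py, pz) arrays in MeV; the function name is our own.

import numpy as np

M_HE3, M_PI0 = 2808.39, 134.98  # MeV/c^2

def pwave_weight(p_he, p_pi):
    # Weight a phase-space Monte Carlo event with k**2, where k is the
    # pi0 momentum in the 3He-pi0 rest frame, obtained from the pair's
    # invariant mass squared s via the breakup-momentum formula.
    p = np.asarray(p_he) + np.asarray(p_pi)
    s = p[0] ** 2 - np.dot(p[1:], p[1:])
    k2 = (s - (M_HE3 + M_PI0) ** 2) * (s - (M_HE3 - M_PI0) ** 2) / (4.0 * s)
    return max(k2, 0.0)  # guard against rounding just below threshold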
Fig. 5. The missing mass of the 3He for all events fulfilling the selection criteria given in the text and with 490 MeV/c^2 < MM(3He π0) < 580 MeV/c^2. This distribution is equivalent to that of the invariant mass of the η π0 system. The solid curve is a Monte Carlo simulation of phase-space production, while the dotted curve represents these events weighted by the square of the π0 momentum in the 3He π0 rest system.
Although the p-wave ansatz reproduces all three final invariant-mass distributions very economically through the introduction of the k^2 factor, there is no sign of any p-wave nature in the angular distributions where, within the limited statistics, the data are fairly isotropic in the angle between the proton and the π0 in the overall c.m. frame. The same is true for the angle of the π0 in the 3He π0 frame with respect to either the incident proton or the recoiling η. However, in view of the large number of spin degrees of freedom, it is hard to draw conclusions from such isotropy. Since WASA has a very large acceptance, the introduction of the k^2 factor into the Monte Carlo has only a limited effect, changing the total acceptance from 11.1% to 10.6%, which changes the value of the total cross section for the reaction to σ_tot = 23.6 ± 1.6 ± 2.2, with the same 14% normalisation uncertainty. In summary, we have carried out measurements of the pd → 3He η π0 reaction at a beam energy of 1450 MeV. Although the statistics are not sufficient to make a useful Dalitz plot, the invariant mass distributions of all three final pairs of particles are consistent with the p-wave influence that might arise from the formation of the Δ(1232) in the 3He π0 system. This is very much in line with the photoproduction data on hydrogen and deuterium obtained at higher excess energies [14,16]. There is no sign of any enhancement of the ABC type in the η π0 mass distribution, and the angular distributions, which within large error bars are consistent with isotropy, are in marked contrast to the very rich structure observed for pd → 3He ππ [13]. It would be highly desirable to have a microscopic model for the pd → 3He η π0 reaction. In particular it is important to identify the dynamical origin of the η. Does it come from a sequential decay of the D33 Δ(1700) isobar, as suggested for the photoproduction data [14,15], or does it arise from a two-step process such as pn → dη followed by dp → 3He π0, where the N*(1535) plays a role? Regarding the final-state interactions, it seems already clear from our results that data would have to be obtained at higher energy in order to separate the different final-state interactions and to have a chance of investigating the formation of any η 3He quasi-bound state. The data would then extend over the peak of the Δ(1232) and thus allow firmer conclusions to be drawn. Experiments of this type could be carried out by the WASA-at-COSY collaboration [25]. We are grateful to the personnel at The Svedberg Laboratory for their support during the course of the experiment. This work was supported by the European Community under the "Structuring the European Research Area" Specific Programme Research Infrastructures Action (Hadron Physics, contract number RII3-CT-2004-506078), and by the Swedish Research Council.
Fig. 1. (Colour online) Side view of the CELSIUS/WASA detector setup [18,21]. The CELSIUS beam pipe runs horizontally and the target pellets are injected downwards through the vertical pipe.
Fig. 3. The invariant mass of the 3He π0 system for all events fulfilling the selection criteria given in the text and, in addition, 490 MeV/c^2 < MM(3He π0) < 580 MeV/c^2. The background has been subtracted and the distribution corrected for acceptance. The error bars represent only the statistical uncertainties. The solid lines show a phase-space simulation of 3He η π0 events. The dotted histogram shows phase-space events weighted with the square of the π0 momentum in the 3He π0 rest system.
Evaluation of the Simulation of Typhoon Lekima (2019) Based on Different Physical Parameterization Schemes and FY-3D Satellite's MWHS-2 Data Assimilation
In this study, the case of super typhoon Lekima, which made landfall in Zhejiang Province, China, on 10 August 2019 and severely affected Zhejiang and Jiangsu Provinces, is numerically simulated. Based on the Weather Research and Forecasting (WRF) model, sensitivity experiments are carried out with different combinations of physical parameterization schemes. The results show that microphysics schemes have obvious impacts on the simulation of the typhoon's track, while the intensity of the simulated typhoon is more sensitive to surface physics schemes. Based on the results of the typhoon's track and intensity simulation, one parameterization combination was further selected to provide the background field for the subsequent data assimilation experiments. Using the three-dimensional variational (3DVar) data assimilation method, the Microwave Humidity Sounder-2 (MWHS-2) radiance data onboard the Fengyun-3D satellite (FY-3D) were assimilated for this case. It was found that the assimilation of the FY-3D MWHS-2 radiance data was able to optimize the initial field of the numerical model in terms of the model variables, especially the humidity. Finally, by inspection of the typhoon's track and intensity forecasts, it was found that the assimilation of FY-3D MWHS-2 radiance data improved the prediction skill for both the typhoon's track and intensity.

Introduction
The performance of numerical weather prediction (NWP) models is largely determined by the initial conditions and the physical parameterization schemes applied in the model [1]. On the one hand, different schemes are introduced into the dynamic framework of the numerical model to describe weather processes at various scales reasonably. In order to simulate a typhoon's track and intensity accurately, the interaction between weather systems of multiple temporal and spatial scales should be considered in the model, such as the mutual effect of the subgrid physical processes and the large-scale background environment [2]. On the other hand, a model's initial field is affected by the quality of the observation data [3-12] and the data assimilation approaches. Previously, the initial field was mainly provided by conventional data, which have low resolution, large errors, and other problems. Currently, unconventional observations, such as satellite radiance data, play an increasingly important role.

MWHS-2 Radiance Data
The MWHS onboard the FY-3 satellite is an atmospheric humidity vertical sounder, providing remote sensing data in a cross-track, continuously variable-speed scanning mode. The satellite radiance data provided by MWHS-2 on FY-3D are applied in this study. The features of ice clouds and land surfaces can be detected by the MWHS-2 channels at 89 GHz and 150 GHz, respectively. The eight oxygen channels around 118.3 GHz are newly added and are sensitive to atmospheric temperature, while the five 183 GHz water vapor absorption channels are sensitive to atmospheric humidity [27]. Channel 1 and channels 8-10 of the instrument mainly sense levels near the ground, while the other channels mainly sense higher altitudes. The MWHS-2 channels 5-7 and 11-15 are selected for this assimilation experiment. The detailed information for each channel can be seen in Table 1 [28]. The WRF model is a widely used mesoscale numerical model for weather research. The options of the physical parameterization schemes play an important role in typhoon forecasting [29].
To represent the subgrid-scale physical processes at the model resolution, several physical parameterization schemes are usually adopted; they are listed in Table 2. The following section mainly describes their impacts. Microphysics schemes simulate the evolution of water vapor, clouds, and precipitation, as well as the weather phenomena caused by the interaction of ice and water particles [2]. Four different microphysics (the option named mp_physics in WRF) schemes are selected. Among them, the WRF single-moment 6-class (WSM6) scheme predicts the mixing ratios of different species such as rain, cloud ice, and snow. The evolution of ice, snow, and graupel can be depicted by the Thompson scheme. The Morrison 2-moment scheme predicts both the mixing ratio and the number concentration of five hydrometeor species (droplets, cloud ice, snow, rain, and graupel). The WRF double-moment 6-class (WDM6) scheme is a further development of the WSM6 scheme. The radiation process can be divided into shortwave radiation and longwave radiation. Through radiative transfer, the radiation balance of the atmosphere and the land surface is perturbed, resulting in temperature changes. Simulations of the land surface physical processes provide the lower boundary conditions for the planetary boundary layer. Moreover, the surface physics scheme (the option named sf_surface_physics) is more complex and variable. The five-layer thermal diffusion scheme constructs a five-layer soil model in which only soil temperature is predicted. The unified Noah land surface model predicts temperature and moisture in four soil layers and is employed to deal with the effects of ice and snow cover. The planetary boundary layer is the momentum sink and the heat and water vapor source for the whole atmosphere, and the turbulent transport in this layer is crucial for the transmission of physical quantities [2]. The Yonsei University (YSU) scheme and the Mellor-Yamada-Janjic (MYJ) scheme were chosen for bl_pbl_physics. The YSU scheme is designed to deal with the problem of excessive mixing, while the MYJ scheme is more computationally efficient and accurate.

WRF-3DVar Assimilation System
The WRF-3DVar assimilation system was developed by the National Center for Atmospheric Research (NCAR). Its basic idea is to minimize a cost function of the model's initial field that measures the weighted misfits to the observations and the background, as follows [30]:

J(x) = (1/2) (x − x_b)^T B^(−1) (x − x_b) + (1/2) (y_0 − H(x))^T R^(−1) (y_0 − H(x)),

where x_b is the background field vector and y_0 is the observation vector, with the observation error covariance matrix R and the background error covariance matrix B. H(x) is the observation operator that converts the model variables into observation space. The difference between H(x) and y_0 is calculated, and the scalar cost function J(x) is then minimized to obtain the analysis variable x.

Radiance Data Assimilation Methodology
The function of a fast radiative transfer model is to convert the model variables into simulated atmospheric radiances. In this experiment, the Radiative Transfer for TOVS (RTTOV) model, developed under the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), was adopted to assimilate the FY-3D MWHS radiance data under clear-sky conditions.
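For concreteness, the cost function above can be written down directly. A minimal numerical sketch with dense matrices, purely illustrative (operational 3DVar systems work with preconditioned increments and operator forms rather than explicit inverses):

import numpy as np

def cost_3dvar(x, x_b, y_o, H, B_inv, R_inv):
    # Scalar 3DVar cost J(x): a background term weighted by B^-1 plus an
    # observation term weighted by R^-1; H is the observation operator.
    dx = x - x_b
    dy = y_o - H(x)
    return 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy

# Tiny usage example with identity error covariances and a linear H
# that averages the state (all values here are made up):
x_b = np.array([1.0, 2.0])
y_o = np.array([1.8])
H = lambda x: np.array([x.mean()])
J = cost_3dvar(np.array([1.2, 2.1]), x_b, y_o, H, np.eye(2), np.eye(1))
print(J)  # 0.5*(0.2**2 + 0.1**2) + 0.5*(1.8 - 1.65)**2 = 0.03625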
Due to a series of errors caused by the accuracy of the instrument, the radiative transfer model, improper operation of the instrument, and the background field error, quality control (QC) procedures are essential in the data assimilation, as follows [31] (an illustrative sketch of these checks is given at the end of this subsection):
(1) Eliminate all channels whose observations are mixed with land-surface signals.
(2) Eliminate satellite data detected at large scanning angles.
(3) Eliminate data if the absolute difference between the bias-corrected brightness temperature (Tb) and the simulated Tb exceeds 15 K, or if it is three times larger than the specified observation error.
(4) Delete data with a high scattering index (SI), defined as the difference between the observed values in channel 1 (89 GHz) and channel 10 (150 GHz); data are removed when the SI exceeds the relatively high threshold of 5 K [18].

Overview of Super Typhoon Lekima
On 4 August 2019, a tropical depression formed over the Pacific Ocean east of the Philippines; it continued to develop and intensified into a severe tropical storm on 6 August. At 1500 UTC 7 August, it was officially upgraded to super typhoon Lekima (Table 3). The system intensified under the influence of various favourable conditions and reached its strongest stage of the whole process on 8 August, with a minimum sea level pressure (MSLP) of 915 hPa and a maximum wind speed (MWS) exceeding 65 m/s. Lekima moved northwestward because of the combined action of the subtropical high over the northwest Pacific Ocean and the internal forces of the typhoon itself. During the development of Lekima, typhoon Krosa was located near its east side (23°N, 140°E). There was no "Fujiwhara effect" between them, since the distance between the two typhoons was about 2000 km [32]. However, an indirect impact was found between the two typhoons, which helped Lekima move westward in the easterly winds. Meanwhile, the subtropical high remained stable and was squeezed westward. Thus, Lekima moved along the southwestern edge of the subtropical high, turning from west to northwest. The typhoon made landfall on the coast of China's Zhejiang Province on 10 August. Afterwards, it began to weaken and finally dissipated on 13 August (Figure 1). After it landed in China, the inverted trough to the north of the typhoon caused large-scale precipitation in Jiangsu, Zhejiang Province, and other areas. Later, the heavy precipitation in the Shandong and Hebei provinces was mainly affected by the deep westerly trough and the northwest cold air invading Lekima's inverted trough. Super typhoon Lekima brought serious rainstorm disasters and economic losses to the affected areas [33].

The Experimental Setups
This study adopted WRF model version 4.0. Reanalysis data provided by the National Centers for Environmental Prediction (NCEP) with a resolution of 0.25° × 0.25° were used as the initial and boundary conditions. The model domain (Figure 2) extended from the western Pacific Ocean to the southeast coast of China, with its center at (128.4°E, 29.3°N); the number of horizontal grid points was 721 × 541 for a single domain. The horizontal grid spacing of the WRF model was 9 km, with 57 vertical levels and a model top pressure of 50 hPa. For super typhoon Lekima, deterministic 42 h forecasts were launched covering 0600 UTC 8 August to 0000 UTC 10 August 2019. Forecast outputs were recorded every 6 h, and the integration time step was 20 s. Eight different combinations of physical parameterization schemes were selected for the sensitivity experiments (Table 4).
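Before turning to the experiments themselves, here is the promised sketch of the QC checks. The innovation thresholds (15 K, three times the observation error) and the 5 K scattering-index limit follow the list above; the scan-angle limit is a placeholder, since the text gives none, and the function interface is our own.

def passes_qc(over_land, scan_angle_deg, tb_corrected, tb_simulated,
              obs_error, tb_ch1, tb_ch10,
              max_scan_angle=45.0, si_threshold=5.0):
    # Return True if a radiance observation survives the four QC checks.
    if over_land:                               # (1) mixed land-surface signal
        return False
    if abs(scan_angle_deg) > max_scan_angle:    # (2) large scanning angle
        return False
    innovation = abs(tb_corrected - tb_simulated)
    if innovation > 15.0 or innovation > 3.0 * obs_error:  # (3)
        return False
    si = tb_ch1 - tb_ch10                       # (4) scattering index, K
    return si <= si_threshold

# Example: an ocean observation at nadir with a 2 K innovation and SI of 3 K.
print(passes_qc(False, 0.0, 250.0, 252.0, 2.0, 260.0, 257.0))  # True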
In addition, a ninth experiment with data assimilation was designed alongside the sensitivity experiments with the various physical parameterization schemes. The forecast field from exp4 was provided as the background field for the assimilation experiment valid at 0600 UTC 8 August 2019.

Results
The results of the parameterization sensitivity experiments and the MWHS-2 data assimilation experiment were analyzed. The observed track, MWS, and MSLP data for typhoon Lekima were from the China Meteorological Administration (CMA) and were used as the ground truth. The maximum wind at 10 m and the minimum surface pressure were directly determined as the MWS and MSLP, respectively, in the numerical experiments, with an appropriate search size to exclude other typhoons in the domain.

Parameterization Sensitivity Experiments
The eight experiments adopted different combinations of physical parameterization schemes. Quantitative comparative results are provided to demonstrate the effect of applying different parameterization schemes on the analysis and forecast of the typhoon's track and intensity, based on the eight experiments described above.

4.1.1. The Typhoon's Track
Figure 3 shows the comparison between the simulated and observed tracks for a 42 h forecast from 0600 UTC 8 August to 0000 UTC 10 August. As shown in Figure 3a, among the track comparison of exp1, exp2, and exp3, the smallest average track error is found for the exp3 scheme. The results of exp1 and exp2 are rather consistent, since the same parameterization schemes are applied in exp1 and exp2 except for the RL-DS and RGSL radiation schemes. When applying different microphysics schemes in exp2 and exp3, exp3 yields better results with the Thompson scheme, indicating that the microphysical process has a more significant impact on the typhoon than the radiation process. Figure 3b displays the experimental results of exp3, exp4, and exp5 with the same microphysics parameterization scheme, while the other physical process schemes differ in terms of the radiation scheme, the land surface physics scheme, the surface layer physics, and the planetary boundary layer scheme. The results of these three schemes are relatively similar, indicating that the other physical process schemes have only a slight impact on the track forecasts. By comparing exp6-8 (Figure 3c), it is found that the track from the exp8 scheme is the most accurate. Among them, both exp6 and exp8 apply the same WSM6 scheme, with the RL-DS scheme and the RGSL scheme respectively, while WDM6 is used in exp7. The WDM6 scheme does not outperform the WSM6 scheme for the track forecast.
A similar result was also found in other studies, which indicates that the empirical knowledge embedded in the WDM6 microphysics scheme is probably not able to represent these microphysical processes in real tropical cyclone cases [34][35][36]. When comparing exp6 and exp8, their track forecasts are relatively similar, but the translation speed of exp8 agrees more consistently with the observed track. From Figure 3d, among all schemes, the simulated track of exp8 has the lowest error in both movement direction and speed; by contrast, the results of exp1 and exp2 are the worst. It is interesting to note that a southwest bias is commonly found in all the simulated tracks. From Figure 4, the mean track error of exp2 is the highest, while exp8 consistently yields the smallest track error. This indicates that the selection of the microphysical scheme has the greatest influence on the simulation of the typhoon's track, with the WSM6 scheme appearing to be the main factor behind the positive track forecasts. The combination of the WSM6 mp_physics, RMM5 sf_sfclay_physics, and YSU bl_pbl_physics schemes performs best.
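As a concrete illustration of how such average track errors can be computed, the following is a minimal sketch using the haversine great-circle distance between simulated and CMA best-track centers; the function and variable names are hypothetical.

import numpy as np

EARTH_RADIUS_KM = 6371.0

def track_error_km(lat_sim, lon_sim, lat_obs, lon_obs):
    """Great-circle (haversine) distance between simulated and observed
    typhoon centers at matching forecast times; inputs in degrees."""
    phi1, phi2 = np.radians(lat_sim), np.radians(lat_obs)
    dphi = np.radians(lat_obs - lat_sim)
    dlam = np.radians(lon_obs - lon_sim)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Hypothetical usage with 6-hourly centers over a 42 h forecast:
# errors = track_error_km(sim_lat, sim_lon, cma_lat, cma_lon)
# mean_track_error = errors.mean()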
4.1.2. The Typhoon's Intensity

Figure 5 shows the typhoon intensity from the eight experiments along with the observations. Overall, the simulated typhoon is weaker than observed in terms of both MSLP and MWS. In Figure 5a, except for exp4, the results of the other experiments are basically similar over the whole period. The largest average error is from exp3, which predicts the weakest typhoon intensity. It seems that the selection of the microphysical scheme has only a slight effect on the simulation of the typhoon's intensity. In general, the intensity of exp4 matches the observations best for both MSLP and MWS, showing that the intensity of the simulated typhoon is more sensitive to the surface physics schemes.

MWHS-2 Data Assimilation Experiment

Based on the track and intensity results, the parameterization scheme of exp4 shows rather positive forecast skill in terms of both track error and intensity error, which is why it was selected to provide the background field for the following MWHS-2 radiance data assimilation experiment. Using WRF-3DVar, the MWHS-2 radiance data onboard FY-3D were assimilated for this case to improve the analysis and forecast skill.

The Impact on the Analysis

The simulated Tb values from the background and the analysis were compared with the observations, along with the distributions of the observations minus background (OMB) and the observations minus analysis (OMA). From Figure 6a-c, the simulated Tb values of the background are notably warmer than the observed ones. After assimilating the MWHS-2 radiance data, the simulated Tb is rather consistent with the observations. Comparing Figure 6d,e, the magnitude of OMA is obviously smaller than that of OMB after the bias correction; in addition, the mean and standard deviation (stdv) of OMA are significantly smaller than those of OMB. From the scatter plots in Figure 7a, almost all the points lie above the diagonal, indicating that the background simulated Tb values are higher than the observed ones. From Figure 7b, the bias is largely corrected when comparing the observed Tb with the bias-corrected background Tb; meanwhile, the root mean square (rms) error of OMB decreases sharply. Figure 7c shows the scatter of the analysis against the observations. Compared with Figure 7b, the points cluster more densely around the diagonal. Both the stdv and the rms of OMA decrease to 0.433 K (Figure 7c), indicating that the analysis better fits the observations.
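The departure statistics quoted here (mean, stdv, and rms of OMB and OMA) can be computed as in the following sketch, assuming arrays of collocated observed, background, and analysis brightness temperatures; the names are hypothetical.

import numpy as np

def innovation_stats(tb_obs, tb_bkg, tb_ana):
    """Mean, standard deviation, and rms of the OMB and OMA departures (K).
    tb_obs, tb_bkg, tb_ana are arrays of collocated brightness temperatures."""
    stats = {}
    for name, dep in (("OMB", tb_obs - tb_bkg), ("OMA", tb_obs - tb_ana)):
        stats[name] = {
            "mean": dep.mean(),
            "stdv": dep.std(),
            "rms": np.sqrt(np.mean(dep ** 2)),
        }
    return stats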
Figure 8a shows the histogram of the difference between the observed Tb and the simulated Tb based on the background, while the difference between the observations and the assimilation analysis is provided in Figure 8b. The residual distribution of OMB is more spread out, from −8 to 8 K, while the peak of the OMA residual distribution is closer to 0 K. Figure 9 shows the analysis increment of water vapor around the typhoon center (…7°N, 125.5°E). The positive water vapor increment is distributed to the east and south of the typhoon center (the black dot in Figure 9). Moreover, a high water vapor increment is also observed near the Korean Peninsula. This illustrates that the assimilation of the FY-3D MWHS-2 satellite data increases the humidity around Lekima and may improve the analysis of the water vapor conditions. From the vertical profiles of the mean increment of different model variables over all the grid points on each level (Figure 10), notable analysis increments (the analysis minus the background) are observed for the humidity, the pressure, the temperature, and the wind velocity. In addition, the variability of temperature and pressure is most significant in the lower layers of the model. These curves show that, after the assimilation of Tb, the temperature and humidity profiles are directly affected through the RTTOV observation operator, while the other variables are also effectively adjusted through the background error covariance.
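The Figure 10-style profiles amount to horizontally averaging the analysis-minus-background field on each model level; a minimal sketch, assuming regular (level, lat, lon) arrays for each variable:

import numpy as np

def mean_increment_profile(analysis, background):
    """Mean analysis increment per model level.
    analysis, background : (n_levels, n_lat, n_lon) arrays of one variable.
    Returns an (n_levels,) profile of horizontally averaged increments."""
    return (analysis - background).mean(axis=(1, 2))

# Hypothetical usage for the profiles in Figure 10:
# q_profile = mean_increment_profile(q_ana, q_bkg)   # humidity
# t_profile = mean_increment_profile(t_ana, t_bkg)   # temperature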
Finally, the circulation structures related to the typhoon evolution are analyzed to further explore the improvements in forecast accuracy, valid at 0600 UTC 8 August 2019. In Figure 11a,b, the geopotential height at 500 hPa is shown for the background and the analysis. The subtropical high in the analysis is located further east than in the background. The southeasterly airflow on the southwest side of the subtropical high contributes to the northwestward movement of Lekima, which is consistent with the simulated track in Figure 12d. Figure 12a shows the average track error of the simulated typhoon from exp4 before and after the assimilation. The error is clearly reduced after the assimilation, especially after 24 h. This is consistent with the track forecast in Figure 12d, in which the track from the experiment without data assimilation shows a southwest bias. For the typhoon's intensity (Figure 12b,c), the errors of the MWS and the MSLP increase around 0-12 h and gradually decrease after 18 h in both experiments. The MWS and MSLP with assimilation are more consistent with the observations than those without data assimilation. It seems that assimilating the FY-3D MWHS-2 radiance data contributes more to the track than to the intensity of the typhoon system.
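The intensity errors in Figure 12b,c reduce to absolute differences between simulated and best-track values at matching times; a minimal sketch with hypothetical names:

import numpy as np

def intensity_errors(mslp_sim, mslp_obs, mws_sim, mws_obs):
    """Absolute MSLP (hPa) and MWS (m/s) errors at each 6 h forecast time,
    for comparing the runs with and without assimilation (Figure 12b,c)."""
    return np.abs(mslp_sim - mslp_obs), np.abs(mws_sim - mws_obs)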
Summary

In this study, the WRF model was applied to simulate the evolution of super typhoon Lekima (2019). Using the reanalysis data from NCEP, sensitivity experiments with different physical parameterization schemes were carried out. After the sensitivity experiments, the MWHS-2 radiance data onboard the FY-3D satellite were assimilated based on one parameterization scheme using the 3DVar method for the Lekima case. Our conclusions are as follows:
(1) The microphysical process appeared to have the greatest influence on the typhoon track simulation. The best simulation results were found using the WSM6 parameterization scheme, in combination with RMM5 for the near-surface layer process, UNLS for the land surface process, and YSU for the boundary layer process. The results also showed that the other physical parameterization schemes had only a slight effect on the typhoon's intensity simulation.
(2) In the data assimilation experiment, the simulated Tb based on the analysis better matched the observed Tb, and both the mean and the standard deviation of OMA decreased compared with OMB. The track and intensity from the data assimilation experiment were both more consistent with the observations for super typhoon Lekima.
In this study, only one typhoon case was studied, with physical parameterization sensitivity experiments in the framework of the three-dimensional variational data assimilation method. Future research on data assimilation could be carried out using various assimilation methods, such as cyclic assimilation, four-dimensional variational assimilation [37], hybrid variational assimilation [25], or all-sky assimilation using a symmetric observation bias model.
Speaker Distance Estimation in Enclosures From Single-Channel Audio

Distance estimation from audio plays a crucial role in various applications, such as acoustic scene analysis, sound source localization, and room modeling. Most studies predominantly center on a classification approach, where distances are discretized into distinct categories, enabling smoother model training and higher accuracy but imposing restrictions on the precision of the obtained sound source position. In this direction, in this paper we propose a novel approach for continuous distance estimation from audio signals using a convolutional recurrent neural network with an attention module. The attention mechanism enables the model to focus on relevant temporal and spectral features, enhancing its ability to capture fine-grained distance-related information. To evaluate the effectiveness of the proposed method, we conduct extensive experiments using audio recordings in controlled environments with three levels of realism (synthetic room impulse responses, measured responses convolved with speech, and real recordings) on four datasets (our synthetic dataset, QMULTIMIT, VoiceHome-2, and STARSS23). Experimental results show that the model achieves an absolute error of 0.11 meters in a noiseless synthetic scenario. Moreover, the results show an absolute error of about 1.30 meters in the hybrid scenario. The algorithm's performance in the real scenario, where unpredictable environmental factors and noise are prevalent, yields an absolute error of approximately 0.50 meters.

I. INTRODUCTION

Source distance estimation (SDE) refers to the task of estimating the interspace between a microphone and a sound source. It is very often performed in conjunction with direction of arrival (DoA) estimation, in which only the direction information about the source position is obtained. Both tasks are useful in many practical applications, including increasing the robustness of automatic speech recognition [1], enhancing the performance of acoustic echo cancellers [2], and autonomous robotics [3], [4]. Although both DoA and source distance are estimated using multi-channel audio in most practical scenarios, the latter has been largely under-researched [5]. Firstly, source distance estimation is widely regarded as the more difficult task, due to distance cues vanishing with the increased space between the sound source and the receiver. (M. Neri and M. Carli are with the Department of Industrial, Electronic, and Mechanical Engineering, Roma Tre University, Rome, Italy; e-mail: michael.neri<EMAIL_ADDRESS>. A. Politis, D. Krause, and T. Virtanen are with the Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland; e-mail: archontis.politis@tuni.fi, daniel.krause@tuni.fi, tuomas.virtanen@tuni.fi.) Secondly, DoA offers sufficient information in many downstream spatial filtering tasks. However, many applications such as source separation, acoustic monitoring, and context-aware devices would still benefit from full information about the sound source position, hence the need for further investigation of source distance estimation (SDE).
Most methods for both DoA and distance estimation rely on arrays with more than two microphones [6]. Multichannel data allow for exploiting spatial cues such as inter-channel time differences (ITDs) and inter-channel level differences (ILDs), which provide information for efficient DoA estimation and positively affect distance estimation as well [4]. However, using multiple microphones poses certain limitations in terms of budget and physical portability. To tackle this problem, some studies investigated using binaural recordings, decreasing the number of channels to two by exploiting human hearing cues [7], [8]. However, the simplest scenario of estimating distance from a single microphone has been largely under-researched [9]. Moreover, the vast majority of studies focus on a classification approach, in which the distance is discretized into a set of disjunctive categories, e.g., "far" and "near", allowing for easier model training and higher accuracy [10], [11]. However, using predefined categories does not allow for continuous estimation, which limits the precision of the obtained sound source position.

In this work, we propose several novel solutions to tackle the problem of source distance estimation. Firstly, we define the task as a regression problem, differently from most state-of-the-art works that focus on classification-based methods. We propose a novel approach to distance estimation from single-channel audio signals in reverberant environments, overcoming the need for complex microphone arrays. In more detail, the proposed model is a convolutional recurrent neural network (CRNN) with an attention module, which is responsible for learning a time-frequency attention map. By doing so, it is possible to emphasize the magnitude- and phase-related features that are most informative for sound source distance estimation. The effectiveness of our approach is extensively tested in numerous acoustic scenarios, obtained by simulations with randomized configurations of room shapes, materials, and locations of the microphone and the speaker. In addition, tests have been carried out on real reverberant speech recordings, captured directly or emulated with real room impulse responses (RIRs).

The remainder of the manuscript is organized as follows. Section II provides a summary of the state of the art. Section III describes the proposed method, whereas the performance evaluation setup is described in Section IV. Section V details the experimental results of the proposed approach in three acoustic scenarios. Finally, Section VI includes an overall discussion of the work, and Section VII draws the conclusions.
II. RELATED WORKS

SDE involves determining the distance between a sound source and the receiver. Compared to DoA estimation, SDE has received significantly less attention and is generally considered more challenging. This is primarily because the accuracy of distance estimation declines rapidly for the small-sized arrays commonly used in practice, even for relatively short distances from the center of the array (up to 3-4 m). Several factors contribute to this phenomenon, including: (a) the decrease in direct-to-reverberant energy ratio (DRR) and signal-to-noise ratio (SNR) as the source distance increases; (b) the reduction in inter-channel level differences and the near-constant inter-channel time differences as the source transitions from a spherical wave to a plane wave captured by the array.

The majority of studies related to SDE show results in conjunction with the DoA estimation task. Extensive research has been conducted on this subject for various acoustic systems that commonly use distributed microphone arrays. These systems encompass a range of setups, such as intelligent loudspeakers [12], spherical microphones [13], triangular configurations [14], and arrays of acoustic sensors [15]. Simpler audio formats, including binaural recordings, have been investigated to a much lesser extent, with few studies using classical machine learning methods [4], [16] and very limited research related to deep learning [7], [8].

Regarding SDE modeling in isolation, most of the research has focused on parametric approaches and manually crafted features. These methods often utilize information such as the DRR [17], the RIR [18], or signal statistics and binaural cues such as the interchannel intensity difference (IID) [4]. In some cases, classical machine learning techniques have been employed to leverage statistical features. For instance, a study by Brendel et al. estimated the coherent-to-diffuse power ratio to determine the source-microphone distance via Gaussian mixture models (GMMs) [5]. Vesa utilized GMMs trained with magnitude squared coherence (MSC) features to incorporate information about channel correlation [19], [20]. In [21], the authors used MSC on top of other features to train classifiers with methods such as K-nearest neighbours (KNN) or linear discriminant analysis (LDA). Georganti et al. introduced the binaural signal magnitude difference standard deviation (BSMD-STD) and trained GMMs and support vector machines (SVMs) using this feature [22]. Most of these methods rely on compound algorithms that require careful tuning to adapt to varying acoustic conditions.

Until now, the exploration of source distance estimation using deep neural networks (DNNs) has been quite limited. Yiwere et al. employed an approach inspired by image classification, utilizing CRNNs trained on log-mel spectrograms to classify three different distances in three distinct rooms [23]. Although the models demonstrated promising outcomes for data within the same environment, their performance significantly deteriorated when dealing with recordings from different rooms.
In another endeavor, Sobhdel et al. introduced relation networks to address this challenge through few-shot learning, which exhibited improvements over conventional convolutional neural networks (CNNs) [24]. Both studies conducted tests within a limited range of specific distances, up to a close proximity of 3-4 meters at most. In [8], the authors conducted experiments on data covering distances of up to 8 m; however, the model classified them into two binary classes denoted as "far" and "near".

Additionally, only a few works have addressed the topic of speaker distance estimation using single-channel audio. One of the first works employed low-level features such as linear predictive coding (LPC) coefficients and the skewness and kurtosis of the spectrum to classify the distance of a speaker [11]. Venkatesan et al. proposed both monaural and binaural features to train GMMs and SVMs [25]. Regarding DNN approaches, Patterson et al. classified "far" and "near" speech in order to perform sound source separation from single-channel audio [9].

To the best of our knowledge, single-channel source distance estimation has scarcely been addressed as a regression problem, with classification approaches prioritized to ease model training. In addition, there are very few studies investigating the use of DNNs for this task. For these reasons, a learning-based approach for continuous estimation of the distance of the speaker is proposed. A first step towards continuous sound source distance estimation was made in our preliminary study [26], where a CRNN was defined for estimating static speaker distance in simulated reverberant environments from a single omnidirectional microphone. However, that study was evaluated only on simulations, while in this work various degrees of realism are investigated, from simulated RIRs, to synthetic data with measured RIRs, to fully real recordings with distance-annotated sources. Hence, the potential of the method in a real-world scenario is demonstrated. In addition, the preliminary study was based on a simpler architecture, without investigating which architectural components contributed the most to the SDE, whereas here the architecture is refined and enhanced, with better overall performance and specific choices investigated in an ablation study.

To cope with these limitations, the contributions of this work are as follows:
• a major improvement of the results of the learning-based approach, i.e., a CRNN, proposed in our preliminary study [26], which simultaneously provides temporal frame-wise and utterance-wise distance estimation of a static audio source; in addition, an in-depth study of the model architecture is detailed;
• the definition of an attention module that estimates the most significant time-frequency patterns from the input features for speaker distance estimation;
• experiments conducted on synthetic data, in both noiseless and noisy scenarios, to analyze the response of the proposed approach in controlled environments; further tests of the CRNN have been conducted on a constructed hybrid dataset, i.e., measured RIRs convolved with anechoic speech, and on two real recording datasets, demonstrating the generalization capabilities of the proposed approach.
III. PROPOSED METHOD

In this section, a description of the acoustic features for source distance estimation is provided. To process the temporal, spatial, and spectral characteristics of these features, a CRNN is employed for the experiments. This type of model has shown good results in many studies on sound event localization and detection (SELD) tasks [27], [28]. In addition, an attention module is introduced to learn an attention map on the time-frequency audio representation. The overall architecture is depicted in Figure 1.

A. Acoustic features extraction

All the operations on the audio files are performed at 16 kHz. This sampling frequency is selected because the speech spectrum is mostly contained in the range 0-8 kHz [29]. In addition, a lower sampling frequency yields a lower number of samples, reducing the computational complexity of feature extraction and distance estimation. Initially, a preprocessing stage extracts the complex STFT, STFT{x} ∈ C^{T×F}, from the single-channel audio signal x ∈ R^{1×L}, where T is the number of time frames, F the number of frequency bins, and L the number of samples. This transformation is computed using a Hann window of length 32 ms with 50% overlap. Subsequently, the magnitude (|STFT{x}| ∈ R^{T×F}) and phase (∠STFT{x} ∈ R^{T×F}) components of the STFT are computed from the complex matrix. Sine and cosine maps of the phase spectrogram are computed by applying sin(·) and cos(·) element-wise, since these features provide a smoother continuous representation of the raw phase information. The use of the phase spectrogram has been adopted from contemporary research on multichannel source separation [30], learning-based localization [31], and speech enhancement [32], as phase information contains cues regarding the acoustic properties of the environment in which the sound propagates [33]. Tests conducted using the raw complex spectrogram in our scenario, i.e., two separate branches processing the real and imaginary parts, yielded unsatisfactory training performance. Finally, the magnitude of the STFT and the sine and cosine maps are stacked into a T × F × 3 tensor. This representation is then fed into the attention module and the convolutional layers for further processing and analysis.
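As an illustration of this feature pipeline, a minimal PyTorch sketch follows; the tensor layout and padding behaviour are assumptions, while the 32 ms Hann window with 50% overlap matches the description above.

import torch

def extract_features(x, sr=16000):
    """Hedged sketch of the T x F x 3 input features: STFT magnitude plus
    sine and cosine of the phase. Window: 32 ms Hann with 50% overlap."""
    n_fft = int(0.032 * sr)          # 512 samples at 16 kHz
    hop = n_fft // 2                 # 50% overlap
    spec = torch.stft(x, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft),
                      return_complex=True)          # shape (F, T)
    mag = spec.abs()
    phase = torch.angle(spec)
    feats = torch.stack((mag, torch.sin(phase), torch.cos(phase)), dim=-1)
    return feats.transpose(0, 1)     # shape (T, F, 3)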
B. Attention Module

One of the main contributions of this work is the definition of an attention module, which computes an attention map H ∈ R+^{T×F×3} from the audio features. The objective of this learned matrix is to emphasize the regions of the features that are most informative for the estimation of the distance. Specifically, this module is the function f_ATT : R^{T×F×3} → R+^{T×F×3}. Its structure is composed of two convolutional blocks, having 16 and 64 3 × 3 filters, respectively. Then, a 1 × 1 convolutional layer with three filters, followed by a sigmoid activation, is used to yield the T × F × 3 attention map. Finally, the output acoustic features X ∈ R^{T×F×3} are obtained by element-wise multiplication (⊗) between the input acoustic features and the attention map, as X = H ⊗ X̃, where X̃ denotes the input feature tensor. Examples of noiseless and noisy spectrograms and attention maps are depicted in Figure 2 and Figure 3, respectively. It is worth highlighting how the attention module focuses differently on the parts of the signal where the speech is most likely to stand out from the noise, or where the characteristics of the speech are still recognizable. In fact, the attention map in the noiseless case is evenly distributed across the entire frequency range, since there is no interfering noise.
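A hedged PyTorch sketch of the module described above follows; channel-first tensors are assumed, and the activations inside the two 3 × 3 convolutional blocks are an assumption (ELU, as used elsewhere in the model), since the text does not specify them.

import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Sketch of f_ATT: two conv blocks (16 and 64 3x3 filters), a 1x1
    conv with 3 filters, and a sigmoid; the resulting map gates the input
    features. Channel-first tensors of shape (batch, 3, T, F) assumed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ELU(),   # ELU assumed
            nn.Conv2d(16, 64, kernel_size=3, padding=1), nn.ELU(),  # ELU assumed
            nn.Conv2d(64, 3, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.net(x)   # element-wise gating of the features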
C. Convolutional Layers

The architecture employs three convolutional blocks for feature extraction. The structure of each block involves a 2D convolutional layer comprising P_i 1 × 3 filters, i.e., along the frequency axis, with values of 8, 32, and 128 assigned to the respective layers. We denote these filters as frequency kernels, whereas 3 × 1 filters are named time kernels. Square kernels, known for their capability to capture time-frequency patterns, are commonly used in convolutional layers applied to spectrograms due to their effectiveness in capturing local patterns and structures. In this work, the proposed model adopts rectangular filters, and temporal information is modeled by recurrent layers at the end of the model. In fact, rectangular filters can be more parameter-efficient than square kernels: since they have fewer parameters than square kernels of the same receptive field size along the frequency axis, they can lead to a more compact model, making training and inference more computationally efficient and potentially reducing the risk of overfitting, especially when working with limited data. Following each convolutional layer, a batch normalization [34] step is applied, along with max and average pooling operations along the frequency dimension, the results of which are summed. The activation function utilized after each convolutional layer is the exponential linear unit (ELU) [35], defined as

ELU(z) = z for z > 0 and ELU(z) = α(e^z − 1) otherwise,

where α is a coefficient that regularizes the saturation of negative values. Each layer employs a specific frequency pooling rate MP_i, with values of 8, 8, and 2 assigned to the respective layers.

D. Recurrent Layers

To process the feature maps from the convolutional layers, two bi-directional GRU layers are utilized with tanh(·) as the activation function. These layers have exhibited promising results in audio and speech processing tasks, demonstrating parameter efficiency compared to long short-term memory (LSTM) networks [36]. The output of the CNN, with shape T × 2 × P, is stacked along the channel dimension to produce a T × Q matrix to be fed to the recurrent layers. In the proposed configuration, the extraction of reverberation-related information primarily relies on integrating information over time with the recurrent layers. Within this implementation, two bi-directional GRUs with Q = 2P = 128 neurons per time frame are employed. Then, to predict the distance, three fully connected layers are employed, each performing an independent mapping of every time frame. The initial linear layer projects the time-wise features from the last GRU onto a matrix of dimensions T × R, where R = 128. Subsequently, the second linear layer independently maps each time frame of the T × R matrix onto a vector of size T × 1, denoted as the time-wise distance estimation ŷ_t; this vector represents the distance estimation for each time frame. Finally, the last fully connected layer performs regression to estimate the predicted utterance-wise distance ŷ ∈ R.

E. Loss function

The mean squared error (MSE) loss is used to train the DNN system. Let y ∈ R be the true distance of a static sound source, and let y ∈ R^{T×1} be the vector of frame-wise ground-truth distances. The loss used in the training phase for a single sample is

L = (ŷ − y)^2 + (1/T) Σ_{t=1}^{T} (ŷ_t − y_t)^2,

where the loss is averaged across the batch dimension for the backpropagation algorithm. Thanks to this loss, the model predicts a distance for each time bin and, from this information, a single-valued distance. Having two loss terms in a static-source scenario operates as a regularization, since it forces the proposed approach to return coherent time-wise and single-distance estimations. However, in the context of dynamic sound sources, only the frame-wise loss is required.

F. Metrics

The performance evaluation of our approach utilizes the mean absolute error (MAE), L_1 = (1/N) Σ_{n=1}^{N} |ŷ_n − y_n|, as the performance measure over the entire test dataset, where y_n ∈ R and ŷ_n ∈ R are the ground-truth and predicted distances. Additionally, the performance is assessed by calculating the MAE within different distance ranges. This analysis allows us to quantify the relative error of the model with respect to the source distance. We define the relative MAE (rL_1), which includes the real speaker distance in the evaluation, as

rL_1 = (1/N) Σ_{n=1}^{N} |ŷ_n − y_n| / y_n,

where N is the number of test samples. For the sake of clarity and brevity, the MSE has not been considered in the performance evaluation.
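A sketch of this combined loss for a static source follows; the equal weighting of the frame-wise and utterance-wise terms is an assumption, consistent with the reconstruction above.

import torch

def sde_loss(y_hat_frames, y_hat, y_true):
    """Sketch of the training loss: frame-wise MSE against a constant
    ground-truth distance plus utterance-wise squared error.
    y_hat_frames : (T,) frame-wise predictions; y_hat : scalar prediction;
    y_true : scalar (float) true distance of the static source."""
    y_frames = torch.full_like(y_hat_frames, y_true)   # static source
    frame_term = torch.mean((y_hat_frames - y_frames) ** 2)
    utterance_term = (y_hat - y_true) ** 2
    return utterance_term + frame_term   # assumed equal weighting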
IV. PERFORMANCE ASSESSMENT

This section describes how the performance assessment of the proposed approach has been carried out. To validate the work, three levels of realism have been addressed in the scope of speaker distance estimation:
• Synthetic: simulated RIRs from an image-source room simulator are convolved with anechoic speech;
• Hybrid: measured RIRs are convolved with anechoic speech;
• Real: on-field reverberant speech recordings.
Figure 4 depicts the histograms of distances in each dataset employed in the experiments.

A. Synthetic Dataset

The dataset used for the experiments follows the same setup as in [37]. Briefly, anechoic speech recordings obtained from the TIMIT dataset [38] are convolved with simulated omnidirectional RIRs from an image-source room simulator for shoebox geometries [39]. This simulator allows for frequency-dependent wall absorption and directional encoding of image sources in 5th-order Ambisonics format. The elevation range between the source and the receiver spanned from −35° to 35°. To compile a list of materials and their respective absorption coefficients for each surface type (ceiling, floor, and wall), we refer to widely used acoustical engineering tables [40]. For each unique simulated room with its room-source-distance configuration, a random material is assigned to each surface, resulting in 2912 possible material combinations. Compared to directly randomizing the target RT60 for each simulated room, this randomization approach avoids matching unnatural reverberation times to specific room volumes (e.g., a very long RT60 for a small room) and ensures a more natural distribution of reverberation times. The final distribution of reverberation times exhibits a median, 10th percentile, and 90th percentile of 0.83 s, 0.42 s, and 2.38 s, respectively. Furthermore, the positions of the sound sources are uniformly distributed in terms of the azimuth angle relative to the receiver. The experiments include 2500 audio files of 10 s duration at 16 kHz, in compliance with the speech dataset. In the evaluation, 5-fold cross-validation is used, where 1500, 500, and 500 files are assigned to training, validation, and testing in each fold. To assess the performance of the proposed approach under different noise levels, real background noise is added to the synthetic dataset. Specifically, environmental noise recordings from the WHAM! dataset [41], captured in various urban settings such as restaurants, cafes, and bars, are employed. Random segments of the same length as the simulated speech recordings are injected, mirroring the splits of the WHAM! dataset, at several SNR levels ([50, 40, 30, 20, 10, 5, 0] dB).
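The noise injection can be sketched as follows, assuming equal-length speech and noise segments and scaling the WHAM! segment to the target SNR before mixing; the function name is hypothetical.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale a noise segment (same length as the speech) so that the
    speech-to-noise power ratio matches snr_db, then mix; this is an
    assumed procedure for the WHAM! noise injection described above."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise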
B. Hybrid Dataset - QMULTIMIT

The RIRs used in the hybrid dataset, contained in the C4DM RIR database [42], were measured in three rooms located at Queen Mary, University of London, London, UK. A Genelec 8250A loudspeaker was employed as the source for measuring all IRs, while each receiver position was measured using both an omnidirectional DPA 4006 and a B-format Soundfield SPS422B. A collection of 130 RIRs was captured in a classroom with dimensions 7.5 × 9 × 3.5 m (236 m³), featuring reflective surfaces such as a linoleum floor, painted plaster walls and ceiling, and a sizable whiteboard. The second room, denoted as the Octagon, is a Victorian structure completed in 1888. Presently serving as a conference venue, its walls still showcase book-lined interiors, complemented by a wooden floor and plaster ceiling. As the name implies, this room features eight walls, each 7.5 m long, and a domed ceiling towering 21 m above the floor, resulting in an estimated volume of 9500 m³. In the center of the room, a total of 169 RIRs were measured. The third room is the Great Hall, which has a seating capacity of approximately 800. It encompasses a stage and seating sections both on the floor and on a balcony. To capture the audio, the microphones were positioned within the cleared seating area on the floor, spanning approximately 23 × 16 m. The microphone placements mirror the layout used for the Octagon, encompassing 169 RIRs over a 12 × 12 m region. Following the same setup as the synthetic dataset, anechoic speech recordings from TIMIT [38] are convolved with the measured RIRs and real background noises from WHAM! [41] are added, generating the hybrid QMULTIMIT dataset. For each RIR, 5 random speech recordings are selected from the TIMIT dataset, yielding 2340 audio files. The RIRs are randomly divided into training, validation, and testing splits following a 70-10-20 percentage ratio. Finally, the MAE errors averaged across all the distance bins are provided.

C. Real Datasets

VoiceHome-2 [43]. This dataset is specifically made for distant speech processing applications in domestic environments. It consists of short commands for smart home devices in French, collected in reverberant conditions and uttered by twelve native French speakers facing the microphone. The data were recorded in twelve different rooms across four houses, with fully annotated geometry, under quiet or noisy conditions. More precisely, VoiceHome-2 includes everyday noise sources (with no annotations of their SNRs) such as competing talkers, TV/radio, footsteps, doors, kitchenware, and electrical appliances. Five speaker positions per room, comprising standing and sitting postures, were selected to cover a broad range of angles and distances with respect to the microphone array, which maintained a single, fixed position throughout all the room recordings. The sound was captured by a microphone array consisting of eight micro-electromechanical systems (MEMS) microphones placed near the corner of a cubic baffle. For this study, only the first channel has been extracted. In total, VoiceHome-2 encompasses 752 audio recordings, each lasting approximately 10 seconds, across the twelve rooms and the five noise scenes. The dataset is randomly split into training, validation, and testing sets using a 70-10-20 percentage ratio for the experiments.

STARSS22 [44]. This dataset includes recordings of human interaction scenes with spatio-temporal event annotations for thirteen target classes, primarily focusing on speech. It is part of the DCASE Challenge 2022 Task 3 development set. The recordings were made at two sites, Tampere University in Finland and Sony headquarters in Japan, in a total of eleven rooms, maintaining a consistent organization and procedure regarding equipment, recording, and annotations. The dataset utilizes the Eigenmike spherical microphone array, offering two spatial formats; one format involves a tetrahedral subarray of omnidirectional microphones mounted on a rigid spherical baffle. This corpus is more challenging than the other datasets due to the natural movement and orientation of multiple speakers during discussions, as well as the presence of intentional and unintentional sound events other than speech. It also contains diffuse and directional ambient noise at significant levels. Audio data from a single microphone of the Eigenmike array have been processed, extracting 2934 two-second single-speech excerpts that do not overlap with other annotated directional sources. As done with the other datasets, STARSS22 is split into training, validation, and testing sets using a 70-10-20 percentage ratio.
It is worth noticing that, as can be seen in Figure 4, the distances in the real datasets are distributed differently from those in the synthetic and hybrid ones. The reasons for this behavior are as follows:
• in many real-world scenarios, as in STARSS23 [45], sound sources are not always at a fixed distance from the recording device;
• different recording environments can introduce variations in the speaker distance distribution. For example, in a controlled studio setting, speakers may be positioned at specific distances from the microphone to achieve desired sound characteristics. In contrast, field recordings or recordings made in everyday settings can exhibit a wider range of distances due to the uncontrollable nature of the environment. In this context, VoiceHome-2 [43] was recorded in domestic environments, whereas STARSS23 [44] was collected in office-like environments;
• audio datasets are often curated to suit specific applications or scenarios. For instance, a dataset focused on speaker recognition in far-field scenarios may deliberately include more examples with distant speakers to simulate real-world challenges. On the other hand, a dataset for speech enhancement in close-proximity situations may prioritize examples with close speaker distances. VoiceHome-2 has been designed for enhancing distant-microphone speech, whereas STARSS23 focuses on SELD, yielding dissimilar distance distributions.
In accordance with the distributions of distances in the real scenarios, the distance bins used are {[1, 2), [2, 3), [3, 4.5)} and {[1, 2), [2, 2.5), [2.5, 3)} meters for VoiceHome-2 and STARSS22, respectively. The final MAE errors are averaged using 70-10-20 training, validation, and testing splits, respectively.

V. EXPERIMENTAL RESULTS

In this section, the experimental results are shown for each realism scenario, as detailed in Section IV. First, the proposed architecture is tested on the synthetic dataset, both in noiseless and noisy scenarios, for the selection of the hyperparameters. Next, the performance of the approach is evaluated on hybrid and real recordings by comparing the selected solution with different hyperparameter configurations. Finally, an ablation study is provided to demonstrate the effectiveness of the attention module in all scenarios.

A. Implementation details

For both training and fine-tuning on all scenarios, the model is trained for 60 epochs with a learning rate of 0.001 and a batch size of 16 samples. A scheduled reduction of the learning rate (to 80% of its value) is performed every 5 epochs when the MSE on the validation set does not improve. In this work, fine-tuning is carried out by training the model again, hence without random re-initialization of the weights.
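A minimal sketch of this training schedule follows; the optimizer (Adam) and the stand-in model are assumptions for illustration, while the epoch count, learning rate, batch size, and plateau-based 80% reduction match the description above.

import torch
import torch.nn as nn

model = nn.Linear(64, 1)                      # stand-in for the CRNN
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam assumed
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=5)
loss_fn = nn.MSELoss()

for epoch in range(60):
    x, y = torch.randn(16, 64), torch.randn(16, 1)   # dummy batch of 16
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    val_mse = loss.item()            # stand-in for the validation MSE
    scheduler.step(val_mse)          # reduce lr to 80% on plateau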
B. Results on noiseless synthetic data

The proposed approach efficiently estimates speaker distance with an average error of 11 cm in the noiseless scenario, as can be seen in Table I. Since there is no other published method that attempts regression-based SDE with a single microphone, for comparison purposes we present results on binaural SDE following the recently published work of [46]. The binaural estimation model is similar to the CRNN used herein; however, we modify it to include the attention operation proposed in this work for a fairer comparison. A similar simulator, range of acoustic conditions, and number of rooms were used in [46] as herein, and the same spectrogram and binaural features are used as in the original work. The binaural estimation results we obtain (86 cm) are, on average, better than the ones in [46] (151 cm), with the improvement most likely attributable to the attention layers. However, the most striking difference is that between the monophonic omnidirectional results (11 cm) and the binaural ones (86 cm). It seems that the complex frequency-, direction-, and orientation-dependent effects imposed by head-related transfer functions (HRTFs) make it harder for the model to associate spectro-temporal reverberation patterns with the source distance. However, a definitive conclusion on the differences between single-channel omnidirectional and binaural SDE requires further study.

An increasing trend of the errors with respect to the distance is notable. This behavior is expected due to the dominant influence of the late reverberant component compared to the direct and early-reflection components of the signal at long distances. These late reverberation cues exhibit statistical diffusion [47], meaning that short-term magnitudes and phases resemble noise-like characteristics. Consequently, extracting meaningful information from these dominant late reverberation cues may pose challenges for the model in effectively estimating speaker distance. Such behaviour is demonstrated in Figure 5. Considering that the balance between direct speech energy and early plus late reverberant energy is captured by the DRR, measured from the simulated RIRs, it is clear that the dominance of reverberation at low DRRs negatively impacts distance estimation. There seems to be an optimal balance where both direct sound and reverberation contribute to the estimation, after which the direct sound can start to mask reverberation-related cues at higher DRRs, with a subsequent small drop in performance. A closer investigation of distance estimation at very high DRRs, or at very small distances in the near field of the microphone, is left for future work.

Moreover, the results demonstrate that the GRU layers play a crucial role in the model's performance, likely contributing to the model's ability to capture sequential patterns and dependencies effectively. Additionally, the study found that using rectangular kernels, as opposed to square kernels, in combination with GRU layers improves the model's efficiency. In this scenario, the rectangular kernels are better at capturing the relevant patterns and features in the data, leading to more effective and efficient information processing within the model. This statement, however, does not hold when no GRU layers are present. In addition, it is worth noting that using a single GRU layer only slightly impacts the overall performance of the proposed approach, while approximately halving the number of learnable parameters.
C. Analysis of the impact of noise on synthetic data

To assess the quality of the predictions in relation to noise strength, seven SNR values were selected for training; more precisely, a separate model is trained from scratch for each SNR level. Table II reports the results, where a notable discrepancy between the noiseless and noisy scenarios becomes evident. This divergence is primarily attributed to the disruptive influence of background noise on the phase information [26], which has also been demonstrated in speech enhancement studies [48]. It is worth noting from Figure 6 that the performance of the proposed method remains consistent across all SNR levels for distances up to 6 meters. Beyond this distance, however, the error increases rapidly. This behavior can be attributed to the inverse-square relationship between distance and sound intensity, i.e., I_s ∝ 1/d². Due to this physical behavior, the direct sound and early distinct echoes exhibit energy levels similar to the late reverberant cues, hindering long-distance information.

D. Results on hybrid data

As with the synthetic dataset, five SNR values were selected to assess the performance of the proposed architecture, training a separate model from scratch for each SNR level. Table III shows the experimental results, highlighting the superiority of the chosen configuration. The notation [30, +∞) dB denotes the results of the model both in the noiseless case and with at least 30 dB of SNR. It is worth noting that, differently from the synthetic scenario, the impact of background noise is smaller even at low SNR. In fact, comparing Table II with Table III, it is evident that synthetic RIRs are more affected by noise at higher SNRs than measured ones. Interestingly, the use of only the sine and cosine maps yields poor performance at all SNR levels, whereas the STFT magnitude is essential for the task. This result agrees with the previous study [26], where the use of only sine and cosine features on noisy audio recordings was ineffective.

E. Results on real data

Table IV and Table V report the results on VoiceHome-2 [43] and STARSS23 [44], respectively. Following the same rationale as in the synthetic and hybrid scenarios, the selected configuration is compared with the alternative hyperparameter settings.

F. Ablation study of the attention module

To demonstrate the effectiveness of the attention module, an ablation study is performed on all the scenarios. First, the performance is assessed without the module. Then, instead of returning a T × F × 3 map, a spectrogram attention map of size T × F is learned by the module, and an element-wise multiplication is performed between the magnitude of the STFT and the attention map. These three modalities are analyzed in Table VI, which reports the errors for each bin with their confidence intervals. Predicting an attention map for each feature channel provides better distance estimation on average. Moreover, the results demonstrate that all the approaches perform similarly in the short range, up to 8 meters. Conversely, applying the attention map to each of the feature maps in the feature set produces better outcomes in the long range than the other two cases. When the speaker is far from the microphone, the learned attention maps enhance the feature set, facilitating the feature extraction of the convolutional layers. Indeed, as the distance between the speaker and the microphone increases, detecting these patterns becomes more challenging due to their reduced salience [47].
Moreover, an ablation study has also been carried out on the hybrid and real data, as can be seen in Table VII. In the hybrid case, the attention map yields the best performance when it is applied only to the STFT magnitude channel, highlighting the ineffectiveness of the phase features in this specific use case. Instead, the results demonstrate the superiority of the attention map applied to all the channels in the real scenario.

G. Cross-corpus generalization

Tests have been carried out in cross-corpus training-testing setups, e.g., synthetic-hybrid, synthetic-real, hybrid-real, and VoiceHome-STARSS. The model yields very large errors when no fine-tuning is performed, as can be seen in Table VIII. This behavior highlights the discrepancy of feature patterns among different acoustic scenarios, levels of acoustic realism, and distance distributions. If the model is fine-tuned to a different realistic scenario, the performance is slightly worse than when the model starts from random weights. The results for this situation are shown in Table IX.

VI. DISCUSSION

From the results of the noisy scenario on the synthetic dataset, it is important to highlight that even a minimal amount of noise severely corrupts phase-based features, which have been identified as the most critical information in our analysis of clean speech. For instance, the presence of direct sound and echo patterns, characterized by transients in the clean signal, becomes blurred over time due to noise and late reverberation, resulting in a loss of phase coherence across frequencies. This behavior, however, does not occur in the hybrid dataset, where a high SNR in the recordings does not correspond to a similar increase in estimation performance. That may be because the RIR recordings have a level of inherent measurement noise, which limits the effective SNR that can be achieved in the hybrid simulations.

The imposition of the loss in (3) is required for predicting a time-wise distance vector. Due to the lack of baselines and datasets in the literature, a single sound source distance value is assigned to each time bin to ease the distance tracking task. Generally, this characteristic of audio datasets is referred to as weak labels [49]. Without time-wise distance references, denoted as strong labels, the model encounters challenges in fine-tuning its predictions, decreasing its overall performance. This scenario has been studied in the literature for tasks that require a fine temporal resolution output, such as sound event detection (SED) [50] and SELD [51].

Furthermore, it is important to acknowledge that certain portions of the audio data encompass segments where speech information is absent or indiscernible. Consequently, this scarcity of informative speech content can considerably undermine the effectiveness and reliability of the predictors. In this direction, the proposed attention module can improve the ability of the model (Table VII) to identify the speech information that is relevant for the estimation of the distance. However, it is important to note that the attention module is learned by the model itself, without any direct supervision.
To address these limitations, a potential avenue for improvement emerges, centering on the generation of more comprehensive and fine-grained labels. By augmenting the dataset with strong labels that encode both speech activity and speaker distance, the model may acquire a better understanding of the room acoustics. In addition, this augmentation would enable the model to leverage additional contextual cues and refine its predictions, enhancing its performance in accurately estimating speaker distances and capturing the dynamics of speech activity.

Moreover, one of the key areas for improvement is the availability of larger datasets of real recordings with a greater number of rooms and various speaker-microphone configurations. A larger dataset would enable the model to learn more diverse and representative acoustic characteristics, leading to improved performance in distance estimation tasks. It could also improve the generalization ability of the approach, as it has been demonstrated that the performance of the proposed model depends on the nature of the audio recording (synthetic, hybrid, or real). Additionally, by including different room types and microphone placements, the model can better generalize across various real-world scenarios. Furthermore, the use of a transformer-based [52] approach could be explored, leveraging a larger amount of data. Transformer models have shown remarkable success in various natural language processing tasks and have the potential to capture complex patterns and dependencies in acoustic data. Exploiting transformer architectures could enhance the model's ability to estimate distances accurately.

Another possibility for future research is the integration of time-wise distance ground truth, as previously mentioned in the discussion. By considering temporal information in addition to spatial cues, the model could potentially estimate the distance of a sound source more accurately. This would provide valuable insights in scenarios where multiple sound sources are present. Estimating and tracking the distance of a moving source is an application of interest that is scarcely explored in the literature.
VII. CONCLUSIONS

This work has explored the task of speaker distance estimation in noisy and reverberant environments. Multiple configurations, in terms of kernel size and recurrent layers of the model, have been evaluated, motivating the proposed architecture. In fact, the use of rectangular filters across the frequency dimension and the presence of GRU layers yields the best performance in terms of distance errors. The experimental results obtained from the proposed model have demonstrated remarkable precision in scenarios where several types of RIRs are employed. In a noiseless synthetic scenario where RIRs have been generated with a room-source simulator, the model has achieved an absolute error of only 0.11 meters. With recorded RIRs, an absolute error of about 1.30 meters has been obtained. In the real scenario with on-field recordings, where unpredictable environmental factors and noise were prevalent, the model yielded an absolute error of approximately 0.50 meters. These results underscore the model's resilience and its capacity to effectively handle various realistic scenarios. Variations in performance across these scenarios can be attributed to differences in the distribution of acoustic parameters, such as the distance from the sound source. Analysis of moving sound sources in single-channel recordings will be carried out as future work.

Fig. 1. Proposed architecture for speaker distance estimation. First, acoustic features are extracted from the single-channel audio. In more detail, 3 maps (magnitude of the STFT, sine, and cosine of the STFT phase) are obtained with shape T × F, where T and F are the time and frequency bins, respectively. Then, the maps are stacked along the channel dimension, resulting in a feature tensor of size T × F × 3. To highlight the feature regions that are most informative for distance estimation, an attention map is learned from the three-channel tensor, which is then element-wise multiplied with the input feature tensor. The output is further processed by the convolutional layers with P_i 1 × 3 kernels, also denoted as frequency kernels, yielding a T × 2 × P tensor that is arranged in a T × Q matrix, where Q = 2P. Subsequently, the resulting matrix is analyzed by two GRU layers with Q neurons to model temporal patterns. Finally, the output from the recurrent layers, T × Q, is fed to three fully connected layers with R, 1, and 1 neurons, respectively, to map the features to the predicted distance ŷ.

Fig. 2. Example of spectrogram and attention map on a noiseless sample of the synthetic dataset with a speaker talking at 10 meters.

Fig. 6. Comparison between noisy and noiseless performance of the proposed approach on the synthetic dataset.

Table I. Hyperparameter selection on the synthetic dataset with clean speech. The gray row highlights the proposed approach.

Table III. Distance estimation errors for the QMULTIMIT hybrid dataset. The gray row highlights the proposed approach. All features are used if not specified otherwise.

This occurrence can be attributed to the limited size of the datasets, as the model overfits the training data. With a larger dataset, these outliers are expected to be mitigated, and the model's performance is likely to become even more reliable and precise. This observation underscores the potential for further advancement in distance estimation when working with more extensive datasets.
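The feature front-end described in the Fig. 1 caption, the STFT magnitude plus sine and cosine of the STFT phase stacked into a T × F × 3 tensor, can be sketched as follows. This is an illustrative reconstruction under assumed window parameters (nperseg = 512, 50% overlap), not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import stft

def distance_features(audio, fs=16000, nperseg=512, noverlap=256):
    """Return a (T, F, 3) tensor: |STFT|, sin(phase), cos(phase)."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
    Z = Z.T                                # (freq, time) -> (T, F)
    mag = np.abs(Z)
    phase = np.angle(Z)
    return np.stack([mag, np.sin(phase), np.cos(phase)], axis=-1)

x = np.random.randn(16000)                 # 1 s of dummy audio
print(distance_features(x).shape)          # roughly (63, 257, 3)
```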
Table IV. Distance estimation errors for the VoiceHome-2 dataset. The gray row highlights the proposed approach. All features are used if not specified otherwise.

Table V. Distance estimation errors for the STARSS23 dataset. The gray row highlights the proposed approach. All features are used if not specified otherwise.

Table VI. Ablation study of the attention map using frequency kernels on synthetic data with clean speech. The gray row highlights the proposed approach.

Table IX. Cross-dataset generalization tests with fine-tuning.
Morphological characterization of concrete aggregates by means of image analysis

Properties of fresh and hardened concrete are affected by the morphological characteristics of the aggregates. However, there is no established correlation between aggregate shape and concrete properties to be taken into account during the mix design process. Conventional aggregate shape measurement methods are subjective, which is why image analysis has recently been used to determine the morphological characteristics of particles. In this study, the morphological characteristics of coarse aggregates from two different sources are determined using both conventional methods and image analysis by means of Fourier descriptors. Mechanical properties of concrete prepared with coarse aggregates having different elongation indexes were evaluated. Results indicate that the aggregate shape has little influence on the concrete compressive strength and elastic modulus, while its influence on workability is significant.

Introduction

Stone aggregates are fundamental components of hydraulic concrete, asphaltic concrete, and granular bases. Their characteristics affect not only fresh and hardened concrete properties but also its cost. Aggregates occupy between 70% and 80% of the concrete volume, which is why it becomes important to know the aggregate properties and their influence on concrete properties, in order to improve not only their use and exploitation but also the concrete mix design process. Aggregate shape, texture, and gradation characteristics affect the workability, finishing, bleeding, and segregation of fresh concrete; they also affect the strength, stiffness, retraction, permeability, and durability of hardened concrete (Quiroga, 2003).
Cement is the most expensive component of concrete. Cement paste (cement and water) is the element filling the voids among aggregates; it provides workability to fresh concrete and creates adhesion or bonding among aggregates once the concrete has hardened. The percentage of voids in an aggregate mix is mainly related to its gradation, shape, and texture (De Larrard, 1999). The voids resulting from aggregate mixes with flat and elongated particles are generally higher than those from rounded particles; therefore, there will be a lower demand for cement paste with rounded aggregates in order to achieve a desired workability and to obtain adequate bonding among aggregates. The use of low paste dosages (within certain limits), apart from reducing costs, tends to create fewer difficulties in relation to cracking, heat of hydration, and durability. During recent decades, image analysis techniques have been used to assess the shape and texture of particles. From those techniques, shape and texture indexes have been obtained, which define such properties quantitatively. Design methods for concrete mixes do not consider the aggregate shape and texture in a direct way. For instance, in the case of the design method ACI 211.1 (1991), the shape effect is partially taken into account by involving the sand fineness modulus and the aggregates' compact unit mass; however, this method does not establish water amount variations due to such factors. This situation, together with limitations in some cities as far as aggregate supply is concerned, because of insufficient exploitation sources, high economic cost, and the environmental impacts of exploitation, makes it really necessary to accurately know the characteristics of aggregates and their influence on concrete properties, in order to explicitly and rationally consider such information in the concrete mix design process.

The main purpose of this study is the morphological characterization of aggregates used in hydraulic concrete mixes and the assessment of their influence on fresh and hardened concrete properties. This project comprises the physical and mechanical characterization of crushed aggregates, from different sources, used for concrete production in Bogota; the registration and interpretation of digital images of aggregates from each selected source to obtain their shape characteristics; the characterization of fresh concrete (settlement); and the mechanical characterization of hardened concrete (modulus of elasticity and compressive strength), to evaluate the influence of aggregate morphology on the fresh and hardened concrete properties under study.

2 Theoretical framework

2.1 Aggregate shape effect on concrete

Aggregate characteristics have a major effect on fresh and hardened concrete behavior. The main aggregate characteristics affecting concrete properties are shape and texture; gradation; absorption; mineralogy; compressive strength and elasticity modulus; maximum size; specific gravity; sulfate attack resistance; and hardness. Once the influence of each individual property on concrete behavior is determined, it shall be possible to design more cost-effective mixes.
In order to achieve an optimal concrete mix, some conditions are required, among others that the compactness of the concrete aggregate mixture is the maximum possible, with proper workability, in order to minimize the amount of cement paste required for aggregate bonding. Likewise, the concrete components are required to meet durability, workability, and strength specifications. The compactness assessment of a granular mix is a major problem for the handling and knowledge of concrete (Andersen and Johansen, 1991), and it depends on three fundamental parameters: aggregate size and gradation, shape (morphology and texture), and the compaction method of the concrete mix.

The higher the voids content, the higher the amount of cement paste required. It has been found that the requirement for cement paste is reduced by 4% to 5% when a cubic aggregate is used instead of elongated and flat aggregates (Hudson, 1998). Similarly, as the shape of the particles affects the compactness of the aggregate mix, it has a high incidence on the demand for cement paste, and therefore on concrete costs, also affecting workability and the mechanical properties of concrete. Aggregate shape and texture affect the compact unit mass, and therefore play an important role in the performance of mortar and fresh concrete, and they may indirectly affect concrete strength by affecting concrete pouring and compaction.

2.1.1 Aggregate shape effect on fresh concrete properties

Particle shape affects the workability and pouring of fresh concrete. The required amount of cement paste in the concrete mix is associated with the specific surface area of the aggregates. Particles having a lower specific surface area, such as cubic or rounded particles, require a lower amount of cement paste to achieve the same workability as a concrete mix made with higher specific surface area aggregates, such as those containing elongated and flat particles (Shilstone, 1999). In addition, flat, elongated, angular, and rough particles result in a high voids content when arranging themselves, thus demanding more sand in the mix to deliver concrete workability. When this happens, the fineness of the aggregate mix is higher, i.e., it has a higher specific surface area, and therefore the paste demand increases (Legg, 1998). Apart from having a direct effect on mix workability, flat, elongated, angular, and rough particles produce mixes that make surface finishing and compaction of the concrete difficult. Although surface texture affects workability, its influence is not as significant as that of gradation and aggregate shape (Galloway, 1994). The water demand of the mix is also influenced by the aggregates' shape and texture. A higher demand for water to obtain a given workability reduces strength and increases concrete bleeding.

2.1.2 Aggregate shape effect on hardened concrete properties

Aggregate shape and texture, apart from significantly affecting fresh concrete workability, have an effect on the strength and durability of hardened concrete. Texture affects the adhesion between the coarse particles and the mortar matrix, which is reflected in a strength variation. Rough particles tend to create higher strengths than smooth particles (Kaplan, 1959), especially flexural strength (Galloway, 1994). However, rough particles increase the water demand for a given workability, thus reducing strength and durability.
Durability is associated with a low water content, so angular, flat, and elongated aggregates negatively affect concrete durability since they increase the water demand. In the case of concrete pavements, flat particles located near the surface prevent the bleeding of the mortar water located under the particle, causing damage on the surface and consequently a decrease of the pavement's service life (Kosmatka, 1994). Alexander (1996) stated that aggregate shape and texture have a direct effect on strength, influencing the stress concentrations in the composite material and the micro cracks and cracks before and after failure. Mehta and Monteiro (1993) found that aggregate shape and texture also affect the shape of the concrete stress-strain curve, since aggregate morphology influences the appearance of micro cracks in the transition zone. The influence of aggregate shape on concrete strength is controversial. Although it has been observed that concretes manufactured with aggregates of different shapes and a given cement content can reach similar strengths, some authors state that concrete manufactured with rounded and cubic aggregates tends to produce higher strengths than concrete with elongated and flat ones (Shilstone, 1990).

In accordance with the previous statements, different specifications limit the content of elongated or flat particles in aggregates used for concrete production. For example, concrete specifications in Spain specify that the weight percentage of flat particles must be less than 35% of the total concrete weight. British regulations state that this percentage must be less than 40%. Specifications by the Instituto de Desarrollo Urbano de Bogota indicate that the maximum percentage of elongated and flat particles must be from 15% to 20%, depending on the kind of traffic.

Figure 1. Shape terminology for particles (Barret, 1980)
2.2 Particle shape analysis

Shape, angularity or roundness, and surface texture are three concepts related to morphological analysis that represent spatial geometrical variations at different dimension scales (Figure 1). Shape represents spatial variation at a large dimension scale; angularity or roundness represents variation at a medium dimension scale; and surface texture represents variation at a small dimension scale (Barret, 1980). Shape measurements on concrete aggregates have been widely conducted by means of manual methods employing elongation and flatness gauges. Such measurements are not only time consuming but also highly subjective. Because of their inefficiency and cost, such measurements tend not to be representative enough to achieve a statistically valid result (Maerz and Zhou, 1999). Technologies such as image processing, which may increase the accuracy and efficiency of such measurements, are therefore now being developed to measure aggregate shape so that they can be implemented for common use.

Aggregate flatness and elongation index

The flatness index of an aggregate is calculated as the weight percentage of particles in each fraction whose minimum dimension is smaller than a given fraction of the average aggregate dimension. The elongation index of an aggregate is obtained from the weight percentage of particles whose maximum dimension (length) is higher than a given fraction of the average dimension. For example, the regulation by the Instituto Nacional de Vias de Colombia INV E-230 (1998) defines the flatness index of an aggregate as the percentage by weight of particles whose minimum dimension (thickness) is smaller than 3/5 of the average aggregate dimension; the elongation index of an aggregate is defined as the percentage by weight of particles whose maximum dimension (length) is higher than 9/5 of the average aggregate dimension.

Fine aggregate voids content in loose condition

This method describes the determination of the voids content of an aggregate sample in loose condition. By comparing the voids content of different aggregates having the same gradation, an indication of the particle angularity, roundness, and texture can be obtained.

Aggregate shape and texture index

By employing this test method, a relative value can be obtained for aggregate shape and texture. This procedure has been used to estimate the effects of such characteristics on the compactness and strength of concrete mixes. The test consists of obtaining the voids content percentage for each material sample with different degrees of compaction, and then calculating the aggregate shape index, where I_a is the index value of a particle, and V_10 and V_50 are the voids content percentages of each sample of compacted material after 10 and 50 strokes per layer, respectively.
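The source text does not reproduce the index equation itself; the sketch below assumes the particle index relation of ASTM D3398, $I_a = 1.25\,V_{10} - 0.25\,V_{50} - 32.0$, which matches the description of $V_{10}$ and $V_{50}$ given above. Both the formula choice and the numbers in the example are assumptions.

```python
def particle_index(v10, v50):
    """Particle shape/texture index from compacted-state voids percentages.

    v10, v50: voids content (%) after 10 and 50 strokes per layer.
    The coefficients follow ASTM D3398 (an assumption here, since the
    source text does not reproduce the equation).
    """
    return 1.25 * v10 - 0.25 * v50 - 32.0

# Worked example: a rounded gravel might give V10 = 40%, V50 = 36%.
print(particle_index(40.0, 36.0))   # 9.0 -> low index, smoother particles
```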
Measurement methods of shape by means of image analysis

Digital image analysis and processing have been employed since the 1960s. After the development of computer technologies, the application of digital analysis techniques has been diversified to different areas. In civil engineering, image analysis techniques have been implemented for the detection and assessment of tensile stress, the establishment of structural conditions, sediment transport in stream flows, pollutant transport through porous media, soil deformation, granulometry, and particle shape analysis; and for granular media reconstruction and simulation. Several attempts have been made to characterize particle shape by using image analysis. Some methods have been centered on measuring the shape in general, while others have compared angularity to roundness, and also texture among different shapes (Barret, 1980). Historically, particle shape measurement in soil mechanics has been conducted by means of standard charts, useful to compare each particle individually. In the past decade, advanced imaging techniques such as x-ray scanning and magnetic resonance imaging have been used for the study of the structure of granular materials.

Aggregate shape characterization and its influence on the properties of fresh and hardened concrete are the main purpose of this research. Therefore, image analysis is conducted by using the Fourier descriptor method, which represents particle shapes properly.

Fourier analysis

The Fourier method R(θ) has been employed to determine some parameters related to particle shape. In the general theory of Fourier morphological analysis, the boundary or contour of a particle (Figure 2) is represented, in terms of a Fourier series (Bowman et al., 2000), by

$$R(\theta) = a_0 + \sum_{m=1}^{N}\left(a_m \cos m\theta + b_m \sin m\theta\right), \tag{2}$$

where $a_0$ is the average radius of the particle; the terms $(a_m \cos m\theta + b_m \sin m\theta)$ describe the characteristics of a specific particle boundary, where $a_m$ and $b_m$ represent magnitudes and $m$ represents frequency; and $R(\theta)$ is the particle radius for angle $\theta$. A particle shape is then described by means of three parameters defined from the amplitudes $A_m$, where $A_m^2 = a_m^2 + b_m^2$ and $n_1$, $n_2$, and $n_3$ are limit frequencies that separate shape, angularity, and texture, respectively. Wang et al. (2005) reported that for 25 mm diameter particles, frequencies up to m = 4 define shape; m between 5 and 25 defines angularity; and m > 25 defines texture.

A restriction exhibited by this method is the presence of concavities on the particle boundary, which provide two possible R(θ) values for a single angle, as depicted in Figure 2. Clark (1981) found that the Fourier descriptor method could be used to conduct a quantitative analysis of particle shapes. In this method, the boundary of the particle is traversed at constant speed in the complex plane. The step length is chosen so as to complete the particle boundary path in time 2π with $2^k$ steps. The complex function obtained, and its Fourier coefficients (Equations (3)-(6)), can be written as

$$U(m) = x_m + i\,y_m, \qquad a_n + i\,b_n = \frac{1}{M}\sum_{m=0}^{M-1} U(m)\, e^{-2\pi i\, n m / M},$$

where x and y are the particle boundary coordinates, N is the total number of descriptors, M is the total number of points describing the particle, n is the descriptor number, m is the index number of a point on the particle, a and b are the coefficients for each descriptor, and i is the imaginary unit. The shape index of each descriptor is calculated as the square root of the sum of the squares of the coefficients a and b, i.e., $\sqrt{a_n^2 + b_n^2}$.
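A minimal NumPy sketch of the descriptor computation reconstructed above: the closed boundary is resampled to $2^k$ equally spaced points (constant-speed traversal), transformed with the FFT, and the magnitudes $\sqrt{a_n^2 + b_n^2}$ are read off. The resampling scheme and the normalization by the mean-radius term are assumptions, since descriptor numbering and normalization conventions vary in the literature.

```python
import numpy as np

def fourier_descriptors(x, y, k=7):
    """Complex Fourier descriptors of a closed particle boundary.

    The boundary is resampled to 2**k points at constant speed
    (Clark, 1981), FFT'd, and each magnitude sqrt(a_n^2 + b_n^2) is
    normalized by the mean-radius term so the result is size-invariant.
    Taking index -1 (the e^{-i theta} term) as the elongation descriptor
    is an assumption, chosen to be consistent with the ranges in the text.
    """
    m = 2 ** k
    pts = np.column_stack([x, y])
    seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    t = np.linspace(0.0, s[-1], m, endpoint=False)    # constant-speed samples
    xr = np.interp(t, s, np.append(x, x[0]))
    yr = np.interp(t, s, np.append(y, y[0]))
    c = np.fft.fft(xr + 1j * yr) / m                  # coefficients a_n + i b_n
    d = np.abs(c)                                     # sqrt(a_n^2 + b_n^2)
    return d / d[1]                                   # radius-normalized

# A 2:1 ellipse: the elongation descriptor is close to (a-b)/(a+b) = 1/3,
# i.e. above the 0.17 threshold ("high elongation") quoted in the text.
th = np.linspace(0, 2 * np.pi, 400, endpoint=False)
d = fourier_descriptors(2.0 * np.cos(th), 1.0 * np.sin(th))
print(d[-1])   # roughly 0.33
```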
The total number of points selected to define the boundary determines the number of descriptors obtainable from the Fourier analysis. The complex nature of Equation (6) means that the low-order descriptors (n = +1 to +4 and n = −1 to −4) describe the general morphology of a particle and normally have higher coefficients, in accordance with the characteristics they describe. Descriptor values normally decrease towards descriptors +64 and −63 (Bowman et al., 2001). For this reason, the reviewed literature recommends 128 points to conduct the Fourier analysis. It has been found that the first 15 descriptors are usually enough to describe the particle shape at a general level (Sonka et al., 1993). In the case of sands, three terms have been found to be enough to quantify the approximate morphology of the particle. Descriptors n = 0, −1, −2, and −3 are related to the material shape and provide radius, elongation, triangular, and quadrangular characteristics, while descriptor n = +1 provides an asymmetry measure; n = +2 and +3 are second-order descriptors for elongation and triangularity. Such second-order descriptors provide additional information related to roundness at the particle corner edges, but not about the shape of the particle; e.g., a high +3 descriptor would indicate a triangular particle with rounded vertexes (Figure 3). Descriptors 5 to 25 reflect the angular nature of a particle, and those higher than 25 are related to surface texture (Wang et al., 2005). Typical descriptors of a particle are shown in Figure 3.

Materials

The materials selected for this study come from two sources: Guasca and Tunjuelo. In both cases the material is crushed, a process that affects the morphology of the particles. These sources were selected because they are the most commonly used by concrete producers in Bogota. The physical characteristics of these materials were determined at the laboratory and are shown in Table 1. Figures 4 and 5 show the coarse aggregate gradations, and Figure 6 shows the fine aggregate gradation. Dotted lines in these figures represent the limits given by the ASTM C-33 specifications for aggregates used in concrete production.

Mix design

In Bogota, most mix designs are based on the ACI 211.1 method. However, it has been found that few aggregates in Bogota meet the specifications of this method. The method proportions aggregates in accordance with the aggregate maximum size, the compact unit mass, and the sand fineness modulus. The required water amount is selected in accordance with the design settlement, the maximum aggregate size, and the content of entrapped air.

The ACI 211.1 method considers the aggregate dosage taking into account the fineness modulus (FM) of the sand, assuming that the aggregates used in the design fit the ACI specification limits. Sand from the Tunjuelo quarry has an FM value of 3.3, which is higher than the maximum value specified by ACI 211.1, and the granulometry of the sand and gravel used in this study exceeds the ACI specification. Consequently, the aggregate dosage is determined considering ideal gradation curves (Sánchez, 1996), whose purpose is to minimize the voids content of the mix without affecting concrete workability. The resulting aggregate combination was 45% gravel and 55% sand. Figure 7 shows the optimal aggregate combination together with different limits and ranges of ideal gradations.
Three concrete mix designs were prepared: mix design type I for conventional concrete using the Guasca aggregate, with a design compressive strength of 21 MPa and 7.5 cm of settlement; mix design type II using the Guasca aggregate, for a concrete with a design compressive strength of 21 MPa and 15 cm of settlement; and mix design type III using the Tunjuelo aggregate, for a concrete with a design compressive strength of 21 MPa and 15 cm of settlement. After the design and adjustment processes of the concrete mixes, the final material dosages are shown in Table 3.

Table 3. Mix design per m³ of concrete

Morphological characterization of the aggregates

The morphological characterization of the aggregates was performed using the manual measurement method for the elongation, flatness, and fractured faces indexes, and by means of image analysis.

The manual measurement of the indexes consists of separating the coarse aggregate by using a series of sieves, and then sorting the particles with elongation and flatness gauges. Such indexes are calculated as the weighted sum of the weights of the elongated or flat particles of each size fraction. The fractured faces test is subjective; it consists of quantifying the percentage of particles that have approximately 75% of fractured faces in each fraction. The percentage of fractured faces is calculated as the weighted sum of the results for each fraction.

The morphological analysis of the particles by means of images was conducted by using the Fourier descriptor method described in Section 2, which consists of traversing the particle boundary in the complex plane at constant speed. In this study, 128 descriptors (k = 7) were used, which, in accordance with the literature, is enough to properly rebuild the input image.

Images were obtained from photographs taken of groups of 20 particles with a 10-megapixel digital camera. Then, with the help of an interpretation and image analysis software developed for this project, the geometry of a particle sample was studied for each fraction of the coarse aggregate for each of the materials described in Table 2. A total of 200 randomly selected particles per fraction were analyzed. The process followed using the developed software can be summarized in two stages. The first stage is the conversion of the images into a binary format and the determination of the particle perimeter coordinates. The second stage consists of processing these coordinates to determine the Fourier descriptors following the method described in Section 2.2.2. In this way, quantitative information on the aggregates was available for further correlation with fresh and hardened concrete properties.

Evaluation of concrete properties

Aggregate particle shape may affect the properties of fresh and hardened concrete. Dosing concretes with different aggregates may affect workability and also mechanical properties. Aggregates with different shapes have different specific surface areas, which is why the amount of paste needed to achieve the same workability and strength may vary. The properties evaluated in this study to determine the influence of shape on concrete behavior are: workability, by means of the settlement test (NTC 396-ASTM C 143); compressive strength (NTC 673-ASTM C 39); and elasticity modulus (NTC 4025-ASTM C 469).

Morphologic characterization

The results of the morphologic characterization using manual methods are shown in Table 4.
It is clear that the elongation indexes are similar for both aggregates, although that of Guasca is slightly higher. A similar situation occurs in the case of fractured faces, where both aggregates show a high percentage for this property, as expected, since both are crushed aggregates.

Shape characterization by means of image analysis using the Fourier method was conducted for descriptors −1 (elongation), −2 (triangularity), and −3 (quadrature). For this purpose, ranges of values were identified for each shape descriptor over which an important variation of shape occurs. For descriptor −1 (elongation), it is found that when this descriptor has a value lower than 0.05 the elongation is low; for values between 0.05 and 0.17 the elongation is intermediate; and for values greater than 0.17 the elongation is high. In the same way, descriptor −2 shows low triangularity for values lower than 0.05, intermediate triangularity for values between 0.05 and 0.2, and high triangularity for values greater than 0.2. For quadrature, rounded particles are observed when descriptor −3 is lower than 0.02, and quadrangular particles for greater values. Such value ranges, along with the corresponding shape variations, are depicted in Figures 8, 9, and 10 for elongation (descriptor −1), triangularity (descriptor −2), and quadrature (descriptor −3), respectively. In the same way, Tables 5, 6, and 7 present the percentage of particles within each elongation, triangularity, and quadrature range for each type of aggregate.

Assuming that the high elongation range corresponds to the elongation index criterion of the manual method (Table 2), i.e., particles whose ratio of length to the average size of the fraction is greater than 9/5, the results obtained by both methods are comparable. Considering the errors associated with manual measurement and the sensitivity associated with the selection of ranges in the Fourier descriptor method, such results validate the application of the Fourier descriptor method for the elongation index.

In the case of descriptors −2 and −3 (triangularity and quadrature), there is no significant variation among the three types of aggregates coming from the Guasca quarry, as expected, since the manipulation of this material to obtain the G1, G2, and G3 samples was carried out only in terms of elongation. Furthermore, the quadrature and triangularity differences between the aggregates from Guasca and Tunjuelo are minimal, because both are crushed materials.

Table 6. Description of triangularity (Descriptor −2)
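The descriptor ranges above translate directly into a simple classification rule; a sketch with hypothetical function and variable names:

```python
def classify_particle(d1, d2, d3):
    """Map shape descriptors to the categories used in the study.

    d1: |descriptor -1| (elongation), d2: |descriptor -2| (triangularity),
    d3: |descriptor -3| (quadrature).  Thresholds follow Figures 8-10.
    """
    elongation = ("low" if d1 < 0.05 else
                  "intermediate" if d1 <= 0.17 else "high")
    triangularity = ("low" if d2 < 0.05 else
                     "intermediate" if d2 <= 0.2 else "high")
    quadrature = "rounded" if d3 < 0.02 else "quadrangular"
    return elongation, triangularity, quadrature

print(classify_particle(0.33, 0.04, 0.01))  # ('high', 'low', 'rounded')
```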
Evaluation of concrete properties

In order to study the effect of shape on the properties of fresh and hardened concrete, three mix designs were prepared, whose dosages for the natural aggregates are presented in Table 3. Mix type I is used to determine the shape effect when the same dosage is employed. For this purpose, and using the dosage corresponding to mix type I, cylinders were prepared for each kind of aggregate G1, G2, and G3. Mix type II is employed to evaluate the effect of morphology while keeping the same settlement and water/cement ratio; 9 cylinders were prepared for each kind of aggregate G1, G2, and G3. It is important to note that mixes of type II result in different dosages for each kind of aggregate G1, G2, and G3. Finally, mix type III is used to compare natural material coming from the two locations (G1 and Tunjuelo). Concrete cylinders were tested to evaluate compressive strength and elasticity modulus, and the corresponding results are presented below.

Shape effect - Same dosage - Mix type I

The purpose of studying this mix type is to evaluate the effect of the different kinds of coarse aggregate (G1, G2, and G3) on concrete workability and strength when the same material dosage is used, in accordance with mix design type I shown in Table 3. The average settlement results for each kind of aggregate are shown in Table 8. It is noticeable that the aggregate shape has a great influence on fresh concrete workability. For this mix type, the use of material with an elongation index of 100% (G2) yields a 43% settlement reduction compared to the settlement obtained with the natural aggregate (G1), while the use of an aggregate with a 0% elongation index (G3) yields a settlement increase of 32%.

4.2.2 Shape effect - Same settlement and water/cement ratio - Mix type II

These mixes are intended to find the amount of cement paste required in concrete mixes with the different kinds of coarse aggregate G1, G2, and G3, for a specific settlement (15 cm), keeping the water/cement ratio constant. In order to achieve the same settlement for the different aggregates, mix design type II (Table 3) is taken as a standard, and the coarse aggregate volume is modified until the desired settlement is achieved. This process yields dosages with different cement paste volumes. The final dosages by weight are indicated in Table 10, and the average settlement values for each kind of aggregate are shown in Table 11.

The presence of elongated particles involves a higher voids content, and therefore a higher amount of cement paste. According to the results in Table 10, the mix prepared with aggregate type G2, which has an elongation index of 100%, requires a 1.9% higher paste volume than the natural aggregate, while the mix with the G3 aggregate, with a 0% elongation index, requires 4.0% less cement paste than the non-manipulated aggregate mix. Paste demand increases by 5.9% for a mix with 100% elongated particles (G2) compared to one without elongated particles (G3).
Table 10. Dosage variation of mix type II per m³ of concrete due to particle shape

Table 11. Workability of mixes type II

There is no significant change in strength for concrete prepared with the G1 and G2 aggregates. In the same way, the compressive strength and elasticity modulus behavior of the G1 and G3 aggregates is similar, with a minor decrease of compressive strength and elasticity modulus for the G3 aggregate. According to Table 10, the mix prepared with the G2 aggregate contains about 2.0% more paste than the mix prepared with the natural aggregate, while the G3 mix requires 4.0% less paste, which indicates that the compressive strength and elasticity modulus behavior may be mainly affected by the paste volume, without disregarding the influence of particle strength.

The mix designs in this comparison are type I and type III, described in Table 3, corresponding to the two previously mentioned aggregates. The settlement results shown in Table 13 are consistent with literature reports, since the elongation index of the Tunjuelo aggregate (T) is lower than that of the Guasca aggregate (G1).

Figure 20, Figure 21, and Table 14 show the results for compressive strength and elasticity modulus in mixes type I and type III. It can be seen that the compressive strength and elasticity modulus for the T aggregate are greater than for the G aggregate, because T has better physico-mechanical characteristics, as proven by its higher specific weight and lower degradation percentage (Table 1).

Table 12. Compressive strength and elasticity modulus of mixes type II

Table 13. Workability of mixes type I and type III

Conclusions

Aggregate morphology affects the properties of fresh and hardened concrete, with a higher influence on workability than on mechanical properties. Shape measurement by means of traditional methods is subjective; therefore, during recent years image analysis technologies have been employed to determine the shape characteristics of particles. In this study, the morphologic characteristics of different kinds of aggregates were determined using traditional methods and image analysis by means of Fourier descriptors, in order to evaluate the influence of particle elongation on the following properties: settlement, compressive strength, and elasticity modulus. Based on this research, the materials, number of samples, and analyses considered, the following conclusions can be obtained:

a. The values obtained for elongation from the manual methods and the Fourier descriptor analysis show minor differences. Such differences originate from errors associated with manual measurements and the sensitivity of the selection of ranges in the Fourier method.

b. Mixes prepared with the same dosage show significant settlement variations for different kinds of aggregates. Elongated particles decrease the concrete settlement and, therefore, reduce its workability. This implies that adjustments must be made in the concrete mix design in order to obtain a desired workability.
c. The compressive strength and elasticity modulus of mixes with the same dosage but different contents of elongated particles do not show significant differences; therefore, shape is not an important factor for the concrete mechanical properties.

d. For mix designs using aggregates of different shapes for a given settlement, it was found that the paste volume varies by 5.9% between the aggregate with a high elongation index (G2) and the one with a low elongation index (G3). Such mixes showed similar behavior in compressive strength and elasticity modulus.

e. By comparing the results of compressive strength and elasticity modulus for concretes prepared with aggregates T and G1, it is noticeable that T yields higher strengths than aggregate G1, because T has better physico-mechanical characteristics, as proven by its higher specific weight and lower degradation percentage.

Figure 2. Particle with two possible radius values for a single angle

Figure 8. Typical geometry for the elongation ranges of descriptor −1: a. Low elongation, with descriptor lower than 0.05; b. Intermediate elongation, with descriptor between 0.05 and 0.17; and c. High elongation, with descriptor greater than 0.17

Table 1. Aggregates and their physical characteristics

Table 11. Workability of mixes type II

Results for compressive strength and elasticity modulus are shown in Figures 18 and 19 and Table 12.
A continuum mechanical framework for modeling tumor growth and treatment in two- and three-phase systems

The growth and treatment of tumors is an important problem for society that involves the manifestation of cellular phenomena at length scales on the order of centimeters. Continuum mechanical approaches are being increasingly used to model tumors at the largest length scales of concern. The issue of how to best connect such descriptions to smaller-scale descriptions remains open. We formulate a framework to derive macroscale models of tumor behavior using the thermodynamically constrained averaging theory (TCAT), which provides a firm connection with the microscale and constraints on permissible forms of closure relations. We build on developments in the porous medium mechanics literature to formulate fundamental entropy inequality expressions for a general class of three-phase, compositional models at the macroscale. We use the general framework derived to formulate two classes of models, a two-phase model and a three-phase model. The general TCAT framework derived forms the basis for a wide range of potential models of varying sophistication, which can be derived, approximated, and applied to understand not only tumor growth but also the effectiveness of various treatment modalities.

Introduction

Tumor growth and treatment is an area of science of significant interest to society. Ideally, scientists wish to understand fundamental aspects of tumor growth in sufficient detail to enable accurate mathematical models of the behavior at the length scale of interest in humans, which is on the order of centimeters. The problem that arises is a common problem in science and applied mathematics: how to most efficiently and effectively account for important small-scale behavior in larger scale models. To understand the issue more completely, one must consider the scales involved, which we will identify as the molecular scale, the microscale, and the macroscale. At the molecular scale, one might endeavor, for example, to understand the processes and reactions that lead to the damage and repair of DNA, gene variations important for specific types of cancer, the role of environmental factors, and interactions among contributing factors. Within this context, important fundamental understanding, such as the hallmarks of cancer [24,25], emerges. Such fundamental, small-scale work is a principal focus of the medical research community, and much has been learned over the last few decades. However, a disconnect exists between the molecular scale of such fundamental work and the typical length scale of tumors in humans. As a result, it is not obvious how molecular-scale studies can be used to describe tumor growth [39]. Because of this, tumor growth is often described based upon purely statistical representations of empirical fits to observations [7,8,27,34,58,62]. While such fits to data may be good, mechanistic understanding is lacking from such approaches. Put another way, empirical fits are not grounded in system physics; they therefore provide an insufficient basis for fundamentally describing the factors affecting tumor growth, or for making meaningful, mechanistically based predictions of how fundamental changes in a system will affect tumor growth. As an alternative, microscale continuum methods can be used to describe tumor growth. The microscale is a small scale at which continuum mechanical approaches are valid.
Even though tumors occur in biological systems, it can be reasoned that common continuum mechanical notions such as conservation principles, mass transfer, reactions, and thermodynamics are applicable and are relatively well understood at the microscale. The challenge that emerges from a microscale modeling approach is the need to represent meaningfully the processes that occur in a wide variety of matter types, including healthy tissue, active tumor regions, necrotic tumor regions, extra-cellular matrix regions, and blood vessels. At the microscale, the domains of each of these entities change with time; interfaces form between the entities; and common curves form where three entities meet. Because of this inherent complexity, and the length scales of interest, microscale modeling approaches are not practical or feasible for the mechanistic description of the dynamics of tumors in a living being.

Just as averaging of molecular-scale phenomena is necessary to formulate a microscale continuum model that abstracts away a portion of the mechanistic detail, larger-scale averaging can be performed to derive a macroscale representation from a microscale formulation. At the macroscale, one endeavors to describe the dynamics of the entities (phases, interfaces, common curves) involved in an averaged sense, with notions such as volume fractions and other specific entity measures, concepts that do not exist at the microscale. A point in a macroscale model thus represents the averaged conditions embodying all entities around the centroid of a small region. Such models can be formulated in a deterministic sense if and when the averaged conditions are insensitive to small changes in the scale of the averaging region. Useful macroscale models must account for subscale behavior in an approximate, averaged sense, but they must also mechanistically describe tumor dynamics at the scale of applications.

A variety of approaches exist to formulate macroscale models of tumor growth [57]. More broadly considered, a variety of homogenization and averaging methods have been applied to develop macroscale models of porous medium systems [2,21]. Typically, macroscale model formulations are formed and closed phenomenologically, directly at the macroscale. Examples of this are the multiparameter models based on mixture theory [26,37,50,51]. While expedient, phenomenological macroscale models do not provide a firm connection with the microscale and cannot be assured to be consistent with the second law of thermodynamics. Recent continuum mechanics work has resulted in the development of the thermodynamically constrained averaging theory (TCAT) [15,19,44]. The TCAT approach formally averages microscale quantities to the macroscale, including not only phases but also interfaces and common curves; incorporates thermodynamics in a scale-consistent manner; and results in entropy inequality expressions that can be used to guide the formulation of models. TCAT also includes evolution equations for the geometry of the phase regions and their boundaries that reduce the closure problem, are based upon mathematical theorems, and are separate from all conservation principles. Recently, notions from integral and differential geometry have been used rigorously to address closure relations needed to describe capillary pressure [42,47].
Because macroscale TCAT models are firmly connected with microscale antecedents, experimental observations or computational simulations at the microscale can be averaged to the macroscale and used to validate a resultant macroscale model. Considerable microscale resources exist, which can be leveraged to advance macroscale models. While some aspects of TCAT formulations have been used to model tumor growth [31,54], a complete and rigorous hierarchy of models formulated and closed using TCAT procedures has not yet been accomplished. Such an advancement is possible by leveraging recent advances in the TCAT approach and applying these to tumor growth and treatment. This advancement enables the development of a hierarchy of models of varying sophistication that can be applied to this important class of problems. Among the objectives of this work are to formulate a rigorous two-phase macroscale model for tumor growth; to formulate a rigorous three-phase macroscale model for tumor growth; and to discuss ways in which the model-building framework can be used to formulate models describing a wide variety of more complex and detailed systems than the ones provided explicitly herein.

TCAT approach

The approach to be taken to meet the goal and objectives of this work is to leverage existing TCAT model-building components [19,52] to formulate models for tumor growth. The advantage of this approach is one of relative simplicity and expediency: using the available formulated components, model building is relatively straightforward, and the substantial amount of effort and manipulation needed to derive a simplified entropy inequality (SEI) is eliminated. The disadvantage of this approach is that it could be viewed as jumping into the middle of a carefully structured model formulation approach. To circumvent this potential misperception, a brief summary of the TCAT approach is provided to orient readers who are not yet familiar with TCAT. General guidance for the TCAT approach is available in the literature [15,19,44-46], and specific details of a two-fluid-phase compositional model for a porous medium system have also been presented [52].

In general, the TCAT approach is initiated with a general, minimal description of the system to be modeled. This description includes the entities to be modeled, specification of the physical and chemical phenomena occurring within each entity, and the interactions among entities. For the case of concern herein, the entities considered include three phases, three interfaces, and a common curve. Entropy, momentum, and energy are resolved at the entity level, and the chemical composition of the mass of each entity is resolved. Classical irreversible thermodynamics is used, and continuum methods are assumed to be valid and deterministic at the macroscale. All conservation, balance, thermodynamic, and potential equations are formulated at the microscale and then systematically averaged to the macroscale. The macroscale balance of entropy is arranged to solve for the entropy density production rate, which is known to be a nonnegative quantity from the second law of thermodynamics. All macroscale conservation and potential equations are arranged such that the terms in the equation sum to zero. Each collection of terms is multiplied by a Lagrange multiplier and added to a system entropy balance. The Lagrange multipliers are solved for to eliminate material derivatives to the extent possible, yield a dimensionally consistent equation, and connect the processes that produce entropy to the rate of entropy production.
Rearrangement of this augmented entropy inequality and reduction to a strict flux-force form, requiring approximations, is a key archival result of the TCAT approach. All of the manipulations leading up to this equation do not need to be repeated for each application that uses the SEI for a given class of models. Furthermore, macroscale conservation and evolution equations are also already available and can be used to formulate models. The chief remaining work when leveraging an extant model hierarchy is thus to use the SEI to formulate model closure relations and to combine these equations with conservation and evolution equations to produce a well-posed model. Because of the general approach taken, which includes minimal assumptions, a typical SEI supports the formulation of a hierarchy of models of varying sophistication, obtained by applying secondary restrictions to the general SEI (e.g., entities of importance and their properties, specific forms of closure relations). This brief overview of the TCAT approach will be detailed in the sections that follow and used to produce example tumor growth models. We will consider tumor growth models that can be idealized as containing two fluid phases and one solid phase. Compositional effects for mass will be important. An existing TCAT model hierarchy that meets these specifications has been derived [52] and will be relied upon as a foundation for the TCAT modeling approach that follows. Use of this hierarchy will simplify the model formulation process.

Macroscale equations

Macroscale equations relied upon in the TCAT approach to form an entropy inequality include conservation equations, a balance of entropy equation, thermodynamic equations, and potential equations. In this section, we summarize only the conservation equations, which are used to construct the target models of concern in this work. Details of the derivation of these equations are available in the literature [19,52]. The approach followed in this work leverages available results without the burden of reproducing these model components, greatly simplifying the model-building process. We make use of the conservation equations for mass, momentum, and energy, which can be written in a common form for all species in all entities. The compositional mass-conservation equation for species i in entity α is

$$\mathcal{M}^{i\alpha} = \frac{\mathrm{D}^{\alpha}\!\left(\epsilon^{\alpha}\rho^{\alpha}\omega^{i\alpha}\right)}{\mathrm{D}t} + \epsilon^{\alpha}\rho^{\alpha}\omega^{i\alpha}\,\mathbf{I}\!:\!\mathbf{d}^{\alpha} + \nabla\cdot\!\left(\epsilon^{\alpha}\rho^{\alpha}\omega^{i\alpha}\mathbf{u}^{i\alpha}\right) - \epsilon^{\alpha}r^{i\alpha} - \sum_{\kappa\in J_{c\alpha}} M^{i\kappa\to i\alpha} = 0 \quad \text{for } i \in J_s,\ \alpha \in J, \tag{1}$$

where $\epsilon^{\alpha}$ is the entity extent measure (volume fraction, specific interfacial area, specific common curve length), $\rho^{\alpha}$ is the mass density, $\omega^{i\alpha}$ is the mass fraction, $\mathbf{I}$ is the identity tensor, $\mathbf{d}^{\alpha}$ is the rate of strain tensor, $\mathbf{u}^{i\alpha}$ is the deviation velocity, $r^{i\alpha}$ is the rate of mass production of species i resulting from all reactions in entity α, $M^{i\kappa\to i\alpha}$ represents the rate of mass transfer of species i from connected entity κ to entity α, $J_s$ is the index set of chemical species, $J_{c\alpha}$ is the index set of connected entities, and $J$ is the index set of all entities in the system. Superscripted entity and species qualifiers denote macroscale quantities. In general, the set $J_{c\alpha}$ may contain entities of lower and higher dimensions than the dimension of the α entity. For example, the closed set of the solid phase adjoins the wetting fluid phase and the non-wetting fluid phase at the interfaces that form between the respective pairs of phases, and the common curve formed at the intersection of the solid phase and the two fluid phases.
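As an illustration of how Eq. (1) can be exercised numerically, the following sketch advances the conserved density $\epsilon^{\alpha}\rho^{\alpha}\omega^{i\alpha}$ one explicit step in one dimension, using the equivalent conservative (Eulerian) form of the material-derivative expression. The upwind scheme, the grid, and the source-term closures are illustrative assumptions and are not part of the TCAT derivation.

```python
import numpy as np

def step_species_mass(eps, rho, w, v, r, M_in, dx, dt):
    """One explicit step of a 1D species mass balance in the spirit of Eq. (1).

    Conserved density: C = eps * rho * w (volume fraction x mass density
    x mass fraction).  Advection uses first-order upwind assuming v > 0;
    r and M_in lump the reaction and inter-entity transfer sources.
    All closures here are placeholders for illustration only.
    """
    C = eps * rho * w
    flux = v * C                              # advective flux v * C
    dC = np.zeros_like(C)
    dC[1:] = -(flux[1:] - flux[:-1]) / dx     # upwind divergence (v > 0)
    C_new = C + dt * (dC + eps * r + M_in)
    return C_new / (eps * rho)                # back to mass fraction

n = 50
eps = np.full(n, 0.4)
rho = np.full(n, 1000.0)
w = np.linspace(1e-3, 0.0, n)                 # dummy glucose profile
w_new = step_species_mass(eps, rho, w, v=np.full(n, 1e-6),
                          r=np.zeros(n), M_in=np.zeros(n),
                          dx=1e-3, dt=1.0)
print(w_new[:3])
```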
Thus, $J_{cs} = \{ws, ns, wns\}$, where s is an index specifying the solid phase, and the groupings of indices refer to the respective interfaces and common curve entities. We will restrict the inter-entity transfer of mass and entropy to entities of at most one dimension higher or lower than a reference entity. For momentum and energy, we will allow a concentrated force along the common curve to act on the solid phase in the most general case, and the inter-entity transfer of internal energy will include interactions between the common curve and the solid phase as well. These restrictions are incorporated into the form of the conservation and balance equations written.

Conservation of momentum may be considered from either a compositional or an overall entity perspective [17]. Taking the latter approach, the momentum equation (Eq. (2)) is written in terms of the following quantities: $\mathbf{v}^{\alpha}$ is the velocity, $\mathbf{g}^{i\alpha}$ is the body force per unit mass acceleration vector, $\mathbf{v}^{\alpha,\kappa}$ is the velocity of flow in an entity averaged over the boundary of the entity, $\mathbf{u}^{i\alpha,\kappa}$ is the deviation velocity in an entity averaged over the boundary of the entity, $\mathbf{t}^{\alpha}$ is the stress tensor, and $\mathbf{T}_0^{\kappa\to\alpha}$ represents the transfer of momentum from entity κ to entity α; singular, or concentrated, forces of a common curve acting on a solid phase are included [19].

The conservation of energy equation (Eq. (3)) is written for an overall entity in terms of the following quantities: $E^{\alpha}$ is the internal energy density, $K_E^{i\alpha}$ is the kinetic energy per unit mass resulting from velocity fluctuations, $E^{i\alpha,\kappa}$ is the partial mass energy averaged over the boundary of the entity, $K_E^{i\alpha,\kappa}$ is the average deviation kinetic energy averaged over the boundary of the entity, an energy density term accounts for body force and velocity fluctuations, and $Q_1^{\kappa\to\alpha}$ is the transfer of internal energy from entity κ to entity α other than by phase change.

Equations (1)-(3) are the basic conservation equations needed to formulate the models of interest in this work. Additional, and available, equations needed for model simplification, closure, and completion will be introduced as needed in the formulation process.

Simplified entropy inequality

A key concept of the TCAT approach is the use of an entropy inequality to formulate closure relations that are consistent with the second law of thermodynamics. A strict flux-force form of the entropy inequality is needed to satisfy this purpose; this form is referred to as the SEI. The formulation of an SEI requires skill and substantial mathematical manipulation. However, once derived, the SEI can be used without understanding all of the details needed to arrive at the final form. A general SEI is available for the class of models considered in this work [52]. Its leading terms, and the overall structure of the inequality, are of the form

$$\sum_{\alpha\in J_f}\frac{1}{\theta^{\alpha}}\left(\epsilon^{\alpha}\mathbf{t}^{\alpha}+\epsilon^{\alpha}p^{\alpha}\mathbf{I}\right)\!:\!\mathbf{d}^{\alpha} + \cdots = \Lambda \ge 0, \tag{4}$$

where Λ is the total entropy density production rate of the system, the omitted terms comprise the remaining flux-force pairs enumerated below, all symbols are defined in the notation section, and the interactions between the solid phase and the common curve have been explicitly noted.

The general SEI given by Eq. (4) contains the flux-force pairs that can produce entropy in a system. While they may appear overwhelming to the non-specialist, a brief review of the terms by line number can aid understanding of the fluxes considered, for which closure relations will be developed. Lines 1-3 are fluxes involving stress tensors and their products with deformation rate tensors.
Line 4 consists of conductive heat transfer and deviation flux terms and their product with a temperature-gradient force. Lines 5 and 6 consist of a deviation-velocity flux and a force that is a gradient in potential terms, along with higher-order terms in deviation velocities. Lines 7 and 8 are reaction fluxes resulting from potentials and deviation quantities grouped as forces. Lines 9-14 represent inter-entity mass transfer resulting from differences in potentials and deviation quantities. Lines 18-31 express the inter-entity transfer of energy resulting from forces that are differences in temperature. Lines 32-46 are momentum fluxes resulting from differences in entity velocities. Lines 47 and 48 are fluxes in extent measures resulting from a deviation of the capillary pressure between the fluid phases from its equilibrium value. Lines 49 and 50 are changes in porosity resulting from a balance of forces in the direction normal to the solid phase. Lines 51 and 52 represent a change in the wetted fraction of the solid phase resulting from motion of the common curve due to a balance of forces acting tangent to the surface of the solid phase. Lines 53 and 54 express changes in porosity resulting from common-curve forces acting normal to the solid surface, and Line 55 is the total entropy density production rate of the system. Because each member of the set of fluxes is independent of all other members of the set, the fluxes can be considered in turn to derive permissible forms of the closure relations. Examples of how to do so are available in the literature [19,28,52]; examples for modeling tumors are detailed in the following sections.

6 Two-phase system

Description

The purpose of this section is to consider a relatively simple macroscale model to describe tumor growth. This example will enable a straightforward exposition demonstrating the TCAT model building and closure process. Since the focus is on the model formulation process, neither the model solution nor its evaluation will be considered herein. We wish to use the TCAT model components summarized above to formulate a macroscale model consisting of two phases: a solid phase denoted with the index s, and a single fluid phase denoted with the index f; this is a subset of the more general two-fluid-phase TCAT model hierarchy summarized above. The solid phase consists of the extra-cellular matrix (ECM), live and necrotic tumor cells, glucose, oxygen, water, and a chemotherapeutic drug. These species are important for the solid phase because of the separation of mass transfer and homogeneous phase reactions, and for the conceptual representation of the operative processes affecting tumor growth. The fluid phase represents the interstitial fluid, which consists of a dominant water species, glucose, oxygen, and a chemotherapeutic drug. Both phases contain a background species comprising all other species that are not explicitly considered. Lysis of necrotic cells is also considered. Based on the above description, the index set of entities is

$$J = \{f, s, fs\}, \qquad (5)$$

where fs denotes the fluid-solid interface. The index set of the nine species considered is denoted

$$J_{s} = \{l, n, e, g, o, c, w, x, y\},$$

where l denotes a living tumor species, n a necrotic tumor species, e the extra-cellular matrix species, g glucose, o oxygen, c a chemotherapeutic drug, w water, x a collective background species for the fluid phase, and y a collective background species for the solid phase.
The background species x and y represent non-reactive species that do not undergo mass transfer. The solid phase may contain all species but x, and the fluid phase may contain the g, o, c, w, and x species. Primary restrictions specify the general basis for the TCAT model framework to be used. The primary restrictions for the model are: continuum mechanics represents the system of concern with sufficient fidelity; a clear separation of length scales exists between the microscale and the macroscale; and classical irreversible thermodynamics can be used to describe the system of concern. Secondary restrictions are specified to simplify the general TCAT model hierarchy to the simplest possible form that represents the system of concern. These restrictions have implications for model formulation and closure, including simplification of the general SEI. The secondary restrictions for this application are: (SR1-2P) the system consists of one fluid phase, one solid phase, and an interface; (SR2-2P) the system is isothermal; (SR3-2P) the interface is massless; (SR4-2P) kinetic energy terms are higher order and of negligible importance; (SR5-2P) body force vectors and potentials are identical for all species; (SR6-2P) chemical reactions can be formulated in terms of chemical affinities; (SR7-2P) inertial terms in the momentum equations are insignificant due to the slow dynamics of the systems considered; (SR8-2P) density-weighted, area-averaged velocities and deviation velocities are equal to their volume-averaged counterparts; (SR9-2P) the product of the Lagrangian stress tensor with Green's deformation tensor can be neglected in the driving-force difference for mass transfer to the solid phase; and (SR10-2P) the activity coefficient of all species is unity.

Formulation

The two-phase model described above will be formulated into a closed mathematical model. The steps involved are detailed in the subsections that follow and include the formulation of a restricted SEI, use of the restricted SEI to formulate a permissible set of closure relations, and a closed model formulation based upon conservation equations and closure approximations. Equation (4) is a general expression that applies to a wide range of systems involving two fluid phases, a solid phase, three interfaces, and a common curve, and it contains a set of flux-force pairs involving dissipative processes for mass, momentum, and energy. This general expression can be simplified for the example two-phase system described above, and for many other applications of concern as well. While several general SEIs have been developed for various systems [16-18,28,29,52], the ability to simplify a general SEI for a complex system so that it describes simpler systems involving fewer entities is a hallmark of the TCAT approach. For the application at hand, the original seven-entity SEI can be reduced to a three-entity SEI, where J = {f, s, fs}. Furthermore, the other secondary restrictions noted above result in substantial simplifications of Eq. (4), while preserving the terms needed to formulate a closed, well-posed model. Applying all secondary restrictions to Eq. (4) yields the restricted SEI, Eq. (7), in which θ is the isothermal temperature of the system, J_rxnα is the index set of all reaction equations in the α phase, R^{kα} is a molar rate of reaction, A^{kα} is a chemical affinity, and SR6-2P has been used to write the reactions in terms of chemical affinities [19].
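The role the restricted SEI plays in closure can be stated compactly. Schematically, an SEI is a sum of flux-force products, and any closure that makes each flux a positive-semidefinite linear function of its conjugate force automatically satisfies the second law. The following display is a generic sketch of this argument only, not an equation taken from the TCAT literature:

$$\Lambda = \sum_{j}\mathbf{J}_{j}\cdot\mathbf{X}_{j} \ \geq\ 0, \qquad \mathbf{J}_{j} = \mathbf{K}_{j}\cdot\mathbf{X}_{j} \ \Rightarrow\ \Lambda = \sum_{j}\mathbf{X}_{j}\cdot\mathbf{K}_{j}\cdot\mathbf{X}_{j} \ \geq\ 0 \quad \text{for positive-semidefinite } \mathbf{K}_{j}.$$

This is the pattern invoked repeatedly in the closure derivations that follow: each conjugate flux-force pair is closed with a nonnegative (or positive-semidefinite) coefficient, guaranteeing nonnegative entropy production.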
Each of these flux-force pairs is explained physically and used to formulate a permissible set of closure relations in the section that follows.

Closure relations

We will consider the flux-force pairs in Eq. (7) in order and use these pairs to formulate a permissible set of closure relations, which will in turn be used to formulate a closed, solvable model. References will be made to line numbers in this equation as individual flux-force pairs are considered. Each member of the set of fluxes is independent of all other fluxes, and each member of the set of forces is independent of all other forces. However, a flux may depend not only upon the conjugate force that appears as its product in the SEI but also on other members of the set of forces, which is referred to as cross-coupled closure. The SEI is an equation that specifies constraints on permissible forms of closure relations. For both of these reasons, closure relations are not unique, and the complexity of the approximate forms can be tailored to the application. Approaches for deriving such approximations have been developed and found to have utility for describing a range of physical systems [19]. We will follow these traditional approaches to generate models that can be compared to data. If the models are not consistent with observations, the closure-relation approximations and the model restrictions may be re-examined and modified as needed. Thus, a clear path to modifying macroscale continuum models exists.

The first term in Line 1 of Eq. (7) represents a flux involving the stress tensor for the fluid and the fluid pressure; the conjugate force is the deformation-rate tensor for the fluid. The product of this flux and force must be nonnegative under all conditions and zero at equilibrium. A zero-order approximation is the simplest possible approximation, and it has been found to provide a good description of macroscale porous-medium systems [19]. The zero-order approximation is that the flux term is zero under all conditions, allowing the macroscale stress tensor to be approximated as

$$\mathbf{t}^{f} = -p^{f}\mathbf{I}. \qquad (9)$$

This is a reasonable approximation because, at the macroscale, fluid flow through a porous medium is essentially inviscid, with momentum transfer to the solid-phase particles being a dominant process. This dominant exchange process is represented by T_0^{fs→f}, which appears in Line 7 and is discussed below. Following a similar line of reasoning, the stress tensor for the solid phase is

$$\mathbf{t}^{s} = \left\langle\mathbf{t}_{s}\right\rangle, \qquad (10)$$

and the stress tensor for the interface appearing in Line 2 of Eq. (7) is

$$\mathbf{t}^{fs} = \gamma^{fs}\left(\mathbf{I} - \mathbf{G}^{fs}\right). \qquad (11)$$

Equation (10) indicates that the macroscale stress tensor for the solid phase can be derived by intrinsic averaging of the microscale stress tensor, which is determinate from the solid behavior at the microscale. Equation (11) relates the interfacial stress tensor to the product of the interfacial tension and the orientation of the interface. Line 3 in Eq. (7) involves a flux represented by the deviation velocity u^{iα} and the conjugate force, which is a gradient of a chemical-potential difference. The difference in chemical potentials arises because of the necessary constraint

$$\sum_{i\in J_{s}}\omega^{i\alpha}\mathbf{u}^{i\alpha} = \mathbf{0} \quad \text{for } \alpha\in J, \qquad (12)$$

which implies that only N − 1 deviation velocities are independent.
Choosing w as the dominant reference species in the f phase and e as the dominant reference species in the s phase yields the linear first-order closure relation for the fluid-phase deviation velocity,

$$\omega^{if}\mathbf{u}^{if} = -\mathbf{D}^{iw}_{f}\cdot\nabla\left(\mu^{if} - \mu^{wf}\right) \quad \text{for } i\in\{g, o, c, x\}, \qquad (13)$$

and the corresponding closure relation for the solid-phase deviation velocity,

$$\omega^{is}\mathbf{u}^{is} = -\mathbf{D}^{ie}_{s}\cdot\nabla\left(\mu^{is} - \mu^{es}\right) \quad \text{for } i\in\{l, n, g, o, c, w, y\}, \qquad (14)$$

where the D^{ij}_α are second-rank symmetric tensors that parameterize the deviation velocities in terms of gradients in chemical potentials. Line 4 of Eq. (7) expresses the flux-force pair of the rate of reaction and the chemical affinity. A permissible closure relation must be generated from this pair and related to the reaction term appearing in Eq. (1), the term involving r^{iα}, which is the rate of mass production of species i resulting from all reactions in phase α. A linear closure relation is

$$R^{k\alpha} = K^{k\alpha}A^{k\alpha}, \qquad (15)$$

where K^{kα} is a nonnegative reaction-rate coefficient for the k-th reaction in the α phase. Substitution of this approximation into the SEI yields a nonnegative production rate. Two tasks remain to formulate closure approximations for r^{iα}: the closure approximation must be related to the general reaction variable, and the set of chemical reactions occurring in each phase must be formulated. The overall mass production rate results from the set of reactions such that

$$r^{i\alpha} = \sum_{k\in J_{rxn\alpha}}\nu_{ik\alpha}\,MW_{i}\,R^{k\alpha} \quad \text{for } \alpha\in J_{P}, \qquad (16)$$

and the affinity is defined as

$$A^{k\alpha} = -\sum_{i\in J_{s}}\nu_{ik\alpha}\,MW_{i}\,\mu^{i\alpha}, \qquad (17)$$

where MW_i is the molecular weight of species i and ν_{ikα} is a molar stoichiometric reaction coefficient. Equations (15)-(17) can be combined to yield the mass production rate resulting from reaction,

$$r^{i\alpha} = \sum_{k\in J_{rxn\alpha}}\nu_{ik\alpha}\,MW_{i}\,K^{k\alpha}A^{k\alpha}. \qquad (18)$$

If the set of molar biochemical reactions and the molecular weights are known for all reactions in each phase, then the species mass production rate is fully specified and closed. When the reactions are nonlinear, resolution issues related to averaging exist, which will not be considered herein [19]. It is assumed that no reactions occur in the fluid phase, which implies that the g, c, and o species are non-reactive in that phase, although all species may undergo mass transfer to and from the solid phase. Reactions occur in the solid phase that lead to tumor growth; the consumption of the glucose, oxygen, and chemotherapeutic drug species; tumor-species death due to the chemotherapeutic drug; conversion of living tumor to necrotic tumor resulting from a lack of oxygen; and necrotic tumor lysis. The set of reactions is summarized in Table 1, where C^{iα} is the molar concentration of species i in phase α and MW_i is the molecular weight of species i.

Line 5 of Eq. (7) is a flux-force pair involving mass transfer between the fluid and solid phases. A conjugate flux-force closure for the mass-transfer density rate is

$$M^{if\rightarrow is} = K_{M}^{ifs}\left[\left(\mu^{if} + \psi^{f}\right) - \left(\mu^{is} + \psi^{s}\right)\right], \qquad (19)$$

where K_M^{ifs} is a nonnegative mass-transfer rate coefficient. Lines 6 and 7 of Eq. (7) are a flux-force pair involving the momentum of the fluid phase. A conjugate flux-force closure can be formulated for the isotropic case (Eq. (20)), in which R^f is a positive-definite resistance tensor for the fluid phase; a companion relation (Eq. (21)) involves R^{fs}, a positive-semidefinite resistance tensor for the interface. Line 9 of Eq. (7) involves the change in porosity related to forces acting on the fluid-solid interface. A conjugate flux-force approximation can be written as

$$\frac{D^{\bar{s}}\epsilon}{Dt} = \hat{c}\left(p^{f}\Big|_{fs} + \mathbf{n}_{s}\cdot\mathbf{t}_{s}\cdot\mathbf{n}_{s}\Big|_{fs} + \gamma^{fs}J^{fs}_{s}\right), \qquad (22)$$

where ĉ is a nonnegative solid compressibility coefficient. A minimal numerical sketch of the reaction closure given by Eqs. (15)-(18) follows.
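The following is a minimal sketch of Eqs. (15)-(18) applied to the tumor-formation reaction of Table 1. All coefficient values, the simplified (zero reference-state) chemical potentials, and the function names are illustrative assumptions, not values or code from the source:

```python
import numpy as np

# Hypothetical species data for the solid phase (illustrative values only).
MW = {"g": 180.16, "o": 32.0, "t": 1.0e5}      # molecular weights [g/mol]
mu = {"g": -1.2e3, "o": -0.8e3, "t": -2.0e3}   # mass-based chemical potentials [J/kg]

# Stoichiometry of the tumor-formation reaction (Table 1; reactants negative):
#   nu_g MW_g C_g + nu_o MW_o C_o -> nu_t MW_t C_t
nu = {"g": -1.0, "o": -6.0, "t": +1.0}         # assumed molar coefficients

def affinity(nu, MW, mu):
    """Chemical affinity A = -sum_i nu_i MW_i mu_i (Eq. (17))."""
    return -sum(nu[i] * MW[i] * mu[i] for i in nu)

def reaction_rate(K, A):
    """Linear closure R = K A with K >= 0 (Eq. (15))."""
    return K * A

def mass_production(i, nu, MW, R):
    """Species mass production r_i = nu_i MW_i R for one reaction (Eqs. (16), (18))."""
    return nu[i] * MW[i] * R

K = 1.0e-9                                      # assumed nonnegative rate coefficient
A = affinity(nu, MW, mu)
R = reaction_rate(K, A)
for i in nu:
    print(f"r_{i} = {mass_production(i, nu, MW, R):.3e}")

# Entropy-production check: the SEI contribution (1/theta) R A = K A^2 / theta >= 0.
theta = 310.0
assert R * A / theta >= 0.0
```

Whatever rate coefficients are chosen, the quadratic structure K A² ensures the second law is respected, which is the point of deriving the closure from the SEI rather than positing it ad hoc.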
The set of conjugate flux-force closure relations summarized in this subsection can be combined with a set of conservation-of-mass and momentum equations to yield a closed model. This model is formulated in the section that follows.

Closed model

A closed two-phase model can be formulated by combining the conservation equations given in Sect. 4 with the closure relations given in Sect. 6.2.2. Because isothermal conditions are assumed, the model will include conservation of species mass and entity momentum equations, but it will not be necessary to include conservation of energy equations. Other choices of secondary restrictions would result in different macroscale models, but the derivation process would follow a similar path. The conservation-of-mass equations for a species in the fluid phase (Eq. (23)) and in the solid phase (Eq. (24)) follow from Eq. (1), where the dominant species of each phase is eliminated using the constraint

$$\sum_{i\in J_{s}}\omega^{i\alpha} = 1 \quad \text{for } \alpha\in J_{P}, \qquad (25)$$

where J_P is the index set of phases and some species are omitted from each phase by definition. Thus, four species conservation-of-mass equations exist for the fluid phase, and seven exist for the solid phase. Combining these equations with the constraints given by Eq. (25) fully specifies the composition of both phases. Equations (23) and (24) contain terms involving deviation velocities, reactions, and interphase mass transfer. The previously derived closure relations can be used to make these equations explicit: substituting the deviation-velocity closures (Eqs. (13) and (14)) and the mass-transfer closure (Eq. (19)) into Eqs. (23) and (24) yields the species conservation-of-mass equations for the fluid phase (Eq. (26)) and the solid phase (Eq. (27)), where only the reactions involving species i need to be considered for the i-species transport equation, as specified by the summation over the index set J_rxnis, the set of all reactions in the s entity that involve species i. Chemical potentials can be written in terms of mass fractions to minimize the closure problem associated with the conservation-of-mass equations. The macroscale chemical potential may be written as

$$\mu^{i\alpha} = \mu^{i\alpha}_{0}\left(p^{\alpha},\theta^{\alpha}\right) + \frac{R_{g}\theta^{\alpha}}{MW_{i}}\ln\left(x^{i\alpha}\gamma^{i\alpha}\right), \qquad (28)$$

where μ_0^{iα}(p^α, θ^α) is a reference-state chemical potential, R_g is the ideal gas constant, x^{iα} is a mole fraction, and γ^{iα} is an activity coefficient, which we assume equal to 1 for all species and all compositions encountered. The mole fraction can be written as

$$x^{i\alpha} = \frac{\omega^{i\alpha}MW^{\alpha}}{MW_{i}}, \qquad (29)$$

where the molecular weight of the α entity is defined as

$$MW^{\alpha} = \left(\sum_{i\in J_{s}}\frac{\omega^{i\alpha}}{MW_{i}}\right)^{-1}. \qquad (30)$$

Equation (1) can also be summed over all species, which yields an overall conservation-of-mass equation for an entity. This equation can be further expanded using an approximation for the macroscale velocity, which follows from the conservation-of-momentum equations below. These standard manipulations are detailed in the literature [19]. Recall the conservation-of-momentum equation, Eq. (31), which may be combined with the closure relations and manipulated to deduce a pair of closed momentum equations for the phases. The interface momentum equation is trivial because of the secondary restriction specifying a massless interface. Equations (9), (19), and (20) can be substituted into Eq. (31) to yield the resultant fluid-phase momentum equation, Eq. (32), or its approximation using the Gibbs-Duhem equation [19], Eq. (33), in which the inertial terms have been dropped due to the slow dynamics of the system and the area-averaged velocities multiplying the mass-exchange approximations have been replaced with volume averages, as specified by secondary restrictions SR7-2P and SR8-2P. A short computational sketch of the composition relations given by Eqs. (25) and (28)-(30) follows.
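As a concrete illustration of the composition relations, the following sketch evaluates Eqs. (29)-(30) and the ideal-solution chemical potential of Eq. (28) for a hypothetical fluid-phase composition. The numerical values, the zero reference-state potentials, and the variable names are assumptions for illustration only:

```python
import math

R_G = 8.314          # ideal gas constant [J/(mol K)]
THETA = 310.0        # assumed isothermal temperature [K]

# Hypothetical fluid-phase mass fractions (must sum to 1, Eq. (25)).
omega = {"w": 0.97, "g": 0.01, "o": 0.005, "c": 0.005, "x": 0.01}
MW = {"w": 18.02, "g": 180.16, "o": 32.0, "c": 500.0, "x": 100.0}  # [g/mol]
mu0 = {i: 0.0 for i in omega}  # assumed reference-state chemical potentials

assert abs(sum(omega.values()) - 1.0) < 1e-12  # constraint, Eq. (25)

# Molecular weight of the phase, Eq. (30).
MW_alpha = 1.0 / sum(omega[i] / MW[i] for i in omega)

# Mole fractions, Eq. (29); activity coefficients set to 1 per SR10-2P.
x = {i: omega[i] * MW_alpha / MW[i] for i in omega}
assert abs(sum(x.values()) - 1.0) < 1e-12  # mole fractions also sum to 1

# Mass-based chemical potentials, Eq. (28); MW converted from g/mol to kg/mol.
mu = {i: mu0[i] + (R_G * THETA / (MW[i] * 1.0e-3)) * math.log(x[i]) for i in omega}

for i in omega:
    print(f"{i}: x = {x[i]:.4f}, mu = {mu[i]:.3e} J/kg")
```

The second assertion passes identically because, by Eq. (30), the mole fractions of Eq. (29) always sum to one when the mass fractions do, which is why eliminating one dominant species per phase closes the composition problem.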
Equation (31) can be summed over all entities, eliminating the momentum-exchange terms and yielding, after substitution of the closure relations for the macroscale stress tensors, Eq. (34). Further manipulations of this equation are possible, and expressions for the stress tensor have been developed using TCAT [20]. The end objective of such a momentum equation is to determine the velocity of the solid phase. Conditions may exist in which this velocity can be approximated without consideration of the details of solid mechanics. The existence of mass exchange complicates the usual solid-mechanics mapping between Eulerian and Lagrangian coordinate systems, and open research questions remain; these will not be considered in this work.

7 Three-phase system

Description

For the two-phase system considered above, the tumor species were considered part of the solid phase. However, it is generally accepted that cells and tissues may be modeled as fluids [14]. Such approaches may be used to model the adhesion of cells among themselves, to the ECM, and to substrates [1,30,56], and to model chemotaxis and haptotaxis representing active cellular movement, such as that observed in angiogenesis, invasion, and branching [9,31]. Hence, the purpose of this section is to consider a more complicated three-phase model of tumor growth. This requires defining the entities, the species in the entities, the reactions, and other aspects of the model. An alternative form of the SEI will be relied upon to guide model closure. The basic model-building steps parallel those followed to formulate the two-phase model presented in the previous section. The three phases consist of a solid phase, s, a wetting fluid phase denoted w, and a non-wetting fluid phase denoted n. The wetting fluid preferentially wets the solid phase compared to the non-wetting fluid, although a contact angle can exist between the fluid-fluid interface and the solid surface; measured through the wetting fluid phase, this contact angle will be less than 90 degrees in general and 0 degrees for a strongly water-wet solid phase. The index set of all entities is

$$J = \{w, n, s, wn, ws, ns, wns\}, \qquad (35)$$

where the grouping of two symbols denotes the interface that forms between the two respective phases and wns is an index for the common curve formed where the three phases meet. The index set of the ten species considered is identical to the set previously considered for the two-phase tumor model, with the exception that an additional collective background species z is added:

$$J_{s} = \{l, n, e, g, o, c, w, x, y, z\}.$$

Formulation

The three-phase model described above is formulated into a closed model following an approach similar to that taken for the simpler two-phase model formulated above. The sections that follow detail the restricted SEI, the closure relations, and the closed model. Applying the secondary restrictions for this system to the general SEI yields the restricted SEI, Eq. (37); only a representative fragment of it survives extraction here, of the form

$$\cdots\left[-\nabla\left(\epsilon^{n}p^{n}\right) + \epsilon^{n}\rho^{n}\sum_{i\in J_{s}}\omega^{in}\left(\nabla\mu^{in} + \nabla\psi^{n}\right) + \epsilon^{n}\rho^{n}\mathbf{g}^{n} - \mathbf{T}_{0}^{wn\rightarrow n} - \mathbf{T}_{0}^{ns\rightarrow n}\right]\cdot\left(\mathbf{v}^{\bar{n}} - \mathbf{v}^{\bar{s}}\right) - \cdots$$

Equation (37) can be used to generate closure relations that are consistent with the second law of thermodynamics. While the form of this equation is much simpler than the general case given by Eq. (4) because of the secondary restrictions applied, it is significantly more complicated than the two-phase SEI given by Eq. (7) as a result of the additional entities that must be considered.

Closure relations

The procedure for deriving closure relations is similar to that used for the two-phase case, although additional relations are needed for the three-phase case.
We consider the first 16 lines of Eq. (37) to generate a candidate set of closure relations, which will in turn be used to formulate a closed model in the following section. A zero-order closure for the fluid phases yields, from Line 1 of Eq. (37),

$$\mathbf{t}^{\alpha} = -p^{\alpha}\mathbf{I} \quad \text{for } \alpha\in J_{f}, \qquad (38)$$

and for the solid phase

$$\mathbf{t}^{s} = \left\langle\mathbf{t}_{s}\right\rangle. \qquad (39)$$

A zero-order approximation based on Line 2 yields an approximation for the stress tensor of the interfaces,

$$\mathbf{t}^{\alpha} = \gamma^{\alpha}\left(\mathbf{I} - \mathbf{G}^{\alpha}\right) \quad \text{for } \alpha\in J_{I}. \qquad (40)$$

A first-order conjugate flux-force approximation based upon Line 3 in Eq. (37) yields the following approximations for the deviation velocities in each of the phases:

$$\omega^{iw}\mathbf{u}^{iw} = -\mathbf{D}^{iw}_{w}\cdot\nabla\left(\mu^{iw} - \mu^{ww}\right) \quad \text{for } i\in\{g, o, c, z\}, \qquad (41)$$

$$\omega^{in}\mathbf{u}^{in} = -\mathbf{D}^{il}_{n}\cdot\nabla\left(\mu^{in} - \mu^{ln}\right) \quad \text{for } i\in\{n, g, o, c, w, x\}, \qquad (42)$$

$$\omega^{is}\mathbf{u}^{is} = -\mathbf{D}^{ie}_{s}\cdot\nabla\left(\mu^{is} - \mu^{es}\right) \quad \text{for } i\in\{g, o, w, y\}, \qquad (43)$$

where water is the reference species in the wetting phase, the live tumor species is the reference species in the non-wetting phase, and the extra-cellular matrix species is the reference species for the solid phase. Line 4 in Eq. (37) can be used in conjugate flux-force form to approximate thermodynamically consistent reaction rates; the manner in which this is done is analogous to the approach previously detailed for the two-phase model, with the general expression for reactions in the phases being

$$r^{i\alpha} = \sum_{k\in J_{rxn\alpha}}\nu_{ik\alpha}\,MW_{i}\,K^{k\alpha}A^{k\alpha} \quad \text{for } \alpha\in J_{P}. \qquad (44)$$

An example set of reactions is detailed in Table 3. The reactions are related to tumor growth, destruction, necrotic-species formation, and lysis. All reactions occur in the non-wetting fluid phase. We emphasize that this is merely an example set of reactions, and alternative reaction sets are permissible. A first-order conjugate closure relation for the rate of mass transfer can be deduced from Lines 5-7 in Eq. (37) and is of the form

$$M^{i\alpha\rightarrow i\beta} = K_{M}^{i\alpha\beta}\left[\left(\mu^{i\alpha} + \psi^{\alpha}\right) - \left(\mu^{i\beta} + \psi^{\beta}\right)\right] \quad \text{for } \alpha,\beta\in J_{P}, \qquad (45)$$

where the indices denote that mass transfer can occur between any of the three binary combinations of phases. Lines 8-11 of Eq. (37) can be used to generate cross-coupled approximations for the interphase transfer of momentum for the fluid phases, of the form

$$\nabla\left(\epsilon^{w}p^{w}\right) - \sum_{i\in J_{s}}\epsilon^{w}\rho^{w}\omega^{iw}\nabla\left(\mu^{iw} + \psi^{w}\right) + \epsilon^{w}\rho^{w}\mathbf{g}^{w} + \mathbf{T}_{0}^{wn\rightarrow w} + \mathbf{T}_{0}^{ws\rightarrow w} = \cdots \qquad (46)$$

and

$$\nabla\left(\epsilon^{n}p^{n}\right) - \sum_{i\in J_{s}}\epsilon^{n}\rho^{n}\omega^{in}\nabla\left(\mu^{in} + \psi^{n}\right) + \epsilon^{n}\rho^{n}\mathbf{g}^{n} + \mathbf{T}_{0}^{wn\rightarrow n} + \mathbf{T}_{0}^{ns\rightarrow n} = \cdots, \qquad (47)$$

where the right-hand sides, truncated in the source, involve R, a positive-semidefinite resistance tensor. A first-order approximation based upon Line 12 in Eq. (37) yields, for the interfaces,

$$\nabla\cdot\left[\epsilon^{\alpha}\gamma^{\alpha}\left(\mathbf{I} - \mathbf{G}^{\alpha}\right)\right] + \sum_{\kappa\in J_{c\alpha}}\mathbf{T}_{0}^{\kappa\rightarrow\alpha} = -\mathbf{R}^{\alpha}_{\alpha}\cdot\left(\mathbf{v}^{\bar{\alpha}} - \mathbf{v}^{\bar{s}}\right) - \sum_{\kappa\in J_{c\alpha}}\mathbf{R}^{\kappa}_{\alpha}\cdot\left(\mathbf{v}^{\bar{\kappa}} - \mathbf{v}^{\bar{s}}\right) \quad \text{for } \alpha\in J_{I}. \qquad (48)$$

Lines 13 and 14 of Eq. (37) can be used to develop a first-order approximation for the relaxation of the capillary pressure to its equilibrium state, which is attained when the term in Line 14 vanishes. The resulting approximation is

$$\frac{D^{\bar{s}}\epsilon^{w}}{Dt} - \chi^{ws}_{s}\frac{D^{\bar{s}}\epsilon}{Dt} - \frac{\gamma^{wn}}{k_{1}^{wn}}\,\frac{\epsilon^{wn} - \epsilon^{wn}_{eq}}{p^{w}\big|_{wn} - p^{n}\big|_{wn}} = \hat{c}^{wn}\left(p^{w}\big|_{wn} - p^{n}\big|_{wn} - \gamma^{wn}J^{wn}_{w}\right), \qquad (49)$$

where ĉ^{wn} is a positive capillary-pressure relaxation coefficient. As a result of SR12-3P, the curvature of the solid phase simplifies to

$$J^{ss}_{s} = J^{ws}_{s} = J^{ns}_{s}, \qquad (50)$$

which provides an approximation for the change in porosity as a function of the normal forces on the solid surface of the form

$$\frac{D^{\bar{s}}\epsilon}{Dt} = \hat{c}^{ss}\left[\chi^{ws}_{s}\,p^{w}\big|_{ws} + \chi^{ns}_{s}\,p^{n}\big|_{ns} + \mathbf{n}_{s}\cdot\mathbf{t}_{s}\cdot\mathbf{n}_{s}\big|_{ss} + \left(\chi^{ws}_{s}\gamma^{ws} + \chi^{ns}_{s}\gamma^{ns}\right)J^{ss}_{s}\right], \qquad (51)$$

where ĉ^{ss} is a positive compressibility coefficient. In addition to the closure relations that can be deduced from the SEI, evolution equations based upon averaging theorems [19,22] can also be used to close the three-phase model. The use of such evolution equations to produce closed models is an important aspect of TCAT models. A minimal sketch of the behavior of the linear mass-transfer closure, Eq. (45), follows.
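To illustrate the character of the mass-transfer closure, Eq. (45), the following sketch integrates a zero-dimensional exchange of a single species between two phases, with the chemical-potential difference as the driving force. The potential model, parameter values, and the crude composition bookkeeping are illustrative assumptions only:

```python
import math

R_G = 8.314      # ideal gas constant [J/(mol K)]
THETA = 310.0    # assumed temperature [K]
MW_KG = 0.18016  # glucose molecular weight [kg/mol]; illustrative species

def mu(x):
    """Ideal-solution chemical potential per unit mass (Eq. (28), mu0 = 0, gamma = 1)."""
    return (R_G * THETA / MW_KG) * math.log(x)

# Hypothetical glucose mole fractions in two phases and a lumped rate coefficient.
x_w, x_n = 0.010, 0.002     # wetting- and non-wetting-phase mole fractions
K_M = 1.0e-12               # assumed nonnegative mass-transfer coefficient
dt, c = 1.0, 1.0e-6         # time step [s] and an assumed lumped capacity factor

for step in range(5):
    # Eq. (45) with equal body-force potentials (cf. SR5-2P): force is mu_w - mu_n.
    M_wn = K_M * (mu(x_w) - mu(x_n))   # transfer rate, w -> n when positive
    x_w -= c * M_wn * dt               # crude bookkeeping of composition change
    x_n += c * M_wn * dt
    print(f"step {step}: M = {M_wn:.3e}, x_w = {x_w:.6f}, x_n = {x_n:.6f}")

# The SEI contribution M (mu_w - mu_n) = K_M (mu_w - mu_n)**2 >= 0, so this
# closure cannot produce negative entropy, mirroring the flux-force argument.
```

The transfer rate vanishes exactly when the chemical potentials (plus body-force potentials, when they differ) equilibrate, which is the equilibrium condition embedded in the SEI.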
For the three-phase model considered here, an evolution equation for the fluid-fluid interfacial area can be written as

$$\frac{D^{\bar{s}}\epsilon^{wn}}{Dt} + \nabla\cdot\left[\epsilon^{wn}\left(\mathbf{w}^{wn} - \mathbf{G}^{wn}\cdot\mathbf{v}^{\bar{s}}\right)\right] + \epsilon^{wn}\mathbf{G}^{wn}:\mathbf{d}^{\bar{s}} - J^{wn}_{w}\left(\frac{D^{\bar{s}}\epsilon^{w}}{Dt} + \chi^{ws}_{s}\frac{D^{\bar{s}}\epsilon^{s}}{Dt}\right) - k^{wn}\left(\epsilon^{wn}_{eq} - \epsilon^{wn}\right) - \cos\varphi^{ws,wn}\left(\epsilon^{ws} + \epsilon^{ns}\right)\frac{D^{\bar{s}}\chi^{ws}_{s}}{Dt} + \sin\varphi^{ws,wn}\,\frac{\epsilon^{wns}}{\epsilon^{ws} + \epsilon^{ns}}\,\frac{D^{\bar{s}}\epsilon^{s}}{Dt} = 0, \qquad (52)$$

which, neglecting common-curve contributions, simplifies to

$$\frac{D^{\bar{s}}\epsilon^{wn}}{Dt} + \nabla\cdot\left[\epsilon^{wn}\left(\mathbf{w}^{wn} - \mathbf{G}^{wn}\cdot\mathbf{v}^{\bar{s}}\right)\right] + \epsilon^{wn}\mathbf{G}^{wn}:\mathbf{d}^{\bar{s}} - J^{wn}_{w}\left(\frac{D^{\bar{s}}\epsilon^{w}}{Dt} + \chi^{ws}_{s}\frac{D^{\bar{s}}\epsilon^{s}}{Dt}\right) - k^{wn}\left(\epsilon^{wn}_{eq} - \epsilon^{wn}\right) - \cos\varphi^{ws,wn}\left(\epsilon^{ws} + \epsilon^{ns}\right)\frac{D^{\bar{s}}\chi^{ws}_{s}}{Dt} = 0. \qquad (53)$$

G^{wn} is a diagonal tensor of trace 1 for the isotropic case, which is a reasonable approximation. For other cases, an evolution equation for this quantity is needed, which can be derived at the microscale, averaged to the macroscale, and approximated in a convenient form. The interfacial velocity vector is w^{wn}, the macroscale velocity of the fluid-fluid interface, which moves in a direction normal to the interface. An exact equation for this kinematic quantity is not available. An approximation can be derived based upon the averaging theorems, of the form

$$\left(\epsilon^{wn}\right)^{2}\mathbf{w}^{wn} = -\frac{\partial\epsilon^{w}}{\partial t}\nabla\epsilon^{w} + \left\langle\mathbf{n}_{w}\right\rangle_{\Omega_{ws},\Omega}, \qquad (54)$$

where an approximation is needed for the average of the normal vector, which is trivial for the common isotropic case. As surfaces deform, their curvatures change. An evolution equation relating the change in the mean and Gaussian curvatures can be written as

$$\epsilon^{wn}\frac{\partial K^{wn}_{n}}{\partial t} + K^{wn}_{n}\frac{\partial\epsilon^{wn}}{\partial t} + \nabla\cdot\left(\epsilon^{wn}K^{wn}_{n}\mathbf{w}^{wn}\right) + \frac{J^{wn}_{n}}{2}\nabla\nabla:\left(\mathbf{I} - \mathbf{G}^{wn}\right)\frac{\partial\epsilon^{n}}{\partial t} + \frac{J^{wn}_{n}}{2}\nabla\cdot\left(\epsilon^{wn}J^{wn}_{n}\mathbf{w}^{wn}\right) = 0. \qquad (55)$$

A constraint equation based upon Galilean invariance for the mean curvature is

$$-\nabla\cdot\left[\epsilon^{wn}J^{wn}_{n}\left(\mathbf{I} - 2\mathbf{G}^{wn}\right)\right] + 2K^{wn}_{n}\nabla\epsilon^{n} - \nabla\nabla:\left(\mathbf{I} - \mathbf{G}^{wn}\right)\nabla\epsilon^{n} = 0, \qquad (56)$$

and the Gaussian curvatures can be related to the Euler characteristic for smooth closed boundaries using the Gauss-Bonnet theorem, giving

$$4\pi\chi_{n} = \epsilon^{wn}K^{wn}_{n} + \epsilon^{ns}K^{ns}_{n}. \qquad (57)$$

Additional closure relations for the three-phase model are available through the specification of state equations. One example is a formulation deduced from integral geometry that relates volume fractions, interfacial areas, and curvatures, of the form [47]

$$\epsilon^{n} = F\left(\epsilon^{wn} + \epsilon^{ns},\ \epsilon^{wn}J^{wn}_{w} + \epsilon^{ns}J^{ns}_{s},\ \chi_{n}\right), \qquad (58)$$

where F is a smooth, differentiable function that can be deduced from state data. Equation (58) has been shown to provide an accurate representation of capillary pressure that is based on theory, hysteresis-free (unlike traditional capillary pressure equations), and applicable not only to equilibrium conditions but to dynamic states as well [23]. Equations of state can also be formulated to relate densities to pressures and compositions.

Closed model

The purpose of this section is to assemble the pieces needed for a complete, closed three-phase model of tumor growth, which can be used as a basis to derive numerical approximations. The three-phase model is more complicated than the previously formulated two-phase model; this results from the existence of additional entities and the attendant closure problem. Overall conservation equations for mass and momentum are needed for each phase, and compositional equations including mass transfer and reactions are needed for the species in a phase. The massless interface and common-curve assumptions do simplify the formulation for the lower-dimensional entities. We note that other TCAT formulation approaches are possible; we are merely providing an example for completeness.
This model follows from the conservation equations and the closure relations based upon the SEI that were previously presented. The quantities to be resolved include the composition of each phase, the velocity and mass density of each phase, the fluid pressures and solid stresses, the entity extents, and the interfacial curvatures needed to approximate the state of the system. The model that follows is based upon these considerations. Summing Eq. (1) over all species yields a set of conservation-of-mass equations for the phases of the form

$$\frac{D^{\bar{\alpha}}\left(\epsilon^{\alpha}\rho^{\alpha}\right)}{Dt} + \epsilon^{\alpha}\rho^{\alpha}\,\mathbf{I}:\mathbf{d}^{\bar{\alpha}} - \sum_{i\in J_{s}}\sum_{\kappa\in J_{c\alpha}}M^{i\kappa\rightarrow i\alpha} = 0 \quad \text{for } \alpha\in J_{P}, \qquad (59)$$

which can be rewritten using Eq. (45) as Eq. (60), where the chemical potentials can be written in terms of mole fractions and mass fractions as previously formulated in Eqs. (28) and (30). A momentum equation is also required for the solid phase; alternatively, the total change in momentum for the system can be used, formulated by summing Eq. (2) over all entities and applying the closure approximations given by Eqs. (38) and (40), yielding

$$\frac{D^{\bar{w}}\left(\epsilon^{w}\rho^{w}\mathbf{v}^{\bar{w}}\right)}{Dt} + \frac{D^{\bar{n}}\left(\epsilon^{n}\rho^{n}\mathbf{v}^{\bar{n}}\right)}{Dt} + \frac{D^{\bar{s}}\left(\epsilon^{s}\rho^{s}\mathbf{v}^{\bar{s}}\right)}{Dt} + \epsilon^{w}\rho^{w}\mathbf{v}^{\bar{w}}\,\mathbf{I}:\mathbf{d}^{\bar{w}} + \epsilon^{n}\rho^{n}\mathbf{v}^{\bar{n}}\,\mathbf{I}:\mathbf{d}^{\bar{n}} + \epsilon^{s}\rho^{s}\mathbf{v}^{\bar{s}}\,\mathbf{I}:\mathbf{d}^{\bar{s}} - \epsilon^{w}\rho^{w}\mathbf{g}^{w} - \epsilon^{n}\rho^{n}\mathbf{g}^{n} - \epsilon^{s}\rho^{s}\mathbf{g}^{s} - \nabla\cdot\left[-\epsilon^{w}p^{w}\mathbf{I} - \epsilon^{n}p^{n}\mathbf{I} + \epsilon^{s}\mathbf{t}^{s} + \sum_{\kappa\in J_{I}}\epsilon^{\kappa}\gamma^{\kappa}\left(\mathbf{I} - \mathbf{G}^{\kappa}\right)\right] = \mathbf{0}. \qquad (61)$$

In the event that a zero macroscale velocity is not a reasonable assumption for the solid phase, expressions available in the literature [20] can be used to produce a solvable equation. As previously noted, additional work on the solid mechanics of these biomechanical systems is warranted, and this is an active area of research [6,11]. The three-phase model is complete when augmented with the evolution equations for the volume fraction given by Eq. (49) and the interfacial area given by Eq. (53), the equation of state given by Eq. (58), and state equations for the dependence of the phase densities on pressures and compositions. Approximations would also be needed for the specific interfacial area ε^{ns} and the mean curvature of the solid phase J^{ns}_s.

Discussion

Tumor occurrence and growth exhibit classical problems of scale. A complete understanding requires molecular, genetic, and cellular perspectives, while the systems of concern are at the scale of the human body and thus several orders of magnitude above the length scales of fundamental concern. The perspective advanced herein is a practical approach focused on a mechanistic mathematical description that is able to describe the system of concern. The TCAT approach provides a means to formulate such models, which are founded upon conservation principles, thermodynamics, and a set of mathematical theorems. Even though molecular and cellular approaches are essential to advance fundamental understanding of tumor formation and growth, the laws of continuum mechanics must still apply provided the necessary primary restrictions are met. The approach taken is to develop macroscale models in which tumors are described in an averaged sense with admissible changes in both space and time. With this general approach, a wide variety of specific models is possible. The primary goal of this work was to illustrate how the TCAT approach can be used to formulate macroscale models of varying sophistication and complexity. Two specific examples were provided: a two-phase formulation and a three-phase formulation; the former included a single fluid phase, the latter included two fluid phases, and both included a solid phase.
Both of these examples included interfaces, and the full three-phase case can include a common curve, which was included in the SEI but dropped for simplicity in the final example formulation. Both approaches include several species; reactions for tumor formation, tumor destruction, necrotic-tissue formation, and necrotic lysis; and mass transfer. We emphasize that the examples provided are intended as a starting point for advancing understanding of how the TCAT approach can be applied, evaluated, and validated. It is expected that details of the methods, especially the species and reaction sets, will evolve as understanding of the essential components of an optimally useful model evolves. As understanding, and available computing power, expand, so too will the complexity and fidelity of the underlying models. However, the approaches illustrated provide a solid foundation for continuum mechanical modeling of tumor growth. Some previous tumor growth models have included elements of TCAT, but not to the extent detailed in this work [32,53-55,60]. They have been successful in modeling tumor growth but are not entirely satisfactory; in particular, the missing interfaces required questionable approximations when linking capillary pressures to saturations. It is hoped that the presented approach will allow a step forward in this direction. The next steps beyond the numerical implementation of the detailed equations will be the inclusion of angiogenesis in the model and, finally, the addition of drug delivery. The number of parameters involved will require a sensitivity analysis to identify the leading-order effects and to motivate the most important experimental work. Uncertainty quantification is strongly linked to this problem. It is hoped that mechanistic models of the sort presented herein will help to elucidate mechanisms underlying cancer growth and treatment, which may be considered independently of their genetic origins. In fact, recent studies have revealed extensive variations in genetic signatures, gene expression, and post-translational modifications among different tumors and within tumors, creating complex tumor heterogeneities. Heterogeneity is frequently attributed to genetics, but it is related to non-genetic influences as well [5]. Clearly, this directly impacts therapeutic treatments and outcomes [40,43,59]. Thus, a better understanding of heterogeneities will help to improve treatments of metastatic disease. This brings us to the final aim of our modeling effort, which is the evaluation of the efficiency of cancer drugs. This requires an efficient model of tumor growth linked to a bio-distribution model of the drug under scrutiny. A motivation for three-fluid-phase models would be a need to represent tumor systems based upon three distinct phases with immiscible fluid-like properties that form separate regions of matter with distinct interfacial tensions. While such models may be needed, much can be done with the general framework for one-fluid and two-fluid models advanced in this work. In the event that model evaluation and validation efforts for the sorts of models advanced here conclude that three-fluid-phase continuum models are needed to represent tumor growth with adequate fidelity, TCAT can be used to derive such models. While the conservation and balance equations are in place for such advancements, additional thermodynamic, evolution-equation, and state-equation work would be required. A new general SEI would also need to be derived.
Of course, such a theoretical framework would be applicable to other systems as well. Models with some similarities to those detailed herein are being used for successful medical applications (e.g., [6,38]), and extensions to clinical applications of drug delivery and evaluation of drug efficiency are also possible; initial work in these directions can be found in [4,10]. Because the intent of this work was to provide a foundational example of how TCAT can be used to formulate macroscale models to describe tumor growth, it has not included the numerical-methods aspects of approximating these models. Furthermore, the worth of a model depends upon its ability to represent physical systems of concern with sufficient fidelity to be of use to those assessing such systems. Such an assessment requires not only an approximation of the formulation but also parameter estimation and comparison to observed systems. It is hoped that this work will enable efforts to approximate, apply, evaluate, and validate a range of candidate TCAT models in pursuit of realistic and worthwhile representations of systems important to society. Much work remains to be done to fulfill this vision.

Conclusions

Several conclusions follow from this work:
1. Macroscale continuum mechanical approaches provide a means to describe tumor formation, growth, and various treatment modalities at a length scale relevant to human systems of concern.
2. TCAT provides a framework for formulating such macroscale models in a manner that is consistent across length scales with respect to established conservation and thermodynamic principles.
3. Two-phase and three-phase TCAT models were formulated as examples of the formulation process and reduced to closed form.
4. The work advances prior efforts by providing a more complete set of entities and by including recent advances in differential-geometry evolution equations and integral-geometry state equations, which provide accurate closure approximations for two-fluid porous-medium models.
5. The foundational nature of this work is intended to support future efforts to improve upon the example formulations provided here, in concert with numerical approximation of the formulated models, parameter estimation, evaluation, and validation.
6. It is anticipated that advances in fundamental molecular- and cellular-level understanding of tumor formation, growth, and treatment processes will inform and help improve the continuum-scale models of focus in this work, which may enable more routine simulation at scales of concern and help advance more effective treatment approaches.

Table 1 Reactions for two-phase systems

Tumor formation: ν_gfs MW_g C^gs + ν_ofs MW_o C^os → ν_tfs MW_t C^ts [12,48]
Tumor destruction: ν_tds MW_t C^ts + ν_cds MW_c C^cs → ν_eds MW_e C^es + ν_wds MW_w C^ws [35,41]
Necrotic formation: ν_tns MW_t C^ts → ν_nns MW_n C^ns [3,12]
Necrotic lysis: ν_nls MW_n C^ns → ν_els MW_e C^es + ν_wls MW_w C^ws [33]
Exploring Current Concepts and Challenges in the Identification and Management of Early-Stage COPD

The need to improve health outcomes, as well as disease prognosis, has led clinicians and researchers to propose new ways of identifying COPD in its earliest forms. This initiative is based on the hypothesis that an earlier intervention would have a greater prognostic impact. However, the operational definition of a patient in the initial stages of the disease is complex, and there is still no unanimously accepted definition. GOLD has recently proposed different concepts to identify COPD in its early stages, such as COPD in young people or COPD with mild functional impairment. In addition, GOLD proposes two other concepts, called pre-COPD (symptomatic non-obstructive patients) and PRISm (preserved ratio with impaired spirometry), which aim to identify the patient at risk of developing chronic airflow obstruction. However, despite the attractiveness of these concepts, none has been taken up universally by the medical community. A universally accepted definition of COPD in its early stages is a necessary preliminary step in designing clinical trials to establish the best way to treat these patients. This review deals with these concepts of COPD at the onset of the disease, highlighting their importance and the problems involved in identifying them as therapeutic targets in real clinical practice.

Introduction

Traditional approaches to Chronic Obstructive Pulmonary Disease (COPD) appear to have failed in terms of preventing disease progression, deterioration of Forced Expiratory Volume in one Second (FEV1), or mortality. Despite the potential impact of advanced inhaled therapy [1,2] and non-pharmacological approaches [3,4], COPD continues to be a leading cause of mortality worldwide [5]. Additionally, 70% of COPD patients are diagnosed at advanced stages, and 50% die approximately 3.6 years after their first hospitalization [6]. Therefore, the epidemiological data indicate that COPD continues to be a disease with a high impact on the population. Consequently, there has recently been a general call for alternative formulas that provide an opportunity to improve the prognosis of the disease [7,8]. One of these initiatives is to establish a protocol which allows us to identify the disease in its earliest stages [9]. The hypothesis behind this is that it would be possible to improve prognosis and change the natural history of the disease at these stages [10]. This proposal has a solid background for several reasons. First, it has been shown that the most important loss of lung function occurs both in younger patients under 50 years of age [11] and in patients who have not yet developed severe airflow obstruction [12,13]. There are also data showing that patients with mild COPD suffer a more rapid loss of lung function in the presence of exacerbations than patients at more advanced stages of the disease [14,15]. Second, it has been shown that we can reduce mortality risk with inhaled therapies in the earlier stages of the disease [16]. Third, non-pharmacological therapies have a greater impact during the earlier stages than they do during later, more advanced stages of the disease [17,18].
In this context, one of the main limitations today is the identification of patients in the less advanced stages, for which the term 'early COPD' has been coined. However, despite the term 'early COPD' being widely used, the concept is poorly defined. To date, this clinical situation has been referred to interchangeably with different concepts, such as 'COPD at the initial phases of the disease', 'COPD in young people', or 'mild COPD', without any suitable definition of these clinical situations. Recently, the Global Initiative for Chronic Obstructive Lung Disease's (GOLD's) 2023 document (the current version of the document summarizing the recommendations for the diagnosis and treatment of COPD internationally, available at https://goldcopd.org/, accessed on 3 July 2023) made an effort to start defining these terms [19]. These concepts are currently being validated as proposals that could potentially impact treatment. Consequently, doubts remain among the clinical community about their implementation in daily clinical practice. In addition, GOLD 2023 proposes other terms, such as pre-COPD (symptomatic non-obstructive subjects) and PRISm (preserved ratio with impaired spirometry), which also need thorough explanation. Therefore, the aim of this narrative review is to broaden the debate on early COPD, with the idea of making it easier to identify those patients for whom there are greater opportunities for therapy.

Concept and Importance

The 2023 revision of the GOLD document proposes using this term exclusively to refer to the severity of involvement of the target organ, which, in the case of COPD, means the severity of chronic airflow obstruction measured by spirometry [19] (Figure 1). This group of patients may be relevant for three main reasons. First, it represents the vast majority of COPD patients in the community: taking into account the limitation of the high rate of under-diagnosis, according to data from the EPISCAN II study in Spain, 56% of all patients diagnosed with COPD were mild, forming the largest group [20]. Second, despite the mild airflow limitation, these patients may also suffer a profound impact of the disease. Several studies have shown that, despite this mild obstruction, patients may be symptomatic and may have emphysema, decreased health-related quality of life (HRQoL), decreased exercise capacity, and increased use of health-care resources, as well as suffering recurrent exacerbations [21-23]. In the ECLIPSE study, 20% of the frequently exacerbating patients were patients with mild COPD [22]. These patients may also have comorbidities such as cardiovascular disease and lung cancer, or a higher risk of depression [24-26]. Thirdly, it may represent an opportunity for early intervention. In the Understanding Potential Long-Term Impacts on Function with Tiotropium (UPLIFT) study, the authors evaluated the response of FEV1 and Forced Vital Capacity (FVC) to tiotropium. They concluded that the response of these two spirometric parameters to bronchodilators decreased significantly over time and with the severity of airflow obstruction, evidencing a greater likelihood of arresting lung-function loss in patients with lower degrees of airflow obstruction [27]. This is consistent with other studies showing that long-acting bronchodilators may slow the decline in lung function, as well as reduce exacerbation rates and improve HRQoL, in patients with mild to moderate COPD [28]. In terms of non-pharmacological interventions, the impact is similar.
Smoking cessation reduces the rate of lung-function decline at all stages of the disease [29]. Interestingly, patients with mild to moderate COPD have a higher rate of FEV1 decline compared to patients with severe or very severe COPD [14]. Similarly, there is evidence that a cohort of patients with mild COPD may suffer a more rapid loss of lung function in the presence of exacerbations than patients with more advanced disease [14,15].

Limitations

Although this concept is easy to measure, has clinical implications, is well standardized, and is universally accepted, certain arguments need to be clarified before opting for this form of early patient identification. COPD has traditionally been understood as a disease characterized by an accelerated decline in lung function. Consequently, it would seem logical to expect that all COPD patients would go through a first phase of spirometrically mild COPD, followed by a subsequent progressive decline. However, this may not always be true. It has been shown that not all patients with mild COPD progress to more severe airway obstruction [30]. Therefore, there are patients with mild disease who do not progress to more advanced stages even if they continue to smoke. In addition, recent studies have shown that a proportion of COPD patients experience a normal decline in lung function from a low peak lung function in early adulthood [31], and that lung-function development may be impaired from an early stage of life [32]. Consequently, it is plausible to find patients with airflow obstruction that was never mild, owing to this impairment in lung-function development. The idea of COPD as a disease that starts out mild and then proceeds to progressive functional decline may therefore fail in either of its two premises at the patient level. Having mild COPD is thus not a valid way of identifying individual patients who will have accelerated lung-function decline in the future.
Another issue in the debate about how to identify functionally mild patients is whether or not spirometry should be the way to identify them. Some researchers argue that spirometry may underestimate existing physiological damage. Accordingly, other diagnostic methods are advocated, such as measuring the diffusing capacity for carbon monoxide (DLCO) [10]; structural abnormalities detected on computed tomography (CT) [33], such as thickening of the walls of segmental and subsegmental bronchi or emphysema [34]; or even detecting dysfunction of the small airways through techniques such as impulse oscillometry [35].

Summary

Considering some of the above aspects, classifying patients by the severity of airflow obstruction alone is probably an insufficient strategy [36]. Although we need more evidence on specific therapeutic interventions for this group of patients, as most large clinical trials focus on patients with severe or very severe COPD, current data show that intervening with both pharmacological and non-pharmacological treatment could modify the natural history of the disease [27-29]. This concept has the advantage of being easily objectifiable and clearly defined using spirometry.
Concept and Importance

This term may also appear easy to define at first, as it refers to the chronological age of the patient (Figure 1). The current GOLD proposal uses this term to refer to COPD patients aged between 20 and 50 years [19]. This group of patients is particularly important for several reasons. Firstly, it is an age group with a high rate of under-diagnosis [20]. Second, it is a group of patients with a high impact of the disease. Notably, the available evidence shows that young patients are not asymptomatic; instead, they have a higher symptom burden, worse HRQoL, and considerably more exacerbations than older COPD subjects [10]. Additionally, there is evidence that COPD subjects under the age of 50 have a more accelerated FEV1 loss [11,14]. Notably, some treatments, such as tiotropium, have been shown to improve HRQoL, decrease the exacerbation rate, and lead to a significant reduction in the decline of post-bronchodilator FEV1 in younger patients with COPD [11]. Third, data show that among COPD patients under 50 years of age there is a high percentage of active smokers [11]. Therefore, young COPD represents a subtype of patients that is easy to identify and has outstanding clinical consequences.

Limitations

Although this term is attractive for clinical practice, it deserves additional comment. Firstly, its prevalence has been only sparingly studied [37]. It has traditionally been assumed that COPD begins at 35-40 years of age, with no clearly defined lower limit; however, it is possible that COPD begins earlier, and the prevalence of classic COPD has not been studied at younger ages. In the EPISCAN II study, 4.1% of subjects between 40 and 50 years of age had COPD, of whom only 9% had been diagnosed prior to the study [20]. Another consideration is the prevalence and role of alpha1-antitrypsin deficiency in this group of young patients. Although the classical phenotype of the patient under the age of 50 with emphysema and progressive respiratory symptoms, especially in a non-smoker or minimal smoker, may be due to alpha1-antitrypsin deficiency, the reality is much more complex. Consequently, the number of patients whose alpha1-antitrypsin deficiency has been identified is currently low [38]. Another confounder in younger cohorts is the presence of asthma. One of the many differences between asthma and COPD concerns the age of onset, which is earlier in asthma. Consequently, the study of chronic respiratory symptoms in a young patient should begin by evaluating a diagnosis of asthma [39]. Additionally, the combination of the two diagnoses in a single patient constitutes a challenge for the clinician, especially at younger ages, with direct clinical consequences [40,41]. Yet another consideration is that few patients in this age group have been included in large clinical trials. Despite this, several studies have shown a greater recovery of FEV1 in younger patients than in the total group of patients with the application of pharmacological treatment and preventive measures [9,11,42], while this effect is lost in older patients or those with more advanced disease [27]. Finally, the age cut-off at 50 years is arbitrary. With the general aging of the population, and the improvement in both longevity and HRQoL, the current age threshold should probably be re-evaluated.
Summary

In summary, although more scientific evidence is needed to support the best therapeutic alternatives for this group of patients, the available data suggest that taking action at this stage could modify the natural history of the disease. This definition has the advantage of being easily objectifiable and clearly defined, although the group has several disadvantages, such as a high under-diagnosis rate, a high disease impact, and a high rate of active smoking, and not many clinical trials have investigated the therapeutic impact [11]. Future studies should evaluate the prevalence of COPD under the age of 50 and evaluate the impact of therapeutic measures in the long term.

Early COPD

4.1. Concept and Importance

This term refers to the onset of the natural history of the disease that ends up leading to chronic airflow obstruction and its accompanying symptoms, together with an accelerated decline in lung function (Figure 1). The current hypothesis holds that COPD develops from an abrupt increase in inflammation at a specific time point, which results in airflow obstruction [43]. Early COPD refers to the moment when this increased inflammation starts, if such a moment exists, which corresponds to when the obstruction begins to appear; it is therefore a more biological concept. The potential implication of identifying these patients is clear, since, if the development of the disease can be identified early on, there is clear potential for intervention. Currently, there is some evidence suggesting that this could indeed be the case. For example, several initiatives have been published on the protection of the respiratory system with antioxidant drugs [44], while others have recently started to explore study designs for potential clinical trials with these patients, who have not yet developed COPD but are at risk [42].

Limitations

Although this is extremely attractive in principle, it is unfortunately not known when this onset of inflammation occurs, and it will probably present differently in different types of COPD patients. There are also certain challenges that make it difficult to pinpoint the exact moment. Part of this difficulty probably lies in not being able to differentiate between a true, recent onset of COPD and a persistently mild COPD that will not progress over time. Arriving at such a definition would require a series of prospective cohort studies of people at risk of developing COPD, of sufficient duration to demonstrate which factors ultimately influence the initiation of lung inflammation. In an attempt to give a more operational definition, some authors have proposed different approaches. Martinez et al.
use this term for patients under 50 years old with a history of smoking of more than 10 pack-years who meet one of the following criteria: FEV1/FVC below the lower limit of normal, compatible alterations on CT scan, or a fall in FEV1 of at least 60 mL/year that is accelerated relative to FVC [45]. Other initiatives have shown that the combination of a low baseline lung function and a rapid decline in lung function (defined as an average annual fall in FEV1 > 40 mL) results in an increased risk of developing COPD compared to subjects with neither or only one of these traits [46]. Based on these data, a functional definition of early COPD was proposed which refers to patients < 50 years old, with a smoking history > 10 pack-years and FEV1/FVC < 0.7 (or below the lower limit of normal). These were then further classified as early COPD with low disease activity (FEV1 > 50%, dyspnea measured by the modified Medical Research Council (mMRC) scale < 2, without frequent exacerbations, and with DLCO > 80%) and early COPD with high disease activity (FEV1 < 50%, mMRC dyspnea > 2, and/or > 2 exacerbations/year, and DLCO < 80%) [47]. These definitions therefore already include symptoms, exacerbations, and tests such as DLCO. Some of these data have been corroborated in subsequent studies linking early COPD with an increased symptomatic burden and exacerbations, in addition to structural abnormalities on CT and functional alterations such as decreased DLCO [33]. Unfortunately, these definitions are research-based and have been poorly corroborated in subsequent studies. There are also authors who advocate paying more attention to early asymptomatic COPD and to non-smokers, as well as focusing on early-life events such as childhood respiratory infections, exposure to air pollution, and genetics [48-50], with the idea that the development of COPD is a long-term cumulative process and that the development of lung function begins in the embryonic stage [51]. Another approach considers tobacco exposure to be the main factor to be explored. Smoking plays a fundamental role in the natural history of COPD, according to recent data. Depending on exposure to tobacco smoke, young adults with early COPD (defined as those with FEV1/FVC below the lower limit of normal and age < 50 years) are more likely to develop clinical COPD (defined as FEV1/FVC < 0.7 and FEV1 < 80%) [52]. Some studies have proposed tobacco-induced small-airway dysfunction as the initial stage in the development of COPD [53,54], which may be detectable through techniques such as impulse oscillometry [35] or parametric response mapping on chest CT, which shows how small-airway disease precedes emphysema [54]. Finally, from a therapeutic point of view, the effects of treatment in patients with early COPD are still unknown, mainly because we do not have an agreed and validated definition of the term, and there is still much to discover about the determinants that initiate the disease. Interestingly, the evidence that early COPD is associated with poor clinical outcomes leads us to believe that early diagnosis and treatment could modify the natural history of the disease, and several studies are underway with this objective [47]. With respect to pharmacological interventions, as discussed above, several studies show that treating patients with mild COPD or young COPD can modify the natural history of the disease, although neither of those phases fits the definition of early COPD [47] as considered here.
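As an illustration only, the following sketch encodes the operational criteria for early COPD quoted above (Martinez et al. [45]) and the low/high disease-activity split [47]. The variable names, threshold handling, and treatment of borderline values are assumptions for illustration, not validated clinical logic:

```python
from dataclasses import dataclass

@dataclass
class Spirometry:
    age: float                 # years
    pack_years: float          # smoking history
    fev1_fvc: float            # post-bronchodilator ratio
    fev1_fvc_lln: float        # lower limit of normal for the ratio
    fev1_pct_pred: float       # FEV1, % predicted
    fev1_decline_ml_yr: float  # annual FEV1 fall [mL/year]
    ct_compatible: bool        # compatible alterations on CT
    mmrc: int                  # mMRC dyspnea grade
    exac_per_year: float       # exacerbations per year
    dlco_pct_pred: float       # DLCO, % predicted

def early_copd(p: Spirometry) -> bool:
    """Operational definition per Martinez et al. [45]: age < 50, > 10 pack-years,
    and at least one of: ratio < LLN, compatible CT, or FEV1 fall >= 60 mL/year."""
    if p.age >= 50 or p.pack_years <= 10:
        return False
    return (p.fev1_fvc < p.fev1_fvc_lln
            or p.ct_compatible
            or p.fev1_decline_ml_yr >= 60)

def disease_activity(p: Spirometry) -> str:
    """Low/high disease-activity split quoted from [47]; borderline handling assumed."""
    high = (p.fev1_pct_pred < 50 or p.mmrc > 2
            or p.exac_per_year > 2 or p.dlco_pct_pred < 80)
    return "high" if high else "low"

patient = Spirometry(age=45, pack_years=15, fev1_fvc=0.68, fev1_fvc_lln=0.70,
                     fev1_pct_pred=72, fev1_decline_ml_yr=65, ct_compatible=False,
                     mmrc=1, exac_per_year=0.5, dlco_pct_pred=85)
if early_copd(patient):
    print(f"early COPD, {disease_activity(patient)} disease activity")
```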
Summary
In conclusion, the concept of biological early COPD is of great importance, because it makes direct reference to the beginning of the natural history of the disease, the moment at which it is possible to intervene to modify it. Unfortunately, although different approaches have been taken to define early COPD, none of them, as mentioned above, has been agreed on or validated, and there are still many gaps in our knowledge in this respect.

Pre-COPD
Concept and Importance
This term evolved from the previously used term GOLD 0, which was proposed in the first GOLD documents. It includes persons of any age with previous inhaled exposure presenting with respiratory symptoms and/or structural abnormalities and/or functional alterations in the absence of airflow obstruction on spirometry [19] (Figure 1). The concept implies that abnormalities other than spirometric obstruction, which may be clinical, functional or radiological, must be observed.

From the perspective of clinical presentation, the definition of the term pre-COPD indicates that it refers to people with respiratory symptoms. Among these, we must highlight those with chronic cough and expectoration, otherwise termed chronic non-obstructive bronchitis. These patients are relevant because they are most commonly associated with a higher risk of COPD progression, as well as with worse HRQoL and episodes similar to exacerbations [55]. This subset of patients is distinguished by having a clear form of pre-COPD, as the underlying pathobiological feature (mucin production) has been identified, and they show specific structural abnormalities, such as airway wall thickening on CT [56].

In addition to symptoms, other functional impairments apart from a spirometric obstruction can also be mentioned. One of these physiological abnormalities is a decreased FEV1: studies show that belonging to the lower quartiles of FEV1, despite being within the limits of normality, is associated with an increased risk of developing COPD [57]. This term overlaps with the concept of Preserved Ratio with Impaired Spirometry (PRISm), which we will discuss later. Another functional impairment is an accelerated fall in FEV1, which, as previously discussed, combines a low initial lung function with a rapid decline (defined as an average annual fall in FEV1 > 40 mL) and is associated with an increased risk of developing COPD [46]. Among other functional alterations, the one that has become most important is the measurement of DLCO, as it is a test capable of identifying people at higher risk of developing COPD among smokers without airflow obstruction [56]. It has been shown that DLCO may begin to decline at early stages of the disease and continues to deteriorate in advanced stages, in contrast to what happens with FEV1 [58].

The last aspect related to pre-COPD concerns structural abnormalities, namely radiological alterations such as segmental and subsegmental bronchial wall thickening or emphysema, both of which are associated with an increased risk of developing airflow obstruction [34,59].
The importance of this concept of pre-COPD, in any of the forms mentioned above, is that most of these subjects are at higher risk of developing COPD, although not all of them will do so, especially if preventive measures are taken [60]. It could therefore represent a window of opportunity for an early intervention, similar to the concept of early COPD discussed above; this has led to it recently being referred to as 'latent COPD' [61]. Incidentally, the prevalence of pre-COPD in people without airflow obstruction in the Spanish general population has been reported to be as high as 22.3% [62].

Limitations
Despite the potential of this concept, a few issues should be mentioned. One major limitation is that it confers disease status on subjects who, according to current guidelines, have not actually been diagnosed as having COPD and may never progress to the disease. The challenge for the clinician is therefore to determine which patients will make the transition to symptomatic clinical COPD and, consequently, which patients would be eligible for some type of therapeutic intervention for preventive purposes. Although numerous factors have already been described, such as symptoms of chronic bronchitis, decreased DLCO or alterations on CT scan such as emphysema or bronchial wall thickening [56], much remains to be understood about the onset and progression of the disease in the individual patient. Secondly, with regard to therapeutic interventions that may be capable of modifying the course of the disease, apart from risk reduction measures, there have been very few therapeutic clinical trials, and there is no clear evidence of the effects that treatments might have on the natural history of the disease, although some studies have been launched in this direction [56].

Summary
In conclusion, this term has the advantage of presenting a clearer definition than the previous proposals, and although some factors remain to be defined, some determinants of progression and diagnostic tests capable of detecting them have been proposed; however, we have fewer data regarding which therapeutic interventions may be capable of modifying the natural history of the disease.

PRISm
Concept and Importance
This term refers to individuals with a preserved FEV1/FVC ratio (i.e., without airflow obstruction) but with impaired spirometry in the form of a decrease in either FEV1 or FVC (Figure 1). The term PRISm was initially coined for patients who are or have been smokers [14], and although many of the subjects with PRISm fall into these two categories, it has subsequently been found that the prevalence of PRISm in the general population, which ranges from 7.1 to 20.3%, is not much different from that in smokers or ex-smokers, which is around 12.3% [63,64]. Interestingly, several factors make this form of functional pre-COPD worth considering beyond its mere prevalence. Subjects with PRISm seem to have specific characteristics associated with this form of lung function impairment [65], such as frequent nutritional disturbances [66,67]. There is also evidence that PRISm is associated with greater respiratory symptomatology, lower exercise tolerance and more admissions than in people with normal spirometry [63,66,68]. Consequently, it is not surprising that this form of pre-COPD may be related to mortality, even after adjusting for comorbidities and smoking [64,66,68,69].
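As a purely illustrative aid, the spirometric definitions just given can be sketched as a rough triage rule. This is not clinical guidance: the 80%-predicted cut-offs for FEV1 and FVC are an assumption borrowed from common PRISm operationalizations, not thresholds stated in this review, and the fixed 0.7 ratio could be replaced by the lower limit of normal, as debated below.

def spirometry_pattern(fev1_fvc, fev1_pct_pred, fvc_pct_pred, obstruction_cutoff=0.70):
    """Rough spirometric triage sketch (illustrative only).
    Obstruction: FEV1/FVC below the fixed ratio. PRISm: preserved ratio but
    FEV1 or FVC impaired; the 80% predicted threshold is an assumption."""
    if fev1_fvc < obstruction_cutoff:
        return "airflow obstruction"
    if fev1_pct_pred < 80 or fvc_pct_pred < 80:
        return "PRISm"
    return "normal spirometry"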
Limitations
Although this concept seems appealing, there are still many gaps in our knowledge about this category, as many of these subjects will evolve towards normal spirometry and others towards airflow obstruction [64,65,69], and we do not know the pathophysiological mechanisms that cause them to evolve one way or the other. What is certain is that the concept needs to be refined, since, from a spirometric point of view, this pattern can correspond to at least three different situations:
• Firstly, a preserved ratio with decreased FVC may reflect a restrictive pattern secondary to some degree of lung hyperinflation or dynamic small-airway collapse in smokers [70,71].
• Secondly, this restrictive pattern could also be due to different comorbidities, such as associated pulmonary fibrosis or heart failure, the presence of bronchiectasis, or multiple other respiratory and non-respiratory comorbidities [72].
• Thirdly, the situation could arise where spirometry shows a normal FEV1/FVC ratio and a normal FVC but a decreased FEV1. This latter circumstance is a spirometric pattern with uncertain consequences that should be further explored in future research.

In this regard, some authors have identified three different types of PRISm pattern [63]: PRISm-restrictive, with a higher FEV1/FVC ratio, higher forced expiratory flow between 25 and 75% of FVC, and less emphysema and air trapping; PRISm-COPD, with an average body mass index and a lower FEV1/FVC ratio, as well as more air trapping and emphysema; and PRISm-metabolic, with a higher body mass index and a higher percentage of diabetics, together with greater FEV1 impairment, lower forced expiratory flow between 25 and 75% of FVC, and thickened bronchial walls on CT [63]. Interestingly, the PRISm-COPD subgroup showed less dyspnea, better exercise capacity (measured as meters walked in the 6-min walk test) and less hypoxemia. The authors therefore hypothesize that this subgroup could be considered as early COPD in which the patients have not yet developed an obstructive pattern on spirometry [63]. Unfortunately, this has not been demonstrated in subsequent studies [14].

Another challenging issue regarding the concept of PRISm is whether we should maintain the fixed ratio as the signal used to identify airflow obstruction or, instead, lower the limit of what is considered normal [63]. Such a paradigm shift would logically change the prevalence, and possibly the clinical relevance, of PRISm. Notably, most of the studies have been carried out in Western populations. Very recently, however, new data have shown that in a Japanese population, PRISm is also related to an increase in cardiovascular and all-cause mortality and a higher risk of developing airflow obstruction [73].

Summary
In conclusion, the main advantage of this term is that its definition is based on spirometric values, and its important clinical impact is clear. However, little is known about the pathophysiological mechanisms and causes behind PRISm, and it seems to encompass at least three very different subgroups, all of which makes it difficult to establish diagnostic guidelines and to design therapeutic strategies capable of modifying the natural history of the disease.
Bronchodilators in Non-Obstructive Lung Disease
The debate described above about the various ways of identifying a patient with COPD in its earliest stages makes sense only if there is a clear therapeutic strategy to follow. Although some therapeutic initiatives have been discussed in this review, there is still considerable controversy over the role of bronchodilators in patients who have not yet developed an obstruction identifiable by spirometry.

The available literature points to two possible benefits of using these drugs. Firstly, they provide symptomatic relief. COPD is known to have a slowly progressive onset [74], and in this clinical context the patient usually adapts progressively to the functional limitation, often without being aware of the symptomatic impact of their clinical situation. It is therefore possible that bronchodilator treatment may improve the functional alterations underlying the onset of COPD and thereby provide symptomatic relief, even in the absence of an easily identifiable obstruction on spirometry.

Secondly, it is known that COPD in its earliest stages can also be associated with hyperinflation [75]. In this context, long-acting bronchodilators are known to reduce pulmonary hyperinflation [76,77] and could therefore play a role in improving this hyperinflation and, consequently, the symptoms, even when no obstruction is detectable by spirometry. Interestingly, a recent analysis explored the characteristics and bronchodilator responsiveness of early COPD patients with and without lung hyperinflation [71]. The authors found that early COPD patients with lung hyperinflation had poorer lung function but a better bronchodilator response. Additionally, bronchodilators also have effects on exercise capacity, prevention of exacerbations and HRQoL [78,79] that should also be assessed in symptomatic non-obstructive cases. Consequently, some authors defend the early administration of bronchodilators [80,81].

Future Directions
With all of the context reviewed in this document, future work must address two crucial aspects (Figure 2). The first is to advance the identification of these types of COPD in their initial stages, and the second is to validate an agreed-upon definition that has clinical and prognostic relevance. Various authors have already proposed strategies to make advances in this direction using a range of instruments, from simple questionnaires and respiratory function tests to complex laboratory techniques [61,82,83]. Key research questions include understanding the determinants of a more or less symptomatic clinical presentation before the obstruction appears, the factors that condition the appearance of the obstruction, and the impact of modifying some of them. Secondly, future clinical trials are necessary to provide information about the efficacy and safety of long-acting bronchodilator drugs [87] or other treatment modalities in these not-yet-obstructive patients (Figure 2), who are symptomatic and may have a morphological or inflammatory alteration.
In this regard, a working group in Spain has begun to assess an ambitious multicenter research initiative named ANTES (anticipating the diagnosis and treatment of COPD in the 21st century) [9,10,88]. This initiative is based on the hypothesis that anticipating the diagnosis and treatment of COPD will reduce the impact of the disease, improving its prevention, its treatment and its prognosis. It is probable that the results of this and other initiatives will shed new light on the questions raised here and lead to better prevention and management of patients.

Conclusions
In summary, there is a considerable call among members of the scientific community to find an ideal way to identify COPD patients at the onset of their disease, given that early interventions can have a great impact on the future burden of the disease and its prognosis. The concepts reviewed here constitute an initial approximation of the different forms that COPD can take at its onset, all of which represent a window of opportunity that requires further study to find the optimal and most practical way to identify these cases in the real world. However, these initiatives are only the beginning. Once we decide how to accurately identify these cases, the next step will be to clarify the most appropriate therapeutic approach, pharmacological or non-pharmacological, to achieve a significant reduction in the burden of the disease and improve its prognosis. It is a complex and challenging journey, but the effort will clearly be worth it.
Figure 1. Summary of the different concepts. The figure briefly shows the main characteristics of the concepts reviewed by means of icons, showing the points of convergence in a visual way (see text for explanation).

Figure 2. Key questions and future directions for research. The lines represent the progression of FEV1 throughout life. In blue, cases with normal lung development. In red, cases with insufficient lung development. Dashed lines represent the appearance of rapidly progressive deterioration. The star marks a point from which this progressive deterioration would begin to appear. The yellow rectangle represents what would be an age range of interest for research in adults. Graph drawn on the basis of previous studies [57,84-86], not based on real data.
Improving Neural Parsing by Disentangling Model Combination and Reranking Effects
Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.

Introduction
Recent work on neural constituency parsing (Dyer et al., 2016; Choe and Charniak, 2016) has found multiple cases where generative scoring models for which inference is complex outperform base models for which inference is simpler. Let A be a parser that we want to parse with (here one of the generative models), and let B be a base parser that we use to propose candidate parses which are then scored by the less-tractable parser A. We denote this cross-scoring setup by B → A. The papers above repeatedly saw that the cross-scoring setup B → A under which their generative models were applied outperformed the standard single-parser setup B → B. We term this a cross-scoring gain.

This paper asks two questions. First, why do recent discriminative-to-generative cross-scoring setups B → A outperform their base parsers B? Perhaps generative models A are simply superior to the base models B, and direct generative parsing (A → A) would be better still if it were feasible. If so, we would characterize the cross-scoring gain from B → B to B → A as a reranking gain. However, it is also possible that the hybrid system B → A shows gains merely from subtle model combination effects. If so, scoring candidates using some combined score A + B would be even better, which we would characterize as a model combination gain. It might even be the case that B is a better parser overall (i.e., B → B outperforms A → A). Of course, many real hybrids will exhibit both reranking and model combination gains.

In this paper, we present experiments to isolate the degree to which each gain occurs for each of two state-of-the-art generative neural parsing models: the Recurrent Neural Network Grammar generative parser (RG) of Dyer et al. (2016), and the LSTM language modeling generative parser (LM) of Choe and Charniak (2016). In particular, we present and use a beam-based search procedure with an augmented state space that can search directly in the generative models, allowing us to explore A → A for these generative parsers A independent of any base parsers.

Our findings suggest the presence of model combination effects in both generative parsers: when parses found by searching directly in the generative parser are added to a list of candidates from a strong base parser (the RNNG discriminative parser, RD (Dyer et al., 2016)), performance decreases when compared to using just candidates from the base parser, i.e., B ∪ A → A has lower evaluation performance than B → A (Section 3.1).
This result suggests that both generative models benefit from fortuitous search errors in the rescoring setting: there are trees with higher probability under the generative model than any tree proposed by the base parser, but which would decrease evaluation performance if selected. Because of this, we hypothesize that model combination effects between the base and generative models are partially responsible for the high performance of the generative reranking systems, rather than the generative model being generally superior.

Here we consider our second question: if cross-scoring gains are at least partly due to implicit model combination, can we gain even more by combining the models explicitly? We find that this is indeed the case: simply taking a weighted average of the scores of both models when selecting a parse from the base parser's candidate list improves over using only the score of the generative model, in many cases substantially (Section 3.2). Using this technique, in combination with ensembling, we obtain new state-of-the-art results on the Penn Treebank: 94.25 F1 when training only on gold parse trees and 94.66 F1 when using external silver data.

Decoding in generative neural models
All of the parsers we investigate in this work (the discriminative parser RD, and the two generative parsers RG and LM, see Section 1) produce parse trees in a depth-first, left-to-right traversal, using the same basic actions: NT(X), which opens a new constituent with the non-terminal symbol X; SHIFT / GEN(w), which adds a word; and REDUCE, which closes the current constituent. We refer to Dyer et al. (2016) for a complete description of these actions, and the constraints on them necessary to ensure valid parse trees.

The primary difference between the actions in the discriminative and generative models is that, whereas the discriminative model uses a SHIFT action which is fixed to produce the next word in the sentence, the generative models use GEN(w) to define a distribution over all possible words w in the lexicon. This stems from the generative model's definition of a joint probability p(x, y) over all possible sentences x and parses y. To use a generative model as a parser, we are interested in finding the maximum probability parse for a given sentence. This is made more complicated by not having an explicit representation for p(y|x), as we do in the discriminative setting. However, we can start by applying similar approximate search procedures as are used for the discriminative parser, constraining the set of actions such that it is only possible to produce the observed sentence: i.e., only allow a GEN(w) action when w is the next terminal in the sentence, and prohibit GEN actions if all terminals have been produced.

Action-synchronous beam search
Past work on discriminative neural constituency parsers has shown the effectiveness of beam search with a small beam (Vinyals et al., 2015) or even greedy search, as in the case of RD (Dyer et al., 2016). The standard beam search procedure, which we refer to as action-synchronous, maintains a beam of K partially-completed parses that all have the same number of actions taken. At each stage, a pool of successors is constructed by extending each candidate in the beam with each of its possible next actions. The K highest-probability successors are chosen as the next beam.
Unfortunately, we find that action-synchronous beam search breaks down for both generative models we explore in this work, failing to find parses that are high scoring under the model. This stems from the probabilities of the actions NT(X) for all labels X almost always being greater than the probability of GEN(w) for the particular word w which must be produced next in a given sentence. Qualitatively, the search procedure prefers to open constituents repeatedly up until the maximum number allowed by the model. While these long chains of non-terminals will usually have lower probability than the correct sequence at the point where they finally generate the next word, they often have higher probability up until the word is generated, and so they tend to push the correct sequence off the beam before this point is reached. This search failure produces very low evaluation performance: with a beam of size K = 100, action-synchronous beam search achieves 29.1 F1 for RG and 27.4 F1 for LM on the development set.

Word-synchronous beam search
To deal with this issue, we force partial parse candidates to compete with each other on a word-by-word level, rather than solely on the level of individual actions. The word-synchronous beam search we apply is very similar to approximate decoding procedures developed for other generative models (Henderson, 2003; Titov and Henderson, 2010; Buys and Blunsom, 2015) and can be viewed as a simplified version of the procedure used in the generative top-down parsers of Roark (2001) and Charniak (2010).

In word-synchronous search, we augment the beam state space, identifying beams by tuples (|W|, |A_w|), where |W| is the number of words that have been produced so far in the sentence, and |A_w| is the number of structural actions that have been taken since the last word was produced. Intuitively, we want candidates with the same |W| = w to compete against each other. For a beam of partial parses in the state (|W| = w, |A_w| = a), we generate a beam of successors by taking all of the next possible actions for each partial parse in the beam. If the action is NT(X) or REDUCE, we place the resulting partial parse in the beam for state (|W| = w, |A_w| = a + 1); otherwise, if the action is GEN, we place it in a list for (|W| = w + 1, |A_w| = 0). After all partial parses in the beam have been processed, we check whether there is a sufficient number of partial parses that have produced the next word: if the beam (|W| = w + 1, |A_w| = 0) contains at least K_w partial parses (the word beam size), we prune it to this size and continue search using this beam. Otherwise, we continue building candidates for this word by pruning the beam (|W| = w, |A_w| = a + 1) to size K_a (the action beam size), and continuing search from there. In practice, we found it most effective to use a value for K_w that is a fraction of the value for K_a. In all the experiments we present here, we fix K_a = 10 × K_w, with K_w ranging from 10 to 100.

Table 1 shows F1 for decoding in both generative models on the development set, using the top-scoring parse found for a sentence when searching with the given beam size. RG has comparatively larger gains in performance between the larger beam sizes, while still underperforming LM, suggesting that more search is necessary in this model.
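As a minimal illustration of the procedure just described, the following sketch implements word-synchronous search under an assumed model interface (model.start() and model.expand(state), the latter yielding legal next actions with their log-probabilities and a flag marking GEN); these names are hypothetical and not the interface of the parsers above.

def word_synchronous_beam_search(model, sentence, K_w=10, K_a=100):
    """Sketch of word-synchronous beam search. Beams are identified by
    (|W|, |A_w|): NT/REDUCE successors stay in the action beam, while GEN
    successors move to the beam for the next word."""
    beam = [(0.0, model.start())]                       # (log-prob, state)
    for _ in range(len(sentence)):
        word_beam = []                                  # state (|W|+1, 0)
        while True:
            action_beam = []                            # state (|W|, |A_w|+1)
            for score, state in beam:
                for succ, logp, is_gen in model.expand(state):
                    target = word_beam if is_gen else action_beam
                    target.append((score + logp, succ))
            if len(word_beam) >= K_w or not action_beam:
                # enough candidates produced the next word: prune and advance
                beam = sorted(word_beam, key=lambda t: t[0], reverse=True)[:K_w]
                break
            # otherwise keep extending structural actions within this word
            beam = sorted(action_beam, key=lambda t: t[0], reverse=True)[:K_a]
    # closing REDUCE actions after the final word are handled analogously (omitted)
    return max(beam, key=lambda t: t[0])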
Experiments
Using the above decoding procedures, we attempt to separate reranking effects from model combination effects through a set of reranking experiments. Our base experiments are performed on the Penn Treebank (Marcus et al., 1993), using sections 2-21 for training, section 22 for development, and section 23 for testing. For the LSTM generative model (LM), we use the pre-trained model released by Choe and Charniak (2016). We train RNNG discriminative (RD) and generative (RG) models, following Dyer et al. (2016) by using the same hyperparameter settings, and using pretrained word embeddings from Ling et al. (2015) for the discriminative model. The automatically-predicted part-of-speech tags we use as input for RD are the same as those used by Cross and Huang (2016).

In each experiment, we obtain a set of candidate parses for each sentence by performing beam search in one or more parsers. We use action-synchronous beam search (Section 2.1) with beam size K = 100 for RD and word-synchronous beam search (Section 2.2) with K_w = 100 and K_a = 1000 for the generative models RG and LM. In the case that we are using only the scores from a single generative model to rescore candidates taken from the discriminative parser, this setup is close to the reranking procedures originally proposed for these generative models. For RG, the original work also used RD to produce candidates, but drew samples from it, whereas we use a beam search to approximate its k-best list. The LM generative model was originally used to rerank a 50-best list taken from the Charniak parser (Charniak, 2000). In comparison, we found higher performance for the LM model when using a candidate list from the RD parser: 93.66 F1 versus 92.79 F1 on the development data. This may be attributable to having a stronger set of candidates: with beam size 100, RD has an oracle F1 of 98.2, compared to 95.9 for the 50-best list from the Charniak parser.

Augmenting the candidate set
We first experiment with combining the candidate lists from multiple models, which allows us to look for potential model errors and model combination effects. Consider the standard reranking setup B → A, where we search in B to get a set of candidate parses for each sentence, and these candidates are then scored by A. If implicit model combination is at work, augmenting the candidate set with parses found by searching directly in A should hurt, since A is then free to select its own preferred, but lower-quality, parses.

This does seem to be the case for both generative models, as shown in Table 2, which presents F1 scores on the development set when varying the models used to produce the candidates and to score them. Each row is a different candidate set, where the third row in each table presents results for the augmented candidate sets; each column is a different scoring model, where the third column is the score combination setting described below. Going from RD → RG to the augmented candidate setting RD ∪ RG → RG decreases performance from 93.45 F1 to 92.78 F1 on the development set. This difference is statistically significant at the p < 0.05 level under a paired bootstrap test. We see a smaller, but still significant, effect in the case of LM: RD → LM achieves 93.66, compared to 93.47 for RD ∪ LM → LM.

We can also consider the performance of RG → RG and LM → LM (where we do not use candidates from RD at all, but return the highest-scoring parse from searching directly in one of the generative models) as an indicator of reranking effects: absolute performance is higher for LM (92.20 F1) than for RG (89.55). Taken together, these results suggest that model combination contributes to the success of both models, but to a larger extent for RG. A reranking effect may be a larger contributor to the success of LM, as this model achieves stronger performance on its own for the described search setting.
Score combination
If the cross-scoring setup exhibits an implicit model combination effect, where strong performance results from searching in one model and scoring with the other, we might expect substantial further improvements in performance by explicitly combining the scores of both models. To do so, we score each parse by taking a weighted sum of the log-probabilities assigned by both models (Hayashi et al., 2013), using an interpolation parameter which we tune to maximize F1 on the development set. These results are given in columns RD + RG and RD + LM in Table 2. We find that combining the scores of both models improves on using the score of either model alone, regardless of the source of candidates. These improvements are statistically significant in all cases. Score combination also more than compensates for the decrease in performance we saw previously when adding in candidates from the generative model: RD ∪ RG → RD + RG improves upon both RD → RG and RD ∪ RG → RG, and the same effect holds for LM.

Strengthening model combination
Given the success of model combination between the base model and a single generative model, we also investigate the hypothesis that the generative models are complementary. The Model Combination block of Table 3 shows full results on the test set for these experiments, in the PTB column. The same trends we observed on the development data, on which the interpolation parameters were tuned, hold here: score combination improves results for all models (row 3 vs. row 2; row 6 vs. row 5), with candidate augmentation from the generative models giving a further increase (rows 4 and 7). Combining candidates and scores from all three models (row 9), we obtain 93.94 F1.

Table 3: Test F1 scores on section 23 of the PTB, by treebank training data condition: either using only the training sections of the PTB, or using additional silver data (+S).

Semi-supervised silver data
Choe and Charniak (2016) found a substantial increase in performance by training on external data in addition to trees from the Penn Treebank. This silver dataset was obtained by parsing the entire New York Times section of the fifth Gigaword corpus using a product of eight Berkeley parsers (Petrov, 2010) and ZPar (Zhu et al., 2013), then retaining 24 million sentences on which both parsers agreed. For our experiments we train RD and RG using the same silver dataset. The +S column in Table 3 shows these results, where we observe gains over the PTB models in nearly every case. As in the PTB training data setting, using all models for candidates and score combinations is best, achieving 94.66 F1 (row 9).

Ensembling
Finally, we compare to another commonly used model combination method: ensembling multiple instances of the same model type trained from different random initializations. We train ensembles of 8 copies each of RD and RG in both the PTB and silver data settings, combining scores from models within an ensemble by averaging the models' distributions for each action (in beam search as well as rescoring). These results are shown in the bottom section, Ensembling, of Table 3. Performance when using only the ensembled RD models (row 10) is lower than rescoring a single RD model with score combinations of single models, either RD + RG (row 3) or RD + LM (row 6). In the PTB setting, ensembling with score combination achieves the best overall result of 94.25 (row 13).
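Before turning to the discussion, here is a compact sketch of the two combination operations used above: log-probability interpolation for score combination, and distribution averaging within an ensemble. The dictionaries and helper names are hypothetical, used only to make the arithmetic concrete.

import math

def interpolated_rescore(candidates, logp_rd, logp_gen, lam):
    """Pick the candidate maximizing lam * log p_gen + (1 - lam) * log p_rd.
    logp_rd / logp_gen map a tree to its model score; lam is the interpolation
    parameter, tuned to maximize F1 on the development set."""
    return max(candidates,
               key=lambda t: lam * logp_gen[t] + (1.0 - lam) * logp_rd[t])

def ensemble_action_logprob(member_logprobs):
    """Average the distributions (not the log-probs) of ensemble members for
    one action, as done within an ensemble: log((1/M) * sum_i p_i)."""
    probs = [math.exp(lp) for lp in member_logprobs]
    return math.log(sum(probs) / len(probs))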
In the silver training data setting, while this does improve on the analogous unensembled result (row 8), it is not better than the combination of single models when candidates from the generative models are also included (row 9).

Discussion
Searching directly in the generative models yields results that are partly surprising, as it reveals the presence of parses which the generative models prefer, but which lead to lower performance than the candidates proposed by the base model. However, the results are also unsurprising in the sense that explicitly combining scores allows the reranking setup to achieve better performance than implicit combination, which uses only the scores of a single model. Additionally, we see support for the hypothesis that the generative models can achieve good results on their own, with the LSTM generative model showing particularly strong and self-contained performance.

While this search procedure allows us to explore these generative models, disentangling reranking and model combination effects, the increase in performance from augmenting the candidate lists with the results of the search may not be worth the required computational cost in a practical parser. However, we do obtain a gain over state-of-the-art results using simple model score combination on only the base candidates, which can be implemented with minimal cost over the basic reranking setup. This provides a concrete improvement for these particular generative reranking procedures for parsing. More generally, it supports the idea that hybrid systems, which rely on one model to produce a set of candidates and another to determine which candidates are good, should explore combining their scores and candidates when possible.
Multi-Scale Masked Autoencoders for Cross-Session Emotion Recognition
Affective brain-computer interfaces (aBCIs) have garnered widespread applications, with remarkable advancements in utilizing electroencephalogram (EEG) technology for emotion recognition. However, the time-consuming process of annotating EEG data, inherent individual differences, the non-stationary characteristics of EEG data, and noise artifacts in EEG data collection pose formidable challenges in developing subject-specific cross-session emotion recognition models. To simultaneously address these challenges, we propose a unified pre-training framework based on multi-scale masked autoencoders (MSMAE), which utilizes large-scale unlabeled EEG signals from multiple subjects and sessions to extract noise-robust, subject-invariant, and temporal-invariant features. We subsequently fine-tune the obtained generalized features with only a small amount of labeled data from a specific subject for personalization, enabling cross-session emotion recognition. Our framework emphasizes: 1) multi-scale representation to capture diverse aspects of EEG signals, obtaining comprehensive information; 2) an improved masking mechanism for robust channel-level representation learning, addressing missing-channel issues while preserving inter-channel relationships; and 3) invariance learning for regional correlations in the spatial-level representation, minimizing inter-subject and inter-session variances. Under these elaborate designs, the proposed MSMAE exhibits a remarkable ability to decode emotional states from a different session of EEG data during the testing phase. Extensive experiments conducted on two publicly available datasets, i.e., SEED and SEED-IV, demonstrate that the proposed MSMAE consistently achieves stable results and outperforms competitive baseline methods in cross-session emotion recognition.

I. INTRODUCTION
Affective brain-computer interfaces (aBCIs) employ brain imaging techniques to capture and interpret human emotional states, aiming to achieve emotional communication and expression between humans and computers. This endeavor enhances both the immersive user experience and the efficiency of human-computer interaction. Additionally, aBCIs exhibit promising applications in fields such as healthcare and education for long-term monitoring and prediction of emotional states, enabling personalized psychological interventions and treatment plans [1], [2]. Within aBCIs, a variety of modalities have been utilized, including functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS), and electroencephalography (EEG). In particular, EEG-based aBCIs have garnered increasing attention due to the rapid advancements in noninvasive, user-friendly, and low-cost EEG recording devices, particularly with the aid of portable dry electrode devices [3].
EEG-based aBCIs have demonstrated their capability to decode users' intentions from brain recordings and have showcased potential applications in neural rehabilitation systems [4]. However, individual differences and the non-stationary characteristics of EEG [5] render the development of stable EEG-based emotion recognition models a challenging task. Consequently, it is necessary to collect labeled samples for each subject at each time to train new models, leading to time-consuming and expensive labeling work. To mitigate the reliance on labeled data, in recent years an increasing number of researchers have turned their focus to applying transfer learning methods to reduce individual differences [5], [6], [7], [8], [9] and improve invariant feature representation [10], [11], [12].

Currently, the predominant transfer learning methods employed in EEG-based aBCIs include domain adaptation (DA) and domain generalization (DG). These methods are designed to reduce the distribution discrepancy between the source and target domains, resulting in improved recognition performance in the target domain. Nevertheless, DA methods require utilizing the target domain during the training stage and typically assume that the data distribution remains invariant or changes minimally between the source and target domains. In scenarios where the data distribution continuously evolves during real-time data acquisition, DA methods cannot effectively adapt to these variations. On the other hand, DG generates domain-invariant representations from the source domains without exposure to data from the target domain, and is thus more suitable for practical applications. However, DG methods require large numbers of source domains to train the model and enhance its generalization capabilities.

DA methods require access to target domains with their data distributions, while DG methods need large numbers of source domains. Both approaches are impractical for the following cross-session emotion recognition scenario: only one session (i.e., one source domain) of labeled data is available for a specific subject during the training stage. In this context, the primary concern is effectively utilizing the limited labeled data to train a subject-specific model for cross-session emotion recognition.
Within the context of a brain-big-data center, real-time EEG data from a vast group of individuals are continuously transmitted, resulting in an abundance of unlabeled signals from various subjects and sessions, potentially containing some degree of corruption. This situation therefore presents an intriguing challenge: can these unlabeled data be combined with the limited labeled data to train a subject-specific model for cross-session emotion recognition? This paper addresses this challenge by proposing Multi-Scale Masked Autoencoders (MSMAE). The MSMAE model is based on a multi-scale Vision-Transformer hybrid architecture, incorporating spectrum embedding, multi-head spatial attention, and multi-scale feature fusion to effectively capture channel and spatial information of the EEG signals. Specifically, MSMAE is pre-trained using unlabeled EEG data from multiple subjects and sessions, encoding and reconstructing channel-level and spatial-level representations of EEG signals to extract noise-robust, subject-invariant, and temporal-invariant features. Subsequently, only a small amount of labeled data from a specific subject is necessary to fine-tune the model for personalization. Under this comprehensive training, the subject-specific model demonstrates a remarkable ability to decode emotional states from a different session of EEG data during the testing phase.

The main contributions of this study can be summarized in three aspects:
1) We introduce a unified multi-scale pre-training framework aimed at addressing challenges related to missing EEG channels and limited labeled data in emotion recognition. This framework significantly enhances the practicality and effectiveness of EEG-based emotion recognition in real-world applications.
2) We present an innovative multi-scale fusion approach that combines channel-level and spatial-level learning. Our model aligns spatial-level correlations between pre-training and fine-tuning data to mitigate inter-subject and inter-session variations. Furthermore, it fine-tunes the channel-level representation to ensure the exclusivity of subject-specific features. These techniques enhance adaptability and robustness for subject-specific cross-session emotion recognition tasks.
3) Our proposed model exhibits superior performance on two publicly available datasets for cross-session emotion recognition, even when only one session of labeled data is accessible for training.

The organization of this paper is structured as follows: Section II offers a brief review of related works. Section III elaborates on the proposed method. Section IV conducts a comprehensive evaluation of the proposed method. Finally, Section V concludes the paper.

II. RELATED WORK
A. EEG Emotion Recognition
EEG-based emotion recognition depends on extracting sufficiently discriminative EEG features. The widely used EEG features can be categorized into four groups: temporal-domain features, frequency-domain features, time-frequency-domain features, and brain connectivity features. The commonly employed statistical information in the temporal domain includes entropy, the fractal dimension, and higher-order crossings [13], [14]. Within the frequency domain, power spectral density (PSD) [15] and differential entropy (DE) [16] stand out as two of the most frequently employed features. Several approaches [17], [18], [19], [20] have demonstrated excellent performance for time-frequency-domain features. Nalwaya et al.
[19] employed the Fourier-Bessel domain adaptive wavelet transform (FBDAWT) to analyze multi-sensor EEG signals, accurately identifying emotional states. Bhattacharyya et al. [20] integrated the empirical wavelet transform (EWT) with Fourier-Bessel series expansion (FBSE), resulting in an enhanced time-frequency representation of multi-component signals. For brain connectivity features, two crucial measures, namely the phase lag index (PLI) and the phase locking value (PLV), have been utilized to assess the phase synchronization among electrode signals across various brain regions. Liu et al. [21] employed the PLI feature to discern the emotional states of individual subjects, highlighting its remarkable discriminative capability. Chen et al. [22] integrated frequency-domain features with brain connectivity features for cross-subject emotion recognition, demonstrating superior performance. Furthermore, with the widespread adoption of deep learning methods, Alhagry et al. [23] utilized a two-layer long short-term memory network to extract temporal features. Zhang et al. [24] employed a recurrent neural network (RNN) to capture spatial-temporal representations from EEG signals. Zhong et al. [8] introduced a regularized graph neural network that considers the topological structure of EEG channels. Although these supervised approaches have successfully enhanced emotion recognition performance based on EEG signals, they require well-annotated and robust EEG data, which is relatively challenging to obtain in practical applications. Additionally, they often ignore the influence of session differences, such as variations in the duration and content of the elicitation videos across different experiments, which introduce emotional biases.

B. Transfer Learning
Transfer learning seeks to enhance the performance of a new task by leveraging knowledge from a source task. DA, a subset of transfer learning, has been extensively applied in EEG-based emotion recognition, demonstrating promising results. Chen et al. [25] introduced a multi-source marginal distribution adaptation method that captures domain-invariant and domain-specific features for emotion recognition. Li et al. [26] developed an innovative domain adaptation method for emotion recognition, which extracts generalized features across different subjects and sessions by simultaneously adapting both the marginal and conditional distributions to approximate the joint distribution. However, these DA methods require access to target domains with their data distributions. Unlike DA, DG aims to generate domain-invariant representations from the source domains without utilizing data from the target domain. Ma et al. [27] developed a domain residual network that facilitates the separate learning of domain-specific and domain-shared weights, with the latter being used to classify emotion in unknown domains. Ozdenizci et al. [10] proposed an adversarial inference approach to extend deep learning models for EEG-based person identification, aiming to learn session-invariant, person-discriminative representations. However, this requirement becomes impractical when only one source domain of labeled data is available. Recently, Li et al.
[28] utilized self-supervised learning for initial model pre-training and subsequently fine-tuned the model on new data, demonstrating notable performance in emotion recognition tasks, including scenarios where data may be incomplete or corrupted. However, this model cannot handle complex tasks such as cross-session analysis. Conducting cross-session emotion recognition with limited training data still poses significant challenges.

III. METHOD
A. Formulation
We transform the EEG channels into a two-dimensional plane using the EEG electrode distribution map to improve spatial information consistency among adjacent channels, as depicted in Fig. 1. Specifically, each channel is repositioned onto a two-dimensional electrode topology of size 9 × 9, and zero-padding is performed for missing electrodes. We apply this transformation to frequency-domain features, resulting in the EEG image x ∈ R^{9×9×C_f}, where C_f represents the number of frequency bands. The pre-training dataset consists of unlabeled data from various subjects and sessions, represented as X_Pre = {x_Pre^(i)}_{i=1}^{N_Pre}, where N_Pre is the number of samples in this dataset. The labeled fine-tuning data for a specific subject s are denoted as X_F^s = {(x_F^(i), y_F^(i))}_{i=1}^{N_F}, where N_F is the number of samples in this dataset. The test data and labels for the specific subject s are denoted as X_T^s = {(x_T^(i), y_T^(i))}_{i=1}^{N_T}, with N_T representing the number of samples in the test dataset.

B. Overview
We propose a multi-scale pre-training model based on the masked autoencoder (MAE) [29], as shown in Fig. 2. The framework consists of a multi-scale pre-training stage, a personalized fine-tuning stage, and a personal testing stage.

In the multi-scale pre-training stage, both the channel-level feature extractor E_Pre_1 and the spatial-level feature extractor E_Pre_3 are employed to extract general information, which is shared by all subjects. Specifically, the unlabeled EEG data x_Pre is initially convolved with kernels of different scales (1×1 and 3×3), represented by Conv_1 and Conv_3, resulting in the channel-level representation x̃_Pre_1 and the spatial-level representation x̃_Pre_3. For the channel-level representation x̃_Pre_1, considering the presence of missing data in some channels, we avoid encoding the channels with missing data in order to preserve complete information and prevent the introduction of noise; we reconstruct the masked portions to learn the encoder E_1 and obtain z_Pre_1. For the spatial-level representation x̃_Pre_3, which includes information from multiple channels, we apply the attention feature extractor, denoted by Attn, to align the features of the pre-training data and the fine-tuning data based on brain-region correlations, resulting in the aligned feature x̄_Pre_3. We subsequently employ masking and reconstruction on x̄_Pre_3 to learn the encoder E_3 and obtain z_Pre_3. The formulas are as follows:

z_Pre_1 = E_1(mask(x̃_Pre_1)), x̄_Pre_3 = Attn(x̃_Pre_3), z_Pre_3 = E_3(mask(x̄_Pre_3)) (1)

In the fine-tuning calibration stage, only a limited amount of labeled data from the specific subject is employed to fine-tune the channel-level feature extractor E_Pre_1 for the personal emotion predictor. Simultaneously, we freeze the parameters of the pre-trained spatial-level feature extractor E_Pre_3 for the generalized emotion predictor. Finally, we fuse the channel-level representation with the spatial-level representation to perform the final emotion classification. Through this comprehensive training, the subject-specific model demonstrates an exceptional capability to decode emotional states from a different session of EEG data during the test phase. We elaborate on each stage as follows.
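Before detailing the stages, here is a sketch of the electrode-to-grid transformation from the formulation above: per-channel frequency-band features are placed onto the 9 × 9 grid with zero-padding. The channel-to-position map shown is a hypothetical stand-in for the electrode distribution map of Fig. 1.

import numpy as np

# Hypothetical excerpt of a (channel name -> grid cell) layout; a real montage
# would enumerate all 62 channels following the electrode distribution map.
CHANNEL_POS = {"FP1": (0, 3), "FPZ": (0, 4), "FP2": (0, 5)}

def to_eeg_image(band_features, channel_names):
    """band_features: (n_channels, C_f) frequency-domain features (e.g. DE).
    Returns x in R^{9 x 9 x C_f}; grid cells without an electrode stay zero."""
    n_bands = band_features.shape[1]
    x = np.zeros((9, 9, n_bands), dtype=np.float32)
    for ch, name in enumerate(channel_names):
        row, col = CHANNEL_POS[name]
        x[row, col, :] = band_features[ch]
    return x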
C. Multi-Scale Pre-Training
To use more corrupted EEG data and enhance the learning capacity of the model, we adopt the MAE framework with a transformer-based backbone network [30]. The model splits images into equal blocks and uses transformer encoders to extract features, with an asymmetric encoder-decoder design for image reconstruction. It leverages transformers for global information, masking for robustness, and self-supervised training for generalizability. In our study, we employ convolutional kernels for patch embedding. The size of the convolutional kernel offers different interpretations of the partitioning of two-dimensional EEG images: 1 × 1 convolutions partition individual electrodes to learn inter-channel relationships, while 3 × 3 convolutions are utilized to learn broader spatial features. We conduct multi-scale feature fusion to enhance data utilization and model representation capacity, enabling the extraction of deeper emotional representations from the frequency-domain channel features and spatial features of the EEG.

1) Channel-Level Representation: By employing a 1 × 1 convolution, we map each EEG electrode to a patch, enabling the vision-transformer framework to encode channel relationships and capture specific feature information. However, the challenge of partially missing channels and zero-padding, combined with random masking, risks losing valuable data. To address this, we have improved our approach by ensuring that all zero-padded patches are masked, preserving meaningful channel information in our feature extraction process. More specifically, given the input pre-training data x_Pre, we embed patches using C_1 convolutional kernels of size 1 × 1 with added positional embeddings, obtaining x̃_Pre_1 ∈ R^{9×9×C_1}:

x̃_Pre_1 = Conv_1(x_Pre) + E_pos (3)

where Conv_1 represents a convolution operation and E_pos denotes the positional embeddings. Assume that, out of the 81 (9 × 9) patches, there are p non-zero-padded patches (e.g., p = 62 as illustrated in Fig. 1). To ensure the effectiveness of subsequent feature encoding, we randomly mask these p non-zero-padded patches in addition to masking all zero-padded patches. The formula is as follows:

M_{i,j} = 0, if position (i, j) should be masked; 1, otherwise (4)

where M = [M_{i,j}] ∈ R^{9×9} is the mask matrix corresponding to the 2D EEG image with missing channels, M̃ represents the updated mask combining the random mask with the zero-padding mask, and ∘ denotes element-wise multiplication.

2) Spatial-Level Representation: When using a 3 × 3 convolution for partitioning, each patch contains information from more electrode channels. Neighboring channels in EEG signals influence each other and reflect the signal characteristics of the corresponding brain region. The connectivity between these brain regions is closely related to their spatial positions. The spatial features of EEG signals reflect the coordination and interaction among different areas of the brain, which is crucial for analyzing the spatial distribution and temporal variations of neural activity. In cross-session emotion recognition experiments, factors such as the induced emotional stimuli, external environments, and physiological expressions contribute to variability.
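A minimal sketch of the channel-aware mask of Eq. (4) follows, assuming NumPy and treating each of the 81 grid cells as one patch; the 0.75 mask ratio follows the implementation details given later in the paper.

import numpy as np

def channel_mask(x, mask_ratio=0.75, rng=None):
    """Builds the updated mask: zero-padded patches are always masked, and a
    random fraction of the p non-zero patches is masked as well.
    x: (9, 9, C_f); returns a (9, 9) mask, 1 = visible to the encoder."""
    rng = rng or np.random.default_rng()
    nonzero = x.reshape(81, -1).any(axis=1)          # the p electrode patches
    mask = np.zeros(81, dtype=np.float32)            # zero-padded cells stay 0
    candidates = np.flatnonzero(nonzero)
    n_visible = int(round(len(candidates) * (1.0 - mask_ratio)))
    visible = rng.choice(candidates, size=n_visible, replace=False)
    mask[visible] = 1.0
    return mask.reshape(9, 9)

# Masked encoder input (element-wise multiplication, broadcast over bands):
# x_masked = x * channel_mask(x)[..., None]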
However, the regional influence of EEG signals remains more objective and stable. Therefore, we consider further encoding and decoding the spatial features. By using larger-scale convolutions for weighted-average partitioning, we not only incorporate the spatial features of brain regions to some extent but also achieve universality across all EEG data with missing channels, reducing the workload of data preprocessing. Specifically, given the input pre-training data x_Pre, C_3 convolutional kernels of size 3 × 3 are applied to obtain x̃_Pre_3 ∈ R^{3×3×C_3}:

x̃_Pre_3(i, j) = Conv_3(x_Pre)(i, j) / Σ_{(u,v) ∈ patch(i,j)} I(channel (u, v) exists) (5)

Here, Conv_3 represents a convolution operation, and I(·) denotes the indicator function that returns 1 if the condition is true and 0 otherwise. x̃_Pre_3(i, j) ∈ R^{C_3} (for i = 1, 2, 3 and j = 1, 2, 3) represents the patch obtained through the 3 × 3 convolution, normalized by dividing by the number of existing channels in the corresponding patch.

3) Invariance Learning for Region Correlation: We align the pre-training and fine-tuning data features based on brain-region correlations to obtain subject-invariant and temporal-invariant features. Considering that each individual's emotional fluctuations are unique and represent their distinct characteristics, we choose to align the shared features based on brain-region correlations instead of directly aligning the pre-training spatial-level representation x̃_Pre_3 and the fine-tuning spatial-level representation x̃_F_3. This approach partially attenuates the differences in data distribution while preserving the unique characteristics of the EEG signals. Specifically, x̃_Pre_3 ∈ R^{3×3×C_3} and x̃_F_3 ∈ R^{3×3×C_3} are first rearranged into x̃R_Pre_3 ∈ R^{9×C_3} and x̃R_F_3 ∈ R^{9×C_3}, respectively. Subsequently, an attention mechanism is employed to capture the correlations between patches:

A_Pre = softmax(Q_Pre K_Pre^T / √d_k), A_F = softmax(Q_F K_F^T / √d_k) (6)

Here, Q_Pre ∈ R^{9×d_k} and K_Pre ∈ R^{9×d_k} refer to the queries and keys for x̃R_Pre_3, respectively, obtained by performing linear transformations on x̃R_Pre_3, while Q_F and K_F are the corresponding queries and keys for x̃R_F_3; the dimension of the keys (queries), denoted as d_k, is used for scaling the dot product.

Then, the similarity between A_Pre in the pre-training data and A_F in the fine-tuning data is measured using the Maximum Mean Discrepancy (MMD):

L_MMD = ‖ (1/B) Σ_{i=1}^{B} ∅(A_Pre^(i)) − (1/B) Σ_{j=1}^{B} ∅(A_F^(j)) ‖² (7)

where B stands for the number of samples in a training mini-batch, i and j are indexes within the batch, A_Pre^(i) represents the correlation matrix of the i-th pre-training sample, A_F^(j) represents the spatial correlation matrix of the j-th fine-tuning sample, and ∅(·) denotes the mapping function.

By doing so, we can quantify the distribution differences in attention representations between the impaired pre-training data and the fine-tuning data. Introducing this loss mitigates the feature disparities between different subjects while preserving the emotional characteristics inherent to the subject, thereby enhancing the model's classification performance and generalization ability. The attention mechanism is further used to obtain the aligned feature x̄_Pre_3:

x̄R_Pre_3 = Attn(x̃R_Pre_3) = A_Pre V_Pre (8)

where V_Pre is the values obtained by performing a linear transformation on x̃R_Pre_3, and Attn is the attention feature extractor. Finally, x̄R_Pre_3 is rearranged into x̄_Pre_3 ∈ R^{3×3×C_3} for the subsequent 2D masking of size 3 × 3. At this point, x̄_Pre_3 has better spatial features and prior knowledge compared to the initial data, and it is to some extent complementary to the channel-level representation.
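A hedged PyTorch sketch of this alignment step is given below: the region-correlation attention of Eq. (6) and a linear-kernel MMD standing in for the mapping ∅(·) of Eq. (7). The kernel choice and tensor shapes are assumptions for illustration, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def region_correlation(x, Wq, Wk):
    """x: (B, 9, C3) rearranged spatial patches; Wq, Wk: nn.Linear(C3, d_k).
    Returns A = softmax(Q K^T / sqrt(d_k)) of shape (B, 9, 9)."""
    Q, K = Wq(x), Wk(x)
    return F.softmax(Q @ K.transpose(1, 2) / K.shape[-1] ** 0.5, dim=-1)

def mmd_loss(A_pre, A_fine):
    """Linear-kernel MMD between mini-batch mean embeddings of the flattened
    correlation matrices (a simplified instance of the MMD above)."""
    mu_pre = A_pre.flatten(1).mean(dim=0)
    mu_fine = A_fine.flatten(1).mean(dim=0)
    return ((mu_pre - mu_fine) ** 2).sum()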
4) Encoder, Decoder, and Reconstruction: Based on the embeddings at the different scales, we obtain x̃_Pre_1 and x̃_Pre_3 and apply masks to these data according to the meaning of each scale's features. A multi-layer transformer encoder then extracts features, followed by a decoder that reconstructs the images:

$$z_{Pre\_1} = E_1\!\left(M \odot \tilde{x}_{Pre\_1}\right), \qquad z_{Pre\_3} = E_3\!\left(M^{(3)} \odot \tilde{x}_{Pre\_3}\right)$$
$$x'_{Pre\_1} = D_1\!\left(z_{Pre\_1}\right), \qquad x'_{Pre\_3} = D_3\!\left(z_{Pre\_3}\right)$$

where M^(3) ∈ R^{3×3} represents the random mask for x̃_Pre_3, ⊙ denotes the element-wise masking operation with the mask values broadcast correspondingly, z_Pre_1 and z_Pre_3 are the masked data encoded by E_1 and E_3, and x'_Pre_1 and x'_Pre_3 are the reconstructions obtained through the decoders D_1 and D_3. We then use the mean squared error (MSE) to measure the quality of the masked reconstruction. The reconstruction loss is computed only over masked non-zero patches, to avoid introducing noise:

$$L_{recon\_1} = \frac{1}{|\Omega_1|} \sum_{(i,j) \in \Omega_1} \left\| x'_{Pre\_1}(i,j) - x_{Pre}(i,j) \right\|^2, \qquad L_{recon\_3} = \frac{1}{|\Omega_3|} \sum_{(i,j) \in \Omega_3} \left\| x'_{Pre\_3}(i,j) - \tilde{x}_{Pre\_3}(i,j) \right\|^2$$

where Ω_1 is the index set of the masked non-zero patches for x_Pre, Ω_3 is the index set of the masked patches for x̃_Pre_3, |·| denotes the number of elements in a set, (i, j) indexes the masked patches, x'_Pre_1(i, j) ∈ R^{C_f}, and x'_Pre_3(i, j) ∈ R^{C_3}. This yields the reconstruction losses L_recon_1 and L_recon_3 for the two scales; for a mini-batch of training data they are written L^B_recon_1 and L^B_recon_3.

D. Fine-Tuning Stage and Test Stage

After pre-training, the generalized feature extractors E_Pre_1 and E_Pre_3 are obtained; these can be fine-tuned into a personalized feature extractor Ê adapted to a new task. However, certain modifications are made for model transfer on EEG data.

In channel-level representation learning, to address the zero-padding introduced when mapping EEG data to two-dimensional brain images, we use channel masking during the pre-training stage to minimize the impact of zero-padding on the pre-training data. Similarly, during the fine-tuning stage, a masking matrix is used in the self-attention mechanism to assign a weight of 0 to the contribution of the padded regions in the attention weights. This effectively removes the influence of missing data on the attention weights and prevents the padded regions from interfering with the results.

Specifically, given the input x_F, we obtain x̃_F_1 through the patch and positional embedding. We then calculate the corresponding attention matrix A^chan ∈ R^{81×81} within the encoder E_1, where each element A^chan_{i,j} is defined as:

$$A^{chan}_{i,j} = \frac{\exp\!\left(e_{i,j} + M^{(F)}_{i,j}\right)}{\sum_{k=1}^{n} \exp\!\left(e_{i,k} + M^{(F)}_{i,k}\right)}$$

where n represents the number of patches, e_{i,j} represents the similarity score between the i-th and j-th patches, determined through the dot product of the two vectors, and M^(F)_{i,j} serves as a padding-patch indicator. If the value of either the i-th or the j-th patch is missing (i.e., represented by a padding value), then M^(F)_{i,j} is set to −∞, eliminating its contribution to the attention matrix (i.e., A^chan_{i,j} tends to 0).
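The padding-aware attention just described (adding M^(F) = −∞ to the scores of padded patches) can be sketched as below. This is our minimal PyTorch illustration, not the authors' implementation; masking both query and key positions and the nan_to_num handling of fully padded rows are our own choices.

```python
import torch
import torch.nn.functional as F

def padded_attention(q, k, v, pad_mask):
    """Self-attention in which zero-padded patches contribute nothing.

    q, k, v  : (B, n, d) projections of the n = 81 patch tokens
    pad_mask : (n,) bool, True where the patch is zero-padding
    Padded positions receive a -inf score, so their softmax weight is 0.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (B, n, n) = e_ij
    scores = scores.masked_fill(pad_mask[None, None, :], float("-inf"))
    scores = scores.masked_fill(pad_mask[None, :, None], float("-inf"))
    attn = F.softmax(scores, dim=-1)
    attn = torch.nan_to_num(attn)        # all -inf rows (padded queries) -> 0
    return attn @ v, attn

torch.manual_seed(0)
q = k = v = torch.randn(2, 81, 8)
pad = torch.zeros(81, dtype=torch.bool)
pad[62:] = True                          # 19 padded patches, 62 real channels
out, a_chan = padded_attention(q, k, v, pad)
print(float(a_chan[0, 0, 62:].sum()))    # attention paid to padding ~ 0
```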
In the pre-training phase for spatial-level representation, spatial feature alignment has already been performed through the fine-tuning data; therefore, the aligned data can be used directly for feature extraction in the encoder. To reduce the number of tuned parameters and enhance model stability, at this stage we freeze the pre-trained parameters of the spatial-level representation. This allows us to effectively leverage the pre-training results while avoiding problems such as overfitting during fine-tuning, thus improving the model's generalization ability. Finally, the features extracted from the channel-level and spatial-level representations, denoted z_F_1 and z_F_3, respectively, are concatenated and passed through a batch-normalization layer to enhance the model's robustness and generalization ability. A classification layer is then applied to the fused features z_F for emotion classification, and we compute the classification loss using cross-entropy.

In the test stage, we employ a new session of EEG data from the specific subject, denoted x^s_T and y^s_T, to validate the effectiveness of the subject-specific model. The details of our proposed method are shown in Algorithm 1.

Algorithm 1 Multi-Scale Masked Autoencoders
Input: Pre-training data X_Pre = {x^(i)_Pre}^{N_Pre}_{i=1} ∈ R^{N_Pre×9×9×C_f}, with N_Pre being the number of samples; fine-tuning data X^s_F from a specific subject s; the number of epochs Epoch and the batch size B.
Output: The generalized feature extractors E_Pre_1 and E_Pre_3 (including Conv_3, Attn, and E_3); the personalized emotion predictor Ê; and the predicted emotion classes ŷ^s_T = {ŷ^(i)_T}^{N_T}_{i=1}.
Pre-training Stage for Channel-Level Representation:
1: Randomly initialize E_Pre_1.
2: for i = 1 : Epoch do
3:   repeat
4:     Draw one batch of pre-training data x^B_Pre.
5:     Embed the pre-training data x^B_Pre to obtain x̃^B_Pre_1.
6:     Mask the pre-training data and encode.
7:     Reconstruct the input data.
8:     Optimize E_Pre_1 by minimizing the reconstruction loss L^B_recon_1.
9:   until all samples in X_Pre have been drawn.
10: Return E_Pre_1.
Pre-training Stage for Spatial-Level Representation:
11: Randomly initialize E_Pre_3.
12: for i = 1 : Epoch do
13:   repeat
14:     Draw one batch of pre-training data x^B_Pre and one batch of fine-tuning data x^B_F.
15:     Embed the input data x^B_Pre and x^B_F.
16:     …
17:     Optimize Conv_3 and Attn by minimizing the loss L_mmd.
18:   until all samples in X_Pre have been drawn.
19: Return Conv_3 and Attn.
20–25: …
26:   until all samples in X_Pre have been drawn.
27: Return E_Pre_3.
Personalized Calibration Stage:
28: Initialize Ê with E_Pre_1 and E_Pre_3, keeping E_Pre_3 frozen.
29: for i = 1 : Epoch do
…

E. Implementation

Due to the different numbers of blocks in the channel-level and spatial-level representation learning stages (9 × 9 and 3 × 3, respectively), different mask rates are established for each stage. Specifically, the mask rate for the channel-level representation learning stage is set to 0.75, in accordance with the original MAE [29], and for the spatial-level representation learning stage it is adjusted to 0.5, owing to the limited number of blocks.

The encoder and decoder parameters for channel-level and spatial-level representation learning are set identically for simplicity. Specifically, the dimensions for the encoder and decoder are chosen from {128, 256, 512, 1024}, the number of layers from {1, 2, 3, 4}, and the number of self-attention heads from {2, 4, 6, 8}. MSMAE is optimized using the SGD optimizer with a learning rate of 0.001, 50 epochs, and a batch size of 32. The parameter settings are detailed in Table I.

IV. EXPERIMENT

A. Datasets

Experiments are performed on two publicly available datasets, namely SEED and SEED-IV. The SEED dataset includes EEG signals from 15 subjects, recorded using an ESI NeuroScan system with 62 channels [31]. Each subject participates in three sessions, with an interval of approximately one week between sessions. During these sessions, data are collected while the subjects watch emotion-eliciting movies designed to evoke three different emotional states: negative, positive, or neutral. The signals are initially recorded at a sampling rate of 1000 Hz and subsequently downsampled to 200 Hz for analysis. They are further segmented into non-overlapping 1-second segments, with each segment treated as a sample. Consequently, for each subject and each session there is a total of 3,394 samples.
The SEED-IV dataset consists of EEG signals from 15 subjects recorded using the same device as SEED [32]. Similar to the SEED dataset, each subject participates in three separate sessions with intervals between them. In this case, four emotional states are collected: happiness, sadness, fear, and neutral. The signals are divided into non-overlapping 4-second segments, and each segment is regarded as an individual sample. Consequently, for Sessions I, II, and III there are 851, 832, and 822 samples per subject, respectively.

B. Data Preprocessing

To construct a unified pre-training model, all data must be preprocessed in a consistent manner. First, based on the structure of the EEG cap, the EEG channels of each frame are mapped onto a two-dimensional EEG image to preserve the spatial locations of the electrodes, as shown in Fig. 1. This transformation is applied to the frequency-domain features of each sample. We employ differential entropy (DE) as the frequency-domain feature, which is widely used in emotion recognition [31]. Specifically, DE features are derived from five predefined frequency bands: delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (31-50 Hz). Additionally, min-max normalization is performed at the sample level to address the varying feature ranges, improve the convergence performance of the model, and eliminate dimensional differences between features.

C. Cross-Session Evaluation

Compared to other datasets, SEED and SEED-IV possess the distinctive characteristic that each subject completed the experiment in three different sessions. We use this feature to investigate the generalization of models across sessions, specifically assessing whether the models can deliver consistently satisfactory performance when training and testing data come from different sessions. When receiving the same stimuli, the accuracy with which various methods predict the emotions of the same subject at different times varies in temporal stability. To date, however, there have been limited studies on cross-session experiments, most of which incorporate the test data during training to minimize the data-distribution discrepancy with the training data. In contrast, our experimental setup does not require the inclusion of test data during training. This approach, while more challenging, offers greater practical value. Specifically, we use one session's EEG data as training data and another as testing data. The session pairs used for validation are session1-session3, session2-session1, session3-session2, session1-session2, session2-session3, and session3-session1. Through this comprehensive six-fold cross-validation, we calculate the average recognition accuracy, along with the standard deviation, for all 15 subjects.
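The DE feature extraction described in Sect. B can be sketched as follows, under the common Gaussian assumption DE = 0.5 log(2πeσ²). The band edges follow the text; the Butterworth filter order and helper names are our assumptions, not the paper's specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}

def de_features(eeg, fs=200):
    """Differential entropy per channel and band for one 1-second segment.

    Under a Gaussian assumption, DE = 0.5 * log(2 * pi * e * var).
    eeg: (n_channels, n_samples) array sampled at fs Hz.
    """
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        var = filtered.var(axis=-1) + 1e-12          # avoid log(0)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.stack(feats, axis=-1)                  # (n_channels, 5)

def minmax_per_sample(x):
    """Min-max normalization at the sample level, as in the preprocessing."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

rng = np.random.default_rng(0)
segment = rng.standard_normal((62, 200))             # 62 channels, 1 s at 200 Hz
de = minmax_per_sample(de_features(segment))
print(de.shape)                                      # (62, 5)
```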
D. Method Comparisons

We compare the proposed MSMAE with several relevant models on the SEED and SEED-IV datasets to demonstrate its effectiveness. We select models focusing on spatial features to ensure a meaningful comparison, including Vit [37], SimpleVit [33], FBCCNN [34], STNET [35], and DGCNN [36]. Furthermore, we implement these models using TorchEEG, a PyTorch-based library for EEG signal analysis, and search the parameter space of the compared models following the descriptions in their respective papers. The average accuracies (± standard deviation) of each method are reported in Table II. The experimental results demonstrate that our method significantly outperforms existing methods. Specifically, as shown in Table II, our model achieves a recognition accuracy of 80.86% on the SEED dataset with a standard deviation of only 6.21%. On the SEED-IV dataset, our model achieves a recognition accuracy of 59.33%, with a standard deviation of 12.61%. Furthermore, according to the results in Fig. 3, our method improves performance across the different session-to-session transfers. Even without utilizing the target domain, our model can reduce the influence of domain differences by aligning regional features. Additionally, as illustrated in Fig. 4, our model demonstrates an advantage for each subject, indicating its generalization and stability.

E. Ablation Study

TABLE III. Ablation study of our model within SEED and SEED-IV.

To evaluate the effectiveness of each module of MSMAE, we conduct ablation experiments with the MAE model and the Vit model at different scales, namely (1 × 1) and (3 × 3). We also compare the results with the feature fusion of both scales (1 × 1 & 3 × 3); the results are listed in Table III. By comparing the experimental results across scales, we observe that the results at the 3 × 3 scale consistently outperform those at the 1 × 1 scale, indicating the advantage of spatial frequency features in EEG emotion recognition tasks. Furthermore, by comparing Vit and MAE at the 1 × 1 scale on the SEED-IV dataset, where pre-training and fine-tuning data are relatively limited, we find that pre-training a large model with MAE tends to lead to overfitting and a decrease in accuracy compared to Vit without pre-training. Such pre-trained models are highly dependent on the volume of data, as their performance relies largely on the quality and diversity of the data used during training. Building on this foundation, we further enhance the model's performance by fusing multi-scale features and conducting pre-training. Importantly, our model achieves higher stability and generalization performance by aligning the region correlations between the pre-training and fine-tuning data. Through these ablation experiments, we validate the importance of scale selection, pre-training, and multi-scale feature fusion in our model. These results provide strong support for our research and application in complex EEG emotion recognition tasks and offer valuable directions for future improvements and optimizations.

We randomly select one subject from the SEED dataset for visualization. The t-SNE visualization of the different methods is presented in Fig. 5. In comparison to the other methods, MSMAE reduces the data-distribution discrepancy to some extent, even without utilizing target-domain information.
F. Interpretability

To validate the interpretability of our proposed method, we conduct EEG topographic visualization using the adjacency matrices at the 1 × 1 scale learned by MSMAE. Following [38] and [39], we visualize the degree centrality of each scalp EEG electrode based on the adjacency matrices. Suppose Ã = {Ã_{i,j}}^{p}_{i,j=1} is the submatrix of A^chan ∈ R^{81×81}, where p represents the number of non-zero-padded patches in the channel-level representation (with p = 62 for the SEED dataset). In this matrix, the values of the i-th row and i-th column correspond to the connection weights associated with the i-th electrode. The degree centrality of the i-th EEG electrode, denoted DC_i, can be derived by summing these connection weights:

$$DC_i = \sum_{j=1}^{p} \tilde{A}_{i,j}$$

Fig. 6 presents the EEG topographic maps of positive, neutral, and negative emotions in the SEED dataset, with the DC values scaled to the interval [0, 1]. Through scalp-mapping visualization, we can gain a direct and intuitive understanding of the spatial distribution of the emotion recognition task, which reflects the inter-electrode correlation analysis of EEG signals in our method. By examining Fig. 6, we observe that the regions of emotional activity are primarily concentrated in the frontal and temporal areas. These findings from the saliency maps have been validated and are consistent with existing research on emotion [40], [41], [42]. Furthermore, we note that for neutral emotions the neural patterns are relatively smoother than for positive and negative emotions. Positive emotions are more readily activated in the lateral temporal areas than negative and neutral emotions, consistent with the finding in [31]. In addition, we observe that the activation range of negative emotions is larger in the frontal regions.

G. Cross-Dataset Generalization

We perform cross-dataset experiments to assess the generalization ability of our model. We choose the unlabeled data from the recently released public dataset FACED [43] as the pre-training data. This dataset contains EEG signals from 123 subjects with 32 channels. Given that the SEED and SEED-IV datasets lack the A1 and A2 electrodes, we exclude these channels and retain 30 channels for our analysis. We fine-tune the model with data from one session of a specific subject from the SEED or SEED-IV dataset and test the model on another session of the same subject. The challenge of the cross-dataset experiments is that pre-training is conducted on unlabeled data with 30 channels, whereas fine-tuning uses 62-channel data from the SEED or SEED-IV dataset, resulting in missing channels and differences between devices. Notably, the only difference between our cross-dataset and within-dataset settings is whether the pre-training data originate from the same dataset as the fine-tuning data.

We compare the performance of MSMAE under the cross-dataset and within-dataset settings. Additionally, Vit (1 × 1) and MAE (1 × 1) under the within-dataset setting are included for comparison, as depicted in Fig. 7. Based on the experimental results, our model demonstrates consistent and stable generalization ability in the cross-dataset setting. Furthermore, this confirms our model's capability to address the issue of missing channels, validating its robustness and portability.
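A minimal sketch of the degree-centrality computation from Sect. F, scaled to [0, 1] as in Fig. 6, might look as follows. The paper's exact formula is not fully recoverable here; summing both row and column weights follows the statement that the i-th row and i-th column values are the connection weights of the i-th electrode, and the helper names are ours.

```python
import numpy as np

def degree_centrality(a_chan, nonzero_idx):
    """Degree centrality of each real electrode from the learned attention
    adjacency matrix: summed connection weights, min-max scaled to [0, 1].

    a_chan      : (81, 81) channel-level attention matrix A^chan
    nonzero_idx : indices of the p non-zero-padded patches (p = 62 for SEED)
    """
    a_sub = a_chan[np.ix_(nonzero_idx, nonzero_idx)]   # submatrix A~ (p, p)
    # Row and column weights are both associated with electrode i;
    # the diagonal self-weight is counted once.
    dc = a_sub.sum(axis=1) + a_sub.sum(axis=0) - np.diag(a_sub)
    return (dc - dc.min()) / (dc.max() - dc.min() + 1e-12)

rng = np.random.default_rng(0)
a_chan = rng.random((81, 81))
idx = rng.choice(81, size=62, replace=False)
print(degree_centrality(a_chan, idx).shape)            # (62,)
```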
V. CONCLUSION

This paper introduces a unified, multi-scale pre-training framework to overcome the challenges of missing EEG channels and limited labeled data in emotion recognition. We propose a novel multi-scale fusion approach combining channel-level and spatial-level representation learning with an improved masking mechanism to preserve electrode relationships, together with invariance learning for regional correlations. Compared to Vit (1 × 1) without pre-training, MSMAE significantly improves accuracy, by 10.76% on the SEED dataset and 11.9% on the SEED-IV dataset. Moreover, MSMAE surpasses the original MAE (1 × 1) in accuracy by 9.13% on the SEED dataset and by 15.31% on the SEED-IV dataset. MSMAE also demonstrates superiority over current state-of-the-art methods, outperforming the second-best method by 2.84% and 1.26% on the SEED and SEED-IV datasets, respectively.

In summary, the proposed model significantly elevates the performance of cross-session emotion recognition in a self-supervised fashion. MSMAE is a general framework that can easily be extended to other EEG-based learning tasks, offering promising directions for future research. However, the current implementation of MSMAE relies on handcrafted features as input, potentially losing valuable information from the original signals. Consequently, our future efforts will explore MSMAE's potential for extracting information directly from raw signals, addressing this constraint and enhancing the framework's utility.

Fig. 1. Mapping the EEG electrode distribution to a two-dimensional plane. The left illustration depicts the spatial arrangement of channels on the EEG cap, while the right shows the converted 2D feature-matrix format. Missing channels are filled with 0.
Fig. 2. Overall structure of MSMAE. The framework consists of a multi-scale pre-training stage, a personalized fine-tuning stage, and a personal testing stage.
Fig. 3. Comparison between MSMAE and other algorithms in various cross-session scenarios within SEED and SEED-IV.
Fig. 4. Comparison of MSMAE and other algorithms on different subjects within SEED and SEED-IV.
Fig. 5. Feature visualization by different methods and at different scales within the SEED dataset.
Fig. 6. Topographic maps learned from the MSMAE model within the SEED dataset.
8,584.8
2024-04-15T00:00:00.000
[ "Computer Science" ]
The level of conservatism and earnings management during IFRS adoption We analyze the level of conservatism and earnings management in the period of IFRS adoption in Indonesia. We use a quantitative approach, tested with group-difference tests, i.e., Mann-Whitney U, ANOVA, and MANOVA. The object of this research is all manufacturing companies listed on the Indonesia Stock Exchange (IDX) for the period 2012-2017. The number of samples used in this research is 516 firm-years. Earnings management is measured with two approaches, i.e., accrual earnings management and real earnings management, while conservatism is measured with the Basu model. The levels of conservatism and earnings management in this study focus on the period after IFRS adoption. We reveal that IFRS adoption does not change accounting conservatism in financial statements. In addition, greater adoption of IFRS is not able to reduce the overall level of earnings management, whether accrual earnings management or real earnings management. Introduction According to a report of the IFRS Foundation (2018), 144 countries have implemented IFRS in their accounting standards. In line with the IFRS Foundation's mission, this standard is expected to improve the quality of financial reporting for transparency, accountability, and efficiency in the global economy. Although IFRS has been adopted by many countries, how it affects the quality of accounting information has thus far not reached any conclusive result, as specified in Trimble's study (2018). The Indonesian Institute of Accountants (IAI) is the organization authorized to create financial accounting standards in Indonesia. After initially referring to the US Generally Accepted Accounting Principles (US GAAP), in 1994 IAI began developing standards with reference to the International Accounting Standards (IAS) (renamed the International Financial Reporting Standards (IFRS) in 2001). The financial accounting standards were revised several times during that period: particular standards (referred to as Statements of Financial Accounting Standards, PSAK) followed international accounting standards, while others remained PSAK designed on the basis of local needs. Then, in 2008, IAI declared a commitment to convergence with IFRS. The first phase began in 2012, referring to IFRS standards with a three-year time lag. The second phase began in 2015 by reducing the time lag to one year (Indonesian Institute of Accountants, 2018). The preferred approach to the IFRS convergence process was thus a gradual one, not a "big bang". The adoption of international accounting standards is envisioned to improve the information quality of companies' financial reporting. Based on the conceptual framework of financial reporting (IAI, 2018), financial reporting aims to provide information useful for decision making by external parties. Because external parties have only limited access to companies, financial statements provide them with direct information, so that the information asymmetry between company management and external parties can be minimized through financial reporting disclosure. Accounting conservatism is no longer regarded as a main principle in the conceptual framework of the International Financial Reporting Standards (IFRS). This prompts further research to empirically examine the impact of IFRS application on the conservatism level of financial reporting.
Some previous studies have not reached conclusive outcomes on whether the level of conservatism decreases. In fact, some studies find an increase in conservatism (Barth et al., 2008; Guenther et al., 2009; Dimitropoulos et al., 2013), while other studies demonstrate a decrease in conservatism (Hellman, Hou, Jin, & Wang, 2014; Manganaris, Spathis, & Dasilas, 2015). The Indonesian data in this research reflect a gradual adoption approach, which provides a research contribution different from previous studies in other countries. A gradual adoption approach is arguably better able to evaluate the impact of IFRS adoption on the quality of financial reporting: it allows companies sufficient time to learn and implement the new PSAK-based accounting standards, which cannot be executed all at once. Indonesia has been adjusting these accounting standards since 1994. The impacts of the IFRS accounting standards can thus be isolated from interferences of the transition process that could disrupt companies' learning. By contrast, before-and-after comparisons of IFRS adoption, as in previous studies, concern companies pushed to implement the standards in their entirety (consisting of 50-60 PSAK) all at once, in contrast with the year before, when an old accounting standard was applied. Considering the objectives envisioned in IFRS adoption as well as the long period of the IFRS convergence process in Indonesia, an evaluation is required to measure the improvement in the quality of financial reporting in Indonesia. To date, there have been few studies in Indonesia pursuing similar interests. Meanwhile, recent studies from other countries that have adopted IFRS have not yielded conclusive results, and those studies employ data from countries that adopted IFRS with a "big bang" approach, not a gradual one like Indonesia, which could well lead to different outcomes. Therefore this study evaluates the development of financial reporting quality in Indonesia, focusing on earnings management and conservatism during the adoption of IFRS. Employing data of 516 firm-years from manufacturing companies in Indonesia during the 2012-2017 period, this study finds that both accrual and real earnings management declined over the six years, though not significantly, except for a significant decline in real earnings management through production costs. Conservatism, in contrast, increased over 2012-2016 and then decreased in 2017. The research findings offer to the financial accounting standards board (DSAK IAI) in Indonesia the observation that IFRS adoption over the past six years has not produced significant improvements in the quality of corporate financial reporting in Indonesia. This outcome might reflect the need to improve the Indonesian institutions that push companies to produce quality financial reporting, as stated by Ball, Robin, & Wu (2003), alongside quality accounting standards. In the next section, a literature review explains the conceptual framework, followed by the research method. The research findings are discussed in the subsequent section, and the paper closes with a conclusion. Hypotheses Development This study builds on the agency theory proposed by Jensen & Meckling (1976). Financial reporting offers a solution to agency problems between principals and agents in companies.
The principal demands information and monitoring over the funds invested in the company, whereas the agent is responsible for managing the company. For that reason, quality financial reporting is required. One necessary effort is improving the existing accounting standards, which determine three major things: first, recognition (what will be recorded and when it is recorded); second, measurement (how it will be measured in monetary value); and third, reporting and disclosure (in which part of the financial statements it will be reported and what additional information must be disclosed). The impact of IFRS on earnings management Earnings management is the choice by a company's management among existing accounting policies, or of real activities of the company, with the purpose of influencing the amount of reported earnings toward certain goals (Scott, 2015). Earnings management can be carried out through (1) accounting policies, i.e., accrual earnings management (AEM), and (2) real activities, i.e., real earnings management (REM). AEM is performed by selecting among existing accounting methods, judgments, and estimates (Scott, 2015). REM is performed by managing the company's real activities, such as excessive production, sales discounts, and the timing of advertising or research and development activities (Roychowdhury, 2006). Earnings management distorts the information presented by the company, which then does not represent its actual condition. It can nevertheless also work in a company's favor, as a means of overcoming blocked communication and of signaling to the market (Scott, 2015). The main characteristics of IFRS, being principle-based and dominantly market-based, are regarded as better reflecting the condition and performance of the company (Spiceland et al., 2018). The change of accounting standards from local standards to IFRS has been examined by measuring the quality of companies' financial reporting after the new standard is enacted, one measure being the level of corporate earnings management. On the one hand, IFRS seemingly offers companies a higher opportunity to conduct earnings management, owing to the risk inherent in a principle-based approach (Christensen et al., 2015; Lippens, 2008; Callao & Jarne, 2010; Ahmed, Neel, & Wang, 2013; Zéghal et al., 2011). Nevertheless, IFRS appears able to improve the quality of accounting information through mandatory disclosures that increase the opportunities for external parties to detect earnings management. Some research findings reveal that modifications of accounting standards impact accounting policies directly; therefore, the adoption of IFRS could reduce AEM practices. On the other hand, since earnings management is needed by companies to achieve particular goals, whether opportunistic or efficient, an increase in REM through the company's operational policy actions can be expected. As for REM, Oz & Yelkenci (2018) found that 14 countries experienced non-significant changes in REM after the adoption of IFRS, with the exception of real earnings management through discretionary expenditure, which decreased significantly. In line with these findings, Doukakis (2014) and Trimble (2018) confirmed non-significant changes in REM after the adoption of IFRS.
All these studies employed data from countries that adopted IFRS all at once and at the same time ("big bang"). Using data from companies in Indonesia, which adopted IFRS step by step, and following the inconclusive findings of previous studies, this study proposes a non-directional hypothesis. H1: the adoption of IFRS in Indonesia influences the level of earnings management The impact of IFRS on accounting conservatism Accounting conservatism is not regarded as one of the major accounting principles in the IFRS financial reporting framework. Scott (2015) defines conservatism as the precautionary principle in financial reporting, under which companies postpone recognizing and measuring assets and profits, but immediately recognize possible losses and debts. Hellman (2011) argued that, compared to conventional accounting, IFRS emphasizes relevance, which creates higher dependence on estimates and various judgments. In this respect, policies issued by the IASB (International Accounting Standards Board) lower the pressure for consistent application of conservative accounting in IFRS-based financial reporting. Nevertheless, several studies (Ball, Kothari, & Robin, 2000; Lang et al., 2003, 2006; Leuz et al., 2003; Ball & Shivakumar, 2005, 2006; Conover, Miller, & Szakmary, 2008) argue that conservatism is one of the characteristics of quality earnings: since conservatism counterbalances management bias, which is likely to be over-optimistic, it is needed by creditors and other parties contracting with companies. Previous findings on the impact of IFRS adoption on the conservatism level are mixed. Studies by Barth et al. (2008), Guenther et al. (2009), and Dimitropoulos et al. (2013) demonstrate that IFRS adoption increased the conservatism level. Other studies state otherwise, suggesting that IFRS adoption decreases the level of conservatism (Hellman, 2011; Hou et al., 2014; Manganaris et al., 2015). All these studies used company data from countries that adopted IFRS at once and produced non-conclusive results. Therefore, this study proposes a non-directional hypothesis. H2: the adoption of IFRS in Indonesia influences the level of accounting conservatism Method, Data, and Analysis Indonesian financial accounting standards that have adopted IFRS are mandatory for all business entities with significant public accountability (issuing shares or debt securities) in all types of industries (Circular of the Financial Services Authority No. 30/SEOJK.04/2016 concerning the form and content of the annual report of an issuer or a public company). Thus the research problem exists for all business entities listed on the Indonesia Stock Exchange (IDX). The population is nevertheless limited to manufacturing companies, because both the accrual-based and real-based earnings management models commonly used are based on the residual values of regressions among variables expected to regularly affect total accruals (in AEM), operating cash flows (REM-CFO), and production costs (REM-PROD). For this reason, the modelling should cover companies from similar lines of business to ensure a valid measurement of earnings management: the concern is the abnormal (residual) values, which certainly vary across industry types. On the other hand, this study avoids a very narrow choice of objects, because the research problem occurs for all types of listed companies in Indonesia.
Thus, to mitigate these two issues, the research object is limited to manufacturing companies, which have similar but non-exclusive lines of business in the same way as the classification of nine industry types on the IDX. In addition, the manufacturing industry is selected because the REM measure demands information on production costs, which exist only in manufacturing companies. Moreover, the selected time period is 2012-2017: 2012 is the first year in which DSAK IAI implemented IFRS adoption with a three-year time lag, and in the following years some Statements of Financial Accounting Standards (PSAK), as part of the complete Financial Accounting Standards (SAK), were continually modified to keep up with modifications of IFRS; 2017 is the final year for which data were available during the research period. Data from this period cover the two stages of IFRS adoption in Indonesia: 2012-2014, when SAK lagged IFRS by three years, and 2015-2017, when SAK lagged IFRS by one year. The sampling technique in this study is non-probability sampling with purposive judgmental sampling, because the sampled data must meet established criteria. Data sources are acquired from … The initial research sample includes 906 firm-years. Of this total, only 516 firm-years are selected as the sample, as demonstrated in Table 1. The selected sample must consist of companies with complete data for all six years (2012-2017), because this study investigates the development of accounting information quality during that period; thus this study employs a longitudinal time horizon.

Table 1. Sampling criteria

Criteria | Companies | Firm-years
Companies listed in succession over 2012-2017 | 122 | 732
Of these, companies whose financial statements use the Indonesian Rupiah (IDR) and close the books on December 31 | 92 | 552
Of these, companies whose annual reports provide all the data needed by the researchers: total sample of manufacturing business entities usable for the research | 86 | 516

The first variable is accrual earnings management (AEM), proxied by discretionary accruals (DA). DA is estimated using the model of Kothari et al. (2005), as follows:

$$\frac{TA_{it}}{A_{it-1}} = \beta_0 \frac{1}{A_{it-1}} + \beta_1 \frac{\Delta REV_{it} - \Delta AR_{it}}{A_{it-1}} + \beta_2 \frac{PPE_{it}}{A_{it-1}} + \beta_3 ROA_{it} + \varepsilon_{it} \tag{1}$$

with DA taken as the residual of Eq. (1).
In Eq. (1), TA_it = total accruals of company i in year t, obtained as net income minus cash flow from operating activities; ΔREV_it = the difference in revenue of company i between years t and t-1; ΔAR_it = the difference in accounts receivable of company i between years t and t-1; PPE_it = property, plant, and equipment of company i in year t; ROA_it = return on assets of company i in year t; and A_it-1 = total assets of company i in year t-1.

The second variable is real earnings management (REM), proxied by the residual cash flow from operations (R_CFO) and the residual production cost (R_PROD). R_CFO and R_PROD are measured using the Roychowdhury (2006) models:

$$R\_CFO_{it} = \frac{CFO_{it}}{A_{it-1}} - \left(\beta_0 + \beta_1 \frac{1}{A_{it-1}} + \beta_2 \frac{S_{it}}{A_{it-1}} + \beta_3 \frac{\Delta S_{it}}{A_{it-1}}\right) \tag{2}$$

$$R\_PROD_{it} = \frac{PROD_{it}}{A_{it-1}} - \left(\beta_0 + \beta_1 \frac{1}{A_{it-1}} + \beta_2 \frac{S_{it}}{A_{it-1}} + \beta_3 \frac{\Delta S_{it}}{A_{it-1}} + \beta_4 \frac{\Delta S_{it-1}}{A_{it-1}}\right) \tag{3}$$

where R_CFO_it = residual cash flow from operations of company i in year t; R_PROD_it = residual production cost of company i in year t; CFO_it = cash flow from operations of company i in year t; A_it-1 = total assets of company i in year t-1; S_it = revenue of company i in year t; ΔS_it = the difference in revenue of company i between years t and t-1; ΔS_it-1 = the difference in revenue of company i between years t-1 and t-2; and PROD_it = production cost = COGS_t + ΔINV_t. Production costs are not available from published financial statements; their proxy is therefore the cost of goods sold plus the change in the value of finished goods inventory, presuming all goods produced were sold in that year (Roychowdhury, 2006).

Non-discretionary accruals, normal operating cash flow, and normal production costs are estimated using the model of Kothari et al. (2005) and the models of Roychowdhury (2006) over the six-year data (2012-2017) for all companies classified in the single manufacturing-industry category. Conservatism is measured using the Basu (1997) model. Even though the measurement of conservatism with the Basu model has been criticized (Dietrich, Muller, & Riedl, 2007; Gigler & Hemmer, 2001; Givoly, Hayn, & Natarajan, 2007; Patatoukas & Thomas, 2011), it remains a well-known and widely used model (Ball et al., 2013). The Basu model is the following:

$$\frac{EPS_{it}}{P_{it}} = \alpha_1 + \alpha_2 DR_{it} + \alpha_3 RET_{it} + \alpha_4 DR_{it} \times RET_{it} + \varepsilon_{it} \tag{4}$$

where EPS_it = earnings per share of company i in year t; P_it = price per share of company i in year t; DR_it = a dummy variable equal to 1 if the return of company i in year t is negative and 0 otherwise; and RET_it = the stock return of company i in year t, with the stock market price taken 3 months after the book closing, taking into account the deadline by which financial statements must be published.

The coefficient of DR_it × RET_it is the variable used to measure the level of conservatism, i.e., the EPS response following a negative stock return in the market. The greater and more positive this coefficient, the more strongly the bad news assessed by the market is reflected in the financial statement information represented by the EPS variable, indicating a higher level of conservatism. The results for the DA, R_CFO, and R_PROD variables will be analyzed using the group-difference tests Mann-Whitney U, ANOVA, and MANOVA. Because it takes the form of a coefficient, conservatism will be analyzed through its trend over the six years.

Results An overall overview of all variables appears in Table 2. AEM, estimated by the mean value of the DA variable, demonstrates a negative mean close to zero (-0.00081892). REM in operating cash flow, estimated by the mean value of the R_CFO variable over the six years, is 0.086599, while REM in production cost, estimated by the mean value of the R_PROD variable, is 0.11232543. Therefore, among these three measures of earnings management, REM in production cost has the highest mean value.
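The discretionary accruals of Eq. (1) are the residuals of a pooled OLS regression. A minimal sketch on synthetic data follows; whether an intercept is included alongside 1/A_it-1 is our assumption, and all numbers are illustrative, not the study's data.

```python
import numpy as np

def discretionary_accruals(ta, rev_diff, ar_diff, ppe, roa, assets_lag):
    """Estimate discretionary accruals (DA) as residuals of the Kothari
    et al. (2005) model, fitted by OLS over the industry pool.

    All inputs are 1-D arrays over firm-years; regressors are scaled by
    lagged total assets, as in Eq. (1).
    """
    y = ta / assets_lag
    X = np.column_stack([
        np.ones_like(y),                 # intercept (our assumption)
        1.0 / assets_lag,
        (rev_diff - ar_diff) / assets_lag,
        ppe / assets_lag,
        roa,
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                  # residual = discretionary accruals

rng = np.random.default_rng(0)
n = 516                                  # firm-years, as in the sample size
assets = rng.uniform(1e2, 1e4, n)
da = discretionary_accruals(rng.normal(0, 50, n), rng.normal(0, 30, n),
                            rng.normal(0, 10, n), rng.uniform(10, 500, n),
                            rng.normal(0.05, 0.1, n), assets)
print(da.mean())                         # residuals average ~0 by construction
```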
Table 3 demonstrates the results of the group-difference tests on the magnitudes of AEM and REM, comparing the initial year (2012) and the final year (2017) of the IFRS adoption period. AEM indicates a non-significant difference. The result is consistent for REM in cash from operating activities, which also indicates a non-significant difference; however, REM in production activity indicates a significant difference. In addition, one-way ANOVA testing is carried out to examine whether AEM and REM differ significantly when examined across all years. The test compares one dependent variable against one fixed factor, i.e., year. Tables 4, 5, and 6 demonstrate the between-year test results for accrual and real earnings management. From the one-way ANOVA results, it can be presumed that no difference occurs in AEM or in REM in cash from operating activities during IFRS adoption; however, REM in production activities differs significantly during IFRS adoption. A multivariate analysis of variance (MANOVA) is then carried out to examine the effect of year on AEM, REM operating cash flow, and REM production costs jointly. The testing results indicate differences in the level of earnings management, both accrual and real (operating cash flows as well as production costs), during IFRS adoption, with a Wilks' lambda significance of 0.000, smaller than 0.05 (Table 7). Because the testing demonstrates different results between ANOVA and MANOVA, the research also undertakes a univariate test. This test serves as a robustness check of whether earnings management differs significantly when measured against the year variable. In this test, the three dependent variables are brought together and compared against one fixed factor, i.e., year. Based on the univariate test, there is a significant difference during the adoption of IFRS in real earnings management of production activities, with a significance of 0.000, smaller than 0.05, consistent with the ANOVA results, except between 2013 and 2014 and among 2015, 2016, and 2017. Table 8 shows the results of the regression test using the Basu model to examine accounting conservatism in the financial statements from 2012 to 2017. The accounting information shows conservatism only in 2014, with a significantly positive coefficient; in the other years the accounting information does not reveal any significant difference in the recognition of bad news versus good news. IFRS adoption in Indonesia affects the level of earnings management The group-difference test between 2012 and 2017, the one-way ANOVA across all years, and the univariate test indicate consistent results for real earnings management (REM): only production costs change significantly.
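The group-difference tests named above can be reproduced with standard routines. The following sketch uses synthetic R_PROD values purely for illustration; the study's own results are in Tables 3-7.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic R_PROD values for illustration only (not the study's data):
# 86 firms per year over 2012-2017.
rprod = {year: rng.normal(loc, 0.05, 86)
         for year, loc in zip(range(2012, 2018),
                              [0.16, 0.09, 0.12, 0.11, 0.10, 0.10])}

# Mann-Whitney U: first vs. last year of the adoption period.
u, p_u = stats.mannwhitneyu(rprod[2012], rprod[2017], alternative="two-sided")

# One-way ANOVA across all six years.
f, p_f = stats.f_oneway(*rprod.values())

print(f"Mann-Whitney U p = {p_u:.4f}, ANOVA p = {p_f:.4f}")
```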
For this reason, over the past six years IFRS adoption has succeeded in reducing REM only through production activities. To be specific, the univariate test results for real earnings management of production costs reveal a significant decrease from 2012 to 2013 and a significant increase from 2014 to 2015. These two years (2012 and 2015) are the initial years of the first and second stages of the IAI implementation. The overall modification of IFRS-based standards with a three-year time lag therefore had a positive impact on the quality of financial reporting, through a significant decrease in REM production costs; yet when IAI began the second stage, REM production costs increased significantly, even though they remained far below the 2012 value. To understand in more depth the fluctuation of the mean values of AEM and REM over the six-year period, Figures 1, 2, and 3 show the trends in the mean values of AEM, REM operating cash flow, and REM production costs. Based on Figure 1, AEM declined significantly from 2012 to 2013, then increased again and remained stable at a value lower than in 2012; thus AEM did decrease during the IFRS adoption period, but not significantly. Real earnings management through operating cash flow rose slightly over 2012-2015, then fell over 2015-2017 to a value lower than in 2012. Real earnings management through production activities declined significantly from 2012 to 2013, rose again, and has been relatively stable since 2015 at a value lower than that of 2012. The trend of real earnings management of production costs over the six years of observation (Figure 3) is in line with the trend of accrual earnings management (Figure 1). This result is presumably related to the overall adoption of IFRS leading companies to reduce real earnings management through overproduction intended to increase earnings for the period: companies are more concerned about the long-run impact of the additional costs of handling inventory and the potential obsolescence of inventory value in the future. This is consistent with inventory valuation under PSAK 14/IAS 2, which applies the lower of cost or net realizable value (LCNRV), such that when the net realizable value is expected to decline as a result of obsolescence, the inventory value is written down by recognizing an expense. To a large extent, the results of the group-difference tests, both ANOVA and MANOVA, confirm that the larger adoption of IFRS in Indonesia does not significantly influence earnings management, except for real earnings management through production activities. The AEM finding is consistent with Jeanjean & Stolowy (2008), Wang & Campbell (2012), Doukakis (2014), Brice et al. (2015), and Trimble (2018), while the non-significant REM effect is consistent with the previous studies of Doukakis (2014) and Trimble (2018). Based on the testing results, we conclude that H1 is rejected: the tests cannot produce evidence that the adoption of IFRS in Indonesia intensively influences earnings management. In fact, the adoption of IFRS has a positive, albeit non-significant, impact on minimizing accrual and real earnings management.
In any case, this impact must be strengthened by enforcement and by institutions that push for faithful representation in companies' financial reporting, so that earnings management is reduced significantly and sustainably as part of improving the quality of financial reporting. Maintaining quality in the long run requires monitoring the fluctuating trend of earnings management, which indicates how determined a company's management is to adjust to changes in financial accounting standards. This is in line with the research findings of Ball et al. (2003), who state that the adoption of quality accounting standards alone is insufficient to improve the quality of financial reporting in a country. The adoption of IFRS in Indonesia changes the level of accounting conservatism Based on the estimation results of the Basu regression model shown in Table 8, the accounting information in companies' financial statements was conservative only in 2014, the same year in which the F-test result was significant. Based on a review of the standard modifications, both comprehensive and per PSAK, there is no specific standard that would affect conservatism in that year; further studies should consider external and internal factors during that year. The research findings are not consistent with previous studies, which revealed either an increase in conservatism (Barth et al., 2008; Guenther et al., 2009; Dimitropoulos et al., 2013) or a decrease in conservatism (Hellman, 2011; Hou et al., 2014; Manganaris et al., 2015). Based on the testing results, we conclude that H2 is rejected: the tests are unable to reveal that the adoption of IFRS in Indonesia influences accounting conservatism in financial reporting. At the beginning, the adoption of IFRS in Indonesia improved conservatism, suggesting an improvement in the quality of financial reporting; yet in the last year of investigation conservatism declined. Further research is needed to see whether the decreasing trend of conservatism continues in the following years. If the decline persists, it will demonstrate that IFRS does not contribute positively to conservatism, by failing to mention it specifically among the qualitative characteristics of financial reporting in the conceptual framework. This would seem reasonable because many account measurements in IFRS employ fair value. Fair value fluctuates in conformity with current conditions, so the ability of financial reporting to reflect bad news and good news already impounded by the market is symmetric. For that reason, the accounting information is not particularly conservative, in the sense of recognizing bad news faster than good news. Conclusion This study investigates whether the levels of earnings management and conservatism, which characterize the quality of financial reporting, changed throughout the period of IFRS adoption. The data analysis leads to the following conclusions. As shown by the Mann-Whitney U, ANOVA, and MANOVA results, significant differences occurred only in real earnings management for production activities, particularly in 2012 and 2015, during the period of IFRS adoption. Beyond this, IFRS adoption that is larger in scope and shorter in time lag does not affect the levels of accrual earnings management or real earnings management through operating cash flow.
The levels of accrual earnings management and of real earnings management in cash from operating activities did decline, but not significantly. Accounting conservatism increased throughout the IFRS adoption period, yet in the last year of observation, 2017, accounting conservatism declined. This confirms that the adoption of IFRS does not affect accounting conservatism in companies' financial statements. This study acknowledges some limitations. For instance, it assumes that external environmental factors and companies' internal conditions were constant during the observed period, such that changes in the quality of financial reporting result mainly from the prevailing financial accounting standards. A suggestion for further studies is to take external environmental factors and company characteristics into account, to avoid attributing outcomes primarily to the financial accounting standards.
6,629
2020-01-28T00:00:00.000
[ "Business", "Economics" ]
Sources Identification of Water Inrush in Coal Mines Using Technique of Multiple Non-Linear Machine Learning Modelling Water inrush is a major threat to working safety in coal mines of the Northern China coal district. The inrush pattern, the threat level, and the geochemical characteristics vary with the water source. Therefore, identifying the water source correctly is an important task in predicting and controlling water inrush accidents. In this chapter, the algorithms and attempts used to identify water inrush sources, especially in the Northern China coal mine district, are reviewed. Geochemical analysis and machine learning algorithms are the two main methods for identifying water inrush sources. Four main steps apply in the process of machine learning (ML) modelling, namely data processing, feature selection, model training, and evaluation. According to a calculation instance, most of the major ions, along with some trace elements such as Ti, Sr, and Zn, were identified as important in light of geochemical analysis and machine learning modelling. ML algorithms such as random forest (RF), support vector machine (SVM), and logistic regression (LR) perform well in the source identification of coal mine water inrush. Introduction Water inrush is one of the severe hazards for coal mines in China. According to statistical material, more than 25 billion tons of coal resources in China are at risk of water inrush. From 2000 to 2015, 1162 water inrush accidents were reported, causing 4676 deaths; these numbers account for 3.3% and 7.8% of the totals for all coal mine accidents. In spite of the low proportion, major accidents often took place, leading to severe loss of property and life. The Northern China district is an important coal base area, whose reserves account for nearly 40% of the national total. Therefore, the prevention of water inrush accidents is a key issue for mining safety. The main water inrush threats to the working face can be grouped into four types, namely surface water, coal roof aquifer water, coal floor aquifer water, and goaf water. Coal roof water is usually related to coal seam sandstone aquifers, sometimes in association with Quaternary aquifers. Goaf water forms when a working face is closed and ground water fills up the space. Coal floor water is usually related to limestone aquifers in the Ordovician system and the Taiyuan Formation of the Carboniferous system. The different types of water inrush threats show different precursors, bursting behaviours, and hazard ratings, and correspondingly different treatment technologies are essential. Therefore, techniques to predict and evaluate accident potential, forecast accident occurrence, and identify water inrush sources are a key step in preventing accidents or disasters and protecting working safety and human health and lives. In this chapter, the main techniques used to identify water inrush sources and their applications, mainly focusing on the Northern China district, are illustrated. Methods and their applications for source identification A basic strategy for identifying the source of a water inrush is based on geochemical characteristics. Some researchers have compared the concentrations of major ions, including K+, Na+, Ca2+, Mg2+, Cl−, SO4^2−, CO3^2−, and HCO3−, as well as the total dissolved solids, between different aquifers to determine water sources.
In different aquifers, the water composition is a response to the aquifer's original characteristics and the water-rock interaction process. In Northern China, the two main groups of aquifers are coal-bearing strata aquifers and limestone aquifers. Geochemists typically look for key ions in the water, sometimes with the help of geochemical diagrams, to determine the water sources. While the geochemical strategy is based on a few distinctive ions and parameters in a low-dimensional space, another strategy, namely machine learning (ML) algorithms, is based on multivariate analysis with a range of specific methods and provides more quantitative and reliable results. Geochemical methods The geochemical method is a popular technique in water inrush identification, for mainly two reasons. First, some coal mines, especially those of large companies, have their own laboratories to test water geochemistry, so it is easy to obtain data. Second, experienced technicians are familiar with water geochemical data, especially the major ions and important parameters. Researchers usually begin their studies from the normal water geochemistry, to investigate the water characteristics of every aquifer, set up identification models to distinguish each water type from the others, and find out the water-rock interaction mechanisms behind the water composition. An easy-to-handle method to identify the water source is to analyse the major-ion characteristics. Cheng et al. [1] analysed the water geochemistry of Quaternary aquifers, magmatic aquifers, and limestone aquifers in the Huaibei coal mine district, Anhui province. The data were grouped into different chemical types, which can be used as a database for water source identification. Chen and Gui [2] discussed water geochemistry in the Wanbei coal mine district in Anhui province. Zhang and Cao [3] analysed ground water in the Hancheng coal mine district in Shaanxi province, finding that the potential water burst point was related to the limestone aquifer. Dai et al. [4] discussed water characteristics in the Xiangshan coal mine in Shaanxi province; the data were grouped using SPSS to set up a database for further coal mine monitoring and forecasting. The author's group has collected and analysed more than 30 water samples in the Lu'an coal mine district in Shanxi province; the pattern of underground water flow and the water characteristics of every aquifer were summarised, and some important ions, trace elements, and parameters were identified and used to distinguish water sources from one another. A geochemical chart, the piper diagram, is usually used to analyse and group water samples by plotting the data as points in two triangles and a diamond. Zhang and Cao [3] and Dai et al. [4] have applied this technique to identify water sources. The author's group collected samples in 2019; Table 1 shows part of the data, and Figure 1 shows the water geochemistry in a piper diagram. As Figure 1 shows, water in the coal-bearing seams shows similar characteristics: Na+ and K+ take more than 80%, and up to more than 95%, of all the cations, and the TDS of most water samples is less than 1000 mg/L. The limestone water shows a spanning pattern, and the TDS of the limestone aquifer water shows a much wider range, from less than 500 mg/L to higher than 3000 mg/L. In the limestone aquifers, the water volume is larger and the water-rock interaction is stronger than in the coal-bearing seams, which may explain the water characteristics of the limestone aquifers.
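Plotting samples on a piper diagram, as in Figure 1, starts from milliequivalent percentages of the major ions. A minimal sketch follows; the helper name and the illustrative coal-seam-like concentrations are ours, while the equivalent weights are standard values.

```python
# Equivalent weights (mg per meq) for the major ions.
EQ_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
             "Cl": 35.45, "SO4": 48.03, "HCO3": 61.02}

def meq_percent(sample_mg_l):
    """Convert major-ion concentrations (mg/L) to meq/L and return the
    cation and anion percentages used to plot a sample on a piper diagram."""
    meq = {ion: c / EQ_WEIGHT[ion] for ion, c in sample_mg_l.items()}
    cations = {k: meq[k] for k in ("Ca", "Mg", "Na", "K")}
    anions = {k: meq[k] for k in ("Cl", "SO4", "HCO3")}
    cat_pct = {k: 100 * v / sum(cations.values()) for k, v in cations.items()}
    an_pct = {k: 100 * v / sum(anions.values()) for k, v in anions.items()}
    return cat_pct, an_pct

# An illustrative coal-seam-like water: strongly Na(+K)-dominated cations.
sample = {"Ca": 8, "Mg": 4, "Na": 310, "K": 6,
          "Cl": 85, "SO4": 120, "HCO3": 520}
cat_pct, an_pct = meq_percent(sample)
print(f"Na+K = {cat_pct['Na'] + cat_pct['K']:.0f}% of cations")  # ~95%
```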
Source identification using the basic geochemical technique is a qualitative, or semi-quantitative, method, which largely depends on the researcher's experience. If clear differences between aquifers are observed in low dimensions, the basic geochemical technique is useful and easy to use. However, when the differences only emerge in a higher dimension, i.e. in the combination of ion compositions, this method may lead to confusing results.

Not only the major ions, but also trace element concentrations and isotope values can be used to distinguish one source from the others. Some researchers have used trace elements to distinguish water samples or to set up discriminant models. Feng and Han [5] analysed the concentration and occurrence of trace elements and modelled their formation using PHREEQC. Chen et al. [6] collected 24 samples from the quaternary aquifers, coal seam sandstone aquifers, and limestone aquifers in the Wanbei coal mine district in Anhui province and tested 24 trace elements, including Be, B, Sc, V, Cr, etc. The samples and trace elements were clustered. Eight trace elements, namely Be, Zn, Ga, Sr, U, Zr, Cs, and Ba, were then found to be the key parameters for setting up a discriminant model. These key trace elements were used to train a Bayes discriminant analytical model with good performance.

Isotopes are also used in studying water inrush in coal mines. The most popular isotopes are δD and δ34S. In recent years, such studies have been applied in the Wanbei coal mine district [7-9], the Fushun coal mine district [10], etc. In the author's research in the Lu'an coal mine district in Shanxi province, the major ions and trace elements were treated together; then SO42−, Ti, Sr, Mg, K+Na, Zn, and Cl− were chosen as the typical ions or elements to train models.

Furthermore, the water forms at the scale of a whole hydrogeological unit; therefore, the analysis should also be carried out at the scale of the whole unit, not at a single point. In the Northern China area, several ground units can be distinguished, and the water-rock interactions among them show similar patterns in different coal mine districts. Therefore, analysis at the coal-mine-district scale, and comparison between different districts, is an important task in summarising the common mechanisms of water-rock interaction and the discrimination models.

Machine learning methods

The geochemical method is effective only if the water samples can be grouped and divided very clearly by one or very few parameters. In most scenarios, the ion-distinguishing method is confusing and lacks accuracy: the difference between water samples is embedded in a high dimension, i.e. the combination of major ions, trace elements, and other parameters, and it is hard to find the dividing mode just by observation or simple plotting. Benefiting from the development of data science and technology, environmental and geological problems, including ground water problems, can be described and divided by ML methods.

ML algorithms can be simply divided into supervised, unsupervised, and semi-supervised, depending on how the target variables are labelled. For some environmental and geological problems, the target variables cannot be labelled, and then unsupervised ML algorithms, such as principal components analysis (PCA), are applied. For example, Shan et al. [11] applied the PCA method to analyse the occurrence and leaching mechanism of elements in coal and host rock, and Pumure et al. [12] successfully characterised the occurrence of As and Se in coal host rock.
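In the same unsupervised spirit, the sample/element clustering used by Chen et al. [6] above can be sketched in a few lines of Python. The data below are synthetic and the two-group structure is assumed purely for illustration; clustering the transposed matrix groups the elements rather than the samples.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic matrix: 24 samples x 8 trace elements (e.g. Be, Zn, Ga, Sr, U, Zr, Cs, Ba).
X = np.vstack([
    rng.normal(1.0, 0.2, (12, 8)),   # e.g. limestone-aquifer-like samples
    rng.normal(2.0, 0.2, (12, 8)),   # e.g. sandstone-aquifer-like samples
])
# Standardize each element, then cluster the samples (Ward linkage).
Xz = (X - X.mean(0)) / X.std(0)
labels = fcluster(linkage(Xz, method="ward"), t=2, criterion="maxclust")
print(labels)  # the two synthetic groups are recovered
# Clustering the transpose (Xz.T) groups the elements themselves instead.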
Self-Organising Maps (SOM) are a kind of unsupervised artificial neural network (ANN) used for large datasets [13,14]. For water inrush problems, the PCA algorithm is only used when the target variables cannot be labelled [15-17].

When researchers carry out such studies, discriminant models must be trained. The training data are obtained from the samples collected from each aquifer; at this step, the data are usually clearly labelled. Therefore, the target variables can be obtained for most research cases, and supervised ML algorithms can be used, which show higher precision and accuracy than unsupervised ML algorithms. Several algorithms are suitable for model training, such as artificial neural networks (ANN), support vector machines (SVM), discriminant analysis (DA), decision trees (DT), random forests (RF), boosting, regression, etc.

In Northern China, supervised ML algorithms have been used in several coal mine districts. Table 2 shows part of the research cases in the Northern China area in recent years. It can be concluded from the table that DA criteria are the most frequently implemented; some other methods, such as SVM and ANN, are also used.

Supervised machine learning algorithms

Up to the present, DA has been the most popular method for identifying sources of water inrush in the Northern China district. Two criteria are usually used, namely the Fisher criterion and the Bayes criterion. In the framework of the Fisher-criterion-based DA algorithm, high-dimensional data are projected onto a one-dimensional space, and a discriminant criterion is then obtained that maximises the between-group variance and minimises the in-group variance. Because this method handles a two-group problem, many rounds of calculation are needed for a multiple-group problem. The Bayes-criterion-based DA method calculates the posterior probability of a sample belonging to each group; the sample is then classified into the group with the highest posterior probability. Compared with the Fisher criterion, the Bayes criterion is more frequently used.

DA is a linear algorithm. Along with the development of ML technology, non-linear modelling has become widely used in research, including in the geological, environmental, and engineering areas. To deal with surface water and ground water problems, the SVM method has been applied to predict water quality and water level [23,24], ANN and DT have been used to predict the [NO3−] of ground water [25] and to set up water quality monitoring systems [26], and boosting trees have also been used to classify distributed water and ground water.
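As a concrete illustration of the discriminant analysis workflow described above, a Fisher-style projection combined with assignment to the group of highest posterior probability, the following sketch uses scikit-learn's LinearDiscriminantAnalysis; the two-aquifer training data and the "unknown" inrush sample are synthetic, not measured values.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Synthetic training data: two aquifer groups, 3 hydrochemical features each.
X = np.vstack([rng.normal([1, 2, 0], 0.3, (30, 3)),
               rng.normal([2, 1, 1], 0.3, (30, 3))])
y = np.array([0] * 30 + [1] * 30)   # 0 = coal-bearing seam, 1 = limestone

lda = LinearDiscriminantAnalysis()  # Fisher projection + Gaussian class model
lda.fit(X, y)

unknown = np.array([[1.8, 1.1, 0.9]])   # a hypothetical inrush-water sample
post = lda.predict_proba(unknown)       # posterior probability per group
print(lda.predict(unknown), post)       # assign to the highest posterior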
Compared with these applications, however, non-linear ML methods have so far been applied relatively rarely to water inrush problems in coal mines, even though they may achieve higher accuracy than linear algorithms. According to the literature, ANN [27] and SVM [20] have been implemented in this area. The ANN is a very popular technique in many fields, including image and voice recognition, autonomous driving, etc. However, an ANN usually needs a large amount of training data to control its over-fitting problem. On the other hand, the data in the environmental and geological area, including water inrush analysis, are usually structured data of limited quantity. As a result, over-fitting is hard to control, which means that a low prediction accuracy is to be expected when an ANN model is checked against the testing data, even though a high accuracy may be obtained on the training data. The SVM algorithm performs better at controlling the over-fitting problem. Besides SVM, the DT, boosting tree, and Bayes network (BN) also have good prospects, in view of the characteristics of coal mine ground water data, i.e. structured data of small quantity.

Data selection and feature engineering

The tested ground water data are the raw material of model training. However, data preparation is essential to ensure or enhance the model quality. The data preparation work mainly includes data selection and feature engineering.

In a wide sense, data selection includes data cleaning, which means the treatment of units and missing data. The data should then be screened to determine which records are used in the model training step. Data selection is applied in two stages, before and after model training. Before model training, selecting suitable data means making sure that all the data are labelled correctly: incorrectly labelled data definitely leads to a wrong model, regardless of the quality of the modelling. After model training, the training data should be checked again, very carefully, to find wrongly classified records. If a record is determined to have been wrongly pre-labelled, it should be deleted and a new model trained.

The other important preparation work before model training is feature engineering. The basic principle of feature engineering is to achieve the best performance of the model; Figure 2 shows the idea. The number of features, or parameters, determines the model complexity: more features in the model lead to a higher complexity. As Figure 2 shows, the prediction performance is related to the model complexity. A very simple model leads to very bad model performance, which is why the non-linear models are used. As the model complexity increases, the prediction error on the training samples decreases steadily, whereas the prediction error on the testing samples first decreases and then increases again. The latter indicates an over-fitting problem in the ML model. Therefore, the features have to be processed if a well-performing model is required.

Feature engineering includes feature fusion and feature selection. A common feature fusion method is PCA. PCA can reduce the dimension of the data, so that features in a lower-dimensional space represent most of the information in the data. However, being combinations of the original features, the new features cannot reflect the data characteristics directly. When researchers want to analyse the importance of the parameters in the original data, the feature selection technique should be used instead.

Popular feature selection methods include RF, Lasso regression, etc. RF-based feature selection proceeds through the following steps.

1. The data set X contains N samples. Samples are drawn randomly from X using bootstrap resampling; the resampling is carried out k times, to construct k regression trees. In this process, the probability of a given sample never being drawn is p = (1 − 1/N)^N, which tends to e^(−1) ≈ 0.37 as N increases to infinity. This means that about 37% of the samples in X are not drawn for a given tree; these data are not used in training that tree and are called the out-of-bag (OOB) data. The OOB data are used to test the regression trees.

2. For the k bootstrap samples, k unpruned regression trees are created respectively. In the training process, for each node, m attributes are randomly selected from the total of M attributes as candidate internal nodes. An optimal attribute is then selected from the m attributes as the split variable to grow the branches, according to the minimum Gini index principle.

3. The k decision trees comprise a random forest, whose quality can be evaluated using two indices, the mean square error of the OOB data (MSE_OOB) and the coefficient of determination (R²_RF):

MSE_OOB = (1/n) Σᵢ (yᵢ − ŷᵢ)²,    R²_RF = 1 − MSE_OOB / σ²_y,

where n is the total number of samples, ŷᵢ is the output predicted by the generated RF regression model, yᵢ is the observed output value, and σ²_y is the variance of the OOB outputs.

4. The RF model provides two methods to determine the importance of each variable: the mean decrease in the Gini index and the mean decrease in accuracy. In a regression model the mean decrease in Gini is usually used, while the mean decrease in accuracy is more often applied to classification problems. Water inrush source identification is a classification problem, so the mean decrease in accuracy is selected here.
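A compact Python sketch of steps 1-4 with scikit-learn follows. The forest's built-in OOB score plays the role of the OOB test above, and permutation importance approximates the mean decrease in accuracy; the nine features and the class signal are synthetic stand-ins, not the Lu'an data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
features = ["K+Na", "Ca", "Mg", "Cl", "SO4", "HCO3", "Ti", "Sr", "Zn"]
# Synthetic data: SO4, Sr, and Ti carry the class signal here by construction.
n = 120
y = rng.integers(0, 3, n)                       # three water-source classes
X = rng.normal(0, 1, (n, len(features)))
for j, f in enumerate(features):
    if f in ("SO4", "Sr", "Ti"):
        X[:, j] += y                            # informative columns

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)           # ~37% of rows are OOB per tree

# Mean decrease in accuracy (permutation importance), as used for classification.
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for f, m in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{f:5s} {m:.3f}")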
When carrying out inrush source identification, the attributes that can be used in the model include major ions, trace elements, important parameters, isotopes, etc. Among these, the major ion data and important parameters are easier to obtain. On the other hand, adding trace elements and isotopes to the models may enhance the model performance, since these parameters carry a great deal of information about the water samples. For ease of use, only the major ions and important parameters may be used; for higher model accuracy, more parameters can be added. There is therefore a balance to be struck when building models.

In our previous study in the Lu'an coal mine district in Shanxi Province, all the prescribed parameters were tested. In the first step, feature selection was applied to the major ions and important parameters (Figure 3), and then to all the data (Figure 4).

Figure 1. Piper diagram of the ground water (squares: surface water; triangles: quaternary water; circles: coal-bearing seam water; stars: limestone aquifer water).
Figure 2. Correlation of model complexity and prediction error.
Figure 3. Feature selection result of major ions and important parameters using the RF algorithm.
Figure 4. Feature selection result of all data using the RF algorithm.
Table 1. Major ion data in ground aquifers (mg/L).
4,380.6
2020-10-27T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Molecular dynamics simulation of interface dynamics during the fcc-bcc transformation of a martensitic nature

The structural and dynamic properties of the interface during the fcc-bcc transformation in pure iron have been investigated by molecular dynamics simulations. An embedded atom method potential was used for the atomic interactions. Two interfaces, close to the Bain and Kurdjumov-Sachs orientation relations, have been examined during the fcc-to-bcc transformation. In each simulation the system was left to evolve freely at the imposed temperature. In a system with fully periodic boundaries no interface motion has been observed, whereas systems with at least one free boundary do show a mobile interface. After an incubation time, there is a very fast transformation from fcc to bcc, with interface velocities reaching values in the range of 200-700 m/s, depending on the interface orientation and on temperature. The characteristics of the transformation are of a martensitic nature, without this being imposed on the system. During the incubation time a complex interface structure is formed, which appears to be essential for the martensitic transformation. From the atomic displacements during the transformation, the occurrence of slip planes can be identified.

I. INTRODUCTION

The kinetics of phase transformations in metallic alloys has been studied extensively, especially for transformations that are governed by the long-range diffusion of alloying elements [1]. Interface-controlled phase transformations [1] have also been the subject of numerous studies. Although a general insight into the kinetics of these diffusional phase transformations has been developed, observations of the actual atomic processes taking place at the interface during the transformation are still very scarce [2,3]. The insight into the nature of martensitic transformations is even more limited than for diffusional transformations. The definition of a martensitic transformation is based on the characteristics of the atomic processes, viz. a collective motion of atoms which move over less than an interatomic distance during the process, but neither the kinetics nor the atomic processes at the interface have been investigated extensively. The reason for this lies, of course, in the experimental difficulties of such studies. No experimental technique is capable of observing the atomic motion taking place during the movement of the interface at a velocity possibly as high as the velocity of sound [4]. Therefore, the scientific question concerning the fundamental character of the martensitic transformation remains largely unanswered, whereas this question is not only of scientific but also of great practical importance, for instance, for martensite formation in steel and in shape-memory alloys.
At present, simulation by means of molecular dynamics (MD) seems to be one of the very few methods available to acquire information about the nature of the martensitic transformation. Because the transformation is very fast, the actual transformation time can be covered in an MD simulation. A limited number of studies applying the MD technique to martensitic transformations have been reported in the literature. Lill and Broughton [5] have studied the martensitic transformation after artificially imposing the nucleation event by a particular choice of simulation conditions; related MD studies are reported in Refs. [7,8]. Unfortunately, none of these studies are focused on the interface. Important aspects such as the interface structure and the interface velocity are not mentioned, as the focus is more on the resulting microstructure. In another study, Meyer and Entel have studied the martensite-to-austenite retransformation in iron [9]. In the present study we investigate the decomposition of austenite in iron. Austenite is the fcc phase of iron, which is stable at higher temperatures. During cooling, it is known to transform either diffusionally into ferrite or by a martensitic transformation into martensite. In this study, the simulation system consists of an fcc grain that is neighboured on two sides by a bcc grain. This system is allowed to evolve freely in time at a temperature at which the bcc phase is the stable phase. Therefore, no transformation mechanism or kinetics are imposed. The focus of this work is on the properties of the moving interface. This is also the main reason that the start configuration for the simulations already contains a stable bcc phase.

The simulations have been performed on different simulation systems, with the relative orientations of the bcc and fcc grains and the surface area as important parameters (Sec. II). By "surface" we mean the free boundaries of the system, i.e., those boundary planes that are not connected to periodic images of the system. The simulation results of the observed phase transformations (Sec. III) are discussed in terms of the interfacial structure, the free boundaries, and the influence of temperature and driving force in Sec. IV.

A. Johnson-Oh embedded atom method formalism

A good description of an interface in motion requires a large three-dimensional (3D) simulation system, consisting of at least 10000-100000 atoms. Because of this requirement, the use of highly sophisticated atomic interaction schemes would lead to unfeasibly long simulation times. We have chosen a relatively simple yet sufficiently realistic interaction model, namely, the embedded atom method (EAM) [10]. This class of N-body potentials is known to function well in cases where defects are important.

The EAM, first developed by Daw and Baskes [11,12], describes the potential energy V of a system as

V = Σ_i F(ρ_i) + (1/2) Σ_{i≠j} φ(r_ij),    with ρ_i = Σ_{j≠i} ρ_a(r_ij).

Here ρ_i is interpreted as the electronic charge density at the site of atom i, resulting from the spherically symmetric charge densities ρ_a(r) carried by each of the neighboring atoms j, and r_ij is the interatomic distance. The embedding function F(ρ) describes the potential energy of an atom embedded in a given electronic charge density. The pair potential φ(r) is the two-body contribution to the potential energy.
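The structure of this energy expression is easy to make concrete in code. The sketch below evaluates V for a small cluster using toy functional forms for F, φ, and ρ_a; these placeholders are chosen only to be well-behaved and are not the Johnson-Oh iron parameterization discussed next.

import numpy as np

# Toy EAM energy V = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij).
def rho_a(r):               # spherical atomic charge density (toy form)
    return (2.5 / r) ** 6

def embed(rho):             # embedding function F(rho) (toy form)
    return -np.sqrt(rho)

def pair(r):                # pair potential phi(r) (toy form)
    return 0.5 * ((2.5 / r) ** 12 - 2 * (2.5 / r) ** 6)

def eam_energy(pos, r_cut=4.5):
    n = len(pos)
    v_pair, rho = 0.0, np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < r_cut:
                rho[i] += rho_a(r); rho[j] += rho_a(r)
                v_pair += pair(r)
    return v_pair + embed(rho).sum()

# Small bcc-like cluster: corner atoms of a cube plus its body centre.
a = 2.87  # lattice constant, Angstrom
corners = np.array([[x, y, z] for x in (0, a) for y in (0, a) for z in (0, a)], float)
pos = np.vstack([corners, [[a / 2, a / 2, a / 2]]])
print(eam_energy(pos))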
Johnson and Oh have developed an analytical EAM model for bcc metals [10] in which the potential parameters are expressed as functions of seven properties of the element to be modeled. These are the cohesive energy E_c, lattice constant a, atomic volume Ω, bulk modulus B, Voigt average shear modulus μ, anisotropy ratio A, and the unrelaxed vacancy formation energy E_1V^UF. The pair potential φ(r) is given as a polynomial in the reduced distance r/r_1e with constant parameters K_0-K_3, where r is the interatomic distance and r_1e the nearest-neighbor distance in the equilibrium bcc crystal. The spherical charge density is expressed as

ρ_a(r) = f_e (r_1e/r)^β,

with f_e a dimensionless factor that is immaterial for monoatomic potentials; the power β has been given the value 6 by Johnson and Oh. The embedding function F(ρ) is parameterized in terms of ρ/ρ_e, with ρ_e the electron density at each lattice site of the equilibrium crystal and with an exponent n that follows from the input properties. To limit the calculation time, the potential and the spherical density function are set to zero at a cutoff distance r_c, where the value of V becomes very small. This distance has been chosen as r_c = r_2e + (1/2)(r_3e − r_2e), with r_2e and r_3e the second and third neighbor distances. The seven input properties for iron were taken as reported by Johnson and Oh and are listed in Table I.

The key quantity for the relative stability of the phases involved in the transformation is the free energy G, or more specifically, the free-energy difference between the phases. Therefore, it is important that the chosen potential describes the fcc-bcc free-energy difference well. Figure 1 shows the free-energy difference for the iron EAM potential used here, as determined with an MD adaptation of the method introduced by Miller and Reinhardt [13]. It is seen that for this system the bcc phase is the stable phase. At low temperatures, the value of ΔG for the present system is similar to the experimental value for iron, as can be readily calculated from thermodynamical databases. With increasing temperature, ΔG does decrease in absolute value, as expected, although not rapidly enough to reach ΔG = 0 at the ferrite-austenite equilibrium temperature of 1184 K that is found for real iron. In fact, the bcc phase is more stable than the fcc phase over the entire temperature range considered.

A peculiar property of this EAM model of iron is the density difference between the fcc and bcc phases. In the entire temperature range considered, the equilibrium density is lower for fcc than for bcc by ~5%. In real iron, fcc is the denser phase by a similar difference. Although we have not studied the effect of this on the transformation in detail, it is felt that the magnitude of the density difference is much more important than its sign, since with either sign local strains will develop during the transformation.

B. Simulation conditions

All simulations have been performed at zero pressure and at constant temperature, using a barostat and a thermostat of the Berendsen type [14]. The MD time step was not fixed but was determined by a maximum displacement criterion of 0.02 Å per time step.

The simulations of the fcc-to-bcc transformation have been performed with systems of different sizes, and with periodic boundaries in either one, two, or three directions. The simulation box (Fig. 2) was rectangular, and the system always contained two bcc/fcc interfaces perpendicular to the z direction, in which periodic boundary conditions were applied in all cases. Table II gives an overview of all the interface variations that have been examined.
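One implementation detail worth illustrating is the variable time step. A plausible reading of the 0.02 Å criterion is that the time step is chosen so that the fastest atom moves at most that distance per step; the sketch below encodes that assumption and is not necessarily the authors' exact integrator logic.

import numpy as np

def timestep_from_displacement(vel, dt_max=5e-15, dx_max=0.02):
    # Pick dt so that the fastest atom moves at most dx_max (Angstrom)
    # per step; vel is an (N, 3) array of velocities in Angstrom/s.
    vmax = np.linalg.norm(vel, axis=1).max()
    if vmax == 0.0:
        return dt_max
    return min(dt_max, dx_max / vmax)

# Example: thermal-scale velocities ~ 1e13 Angstrom/s give dt of order 1 fs.
vel = np.random.default_rng(3).normal(0, 1e13, (100, 3))
print(timestep_from_displacement(vel))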
Interface types A, B, and C have an fcc{100} ∥ bcc{110}, fcc⟨100⟩ ∥ bcc⟨011⟩ Bain orientation relation, and interface type D has an fcc{111} ∥ bcc{110}, fcc⟨112⟩ ∥ bcc⟨011⟩ orientation relation, which is close to the Kurdjumov-Sachs orientation relation. The difference between A, B, and C is the number of periodic directions. An important characteristic of the systems, also reported in Table II, is the volume-to-surface-area ratio. Figure 2 shows a typical starting configuration for interface type C.

The starting configurations were constructed by generating fcc and bcc crystals at their own equilibrium densities and bringing them together at a distance equal to the interplanar spacing. The lattice parameters of the fcc and bcc parts had to be slightly adapted (≈0.2%, with opposite signs for the two phases) to create a fit within the common periodic boundaries. Simulations with an explicit relaxation time period (realized by a very slow warm-up to the required temperature) showed no difference in behavior in comparison to simulations without this relaxation period. The thermostat quickly removes the excess energy of the atoms that have an unphysically strong interaction in the initial unrelaxed structure.

At any moment during the simulations, each atom is determined to be in an fcc or a bcc configuration by considering the locations of its nearest neighbors averaged over 1.4 ps, described in terms of angles of atom triplets. This procedure is based on rotationally invariant spherical harmonics as proposed in Ref. 15.

TABLE II. Overview of all interface types, temperatures, and system sizes that have been simulated, as well as the volume-to-surface-area ratio for each system.

A. Simulation system A

After a simulation time of 8.6 ns at T = 1520 K in system A, about 0.15 monolayers of the initially fcc-configured atoms have transformed to bcc, but after that the bcc phase does not grow. Even after 48 ns there is no significant increase in the fraction of bcc atoms. Simulations at different temperatures show the same behavior: a small increase in the number of bcc-configured atoms in the initial stage and no subsequent phase transformation.

Close examination of the atomic configurations at the interface shows that the structure of the fcc and bcc planes that make up the interface changes in a very brief time span, much shorter than the 8.6 ns mentioned before. A perfect bcc{110} plane in a system of these dimensions contains 252 atoms and a perfect fcc{100} plane contains 220 atoms. After 68 ps at 1520 K, the fcc plane has acquired seven atoms from the adjacent bcc plane. Consequently, both planes contain a relatively large amount of free space in comparison to a perfect bcc{110} plane. The free space in these interfacial planes is not present in the form of vacancies or dislocations but rather in the form of density inhomogeneities within the plane. The atomic structure in the two interfacial planes is partially disordered. Occasionally, vacancies do form and diffuse into the bcc crystal.
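The idea behind a rotationally invariant spherical-harmonics classification can be illustrated with Steinhardt-style bond-order parameters, which distinguish ideal fcc and bcc neighbour shells cleanly. This is a simplified stand-in for the triplet-angle procedure of Ref. 15, not a reproduction of it; note that recent scipy versions rename sph_harm to sph_harm_y.

import numpy as np
from scipy.special import sph_harm  # renamed sph_harm_y in newer scipy

def steinhardt_q(l, neighbor_vecs):
    # Rotationally invariant bond-order parameter q_l for one atom,
    # computed from the directions to its nearest neighbours.
    v = np.asarray(neighbor_vecs, float)
    r = np.linalg.norm(v, axis=1)
    theta = np.arctan2(v[:, 1], v[:, 0]) % (2 * np.pi)   # azimuthal angle
    phi = np.arccos(np.clip(v[:, 2] / r, -1.0, 1.0))     # polar angle
    s = 0.0
    for m in range(-l, l + 1):
        s += abs(sph_harm(m, l, theta, phi).mean()) ** 2
    return np.sqrt(4.0 * np.pi / (2 * l + 1) * s)

# Ideal first shells: bcc has 8 neighbours along <111>, fcc 12 along <110>.
bcc = [[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
fcc = ([[a, b, 0] for a in (-1, 1) for b in (-1, 1)]
       + [[a, 0, b] for a in (-1, 1) for b in (-1, 1)]
       + [[0, a, b] for a in (-1, 1) for b in (-1, 1)])
for name, shell in (("bcc", bcc), ("fcc", fcc)):
    print(name, round(steinhardt_q(4, shell), 3), round(steinhardt_q(6, shell), 3))
# q4 separates the two shells: ~0.04 for bcc versus ~0.19 for fcc.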
B. Simulation systems B, C, and D

Figures 3-8 show that for systems with at least one free boundary, the fcc phase transforms into bcc on a time scale of tens of picoseconds for the present system dimensions. After a certain incubation time, in which the transformation proceeds relatively slowly over a few monolayers at each interface, one or both of the interfaces start to move very rapidly up to complete transformation. Although the presence of a free boundary turns out to be essential for the transformation to take place, primarily because of the density difference of the phases, the transformation kinetics also depend on the type of interface, the volume-to-surface-area ratio, and the temperature. Figure 3 shows that the incubation time is longer for a larger volume-to-surface-area ratio. This can be explained by the larger absolute misfit that has to be accommodated at the free surface. The relative strain (0.2%) has no influence, since it is independent of this ratio.

All systems require a certain incubation time before the transformation starts. During this period the atomic structure at the interface changes, and when the interface motion starts, all interface types show a structure that appears to be of a universal character. Figure 9 shows an example. To enhance the level of detail, the scale in the y direction has been elongated by a factor of 7. The results for the different interfaces indicate that the interface type plays an important role in the temporal development of the interface structure. This type of interface structure, which is formed by gradual motion of the atoms during the incubation time, must necessarily be formed across the entire interface before the interface can start to move. The interface clearly shows a close resemblance to a network of screw dislocations. This is corroborated by the slip mechanism that is active during the transformation (see Fig. 10). Figures 4-7 suggest that the formation of the interface structure required for transformation is related to the driving force: the incubation time is found to be inversely proportional to ΔG, and the temperature dependence of the proportionality constant is given by an effective activation energy of 0.06 eV.

The deformation index [16] u_i for atom i is defined as

u_i = max_j |r_ij − r_ij^0|,

where j runs over all nearest neighbors of atom i in the initial atomic configuration, r_ij is the vector between atoms i and j in the final configuration, and r_ij^0 is the same vector in the initial configuration. The deformation index is therefore the maximum relative displacement of an atom with respect to its nearest neighbors. Most atoms in the system have a deformation index that is distinctly smaller than the interatomic distance; for these atoms the transformation takes place by means of small atomic displacements, a picture that is usually connected to a martensitic transformation. These displacements have been observed in the present study to be highly coordinated. In Fig. 10, the atoms are shown that have a larger deformation index, viz. between 2.4 and 2.5 Å, a distance close to the interatomic distance. From this figure it can be concluded that in addition to the small displacements for most atoms, atomic displacements on the order of an interatomic distance occur along certain planes. These planes can be recognized as {111} planes in the fcc structure and {110} planes in the bcc structure. Note that the same atoms are depicted in both frames of Fig. 10.
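In passing, the deformation-index bookkeeping defined above is straightforward to reproduce; a minimal sketch with toy coordinates (periodic images and time averaging are ignored here):

import numpy as np

def deformation_index(i, neighbors, pos0, pos1):
    # u_i = max_j |r_ij - r_ij^0| over the initial nearest neighbours j;
    # pos0/pos1 are the (N, 3) initial and final coordinates.
    u = 0.0
    for j in neighbors:
        d0 = pos0[j] - pos0[i]          # r_ij^0, initial configuration
        d1 = pos1[j] - pos1[i]          # r_ij, final configuration
        u = max(u, np.linalg.norm(d1 - d0))
    return u

# Two-atom toy system: atom 1 slips by about one interatomic distance.
pos0 = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0]])
pos1 = np.array([[0.0, 0.0, 0.0], [2.5, 2.5, 0.0]])
print(deformation_index(0, [1], pos0, pos1))  # 2.5, i.e. slip-plane scale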
Figure 10 therefore shows that the transformation is accompanied by dislocation glide, since each set of two parallel planes can be understood to consist of the atomic planes on either side of a slip plane. The principal reason for the dislocation glide is the stresses caused by the phase transformation in combination with the constraints on the system due to the periodic boundary conditions in the z direction. Figure 10 also shows that an orientation relation exists between the parent phase and the newly formed phase, according to which the closest packed planes in both structures are parallel (consistent with the Kurdjumov-Sachs and Nishiyama-Wasserman orientation relations, which were not imposed on the system by the initial configuration).

Figure 3 shows that the volume-to-surface-area ratio only has an influence on the incubation time, but not on the transformation rate. On the other hand, the temperature does influence the maximum interface velocity, as shown in Fig. 11 for system C. The linear decrease of the interface velocity with increasing temperature indicates that the velocity is not determined by a thermally activated process, but rather by the free-energy difference, which linearly decreases with increasing temperature (Fig. 1), acting as a driving force for the transformation.

IV. DISCUSSION

The fcc-bcc interfaces in simulation system A are not mobile. As mentioned in Sec. III A, in simulation system A an fcc{100} plane contains 220 atoms. If the fcc phase were to transform into the bcc phase by, for example, a Bain distortion of the lattice, the resulting bcc plane would again contain 220 atoms. The ratio of the width and height of the plane changes as this transformation takes place. However, if periodic boundaries are used, this change is not possible and any new bcc plane must take the shape and size (and therefore also the same number of atoms) of the bcc planes already present. Even when the more "flexible" Parrinello-Rahman [17] periodic boundary conditions are used, the already present bcc phase will prohibit the required shape change of the simulation volume. Because the periodic boundary conditions prohibit a transformation by a single collective motion of the atoms, the only alternative mechanism left is a diffusional transformation. That such a transformation does not take place must be ascribed to the density difference between the two phases. Each new bcc plane resulting from a diffusional transformation must contain 252 atoms. The extra 32 atoms can only be acquired by the formation of vacancies in other parts of the system. The formation of such an extraordinarily large concentration of vacancies during the present simulation times is extremely unlikely.

With the introduction of a free surface as in simulation systems B, C, and D, both (diffusional and martensitic) transformation mechanisms can be more easily established. The surface can readily accommodate the density difference and will also allow a shape change of the crystal volume.

Although the details of the transformation mechanism for the three systems with their different types of interface are different, they share many characteristics. Two of those characteristics also belong to typical martensitic transformations. The first is the very high interface velocity; the second is the coordinated, but very small, movement of the majority of atoms during the transformation. Besides these small atomic displacements, a slip mechanism occurs along closely packed planes in both crystalline structures (see Fig. 10).
The question whether or not a transformation will take place in a more realistic system, in which many 3D grains are present and open grain boundaries may act as a source for absorbing density differences, remains unanswered. Future work on this subject is planned.

The temperature and the volume-to-surface-area ratio have a large influence on the formation of the specific interface structure during the incubation time (Fig. 9). The scatter in the incubation times of replica runs makes it very difficult to find an accurate relationship between these two parameters and the formation of the interface structure. Nevertheless, the present simulations indicate that the temperature dependence of this process is primarily determined by the thermodynamic driving force ΔG.

Temperature does not only influence the incubation time, but also the maximum interface velocity. With increasing temperature the maximum interface velocity decreases considerably, although the transformation mechanism does not seem to change. The underlying reason for this behavior is again the free-energy difference ΔG, which decreases linearly with temperature (Fig. 1). It therefore appears that for this martensitic transformation, similar to the role of the driving force during interface-controlled diffusional transformations [1,18], the interface velocity is also proportional to the driving force. The proportionality constant, the interfacial mobility, assumes a very high value, i.e., approximately 0.3 mol·m/(J·s), as compared to mobilities on the order of 10^−7 mol·m/(J·s) found for diffusional austenite-to-ferrite transformations [19].

V. CONCLUSION

The fcc-bcc interface is immobile in a system with full periodic boundary conditions, but moves very rapidly in systems with at least one free boundary. For both the fcc{100} ∥ bcc{110} and the fcc{111} ∥ bcc{110} interface orientations, the same kind of transformation mechanism has been found. The following picture is obtained: during an incubation time, the duration of which depends on the temperature, a specific interface structure is formed. Once the required interface structure has been formed, the transformation proceeds with martensitic-like characteristics. The movement of the atoms during the transformation is highly coordinated, over a small distance. In addition, slip occurs along closely packed crystallographic planes.

The temperature dependence of the maximum interface velocity is related to the temperature dependence of the free-energy difference, which acts as the driving force for the transformation. The approximate proportionality between interface velocity and driving force indicates an interface mobility of ~0.3 mol·m/(J·s), which is several orders of magnitude larger than experimentally found for the diffusional austenite-to-ferrite transformation.

FIG. 3. Typical transformation curves for interface type B at T = 810 K, for different values of the volume-to-surface-area ratio.
FIG. 4. Typical transformation curve for interface type C at T = 304 K.
FIG. 9. Close-up of the structure at the interface, just before the interface motion starts, for interface type C at T = 810 K. The interface clearly shows a close resemblance to a network of screw dislocations. Dark atoms are bcc; lighter atoms are fcc. The y coordinates have been multiplied by a factor of 7. The x direction in the figure coincides with the fcc [100] direction, y with fcc [010], and z with fcc [001].
FIG. 10. The slip planes of the dislocations shown in the initial fcc structure (left) and in the final structure (right). Only the atoms with a deformation index between 2.4 and 2.5 Å are shown.
TABLE I. Input parameters for the iron EAM potential.
FIG. 1. The free-energy difference between fcc and bcc for the Johnson-Oh [10] iron potential at zero pressure.
5,059.6
2006-03-30T00:00:00.000
[ "Materials Science" ]
Effect of Chemical Composition (Cr/Ni) on the Hysteresis of 17-4PH Stainless Steel

Resistance strain gages were adopted to investigate the hysteresis properties of 17-4PH steel treated with different proportions of Cr/Ni under the same heat treatment process. The relationship between the mechanical properties, the microstructure, and the hysteresis of the material under different element proportions was established. The results show that the residual austenite content has an important effect on the hysteresis of the material, while the δ-ferrite content has little effect on the hysteresis of the material.

Introduction

The accuracy of force measurement is constantly being improved, and the material of the elastic components is the key factor determining the high accuracy and stability of a sensor; this requires elastomer materials with a high strength limit, a high yield limit, a stable elastic modulus, low hysteresis, and excellent fatigue and impact performance [1]. It is known that the hysteresis error is one of the most important characteristics of force transducers [2]. 17-4PH precipitation-hardened martensitic stainless steel could be an ideal candidate material because of its high mechanical properties and good corrosion resistance. The difference between the values obtained with increasing force and with decreasing force determines the relative hysteresis error, which can be calculated using Eq. (1):

v = (X' − X) / X_N × 100%,    (1)

where v is the relative hysteresis (reversibility) error of the force transducer, X' is the reading on the indicator with decreasing test force, X is the reading on the indicator with increasing test force, and X_N is the average reading on the indicator at the maximum test force [5]. A schematic representation of the hysteresis error is also given in figure 1. It has been determined that the performance characteristics of force transducers are mostly dictated by the heat treatment applied to the spring element; the hardest material seemed to exhibit the best hysteresis performance [3,4]. These studies focused on improving the hysteresis properties of materials by changing the heat treatment. The relationship between the hysteresis properties and the microstructure resulting from changes in chemical composition has not been discussed. This research studies the influence of changes in the microstructure of the material, obtained with different proportions of Cr/Ni, on the hysteresis under the same heat treatment process.
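Eq. (1) reduces to a one-line computation once paired loading/unloading readings are available. The sketch below applies it at each load step and reports the maximum over the steps; the mV/V readings are made-up demonstration values, not the measured data of this study.

# Relative hysteresis error from paired loading/unloading indicator readings.
def relative_hysteresis(loading, unloading, full_scale_avg):
    # Return max |X' - X| / X_N over the load steps, as a percentage.
    assert len(loading) == len(unloading)
    return max(abs(xu - xl) for xl, xu in zip(loading, unloading)) \
        / full_scale_avg * 100.0

loading   = [0.000, 0.250, 0.501, 0.752, 1.002, 1.251]   # mV/V, 0..6 kN steps
unloading = [0.001, 0.252, 0.504, 0.755, 1.004, 1.251]
x_n = 1.251                                              # average full-scale reading
print(f"{relative_hysteresis(loading, unloading, x_n):.3f} %")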
Materials and experimental procedures

The contents of Cr and Ni in the material were varied to obtain different material structures based on the chemical composition of 17-4PH. The specific material compositions are shown in Table 1. The ingots were forged to a size of Φ25 mm and then treated by solution annealing (1040 °C, 1.5 h, air cooling) and aging (480 °C, 4 h, furnace cooling). The residual austenite content was measured with a Bruker D8 ADVANCE X-ray diffractometer with a Co target. The scanning angle ranged from 45° to 115° at a scanning rate of 1°/min. The microstructure of selected areas was observed with a JEM-2100 transmission electron microscope (TEM). Nano-indentation experiments were performed on Agilent G200 equipment. The samples were etched to make the microstructure distinguishable in the nano-indentation testing equipment. In this paper, resistance strain gauges are used to measure the strain change of the elastomer, so the difference in the sensor signal between the unloading and loading processes is the hysteresis, expressed in mV/V. In sensor calibration, the hysteresis of a sensor is usually represented by the ratio between the hysteresis and the output of the sensor's full scale. In this test, φ5 mm tensile samples were used. The full scale of the sensor is 1500 με and the largest load is 6 kN. The FE-J10 standard force machine was used; its force accuracy tolerance is smaller than 0.02%. The ambient temperature was 24 °C. The step load was 1 kN during loading and unloading.

For elastomer materials, the yield strength determines the upper limit of the elastomer's purely elastic stage [6]. Therefore, this paper focuses on the change of the yield strength of the materials. With the increase of the Cr/Ni ratio, the yield strength of the material decreases first and then increases, as shown in figure 2. All four groups of materials contain a certain amount of residual austenite. The residual austenite content of the C# specimen is significantly higher than that of the other three specimens, as shown in figure 4. The relationship between the yield strength and the content of residual austenite at different Cr/Ni ratios is shown in figure 2.
The content of residual austenite is the key factor determining the yield strength. Although the δ-ferrite content in the D# material was the highest, the tensile and yield strength of the D# material decreased little, indicating that the δ-ferrite content is not the key factor determining the yield strength of the materials. Figure 5 presents TEM micrographs of the microstructure of the four specimens. No significant ε-copper could be observed in the A#, B#, and C# specimens, owing to the small size of the ε-copper under the current heat treatment process. δ-ferrite is observed in the D# sample, as shown in figure 5(d), and the ε-copper phase can be observed within the δ-ferrite, as shown in figure 5(e). Some free dislocation loops around the ε-copper particles can be observed, which will change into Orowan loops as the dislocations keep moving. The accumulated Orowan loops shorten the interparticle distance and increase the driving force for dislocations to cross the ε-copper particles. This is a typical strengthening mechanism in 17-4PH steel, known as the source-shortening effect [10]. The nano-hardness of the δ-ferrite, residual austenite, and martensite was tested on the C# and D# samples, respectively, and the results are shown in Table 3. The indentation morphology on the δ-ferrite and residual austenite is shown in figure 6. The nano-hardness of the δ-ferrite is higher than that of the residual austenite because of the strengthening effect of the ε-copper particles. The relative hysteresis error test results are shown in figure 7. It can be seen that the maximum hysteresis errors of the four groups of materials all appear within 20%~70% of the full-scale range. The maximum hysteresis error of the C# specimen is significantly higher than that of the other three groups.

The maximum hysteresis error of the B# specimen is the smallest. The maximum hysteresis error of the A# material is slightly lower than that of D#. The variation of the relative hysteresis error is consistent with the content of residual austenite in the four samples. The content of δ-ferrite has little effect on the relative hysteresis error.
Studies [7,8] show that dislocation motion exists when materials are loaded in the elastic stage, that the interaction between dislocations and point defects produces non-linear anelastic internal friction, and that internal friction at the grain boundaries also occurs when the material is stressed. For elastomer materials, the aim is always to reduce the internal friction of the material and thereby the hysteresis error, so it is necessary to inhibit the movement of dislocations at low strain levels while also reducing the internal friction at the grain boundaries. The interaction between dislocations and the copper-rich phase follows the Orowan mechanism and the shear mechanism [9], so increasing the dispersion of the copper-rich phase can effectively increase the resistance to dislocation motion and reduce the elastic hysteresis error of the material. The nano-hardness of the residual austenite is lower than that of the ferrite, indicating that the resistance to dislocation motion in the residual austenite is lower than in the ferrite phase. At the same stress level, the residual austenite offers low resistance and produces internal friction. This is the reason why the residual austenite content almost determines the relative hysteresis level. The dispersed ε-copper particles in the δ-ferrite phase strengthen the matrix markedly and hinder dislocation motion, so the content of the δ-ferrite phase has little influence on the hysteresis.

Conclusions

The conclusions of the study can be outlined as follows:

1. With the increase of the Cr/Ni ratio, the strength of the material decreases first and then increases. The correlation between yield strength and residual austenite content is strong, but not so with the ferrite content.

2. Microstructural changes play an important role in the hysteresis error of 17-4PH steel spring elements. Reducing the residual austenite content can effectively reduce the hysteresis error of the material, while the ferrite content has little effect on the hysteresis.

3. A Cr/Ni ratio of 0.49 is a good choice for designing the material, giving a relatively low residual austenite content and a small hysteresis error.

Figure 2. The yield strength and content of residual austenite at different Cr/Ni ratios.
Figure 7. Hysteresis test results of the spring elements.

Table 2 shows the mechanical properties of the materials with different Cr/Ni content ratios. The yield strength of all four groups of materials can reach 1200 MPa. The tensile strength and yield strength of the B# material are the highest, and those of the C# material are the lowest. Plasticity is not considered here. When the Cr content is constant and the Ni content is increased, the strength of the material decreases. When the Ni content is constant, the strength of the material decreases with the increase of the Cr content. In order to evaluate the effect of the relative contents of Cr and Ni on the microstructure and properties of the material, the ratio of the Cr and Ni contents (Cr/Ni) was used, as shown in figure 2.

Table 2. Test results of the mechanical properties of the materials.
2,077.6
2023-11-01T00:00:00.000
[ "Materials Science", "Physics" ]
Bioprocessing of Marine Chitinous Wastes for the Production of Bioactive Prodigiosin

Recently, microbial prodigiosin (PG) has received much attention due to its numerous beneficial applications. The aim of this study was to establish the bioprocessing of marine chitinous wastes (MCWs) for the cost-effective preparation of PG. Of the MCWs, demineralized shrimp shell powders (de-SSP) were found to be a potential source of carbon/nitrogen (C/N) for PG production by bacterial fermentation using Serratia marcescens strains. Further, PG scale-up production was investigated in a 15 L bioreactor system, and the highest yield (6200 mg/L) was achieved during fermentation using 5 L of a novel-designed culture broth that included 1.60% C/N sources (a de-SSP/casein ratio of 7/3), 0.02% K2SO4, and 0.05% K2HPO4, with an initial pH of 6-7. Fermentation was conducted in the dark at 27.5 °C for 8.0 h. This study was the first to report on the utilization of shrimp wastes for cost-effective, large-scale (5 L/pilot) PG production with high productivity (6200 mg/L) in a short cultivation time. The combination of 0.02% K2SO4 and 0.05% K2HPO4 was also found to be a novel salt composition that significantly enhanced PG yield. The red compound was purified and confirmed as PG after analyzing its HPLC profile, mass, and UV/vis spectra. The purified PG was then tested for its bioactivities and showed effective anticancer activities, moderate antioxidant activities, and novel anti-NO effects.

Introduction

Prodigiosin (PG) is a red pigment compound that belongs to the prodiginine family. PG is a metabolite of various bacteria, such as Serratia marcescens, Alteromonas rubra, Rugamonas rubra, Streptomyces coelicolor, Serratia rubidaea, Janthinobacterium lividum, Streptoverticillium rubrireticuli, etc. [1]. Of these bacteria, Serratia marcescens has most commonly been reported to be used for PG production [2]. The numerous beneficial bioactivities of PG and its applications have led to a dramatic increase in the investigation of PG biosynthesis, and many studies on PG production have been published. However, in almost all previous reports, commercial nutrient media were used as C/N sources for fermentation, such as tryptone soy, tryptone yeast, yeast malt, glycerol [10], yeast extract [11], nutrient broth [12], glycerol-tryptone [13], peptone-glycerol [14], Luria/Bertani broth [10], and 3-[N-morpholino]-ethanesulfonic acid [15]. Some nontraditional media, such as crude glycerol, peanut oil, sesame seed, corn steep, cassava, coconut oil, sesame oil, peanut seed, copra seed, and the complexes of mannitol/corn steep and mannitol/cassava, have been investigated for the lower-cost production of PG [6,16-20]. In this study, we report for the first time the reuse of shrimp wastes as the source of C/N for PG synthesis by bacterial fermentation. Shrimp shell is one of the most abundant marine chitinous materials, mainly obtained as a by-product of fishery processing. Shrimp shells and crab shells have been widely utilized for chitin and chitosan preparation via chemical processes [21-23]. However, chemical preparation causes environmental issues; thus, the use of microbial technology for the production of chitin and chitosan from chitinous waste is the current trend [24,25].
Through microbial conversion, various other bioactive materials, such as proteases, chitinases, chitosanases, and oligomers of chitin and chitosan, as well as antioxidant, anticancer, and antidiabetic agents, have been produced from shrimp shells [26-30]. In our previous report, we showed that chitin plays a key role in enhancing the PG yield via S. marcescens fermentation, and α-chitin was found to be a more effective PG-enhancing agent than β-chitin and other carbon sources [31]. Squid pen powder (containing β-chitin) and crab shell powder (containing α-chitin) have been extensively studied for PG production [2,32,33]. However, no study has reported the use of shrimp shells, the most plentiful chitinous waste containing α-chitin, for PG production via microbial fermentation. Thus, in this study, we established the reuse of this low-cost material for the biosynthesis of PG in flasks (on a small scale), and report PG production scale-up in a bioreactor system, its purification, and the evaluation of its biological activities.

Reclamation of Demineralized Shrimp Shell Powders (de-SSP) as a Potential Source for Effective Production of Prodigiosin via Fermentation

Various kinds of MCWs, including squid pen powder (SPP), shrimp head powder (SHP), fresh shrimp shell powder (fr-SSP), demineralized crab shell powder (de-CSP), and demineralized shrimp shell powder (de-SSP), were used for Serratia marcescens TNU02 fermentation and PG biosynthesis comparison. As shown in Figure 1, S. marcescens TNU02 produced PG at a high level on the first day of fermentation (2.62 mg/mL) in the medium containing SPP; however, the PG yield reached its highest value (3.98 mg/mL) on day two in the medium containing de-SSP. Thus, de-SSP was chosen as the low-cost material for all the following experiments. Notably, in this experiment, we found that the two shrimp shell materials gave quite different results. The fermented culture broth reached a high PG yield (3.98 mg/mL) using demineralized shrimp shell powder and a low PG yield (1.32 mg/mL) using fresh shrimp shells as the C/N source for fermentation. The mineral salt content in the shrimp shells was 14% (w/w), which was too high a concentration for microbes to grow properly, leading to a reduction in the production of microbial metabolites, including alpha-glucosidase inhibitors [34], and PG in this study. The results suggest that marine chitinous wastes should be preprocessed before being used for fermentation.

Figure 1. Bioproduction of PG by S. marcescens TNU02 by fermentation using various MCWs, such as squid pen powder (SPP), shrimp head powder (SHP), fresh shrimp shell powder (fr-SSP), demineralized crab shell powder (de-CSP), and demineralized shrimp shell powder (de-SSP), as major C/N sources with supplementary casein as a free protein at the ratio of 7.0/3.0. The C/N source (1.60%) was added to a liquid medium of 0.03% K2HPO4 and 0.05% CaSO4. The fermentation was performed for two days at 150 rpm (shaking speed) in the dark at 25 °C. The error bars in the figures are standard errors (SE).

PG production by fermentation has been reported in numerous studies; however, almost all of them used commercial nutrient media [10-15] or agricultural products [6,16-20] for fermentation. Different from previous studies, we used a designed medium containing the low-cost material de-SSP for the cost-effective biosynthesis of PG.
In addition, de-SSP represented a newly found potential source for cost-effective PG production by S. marcescens in this study.

Establishment of the Process for de-SSP Bioprocessing into PG by S. marcescens on a Small Scale

We investigated the effects of different S. marcescens strains (1), different free protein sources (2), salt composition (3), and various cultivation parameters (4) on PG production in small-scale fermentation (100 mL flask).

(1) PG production by different S. marcescens strains

Different bacterial strains of S. marcescens produce PG metabolites to different extents under the same conditions [31,33]. To determine the most active PG-producing strain for converting de-SSP into PG, a total of four strains of S. marcescens were examined for fermentation. The results (Table 1) showed that all the tested strains could produce high PG yields in the range of 3.562-4.015 mg/mL. Of these strains, S. marcescens TNU01 biosynthesized PG with a slightly higher yield than the other strains. Thus, this strain was chosen for further experiments.

(2) The influence of different free protein sources on PG production

In some previous reports [31,33], the addition of free protein sources to the culture medium significantly enhanced PG biosynthesis by S. marcescens. Thus, to evaluate the effect of free protein sources on PG production, five sources of free protein were added to a medium containing de-SSP and fermented for two days by S. marcescens TNU01. Of these tested proteins, casein was found to be the most suitable protein source. The medium supplemented with casein reached the highest PG yield of 3.991 mg/mL and was thus used in further experiments to assess the most suitable de-SSP/casein ratio. As shown in Figure 2b, the combinations of de-SSP and casein at ratios of 7/3, 6/4, and 5/5 gave a high PG yield of 4.05-4.18 mg/mL. To utilize shrimp wastes for cost-effective PG production, the de-SSP/casein ratio of 7/3 was chosen for further investigations. Casein alone and de-SSP alone were also fermented for comparison. As shown in Figure 2b, these two control media reached lower PG yields, of 2.35 and 0.91 mg/mL, respectively, than that (4.18 mg/mL) of the mixed medium (de-SSP/casein = 7/3).

Figure 2. The influence of protein sources (a) and the de-SSP/casein ratio (b) on PG productivity via fermentation by S. marcescens TNU01. Carbon/nitrogen sources in which different proteins were combined with de-SSP at a ratio of 3.0/7.0 (a), and in which de-SSP was combined with casein at various ratios ranging from 2/8 to 8/2 (b), were used at a concentration of 1.60% in a liquid medium containing 0.03% K2HPO4 and 0.05% CaSO4. Casein alone and de-SSP alone were also fermented for comparison. The fermentation was performed for two days with no light, at 150 rpm and 25 °C. The error bars in the figures are standard errors (SE). Demineralized shrimp shell powder (de-SSP) and protein (casein) were mixed at the ratio of 7/3 and used as the C/N source for fermentation. A 1.6% C/N source was added to a liquid medium containing 0.03% K2HPO4 and 0.05% CaSO4. The fermentation was performed for two days at 25 °C in the dark at a shaking speed of 150 rpm.

(3) The influence of salt composition on PG production

A suitable composition of phosphate and sulfate salts has been shown to enhance PG productivity by S. marcescens strains [31-33]. For more effective PG production in this study, various kinds of phosphate and sulfate salts were utilized for fermentation (Figure 3).
The results indicated that, compared to other phosphate salts, the addition of K2HPO4 to the culture medium resulted in the highest PG synthesis by the TNU01 strain (Figure 3a). A control group (no addition of phosphate salt) was also investigated for comparison, and this control group showed the lowest PG yield compared to all experimental groups. In the next experiment (Figure 3b), the addition of K2HPO4 at a concentration of 0.05% was found to be the most effective for PG production by S. marcescens TNU01. Thus, K2HPO4 salt at its optimal concentration of 0.05% was further combined with various kinds of sulfate salts to screen for the most effective sulfate salt for PG biosynthesis. Among the various examined sulfate salts, K2SO4 was found to be the best source of sulfate salt (Figure 3c) when added to the medium at a low concentration of 0.02% (Figure 3d). This was the first study to report the use of K2SO4 as a potential sulfate salt source for the significant enhancement of PG production by S. marcescens.

(4) The influence of cultivation parameters on PG production

To reach the highest PG yield produced by S. marcescens TNU01, several cultivation parameters, including the initial pH, fermentation temperature, volume of the liquid medium, and time course of fermentation, were examined (Figure 3e-h). S. marcescens TNU01 produced the highest PG yield at an initial pH of 6-7 (Figure 3e) and a culture temperature of 27.5 °C (Figure 3f). These results were similar to those reported in many previous studies [2,5,31,33,35-39]. These optimal factors (initial pH = 7 and culture temperature = 27.5 °C) were used in the next experiments to investigate the effect of the culture volume and the time course of fermentation. As shown in Figure 3g, PG was produced at a high yield when the culture liquid medium volumes were controlled in the range of 10-40 mL in a 100 mL flask. With a view to the large-scale harvesting of PG, the maximum culture medium volume of 40 mL in a 100 mL flask (culture medium/flask volume ratio = 4/10) was considered for further studies. This result was similar to some of our previous works [2,31,33]. The fermentation time needed to harvest the maximum PG yield was also examined. The result shown in Figure 3h indicates that S. marcescens TNU01 produced PG with the highest yield at a cultivation time of 1.5-2 days. Although some factors (initial pH, culture temperature, volume of medium) for PG production by S. marcescens TNU01 were similar to those reported in many previous studies, this bacterial strain could produce the maximum yield of PG in a shorter fermentation time (36 h).

Overall, S. marcescens TNU01 was found to produce the highest yield of PG when using the novel-designed medium containing a 1.60% source of C/N (de-SSP/casein = 7/3), 0.02% K2SO4, and 0.05% K2HPO4, together with fermentation parameters including an initial pH of 6-7, a medium culture volume of 40 mL, and a culture temperature of 27.5 °C for 1.5 days. Notably, the PG yield was significantly enhanced after optimization of the culture conditions, showing an approximately 1.5-fold increase (from 3.98 to 5.91 mg/mL). The fermentation conditions and PG yields before and after optimization are given in Table 2. To date, various methods have been used for PG biosynthesis, including immobilized cultures, batch fermentation, continuous fermentation, and fermentation in bioreactor systems [2,33,40,41].
Of these, bioreactor cultures are suitable for industrial fermentation and large-scale production of microbial metabolites such as PG. It has also been suggested that producing PG in a bioreactor system allows large-scale production with higher productivity in a shorter cultivation time [33]. Thus, in this study, we first investigated PG production in flasks (small scale) and then scaled up PG production using a 15 L bioreactor system.

Scale-Up of PG Production in a Bioreactor System and the Purification and Qualification of S. marcescens TNU01 PG

To scale up PG biosynthesis, a bioreactor system with a full volume of 15 L was used for fermentation; PG biosynthesis on a small scale in a 100 mL flask was also conducted for comparison. As shown in Figure 4, S. marcescens TNU01 produced the highest PG yield of 6200 mg/L after eight hours of fermentation, at which time the colour of the culture broth appeared the reddest (Figure S1).

To date, many studies have produced PG in high yield. However, in nearly all of these previous works, PG was produced on a small scale, in flasks, using commercial nutrients as the C/N source for fermentation [1,2]. To scale up PG production, bioreactor systems have been utilized in some previous studies [35-38]. As summarized in Table 3, bioreactors of many different sizes, such as 1.5, 5, 7, 10, 15, and 100 L, with actual working volumes of 0.935, 2.75, 6.5, 3, 4.5, and 50 L, respectively, have been examined for mass production of PG, with reported PG yields in the range of 521.64-872 mg/L [40-44]. However, all of these previous reports used commercial nutrients as the C/N source. In contrast, we established the bioconversion of MCWs for the production of PG and successfully produced PG on larger scales of 3 L [2] and 4.5 L [33], with high PG yields of 3450 and 5100 mg/L, respectively. In this study, a 15 L bioreactor system with a working volume of 5.0 L was used for fermentation, producing a high PG yield (6200 mg/L). Notably, the fermentation time needed to reach the highest PG yield in the bioreactor system in this study (8.0 h) was significantly shorter than in most previous reports (12-65 h).

The red compound was extracted and isolated from the fermented broth of the bioreactor system (Figure S2a) following the assay presented in detail in our previous study [31]. In brief, this process comprised several steps: separation of the liquid layers using ethyl acetate to obtain an extract rich in PG (Figure S2b), fractionation of this crude extract on a silica gel column (Figure S2c), and final separation and isolation of the red compound via thin-layer chromatography (Figure S2d). The red compound was confirmed as PG by analysis of its HPLC profile, mass, and UV/vis spectra. As shown in Figure S3, both the PG obtained in our earlier report [33] (Figure S3a) and the red compound purified in this study (Figure S3b) appeared as a single peak at approximately the same retention time, in the range of 12.283-12.40 min. In addition, this red compound had a molecular weight of 323.2063 g/mol (Figure S4) and a maximum UV/vis absorption at 535 nm (Figure S5), similar to the mass and optimal UV/vis absorption of PG [2,33,45]. Thus, the isolated red compound was confirmed as PG.
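A convenient way to compare these scale-up results is by volumetric productivity (yield per litre per hour). The sketch below performs this arithmetic for the figures quoted above; note that the fermentation times assumed for the two earlier MCW-based studies are placeholders, since they are not restated here.

```python
# Illustrative comparison of volumetric productivity (yield / time).
# Yields are quoted in the text; the times marked "assumed" are
# placeholders, not values reported here.
reports = [
    ("this study, 15 L bioreactor", 6200.0, 8.0),
    ("previous work, 3 L scale [2]", 3450.0, 48.0),    # time assumed
    ("previous work, 4.5 L scale [33]", 5100.0, 48.0),  # time assumed
]

for name, yield_mg_per_l, hours in reports:
    productivity = yield_mg_per_l / hours  # mg PG per litre per hour
    print(f"{name}: {productivity:.0f} mg/L/h")
```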
Evaluation of the Biological Effects of Prodigiosin

To date, this pigment compound has been shown to exert numerous biological activities [1]. Various studies have reported the potential inhibitory effect of PG against numerous cancer cell lines [1]. To confirm that the PG produced in this study was an active anticancer compound, its bioactivity against several cancer cell lines was tested. As shown in Table 4, PG exerted potent anticancer effects on a number of cell lines, including A549, MCF-7, WiDr, and HepG2, with low IC50 values of 0.07, 0.05, 0.22, and 0.06, respectively. The commercial anticancer compound mitomycin was also tested, and its IC50 values were 0.15, 0.11, 0.13, and 0.14, respectively. Thus, PG showed weaker inhibition against WiDr but much stronger inhibitory activity against A549, MCF-7, and HepG2 than mitomycin.

Antioxidants have been proven to protect DNA, proteins, and lipids from damage by free radicals, and antioxidant compounds may therefore help reduce and prevent a vast array of diseases [2,46]. In this study, DPPH and ABTS radical scavenging assays were used to detect antioxidant activity. For comparison, a standard antioxidant agent (α-tocopherol) was also tested under the same conditions. As shown in Table 5, α-tocopherol showed effective DPPH and ABTS radical scavenging, with very low IC50 values of 24.3 and 12.7115 µg/mL, respectively, while PG demonstrated moderate antioxidant effects, with IC50 values of 235 µg/mL for DPPH and 115 µg/mL for ABTS radical scavenging. Many studies have reported that PG shows DPPH radical scavenging activity, with potent maximum inhibition of 86% [47], 98% [2], 99% [48], and 96% [33]. PG also shows ABTS radical scavenging properties [33], with a maximum inhibition of 98.3% and a moderate IC50 value of 1.25 mg/mL. However, the ABTS radical scavenging activity of PG has rarely been reported [33]; thus, the data on ABTS radical scavenging capacity in this study add to the available antioxidant data on PG. The maximum inhibition (%) of the samples was determined at 8 mg/mL (PG) and 50 µg/mL (α-tocopherol).

The anti-nitric oxide (anti-NO) effect of PG was also examined; NO is an indicator of pro-inflammatory activity related to disorders such as rheumatoid arthritis, chronic hepatitis, and pulmonary fibrosis [30,49-51]. In this report, LPS-stimulated RAW264.7 cells were used to assess the anti-NO activity of PG, with homogentisic acid as the standard for the anti-NO assay. As shown in Figure 5, PG showed potent anti-NO properties, with a maximum inhibition of 91%, comparable to that of homogentisic acid (92%) at 80 µg/mL. To clarify the results, the anti-NO activity was also recorded as IC50 values: PG showed a low IC50 value of 19.1 µg/mL, comparable to that of homogentisic acid (IC50 = 15.9 µg/mL). The biological activities of PG, especially its anticancer activities, have been increasingly reported, but few data on its anti-NO activity are available; in a previous report, PG's inflammation-related activity was examined only via an in silico assay [52]. Thus, this study provides in vitro data supporting a novel potential anti-NO effect of PG on LPS-stimulated RAW264.7 cells.
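IC50 values such as those quoted above are typically read off a dose-response curve. The sketch below shows one minimal way to do this by linear interpolation; the concentration and % inhibition values in it are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Minimal sketch of reading an IC50 off a dose-response curve by linear
# interpolation. The concentrations and % inhibition values below are
# hypothetical placeholders, not data from the paper.
conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0])     # ug/mL (hypothetical)
inhibition = np.array([8.0, 21.0, 39.0, 55.0, 78.0])  # % (hypothetical)

# np.interp expects increasing x; here we interpolate concentration as a
# function of inhibition to find where inhibition crosses 50%.
ic50 = np.interp(50.0, inhibition, conc)
print(f"Estimated IC50 ~ {ic50:.1f} ug/mL")
```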
Materials

The S. marcescens strains were obtained from previous works: S. marcescens TKU011 [5], S. marcescens TNU01 and S. marcescens TNU02 [31], and S. marcescens CC17 [53]. The marine chitinous wastes (MCWs), including shrimp shells, shrimp heads, crab shells, and squid pens, were provided by Shin-Ma Frozen Food Co. (I-Lan, Taiwan), and demineralization of the MCWs was performed following the method detailed in an earlier study [54]. The cancer cell lines MCF-7, A549, HepG2, and WiDr were purchased from the Bioresources Collection and Research Centre (Hsinchu, Taiwan).

Study of the Bioproduction of Prodigiosin via Bacterial Fermentation

The Influence of Different S. marcescens Strains on PG Production

A total of four Serratia marcescens strains (TKU011, TNU01, TNU02, and CC17) were used for PG production. Demineralized shrimp shell powder (de-SSP) and protein (casein) were mixed at a ratio of 7/3 (de-SSP/casein) and used as the C/N source for fermentation. The C/N source (1.6%) was added to a liquid medium containing 0.03% K2HPO4 and 0.05% CaSO4. Fermentation was performed for two days at 25 °C in the dark at a shaking speed of 150 rpm (*).

The Effect of Different Free Protein Sources on PG Production

Different proteins (nutrient broth, casein, beef extract, peptone, and yeast extract) were combined with de-SSP at a ratio of 3/7 and used as the C/N source for fermentation. The C/N source (1.6%) was added to a liquid medium containing 0.03% K2HPO4 and 0.05% CaSO4, and fermentation was carried out as described above (*). Casein was identified as the most suitable free protein source and was therefore investigated further to find its most suitable proportion in the medium: de-SSP was combined with casein at ratios ranging from 2/8 to 8/2 and used at a concentration of 1.6% in a liquid medium containing 0.03% K2HPO4 and 0.05% CaSO4, with fermentation performed under the same conditions as above (*).

The Effect of Phosphate Type and Concentration on PG Production

Several types of phosphate, including Ca3(PO4)2, KH2PO4, K2HPO4, Na2HPO4, and NaH2PO4, were tested. The liquid medium contained a 1.6% C/N source, 0.03% phosphate salt, and 0.05% CaSO4, and fermentation was performed under the same conditions as above (*). K2HPO4 showed the greatest enhancing effect on PG yield; it was therefore added to the culture medium at concentrations of 0.01, 0.02, 0.03, 0.05, 0.1, and 0.2%, with fermentation performed as above (*), to determine the optimal K2HPO4 concentration.

The Effect of Sulfate Type and Concentration on PG Production

Various types of sulfate, including (NH4)2SO4, K2SO4, FeSO4, MgSO4, ZnSO4, and CaSO4, were examined for their effect on PG productivity. The culture broth contained a 1.6% C/N source, 0.03% K2HPO4, and 0.05% sulfate salt, and fermentation was performed under the same conditions as above (*). K2SO4 showed the greatest enhancing effect on PG yield; it was therefore added to the culture medium at concentrations of 0.01, 0.02, 0.03, 0.05, 0.1, and 0.2%, with fermentation performed as above (*), to determine the optimal K2SO4 concentration.

The Effect of Cultivation Parameters on PG Production

To achieve the highest PG yield, a liquid medium containing a 1.6% C/N source (de-SSP/casein = 7/3), 0.02% K2SO4, and 0.05% K2HPO4 was fermented by S. marcescens TNU01 under different conditions, in which several parameters, including the initial pH of the liquid medium (range 5-9.5), fermentation temperature (25, 27.5, 30, 32.5,
and 35 °C), volume of the liquid medium (10, 15, 20, 25, 30, 35, 40, 45, and 50 mL in a 100 mL flask), and time course of fermentation (0-4 days), were examined. All subsequent experiments were designed according to the optimal parameters obtained in the preceding experiments.

Scale-Up Production of PG in the Bioreactor System

A 15 L BioFlo/CelliGen 115 bioreactor system (Eppendorf North America, Enfield, CT, USA) was used for fermentation. The optimal culture conditions determined in all the experiments described above were applied to the mass production of PG in the bioreactor. A total of 500 mL of bacterial seed culture, grown in a 1 L flask at 27 °C for 1.5 days, was injected into the fermenter system containing 4.5 L of medium. This medium contained a 1.6% C/N source (de-SSP/casein = 7/3), 0.02% K2SO4, and 0.05% K2HPO4, with an initial pH of 6-7, and fermentation was conducted at 27.5 °C in the dark. Sampling and detection of the PG yield were performed every two hours (from 0 to 10 h of fermentation).

Qualification, Extraction, and Purification of Prodigiosin

PG was qualified according to the method described in a previous study [32]. Cultured broth (0.25 mL) was mixed with 2 mL of methanol and 0.25 mL of 2.0% AlK(SO4)2·12H2O, and the mixture was centrifuged at 1400× g for five minutes. The supernatant (0.5 mL) was added to a flask containing 4.50 mL of acidic methanol (adjusted with 0.5 HCl), and the optical density (OD535nm) of the final solution was measured. The PG compound isolated in our earlier work [33] was used to establish the standard equation for converting the OD535nm value into the PG concentration.

The extraction and purification of PG were carried out according to the method presented in a previous report [33]. The cultured broth from the bioreactor system under optimal conditions was centrifuged for 10 min at 10,000× g and the supernatant was collected. Ethyl acetate (EA) was added to the supernatant at an equal volume, and the mixture was kept for three hours in a glass funnel, with shaking every 30 min; the EA layer was then collected. The residual supernatant was mixed with EA twice more for complete extraction of PG. All the EA layers were combined, evaporated in a rotary evaporator (IKA, Germany) at 55 °C, and dried to a powder (crude PG). This crude pigment was separated on a column of silica gel (Geduran® Si 60, particle size 0.040-0.063 mm). The solvent system consisted of methanol and chloroform with a gradient from 0/10 to 2/8 (v/v) for the elution of PG, and the pigment was finally isolated by thin-layer chromatography developed with methanol/chloroform (1/9). The red band containing the target compound was cut into small pieces, and methanol was used to extract the red pigment. The methanol extract rich in PG was evaporated in a rotary evaporator (IKA, Germany) at 55 °C and dried to a powder (purified PG). This purified compound was used to assess the bioactivities of PG as well as to examine its mass, UV/vis spectra, and HPLC profile.
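The OD535-to-concentration step above relies on a standard equation established from purified PG. The sketch below illustrates one way such a conversion could be implemented; the calibration points and the dilution factor are hypothetical placeholders rather than the study's actual standard data.

```python
import numpy as np

# Sketch of the OD535-to-concentration step: a standard equation is first
# fitted from dilutions of previously purified PG, then applied to the
# OD535 of fermentation samples. Calibration points are hypothetical.
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])  # ug/mL (hypothetical)
std_od = np.array([0.05, 0.12, 0.24, 0.49, 0.97])      # OD535 (hypothetical)

slope, intercept = np.polyfit(std_od, std_conc, 1)      # linear fit

def od_to_pg(od535, dilution_factor=1.0):
    """Convert a measured OD535 to PG concentration (ug/mL)."""
    return (slope * od535 + intercept) * dilution_factor

# Example: a broth sample measured at OD535 = 0.42 after a 10x dilution
# (dilution factor is a hypothetical assumption).
print(f"PG ~ {od_to_pg(0.42, dilution_factor=10.0):.0f} ug/mL")
```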
Detection of the Bioactivities of Prodigiosin

The PG purified in this study was used to evaluate certain biological effects, including anticancer, antioxidant, and anti-NO activities. ABTS and DPPH assays were used for the detection of antioxidant activities according to previous studies [48,49], respectively. The anticancer properties were examined according to the method reported in our earlier work [55], and the anti-NO effect was tested according to the assay presented in our earlier report [50].

Conclusions

Prodigiosin (PG) has received much attention due to its numerous beneficial biological activities, and its potential applications have led to a dramatic increase in research on PG biosynthesis. However, in almost all previous reports, commercial nutrient media were used as the C/N source for fermentation, and PG was produced on a small scale in flasks. This study utilized shrimp wastes (low-cost materials) as the C/N source for PG production by bacterial fermentation and used a bioreactor system for scaled-up PG production. The PG yield produced by S. marcescens TNU01 in the bioreactor system in this work (6200 mg/L) was higher than those of some previous reports (521.64-872 mg/L). Notably, the fermentation time for maximum PG yield in this study (8.0 h) was also significantly shorter than in the reported studies (12-65 h). The red pigment purified from the culture medium was confirmed as PG by analysis of its HPLC profile, mass, and UV/vis spectra. The purified PG showed effective anticancer activities, moderate antioxidant activities, and anti-NO effects. PG has been reported to possess various biological effects, especially anticancer properties, but few data on its anti-NO activity are available. The results obtained in this study suggest that shrimp wastes can be used for the cost-effective mass production of bioactive PG.

Supplementary Materials: The production process in a 15 L bioreactor system, the extraction and purification, the HPLC profiles, the HREIMS, and the UV/vis spectrum of the purified prodigiosin (PG) produced via fermentation in this study are provided in the supplementary material (Figures S1-S5).
6,766.2
2021-05-24T00:00:00.000
[ "Environmental Science", "Chemistry", "Biology", "Materials Science" ]
Thermal Decomposition Studies of Layered Metal Hydroxynitrates (Metal: Cu, Zn, Cu/Co, and Zn/Co)

Layered metal hydroxynitrates and mixed metal hydroxynitrates (copper/cobalt hydroxynitrates and zinc/cobalt hydroxynitrates at different mole ratios) were synthesized by hydrolysis of urea and metal nitrates at 140 °C. Layered metal hydroxynitrates derive their structure from the mineral brucite and generally crystallize in hexagonal and monoclinic phases. Isothermal decomposition studies of Cu2(OH)3(NO3), Co2(OH)3(NO3), Cu1.5Co0.5(OH)3(NO3), Cu1.34Co0.66(OH)3(NO3), Zn5(OH)8(NO3)2(H2O)2, Zn3.75Co1.25(OH)8(NO3)2(H2O)2, and Zn3.35Co1.65(OH)8(NO3)2(H2O)2 samples were carried out at different temperature intervals, and the structural transformations during the process were monitored using powder X-ray diffractograms. A biphasic mixture of metal hydroxynitrate/metal oxide is observed in the case of the cobalt/zinc-based layered hydroxynitrates, while copper hydroxynitrate and copper/cobalt metal hydroxynitrates decompose in a single step. The decomposition temperatures of layered metal hydroxynitrates and mixed layered metal hydroxides depend on the method of preparation, their composition, the nature of the metal ion, and its coordination.

Introduction

Layered metal hydroxysalts (LHS) are used in dye adsorption, as catalysts and catalyst supports in organic reactions, and in fire retardants, polymer composites, electrodes, magnetism, electrochromic devices, and photocatalysts [1-5]. The structure of a layered metal hydroxysalt comprises hydroxyl-deficient, positively charged layers of composition [M(OH)2−x]x+, where M denotes the metal ion and x = 0 to 1 [6]. Anions (An−) are intercalated to compensate the charge, resulting in the formula [M(OH)2−x(An−)x/n] [7,8]. The value of x dictates the composition of the compound: at x = 0 we obtain the layered metal hydroxide [M(OH)2], and at x = 2 the metal nitrate MA2, the two end members. The structure of the metal hydroxide comprises hexagonally close-packed OH− ions in which the divalent metal ions occupy octahedral sites, resulting in the formation of charge-neutral layers; these layers stack on top of each other with an interlamellar spacing of 4.6 Å. Intermediate values of x dictate the composition of the layered metal hydroxysalt: x = 0.5 gives compositions such as M2(OH)3X, while M3(OH)4(NO3)2 (x ≈ 0.67) and M5(OH)8(NO3)2 (x = 0.4) are other common members; these compounds crystallize in hexagonal or pseudo-hexagonal symmetry [9-13].

Several methods have been reported for the preparation of layered hydroxysalts [9-14]. The precipitation method involves the addition of an alkali solution to a metal nitrate solution to attain pH 6.8-7.0. Aging of a metal oxide in an aqueous solution of a metal salt has also been adopted for the precipitation of layered hydroxysalts [3], and hydrolysis of metal nitrates using urea or hexamine precipitates metal hydroxysalts [15,16]. Thermodynamic and kinetic factors control the stoichiometry and composition of the product. The kinetics of the reaction can be controlled by adjusting the temperature, the concentration of the reagents, the pH, the hydrolysis rate, and the metal salt to hydrolyzing agent ratio.
Zinc, nickel, cobalt, and copper based layered hydroxysalts have been extensively studied. Cobalt and nickel based layered metal hydroxynitrates (CoHN/NiHN) crystallize in the hexagonal system and have been used as magnetic materials, while zinc hydroxynitrate is used as an anticorrosive agent and copper hydroxynitrate in vehicle air bags [17,18]. Copper hydroxynitrate (CuHN), with the composition Cu2(OH)3(NO3), crystallizes in orthorhombic and monoclinic crystal systems [19,20]. The crystal structure of monoclinic copper hydroxynitrate [Cu2(OH)3(NO3), or Cu(OH)1.5(NO3)0.5] comprises copper hydroxide layers in which some of the hydroxyl ions (x = 0.5) are replaced by nitrate ions directly coordinated to the sheets. Copper occupies two different distorted octahedral sites within the layer; the structure of copper hydroxynitrate, Cu2(OH)3NO3, is shown in Figure 1.

Divalent metal cations occupy octahedral or tetrahedral sites in the sheets of a layered hydroxynitrate. A classic example is zinc hydroxynitrate (ZnHN), with the composition Zn5(OH)8(NO3)2(H2O)2, which comprises hexagonally close-packed hydroxide ions in which three-fourths of the octahedral sites are occupied by divalent Zn2+ ions and one-fourth of the octahedral sites in the layer are vacant. For each vacant octahedral site in the sheet, two tetrahedral sites become available, one above and one below the layer. The tetrahedrally coordinated Zn2+ ions in the sheets of zinc hydroxynitrate are coordinated by three OH− ions of the layer and by a water molecule from the interlayer region, completing the tetrahedron. The nitrate ions are present in the interlamellar region and have no bonding or coordination with the ions in the lattice. The formula of zinc hydroxynitrate can thus be written Zn3(octa)Zn2(tetra)(OH)8(NO3)2(H2O)2 [21]. Figure 2 shows the crystal structure of zinc hydroxynitrate.
Extensive studies have been carried out on the decomposition of magnesium hydroxide, nickel hydroxide, and calcium hydroxide to obtain their respective metal oxides. There are also reports on the utilization of the thermal energy storage density of magnesium hydroxide and calcium hydroxide during dehydration-rehydration processes [22,23]. We have previously reported on the structural transformations during the thermal decomposition of nickel hydroxide to nickel oxide, depending on the conditions of synthesis and crystallinity [24,25]. Metal oxides have been extensively investigated in the last two decades in view of their applications as magnetic materials, electrochromic devices, gas sensors, high-temperature solar selective absorbers, and catalysts [1,26-29]. Information about the decomposition of layered metal hydroxysalts is important for preparing high-surface-area solids for adsorption, catalyst supports, and catalysts, and the decomposition of a layered hydroxysalt also yields a metal oxide. We have reported the thermal decomposition of cobalt and nickel based layered hydroxynitrates and their mechanism of structural transformation to metal oxides. Continuing our investigations of layered hydroxysalts, the present work concerns the synthesis and characterization of layered metal hydroxynitrates and layered mixed metal hydroxynitrates and their thermal evolution. To the best of our knowledge, there are no reports on these systems, and in this paper we report the thermal decomposition of metal hydroxynitrates (metal: Cu, Co, Zn, Cu/Co, and Zn/Co) and their phase evolution using powder X-ray diffraction analyses. This work describes the preparation and characterization of zinc/copper/cobalt hydroxynitrate samples; the term zinc/copper/cobalt as used in this paper covers not only mixed metal salts but also a single metal ion present in the lattice of the compound. Thermal decomposition studies of CoHN, CuHN, ZnHN, Cu1−xCoxHN (x = 0.25, 0.33, 0.5), and Zn1−xCoxHN (x = 0.25, 0.33, 0.5) and their structural transformations have been investigated.

Stoichiometric quantities of the divalent transition metal nitrates, M(NO3)2·6H2O (M = Cu/Co/Zn), were mixed with 2 g of urea and 2 mL of distilled water in a 100 mL beaker. The mixture was stirred well, and the beaker was covered with a watch glass and placed in a hot air oven maintained at 140 °C for 2 h. The mixture was stirred periodically and, after 2 h, cooled to room temperature (RT). The solid formed was washed with distilled water and dried at room temperature to constant weight. In separate experiments, mixed metal hydroxynitrates were prepared by the same procedure using mixed metal nitrates [Zn(NO3)2 or Cu(NO3)2 together with Co(NO3)2] in mole ratios of x = 0.25, 0.33, 0.5. Solid solution series of layered mixed metal hydroxynitrates (x = 0.25, 0.33, 0.5) were thus prepared, and the details are given in Table 1.

Thermal Decomposition Studies.
Thermal conversion of the layered metal hydroxynitrate and layered mixed metal hydroxynitrate samples to their respective oxides was carried out on the starting compounds listed in Table 1. In a typical procedure, approximately 100 mg of as-prepared layered metal hydroxynitrate/layered mixed metal hydroxynitrate was placed in an alumina boat, isothermally heated at different temperatures (100, 150, 200, 250, 300, 350, 400, and 500 °C) for 2 h, and cooled to room temperature. The decomposed products were collected for structural characterization using powder X-ray diffraction.

Characterization

All samples were characterized by powder X-ray diffraction (pXRD) using a Bruker D8 Advance diffractometer (CuKα source, λ = 1.5418 Å; 30 mA and 40 kV) in a 2θ range of 5° to 65°. Data were collected at a scan rate of 4° min−1 with 2θ steps of 0.05°. X-ray diffractograms of the compounds and of their decomposed products, including ZnO (hexagonal) and Zn0.85Co0.15O, were obtained under the different experimental conditions, and the decomposed products were indexed using powder diffraction data from the ICSD database.

Results and Discussion

Hydrolyzing agents provide control over the crystal growth [30]. Hence urea, (NH2)2CO, a nontoxic, cheap, stable, and water-soluble hydrolyzing agent, was used to control the precipitation of the layered metal hydroxysalts, giving products homogeneous in nature. Urea decomposes at 70 °C according to the reaction

(NH2)2CO → NH3 + HNCO

In acidic or neutral conditions, HNCO is converted to carbon dioxide and ammonia/ammonium:

HNCO + H+ + H2O → CO2 + NH4+

The consumption of H+ ions results in an increase in the solution pH. The precipitation of the layered metal hydroxynitrate then involves the reaction of hydroxyl ions with the metal nitrate solution.

In the pXRD patterns of CuHN, successive reflections, up to at least the third order, indicate the lamellar structure and could be indexed to the monoclinic crystal system, space group P21, using the JCPDS-ICDD 75-1779 file. The cell parameters of copper hydroxynitrate are a = 5.79319(79) Å, b = 5.94419(66) Å, c = 10.9092(18) Å, β = 96.6288(95)°, and V = 373.158(90) Å3, with an interlayer distance of 6.9 Å. The powder X-ray diffraction patterns of Cu2(OH)3(NO3), Cu3(OH)4(NO3)2(H2O)2, and Cu4(OH)6(NO3)2 were simulated using single crystal data obtained from the ICDD database. The simulated powder X-ray diffraction pattern of Cu2(OH)3(NO3) matches the observed X-ray diffraction pattern of copper hydroxynitrate (see Figure 4). Thermal decomposition of a layered hydroxynitrate proceeds by dehydration, denitration, and disruption of the layered framework under atmospheric conditions. The elimination of the hydroxyl and nitrate groups from the precursors has a great influence on the crystallinity of the metal oxides, and new intermediate phases may also form during the disruption of the original structure. Copper hydroxynitrate decomposes in one step, producing CuO according to the equation

Cu2(OH)3(NO3) → 2CuO + HNO3 + H2O

The theoretical weight loss for the reaction Cu2(OH)3(NO3) → 2CuO is 33.17%.
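The quoted theoretical weight loss and the position of the basal reflection both reduce to short calculations. The sketch below reproduces them using standard atomic masses and Bragg's law; it is illustrative rather than the authors' own code, and small differences from the quoted 33.17% may arise from the atomic masses used.

```python
import math

# Theoretical weight loss for Cu2(OH)3(NO3) -> 2 CuO, from standard
# atomic masses (rounding may differ slightly from the quoted value).
M = {"Cu": 63.546, "O": 15.999, "H": 1.008, "N": 14.007}

m_OH = M["O"] + M["H"]
m_NO3 = M["N"] + 3 * M["O"]

m_precursor = 2 * M["Cu"] + 3 * m_OH + m_NO3  # Cu2(OH)3(NO3)
m_product = 2 * (M["Cu"] + M["O"])            # 2 CuO

weight_loss = 100 * (m_precursor - m_product) / m_precursor
print(f"Theoretical weight loss Cu2(OH)3(NO3) -> 2CuO: {weight_loss:.1f}%")

# Bragg's law: expected 2-theta of the basal reflection for the 6.9 A
# interlayer distance with Cu K-alpha radiation (1.5418 A).
lam, d = 1.5418, 6.9
two_theta = 2 * math.degrees(math.asin(lam / (2 * d)))
print(f"Basal reflection expected near 2-theta = {two_theta:.1f} deg")
```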
Powder XRD was very important in identifying the phases formed; Figure 5 shows the data for copper hydroxynitrate heated to different temperatures. The XRD results show that up to 200 °C the samples do not undergo any structural change. The thermal decomposition of copper hydroxynitrate occurs in a single step, in which dehydroxylation and decomposition of the nitrate in the interlayer space take place simultaneously, leading to the destruction of the layered structure (250 °C) (see Figure 5). The total weight loss on isothermal heating of the copper hydroxynitrate sample is 31%. Copper oxide can exist in the monoclinic or cubic crystal system, and the XRD data are shown in Table 2; the decomposed product could be indexed to the monoclinic phase of copper oxide. Figure 6 shows the crystal structure of the monoclinic copper oxide formed on decomposition of copper hydroxynitrate.

Cobalt can substitute for copper in the lattice over a range of x, which results in the formation of a continuous solid solution series. Goldschmidt's rule states that a continuous solid solution cannot be obtained if the difference in ionic radii is greater than 15% of that of the smaller cation. The ionic radii of Co2+ are 0.745 Å for high spin and 0.65 Å for low spin complexes, and the ionic radius of Cu2+ is 0.73 Å at a coordination number of 6. Substitution of Co2+ at the Cu2+ sites of the layered metal hydroxynitrate is therefore permitted, as the difference in their ionic radii is <10%, within the limits set by Goldschmidt. Solid solution series of Cu/Co hydroxynitrates with the compositions Cu1.5Co0.5(OH)3(NO3) and Cu1.34Co0.66(OH)3(NO3) were accordingly prepared by the urea hydrolysis route, and their powder X-ray diffraction patterns are shown in Figure 3. We observe structural similarities among Cu2(OH)3(NO3), Cu1.5Co0.5(OH)3(NO3), and Cu1.34Co0.66(OH)3(NO3), as the peaks in their diffraction patterns appear at similar 2θ positions. However, Co2+ and Cu2+ have d7 and d9 electronic configurations, respectively; the Cu2+ ion is prone to Jahn-Teller distortion, which results in an irregular octahedral coordination environment around it. On substitution of Co2+ into the lattice, we observed a decrease in this irregularity in the structure of the cobalt-substituted copper hydroxynitrate. The pXRD patterns of Cu1.5Co0.5(OH)3(NO3) and Cu1.34Co0.66(OH)3(NO3) heated to different temperatures are shown in Figures 7 and 8, respectively. The mixed metal hydroxynitrates [Cu1.5Co0.5(OH)3(NO3) and Cu1.34Co0.66(OH)3(NO3)] undergo dehydration and denitration at lower temperatures than the pure zinc/copper/cobalt based hydroxynitrates. On substitution of cobalt into the structure of the basic copper hydroxynitrate, the decomposition temperature was found to be reduced by ~50 °C; this marked decrease accompanies the replacement of copper by cobalt in the cobalt-substituted copper hydroxynitrates.

Zinc Hydroxynitrate.
On isothermal decomposition of Zn5(OH)8(NO3)2(H2O)2, the total weight loss observed was 50%, while the theoretical weight loss expected was 40%, indicating a higher water content. Figure 12 shows the powder X-ray diffraction patterns of the Zn5(OH)8(NO3)2(H2O)2 samples obtained on isothermal heating at different temperature intervals. The powder X-ray diffraction pattern of zinc hydroxynitrate heated to 100 °C shows additional peaks indicating the evolution of a second zinc hydroxynitrate phase with composition Zn3(OH)4(NO3)2; the formation of Zn3(OH)4(NO3)2 could arise from dehydration of Zn5(OH)8(NO3)2(H2O)2. At 175 °C, the peaks due to Zn5(OH)8(NO3)2(H2O)2 and Zn3(OH)4(NO3)2 disappear and the zinc oxide peaks become prominent. Tables 4 and 5 show the XRD data of zinc oxide in the hexagonal and cubic systems, respectively; the experimental data of the decomposed product obtained from Zn5(OH)8(NO3)2(H2O)2 can be indexed to the hexagonal phase of ZnO. The decomposition of a layered metal hydroxynitrate absorbs heat, i.e., the process is endothermic; decomposition reactions of this type yield the metal oxide, with water vapour and nitrate species as by-products. Figure 13 shows the crystal structure of the zinc oxide formed on decomposition of zinc hydroxynitrate, Zn5(OH)8(NO3)2(H2O)2. Tables 6 and 7 show the expected lattice parameters of the different phases of the layered metal hydroxynitrates and mixed metal hydroxynitrates and of their respective decomposed products, and Table 8 shows the percentage weight losses of the layered metal hydroxynitrates and mixed metal hydroxynitrates.

Conclusions

The thermal decomposition of layered metal hydroxynitrates and of mixed metal hydroxynitrates (zinc/cobalt hydroxynitrates and copper/cobalt hydroxynitrates) was investigated. The mechanism of thermal decomposition of the layered metal/mixed metal hydroxynitrates was deduced, and the effect of substituting other metal cations into the structure of the layered hydroxynitrate was determined. The single metal hydroxynitrates have higher decomposition temperatures than the cobalt-substituted copper hydroxynitrates; the mixed metal hydroxynitrates were found to decompose at lower temperatures than the single metal hydroxynitrates studied. This work will serve as a basis for further studies of mixed metal hydroxynitrates for the preparation of novel catalyst precursors. The mechanism of decomposition provides insight into the nature of the phases formed at different temperatures, which can significantly affect catalytic activity, and these metal/mixed metal hydroxynitrates with high surface area may exhibit unusual and unexpected properties for device-based applications.

Table 1: Weights of the reagents (zinc nitrate, cobalt nitrate, copper nitrate, urea, and volume of water in cm3) used to prepare the layered metal hydroxynitrate samples.

Figure 4: Simulated powder X-ray diffraction patterns of different polymorphic modifications of copper hydroxynitrate.

Figure 13: Schematic representation of the structural transformation of the zinc hydroxynitrate solid solution to ZnO on isothermal heating (175 °C).

Table 6: Lattice parameters of the decomposed products obtained from the layered hydroxynitrate samples (expected).

Table 7: Lattice parameters of different polymorphic modifications of the layered hydroxynitrate samples.
3,945.2
2015-01-27T00:00:00.000
[ "Materials Science", "Chemistry" ]
Cost-minimization analysis of adjuvant chemotherapy regimens given to patients with colorectal cancer in Japan

Background: Consideration of medical costs, as well as effectiveness and adverse events, is rapidly becoming an important factor in the selection of chemotherapy regimens. However, practical data on the costs of chemotherapy are scarce. We estimated the medical costs of 6 adjuvant chemotherapy regimens for colorectal cancer on the basis of clinical and cost-related data and compared their cost-effectiveness by cost-minimization analyses.

Methods: All patients who received adjuvant chemotherapy for colorectal cancer between April 2012 and May 2015 at four hospitals affiliated with Showa University were studied retrospectively. Clinical and cost data related to adjuvant chemotherapy were collected from medical records and medical fee receipt data, respectively. Six adjuvant chemotherapy regimens were studied: capecitabine and oxaliplatin (CapeOX); 5-fluorouracil (5-FU), ℓ-leucovorin (LV), and oxaliplatin (modified FOLFOX6 [mFOLFOX6]); 5-FU and LV (5-FU/LV); tegafur and uracil (UFT) and LV (UFT/LV); capecitabine; and tegafur, gimeracil, and oteracil (S-1). The regimens were divided into 2 groups according to whether or not they contained oxaliplatin, because of the difference in effectiveness. Cost-minimization analyses, in which the relative costs of regimens showing equivalent effectiveness are simply compared, were performed to evaluate the cost-effectiveness of the regimens in each group.

Results: A total of 154 patients with colorectal cancer received adjuvant chemotherapy during the study period: 57 patients were treated with CapeOX, 10 with mFOLFOX6, 38 with UFT/LV, 20 with capecitabine, and 29 with S-1; no patient received 5-FU/LV. The total costs of the oxaliplatin-containing regimens were significantly higher than those of the oxaliplatin non-containing regimens. The high cost of oxaliplatin itself, rather than the costs of drugs or tests for the treatment of adverse events, was the primary reason for the higher costs of the oxaliplatin-containing regimens. The cost-effectiveness of the oxaliplatin-containing regimens CapeOX and mFOLFOX6 was comparable. Among the oxaliplatin non-containing regimens, the cost-effectiveness of S-1 and capecitabine was superior to that of UFT/LV.

Conclusion: We provide cost-effectiveness data for 5 adjuvant chemotherapy regimens for colorectal cancer based on practical clinical and cost data from Japanese patients. Because these results reflect the real world, they can be included as a factor in regimen selection.

Trial registration: This study is a retrospective observational study and does not include any health care interventions; therefore, the protocol of this study was not registered.

Background

Cancer therapy has evolved rapidly over the past two decades, contributing to improvements in the survival and quality of life of cancer patients. However, the costs of cancer therapy have also increased rapidly in parallel with this progress [1]. A previous study reported that 30.6% or more of patients with cancer complain about the rising costs of cancer therapy [2], and another found that the frequency of bankruptcy was 2.65-fold higher among patients with cancer than among those without the disease [3]. Many highly effective anticancer drugs have recently been developed and are now used in clinical practice; however, the costs of these drugs are generally high.
For example, the cost of one intravenous dose of the cytotoxic anticancer drug oxaliplatin is higher than 80,000 yen (800 US dollars, assuming that 100 yen is equivalent to 1 dollar) when the drug is given to a Japanese patient with an average body surface area (BSA) of 1.69 m² [4]. Among molecularly targeted drugs, the cost of one dose of bevacizumab or cetuximab is higher than 100,000 yen (1000 dollars), and in the case of the immune checkpoint inhibitor nivolumab, which was launched very recently, the cost of a single intravenous dose exceeds 1,000,000 yen (10,000 dollars). Given this remarkable increase in the costs of anticancer drug therapy, oncologists can no longer ignore costs or maintain that costs have no place in medical decision making [5]. It has therefore been widely recommended that the costs of cancer chemotherapy be considered, in addition to effectiveness and adverse events, when selecting treatment regimens [5,6]. However, cost data on cancer medications in Japan are extremely limited; patients and oncologists generally choose treatment regimens on the basis of effectiveness and adverse events alone, without considering costs.

Several economic studies have examined the cost-effectiveness of adjuvant chemotherapy for colorectal cancer in Japan [15-17]. The clinical data used in these studies were derived from international phase 3 trials and were not based on clinical practice: the cost of a drug or a test was calculated by multiplying the pre-determined numbers of drug doses or tests by their respective unit prices. These methods have the advantage that the cost calculation is straightforward and simple. However, the costs of adjuvant chemotherapy thus obtained might differ from those obtained using real-world patient data, because patients' backgrounds differ between international phase 3 trials and clinical practice. In clinical practice, subpopulations of patients with advanced age, comorbidities, organ dysfunction, or lower performance status, who generally cannot participate in international phase 3 trials, also receive adjuvant chemotherapy. Given that patients who receive adjuvant chemotherapy in clinical practice might receive a lower dose intensity and suffer more severe adverse events than patients enrolled in international phase 3 trials, considerable differences in medical costs from the phase 3-based approach are plausible. When selecting regimens for patients in clinical practice, the use of medical costs that reflect the actual situation is desirable. Against this background, we calculated the total costs of 6 regimens of adjuvant chemotherapy for colorectal cancer using data from Japanese patients treated in clinical practice and, on the basis of the costs thus obtained, compared the cost-effectiveness of these regimens.

Selection of patients

All patients who received either CapeOX, mFOLFOX6, 5-FU/LV, UFT/LV, capecitabine, or S-1 at the aforementioned hospitals and completed all scheduled cycles were studied. Patients were required to have undergone potentially curative resection for colorectal cancer before receiving adjuvant chemotherapy.

Chemotherapeutic regimens

CapeOX consisted of a 2-h intravenous infusion of oxaliplatin (130 mg/m²) on day 1 and oral capecitabine (1000 mg/m²) twice daily on days 1 to 14, repeated every 3 weeks for 8 cycles [8].
mFOLFOX6 consisted of LV (200 mg/m²) given as a 2-h infusion and oxaliplatin (85 mg/m²) given as a 2-h infusion, followed by a bolus infusion of 5-FU (400 mg/m²) and a 46-h continuous infusion of 5-FU (2400 mg/m²); this regimen was repeated every 2 weeks for 12 cycles [10]. Brand-name oxaliplatin was used in CapeOX and mFOLFOX6. 5-FU/LV comprised a 2-h infusion of LV (250 mg/m²) and a bolus infusion of 5-FU (500 mg/m²) given 1 h after the start of the LV infusion, repeated weekly for 6 weeks followed by a 2-week rest; this regimen was given for 3 cycles [11]. UFT/LV consisted of oral UFT (300 mg/m²) and LV (75 mg/patient) given 3 times daily on days 1 to 28 followed by a 7-day rest, repeated for 5 cycles [12]. Capecitabine was given orally at a dose of 1250 mg/m² twice daily on days 1 to 14, followed by a 7-day rest, repeated for 8 cycles [13]. S-1 was administered orally twice daily for 28 consecutive days, followed by a 2-week rest. S-1 was given at a fixed dose determined by the patient's BSA, according to the dose recommendations of the manufacturer's package insert in Japan: 80 mg/day for patients with a BSA of less than 1.25 m², 100 mg/day for those with a BSA of 1.25 to 1.5 m², and 120 mg/day for those with a BSA of more than 1.5 m². This regimen was given for 4 cycles [14].

Data collection

Patient background data, such as age and disease stage, as well as data recorded during adjuvant chemotherapy, including laboratory tests, prescribed drugs, and adverse events, were collected from the patients' medical records. Cost data related to adjuvant chemotherapy were extracted from medical fee receipt data. Costs for outpatient visits, laboratory tests, imaging tests for tumor diagnosis, and prescription fees for administered drugs were collected. The cost of each administered drug was calculated by multiplying the prescribed drug dose by its unit price according to the Japanese National Health Insurance fee-for-service system in 2014. The sum of these costs was defined as the total cost. Because all hospitals of Showa University have adopted the diagnosis procedure combination (DPC) system [18], hospitalization costs were constant regardless of the number of drugs administered and laboratory tests performed. When the total hospitalization cost calculated under the DPC included the cost of drugs related to adjuvant chemotherapy, the drug costs were calculated by the method described above (prescribed drug dose × unit price), and the hospitalization cost was obtained by subtracting the cost of the chemotherapy-related drugs from the DPC-based hospitalization cost. This analysis was performed from the perspective of the health care payer. All costs are given in Japanese yen and US dollars, assuming that 1 US dollar is equivalent to 100 Japanese yen.

Cost-minimization analyses

Cost-minimization analysis is one method of evaluating the cost-effectiveness of therapeutic options [19], in which the relative costs of options showing equivalent outcomes are simply compared. We performed cost-minimization analyses for the oxaliplatin-containing regimens (CapeOX and mFOLFOX6) and for the oxaliplatin non-containing regimens (5-FU/LV, UFT/LV, capecitabine, and S-1) for the following reasons: 1) Because there was no direct comparison between CapeOX and mFOLFOX6, we compared the effectiveness of these regimens on the basis of the following considerations.
As demonstrated by 2 international phase 3 trials, 16968 [8] and MOSAIC [9], the effectiveness of CapeOX and FOLFOX4 was significantly superior to that of 5-FU/LV and LV5FU2, respectively (Table 1 and Fig. 1a). Because the effectiveness of LV5FU2 and 5-FU/LV [20,21], and that of FOLFOX4 and mFOLFOX6 [10], were comparable (Table 1), the 3-year disease-free survival (DFS) rates of CapeOX and mFOLFOX6 were considered comparable and approximately 5% higher than that of 5-FU/LV. 2) Two international phase 3 trials, NSABP C-06 [12] and X-ACT [13] (Table 1), showed that UFT/LV and capecitabine were noninferior to 5-FU/LV in terms of 5-year overall survival (OS). In addition, the ACTS-CC phase 3 trial demonstrated that S-1 was noninferior to UFT/LV with respect to the 3-year DFS rate [14] (Table 1 and Fig. 1a). On the basis of these results, we assumed that the effectiveness of these 3 regimens was comparable and nearly equivalent to that of 5-FU/LV.

Statistical analyses

Differences in quantitative variables, including cost data, were tested using the nonparametric Wilcoxon rank-sum test. Differences in qualitative variables were tested using the χ² test. Two-tailed P values of less than 0.05 were considered to indicate statistical significance. All analyses were carried out with JMP version 12.0 software (SAS Institute, Cary, NC).

Patient characteristics

From April 2012 through May 2015, a total of 154 patients with colorectal cancer received adjuvant chemotherapy at hospitals affiliated with Showa University. Fifty-seven patients were treated with CapeOX, 10 with mFOLFOX6, 38 with UFT/LV, 20 with capecitabine, and 29 with S-1 (Table 2); no patient was given 5-FU/LV during the study period. The distributions of gender, age, site of cancer, and performance status were similar among the 5 regimens. The stage of cancer differed significantly among the regimens (P < 0.001): the proportions of patients with stage III disease were higher for CapeOX and mFOLFOX6 than for UFT/LV, capecitabine, and S-1.

The total costs of the oxaliplatin-containing regimens were significantly higher than those of the oxaliplatin non-containing regimens. Among the oxaliplatin non-containing regimens, the total cost of UFT/LV was significantly higher than that of capecitabine (P < 0.001), and the cost of capecitabine was significantly higher than that of S-1 (P = 0.003).

Factors causing the higher costs of oxaliplatin-containing regimens

To identify the causes of the higher total costs of the oxaliplatin-containing regimens, the breakdown of the costs of each regimen was calculated (Fig. 2). The cost of oxaliplatin in CapeOX was about 1,150,000 yen (11,500 dollars), equivalent to approximately 60% of the total cost. In the case of mFOLFOX6, the cost of oxaliplatin was about 900,000 yen (9000 dollars), equivalent to approximately 40% of the total cost. The total cost of mFOLFOX6 also included hospitalization costs (400,000 yen [4000 dollars]), such as the fee required to place a central venous port for the administration of 5-FU, LV, and oxaliplatin. Thus, the hospitalization costs required for mFOLFOX6 raised the total cost of this regimen to a level comparable to that of CapeOX. The costs of drugs for supportive care required to administer CapeOX and mFOLFOX6 were approximately equivalent to 10% of the total costs; the breakdown of the costs of supportive care drugs is shown in Fig. 3.
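The drug-cost arithmetic described in Methods (prescribed dose × unit price, with BSA-based or BSA-banded dosing) can be made concrete with a short sketch. The unit price used below is a hypothetical placeholder, not a value from the 2014 NHI fee schedule.

```python
# Illustrative sketch of the per-dose cost arithmetic from Methods
# (prescribed dose x unit price). The unit price is a hypothetical
# placeholder, not an actual fee-schedule value.

def oxaliplatin_dose_mg(bsa_m2, mg_per_m2=130.0):
    """BSA-based dose, e.g. 130 mg/m2 on day 1 for CapeOX."""
    return mg_per_m2 * bsa_m2

def s1_daily_dose_mg(bsa_m2):
    """Fixed S-1 daily dose banded by BSA, per the package-insert rule."""
    if bsa_m2 < 1.25:
        return 80
    elif bsa_m2 <= 1.5:
        return 100
    else:
        return 120

bsa = 1.69                  # average Japanese BSA quoted in Background
price_per_mg = 390.0        # yen/mg, hypothetical placeholder

dose = oxaliplatin_dose_mg(bsa)   # ~220 mg per cycle
cost = dose * price_per_mg        # prescribed dose x unit price
print(f"Oxaliplatin {dose:.0f} mg/cycle, ~{cost:,.0f} yen")
print(f"S-1 daily dose for BSA {bsa}: {s1_daily_dose_mg(bsa)} mg/day")
```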
The costs of the drugs prescribed to treat peripheral sensory neuropathy, which is frequently associated with oxaliplatin-containing chemotherapy, were approximately 7500 yen (75 dollars) for CapeOX and 4300 yen (43 dollars) for mFOLFOX6, comprising only 0.4% and 0.2% of the total costs of CapeOX and mFOLFOX6, respectively. We considered the possibility that a lower frequency of peripheral sensory neuropathy in the present study than in previous studies led to the lower cost of prescriptions for this adverse event. The frequency of peripheral sensory neuropathy with CapeOX in the present study was indeed lower than in a previous study (Table 3); however, in the case of mFOLFOX6, the frequency and grade of peripheral sensory neuropathy in the present study were not necessarily lower than in previous studies (Table 3). On the other hand, the costs of antiemetics were approximately 118,000 yen (1180 dollars) for CapeOX and 116,000 yen (1160 dollars) for mFOLFOX6, accounting for about 6% of the total costs. Antiemetics such as aprepitant, azasetron, domperidone, granisetron, metoclopramide, ondansetron, palonosetron, prochlorperazine, and ramosetron were prescribed in the CapeOX and mFOLFOX6 regimens; the percentages of patients who used palonosetron and aprepitant were 100% and 26% in CapeOX, and 60% and 40% in mFOLFOX6, respectively.

Cost-minimization analyses

Because the effectiveness (see the Methods section and Fig. 1a) and the total costs (Fig. 1b) of CapeOX and mFOLFOX6 were comparable, the cost-effectiveness of these regimens was judged to be similar (Table 4). As described in the Methods section and Fig. 1a, the effectiveness of the oxaliplatin non-containing regimens was comparable. Therefore, on the basis of the total costs of these regimens (Fig. 1b), the cost-effectiveness of S-1 was superior to that of UFT/LV, and the cost-effectiveness of capecitabine was superior to that of UFT/LV; the inferiority of UFT/LV was caused by the high cost of LV.

Discussion

The present study compared the cost-effectiveness of 5 regimens of adjuvant chemotherapy given to patients with colorectal cancer. The total costs were calculated using clinical and cost data obtained from Japanese patients who received each regimen of adjuvant chemotherapy in clinical practice. This is in contrast to most previous studies assessing the costs of adjuvant chemotherapy for colorectal cancer in Japan, which based the costs of treatment on clinical data obtained from large phase 3 clinical trials [15-17]. To date, three cost-effectiveness studies employing clinical data from phase 3 clinical trials have been performed: Hisashige et al. [15] analyzed the cost-effectiveness of UFT by comparing clinical and cost data between patients who received or did not receive UFT in the NSAS CC trial [22]; in other Japanese studies, the cost-effectiveness of 5-FU/LV and capecitabine [16] was evaluated using clinical data from the X-ACT trial [13], and that of 5-FU/LV and FOLFOX4 [17] was evaluated using data from the MOSAIC trial [9]. (Table 3 notes: the grade of neuropathy was evaluated according to the Common Terminology Criteria for Adverse Events version 3.0; a, data from reference [8]; b, result for FOLFOX4 [9]; the effectiveness and safety of mFOLFOX6 were comparable to those of FOLFOX4 [10].) We compared the costs in the following 3 categories between the present study and the previous studies based on large international phase 3 trials: 1) anticancer drugs, 2) drugs used for supportive care, and 3) laboratory tests.
1) The cost of capecitabine estimated in the previous study [16] was higher than that estimated by us (about 420,500 yen [4205 dollars]). The reason for the higher cost of capecitabine in the previous study is considered to be the difference in the relative dose intensity (RDI) of capecitabine between the two studies: the previous study used a theoretical RDI of 100.0%, whereas our study used the clinically observed RDI of 75.4%. The cost of capecitabine estimated by Shiroiwa et al. [16] would have been about 407,200 yen (4072 dollars) if an RDI of 75.4% had been adopted, which is nearly comparable to our estimated cost. 2) The costs of agents prescribed for supportive care in the previous studies of UFT and capecitabine [15,16] were about 300 yen (3 dollars) and 7000 yen (70 dollars), respectively, whereas those in the present study were about 8400 yen (84 dollars) for UFT/LV and about 17,500 yen (175 dollars) for capecitabine, demonstrating clearly higher supportive care costs in our study. The first explanation we considered for the higher supportive care costs in our study was a higher incidence of adverse events in the present study than in the previous studies. However, the incidence of bilirubin increase in the NSAS CC trial was 60.0% [22], as compared with 10.5% in the present study, and the incidence of hand-foot syndrome associated with capecitabine regimens was 60.0% in the X-ACT trial [13] and 30.0% in our study. Thus, the incidences of adverse events were not necessarily higher in our study than in the previous phase 3 trials; as shown in Fig. 3, the supportive care costs instead reflect the drugs actually prescribed in practice. 3) The cost of laboratory tests in the previous study [17] was lower than that in our present study (about 106,500 yen [1065 dollars]). These findings indicate that the costs of 1) anticancer drugs, 2) drugs prescribed for supportive care, and 3) laboratory tests calculated on the basis of clinical data from phase 3 trials differ from those calculated on the basis of data from actual clinical practice. Because costs calculated from patient data in clinical practice precisely represent the actual situation, the cost-effectiveness data thus obtained can be used for regimen selection.

In Japan, a public health insurance system covering the entire nation has been adopted. Patients pay a share of their medical costs according to their age and income; the cost borne by the patient ranges from 10.0 to 30.0% of the total medical costs. In addition, the patient's financial burden is kept below specified limits under the high-cost medical care benefit system, with the limits determined by the patient's income. When this system is applied, the cost of adjuvant chemotherapy actually paid by the patient can be lower. Data from Showa University Hospital indicate that, when public health insurance was applied to a patient, the cost of the oxaliplatin-containing regimens was approximately 550,000 yen (5500 dollars) and that of UFT/LV was 263,000 yen (2630 dollars), a difference of 287,000 yen (2870 dollars). However, when the specified limits were applied, the cost of the oxaliplatin-containing regimens was approximately 448,000 yen (4480 dollars) and that of UFT/LV approximately 262,000 yen (2620 dollars), a difference of 186,000 yen (1860 dollars). Thus, the specified limits might lower the medical costs of oxaliplatin-containing regimens to a greater extent than those of UFT/LV, although the specified-limit system is not applicable to all patients, because its application depends on the income of each patient.
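The interplay of the co-payment rate and the high-cost benefit cap described above can be sketched as follows; the total costs and the cap value used here are hypothetical placeholders, since the actual limits depend on each patient's income.

```python
# Minimal sketch of the patient's out-of-pocket burden described above:
# a co-payment rate (10-30% of total costs), optionally capped by the
# high-cost medical care benefit. All yen values are hypothetical.

def out_of_pocket(total_cost_yen, copay_rate=0.30, monthly_cap_yen=None):
    burden = total_cost_yen * copay_rate
    if monthly_cap_yen is not None:
        burden = min(burden, monthly_cap_yen)
    return burden

total_oxaliplatin_regimen = 1_850_000  # yen, hypothetical total cost
total_uft_lv = 880_000                 # yen, hypothetical total cost

for label, total in [("oxaliplatin-containing", total_oxaliplatin_regimen),
                     ("UFT/LV", total_uft_lv)]:
    plain = out_of_pocket(total)
    capped = out_of_pocket(total, monthly_cap_yen=450_000)  # cap assumed
    print(f"{label}: copay {plain:,.0f} yen; with cap {capped:,.0f} yen")
```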
It is plausible that patients who derive an economic benefit in this way tend to select oxaliplatin-containing regimens over other regimens. Medical costs are subsidized by taxes from Japanese citizens; to keep patients' financial burdens below the specified limits, Japanese citizens must pay higher taxes. This is an important issue for health care payers to discuss.

An analysis of patient characteristics showed that the stage of cancer differed significantly among the regimens (Table 2). However, the total costs of the CapeOX, UFT/LV, and S-1 regimens did not differ significantly between stage II and stage III (P = 0.668, P = 0.711, and P = 0.743, respectively). Therefore, there might be no relation between the stage of cancer and total costs.

Our study had several limitations. 1) Direct comparisons of effectiveness are not available for some of the regimens; for example, no phase 3 trials have compared effectiveness between CapeOX and mFOLFOX6 or between UFT/LV and capecitabine. We therefore compared the effectiveness of CapeOX and mFOLFOX6 by indirect comparison of independent phase 3 trials (see the Methods section). 2) The phase 3 trials that we referred to when comparing the effectiveness of the regimens were not necessarily performed in Japan. Theoretically, the effectiveness of the regimens should have been compared on the basis of data from phase 3 trials performed in Japan; however, we used data from clinical trials performed in white populations because suitable Japanese trials were unavailable. It is well known that the survival advantage of a specific regimen in Japanese trials is generally better than that in clinical trials performed in other countries; for example, trials conducted only in Japanese patients tend to show better 3-year DFS rates and 5-year OS rates than those performed in white populations [23]. One reason is thought to be the higher quality of surgery in Japan; for example, the extent of lymph-node resection during cancer surgery is greater in Japan than in other countries. 3) Some of the phase 3 trials that we referred to when comparing the effectiveness of the regimens included patients with stage III disease, whereas others included patients with stage II and stage III disease; the effectiveness reported in these trials might be affected by this difference in the stages of the patients enrolled. Taken together, our comparisons of the effectiveness of the different regimens might have been biased by such factors.

Conclusions

The costs of the oxaliplatin-containing regimens were significantly higher than those of the oxaliplatin non-containing regimens, but the cost-effectiveness of the oxaliplatin-containing regimens CapeOX and mFOLFOX6 was judged to be comparable. Among the oxaliplatin non-containing regimens, the cost-effectiveness of S-1 and capecitabine was superior to that of UFT/LV. Costs based on clinical data from phase 3 trials were shown to differ from costs based on data from actual clinical practice. Because costs based on patient data in clinical practice more precisely represent the actual situation, the resulting cost-effectiveness data can be used for regimen selection.
5,195.4
2016-11-09T00:00:00.000
[ "Economics", "Medicine" ]
Review on Impedance Detection of Cellular Responses in Micro/nano Environment In general, cell culture-based assays, that is, investigations of cell number, viability, and metabolic activities during culture periods, are commonly performed to study cellular responses under the various culture conditions explored. Quantification of cell numbers provides information on cell proliferation. Cell viability studies reveal the percentage of cell death under a specific tested substance. Monitoring of metabolic activities is an important index for the study of cell physiology. Building on the development of microfluidic technology, microfluidic systems incorporating impedance measurement techniques have been reported as a new analytical approach for cell culture-based assays. The aim of this article is to review recent developments in the impedance detection of cellular responses in the micro/nano environment. These techniques provide an effective and efficient approach for cell culture-based assays. Introduction Cell culture, in which cells are cultured as a monolayer on the surface of a cell culture vessel (e.g., a Petri dish or multi-well microplate), is widely used in life science research for the investigation of cellular behavior. It has the advantage of simplicity in terms of operations and observations. In general cell culture-based assays, monitoring of cell number, viability, and metabolic activity is commonly performed to provide information on cellular responses under the specific culture condition studied. Conventionally, counting cells microscopically, quantifying indicative cellular components (e.g., DNA), live/dead fluorescent dye staining, and analysis of indicative metabolites synthesized by the cultured cells are adopted. These analytical methods have become standard protocols for cell culture-based assays. However, these approaches are normally labor-intensive and time-consuming, limiting the throughput of cell culture-based assay work such as drug screening or toxin testing. In addition, analysis of the indicative cellular components and fluorescent dye staining normally require sacrificing the cultured cells, and thus hamper observation of the subsequent cellular responses. Therefore, alternative analytical methods are urgently needed to achieve both effective and efficient detection.
In the past decade, microfluidic systems, also called "lab-on-chip (LOC)", "bio-chip", or "micro-total-analysis-system (μTAS)", have attracted attention because of their capability of combining engineering and life science [1-3]. A microfluidic system is therefore often interpreted as a miniaturized and automated version of a conventional laboratory. Owing to their miniaturization and automation, microfluidic systems offer a number of advantages, such as less sample/reagent consumption, reduced risk of contamination, lower cost per analysis, lower power consumption, enhanced sensitivity and specificity, and higher reliability. Microfluidic systems have been developed for various biological analytical applications, such as DNA analysis [4-8], immunoassay [9-13], and cell analysis [14-18]. Moreover, a number of demonstrations showed that cell culture can be performed on microfluidic systems to achieve higher throughput and more reliable results [19,20]. For example, a microfluidic device for culturing cells inside an array of microchambers with continuous perfusion of medium was reported to provide cost-effective and automated cell culture [21]. Each circular microchamber was 40 μm in height and surrounded by multiple narrow perfusion channels 2 μm in height. The high aspect ratio between the microchamber and the perfusion channels offered a stable and homogeneous microenvironment for cell growth. Human carcinoma (HeLa) cells were cultured in a 10 × 10 microfluidic cell culture array and were able to grow to confluency after eight days. Moreover, a fully automated cell culture screening system was developed and demonstrated to maintain cell viability for weeks [22]. Individual culture conditions in 96 independent culture chambers could be customized in terms of cell seeding density, composition of culture medium, and feeding schedule. Each chamber was imaged with time-lapse microscopy to perform quantitative measurements of the influence of transient stimulation schedules on cellular activities. In these excellent demonstrations, optical imaging was utilized to quantify cellular activities. However, this measurement technique is time-consuming and may introduce large tolerances. Alternatively, impedance measurement was proposed as one of the promising techniques to quantify cellular responses during culture on microfluidic systems. The detection results are represented by electrical signals, which can easily interface with miniaturized devices. Typically, a pair of electrodes acting as an electrical transducer is utilized to measure the impedance change caused by the presence of biological substances. The literature has demonstrated the use of a similar principle for the detection of various biological substances such as enzymes [23], antibodies and antigens [10,24-26], DNA [27,28], and cells [17,29-33]. This technique provides a non-invasive and label-free measurement, and has been found to be practically useful for the detection of substances in miniaturized analytical devices like microfluidic systems.
The aim of this article is to review recent developments in the impedance detection of cellular responses in the micro/nano environment. Cell number and cell viability are important characteristics during cell culture, and can be monitored by various impedance measurement techniques. Moreover, as a microfluidic system is an integrated system serving multiple purposes, monitoring of the metabolic activities of cells under stimulation is also significant for cell culture-based studies. A literature review and in-depth discussion of impedance measurement will be presented. Microfluidic systems incorporating the impedance measurement technique provide an effective and efficient tool for cell culture-based assays. Electrical Equivalent Circuit Generally, an electrical equivalent circuit is used to fit the experimental data and explain the characteristics of the impedance detection system. A number of electrical equivalent circuits have been proposed to describe cellular detection [34]. For easier understanding, a simplified electrical equivalent circuit and its impedance spectrum were reported and are shown in Figure 1 [31]. It is generally suggested that two identical double-layer capacitances at each electrode (C_dl) are connected to the medium resistance (R_sol) in series, and the dielectric capacitance of the medium (C_di) is introduced in parallel with these series elements. In the equivalent circuit, there are thus two parallel branches, namely C_di and C_dl + R_sol + C_dl. The impedance of each branch can be expressed as Z_1 = 2/(jωC_dl) + R_sol for the series branch and Z_2 = 1/(jωC_di) for the dielectric branch. At frequencies below 1 MHz, C_di is inactive and is modeled as an open circuit. Current cannot pass through the branch of the dielectric capacitance, and the total impedance is expressed as Z_1. Both C_dl and R_sol are included in this frequency region, and they dominate at different frequencies, as shown in the impedance spectrum. In the low frequency range, the spectrum shows capacitive characteristics, contributed by C_dl; the impedance decreases with increasing frequency. Above a certain frequency (depending on the electrode dimensions, and on the conductivity and permittivity of the medium), C_dl offers negligible impedance. The total impedance is then contributed by R_sol and is frequency-independent (resistive characteristics). When cells are present in the system, the electrically insulating cell membranes influence C_dl, as biological cells are very poor conductors at frequencies below 10 kHz [32]. The conductivity of the cell membrane is around 10^−7 S/m, whereas the conductivity of the interior of a cell can be as high as 1 S/m [35]. Therefore, cell proliferation can be estimated from the total impedance in the low frequency region.
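To make the two-branch picture concrete, the short Python sketch below evaluates Z_1, Z_2, and their parallel combination over a frequency sweep. The component values are illustrative assumptions chosen only to reproduce the qualitative shape of the spectrum in Figure 1; they are not fitted parameters from any cited work.

```python
# Equivalent circuit of Figure 1: two double-layer capacitances C_dl in series
# with the solution resistance R_sol, all in parallel with the dielectric
# capacitance C_di. Component values are assumed, for illustration only.
import numpy as np

C_dl = 1e-7    # double-layer capacitance per electrode [F] (assumed)
R_sol = 1e4    # solution resistance [ohm] (assumed)
C_di = 1e-11   # dielectric capacitance of the medium [F] (assumed)

f = np.logspace(0, 7, 8)            # 1 Hz .. 10 MHz
w = 2 * np.pi * f
Z1 = 2.0 / (1j * w * C_dl) + R_sol  # series branch: C_dl + R_sol + C_dl
Z2 = 1.0 / (1j * w * C_di)          # dielectric branch
Z = Z1 * Z2 / (Z1 + Z2)             # parallel combination

for fi, zi in zip(f, Z):
    print(f"{fi:12.1f} Hz   |Z| = {abs(zi):.3e} ohm")
```

Running this prints magnitudes that fall as 1/f at low frequency (double-layer dominated), flatten near R_sol in the mid band, and fall again at high frequency once the dielectric branch conducts, matching the qualitative shape of the spectrum described above.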
Detection of Cells Adhered on the Electrode Surface If cells adhere and proliferate on the surface of the measurement electrodes, the electrode surface area is effectively reduced and the total impedance across the electrodes is hence increased, allowing detection of the presence of cells. Most impedance biosensors are based on this principle. Pioneering work on cellular monitoring with an applied electric field was reported in 1984 [36]. Later, impedance measurement of cell concentration, growth, and the physiological state of cells was demonstrated [32]. An interdigitated electrode was utilized to demonstrate on-line and real-time cellular monitoring. Long-term cellular behavior was clearly shown by the impedance change of the electrodes. This detection principle was also applied to detect Salmonella typhimurium in milk samples [31]. An interdigitated microelectrode was utilized as an impedance sensor to measure the bacterial growth curve at four frequencies (10 Hz, 100 Hz, 1 kHz, and 10 kHz). An illustration of the experimental setup is shown in Figure 2. The most significant change in impedance was observed at 10 Hz. The biosensor could detect bacterial concentrations of 10^5-10^6 CFU/mL. Moreover, in order to detect cells specifically, antibodies are utilized to capture cells and provide selectivity to the sensor. Microelectrode array biosensors with surface functionalization were reported for the detection of Escherichia coli O157:H7 [37] and Legionella pneumophila [17]. The sensor surface was functionalized for bacterial detection using immobilized antibodies to create a biological sensing surface. The bacteria suspended in liquid samples were captured on the sensor surface and the impedance change was measured over a frequency range of 100 Hz-10 MHz. The sensors were able to determine cellular concentrations of 10^4-10^7 CFU/mL and 10^5-10^8 CFU/mL, respectively. Another approach was to use magnetic nanoparticle-antibody conjugates (MNAC) to capture the specific cells. A microfluidic flow cell with an embedded gold interdigitated array microelectrode was developed for rapid detection of Escherichia coli O157:H7 in ground beef samples [38]. MNAC were used to separate and concentrate the target bacteria from the samples. The cells of E. coli O157:H7 inoculated in a food sample were first captured by the MNAC, separated and concentrated by applying a magnetic field, washed and suspended in solution, injected through the microfluidic flow cell, and attracted by a magnetic field onto the active layer for impedance measurement. This impedance biosensor was able to detect as few as 1.6 × 10^2 and 1.2 × 10^3 cells of E. coli O157:H7 present in pure culture and ground beef samples, respectively.
Detection of Suspended Cells When cells are suspended in a liquid buffer, impedance measurement can also be used to determine the cell number in the buffer. However, the impedance spectroscopic responses depend strongly on the conductivity of the buffer used in the system. The detection of Salmonella cell suspensions was demonstrated in deionized (DI) water and phosphate buffered saline (PBS), respectively [39]. It was shown that bacterial cell suspensions in DI water with different concentrations can produce different electrical impedance spectral responses, whereas cell suspensions in PBS cannot. The impedance spectra are shown in Figure 3. It was reported that the impedance of the cell suspensions in DI water decreased with increasing cell concentration. It was suggested that the cell wall charges and the release of ions or other osmolytes from the cells caused the proportional impedance change. Monitoring of Cellular Viability Cell death leads to the release of cells from the surface of the measurement electrode, which decreases the impedance measured across the electrodes. Real-time evaluation of targeted tumor cells treated with a combination of a targeted toxin and particular plant glycosides was demonstrated [40]. HeLa cells were seeded onto an interdigitated electrode and treated with the targeted toxin. The impedance was directly correlated with cell viability and was able to trace the temporal changes of cell death during treatment. The above demonstration utilized a two-electrode system (i.e., an interdigitated electrode) for the measurement. A three-electrode system was also demonstrated for monitoring cell growth under treatment with potentially cytotoxic agents [41]. It has the advantage of better reproducibility than traditional two-electrode impedance measurement. The cell chip consisted of an eight-well cell culture chamber incorporating a three-electrode system in each well, as shown in Figure 4. Human hepatocellular carcinoma cells (HepG2) were cultured in the chamber and toxic effects on the HepG2 cells were monitored. The impedance decreased after treatment with several toxicants, such as tamoxifen and menadione, indicating the detachment of dead cells. Moreover, a 10 × 10 micro-electrode array was used to monitor the culture behavior of mammalian cancer cells and evaluate the chemosensitivity of anti-cancer drugs using impedance spectroscopy [42]. Human oesophageal cancer cells were cultured on the surface of the electrodes and then treated with an anti-cancer drug. Morphological changes during cell adhesion, spreading, and proliferation, as well as chemosensitivity effects on the cells, could be monitored by impedimetric analysis in a real-time and non-invasive way. Recently, commercial cell analyzers have become available to monitor cellular responses. Although they are not designed for the microfluidic environment, impedance measurement has proved to be a promising tool for cellular analyses. Real-time detection of cell death in a neuronal cell line of immortalized hippocampal neurons (HT-22 cells), neuronal progenitor cells (NPC), and differentiated primary cortical neurons was demonstrated using such a system [43]. A schematic overview of the measurement principle is shown in Figure 5. These excellent demonstrations showed that impedance measurement is a convenient and reliable technique for real-time monitoring of cellular responses.
Monitoring of the Metabolic Activity of Cells Monitoring of the metabolic activity during cell culture is very important for the study of cell physiology. A microfluidic chamber was reported to enable the real-time measurement of the extracellular lactate of a single heart cell under simultaneous electrical stimulation [44]. This device comprises one pair of pacing microelectrodes, used for field-stimulation of the cell, and three other microelectrodes configured as an electrochemical lactate micro-biosensor. A single heart cell was stimulated at pre-determined rates and its metabolic condition was explored under the "working" situation. Moreover, monitoring of the cell medium by comparing glucose and oxygen levels before and after contact with cells was demonstrated [45]. Two arrays of glucose and oxygen electrochemical sensors were fabricated at the inlet and outlet microchannels of the microfluidic cell culture chip, as shown in Figure 6. Real-time monitoring of glucose and oxygen was shown and the chip was utilized to study transient effluxes of these species during cell culture. Cell Monitoring from 2D to 3D Cell Culture Format Impedimetric cell monitoring in the 2D cell culture format in microfluidic systems has been discussed above and shown to be an effective and efficient technique for cell culture-based assays. 2D cell culture is widely adopted because of its simplicity in terms of operations and observations of cellular behavior. More recently, the 3D culture format was proposed to provide a better approximation of in vivo conditions in some cases [46,47]. In three-dimensional cell culture, cells are encapsulated in a 3D polymeric scaffold material that can mimic the native cellular microenvironment, since animal cells inhabit environments with strongly 3D features [46]. This might provide a more physiologically meaningful culture condition for cell-based assays. However, since the cells are encapsulated in the scaffold, direct observation of cellular behavior cannot be practically performed. Destructive methods, such as detection of indicative cellular components and fluorescent dye staining, are commonly used for the cell analysis. Alternatively, the impedance measurement technique was reported to provide a real-time and non-invasive way to monitor cellular responses in the 3D scaffold [33]. A microfluidic chip integrated with a pair of vertical electrodes in the 3D culture chamber was developed for quantifying the cell number in the 3D scaffold. The impedance change was directly proportional to the cell number from 10^3 to 10^7 cells/mL in the 3D scaffold. This demonstration showed that impedance measurement can be extended to monitor cellular responses from the 2D to the 3D cell culture format. It is expected that more demonstrations of real-time and non-invasive cellular monitoring will be reported.
Conclusions With the rapid development of the impedance measurement technique, commercial cell analyzers have been launched recently to provide convenient and reliable equipment for life science research and pharmaceutical development. In this article, the impedance detection of cellular responses in the micro/nano environment has been discussed. Microfluidic systems incorporating the impedance measurement technique provide non-invasive and label-free monitoring of cellular responses in 2D and 3D culture formats. More importantly, these systems are miniaturized and automated. A sterile and homogeneous microenvironment for cell culture can be created for precise monitoring. It is believed that more cell culture-based assays will be reported using microfluidic cell culture systems. Figure 1. (a) Electrical equivalent circuit of the impedance measurement system with an interdigitated electrode. (b) Typical impedance spectrum. C_dl is the double-layer capacitance at each electrode. R_sol is the resistance of the medium. C_di is the dielectric capacitance of the medium. (Copyright 2004. Reprinted from [22] with permission from Elsevier.) Figure 2. Experimental setup of the impedance measurement with the interdigitated electrodes for the detection of cells. (Copyright 2004. Reprinted from [22] with permission from Elsevier.) Figure 3. Impedance spectra of Salmonella suspensions in (A) DI water and (B) PBS with cell concentrations in the range of 10^4 to 10^9 CFU/mL, along with water and PBS as controls. Frequency range: 1 Hz-100 kHz. Amplitude: ±50 mV. (Copyright 2008. Reprinted from [27] with permission from Elsevier.) Figure 5. Schematic overview of the measurement principle of cellular impedance. (A) Each well of the culture dish features a bottom with embedded gold electrodes. The electrode array has a minimal distance of 30 μm between the electrodes. The right picture shows an upright view of the electrode array. (B) Cells were seeded on top of the electrode-covered surface of the culture dish. After attaching to the bottom of the well, the cells partially insulate the electrodes, causing a rise in impedance. With increasing cell density, the cells have a greater overall insulating capacity, showing in a further increase in impedance. Inflicting cellular damage and cell death causes changes in membrane morphology, cellular shrinkage, and detachment, resulting in a decrease of the cellular impedance. (Copyright 2012. Reprinted from [31] with permission from Elsevier.) Figure 6. (a) Cross-section and (b) general schematic view of the developed biochip composed of two arrays of glucose and oxygen electrochemical microsensors integrated at the inlet and outlet microchannels of a PDMS microfluidic chamber. (Copyright 2008. Reprinted from [33] with permission from Elsevier.)
3,970.4
2014-01-07T00:00:00.000
[ "Biology", "Engineering" ]
GNSS Spoofing Detection Based on Signal Power Measurements: Statistical Analysis A threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with code phase and Doppler values randomized over a small range. Such spoofing signals can result in large navigation solution errors that are passed on to the unsuspecting user with potentially dire consequences. An effective spoofing detection technique is developed in this paper, based on signal power measurements, that can be readily applied to present consumer-grade GNSS receivers with minimal firmware changes. An extensive statistical analysis is carried out based on formulating a multihypothesis detection problem. Expressions are developed to devise the set of thresholds required for signal detection and identification. The detection processing methods developed are further manipulated to exploit incidental antenna motion, arising from user interaction with a GNSS handheld receiver, to further enhance the detection performance of the proposed algorithm. The statistical analysis supports the effectiveness of the proposed spoofing detection technique under various multipath conditions. Introduction The received GNSS signal power at the output of a 3 dB gain hemispherical linearly polarized antenna at ground level is approximately −130 dBm [1]. This makes GNSS receivers susceptible to nearby noise jammers and standoff spoofers (SS) that can easily transmit power levels well above −130 dBm. A high processing gain based on a long integration time is often the only option available to overcome a noise jammer. Nevertheless, if the GNSS receiver undergoes random motion, the channel decorrelates quickly, such that attaining such large processing gains to overcome jamming is neither feasible nor desirable from an operational perspective. Also, a jammer is relatively easy to locate with radio direction finding, and to potentially disable, as its spectral power stands out significantly above the ambient noise [2,3]. In addition, the noise jammer is at least detectable, as the spectral power in the affected GNSS receiver band will be abnormally high. Hence the jammer can deny service, but the user is aware of being jammed, limiting the damage potential of the jammer. A more insidious threat is the standoff spoofer that broadcasts a set of replicas of the authentic satellite vehicle (SV) signals visible to the mobile GNSS receiver [2]. Disruption of GNSS services is achieved by randomly modulating the code phase over a small region of the overall Code Delay Space (CDS) that is commensurate with a target area. The spoofing attack is assumed to happen during the acquisition stage. Therefore, it is not possible to identify the SS signal based on the code phase as corresponding to an outlier navigation solution. The SS is assumed to remain synchronized with currently visible GNSS signals and then transmit a set of signals that would correspond to the typical GNSS signals observable by a receiver in the target area. Note that an effective SS does not necessarily synthesize a specific counterfeit location for a specific GNSS receiver, but rather aims to disrupt GNSS services over a general target area by matching the Doppler offset of the replicated SV signals and adjusting the code phase such that it is commensurate with the intended target region. Hence the GNSS receiver cannot easily detect the contribution of these counterfeit signals as obvious outliers. An unaware receiver computes the navigation solution based on the SS-generated counterfeit
signals, which are passed on to the user as being reliable, with potentially damaging consequences. GNSS receivers tethered to a wireless data service provider will typically provide the user with an aided-GNSS (AGNSS) service, significantly reducing the CDS to one corresponding to a physical area of several square kilometres [4]. Hence there is a diminishing gain for the spoofer in attempting to affect a target area larger than this, and the counterfeit SS navigation solutions will be construed as plausible. As such, receiver-autonomous integrity monitoring (RAIM) and fault detection and exclusion (FDE) are ineffective in discriminating signals sourced from the envisioned SS [5]. The typical handheld consumer GNSS receiver coherently integrates the signal for about 10 to 20 ms, resulting in a correlation peak in the CDS that has a spread in Doppler of about 100 Hz, which is commensurate with the Doppler spread of typical urban traffic (<50 km/hr) [6]. Even if the GNSS receiver is equipped with other ancillary sensors such that the receiver velocity vector is independently known, this cannot be used to discriminate the SS signal, as multipath Doppler spreading is approximately equivalent for both the SS and the authentic SV signals. Note that the receiver processing gain used for suppressing a jamming signal is not effective in the case of the spoofer signal. Consequently, the spoofer transmit power can be orders of magnitude less than that of the noise jammer, which makes the spoofer source much more difficult to locate and disable through radio direction finding and beam forming. The objective of this paper is to present a computationally efficient processing technique that can be added to relatively unsophisticated consumer-grade GNSS receivers to discriminate the spoofer signals transmitted by an SS. The proposed processing is based on estimating the received signal power and comparing it with a set of thresholds to verify the authenticity of the signal. The detection problem is formulated based on a Rayleigh fading multipath scenario. Nevertheless, it is shown that, although suboptimal, the deduced expressions can be utilized for spoofing detection in a generalized Rician multipath channel with minimal performance degradation. The proposed technique is further extended to include incidental motions of the handheld receiver, instigated through user interaction with the handset device, in the form of spatial translation and polarization rotation. User interaction with the handheld creates variability in the antenna response, which can be transformed into a diversity gain that adds to the general processing gain of the receiver [7-10]. This processing gain enhances the estimation of the received signal power of the correlation peaks, which is necessary information for spoofer discrimination. A case study based on GPS L1 C/A signals is developed to demonstrate the effectiveness of the proposed technique. Nevertheless, this technique can be directly extended to other GNSS signal formats such as GPS L2 C/A and GLONASS. The rest of the paper is organized as follows. In Section 2 the system definition and the assumptions are given. Section 3 formulates a multihypothesis detection problem and focuses on the statistical evaluation of the proposed technique, with the conclusions provided in Section 4.
System Definition This paper considers the analysis of individual GNSS satellite signals, while recognizing that simultaneous processing of all available GNSS signals provides extra diversity that can be used to further improve the performance of the proposed spoofer detection technique. The received complex GNSS baseband signal is denoted here by g(t) = s(t) + w(t), where the signal component of g(t) is represented by s(t) = A(t)s_o(t), with "t" denoting time, A(t) the channel response to the incident signal at the antenna, and s_o(t) the complex baseband component of the satellite signal, which can be written as s_o(t) = d(t)c(t − τ)exp(j(2πΔf t + ψ)), where d(t) is the navigation data modulation, c(t) is the Pseudo Random Noise (PRN) code, τ is the code phase, Δf represents the carrier frequency offset (due to the Doppler of the GNSS signal as well as any frequency offset of the receiver's local oscillator), and ψ is the initial phase offset. s_o(t) is known to the receiver except for the navigation data, the code phase, the carrier frequency offset, and the initial phase offset ψ. The received signal, g(t), is corrupted by additive white Gaussian noise, which has an equivalent complex baseband representation denoted by w(t). It is assumed that w(t) is a complex normal random process, independent of the signal, with a power spectral density (PSD) that is constant within the bandwidth of the received signal. The GNSS receiver integrates a temporal snapshot of g(t) over the interval t ∈ [0, T_I], where T_I is typically smaller than the duration of one navigation data bit (20 ms). The signal snapshot of g(t) is collected by the receiver and then despread by a locally generated copy of s_o(t) during the initial acquisition. The initial acquisition is a typical multihypothesis detection in which the receiver searches the CDS for the frequency offset Δf and the code delay τ [11,12]. Note that the initial phase offset ψ is not known to the receiver during the initial acquisition, and as such the output of the despreading matched filter is a random complex variable. The despread baseband signal samples at a correlator output are represented by x_{n; τ̂, Δf̂} = ∫_{(n−1)T_I}^{nT_I} g(t) s_o*(t; τ̂, Δf̂) dt = s_n + w_n, where " * " denotes complex conjugation, the subscript "n" denotes the nth signal sampling interval, which extends over t ∈ [(n−1)T_I, nT_I], and s_n, w_n are the post-integration signal and WGN components, respectively. In addition, τ̂ and Δf̂ represent the code phase and Doppler estimated in the initial acquisition, which consists of a maximum likelihood search over the CDS of a signal sample, x_{n; τ, Δf}, such that {τ̂, Δf̂} = arg max_{τ, Δf} |x_{n; τ, Δf}|², where {τ̂, Δf̂} are the maximum likelihood (ML) estimates of the true code phase and Doppler frequency, respectively. The estimated code phase and Doppler are then passed on to the tracking loops to facilitate further receiver processing. Consequently, N signal samples, namely x = {x_1, . . ., x_N}, can be collected and used for spoofer detection.
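As a concrete illustration of the acquisition search just described, the following Python sketch despreads a noisy snapshot against a grid of candidate code phases and Doppler bins and picks the maximum-power cell. The PRN sequence, sample rate, noise level, and grid spacing are toy assumptions, not the GPS L1 C/A parameters of the paper's case study.

```python
# Toy acquisition: maximum-likelihood search of |x_{n; tau, df}|^2 over a
# code-phase x Doppler grid. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs, T_I = 1.023e6, 1e-3                 # sample rate [Hz] and 1 ms snapshot
N = int(fs * T_I)                       # 1023 samples
code = rng.choice([-1.0, 1.0], size=N)  # toy PRN chips, one chip per sample
t = np.arange(N) / fs

true_tau, true_df = 317, 1500.0         # true code phase [samples], Doppler [Hz]
s = np.roll(code, true_tau) * np.exp(1j * (2 * np.pi * true_df * t + 0.7))
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = s + 2.0 * noise                     # received snapshot g(t) = s(t) + w(t)

best_power, best_tau, best_df = 0.0, None, None
for df in np.arange(-5000.0, 5001.0, 250.0):       # candidate Doppler bins
    ref = np.exp(-1j * 2 * np.pi * df * t)         # wipe off candidate carrier
    # all code phases at once via FFT-based circular correlation
    corr = np.fft.ifft(np.fft.fft(g * ref) * np.conj(np.fft.fft(code)))
    k = int(np.argmax(np.abs(corr)))
    if np.abs(corr[k]) ** 2 > best_power:
        best_power, best_tau, best_df = np.abs(corr[k]) ** 2, k, df

print("ML estimates (tau, df):", best_tau, best_df)  # expect (317, 1500.0)
```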
Theoretical Analysis of Spoofer Detection A hypothetical scenario is considered based on an SS transmitting spoofing signals in an urban environment, as shown in Figure 1. The authentic signal and the spoofer signal are both affected by multipath fading, and therefore the received signal power is random in space and polarization. In other words, multipath fading results in signal power fluctuation when the receiver is spatially translated or undergoes polarization changes due to rotation. Unlike the authentic signal power, which is insensitive to power variations arising from pathloss in the target area (this is due to the fact that the satellite-receiver separation is approximately unchanged over a period of several minutes), the spoofer signal power varies with the spoofer-receiver separation. An empirical model of order n can be utilized to model the spoofer signal power variation due to pathloss as ρ^(sp) = ρ^(sp)_R1 − 10 n log10(d/R1), (5) where R1 is a reference range, d is the spoofer-receiver range, n is the pathloss exponent, ρ^(sp) is the average spoofer SNR at d, and ρ^(sp)_R1 is the average received spoofer SNR at d = R1, all in dB. Note that, for the spoofer to be effective, the average received spoofer signal power needs to be higher than that of the authentic signal in the target area. Therefore, the received signal power from a standoff spoofer varies significantly with range due to pathloss, meaning that the spoofer signal power is abnormally higher than that of the authentic signal when the receiver is in the proximity of the standoff spoofer. This characteristic of the spoofer signal can be exploited to limit the effectiveness of the SS in its target area, based on comparing the measured signal power against a preset threshold. As stated earlier, a receiver records N signal samples, with each of these n = 1, . . ., N signal samples belonging to one of three hypotheses, namely the noise hypothesis H0, the authentic signal hypothesis H1, and the spoofer signal hypothesis H2: H0: x_n = w_n, H1: x_n = A^(a)_n + w_n, H2: x_n = A^(sp)_n + w_n, (6) where the normalization ∫_t^{t+T_I} |s_o(t)|² dt = 1 is assumed, τ and Δf are suppressed for notational convenience, and A^(a)(t) and A^(sp)(t) represent the channel gains associated with the authentic and the spoofer signals, respectively. Consequently, a detection variable, r = h(x), can be formulated to decide between the three hypotheses of (6) based on comparing "r" with a set of thresholds, ρ_1, ρ_2, as shown in Figure 2. Note that h(x) is a function that maps the measured signal samples x to a single variable, r, which is a sufficient statistic with respect to H0, H1, and H2. As will be shown in Section 3.1, r can be found from the probability density functions (PDF) of x [13], or alternatively from the PDFs of "r", which are denoted here by H0: f_{r|H0}(r), H1: f_{r|H1}(r), H2: f_{r|H2}(r), (7) where f(•) denotes a PDF.
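The range dependence of equation (5) is simple enough to sketch directly. In the hedged Python snippet below, the reference SNR, reference range, pathloss exponent, and the decision margin are invented placeholders used only to illustrate how proximity to the spoofer produces an anomalously high received power.

```python
# Sketch of the empirical pathloss model of equation (5): the average spoofer
# SNR in dB falls by 10*n*log10(d/R1) from its value at the reference range.
# All numerical values below are assumptions, for illustration only.
import numpy as np

def spoofer_snr_db(d, rho_r1_db=25.0, r1=100.0, n=3.0):
    """Average received spoofer SNR [dB] at spoofer-receiver range d [m]."""
    return rho_r1_db - 10.0 * n * np.log10(d / r1)

rho_auth_db = 15.0   # typical post-processing authentic SNR (see Section 3)
margin_db = 5.0      # arbitrary margin for flagging a power anomaly

for d in [50.0, 100.0, 200.0, 400.0, 800.0]:
    rho_sp = spoofer_snr_db(d)
    verdict = "anomalously strong" if rho_sp > rho_auth_db + margin_db else "plausible"
    print(f"d = {d:6.0f} m: spoofer SNR = {rho_sp:5.1f} dB ({verdict})")
```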
One optimization criterion for determining the thresholds (ρ_1, ρ_2) is based on minimizing the probability of error, namely P_e = 1 − Σ_{i=0}^{2} P(Hi) P(Hi|Hi), (8) where P(Hi) for i = 0, 1, 2 are the prior probabilities of the H0, H1, and H2 states and P(Hi|Hi) denotes the conditional probability of deciding Hi when Hi is correct. Consequently, P_e = 1 − [P(H0) F_{r|H0}(ρ_1) + P(H1)(F_{r|H1}(ρ_2) − F_{r|H1}(ρ_1)) + P(H2)(1 − F_{r|H2}(ρ_2))], (9) where F_{r|Hi}(•) denotes the cumulative distribution function (CDF) of the random variable "r" under Hi. As can be seen from (9), P_e is a function of the authentic signal, the spoofer, and the noise statistics. Therefore, any optimization based on minimizing the probability of error hinges on knowing the spoofer signal statistics, which are not available to an unsuspecting receiver given the capricious nature of a spoofer. Alternatively, a second optimization can be made based on maximizing the probability of detection for a given probability of false alarm. Assuming ρ_2 > ρ_1, the threshold ρ_1 can be determined by selecting a probability of false alarm P_FA1 as P_FA1 = 1 − F_{r|H0}(ρ_1). (10) Therefore, ρ_1 = F⁻¹_{r|H0}(1 − P_FA1). (11) As is evident from (11), ρ_1 depends on P_FA1 and on the noise statistic, which is approximately known to the receiver. As stated earlier, the average spoofer SNR is not known and varies with the spoofer-receiver separation due to pathloss, spoofer transmit power variations, and so forth. However, the average authentic line of sight (LOS) SNR is approximately known, given that the average LOS CNR of GNSS signals at ground level is typically within [40-50] dB-Hz, which maps into a post-processing SNR of approximately [10-20] dB based on 1 ms of coherent integration. This a priori information can be used to determine a second threshold, ρ_2, by selecting a probability of false alarm associated with H2 as P_FA2 = 1 − F_{r|H1}(ρ_2). (12) Given that the satellite geometry is not known to an acquiring receiver, it is reasonable to assume that the SVs are approximately uniformly distributed in the sky. Consequently, the PDF of the average post-processing SNR of the authentic GNSS signals, ρ^(a), can be approximated as f(ρ^(a)) = U(ρ_L, ρ_H), (13) where U(ρ_L, ρ_H) denotes a uniform PDF and ρ_L ≈ 10, ρ_H ≈ 20 dB denote the lower and upper bounds of the uniform distribution. Consequently, F_{r|H1}(ρ_2) = ∫ F_{r|H1}(ρ_2 | ρ^(a)) f(ρ^(a)) dρ^(a). (14) ρ_2 can then be numerically computed by inserting (14) into (12). Finally, the probability of detection associated with H2 can be computed as P_D2 = 1 − F_{r|H2}(ρ_2). (15)
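Under the uncorrelated Rayleigh model introduced in Section 3.1 below, the detection variable r = Σ_n |x_n|² (equation (20)) is a scaled chi-squared variable, which makes the threshold computations of equations (10)-(15) straightforward to sketch numerically. In the hedged Python snippet below, N and the false alarm targets are illustrative assumptions, and the uniform prior of equation (13) is handled by simple averaging.

```python
# Threshold computation under the uncorrelated Rayleigh model of Section 3.1,
# where r = sum_n |x_n|^2 gives 2r ~ chi^2_(2N) under H0 and
# 2r/(1+eta) ~ chi^2_(2N) under H1, with eta the linear SNR.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

N, P_FA1, P_FA2 = 5, 0.01, 0.01                 # assumed design targets
rho1 = chi2.ppf(1.0 - P_FA1, 2 * N) / 2.0       # equation (11)

rho_a_db = np.linspace(10.0, 20.0, 201)         # uniform prior of equation (13)
eta = 10.0 ** (rho_a_db / 10.0)

def excess_pfa2(rho2):
    # equation (12) with the H1 CDF averaged per equation (14)
    return np.mean(1.0 - chi2.cdf(2.0 * rho2 / (1.0 + eta), 2 * N)) - P_FA2

rho2 = brentq(excess_pfa2, rho1, 1e6)           # numerical solution for rho_2
print(f"rho1 = {rho1:.2f}, rho2 = {rho2:.2f}")

eta_sp = 10.0 ** (25.0 / 10.0)                  # assumed spoofer SNR of 25 dB
p_d2 = 1.0 - chi2.cdf(2.0 * rho2 / (1.0 + eta_sp), 2 * N)   # equation (15)
print(f"P_D2 = {p_d2:.3f}")
```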
3.1. Spoofer Detection Based on a Moving Antenna. As stated earlier, the typical usage mode of a handheld receiver includes incidental motion in the form of spatial translation, polarization rotation, and blocking of the receiver antenna. It is known that any temporal variation in the antenna response results in temporal signal decorrelation in a multipath environment, such that extra diversity branches can be made available for receiver processing [7-10]. To exploit the extra processing gain arising from antenna motion, the statistical properties of x need to be considered. The distribution of scatterers in many multipath environments, such as indoors or in urban areas, approximately resembles a uniform sphere of scatterers [9,14]. The correlation coefficient between signal samples s = [s_1, . . ., s_N], collected through spatially translating an antenna over an arbitrary trajectory in a Rayleigh fading environment that resembles a sphere of scatterers, can be shown to be [7] [C_s]_{mn} = η sinc(k_0 p_{mn}), (16) where k_0 = 2π/λ is the propagation constant, p_{mn} = |p_m − p_n| is the spatial separation between the antenna positions at which signal samples x_m and x_n are collected (see Figure 3), and η is the variance of s. Consequently, the x_m are statistically uncorrelated if the spatial separation between the antenna positions at which the samples are measured is greater than half a carrier wavelength (at the GPS L1 frequency this maps into a spatial separation of 10 cm), resulting in the approximation C_s ≈ ηI_N, where I_N is an N × N identity matrix. Rotation is another form of user interaction with a handheld receiver, resulting in variation of the antenna's polarization. Variation in the antenna's polarization is known to result in signal decorrelation. It can be shown that the covariance of signal samples measured through polarization rotation of a handheld antenna follows from [15] as [C_s]_{mn} = η cos(ψ_{mn}), (17) where ψ_{mn} is the angular separation of the polarization vectors at which signal samples x_m and x_n are collected (see Figure 3). Note that only three degrees of freedom are realizable based on a polarization rotation of a linearly polarized antenna [16]. Therefore, N ≤ 3 uncorrelated signal samples are realizable based on polarization rotation alone. A combination of polarization rotation and spatial translation can be utilized to further increase the number of diversity branches [9]. The cross-covariance arising from a combined spatial-polarization translation of a GNSS handheld antenna can be shown to be [9] [C_s]_{mn} = η sinc(k_0 p_{mn}) cos(ψ_{mn}). (18) As can be seen from (18), receiver motion in the form of a combined translation in space and rotation of polarization decorrelates the received signal and can therefore be utilized to synthesize several diversity branches useful for receiver processing.
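A hedged sketch of equation (18): the snippet below builds the combined spatial-polarization covariance matrix for a hypothetical handheld trajectory. The sample positions and polarization angles are invented for illustration; note that numpy's sinc is normalized, so its argument must be divided by π to obtain sin(x)/x.

```python
# Combined spatial-polarization covariance of equation (18):
# [C_s]_mn = eta * sinc(k0 * p_mn) * cos(psi_mn). Antenna positions and
# polarization angles are invented to mimic incidental handheld motion.
import numpy as np

lam = 0.1903                 # GPS L1 carrier wavelength [m]
k0 = 2.0 * np.pi / lam       # propagation constant
eta = 1.0                    # channel variance (assumed)

pos = np.array([[0.05 * n, 0.0, 0.0] for n in range(5)])   # 5 cm spacing
psi = np.deg2rad([0.0, 15.0, 30.0, 45.0, 60.0])            # polarization angles

p = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
dpsi = psi[:, None] - psi[None, :]
# np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi for sin(x)/x
C_s = eta * np.sinc(k0 * p / np.pi) * np.cos(dpsi)
print(np.round(C_s, 3))      # off-diagonal terms decay towards eta * I_N
```

With this hypothetical 5 cm spacing the nearest-neighbour correlation is already below 0.7, consistent with treating the branches as approximately uncorrelated in the analysis that follows.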
Uncorrelated Rayleigh Fading Channels. Assume that N uncorrelated signal samples are obtained based on a combined spatial and polarization translation of a GNSS handheld receiver in an uncorrelated Rayleigh fading channel, such that C_s = ηI_N. Consequently, x = {x_1, . . ., x_N} are jointly complex normal (CN) zero-mean RVs with x = s + w ∼ CN(0, C_x). C_x = C_s + C_w denotes the covariance matrix of x, with C_w the noise covariance matrix. To simplify the expressions to follow, and without any loss of generality, the noise covariance is normalized such that C_w = I_N. Therefore, the SNR can be written as ρ = η. (19) Consequently, the signal samples collected by a moving antenna in an uncorrelated Rayleigh fading channel are distributed according to x ∼ CN(0, (η + 1)I). It can be shown that r = Σ_{n=1}^{N} |x_n|² (20) is a sufficient statistic with respect to the hypotheses H0, H1, and H2, and as such is the detection variable [13]. The thresholds (ρ_1, ρ_2) can be found by determining the PDF of r and substituting into (10)-(15) for any given P_FA1, P_FA2. Note that r is a measure of the received signal power. Therefore, the detection problem amounts to comparing the received signal power, r, with a set of thresholds, (ρ_1, ρ_2), to determine the authenticity of the received signal. For the spoofer to be effective, the spoofer signal power must be higher than that of the authentic signal in the target area, such that the ML search in the CDS results in selecting the spoofer signal, which has the largest correlation peaks. Therefore, r, as a measure of the received signal power, can be utilized to discriminate the spoofer from the authentic signals. Generalized Rician Channels. In a generalized Rician channel, the channel gain, A(t), is a random variable distributed according to CN(μ, η/2), where μ = √2 |μ| exp(jα(t)) is the complex mean, with α(t) denoting the phase of the complex mean, and η/2 is the variance of the in-phase and quadrature-phase Gaussian components of the channel gain. Consequently, x are jointly CN RVs distributed according to x ∼ CN(m, C_x), where m = √2 |μ| [exp(jα_1), . . ., exp(jα_N)]^T is an N × 1 vector with α_i denoting the phase, C_x = C_s + C_w is the covariance matrix of x, and C_w = I_N is the normalized noise covariance. In a Rician channel the average SNR, ρ, can be defined as ρ = 2|μ|² + η, and the magnitude of the mean, |μ|, and the variance, η, are related through the Rician K-factor, κ, such that κ = |μ|²/η. Since the angle of arrival (AoA) of the dominant signal component is not known to the receiver, μ cannot be estimated; therefore m, and subsequently κ, are unknown, which makes it impossible to formulate a sufficient statistic based on a likelihood ratio test [13]. Nevertheless, as will be shown here, the performance of the spoofer detection is approximately insensitive to variation in the K-factor, κ, and to the cross-correlation of the signal samples s, as long as the cross-correlation remains moderately low, for example <0.7. This is reasonable, since the diversity gain arising from combining equal-power diversity branches remains mostly unchanged for branch cross-correlations <0.7. Therefore, the suboptimal detection variable of (20) can be applied for spoofing detection in a generalized Rician channel with small performance degradation. Figure 4 shows ρ_2 computed from (14) for various authentic and spoofer channel K-factors (κ^(a) and κ^(sp)), based on ρ^(a) = 15 dB and ρ^(sp) = 20 dB, two typical values P_FA2 = 0.01, 0.1, and N = 1, 3, 5. As can be seen from Figure 4, smaller K-factors result in larger ρ_2 values. This is due to the increased uncertainty in the received signal power as the K-factor decreases. Nevertheless, the variation in ρ_2 is limited to a few dB, and as such the K-factor does not play a major role in the optimization problem and may be ignored at the expense of slightly lower performance. Therefore,
(20) can be applied to a generalized Rician channel as a suboptimal detector. In addition, as can be seen from this figure, a larger N results in a smaller ρ_2 for the same performance requirement on P_FA2. This is due to the diversity gain made available through the extra diversity branches for N > 1. Figures 5 and 6 show the receiver operating characteristics (ROC) based on the detection variable of (20), for ρ^(a) = 15 dB, various N and ρ^(sp), and for κ^(a) = κ^(sp) = 1 and κ^(a) = 10, κ^(sp) = 1, respectively. As can be seen in these figures, the detection performance improves with an increasing number of diversity branches, N. Also, larger ρ^(sp) results in better detection due to the further separation between the PDFs of the authentic and spoofing signals. When a stronger LOS signal component is present (κ^(a) = 10 in Figure 6), a better detection performance is realized due to the reduced uncertainty in the authentic signal power. Note that setting P_FA2 = 0 in (12) results in ρ_2 = ∞ and therefore P_D2 = 0; this corresponds to a receiver not equipped with any spoofer detection. To provide an alternative measure of performance improvement, the probability of error P_e of (9) can be used. Figure 7 shows P_e for various N and κ^(a), with ρ^(a) = 15 dB, κ^(sp) = 1, and ρ^(sp) = 25 dB. As can be seen from this figure, P_e is approximately independent of the exact value of the K-factor, which reinforces the previous observations of Figure 4, where it was shown that the threshold ρ_2 is not very sensitive to variations of the K-factor. P_e decreases rapidly with an increasing number of diversity branches. The latter demonstrates the performance enhancement arising from the extra diversity branches made available by utilizing a moving antenna. Figure 8 shows P_e for ρ^(a) = 15 dB and various ρ^(sp), with κ^(a) = κ^(sp) = 1, for N = 1, 3, 5, 10, 15, 20. Similarly, larger N and larger ρ^(sp), which provide a better separation between the authentic and spoofing signal PDFs, result in a smaller P_e. This is further demonstrated in Figure 9, where P_e is plotted for N = 1-20, κ^(a) = κ^(sp) = 1, and ρ^(sp) = 10-25 dB. Note that, as the PDFs of the authentic and spoofer signals become more alike, for example ρ^(a) = ρ^(sp), P_e becomes larger. As stated earlier, the spoofer signal power is affected by pathloss, quantifiable by (5). As a result, the received spoofer SNR varies with proximity to the spoofer transmitter. To provide an average measure of the performance enhancement arising from the proposed technique, the average probability of error, P̄_e, can be defined as P̄_e = (1/(ρ_max − ρ_min)) ∫_{ρ_min}^{ρ_max} P_e(ρ^(sp)) dρ^(sp). (21) Figure 10 shows P̄_e for various N, based on κ^(a) = κ^(sp) = 1 and ρ^(a) = 15 dB, with ρ_min = 15 and ρ_max = 25 dB. The effect of diversity is further emphasized in this figure, where the average probability of error decreases with increasing N, such that P̄_e ≈ 0.19 for N = 20, implying that the proposed technique is very effective in reducing the spoofer effectiveness in the target area.
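The qualitative trends reported in Figures 5-10 can be reproduced with a short Monte Carlo sketch of the detector r = Σ_n |x_n|². The snippet below draws Rician-faded samples for hypothetical authentic and spoofer SNRs and K-factors (all invented for illustration) and evaluates the empirical detection probability for two false alarm targets.

```python
# Monte Carlo sketch of the detector r = sum_n |x_n|^2 in Rician fading.
# SNRs, K-factors, and trial counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def draw_r(rho_db, kappa, N, trials):
    """Samples of r for average SNR rho (dB) and Rician K-factor kappa."""
    rho = 10.0 ** (rho_db / 10.0)
    eta = rho / (1.0 + 2.0 * kappa)      # rho = 2|mu|^2 + eta, kappa = |mu|^2/eta
    mean = np.sqrt(2.0 * kappa * eta)    # magnitude of the complex mean
    s = mean + np.sqrt(eta / 2.0) * (rng.normal(size=(trials, N))
                                     + 1j * rng.normal(size=(trials, N)))
    w = (rng.normal(size=(trials, N))
         + 1j * rng.normal(size=(trials, N))) / np.sqrt(2.0)  # unit-power noise
    return np.sum(np.abs(s + w) ** 2, axis=1)

N, trials = 5, 200_000
r_auth = draw_r(15.0, 1.0, N, trials)    # authentic: rho^(a) = 15 dB, kappa = 1
r_spoof = draw_r(25.0, 1.0, N, trials)   # spoofer:   rho^(sp) = 25 dB, kappa = 1

for p_fa2 in (0.01, 0.1):
    rho2 = np.quantile(r_auth, 1.0 - p_fa2)   # empirical version of eq. (12)
    p_d2 = float(np.mean(r_spoof > rho2))     # empirical version of eq. (15)
    print(f"P_FA2 = {p_fa2:.2f}: rho2 = {rho2:9.1f}, P_D2 = {p_d2:.3f}")
```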
Conclusions A multihypothesis detection problem was formulated based on a likelihood ratio test applicable to GNSS spoofing detection. A straightforward spoofing detection technique based on signal power measurements was proposed and was shown to be effective for verifying the authenticity of the received GNSS signals in urban multipath environments, exploiting the fact that the spoofer signal power is abnormally high compared to that of the authentic signal when the receiver is in the proximity of the standoff spoofer. The proposed processing was further extended to exploit extra diversity branches made available by a moving handheld receiver, and was shown to further improve the spoofer detection performance. Unlike previously proposed antispoofing techniques, the proposed technique does not require any hardware modification and can be readily applied to any handheld GNSS receiver with minimal firmware changes. It was shown that the proposed technique is largely insensitive to uncertainties in the statistical properties of the multipath channel, as long as the collected signal samples are not strongly correlated. A suboptimal detector was proposed and effectively applied to a generalized Rician channel in which the channel parameters are not available to the receiver. An extensive statistical analysis was performed to assess the performance of the proposed technique. It was shown that the average probability of error can be reduced to less than 20% in a typical urban environment. Figure 1: Hypothetical stand-off spoofer scenario in an urban canyon. The contours represent the random average spoofer signal power. The average authentic signal power is approximately constant over the entire area, given that the receiver-satellite range, and hence the pathloss, is approximately unchanged. Figure 2: A diagram of the PDFs of the detection variable "r" under the H0, H1, and H2 hypotheses. Figure 3: Spatial translation and polarization rotation of a GNSS handheld antenna.
6,227.2
2012-11-06T00:00:00.000
[ "Engineering", "Computer Science" ]
Topological transitions of the Fermi surface of osmium under pressure: an LDA+DMFT study The influence of pressure on the electronic structure of Os has attracted substantial attention recently due to reports on isostructural electronic transitions in this metal. Here, we theoretically investigate the Fermi surface of Os from ambient to high pressure, using density functional theory combined with dynamical mean field theory. We provide a detailed discussion of the calculated Fermi surface and its dependence on the level of theory used for the treatment of the electron-electron interactions. Although we confirm that Os can be classified as a weakly correlated metal, the inclusion of local quantum fluctuations between 5d electrons beyond the local density approximation explains the most recent experimental reports regarding the occurrence of electronic topological transitions in Os. Introduction Osmium has the highest density of all the known elements. High-pressure experiments and ab initio calculations indicate that the bulk modulus of hexagonal close-packed (hcp) Os rivals that of diamond [1-6]. Recently, a combined experimental and theoretical study was presented, identifying two types of electronic transitions under pressure [1]. These transitions expressed themselves as anomalies in the pressure evolution of the measured c/a-ratio of the hcp structure. At approximately 150 GPa, a so-called electronic topological transition (ETT) was found, which is a topological change of the Fermi surface, also known as a Lifshitz transition [7-9]. Such transitions have previously been demonstrated to cause anomalies in the measured c/a-ratio [10]. It is worth mentioning that a second anomaly, seen above 400 GPa, could not be identified with any ETT, but was explained by the overlap of core electron levels. Experiments also showed that the hcp structure is stable up to the highest pressure achieved, above 770 GPa. Earlier experimental and theoretical work has yielded conflicting conclusions regarding the c/a-anomalies. Measurements by Occelli et al [4] reported an anomaly in the c/a-ratio at 25 GPa. However, a similar study up to 60 GPa by Takemura [5] did not report any such finding. On the theoretical side, Sahu and Kleinman [6] made a fully relativistic ab initio calculation of the lattice constants, and predicted a change of slope in the pressure dependence of the c/a-ratio at 9.5 GPa, but they did not consider the evolution of the Fermi surface under pressure. Ma et al [11] calculated an equation of state (EOS) based on the generalized gradient approximation (GGA) and found an anomaly in the c/a ratio at 25 GPa, but without any sign of an ETT up to 80 GPa. However, high-precision calculations by Koduela et al [12] demonstrated that although the form of the experimental pressure-volume curve is well reproduced with the GGA, it is generally shifted towards larger volumes. Using the local density approximation (LDA), an EOS in closer agreement with experiment was obtained. Three ETTs were identified between 70 and 130 GPa, at different points in the Brillouin zone (BZ). Nevertheless, the observed ETTs were concluded to be unrelated to the c/a anomaly. Finally, a study by Liang and Fang [14] did not find any peculiarity in the c/a-ratio or any sign of ETTs up to 150 GPa. In [1], the controversy was resolved.
Using a highly accurate experimental EOS, measured up to 770 GPa, and treating electron-electron interactions with the more advanced scheme of combining the LDA with dynamical mean field theory (the LDA+DMFT method [15]), it was demonstrated that the peculiarity of the c/a ratio does exist at ∼150 GPa and that it can be related to ETTs. However, no studies of the Fermi surface evolution with pressure in hcp Os which include electron-electron interactions beyond the LDA or the semilocal GGA have been reported so far. The LDA+DMFT method does take dynamical correlation effects into account, and it is capable of interpolating between the metallic regime and the strongly localized Mott-insulating limit. In this paper, we present further details of the electronic structure of Os in the pressure range where these ETTs were detected, and demonstrate the impact of electron correlations beyond the LDA, namely local quantum fluctuations, on the Fermi surface of this metal. We calculate the electronic structure of Os within the LDA+DMFT method for a range of pressures, from zero to about 250 GPa, and compare the Fermi surfaces and charge distributions obtained within LDA+DMFT and LDA. Band structure, Fermi surface, and charge density distribution plots give complementary information on this system from different aspects. In particular, an earlier density analysis of Os has suggested the valence charge to be highly localized in the nearest-neighbor bonds [16], which could partly explain its high bulk modulus. It is therefore important to investigate what happens to the Os electronic structure and charge density when local quantum fluctuations are included, especially under pressure. Lattice constants Although DMFT calculations can nowadays be carried out to determine the EOS for a transition metal [17], they are still quite time consuming. Therefore, in the present work we have used the experimental lattice constants presented in [1] in order to relate the unit cell volume to pressure. To avoid additional complications in comparing LDA+DMFT and LDA results, the latter were obtained at the experimental lattice parameters as well. Besides, the use of experimental lattice parameters allows us to avoid complications related to inaccuracies of the calculated EOS for Os, clearly demonstrated in [1], as well as the need to compare results of calculations obtained within the local (LDA) and semi-local (GGA) approximations of density functional theory. Indeed, it is well documented that the electronic structure as calculated with either LDA or GGA at the same lattice constants is practically indistinguishable in nonmagnetic systems [13]. The employed values are summarized in table 1. LDA calculations In our LDA calculations we have used the all-electron full-potential band structure method provided in the Wien2k code [18]. We used a 32×32×32 k-mesh, and kept the product R_MT·K_max fixed, where we set the muffin-tin radius R_MT = 2.5 a.u. We have also performed calculations using the all-electron full-potential local orbital (FPLO) [19,20] method. LDA+DMFT calculations For the LDA+DMFT calculations, we have used the toolkit implemented in the TRIQS package [21-23], which is fully self-consistent in both the charge and the local self-energy within the LDA+DMFT cycles. We have treated the partially filled 5d shell as correlated within the DMFT approximation. The corresponding effective Hamiltonian has the form H = H_LDA − H_DC + H_U. Table 1. Lattice constants used in the calculations. All values are experimental and taken from [1].
Here H_LDA is the energy of the system in LDA, from which the 5d contribution due to the on-site Coulomb repulsion is subtracted by the double counting term H_DC, which we approximate with the so-called around-mean-field form [24], and finally, H_U is the Coulomb interaction term. Within each DMFT iteration, the impurity problem was solved with the hybridization-expansion continuous-time quantum Monte Carlo method [25]. In each DMFT iteration we performed over 512 million Monte Carlo cycles at inverse temperature 1/T = 40 eV⁻¹. The impurity problem was solved for the self-energy on the Matsubara axis, after which an analytical continuation was made to the real axis by a stochastic version of the maximum entropy method [26]. We have used the Coulomb interaction strength U = 2.80 eV and Hund's coupling constant J = 0.55 eV, based on the estimations in [27]. It should be noted that the U value may differ slightly, depending on the crystal structure and different computational methods [28-30]. However, for a weakly correlated compound such as Os, small variations in the U parameter should only slightly affect our quantitative predictions, and not our qualitative conclusions. LDA+DMFT electronic structure calculations were performed within the scalar-relativistic approximation and using a k-mesh with 32×32×32 points in the full BZ. Since we are interested in comparing the LDA and LDA+DMFT approximations, we used the same parameters as for LDA. As shown in [1], the inclusion of spin-orbit coupling in LDA calculations will split some of the bands; however, these features are located far away from the Fermi level, even at high pressure. We have therefore performed our calculations within the scalar-relativistic approximation, neglecting spin-orbit coupling. Quasiparticle effective mass We have calculated the ratio of the effective mass of the correlated quasiparticles, m*, to that of uncorrelated electrons, m, as m̄*/m = Σ_l ρ_l(E_F) [1 − ∂ Im Σ_l(iω)/∂ω |_{ω→0+}] / Σ_l ρ_l(E_F), (5) where l is the orbital index, ρ_l(E_F) is the density of states of the lth orbital in ω space at the Fermi level, E_F is the Fermi energy, and Σ_l(iω) is the local self-energy on the Matsubara axis. Since there are several d orbitals and the occupancy of each orbital by the itinerant electrons varies, we take the averaged value of the effective mass, m̄*, to represent the correlation strength for all the 5d electrons. Electron density distribution In order to analyze details of the density distribution we have calculated the Fourier transform of the charge density into k-space. We have used a very dense 200×200 k-point grid, which is considerably denser than in the Fermi surface calculations. The charge density at each k-point was calculated by integrating the k-dependent density of states below the Fermi level, i.e., with the formula n_k = ∫ ρ_k(ω) f(ω) dω, (6) where the density of states, ρ_k(ω), was obtained by summing over the orbital channels labeled by l: ρ_k(ω) = −(1/π) Σ_l Im G_{k,l}(ω), (7) where G_{k,l}(ω) is the k-dependent Green's function of the lth orbital on the real frequency axis. f(ω) = 1/(1 + e^{βω}) is the Fermi-Dirac distribution function with β = 1/(k_B T), where k_B is the Boltzmann constant and T is the temperature. The charge density, n_k, is thus defined on a 2D grid in k-space. In fact, n_k can be measured experimentally as well, e.g., by x-ray scattering.
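A hedged sketch of how the mass enhancement of equation (5) could be evaluated in practice: the derivative of the Matsubara self-energy is approximated by its value at the first Matsubara frequency, and the per-orbital ratios are averaged with DOS weights. The self-energy values and the weights ρ_l(E_F) below are made-up placeholders, not TRIQS output for Os.

```python
# Mass enhancement per equation (5): (m*/m)_l = 1 - d Im Sigma_l(i w)/d w
# at w -> 0+, approximated at the first Matsubara frequency, then
# DOS-weighted over the 5d orbitals. All numerical inputs are placeholders.
import numpy as np

beta = 40.0                      # inverse temperature [eV^-1], as in the text
omega0 = np.pi / beta            # first fermionic Matsubara frequency [eV]

im_sigma0 = np.array([-0.012, -0.010, -0.011, -0.009, -0.013])  # Im Sigma_l(i w0) [eV]
rho_ef = np.array([0.8, 1.1, 0.9, 1.0, 0.7])                    # rho_l(E_F) (assumed)

m_ratio_l = 1.0 - im_sigma0 / omega0        # per-orbital m*/m
m_ratio = np.sum(rho_ef * m_ratio_l) / np.sum(rho_ef)
print("per-orbital m*/m:", np.round(m_ratio_l, 3))
print("DOS-weighted average m*/m:", round(float(m_ratio), 3))
```

With these placeholder values the weighted ratio comes out only slightly above unity, consistent with the weak correlation strength reported for Os below.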
Electron density distribution

In order to analyze details of the density distribution we have calculated the Fourier transform of the charge density in k-space. We have used a very dense 200×200 k-point grid, which is considerably denser than in the Fermi surface calculations. The charge density at each k-point was calculated by integrating the k-resolved density of states below the Fermi level,

n_k = ∫ dω f(ω) ρ_k(ω),   ρ_k(ω) = −(1/π) Σ_{l=1..7} Im G_l(k, ω),

where the density of states ρ_k is obtained by summing over the orbital channels labeled by l, and G_l(k, ω) is the k-dependent Green's function of the l-th orbital on the real frequency axis. f(ω) = 1/(1 + e^{βω}) is the Fermi-Dirac distribution function with β = 1/(k_B T), where k_B is the Boltzmann constant and T is the temperature. The charge density n_k is thus defined on a 2D grid in k-space. In fact, n_k can be measured experimentally as well, e.g. by x-ray scattering.

Correlation strength

We begin by addressing the strength of electron correlations in hcp-Os. In 5d transition metals the effect of electronic correlations is believed to be small. However, previous studies on moderately correlated isoelectronic systems such as hcp-Fe have shown that a description of correlations beyond the LDA level can lead to new features in the electronic structure [10]. More specifically, using the more advanced description provided by LDA+DMFT, an ETT was found for hcp-Fe in precisely the pressure range where an anomaly in c/a was seen experimentally. The strength of electron correlations can be quantified by the effective mass of the quasiparticles, compared to the mass of uncorrelated electrons as discussed in section 2.4, where we express it as a weighted average over all the 5d orbitals. In figure 1 we show the averaged effective mass of the quasiparticles plotted against pressure. From the figure, one can observe that the effective mass m̄* is very close to the mass of uncorrelated electrons; osmium is therefore a weakly correlated system. Moreover, as expected, we notice that the effective mass decreases as pressure is increased and the kinetic energy grows relative to the Coulomb energy.

Fermi surface

Ambient pressure

In figure 3 we show the Fermi surface of Os at the pressure P = 0 GPa, as obtained with LDA. Figure 3(a) shows the Fermi surface with the convention of figure 2(a); in figure 3(b) we have replotted the Fermi surface with the convention of figure 2(c). One can see that the Fermi surface consists of four sheets. Two electron sheets surround the Γ point (green and yellow in figure 3(b)), the yellow one having a waist. In addition, one hole sheet is open (red), and disconnected ellipsoids form hole pockets (blue). These are all cut by the L-M-K-H plane. In addition, we see a hole pocket around the Γ point. This topology of the Fermi surface at ambient pressure is in agreement with the experimental work in [2] and the theoretical work in [11,12]. However, as discussed in [1], the details of the Fermi surface at a given pressure are sensitive to the specific choice of the EOS, which can be obtained from calculations or experiment. For example, at the Γ point a hole pocket was seen in [14], in contrast to [12]. Nevertheless, calculations at the experimental lattice constants, which we have carried out with the same computational method as in [12] (the FPLO method), clearly show a hole pocket (see figure 4). We therefore believe it is beneficial to compare results at fixed volume, and we use the experimental values presented in table 1. We now compare results obtained with the LDA and LDA+DMFT methods, to see the impact of correlation effects beyond the LDA. Figure 6 shows a comparative side-view of the Fermi surface obtained with LDA+DMFT (left column) and LDA (right column). Comparing figures 6(a) and (b), we see how LDA+DMFT gives a wider gap at the L point compared to LDA. In addition, the hole pocket around the Γ point seen in the LDA calculations is not present at the Fermi surface calculated within LDA+DMFT. This part of the Fermi surface is not visible in figure 6, but is discussed in section 3.3.1. In figure 5 we show so-called fat bands, indicating the d-character of the bands. It is seen that the band edge at the Γ point is of sp-character. At the L point we also find a large weight of sp electrons. The main impact of the LDA+DMFT treatment concerns the d states, which form our space of correlated states and are shifted along with the chemical potential. Compared with LDA, the bands at the Γ and L points appear shifted down in energy.
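The ETTs discussed in the following subsections amount to a Fermi-surface sheet appearing, disappearing or reconnecting as pressure changes. A minimal way to flag such a change numerically, given band energies on a k-patch around a high-symmetry point, is sketched below; the toy band and pressures are purely illustrative.

```python
import numpy as np

def has_pocket(band: np.ndarray, e_fermi: float) -> bool:
    """True if the band crosses e_fermi on this k-patch, i.e. if it
    contributes a Fermi-surface sheet there."""
    d = band - e_fermi
    return bool(d.min() < 0.0 < d.max())

def detect_ett(bands_by_pressure: dict, e_fermi: float, band_index: int):
    """Report pressure intervals where a sheet appears or disappears."""
    previous = None
    for p in sorted(bands_by_pressure):
        present = has_pocket(bands_by_pressure[p][band_index], e_fermi)
        if previous is not None and present != previous[1]:
            print(f"topology change between {previous[0]} and {p} GPa")
        previous = (p, present)

if __name__ == "__main__":
    # Toy quadratic band around Gamma whose edge moves through E_F with pressure.
    k = np.linspace(-0.3, 0.3, 61)
    dispersion = 4.0 * k**2                    # arbitrary units
    bands = {p: [0.05 - 0.001 * p + dispersion] for p in (0, 86, 134, 247)}
    detect_ett(bands, e_fermi=0.0, band_index=0)
```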
134 GPa

Upon increase of pressure, the LDA+DMFT results in figure 6(c) show that the gap at the L point has only been reduced. This is in contrast to the LDA result shown in figure 6(d), where one of the sheets (red) has become connected, meaning that it has undergone a so-called necking transition at the L-H line. Figure 7 shows the same view of the reciprocal lattice as figure 2(c), with the Γ point clearly visible. In LDA (figure 7(b)) we see a pronounced pocket at the Γ point. There is no topological change of the Fermi surface as compared to ambient pressure (see figure 3(b)), since the hole pocket is only seen to have become larger. In LDA+DMFT (figure 7(a)) the pocket is significantly smaller. This pocket is not present at ambient pressure, and the ETT occurs just above 100 GPa [1], in much better agreement with experiment than the LDA calculations.

247 GPa

Increasing the pressure further, to 247 GPa, LDA+DMFT also reveals a necking transition of the Fermi surface topology at the L point, as seen in figure 6(e). In this transition, hole pockets appear at the L point, as seen in both LDA and LDA+DMFT. Thus, this ETT consists of a necking transition, followed by the appearance of a hole pocket. We do not observe the disappearance of any of the hole pockets along the L-M line, which was suggested in [4]. In figure 7 we compare the Fermi surface evolution of hcp Os between the pressures of 134 and 247 GPa. We do not see any additional changes of the Fermi surface topology at the Γ point, neither with LDA+DMFT (figure 7(a)) nor with LDA (figure 7(b)). Thus, the only ETT that we see in this pressure range corresponds to the one at the L point in the LDA+DMFT calculations. This result is in agreement with the experimental variation of the c/a lattice parameter ratio with pressure [1]. Concluding the discussion of the Fermi surface calculations for hcp Os: when taking local quantum fluctuations into account with LDA+DMFT, we find the ETTs to occur at higher pressures than in the LDA calculations. Indeed, with LDA+DMFT the transition at the L point is found to occur at around 180 GPa, rather than at 125 GPa as predicted by LDA [1]. The ETT at the Γ point cannot be seen at all with LDA, since it occurs at a negative pressure; on the contrary, with LDA+DMFT it shows up just above 100 GPa. In this range of pressure, anomalous values of the c/a ratio were observed in [1].

Electron density distribution

In order to see the pressure-induced features at the Γ and L points more clearly, we have calculated the Fourier transform of the valence electron charge density distribution on a plane in k-space containing the Γ, A, L, and M points. In these 2D calculations one may employ a denser k-mesh to resolve the fine changes due to the ETTs. Figure 8 shows the results.
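A short sketch of the n_k integration defined in section 2.5 is given below, assuming the spectral density ρ_k(ω) = −(1/π) Σ_l Im G_l(k, ω) is already available on a real-frequency grid; the Lorentzian inputs are stand-ins for actual Green's functions.

```python
import numpy as np
from scipy.special import expit  # overflow-safe logistic function

def nk_from_spectral(omega: np.ndarray, rho_k: np.ndarray, beta: float) -> float:
    """Occupation n_k = ∫ dω f(ω) ρ_k(ω) on a real-frequency grid.

    omega : frequencies measured from the Fermi level (eV)
    rho_k : k-resolved spectral density, summed over orbital channels
    beta  : inverse temperature 1/(k_B T) in eV^-1
    """
    f = expit(-beta * omega)          # Fermi-Dirac factor 1/(1 + e^{beta*omega})
    return float(np.trapz(f * rho_k, omega))

if __name__ == "__main__":
    beta = 40.0
    omega = np.linspace(-10.0, 10.0, 2001)
    # Toy Lorentzian spectral functions centred below/above E_F.
    for centre in (-1.0, 0.5):
        rho_k = (0.3 / np.pi) / ((omega - centre) ** 2 + 0.3**2)
        print(f"peak at {centre:+.1f} eV -> n_k = {nk_from_spectral(omega, rho_k, beta):.3f}")
```

A k-point whose spectral weight sits mostly below the Fermi level yields n_k near its full weight, while a pocket forming at Γ shows up as a local depletion of n_k, which is exactly what figure 8 visualizes.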
Ambient pressure

In figure 8 we show the k-resolved charge density at ambient pressure. The Γ point is at the bottom-right corner and the L point is at the upper-left corner. With LDA+DMFT, at 0 GPa (figure 8(a)), the L point is at a medium state density and the Γ point is at the highest state density. With LDA (figure 8(b)), we already see the pocket at the Γ point, with a clear depletion in density; the ETT has thus taken place at a negative pressure.

134 GPa

At 134 GPa, the L point stays at the same density in our LDA+DMFT calculations (figure 8(c)), reflecting that no ETT has occurred at the L point. With LDA, however, we see a clear depletion of the charge density at this point (figure 8(d)). At the Γ point, we note a very small drop in the density in our LDA+DMFT calculations. The drop is hardly visible at the scale of figure 8(c), and the region around the Γ point has therefore been replotted on a finer scale in figure 8(g). The small magnitude of this charge depletion is undoubtedly connected to the small size of the pocket seen in figure 7(a). Within LDA (figure 8(d)) the drop is unmistakable, just as the pocket in figure 7(b).

247 GPa

At 247 GPa, the LDA+DMFT calculations (figure 8(e)) show that the density at the L point has dropped, reflecting the ETT seen at the L point. The density at the Γ point shows a pronounced drop. No qualitative changes are seen in the LDA results (figure 8(f)). These conclusions agree well with the accurate estimations of the transition pressures given in [1]: 101.5 GPa at the Γ point and 183 GPa at the L point. Evidently, these changes in the LDA+DMFT calculations also agree well with what is seen in the Fermi surface plots.

Charge gradient

In figure 9 we examine the LDA+DMFT results for the Γ point in yet greater detail, using a color scheme which allows us to see the gradient of the charge density more easily. As pressure increases from P = 0 GPa (figure 9(a)), the high density region around the Γ point becomes narrower in the k_x direction and wider in the k_z direction (figure 9(b)). At 134 GPa (figure 9(c)) there is a clear loss of electrons, which is connected to the ETT. As pressure increases further to 247 GPa (figure 9(d)), the drop at the Γ point is clear. The Fourier transform of the charge density distribution gives additional valuable information and has its advantages: it gives information about the charge density at each k-point, instead of only the isoenergetic Fermi surface, so it is easier to notice subtle changes. For example, from figure 9 one can already see an early sign of an ETT at 86 GPa, from the charge depletion as compared with 0 GPa. As pressure increases, we observe the ETT at the Γ point between 86 and 134 GPa.

Summary and conclusions

In this work a theoretical investigation of the hcp-Os Fermi surface under pressure was performed using the LDA+DMFT technique, and details about the pressure-induced ETTs of its topology are presented. Although we find from the effective mass of the quasiparticles that Os should be classified as a weakly correlated system, the beyond-LDA treatment of electron correlations provided by the LDA+DMFT method leads to noticeable changes in the predicted pressure evolution of its electronic structure. The previously suggested ETT at the Γ point is not observed in LDA calculations if the experimental lattice constants are used; however, including correlations beyond LDA, by means of the LDA+DMFT method, we clearly see this ETT. Calculations with the LDA+DMFT method also capture the other ETT, at the L point, which is observed in LDA at a substantially lower pressure. Moreover, these ETTs can also be clearly seen in the Fourier transform of the charge density distribution. We have demonstrated how early signs of an ETT at the Γ point can be seen already at 86 GPa. Thus, LDA+DMFT calculations put the theoretical predictions in much better agreement with the experimentally detected anomalies connected with ETTs. Our results thus confirm that local dynamical quantum correlation effects may be important even in weakly correlated transition metals.
Grazing incidence mirrors for EUV lithography

Extreme UV lithography is one of the most favoured options for the next generation of lithography systems, being considered one of the keys to the 50 nm technology node. ZrN/TiN multi-layered coatings with high reflectivity at grazing incidence for EUV radiation (13.5 nm) are proposed as coatings for the collection mirror in an EUVL system. These films were deposited on different substrates (Si, glass, and different metals) by d.c. magnetron sputtering and characterized by XRD, AES, EDX, AFM and EUV reflectivity using synchrotron radiation.

Introduction

The ongoing development of the knowledge-based society has brought about new challenges derived from the increasing volume and complexity of the available information resources. For a faster and more affordable transfer and processing of information, further progress rests upon the development of increasingly smaller, higher density integrated microelectronics, and their availability at lower prices. The current industrial standard governing the manufacture of microprocessors, employing today's deep-ultraviolet lithography (DUVL), is limited by the minimum wavelength of 193 nm. As microprocessor feature size continues to decrease to sub-45 nm levels, greater spatial resolution must be obtained. In an optical lithographic system the critical dimension is determined by the Rayleigh criterion, so that in order to obtain a smaller critical dimension one can increase the numerical aperture while maintaining the conventional lithographic wavelength of 193 nm. This approach has permitted feature sizes below 45 nm [Rothschild et al., 2005]. However, the increasing demands of miniaturisation have already reached the limits of DUVL. Considering the Rayleigh criterion, another possible approach to minimize the critical dimension is to decrease the radiation wavelength. This idea is materialised in extreme-ultraviolet lithography (EUVL), using a wavelength of 13.5 nm [Stulen & Sweeney, 1999].
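As a quick numerical illustration of the Rayleigh-criterion argument, the snippet below compares the critical dimension CD = k1·λ/NA for the two wavelengths; the k1 and NA values are illustrative assumptions, not figures from this chapter.

```python
# Resolution gain from moving 193 nm -> 13.5 nm under the Rayleigh criterion
# CD = k1 * lambda / NA. The k1 and NA values are illustrative assumptions.
def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.4) -> float:
    return k1 * wavelength_nm / na

for label, lam, na in [("DUVL", 193.0, 1.35), ("EUVL", 13.5, 0.25)]:
    print(f"{label}: CD ≈ {critical_dimension(lam, na):.1f} nm")
```

Even with a much smaller numerical aperture, the hundredfold reduction in wavelength brings the critical dimension into the ~20 nm range quoted for EUVL below.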
In Europe, ASML and its main co-developer Carl Zeiss are heavily investing in the development of the key technological issues, aiming for the realization of an EUVL α-tool [Meiling et al., 2003], [Meiling et al., 2005] and its subsequent world-wide commercialization [ASML, 2009], while in the USA the SEMATECH organization has the leading role [SEMATECH, 2009]. EUVL is a domain undergoing intensive research at the moment, as one of the leading candidates among emerging lithography techniques, enabling 35 nm half-pitch patterning and providing extendibility to 22 nm, using solely reflective optics. The requirement to replace transmitting lenses with reflecting mirrors within the patterning tool is due to the strong absorption of EUV light in most materials, gases included, a high reflection coefficient being difficult to obtain at either near-grazing or near-normal incidence angles. The requirement for an entirely reflective optics, placed in vacuum, introduces numerous technological challenges, as does the requirement to efficiently generate 13.5 nm light with high intensity and high reliability. The EUV source specifications are derived, in accordance with the existing standards in visible and DUV lithography, from the customer requirements: high throughput (more than 100 wafers/hour), imaging quality and cost of ownership. To generate EUV light, hot and dense plasmas are required. EUV radiation is generated using gas-discharge-produced plasmas (DPPs), laser-produced plasmas (LPPs) with ultra-fast lasers, synchrotron or X-ray radiation [Stamm, 2004]. EUVL systems using LPP and DPP sources rely on optical configurations including collection and projection modules composed of grazing-incidence and near-normal-incidence reflective mirrors [Singh & Braat, 2000], [Bakshi, 2006]. In an LPP-based system, EUV light is produced by bombarding a sliver of Sn with a high-power laser, while in a DPP system the EUV light emerges from a Xe plasma enriched with metal EUV radiators, such as Li and Sn [Banine & Moors, 2004]. The EUV light produced by either source is collected by specially engineered EUV mirrors, which then focus the EUV beam into the EUV scanner to produce the microchip patterns. The main aspects to be addressed by the source specification are: the operating wavelength, the EUV power, the hot spot size, the collectable angle, the repetition rate, the pulse-to-pulse repeatability and the debris-limited lifetime of the components. While the first requirements concern only the EUV plasma source engineering, the last one also involves the engineering of the optical system, mainly the collector system: its optical design and the materials to be used. The mirrors responsible for collecting the light are directly exposed to the plasma and are therefore vulnerable to the damage done by high-energy ions [Komori et al., 2004], [Hansson et al., 2002] and other debris [Srivastava et al., 2007]. The damage associated with the high-energy process of generating EUV radiation has so far precluded the successful implementation of practical EUV light sources for lithography. This chapter addresses the issues related to the attainability of high-reflectivity grazing incidence collection mirrors with extended lifetime, to be used in EUVL systems.

Requirements for the collection optics in EUVL

In the lithographic process of patterning fine-scale structures onto a substrate, the radiation used to selectively expose the recording medium (resist) can be optical, e-beam, X-ray or ion beam. If optical radiation is used, including DUV and EUV, the lithographic system consists of four main elements, integrated into a unique optical system [Jaeger, 2002]: a. the light source; b. the mask containing the patterns corresponding to the structures to be fabricated; c. the exposure system, generating an aerial image of the mask pattern; and d. the resist, recording the image generated by the exposure system. In EUV lithographic systems there are two more modules to be integrated in the optical system: the collector and the projection modules [Bakshi, 2006]. Concerning the light source, to date the most used EUV light sources are the DPP ones. These sources are reported to generate more power, consume less energy and be less expensive. As a result, this type of source was integrated into alpha-level EUVL scanners. Also, the low power version of the DPP source has been used in EUV micro-exposure tools, in industrial EUV metrology and in EUV resist development projects [Lebert et al., 2003], [Song et al., 2006], [Bolanti et al., 2003], [Zuppella et al., 2009] and [Fiedorowicz et al., 2005]. This type of source was also used in the development and testing of the grazing incidence mirrors presented in the following [Choi et al., 2004]. The part of the EUV optical system including the mirrors designed to collect as much as possible of the EUV light produced by the source is known as the collector module, while the mirrors designed to focus the EUV light on the resist are part of the projection module.
Apart from the mirrors, the optics may also comprise filters for cutting the wavelengths above 40 nm. The mask is part of the exposure system, which includes the resist exposure stage, with movement control in the Angstrom range [Itani, 2009], [Wallace et al., 2007]. Due to the strong absorption of EUV light in any material, including gases (even at pressures in the Pa range), all the components of an EUVL system must reside in high and clean vacuum. The EUV light source itself is the cause of important demands on the geometry and materials of the EUV optics, as the heat and debris emerging from it can seriously damage the optics facing the source. Usually the EUVL optics is protected from the energetic debris of the source using gas (He) curtains, electrostatic and mechanical shields [Vargas-Lopez et al., 2005], [Bakshi, 2006]. The collection mirrors must have high reflectivity at grazing angles, high resistance to energetic particle bombardment, high adhesion to the substrate and a high stability over a wide range of temperatures, as the heat load from the total radiation emitted by the EUV source is significant. The overall reflectivity of EUV mirrors, especially of those in the collection module, is continuously degraded by erosion and contamination from within the EUV source, as Xe, Li and Sn are the conventional, currently used EUV light fuels [Allain et al., 2008]. This is a matter of great concern because it directly affects the available power of the EUV source, and thus the final cost of production [Neumann et al., 2007], [Bakshi, 2006]. The collector mirrors face a continuous bombardment of debris emerging from the EUV light sources (fast ions, neutrals, off-band radiation, droplets, and background impurities, i.e. H, C, N, O), as well as the heat load generated by the sources themselves, all of them inducing serious damage to the nearby collector mirrors. The challenge is to obtain a collection mirror exhibiting high reflectivity at grazing incidence angles, high resistance to bombardment by energetic particles, micro-chemical stability at high temperatures and corrosion resistance, combined with a small roughness, so as to prevent significant radiation loss via scattering.
Materials for EUV mirrors

The materials used for manufacturing the EUVL mirrors must have some valuable properties [Shin et al., 2009], [Hecquet et al., 2007]:
- manufacturing design freedom of shape and size;
- the ability to sustain polishing procedures down to a tenth of a nanometre for the final roughness, since roughness becomes a very sensitive parameter as the wavelength decreases;
- a low coefficient of thermal expansion (CTE), in order to reduce the optics distortion due to geometrical factors.
There are several types of such materials, such as the well known silica and quartz, or the recently developed zerodur (a lithium aluminosilicate glass-ceramic) and ULE (a titania-silica binary glass with zero CTE). The bare surfaces of these materials present lower reflectivity values at grazing incidence than most metallic surfaces of the same surface roughness. An all-reflective optical system can have either only near-normal incidence mirrors, or a combination of grazing incidence and near-normal incidence mirrors. In order to choose the best materials, modelling the interaction of EUV light with matter is a valuable tool. The wavelength domain to be considered is centred near the value where the maximum EUV throughput is obtained from the EUV sources, i.e. 13.5 nm [Bakshi, 2006]. In the following, all materials will be considered with respect to their properties at this specific wavelength. The interaction with matter of soft X-ray and EUV radiation (in the range 1-40 nm), in terms of transmission or reflectivity, can be explored using the free on-line facility of the Center for X-Ray Optics, within the Materials Science Division at Lawrence Berkeley National Laboratory, USA [CXRO, 2009], where the details of the interactions of soft X-rays with matter (photoabsorption and coherent scattering) are clearly explained. The basic assumption made is to consider the condensed matter as a collection of non-interacting atoms, a condition fulfilled for energies sufficiently far from the absorption thresholds; in the threshold regions the specific chemical state becomes important, so that direct experimental measurements are to be used. The inelastic Compton scattering cross section has not been included in the reflectivity calculations, as the Compton contribution is significant only for light elements (Z < 10) at energies above 10 keV [Hubbell et al., 1975; Hubbell et al., 1977]. Using the CXRO facility, one can easily obtain the ideal values of the EUV reflectivity of a given surface. For near-normal incidence mirrors, an enhancement of the reflectivity is obtained with a typical Bragg reflector, in which the thickness of each layer is approximately a quarter-wave. Although the reflectivity of a single layer-to-layer transition is very small, the addition of multiple reflections saturates towards a maximum reflectivity. EUV optics therefore uses multilayers (ML) formed from materials with alternately high and low absorption coefficients [Gloecker & Shah, 1995]; in the EUV range, light absorption is directly linked to the Z value of the material considered. The best known near-normal incidence mirrors for EUVL are made from 40 to 60 bi-layers of Mo (on top) and Si (on the bottom), the bi-layer period being 6.9 nm. Another important parameter to be considered is the ratio of the bottom layer thickness to the overall bi-layer thickness, which in this case
has the value Γ = 0.4. The ideal reflectivity of such a mirror is about 72% at normal incidence (the angle being measured with respect to the surface, not to the surface normal as in visible optics) [Benoit et al., 2006], [Wang et al., 2006], [Fiegl et al., 2006], [Schroeder et al., 2007]. However, these mirrors are very sensitive to oxidation and contamination, and their reflectivity decays in time due to the mixing of the individual layers, the mechanism being diffusion driven. Fig. 1 compares the ideal reflectivity obtained by simulation [CXRO, 2009] for a Mo/Si multi-layered mirror with 60 bi-layers, with no inter-diffusion (Fig. 1a, s = 0) and with a 1 nm inter-diffusion of the layers (Fig. 1b, s = 1). It can be observed that not only is the maximum reflectivity decreased, but the maximum also shifts towards lower wavelengths, generating an overall loss of EUV light in the system. Due to the specific pattern of the reflectivity of this multilayered mirror, its role in the EUVL system is also to create highly monochromatic radiation, through the repeated reflection on several mirrors. As presented above, the maximum EUV intensity from the Xe, Li or Sn radiators used in either DPP or LPP EUV sources is also obtained at this wavelength. Concerning the reflectivity at grazing incidence, it results from the system's geometry that the angle to be considered in the reflectivity modelling is about 6° with respect to the surface. Fig. 2 presents the ideal values obtained using the CXRO facility for the reflectivity at λ = 13.5 nm, at 6° incidence with respect to the surface, for the materials most used in optics (silica, quartz, zerodur and ULE), and Fig. 3 presents the ideal reflectivity of some thick metallic layers. As expected, the metallic surfaces present a much higher reflectivity for EUV light than the materials used as substrates, due to the different electronic configurations of these types of materials.
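The single-interface version of such a reflectivity estimate can be sketched directly from the Fresnel equations, writing the EUV refractive index as n = 1 − δ + iβ; the δ and β values below are order-of-magnitude placeholders, not the CXRO tabulated constants.

```python
import numpy as np

def grazing_reflectivity(theta_deg: float, delta: float, beta: float) -> float:
    """Fresnel reflectivity of a thick layer at grazing incidence.

    theta_deg is measured from the surface (EUV convention); the layer is
    described by its complex refractive index n = 1 - delta + 1j*beta.
    """
    theta = np.radians(theta_deg)
    n = 1.0 - delta + 1j * beta
    kz_vac = np.sin(theta)                             # normal wavevector, vacuum
    kz_med = np.sqrt(n**2 - np.cos(theta) ** 2 + 0j)   # normal wavevector, medium
    r = (kz_vac - kz_med) / (kz_vac + kz_med)
    return float(np.abs(r) ** 2)

if __name__ == "__main__":
    # Illustrative optical constants at 13.5 nm (order of magnitude only).
    for name, d, b in [("glass-like", 0.02, 0.01), ("metal-like", 0.08, 0.04)]:
        print(f"{name}: R(6°) = {grazing_reflectivity(6.0, d, b):.2f}")
```

At 6° grazing incidence the larger δ of a dense metallic layer keeps the incidence below the critical angle, which is the simple reason metals outperform bare glassy substrates in Fig. 2 and Fig. 3.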
To date, EUV mirrors are usually made from special substrate materials such as ULE or zerodur, subsequently coated with metallic thin films selected to be highly reflective in the 12-15 nm domain and to have a relatively high native oxidation resistance, e.g. palladium, ruthenium, and rhenium [Bakshi, 2006], [Alman et al., 2007]. The new approach considered by the authors is to substitute the metallic coatings with covalent-type materials, such as transition metal carbide and nitride thin films with low surface roughness values (< 0.5 nm), which exhibit high reflectivity at 13.5 nm [Braic et al., 2005], [Braic et al., 2004], [Braic et al., 2008]. The nitrides and carbides of the transition metals are well known for their stable micro-chemical properties at high temperatures, high oxidation resistance, high melting point, high hardness, high toughness and Young's modulus, high electric conductivity and excellent chemical stability, together with good wear resistance and high adhesion onto different substrates [Barshilia & Rajam, 2006], [Ducros & Sanchette, 2006], [Braic et al., 2006]. Their chemical inertness and high hardness are linked to their predominantly covalent bonding. Group IV-VI transition metals form nitride and carbide compounds characterized by a large number of nitrogen/carbon vacancies. It is known that the nitrides/carbides of the group IV metals (Ti, Zr and Hf) can crystallize only in the cubic NaCl (FCC) structure, while the metals from group V (Nb, Ta) and group VI (Mo) can form carbides either in cubic (MeC) or in hexagonal (Me2C) forms. While the group IV metallic carbides can accommodate up to about 50% vacancies on the non-metal sub-lattice and still retain their cubic structure, the group V and VI nitrides/carbides crystallize under different structures, switching from the hexagonal to the more stable cubic one with increasing nitrogen/carbon content, the process being also temperature dependent [Hugosson et al., 2001]. Due to the superior stability of their FCC crystallographic structure, the nitrides and carbides of the group IV transition metals were selected. From group V, the Nb compounds are good candidates, but they also present high Young's modulus values, so that adhesion to a non-metallic substrate may raise undesired problems. From group VI, Mo may also be a good candidate, but at lower temperatures (< 1000 °C) the FCC structure of its compounds tends to transform into a hexagonal or orthorhombic one; also, the mechanical properties and oxidation resistance are inferior to those exhibited by the compounds of the group V transition metals, probably due to the large ionic component of the bonding compared to the covalent one [Kanoun et al., 2007]. Fig. 4 presents the ideal values obtained using the CXRO modelling facility for the reflectivity at λ = 13.5 nm, at 6° incidence with respect to the surface, for the nitride and carbide compounds of Ti, Zr and Hf. From the results presented above, it follows that the best coatings to be used as grazing mirrors in the EUV collector system are the Zr-based carbide and nitride films. However, one major problem to be tackled in this application is the requirement to obtain coatings with high adhesion to the substrate and with a reduced intrinsic mechanical stress, which is known to build up during the growth process. There are several strategies for reducing the stress: ion bombardment during growth [Vladescu et al., 2007], a moderate substrate temperature during film deposition, and the use of a substrate with a CTE similar to that of the film to be deposited. All these are currently used during the films' deposition; however, the films are still not stress-free. To illustrate the dependence of the reflectivity on the surface and interface roughness, Fig. 6 presents the ideal reflectivity of a ZrN/TiN multilayer (n = 40, Λ = 7 nm, Γ = 0.1), either perfectly smooth (a, rms = 0 nm) or with a higher roughness at the interfaces between the individual layers (b, rms = 0.1 nm). These results clearly demonstrate the superiority of the Zr-based coatings for the surface finishing of the collector mirrors used in EUV lithographic systems.
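For a stack such as the ZrN/TiN multilayer of Fig. 6, the reflectivity including interface roughness can be sketched with the Parratt recursion, damping each Fresnel coefficient with a Nevot-Croce factor. The optical constants, the layer ordering and the thickness split below are assumptions for illustration; tabulated CXRO data would replace them in a real calculation.

```python
import numpy as np

def parratt_reflectivity(theta_deg, layers, substrate, wavelength_nm=13.5):
    """Grazing-incidence reflectivity of a layer stack via the Parratt recursion.

    layers    : list of (delta, beta, thickness_nm, sigma_nm), top to bottom;
                sigma is the roughness of each layer's top interface
    substrate : (delta, beta, sigma_nm) of the semi-infinite substrate
    """
    k0 = 2.0 * np.pi / wavelength_nm
    cos2 = np.cos(np.radians(theta_deg)) ** 2
    deltas = [0.0] + [l[0] for l in layers] + [substrate[0]]
    betas = [0.0] + [l[1] for l in layers] + [substrate[1]]
    sigmas = [l[3] for l in layers] + [substrate[2]]
    kz = [k0 * np.sqrt((1.0 - d + 1j * b) ** 2 - cos2 + 0j)
          for d, b in zip(deltas, betas)]

    def fresnel(j):
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        return r * np.exp(-2.0 * kz[j] * kz[j + 1] * sigmas[j] ** 2)  # Nevot-Croce

    n = len(layers)
    R = fresnel(n)                      # bottom (layer/substrate) interface
    for j in range(n - 1, -1, -1):      # walk the remaining interfaces upward
        phase2 = np.exp(2j * kz[j + 1] * layers[j][2])
        r = fresnel(j)
        R = (r + R * phase2) / (1.0 + r * R * phase2)
    return float(np.abs(R) ** 2)

if __name__ == "__main__":
    # Hypothetical optical constants for ZrN (assumed on top) and TiN at 13.5 nm;
    # n = 40 bilayers, period 7 nm, Gamma = 0.1 (bottom layer 0.7 nm) as in Fig. 6.
    for sigma in (0.0, 0.1):
        stack = []
        for _ in range(40):
            stack.append((0.07, 0.03, 6.3, sigma))   # ZrN-like layer
            stack.append((0.05, 0.02, 0.7, sigma))   # TiN-like layer
        R = parratt_reflectivity(6.0, stack, substrate=(0.02, 0.01, sigma))
        print(f"sigma = {sigma:.1f} nm -> R(6°) = {R:.3f}")
```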
Collector module design with grazing incidence mirrors

A straightforward approach for the design of the collection optics, based on a wide emission angle of the EUV source, consists of an ellipsoidal configuration, with the EUV source placed in one focus, the image for the projection mirror being formed in the other one. The collected EUV radiation is focused as a narrow circular shape of illumination, at a near-normal incidence angle, onto a reflecting Mo/Si spherical mirror. The focused beam resulting from the projection mirror is directed towards the resist, through a Si3N4 or a Zr filter, to a transmission mask. The role of the filters is to cut the radiation with wavelengths > 20 nm. The design of the first approach for the EUV collector used in our work is presented in Fig. 7. As can be seen, the central cone emerging from the EUV source, which contains most of the debris (fast atoms, ions and electrons), is lost. By minimizing the central cone and using a very long ellipsoid (1 m between the focal points at 0.12 m internal diameter), a high collecting efficiency is obtained. However, in most of the DPP EUV sources used, the emission of light and debris is directed mainly forward, within a narrow angle, so an off-axis collection optics allowing the collection of the whole cone is required. In the same ellipsoidal geometry, the source is oriented at approx. 10° off-axis and points towards a 210 mm long and 60 mm wide area near the central section of the ellipsoid. The image of the source is formed in the same position (focal point) as in the on-axis configuration. The total length of the optical path, from the EUV source to the projection spherical Mo/Si mirror, is approx. 1.4 m, imposing the same length for the vacuum chamber, which has to be pumped, according to its volume, with turbo-molecular pumps [Braic et al., 2008]. The off-axis configuration opened a new opportunity to reduce the optical path, by placing the Mo/Si mirror between the grazing-incidence ellipsoidal surface and the focal point, significantly shortening the optical path to approx. 0.8 m, and also offering a significant reduction of the vacuum chamber volume and of the pumping unit (Fig.
8). The projector is a Mo/Si coated mirror centred on the 13.5 nm wavelength. In the presented configuration, the diameter of the mirror is 2" and the curvature radius is 500 mm. The vacuum chamber is designed to be assembled from standard high vacuum stainless steel components (ISO 100 and ISO 160). Sealing of the parts is done with fluoro-elastomer gaskets with a very low out-gassing rate. The high and clean vacuum environment is considered appropriate for the exposure tests. The residual gas composition is checked with an RF mass spectrometer connected to the exposure chamber. UHV technology was considered not appropriate, because the exposure module components are not suitable for high temperature baking (up to 250 °C). The cleanliness of a vacuum environment is determined by its components and their out-gassing rate under vacuum. The chamber is differentially pumped by a turbo pump located near the EUV source exit (mainly for removing the heavy gases: Xe, Ar) and by another turbo pump located near the optics and exposure module (for He, resist and other component effluents). The turbo pumps are backed by dry mechanical rotary pumps. One requirement of the EUVL optics is to minimize the contamination of the sensitive Mo/Si mirrors by the debris or effluents resulting from the components used, e.g. resist, fixture materials, etc. [Shin et al., 2009]. As a result, the use of any carbon-containing coating was dismissed for further work related to EUV mirrors [Braic et al., 2009], despite their valuable properties [Balaceanu & Braic, 2005].

Coatings' deposition for the grazing incidence mirrors

The coatings were deposited on optical glass, Si, plain carbon steel and high-speed steel substrates by the bipolar pulsed reactive magnetron sputtering method, using Ti and Zr cathodes and a reactive atmosphere consisting of a mixture of N2 and Ar gases. The deposition set-up is presented schematically in Fig. 9 (deposition chamber set-up for the multilayered coatings). The cylindrical deposition chamber (300 mm internal diameter and 500 mm height) has three rectangular (140×250 mm) magnetron targets. The Ti and Zr cathodes were fed by a pulsed bipolar generator (ENI RPG5 type), to avoid the problems related to target poisoning during reactive deposition. The magnetron discharges are fed with argon and nitrogen through a two-channel Brooks 502 mass flow controller. The argon is introduced through pipes in front of each magnetron target, while the nitrogen is directed towards the substrates. The substrate holder assembly includes a thermocouple for temperature measurement during deposition, is electrically insulated, and provides the rotation of the individual sample holders. The rotation is driven by a stepper motor, in order to have a computer-controlled rotation speed and to be able to move the substrates in front of the two different targets. The distance from any of the cathodes to the substrate holder is 10 cm. The UHV pumping system ensures a base pressure in the deposition chamber of about 1×10⁻⁵ Pa. The absolute pressure was measured with an MKS 626 Barocel capacitance manometer.
To obtain alternated ZrN/TiN films, two shutters placed in front of each magnetron were used. By periodically opening and closing the shutters and rotating the substrates in front of the active magnetron, the Ti and Zr ions/atoms sputtered from the magnetrons were alternately introduced into the deposition atmosphere for a predetermined time. Depending on the deposition rates, previously measured for each type of monolayer (TiN and ZrN), and on the operating speed of the shutters, various multilayer configurations (with different bilayer thicknesses Λ and Γ parameters) were prepared. Following the information obtained by simulation, multilayers with a constant Γ value, equal to 0.1, and different bilayer thicknesses were deposited. In order to differentiate between the different types of ML, the notation ZrN/TiN-n/Λ will be used, where "n" is the number of bi-layers and "Λ" is the bi-layer thickness. Different overall coating thicknesses were deposited, depending on the tests envisaged: for the EUV reflectivity measurements and the surface roughness evaluation the thickness was about 280 nm (n = 40; Λ = 7 nm), while for the elemental, structural and mechanical characterisation the total thickness was much greater, about 3500 nm (n = 500; Λ = 7 nm). Prior to deposition, the specimens to be coated were chemically cleaned in an ultrasonic bath with isopropyl alcohol. Both the substrates and the magnetron targets were sputter-cleaned in vacuum by Ar ion bombardment (1200 eV) for 10 minutes, in order to remove any residual impurities. To ensure the deposition of stoichiometric layers, optical emission spectroscopy (OES) was used, by on-line monitoring of the Zr, Ti and N2+ lines emitted by the discharge, using a monochromator (Digikrom DK480) equipped with an R446 Hamamatsu photomultiplier. The acquisition, storage and processing of the spectra were performed by an Advantech 818 data acquisition system. The light signal emitted by the plasma was transmitted to the monochromator, through a quartz window and a collimator positioned in front of the target, by means of an optical fibre. Concomitantly, the process gases were monitored by RF mass spectrometry (SRS RGA 100), in order to maintain a fine balance between the argon and the nitrogen in the deposition chamber. In the magnetron sputtering deposition of nitrides (TiN, ZrN), the ratio of the Ar/N2 partial pressures in the working gas determines two working regimes, with different sputtering yields of the cathode material under ion bombardment, affecting the film's stoichiometry. During deposition, the compound deposited onto the substrates also forms on the cathode, so a decrease of the cathode sputtering yield, of about 5 times, is observed. The transition between the "metallic" and the "ceramic" modes takes place abruptly at a small change of the nitrogen flow (a small increase of the nitrogen content in the working atmosphere). The continuous monitoring of the relative intensity of a metallic line (Ti or Zr) shows a sharp decrease in the transition region. By continuously monitoring the metallic lines, any small tendency of decrease is followed by a command to either decrease the nitrogen flow or increase the current fed to the magnetron. The optimized conditions were established for the TiN and ZrN single layers. Fig. 10 presents the variation of the Ti line intensity (λ = 468.19 nm) upon increasing the nitrogen flow in the deposition system.
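The line-intensity feedback described above is essentially a small control loop. A minimal sketch is given below, with a hypothetical set-point, gain and flow limits; read_line_intensity and set_n2_flow stand in for the actual instrument I/O, and the loop only illustrates the direction of the correction, not the real process parameters.

```python
def regulate_n2_flow(read_line_intensity, set_n2_flow, setpoint,
                     flow0=10.0, gain=0.05, steps=100,
                     flow_min=5.0, flow_max=20.0):
    """Hold the metallic (Ti or Zr) OES line near a set-point by trimming N2.

    A drop in the metallic line signals target poisoning (transition to the
    'ceramic' mode), so the N2 flow is reduced; a rise allows more N2.
    """
    flow = flow0
    for _ in range(steps):
        error = read_line_intensity() - setpoint
        flow = min(flow_max, max(flow_min, flow + gain * error))
        set_n2_flow(flow)
    return flow

if __name__ == "__main__":
    import random
    fake_oes = lambda: 100.0 + random.uniform(-5.0, 5.0)  # fake OES reading
    final = regulate_n2_flow(fake_oes, lambda f: None, setpoint=100.0)
    print(f"settled N2 flow: {final:.2f} sccm")
```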
Coatings' characterisation

The chemical composition of the TiN, ZrN and ZrN/TiN coatings was determined by energy-dispersive X-ray (EDX) spectroscopy, by means of an XL-30-ESEM TMP scanning electron microscope. The coatings' texture, phase composition and bilayer period Λ were determined by high-angle X-ray diffraction (XRD) with Cu Kα radiation, using a Rigaku MiniFlex II device. Auger electron spectroscopy (AES) was used to determine the elemental composition of the films, using a PHI Model 3017 AES PC-based system equipped with an ion gun (for sputter cleaning and etching) operated in the range 3-5 keV. The N/Ti ratio was determined from the positive slope of the nitrogen line located at 377 eV and the negative slope of the Ti peak at 418 eV. The coatings' resistance to ion beam bombardment (5 keV, Ar+) was tested in the AES system, using a collimated ion gun for in-depth analyses; the assessment was done by evaluating the surface roughness by AFM. The film thicknesses were determined with a surface profilometer (Dektak 150). The surface morphology was observed with an atomic force microscope (Veeco Innova AFM/SPM) operating in tapping mode. RBS spectra were obtained using a 2.7 MeV He+ ion beam, revealing the elemental composition and the modulation periodicity of the multilayers, for bilayer period values Λ ranging from 80 to 160 nm. The backscattered particles were detected by surface barrier detectors placed at 165° to the beam direction. The deposited samples were measured for reflectivity in the range λ ∈ [11, 17] nm at the synchrotron facility of the National Institute of Standards and Technology [NIST, 2009]. Microhardness (Vickers) measurements were performed with a microhardness tester at a 0.15 N load. Scratch tests under standard conditions (10 N/min·mm), using an indenter tip with electronic control of the x, y, z position, were done to estimate the coatings' adhesion. The critical load (Lc) values were determined by optical microscopy, Lc being defined as the load where film flaking starts. Typical EDX spectra are shown in figures 11-13. The elemental composition of the films, as resulting from the EDX analyses, is presented in Table 1 (elemental composition of the TiN and ZrN coatings). It can be seen that the single-layer coatings are almost stoichiometric: N/Zr = 0.9; N/Ti = 1.1. The presence of a small amount of oxygen is probably due to contamination during sample handling in the open atmosphere before the composition analysis. For the multilayers, the EDX analysis revealed a much higher Zr content compared with that of Ti. This result is due to the differences between the ZrN and TiN individual layer thicknesses [Braic et al., 2006]. Three typical diffraction patterns of ZrN/TiN multilayers deposited on optical glass with different bilayer periods Λ are shown in Fig. 14a, 14b and 14c. As in the case of the single-layer coatings (TiN, Fig. 15, and ZrN, Fig. 16), the diffraction patterns of the multilayers exhibit a strong (111) preferred orientation. For the ZrN/TiN ML coatings with a large bilayer period (Λ = 550 nm, Fig. 14a), the diffraction lines belong both to the ZrN and to the TiN films, from which the coating is composed. It is worth noting that the diffraction pattern is not similar to that exhibited by a TiZrN layer (Fig. 17). For the nanometre-scale ZrN/TiN multilayers (Λ < 10 nm), the pattern generally consists of a Bragg peak located at the average lattice spacing of the multilayer, surrounded by equally spaced satellite peaks, as can be seen in Fig. 14b and Fig. 14c. The bilayer period Λ was calculated, as in ref.
[Yashar & Sproul, 1999], from

sin θ± = sin θ_B ± mλ / (2Λ),

where θ± are the positions of the m-th order positive (+) and negative (−) satellite peaks, θ_B is the position of the main Bragg reflection and λ is the X-ray wavelength. In the cases illustrated in figures 14b and 14c, the calculations lead to Λ values of 9.2 nm and 7.2 nm, in excellent agreement with the values determined from the measured overall coating thickness and the deposition time (9.1 nm and 7.0 nm).
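The satellite-peak relation can be inverted directly for Λ. The snippet below does this for first-order satellites with Cu Kα radiation; the peak positions are hypothetical, chosen to land near the 7.2 nm case reported above.

```python
import numpy as np

def bilayer_period(theta_sat_deg, theta_bragg_deg, m=1, wavelength_nm=0.15406):
    """Lambda = m*lambda / (2*|sin(theta_sat) - sin(theta_B)|).

    wavelength_nm defaults to Cu K-alpha (0.15406 nm); angles are the
    diffraction half-angles theta, in degrees.
    """
    ds = abs(np.sin(np.radians(theta_sat_deg)) - np.sin(np.radians(theta_bragg_deg)))
    return m * wavelength_nm / (2.0 * ds)

if __name__ == "__main__":
    theta_b = 17.00                    # hypothetical main Bragg peak
    theta_plus = theta_b + 0.64        # hypothetical first-order satellite
    print(f"Lambda ≈ {bilayer_period(theta_plus, theta_b):.1f} nm")
```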
The AES analyses were done on ZrN (Fig. 18a), TiN (Fig. 18b) and TiZrN (Fig. 19) monolayers, in order to obtain the specific pattern of the individual layers in the multi-layered structure, as well as that of their possible mixture. For this purpose, in order to have a clear answer, multilayers with two large Λ values (40 and 180 nm) and Γ = 0.5 were investigated. The multilayers with high Λ values were studied in order to determine the etching rates of the TiN and ZrN layers. The elemental analyses were done after cleaning the surface by ion bombardment etching (Ar+, 3 keV) for 10 min. The roughness values are summarized in Table 2, where Ra is the roughness average, Rq is the rms roughness and Rt is the maximum height of the profile in the investigated area. These results do not show any change in the roughness of the coated surfaces as compared with the uncoated ones. The experiments showed that the increase of the number of bilayers n, accompanied by the increase of the overall thickness, leads to a roughness increase. A comparison between the roughness of ML coatings with different thicknesses, as resulting from variations in the number of bilayers, is presented in Table 3. The RBS spectra of the ZrN/TiN multilayer (n = 5, Λ = 80 nm, Γ = 0.5), presented in Fig. 22, show the experimental data and the simulated curves. For the coatings with large bilayer periods (80-160 nm), a good agreement between the experimental and theoretical curves was obtained. The individual layers can be clearly distinguished and their thicknesses were accurately determined. On the other hand, the in-depth resolution of the method does not allow observing the multilayer structure for bilayer periods below 40 nm. The aim of the presented RBS analyses was to evidence the well-structured multilayered coatings. For the ZrN/TiN-40/7 ML type, the measured reflectivity at λ = 13.5 nm, at grazing incidence angles ranging from 6° to 15° (Fig. 23), proved to be in good agreement with the values obtained by modelling (Fig. 5). The observed decrease of the EUV reflectivity, as compared to the ideal values, is due to the surface contamination in open air during sample manipulation. The mechanical characteristics of the films (Vickers microhardness HV0.015, in GPa, and critical load Lc, in N) are summarized in Table 5. As for the adhesion, the critical failure loads Lc for the monolayers in the scratch test were in the range of 47-56 N, the highest value being measured for the ZrN layer. A higher adhesion strength was found for the ZrN/TiN-700/9 coatings (Lc = 58 N). This result might be accounted for by the reduction of the residual stress in the multilayered structure, as commonly reported for various multilayers, e.g. [Oh & Je, 1993]. It can also be seen that the adhesion decreased with decreasing bilayer period. This finding could be attributed to the role played by the interfacial bonding, coherency strain and interfacial delamination [Abadias et al., 2007], which is more pronounced as the number of layers, and hence of interfaces, increases.

Conclusions

ZrN/TiN reflective hard coatings with bilayer periods Λ in the nanometre range were successfully deposited on Si, optical glass and other test substrates using the pulsed bipolar magnetron sputtering method. The monolayer films of ZrN and TiN were almost stoichiometric (N/Zr = 0.9 and N/Ti = 1.1). The XRD patterns of the MLs with small Λ values were typical of superlattice coatings, consisting of a main Bragg peak surrounded by satellite peaks. Ion bombardment (5 keV Ar+) of the MLs, intended to mimic the bombardment with EUV source debris, showed for the ZrN layer only a slight decrease of the rms roughness, by 0.3 nm after 30 minutes of ion bombardment. The multilayers with bilayer periods Λ in the 7-9 nm range were the hardest (~35 GPa) and exhibited the best substrate adhesion, but good adhesion values were obtained for all the other coatings as well. The deposited films with a multilayered architecture showed an enhanced ion bombardment resistance, presenting promising reflectivity values for the 13.5 nm EUV radiation. Obtaining dense, adherent and highly reflective coatings for grazing incidence by the bipolar pulsed magnetron sputtering deposition method is a valuable research direction. As future challenges, a diversity of problems, such as the ability to create a reliable high power EUV source, the maintenance of the EUV mirrors through the use of debris mitigation schemes and the cleaning of contaminants, the development of a resist with low line-edge roughness, and of a defect-free EUV mask, need to be solved. Each challenge needs to be overcome for EUVL to be a viable candidate for high volume manufacturing. The combination of efficient debris mitigation schemes, innovative methods for quick mirror cleaning in adequate gases, as well as new coatings with high toughness, high adhesion and chemical inertness, will provide the optimal design for EUVL systems, as reliable tools for a cost effective mass production of nano-electronic components.
CeL-ID: cell line identification using RNA-seq data

Background
Cell lines form the cornerstone of cell-based experimental studies aimed at understanding the underlying mechanisms of normal and disease biology, including cancer. However, it is commonly acknowledged that contamination of cell lines is a prevalent problem affecting biomedical science, and the available methods for cell line authentication suffer from limited access as well as from being too daunting and time-consuming for many researchers. Therefore, a new and cost effective approach for the authentication and quality control of cell lines is needed.

Results
We have developed a new RNA-seq based approach named CeL-ID for cell line authentication. CeL-ID uses RNA-seq data to identify variants and compare them with the variant profiles of other cell lines. RNA-seq data for 934 CCLE cell lines downloaded from NCI GDC were used to generate cell line specific variant profiles, and pair-wise correlations were calculated using the frequencies and depth of coverage values of all the variants. Comparative analysis of the variant profiles revealed that they differ significantly from cell line to cell line, whereas identical, synonymous and derivative cell lines share high variant identity and are highly correlated (ρ > 0.9). Our benchmarking studies revealed that the CeL-ID method can identify a cell line with high accuracy and can be a valuable tool for cell line authentication in biomedical science. Finally, CeL-ID estimates the possible cross contamination using a linear mixture model if no perfect match is detected.

Conclusions
In this study, we show the utility of an RNA-seq based approach for cell line authentication. Our comparative analysis of variant profiles derived from RNA-seq data revealed that the variant profiles of each cell line are distinct and overall share low variant identity with other cell lines, whereas identical or synonymous cell lines show significantly high variant identity; hence variant profiles can be used as a discriminatory/identifying feature in a cell authentication model.

Electronic supplementary material
The online version of this article (10.1186/s12864-018-5371-9) contains supplementary material, which is available to authorized users.

Background
Cell lines are an indispensable component of biomedical research and serve as excellent in vitro model systems in disease biology research, including cancer. Cell lines are usually named by the researcher who developed them and until recently lacked a standard nomenclature protocol [1-3]. This has led to cell line misidentification and poor annotation. In addition, cell lines also suffer from cross-contamination from other sources, including other cell lines [1,4]. All these factors affect overall scientific reproducibility. Common contaminants include Mycoplasma and other human cell lines, including HeLa [5-8]. Cell line contamination is regarded as one of the most prevalent problems in biological research [1-5,7], and the ongoing publication of irreproducible research is estimated to cost ~28 billion dollars each year in the USA alone [9]. Though cross contamination of cell lines has been acknowledged for almost 50 years [1-4,9], very few researchers check for contamination, probably because of the lack of access to cell authentication methods. Recently, however, the awareness of the importance of cell line authentication has increased, and NIH and various journals now require researchers to authenticate cell lines [1,10].
It has been reported that approximately 15 to 20% of the cell lines currently in use have been misidentified [3,11]. This includes many from the large datasets stored in public repositories [11]. Profiling of short tandem repeats (STRs) across several loci is the most common and standard test for cell line authentication, as recommended by the Standards Development Organization Workgroup ASN-0002 of the American Type Culture Collection (ATCC) [1,2,9-11]. However, the genetic instability of cancer cell lines, with microsatellite instability, loss of heterozygosity and aneuploidy, makes STR-based validation problematic [1-3]. Recent studies have also explored using the more stable single nucleotide variant genotyping for cell line authentication, either in combination with STR profiles or alone [1,9,11]. It has been shown that a carefully selected panel of SNPs confers a power of re-identification at least similar to that provided by STRs [1,9,11-15]. Although many SNP based methods have been developed and are being used for cancer cell line authentication, these methods still suffer from a lack of rapid access and from not being cost effective. With the advent and success of sequencing technologies, more and more researchers are using RNA sequencing to profile large amounts of transcript data to gain new biological insights. Moreover, RNA-seq data is also being used to identify single nucleotide variants in expressed transcripts [16]. It may be noted here that variants from RNA-seq cover around 40% of those identified from whole exome sequencing (WES), and up to 81% within exonic regions [17]. In a recent report, the authors successfully re-identified seven colorectal cell lines by comparing their SNV profiles obtained from RNA-seq data to the mutational profiles of these cell lines in the COSMIC database [11,18]. In this study, we present an RNA-seq based approach for Cell Line Identification (CeL-ID). We identify variants in each cell line using RNA-seq data, followed by a pairwise variant profile comparison between cell lines using the frequencies and depth of coverage (DP) values. Comparative analysis of the variants revealed that the variant profiles are unique to each cell line. Our benchmarking studies revealed that the CeL-ID method can identify a cell line with high accuracy and can be a valuable tool for cell line authentication in biomedical research. In addition, using a linear model regression technique, the approach can also reliably identify a possible contaminator if requested. We chose to explore the utility of RNA-seq data for cell line authentication because it is the most commonly used of the seq-based methods and is also relatively inexpensive; we also determined the minimum number of sequence reads required per RNA-seq sample to maintain the authentication accuracy, using a series of BAM files subsampled from 1 million up to 50 million reads. With the popularity and accessibility of RNA-seq technology, a significant number of studies already involve RNA-seq data, and hence the same data can also be used to check the authenticity of the cell line.

CCLE dataset
The Cancer Cell Line Encyclopedia (CCLE) is a collaborative project focused on the detailed genomic and pharmacologic characterization of a large panel of human cancer cell lines, in order to link genomic patterns with distinct pharmacologic vulnerabilities and to translate cell line integrative genomics into the clinic [19,20]. Genomic data for around 1000 cell lines are available for public access and use.
To be precise, the National Cancer Institute (NCI) Genomic Data Commons (GDC) legacy archive hosts RNA sequencing data for 935 cell lines, whole exome sequencing (WES) data for 326 cell lines and whole genome sequencing (WGS) data for 12 cell lines (https://portal.gdc.cancer.gov/). The names of the cell lines are used as listed in the NCI GDC archive and are given in Additional file 1. We were able to download the RNA-seq bam files for all cell lines except one, named 'G27228.A101D.1', and the whole exome sequencing bam files for all 326 cell lines. These bam files were processed for variant calling using our in-house pipeline. The variant calling process included removal of duplicate reads (samtools [21] and picard [https://broadinstitute.github.io/picard]), followed by local re-alignment and re-calibration of base quality scores (GATK [22]), and finally variant calling using VarScan [23], covering both SNPs and indels. Downstream filtering (region-based, to include only exome regions, with sufficient coverage and detectable allele frequency) and all other analyses were done using in-house Perl and MATLAB scripts. No filtering based on mutation type (missense, nonsense or frameshift indels) or allele type (such as bi-allelic) was applied to the CCLE samples. An illustrative depiction of the overall pipeline is shown in Fig. 1a. CCLE gene expression data were collected from https://portals.broadinstitute.org/ccle/data and contain RPKM values for all the genes in 1019 cell lines, covering the whole 935-line CCLE RNA-seq set.

Independent RNA-seq datasets
We also used two publicly available RNA-seq datasets from GEO as independent test sets. The first comprises 12 MCF7 cell lines (GSE86316), whereas the second has data for eight HCT116 cell lines (GSE101966) [24,25]. These were generated to profile mRNA expression levels in MCF7 cells after silencing or chemical inhibition of MEN1 [24] and in HCT116 cells after loss of ARID1A and ARID1B [25], respectively. We downloaded the fastq files for all these samples, aligned all reads to the UCSC hg19 transcriptome using RSEM [26], and then called variants using the pipeline described earlier (Fig. 1a).

Fig. 1. Schematic overview of the CeL-ID method. a Shown are, in brief, the different steps involved in CeL-ID, including the evaluation of the robustness of the model, the testing on an independent dataset (light blue) and the effect of subsampling on accuracy (light brown). b Flowchart of the contamination estimation model.

We purposefully used a different aligner, RSEM [26], here in order to check the effect of different read aligners.

Correlation and hierarchical clustering
To assess whether two cell lines are identical or highly similar in terms of their genome-wide sequence variation profiles or their expression levels, we chose to use the Pearson correlation of the altered allele frequencies (FREQ) across two cell lines, or of the expression levels, computed over the non-zero FREQ positions shared between the two cell lines with at least 10-fold coverage. We chose FREQ, instead of directly counting the altered allele depth (AD), because the majority of altered allele fractions do not change with the expression level; allele-specific expression may appear in cell lines under certain treatments, but it will hopefully be a small proportion of the typically massive number of SNPs under consideration. To be specific, for any two cell lines ⟨i, j⟩, the set of variants to be tested is

V = { k : (f_i,k > 0 or f_j,k > 0) and (d_i,k ≥ 10 or d_j,k ≥ 10) },   (1)

where d_i,k and f_i,k are the depth of coverage (DP) and the altered allele frequency at genomic location k of the i-th cell line, respectively. Note that we require a variant to exist in at least one cell line with 10-fold coverage. If a gene is not expressed, the mutations within this gene will not be considered unless the partner cell line expresses the gene at a sufficient level; the expression difference is therefore already embedded in the Pearson correlation ρ_ij = σ_ij / (σ_i σ_j), where the covariance σ_ij and the standard deviations σ_i and σ_j are evaluated over all variants in V.
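A compact sketch of this comparison is given below; the variant-selection mask is one plausible reading of Eq. 1, and the FREQ/DP arrays are simulated stand-ins for VarScan output.

```python
import numpy as np

def freq_correlation(freq_i, freq_j, dp_i, dp_j, min_dp=10):
    """Pearson correlation of altered allele frequencies between two cell lines.

    Variants are kept when at least one line has non-zero FREQ and at least
    one line has >= min_dp coverage (one plausible reading of Eq. 1).
    """
    keep = ((freq_i > 0) | (freq_j > 0)) & ((dp_i >= min_dp) | (dp_j >= min_dp))
    x, y = freq_i[keep], freq_j[keep]
    return float(np.corrcoef(x, y)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    base = rng.uniform(0.0, 1.0, n) * (rng.random(n) < 0.1)   # sparse FREQ profile
    dp = rng.integers(0, 100, n)                               # simulated coverage
    same = np.clip(base + rng.normal(0.0, 0.02, n), 0.0, 1.0)  # replicate-like line
    other = rng.uniform(0.0, 1.0, n) * (rng.random(n) < 0.1)   # unrelated line
    print("replicate:", round(freq_correlation(base, same, dp, dp), 3))
    print("unrelated:", round(freq_correlation(base, other, dp, dp), 3))
```

The replicate-like pair lands near ρ = 1, while the unrelated pair stays near 0, mirroring the separation the paper relies on for authentication.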
To be specific, for any two cell lines ⟨i, j⟩, the set of variants to be tested is

V_ij = { k : f_i,k > 0 or f_j,k > 0, with d_i,k ≥ 10 or d_j,k ≥ 10 },   (1)

where d_i,k and f_i,k are the depth of coverage (DP) and the altered allele frequency (FREQ) at genomic location k of the i-th cell line, respectively. Note that we require a variant to exist in at least one of the two cell lines with 10-fold coverage. If a gene is not expressed, the mutations within that gene are not considered unless the partner cell line expresses the gene at a sufficient level; the expression difference is therefore already embedded in the Pearson correlation

ρ_ij = σ_ij / (σ_i σ_j),

where the covariance σ_ij and the standard deviations σ_i, σ_j are evaluated over all variants in V_ij. Correlations of gene expression levels between two cell lines are likewise evaluated with the Pearson correlation coefficient, requiring an expression level > 0.1 RPKM in at least one cell line. Hierarchical clustering was performed in MATLAB, using the Pearson correlation of FREQ as the distance measure (over the SNPs determined by Eq. 1) and the average linkage method.

To determine the significance of a detected correlation coefficient for a given cell line, we generated all pair-wise correlations for the 934 RNA-seq samples; their distribution follows a normal distribution N(μ, σ). A similar distribution is also observed for the pair-wise correlations of the WES samples. To estimate the distribution parameters, we removed correlation coefficients less than 0 (unlikely) and greater than 0.8 (most likely due to replicate and derivative cell lines in the CCLE collection), so the model is a truncated normal density on the interval (a, b):

f(ρ; μ, σ, a, b) = φ((ρ − μ)/σ) / { σ [Φ((b − μ)/σ) − Φ((a − μ)/σ)] },  a ≤ ρ ≤ b,   (2)

where we fixed the cut-offs a = 0 and b = 0.8, and φ and Φ are the standard normal density and distribution functions, respectively. We chose b = 0.8 as the upper threshold since pairs with correlation > 0.8 are derived from the same parental lines or have some other biological relationship (see subsection "Cell line authentication using variant comparisons" in the Results section). Maximum-likelihood estimation (the MATLAB mle() function) was employed in this study, and the distribution parameters (scaled to match the histogram setting) were estimated for the CCLE collection. For any given correlation coefficient ρ, the p-value is

p = 1 − F(ρ),

where F is the cumulative distribution function of Eq. 2. We consider two samples possibly related if p < 0.001, and most likely derived from the same cell of origin if p < 10⁻⁴. If multiple samples are identified as matching cells, Eq. 1 can be revised to exclude all variants shared by these matching cells, and the process repeated. For gene expression levels, the distribution of pair-wise correlation coefficients is skewed much closer to 1.0, making it difficult to separate matching cells from mismatched cells (data not shown).
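A minimal Python sketch of this comparison and significance test is given below. The paper's own implementation used MATLAB's mle(); here the truncated-normal fit of Eq. 2 is written out as an explicit likelihood, and all array names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def pairwise_rho(freq_i, dp_i, freq_j, dp_j, min_dp=10):
    # Eq. 1: site detected (FREQ > 0) in at least one line, covered >= 10x
    keep = ((freq_i > 0) | (freq_j > 0)) & ((dp_i >= min_dp) | (dp_j >= min_dp))
    return np.corrcoef(freq_i[keep], freq_j[keep])[0, 1]

def fit_truncnorm(rhos, a=0.0, b=0.8):
    # maximum-likelihood fit of the truncated normal of Eq. 2
    x = rhos[(rhos > a) & (rhos < b)]

    def nll(theta):
        mu, sigma = theta
        if sigma <= 0:
            return np.inf
        z = norm.cdf((b - mu) / sigma) - norm.cdf((a - mu) / sigma)
        return -(norm.logpdf(x, mu, sigma) - np.log(z)).sum()

    return minimize(nll, x0=[x.mean(), x.std()], method="Nelder-Mead").x

def p_value(rho, mu, sigma, a=0.0, b=0.8):
    # p = 1 - F(rho), with F the CDF of Eq. 2; rho beyond b is maximally significant
    if rho >= b:
        return 0.0
    za, zb = norm.cdf((a - mu) / sigma), norm.cdf((b - mu) / sigma)
    return 1.0 - (norm.cdf((rho - mu) / sigma) - za) / (zb - za)
```

As a sanity check, with the parameters (μ, σ) = (0.464, 0.047) reported in the Results for the RNA-seq profiles, p_value(0.609, 0.464, 0.047) comes out near 0.001, matching the quoted L 0.001 = 0.609 threshold.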
Contamination estimation using linear mixture model
In addition to authenticating cells, one may also want to know whether the processed cells are contaminated by other cells, possibly from the CCLE or from additional cell lines collected in the lab along with RNA-seq data. Assuming the test sample is a mixture of cell lines x1 and x2 with unknown proportions q1 and q2, and denoting the mixture cell as y, we have

y = q1 x1 + q2 x2,   (3)

where y, x1, x2 are vectors of FREQ values at selected variant sites of the test (mixture) sample and the CCLE cell lines. Eq. 3 can be re-formatted in matrix form as Y = qX, with q = [q1, q2, ...], if a mixture of more than two cell lines is hypothesized.

To demonstrate the proof of concept, our current implementation takes the top 200 sites in each direction with the largest difference in FREQ between the two samples (400 SNPs in total). To further simplify the procedure, we first use CeL-ID itself to identify the dominant cell line, say x1. Following similar studies on de-convoluting cell-type proportions [27,28], we then test all 934 cell lines in the CCLE collection as x2, using robust linear regression (the MATLAB fitlm() function) to estimate q1 and q2, subject to q1 + q2 ≤ 1. Slightly differently from typical cell-type deconvolution methods, after determining the first contaminator we can iteratively add further candidates from the entire CCLE collection and re-run the regression, terminating the process when a q value becomes negative or the regression fails (Fig. 1b).

We designed a simulation procedure to evaluate the effectiveness of the robust linear model, as follows: in Eq. 4a, Gaussian noise N(μ, σ) is added to the q values, vectorized to the number of variants (each entry taking a Gaussian random number with mean q1 or q2), normalized such that

(1/L) Σ_k [ N_k(q1, σ_q1) + N_k(q2, σ_q2) ] = 1,   (4a)

where L is the number of variants. This is followed by further Gaussian noise of standard deviation σ_f added to the FREQ values, which we vary from 0 to 20.
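The sketch below is a minimal Python rendering of this estimation-plus-simulation loop, under stated assumptions: ordinary least squares stands in for MATLAB's robust fitlm(), the site selection follows the 200-per-direction rule, and the array names and numbers in the illustrative run are invented for the example.

```python
import numpy as np

def select_sites(y, x1, n=200):
    # top n sites per direction with the largest FREQ difference (400 total)
    order = np.argsort(y - x1)
    return np.concatenate([order[:n], order[-n:]])

def estimate_proportions(y, components):
    # least-squares fit of Eq. 3 (the paper uses robust regression); returns q
    X = np.column_stack(components)
    q, *_ = np.linalg.lstsq(X, y, rcond=None)
    return q  # the iterative search stops when any entry turns negative

def simulate_mixture(x1, x2, q1, sigma_q, sigma_f, rng):
    # Eq. 4a: per-site noisy proportions, normalized so their mean sums to 1,
    # then additive Gaussian noise on the FREQ values themselves
    L = len(x1)
    w1 = rng.normal(q1, sigma_q, L)
    w2 = rng.normal(1.0 - q1, sigma_q, L)
    scale = (w1 + w2).mean()
    w1, w2 = w1 / scale, w2 / scale
    y = w1 * x1 + w2 * x2 + rng.normal(0.0, sigma_f, L)
    return np.clip(y, 0.0, 100.0)  # FREQ is a percentage

# illustrative run: recover an 85%/15% mixture of two synthetic FREQ profiles
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
y = simulate_mixture(x1, x2, 0.85, 0.02, 5.0, rng)
sites = select_sites(y, x1)
q = estimate_proportions(y[sites], [x1[sites], x2[sites]])
```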
Results
Cell line misidentification and contamination are common problems affecting the reproducibility of cell-based research, which makes cell line authentication very important. SNV profiles have been used previously to re-identify lung and colorectal cancer cell lines as well as HeLa contamination, but those studies were limited to only a few cell lines [5,11]. In this study we use variants derived from RNA-seq data for large-scale cell line authentication.

Variant analysis
RNA-seq data for the 934 cell lines available from the NCI GDC legacy portal (https://portal.gdc.cancer.gov/) were downloaded, and the BAM files were processed to call variants using the in-house pipeline described in the Methods section. Additionally, WES data for the 326 cell lines available from the GDC were obtained and variants were identified. A total of 1,027,428 variants were identified across all cell lines, with an average of 27,310 variants per cell line. As shown in Fig. 1a, the variant profiles of all RNA-seq samples are used to determine the distribution of correlation coefficients and the corresponding significance levels for the CCLE collection, and thence the accuracy and robustness of CeL-ID. This is followed by a validation procedure using a collection of independently obtained MCF7 and HCT116 samples processed with different treatments [24,25], and by down-sampling of the RNA-seq samples to explore how few sequence reads are required to achieve equivalent identification accuracy.

Cell line authentication using variant comparisons
We performed pair-wise comparisons of the variant profiles of all 934 cell lines and computed the correlation coefficients. It is interesting to note that only a few pairs of cell lines showed high correlation coefficients (ρ > 0.8), whereas most other pairs show poor correlation (Fig. 2a and b). Moreover, most of the top cell line pairs with high correlations (ρ > 0.9) turned out to be known replicates, subclones, lines derived from the same patients, or lines known in the literature to share high SNP identity (CCLE legacy archive, https://portals.broadinstitute.org/ccle/data; Fig. 2a and b). As can be seen in Fig. 2a, the correlation coefficients were used as the distance metric for hierarchical clustering. The CCLE dataset happens to include replicates of two cell lines sequenced at different times, and our CeL-ID method correctly identified these pairs (one of them involving G28849.HOP-62). As expected, correlated cell lines tend to share more common mutations (Fig. 2b). Transcriptome profiles of any given cells are known to change under various treatments as the cells adapt to their environment. For the baseline expression data provided through the CCLE project, the correlation holds for the pair G20492.HEL_92.1.7.2 & G28844.HEL.3 (ρ = 0.95, Fig. 2d), and the next-to-best correlated sample is NCI-H1155 (ρ = 0.787). Notice that the differences in correlation between the best and next-to-best samples are much smaller than those derived from variant profiles. Furthermore, we analyzed the WES data for the 326 cell lines available from the NCI GDC; these include 112 cell lines from the RNA-seq dataset. All variants from the WES data were identified using the pipeline shown in Fig. 1a. We compared the variants derived from WES with those from RNA-seq, and a high degree of concordance was observed.

Determination of the significance of the correlation coefficient
To determine the significance of a detected correlation coefficient for a given cell line, all pair-wise correlations for the 934 cell lines were generated. The distribution of correlations follows a normal distribution N(μ, σ) (Fig. 3a, light blue histogram), and a similar distribution is observed for the pair-wise correlations of the WES samples (Fig. 3a, dark blue histogram). To estimate the distribution parameters, we used the truncated normal model, removing correlation coefficients less than 0 (unlikely) and greater than 0.8 (replicate and derivative cell lines in the CCLE collection). For variant profiles derived from RNA-seq, the parameters are (μ, σ) = (0.464, 0.047). Therefore, at L 0.001 = 0.609 two samples are considered similar with p < 0.001, and at L 1e-6 = 0.686 two samples are almost certainly related (p < 10⁻⁶). As a comparison, between RNA-seq and WES variant profiles (μ, σ) = (0.275, 0.042), excluding all pair-wise comparisons between the same cell lines (Fig. 3a, left pink histogram).

COSMIC SNVs and cell line re-identification
We constrained the variants used for the correlation calculation to only those present in the COSMIC70 and COSMIC83 databases [18]. This led to a huge reduction in the number of variants: only 4% of the total variants matched COSMIC70 and 14% matched the latest release, COSMIC83 (Fig. 2a). Interestingly, we observed that the COSMIC-matched variants alone are sufficient to correctly re-identify the cell lines (Fig. 3b). Only COSMIC70 showed relatively poor separation of the best match from the second-best match (beyond the pair), due to its lower number of SNPs available for comparison. We note that using COSMIC mutations takes much less computation time for the correlation coefficient evaluations across all cell lines.

Robustness of the model
We tested the robustness of the CeL-ID method by adding noise (Gaussian, zero mean) to the allele frequencies of the variant data for the six pairs of cell lines mentioned above. As is evident from Fig. 4a, the correlation drops significantly with increasing noise level, and at noise levels of σ ≈ 15-20 the cell line pair is no longer identifiable. Additionally, to estimate the false positive rate, we randomly permuted the mutation positions in these six cell lines and tried to find the matching pair.
We repeated this 100 times; as can be seen in Fig. 3b (last bar), the permuted profiles give a very low correlation coefficient (on average, ρ = 0.14). Moreover, we tested the robustness of the CeL-ID method on two independent test sets. The first comprises 12 RNA-seq datasets for MCF7 cells, downloaded from GEO (GSE86316), representing mRNA expression profiles of MCF7 cells after silencing (by small hairpin RNA) or chemical inhibition of MEN1, which affected the expression of a selected group of transcripts [24]. The second consists of 8 RNA-seq datasets for HCT116 cells, also obtained from GEO (GSE101966), depicting mRNA expression profiles of HCT116 cells after loss of ARID1A and ARID1B [25]. Variants were called using the pipeline (Fig. 1a, light blue boxes), and as can be seen in Fig. 4b and c, even variants derived from altered mRNA expression profiles are sufficient for authentication/re-identification of the cell lines. Additionally, it may be noted that even the use of a different aligner, RSEM, does not affect the re-identification potential. As mentioned earlier, MCF-7 and KPL-1 are known to share high SNP identity, and hence both rightly passed the threshold for unique identification. We removed the variants shared between these two cell lines, keeping those with a FREQ difference greater than 10 and high coverage depth, which reduced the 17,730 variants of the first pass to 2,631. Detailed analysis results are provided in Table 2. Notice that the second-pass p-value is much higher: after the removal of the common variants, the comparison only assesses agreement at the variant sites that can differentiate MCF7 from KPL-1. Similar results were obtained for the HCT116 cells and are provided in Additional file 2.

Furthermore, to test the robustness of the system, the effect of sequencing depth on the results was checked. We randomly selected nine cell lines, randomly subsampled each to 1 million (1 M), 2 million (2 M), 5 million (5 M), 10 million (10 M), 25 million (25 M) and 50 million (50 M) reads, and ran the pipeline on each subsampled set (a minimal sketch of the subsampling step is given below). As is evident from Fig. 4d, even a subset as small as 5 M reads, covering only around 15% of the total variants (red line/right axis, Fig. 4d), is enough for cell line authentication (top blue line/left axis, Fig. 4d). Similar results were observed for all subsampled sets from all nine cell lines, as indicated by the small error bars (Fig. 4d), demonstrating that our method remains robust down to a sequencing depth of 5 M reads. The only notable observation is that the variation in the correlation of the second-best match (lower blue line/left axis, Fig. 4d) increases as the total read count decreases, particularly at the 1 M and 2 M levels, indicating that lower read counts render far fewer unique variants available for mutation calling and increase the chance of false positives.

Sample mix-up and contamination estimation
Cell line contamination is a major issue facing the biomedical sciences [1,9]. Human error and oversight are thought to be the main causes of cell line mix-ups and contamination, and means of quality-controlling these errors rapidly and periodically are needed. Hence, we developed a linear regression model (see the Methods section and Fig. 1b) to estimate the level of mix-up and contamination using variant frequencies.

Fig. 4 Test of the robustness of the model. (a) Effect of adding noise to the data for the six pairs used in Fig. 3b. (b) Test on an independent set of 12 MCF7 RNA-seq datasets (GSE86316), with their first best match MCF7, second best match KPL-1 and the third candidate.
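Before the worked contamination example below, here is the read-subsampling sketch referred to above. It uses samtools view with the -s fraction option; the file names, seed and target depths are placeholders, and the paper does not state which tool it used for this step.

```python
import subprocess

def subsample_bam(in_bam, out_bam, target_reads, seed=42):
    """Keep roughly `target_reads` alignments using `samtools view -s`."""
    total = int(subprocess.run(["samtools", "view", "-c", in_bam],
                               capture_output=True, text=True, check=True).stdout)
    frac = min(target_reads / total, 0.9999)
    # -s SEED.FRACTION: the integer part seeds the RNG, the fraction is the keep rate
    subprocess.run(["samtools", "view", "-b", "-s", f"{seed + frac:.4f}",
                    "-o", out_bam, in_bam], check=True)

for n in (1, 2, 5, 10, 25, 50):  # millions of reads, as in Fig. 4d
    subsample_bam("cellline.bam", f"cellline.{n}M.bam", n * 1_000_000)
```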
As an illustrative run: the possible mixture identified is G20469.JHOS-2.2, with proportion q2 = 82.3% (t-stat = 210.0, p-value ≈ 0). The identified cell line is the same one we started with, and the estimated proportion of 82.3% is 0.027 below the targeted 0.85 level; the estimation results are reported in Fig. 5. As is evident from Fig. 5, the linear regression method can correctly estimate the level of a contaminator, to an extent. The linear model tends to slightly under-estimate the proportion (by about 3%, for both the 70%/30% and the 85%/15% mixtures; blue lines, Fig. 5) for simulated noise σ from 0 to 6. As σ increases, the t-statistics of the estimated proportions decrease (Fig. 5), and at some noise level the estimated proportion overshoots the correct level (the blue lines cross zero, Fig. 5), indicating the inability of the linear regression to identify a correct contaminator from the 934-cell-line collection (indicated by a blue circle, Fig. 5). The best-case scenario would be to demonstrate the estimation accuracy on a real mixed test dataset, and we will continue to investigate the availability of such a dataset.

Discussion
In this study we describe a method (CeL-ID) for estimating cell line purity from RNA-seq data. A key advantage of using CeL-ID for cell line authentication is that it relies on the complete set of variants from the transcriptome instead of a fixed panel of a small number of STRs or SNPs, and hence avoids the loss of statistical power caused by the allelic dropout that affects STR-based authentication methods [1,9-11]. This is all the more pressing for cancer cell lines, where genetic instability is prevalent and aneuploidy and microsatellite instability are well documented [2,3,11]. Currently, STR profiling is the ANSI standard for authenticating cell lines [2]. STR profiles for a large number of cell lines are available for comparison, and a growing number of fee-for-service companies provide STR-based cell line authentication at a cost ranging from $100 to $295 [9,10]. SNP-based profiling methods have been developed as a simple and stable alternative, but they suffer from a lack of accessibility and are too cumbersome for many researchers. CeL-ID, by contrast, was developed on the premise that a significant number of cell-based studies already employ RNA-seq transcriptome profiling, so the same data can also be used to ascertain the identity of the cell line; researchers thereby save both the money and the effort of authenticating the cell line separately. Benchmarking studies on independent test sets showed that the CeL-ID method is precise and robust and can serve as a resource for cell line authentication. The Genentech list of authenticated cell lines contains a consolidated 3,587 cell lines [1], of which we had access to RNA-seq data for more than 900, covering most of the commonly used cell lines. We have generated and stored variant profiles for these 900-plus cell lines for comparison, and we will keep updating the database as RNA-seq data for additional cell lines become available. As an end-user, one simply inputs either an alignment (BAM) file or a variant (VCF) file for a given cell line; CeL-ID carries out all the pairwise comparisons, outputs the best match, and also estimates possible contaminants if no perfect match is detected.
Conclusions
In summary, we have developed a new method, CeL-ID, for cell line authentication using variant profiles derived from RNA-seq data, and we have demonstrated its robustness. CeL-ID successfully identifies identical, synonymous and derivative cell lines, and also estimates possible contaminants. We have attempted to provide a simple solution to the problems associated with cell line authentication, and we hope this will help the adoption of regular cell line authentication.
ANALYSIS OF PARTICIPATION AND WILLINGNESS TO PAY OF THE COMMUNITY IN RURAL INFRASTRUCTURE DEVELOPMENT

This study aims to analyze the level of participation and the willingness to pay of the community in rural infrastructure development. It uses descriptive quantitative analysis to measure the participation rate and the Contingent Valuation Method to analyze the community's willingness to pay. The data used are primary and secondary: the primary data come from questionnaires answered by 92 respondents, and the secondary data come from the Pidodo Wetan Village Office. The results show that the level of community participation in infrastructure construction in Pidodo Wetan village is in the high category; the forms of participation most widely given are labour and materials/food. Furthermore, the average willingness-to-pay value of the community is Rp 10,500, with a total willingness-to-pay value of Rp 13,728,000. Family income affects the community's willingness-to-pay value, whereas gender, age and education have no effect on the community's willingness-to-pay bid.

INTRODUCTION
The village, as the smallest centre of local government, is considered to have a significant role in national development: the majority of the Indonesian population resides in villages, so improving social welfare in the villages will accelerate national development. According to Law No. 6 of 2014, rural development is the effort to improve the quality of life and to alleviate poverty through the fulfilment of basic needs, the development of rural infrastructure, the building of local economic potential, and the sustainable use of natural resources and the environment for the welfare of the villagers. Bratakusumah (in Melis, 2016) notes that the development paradigm that has evolved is the empowerment paradigm, whose core is public participation. In other words, community involvement is key to the success of development; the government acts only as intermediary and catalyst of development planning, while the community should take part from planning through to the implementation of development (Melis et al., 2016).

Infrastructure is an important dimension supporting the success of rural development. Rural infrastructure leads to the expansion of agriculture by improving crop yields, farmers' access to markets and the availability of institutional finance (Satish, 2007). Most of the poor live in rural areas, and the growth of agricultural productivity and rural non-farm employment is closely linked to the provision of infrastructure (Pinstrup et al., 2006). Thus, infrastructure development is one of the priorities that need to be considered in realizing rural welfare.

Pidodo Wetan is a village in Kecamatan Patebon, Kendal. Its roads and irrigation canals are still inadequate: the Pidodo Wetan village government states that 2,310 m² of rural roads remain unpaved and that a 2,000 m irrigation embankment remains unbuilt. This is due to insufficient government funds to meet all infrastructure development needs alongside other government financing; in other words, the construction of roads and irrigation embankments in Pidodo Wetan village is still hindered by funding. Under Law No. 6 of 2014 on villages, the government helps finance development by allocating the Village Fund.
Village funds are prioritized for the implementation of development and community empowerment, but in Pidodo Wetan the village fund is still not enough to meet the needs of rural development, especially infrastructure construction, because the village's priority development programs require large funds: rural road infrastructure, a multipurpose building, an early-childhood education building, social facilities and production facilities. In this situation, public participation is an important element of rural development. Community participation can be realized in various forms, such as ideas, labour, materials/food and money donations. Since village community participation is a factor supporting the success of rural development programs, participation can certainly be obtained if the development programs really fit the needs of the community; it is then equally certain that the development goals will be achieved (Hardianti et al., 2017). Against this background, the authors examine the level of participation and the willingness to contribute of the rural community in supporting evenly distributed infrastructure development in Pidodo Wetan village.

LITERATURE REVIEW
The Concept of Rural Development and Infrastructure
Rural development can be defined as the entire set of village development activities, involving all aspects of village life and implemented in an integrated manner to develop community self-help and mutual aid. Rural development is a medium for utilizing and maximizing the potential of existing natural resources and for improving the quality of human resources, with guidance and assistance from government officials in accordance with their respective duties. It is an effort to accelerate rural development through the provision of facilities and infrastructure that empower the community and accelerate effective and resilient economic development of the area. The long-term objective of rural development is the improvement of rural welfare, directly through increased employment, business opportunities and revenue, based on approaches of community development, business coaching and human development. According to Law No. 6 of 2014, rural development aims to improve the welfare of villagers and the quality of human life and to reduce poverty through the fulfilment of basic needs, infrastructure development, the development of local economic potential, and the sustainable use of natural resources and the environment. The targets of rural development are: increased production and productivity; accelerated growth of the village; improved skills in production and the development of employment and productive business fields; improved initiative and public participation; and strengthened institutions. Rural development has a fairly broad and elastic scope, depending on the interaction of many forces such as program objectives, the availability of resources for planning and implementation, and others (Oni, 2015).
Furthermore, rural development has a scope that includes several parts: (1) development of rural infrastructure (including irrigation, roads, residential neighbourhoods, etc.); (2) community empowerment; (3) management of natural and human resources; (4) creation of jobs and business opportunities and the raising of incomes (particularly in poor areas); and (5) structuring linkages between rural districts and urban regions (rural-urban relationships).

Infrastructure is a form of public capital, formed from investment made by the government. According to Grigg (1998), infrastructure is the physical system providing transportation, irrigation, drainage, buildings and other public facilities required to meet basic human needs, both social and economic. In this respect, matters related to infrastructure cannot be separated from each other: infrastructure connects and sustains both the social system and the economic system of a community. The availability of infrastructure affects the social and economic systems in the community, so infrastructure needs to be understood as fundamental in policy making (Kodoatie, 2005).

Infrastructure facilities are a basic element of the package of needs a society must obtain for a better life, and infrastructure has more the nature of a public good. Goods that people need but that no one is willing to produce, or that the private sector produces only in limited quantities, are called public goods (Mangkoesoebroto, 1993). Public goods have two main characteristics in terms of use, i.e., non-rivalry and non-excludability. Non-rivalry refers to the idea that the benefits of some goods can be enjoyed by more than one person at the same time; rivalry in consumption means that if an item is used by one person, it cannot be used by others. Non-excludability means that a person enjoys the benefits of a good whether or not they pay for it. When goods are used by others and jointly, they can be regarded as public goods. Users of infrastructure are not charged directly for its use, because infrastructure is provided by the government partly to support socioeconomic activities (Stiglitz, 2000).

The Concept of Community Participation
Participation is community involvement in the planning and implementation of development programs being carried out within a particular local scope. Participation is the community's real action in its availability or willingness to make sacrifices and contribute to the development programs implemented. Oni (2015) states that community participation can be understood as the active involvement of rural communities in decisions and matters concerning their own welfare. Active participation can be seen through the identification of needs, planning, and the implementation of solutions; the types of community involvement include participation in thinking, planning, deciding, acting and evaluating, with a focus on socio-economic development.
Keith Davis (in Sastropoetro, 1988) adds several forms of participation: mind (psychological participation), labour (physical participation), thought and effort (psychological and physical participation), expertise (participation with skills), goods (material participation), and money (monetary participation). Tjokroamidjojo (1995) regards participation as an important element of development, and even as one of the goals of development itself: the involvement and mobilization of the entire community in the planned development process, in accordance with the directives and strategies established through participation in the political system. The development process itself, in turn, is expected to lead to an expansion of participation.

The Concept of Willingness to Pay
Willingness to pay is a concept that can be used to see how much people want to support rural development: it is the amount a person is prepared to pay to obtain the goods or services they need. In the development context, willingness to pay is expressed as a form of community contribution in support of rural development programs serving common interests. Fauzi (2004) states that willingness to pay refers to the willingness to pay for goods and services produced by natural and environmental resources. The Contingent Valuation Method (CVM) approach is used to measure the passive value (non-use value) of natural resources, often also known as existence value. Willis and Garrod (1990) note that the CVM technique is based on a fundamental assumption regarding ownership rights: if the individual being asked does not own the rights to the goods and services produced from natural resources, the relevant measure is the maximum willingness to pay to obtain those goods. Willingness to pay can be measured as the revenue change that leaves a person indifferent to an exogenous change; such exogenous changes can occur because of price changes (e.g., due to increasingly scarce resources) or because of changes in the quality of the resource. Thus, WTP can be defined as the maximum amount someone is willing to pay to avoid a further loss of something.

According to Tietenberg (2016), total willingness to pay is a combination of three types of values, expressed as

Total WTP = use value + option value + non-use value.

Use value reflects the direct use of environmental resources: the value resulting from activities that directly use environmental resources, whose degradation (such as pollution or land depletion) has a negative impact on the community and environment. Option value is the future value of the environment to potential users; it reflects the willingness to pay for the option of preserving an environment that may be used in the future. Thus use value reflects value derived from present use, while option value reflects the desire to preserve potential future use. Passive-use (non-consumptive) value is the economic value assigned by society even though the use is not felt directly; it arises from public awareness that the environment is a legacy that must be maintained for the survival of future generations.

RESEARCH METHODS
The data used in this study are primary and secondary. The primary data come from interviews and questionnaires.
The primary data collected comprise the respondents' identities, public perception, public participation and the community's willingness to pay for rural development. Secondary data were obtained from the literature (books, journals, theses and the internet) and are also sourced from the Central Statistics Agency of Kendal and the Pidodo Wetan Village Government. Data collection was done through questionnaires, interviews and documentation.

The sampling method used in this research is probability sampling, in which all elements of the population have an equal chance of being selected and the sample is drawn randomly (simple random sampling). The number of samples was determined using Slovin's formula, n = N / (1 + N e²), where N is the population size and e the tolerated margin of error; a sample of 93 families was obtained.

Descriptive statistical analysis was used to analyze community participation in rural infrastructure development, carried out with the help of the Likert method. The Likert scale for a positive statement consists of strongly disagree, disagree, neutral, agree and strongly agree: a statement is scored 1 for strongly disagree, 2 for disagree, 3 for neutral, 4 for agree and 5 for strongly agree.

The community's willingness to contribute is measured using the Contingent Valuation Method (CVM). Furthermore, to analyze the factors that affect the magnitude of the community's willingness-to-pay value, Tobit analysis is used, and the data are processed with EViews 9. According to Gujarati (2009), the Tobit method assumes that the independent variables are unlimited in value (non-censored) and only the dependent variable is censored; that all variables are measured correctly; that there is no autocorrelation, no heteroscedasticity and no perfect multicollinearity; and that the mathematical model used is correctly specified. The Tobit model is used because the dependent variable is quantitative and censored, and it analyzes the influence of the independent variables on the dependent variable. The model in this study regresses the WTP bid on gender, age, last education and family income; in standard censored-regression form,

WTP*_i = β0 + β1 GENDER_i + β2 AGE_i + β3 EDU_i + β4 INCOME_i + ε_i,   WTP_i = max(0, WTP*_i).
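As a quick sketch of the two descriptive calculations above, the snippet below applies Slovin's formula and a Likert participation index in Python (the study itself used EViews for the regression part). The error margin e = 10% is our assumption; with the 1,144 households reported in the Results, it yields a sample of about 92, close to the 93 families used here.

```python
import math

def slovin(N, e=0.10):
    # Slovin's formula: n = N / (1 + N * e^2)
    return math.ceil(N / (1 + N * e ** 2))

def likert_index(scores, levels=5):
    # participation index: achieved score as a percentage of the maximum
    return 100.0 * sum(scores) / (levels * len(scores))

print(slovin(1144))                    # -> 92
print(likert_index([4, 5, 3, 4, 2]))   # e.g. 72.0, "very high" in the 60-79% band
```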
RESULT AND DISCUSSION
Community Participation in Rural Development
In the context of rural development, community participation can be categorized into several forms of contribution, both physical and non-physical. This study analyzes participation in four forms: ideas, labour, materials/food, and money donations. Overall, community participation in rural development can be said to be in the high category: the average share of respondents who agree to participate is greater than for the other ratings, with on average 39 percent of respondents agreeing to participate in development as a whole. Calculated on the Likert scale, the total overall community participation score is 40 percent, i.e., in the 40-59.99% category range, which means that community participation as a whole is "high". The forms of participation most widely given in development are donated labour and donations of materials/food.

A total of 93.5 percent of respondents have a very high level of participation in the form of labour, and 90.3 percent have a very high level of participation in the form of material/food donations. This reflects the characteristics of the respondents: most have incomes that are still relatively small, owing to old age and low educational background. People therefore tend not to appreciate the importance of contributing ideas to the rural development concept, and feel unable to participate in the form of a financial contribution. Cross-tabulation of the respondents shows that men predominate over women in participating in infrastructure development. The respondents who participate most are in the age range 36 to 45 years, with their last educational background mainly at the elementary school level. Furthermore, by level of personal income, most respondents (45.2 percent) have a personal income below Rp 1,000,000; of these, 23.7 percent agreed to participate in the form of ideas, 43 percent agreed or strongly agreed to contribute labour, 42 percent agreed or strongly agreed to contribute materials, and 8.6 percent agreed to participate in the form of a financial contribution. By the additional-income characteristic, 46.2 percent of respondents have extra income; of these, 24.7 percent have a high level of participation in the form of ideas, 43.0 percent have a very high participation rate in the form of labour, 45.2 percent have a very high participation rate in the form of materials/food, and 5.4 percent have a very high level of participation in the form of a financial contribution.

Community Participation in the Form of Ideas
Public participation in contributing ideas or suggestions in Pidodo Wetan village can be categorized as high. As many as 55 percent of respondents agreed or strongly agreed that they are willing to participate in the form of ideas. This willingness is realized by the 51 percent of respondents who agree or strongly agree that they always give ideas or suggestions at every village meeting. As a whole, Table 1 shows that 58.1 percent of respondents have a high level of participation in giving ideas or suggestions on rural development, while 40.9 percent have a low participation rate and only 1.1 percent have a very high participation rate. In total, the overall community participation index for the form of ideas is 40 percent, i.e., in the 40-59.99% range, meaning that overall community participation is in the "high" category. Even so, community participation in providing ideas often does not get a positive response from the community or the government: as many as 86 percent of respondents feel that their ideas or suggestions did not receive a positive response from the community or in meetings, even while they assume that the development funds are sufficient to accommodate their ideas or suggestions.
Only 19 percent of respondents found that their ideas or suggestions could be implemented in the following year, or even after more than a year. This happens because of the village government's lack of transparency about development funds, and because the respondents' educational background is still low, so they do not understand the flow of financing in rural development. Table 1 shows that most respondents (48.4 percent) were last educated at elementary school; of these, 30.1 percent have a high level of idea participation and the remaining 18.3 percent a low one.

Community Participation in the Form of Labour
Community participation in providing labour in Pidodo Wetan village can be said to be very high: overall, 93.5 percent of respondents have a very high level of participation in contributing labour to rural development. The majority of respondents (95 percent) chose agree or strongly agree for always participating by providing labour, while the remaining 5 percent chose strongly disagree or disagree. Calculated on the Likert scale, the index of public participation in the form of labour is 60 percent, i.e., in the 60-79% category range, so the level of participation in the form of labour is in the very high category.

The level of participation in contributing labour to development is realized in units of time. The results show that 52 percent of respondents have participated by providing labour twice, 20 percent once, 13 percent three times, 11 percent four times or more, and the remaining 4 percent have never participated in providing labour. The number of days needed to participate in each development activity varies: 70 percent of respondents take one day per activity, 13 percent two days, 11 percent four days, and 2 percent three days, while the remaining 4 percent did not participate in the form of labour. By the time required on each day of a development activity, 43 percent of respondents spend five hours per day, 40 percent four hours per day, 12 percent three hours per day, and 1 percent two hours per day; the remaining 4 percent did not participate in providing labour. Over the last two years, the average frequency of labour contributions was twice, with an average of one day given per activity and an average of 3-4 hours per day. Converted into wages, the rupiah value of the labour contributed is Rp 75,000-Rp 100,000 (assuming a wage of Rp 12,500 per working hour).
In addition, the main reason people participate in providing labour is that they were asked by the community/local government, as stated by 83 percent of respondents. Another reason is that labour is clearly what they are able to give: Table 1 shows that most respondents have a personal income of less than Rp 1,000,000, so people choose to contribute labour, with 41.9 percent of respondents having very high participation in the form of labour.

Community Participation in the Form of Material Contributions
Community participation in the form of material/food donations is in the very high category: 90.3 percent of respondents have a very high level of participation in contributing materials/food to rural development. There are 84 respondents (90 percent) who agree, and 2 respondents (2 percent) who strongly agree, that they always participate by donating materials and/or food. On this basis, the Likert index for participation in the form of material/food donations is 60 percent, i.e., in the 60-79% category range, which is the very high category.

The type of donation most often delivered in community development activities is food: almost all respondents (99 percent) say they always provide food at each development activity, in the form of small meals, snacks, drinks and cigarettes. In addition, 46 respondents (49 percent) chose to participate by donating materials such as cement, sand and gravel, and carpentry tools such as hoes, sickles and hammers; another 46 respondents (49 percent) participate by donating both materials and food; and only 1 respondent declared not to participate in this form. Over a period of two years, communities contributed materials/food on average twice. The cost incurred per activity ranges from Rp 10,000 to Rp 200,000, with an average of Rp 40,000; hence the average rupiah value of material/food participation is Rp 80,000. The community mostly chooses to participate by contributing materials/food because they were asked, as stated by 75 respondents (81 percent). Another reason is their fairly low income: Table 1 records that most have a personal income of less than Rp 1,000,000, so people feel able to contribute materials/food rather than money. This is stated by the 40.9 percent of respondents who have a very high level of participation in donating materials/food to rural development.

Community Participation in the Form of Money Donations
Community participation in the form of financial contributions can be categorized as low. The results show that only 18 respondents (19 percent) agreed to participate in the form of money donations, while the remaining 75 (81 percent) stated that they strongly disagree or disagree with always participating in the form of a financial contribution. Overall, 67.7 percent of respondents have a low level of participation in contributing money to village development.
The community's low willingness to participate in the form of money donations is evidenced by its contribution record over the last two years. As many as 82 percent of respondents strongly disagree or disagree with giving monthly dues over the two-year period, while only 17 people (18 percent) agree. The total score for participation in the form of financial contributions is 31 percent, i.e., in the 20-39% category range, which is the low category. The intensity of money contributions over the two-year period is still very low: 81.7 percent of respondents say they never participated in the form of financial contributions, 8.6 percent participated twice, 8.6 percent once, 3.2 percent four times, and 1.1 percent three times. The fees paid ranged from Rp 10,000 to Rp 50,000 per month, and the total contributions given ranged from Rp 20,000 to Rp 600,000. The reason people participate in the form of money donations is largely that they were asked, as stated by 14 respondents (15 percent). Other factors are the perceived lack of transparency of funds by the local government, the public perception that the village funds are insufficient, and the community's still-low income background: Table 1 shows that most people have an income below Rp 1,000,000, and 28.0 percent of respondents have a low level of participation in contributing money to village development.

Analysis of Willingness to Pay Using the Contingent Valuation Method
The Contingent Valuation Method is used to analyze the value the community is willing to give (willingness to pay) for the construction of rural infrastructure; in this study, the proposed infrastructure is the construction of roads and irrigation embankments. The bid values offered to respondents for the construction of irrigation embankments and roads are as follows. The number of respondents willing to give a contribution is only 42 (45 percent), while the remaining 51 (55 percent) said they were not willing to give dues. Of the respondents willing to contribute, the majority (66.6 percent) are willing to give a contribution of Rp 12,500 per month for a year, while 16.7 percent are willing to give Rp 9,000 and 16.7 percent Rp 4,000. The total value that willing respondents would give for the construction of roads and irrigation embankments is thus Rp 441,000. The average willingness to pay of the respondents can be calculated as

EWTP = (Σ_i W_i n_i) / n,

where W_i is the i-th bid value, n_i the number of willing respondents choosing that bid, and n the total number of willing respondents. From this calculation, the average WTP of the respondents is Rp 10,500, which can be used as a reference in determining the amount of community contributions. Furthermore, data aggregation is conducted to determine the total willingness-to-pay value, by multiplying the average WTP of the respondents by the total population.
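A short worked check of these figures follows as a sketch. The per-bid counts of 28, 7 and 7 are inferred from the reported shares of the 42 willing respondents, and the 1,144-household population and 45% willingness rate are taken from the aggregation reported next.

```python
bids = {12500: 28, 9000: 7, 4000: 7}             # bid value (Rp) -> willing respondents

n_willing = sum(bids.values())                    # 42
total_bid = sum(w * n for w, n in bids.items())   # Rp 441,000
ewtp = total_bid / n_willing                      # Rp 10,500 average WTP

households = 1144
twtp_all = ewtp * households                      # Rp 12,012,000/month if all pay
twtp_willing = ewtp * households * 0.45           # ~Rp 5,405,400/month at the 45% rate
```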
In this study, the total population is 1,144 households. Based on the calculation above, the total community WTP (if the entire population were willing to pay) for road and irrigation embankment infrastructure in Pidodo Wetan village is Rp 12,012,000 per month. However, based on the number of respondents actually willing to pay, the total WTP for infrastructure development in Pidodo Wetan village is only about Rp 5,405,000.

From Table 2 it can be seen that 66.7 percent of respondents are willing to contribute Rp 12,500 to rural infrastructure development in Pidodo Wetan. Of the respondents willing to give dues, most are male. The largest age group is above 55 years, consisting of 23.8 percent of respondents choosing the Rp 12,500 bid, 2.4 percent choosing the Rp 9,000 bid and 7.1 percent choosing the Rp 4,000 bid. Furthermore, most of the respondents willing to give dues have elementary school as their last education: 35.7 percent of them chose the Rp 12,500 bid, 11.9 percent the Rp 4,000 bid and the remaining 7.1 percent the Rp 9,000 bid. By economic characteristics, most respondents have a personal income below Rp 1,000,000, and 38.1 percent of the 42 willing respondents tend to choose the Rp 12,500 bid, while respondents with personal incomes between Rp 3,000,000 and Rp 4,000,000 tend to choose the Rp 9,000 bid. This is because low-income respondents on average work in agriculture and therefore need irrigation embankments more than roads, whereas higher-income respondents usually work as traders or civil servants and feel no need for irrigation embankments. Approximately 31 percent of respondents with additional income of less than Rp 1,000,000 chose the Rp 12,500 bid; the respondents choosing the Rp 12,500 bid mostly have additional income of Rp 1,000,000-Rp 2,000,000, whereas at the additional-income level of Rp 2,000,001-Rp 3,000,000 only 7.1 percent chose the Rp 12,500 bid. Thus, larger personal and additional incomes can be factors determining the size of the community's contribution to rural infrastructure development in Pidodo Wetan.

Analysis of Factors Affecting Willingness to Pay
From the estimates shown in Table 3, the equation for the factors affecting willingness to pay can be determined as follows. The constant coefficient is -3.204: if all independent variables are held constant, the willingness-to-pay value would fall by Rp 3,204. The family income variable has a probability (p-value) of 0.035, meaning that family income has a significant positive effect on the magnitude of the community's willingness-to-pay value. This result is supported by the research of Saptutyningsih (2007) and Rodríguez et al. (2017), which states that income has a positive influence on the magnitude of the WTP value: respondents with higher incomes are willing to give a higher fee.
This is because, with a larger willingness-to-pay value, the community also expects benefits commensurate with the value sacrificed. The variables gender, age and last education have no effect on the willingness-to-pay value. This is supported by the results of Dhungana (2016), who reports that gender and age do not significantly affect the willingness-to-pay value, and by Rezhen Harun (2015), who states that age and education have no significant effect on the magnitude of the willingness-to-pay value. The homogeneity of the sampled respondents' gender, age and education means these variables do not affect people's decisions in determining the community's willingness-to-pay value for rural infrastructure development. In addition, the respondents are on average still poorly educated, so they do not fully understand the concept of the willingness-to-pay value; they tend to choose a bid based on personal needs, paying less attention to the social benefits required by other respondents.

CONCLUSION
Based on the analysis of the level of participation and the willingness to pay of the community in Pidodo Wetan village, it can be concluded that: (1) public perception of infrastructure development in Pidodo Wetan village is quite good, meaning the public understands the importance of rural infrastructure development, although parts of the community still do not fully understand their responsibilities and the importance of community participation in rural infrastructure development; (2) community participation in infrastructure development in Pidodo Wetan village is in the high category; the forms of participation most widely given by the rural community are ideas, labour and materials/food, while participation in the form of financial donations is still very rare in Pidodo Wetan village; (3) the community's willingness to contribute (willingness to pay) to rural infrastructure development is still low, as seen from the fact that the number of people unwilling to pay infrastructure construction dues exceeds the number willing to give village infrastructure dues; the bid value most preferred by the rural community is Rp 12,500 for the category of road and irrigation embankment infrastructure; and (4) the respondent characteristic that affects the community's willingness-to-pay bid value is family income, while gender, age and education have no effect on the final willingness-to-pay bid value.
Elemental Analysis and Natural Radioactivity Levels of Clay by Gamma Ray Spectrometer and Instrumental Neutron Activation Analysis

Due to the increased global demand for clay, the present work uses INAA for elemental analysis and the determination of pollutant concentrations in clay. The samples were collected from Aswan in South Egypt and were irradiated with thermal neutrons at the TRIGA Mainz research reactor at a neutron flux of 7 × 10¹¹ n/cm²·s. Twenty-six elements were determined qualitatively and quantitatively for the first time in these samples: U, Th, Ta, Hf, Lu, Eu, Ce, Ba, Sn, Nb, Rb, Zn, Co, Fe, Cr, Sc, Sm, La, Yb, As, Ga, K, Mn, Na, Ti and Mg. The activity concentrations of the natural radionuclides ²³²Th, ²²⁶Ra and ⁴⁰K were also calculated. Based on these concentrations, and in order to estimate the exposure risk of using clay as a raw material in building materials, radiation hazard indices such as the radium equivalent activity, the effective dose rate and the external hazard index were computed. The results are compared with analogous studies carried out in other countries and with the UNSCEAR reports.

Introduction
Due to the increased global demand for clay and its industrial importance for a diversity of uses, it is considered one of the leading minerals worldwide [1]. Instrumental neutron activation analysis (INAA) using an HPGe detector coupled with a multichannel pulse-height analyzer, with an optimum choice of irradiation and delay times, can yield promising results, and has been employed in a number of methods for the analysis of geochemical materials. The INAA technique has been widely employed to determine the elemental content of different environmental media [2,3]. Until now, there has been no database of the constituent elements of clay; our results can therefore be considered reference data for Egyptian clay. Research on natural radionuclides has contributed greatly to developing a quantitative understanding of environmental behaviour. To attain an improved understanding of the environmental fate of contaminant radionuclides, it is necessary to characterize not only the biogeochemical properties of the radionuclides but also the biogeochemical processes expected to occur in the receiving environment [4,5].

The present work measures the elemental content of clay samples collected from Aswan, South Egypt, and sheds more light on the activity concentrations of the naturally occurring radioactive materials (NORM) in order to assess the radiation hazard parameters associated with using clay as a building material.
Experimental Technique 2.1. Samples Preparation. Clay samples were collected from Aswan, South Egypt. Each sample weighed about 1 kg and was dried in an oven at about 105 °C to ensure that all moisture had been removed. For elemental analysis by instrumental neutron activation analysis, the powder samples were sieved through a set of standard sieves with openings ranging from 63 to 125 μm, and an electric shaker was used to obtain homogeneous samples; the samples were then irradiated with thermal neutrons. For the measurement of natural radioactivity, each sample was ground and homogenized, and the powdered clay was sieved through 200-mesh sieves, "which is the optimum size when enriched in heavy minerals", to yield a homogenized powder [6]. The samples were weighed, packed, and sealed in polyethylene Marinelli beakers of 350 cm³ volume each, and then stored for 4 weeks to attain secular equilibrium between the short-lived daughters of 232Th and 226Ra and their long-lived parent radionuclides [7]. Instrumentation and Irradiations. Polyethylene capsules were filled with 100 mg of powdered clay and irradiated, together with Dolerite WSE and Microgabro PMS standard reference materials, with thermal neutrons "at the University of Mainz TRIGA research reactor (100 kWth) with a flux of 7 × 10¹¹ n/cm² s". The concentrations of the elements in the irradiated samples were quantitatively determined by comparison with the activities of the reference materials [9,10]. After appropriate cooling times, the data were collected for the different measurements [11]. The irradiation conditions for the elements determined are shown in Table 1. The activity concentrations of the radionuclides in the studied samples were measured using a gamma-ray spectrometer system based on an HPGe detector with its associated electronics. The HPGe detector has the following specifications: an energy resolution (FWHM) of 1.70 keV at the 1.33 MeV line of 60Co, a peak-to-Compton ratio of 65.2 for 60Co, and a relative efficiency of 29.2% at 1.33 MeV (60Co). The analysis of the results was accomplished with the Inter-Gamma software produced by Intertechnique "Deutschland GmbH, Mainz, Germany" [12][13][14][15][16][17]. In all measurements the electronic dead time was less than 10%, and the Inter-Gamma software performed the correction automatically [2].
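As a rough illustration of the comparator principle described above, the following minimal Python sketch scales a known standard concentration by the ratio of decay-corrected, mass-normalized peak count rates. All function names and numbers are illustrative assumptions, not values or software from the paper.

```python
# Minimal sketch of the relative (comparator) INAA calculation: the element
# concentration in the sample is obtained by comparing its gamma-peak count
# rate with that of a co-irradiated standard of known concentration.
# Decay correction uses the half-life of the activation product.
import math

def decay_factor(t_cool_s: float, half_life_s: float) -> float:
    """Correction for decay between end of irradiation and counting."""
    return math.exp(-math.log(2.0) * t_cool_s / half_life_s)

def concentration_sample(rate_sample, rate_std, m_sample_g, m_std_g,
                         c_std_ppm, t_cool_sample_s, t_cool_std_s,
                         half_life_s):
    # Normalize both count rates to end-of-irradiation and per unit mass,
    # then scale the known standard concentration.
    a_sample = rate_sample / decay_factor(t_cool_sample_s, half_life_s) / m_sample_g
    a_std = rate_std / decay_factor(t_cool_std_s, half_life_s) / m_std_g
    return c_std_ppm * a_sample / a_std

# Example with hypothetical 59Fe peak rates (half-life 44.5 d) for a 0.1 g
# clay sample against a 0.1 g standard containing 5.0% Fe (50000 ppm).
print(concentration_sample(120.0, 95.0, 0.1, 0.1, 50_000.0,
                           3600.0, 3600.0, 44.5 * 24 * 3600))
```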
External Hazard Index (H_ex). The recommended dose limit is 1.5 mSv y⁻¹ [22,23]. To restrict the radiation dose to this value, a conservative model is adopted, based on infinitely thick walls without doors and windows [24], which serves as the standard for computing the external hazard index H_ex from the following relation: H_ex = A_Ra/370 + A_Th/259 + A_K/4810 ≤ 1. Results and Discussion Twenty-six elements were identified in the studied clay samples. The average concentrations of the elements determined are listed in Table 2. The elements Ti, K, Cr, Ga, Na, Mg, Mn, Sm, As, Sc, La, Co, Rb, Nb, Sn, Ce, Fe, Ba, Eu, Lu, Yb, Hf, Ta, Zn, Th, and U were determined. The concentrations of all elements are expressed in μg/g, except for Mn, Mg, Na, Fe, K, and Ti, which are given in g/kg. In all cases the elements were measured via their most distinctive peaks, with the lowest statistical errors and free of interference. The measurement accuracy was estimated from the analysis of the PMS and WSE standard reference materials. From the obtained results we can say that INAA is an effective and successful means of supplying valuable data for clay samples with satisfying precision. The accuracy for most elements is within 10% of the reference values, and good precision is shown in most results [25,26]. The activation converts 238U and 232Th into 239Np and 233Pa, respectively, by neutron capture and successive β⁻ decay: 238U(n,γ)239U → 239Np and 232Th(n,γ)233Th → 233Pa. The characteristic γ-rays can be detected using γ-spectroscopy [27,28]. Assessment of Natural Radioactivity and Exposure Risk. To estimate the exposure risk arising from the use of clay as a raw material in construction, the radiological indicators (the external hazard index H_ex, the absorbed dose rate D, and the radium equivalent activity Ra_eq) and the specific activity concentrations of the radionuclides in Bq/kg were computed according to [19]. The results are listed in Table 3. The average activity concentrations of 40K, 232Th, and 226Ra and the average Ra_eq in the clay samples were 208, 28, 36, and 101 Bq/kg, respectively. These values are within the worldwide ranges of the activity concentrations of 232Th, 226Ra, and 40K in soil of 30, 35, and 400 Bq/kg, respectively [21]. To our knowledge, no world average activity concentration of natural radioactivity in clay deposits has been published. All over the world, clay deposits are widely used as building materials (e.g., in the brick and ceramic industries), and their radioactivity content can be a source of internal and external radiation exposure in dwellings. The Ra_eq values were below 370 Bq/kg, the maximum permissible activity concentration for building materials; this value of 370 Bq/kg corresponds to the maximum permissible limit [8,29,30]. The average activity concentrations of 226Ra, 232Th, 40K, and Ra_eq in clay bricks and building materials from twelve different countries are given in Table 4 [31]. The average activity concentrations of natural radionuclides in the Egyptian clay samples, which could be used as a raw material for the brick industry, were lower than the averages of all these studies for 226Ra, 232Th, 40K, and Ra_eq [32].
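The three indices can be computed directly from the activity concentrations. Below is a minimal Python sketch assuming the standard UNSCEAR-style coefficients (the paper's own equations were lost in extraction, so the exact coefficients are an assumption); the small difference from the quoted Ra_eq of 101 Bq/kg presumably reflects averaging over individual samples rather than over mean activities.

```python
# Hedged sketch of the radiological indices discussed above.
# Activities are in Bq/kg.
def radium_equivalent(a_ra, a_th, a_k):
    # Based on 370 Bq/kg Ra, 259 Bq/kg Th and 4810 Bq/kg K producing the
    # same gamma dose (the 10 : 7 : 130 proportion quoted in the text).
    return a_ra + 1.43 * a_th + 0.077 * a_k

def absorbed_dose_rate_nGy_h(a_ra, a_th, a_k):
    # UNSCEAR dose-rate conversion factors for 226Ra, 232Th and 40K.
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

def external_hazard_index(a_ra, a_th, a_k):
    # H_ex must stay below unity for safe use as a building material.
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0

# Average activities reported in the text: 226Ra = 36, 232Th = 28, 40K = 208.
print(radium_equivalent(36, 28, 208))        # ~92 Bq/kg, well below 370
print(absorbed_dose_rate_nGy_h(36, 28, 208)) # ~42 nGy/h
print(external_hazard_index(36, 28, 208))    # ~0.25, well below 1
```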
Conclusions Twenty-six elements were quantitatively determined for the first time in clay samples collected from Aswan, South Egypt. The elements Ti, K, Cr, Ga, Na, Mg, Mn, Sm, As, Sc, La, Co, Ce, Sn, Rb, Nb, Fe, Ba, Eu, Lu, Yb, Ta, Zn, Th, and U were determined. From the obtained results we can say that INAA is an effective and useful tool for providing good data on clay samples with satisfying precision and accuracy. The average activity concentrations of natural radionuclides in the Egyptian clay samples, which could be used as a raw material for the brick industry, were below the levels recommended by the UNSCEAR data for soil. It is hoped that our clay data will be useful to those dealing with clay applications. Table 1: Irradiation conditions for the elements determined. 2.3.1. Radium Equivalent Activity (Ra_eq): Ra_eq = A_Ra + 1.43 A_Th + 0.077 A_K, where A_K, A_Th, and A_Ra are the activity concentrations of 40K, 232Th, and 226Ra in the samples, respectively. The definition is based on the supposition that 130 Bq/kg of 40K, 7 Bq/kg of 232Th, and 10 Bq/kg of 226Ra create the same gamma radiation exposure dose [18,19]. 2.3.2. Absorbed Dose Rate (D). The absorbed dose rate in air at 1 m above the ground surface, due to a uniform distribution of 226Ra, 232Th, and 40K, was calculated according to the UNSCEAR guidelines: D (nGy/h) = 0.462 A_Ra + 0.604 A_Th + 0.0417 A_K. Table 2: Clay composition and the average concentrations of the elements calculated using INAA. Table 3: Specific activities and radiological indices for the studied clay samples. Table 4: Comparison of 226Ra, 228Ra, and 40K and radiation hazard parameters in clay from around the world [31].
2,144.8
2016-04-18T00:00:00.000
[ "Environmental Science", "Geology" ]
Controlling the emission from semiconductor quantum dots using ultra-small tunable optical microcavities We report the control of spontaneous emission from CdSe/ZnS core–shell quantum dots coupled to novel open-access optical microcavities. The cavities are fabricated by focused ion beam milling and provide mode volumes less than a cubic micrometre. The quantum dot emission spectrum, spatial modes and lifetime are all modified substantially by the presence of the cavity, and can be tuned by actively varying the cavity length. An increase in emission rate of 75% is achieved at room temperature, attributed to the Purcell effect in the 'bad emitter' regime. We demonstrate a high degree of control over the emission from the dots, including near single-mode operation and the ability to detect strong emission from individual nanocrystals. Introduction A controlled coupling between electric dipole transitions in matter and the surrounding electromagnetic field is a central theme of modern optoelectronics. The coupling of nanomaterials such as semiconductor quantum dots to discrete modes in optical microcavities and nanocavities has been researched extensively in recent years, opening the door to devices such as displays, sensors (for a recent review, see [1]), nanolasers [2] and single-photon sources for metrology and quantum information technologies [3][4][5]. Most of these applications operate in the so-called 'weak coupling regime' of cavity quantum electrodynamics, where leakage of light from the cavity occurs faster than the interaction rate between the emitting dipole and the cavity mode (see, for example, [6]). In this regime, spontaneous emission can be channelled into the desired cavity modes by modification of the local density of states of the electromagnetic field (the Purcell effect [7]). The decay rate, emission spectrum and spatial distribution of emitted photons can each be tailored by an appropriate choice of cavity. For other applications such as quantum computing, a strong coupling regime is desirable, where the cavity leakage rate is slower than the coupling strength and natural decay rate of the atom. In this regime, coherent exchange of information between electronic and optical states is possible, and highly nonlinear effects permit the construction of quantum logic gates [8][9][10][11][12][13]. One design of microcavity that has been the subject of significant effort in recent years is the open-access microcavity, a Fabry-Pérot resonator with opposing mirrors on separate substrates, and with one or both mirrors concave in shape to provide lateral confinement of the cavity mode [14][15][16][17][18]. The attractions of this design are that the intensity maximum of the mode is accessible for coupling to free-standing objects such as atoms, molecules and nanoparticles; that the cavities are fully wavelength tunable in situ by controlling the cavity length with a piezoelectric actuator; and that the leakage of light from the cavity mode through the mirrors can be coupled efficiently into a Gaussian beam or optical fibre waveguide [15]. A few different approaches have been used to fabricate such cavities [11][12][13][14], but in recent years ablation using a carbon dioxide laser has been the favoured method, producing radii of curvature as low as 40 µm and surface roughness as low as 1 Å [15]. Purcell enhancement of the emission rate from quantum dots at low temperature has also been achieved [19].
Here we report on our latest experiments with open-access microcavities fabricated by ion beam milling [20]. By producing high-quality concave surfaces with radius of curvature, β, as small as 7 µm, combined with cavity length, L, of 1.6 µm, we are able to create the smallest mode volumes for this design of cavity to date, down to 0.53 µm³, while retaining quality factors of several thousand. To illustrate the benefits of these small sizes, we demonstrate Purcell enhancement of the emission from semiconductor nanocrystal quantum dots (NQDs) at room temperature. This corresponds to the 'bad emitter' regime of cavity quantum electrodynamics, in which dephasing and/or spectral drift of the electronic transition render the transition line width much greater than the cavity line width, with the result that the dot-cavity coupling strength is greatly reduced. To observe Purcell enhancement in this regime, the use of ultrasmall cavities therefore becomes essential [21]. Recent theoretical work has shown that the bad emitter regime may be useful in producing advanced room temperature single-photon sources [22] and single-emitter lasers [23][24][25] by making use of the cavity feeding effect to produce controlled radiation from strongly dephasing solid-state photon emitters. A few other recent studies have investigated the room temperature Purcell regime using photonic crystal, Bragg pillar and whispering gallery mode resonators [26][27][28][29]. Experiment For our intracavity emitter we use commercially available CdSe/ZnS NQDs (eBioscience). These provide a high fluorescence quantum yield (η > 0.9) and well-characterized lifetimes, and are solution based for ease of introduction into the cavities. The cavities are arrays of fully tunable half-symmetric open-access microcavities similar to those reported previously [17]. For the experiments reported here, we used cavities with radii of curvature β = 7 and 25 µm. The mirrors consist of ten pairs of SiO₂ (refractive index n = 1.45)/ZrO₂ (n = 2.095) with reflectivity R = 99.4%, terminated with a λ/2 SiO₂ layer. Photoluminescence (PL) experiments were carried out using the apparatus depicted in figure 1(a). The lower, featured mirror of the cavity pair was supported on a piezoelectric actuator to control the cavity length, and microscopy was performed through the fixed upper, planar mirror. A solution of the quantum dots in octadecene (n = 1.44) was introduced into the cavity during assembly. Optical emission from the NQDs in the cavities was investigated using a standard PL apparatus. Emission from the NQD ensemble was excited using a laser emitting at 473 nm (PicoQuant LDH470), focused onto the cavity using a low-magnification objective lens (20×, NA = 0.45). The PL signal was collected by the same objective lens and guided to a spectrometer (Acton SP500i) with a liquid nitrogen-cooled CCD (Princeton Spec10). For experiments with very low concentrations of nanocrystals, a white light source was installed between the piezo actuator and the lower mirror so that cavity modes could be observed by monitoring the transmission spectrum of the cavities. All measurements were made at room temperature. Results and discussion Figure 1(c) shows an image of the PL intensity recorded from a 10 × 10 array of cavities under defocused laser excitation.
Much stronger emission can be seen from the individual cavities than from the spaces between them or between the arrays, indicating that the fully confined modes of the half-symmetric cavities are more effective at directing light into the collection optics than are the one-dimensionally confined modes of planar-planar cavities. We will return to the subject of cavity coupling efficiency in more detail later in the paper. An interesting feature of figure 1(c) is the periodic variation in the spatial mode of the emission from left to right across the image. This is a result of a gradient in the cavity length due to a misalignment of the mirrors by about 5 mrad. We find that within the mode stability criterion of L < β, the behaviour of individual cavities is extremely robust to such misalignment, with no noticeable effect on the quality of the observed modes. Figure 2 shows PL spectra taken from four representative cavities with different β and L. For comparison, the emission spectrum of the NQD ensemble without a cavity is also shown (figure 2(a)), and panel (b) shows a spectrum from a β = 25 µm cavity with L = 5.5 µm, revealing the Hermite-Gauss mode structure of the cavities. Longitudinal modes (TEM00) are seen at 614, 640 and 667 nm, while TEMmn modes with m + n = 1, 2, 3, 4 and 5 are visible on the short-wavelength side of the longitudinal modes at 640 and 667 nm, respectively. Reducing L to about 1.6 µm increases the free spectral range to about 95 nm in figure 2(c), where only one longitudinal mode is visible within the stop band of the dielectric mirrors (note that here we define L to include the penetration depth of the field into the mirrors). This spectrum corresponds to cavity A labelled in figure 1(c). Cavity B corresponds to a detuning of about half the free spectral range and reveals the spectrum in figure 2(d). The 'doughnut'-shaped PL image observed comes about because each photon couples to multiple modes which interfere destructively for all but the largest radius. Figure 2(e) shows a near single-mode spectrum obtained with a β = 7 µm cavity with L = 1.6 µm. The luminescence decays of the combined NQD-cavity systems were measured using time-correlated single-photon counting with a 100 ps duration excitation pulse and a 900 ps overall timing resolution. Care was taken to keep the excitation intensity below the level at which biexcitons can be generated, which would result in Auger-limited decay at short time scales. Figure 3(a) shows the fluorescence decay for nanocrystals in the smallest cavities, compared to those outside the cavities but in a similar average dielectric environment. The latter revealed approximately single-exponential decays with a lifetime of 14 ns. The intra-cavity decay, by contrast, is highly non-exponential as a result of the spatial distribution of nanocrystals within the cavity, which produces a range of coupling strengths. In order to identify the decay rate for nanocrystals with near-optimal coupling to the modes (i.e. situated at the antinode of the electric field and with the transition dipole aligned parallel to the electric field), we fitted single exponentials only to the first 20 ns of the decay. This means any enhancements measured will be referenced to the 14 ns single-exponential free space optical decay rate, and will correspond to Purcell factors for those dipoles situated and orientated optimally with respect to the field of the cavity mode.
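As a rough cross-check of the quoted mode spacings, the sketch below uses the plane-wave Fabry-Perot relation Δλ ≈ λ²/(2nL). Treating the cavity as filled with the octadecene solvent (n ≈ 1.44) is our assumption, and mirror penetration depth and dispersion are neglected, so the numbers are only approximate.

```python
# Hedged sketch: free spectral range of a filled plane-wave Fabry-Perot
# cavity, Δλ ≈ λ² / (2 n L). With n ≈ 1.44 this roughly reproduces the
# reported mode spacings; exact values depend on penetration depth and
# dispersion, which are ignored here.
def free_spectral_range_nm(wavelength_nm: float, n: float, length_um: float) -> float:
    length_nm = length_um * 1e3
    return wavelength_nm**2 / (2.0 * n * length_nm)

print(free_spectral_range_nm(640.0, 1.44, 5.5))  # ~26 nm (cf. modes at 614/640/667 nm)
print(free_spectral_range_nm(640.0, 1.44, 1.6))  # ~89 nm (cf. "about 95 nm")
```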
Figure 3(b) shows the fitted cavity-coupled decay rates relative to the free space decay rate, γ/γ₀, plotted as a function of mode volume. Data from cavities of two different radii of curvature are included in the figure, each of which provides a series of data points as the cavity is shortened in half-wavelength steps to bring successive TEM00 modes to the 640 nm centre wavelength of the quantum dot emission. [Figure 3(b) caption fragment: ... equation (2) in the text. The solid stars are wholly established from FDTD calculations. The inset shows a greyscale image of the cross-sectional field distribution of the smallest cavity, as modelled using the FDTD method.] The cavity length L is established in each case from the measured free spectral range. Effective mode volumes are calculated both using the analytic expression in the paraxial approximation and numerically by integrating over the field energy distribution, V_eff = ∫ ε(r)|E(r)|² d³r / ε(r_QD)|E(r_QD)|², where the electric field distribution E(r) is established using finite difference time domain (FDTD) modelling software (Lumerical) and r_QD is the location of the emitting quantum dot. These calculation methods are found to agree well provided that L < 0.7β (to satisfy the paraxial approximation), using a field penetration depth into each mirror of 1.08λ, and so the analytic method is used for all subsequent analyses unless otherwise stated. The experimental decay data reveal that a change in recombination rate can only be observed for mode volumes V < 2 µm³, whereupon the rate increases steeply to 1.75 γ₀ at V = 0.53 µm³. This is the first time to our knowledge that the dependence of decay rate on mode volume has been mapped directly. The data shown are from cavities with two different radii of curvature, yet the graph suggests that only the mode volume is important in determining the exciton lifetime. This mode volume dependence is a distinctive characteristic of the bad emitter regime for these cavities, in which the effective Q factor is determined by the emitter and not the cavity. In the well-known analytic expression for the Purcell factor, F_P = (3/4π²)(λ/n)³(Q/V), Q is the quality factor of the combined quantum dot-cavity system, given by Q = λ_peak/(Δλ_cav + Δλ_QD), where Δλ_cav and Δλ_QD are the homogeneous line widths of the cavity mode and the quantum dot emission, respectively [30,31]. In these room temperature experiments, Δλ_cav ≈ 0.1 nm and Δλ_QD ≈ 14 nm, so the relevant Q factor is approximately 45, determined by the exciton dephasing rate of the quantum dots. The Purcell factor for each confined mode therefore scales as 1/V. If one assumes that the free space emission is unperturbed by the presence of the cavity, then the modified decay rate is equal to F_P + 1. The mode volume dependence of this modified rate is shown as a solid line in figure 3(b). The simple model presented above can be developed to take into account three further physical factors: the influence of more than one resonant cavity mode; suppression of emission into unconfined continuum modes by the small cavities; and non-radiative recombination channels in the NQDs. Taking these factors into account leads to the expression γ/γ₀ = η(Σᵢ F_P,i + α) + (1 − η), (4) where γ_nr and γ_rad are the non-radiative and radiative recombination rates of the NQDs in free space, γ₀ = γ_rad + γ_nr is the total free space recombination rate and η = γ_rad/γ₀ is the fluorescence quantum yield in free space. The variable α is the effective Purcell factor for emission into continuum modes (0 ≤ α ≤ 1), representing the suppression of emission that is not matched to the confined modes of the cavity.
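A quick numerical check of the analytic 'bad emitter' estimate is sketched below; the filling index n = 1.44 is carried over from the octadecene solvent as an assumption. The resulting single-mode value falls somewhat below the measured 1.75γ₀, consistent with the point above that multi-mode coupling and continuum suppression (equation (4)) also contribute.

```python
# Hedged check of F_P = (3 / 4π²) (λ/n)³ (Q/V), with Q set by the combined
# linewidths, Q = λ / (Δλ_cav + Δλ_QD). Linewidths, wavelength and mode
# volume are the values quoted in the text.
import math

def purcell_factor(wavelength_nm, n, q, mode_volume_um3):
    lam_um = wavelength_nm * 1e-3
    return (3.0 / (4.0 * math.pi**2)) * (lam_um / n)**3 * q / mode_volume_um3

q = 640.0 / (0.1 + 14.0)   # ≈ 45, dominated by the QD dephasing width
fp = purcell_factor(640.0, 1.44, q, 0.53)
print(q, fp, fp + 1.0)     # single-mode estimate ≈ 0.57, i.e. γ/γ0 ≈ 1.57;
                           # the extra modes and continuum terms of eq. (4)
                           # bring this up towards the measured 1.75
```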
The modified spontaneous emission rates were also simulated numerically using FDTD modelling, by comparing the total power radiated by the source in the cavity to that when emitting into free space [32,33]. This method has the advantage of taking into account in detail the modified local density of states, thus calculating the contribution from both resonant cavity modes and leaky modes, corresponding to Σᵢ F_P,i + α in equation (4). The source is constructed with its peak wavelength resonant with the strongest cavity mode and its line width equal to the homogeneous line width of the nanocrystals. This 'total power radiated' method yields, with no variable parameters, the four points marked as black stars in figure 3(b). These simulated rates agree remarkably well with the experimental data, with the modelled rate for the smallest mode volume equal to 1.78γ₀. The FDTD calculations allow us to isolate the contributions to the total emission rate as described in equation (4). For the smallest cavity we find that emission into the primary mode occurs at a rate close to that of the entire free space emission, with F_P = 0.77, and that emission into the continuum modes is suppressed by 32%, giving α = 0.68. A small additional coupling to other cavity modes within the NQD line width makes up the remainder of the total emission rate. We now turn our attention to the cavity coupling efficiency of the emission, for which the quantum efficiency of the NQD also comes into consideration. The close agreement between the measured lifetimes and those modelled using FDTD supports the claim that η is close to unity, but we continue to include the parameter here for completeness. It can easily be seen from equation (4) that the quantum efficiency for coupling into the ith cavity mode is given by η_i = η F_P,i / [η(Σⱼ F_P,j + α) + (1 − η)]. (5) Using the values determined above and η = 0.9, we find that emission into the primary cavity mode occurs with an efficiency of 40.7%. Finally, we demonstrate the measurement of emission from a single NQD into the cavity mode. This is achieved simply by diluting the quantum dot solution sufficiently so that on average a single dot is present in the cavity mode at any given time. The evidence for single-dot emission is twofold. Firstly, figure 4(a) shows the spectrum for a single NQD at room temperature in free space and compares this with the emission seen in the cavity. Despite the presence of cavity modes at about 8 nm intervals, the range of excited modes is comparable with the homogeneous line width of 14 nm, in contrast with the spectra in figures 2(b) and (d), in which emission is observed across the inhomogeneously broadened ensemble spectrum. Secondly, figure 4(b) shows a time trace of the spectrum, which reveals substantial fluctuation in the mode intensity that is attributable to single-dot fluorescence intermittency. It is encouraging to note that the photon count rate from the single nanocrystal in the cavity is at least comparable with that measured in the absence of a cavity with an NA = 1.25 oil immersion lens. The latter would be expected to collect 35% of the light emitted on average from a single NQD, providing broad agreement with the modelled quantum efficiency calculated using FDTD. With further development of the ion beam milling method it appears possible to achieve even smaller mode volumes than are reported here.
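The 40.7% figure can be reproduced from the reconstructed equation (5). In the sketch below, the residual coupling to other cavity modes (~0.33) is inferred from the reported total rate of 1.78γ₀ and is therefore an assumption rather than a value stated in the paper.

```python
# Hedged sketch of the mode-coupling efficiency of equation (5):
# η_i = η F_P,i / (η (Σ_j F_P,j + α) + (1 − η)).
def mode_coupling_efficiency(fp_i, fp_total_plus_alpha, eta):
    return eta * fp_i / (eta * fp_total_plus_alpha + (1.0 - eta))

# Primary-mode F_P = 0.77 and α = 0.68 are quoted in the text; the ~0.33
# coupling to other modes is inferred from the 1.78 γ0 total rate.
total = 0.77 + 0.68 + 0.33
print(mode_coupling_efficiency(0.77, total, 0.9))  # ≈ 0.41, cf. 40.7% in text
```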
FDTD modelling suggests that mode volumes as small as 0.1 µm³ are possible while retaining high quality factors, if radii of curvature of about 3 µm can be achieved combined with optimal longitudinal field confinement. This would lead to a significant increase in the cavity coupling and is the subject of ongoing work. In summary, the spontaneous emission of CdSe/ZnS quantum dots in arrays of tunable open-access optical microcavities at room temperature is reported. We demonstrate efficient coupling of spontaneous emission into resonant cavity modes, and output coupling into low numerical aperture external optics. A spontaneous emission rate enhancement is observed due to the Purcell effect and effectively modelled using FDTD simulations. We further demonstrate the measurement of emission from a single quantum dot coupled to cavity modes. Our results show significant promise for room temperature single-photon sources, and for using these microcavities in single-molecule fluorescence sensing applications.
4,125.8
2012-06-26T00:00:00.000
[ "Physics" ]
Evaluation of monoxide film-based dosimeters for surface dose detection in electron therapy Generally, electron therapy is applied to tumors on or close to the skin surface. However, this causes a variety of skin-related side effects. To alleviate the risk of these side effects, clinical treatment uses skin dosimeters to verify the therapeutic dose. However, such dosimeters suffer from poor accuracy, because their attachment sites are located by eye. Therefore, a dosimeter based on a flexible material that can adjust to the contours of the human body is required. In this study, the reproducibility, linearity, dose-rate dependence, and percentage depth ionization (PDI) of PbO and HgO film-based dosimeters are evaluated to explore their potential as large-scale flexible dosimeters. The results demonstrate that both dosimeters deliver impressive reproducibility (within 1.5%) and linearity (≥ 0.9990). The relative standard deviations of the dose-rate dependence of the PbO and HgO dosimeters were 0.94% and 1.16% at 6 MeV, respectively, and 1.08% and 1.25% at 9 MeV, respectively, with the PbO dosimeter outperforming the 1.1% of existing diodes. The PDI analysis of the PbO and HgO dosimeters returned values of 0.014 cm (-0.074 cm) and 0.051 cm (-0.016 cm), respectively, at 6 MeV (9 MeV) compared to the thimble chamber and R50. Therefore, the maximum error of each dosimeter is within the allowable range of 0.1 cm. In short, the analysis reveals that the PbO dosimeter delivers a superior performance relative to its HgO counterpart and has strong potential for use as a surface dosimeter. Thus, flexible monoxide materials have the necessary qualities to be used for dosimeters that meet the requisite quality assurance standards and can satisfy a variety of radiation-related applications as flexible functional materials. Introduction Electron beam therapy (EBT) has a short penetration depth, which makes it suitable for treating tumors close to the skin. The therapeutic dose of EBT is calculated on the basis of 80% or 90%, and the R90 at 6 MeV and 9 MeV corresponds to penetration depths of approximately 1.8 and 2.5 cm, respectively, where R90 denotes the clinical range, defined as the depth of 90% relative dose [1]. Skin-related side effects, such as erythema, desquamation, necrosis, epitheliolysis, and hypohidrosis, are common in EBT patients [2][3][4]. However, since the accuracy of the skin dose calculated in the treatment planning system is only ± 20%, a skin dosimeter is used in clinical practice to verify the skin dose [5]. Commonly used dosimeters include films, glass dosimeters (GDs), optically stimulated luminescent dosimeters (OSLDs), and thermoluminescent dosimeters (TLDs). However, none of these devices can obtain the dose distribution over the body surface, because they are analog integrating detectors that measure only the point dose. Additionally, as the attachment point on a patient's naturally curved body is checked visually, the positional accuracy can be unreliable [6]. For example, the average error rate of a digital MOSFET dosimeter was reported to be 22.8% [7]. Therefore, in clinical practice, there is an urgent demand for a patch-type digital surface dosimeter that can be attached to a patient's skin to measure the body surface dose in real time. A significant amount of radiation detector research is focused on developing flexible functional photoconductor materials [8,9].
Among the materials investigated thus far, lead oxide (PbO) and mercury oxide (HgO) have excellent physical properties, with high atomic numbers (Z_Hg: 80, Z_Pb: 82, Z_O: 8) and densities (ρ_PbO: 9.53 g/cm³, ρ_HgO: 11.14 g/cm³) [10][11][12]. Meanwhile, in the particle-in-binder (PIB) method, which involves mixing a powder material with a binder, flexible materials can be produced by using a silicone rubber binder. Additionally, the PIB method offers the possibility of improving electrical stability via passivation of the material itself by using a binder with insulating properties. Therefore, as a basic study towards the development of a large-area surface dosimeter, this study focuses on evaluating the performance of monoxide materials fabricated under optimized manufacturing conditions. In this study, flexible PbO and HgO dosimeters were manufactured and evaluated against the LINAC quality assurance (QA) metrics of reproducibility, linearity, dose-rate dependence, and percentage depth ionization (PDI). Additionally, to verify the applicability of these dosimeters, they were compared with the measurement results of a diode and an ion chamber. Experimental method The PIB deposition method is popular in the field of radiation detectors because it makes it possible to tailor functional materials through the choice of binder and to cover large areas easily [13]. Therefore, in this study, flexible unit-cell sensors based on polycrystalline PbO and HgO materials were fabricated by the PIB method, and their performance was compared and evaluated against the radiation therapy QA requirements. Fabrication of the film dosimeters A flexible indium tin oxide (ITO) film (polyester) substrate was used as the bottom electrode, with ultrasonic cleaning performed for 30 min to remove foreign substances [14]. Afterwards, the photoconductor material was prepared by mixing a T-2 binder (silicone rubber) with powdered PbO and HgO (Kojundo Chemical Laboratory Inc., Japan), each of 99.999% purity, at a mixing ratio of 4:1. The mixed slurry-type photoconductor was applied using a screen-printing technique over an area of 1 cm × 1 cm. The thickness was 50 μm; to reduce the surface roughness to less than 5%, stone milling was performed for 30 min, and the material was then dried in an oven at 70 °C for 8 h. Additionally, to prevent oxidation-related changes to the physical properties, a 10 μm-thick passive layer was deposited on the photoconductor material by CVD deposition of C-type parylene. The top electrode, designed to collect the electric charge, was deposited by the PVD method from gold of 99.999% purity (Sigma Aldrich Inc., USA) over an area of 0.8 cm × 0.8 cm on the produced material. To test the performance of the sensor, we evaluated its reproducibility and linearity at 6 and 9 MeV. For the measurements, we used a LINAC system (Infinity, Elekta AB, Stockholm, Sweden) that can irradiate the sensor with cone-shaped beams. The source-to-surface distance (SSD) was set to 100 cm. The build-up material depth was set to 1.3 cm and 1.9 cm for the mean electron energies of 6 MeV and 9 MeV, respectively, using a slab phantom (RW3, PTW, Freiburg, Germany). The phantom provides a build-up region of "d-max" depth in which an electron equilibrium distribution is generated via stripped electrons, known as secondary electrons or δ-rays. An electrometer (Keithley, 6517A, USA) was used to apply a driving voltage of 1 V/μm across the fabricated sensor.
Subsequently, an oscilloscope was used to acquire the waveforms of the radiation signals, while AcqKnowledge 4.2 software (Biopac, Canada) was used to calculate the electrical charge collected from the acquired waveforms. Table 1 lists the irradiation conditions used during the measurements. Measurement setup In this study, reproducibility and linearity were evaluated to confirm the precision and accuracy of the sensor. Additionally, the dose-rate dependence and PDI were evaluated to analyze the response characteristics relevant to the radiotherapy QA procedure. For reproducibility measurements, all the sensors were irradiated 10 times. Then, to evaluate the response characteristics under repeated irradiation, the measurements were normalized to the signal obtained from the first beam. In general, reproducibility represents the degree of scattering between repeat measurements conducted under the same conditions and can be expressed using the relative standard deviation (RSD), derived as follows: RSD (%) = (100/X_Ave) × sqrt[ Σᵢ (Xᵢ − X_Ave)² / (n − 1) ], where Xᵢ and X_Ave represent the response signal and the mean response signal measured using the detector, respectively, and n represents the number of measurements. To compare the reproducibility of detectors comprising individually prepared mixtures, an RSD analysis was conducted. The evaluation criterion was that the detector precision must be within an RSD value of 1.5% at a 95% confidence level [15][16][17]. For the linearity evaluation, the dose was gradually increased through 3, 10, 50, 100, 200, 300, and 400 MU and evaluated with respect to the coefficient of determination (R²) obtained from a linear regression analysis. In this case, the evaluation criterion was set to R² ≥ 0.9990. Accordingly, a sensor with high response stability and accuracy was selected based on the reproducibility and linearity results, and the dose-rate dependence and PDI were evaluated subsequently. To evaluate the dose-rate dependence, doses of 50, 100, 200, and 400 MU were delivered at dose-rate settings of 100, 200, and 400 MU/min. The measured signal was normalized to a dose rate of 200 MU/min, and the RSD (n = 3) of the value measured at 100 MU was calculated following the method reported in the diode study and evaluated against the reported diode error of 1.1% [18,19]. The diode results used here are those of [18], which was conducted under evaluation conditions similar to those of this study. For the PDI evaluation, the dose was measured at depths of 0.1-9 cm through the slab phantom. Then, to calculate the percentage, the measurements were normalized to the d-max point and compared with the thimble chamber (TM31010, PTW, Freiburg, Germany) result based on R50 [20]. The R50,dos was determined from the measured PDI curves. The R50,ion value is the distance between the water surface and the point, beyond the dose maximum, at which the PDI has a value of 50%; it is converted to R50,dos as [21]: R50,dos = 1.029 R50,ion − 0.06 cm (R50,ion ≤ 10 cm), (2) R50,dos = 1.059 R50,ion − 0.37 cm (R50,ion > 10 cm). (3) Additionally, the R50,dos can be determined from the percentage depth dose (PDD) curve after the PDI is converted to the PDD. This is achieved by multiplying the PDI by the mass stopping power ratio s_w,air according to the method described by Andreo et al. [20]. In this study, we compared the R50,dos results with the thimble chamber using the PDI conversion method.
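Both quality metrics defined above reduce to a few lines of code. The sketch below implements the RSD and the R50 ionization-to-dose conversion of equations (2)-(3); the sample readings are illustrative only, not measured values from the paper.

```python
# Hedged sketch of the two quality metrics defined above.
import statistics

def rsd_percent(readings):
    """RSD (%) = sample standard deviation / mean * 100."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

def r50_dos_cm(r50_ion_cm: float) -> float:
    """Convert the half-value ionization depth to the half-value dose depth."""
    if r50_ion_cm <= 10.0:
        return 1.029 * r50_ion_cm - 0.06
    return 1.059 * r50_ion_cm - 0.37

# Ten illustrative normalized sensor readings from repeated irradiation.
signals = [0.994, 1.003, 0.998, 1.007, 0.996, 1.001, 0.999, 1.004, 0.997, 1.001]
print(rsd_percent(signals))   # should stay below the 1.5% criterion
print(r50_dos_cm(2.58))       # ~2.60 cm, of the order of the 6 MeV values below
```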
Reproducibility The reproducibility and linearity were analyzed to evaluate the performance of the flexible dosimeters based on the monoxide photoconductors PbO and HgO. Fig 2 shows the reproducibility results for each dosimeter. For irradiation at 6 MeV, the analysis yielded RSD values for PbO and HgO of 1.31% and 1.38%, respectively, with corresponding values of 1.23% and 1.43% at 9 MeV. Accordingly, both sensors satisfied the requirement of RSD ≤ 1.5%. The PbO dosimeter outperformed the HgO dosimeter in terms of stability by 0.07% at 6 MeV and 0.20% at 9 MeV. Linearity We evaluated the linearity of each dosimeter to confirm the variation in EBT accuracy according to the irradiation dose, as shown in Fig 3. For irradiation at both 6 and 9 MeV, both dosimeters demonstrated excellent linearity, with R² ≥ 0.9990. Sensitivity is defined as the amount of charge (Q) generated per unit dose (D), corresponding to the slope a of the linear fit: a = Q/D. Accordingly, the PbO and HgO sensitivities, as determined by the slope of the linear function a, were 1.504 and 1.489 at 6 MeV and 1.406 and 1.379 at 9 MeV, respectively. Based on these values, the PbO sensor was evaluated to be relatively superior to the HgO sensor. Dose-rate dependence Next, we evaluated the dose-rate dependence of the PbO and HgO dosimeters. Fig 4 shows the intensity error and RSD for each dose rate. For the PbO dosimeter, the RSD across dose rates was calculated as 0.94% at 6 MeV and 1.08% at 9 MeV based on a dose of 100 MU. According to previous studies, the diode was reported to have an RSD of approximately 1.1% under the same conditions [18]. Compared with the diode, the PbO dosimeter presented here shows excellent characteristics, as evidenced by the lower RSD. For the HgO dosimeter, the RSD was determined as 1.16% at 6 MeV and 1.25% at 9 MeV under the same conditions as adopted for the PbO dosimeter measurements. These results exceed the evaluation criterion of 1.1% (i.e., the RSD reported for the diode), thus reflecting poorer performance in comparison with the PbO dosimeter. Percentage depth ionization The PDI was obtained by conducting measurements at depths between 0.1 and 6.5 cm. Fig 5 shows the PDI measurements for each dosimeter and for the thimble chamber. Here, the thimble chamber values were measured by the Dongnam Institute of Radiological and Medical Sciences, Korea. The PDIs of the PbO and HgO dosimeters were compared with their corresponding R50,dos and R80,dos values to evaluate their performance relative to that of the thimble chamber. At 6 MeV, the PbO and HgO dosimeters yielded R50,dos values of 2.596 cm and 2.559 cm, respectively, which became 3.715 cm and 3.657 cm at 9 MeV. Compared with the R50,dos values of the thimble chamber (6 MeV: 2.610 cm, 9 MeV: 3.641 cm), the R50,dos values of the PbO and HgO dosimeters showed differences of 0.014 cm and 0.051 cm, respectively, at 6 MeV, and -0.074 cm and -0.016 cm, respectively, at 9 MeV. At 6 MeV, the PbO and HgO dosimeters yielded R80,dos values of 2.120 cm and 1.926 cm, respectively, which became 2.743 cm and 2.957 cm at 9 MeV. Compared with the R80,dos values of the thimble chamber (6 MeV: 2.106 cm, 9 MeV: 2.952 cm), the R80,dos values of the PbO and HgO dosimeters showed differences of -0.014 cm and 0.180 cm, respectively, at 6 MeV, and 0.209 cm and -0.005 cm, respectively, at 9 MeV.
The overall mean errors of the PbO and HgO dosimeters were approximately -1.78% and -1.55% at 6 MeV, and -3.01% and -5.13% at 9 MeV, respectively, exhibiting a tendency similar to that of the chamber. Therefore, it was possible to measure the 6 MeV electron beam quality with a semiconductor dosimeter having an error of less than ~2%. Discussion This study analyzed the reproducibility and linearity of polycrystalline PbO and HgO dosimeters fabricated by the PIB deposition method, and further evaluated their dose-rate dependence and PDI. In the reproducibility evaluation, the PbO and HgO dosimeters demonstrated an RSD within 1.5%, which satisfies the 95% confidence criterion. Additionally, both dosimeters showed excellent linearity, indicated by their R² values of 0.9998 or higher. The dose-rate dependence evaluation revealed that the HgO dosimeter underperformed relative to the diode standard (1.1%), whereas the PbO dosimeter remained below it, with a dose-rate variation of 1.04%. Therefore, the PbO dosimeter shows strong potential as a semiconductor dosimeter capable of replacing the diode. The PDI results showed that, when compared with the thimble chamber R50 value, the maximum errors associated with the PbO and HgO dosimeters were 0.014 cm and 0.051 cm, respectively, at 6 MeV, and -0.074 cm and -0.016 cm, respectively, at 9 MeV. These differences may be caused by the air gaps between the slab phantom plates, which accumulate as the measurement depth increases. Based on the overall results, the PbO dosimeter exhibited superior performance compared to its HgO counterpart. According to Task Group 142, which is widely used as a recommendation for medical linear accelerator QA, the R50 value should be within ± 1 mm for annual EBT QA [22]. The results of this study therefore confirm the suitability of PIB-fabricated PbO dosimeters as electron dosimeters. Owing to the simplicity of the PIB deposition method relative to single-crystal manufacturing methods, the film-based polycrystalline monoxide dosimeters proposed here offer a distinct advantage in terms of production cost. Moreover, the accuracy of the dose verification will continue to improve with additional studies on the correction factors for each variable, such as energy dependence and dose-rate dependence. However, the film semiconductor dosimeter proposed in this study does have disadvantages. When attached to the human body, it can cause electric shock and material toxicity problems; to prevent these, a protective layer must be considered. The physical properties, especially the ductility, of the materials used for the protective layer, as well as of the dosimeter itself, should be considered, and the corresponding effect on the attenuation rate should be analyzed. Additionally, the application of various passive layers and the aging problem of metal oxides must be studied. These flexible digital dosimeters can be used as in vivo dosimeters, with multiple potential applications such as high-energy cone beam CT, dental radiography, and radiographic testing. Electron beams involve various parameters, such as scattering in the beam path, irregular fields, and measurement thickness effects. Therefore, the determination of a pixel resolution that can resolve the two-dimensional (2D) dose distribution should be evaluated in future research.
Conclusion Image quality improvement is an important topic in the advancement of radiation detectors, with many studies exploring the potential of single- and polycrystalline materials for this purpose. However, in the field of radiation therapy, dosimeters have shown little progress with respect to measuring electron beams, and no evaluation of the treatment QA items had been conducted. To address this issue, this study evaluated the performance of two polycrystalline monoxide semiconductor dosimeters based on a flexible functional T-2 binder material in terms of their suitability for measuring the skin dose of electron beams, which, to the best of our knowledge, has not been evaluated previously. The study provides useful insights for the development of a flexible 2D dosimeter that can map the skin dose distribution. Additionally, the QA evaluation aspect of this study can help guide future research directions for the development of an optimized dosimeter. Flexible functional materials are promising candidates for overcoming the morphological limitations of rigid materials and have a promising future in dosimeter development. Therefore, this study provides basic data for all radiation-based measurement fields. Author Contributions Data curation: Seung Woo Yang.
4,155.2
2021-05-21T00:00:00.000
[ "Medicine", "Engineering", "Physics" ]
Thermospheric zonal temperature gradients observed at low latitudes Abstract. Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures from the Doppler widths of the OI 630 nm nightglow emission line have been carried out at Cachoeira Paulista (23° S, 45° W, 16° S dip latitude), Brazil. The east-west components of the thermospheric temperatures obtained on 73 nights during the period from 1988 to 1992, primarily under quiet geomagnetic conditions, were analyzed and are presented in this paper. It was observed that on 67% of these nights, the temperatures in both the east and west sectors presented similar values and nocturnal variations. However, during 33% of the nights, the observed temperatures in the west sector were usually higher than those observed in the east sector, with zonal temperature gradients in the range of 100 K to 600 K over about an 800 km horizontal distance. Also, in some cases, the observed temperatures in the east and west sectors show different nocturnal variations. One of the possible sources considered for the observed zonal temperature gradients is the dissipation of gravity waves that propagate from lower altitudes to thermospheric heights. The observed zonal temperature gradients could also be produced by orographic gravity waves originating far away, over the Andes Cordillera in the Pacific sector, or by dissipation of orographic gravity waves generated over the Mantiqueira Mountains in the Atlantic sector by tropospheric disturbances (fronts and/or subtropical jet streams). Key words. Atmospheric composition and structure (airglow and aurora; thermosphere - composition and chemistry); Ionosphere (equatorial ionosphere) Although optical instruments have been widely used to study the upper atmosphere for nearly half a century, observations of thermospheric neutral winds and temperatures at low latitudes using an FPI are still recent and are providing interesting and novel scientific results. Fagundes et al. (1996a, 1998) observed unusually large thermospheric zonal temperature gradients at Cachoeira Paulista (23° S), Brazil, and Meriwether et al. (1996, 1997) observed both zonal temperature and wind gradients at Arequipa (16.5° S), Peru. In this paper, we present a study of the occurrence and possible sources of the thermospheric zonal temperature gradients recorded at Cachoeira Paulista, using a series of 73 nights of observations (only nights with more than 3 hours of measurements were studied) obtained during the period from 1988 to 1992, primarily under quiet geomagnetic conditions and mid-high solar activity. It should be pointed out that, to a certain extent, uniformity of temperature in the thermosphere is expected due to its high viscosity. The MSIS-90 model (Hedin, 1991) predicts very small east-west thermospheric temperature gradients for low latitudes. Instrumentation The FPI characteristics have been presented by Sahai et al.
(1992a, 1992b). The parallelism adjustment of the etalon (15 cm diameter) and the wavelength scanning are performed by three optically contacted piezoelectric pads. The temperature of the etalon is also controlled (±0.1 °C). A 64-channel digital analyzer is used to scan the interferometer in wavelength, and several scans are added in order to increase the signal-to-noise ratio. The number of additions depends on the ability to maximize the OI 630 nm intensity level without losing time resolution. The error in the inferred Doppler temperature is ±40 K for an OI 630 nm emission intensity level of 200 R. The peak height of the OI 630 nm emission is around the 240 to 270 km altitude. Since the FPI observes in the four cardinal directions (north, south, east and west) at an elevation angle of 30°, the zonal horizontal distance between the observed points is about 800 to 900 km. It should be mentioned that we do not have measurements in the zenith position, and the zonal and meridional winds are calculated from the differences between the east-west and north-south peak wavelength displacements of the observed fringe profiles (Sahai et al., 1992a). Results and discussion During the period of 1988 to 1992, mostly under quiet geomagnetic conditions and mid-high solar activity, a total of 73 nights of thermospheric temperature observations has been analyzed. One of the prominent features in the observed thermospheric temperatures at low latitudes (South American sector) is the occasional presence of strong thermospheric zonal temperature gradients. This feature has been reported by two independent research groups from observations carried out at two different locations in the South American sector. The first report was based on observations made at Cachoeira Paulista (23° S, 45° W, Atlantic side) by Fagundes et al. (1996a) and the second one was based on observations made at Arequipa (16.5° S, 71.5° W, Pacific side), Peru, by Meriwether et al. (1996, 1997). Table 1 lists the dates of the selected nights presented in this study, the mean nocturnal temperature, the observed temperature gradients and the solar-geomagnetic conditions. We consider a temperature gradient significant when there is a continuous difference greater than 100 K in the thermospheric temperatures between the east and west sectors (over about 800 km horizontal distance) for more than three hours. Figure 1 presents a map of South America showing the locations of the Andes (west side) and Mantiqueira (east side) Mountains. Also, the sub-ionospheric points (∼ 270 km altitude) of the FPI beams in the east-west direction are marked on the map for both the Cachoeira Paulista and Arequipa observatories. The temperature gradients are more often and more prominently observed in the zonal direction at Cachoeira Paulista, but meridional temperature gradients were also observed on a few occasions. In this paper, we concentrate our analysis and discussion only on the zonal direction observations. Figures 2 and 3 show the temperatures observed in the east and west directions for a few representative nights with and without temperature gradients. The nighttime temperature variations predicted by the MSIS-90 model (Hedin, 1991) are also presented in Figs. 2 and 3 for some nights and for two different longitudes (23° S, 41° W, closed square; and 23° S, 49° W, open circle) at the 270 km altitude, but at Cachoeira Paulista local time. These two locations are close to the positions at which the FPI observes the temperatures in the east and west directions, respectively, from Cachoeira Paulista. Notice that the MSIS-90 model gives, in general, a very small east-west temperature gradient (∼ 25 K over a range of 800 km). Sahai et al. (1992) have reported that the observed thermospheric temperatures at Cachoeira Paulista are in good agreement with the MSIS-86 model for the winter and equinox seasons. However, the MSIS model results primarily represent average conditions and do not exhibit the day-to-day variability present in the thermospheric temperatures. Table 2 shows the details of all the nights analyzed (73 nights) with and without zonal temperature gradients as a function of solar activity.
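As a side note to the instrumentation description above, the sketch below shows the standard conversions implied by an FPI retrieval, assuming a purely thermal (Gaussian) OI 630 nm line emitted by atomic oxygen and the 30° elevation viewing geometry; the numerical inputs are illustrative, not measured values from the paper.

```python
# Hedged sketch of FPI retrievals: temperature from the Doppler (thermal)
# FWHM, Δλ_D = λ sqrt(8 ln2 kB T / (m c²)), and horizontal wind from the
# line-of-sight Doppler shift projected through the elevation angle.
import math

KB = 1.380649e-23            # Boltzmann constant, J/K
C = 2.99792458e8             # speed of light, m/s
M_O = 16 * 1.66053907e-27    # mass of atomic oxygen, kg
LAM = 630.0e-9               # OI emission wavelength, m

def doppler_temperature(fwhm_m: float) -> float:
    """Invert the Gaussian Doppler width for the neutral temperature (K)."""
    return M_O * C**2 * (fwhm_m / LAM) ** 2 / (8.0 * math.log(2.0) * KB)

def horizontal_wind(shift_m: float, elevation_deg: float) -> float:
    """Line-of-sight Doppler wind projected onto the horizontal plane (m/s)."""
    v_los = C * shift_m / LAM
    return v_los / math.cos(math.radians(elevation_deg))

print(doppler_temperature(3.6e-12))    # ≈ 1000 K for a ~3.6 pm FWHM
print(horizontal_wind(2.1e-13, 30.0))  # ≈ 115 m/s for a ~0.21 pm shift
```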
Observations without temperature gradients On most of the nights on which thermospheric temperatures were observed during the period from 1988 to 1992, the temperatures in the east and west directions at Cachoeira Paulista presented similar nighttime features and magnitudes. Figure 2 shows the temperatures observed in the east and west directions for some representative nights without temperature gradients. On 49 (67%) of the 73 nights of observation, the temperature behaviour was similar to that shown in Fig. 2. It is noted that the observed temperatures in the west and east sectors presented very similar nocturnal variations during these nights. Also, this behaviour remains the same in both sectors (T_E over the sea and T_W over the continent) whether the observed temperatures presented smooth nighttime variations (e.g. 22 March 1988) or wave-like variations (e.g. 11 November 1989). On the nights when the thermospheric temperature variations showed wave-like structures, the temperatures in both sectors presented a similar amplitude of variation (100 K to 400 K). These wave-like variations, with periods of a few hours, could possibly be caused by the presence of gravity waves at thermospheric heights. Observations with temperature gradients During the period studied, 24 nights (33%) showed the presence of significant zonal temperature gradients. Table 2 shows the number of nights that presented zonal temperature gradients and the number of nights that did not, for three different levels of solar activity. It is noted that the occurrence of zonal temperature gradients is somewhat higher when F10.7 > 200 [W/m² Hz] (37%) than when F10.7 is between 150-200 [W/m² Hz] (28%). This indicates that the occurrence of zonal temperature gradients has some dependence on the solar activity level, and this result agrees with that previously presented by Meriwether et al. (1997). However, we find that the occurrence of zonal temperature gradients when F10.7 < 150 [W/m² Hz] (33%) is similar to that when F10.7 > 200 [W/m² Hz] (37%), but due to the small number of nights (12) when F10.7 < 150 [W/m² Hz], these results must be considered with some reservation. The numbers of nights analyzed when F10.7 is between 150-200 [W/m² Hz] and when F10.7 > 200 [W/m² Hz] are larger (32 and 29 nights, respectively), so those results are more representative. The dependence of the observed temperature gradients on solar activity could be associated with the increase in the F-region electron density with solar activity and a possible increase in the transmission of gravity waves from the lower to the upper atmosphere (Meriwether et al., 1997). It should be mentioned that Meriwether et al. (1997) observed thermospheric temperature gradients only during winter in the high solar activity period, whereas in the present investigation thermospheric temperature gradients occurred in all seasons with medium to high levels of solar activity. 3.3 Possible mechanisms causing zonal temperature gradients Meriwether et al. (1996, 1997) have suggested that the temperature gradients observed at Arequipa (16.5° S, 71.5° W) are produced by viscous dissipation of orographic gravity waves. They have suggested that low-frequency components of orographic gravity waves are generated at low altitudes (troposphere) over the Andes Cordillera and propagate vertically to thermospheric heights, where these waves are dissipated, producing a localized region of heating. Also, Meriwether et al.
(1997) proposed that the temperature gradients observed at Cachoeira Paulista (23° S, 45° W) (Fagundes et al., 1996a) are just a manifestation of the same heating source, since any perturbation in the thermosphere may extend farther to the east, away from the Andes, and then reach Cachoeira Paulista. Since the thermospheric zonal wind flows eastward during the night for almost the whole year, neutral air heated at Arequipa could reach Cachoeira Paulista. However, this heated neutral air has to travel a distance of approximately 26° (∼ 2600 km), and for a typical eastward wind of 100 m/s at thermospheric heights, it will reach Cachoeira Paulista about 7 hours later. There is no physical restriction on perturbations travelling from the Andes (Pacific) to Brazil (Atlantic), but there are some physical obstacles to this explanation. The typical horizontal distance between the observed east-west sectors in the FPI observations is about 800 km. Since we observe temperature gradients at Cachoeira Paulista of about 100 K to 600 K (Fig. 3), the neutral air flowing eastward has to lose about 12.5 K to 75 K per 100 km, respectively. Taking into account that the thermospheric neutral air heated at the Andes has to travel ∼ 2600 km to reach Cachoeira Paulista, the total temperature decrease during this long journey is estimated at 325 K to 1950 K, respectively (considering orographic heating over the Andes as a point heat source). Therefore, it is possible to explain temperature gradients of the order of 100 K at Cachoeira Paulista by taking into account the heating produced at the Andes (Pacific sector). Nevertheless, it is almost impossible to explain the temperature gradients at Cachoeira Paulista larger than 200 K without taking into account other sources of localized heating. However, we have to bear in mind that the spatial (longitudinal/latitudinal) structure of the orographic gravity waves generated over the Andes (which are about 300 km across) is not yet known.
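The arithmetic of this advection argument is easily checked; the short sketch below reproduces the 7-hour travel time and the 325-1950 K total cooling quoted in the text.

```python
# Back-of-envelope check of the advection argument: heated air travelling
# ~2600 km eastward at a typical 100 m/s zonal wind, cooling at the rate
# implied by the observed 100-600 K gradients over ~800 km.
def travel_time_h(distance_km: float, wind_m_s: float) -> float:
    return distance_km * 1e3 / wind_m_s / 3600.0

def total_cooling_K(gradient_K_per_800km: float, distance_km: float) -> float:
    per_100km = gradient_K_per_800km / 8.0    # 12.5-75 K per 100 km
    return per_100km * distance_km / 100.0

print(travel_time_h(2600.0, 100.0))    # ~7.2 h, as stated in the text
print(total_cooling_K(100.0, 2600.0))  # 325 K for a 100 K gradient
print(total_cooling_K(600.0, 2600.0))  # 1950 K for a 600 K gradient
```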
Also, as mentioned by Meriwether et al. (1997), magnetic activity effects would not produce the form of localized heating which is observed. Hines (1960) drew attention to the importance of atmospheric gravity waves at ionospheric heights. Sources for medium-scale gravity waves include tropospheric disturbances, such as jet streams, frontal systems and penetrative convection (e.g. Bertin et al., 1978). Table 3 shows the details of the tropospheric disturbances (fronts and subtropical jet streams) present in the region close to the observation site on days with and without thermospheric temperature gradients. The Mantiqueira Mountains have several peaks as high as 2500 m in altitude. Thus, an additional source for some of the observed temperature gradients could be the interaction of tropospheric disturbances with the Mantiqueira Mountains (which occupy an extensive region around 22° S, 44° W), thereby producing large vertical wavelength waves that propagate upward from the mountains. Zonal temperature gradients in the thermosphere at low latitudes have only been identified in the last 5-10 years and their cause is still not completely understood. More simultaneous thermospheric and ionospheric observations from regions with different topography will be important to provide additional information for a better understanding of the sources of thermospheric zonal temperature gradients at low latitudes.

Conclusions

We have analyzed thermospheric temperature variations (on 73 nights) observed at Cachoeira Paulista (23° S, 45° W) during the period from 1988 to 1992, under primarily quiet geomagnetic conditions and mid-high solar activity. The main features associated with the occurrence of thermospheric zonal temperature gradients are summarized below:

1. Of the 73 nights studied during the period from 1988 to 1992, 33% presented thermospheric zonal temperature gradients. The occurrence of zonal temperature gradients has some dependence on the solar activity level, and gradients are observed in all seasons.

2. One of the possible sources for zonal temperature gradients greater than 100 K (over an 800 km horizontal distance) at Cachoeira Paulista is the heating produced in the Andes by orographic gravity waves, as suggested by Meriwether et al. (1996, 1997). Nevertheless, it is not possible to explain the observed temperature gradients at Cachoeira Paulista larger than 200 K without taking into account other localized heating sources.

3. Orographic gravity waves may be generated at the Mantiqueira Mountains, and their dissipation at thermospheric heights may induce localized heating and, consequently, provide an additional source for the observed temperature gradients.

Fig. 1. Map of South America showing the location of the Andes Cordillera (left side, black) and of the Mantiqueira Mountains (right side, hatched) and the zonal sub-ionospheric points (∼270 km) for Cachoeira Paulista and Arequipa.

Fig. 2. Nighttime variations of the observed temperatures to the east and west, for representative nights without thermospheric zonal temperature gradients. The temperatures obtained from the MSIS-90 model are also shown for two different longitudes (23° S, 41° W, closed square; and 23° S, 49° W, open circle) at 270 km of altitude.

Fig. 3.
Nighttime variations of the observed temperatures to the east and west, for representative nights with thermospheric zonal temperature gradients. The temperatures obtained from the MSIS-90 model are also shown for two different longitudes (23° S, 41° W, closed square) and (23° S, 49° W, open circle) at 270 km of altitude.

Table 1. A list of observation dates, solar-geomagnetic conditions, temperature gradients and mean nocturnal temperatures considered in this study.

Table 2. Number of nights analyzed with/without thermospheric zonal temperature gradients for three solar flux levels F10.7 [W/m² Hz].

The MSIS-90 model temperatures are shown for two locations (23° S, 41° W, closed square; and 23° S, 49° W, open circle) at the 270 km altitude, but at Cachoeira Paulista local time. These two locations are close to the positions at which the FPI observes the temperatures in the east and west directions, respectively, from Cachoeira Paulista. Notice that the MSIS-90 model gives, in general, a very small east-west temperature gradient (∼25 K over a range of 800 km). Sahai et al. (1992) have reported that the observed thermospheric temperatures at Cachoeira Paulista are in good agreement with the MSIS-86 model for the winter and equinox seasons. However, the MSIS model results primarily represent average conditions and do not exhibit the day-to-day variability present in the thermospheric temperatures.

Table 3. Details of tropospheric disturbances in the region of observation on the nights of measurements with/without thermospheric temperature gradients (Source: Climanalise, a monthly publication by INPE).
3,831.8
2001-09-30T00:00:00.000
[ "Physics", "Environmental Science" ]
177. Distinctive Features of Ertapenem Mono-Resistant Carbapenem-Resistant Enterobacterales in the United States: A Cohort Study Abstract Background Carbapenem-resistant Enterobacterales (CRE) are highly antibiotic-resistant bacteria. Whether CRE resistant only to ertapenem among carbapenems (ertapenem mono-resistant) represent a unique CRE subset with regards to risk factors, carbapenemase genes, and outcomes is unknown. Methods We analyzed laboratory- and population-based surveillance data from nine sites participating in CDC's Emerging Infections Program (EIP). We defined an incident case as the first isolation of Enterobacter cloacae complex, Escherichia coli, Klebsiella aerogenes, K. oxytoca, K. pneumoniae, or K. variicola resistant to doripenem, ertapenem, imipenem, or meropenem (determined at the clinical laboratory) from a normally sterile site or urine identified from a resident of the EIP catchment area in 2016-2017. We compared risk factors, carbapenemase genes (determined via polymerase chain reaction at CDC), and mortality of cases with ertapenem "mono-resistant" to "other" CRE (resistant to ≥ 1 carbapenem other than ertapenem). We additionally conducted survival analysis to determine the effect of ertapenem mono-resistant status and isolate source (sterile vs. urine) on survival. Results Of 2009 cases, 1249 (62.2%) were ertapenem mono-resistant and 760 (37.8%) were other CRE (Figure 1). Ertapenem mono-resistant CRE cases were more frequently ≥ 80 years old (29.1% vs. 19.5%, p < 0.0001), female (67.9% vs. 59.0%, p < 0.0001), and white (62.6% vs. 45.1%, p < 0.0001). Ertapenem mono-resistant isolates were more likely than other CRE to be Enterobacter cloacae complex (48.4% vs. 15.4%, p < 0.0001) but less likely to be isolated from a normally sterile site (7.1% vs. 11.7%, p < 0.01) or have a carbapenemase gene (2.4% vs. 47.4%, p < 0.0001) (Figure 2). Ertapenem mono-resistance was not associated with a difference in 90-day mortality (unadjusted odds ratio [OR] 0.82, 95% confidence interval [CI] 0.63-1.06) in logistic models or survival analysis (Figure 3). Figure 1. Flow diagram of carbapenem-resistant Enterobacterales cases included in analysis, 2017-2018. CRE, carbapenem-resistant Enterobacterales; MIC, minimum inhibitory concentration. Ertapenem mono-resistant CRE are only resistant to ertapenem (among carbapenems). Other CRE are resistant to ≥1 carbapenem other than ertapenem. We excluded isolates that (1) had no interpretable MICs for any carbapenem, (2) were only tested against ertapenem, (3) had unknown death status, or (4) were not associated with the patient's first incident case. Figure 2. Proportion of ertapenem mono-resistant carbapenem-resistant Enterobacterales (CRE) vs. other CRE isolates with specific carbapenemase genes. KPC, Klebsiella pneumoniae carbapenemase; NDM, New Delhi metallo-ß-lactamase; OXA, oxacillinase. Ertapenem mono-resistant carbapenem-resistant Enterobacterales (CRE) are only resistant to ertapenem (among carbapenems). Other CRE are resistant to ≥1 carbapenem other than ertapenem. Testing via reverse transcriptase polymerase chain reaction. Figure 3. Survival analysis comparing patients with carbapenem-resistant Enterobacterales (CRE) that are ertapenem mono-resistant to other CRE (i.e., resistant to ≥1 carbapenem other than ertapenem), either total (A) or stratified by isolate site (B).
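The unadjusted odds ratio quoted above can be illustrated with a standard 2×2 computation. The cell counts below are hypothetical placeholders (the abstract reports only the resulting OR and CI), so this is a sketch of the method, not a reproduction of the study's data:

```python
import math

# Hypothetical 2x2 table: rows = ertapenem mono-resistant vs. other CRE,
# columns = died vs. survived within 90 days. Counts are illustrative only;
# the abstract reports just the resulting OR (0.82, 95% CI 0.63-1.06).
a, b = 120, 1129   # mono-resistant: deaths, survivors
c, d = 90, 670     # other CRE:      deaths, survivors

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)      # Wald standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"unadjusted OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```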
Ertapenem mono-resistant isolates were not associated with decreased mortality, and sterile isolate source (i.e., non-urinary isolates) was associated with increased mortality regardless of ertapenem mono-resistance. Conclusion Ertapenem mono-resistant CRE rarely have carbapenemase genes and have distinct clinical and microbiologic characteristics compared to other CRE. These findings may inform antibiotic choice, particularly when testing for carbapenemases is not readily available. Disclosures All Authors: No reported disclosures

Conclusion. Our study revealed a surprising association between influenza epidemics and GN resistance and corroborated the evidence of a correlation between respiratory GP and influenza infections. These insights may help inform targeted antimicrobial stewardship initiatives during influenza season.

Background. Carbapenem-resistant Enterobacterales (CRE) have become endemic and cause significant morbidity and mortality globally. The metallo-beta-lactamase gene bla IMP-4 is a key CRE resistance determinant in Australia and Asia, but its genomic context remains unknown. We aimed to determine the genomic epidemiology of bla IMP-4 in clinical and environmental isolates from 2008-2020 at our institution. Methods. We performed whole genome sequencing on 219 bla IMP-4-carrying isolates from 134 patients (219 short-read and 75 long-read). Multi-locus sequence types (MLSTs), resistance determinants and plasmid replicons were assessed. High-quality de novo hybrid assemblies were used to identify the location of the bla IMP-4 gene. We conducted phylogenetic analysis for key MLSTs and plasmids. Results. Bla IMP-4 was noted on a class I integron also harboring aminoglycoside, sulfamethoxazole, chloramphenicol and quaternary ammonium compound resistance genes. This integron was able to migrate over time to 10 bacterial species (42 STs) and 6 different plasmid types (Figure 1 and Figure 2). From 2008-2020, bla IMP-4 was present on IncC plasmids in Serratia marcescens and Klebsiella pneumoniae.
We noted small outbreaks of Pseudomonas aeruginosa ST111 with chromosomal integration of bla IMP-4 from 2008-2018 and Enterobacter cloacae complex ST114 with bla IMP-4 on IncFIB(K)/IncFIA(HI1) plasmids from 2011-2020 (19 isolates). From 2016-2020, there was an explosion of diverse IncHI2 plasmids carrying bla IMP-4. This was driven by clonal expansion of E. cloacae complex ST93/ST190 (79 isolates), with spillover of IncHI2 plasmids to Klebsiella spp (13 isolates), Citrobacter spp (2 isolates), S. marcescens (1 isolate), and Escherichia coli (4 isolates). In addition to bla IMP-4, these plasmids carried mcr-9.1, a colistin resistance gene, and resistance determinants to nearly all key classes of Gram-negative antimicrobials. BlaIMP-4 was noted in diverse bacterial species over the study period. Serratia marcescens and Klebsiella pneumoniae were present throughout. Outbreaks of Enterobacter cloacae complex ST114, ST190 and ST93 and Pseudomonas aeruginosa ST111 were noted. The presence of blaIMP-4 on diverse plasmids that varied through the study period was noted. Plasmids were characterised by analysing de novo hybrid assembly data and the co-location of blaIMP-4 and plasmid replicons on the same contigs. Conclusion. Bla IMP-4 spread on a class I integron was responsible for endemic carbapenem resistance at our institution. This mobile genetic element was able to persist due to both clonal spread and entry into diverse plasmids. Concerningly, we noted a large outbreak driven by IncHI2 plasmids harboring colistin resistance genes with spread to multiple bacterial species. Disclosures. All Authors: No reported disclosures
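The co-location step described above (calling bla IMP-4 plasmid-borne when it shares a contig with a plasmid replicon) can be sketched as a small table join; the column names and values below are hypothetical, since the abstract does not specify the tool outputs:

```python
import pandas as pd

# Hypothetical resistance-gene and replicon hit tables (ABRicate-style);
# the column names and values are placeholders, not the study's data.
res = pd.DataFrame({
    "isolate": ["A", "A", "B"],
    "contig":  ["ctg1", "ctg2", "ctg1"],
    "gene":    ["blaIMP-4", "mcr-9.1", "blaIMP-4"],
})
rep = pd.DataFrame({
    "isolate":  ["A", "B"],
    "contig":   ["ctg1", "ctg3"],
    "replicon": ["IncHI2", "IncC"],
})

# blaIMP-4 is treated as plasmid-borne when its contig also carries a
# replicon; otherwise it is provisionally called chromosomal/unplaced.
hits = res[res["gene"] == "blaIMP-4"].merge(rep, on=["isolate", "contig"], how="left")
hits["location"] = hits["replicon"].fillna("chromosome/unplaced")
print(hits[["isolate", "contig", "location"]])
```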
1,624
2021-11-01T00:00:00.000
[ "Biology", "Medicine" ]
Machine learning identifies multi-parametric functional PET/MR imaging cluster to predict radiation resistance in preclinical head and neck cancer models

Purpose Tumor hypoxia and other microenvironmental factors are key determinants of treatment resistance. Hypoxia positron emission tomography (PET) and functional magnetic resonance imaging (MRI) are established prognostic imaging modalities to identify radiation resistance in head-and-neck cancer (HNC). The aim of this preclinical study was to develop a multi-parametric imaging parameter specifically for focal radiotherapy (RT) dose escalation using HNC xenografts of different radiation sensitivities. Methods A total of eight human HNC xenograft models were implanted into 68 immunodeficient mice. Combined PET/MRI using dynamic [18F]-fluoromisonidazole (FMISO) hypoxia PET, diffusion-weighted (DW), and dynamic contrast-enhanced MRI was carried out before and after fractionated RT (10 × 2 Gy). Imaging data were analyzed on a voxel basis using principal component (PC) analysis for dynamic data and apparent diffusion coefficients (ADCs) for DW-MRI. A data- and hypothesis-driven machine learning model was trained to identify clusters of high-risk subvolumes (HRSs) from multi-dimensional (1-5D) pre-clinical imaging data before and after RT. The stratification potential of each 1D to 5D model with respect to radiation sensitivity was evaluated using Cohen's d-score and compared to classical features such as mean/peak/maximum standardized uptake values (SUVmean/peak/max) and tumor-to-muscle ratios (TMRpeak/max) as well as minimum/valley/maximum/mean ADC. Results Complete 5D imaging data were available for 42 animals. The final preclinical model for HRS identification at baseline yielding the highest stratification potential was defined in 3D imaging space based on ADC and two FMISO PCs (p < 0.001). In 1D imaging space, only clusters of ADC revealed significant stratification potential (p = 0.002). Among all classical features, only ADCvalley showed significant correlation to radiation resistance (p = 0.006). After 2 weeks of RT, FMISO_c1 showed significant correlation to radiation resistance (p = 0.04). Conclusion A quantitative imaging metric was described in a preclinical study indicating that radiation-resistant subvolumes in HNC may be detected by clusters of ADC and FMISO using combined PET/MRI, which are potential targets for future functional image-guided RT dose-painting approaches and require clinical validation.
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-023-06254-9.

Introduction About 50% of patients treated with radiochemotherapy (RCT) for locally advanced human papilloma virus-negative head-and-neck cancer (HNC) experience local and regional treatment failure [1,2]. As salvage treatment options are limited, locoregional failure in most patients leads to severe symptoms and ultimately to death. Thus, overcoming treatment resistance by optimized RCT represents an important area of research. (Simon Boeke, René M. Winter, and Sara Leibfarth contributed equally.) Preclinical and clinical data demonstrate that tumor hypoxia and other microenvironmental factors significantly contribute to tumor radiation resistance [3][4][5][6]. Different quantitative imaging biomarkers (QIBs) related to tumor hypoxia and microenvironment have shown potential for outcome prediction, early response assessment, and RT personalization, e.g., by means of risk-adapted radiation dose modulation [7][8][9][10][11][12]. Hypoxia imaging using positron emission tomography (PET) with specific radiotracers such as [18F]-Fluoromisonidazole (FMISO) has proven prognostic power to predict outcome after RCT in HNC [7,[13][14][15]. Similarly, functional magnetic resonance imaging (MRI) techniques, such as diffusion-weighted (DW) imaging assessing tumor cellularity or dynamic contrast-enhanced (DCE) imaging which allows analysis of tissue vascularity and vessel permeability, have been correlated to tumor response after RCT in HNC and other solid tumors [8,9,16,17]. Some studies correlated the spatial distribution of multiple QIB and suggested complementary biological information [18][19][20]. However, the optimal QIB or imaging profile using multiple QIB to predict outcome after RCT in HNC is unknown. Most results were derived from small observational clinical cohorts, and none of the previous studies was able to relate relevant QIB to radiation resistance on a biological or pre-clinical level. Future clinical use of QIB to personalize radiation dose to overcome treatment resistance requires a widely available, robust, affordable, and simple method to generate QIB to allow multicenter trials and easy access for patients. In contrast to molecular profiling [21,22], liquid biopsy [23,24], histopathology [25,26], or combination with immunotherapy [27,28], QIBs have the benefit of spatial tumor characterization [29] and thus optimal conditions for focal personalized interventions such as dose-painting, including dose escalation and dose de-escalation [13,30,31]. The aim of this preclinical study was to develop and train a multi-scale model from a broad and unbiased basis for prediction of high-risk subvolumes (HRS) in HNC linked to increased radiation resistance derived from hypoxia PET, DW-, and DCE-MRI. Xenograft tumors from different human HNC cell lines with variable, known radiation sensitivities were imaged with multi-parametric small animal PET/MRI and evaluated by novel machine learning (ML) methods to identify HRS in multi-dimensional imaging space. The hypothesis to be investigated in this study was therefore that with novel ML approaches new QIB or imaging profiles will be discovered to define HRS in a pre-clinical scenario, which may be used for future personalized radiotherapy (RT) interventions in a clinical setting.
Study design, animals, and tumor models A total of 68 mice with implanted human HNC cell lines of different, known radiation sensitivities were examined with simultaneous functional PET/MRI before and after 2 weeks of fractionated RT. Details on animals, implanted cell lines, imaging data, and time points are summarized in Table 1. The animal facilities and all experiments were approved according to our institutional guidelines and the German animal welfare regulations (animal allowance no. 35/9185.81-2/R4/16). Two to 5 days before tumor cell injection, 4- to 6-week-old immunodeficient female nude mice (NMRI nu/nu, Charles River Laboratories) received a 4-Gy total body irradiation (6 MV photons, Elekta SL15, Crawley, UK) to further suppress the residual immune system. Eight well-established human HNSCC tumor cell lines (UTSCC-45, XF354, UTSCC-14, UTSCC-8, FaDu, UTSCC-5, CAL-33, SAS) with known radiation sensitivities in vivo [32,33] were grown in cell culture (cf. Table 1). Exponentially growing cells of the third passage were trypsinised, and a single cell suspension with approx. 500,000 cells dissolved in 50 μl phosphate-buffered saline was prepared and injected subcutaneously on the right hind leg of the animal. Animals were checked regularly for weight loss, abnormal behavior, or other signs of distress. Tumor diameter was measured twice weekly. After reaching the target size of 7-10 mm diameter, tumors were examined using multi-modal, small animal PET/MRI before and after 2 weeks of fractionated RT. Multi-modal imaging and radiotherapy All animals were imaged with combined PET/MRI using a small animal 7-T MRI system with a dedicated PET insert [29,34,35]. Animals were anesthetized with a mixture of isoflurane (1.5-2.0%; Abbott, Wiesbaden, Germany) and air (flow rate 1.0-1.5 l/min) with continuous monitoring of the breathing rate, and were placed on a warming pad to maintain constant body temperature during imaging. The imaging protocol consisted of simultaneous dynamic FMISO PET, anatomical T2-weighted MRI (T2w-MRI), DW-MRI, and DCE-MRI, with T2w- and DW-MRI in a gated acquisition technique with respiratory triggering (cf. Fig. 1). Dynamic PET was acquired in list mode for 90 min post injection (p.i.) of approximately 10 MBq FMISO in 200 μl of physiological sodium chloride solution (0.9%) into the animal's tail vein. PET data were reconstructed to a total of 65 time frames (36 × 10 s, 18 × 60 s, 11 × 360 s) using 2D-OSEM (4 iterations, 8 subsets). DW-MRI was performed with an echo planar imaging sequence with nine equidistant b-values (b = 0-800 s/mm²). DCE-MRI was acquired for a total duration of 13.5 min starting 1 min before injection of the contrast agent (Gadovist®, Bayer Vital GmbH, Germany), with a temporal resolution of 5.4 s. Details about the pre-clinical image acquisition protocol are given in Table 2. Irradiation with ten fractions of 2 Gy (one fraction per day) was applied over 2 weeks using a dedicated small animal image-guided RT platform (SAIGRT, Dresden, Germany) [36]. For irradiation, the animals were immobilized using plastic tubes fixated on a precisely movable carbon table; the tumor-bearing leg was positioned using a foot holder. Positioning accuracy with respect to the radiation field was checked with portal X-ray imaging (80 kV, 0.8 mA). All irradiations were performed using iso-centric opposed fields with dedicated circular collimators (8-14 mm diameter) depending on tumor volume. Radiation dose and corresponding irradiation time were calculated as a function of tumor size.
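As a quick consistency check, the framing scheme above can be verified to cover the stated 90 min acquisition and 65 frames; a short Python sketch:

```python
# Sanity check of the dynamic PET framing: 36x10 s + 18x60 s + 11x360 s
# should cover the full 90 min acquisition in 65 frames.
frames = [(36, 10), (18, 60), (11, 360)]
total_s = sum(n * dur for n, dur in frames)
print(total_s, total_s / 60.0)          # 5400 s = 90.0 min

# Frame edges and mid-frame time stamps (s), e.g., for curve resampling.
edges = [0.0]
for n, dur in frames:
    for _ in range(n):
        edges.append(edges[-1] + dur)
mid_times = [(a + b) / 2.0 for a, b in zip(edges[:-1], edges[1:])]
print(len(mid_times))                   # 65 frames, as reconstructed
```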
ML-based identification of radioresistant clusters Image pre-processing During a data preprocessing step, the tumor region as well as a representative muscle region were defined manually based on the T2w-MRI data by an experienced radiation oncologist (SB) using the open-source software 3DSlicer. The tumor region was manually contoured on all image slices to encompass the whole lesion, excluding skin and bony structures. Resulting tumor volumes are given in Table 1 (preclinical data: details on animals and head-and-neck cancer cell lines, including mean and 95% confidence interval (CI) of the tumor control dose 50% (TCD50) according to [33]; radiation sensitivities grouped into high (H), medium (M), medium/low (ML), and low (L); as well as the number of complete imaging data sets and of data sets with hypoxia positron emission tomography (PET), diffusion-weighted MR imaging (DWI), and dynamic contrast-enhanced (DCE) MRI before the start of radiotherapy (RT) and after 14 days of RT). Muscle tissue was carefully contoured in the ipsilateral leg, excluding bones and blood vessels. All quantitative MRI data were resampled to the PET image grid for subsequent processing and analysis. To correct for potential movements of the animal between different acquisitions, local rigid registrations between the respective images were performed using the open-source toolkit elastix (details on registration parameters are given in Supplementary Table S1). The registration result was carefully visually checked by an imaging scientist (SL) and a radiation oncologist (SB) and manually adjusted if necessary. Extraction of quantitative parameter maps Maps of apparent diffusion coefficient (ADC) values were derived from DW-MR images using a mono-exponential fit over all b-values with in-house software developed in python (scipy 0.19.1). FMISO PET data was first transformed into static uptake parameter maps by generating a tumor-to-muscle ratio map from normalized voxel activity concentration with respect to mean muscle uptake in the second-to-last FMISO PET frame (approx. 80 min p.i.) to avoid potential artifacts caused by the following MRI contrast agent injection. To further extract quantitative parameter maps related to tumor hypoxia from dynamic FMISO PET signals, FMISO activity concentrations were converted into maps of standardized uptake value (SUV) by normalization to body weight and injected activity. Then, a principal component analysis (PCA) was performed using the uncentered data to extract a reduced set of quantitative parameter maps. Based on the variance explained by the individual principal components (PCs), the projection coefficients of the first two PCs (FMISO_c1, FMISO_c2) were found to be sufficient to describe the measured tracer dynamics and were kept for further analyses (Fig. 2). Similarly, for DCE-MRI, measured signal intensities S(t_i) were converted to the relative signal increase ΔS(t_i) = (S(t_i) − S_0)/S_0, with t_i, i = 1, …, 150, being the time frames and S_0 the baseline signal intensity averaged over 11 frames acquired prior to contrast agent injection. Quantitative parameter maps were then derived from the ΔS data using PCA, yielding two final parameter maps containing the first two PC projection coefficients DCE_c1 and DCE_c2 (Fig. 2).
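A minimal per-voxel sketch of the two map-extraction steps just described, assuming a log-linear shortcut for the mono-exponential ADC fit and plain SVD for the uncentered PCA (the paper's in-house implementation is not public, so the details here are illustrative):

```python
import numpy as np

b_values = np.linspace(0.0, 800.0, 9)               # nine equidistant b-values (s/mm^2)

def adc_fit(signals: np.ndarray) -> float:
    """Mono-exponential fit S(b) = S0 * exp(-b * ADC) via log-linear least squares."""
    slope, _ = np.polyfit(b_values, np.log(np.clip(signals, 1e-6, None)), 1)
    return -slope                                    # ADC in mm^2/s

def uncentered_pca(curves: np.ndarray, n_comp: int = 2) -> np.ndarray:
    """Uncentered PCA on dynamic curves (voxels x frames): SVD without mean
    subtraction; returns the first n_comp projection coefficients (c1, c2)."""
    _, _, vt = np.linalg.svd(curves, full_matrices=False)
    return curves @ vt[:n_comp].T

# Tiny self-test on a synthetic voxel with ADC ~1.1e-3 mm^2/s.
rng = np.random.default_rng(0)
demo = 1000.0 * np.exp(-b_values * 1.1e-3) * (1 + 0.01 * rng.standard_normal(9))
print(f"fitted ADC: {adc_fit(demo):.2e} mm^2/s")
```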
Model training for identification of radioresistant clusters We propose a novel method for unbiased identification of tumor clusters defining HRS from multi-parametric quantitative imaging. This method is based on the hypotheses that recurrence after RT originates from such HRS inside the macroscopic tumor, which fails to be controlled by a standard radiation dose and fractionation due to its biological and physiological properties, and that a larger HRS translates into higher levels of radiation resistance. We therefore implemented a method which automatically extracts tumor clusters with similar biological and physiological properties, as derived from the joint information of quantitative maps from functional imaging, and scores their ability to stratify tumor cell lines according to radiation sensitivity. In this way, relevant image parameters were learned which fulfill the hypotheses listed above. A schematic overview of the machine learning approach to identify the most relevant parameters in n-dimensional imaging space is provided in Fig. 3. For this analysis, only the imaging data cohort C_all = 42, where all five quantitative parameter maps (ADC, FMISO_c1, FMISO_c2, DCE_c1, DCE_c2) were available for the first imaging time point, was included in the analysis (cf. Table 1). First, the total number of tumor voxels of the training cohort C_all was collected in common parameter spaces. 1- to 5-dimensional (1D to 5D) image parameter spaces were built, with each dimension being spanned by one of the five quantitative parameters extracted from functional imaging. Samples in parameter space (tumor voxels) were z-normalized. During parameter space scanning, each 1D to 5D parameter space was scanned for connected clusters of a fixed number N_HRS of voxels with similar parameters. According to [33], N_HRS was chosen such that the fraction of tumor voxels belonging to HRS resulted in 15.0%, 7.5%, and 0% for tumor cell lines of low, medium, and high radiation sensitivity, respectively. Parameter space scanning was performed by repeating the following steps N_it = 5000 times: (1) randomly select one sample as cluster center X_cluster; (2) assign its N_HRS nearest neighbors (KNN clustering) using the Euclidean distance from X_cluster in parameter space as proximity measure; (3) derive the fraction of voxels in this cluster, f_cluster, for each individual tumor; (4) quantify the stratification potential of f_cluster using a stratification score S. Quantification of stratification potential For a robust, score-based assessment of the stratification potential for each tested parameter combination, cell lines were grouped into classes of distinct radiation sensitivity based on previously published tumor control doses (TCD50, Table 1) [32,33]. Cell lines with overlapping confidence intervals were considered not distinguishable with respect to radiosensitivity and were therefore grouped into the same class. By doing so, three distinct classes of cellular radiation sensitivity could be identified: a class of high (H) sensitivity (UTSCC-45, XF354, UTSCC-14, UTSCC-8), medium (M) sensitivity (FaDu), and low (L) sensitivity (UTSCC-5, SAS). UTSCC-5 could not be successfully implanted into animals. Imaging data of the cell line CAL-33 could not be reproducibly analyzed due to significant differences in image quality; further, no reliable assignment of a radiosensitivity class based on the high reported range of TCD50 was possible. Therefore, CAL-33 was excluded from the analysis.
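Steps (1)-(3) of the scanning loop described above can be sketched as follows; the array layouts and function names are assumptions, and the scoring step (4) is shown separately after the next passage:

```python
import numpy as np

def scan_parameter_space(X, tumor_id, n_hrs, n_it=5000, seed=0):
    """Steps (1)-(3) of the cluster scan. X is (n_voxels, n_params) with one
    row per tumor voxel; tumor_id labels each voxel's tumor. Returns, per
    iteration, the fraction of each tumor's voxels in the candidate cluster."""
    rng = np.random.default_rng(seed)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)        # z-normalized parameter space
    tumors = np.unique(tumor_id)
    sizes = np.array([(tumor_id == t).sum() for t in tumors])
    fractions = np.empty((n_it, len(tumors)))
    for it in range(n_it):
        center = Xz[rng.integers(len(Xz))]           # (1) random cluster center
        dist = np.linalg.norm(Xz - center, axis=1)   # Euclidean proximity
        members = np.argsort(dist)[:n_hrs]           # (2) N_HRS nearest neighbors
        for j, t in enumerate(tumors):               # (3) per-tumor cluster fraction
            fractions[it, j] = (tumor_id[members] == t).sum() / sizes[j]
    return tumors, fractions
```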
The stratification potential, i.e., the capability to separate groups H-M and M-L, respectively, for any investigated parameter combination was quantified by Cohen's d as effect size measure,

d_ij = (μ_i − μ_j) / σ_ij.

Here, μ_i and μ_j are the means of the assessed HRS fractions of groups i and j based on the different parameter combinations, whereas σ_ij is the pooled standard deviation of groups i and j, defined as

σ_ij = sqrt( ((n_i − 1) σ_i² + (n_j − 1) σ_j²) / (n_i + n_j − 2) ),

with σ_i², σ_j² being the group variances and n_i, n_j the number of observations in groups i and j, respectively. The final score was defined as the arithmetic mean

S = (d_H,M + d_M,L) / 2.

Selection of optimal HRS clusters in 1D to 5D imaging space For each n-dimensional image parameter space, the clusters yielding the highest stratification score S_HRS,nD and their corresponding cluster centers X_HRS,nD were identified and used for comparing the performance of different parameter spaces. Furthermore, the differences of f_HRS,nD between radiosensitivity groups H-M and M-L, respectively, were tested for significance using a Wilcoxon rank sum test. P < 0.05 was considered statistically significant. Assessment of robustness To evaluate the robustness of the identified stratification scores S_HRS and their cluster centers X_HRS, an internal bootstrap validation was performed for each parameter space. Each bootstrap cohort was drawn with replacement from the original training cohort C_all, using a total number of N_bs = 50 bootstrap cohorts. Robustness was then quantified by deriving bootstrap-based 95% confidence intervals (CIs) for S_HRS and X_HRS, respectively. For an additional assessment of the robustness of X_HRS, the distribution of identified scores after parameter space scanning was visualized as multiple 2D projections. Extended cohort To verify the best models identified during training, model verification was performed using an extended cohort. For this purpose, an extended cohort C_max consisting of all animal data available for the respective parameter combination was used, including also incomplete data sets not used during training (cf. the numbers of imaging data sets given in brackets in Table 1). Classical imaging parameters and multiple time points For comparison, classical ADC-related imaging parameters reporting the mean, minimum, and maximum value in a tumor were reported. In addition, ADC_valley was derived as the minimum ADC value in a connected image region of seven voxels, to create a robust measure related to minimum ADC but unaffected by artifacts originating from partial volume effects at the edges of the tumor. Similarly, maximum and peak values of the FMISO tumor-to-muscle ratio (TMR_max/peak), and the mean, maximum, and peak (average over seven voxels around the maximum) FMISO SUV were calculated using the late PET frame acquired 80 min p.i. for each tumor and correlated to cell-line-specific radiation sensitivities. The full analysis pipeline described above was also carried out for imaging data acquired after 2 weeks of fractionated RT (w2). Results During model training in 1D to 5D search space on the baseline imaging data, we identified distinct clusters in 1D to 3D imaging parameter space which were able to significantly stratify the xenograft tumors according to their radiation resistance. When further increasing the dimensionality of the parameter space, a further improvement of S_HRS was observed, which, however, was not significant (p > 0.05) according to a Mann-Whitney U test based on a bootstrap analysis with respect to S_HRS,3D. The best scoring models in 1D to 5D imaging space are summarized in detail in Table 3.
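A hedged sketch of the scoring step (4), following the Cohen's d and pooled-SD definitions above; taking the absolute value of d and resampling tumors within groups for the bootstrap are assumptions where the text does not fix a convention:

```python
import numpy as np

def cohens_d(x, y):
    """Effect size d = (mean_x - mean_y) / pooled SD, as defined above."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

def stratification_score(f_h, f_m, f_l):
    """S = arithmetic mean of the H-M and M-L effect sizes (HRS fractions
    per tumor); absolute values are an assumption of this sketch."""
    return 0.5 * (abs(cohens_d(f_m, f_h)) + abs(cohens_d(f_l, f_m)))

def bootstrap_ci(groups, n_bs=50, seed=0):
    """95% CI of S by resampling tumors with replacement within each group."""
    rng = np.random.default_rng(seed)
    scores = [stratification_score(*[rng.choice(g, size=len(g), replace=True)
                                     for g in groups])
              for _ in range(n_bs)]
    return np.percentile(scores, [2.5, 97.5])
```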
A visualization of the n-dimensional search space is presented in Fig. 4, whereas Fig. 5 shows the corresponding stratification potential for the selected 1D, 2D, and 3D clusters. (Fig. 4 caption: Visualization of stratification scores in 1D to 3D parameter space. Stratification scores S for the best-scoring 1D, 2D, and 3D imaging parameter spaces; the 3D parameter space is shown as corresponding 2D projections for better visualization.) Figure 6 presents an example of one preclinical tumor (SAS) with annotations of 1D and 3D HRS. Correlation of cell-line-specific radiation sensitivities with the classical imaging parameters in the tumor region only yielded significant stratification potential for ADC_valley (p = 0.006), cf. Figure 7 and Table 4. Figure 7 shows the validation results for the best 1D, 2D, and 3D models identified during training, in addition to the only significant classical parameter ADC_valley. Stratification results of the different models in the extended cohorts C_max are similar to those obtained in the training cohort C_all, indicating high robustness of the method. Following the same methodology, 1D to 5D parameter space scanning was performed for imaging data obtained after 2 weeks of fractionated RT. Here, only a 1D cluster defined by the FMISO_c1 map measured in w2 yielded significant stratification potential, S_HRS,1D,w2 = 1.12 [0.90-3.69], p = 0.041. Results of n-dimensional model training in w2 of RT are detailed in Table 5. Discussion In this study, we report pre-clinical training of a multidimensional PET/MRI-based QIB to detect HRS in HNC as a potential target for future focal dose escalation. Our findings suggest that an HRS defined by a cluster of ADC values derived from DW-MRI correlates spatial maps of cellularity with individual radiation resistance, considering a 1D quantitative functional imaging map as input. The highest stratification potential with respect to cell-line-specific radiation resistance was found for a 3D QIB created from ADC and two PCs of dynamic FMISO PET information. Increasing dimensionality further did not significantly increase stratification potential, which may be due to redundancies hidden in the n-dimensional functional imaging data. (Figure caption fragment: the voxel structure of contours results from resampling of all functional data and the GTV delineation to the PET image grid, which had the lowest resolution.) Consequently, we identified a QIB profile from PET/MRI using a novel machine learning approach in a pre-clinical setting. Starting from a wide search approach with as few assumptions as possible, using the main quantitative imaging techniques which are clinically available today, we were able to identify the most promising multi-parametric QIB for potential usage in future RT individualization. The proposed method relies on the identification of a radioresistant cluster in parameter space only. Consequently, we do not per se assume a spatially connected area of the HRS inside the tumor. If spatial connection is given, HRS may be used for potential future local radiotherapy interventions, such as dose painting. If HRS voxels, in contrast, were scattered throughout the tumor, this might be indicative of a generally more radioresistant tumor, and dose painting strategies may result in a radiation dose escalation of the whole tumor. However, scattered HRS voxels throughout the GTV might also be caused by noise and potentially weak robustness of the model, which should be clarified in future validation studies in preclinical and ultimately also clinical settings.
Due to their limited size and heterogeneity, direct application of the ML models to identify spatially connected HRS regions in patients may not be possible. In this study, eight different cell lines with distinct radiation resistance levels were used, meaning that each small animal tumor must be understood as a role model for one voxel of a patient tumor. Consequently, the final model may not necessarily yield connected HRS areas but will require retraining and validation in patients. ADC has been identified by earlier studies as a potential prognostic QIB in HNC [8,16], whereas other studies reported controversial results [37]. The discrepancy of earlier results may be due to over-simplified imaging measures, such as mean ADC averaged over the whole tumor, in contrast to the sub-volume approach based on clusters in multi-dimensional QIB space proposed in this study. Classical or global imaging parameters investigated in this study demonstrated that ADC_valley also appears to be associated with radiation sensitivity. A potential explanation for this observation might be that ADC_valley is a mean value calculated from seven voxels around the minimum ADC in a tumor sample and may thus be correlated to the 1D cluster identified during ML training on the voxel level. However, when using joint information from ADC maps derived from DW-MRI combined with two PCs of dynamic FMISO PET, significantly better stratification was obtained compared to ADC only. This comes, however, at the expense of acquiring dynamic hypoxia PET in addition to DW-MRI, which considerably increases the complexity of patient examination and image acquisition. So far, only small hypoxia PET patient data sets have been reported due to the complexity of acquisition, requiring experimental tracer production, extended scan times, and non-standard data analysis strategies, which make a broad roll-out of this technology unrealistic [5]. Nevertheless, these findings corroborate earlier results reported by our group and others that dynamic hypoxia PET has prognostic character with respect to RCT outcome [7,12,15]. Assuming that repeated functional imaging will further enhance the power of image-based adaptive RT interventions, it appears that dynamic hypoxia PET is more complex, costly, and not as broadly available as DW-MRI. Thus, from a pragmatic point of view, DW-MRI appears promising for wider clinical roll-out with change of practice, even if less predictive than the 3D HRS combining DW-MRI and FMISO PET. Analysis of the preclinical imaging data acquired 2 weeks after fractionated RT revealed no stratification of radiation resistance groups for most cluster combinations. Hypoxia PET alone yielded marginally significant stratification power at this time point early during RT. As such, this confirms clinical findings of the prognostic potential of FMISO PET in the second week of RT [14,38]. However, in this study, the model for w2 was newly trained without any inference from the models obtained for pre-treatment data. Our ML approach used to identify multi-dimensional clusters of radiation resistance is based on several assumptions. First, radiation resistance levels were based on data from earlier pre-clinical studies [32,33], showing significant variation in radiation resistance between experiments. Second, small animal functional imaging is extremely challenging, requires anesthetized animals, and thus deviates from a standard clinical situation.
In addition, we assumed a relative HRS size varying between 0 and 20% depending on the radio-resistance levels of the respective cell lines. A further drawback of our method is the fact that parameter space scanning was performed directly on image voxel data, which is more prone to noise and registration inaccuracies compared to volume-averaged methods. An alternative would be to combine single voxels into small homogeneous subregions (supervoxels) prior to parameter space scanning, e.g., by means of simple linear iterative clustering [19] (see the sketch after this paragraph). In this study, we used a data-driven ML approach in terms of PCA for extracting a reduced number of QIB maps from dynamic functional imaging. The use of PCA for dynamic data has been shown to be promising by other clinical and pre-clinical studies [39], providing potentially more robust results compared to the classical use of compartment models for such data [7,9]. A previous study proposed deriving high-risk tumor subvolumes from joint functional imaging information by clustering patient imaging data [19]. However, this method does not directly use the size of an HRS for patient stratification but applies different intermediate steps to determine heuristic stratification parameters. In contrast, our method uses the relative HRS size, which is directly connected to cell-line-specific hypoxia levels that are only available in a translational approach. This prior represents a major limitation of our study, as no tumor-specific hypoxia or radiation resistance levels were measured. This underlines the necessity of independent validation studies, ideally in patients, to confirm the hypotheses identified in this experiment. Potential uncertainties of the method, making use of multi-dimensional functional imaging data on the voxel level, originate from manual contouring of tumor regions used as input for the analysis, as well as co-registration of the functional imaging data sets, which is of crucial importance for the integrity of the data set in higher dimensions. Robustness of the proposed HRS method was evaluated in different ways. The density of visualized scores in parameter space (Fig. 4) shows a smooth distribution as well as a single, compact region of high scores S_HRS, indicating robust learning of the cluster center X_HRS, which is further supported by the internal bootstrap validation using the training cohort C_all. Furthermore, robustness of the model was evaluated using an extended cohort C_max including additional tumors which were not part of the initial training cohort C_all. Even though this evaluation indicated stability of the model parameters, this approach cannot be considered a full independent validation due to the small number of additional data sets in C_max compared to C_all. A potential alternative for tumor stratification based on joint QIB maps might be an end-to-end learning approach using, for example, convolutional neural networks (CNNs), which have been shown to achieve high performance in image processing and classification tasks [40]. We did not investigate such an approach since we had only a low number of tumors with the full multi-dimensional imaging parameter space available in this study (n = 42). Therefore, an approach was developed which complements a data-driven learning method with hypotheses about the existence and size of an HRS related to known radio-resistance levels. The final model can easily be interpreted in the sense that learned HRS are fully determined by associated QIB ranges.
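The supervoxel alternative mentioned above could, for instance, use simple linear iterative clustering from scikit-image; a minimal sketch on a synthetic volume (the normalization step and parameter choices are assumptions of this sketch):

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic stand-in for a tumor-masked ADC volume; real maps would be used.
rng = np.random.default_rng(0)
adc = rng.normal(1.2e-3, 2e-4, size=(16, 32, 32))
adc_norm = (adc - adc.min()) / (adc.max() - adc.min())  # scale to [0, 1] for SLIC

# Simple linear iterative clustering on the 3D grayscale volume.
labels = slic(adc_norm, n_segments=200, compactness=0.1, channel_axis=None)

# Replace single-voxel samples by supervoxel means before the cluster scan,
# reducing sensitivity to noise and registration errors.
supervoxel_means = np.array([adc[labels == l].mean() for l in np.unique(labels)])
print(np.unique(labels).size, "supervoxels")
```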
In contrast, model interpretation using CNN-based end-to-end learning might be challenging. In Fig. 5, cell line UTSCC-45 shows a distinctly different HRS compared to all other cell lines of the group with high radiation sensitivity (group H). Interestingly, this cell line differs from the other investigated cell lines due to its positive human papilloma virus (HPV) status. The associated genetic difference may cause a shift in radiosensitivity compared to HPV-negative cancer cell lines which seems not to be detectable by quantitative imaging [41]. Therefore, ADC/FMISO-based HRS radiation dose escalation does not seem an option for low-risk HPV-positive oropharyngeal HNC, and future interventional trials should be limited to patients with high-risk profiles (HPV-negative, or HPV-positive plus > 20 pack-years smoking history) [42]. As tumor hypoxia and cellularity are subject to change during RCT, individualized RT approaches adapted to the current level of resistance will only be possible if HRS can be identified shortly before treatment. Recently developed hybrid MR-Linacs may allow functional MRI acquisitions before and during RT and thus open unique possibilities in terms of MR-specific QIB-adaptive RT [43]. Recent results on phantom and early clinical data proved that quantitative imaging is possible at hybrid MR-Linac systems [44,45], which is a major pre-requisite for biologically adapted RT dose painting based on ADC clusters. More complex multi-parametric QIB involving different imaging modalities may need to be acquired on dedicated PET/MRI scanners and used for offline response-adaptive RT. Nevertheless, before QIB-based RT dose painting can be applied in clinical RT practice, technical and clinical validation is required, including test-retest studies and comparison to diagnostic scanners, to ensure repeatability and reproducibility [43,46]. In conclusion, this study used a novel ML approach combined with hypothesis-driven methods, where n-dimensional imaging spaces spanned by hypoxia imaging using dynamic FMISO PET, DW-MRI, and DCE-MRI were scanned to learn characteristic patterns of radiation resistance. Finally, we present the pre-clinical description of an HRS defined by a 3D cluster of ADC, FMISO_c1, and FMISO_c2 which identifies spatially resolved tumor subvolumes exhibiting increased radiation resistance and thereby presumably the cause of local tumor recurrence. These results warrant validation and translation to a clinical setting before benefits of PET/MRI-derived, QIB-based RT adaptation can be tested in a clinical trial. Author contribution Daniela Thorwarth, Daniel Zips, Marcel Krueger, Bernd Pichler, Simon Böke, René Winter, and Sara Leibfarth contributed to the study conception and design. Animal and tumor handling, irradiation, and data collection were performed by Simon Böke and René Winter. Marcel Krueger, Gregory Bowden, and Jonathan Cotton were involved in tracer production and small animal imaging. Sara Leibfarth and René Winter developed the machine learning model and performed data analysis as well as visualization of the results. The first draft of the manuscript was written by Sara Leibfarth, René Winter, Simon Böke, and Daniela Thorwarth, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Funding Open Access funding enabled and organized by Projekt DEAL.
The research leading to these results has received funding from the European Union's seventh framework program (FP7), European Research Council (ERC) starting grant no. 335367. FP7 Ideas: European Research Council, StG 335367, Daniela Thorwarth. Data availability The datasets generated during this study are available from the corresponding author on reasonable request through institutional data transfer agreements. Ethics approval The animal facilities and all experiments were approved according to our institutional guidelines and the German animal welfare regulations (animal allowance no. 35/9185.81-2/R4/16). Competing interests DT and DZ report institutional collaborations outside of this work with financial and non-financial support by the companies Elekta, Philips, PTW Freiburg, Dr. Sennewald, Kaiku, and TheraPanacea. All other authors declare that they have no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,432.6
2023-05-06T00:00:00.000
[ "Medicine", "Physics" ]
A Multi-Terminal HVdc Grid Topology Proposal for Offshore Wind Farms: Although various topologies of multi-terminal high voltage direct current (MT-HVdc) transmission systems are available in the literature, most of them are prone to loss of flexibility, reliability, stability, and redundancy in the event of grid contingencies. In this research, a new two wind farms and substation ring topology (2WF-SSRT) is designed and proposed to address the aforementioned shortcomings. The objective of this paper is to investigate MT-HVdc grid topologies for integrating large offshore wind farms, with an emphasis on power loss in the event of a dc grid fault or mainland alternating current (ac) grid abnormality. Standards and control of voltage source converter (VSC) based MT-HVdc grids are defined and discussed. High voltage dc switchgear and dc circuit topologies are appraised based on the necessity of dc cables, HVdc circuit breakers, and extra offshore platforms. In this paper, the proposed topology is analyzed and compared with the existing ones for the number and ratings of offshore substations, dc breakers, ultra-fast mechanical actuators, dc circuits, cost, flexibility, utilization, and redundancy of HVdc links. Coordinated operation of the various topologies is assessed and compared with respect to the designed control scheme via a developed EMTDC/PSCAD simulation platform, considering three fault scenarios: a dc fault on the transmission link connecting the wind farm to the mainland power converters, a dc fault within the substation ring of VSC-HVdc stations, and ultimate disconnection of the grid side VSC station. Results show that 2WF-SSRT is a promising topology for future MT-HVdc grids.

Introduction

In recent years, multi-terminal high voltage direct current (MT-HVdc) system topologies have attracted great attention for the integration of offshore wind farms into ac grids [1]. In this emerged paradigm, the main objectives are improving the stability and reducing the cost of the multi-terminal HVdc grid as a whole, and particularly the cost of dc circuits, control systems, the number of offshore substations, and HVdc circuit breakers (DCCB), respectively. Currently, more than 200 point-to-point HVdc transmission systems have been launched around the world. Actually, the parallel connection of HVdc systems was investigated in 1963 [2], while series HVdc was discussed in 1965 [3]. However, in order to get the first working parallel MT-HVdc system, 4. A topology with the least number of offshore stations and DCCBs and reduced dc-link length, with maximum flexibility, stability, utilization, and redundancy, is checked to endure the MT-HVdc grid codes without a communication system, via simulations, a step to meet the HVdc grid codes by following the standards recommended by the GBSQSS [31]. 5. The EMTDC/PSCAD tool is used to analyze and compare the transmission circuit topologies. In [27][28][29], the authors only proposed circuit layouts for MT-HVdc grids without simulations. However, in this paper, time-domain simulated configurations are subjected to dc line-line faults and ultimate disconnection of the grid side VSC (GS-VSC) to assess the real-time evaluation of topologies. 6. Finally, annotations on MT-HVdc grid topological evaluation are provided, which may serve as a guideline for researchers to understand different norms in this field. The rest of the paper is organized as follows: multi-terminal HVdc transmission systems and their necessities are described in Section 2.
Various topologies for the MT-HVdc grid are discussed from the literature in Section 3. A new topology is proposed in Section 4 for MT-HVdc transmission systems, and the stability of this newly proposed topology is assessed via two dc fault scenarios. A general case study based on techno-economic analysis is presented in Section 5. Simulations are conducted in Section 6, and each topology is tested for (i) GS-VSC disconnection and (ii) a dc line-line fault. In Section 7, annotation on the topological evaluation of HVdc circuits is provided. Finally, remarks and conclusions are drawn in Section 8.

MT-HVdc Transmission Systems and Necessities

Offshore platforms are required for each offshore wind farm to install a VSC converter and a number of connections to connect the HVdc links, depending upon the MT-HVdc application. The fundamental design of an MT-HVdc system relies on both economic and technical factors imposed by both society and utility. Economic factors include geographical position, the length of dc circuits and their ratings, ultra-fast mechanical actuators (UFMA), the number of HVdc circuit breakers and VSC converters and their ratings, the need for communication between converters, and additional offshore substations. Moreover, technical perspectives include successful usage of dc circuits, security of the MT-HVdc system under abnormal conditions, dc grid flexibility, inertia sharing with the mainland ac grid, and redundancy. Great Britain's security and quality of supply standards (GBSQSS) have proposed principles for offshore wind farms' connection with onshore ac networks [31]. Hence, an MT-HVdc grid needs to ensure the following: (1) Direct voltage must be regulated during both faulty and normal operating conditions. (2) In the event of fault occurrence, the MT-HVdc system should provide support to the mainland ac grid. (3) In case of any VSC station failure, an MT-HVdc system needs to guarantee that the power transferred to the ac network will not be reduced by more than the maximum power failure (P_max-fail; e.g., for Great Britain it is 1320 MW [32]). The performance of MT-HVdc greatly depends on the employed control strategy, while control mainly relies on the kind of ac grid connection and the dc network topology [30]. This paper does not investigate control strategies in detail; however, operation and control of the MT-HVdc grid are discussed in our published research [33][34][35]. Accordingly, P-V ac control is deployed to regulate the ac voltages of WFs at a precise level [33]. Proportional integral (PI) control is used for constant voltage generation at 50 Hz. The PI controller diminishes the voltage error e = V_WF* − V_WF, which is then employed as a performance index [33,34]. MT-HVdc system specifications and parameters for the PI controllers are listed in Table 1 (Parameters / Values: DC grid voltage, 400 kV; droop coefficients, k1, k2, k3, k4, k5, k6). As shown in Figure 1, for assessment of the MT-HVdc topology, V_dc and Q are controlled at the GS-VSCs. Furthermore, Figure 2 shows a scheme of the droop controller, employed to coordinate the direct voltage between GS-VSCs, where P*_pu and V_dc*_pu are the real power and direct voltage references, respectively, and k is the droop characteristic slope. The control of Figure 2 is more proficient than customary control and gives the least PI error [33] by adding limiters.
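A minimal sketch of the voltage-power droop idea described above; the sign convention and limiter values are assumptions, as the paper specifies the droop scheme only via Figure 2:

```python
def droop_power_reference(v_dc_pu, v_ref_pu=1.0, p_ref_pu=1.0, k=0.05,
                          p_min=0.0, p_max=1.2):
    """V-P droop for a GS-VSC: shift the power order in proportion to the
    direct-voltage error, then clamp with limiters as in Figure 2. The sign
    convention (power order reduced when Vdc rises) is an assumption."""
    p_order = p_ref_pu - (v_dc_pu - v_ref_pu) / k
    return min(max(p_order, p_min), p_max)

# A 1% dc over-voltage with droop slope k = 0.05 sheds 0.2 pu of power order.
print(droop_power_reference(1.01))   # -> 0.8
```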
MT-HVdc Grid Topologies A number of MT-HVdc grid topologies are evaluated and analyzed by considering the ratings of the VSC stations and dc breakers, the length and capacity of the dc cables, additional offshore substation requirements, flexibility, capital and running costs, stability, and redundancy. The authors of [27,28] suggested that the wind farm ring topology is the best among the general ring topology (GRT), substation ring topology (SSRT), star with central switching ring topology (SGRT), wind farm ring topology (WFRT), point-to-point topology (PPT), and star topology (ST) for an MT-HVdc system. However, those authors did not investigate the impact of a dc fault, as the WFRT loses flexibility and stability under a fault on a GS-VSC station or on the permanent disconnection of a GS-VSC. In the event of such a dc fault, the dc breakers operate on either side of the fault [28] and, as a result, a substation and two WFs are disconnected from the MT-HVdc grid, which fails the third requirement of the GBSQSS.
Therefore, the 2WF-1SST is suggested in [30], which can sustain the effects of such system anomalies. The proposed topology and the two prominent topologies from the literature (WFRT and 2WF-1SST) are described in the subsequent sections with their merits and demerits, and are then simulated. Wind Farm Ring Topology The wind farm ring topology (WFRT) arranges the offshore WFs in a ring, possesses an equal number of WFs and dc breakers, and links each WF to an associated mainland network, as shown in Figure 3 [27,28]. In the event of a dc fault, the DCCBs operate on either side of the fault [28]; thus a WF side VSC (WF-VSC) and a GS side VSC are disconnected, and the maximum power loss criterion of the GBSQSS is not met. An isolator segregates the faulty region; once the fault current is zero, the HVdc breakers reclose their contacts and the onshore SS and the offshore WF become functional again. An advantage of the WFRT is that it does not permanently cut off a WF on a failure of the lines connecting a GS-VSC to a WF-VSC. However, the WFRT loses its stability and flexibility when the failure occurs within the GS-VSC itself, as this leads to a rise of the dc-link voltage [30]. Furthermore, in the event of a dc fault within the ring, the dc breakers operate on either side of the faulty region [28] and an onshore VSC and an offshore WF are disconnected from the MT-HVdc grid; again, the maximum power loss criterion of the GBSQSS is not met. Substation Ring Topology The SSRT is similar to a WFRT but with the ring on the GS-VSCs.
Each WF-VSC is integrated with a corresponding mainland power converter station, as shown in Figure 4 [28]. The difference between the WFRT and the SSRT is that a GS-VSC is isolated in the WFRT while a WF-VSC is isolated in the SSRT when a dc fault persists in a dc-link. This configuration offers more flexibility under both faulty and maintenance operations on the mainland ac grid side than on the wind farm side [30]. The third condition of the GBSQSS is not satisfied, since during a dc fault in the SS ring an onshore SS and an offshore WF are disconnected (dc breaker operation on either side of the fault), which leads to the maximum power loss. Two Wind Farm and a Substation Topology The 2WF-1SST comprises two WF side VSCs linked to a GS side VSC via a dc-link, such that each WF-VSC is connected with the adjacent unit's WF-VSC through a UFMA, as depicted in Figure 5 [25]. The application of the MT-HVdc system decides the number of such units. It exhibits better stability, flexibility, and efficiency with a reduced number of DCCBs, but the operation of the 2WF-1SST is greatly affected if a fault persists on the end sectioned lines (i.e., L12 and L56) or a permanent fault persists within the end sectioned GS-VSCs (SS1 or SS3). Thus, a better variant is needed. Two Wind Farm and Substation Ring Topology In the 2WF-SSRT topology, each unit consists of two wind farm side VSCs connected to one onshore VSC station within a ring of substations through dc-links, such that each WF-VSC is connected to its neighbouring WF-VSC through an ultra-fast mechanical actuator, as shown in Figure 6. The proposed topology combines the features of the 2WF-1SST [30] and the SSRT [28]. The number of such units in the MT-HVdc system depends upon the energy requirements and applications. The proposed scheme increases the stability, flexibility, and efficiency of the system with a reduced number of DCCBs and offshore stations, minimizing the dc-link lengths and halving the number of GS-VSCs with respect to the WF-VSCs. In order to analyze the stability of the proposed topology, two dc faults are considered: F1 on dc-link L12 and F2 within the substation ring. Consider first the dc line-line fault F1 on line L12; the operation of the proposed topology is described through the following steps: (1) At t0, the system is in steady state, as shown in Figure 7a. For the second scenario, fault F2 within the substation ring, the sequence is: (1) At t0, the topology is in steady state. (2) At t1, fault F2 occurs on line L13 within the substation ring (SSR). (3) At t2, CB12 and CB56 open on either side of the fault within the SS ring. (4) At t3, IS562 opens; when the current through the faulted line L13 becomes zero, L13 is disconnected from the SS ring. (5) At t4, CB56 is closed, bringing SS3 back into the system. (6) At t5, upon clearance of the dc-link fault F2, IS122 and CB12 are closed in sequence to restore the 2WF-SSRT topology to its original state. For scenario 2, the sectionalized figure is not included to keep the paper simple.
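The clearing sequence above can be read as a timed event list. The following sketch replays it in Python; the device names follow the text, while the instants t0-t5 are symbolic placeholders rather than simulated times.

```python
# Replaying the scenario-2 clearing sequence (fault F2 on line L13) as a
# timed event list. Device names follow the text; t0-t5 are symbolic.
# Note: the text closes IS122 at t5 without describing when it opened.

events = [
    ("t0", "steady state", {}),
    ("t1", "fault F2 appears on L13 inside the SS ring", {}),
    ("t2", "breakers open on either side of the fault", {"CB12": "open", "CB56": "open"}),
    ("t3", "isolator opens; L13 current decays to zero", {"IS562": "open"}),
    ("t4", "CB56 recloses; SS3 rejoins the grid", {"CB56": "closed"}),
    ("t5", "fault cleared; topology restored", {"IS122": "closed", "CB12": "closed"}),
]

state = {"CB12": "closed", "CB56": "closed", "IS122": "closed", "IS562": "closed"}
for t, action, switching in events:
    state.update(switching)
    print(f"{t}: {action:46s} {state}")
```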
General Comparison of the Topologies A general comparative case study of several MT-HVdc grid topologies is presented in Figure 8, considering techno-economic factors such as the length, number, and ratings of the dc cables, the number and ratings of the VSCs and DCCBs, redundancy, and the loss of a VSC station in case of a fault. The geographical positions of the VSCs are given in Table 2. PPT is not a multi-terminal topology, while SGRT is a combination of GRT and ST; therefore, these are not considered in the comparative analysis. GRT and ST require a number of DCCBs equal to the number of WF-VSCs plus GS-VSCs, while WFRT and SSRT require DCCBs equal to the number of WF-VSCs. The 2WF-1SST and the proposed 2WF-SSRT need DCCBs equal to half the number of WF-VSCs. For PPT, the dc grid carries six GS-VSC and six offshore WF-VSC stations of 400 MW each. The star topology requires six GS-VSCs of 480 MW and six offshore VSC stations of 400 MW. Twelve VSCs, for a total of 2400 MW, are required by the GRT. SSRT and WFRT need six offshore VSCs of 400 MW and 800 MW, respectively, with 800 MW and 480 MW mainland substations, respectively [30]. Six WF-VSCs of 400 MW and three onshore VSCs of 1200 MW are designed for both the 2WF-1SST and the proposed 2WF-SSRT topology. An extra offshore platform is required only for ST and SGRT, to install circuit breakers, while the dc circuits have the same power ratings as the associated VSC station. The dc circuits in the WF/SS rings need a rating equal to the power of the two adjacent VSCs, while the rating of a dc circuit connecting a GS/WF-VSC to the WF/SS ring depends upon the rating of the respective GS/WF-VSC.
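As a quick check of the breaker-count rules just stated, the following sketch tabulates the DCCB requirements for the six wind farm case study; it is illustrative bookkeeping, not a design tool.

```python
# Breaker counts per topology under the rules stated above, for the
# six wind farm case study (illustrative arithmetic only).

N_WF = 6        # wind-farm VSCs
N_GS_RING = 6   # GS-VSCs in GRT/ST (one per wind farm)
N_SS = 3        # onshore stations in the 2WF unit-based topologies

dccb_count = {
    "GRT": N_WF + N_GS_RING,   # one DCCB per WF-VSC and per GS-VSC
    "ST": N_WF + N_GS_RING,
    "WFRT": N_WF,              # one per wind farm
    "SSRT": N_WF,
    "2WF-1SST": N_SS,          # one per substation (= N_WF / 2)
    "2WF-SSRT": N_SS,
}
for topology, breakers in dccb_count.items():
    print(f"{topology:9s} -> {breakers} dc circuit breakers")
```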
For the 2WF-1SST and the proposed 2WF-SSRT, the dc-links connecting the WFs to the common point need to be rated at double the capacity of the WF-VSCs, while the dc circuits connecting neighbouring WFs are rated equal to the WF-VSC. The rating of the lines from the common point to the mainland grid depends upon the rating of the GS-VSC for the 2WF-1SST, while for the 2WF-SSRT it is equal to the rating of the respective unit. The rating of the cables within the SS ring of the 2WF-SSRT equals the sum of the ratings of the two adjacent GS-VSCs. In the event of a dc-link fault, the connected WF is lost for ST and SSRT. In general, such a fault does not affect the operation of the GRT, WFRT, 2WF-1SST, and 2WF-SSRT topologies. However, the operation of the 2WF-1SST is greatly affected if the fault persists on the end sectioned lines (i.e., L12 and L56); the 2WF-SSRT topology provides support in this regard. Conversely, if a permanent fault occurs within a GS-VSC station, ST, GRT, and WFRT lose the GS-VSC and hence fail to supply 400 MW to the connected ac system, while the 2WF-1SST and 2WF-SSRT can deal with this situation by providing an alternate path for the power flow.
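The cable-rating rules above can be condensed into a small lookup. The sketch below assumes the 400 MW wind farms and 1200 MW GS-VSCs of the case study; the link labels are informal, and the SS-ring figure follows the text's "sum of two adjacent GS-VSCs" rule literally.

```python
# The cable-rating rules above as a small lookup, for the 400 MW wind
# farms of the case study (a sketch; values in MW).

P_WF = 400    # one wind-farm VSC
P_GS = 1200   # one GS-VSC of a 2WF-SSRT unit

ratings_mw = {
    "WF to common point (2WF units)": 2 * P_WF,     # carries two wind farms
    "WF to neighbouring WF (UFMA link)": P_WF,
    "common point to shore (2WF-1SST)": P_GS,       # set by the GS-VSC rating
    "common point to shore (2WF-SSRT)": 2 * P_WF,   # rating of the unit's two WFs
    "SS-ring cable (2WF-SSRT)": 2 * P_GS,           # sum of two adjacent GS-VSCs
}
for link, mw in ratings_mw.items():
    print(f"{link}: {mw} MW")
```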
However, if the permanent fault persists within an extreme end GS-VSC (i.e., GS-VSC1 or GS-VSC3), then the power from WF1 and WF6 is again lost for the 2WF-1SST, while the 2WF-SSRT overcomes this shortcoming of the 2WF-1SST by providing intra-connections between the units. PPT and ST require dc cables with the same ratings (400 MW each) as their converters. The dc cables of the central ring in the GRT demand a power rating equivalent to the total PWF (2400 MW). The rating of the dc lines within the SS/WF ring equals the sum of the power ratings of the two adjacent VSCs, i.e., 800 MW, while the rating of the cables linking the WFs to the onshore grids needs to equal the ratings of the WF-VSCs and GS-VSCs, respectively. For the 2WF-1SST and the proposed 2WF-SSRT, the lines integrating the WFs to the common point need to be rated at double the capacity of the WF side VSCs, while the rating of the link (cable) between neighbouring WF-VSCs should equal that of the WF-VSC. The dc cables in the SS ring of the 2WF-SSRT need a rating equal to the sum of the two adjacent VSCs, while the dc cables are rated equal to the onshore converter for the 2WF-1SST. Likewise, no dc breaker is required for PPT. Six DCCBs of 400 MW are needed at the offshore site, and six HVdc breakers of 480 MW at the onshore station, for the star topology. Twelve HVdc circuit breakers are required for the central ring in the GRT, each of them rated 2400 MW. Six DCCBs are demanded in the SS/WF ring of SSRT/WFRT, each of them 800 MW. The 2WF-1SST and 2WF-SSRT topologies require only three dc circuit breakers, each of 1200 MW. The results of the topological evaluation, from Figure 8 and the above discussion, are summarized in Table 3. Among the candidates, the 2WF-SSRT topology is the best for WF integration with onshore grids, because it gives maximum flexibility, stability, reliability, redundancy, and utilization, tolerating all faulty conditions with only three DCCBs and three GS-VSCs. A VSC power converter station of 1000 MW costs 110 M€, while the cost of a DCCB is one sixth of the VSC converter price [36,37]. The approximate price of subsea HVdc cable is in the range of 1.2-1.4 M€ per km [38]. This information shows that the 2WF-SSRT topology offers a reduced overall capital and operating cost, even better than the 2WF-1SST.
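With the unit prices just quoted, a back-of-the-envelope capital cost can be assembled as below. The 300 km cable length used in the example is an assumed figure, not one taken from the paper.

```python
# Back-of-the-envelope capital cost with the unit prices quoted above:
# 110 MEUR per 1000 MW of VSC, a DCCB at one sixth of the VSC price, and
# subsea cable at 1.2-1.4 MEUR/km. The 300 km cable length is assumed.

VSC_EUR_PER_MW = 110e6 / 1000.0
DCCB_FACTOR = 1.0 / 6.0
CABLE_EUR_PER_KM = 1.3e6   # midpoint of the quoted range

def capex_eur(vsc_mw, dccb_mw, cable_km):
    """Illustrative capital cost (EUR) of one topology."""
    vsc = sum(mw * VSC_EUR_PER_MW for mw in vsc_mw)
    dccb = sum(mw * VSC_EUR_PER_MW * DCCB_FACTOR for mw in dccb_mw)
    return vsc + dccb + cable_km * CABLE_EUR_PER_KM

# 2WF-SSRT case study: six 400 MW WF-VSCs, three 1200 MW GS-VSCs,
# three 1200 MW DCCBs, and an assumed 300 km of dc cable.
total = capex_eur([400] * 6 + [1200] * 3, [1200] * 3, 300.0)
print(f"approx. {total / 1e6:.0f} MEUR")   # ~1116 MEUR under these assumptions
```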
Simulations The parameters of Table 1 are used to evaluate the WFRT, 2WF-1SST, and the proposed 2WF-SSRT topologies of the MT-HVdc transmission system. Proportional droop control is employed on the GS-VSCs to attain dc voltage control [33][34][35]. Maximum PWF is extracted via Vac and P control on the WF-VSCs [39]. The CIGRE Bologna DCCB is used for fault isolation, and dc inductors of 100 mH are added to limit the rate of rise of the dc fault current [40,41]. The fault discrimination time is taken as 5 ms [41][42][43]. The permanent VSC disconnection and dc line-line fault tests are performed with a base power of 1000 MW and a Vdc of 400 kV. Figure 3 gives the WFRT topology with four HVdc circuit breakers. This topology is developed in EMTDC/PSCAD with the parameters reported in Table 1. Disconnection of GS-VSC1 The first trial concerns the disconnection of a grid side converter. The dc voltage and power profiles of the WFRT are shown in Figure 9. The GS-VSC1 power dropped to zero from −0.6 pu because of the immediate disconnection, as this VSC station experienced a symmetric fault on the ac side, causing excess power in the dc grid. Promptly, the dc-link voltage builds up to 1.25 pu; at t = 2.1 s, Vdc drops to 1.07 pu as the droop control increases the power transfer through GS-VSC2, GS-VSC3, and GS-VSC4, supplied by the WF-VSCs. The wind power extraction stays unchanged. The permissible range of ±10% is violated for the dc-link voltage, and the WFRT does not provide any protection to deal with such circumstances, which is a drawback.
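The droop response described above can also be reproduced analytically: after a converter trips, the common dc voltage settles where the droop laws of the remaining GS-VSCs absorb the full wind infeed. The sketch below solves this balance; all per-unit numbers are illustrative, not the simulated values.

```python
# After a GS-VSC trips, the remaining converters must absorb the whole
# wind infeed; the common dc voltage settles where their droop laws
# balance. Illustrative numbers, not the PSCAD results.

def post_trip_voltage(p_wind, p_refs, ks, v_ref=1.0):
    """Solve sum_i (p_ref_i - (v - v_ref)/k_i) = -p_wind for v (pu).

    Negative power = export to shore; p_wind is the total wind infeed (pu).
    """
    g = sum(1.0 / k for k in ks)            # combined droop conductance
    return v_ref + (p_wind + sum(p_refs)) / g

# Three remaining converters, each previously ordered -0.6 pu, must now
# take 2.4 pu of wind power between them (each moves to -0.8 pu):
print(f"{post_trip_voltage(2.4, [-0.6] * 3, [0.05] * 3):.3f} pu")  # 1.010 pu
```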
Considering the above-described control and operation during the tests, it is shown that the designed system architecture of the WFRT is not applicable where a sudden and permanent GS-VSC disconnection or a permanent dc-link fault within the WF ring is expected. Therefore, exploring and developing a new topology is needed to meet the system requirements under steady-state and dynamic conditions. Simulation Results of 2WF-1SST Topology Simulations of the 2WF-1SST transmission system topology of Figure 5 are developed in EMTDC/PSCAD with the control values given in Table 1, with three dc breakers. Disconnection of GS-VSC1 Grid side VSC disconnection is assessed in the first test. A three-phase permanent fault is experienced on the ac side of GS-VSC1 and thus PGS1 drops to zero from −0.6 pu. Instantly, the dc grid voltage rises to 1.013 pu. The power extraction from the WFs is unchanged. The power transported via WF-VSC3 rises to 0.8 pu via UFMA23, delivered from WF-VSC2, while the power from WF1 is wasted, as it has lost its connection to the onshore grid. The power and dc voltage profiles are shown in Figure 11; PWF1 and PWF2 are shown by dashed lines after the disconnection of GS-VSC1. Again, the 2WF-1SST fails to satisfy the third condition of Great Britain's security and quality of supply standards. Permanent dc Line-Line Fault on Line L56 In the second test, a dc line-line fault is developed on line L56, which results in the operation of HVdc circuit breaker CB56. This results in the disconnection of GS-VSC3 from WF-VSC5 and WF-VSC6. Consequently, the power from WF5 starts flowing through WF-VSC4 while PWF6 is lost, again failing the GBSQSS standard, a drawback. The power flow and the dc-link voltage under this scenario are similar to those of test 1 (disconnection of GS-VSC1), as shown in Figure 12.
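The recurring GBSQSS check can be stated in one line, as sketched below. The lost-infeed figures are illustrative; note that the paper treats any permanent loss of a wind farm as failing its third requirement, even when the 1320 MW limit itself is not exceeded.

```python
# One-line statement of the GBSQSS maximum-loss check used throughout the
# paper. Lost-infeed figures are illustrative.

P_MAX_FAIL_MW = 1320  # maximum allowed loss of infeed for Great Britain [32]

def within_gbsqss_limit(lost_infeed_mw):
    """True if a single contingency keeps the lost infeed within the limit."""
    return lost_infeed_mw <= P_MAX_FAIL_MW

# Losing the 2400 MW central ring of a GRT would breach the limit; a single
# 400 MW wind farm would not, although the paper treats any permanent loss
# of a wind farm as failing its third requirement.
print(within_gbsqss_limit(400), within_gbsqss_limit(2400))  # True False
```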
It is clear from the conducted tests that the 2WF-1SST does not conform to the GBSQSS and is hence not viable during GS-VSC disconnection and a dc-link fault within the remote/far-end/last unit (unit-2 or unit-3 in Figure 5). As a result, a more feasible configuration is required to accommodate the shortcomings of the literature's two best topologies, i.e., WFRT and 2WF-1SST. Simulation Results of the Proposed 2WF-SSRT Topology The configuration of Figure 6 presents the 2WF-SSRT with three dc breakers in the SS ring, for six offshore WFs. Simulations are developed in EMTDC/PSCAD with the parameters reported in Table 1. Disconnection of GS-VSC1 Grid side VSC1 is subjected to a permanent three-phase fault and is thus disconnected from the SS ring of the 2WF-SSRT following the operation of CB34 and CB12. The power flow and dc voltage profiles of the proposed topology under this scenario are shown in Figure 13. The power extraction from the WFs remains constant, while the power flow through GS-VSC1 drops to zero, as depicted with the dashed line. The power flow via GS-VSC2 and GS-VSC3 increases smoothly to −1.0 pu and −1.1 pu from −0.7 pu and −0.8 pu, respectively, thanks to the SS ring. No sharp spikes are experienced, and no violation of Great Britain's security and quality of supply standards is observed. Permanent dc Line-Line Fault on Line L12 A permanent dc line-line fault F1 is experienced on L12 of unit-1, as explained in scenario 1 of Section 4.1.1, which cuts off the power flow to the SS ring via L12. The power from WF1 and WF2 of unit-1 is thus diverted to WF5 of unit-3 and WF3 of unit-2, respectively. The power flow within the SS ring remains almost the same. The power and dc-link voltage profiles are depicted in Figure 14. The dc voltage stabilizes at 1.025 pu; the dc grid parameters remain within safe limits, which shows that the 2WF-SSRT topology is stable even for dc-link faults within the remote/far-end/last unit. In the third test, fault F2 within the SS ring is applied; the corresponding profiles are shown in Figure 15. The profiles are unaltered before and after the operation of the DCCBs in the event of fault F2, which confirms the conformity of the proposed topology with all three GBSQSS standards.
All the above three tests prove the superiority of the proposed 2WF-SSRT over the simulated (literature best) WFRT and 2WF-1SST. The 2WF-SSRT fulfills the GBSQSS requirements and has enough protection to cope with the transients caused by faults and by the disconnection of the GS-VSCs. Annotations on Topological Evaluation of HVdc Circuits From the above discussion, the following remarks are drawn on multi-terminal VSC-HVdc transmission system topologies: (1) In the context of dc breakers, ST, GRT, and SGRT require a dc breaker for each WF and onshore SS (NWF + NSS). SSRT and WFRT demand a number of DCCBs equal to the number of WFs (NWF), while PPT does not need dc breakers (it is not an MT-HVdc topology). The 2WF-1SST and the proposed 2WF-SSRT require dc breakers equal to the number of substations (NSS). The fewer dc breakers required, the more economical the topology; thus, the 2WF-1SST and 2WF-SSRT are the most economical configurations. (2) No extra offshore SS is required for any of the discussed topologies except ST and SGRT, which need one for connections and dc breaker installations. (3) The Pmax-fail criterion imposed by the GBSQSS is satisfied in all cases except for a permanent dc fault within the WF/SS ring of WFRT/SSRT, a fault on the central star node of the star topology, and, for the 2WF-1SST, the disconnection of an onshore converter or a dc-link fault within the remote/far-end/last unit (unit-1 and unit-3 of Figure 5). The stability and flexibility of the MT-HVdc grid are at stake in the event of a permanent failure of a GS-VSC station in WFRT, SSRT, ST, SGRT, and GRT, unlike in the proposed 2WF-SSRT. Simulations show that the 2WF-SSRT is the most stable configuration, as it satisfies all operating conditions of the GBSQSS. (4) The proposed 2WF-SSRT reduces complexity in the operation, control, and protection schemes, because each unit of the 2WF-SSRT operates standalone during steady-state operation, without mediation from, or the burden of, the power converters, UFMAs, and DCCBs of the adjoining units. For all other topologies, the impact of all VSC stations, UFMAs, dc cables, and DCCBs of the configuration must instead be considered when designing the control, operation, and protection approach, which shows that the 2WF-SSRT is a practical configuration.
A comparative evaluation and detailed remarks on the MT-HVdc grid topologies are summarized in Table 4 [27][28][29][30]. Conclusions In this study, the MT-HVdc transmission system topologies available in the literature, as well as the one proposed by this paper, have been compared for the integration of offshore wind farms to onshore substations. Each topology has been characterized in terms of techno-economic factors, such as the number and rating of dc circuit breakers and VSCs, the necessity of extra offshore stations, the number, length, and rating of the dc-links, stability, flexibility, redundancy, cost, and compliance with the maximum loss criterion. Simulations are developed in EMTDC/PSCAD to analyze the operation and control of the configurations under (i) permanent GS-VSC disconnection and (ii) dc line-line faults. The results have shown that the WFRT and 2WF-1SST do not conform to the GBSQSS and are hence not viable during GS-VSC disconnection and a dc-link fault within the remote/far-end/last unit (unit-2 or unit-3 in Figure 5). Therefore, a more dynamic topology for the MT-HVdc grid, named the two wind farm and substation ring topology, is proposed to overcome the aforementioned shortcomings.
The proposed 2WF-SSRT is able to improve the stability, utilization, and redundancy, while fulfilling the maximum loss criterion in normal and abnormal conditions, by providing an alternate path for the power flow with the least number and ratings of HVdc circuit breakers and grid side VSC-HVdc stations. Therefore, the proposed 2WF-SSRT is a promising topology for future wind farm MT-HVdc grids. Conflicts of Interest: The authors declare no conflict of interest.